Sk8csP5ex
[{"section_index": "0", "section_name": "THE LOSS SURFACE OF RESIDUAL NETWORKS: ENSEMBLES & THE ROLE OF BATCH NORMALIZATION", "section_text": "Etai Littwin & Lior Wolf\nEq.33|provides the asymptotic mean total number of critical points with non-diverging index k. It is presumed that the SGD algorithm will easily avoid critical points with a high index that have many descent directions, and maneuver towards low index critical points. We, therefore, investigate how the mean total number of low index critical points vary as the ensemble distribution embodied in er Jr>2 changes its shape by a steady increase in 3.\nFig.1(f) shows that as the ensemble progresses towards deeper networks, the mean amount of low index critical points increases, which might cause the SGD optimizer to get stuck in local minima This is, however, resolved by the the fact that by the time the ensemble becomes deep enough the loss function has already reached a point of low energy as shallower ensembles were more dominant earlier in the training. In the following theorem, we assume a finite ensemble such tha 1 Er2r ~ 0.\nTheorem 5. For any k E N, p > 1, we denote the solution to the following constrained optimization nroblems.\np e = 1 e* = argmax0g(R,e) s.t E r=2\nr = p otherwise\nThm.5|implies that any heterogeneous mixture of spin glasses contains fewer critical points of a. finite index, than a mixture in which only p interactions are considered. Therefore, for any distribu tion of e that is attainable during the training of a ResNet of depth p, the number of critical points is. lower than the number of critical points for a conventional network of depth p.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Residual Networks (He et al.]2015) (ResNets) are neural networks with skip connections. Thes. networks, which are a specific case of Highway Networks (Srivastava et al.]2015), present state. of the art results in the most competitive computer vision tasks including image classification anc object detection."}, {"section_index": "2", "section_name": "6 DISCUSSION", "section_text": "In this work, we use spin glass analysis in order to understand the dynamic behavior ResNets dis. play during training and to study their loss surface. In particular, we use at one point or another the. assumptions of redundancy in network parameters, near uniform distribution of network weights, in. dependence between the inputs and the paths and independence between the different copies of the. nput as described in Choromanska et al.[(2015a). The last two assumptions, i.e., the two indepen dence assumptions, are deemed in Choromanska et al.[(2015b) as unrealistic, while the remaining. are considered plausible\nOur analysis of critical points in ensembles (Sec. 5) requires all of the above assumptions. However, Thm. 1 and 2, as well as Lemma. 4, do not assume the last assumption, i.e., the independence between the different copies of the input. Moreover, the analysis of the dynamic behavior of residual nets (Sec. 4) does not assume any of the above assumptions.\nOur results are well aligned with some of the results shown in Larsson et al.(2016), where it is noted empirically that the deepest column trains last. This is reminiscent of our claim that the deeper networks of the ensemble become more prominent as training progresses. The authors of Larsson et al.(2016) hypothesize that this is a result of the shallower columns being stabilized at a certain point of the training process. 
In our work, we discover the exact driving force that comes into play.\nOur analysis reveals the mechanism for this dynamic behavior and explains the driving force behind it. This mechanism remarkably takes place within the parameters of Batch Normalization (Ioffe & Szegedy2015), which is mostly considered as a normalization and a fine-grained whitening mechanism that addresses the problem of internal covariate shift and allows for faster learning rates\nWe show that the scaling introduced by batch normalization determines the depth distribution in the virtual ensemble of the ResNet. These scales dynamically grow as training progresses, shifting the. effective ensemble distribution to bigger depths.\nIn addition, our work offers an insight into the mechanics of the recently proposed densely connecte. networks (Huang et al.[2016). Following the analysis we provide in Sec. 3, the additional shortcu paths decrease the initial capacity of the network by offering many more short paths from inpu to output, thereby contributing to the ease of optimization when training starts. The driving forc mechanism described in Sec. 4.2 will then cause the effective capacity of the network to increase\nThe main tool we employ in our analysis is spin glass models.Choromanska et al.(2015a) have created a link between conventional networks and such models, which leads to a comprehensive study of the critical points of neural networks based on the spin glass analysis of|Auffinger et al. (2013). In our work, we generalize these results and link ResNets to generalized spin glass models. These models allow us to analyze the dynamic behavior presented above. Finally, we apply the results of Auffinger & Arous (2013) in order to study the loss surface of ResNets.\nNote that the analysis presented in Sec. 3 can be generalized to architectures with arbitrary skip connections, including dense nets. This is done directly by including all of the induced sub networks in Eq.9] The reformulation of Eq.[10|would still holds, given that I, is modified accordingly.\n0k(R,e) W +w"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Deep Residual Networks present a premium in performance in comparison to con- ventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensembles are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network's depth, as training progresses, it becomes deeper and deeper. The main mechanism that con- trols the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models. which we also use in order to study the number of critical points in the optimization of Residual Networks.\nThe success of residual networks was attributed to the ability to train very deep networks when employing skip connections (He et al.| 2016). A complementary view is presented byVeit et al. (2016), who attribute it to the power of ensembles and present an unraveled view of ResNets that depicts ResNets as an ensemble of networks that share weights, with a binomial depth distribution around half depth. 
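To make the binomial depth picture concrete, here is a minimal Python sketch (not from either paper; the block count is an arbitrary illustrative choice) that counts the sub-networks of each depth in the unraveled view, assuming each path independently enters or skips each of p residual blocks:

from math import comb

p = 54  # number of residual blocks (illustrative choice, not from the paper)

# In the unraveled view, a sub-network is picked by deciding, for each
# residual block, whether the path passes through the block or through its
# skip connection, so the number of sub-networks of depth k is C(p, k)
# out of 2**p paths in total.
total = 2 ** p
for k in range(0, p + 1, 6):
    print(f"depth {k:2d}: fraction of paths = {comb(p, k) / total:.2e}")

The fractions peak at k = p/2, i.e., the ensemble mass sits around half the network's depth.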
They also present experimental evidence that short paths of lengths shorter than half-depth dominate the ResNet gradient during training\nThe analysis presented here shows that ResNets are ensembles with a dynamic depth behavior When starting the training process, the ensemble is dominated by shallow networks, with depths. lower than half-depth. As training progresses, the effective depth of the ensemble increases. This. Increase in depth allows the ResNet to increase its effective capacity as the network becomes more and more accurate."}, {"section_index": "4", "section_name": "7 CONCLUSION", "section_text": "Ensembles are a powerful model for ResNets, which unravels some of the key questions that have. surrounded ResNets since their introduction. Here, we show that ResNets display a dynamic en semble behavior, which explains the ease of training such networks even at very large depths, while. still maintaining the advantage of depth. As far as we know, the dynamic behavior of the effective. capacity is unlike anything documented in the deep learning literature. Surprisingly, the dynamic mechanism typically takes place within the outer multiplicative factor of the batch normalization. module.\nA simple feed forward fully connected network N, with p layers and a single output unit is consid ered. Let n; be the number of units in layer i, such that no is the dimension of the input, and n, = 1 It is further assumed that the ReLU activation functions denoted by R( are used. The output Y of the network given an input vector x E Rd can be expressed as"}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "d p Y= (k) W. i=1 j=1 k=1\nAntonio Auffinger and Gerard Ben Arous. Complexity of random smooth functions on the high dimensional sphere. Annals of Probability, 41(6):4214-4247, 11 2013.\nDefinition 1. The mass o. f the network N is defined as i Y\nAnna Choromanska, Yann LeCun, and Gerard Ben Arous. Open problem: The landscape of the los surfaces of multilayer networks. In COLT, pp. 1756-1760, 2015b\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog nition. arXiv preprint arXiv:1512.03385, 2015.\n(w) =EA[max(0,1-YxY)] La(w) =EA[[Yx-Y]]\nGao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks arXiv preprint arXiv:1608.06993, 2016\nwhere Y, is a random variable corresponding to the true label of sample x. In order to equate either loss with the hamiltonian of the p-spherical spin glass model, a few key approximations are made:\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448-456, 2015\nGustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural net works without residuals. arXiv preprint arXiv:1605.07648, 2016\nA4 Spherical constraint - The following is assumed:\nGenevieve B Orr and Klaus-Robert Muller. Neural networks: tricks of the trade. Springer, 2003\nThese assumptions are made for the sake of analysis, and do not necessarily hold. The validity of these assumption was posed as an open problem in|Choromanska et al.[(2015b), where a different degree of plausibility was assigned to each. Specifically, A1, as well as the independence assumption of Aij, were deemed unrealistic, and A2 - A4 as plausible. For example, A1 does not hold since. each input x; is associated with many different paths and x1 = x2 = ...xiy. See|Choromanska. 
et al.(2015a) for further justification of these approximations."}, {"section_index": "6", "section_name": "A SUMMARY OF NOTATIONS", "section_text": "Table[1presents the various symbols used throughout this work and their meaning\nWe briefly summarize [Choromanska et al.(2015a), which connects the loss function of multilayer networks with the hamiltonian of the p spherical spin glass model, and state their main contributions and results. The notations of our paper are summarized in Appendix|A|and slightly differ from those inChoromanska et al.(2015a).\nwhere the first summation is over the network inputs x1...xd, and the second is over all paths from input to output. There are = I=1n such paths and Vi, xi1 = x2 = ...xiy. The variable Aij E {0,1} denotes whether the path is active, i.e., whether all of the ReLU units along this path are producing positive activations, and the product II%-1 wf' represents the specific weight .(k) confi guration w1, ..w?, multiplying x, given path j. It is assumed throughout the paper that the input variables are sampled i.i.d from a normal Gaussian distribution.\nAnna Choromanska. Mikael Henaff. Michael Mathieu, Gerard Ben Arous. and Yann LeCun. The loss surfaces of multilayer networks. In A1STATS, 2015a..\nThe variables A,; are modeled as independent Bernoulli random variables with a success probability p, i.e., each path is equally likely to be active. Therefore,\nd Y p EA[Y]= k Xij P i=1 j=1 k=1\nThe task of binary classification using the network V with parameters w is considered, using either the hinge loss Lh. r or the absolute loss L%:\nA2 Redundancy in network parameterization - It is assumed the set of all the network weights. [w1, w2...w contains only A unique weights such that A < N.. A3 Uniformity - It is assumed that all unique weights are close to being evenly distributed on the graph of connections defining the network N. Practically, this means that we assume every. node is adjacent to an edge with any one of the A unique weights..\nA 1 < w: Y i=1\nAndreas Veit, Michael Wilber, and Serge Belongie. Residual networks behave like ensembles of relatively shallow networks. In NIPS, 2016.\nUnder A1-A4, the loss takes the form of a centered Gaussian process on the sphere SA-1(/A) Specifically, it is shown to resemble the hamiltonian of the a spherical p-. spin glass model given by:\nA 1 r 11 Hp,A(w) = Xi1 Wik A p- 2 i1...ip k=1"}, {"section_index": "7", "section_name": "SYMBOL", "section_text": "The dimensionality of the input x The output of layer i of network given input x The final output of the network V True label of input x Loss function of network V Hinge loss Absolute loss The depth of network V Weights of the network w E RA A positive scale factor such that ||w||2 = C Scaled weights such that w =- w The number of units in layers l > 0 The number of unique weights in the network The total number of weights in the network V The weight matrix connecting layer l - 1 to layer l in V. The hamiltonian of the p interaction spherical spin glass model. The hamiltonian of the general spherical spin glass model. A Total number of paths from input to output in network V yd Total number of paths from input to output in network N of length r Yr d ReLU activation function Bernoulli random variable associated with the ReLU activation functio Parameter of the Bernoulli distribution associated with the ReLU unit 3) multiplier associated with paths of length r in V. pnC VA Normalization factor Batch normalization multiplicative factor in layer l. 
The mean of the estimated standard deviation various elements in R(W\nwhere xi1... are independent normal Gaussian variables\nIn Auffinger et al.(2013), the asymptotic complexity of spherical p spin glass model is analyzed based on random matrix theory. In Choromanska et al.[(2015a) these results are used in order to shed light on the optimization process of neural networks. For example, the asymptotic complexity of spherical spin glasses reveals a layered structure of low-index critical points near the global op timum. These findings are then given as a possible explanation to several central phenomena found in neural networks optimization, such as similar performance of large nets, and the improbability of getting stuck in a \"bad' local minima.\nAs part of our work, we follow a similar path. First, a link is formed between residual networks anc the hamiltonian of a general multi-interaction spherical spin glass model as given by:.\np A Hp,(w)= Er II Xi1,i2...ir Wik A 2 r= i1,i2...ir=1 k=1\nwhere e1...ep are positive constants. Then, usingAuffinger & Arous(2013), we obtain insights or residual networks. The other part of our work studies the dynamic behavior of residual networks where we relax the assumptions made for the spin glass model.\nWe begin by establishing a connection between the loss function of deep residual networks and the hamiltonian of the general spherical spin glass model. We consider a simple feed forward fully connected network N, with ReLU activation functions and residual connections. For simplicity oi notations without the loss of generality, we assume n1 = ... = np = n. no = d as before. In our ResNet model, there exist p -- 1 identity connections skipping a single layer each, starting from the first hidden layer. The output of layer l > 1 is given by:\nNi(x) =R(W'Ni-1(x))+Ni-1(x\nProof of Lemma[1] There are a total of r paths of length r from input to output, and a total of Ar unique r length configurations of weights. The uniformity assumption then implies that each. configuration of weights is repeated Ir times. By summing over the unique configurations, and re. indexing the input we arrive at Eq.10.\np d Yr r y=LL r)(k W r=1 i=1 j=1 k=1\nProof of Lemma[] From[12 we have that S1,2.., is defined as a sum of r inputs. Since there are only p distinct inputs, it holds that for each 1,i2..., there exists a sequence Q = (at)i=1 E N such that -1 Q; = Xr, and Si1,2.i, = 1 Q,x. We, therefore, have that E[?,...,] = |||l3 Note that the minimum value of E[&? ?, 2..r] is a solution to the following:\nmin(E[?,...]) = mina(||a|2) s.ta1 -1 E N.\nDefinition 2. The mass of a depth r subnetwork in N is defined as wr =dY\nThe properties of redundancy in network parameters and their uniform distribution, as described ir Sec.2] allow us to re-index Eq.9"}, {"section_index": "8", "section_name": "DESCRIPTION", "section_text": "A 1 w?=1 i=1\nwhere W, denotes the weight matrix connecting layer l - 1 with layer l. Notice that the first hidden layer has no parallel skip connection, and so N1(x) = R(W' x). Without loss of generality, the scalar output of the network is the sum of the outputs of the output layer p and is expressed as\nwhereA.?) E {0,1} denotes whether path j of length r is open, and Vj, j', r, r' x, = x. The i3/. residual connections in W imply that the output Y is now the sum of products of different Iengths indexed by r. Since our ResNet model attaches a skip connection to every layer except the first.. 1 < r < p. 
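As a sanity check on this path decomposition, the numpy sketch below (illustrative, not from the paper) drops the ReLU gating, which makes the decomposition exact, and verifies that the residual forward pass equals the sum over all paths of lengths r = 1..p:

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p = 5, 4  # width and depth, kept small so all paths can be enumerated
Ws = [rng.normal(scale=0.1, size=(n, n)) for _ in range(p)]
x = rng.normal(size=n)

# Residual forward pass without ReLU; the first hidden layer has no skip,
# as in the model above: N_1 = W_1 x, then N_l = W_l N_{l-1} + N_{l-1}.
h = Ws[0] @ x
for W in Ws[1:]:
    h = W @ h + h

# Unraveled view: every path applies W_1 and then some subset of W_2..W_p,
# so path lengths run from r = 1 (all skips taken) to r = p (no skips).
y = np.zeros(n)
for r in range(p):
    for S in combinations(range(1, p), r):
        v = Ws[0] @ x
        for l in S:          # layers in S are applied in increasing order
            v = Ws[l] @ v
        y += v

assert np.allclose(h, y)     # the two computations agree exactly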
See Sec.6[regarding models with less frequent skip connections..\nEach path of length r includes r 1 non-skip connections (those involving the first term in Eq.8 and not the second, identity term) out of layers l = 2.p. Therefore, ~r = (-1)nr. We define the following measure on the network:\nLemma 1. Assuming assumptions A2 - A4 hold, and E Z, then the output can be expressed after reindexing as:\nyr p ^ AT Y = i) Wik 12... r=1i1,i2...ir=1 j=1 k=1\nlim Bap) = H() + alog() Og p->0\nProof of Thm.2 For brevity, we provide a sketch of the proof. It is enough to show that limp->00 O17 = 0 for < 1. Ignoring the constants in the binomial terms. we have\nQ1P Q1p Q1 1 lim lim 9.- lim p->0o p->0o p->0o r=1\nXr p EA[Y] = II Wik r=1i1,i2...ir=1j=1 k=1\n/here z2 which can be expressed using the Legendre polynomial of order p:\nIn order to connect ResNets to generalized spherical spin glass models, we denote the variables\nA Si1,i2...ir Xi1,i2...ir I1,i2...ix En[?,12 j=1\nLemma 2. Assuming A2 - A3 hold, and n E N then V he following holds..\nProof of Lemma|4 For simplicity, we ignore the constants in the binomial coefficient, and assume er = () r. Notice that for * = (), we have that arg max,(er(B*)) = p, arg max,(er(*)) = 1 and arg max,(er(1)) = . From the monotonicity and continuity of r, any value 1 k p can be attained. The linear dependency (C) = pnC completes the proof. A\nOLN(x,w- g) dLN(x,w) dLN(x,w) aai 9 aai aai\nThe independence assumption A1 was not assumed yet, and[14|holds regardless. Assuming A4 and denoting the scaled weights w, = w;, we can link the distribution of Y to the distribution on x:\nA I Xi1,i2...ir Wik /d A i1,i2...ir=1 k=1 A > I Xi1,i?...ir W i1,i2...ir=1 k=1\nOLN(x,w - g) gp + gpl dai\nwhere C1, C2 are positiye constants that do not. ffect the optimization process\n-\nNote that since the input variables x1...xd are sampled from a centered Gaussian distribution (de pendent or not), then the set of variables x1,i2.... are dependent normal Gaussian variables.\nWe approximate the expected output EA(Y) with Y by assuming the minimal value in|13|holds. all weight configurations of a particular length in Eq. [10|will appear the same number of times. When A n, the uniformity assumption dictates that each configuration of weights would appear approximately equally regardless of the inputs, and the expectation values would be very close to\nJsing taylor series expansion:. dLn(x, w- g). dLN(x,w) dLN(x,w) (40) aLN(x,w) Substituting Vw - (gm + gp) in40|we have: dLN(x,w- gw) I < 0 (41) 9m a 9p 9m. + And hence: dLN(x,w - gw) (42 Finally: (43) 1 OLN(x,w) 2. Since paths of length m skip layer l, we have that .. I, 9p. Therefore: dLv(x,w - g) (44) 9m9p - n? The condition ||gpl|2 > ||gm||2 implies that gmgp + l|gpll2 > 0, completing the proof.\np A L I = I1 Wik r=1 i1,2...iz=1 k=1\ndLN(x,w- gw) 9m + gp)'(gm + gp) = lgm +gplI2 < 0\ngm+gp|l2)]=|i|(1+\nThe following lemma gives a generalized expression for the binary and hinge losses of the network\nLN(x) = C1 + CY\nWe denote the important quantities:\nn\ndLN(x,w) g = (mLm(x,w) +pLp(x,w)) ngm + pgp)' dc = (mLm(x,w) +pLp(x, lgm + pgp) mgm + pgp)\nTheorem 1. Assuming p E N, we have that.. 1\n0LN(x,w - gw mgm + pgp)'(gm + gp) dC -(m||gp|l2 + p||gp|l2 + (m +p)gp gm\n1 lim -arg max( p->0o\nTheorem 2. For any Q1 Q2, and assuming Q1p, Q2p p E N, it holds that.. 1+B\nQ2P lim -1 p->0 r=Q1P\nThm.2 implies that for deep residual networks, the contribution of weight products of order far. an ensemble of potentially shallow conventional nets. 
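The effect is easy to see numerically. The sketch below assumes, as a simplification of the series above, that the aggregate weight of length-r interactions behaves like eps_r proportional to C(p-1, r-1) * beta**r up to normalization (the exact constants do not change where the peak sits), and qualitatively reproduces the shift shown in Fig. 1(a-c):

import numpy as np
from math import comb

def eps(p, beta):
    # Relative weight of interaction order r = 1..p, up to normalization.
    w = np.array([comb(p - 1, r - 1) * beta ** r for r in range(1, p + 1)])
    return w / w.sum()

p = 100
for beta in (0.1, 0.5, 2.0):   # the values used in Fig. 1(a-c)
    print(f"beta={beta}: argmax_r eps_r = {eps(p, beta).argmax() + 1}")
    # The peak sits near p * beta / (1 + beta): for small beta, shallow
    # orders dominate; for large beta, the mass moves toward r = p.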
The next Lemma shows that we can shift the effective depth to any value by simply controlling C..\neT(V\"_V)e eT(V+ V)\nLemma 4. For any integer 1 < k < p there exists a global scaling parameter C such tha arg max,(er(C)) = k.\nnaxe0k(R,e) < max\nThe expression for the output of a residual net in Eq.15 provides valuable insights into the machinery at work when optimizing such models. Thm.|1|and|2Jimply that the loss surface resembles that of ar ensemble of shallow nets (although not a real ensemble due to obvious dependencies), with variou depths concentrated in a narrow band. As noticed inVeit et al.(2016), viewing ResNets as ensembles of relatively shallow networks helps in explaining some of the apparent advantages of these models particularly the apparent ease of optimization of extremely deep models, since deep paths barely affect the overall loss of the network. However, this alone does not explain the increase in accuracy of deep residual nets over actual ensembles of standard networks. In order to explain the improvec performance of ResNets, we make the following claims:\nFig. 1(d) and 1(e) report the experimental results of a straightforward setting, in which the task is to classify a mixture of 10 multivariate Gaussians in 50D. The input is therefore of size 50. The loss employed is the cross entropy loss of ten classes. The network has 10 blocks, each containing. 20 hidden neurons, a batch normalization layer, and a skip connection. Training was performed on. 10,000 samples, using SGD with minibatches of 50 samples..\nAs noted in Sec. 4.2, the dynamic behavior can be present in the Batch Normalization multiplica. tive coefficient or in the weight matrices themselves. In the following experiments, it seems that\nThe model in Eq.16 has the form of a spin glass model, except for the dependency between the variables i1,i2...tr. We later use an assumption similar to A1 of independence between these vari- ables in order to link the two binary classification losses and the general spherical spin glass model However, for the results in this section, this is not necessary.\nis orthogonal to the weights. We have that L(x,w) (mLm(x, w) +pLp(x, w)). Using taylor ac series expansion we have: aLN(x,w- g) dLN(x,w) dLN(x,w) uVw (45) ac ac ac For the last term we have: dLN(x,w) V w g = (mLm(x,w) + pLp(x, w ac =(mLm(x,w) + pLp(x, W mgm + pgp)'g,(46) d n45 we have: dLN(x,w- gw) 0-(mgm+pgp)(gm+ gp) aC -(m|gp|l2+p|gp|l2+(m+p)ggm) (47) Proof of Thm[5]Inserting Eq.31|into Eq.[33|we have that: oq(r=2e?r(r-1) _=r(r-2) (48) r=2 e?r r=2e?r2 We denote the matrices V' and V\" such that Vf, = ro, and V/f = r(r -- 1)oj. We then have: eT(V\" _V')e (49) eT(V\"+ V')e maxe0k(R,e) < max min (V! - V) nax =0k(R,e*) (50)\nOLN(x,w- g) dLN(x,w) dLN(x,w) 9 ac ac ac\nThe series (er)P-1 determines the weight of interactions of a specific length in the loss surface. No- tice that for constant depth p and large enough , arg max. (er) = p. Therefore, for wide networks, where n and, therefore, are large, interactions of order p dominate the loss surface, and the effect of the residual connections diminishes. Conversely, for constant and a large enough p (deep net- works), we have that arg max,(er) < p, and can expect interactions of order r < p to dominate the loss. The asymptotic behavior of e is captured by the following lemma:\nAs the next theorem shows. 
the epsilons are concentrated in a narrow band near the maximal value\n2r(r-2\nA simple global scaling of the weights is, therefore, enough to change the loss surface, from an ensemble of shallow conventional nets, to an ensemble of deep nets. This is illustrated in Fig.1(a-c) for various values of . In a common weight initialization scheme for neural networks, C = - (Orr & Muller2003f[Glorot & Bengio|2010). With this initialization and A = n, = p and the maximal weight is obtained at less than half the network's depth limp->oo arg max,(er) < . Therefore, at the initialization, the loss function is primarily influenced by interactions of considerably lower order than the depth p, which facilitates easier optimization.\n1. The distribution of the depths of the networks within the ensemble is controlled by th scaling parameter C.\nFig.2|depicts the results. There are two types of plots: Fig. 2(a,c) presents for CIFAR-10 and CIFAR-100 respectively the magnitude of the various convolutional layers for multiple epochs (sim ilar in type to Fig. 1(d) in the paper). Fig.2(b,d) depict for the two datasets the mean of these norms over all convolutional layers as a function of epoch (similar to Fig. 1(e))\np d Yr LLL LN(x,w) =C1 +C2 r)k W r=1 i=1 j=1 k=1\nAs can be seen, the dynamic phenomenon we describe is very prominent in the public ResNe implementation when applied to these conventional datasets: the dominance of paths with fewe. skip connections increases over time. Moreover, once the learning rate is reduced in epoch 81 the phenomenon we describe speeds up\nIn Fig. 3|we present the multiplicative coefficient of the Batch Normalization when not absorbed. As future work, we would like to better understand why these coefficients start to decrease once the learning rate is reduced. As shown above, taking the magnitude of the convolutions into account the dynamic phenomenon we study becomes even more prominent at this point. The change o1 location from the multiplicative coefficient of the Batch Normalization layers to the convolutions themselves might indicate that Batch Normalization is no longer required at this point. Indeed Batch Normalization enables larger training rates and this shift happens exactly when the training. rate is reduced. A complete analysis is left for future work.\nNotice that the addition of a multiplier r indicates that the derivative is increasingly influenced by deeper networks."}, {"section_index": "9", "section_name": "4.1 BATCH NORMALIZATION", "section_text": "Batch normalization has shown to be a crucial factor in the successful training of deep residua networks. As we will show, batch normalization layers offer an easy starting condition for the. network, such that the gradients from early in the training process will originate from extremely. shallow paths.\nWe consider a simple batch normalization procedure, which ignores the additive terms, has the out- put of each ReLU unit in layer l normalized by a factor oj and then is multiplied by some parameter A. The output of layer l > 1 is therefore:\nR(WNi-1(x))+Ni-1(x) Ni(x) = 0\nwhere oj is the mean of the estimated standard deviations of various elements in the vector R(W,' Ni-1(x)). Furthermore, a typical initialization of batch normalization parameters is to set. Vi, i = 1. In this case, providing that units in the same layer have equal variance ot, the recursive relation E[Wi+1(x)?] = 1 + E[W(x)?] holds for any unit j in layer l. This, in turn, implies that the. 
output of the ReLU units should have increasing variance o? as a function of depth. Multiplying the weight parameters in deep layers with an increasingly small scaling factor , effectively reduces the influence of deeper paths, so that extremely short paths will dominate the early stages of opti-. mization. We next analyze how the weight scaling, as introduced by batch normalization, provides. a driving force for the effective ensemble to become deeper as training progresses..\nWe consider a simple network of depth p, with a single residual connection skipping p - m layers. We further assume that batch normalization is applied at the output of each ReLU unit as described in Eq.22 We denote by l1...lm the indices of layers that are not skipped by the residual connection.\nuntil the learning rate is reduced, the dynamic behavior is manifested in the Batch Normaliza- tion multiplicative coefficients and then it moves to the convolution layers themselves. We there- fore absorb the BN coefficients into the convolutional layer using the public code of https: //github.com/e-lab/torch-toolbox/tree/master/BN-absorber Note that the multiplicative coefficient of Batch Normalization is typically refereed to as y. However, throughout our paper, since we follow the notation of|Choromanska et al.[(2015a), y refers to the number of paths. The multiplicative factor of Batch normalization appears as A in Sec. 4.\n2. During training, C changes and causes a shift of focus from a shallow ensemble to deeper and deeper ensembles, which leads to an additional capacity. 3. In networks that employ batch normalization, C is directly embodied as the scale parameter X. The starting condition of X = 1 offers a good starting condition that involves extremely shallow nets.\nFor the remainder of Sec.4, we relax all assumptions, and assume that at some point in time the loss can be expressed:\nwhere C1, C2 are some constants that do not affect the optimization process. In order to gain addi tional insight into this dynamic mechanism, we investigate the derivative of the loss with respect to the scale parameter C. Using Eq.[9[for the output, we obtain:\np d 2r 0LN(x,w) rx(?A(g) II r)(k W ac r=1 i=1 j=1 k=1\n0.45 0.35 0.35 0.4 0.3 0.3 0.35 0.25 0.25 0.3 0.25 0.2 0.2 0.2 0.15 0.15 0.15 0.1 0.1 0.1 0.05 0.05 0.05 0 20 40 60 80 100 20 40 60 80 100 20 40 60 80 100 (a) (b) (c) 1.2 0.8 0.6 0.4 0.20 5000 10000 15000 20000 500 1000 1500 2000 (d) (e) (f)\nFigure 1: (a) A histogram of er(), r = 1..p, for = 0.1 and p = 100 . (b) Same for = 0.5. (c) Same for = 2. (d) Values (y-axis) of the batch normalization parameters X, (x-axis) for. 10 layers ResNet trained to discriminate between 50 multivariate Gaussians (see Appendix |C|for. more details). Higher plot lines indicate later stages of training. (e) The norm of the weights of a residual network, which does not employ batch normalization, as a function of the iteration. (f) The. 
asymptotic of the mean number of critical points of a finite index as a function of 3..\ndYm d Yp p N(x,w) =Xm m (m) II (m)(k) (m) s(p) II ,(p)(k) wij W xij 4 i=1 j=1 k=1 i=1 j=1 k=1 Lm(x,w) + Lp (x.u\nWe denote by w, the derivative operator with respect to the parameters w, and the gradient g = VwL(x, w) = gm + gp evaluated at point w..\nNorm of the weights of the convolution layers for multiple epochs for cifar10 Mean norm of convolution layers as a function of epoch for cifar1o 240 21 220 30 200 (pequosqe s!1 141 25 180 20 4C an 10 120 100 15 20 20 40 60 80 100 120 140 160 180 conv layer epoch (b) (a Norm of the weights of the convolution layers for multiple epochs for cifar100 Mean norm of convolution layers as a function of epoch for cifar100 350 21 41 60 300 250 S 40 30 150 100 20 25 20 40 60 80 100 120 140 160 180 conv layer epoch (d) c\n0Ln(x,w - g)\nFigure 2: (a,c) The Norm of the convolutional layers once the factors of the subsequent Batch. Normalization layers are absorbed, shown for CIFAR-10 and CIFAR-100 respectively. Each graph. s a different epoch, see legend. Waving is due to the interleaving architecture of the convolutiona ayers. (b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the convolutional. ayers' weights per epoch.\naLn(x,w- g) da\nThm.3 suggests that || will increase for layers l that do not have skip-connections. Conversely, if layer l has a parallel skip connection, then || will increase if ||gp||2 > l|gm|[2, where the later condition implies that shallow paths are nearing a local minima. Notice that an increase in |Aigl...lm results in an increase in [p], while [m] remains unchanged, therefore shifting the balance into deeper ensembles.\nThis steady increase of |], as predicted in our theoretical analysis, is also backed in experimen. tal results, as depicted in Fig.1[d). Note that the first layer, which cannot be skipped, behaves differently than the other layers. More experiments can be found in Appendix|C.\nIt is worth noting that the mechanism for this dynamic property of residual networks can also be. observed without the use of batch normalization, as a steady increase in the L2 norm of the weights as shown in Fig.1[e). In order to model this, consider the residual network as discussed above. without batch normalization layers. Recalling, ||w||2 = CA, w = w, the loss of this network is. expressed as:\nd Ym d Yp p LN(x,w) =Cm m) (m) (m)(k LL m) 1(p) I1 37(p)(k) xij xij wij i in i=1 j=1 k=1 i=1 j=1 k=1 Lm(x,w) + Lp(x,W\n0LN(x,w - g (m|gm|l2 + pl|gp|l2 + (m + p)gp gm) dc\nThm.4|indicates that if either l|gpl|2 or l|gml|2 is dominant (for example, near local minimas of the shallow network, or at the start of training), the scaling of the weights C will increase. This expansion will, in turn, emphasize the contribution of deeper paths over shallow paths, and in- crease the overall capacity of the residual network. This dynamic behavior of the effective depth of residual networks is of key importance in understanding the effectiveness of these models. 
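The Appendix C experiment behind Fig. 1(d) is straightforward to replicate in spirit. The PyTorch sketch below is not the authors' code: it trains a 10-block residual MLP with batch normalization on a 10-class Gaussian mixture in 50D (the setup described in Appendix C) and logs the norm of each BatchNorm scale vector (the paper's lambda_l; PyTorch's weight), which the analysis above predicts should steadily grow. As a common simplification, only the residual branch is normalized here, rather than the whole sum as in Eq. 22:

import torch
import torch.nn as nn

torch.manual_seed(0)
d, width, blocks, classes = 50, 20, 10, 10

# 10-class mixture of Gaussians in 50D, as in Appendix C.
means = torch.randn(classes, d) * 3.0
y = torch.randint(classes, (10_000,))
x = means[y] + torch.randn(10_000, d)

class Block(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.fc, self.bn = nn.Linear(n, n), nn.BatchNorm1d(n)
    def forward(self, h):
        return self.bn(torch.relu(self.fc(h))) + h   # scaled branch + skip

model = nn.Sequential(
    nn.Linear(d, width),                     # first layer: no skip
    *[Block(width) for _ in range(blocks)],
    nn.Linear(width, classes),
)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    for i in range(0, len(x), 50):           # minibatches of 50, as in the paper
        xb, yb = x[i:i + 50], y[i:i + 50]
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    lams = [f"{m.bn.weight.norm().item():.2f}" for m in model if isinstance(m, Block)]
    print(epoch, lams)                       # expected: norms drift upward over training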
While optimization starts off rather easily with gradients largely originating from shallow paths, the overall advantage of depth is still maintained by the dynamic increase of the effective depth.\nWe now present the results of[Auffinger & Arous(2013) regarding the asymptotic complexity in the case of limA->oo of the multi-spherical spin glass model given by:.\nA He,^=- Er A r- 2 r=2 i1,...ir=1\nA 8 1 e=1 w=1, ^ i=1 r=2\nFigure 3: The norms of the multiplicative Batch Normalization coefficient vectors. (a,c) The Norn of the coefficients, shown for CIFAR-10 and CIFAR-100 respectively. Each graph is a differen. epoch (see legend). Since there is no monotonic increase between the epochs in this graph, it i harder to interpret. (b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the. multiplicative factors per epoch.\nOX = er(r-1) a2 = r=2 r=2\nNote that for the single interaction spherical spin model a2 = 0. The index of a critical point of He,A is defined as the number of negative eigenvalues in the hessian V2 He.A evaluated at the critical. point w.\nDefinition 4. For any O < k < A and u E R, we denote the random number Crtx.k(u, e) as the number of critical points of the hamiltonian in the set BX = {AX|X E (-oo, u)} with index k\nCrtA.k(u, e) = 1{He,A E Au}1{i(V2He,A)=k w:VHe,A=0\nBatch Normalization gamma per layer for multiple epochs for cifar10 Mean norm of Batch Normalization gamma vectors as a function of epoch for cifar10 160 1 10 150 161 140 30 120 110 100 10 20 25 30 35 0 20 40 80 100 120 140 15 60 160 180 conv layer epoch (a (b) Batch Normalization gamma per layer for multiple epochs for cifar100 Mean norm of Batch Normalization gamma vectors as a function of epoch for cifar100 20 200 21 18 41 190 16 180 14 162 12 170 mnea 160 150 140 25 130 10 15 30 35 20 40 80 20 60 100 120 140 160 180 conv layer epoch (d) c\nwhere J,... are independent centered standard Gaussian variables, and e = (er)r>2 are positive. real numbers such that r=2 er2r < oo. A configuration w of the spin spherical spin-glass model is a vector in RA satisfying the spherical constraint:.\nA 1. (29) =1 A =1 r=2 Note that the variance of the process is independent of e: A E[H?.A] =A1-re? 2 = A =A (30) Definition 3. We define the following:. 8 U' =) e,r, v\" =er(r-1), Q =v\" + v' (31)\n8 A 8 E[H?,A]= A1-r r e? w?)=^ e=A r=2 i=1 r=1"}]
BJbD_Pqlg
[{"section_index": "0", "section_name": "HUMAN PERCEPTION IN COMPUTER VISION / CONFERENCE E SUBMISSIONS", "section_text": "Poggio, 1999; Serre, 2014). However, the computation in trained DNN models is quite general- purpose (Huh et al., 2016; Yosinski et al., 2014) and offers unparalleled accuracy in recognition tasks (LeCun et al., 2015). Since visual computations are, to some degree, task- rather than architecture- dependent, an accurate and general-purpose DNN model may better resemble biological processing than less accurate biologically plausible ones (Kriegeskorte, 2015; Yamins & DiCarlo, 2016). We support this view by considering a controlled condition in which similarity is not confounded with task difficulty or categorization consistency.\nGoogLeNet ResNet-152 CaffeNet 90 90 86 89 90 82 90 90 65 60 57 57 60 60 4347 33 33 30 30 25 30 Figure 7: Background context for different DNN models (following figure CaffeNet iter 1 CaffeNet iter 50K CaffeNet iter 310K 90 90 86 89 90 75 90 90 r 74 61 57 60 50 60 60 10 33 30 30 30 0 0 eonnnnnnonns Gabor Decomposition Steerable Pyramid 82 85 90 90 80 62 60 55 60 448 Consistent 35 Inconsistent 30 30 10 'wnu 0 Figure 8: Background context for baseline DNN models (following figure 2). \"Caff is reproduced from Figure 7.\nRon Dekel\nDepartment of Neurobiology. Weizmann Institute of Science Rehovot. PA 7610001. Israel"}, {"section_index": "1", "section_name": "6.3.2 USE IN PSYCHOPHYSICS", "section_text": "Our results imply that trained DNN models have good predictive value for outcomes of psychophys ical experiments, permitting a zero-cost first-order approximation. Note, however, that the scope of such simulations may be limited, since learning (Sagi, 2011) and adaptation (Webster, 2011) were not considered here.\nComputer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprece- dented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Bio- logical vision (learned in life and through evolution) is also accurate and general- purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the hu- man system-level computation of visual perception has DNN correlates and con- sidered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation. crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human percep tion are a consequence of architecture-independent visual learning.\nFigure 7: Background context for different DNN models (following figure 2)"}, {"section_index": "2", "section_name": "6.3.3 USE IN ENGINEERING (A PERCEPTUAL LOSS METRIC", "section_text": "As proposed previously (Dosovitskiy & Brox, 2016; Johnson et al., 2016; Ledig et al., 2016), the saliency of small image changes can be estimated as the representational distance in trained DNNs Here, we quantified this approach by relying on data from a controlled psychophysical experiment (Alam et al., 2014). We found the metric to be far superior to simple image statistical properties and on par with a detailed perceptual model (Alam et al., 2014). 
This metric can be useful in image. compression, whereby optimizing degradation across image sub-patches by comparing perceptual loss may minimize visual artifacts and content loss.."}, {"section_index": "3", "section_name": "OUICK EXPERT SUMMARY", "section_text": "CaffeNet iter 1 CaffeNet iter 50K CaffeNet iter 310K 90 90 86 89 90 74 75 90 90 61 50 57 60 60 60 40 33 29 30 16 15 30 30 0 0 4 1 0 0 0 connnnnnonns Gabor Decomposition Steerable Pyramid 82 85 90 90 80 62 55 60 60 Consistent 4248 35 28 Inconsistent 30 30 8 19 'wnu 5 0 O Shape rowding Segmentation"}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "Considering the learned computation of ImageNet-trained DNNs, we find\nWe thank Yoram Bonneh for his valuable questions which led to much of this work\nLarge computation changes for perceptually salient image changes (Figure 1). Gestalt: segmentation, crowding, and shape interactions in computation (Figure 2) Contrast constancy: bandpass transduction in first layers is later corrected (Figure 3)."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Md Mushfiqul Alam. Kedarnath P Vilankar, David J Field, and Damon M Chandler. Local masking in natural images: A database and analysis. Journal of vision, 14(8):22-, jan 2014. ISSN 1534- 7362. doi: 10.1167/14.8.22\nThese properties are reminiscent of human perception, perhaps because learned general-purpos classifiers (human and DNN) tend to converge\nDeep neural networks (DNNs) are a class of computer learning algorithms that have become widely used in recent years (LeCun et al., 2015). By training with millions of examples, such models achieve unparalleled degrees of task-trained accuracy (Krizhevsky et al., 2012). This is not unprece. dented on its own - steady progress has been made in computer vision for decades, and to some degree current designs are just scaled versions of long-known principles (Lecun et al., 1998). In pre- vious models, however, only the design is general-purpose, while learning is mostly specific to the context of a trained task. Interestingly, for current DNNs trained to solve a large-scale image recog nition problem (Russakovsky et al., 2014), the learned computation is useful as a building block for drastically different and untrained visual problems (Huh et al., 2016; Yosinski et al., 2014).\nFigure 8: Background context for baseline DNN models (following figure 2). \"CaffeNet iter 310K' is reproduced from Figure 7.\nFor example, orientation- and frequency-selective features (Gabor patches) can be considered general-purpose visual computations. Such features are routinely discovered by DNNs (Krizhevsky et al., 2012; Zeiler & Fergus, 2013), by other learning algorithms (Hinton & Salakhutdinov, 2006.\nMatteo Carandini, Jonathan B Demb, Valerio Mante, David J Tolhurst, Yang Dan, Bruno A Ol shausen, Jack L Gallant, and Nicole C Rust. Do we know what the early visual system does? The Journal of Neuroscience, 25(46):10577-97, nov 2005. ISsN 1529-2401. doi: 10.1523/JNEUROSCI.3726-05.2005."}, {"section_index": "6", "section_name": "ABSTRACT", "section_text": "Another fascinating option is the formation of hypotheses in terms of mathematically differentiable trained-DNN constraints, whereby it is possible to efficiently solve for the visual stimuli that opti mally dissociate the hypotheses (see Gatys et al. 2015a;b; Mordvintsev et al. 2015 and note Goodfel low et al. 2014; Szegedy et al. 2013). 
The conclusions drawn from such stimuli can be independent of the theoretical assumptions about the generating process (for example, creating new visual illu sions that can be seen regardless of how they were created).\nAntoine Del Cul, Sylvain Baillet, and Stanislas Dehaene. Brain dynamics underlying the nonlinea. threshold for access to consciousness. PLoS Biol, 5(10):e260, 2007. ISsN 1545-7885\nAs an extension, general-purpose computations are perhaps of universal use. For example, a dimen. sionality reduction transformation that optimally preserves recognition-relevant information may constitute an ideal computation for both DNN and animal. More formally, different learning algo. rithms with different physical implementations may converge to the same computation when similar (or sufficiently general) problems are solved near-optimally. Following this line of reasoning, DNN. models with good general-purpose computations may be computationally similar to biological vi. sual systems, even more so than less accurate and less general biologically plausible simulations (Kriegeskorte, 2015; Yamins & DiCarlo, 2016).\nGoker Erdogan and Robert A Jacobs. A 3D shape inference model matches human visual objec similarity judgments better than deep convolutional neural networks. In Proceedings of the 38th Annual Conference of the Cognitive Science Society. Cognitive Science Society Austin, TX, 2016\nVGG-19 GoogLeNet ResNet-152 101 t Easy (ref.). 10 X 10 W8 + *+ X a 80 X X +* * * * * + * 102 x++ Hard + X 102 * 10 10-2 101 10~2 101 10-2 101 f1 CaffeNetiter 1. CaffeNetiter soK CaffeNetiter 310K 101 101 10 1 Q$8KEK O D \\0+* *< f2 Ox|* *+ X * XX 10 0 b ** 10 10 X X 10~2 10~1 10~2 101 102 101 * d GaborDecomposition SteerablePyramid Humanperception 70 0 101 * (%) errreeennney x e 101 x + 50 + +T x C X 0 40 10-2 10~1 10~2 101 60 62 64 66 68 70 MI Easy (bits) AccuracyEasy (%)\n10 Easy 10 B X X tOx+#K + a D X X +* * \\O + * * * + 10 +x Hard + + Xx + X * 102 102 10-2 101 102 10~1 102 101\nRO 10 O X X XOx+RK + a 80 X + O +* * * + \\C * 10 X Hard + X C 10 10 10~2 10~1 102 101 102 10~1 f1 CaffeNetiter1 CaffeNetiter soK CaffeNetiter 310K 101 10 101 O&KEK 10 O 0o f2 * OX* A 0 XX +x $+ 0 b 10 ** 10 104 X 10~2 101 10-2 10~1 10-2 101 * GaborDecomposition SteerablePyramid Humanperception 70 0 * 10 (%) perreeennney X e 10 60 x + 50 + x 1 c 104 10 40 10-2 10-1 102 101 60 62 64 66 68 70 AccuracyEasy (%) MI Easy (bits)\nMichele Fabre-Thorpe, Ghislaine Richard, and Simon J Thorpe. Rapid categorization of natural images by rhesus monkeys. Neuroreport, 9(2):303-308, 1998. 1SSN 0959-4965\nDavid J Field, Anthony Hayes, and Robert F Hess. Contour integration by the human visual system. evidence for a local association field. Vision research. 33(2):173-193. 1993. 1SsN 0042-6989.\nItzhak Fogel and Dov Sagi. Gabor filters as texture discriminator. Biological cybernetics, 61(2): 103-113. 1989. ISSN 0340-1200.\nLeon A. Gatys, Alexander S. Ecker, and Matthias Bethge. A Neural Algorithm of Artistic Style aug 2015a.\nLeon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. may 2015b.\nHere, we quantify the similarity between human visual perception, as measured by psychophys ical experiments, and individual computational stages (layers) in feed-forward DNNs trained on a large-scale image recognition problem (ImageNet LSVRC). 
Comparison is achieved by feeding the experimental image stimuli to the trained DNN and comparing a DNN metric (mean mutual information or mean absolute change) to perceptual data. The use of reduced (simplified and typically non-natural) stimuli ensures identical inherent task difficulty across compared categories and prevents confounding of categorization consistency with measured similarity. Perception, a system-level computation, may be influenced less by the architectural discrepancy (biology vs. DNN) than are neural recordings.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Jon Gottesman, Gary S Rubin, and Gordon E Legge. A power law for perceived contrast in human vision. Vision research, 21(6):791-799, 1981. ISSN 0042-6989.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. dec 2015.
From a perceptual perspective, an image change of fixed size has different saliency depending on image context (Polat & Sagi, 1993). To investigate whether the computation in trained DNNs exhibits similar contextual modulation, we used the Local Image Masking Database (Alam et al., 2014), in which 1080 partially-overlapping images were subjected to different levels of the same random additive noise perturbation, and for each image, a psychophysical experiment determined the threshold noise level at which the added-noise image is discriminated from two noiseless copies at 75% (Figure 1a). Threshold is the objective function that is compared with an L1-distance correlate in the DNN representation. The scale of measured threshold was:

threshold = 20 * log10( std(noise) / T ),    (1)

where std(noise) is the standard deviation of the additive noise, and T is the mean image pixel value calculated over the region where the noise is added (i.e. image center).
Minyoung Huh, Pulkit Agrawal, and Alexei A. Efros. What makes ImageNet good for transfer learning? aug 2016.
Lee et al., 2008; 2009; Olshausen & Field, 1997), and are extensively hard-coded in computer vision (Jain & Farrokhnia, 1991). Furthermore, a similar computation is believed to underlie the spatial response properties of visual neurons of diverse animal phyla (Carandini et al., 2005; DeAngelis et al., 1995; Hubel & Wiesel, 1968; Seelig & Jayaraman, 2013), and is evident in human visual perception (Campbell & Robson, 1968; Fogel & Sagi, 1989; Neri et al., 1999). This diversity culminates in satisfying theoretical arguments as to why Gabor-like features are so useful in general-purpose vision (Olshausen, 1996; Olshausen & Field, 1997).
Related work seems to be consistent with computation convergence. First, different DNN training regimes seem to converge to a similar learned computation (Li et al., 2015; Zhou et al., 2014). Second, image representation may be similar in trained DNN and in biological visual systems. That is, when the same images are processed by DNN and by humans or monkeys, the final DNN computation stages are strong predictors of human fMRI and monkey electrophysiology data collected from visual areas V4 and IT (Cadieu et al., 2014; Khaligh-Razavi & Kriegeskorte, 2014; Yamins et al., 2014).
Furthermore, more accurate DNN models exhibit stronger predictive power (Cadieu et al., 2014; Dubey & Agarwal, 2016; Yamins et al., 2014), and the final DNN computation stage is even a strong predictor of human-perceived shape discrimination (Kubilius et al., 2016). However, some caution is perhaps unavoidable, since measured similarity may be confounded with categorization consistency, view-invariance resilience, or similarity in the inherent difficulty of the tasks undergoing comparison. A complementary approach is to consider images that were produced by optimizing trained DNN-based perceptual metrics (Gatys et al., 2015a;b; Johnson et al., 2016; Ledig et al., 2016), which perhaps yields undeniable evidence of non-trivial computational similarity, although a more objective approach may be warranted.
Figure 9: Background context for Shape. Shown for each model is the measured MI for the six "Hard" shapes as a function of the MI for the "Easy" shape. The last panel shows an analogous comparison measured in human subjects by Weisstein & Harris (1974). A data point which lies below the dashed diagonal indicates a configuration for which discriminating line location is easier for the Easy shape compared with the relevant Hard shape.
Hinton and Salakhutdinov. Reducing the dimensionality of data with neural networks. Science (New York, N.Y.), 313(5786):504-7, jul 2006. ISSN 1095-9203. doi: 10.1126/science.1127647.
[Figure 1 panels (plot residue removed): a, example perceptual thresholds; b, scatter of L1 change vs. perceptual threshold (dB), R2 = 0.6; c, explained variability per computational stage; d, example images per threshold range. See the Figure 1 caption below.]
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.
[Figure 10 panels (axis residue removed): response as a function of frequency (cycles/image) and contrast, per computational stage (data, conv1, ..., prob) for each architecture. The Figure 10 caption appears later in the text.]
Figure 1: Predicting perturbation thresholds. a, For a fixed image perturbation, perceptual detection threshold (visualized by red arrow) depends on image context. b, Measured perceptual threshold is correlated with the average L1 change in DNN computation due to image perturbation (for DNN model VGG-19, image scale=100%). c, Explained variability (R2) of perceptual threshold data when L1 change is based on isolated computational layers for different input image scales. Same VGG-19 model as in (b). X-axis labels: data refers to raw image pixel data, conv*_1 and fc_* are the before-ReLU output of a convolution and a fully-connected operation, respectively, and prob is the output class label probabilities vector. d, Example images for which predicted threshold in b is much higher than perceptually measured ("Overshoot", where perturbation saliency is better than predicted), or vice versa ("Undershoot"). Examples are considered from several perceptual threshold ranges (within 2 dB of shown number).
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, may 2015. ISSN 0028-0836. doi: 10.1038/nature14539.
The DNN correlate of perceptual threshold we used was the average L1 change in DNN computation between added-noise images and the original, noiseless image.
Formally,

L_{i,n}(I) = \overline{ | a_i(I + \mathrm{noise}(n)) - a_i(I) | },

where a_i(X) is the activation value of neuron i during the DNN feedforward pass for input image X, and the inner average (denoted by the bar) is taken over repetitions with random n-sized noise (noise is introduced at random phase spectra in a fixed image location, an augmentation that follows the between-image randomization described by Alam et al., 2014; the number of repetitions was 10 or more). Unless otherwise specified, the final L1 prediction is L_n averaged across noise levels (-40 to 25 dB with 5-dB intervals) and computational neurons (first within and then across computational stages). Using L1 averaged across noise levels as a correlate for the noise level of perceptual threshold is a simple approximation with minimal assumptions.
Figure 10: Contrast sensitivity (following Figure 3) for DNN architectures CaffeNet, GoogLeNet and ResNet-152.
Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent Learning: Do different neural networks learn the same representations? arXiv preprint arXiv:1511.07543, 2015.
Yucheng Liu and Jan P. Allebach. Near-threshold perceptual distortion prediction based on optimal structure classification. In 2016 IEEE International Conference on Image Processing (ICIP), pp. 106-110. IEEE, sep 2016. ISBN 978-1-4673-9961-6. doi: 10.1109/ICIP.2016.7532328.
Results show that the L1 metric is correlated with the perceptual threshold for all tested DNN architectures (Figure 1b, 4a-c). In other words, higher values of the L1 metric (indicating larger changes in DNN computation due to image perturbation, consistent with higher perturbation saliency) are associated with lower measured perceptual thresholds, i.e., with perturbations that are detected at weaker noise levels.
Tomer Livne and Dov Sagi. Configuration influence on crowding. Journal of Vision, 7(2):4, 2007. ISSN 1534-7362.
Honglak Lee, Chaitanya Ekanadham, and Andrew Y. Ng. Sparse deep belief net model for visual area V2. In Advances in Neural Information Processing Systems, pp. 873-880, 2008.
Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09, pp. 1-8, New York, New York, USA, jun 2009. ACM Press. ISBN 9781605585161. doi: 10.1145/1553374.1553453.
Peter Neri, Andrew J Parker, and Colin Blakemore. Probing the human stereoscopic system with reverse correlation. Nature, 401(6754):695-698, 1999. ISSN 0028-0836.
Bruno A Olshausen. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996. ISSN 0028-0836.
Denis G Pelli, Melanie Palomares, and Najib J Majaj. Crowding is unlike ordinary masking: Distinguishing feature integration from detection. Journal of vision, 4(12):12, 2004. ISSN 1534-7362.
Noga Pinchuk-Yacobi, Ron Dekel, and Dov Sagi. Expectation and the tilt aftereffect. Journal of vision, 15(12):39, sep 2015. ISSN 1534-7362. doi: 10.1167/15.12.39.
[Figure 11 panels (axis-tick residue removed): a, Human vs. VGG-19 prob layer; b, Human vs. VGG-19 conv1 layer; iso-contrast curves over frequency (cycles/deg. of visual field; cycles/image). Caption appears below.]
Noga Pinchuk-Yacobi, Hila Harris, and Dov Sagi. Target-selective tilt aftereffect during texture learning. Vision research, 124:44-51, 2016. ISSN 0042-6989.
U Polat and D Sagi.
R T Pramod and S P Arun. Do computational models differ systematically from human object perception? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1601-1609, 2016.

To quantify and compare predictive power, we considered the percent of linearly explained variability (R2). For all tested DNN architectures, the prediction explains about 60% of the perceptual variability (Tables 1, 2; baselines at Tables 3-5), where inter-person similarity representing the theoretical maximum is 84% (Alam et al., 2014). The DNN prediction is far more accurate than a prediction based on simple image statistical properties (e.g. RMS contrast), and is on par with a detailed perceptual model that relies on dozens of psychophysically collected parameters (Alam et al., 2014). The Spearman correlation coefficient is much higher compared with the perceptual model (with an absolute SROCC value of about 0.79 compared with 0.70, Table 1), suggesting that the L1 metric gets the order right but not the scale. We did not compare these results with models that fit the experimental data (e.g. Alam et al., 2015; Liu & Allebach, 2016), since the L1 metric has no explicit parameters. Also, different DNN architectures exhibited high similarity in their predictions (R2 of about 0.9, e.g. Figure 4d).

Prediction can also be made from isolated computational stages, instead of across all stages as before. This analysis shows that the predictive power peaks mid-computation across all tested image scales (Figure 1c). This peak is consistent with the use of middle DNN layers to optimize perceptual metrics (Gatys et al., 2015a;b; Ledig et al., 2016), and is reminiscent of cases in which low- to mid-level vision is the performance-limiting computation in the detection of at-threshold stimuli (Campbell & Robson, 1968; Del Cul et al., 2007).

Johannes D Seelig and Vivek Jayaraman. Feature detection and orientation tuning in the Drosophila central complex. Nature, 503(7475):262-266, 2013. ISSN 0028-0836.

Figure 11: Comparison of contrast sensitivity. Shown are iso-output curves, for which perceived contrast is the same (Human), or for which the L1 change relative to a gray image is the same (DNN model VGG-19). To obtain a correspondence between human frequency values (given in cycles per degree of visual field) and DNN frequency values (given in cycles per image), a scaling was chosen such that the minimum of the blue curve is given at the same frequency value. Human data is for subject M.A.G. as measured by Georgeson & Sullivan (1975).

Thomas Serre. Hierarchical Models of the Visual System. In Encyclopedia of Computational Neuroscience, pp. 1-12. Springer, 2014. ISBN 1461473209.

Finally, considering the images for which the L1-based prediction has a high error suggests a factor which causes a systematic inconsistency with perception (Figures 1d, 6). This factor may be related to the mean image luminance: by introducing noise perturbations according to the scale of Equation 1, a fixed noise size (in dB) corresponds to smaller pixel changes in dark compared with bright images.
(Using this scale reflects an assumption of multiplicative rather than additive conservation; this assumption may be justified for the representation at the final, but perhaps not the intermediate, computational stages, considering the log-linear contrast response discussed in Section 5.) Another factor may be the degree to which image content is identifiable.

Eero P Simoncelli and William T Freeman. The steerable pyramid: a flexible architecture for multi-scale derivative computation. In ICIP (3), pp. 444-447, 1995.

Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint arXiv:1409.1556, sep 2014.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Andrew B Watson and Albert J Ahumada. Predicting visual acuity from wavefront aberrations. Journal of Vision, 8(4):17.1-19, jan 2008. ISSN 1534-7362. doi: 10.1167/8.4.17.

Table 1: Prediction accuracy. Percent of linearly explained variability (R2), absolute value of Spearman rank-order correlation coefficient (SROCC), and the root mean squared error of the linear prediction (RMSE) are presented for each prediction model. Note the measurement scale of the threshold data being predicted (Eq. 1). (*) Thresholds linearized through a logistic transform before prediction (see Larson & Chandler, 2010), possibly increasing but not decreasing measured predictive strength. (**) Average of four similar alternatives.

Michael A Webster. Adaptation and visual coding. Journal of Vision, 11(5), jan 2011. ISSN 1534-7362.

The previous analysis suggested gross computational similarity between human perception and trained DNNs. Next, we aimed to extend the comparison to more interpretable properties of perception by considering more highly controlled designs. To this end, we considered cases in which a static background context modulates the difficulty of discriminating a foreground shape, despite no spatial overlap of foreground and background. This permits interpretation by considering the cause of the modulation.

N. Weisstein and C. S. Harris. Visual Detection of Line Segments: An Object-Superiority Effect. Science, 186(4165):752-755, nov 1974. ISSN 0036-8075. doi: 10.1126/science.186.4165.752.

"}, {"section_index": "7", "section_name": "8.1 DNN MODELS", "section_text": "To collect DNN computation snapshots, we used MATLAB with MatConvNet version 1.0-beta2 (Vedaldi & Lenc, 2015). All MATLAB code will be made available upon acceptance of this manuscript. The pre-trained DNN models we have used are: CaffeNet (which is a variant of AlexNet provided in Caffe, Jia et al., 2014), GoogLeNet (Szegedy et al., 2014), VGG-19 (Simonyan & Zisserman, 2014), and ResNet-152 (He et al., 2015). The models were trained on the same ImageNet LSVRC. The CaffeNet model was trained using Caffe with the default ImageNet training parameters (stopping at iteration 310,000) and imported into MatConvNet. For the GoogLeNet model, we used the imported pre-trained reference-Caffe implementation. For VGG-19 and ResNet-152, we used the imported pre-trained original versions. In all experiments the input image size was 224 × 224 or 227 × 227.

Daniel L K Yamins and James J DiCarlo. Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19(3):356-365, 2016. ISSN 1097-6256.
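For reference, prediction-accuracy numbers of the kind reported in Table 1 (R2, absolute SROCC, and RMSE of a one-parameter linear prediction) can be computed along these lines; a hedged sketch in which l1 and thresholds_db are hypothetical arrays of per-image L1 values and measured thresholds:

import numpy as np
from scipy import stats

def prediction_accuracy(l1, thresholds_db):
    """R^2, |SROCC|, and RMSE of a linear map from the L1 metric
    to measured perceptual thresholds (dB)."""
    slope, intercept, r, _, _ = stats.linregress(l1, thresholds_db)
    pred = slope * np.asarray(l1) + intercept
    r2 = r ** 2                                      # explained variability
    srocc = abs(stats.spearmanr(l1, thresholds_db)[0])
    rmse = float(np.sqrt(np.mean((pred - np.asarray(thresholds_db)) ** 2)))
    return r2, srocc, rmse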
We first consider segmentation, in which arrangement is better discriminated for arrays of consistently oriented lines compared with inconsistently oriented lines (Figure 2a) (Pinchuk-Yacobi et al., 2016). Crowding is considered next, where surround clutter that is similar to the discriminated target leads to deteriorated discrimination performance (Figure 2b) (Livne & Sagi, 2007). Last to be addressed is object superiority, in which a target line location is better discriminated when it is in a shape-forming layout (Figure 2c) (Weisstein & Harris, 1974). In this case, clutter is controlled by having the same fixed number of lines in context. To measure perceptual discrimination, these works introduced performance-limiting manipulations such as location jittering, brief presentation, and temporal masking. While different manipulations showed different measured values, order-of-difficulty was typically preserved. Here we changed all the original performance-limiting manipulations to location jittering (whole-shape or element-wise, see Section 8.4).

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.

Matthew D Zeiler and Rob Fergus. Visualizing and Understanding Convolutional Networks. arXiv preprint arXiv:1311.2901, nov 2013.

"}, {"section_index": "8", "section_name": "8.2 BASELINE MODELS", "section_text": "As baselines to compare with pre-trained DNN models, we consider: (a) a multiscale linear filter bank of Gabor functions, (b) a steerable-pyramid linear filter bank (Simoncelli & Freeman, 1995), (c) the VGG-19 model for which the learned parameters (weights) were randomly scrambled within layer, and (d) the CaffeNet model at multiple time points during training. For the Gabor decomposition, the following Gabor filters were used: all combinations of {1, 2, 4, 8, 16, 32, 64} px, {1, 2}, orientation = {0, π/3, 2π/3, π, 4π/3, 5π/3}, and phase = {0, π/2}.

Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object Detectors Emerge in Deep Scene CNNs. arXiv preprint arXiv:1412.6856, dec 2014.

To quantify discrimination difficulty in DNNs, we measured the target-discriminative information of isolated neurons (where performance is limited by location jittering noise), then averaged across all neurons (first within and then across computational layer stages). Specifically, for each neuron, we measured the reduction in categorization uncertainty due to observation, termed mutual information (MI):

MI(A_i; C) = H(C) − H(C | A_i)

where H stands for entropy, and A_i is a random variable for the value of neuron i when the DNN processes a random image from a category defined by the random variable C. For example, if a neuron gives a value in the range of 100.0 to 200.0 when the DNN processes images from category A, and 300.0 to 400.0 for category B, then the category is always known by observing the value, and so mutual information is high (MI = 1 bits). On the other extreme, if the neuron has no discriminative task information, then MI = 0 bits. To measure MI, we quantized activations into eight equal-amount bins, and used 500 samples (repetitions having different location jittering noise) across categories. The motivation for this correlate is the assumption that the perceptual order-of-difficulty reflects the quantity of task-discriminative information in the representation.

"}, {"section_index": "9", "section_name": "8.3 IMAGE PERTURBATION EXPERIMENT", "section_text": "The noiseless images were obtained from Alam et al. (2014). In the main text, "image scale" refers to percent coverage of DNN input.
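A minimal sketch of the per-neuron MI correlate defined above, before turning to the perturbation-experiment details; acts is a hypothetical array holding one neuron's activation over jittered samples, and labels the category of each sample:

import numpy as np

def neuron_mi(acts, labels, n_bins=8):
    """MI(A_i; C) = H(C) - H(C | A_i), with the neuron's activations
    quantized into equal-amount (quantile) bins; result in bits."""
    acts, labels = np.asarray(acts, float), np.asarray(labels)
    edges = np.quantile(acts, np.linspace(0, 1, n_bins + 1))
    bins = np.searchsorted(edges[1:-1], acts)  # bin index 0..n_bins-1

    def entropy(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    cats = np.unique(labels)
    h_c = entropy(np.array([np.sum(labels == c) for c in cats]))
    h_c_given_a = 0.0  # average category entropy within each activation bin
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            counts = np.array([np.sum(labels[mask] == c) for c in cats])
            h_c_given_a += mask.mean() * entropy(counts)
    return h_c - h_c_given_a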
Since the size of the original images (149 × 149) is smaller than the DNN input of (224 × 224) or (227 × 227), the images were resized by a factor of 1.5 so that 100% image scale covers approximately the entire DNN input area.

Human psychophysics and DNN experiments were done for nearly identical images. A slight discrepancy relates to how the image is blended with the background in the special case where the region where noise is added has no image surround on one or two sides. On these sides (which depend on the technical procedure with which the images were obtained, see Alam et al., 2014), the surround blending here was hard, while the original was smooth.

Results show that, across hundreds of configurations (varying pattern element size, target location jitter magnitude, and DNN architecture; see Section 8.4), the qualitative order of difficulty in terms of the DNN MI metric is consistent with the order of difficulty measured in human psychophysical experiments, for the conditions addressing segmentation and crowding (Figures 2d, 7; for baseline models see Figure 8). It is interesting to note that the increase in similarity develops gradually along different layer types in the DNN computation (i.e. not just pooling layers), and is accompanied by a gradual increase in the quantity of task-relevant information (Figure 2e-g). This indicates a link between task relevance and computational similarity for the tested conditions. Note that unlike the evident increase in isolated-unit task information, the task information from all units combined decreases by definition along any computational hierarchy. An intuition for this result is that the total hidden information decreases, while the more accessible per-unit information increases.

For shape formation, four out of six shapes consistently show an order of difficulty like perception, and two shapes consistently do not (caricature at Figure 2h; actual data at Figure 9).

"}, {"section_index": "10", "section_name": "8.4.1 SEGMENTATION", "section_text": "The images used are based on the Texture Discrimination Task (Karni & Sagi, 1991). In the variant considered here (Pinchuk-Yacobi et al., 2015), subjects were presented with a grid of lines, all of which were horizontal, except two or three that were diagonal. Subjects discriminated whether the arrangement of diagonal lines is horizontal or vertical, and this discrimination was found to be more difficult when the central line is horizontal rather than diagonal ('Hard' vs. 'Easy' in Figure 2a). To limit human performance in this task, two manipulations were applied: (a) the location of each line in the pattern was jittered, and (b) a noise mask was presented briefly after the pattern. Here we only retained (a).

A total of 90 configurations were tested, obtained by combinations of the following alternatives:

Three scales: line length of 9, 12.3, or 19.4 px (the number of lines co-varied with line length, see Figure 12).
Three levels of location jittering, defined as a multiple of line length: {1, 2, 3} · 0.0625 · l px, where l is the length of a line in the pattern. Jittering was applied separately to each line in the pattern.
Ten locations of diagonal lines: center, random, four locations of half-distance from center to corners, four locations of half-distance from center to image borders.

For each configuration, the discriminated arrangement of diagonal lines was either horizontal or vertical, and the central line was either horizontal or diagonal (i.e. hard or easy).

Figure 12: Pattern scales used in the different configurations of the Segmentation condition. Actual images used were white-on-black rather than black-on-white.

"}, {"section_index": "11", "section_name": "8.4.2 CROWDING", "section_text": "The images used are motivated by the crowding effect (Livne & Sagi, 2007; Pelli et al., 2004).

Figure 2: Background context. a-c, Illustrations of reproduced discrimination stimuli for three psychophysical experiments (actual images used were white-on-black rather than black-on-white, and pattern size was smaller, see Figures 12-14). d, Number of configurations for which order-of-difficulty in discrimination is qualitatively consistent with perception according to a mutual information DNN metric. Configurations vary in pattern (element size, target location, and jitter magnitude; see Section 8.4) and in DNN architecture used (CaffeNet, GoogLeNet, VGG-19, and ResNet-152). The DNN metric is the average across neurons of the isolated-neuron target-discriminative information (averaged first within, and then across computational layer stages), where performance is limited by location jittering (e.g. evident jitter in illustrations). e-g, The value of the MI metric across computational layers of model VGG-19 for a typical pattern configuration. The six "hard" (gray) lines in Shape MI correspond to six different layouts (see Section 8.4.3). Analysis shows that for isolated computation stages, similarity to perception is evident only at the final DNN computation stages. h, A caricature summarizing the similarity and discrepancy of perception and the MI-based DNN prediction for Shape (see Figure 9).

For each configuration, the discriminated letter was either A, B, C, D, E, or F, and the background was either blank (easy) or composed of the letters M, N, S, and T (hard).

Figure 5: Prediction accuracy as a function of computational stage. a, Predicting perceptual sensitivity for model VGG-19 using the best single kernel (i.e. using one fitting parameter, no cross-validation), vs. the standard L1 metric (reproduced from Figure 1). b, For non-branch computational stages of model ResNet-152.

Model        R2   SROCC  RMSE  Recognition accuracy
CaffeNet     .59  .78    5.44  56%
GoogLeNet    .59  .79    5.45  66%
VGG-19       .60  .79    5.40  70%
ResNet-152   .53  .74    5.82  75%

A cornerstone of biological vision research is the use of sine gratings at different frequencies, orientations, and contrasts (Campbell & Robson, 1968).
Notable are results showing that the lowest perceivable contrast in human perception depends on frequency. Specifically, high spatial frequencies are attenuated by the optics of the eye, and low spatial frequencies are believed to be attenuated due to processing inefficiencies (Watson & Ahumada, 2008), so that the lowest perceivable contrast is found at intermediate frequencies. (To appreciate this yourself, examine Figure 3a.) Thus, for low-contrast gratings, the physical quantity of contrast is not perceived correctly: it is not preserved across spatial frequencies. Interestingly, this is corrected for gratings of higher contrasts, for which perceived contrast is more constant across spatial frequencies (Georgeson & Sullivan, 1975).

Table 2: Accuracy of perceptual sensitivity prediction and task-trained ImageNet center-crop top-1 validation accuracy for different DNN models (following Table 1, from which the third row is reproduced; used scale: 100%). The quality of prediction for ResNet-152 improves dramatically if only the first tens of layers are considered (see Figure 5b).

Figure 13: Pattern scales used in the different configurations of the Crowding condition. Actual images used were white-on-black rather than black-on-white.

"}, {"section_index": "12", "section_name": "8.4.3 SHAPE", "section_text": "The images used are based on the object superiority effect by Weisstein & Harris (1974), where discriminating a line location is easier when, combined with the surrounding lines, a shape is formed.

The DNN correlate we considered is the mean absolute change in DNN representation between a gray image and sinusoidal gratings, at all combinations of spatial frequency and contrast. Formally, for neurons in a given layer, we measured:

L1(contrast, frequency) = (1 / N_neurons) Σ_{i=1}^{N_neurons} | ā_i(contrast, frequency) − a_i(0, 0) |

where ā_i(contrast, frequency) is the average activation value of neuron i to 250 sine images (random orientation, random phase), a_i(0, 0) is the response to a blank (gray) image, and N_neurons is the number of neurons in the layer. This measure reflects the overall change in response vs. the gray image.

For Crowding, a total of 90 configurations were tested, obtained by combinations of the following alternatives:

Three scales: font size of 15.1, 20.6, or 32.4 px (see Figure 13).
Three levels of discriminated-letter location jittering, defined as a multiple of font size: {1, 2, 3} · 0.0625 · l px, where l is font size. The jitter of the surround letters (M, N, S, and T) was fixed (i.e. the background was static).
Ten locations: center, random, four locations of half-distance from center to corners, four locations of half-distance from center to image borders.

Figure 4: Predicting perceptual sensitivity to image changes (following Figure 1). a-c, The L1 change in CaffeNet, GoogLeNet, and ResNet-152 DNN architectures as a function of perceptual threshold. d, The L1 change in GoogLeNet as a function of the L1 change in VGG-19.

For Shape, the configuration alternatives included:

Three scales: discriminated-line length of 9, 15.1, or 22.7 px (see Figure 14).
Five levels of whole-pattern location jittering, defined as a multiple of discriminated-line length: {1, 2, 5, 10, 15} · 0.0625 · l px, where l is the length of the discriminated line.

Model                           R2   SROCC  RMSE
VGG-19, scrambled weights       .18  .39    7.76
Gabor filter bank               .32  .12    8.03
Steerable-pyramid filter bank   .37  .15    7.91

Results show a bandpass response for low-contrast gratings (blue lines strongly modulated by frequency, Figures 3, 10), and what appears to be a mostly constant response at high contrast for end-computation layers (red lines appear more invariant to frequency), in accordance with perception.
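As a concrete sketch of this grating measurement (layer_acts is a hypothetical function returning one layer's activation vector; grating generation is simplified relative to the study's stimuli):

import numpy as np

def sine_grating(size, cycles_per_image, contrast, rng):
    """Gray-background sine grating with random orientation and phase,
    pixel values around a 0.5 gray level."""
    theta = rng.uniform(0, np.pi)
    phase = rng.uniform(0, 2 * np.pi)
    y, x = np.mgrid[0:size, 0:size] / size
    u = x * np.cos(theta) + y * np.sin(theta)
    return 0.5 + 0.5 * contrast * np.sin(2 * np.pi * cycles_per_image * u + phase)

def l1_vs_gray(layer_acts, size=224, contrast=0.5, freq=7.0, n=250, seed=0):
    """Mean |a_bar_i(contrast, freq) - a_i(0, 0)| over one layer, with
    the grating response averaged over n random orientations/phases."""
    rng = np.random.default_rng(seed)
    gray = layer_acts(np.full((size, size), 0.5))
    mean_resp = np.mean(
        [layer_acts(sine_grating(size, freq, contrast, rng)) for _ in range(n)],
        axis=0)
    return np.mean(np.abs(mean_resp - gray))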
Table 3: Accuracy of perceptual sensitivity prediction for baseline models (see Section 8.2; used scale: 100%).

We next aimed to compare these results with perception. Data from human experiments is generally iso-output (i.e. for a pre-set output, such as 75% detection accuracy, the input is varied to find the value which produces the preset output). However, the DNN measurements here are iso-input (i.e. for a fixed input contrast the L1 is measured). As such, human data should be compared to the interpolated inverse of the DNN measurements. Specifically, for a set output value, the interpolated contrast value which produces that output is found for every frequency (Figure 11). This analysis permits quantifying the similarity of iso-output curves for human and DNN, measured here as the percent of log-contrast variability in the human measurements which is explained by the DNN predictions. This showed a high explained variability at the end computation stage (prob layer, R2 = 94%), but importantly, a similarly high value at the first computational stage (conv1_1 layer, R2 = 96%). Intuitively, while the "internal representation" variability in terms of L1 is small, the iso-output number-of-input-contrast-changes variability is still high. For example, for the prob layer, about the same L1 is measured for (Contrast=1, freq=75) and for (Contrast=0.18, freq=12).

Model                R2   SROCC  RMSE  Recognition accuracy
CaffeNet iter 1      .46  .67    6.30  0%
CaffeNet iter 50K    .59  .79    5.43  37%
CaffeNet iter 100K   .60  .79    5.41  39%
CaffeNet iter 150K   .60  .78    5.43  53%
CaffeNet iter 200K   .59  .78    5.45  54%
CaffeNet iter 250K   .59  .78    5.43  56%
CaffeNet iter 300K   .59  .78    5.44  56%
CaffeNet iter 310K   .59  .78    5.44  56%

Figure 14: Pattern scales used in the different configurations of the Shape condition. Actual images used were white-on-black rather than black-on-white.

Table 4: Accuracy of perceptual sensitivity prediction during CaffeNet model standard training (used scale: 100%). Last row reproduced from Table 2.

An interesting, unexpected observation is that the logarithmically spaced contrast inputs are linearly spaced at the end-computation layers. That is, the average change in DNN representation scales logarithmically with the size of the input change. This can be quantified by the correlation of output L1 with log contrast input, which showed R2 = 98% (averaged across spatial frequencies) for prob, while much lower values were observed for early and middle layers (up to layer fc7). The same computation when scrambling the learned parameters of the model showed R2 = 60%. Because the degree of log-linearity observed was extremely high, it may be an important emergent property of the learned DNN computation, which may deserve further investigation.
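The log-linearity observation reduces to a simple correlation; a sketch, assuming l1_values holds the measured layer L1 at a logarithmically spaced series of contrasts:

import numpy as np

def log_linearity_r2(contrasts, l1_values):
    """R^2 of output L1 against log contrast, as in the check above."""
    r = np.corrcoef(np.log(contrasts), l1_values)[0, 1]
    return r ** 2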
However, this property is only reminiscent of, and not immediately consistent with, the perceptual power-law scaling (Gottesman et al., 1981).

"}, {"section_index": "13", "section_name": "8.5 CONTRAST SENSITIVITY EXPERIMENT", "section_text": "The images used depicted sine gratings at different combinations of contrast, spatial frequency, sine phase, and sine orientation.

Table 5: Robustness of perceptual sensitivity prediction for varying prediction parameters for model VGG-19. First three rows reproduced from Table 1. Measurements for the lower noise range of -60:-40 dB were omitted by mistake.

Figure 3: Contrast sensitivity. a, Perceived contrast is strongly affected by spatial frequency at low contrast, but less so at high contrast (which preserves the physical quantity of contrast and is thus termed constancy). b, The L1 change in VGG-19 representation between a gray image and images depicting sinusoidal gratings at each combination of sine spatial frequency (x-axis) and contrast (color) (random orientation, random phase), considering the raw image pixel data representation (data), the before-ReLU output of the first convolutional layer representation (conv1_1), the output of the last fully-connected layer representation (fc8), and the output class label probabilities representation (prob).

Six "hard" background line layouts (patterns b-f of their Figure 2 and the additional pattern f of their Figure 3 in Weisstein & Harris, 1974). The "easy" layout was always the same (pattern a).

For each configuration, the line whose location is discriminated had four possible locations (two locations are shown in Figure 2c), and the surrounding background line layout could compose a shape (easy) or not (hard).

Scale  Metric  Augmentation  Noise range  R2   SROCC  RMSE
100%   L1      noise phase   -40:25 dB    .60  .79    5.40
66%    L1      noise phase   -40:25 dB    .60  .79    5.42
50%    L1      noise phase   -40:25 dB    .57  .77    5.57
100%   L2      noise phase   -40:25 dB    .62  .80    5.29
100%   L1      None          -40:25 dB    .58  .77    5.55
100%   L1      noise phase   -20:25 dB    .59  .78    5.46
100%   L1      noise phase   -40:5 dB     .59  .79    5.43

Model                 Day 1  Days 2-4  Masked
VGG-19                .36    .37       .15
GoogLeNet             .31    .22       .16
ResNet-152            .26    .26       .11
CaffeNet iter 1       .32    .29       .39
CaffeNet iter 50K     .15    .19       .16
CaffeNet iter 310K    .16    .12       .18
Gabor Decomposition   .26    .27       .48
Steerable Pyramid     .24    .32       .25

Table 6: Background context for Shape. Shown is the Spearman correlation coefficient (SROCC) of perceptual data vs. model-based MI prediction across shapes (i.e. considering all shapes rather than only Easy vs. Hard; note that the original robust finding is the superiority of the Easy shape). Perceptual data from Weisstein & Harris (1974), where "Day 1" and "Days 2-4" (averaged) are for the reduced-masking condition depicted in their Figure 3.

It may be tempting to believe that what we see is the result of a simple transformation of visual input.
Centuries of psychophysics have, however, revealed complex properties in perception, by crafting stimuli that isolate different perceptual properties. In our study, we used the same stimuli to investigate the learned properties of deep neural networks (DNNs), which are the leading computer vision algorithms to date (LeCun et al., 2015).

The DNNs we used were trained in a supervised fashion to assign labels to input images. To some degree, this task resembles the simple verbal explanations given to children by their parents. Since human perception is obviously much richer than the simple external supervision provided, we were not surprised to find that the best correlate for perceptual saliency of image changes is a part of the DNN computation that is only supervised indirectly (i.e. the mid-computation stage). This similarity is so strong that even with no fine-tuning to human perception, the DNN metric is competitively accurate, even compared with a direct model of perception.

This strong, quantifiable similarity to a gross aspect of perception may, however, reflect a mix of similarities and discrepancies in different perceptual properties. To address isolated perceptual effects, we considered experiments that manipulate a spatial interaction, where the difficulty of discriminating a foreground target is modulated by a background context. Results showed modulation of DNN target-diagnostic, isolated-unit information, consistent with the modulation found in perceptual discrimination. This was shown for contextual interactions reflecting grouping/segmentation (Harris et al., 2015), crowding/clutter (Livne & Sagi, 2007; Pelli et al., 2004), and shape superiority (Weisstein & Harris, 1974). DNN similarity to these grouping/gestalt phenomena appeared at the end-computation stages.

No less interesting are the cases in which there is no similarity. For example, perceptual effects related to 3D (Erdogan & Jacobs, 2016) and symmetry (Pramod & Arun, 2016) do not appear to have a strong correlate in the DNN computation. Indeed, it may be interesting to investigate the influence of visual experience in these cases. And, equally important, similarity should be considered in terms of specific perceptual properties rather than as a general statement.

In the human hierarchy of visual processing areas, information is believed to be processed in a feedforward sweep, followed by recurrent processing loops (top-down and lateral) (Lamme & Roelfsema, 2000). Thus, for example, the early visual areas can perform deep computations. Since the mapping from visual areas to DNN computational layers is not simple, it will not be considered here. (Note that ResNet connectivity is perhaps reminiscent of unrolled recurrent processing.)

Interestingly, debate is ongoing about the degree to which visual perception is dependent on recurrent connectivity (Fabre-Thorpe et al., 1998; Hung et al., 2005): recurrent representations are obviously richer, but feedforward computations converge much faster. An implicit question here regarding the extent of feasible feedforward representations is, perhaps: can contour segmentation, contextual influences, and complex shapes be learned? Based on the results reported here for feedforward DNNs, a feedforward representation may seem sufficient. However, the extent to which this is true may be very limited. In this study we used small images with a small number of lines, while effects such as contour integration seem to take place even in very large configurations (Field et al., 1993).
Such scaling seems more likely in a recurrent implementation. As such, a reasonable hypothesis may be that the full extent of contextual influence is only realizable with recurrence, while feedforward DNNs learn a limited version by converging towards a useful computation.

The use of DNNs in modeling of visual perception (or of biological visual systems in general) is subject to a tradeoff between accuracy and biological plausibility. In terms of architecture, other deep models better approximate our current understanding of the visual system (Riesenhuber & Poggio, 1999).

Figure 6: Images where the predicted threshold is too high ("Overshoot", where perturbation saliency is better than predicted) or too low ("Undershoot"), considered from several perceptual threshold ranges (±2 dB of shown number). Some images are reproduced from Figure 1."}]
HJ0NvFzxl
[{"section_index": "0", "section_name": "LEARNING GRAPHICAL STATE TRANSITIONS", "section_text": "machines with different rules and initial tape contents, each of which simulated 6 timesteps of the Turing machine. Performance was then evaluated on 1000 new examples generated with the same format. The models were evaluated by picking the most likely graph generated by the model, and comparing it with the correct graph. The percent accuracy denotes the fraction of the examples for which these two graphs were identical at all timesteps. In addition to evaluating the performance on identical tasks, the generalization ability of the models was also assessed. The same trained models were evaluated on versions of the task with 20 and 30 timesteps of simulation.

Daniel D. Johnson
Department of Computer Science, Harvey Mudd College, 301 Platt Boulevard

1. John grabbed the milk.
2. John travelled to the bedroom.
3. Sandra took the football.
4. John went to the garden.
5. John let go of the milk.
6. Sandra let go of the football.
7. John got the football.
8. John grabbed the milk.
Where is the milk?

Results are shown in Table 3. The models successfully learned the assigned tasks, reaching high levels of accuracy for both tasks. Additionally, the models show the ability to generalize to large inputs, giving a perfect output in the majority of extended tasks. For visualization purposes, Figure 3 shows the model at various stages of training when evaluated starting with a single 1 cell.

"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Graph-structured data is important in modeling relationships between multiple entities, and can be used to represent states of the world as well as many data structures. Li et al. (2016) describe a model known as a Gated Graph Sequence Neural Network (GGS-NN) that produces sequences from graph-structured input. In this work I introduce the Gated Graph Transformer Neural Network (GGT-NN), an extension of GGS-NNs that uses graph-structured data as an intermediate representation. The model can learn to construct and modify graphs in sophisticated ways based on textual input, and also to use the graphs to produce a variety of outputs. For example, the model successfully learns to solve almost all of the bAbI tasks (Weston et al., 2016), and also discovers the rules governing graphical formulations of a simple cellular automaton and a family of Turing machines.

Figure 5: Diagram of one sample story from the bAbI dataset (Task 2), along with a graphical representation of the knowledge state after the italicized sentence.

Many methods have been proposed for combining neural networks with graphs. These methods generally require the input to the network to be in graphical format. For instance, GNNs and GGS-NNs take a graph as input, and propagate information between nodes according to the graph structure (Gori et al., 2005; Scarselli et al., 2009; Li et al., 2016). Similarly, graph convolutional networks extract information from an existing graph structure by using approximations to spectral graph convolutions (Kipf & Welling, 2016). These methods are similar to GGT-NNs in that they all store information in the nodes of a graph and use edges to determine how information flows. However, they all use a graph with fixed structure, and can only accept graphical data. The GGT-NN model, on the other hand, allows the graph structure to be built and modified based on unstructured input.
Giles et al. (1992) describe a method for extracting a finite state machine from a trained recurrent neural network by quantizing the hidden states of the network, recording all possible state transitions, and using them to construct a minimal directed graph representing the state machine. This method, however, requires postprocessing of the network to extract the graph, and is limited to extracting graphs that represent state machines. Additionally, although the FSM extraction method described by Giles et al. (1992) and the GGT-NN model both produce graphs using neural networks, the goals are different: the FSM extraction method aims to learn a single graph that can classify sequences, whereas the GGT-NN model aims to learn a neural network that can manipulate graphs.

"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Many different types of data can be formulated using a graph structure. One form of data that lends itself to a graphical representation is data involving relationships (edges) between entities (nodes). Abstract maps of places and paths between them also have a natural graph representation, where places are nodes and paths are edges. In addition, many data structures can be expressed in graphical form, including linked lists and binary trees.

The lifted relational neural network (LRNN) is another approach to working with structured data (Sourek et al., 2015). LRNNs require the input to be formatted as a combination of weighted predicate logic statements, encompassing both general rules and specific known facts. For each training example, the statements are used to construct a "ground neural network", with a connection pattern determined by the dependencies between the statements. LRNNs can learn to extract information by adjusting the weights of each statement, but require the rules to be composed by hand based on the task structure. Furthermore, unlike in GGT-NNs, a LRNN has no internal state associated with the objects it describes (which are instead represented by single neurons), and the relationships between objects cannot be constructed or modified by the network.

Substantial research has been done on producing output when given graph-structured input (Kashima et al., 2003; Shervashidze et al., 2011; Perozzi et al., 2014; Bruna et al., 2013; Duvenaud et al., 2015). Of particular relevance to this work are Graph Neural Networks (Gori et al., 2005; Scarselli et al., 2009), or GNNs, which extend recursive neural networks by assigning states to each node in a graph based on the states of adjacent nodes. Recently Li et al. (2016) have modified GNNs to use gated state updates and to produce output sequences. The resulting networks, called GG-NNs and GGS-NNs, are successful at solving a variety of tasks with graph-structured input.

Figure 6: Diagram of one example from the automaton task, along with a graphical representation of the automaton state after the fourth simulate command (italicized).

The current work further builds upon GG-NNs and GGS-NNs by allowing graph-structured intermediate representations, as well as graph-structured outputs. This is accomplished using a more flexible graph definition, along with a set of graph transformations which take a graph and other information as input and produce a modified version of the graph.
This work also introduces the Gated Graph Transformer Neural Network model (GGT-NN), which combines these transformations with a recurrent input model to incrementally construct a graph given natural language input, and can either produce a final graph representing its current state, or use the graph to produce natural language output.

Multiple recent architectures have included differentiable internal states. Memory Networks, as described in Weston et al. (2014), and the fully differentiable end-to-end memory networks, described in Sukhbaatar et al. (2015), both utilize a differentiable long-term memory component, consisting of a set of memories that are produced by encoding the input sentences. To answer a query, an attention mechanism is used to select a subset of these memories, and the resulting memories are processed to produce the desired output. Differentiable Neural Computers (DNCs), described in Graves et al. (2016), interact with a fixed-size memory using a set of read and write "heads", which can be moved within the memory either by searching for particular content or by following temporal "links of association" that track the order in which data was written.

Extending GG-NNs in this way opens up a wide variety of applications. Since many types of data can be naturally expressed as a graph, it is possible to train a GGT-NN model to manipulate a meaningful graphical internal state. In this paper I demonstrate the GGT-NN model on the bAbI task dataset, which contains a set of stories about the state of the world. By encoding this state as a graph and providing these graphs to the model at training time, a GGT-NN model can be trained to construct the correct graph from the input sentences and then answer questions based on this internal graph. I also demonstrate that this architecture can learn complex update rules by training it to model a simple 1D cellular automaton and arbitrary 4-state Turing machines. This requires the network to learn how to transform its internal state based on the rules of each task.

Memory networks and DNCs share with the GGT-NN model the ability to iteratively construct an internal state based on textual input, and use that internal state to answer questions about the underlying structured data. However, in these models, the structure of the internal state is implicit: although the network can store and work with structured data, the actual memory consists of a set of vectors that cannot be easily interpreted, except by monitoring the network access patterns. The GGT-NN model, on the other hand, explicitly models the internal state as a graph with labeled nodes and edges.

Figure 7: Diagram of an example from the Turing machine task, with a graphical representation of the machine state after the second run command (italicized).
This allows the produced graph to be extracted, visualized, and potentially used in downstream applications that require graph-structured input.

Hierarchical Attentive Memory (HAM) is a memory-based architecture that consists of a binary tree built on top of an input sequence (Andrychowicz & Kurach, 2016). A recurrent controller accesses the HAM module by performing a top-down search through the tree, at each stage choosing to attend to either the left or right subtrees. Once this process reaches a leaf, the value of the leaf is provided to the controller to use in predicting the next output, and this leaf's value can be updated with a new value. This architecture is especially suited toward sequence-based tasks, and has been shown to generalize to longer sequences very efficiently due to the tree structure. However, it is unclear whether a HAM module would work well with non-sequential structured data, since the tree structure is fixed by the network.

An example of a graph produced from the bAbI tasks is given in Figure 5.

The cellular automaton task was mapped to graphical format as follows: Nodes have 5 types: zero, one, init-cell, left-cell, and right-cell. Edges have 2 types: value, and next-r. There is always exactly one "zero" node and one "one" node, and all of the cell nodes form a linked list, with a "value" edge connecting to either zero or one, and a "next-r" edge pointing to the next cell to the right (or no edge for the rightmost cell).

Figure 1: Diagram of the differentiable encoding of a graphical structure, as described in Section 3. On the left, the desired graph we wish to represent, in which there are 6 node types (shown as blue, purple, red, orange, green, and yellow) and two edge types (shown as blue/solid and red/dashed). Node 3 and the edge between nodes 6 and 7 have a low strength. On the right, depictions of the node and edge matrices: annotations, strengths, state, and connectivity correspond to x_v, s_v, h_v, and C, respectively. Saturation represents the value in each cell, where white represents 0, and fully saturated represents 1. Note that each node's annotation only has a single nonzero entry, corresponding to each node having a single well-defined type, with the exception of node 3, which has an annotation that does not correspond to a single type. State vectors are shaded arbitrarily to indicate that they can store network-determined data. The edge connectivity matrix C is three-dimensional, indicated by stacking the blue-edge cell on top of the red-edge cell for a given source-destination pair. Also notice the low strength for node 3 in the strength vector and for the edge between node 6 and node 7 in the connectivity matrix.

One advantage of the GGT-NN model over existing works is that it can process data in a distributed fashion. Each node independently processes its surroundings, which can be beneficial for complex tasks such as pathfinding on a graph. This is in contrast to memory networks, DNCs, and HAM modules, which are restricted to processing only a fixed number of locations in a given timestep.
On the other hand, the distributed nature of the GGT-NN model means that it is less time and space efficient than these other networks. Since every node can communicate with every other node, the time and space required to run a GGT-NN step scales quadratically with the size of the input. A DNC or memory network, on the other hand, either scales linearly (since it attends to all stored data or memories) or is constant (if restricted to a fixed-size memory), and a HAM module scales logarithmically (due to the tree structure).

The GGT-NN architecture has a few advantages over the architectures described in existing works. In contrast to other approaches to working with structured data, GGT-NNs are designed to work with unstructured input, and are able to modify a graphical structure based on the input. And in contrast to memory networks or DNCs, the internal state of the network is explicitly graph structured, and complex computations can be distributed across the nodes of the graph.

One downside of the current model is that the time and space required to train the model increase very quickly as the complexity of the task increases, which limits the model's applicability. It would be very advantageous to develop optimizations that would allow the model to train faster and with smaller space requirements, such as using sparse edge connections, or only processing some subset of the nodes at each timestep. Another promising direction of future work is in reducing the level of supervision needed to obtain meaningful graphs, for example by combining a few examples that have full graph-level supervision with a larger set of examples that do not have graph-level information, or using additional regularization to enable the GGT-NN model to be trained without any graph information.

At the start of each training example, there are 13 timesteps with input of the form "init X", where X is 0 or 1. These timesteps indicate the first 13 initial cells. Afterward, there are 7 "simulate" inputs. At each of these timesteps, one new left-cell node is added on the left, one new right-cell node is added on the right, and then all cells update their value according to the Rule 30 update rules.

An example of the graphical format for the cellular automaton task is given in Figure 6.

For the Turing machine task, nodes were assigned to 8 types: state-A, state-B, state-C, state-D, head, cell, 0, and 1. Edges have 16 types: head-cell, next-left, head-state, value, and 12 types of the form rule-R-W-D, where R is the symbol read (0 or 1), W is the symbol written (0 or 1), and D is the direction to move afterward (Left, Right, or None). State nodes are connected with rule edges, which together specify the rules governing the Turing machine. Cell nodes are connected to adjacent cells with next-left edges, and to the symbol on the tape with value edges. Finally, the head node is connected to the current state with a head-state edge, and to the current cell of the head with a head-cell edge.

"}, {"section_index": "3", "section_name": "2 BACKGROUND", "section_text": "Gated Recurrent Units (GRU) are a type of recurrent network cell introduced by Cho et al. (2014). Each unit uses a reset gate r and an update gate z, and updates according to

r(t) = σ(W_r x(t) + U_r h(t−1) + b_r)
z(t) = σ(W_z x(t) + U_z h(t−1) + b_z)
h̃(t) = φ(W x(t) + U (r(t) ⊙ h(t−1)) + b)
h(t) = z(t) ⊙ h(t−1) + (1 − z(t)) ⊙ h̃(t)

where σ is the logistic sigmoid function, φ is an activation function (here tanh is used), x(t) is the input vector at timestep t, h(t) is the hidden output vector at timestep t, and W, U, W_r, U_r, W_z, U_z, b, b_r, and b_z are learned weights and biases. Note that ⊙ denotes elementwise multiplication.
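For reference, a minimal numpy sketch of the GRU update above (weight shapes are illustrative and the weights are passed in as plain arrays; this is not the paper's Theano implementation):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, p):
    """One GRU update following the equations above; p is a dict of
    weights W, U, Wr, Ur, Wz, Uz and biases b, br, bz."""
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h_prev + p["br"])
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h_prev + p["bz"])
    h_tilde = np.tanh(p["W"] @ x + p["U"] @ (r * h_prev) + p["b"])
    # This paper writes the convex combination with z gating the
    # previous state: h(t) = z * h(t-1) + (1 - z) * h_tilde(t).
    return z * h_prev + (1.0 - z) * h_tilde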
"}, {"section_index": "4", "section_name": "2.2 GG-NN AND GGS-NN", "section_text": "The Gated Graph Neural Network (GG-NN) is a form of graphical neural network model described by Li et al. (2016). In a GG-NN, a graph G = (V, E) consists of a set V of nodes v with unique values and a set E of directed edges e = (v, v′) ∈ V × V oriented from v to v′. Each node has an annotation x_v ∈ R^N and a hidden state h_v ∈ R^D, and each edge has a type y_e ∈ {1, …, M}.

GG-NNs operate by first initializing the state h_v of each node to correspond to the annotation x_v. Then, a series of propagation steps occur. In each step, information is transferred between nodes across the edges, and the types of edge determine what information is sent. Each node sums the input it receives from all adjacent nodes, and uses that to update its own internal state, in the same manner as a GRU cell. Finally, the states of all nodes are used either to create a graph-level aggregate output, or to classify each individual node.

GGS-NNs extend GG-NNs by performing a large number of propagation-output cycles. At each stage, two versions of the GG-NN propagation process are run. The first is used to predict an output for that timestep, and the second is used to update the annotations of the nodes for the next timestep. This allows GGS-NNs to predict a sequence of outputs from a single graph.

The model described in Section 4 conditions the output of the model on the final graph produced by the network. This is ideal when the graph represents all of the necessary knowledge for solving the task. However, it may also be desirable for each graph to represent a subset of knowledge corresponding to a particular time, and for the output to be based on the sequence of graphs produced. For instance, in the third bAbI task (which requires reasoning about the temporal sequence of events), each graph could represent the state of the world at that particular time, instead of representing the full sequence of events prior to that time. In Appendix C, Section C.1, I describe a transformation to the tasks which allows all information to be contained in the graph. But this adds complexity to the graphical structure. If it were possible for the model to take into account the full sequence of graphs, instead of just the final one, we could maintain the simplicity of the graph transformation.

To this end, I present an extension of the GGT-NN model that can produce output using the full graphical sequence. In the extended model, the graphical output of the network after each input sentence is saved for later use. Then, when processing the query, the same set of query transformations are applied to every intermediate graph, producing a sequence of representation vectors h_k^answer. These are then combined into a final summary representation vector h_summary^answer using a recurrent network such as a GRU layer, from which the output can be produced. The modified pseudocode for this is shown in Algorithm 2.

"}, {"section_index": "5", "section_name": "ACKNOWLEDGMENTS", "section_text": "I would like to thank Harvey Mudd College for computing resources.
I would also like to thank the developers of the Theano library, which I used to run my experiments. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575.

In a few of the tasks, specific entities had multi-word representations. While this works for normal input, it makes it difficult to do direct reference, since direct reference is checked on an individual word level. These tasks were modified slightly so that the entities are referred to with single words (e.g. "red_square" instead of "red square").

The results presented here show that GGT-NNs are able to successfully model a wide variety of tasks using graph-structured states and potentially could be useful in solving many other types of problems. The specific GGT-NN model described here can be used as-is for tasks consisting of a sequence of input sentences and graphs, optionally followed by a query. In addition, due to the modular nature of GGT-NNs, it is possible to reconfigure the order of the transformations to produce a model suitable for a different task.

At the start of each training example, each of the rules for the Turing machine are given, in the form "rule state-X R W state-Y D". Next, the initial state is given in the format "start state-X", and the initial contents of the tape (of length 4) are given sequentially in the format "input symbol-X", with the position for the head to start marked by "input symbol-X head". Finally, there are 6 "run" inputs, after each of which the head node updates its edges and the cell at the head updates its value according to the rules of the Turing machine. If the head leaves the left or right of the tape, a new node is introduced there.

There are exciting potential uses for the GGT-NN model. One particularly interesting application would be using GGT-NNs to extract graph-structured information from unstructured textual descriptions. More generally, the graph transformations provided here may allow machine learning to interoperate more flexibly with other data sources and processes with structured inputs and outputs.

Task                        Direct reference  No direct reference
                            Accuracy          Accuracy
3 - Three Supporting Facts  90.3%             65.4%
5 - Three Arg. Relations    89.8%             74.2%

"}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.

Figure 2: Summary of the graph transformations. Input and output are represented as gray squares. a) Node addition (T_add), where the input is used by a recurrent network (white box) to produce new nodes, of varying annotations and strengths. b) Node state update (T_h), where each node receives input (dashed line) and updates its internal state. c) Edge update (T_C), where each existing edge (colored) and potential edge (dashed) is added or removed according to the input and states of the adjacent nodes (depicted as solid arrows meeting at circles on each edge). d) Propagation (T_prop), where nodes exchange information along the current edges, and update their states. e) Aggregation (T_repr), where a single representation is created using an attention mechanism, by summing information from all nodes weighted by relevance (with weights shown by saturation of arrows).
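For concreteness, a Turing machine training example in the input format described above ("rule …", "start …", "input …", "run") could be rendered along these lines; a hypothetical helper, not code from the paper, and the exact direction tokens are an assumption:

def turing_task_lines(rules, start_state, tape, head_pos, n_run=6):
    """Render one Turing machine training example as input lines.
    rules: list of (state, read, write, next_state, direction) tuples,
    e.g. ("state-A", 0, 1, "state-B", "right"); tape: list of 0/1."""
    lines = [f"rule {s} {r} {w} {s2} {d}" for (s, r, w, s2, d) in rules]
    lines.append(f"start {start_state}")
    for i, sym in enumerate(tape):
        head = " head" if i == head_pos else ""
        lines.append(f"input symbol-{sym}{head}")
    lines.extend(["run"] * n_run)
    return lines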
Table 4: Performance of the sequence-extended GGT-NN on the two bAbI tasks with a temporal component.

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Algorithm 2 Sequence-Extended Pseudocode
G_0 ← empty graph                          ▷ Initialize G to an empty graph
for k from 1 to K do                       ▷ Process each sentence
  G_k ← T_h(G_{k−1}, i^(k))
  if direct reference enabled then
    G_k ← T_h,direct(G_k, D^(k))
  end if
  if intermediate propagation enabled then
    G_k ← T_prop(G_k)
  end if
  h_agg^(k) ← T_repr(G_k)
  G_k ← T_add(G_k, [i^(k) h_agg^(k)])
  G_k ← T_C(G_k, i^(k))
end for
h_summary^answer ← 0                       ▷ Initialize h_summary^answer to the zero vector
for k from 1 to K do                       ▷ Process the query for each graph
  G_k ← T_h^query(G_k, i^query)
  if direct reference enabled then
    G_k ← T_h,direct^query(G_k, D^query)
  end if
  G_k ← T_prop^query(G_k)
  h_k^answer ← T_repr^query(G_k)
  h_summary^answer ← GRU(h_k^answer, h_summary^answer)
end for
return f_output(h_summary^answer)

David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2224-2232, 2015.

I evaluated the extended model on bAbI tasks 3 and 5, the two tasks which asked questions about a sequence of events. (Note that although Task 14 also involves a sequence of events, it uses a set of discrete named time periods and so is not applicable to this modification.) The model was trained on each of these tasks, without the extra record and history nodes used to store the sequence, instead simply using the sequence of graphs to encode the relevant information. Due to the simpler graphs produced, intermediate propagation was also disabled. Results from training the model are shown in Table 4. The accuracy of the extended model appears to be slightly inferior to the original model in general, although the extended direct-reference model of task 5 performs slightly better than its original counterpart. One possible explanation for the inferiority of the extended model is that the increased amount of query processing made the model more likely to overfit on the training data. Even so, the extended model shows promise, and could be advantageous for modeling complex tasks for which preprocessing the graph would be impractical.

Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969, 2016.

Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. Marginalized kernels between labeled graphs. In ICML, volume 3, pp. 321-328, 2003.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. ICLR, 2016.

"}, {"section_index": "7", "section_name": "3 DIFFERENTIABLE GRAPH TRANSFORMATIONS", "section_text": "In this section, I describe some modifications to the graph structure to make it fully differentiable, and then propose a set of transformations which can be applied to a graph structure in order to transform it. In particular, I redefine a graph G = (V, C) as a set V of nodes v and a connectivity matrix C ∈ R^(|V|×|V|×Y), where Y is the number of possible edge types. As before, each node has an annotation x_v ∈ R^N and a hidden state h_v ∈ R^D. However, there is an additional constraint that each element of x_v lie in [0, 1] and that the elements sum to at most 1, so that each element can be interpreted as the level of belief that the node has the corresponding one of the N possible node types. Each node also has a strength s_v ∈ [0, 1]. This represents the level of belief that node v should exist, where s_v = 1 means the node exists, and s_v = 0 indicates that the node should not exist and thus should be ignored.

Similarly, elements of C are constrained to the range [0, 1], and thus one can interpret C_{v,v′,y} as the level of belief that there should be a directed edge of type y from v to v′. (Note that it is possible for there to be edges of multiple types between the same two nodes v and v′, i.e. it is possible for C_{v,v′,y} = C_{v,v′,y′} = 1 where y ≠ y′.) Figure 1 shows the values of x_v, s_v, h_v, and C corresponding to a particular graphical structure.
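As an illustration of this differentiable graph state, the four components can be held as plain arrays (a sketch with hypothetical sizes, not the paper's implementation):

import numpy as np

class GraphState:
    """Differentiable graph: annotations x_v (soft node types),
    strengths s_v, hidden states h_v, and connectivity C."""
    def __init__(self, n_nodes, n_types, state_dim, n_edge_types):
        self.x = np.zeros((n_nodes, n_types))   # rows in [0,1], sum <= 1
        self.s = np.zeros(n_nodes)              # node strengths in [0,1]
        self.h = np.zeros((n_nodes, state_dim))
        self.C = np.zeros((n_nodes, n_nodes, n_edge_types))  # edge beliefs

    def add_node(self, node_type, strength=1.0):
        """Append a node of a definite type (one-hot annotation)."""
        onehot = np.zeros(self.x.shape[1])
        onehot[node_type] = 1.0
        self.x = np.vstack([self.x, onehot])
        self.s = np.append(self.s, strength)
        self.h = np.vstack([self.h, np.zeros(self.h.shape[1])])
        n = self.x.shape[0]
        C = np.zeros((n, n, self.C.shape[2]))
        C[:n - 1, :n - 1] = self.C
        self.C = C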
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009.

"}, {"section_index": "8", "section_name": "4 GATED GRAPH TRANSFORMER NEURAL NETWORK (GGT-NN)", "section_text": "In this section I introduce the Gated Graph Transformer Neural Network (GGT-NN), which is constructed by combining a series of these transformations. Depending on the configuration of the transformations, a GGT-NN can take textual or graph-structured input, and produce textual or graph-

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440-2448, 2015.

a) Node addition (T_add), which modifies a graph by adding new nodes and assigning them annotations x_v and strengths s_v based on an input vector.
b) Node state update (T_h), which modifies the internal state of each node using an input vector (similar to a GRU update step). Optionally, different input can be given to nodes of each type, based on direct textual references to specific node types. This version is called a direct reference update (T_h,direct).
c) Edge update (T_C), which modifies the edges between each pair of nodes based on the internal states of the two nodes and an external input vector.
d) Propagation (T_prop), which allows nodes to trade information across the existing edges and then update their internal states based on the information received.
e) Aggregation (T_repr), which uses an attention mechanism to select relevant nodes and then generates a graph-level output.

Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.

Each transformation has its own trainable parameters. Together, these transformations can be combined to process a graph in complex ways. An overview of these operations is shown in Figure 2. For details about the implementation of each of these transformations, see Appendix B.

Results from training the model are shown in Table 4. The accuracy of the extended model appears to be slightly inferior to the original model in general, although the extended direct-reference model of task 5 performs slightly better than its original counterpart. One possible explanation for the inferiority of the extended model is that the increased amount of query processing made the model more likely to overfit on the training data. Even so, the extended model shows promise, and could be advantageous for modeling complex tasks for which preprocessing the graph would be impractical.

Algorithm 1 Graph Transformation Pseudocode

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merrienboer, Armand Joulin, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. ICLR, 2016.

structured output. Here I describe one particular GGT-NN configuration, designed to build and modify a graph based on a sequence of input sentences, and then produce an answer to a query.

Stephen Wolfram. A new kind of science, volume 5. Wolfram Media, Champaign, 2002.

When run, the model performs the following: For each sentence k, each word is converted to a one-hot vector w_l^(k), and the sequence of words (of length L) is passed through a GRU layer to produce a sequence of partial-sentence representation vectors p_l^(k). The full sentence representation vector i^(k) is initialized to the last partial representation vector p_L^(k). Furthermore, a direct-reference input matrix D^(k) is defined such that its n-th row D_n^(k) is the sum of the partial representation vectors p_l^(k) corresponding to the words that directly refer to node type n. This acts like an attention mechanism, by accumulating the partial representation vectors for the words that directly reference each type, and masking out the vectors corresponding to other words.
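A minimal sketch of this input encoding is given below, assuming numpy. The weight bundle P and the helpers gru_step and encode_sentence are hypothetical names of this sketch; refers_to[l] is assumed to list the node types that word l directly references.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(w, h, P):
    """One GRU step over a one-hot word w; P holds matrices Wz, Uz, Wr, Ur, Wh, Uh."""
    z = sigmoid(P["Wz"] @ w + P["Uz"] @ h)
    r = sigmoid(P["Wr"] @ w + P["Ur"] @ h)
    h_tilde = np.tanh(P["Wh"] @ w + P["Uh"] @ (r * h))
    return (1 - z) * h + z * h_tilde

def encode_sentence(one_hot_words, refers_to, num_types, P, dim):
    """Return the sentence vector i_k (the last partial state p_L) and the
    direct-reference matrix D, whose row n sums the partial states of the
    words that directly reference node type n."""
    h = np.zeros(dim)
    D = np.zeros((num_types, dim))
    for l, w in enumerate(one_hot_words):
        h = gru_step(w, h, P)           # partial-sentence representation p_l
        for n in refers_to[l]:
            D[n] += h
    return h, D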
Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2397-2406, 2016.

Next, a series of graph transformations are applied, as depicted in Algorithm 1. Depending on the task, direct reference updates and per-sentence propagation can be enabled or disabled. The output function f_output will depend on the specific type of answer desired. If the answer is a single word, f_output can be a multilayer perceptron followed by a softmax operation. If the answer is a sequence of words, f_output can use a recurrent network (such as a GRU) to produce a sequence of outputs. Each appearance of a transformation in Algorithm 1 denotes a distinct instance of that transformation, with different learned weights.

Since the processing of the input and all of the graph transformations are differentiable, at this point the network output can be compared with the correct output for that query and used to update the network parameters, including both the GRU parameters used when processing the input and the internal weights associated with each transformation.

"}, {"section_index": "9", "section_name": "4.1 SUPERVISION", "section_text": "As with many supervised models, one can evaluate the loss based on the likelihood of producing an incorrect answer, and then minimize the loss by backpropagation. However, based on initial experiments, the model appeared to require additional supervision to extract meaningful graph-structured data. To provide this additional supervision, I found it beneficial to provide the correct graph at each timestep and train the network to produce that graph. This occurs in two stages, first when new nodes are proposed, and then when edges are adjusted. For the edge adjustment, the edge loss between a correct edge matrix C* and the computed edge matrix C is given by

L_edge = -C* ⊙ ln(C) - (1 - C*) ⊙ ln(1 - C), summed over all entries.

The node adjustment is slightly more complex. Multiple nodes are added in each timestep, but the order of those nodes is arbitrary, and only their existence is important. Thus it should be possible for the network to determine the optimal ordering of the nodes. In fact, this is important because there is no guarantee that the nodes will be ordered consistently in the training data.
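The edge loss above is an elementwise binary cross-entropy over the connectivity tensor. A minimal sketch, assuming numpy; the clipping constant eps is an addition of this sketch for numerical safety and is not part of the stated loss.

import numpy as np

def edge_loss(c_true, c_pred, eps=1e-8):
    """Binary cross-entropy between the correct edge matrix C* (c_true) and
    the computed connectivity C (c_pred), summed over all (v, v', y) entries."""
    c_pred = np.clip(c_pred, eps, 1.0 - eps)   # avoid log(0); sketch-only safeguard
    return float(np.sum(-c_true * np.log(c_pred)
                        - (1.0 - c_true) * np.log(1.0 - c_pred)))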
Vinyals et al. (2016) demonstrate a simple method for training a network to output unordered sets: the network produces a sequence of outputs, and these outputs are compared with the closest order-

"}, {"section_index": "10", "section_name": "APPENDIX A BACKGROUND ON GG-NNS AND GGS-NNS", "section_text": "ing of the training data, i.e., the ordering of the training data which would produce the smallest loss when compared with the network output. Vinyals et al. show that when using this method, the network arbitrarily chooses an ordering which may not be the optimal ordering for the task. However, in this case any ordering should be sufficient, and I found the arbitrary orderings selected in this way to work well in practice. In particular, letting s*_π(v) and x*_π(v) denote the correct strength and annotation of node v under ordering π, the loss becomes

This section gives additional background on the implementation of GG-NNs and GGS-NNs, described by Li et al. (2016).

L_node = -max_π Σ_{v=|V_old|+1}^{|V_new|} [ s*_π(v) ln(s_v) + (1 - s*_π(v)) ln(1 - s_v) + x*_π(v) · ln(x_v) ]

Algorithm 1 Graph Transformation Pseudocode
1: G <- empty graph
2: for k from 1 to K do
3:     G <- T_h(G, i^(k))
4:     if direct reference enabled then
5:         G <- T_h,direct(G, D^(k))
6:     end if
7:     if intermediate propagation enabled then
8:         G <- T_prop(G)
9:     end if
10:    h_add <- T_repr(G)
11:    G <- T_add(G, [i^(k) h_add])
12:    G <- T_C(G, i^(k))
13: end for
14: G <- T_h^query(G, i_query)
15: if direct reference enabled then
16:     G <- T_h,direct^query(G, D_query)
17: end if
18: G <- T_prop^query(G)
19: h_answer <- T_repr^query(G)
20: return f_output(h_answer)

z_v^(t) = σ(W^z a_v^(t) + U^z h_v^(t-1)),  r_v^(t) = σ(W^r a_v^(t) + U^r h_v^(t-1)),
h_v^(t) = z_v^(t) ⊙ h_v^(t-1) + (1 - z_v^(t)) ⊙ tanh(W a_v^(t) + U (r_v^(t) ⊙ h_v^(t-1)))

"}, {"section_index": "11", "section_name": "4.2 OTHER TRANSFORMATION CONFIGURATIONS", "section_text": "The structure described in Algorithm 1 is designed for question-answering tasks. However, due to the composability of the individual graph transformations, other configurations could be used to solve other tasks that operate on structured data.

"}, {"section_index": "12", "section_name": "5.1 BABI TASKS", "section_text": "a_v^(t) = Σ_{v'∈V} Σ_{y=1}^{M} [ S_edge(v, v', y) P_y h_{v'}^(t-1) + S_edge(v', v, y) P'_y h_{v'}^(t-1) ]   (2)

I evaluated the GGT-NN model on the bAbI tasks, a set of simple natural-language tasks, where each task is structured as a sequence of sentences followed by a query (Weston et al., 2016). The generation procedure for the bAbI tasks includes a "Knowledge" object for each sentence, representing the current state of knowledge after that sentence. I exposed this knowledge object in graph format and used this to train a GGT-NN in supervised mode. The knowledge object provides names for each node type, and direct reference was performed based on these names: if a word in the sentence matched a node type name, it was parsed as a direct reference to all nodes of that type. For details on this graphical format, see Appendix C.

where S_edge(v, v', y) is 1 if e = (v, v') ∈ E and y_e = y, and 0 otherwise.

"}, {"section_index": "13", "section_name": "5.1.1 ANALYSIS AND RESULTS", "section_text": "Gated Graph Sequence Neural Networks (GGS-NN) are an extension of GG-NNs to sequential output o^(1), ..., o^(K). At each output step k, the annotation matrix is given by X^(k) ∈ R^(|V| x N). A GG-NN F_o is trained to predict an output sequence o^(k) from X^(k), and another GG-NN F_X is trained to predict X^(k+1) from X^(k). Prediction of the output at
each step is performed as in a normal GG-NN, and prediction of X^(k+1) from the set of all final hidden states H^(k,T) (after T propagation steps of F_X) occurs according to the equation X^(k+1) = f_X(H^(k,T)), where f_X denotes the learned output network of F_X.

Results are shown in Tables 1 and 2. The GGT-NN model was able to reach 95% accuracy in all but one of the tasks, and reached 100% accuracy in eleven of them (see Table 2). Additionally, for fourteen of the tasks, the model was able to reach 95% accuracy using 500 or fewer of the 1000 training examples (see Table 1).

The only task that the GGT-NN was unable to solve with 95% accuracy was task 17 (Positional Reasoning), for which the model was not able to attain a high accuracy. Task 17 has a larger number

Recall from section 2.2 that GG-NNs represent a graph G = (V, E) as a set V of nodes v with unique values 1, ..., |V| and a set E of directed edges e = (v, v') ∈ V x V oriented from v to v'. Each node has an annotation x_v ∈ R^N and a hidden state h_v ∈ R^D. Additionally, each edge has a type y_e ∈ {1, ..., M}.

Initially, h_v^(0) is set to the annotation x_v padded with zeros. Then nodes exchange information for some fixed number of timesteps T according to the propagation model.

Here a_v^(t) represents the information received by each node from its neighbors in the graph, and the matrix A ∈ R^(D|V| x 2D|V|) has a specific structure that determines how nodes communicate. The first half of A, denoted A^(out) ∈ R^(D|V| x D|V|), corresponds to outgoing edges, whereas the second half A^(in) ∈ R^(D|V| x D|V|) corresponds to incoming edges.

For instance, if a task consists of tracking relationships between a fixed set of objects, one could construct a version of the model that does not use the new-nodes transformation (T_add), but instead only modifies edges. If the task was to extract information from an existing graph, a structure similar to the GGS-NNs could be built by using only the propagation and aggregation transformations. If the task was to construct a graph based on textual input, the query processing steps could be omitted, and instead the final graph could be returned for processing. And if information should be gathered from a sequence of graphs instead of from a single graph, the query processing steps could be modified to run in parallel on the full sequence of graphs and extract information from each graph. This last modification is demonstrated in Appendix D.

h_G = tanh( Σ_{v∈V} σ(i(h_v, x_v)) ⊙ tanh(j(h_v, x_v)) )   (3)

I trained two versions of the GGT-NN model for each task: one with and one without direct reference. Tasks 3 and 5, which involve a complex temporal component, were trained with intermediate propagation, whereas all of the other tasks were not because the structure of the tasks made such complexity unnecessary. Most task models were configured to output a single word, but task 19 (pathfinding) used a GRU to output multiple words, and task 8 (listing) was configured to output a strength for each possible word to allow multiple words to be selected without having to consider ordering.

(Figure 4 panels: Node addition, Node state update, Edge update, Propagation, Aggregation.)

Table 1: Number of training examples needed before the GGT-NN model could attain 5% error on each of the bAbI tasks. Experiments were run with 50, 100, 250, 500, and 1000 examples. "GGT-NN + direct ref" denotes the performance of the model with direct reference, and "GGT-NN" denotes the performance of the model without direct reference. Dashes indicate that the model was unable to reach the desired accuracy with 1000 examples.
Figure 4: Diagram of the operations performed for each class of transformation. Graph state is shown in the format given by Figure 1. Input and output are shown as gray boxes. Black dots represent concatenation, and + and x represent addition and multiplication, respectively. 1 - (·) represents taking the input value and subtracting it from 1. Note that for simplicity, operations are only shown for single nodes or edges, although the operations act on all nodes and edges in parallel. In particular, the propagation section focuses on information sent and received by the first node only. In that section the strengths of the edges in the connectivity matrix determine what information is sent to each of the other nodes. Light gray connections indicate the value zero, corresponding to situations where a given edge is not present.

"}, {"section_index": "14", "section_name": "APPENDIX B GRAPH TRANSFORMATION DETAILS", "section_text": "In this section I describe in detail the implementations of each type of differentiable graph transformation.¹ A diagram of the implementation of each transformation is shown in Figure 4. Note that it is natural to think of these transformations as operating on a single graphical state, and each modifying the state in place. However, in the technical descriptions of these transformations, the operations will be described as functions that take in an old graph and produce a new one, similarly to unrolling a recurrent network over time.

"}, {"section_index": "15", "section_name": "B.1 NODE ADDITION", "section_text": "The node addition transformation T_add : Γ x R^α -> Γ takes as input a graph G and an input vector a ∈ R^α, and produces a graph G' with additional nodes. The annotation and strength of each new node is determined by a function f_add : R^α x R^ς -> R x R^N x R^ς, where α is the length of the input vector, ς is the length of the internal state vector, and as before N is the number of node types. The new nodes are then produced according to

(s_{|V_G|+i}, x_{|V_G|+i}, h̃_i) = f_add(a, h̃_{i-1})

Table 2: Error rates of various models on the bAbI tasks. Bold indicates ≤ 5% error. For descriptions of each of the tasks, see Table 1. "GGT-NN + direct ref" denotes the GGT-NN model with direct reference, and "GGT-NN" denotes the version without direct reference. See text for details regarding the models used for comparison. Results from LSTM and MemNN reproduced from Weston et al. (2016). Results from other existing models reproduced from Henaff et al. (2016).

Task                          | GGT-NN + direct ref | GGT-NN
1 - Single Supporting Fact    | 100                 | 1000
2 - Two Supporting Facts      | 250                 | -
3 - Three Supporting Facts    | 1000                | -
4 - Two Arg. Relations        | 1000                | 1000
5 - Three Arg. Relations      | 500                 | -
6 - Yes/No Questions          | 100                 | -
7 - Counting                  | 250                 | -
8 - Lists/Sets                | 250                 | 1000
9 - Simple Negation           | 250                 | -
10 - Indefinite Knowledge     | 1000                | -
11 - Basic Coreference        | 100                 | 1000
12 - Conjunction              | 500                 | 1000
13 - Compound Coref.          | 100                 | 1000
14 - Time Reasoning           | 1000                | -
15 - Basic Deduction          | 500                 | 500
16 - Basic Induction          | 100                 | 500
17 - Positional Reasoning     | -                   | -
18 - Size Reasoning           | 1000                | -
19 - Path Finding             | 500                 | -
20 - Agent's Motivations      |
250                 | 250

Columns, left to right. 1,000 examples: GGT-NN + direct ref, GGT-NN, LSTM, MemNN, MemN2N, EntNet. 10,000 examples: NTM, D-NTM, MemN2N*, DNC, DMN+, EntNet.

Task 1:  0    | 0.7  | 50.0 | 0    | 0    | 0.7  | 31.5 | 4.4  | 0    | 0    | 0    | 0
Task 2:  0    | 5.7  | 80.0 | 0    | 8.3  | 56.4 | 54.5 | 27.5 | 0.3  | 0.4  | 0.3  | 0.1
Task 3:  1.3  | 12.0 | 80.0 | 0    | 40.3 | 69.7 | 43.9 | 71.3 | 2.1  | 1.8  | 1.1  | 4.1
Task 4:  1.2  | 2.2  | 39.0 | 0    | 2.8  | 1.4  | 0    | 0    | 0    | 0    | 0    | 0
Task 5:  1.6  | 10.9 | 30.0 | 2.0  | 13.1 | 4.6  | 0.8  | 1.7  | 0.8  | 0.8  | 0.5  | 0.3
Task 6:  0    | 7.7  | 52.0 | 0    | 7.6  | 30.0 | 17.1 | 1.5  | 0.1  | 0    | 0    | 0.2
Task 7:  0    | 5.6  | 51.0 | 15.0 | 17.3 | 22.3 | 17.8 | 6.0  | 2.0  | 0.6  | 2.4  | 0
Task 8:  0    | 3.3  | 55.0 | 9.0  | 10.0 | 19.2 | 13.8 | 1.7  | 0.9  | 0.3  | 0    | 0.5
Task 9:  0    | 11.6 | 36.0 | 0    | 13.2 | 31.5 | 16.4 | 0.6  | 0.3  | 0.2  | 0    | 0.1
Task 10: 3.4  | 28.6 | 56.0 | 2.0  | 15.1 | 15.6 | 16.6 | 19.8 | 0    | 0.2  | 0    | 0.6
Task 11: 0    | 0.2  | 28.0 | 0    | 0.9  | 8.0  | 15.2 | 0    | 0    | 0    | 0    | 0.3
Task 12: 0.1  | 0.7  | 26.0 | 0    | 0.2  | 0.8  | 8.9  | 6.2  | 0    | 0    | 0.2  | 0
Task 13: 0    | 0.8  | 6.0  | 0    | 0.4  | 9.0  | 7.4  | 7.5  | 0    | 0    | 0    | 1.3
Task 14: 2.2  | 55.1 | 73.0 | 1.0  | 1.7  | 62.9 | 24.2 | 17.5 | 0.2  | 0.4  | 0.2  | 0
Task 15: 0.9  | 0    | 79.0 | 0    | 0    | 57.8 | 47.0 | 0    | 0    | 0    | 0    | 0
Task 16: 0    | 0    | 77.0 | 0    | 1.3  | 53.2 | 53.6 | 49.6 | 51.8 | 55.1 | 45.3 | 0.2
Task 17: 34.5 | 48.0 | 49.0 | 35.0 | 51.0 | 46.4 | 25.5 | 1.2  | 18.6 | 12.0 | 4.2  | 0.5
Task 18: 2.1  | 10.6 | 48.0 | 5.0  | 11.1 | 8.8  | 2.2  | 0.2  | 5.3  | 0.8  | 2.1  | 0.3
Task 19: 0    | 70.6 | 92.0 | 64.0 | 82.8 | 90.4 | 4.3  | 39.5 | 2.3  | 3.9  | 0    | 2.3
Task 20: 0    | 1.0  | 9.0  | 0    | 0    | 2.6  | 1.5  | 0    | 0    | 0    | 0    | 0

starting with h̃_0 initialized to some learned initial state, and recurrently computing s_v and x_v for each new node, up to some maximum number of nodes. Based on initial experiments, I found that implementing f_add as a GRU layer followed by 2 hidden tanh layers was effective, although other recurrent networks would likely be similarly effective. The node hidden states h_v are initialized to zero. The recurrence should be computed as many times as the maximum number of nodes that might be produced. The recurrent function f_add can learn to output s_v = 0 for some nodes to create fewer nodes, if necessary.

¹The code for each transformation, and for the GGT-NN model itself, is available at https://github.com/hexahedria/gated-graph-transformer-network

of possible entities than the other tasks: each entity consists of a color (chosen from five options) and a shape (chosen from four shapes), for a total of 20 unique entities that must be represented separately. Additionally, the stories are much shorter than those in other tasks (2 facts for each set of 8 questions). It is likely that these additional complexities caused the network performance to suffer.

For comparison, accuracy on the bAbI tasks is also included for a variety of existing approaches (see Table 2): a simple sequence-to-sequence LSTM model, as implemented in Weston et al. (2016), a modified Memory Network model (MemNN, Weston et al., 2016), End-To-End Memory Network (MemN2N, Sukhbaatar et al., 2015), Recurrent Entity Network (EntNet, Henaff et al., 2016), Neural Turing Machine (NTM, Graves et al., 2014), Dynamic NTM (D-NTM, Gulcehre et al., 2016), a larger version of the MemN2N model with weight tying and nonlinearity (MemN2N*, Sukhbaatar et al., 2015), Differentiable Neural Computer (DNC, Graves et al., 2016), and Dynamic Memory Network (DMN+, Xiong et al., 2016). Although the GGT-NN model was trained using only 1,000 training examples, results using 10,000 examples have also been reproduced here for comparison. Also, it is important to note that the GGT-NN and MemNN models were trained with strong supervision: the GGT-NN model was trained with full graph information, and the MemNN model was trained with information on which sentences were relevant to the query. All other models were trained end-to-end without additional supervision.
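The node addition recurrence of B.1 above can be sketched as a short loop, assuming numpy; f_add is passed in as a black box standing for the learned GRU-plus-tanh-layers network, and add_nodes is a hypothetical name of this sketch.

import numpy as np

def add_nodes(a, f_add, h0, max_new):
    """Recurrently propose up to max_new nodes from input vector a.
    f_add(a, h_prev) -> (s, x, h) is the learned recurrent function."""
    strengths, annotations, h = [], [], h0
    for _ in range(max_new):
        s, x, h = f_add(a, h)
        strengths.append(s)        # s near 0 effectively suppresses the node
        annotations.append(x)
    return np.array(strengths), np.array(annotations)

Because the loop always runs for max_new steps, the network controls the effective number of new nodes purely through the strengths it emits, which keeps the operation differentiable.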
"}, {"section_index": "16", "section_name": "B.2 NODE STATE UPDATE", "section_text": "The node state update transformation T_h : Γ x R^α -> Γ takes as input a graph G and an input vector a ∈ R^α, and produces a graph G' with updated node states. This is accomplished by performing a GRU-style update for each node, where the input is a concatenation of a and that node's annotation vector x_v, and the state is the node's hidden state, according to

r_v = σ(W^r [a x_v] + U^r h_v + b^r),  z_v = σ(W^z [a x_v] + U^z h_v + b^z),
h̃_v = tanh(W [a x_v] + U (r_v ⊙ h_v) + b),  h'_v = z_v ⊙ h_v + (1 - z_v) ⊙ h̃_v

Since the GGT-NN and MemNN models are both strongly supervised, it is interesting to note that each approach outperforms the other on a subset of the tasks. In particular, the GGT-NN model with direct reference attains a higher level of accuracy on the following tasks, with an improvement of 0.4-64% depending on the task: task 5 (0.4%), task 7 (15%), task 8 (9%), task 17 (0.5%), task 18 (2.9%), and task 19 (64%). This may indicate that a graphical representation is superior to a list of sentence memories for solving these tasks. On the other hand, the MemNN model outperforms the GGT-NN model (0.1-2.9% greater accuracy) on tasks 3, 4, 10, 12, 14, and 15.

For some tasks, performance can be improved by providing information to nodes of a particular type only. For instance, if the input is a sentence, and one word of that sentence directly refers to a node type (e.g., if nodes of type 1 represent Mary, and Mary appears in the sentence), it can be helpful to allow all nodes of type 1 to perform an update using this information. To accomplish this, T_h can be modified to take node types into account. (This modification is denoted T_h,direct.) Instead of a single vector a ∈ R^α, the direct-reference transformation takes in A ∈ R^(N x α), where A_n ∈ R^α is the input vector for nodes with type n. The update equations then become

r_v = σ(W^r [a_v x_v] + U^r h_v + b^r),  z_v = σ(W^z [a_v x_v] + U^z h_v + b^z),
h̃_v = tanh(W [a_v x_v] + U (r_v ⊙ h_v) + b),  h'_v = z_v ⊙ h_v + (1 - z_v) ⊙ h̃_v

where a_v = A_n for a node v of type n.

Of particular interest is the performance on task 19, the pathfinding task, for which the GGT-NN model with direct reference performs better than all but one of the other models (DMN+), and shows a large improvement over the performance of the MemNN model. This is reasonable, since pathfinding is a task that is naturally suited to a graphical representation. The shortest path between two nodes can be easily found by sending information across all paths away from one of the nodes in a distributed fashion, which the GGT-NN model allows. Note that the preexisting GGS-NN model (discussed in Section 2.2) was also able to successfully learn the pathfinding task, but required the input to be preprocessed into graphical form even when evaluating the model, and thus could not be directly evaluated on the textual form of any of the bAbI tasks (Li et al., 2016). The current results demonstrate that the proposed GGT-NN model is able to solve the pathfinding task when given textual input.

"}, {"section_index": "17", "section_name": "B.3 EDGE UPDATE", "section_text": "The edge update transformation T_C : Γ x R^α -> Γ takes a graph G and an input vector a ∈ R^α, and produces a graph G' with updated edges. For each pair of nodes (v, v'), the update equations are

c_{v,v'} = f_set(a, x_v, h_v, x_{v'}, h_{v'}),  r_{v,v'} = f_reset(a, x_v, h_v, x_{v'}, h_{v'}),
C'_{v,v'} = (1 - C_{v,v'}) ⊙ c_{v,v'} + C_{v,v'} ⊙ (1 - r_{v,v'})

The functions f_set, f_reset : R^α x R^(2N) x R^(2D) -> [0, 1]^Y are implemented as neural networks. (In my experiments, I used a simple 2-layer fully connected network.) c_{v,v',y} gives the level of belief in [0, 1] that an edge from v to v' of type y should be created if it does not exist, and r_{v,v',y} gives the level of belief in [0, 1] that an edge from v to v' of type y should be removed if it does. Setting both to zero results in no change for that edge, and setting both to 1 toggles the edge state.

Similarly, both variants of the GGT-NN model show improvement over many other models on task 16, the induction task. Solving the induction task requires being able to infer relationships based on similarities between entities. (One example from this task: Lily is a swan. Lily is white. Bernhard is green. Greg is a swan. What color is Greg? A: white.) In a graphical setting, this can be done by following a sequence of edges (Greg -> swan -> Lily -> white), and the performance of the GGT-NN model indicates that this task is particularly suited to such a representation.
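The edge update interpolation in B.3 has a simple closed form that is worth checking directly. A minimal sketch, assuming numpy; update_edges is a hypothetical name, and set_belief and reset_belief stand for the outputs of f_set and f_reset.

import numpy as np

def update_edges(C, set_belief, reset_belief):
    """Fuzzy edge update: c = r = 0 leaves an edge unchanged; c = r = 1 toggles it."""
    return (1.0 - C) * set_belief + C * (1.0 - reset_belief)

# Quick check of the two boundary behaviours described in the text:
C = np.array([0.0, 1.0])
assert np.allclose(update_edges(C, 0.0, 0.0), C)        # no change
assert np.allclose(update_edges(C, 1.0, 1.0), 1.0 - C)  # toggle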
In general, the GGT-NN model with direct reference performs better than the model without it. The model with direct reference reaches 95% accuracy on 19/20 of the bAbI tasks, while the model without direct reference reaches that level of accuracy on 9/20 of the tasks (see Table 2). Additionally, when compared to the direct-reference model, the model without direct reference requires more training examples in order to reach the accuracy threshold (see Table 1). This indicates that although the model can be used without direct reference, adding direct reference greatly improves the training of the model.

"}, {"section_index": "18", "section_name": "5.2 RULE DISCOVERY TASKS", "section_text": "To demonstrate the power of GGT-NN to model a wide variety of graph-based problems, I applied the GGT-NN to two additional tasks. In each task, a sequence of data structures were transformed into a graphical format, and the GGT-NN was tasked with predicting the data for the next timestep based on the current timestep. No additional information was provided as textual input; instead, the network was tasked with learning the rules governing the evolution of the graph structure over time.

Note that in order to use information from all of the existing nodes to produce the new nodes, the input to this transformation should include information provided by an aggregation transformation T_repr, described in section B.5.

The propagation transformation T_prop : Γ -> Γ takes a graph G = G^(0) and runs a series of T propagation steps (as in GG-NN), returning the resulting graph G' = G^(T). The GG-NN propagation step is extended to handle node and edge strengths, as well as to allow more processing to occur to the information transferred across edges. The full propagation equations for step t are

a_v^(t) = Σ_{v'∈V} Σ_{y=1}^{M} s_{v'} [ C_{v',v,y} ⊙ f_fwd(x_{v'}, h_{v'}^(t-1)) + C_{v,v',y} ⊙ f_bwd(x_{v'}, h_{v'}^(t-1)) ]   (5)
z_v^(t) = σ(W^z [a_v^(t) x_v] + U^z h_v^(t-1) + b^z)   (6)
r_v^(t) = σ(W^r [a_v^(t) x_v] + U^r h_v^(t-1) + b^r)   (7)
h_v^(t) = z_v^(t) ⊙ h_v^(t-1) + (1 - z_v^(t)) ⊙ tanh(W [a_v^(t) x_v] + U (r_v^(t) ⊙ h_v^(t-1)) + b)   (8)

Table 3: Accuracy of GGT-NN on the Rule 30 Automaton and Turing Machine tasks.

(Figure panels: network output after 1000, 2000, 3000, and 7000 training iterations, alongside the ground truth.)

Equation 5 has been adjusted in the most significant manner (relative to Equation 2). In particular, s_{v'} restricts propagation so that nodes with low strength send less information to adjacent nodes, S_edge has been replaced with C to allow edges with fractional strength, and the propagation matrices P_y, P'_y have been replaced with arbitrary functions f_fwd, f_bwd : R^N x R^D -> R^α, where α is the length of the vector a_v^(t). I used a fully connected layer to implement each function in my experiments.
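To make the strength-weighted message passing of Equation 5 concrete, here is a minimal sketch, assuming numpy and the GraphState sketch from earlier. The names propagation_step, f_fwd, and f_bwd are placeholders for this sketch, and for brevity a single pair of message functions is shared across all edge types rather than learned per type.

import numpy as np

def propagation_step(g, f_fwd, f_bwd):
    """One strength-aware propagation step (Equation 5, as reconstructed above).
    f_fwd(x, h) and f_bwd(x, h) return the message vector for one direction."""
    V = g.strengths.shape[0]
    Y = g.connectivity.shape[2]
    msg_dim = f_fwd(g.annotations[0], g.states[0]).shape[0]
    a = np.zeros((V, msg_dim))
    for v in range(V):
        for vp in range(V):
            fwd = f_fwd(g.annotations[vp], g.states[vp])
            bwd = f_bwd(g.annotations[vp], g.states[vp])
            for y in range(Y):
                # sender strength s_vp scales both incoming and outgoing messages
                a[v] += g.strengths[vp] * (g.connectivity[vp, v, y] * fwd
                                           + g.connectivity[v, vp, y] * bwd)
    return a  # a[v] is then fed into the GRU-style update of Equations 6-8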
Equations 6, 7, and 8 have also been modified slightly to add a bias term.

Figure 3: Visualization of network performance on the Rule 30 Automaton task. Top node (purple) represents zero, bottom node (blue) represents 1, and middle nodes (green, orange, and red) represent individual cells. Blue edges indicate adjacent cells, and gold edges indicate the value of each cell. Three timesteps occur between each row.

"}, {"section_index": "19", "section_name": "B.5 AGGREGATION", "section_text": "The aggregation transformation T_repr : Γ -> R^κ produces a graph-level representation vector from a graph. It functions very similarly to the output representation of a GG-NN (equation 3), combining an attention mechanism with a node representation function, but is modified slightly to take into account node strengths:

h_G = tanh( Σ_{v∈V} s_v ⊙ σ(i(h_v, x_v)) ⊙ tanh(j(h_v, x_v)) )

As in GG-NN, both i and j are neural networks, and in practice a single fully connected layer appears to be adequate for both.

"}, {"section_index": "20", "section_name": "5.2.1 CELLULAR AUTOMATON TASK", "section_text": "The first task used was a 1-dimensional cellular automaton, specifically the binary cellular automaton known as Rule 30 (Wolfram, 2002). Rule 30 acts on an infinite set of cells, each with a binary state (either 0 or 1). At each timestep, each cell deterministically changes state based on its previous state and the states of its neighbors. In particular, the update rules are:

The knowledge graph object used during generation of the bAbI tasks is structured as a dictionary relating entities to each other with specific relationship types. Entities are identified based on their names, and include people (John, Mary, Sandra), locations (bedroom, kitchen, garden), objects (football, apple, suitcase), animals (mouse, wolf, cat), and colors (white, yellow, green), depending on the particular task. Relationships between entities are also expressed as strings, and are directed: if John is holding the milk there is an "is_in" relationship from "milk" to "John"; if Sandra is in the bedroom there is an "is_in" relationship from "Sandra" to "bedroom"; if Lily is green there is a "has_color" relationship from "Lily" to "green", etc.

Cell states can be converted into graphical format by treating the cells as a linked list. Each of the cells is represented by a node with edges connecting it to the cell's neighbors, and a value edge is used to indicate whether the cell is 0 or 1. This format is described in more detail in Appendix C.

"}, {"section_index": "21", "section_name": "5.2.2 TURING MACHINES", "section_text": "The second task was simulating an arbitrary 2-symbol 4-state Turing machine. A Turing machine operates on an infinite tape of cells, each containing a symbol from a finite set of possible symbols. It has a head, which points at a particular cell and can read and write the symbol at that cell. It also has an internal state, from a finite set of states. At each timestep, based on the current state and the contents of the cell at the head, the machine writes a new symbol, changes the internal state, and can move the head left or right or leave it in place. The action of the machine depends on a finite set of rules, which specify the actions to take for each state-symbol combination.
Note that the version of Turing machine used here has only 2 symbols, and requires that the initial contents of the tape be all 0 (the first symbol) except for finitely many 1s (the second symbol).

The transformation from the knowledge object to a graph is straightforward: each entity used is assigned to a new node type, and relationships between entities are represented as edges between the corresponding nodes. To avoid confusion from overloaded relationships (such as "is_in" being used to represent an object being held by a person as well as a person being in a room), relation names are given a distinct edge type depending on the usage context. For instance, when a person is carrying an object, the generic "is_in" relationship becomes an edge of type "gettable_is_in_actor".

When converting a Turing machine to graphical format, the tape of the machine is modeled as a linked list of cells. Additionally, each state of the machine is denoted by a state node, and edges between these nodes encode the transition rules. There is also a head node, which connects both to the current cell and to the current state of the machine. See Appendix C for more details.

Some of the graph representations had to be modified in order to ensure that they contained all of the necessary information. For instance, task 3 requires the network to remember where items were in the past, but the knowledge object only contained references to their current locations. In these cases, a linked list structure was added to the knowledge object to allow the history information to be represented in the graph.

"}, {"section_index": "22", "section_name": "5.2.3 ANALYSIS AND RESULTS", "section_text": "The GGT-NN model was trained on 1000 examples of the Rule 30 automaton with different initial states, each of which simulated 7 timesteps of the automaton, and 20,000 examples of Turing machines.

In particular, each time an item changed locations, a new "record" node was added, with a "previous" edge to the previous history node and a "value" edge to the current location of the item. Each item then connected to the most recent history node using a "history-head" edge. This ensures that the history of each node is present in the graph.

(Figure panels: Original Task; Generalization: 20; Generalization: 30.)

Current neighborhood | 111 | 110 | 101 | 100 | 011 | 010 | 001 | 000
Next value           | 0   | 0   | 0   | 1   | 1   | 1   | 1   | 0
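The Rule 30 update in the table above is a direct neighborhood lookup. A minimal sketch; the names RULE30 and rule30_step are illustrative, cells are 0/1 values, and the finite tape is padded with zeros at both ends each step.

RULE30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
          (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def rule30_step(cells):
    """Apply one Rule 30 update to a finite, zero-padded tape."""
    padded = [0] + list(cells) + [0]
    return [RULE30[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

print(rule30_step([0, 0, 1, 0, 0]))  # -> [0, 1, 1, 1, 0]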
S1Bb3D5gg
[{"section_index": "0", "section_name": "LEARNING END-TO-END GOAL-ORIENTED DIALOG", "section_text": "Overall, while the methods we tried made some inroads into these tasks, there are still many challenges left unsolved. Our best models can learn to track implicit dialog states and manipulate OOV words and symbols (T1-T2) to issue API calls and progress in conversations, but they are still unable to perfectly handle interpreting knowledge about entities (from returned API calls) to present results tc the user, e.g. displaying options in T3. The improvement observed on the simulated tasks e.g. where MemNNs outperform supervised embeddings which in turn outperform IR methods, is also seen on the realistic data of T6 with similar relative gains. This is encouraging as it indicates that future work on breaking down, analysing and developing models over the simulated tasks should help in the real tasks as well. Results on Concierge confirm this observation: the pattern of relative performances of methods is the same on Concierge and on our series of tasks. This suggests that our synthetic data can indeed be used as an effective evaluation proxy.\nAntoine Bordes, Y-Lan Boureau & Jason Weston\nTraditional dialog systems used in goal-oriented applications require a lot of. domain-specific handcrafting, which hinders scaling up to new domains. End. to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in. chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a. testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols in order to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet. imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on. data from the second Dialog State Tracking Challenge (Henderson et al.]2014a). We show similar result patterns on data extracted from an online concierge service\nWe have introduced an open dataset and task set for evaluating end-to-end goal-oriented dialog. learning methods in a systematic and controlled way. We hope this will help foster progress of end-to. end conversational agents because (i) existing measures of performance either prevent reproducibility (different Mechanical Turk jobs) or do not correlate well with human judgements (Liu et al.|[2016). (ii) the breakdown in tasks will help focus research and development to improve the learning methods and (iii) goal-oriented dialog has clear utility in real applications. We illustrated how to use the. testbed using a variant of end-to-end Memory Networks, which prove an effective model on these. 
tasks relative to other baselines, but are still lacking in some key areas.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The authors would like to thank Martin Raison, Alex Lebrun and Laurent Landowski for their hel with the Conciergedata\nThe most useful applications of dialog systems such as digital personal assistants or bots are currentl goal-oriented and transactional: the system needs to understand a user request and complete a relatec task with a clear goal within a limited number of dialog turns. The workhorse of traditional dialo, systems is slot-filling (Lemon et al.2006f Wang and Lemon2013] Young et al.2013) whicl predefines the structure of a dialog state as a set of slots to be filled during the dialog. For a restauran reservation system, such slots can be the location, price range or type of cuisine of a restaurant Slot-filling has proven reliable but is inherently hard to scale to new domains: it is impossible tc manually encode all features and slots that users might refer to in a conversation."}, {"section_index": "2", "section_name": "REFERENCES", "section_text": "Bai, B., Weston, J., Grangier, D., Collobert, R., Sadamasa, K., Qi, Y., Chapelle, O., and Weinberger, K. (2009 Supervised semantic indexing. In Proceedings of ACM CIKM, pages 187-196. ACM..\nBanchs, R. E. (2012). Movie-dic: a movie dialogue corpus for research and development. In Proceedings of th 5Oth Annual Meeting of the ACL..\nEnd-to-end dialog systems, usually based on neural networks (Shang et al.]2015} Vinyals and. Le] 2015} Sordoni et al.]2015] Serban et al.]2015a] [Dodge et al.2016), escape such limitations all their components are directly trained on past dialogs, with no assumption on the domain or dialog state structure, thus making it easy to automatically scale up to new domains. They have shown promising performance in non goal-oriented chit-chat settings, where they were trained. to predict the next utterance in social media and forum threads (Ritter et al.]2011) Wang et al. 2013 Lowe et al.2015) or movie conversations (Banchs2012). But the performance achieved on chit-chat may not necessarily carry over to goal-oriented conversations. As illustrated in Figure1 in a restaurant reservation scenario, conducting goal-oriented dialog requires skills that go beyond language modeling, e.g., asking questions to clearly define a user request, querying Knowledge Bases (KBs), interpreting results from queries to display options to users or completing a transaction. This. makes it hard to ascertain how well end-to-end dialog models would do, especially since evaluating chit-chat performance in itself is not straightforward (Liu et al.|[2016). In particular, it is unclear if. end-to-end models are in a position to replace traditional dialog methods in a goal-directed setting can end-to-end dialog models be competitive with traditional methods even in the well-defined narrow-domain tasks where they excel? If not, where do they fall short?.\nChen, Y.-N., Hakkani-Tur, D., Tur, G., Gao, J., and Deng, L. (2016). End-to-end memory networks wi knowledge carryover for multi-turn spoken language understanding. In Proceedings of Interspeech.\nDahl, D. A., Bates, M., Brown, M., Fisher, W., Hunicke-Smith, K., Pallett, D., Pao, C., Rudnicky, A., and Shriberg, E. (1994). Expanding the scope of the atis task: The atis-3 corpus. In Proceedings of the workshop on Human Language Technology, pages 43-48. 
Association for Computational Linguistics.\nDodge, J., Gane, A., Zhang, X., Bordes, A., Chopra, S., Miller, A., Szlam, A., and Weston, J. (2016). Evaluatin prerequisite qualities for learning end-to-end dialog systems. In Proc. of ICLR\nHenderson, M., Thomson, B., and Williams, J. (2014a). The second dialog state tracking challenge. In 15tl Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 263\nThis paper aims to make it easier to address these questions by proposing an open resource to test end. to-end dialog systems in a way that 1) favors reproducibility and comparisons, and 2) is lightweight. and easy to use. We aim to break down a goal-directed objective into several subtasks to test some crucial capabilities that dialog systems should have (and hence provide error analysis by design).."}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Isbell, C. L., Kearns, M., Kormann, D., Singh, S., and Stone, P. (2ooo). Cobot in lambdamoo: A social statistic agent. In AAAI/IAAI, pages 36-41.\nJafarpour, S., Burges, C. J., and Ritter, A. (2010). Filter, rank, and transfer the knowledge: Learning to cha Advances in Ranking, 10\nLowe, R., Serban, I. V., Noseworthy, M., Charlin, L., and Pineau, J. (2016). On the evaluation of dialogue systems with next utterance classification. arXiv preprint arXiv:1605.05414.\nMikolov. T.. Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vecto space. arXiv:1301.3781.\nPietquin, O. and Hastie, H. (2013). A survey on metrics for the evaluation of user simulations. The knowledg engineering review, 28(01), 59-73.\nSordoni, A., Galley, M., Auli, M., Brockett, C., Ji, Y., Mitchell, M., Nie, J.-Y., Gao, J., and Dolan, B. (2O15). 7 neural network approach to context-sensitive generation of conversational responses. Proceedings of NAACI\nSu, P.-H., Vandyke, D., Gasic, M., Kim, D., Mrksic, N., Wen, T.-H., and Young, S. (2015a). Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems.. arXiv preprint arXiv:1508.03386. Su, P.-H., Vandyke, D., Gasic, M., Mrksic, N., Wen, T.-H., and Young, S. (2O15b). Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. arXiv preprint arXiv:1508.03391.\nIn the spirit of the bAbI tasks conceived as question answering testbeds (Weston et al.2015b), w designed a set of five tasks within the goal-oriented context of restaurant reservation. Grounde with an underlying KB of restaurants and their properties (location, type of cuisine, etc.), these task cover several dialog stages and test if models can learn various abilities such as performing dialog management, querying KBs, interpreting the output of such queries to continue the conversation o dealing with new entities not appearing in dialogs from the training set. In addition to showing hov the set of tasks we propose can be used to test the goal-directed capabilities of an end-to-end dialo system, we also propose results on two additional datasets extracted from real interactions with users to confirm that the pattern of results observed in our tasks is indeed a good proxy for what would be observed on real data, with the added benefit of better reproducibility and interpretability.\nVinyals. O. and Le. O. (2015). A neural conversational model. arXiv preprint arXiv:1506.05869\nWang, H., Lu, Z., Li, H., and Chen, E. (2013). A dataset for research on short-text conversations. 
In EMNLP\nThe goal here is explicitly not to improve the state of the art in the narrow domain of restaurant booking, but to take a narrow domain where traditional handcrafted dialog systems are known to perform well, and use that to gauge the strengths and weaknesses of current end-to-end systems with no domain knowledge. Solving our tasks requires manipulating both natural language and symbols from a KB. Evaluation uses two metrics, per-response and per-dialog accuracies, the latter tracking completion of the actual goal. Figure[1depicts the tasks and Section|3|details them. Section 4|compares multiple methods on these tasks. As an end-to-end neural model, we tested Memory Networks (Weston et al.|2015a), an attention-based architecture that has proven competitive for non goal-oriented dialog (Dodge et al.]2016). Our experiments in Section 5 show that Memory Networks can be trained to perform non-trivial operations such as issuing API calls to KBs and manipulating entities unseen in training. We confirm our findings on real human-machine dialogs\nWang, Z. and Lemon, O. (2013). A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Proceedings of the SIGDIAL 2013 Conference..\nWen, T.-H., Gasic, M., Mrksic, N., Su, P.-H., Vandyke, D., and Young, S. (2015). Semantically conditione lstm-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745.\nWeston, J., Chopra, S., and Bordes, A. (2015a). Memory networks. Proceedings of ICLR\nYoung, S., Gasic, M., Thomson, B., and Williams, J. D. (2013). Pomdp-based statistical spoken dialog systems. A review. Proceedings of the IEEE, 101(5), 1160-1179.\nHi! Hello, what can I help you with today?. I'd like to book a table for six people in an expensive price range with British food. Task 1 I'm on it! Issuing API calls Where should it be? In London. Ok, let me look into some options for you.. api call(British, London, Six, Expensive). Actually i would prefer for four.. Task 2 Sure. Is there anything else to update? No. Updating API calls Ok let me look into some options for you.. api call(British, London, Four, Expensive). The Place R phone The Place phone. * The Place R_cuisine british. * The Place R address The_Place_address. * The Place R location london. * The_Place R_availability four. * The Place R price expensive. * The Place R rating 7. * The Fancy Pub R phone The Fancy Pub phone. The Fancy Pub R cuisine british. * The_Fancy_Pub R_address The_Fancy_Pub_address. * The Fancy Pub R location london. Task 3 * The Fancy_Pub R_availability four. Displaying options * The_Fancy_Pub R_price expensive * The_Fancy_Pub R_rating 8. What do you think of this option: The Fancy Pub. No, I don't like that.. Sure, let me find an other option for you. What do you think of this option: The_Place. Let's do it! Great let me do the reservation.. Can you provide me the address?. Here it is: The Place address.. You rock! Task 4 Is there anything else I can help you with?. Providing extra-information. No thanks. You're welcome. Task 5 Conducting full dialogs.\nKim, S., D'Haro, L. F., Banchs, R. E., Williams, J. D., and Henderson, M. (2O16). The fourth dialog state tracking challenge. In Proceedings of the 7th International Workshop on Spoken Dialogue Systems (IwsDS\nRitter, A., Cherry, C., and Dolan, W. B. (2011). Data-driven response generation in social media. In Proceedings\nFigure 1: Goal-oriented dialog tasks. 
A user (in green) chats with a bot (in blue) to book a table at a restaurant. Models must predict bot utterances and API calls (in dark red). Task 1 tests the capacity of interpreting a request and asking the right questions to issue an API call. Task 2 checks the ability to modify an API call. Tasks 3 and 4 test the capacity of using outputs from an API call (in light red) to propose options (sorted by rating) and to provide extra-information. Task 5 combines everything.

Weston, J., Bordes, A., Chopra, S., and Mikolov, T. (2015b). Towards ai-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.

"}, {"section_index": "5", "section_name": "MEMORY NETWORKS IMPLEMENTATION", "section_text": "Storing and representing the conversation history As the model conducts a conversation with the user, at each time step t the previous utterance (from the user) and response (from the model) are appended to the memory. Hence, at any given time there are c^u_1, ..., c^u_t user utterances and c^r_1, ..., c^r_{t-1} model responses stored (i.e. the entire conversation). The aim at time t is to thus choose the next response c^r_t. We train on existing full dialog transcripts, so at training time we know the upcoming utterance c^r_t and can use it as a training target. Following Dodge et al. (2016), we represent each utterance as a bag-of-words and in memory it is represented as a vector using the embedding matrix A, i.e. the memory is an array with entries:

Tasks                       | T1       | T2    | T3    | T4   | T5    | T6    | Concierge
DIALOGS (average statistics)
Number of utterances        | 12       | 17    | 43    | 15   | 55    | 54    | 8
- user utterances           | 5        | 7     | 7     | 4    | 13    | 6     | 4
- bot utterances            | 7        | 10    | 10    | 4    | 18    | 8     | 4
- outputs from API calls    | 0        | 0     | 23    | 7    | 24    | 40    | 0
DATASETS (Tasks 1-5 share the same data source)
Vocabulary size             | 3,747            | 1,229        | 8,629
Candidate set size          | 4,212            | 2,406        | 11,482
Training dialogs            | 1,000            | 1,618        | 3,249
Validation dialogs          | 1,000            | 500          | 403
Test dialogs                | 1,000(*)         | 1,117        | 402

Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (*) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words.

m = (AΦ(c^u_1), AΦ(c^r_1), ..., AΦ(c^u_{t-1}), AΦ(c^r_{t-1}))

where Φ(·) maps the utterance to a bag of dimension V (the vocabulary), and A is a d x V matrix, where d is the embedding dimension. We retain the last user utterance c^u_t as the "input" to be used directly in the controller. The contents of each memory slot m_i so far do not contain any information of which speaker spoke an utterance, and at what time during the conversation. We therefore encode both of those pieces of information in the mapping by extending the vocabulary to contain T = 1000 extra "time features" which encode the index i into the bag-of-words, and two more features that encode whether the utterance was spoken by the user or the model.

from the restaurant reservation dataset of the 2nd Dialog State Tracking Challenge, or DSTC2 (Henderson et al., 2014a), which we converted into our task format, showing that Memory Networks can outperform a dedicated slot-filling rule-based baseline. We also evaluate on a dataset of human-human dialogs extracted from an online concierge service that books restaurants for users. Overall, the per-response performance is encouraging, but the per-dialog one remains low, indicating that end-to-end models still need to improve before being able to reliably handle goal-oriented dialog.

Attention over the memory The last user utterance c^u_t is embedded using the same matrix A, giving q = AΦ(c^u_t), which can also be seen as the initial state of the controller.
At this point the controller reads from the memory to find salient parts of the previous conversation that are relevant to producing a response. The match between q and the memories is computed by taking the inner product followed by a softmax: p_i = Softmax(q^T m_i), giving a probability vector over the memories. The vector that is returned back to the controller is then computed by o = R Σ_i p_i m_i, where R is a d x d square matrix. The controller state is then updated with q_2 = o + q. The memory can be iteratively reread to look for additional pertinent information using the updated state of the controller q_2 instead of q, and in general using q_h on iteration h, with a fixed number of iterations N (termed N hops). Empirically we find improved performance on our tasks with up to 3 or 4 hops.

The most successful goal-oriented dialog systems model conversation as partially observable Markov decision processes (POMDP) (Young et al., 2013). However, despite recent efforts to learn modules (Henderson et al., 2014b), they still require many hand-crafted features for the state and action space representations, which restrict their usage to narrow domains. Our simulation, used to generate goal-oriented datasets, can be seen as an equivalent of the user simulators used to train POMDPs (Young et al., 2013; Pietquin and Hastie, 2013), but for training end-to-end systems.

where there are C candidate responses in y, and W is of dimension d x V. In our tasks the set y is a (large) set of candidate responses which includes all possible bot utterances and API calls.

Serban et al. (2015b) list available corpora for training dialog systems. Unfortunately, no good resources exist to train and test end-to-end models in goal-oriented scenarios. Goal-oriented datasets are usually designed to train or test dialog state tracker components (Henderson et al., 2014a) and are hence of limited scale and not suitable for end-to-end learning (annotated at the state level and noisy). However, we do convert the Dialog State Tracking Challenge data into our framework. Some datasets are not open source, and require a particular license agreement or the participation to a challenge (e.g., the end-to-end task of DSTC4 (Kim et al., 2016)) or are proprietary (e.g., Chen et al. (2016)). Datasets are often based on interactions between users and existing systems (or ensembles of systems) like DSTC datasets, SFCore (Gasic et al., 2014) or ATIS (Dahl et al., 1994). This creates noise and makes it harder to interpret the errors of a model. Lastly, resources designed to connect dialog systems to users, in particular in the context of reinforcement learning, are usually built around a crowdsourcing setting such as Amazon Mechanical Turk, e.g., (Hixon et al., 2015; Wen et al., 2015; Su et al., 2015a;b). While this has clear advantages, it prevents reproducibility and consistent comparisons of methods in the exact same setting.

The entire model is trained using stochastic gradient descent (SGD), minimizing a standard cross-entropy loss between â and the true label a.

"}, {"section_index": "6", "section_name": "B EXAMPLES OF PREDICTIONS OF A MEMORY NETWORK", "section_text": "Tables 3, 4, 5 and 6 display examples of predictions of the best performing Memory Network on full dialogs (Task 5, with 3 hops) on test examples of Tasks 1-4, along with the values of the attention over each memory for each hop (p as defined in Sec. A). This model does not use match type features.
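The hop mechanism just described fits in a few lines. A minimal sketch, assuming numpy; memn2n_respond is a hypothetical name, m is the (num_memories x d) matrix of embedded memories, and candidates holds the V-dimensional bag-of-words vectors Φ(y) of the C candidate responses.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def memn2n_respond(m, q, R, W, candidates, hops=3):
    """Attention over memories, N hops, then scoring of candidate responses."""
    for _ in range(hops):
        p = softmax(m @ q)        # attention weights p_i over memories
        o = R @ (p @ m)           # weighted read, mapped by the d x d matrix R
        q = q + o                 # controller update: q_{h+1} = q_h + o
    scores = np.array([q @ (W @ phi_y) for phi_y in candidates])
    return softmax(scores)        # distribution over the C candidates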
Then, Table 7 displays an example of a prediction of the best performing Memory Network on Concierge (with 2 hops) on a test example, along with the values of the attention over each memory for each hop.

Tables 8 and 9 respectively display the values of the hyperparameters of the best Supervised Embeddings and Memory Networks selected for each task. These models were selected based on validation set performance.

The closest resource to ours might be the set of tasks described in (Dodge et al., 2016), since some of them can be seen as goal-oriented. However, those are question answering tasks rather than dialog, i.e. the bot only responds with answers, never questions, which does not reflect full conversation.
}, {"section_index": "6", "section_name": "GOAL-ORIENTED DIALOG TASKS", "section_text": "
Table 10 provides results for additional variants of supervised embeddings, using either a dictionary that includes all bigrams to leverage some word order information, or match type features. On some tasks, supervised embeddings perform better when the last user utterance is used as sole input, without the full dialog history (see Table 8). When no history is used, we slightly adapt match type features to only record the type: a special word corresponding to type T (e.g., phone, address, etc.) is appended to the representation of a candidate if the candidate contains a word that appears in the knowledge base as an entity of type T, regardless of whether the same word appeared earlier in the conversation. As seen in Table 10, match type features improve performance on out-of-vocabulary Tasks 1 and 5, bringing it closer to that of Memory Networks without match type features, but still quite far behind Memory Networks with match type features. Bigrams slightly hurt rather than help performance, except on Task 5 in the standard in-vocabulary setup (performance is lower in the OOV setup).

All our tasks involve a restaurant reservation system, where the goal is to book a table at a restaurant. The first five tasks are generated by a simulation; the last one uses real human-bot dialogs. The data for all tasks is available at http://fb.ai/babi. We also give results on a proprietary dataset extracted from an online restaurant reservation concierge service with anonymized users.

API calls are stored as bot utterances, and KB facts resulting from such calls as user utterances.

Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (*) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words.

a = Softmax(q_{N+1}^T W Φ(y_1), ..., q_{N+1}^T W Φ(y_C))
}, {"section_index": "7", "section_name": "3.1 RESTAURANT RESERVATION SIMULATION", "section_text": "
The simulation is based on an underlying KB, whose facts contain the restaurants that can be booked and their properties. Each restaurant is defined by a type of cuisine (10 choices, e.g., French, Thai), a location (10 choices, e.g., London, Tokyo), a price range (cheap, moderate or expensive) and a rating (from 1 to 8). For simplicity, we assume that each restaurant only has availability for a single party size (2, 4, 6 or 8 people). Each restaurant also has an address and a phone number listed in the KB.

The KB can be queried using API calls, which return the list of facts related to the corresponding restaurants. Each query must contain four fields: a location, a type of cuisine, a price range and a party size. It can return facts concerning one, several or no restaurant (depending on the party size).

Table 3: Task 1 (Issue API call) The model learns to direct its attention towards the 4 memories containing the information key to issue the API call. More hops help to strengthen this signal. <silence> is a special token used to indicate that the user did not speak at this turn; the model has to carry out the conversation with no additional input.

Time | Locutor | Dialog history | Hop #1 | Hop #2 | Hop #3
1  | User | hi                                        | .054 | .016 | .020
2  | Bot  | hello what can i help you with today      | .040 | .024 | .008
3  | User | may i have a table in paris               | .099 | .137 | .141
4  | Bot  | i'm on it                                 | .048 | .028 | .004
5  | User | <silence>                                 | .181 | .099 | .037
6  | Bot  | any preference on a type of cuisine       | .056 | .090 | .014
7  | User | i love indian food                        | .159 | .188 | .238
8  | Bot  | how many people would be in your party    | .051 | .022 | .010
9  | User | we will be six                            | .129 | .144 | .114
10 | Bot  | which price range are looking for         | .039 | .028 | .006
11 | User | in a moderate price range please          | .128 | .213 | .405
12 | Bot  | ok let me look into some options for you  | .016 | .011 | .003
User input: <silence>
Correct answer: api_call indian paris six moderate
Predicted answer: api_call indian paris six moderate [Correct]

Using the KB, conversations are generated in the format shown in Figure 1. Each example is a dialog comprising utterances from a user and a bot, as well as API calls and the resulting facts. Dialogs are generated after creating a user request by sampling an entry for each of the four required fields: e.g., the request in Figure 1 is [cuisine: British, location: London, party size: six, price range: expensive]. We use natural language patterns to create user and bot utterances. There are 43 patterns for the user and 20 for the bot (the user can use up to 4 ways to say something, while the bot always uses the same). Those patterns are combined with the KB entities to form thousands of different utterances.
}, {"section_index": "8", "section_name": "3.1.1 TASK DEFINITIONS", "section_text": "
We now detail each task. Tasks 1 and 2 test dialog management, to see if end-to-end systems can learn to implicitly track dialog state (never given explicitly), whereas Tasks 3 and 4 check if they can learn to use KB facts in a dialog setting. Task 3 also requires learning to sort. Task 5 combines all tasks.

Task 1: Issuing API calls. A user request implicitly defines a query that can contain from 0 to 4 of the required fields (sampled uniformly; in Figure 1, it contains 3). The bot must ask questions for filling the missing fields and eventually generate the correct corresponding API call. The bot asks for information in a deterministic order, making prediction possible.

Task 2: Updating API calls. Starting by issuing an API call as in Task 1, users then ask to update their requests between 1 and 4 times (sampled uniformly). The order in which fields are updated is random. The bot must ask users if they are done with their updates and issue the updated API call.

Task 3: Displaying options. Given a user request, we query the KB using the corresponding API call and add the facts resulting from the call to the dialog history. The bot must propose options to users by listing the restaurant names sorted by their corresponding rating (from higher to lower) until users accept. For each option, users have a 25% chance of accepting. If they do, the bot must stop displaying options, otherwise propose the next one. Users always accept the option if this is the last remaining one. We only keep examples with API calls retrieving at least 3 options.

Table 4: Task 2 (Update API call) Out of the multiple memories from the current dialog, the model correctly focuses on the 2 important pieces: the original API call and the utterance giving the update.

Time | Locutor | Dialog history | Hop #1 | Hop #2 | Hop #3
3  | User | may i have a table in paris               | .061 | .072 | .040
4  | Bot  | i'm on it                                 | .026 | .012 | .001
5  | User | <silence>                                 | .087 | .042 | .012
6  | Bot  | any preference on a type of cuisine       | .026 | .023 | .001
7  | User | i love indian food                        | .081 | .070 | .055
8  | Bot  | how many people would be in your party    | .025 | .006 | .001
9  | User | we will be six                            | .059 | .051 | .018
10 | Bot  | which price range are looking for         | .038 | .043 | .004
11 | User | in a moderate price range please          | .080 | .095 | .096
12 | Bot  | ok let me look into some options for you  | .025 | .042 | .003
13 | User | <silence>                                 | .127 | .069 | .032
14 | Bot  | api_call indian paris six moderate        | .062 | .113 | .043
15 | User | instead could it be with french food      | .188 | .311 | .683
16 | Bot  | sure is there anything else to update     | .016 | .007 | .001
17 | User | no                                        | .028 | .013 | .007
18 | Bot  | ok let me look into some options for you  | .011 | .006 | .000
User input: <silence>
Correct answer: api_call french paris six moderate
Predicted answer: api_call french paris six moderate [Correct]

Task 4: Providing extra information. Given a user request, we sample a restaurant and start the dialog as if users had agreed to book a table there. We add all KB facts corresponding to it to the dialog. Users then ask for the phone number of the restaurant, its address or both, with proportions 25%, 25% and 50% respectively. The bot must learn to use the KB facts correctly to answer.

Task 5: Conducting full dialogs. We combine Tasks 1-4 to generate full dialogs just as in Figure 1. Unlike in Task 3, we keep examples if API calls return at least 1 option instead of 3.
}, {"section_index": "9", "section_name": "3.1.2 DATASETS", "section_text": "
We want to test how well models handle entities appearing in the KB but not in the dialog training sets. We split types of cuisine and locations in half, and create two KBs, one with all facts about restaurants within the first halves and one with the rest. This yields two KBs of 4,200 facts and 600 restaurants each (5 types of cuisine x 5 locations x 3 price ranges x 8 ratings) that only share price ranges, ratings and party sizes, but have disjoint sets of restaurants, locations, types of cuisine, phones and addresses. We use one of the KBs to generate the standard training, validation and test dialogs, and use the other KB only to generate test dialogs, termed Out-Of-Vocabulary (OOV) test sets.

For training, systems have access to the training examples and both KBs. We then evaluate on both test sets, plain and OOV. Beyond the intrinsic difficulty of each task, the challenge on the OOV test sets is for models to generalize to new entities (restaurants, locations and cuisine types) unseen in any training dialog, something natively impossible for embedding methods. Ideally, models could, for instance, leverage information coming from the entities of the same type seen during training.

We generate five datasets, one per task defined in 3.1.1. Table 1 gives their statistics. Training sets are relatively small (1,000 examples) to create realistic learning conditions. The dialogs from the training and test sets are different, never being based on the same user requests. Thus, we test if models can generalize to new combinations of fields.
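To make the simulator's output format concrete, here is a minimal sketch of how a user request could be sampled over the four required fields and rendered as the api_call pattern appearing in the dialogs. This is our illustration, not the authors' generator; the entity lists are placeholder subsets, not the actual KB.

```python
import random

# Illustrative subsets; the real KB has 10 cuisines, 10 locations, etc.
CUISINES = ["british", "french", "indian", "italian", "thai"]
LOCATIONS = ["london", "paris", "tokyo", "bombay", "madrid"]
PRICES = ["cheap", "moderate", "expensive"]
SIZES = ["two", "four", "six", "eight"]

def sample_request():
    # A user request fixes the four fields an API call requires.
    return {
        "cuisine": random.choice(CUISINES),
        "location": random.choice(LOCATIONS),
        "size": random.choice(SIZES),
        "price": random.choice(PRICES),
    }

def to_api_call(req):
    # Matches the call format seen in the dialogs,
    # e.g. "api_call indian paris six moderate".
    return "api_call {cuisine} {location} {size} {price}".format(**req)

print(to_api_call(sample_request()))
```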
Dialog systems are evaluated in a ranking, not a generation, setting: at each turn of the dialog, we test whether they can predict bot utterances and API calls by selecting a candidate, not by generating it.(1) Candidates are ranked from a set of all bot utterances and API calls appearing in training, validation and test sets (plain and OOV) for all tasks combined.

(1) Lowe et al. (2016) termed this setting Next-Utterance-Classification.
}, {"section_index": "10", "section_name": "3.2 DIALOG STATE TRACKING CHALLENGE", "section_text": "
Since our tasks rely on synthetically generated language for the user, we supplement our dataset with real human-bot dialogs. We use data from DSTC2 (Henderson et al., 2014a), which is also in the restaurant booking domain. Unlike our tasks, its user requests only require 3 fields: type of cuisine (91 choices), location (5 choices) and price range (3 choices). The dataset was originally designed for dialog state tracking, hence every dialog turn is labeled with a state (a user intent + slots) to be predicted. As our goal is to evaluate end-to-end training, we did not use that, but instead converted the data into the format of our 5 tasks and included it in the dataset as Task 6.

This dataset has similar statistics to our Task 5 (see Table 1) but is harder. The dialogs are noisier and the bots made mistakes due to speech recognition errors or misinterpretations, and also do not always have a deterministic behavior (the order in which they can ask for information varies).

We used the provided speech transcriptions to create the user and bot utterances, and given the dialog states we created the API calls to the KB and their outputs, which we added to the dialogs. We also added ratings to the restaurants returned by the API calls, so that the options proposed by the bots can be consistently predicted (by using the highest rating). We did use the original test set but use a slightly different training/validation split. Our evaluation differs from the challenge (we do not predict the dialog state), so we cannot compare with the results from (Henderson et al., 2014a).

Table 5: Task 3 (Displaying options) The model knows it has to display options but the attention is wrong: it should attend on the ratings to select the best option (with highest rating). It cannot learn that properly and match type features do not help. It is correct here by luck; the task is not solved overall (see Tab. 2). We do not show all memories in the table, only those with meaningful attention.

Time | Locutor | Dialog history | Hop #1 | Hop #2 | Hop #3
14 | Bot  | api_call indian paris six moderate    | .012 | .000 | .000
15 | User | instead could it be with french food  | .067 | .103 | .147
20 | Bot  | api_call french paris six moderate    | .012 | .000 | .000
21 | User | resto_1 r_phone resto_1_phone         | .018 | .004 | .000
23 | User | resto_1 r_cuisine french              | .029 | .005 | .000
24 | User | resto_1 r_location paris              | .060 | .292 | .094
25 | User | resto_1 r_number six                  | .050 | .298 | .745
26 | User | resto_1 r_price moderate              | .060 | .090 | .002
27 | User | resto_1 r_rating 6                    | .016 | .002 | .000
30 | User | resto_2 r_cuisine french              | .031 | .007 | .000
31 | User | resto_2 r_location paris              | .040 | .081 | .004
32 | User | resto_2 r_number six                  | .020 | .012 | .000
33 | User | resto_2 r_price moderate              | .029 | .009 | .000
37 | User | resto_3 r_cuisine french              | .014 | .001 | .000
38 | User | resto_3 r_location paris              | .028 | .016 | .001
39 | User | resto_3 r_number six                  | .024 | .022 | .004
40 | User | resto_3 r_price moderate              | .039 | .015 | .001
User input: <silence>
Correct answer: what do you think of this option: resto_1
Predicted answer: what do you think of this option: resto_1 [Correct]

Table 6: Task 4 (Providing extra information) The model knows it must display a phone or an address but, as explained in Section A, the embeddings mix up the information and make it hard to distinguish between different phone numbers or addresses, making answering correctly very hard. As shown in the results of Tab. 2, this problem can be solved by adding match type features, which allow to emphasize entities actually appearing in the history. The attention is globally wrong here.

Time | Locutor | Dialog history | Hop #1 | Hop #2 | Hop #3
14 | Bot  | api_call indian paris six moderate        | .006 | .000 | .000
15 | User | instead could it be with french food      | .024 | .011 | .007
20 | Bot  | api_call french paris six moderate        | .005 | .000 | .001
21 | User | resto_1 r_phone resto_1_phone             | .011 | .005 | .004
22 | User | resto_1 r_address resto_1_address         | .018 | .004 | .001
23 | User | resto_1 r_cuisine french                  | .018 | .003 | .001
24 | User | resto_1 r_location paris                  | .068 | .091 | .108
25 | User | resto_1 r_number six                      | .086 | .078 | .020
26 | User | resto_1 r_price moderate                  | .070 | .225 | .369
27 | User | resto_1 r_rating 6                        | .014 | .006 | .008
28 | User | resto_2 r_phone resto_2_phone             | .015 | .009 | .006
29 | User | resto_2 r_address resto_2_address         | .014 | .004 | .001
31 | User | resto_2 r_location paris                  | .075 | .176 | .193
32 | User | resto_2 r_number six                      | .100 | .126 | .026
33 | User | resto_2 r_price moderate                  | .038 | .090 | .167
35 | User | resto_3 r_phone resto_3_phone             | .004 | .001 | .001
36 | User | resto_3 r_address resto_3_address         | .005 | .002 | .001
37 | User | resto_3 r_location paris                  | .028 | .028 | .026
39 | User | resto_3 r_number six                      | .039 | .013 | .002
40 | User | resto_3 r_price moderate                  | .018 | .008 | .013
42 | Bot  | what do you think of this option: resto_1 | .074 | .001 | .000
43 | User | let's do it                               | .032 | .004 | .001
44 | Bot  | great let me do the reservation           | .003 | .000 | .000
User input: do you have its address
Correct answer: here it is: resto_1_address
Predicted answer: here it is: resto_8_address [Incorrect]
}, {"section_index": "11", "section_name": "3.3 ONLINE CONCIERGE SERVICE", "section_text": "
Tasks 1-6 are, at least partially, artificial. This provides perfect control over their design (at least for Tasks 1-5), but no guarantee that good performance would carry over from such synthetic to more realistic conditions. To quantify this, we also evaluate the models from Section 4 on data extracted from a real online concierge service performing restaurant booking: users make requests through a text-based chat interface that are handled by human operators who can make API calls. All conversations are between native English speakers.

We collected around 4k chats to create this extra dataset, denoted Concierge. All conversations have been anonymized by (1) removing all user identifiers, (2) using the Stanford NER tagger to remove named entities (locations, timestamps, etc.), (3) running some manually defined regexes to filter out any remaining salient information (phone numbers, etc.). The dataset does not contain results from API calls, but still records when operators made use of an external service (Yelp or OpenTable) to gather information. Hence, these have to be predicted, but without any argument (unlike in Task 2).

The statistics of Concierge are given in Table 1. The dialogs are shorter than in Tasks 1-6, especially since they do not include results of API calls, but the vocabulary is more diverse and so is the candidate set; the candidate set is made of all utterances of the operator appearing in the training, validation and test sets. Beyond the higher variability of the language used by human operators compared to bots, the dataset offers additional challenges. The set of user requests is much wider, ranging from managing restaurant reservations to asking for recommendations or specific information. Users do not always stay focused on the request. API calls are not always used (e.g., the operator might use neither Yelp nor OpenTable to find a restaurant), and facts about restaurants are not structured nor constrained as in a KB. The structure of dialogs is thus much more variable. Users and operators also make typos, spelling and grammar mistakes.
}, {"section_index": "12", "section_name": "4 MODELS", "section_text": "
To demonstrate how to use the dataset and provide baselines, we evaluate several learning methods on our goal-oriented dialog tasks: rule-based systems, classical information retrieval methods, supervised embeddings, and end-to-end Memory Networks.
}, {"section_index": "13", "section_name": "4.1 RULE-BASED SYSTEMS", "section_text": "
Our tasks T1-T5 are built with a simulator so as to be completely predictable. Thus it is possible to hand-code a rule-based system that achieves 100% on them, similar to the bAbI tasks of Weston et al. (2015b). Indeed, the point of these tasks is not to check whether a human is smart enough to be able to build a rule-based system to solve them, but to help analyze in which circumstances machine learning algorithms are smart enough to work, and where they fail.

However, the Dialog State Tracking Challenge task (T6) contains some real interactions with users. This makes rule-based systems less straightforward and not so accurate (which is where we expect machine learning to be useful). We implemented a rule-based system for this task in the following way. We initialized a dialog state using the 3 relevant slots for this task: cuisine type, location and price range. Then we analyzed the training data and wrote a series of rules that fire for triggers like word matches, positions in the dialog, entity detections or dialog state, to output particular responses, API calls and/or update a dialog state. Responses are created by combining patterns extracted from the training set with entities detected in the previous turns or stored in the dialog state. Overall we built 28 rules and extracted 21 patterns. We optimized the choice of rules and their application priority (when needed) using the validation set, reaching a validation per-response accuracy of 40.7%. We did not build a rule-based system for Concierge data as it is even less constrained.
}, {"section_index": "14", "section_name": "4.2 CLASSICAL INFORMATION RETRIEVAL MODELS", "section_text": "
TF-IDF Match. For each possible candidate response, we compute a matching score between the input and the response, and rank the responses by score. The score is the TF-IDF weighted cosine similarity between the bag-of-words of the input and the bag-of-words of the candidate response. We consider the case of the input being either only the last utterance or the entire conversation history, and choose the variant that works best on the validation set (typically the latter). A minimal code sketch of this scoring scheme is given below.
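As a concrete illustration of the TF-IDF Match baseline just described, the following sketch ranks candidate responses by TF-IDF-weighted cosine similarity using scikit-learn. It is our stand-in, not the authors' implementation; in practice the idf statistics would be fit on the full training corpus rather than on the handful of strings scored here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_candidates(history, candidates):
    # Fit a TF-IDF vocabulary over the texts we score, then rank the
    # candidates by cosine similarity to the (concatenated) dialog history.
    vec = TfidfVectorizer()
    mats = vec.fit_transform([history] + candidates)
    scores = cosine_similarity(mats[0], mats[1:])[0]
    order = scores.argsort()[::-1]
    return [(candidates[i], scores[i]) for i in order]

history = "may i have a table in paris i love indian food we will be six"
candidates = ["api_call indian paris six moderate",
              "hello what can i help you with today"]
print(rank_candidates(history, candidates)[0])  # highest-scoring candidate
```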
Nearest Neighbor. Using the input, we find the most similar conversation in the training set, and output the response from that example. In this case we consider the input to only be the last utterance, and consider the training set as (utterance, response) pairs that we select from. We use word overlap as the scoring method. When several responses are associated with the same utterance in training, we sort them by decreasing co-occurrence frequency.

Table 7: Concierge Data. The model is also able to learn from human-human dialogs. <person>, <org>, <number> and <date> are special tokens used to anonymize the data. We report the top 5 answers predicted by the model. They are all semantically equivalent. Note that the utterances, while all produced by humans, are not perfect English ("rservation", "I'll check into it").

Time | Locutor | Dialog History | Hop #1 | Hop #2
1 | User | hey concierge                                                          | .189 | .095
2 | User | could you check if i can get a rservation at <org> <date> for brunch  | .209 | .178
3 | User | <number> people                                                        | .197 | .142
4 | User | <silence>                                                              | .187 | .167
5 | Bot  | hi <person> unfortunately <org> is fully booked for <date> and there's <number> people on the waiting list | .225 | .410
User input: when's the earliest availability
Correct answer: i'll check
Pred. answer #1: i'm on it [Incorrect]
Pred. answer #2: i'll find out [Incorrect]
Pred. answer #3: i'll take a look [Incorrect]
Pred. answer #4: i'll check [Correct]
Pred. answer #5: i'll check into it [Incorrect]

Table 8: Hyperparameters of Supervised Embeddings. When Use History is True, the whole conversation history is concatenated with the latest user utterance to create the input. If False, only the latest utterance is used as input.

Task | Learning Rate | Margin m | Embedding Dim d | Negative Cand. N | Use History
Task 1    | 0.01  | 0.01 | 32  | 100  | True
Task 2    | 0.01  | 0.01 | 128 | 100  | False
Task 3    | 0.01  | 0.1  | 128 | 1000 | False
Task 4    | 0.001 | 0.1  | 128 | 1000 | False
Task 5    | 0.01  | 0.01 | 32  | 100  | True
Task 6    | 0.001 | 0.01 | 128 | 100  | False
Concierge | 0.001 | 0.1  | 64  | 100  | False

Table 9: Hyperparameters of Memory Networks. The longer and more complex the dialogs are, the more hops are needed.

Task | Learning Rate | Margin m | Embedding Dim d | Negative Cand. N | Nb Hops
Task 1    | 0.01  | 0.1 | 128 | 100 | 1
Task 2    | 0.01  | 0.1 | 32  | 100 | 1
Task 3    | 0.01  | 0.1 | 32  | 100 | 3
Task 4    | 0.01  | 0.1 | 128 | 100 | 2
Task 5    | 0.01  | 0.1 | 32  | 100 | 3
Task 6    | 0.01  | 0.1 | 128 | 100 | 4
Concierge | 0.001 | 0.1 | 128 | 100 | 2
}, {"section_index": "15", "section_name": "4.3 SUPERVISED EMBEDDING MODELS", "section_text": "
A standard, often strong, baseline is to use supervised word embedding models for scoring (conversation history, response) pairs. The embedding vectors are trained directly for this goal. In contrast, word embeddings are most well-known in the context of unsupervised training on raw text, as in word2vec (Mikolov et al., 2013). Such models are trained by learning to predict the middle word given the surrounding window of words, or vice-versa. However, given training data consisting of dialogs, a much more direct and strongly performing training procedure can be used: predict the next response given the previous conversation. In this setting a candidate response y is scored against the input x: f(x, y) = (Ax)^T By, where A and B are d x V word embedding matrices, i.e. input and response are treated as summed bags-of-embeddings. We also consider the case of enforcing A = B, which sometimes works better, and optimize the choice on the validation set.

The embeddings are trained with a margin ranking loss: f(x, y) > m + f(x, y'), with m the size of the margin and y' a negative candidate; we sample N negative candidate responses y' per example, and train with SGD.
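A minimal numpy sketch of this scoring function and of one SGD step on the hinge form of the ranking constraint follows. This is our illustration with hand-derived gradients for the bilinear score; all names, dimensions and constants are made up, and it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, m, lr = 1000, 32, 0.1, 0.01   # vocab size, embedding dim, margin, learning rate
A = 0.1 * rng.standard_normal((d, V))   # input-side embeddings
B = 0.1 * rng.standard_normal((d, V))   # response-side embeddings

def score(x, y):
    # x, y: bag-of-words count vectors of shape (V,); f(x, y) = (Ax)^T (By)
    return (A @ x) @ (B @ y)

def sgd_step(x, y_pos, y_neg):
    # Hinge version of the ranking constraint f(x, y_pos) > m + f(x, y_neg).
    global A, B
    loss = max(0.0, m - score(x, y_pos) + score(x, y_neg))
    if loss > 0.0:
        # For f = (Ax)^T(By): df/dA = (By) x^T and df/dB = (Ax) y^T.
        # Compute pieces first so both updates use the pre-step weights.
        Ax, Bp, Bn = A @ x, B @ y_pos, B @ y_neg
        A += lr * np.outer(Bp - Bn, x)
        B += lr * (np.outer(Ax, y_pos) - np.outer(Ax, y_neg))
    return loss
```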
This approach has been previously shown to be very effective in a range of contexts (Bai et al., 2009).

Memory Networks (Weston et al., 2015a; Sukhbaatar et al., 2015) are a recent class of models that have been applied to a range of natural language processing tasks, including question answering (Weston et al., 2015b), language modeling (Sukhbaatar et al., 2015), and non-goal-oriented dialog (Dodge et al., 2016). By first writing and then iteratively reading from a memory component (using hops) that can store historical dialogs and short-term context to reason about the required response, they have been shown to perform well on those tasks and to outperform some other end-to-end architectures based on Recurrent Neural Networks. Hence, we chose them as our end-to-end model baseline.

Words denoting entities have two important traits: 1) exact matches are usually more appropriate to deal with them than approximate matches, and 2) they frequently appear as OOV words (e.g., the name of a new restaurant). Both are a challenge for embedding-based methods. Firstly, embedding into a low dimensional space makes it hard to differentiate between exact word matches and matches between words with similar meaning (Bai et al., 2009). While this can be a virtue (e.g. when using synonyms), it is often a flaw when dealing with entities (e.g. failure to differentiate between phone numbers since they have similar embeddings). Secondly, when a new word is used (e.g. the name of a new restaurant) not seen before in training, no word embedding is available, typically resulting in failure (Weston et al., 2015a).

Both problems can be alleviated with match type features. Specifically, we augment the vocabulary with 7 special words, one for each of the KB entity types (cuisine type, location, price range, party size, rating, phone number and address). For each type, the corresponding type word is added to the candidate representation if a word is found that appears 1) as a KB entity of that type, 2) in the candidate, and 3) in the input or memory. Any word that matches as a KB entity can be typed even if it has never been seen before in training dialogs. These features allow the model to learn to rely on type information, using exact matching word cues when OOV entity embeddings are not known, as long as it has access to a KB with the OOV entities. We assess the impact of such features for TF-IDF Match, Supervised Embeddings and Memory Networks.
}, {"section_index": "16", "section_name": "5 EXPERIMENTS", "section_text": "
Our main results across all the models and tasks are given in Table 2 (extra results are also given in Table 10 of Appendix D). The first 5 rows show tasks T1-T5, and rows 6-10 show the same tasks in the out-of-vocabulary setting. Rows 11 and 12 give results for the Dialog State Tracking Challenge task (T6) and Concierge respectively. Columns 2-7 give the results of each method tried in terms of per-response accuracy and per-dialog accuracy, the latter given in parenthesis.
Per-response accuracy counts the percentage of responses that are correct (i.e., the correct candidate is chosen out of all possible candidates). Per-dialog accuracy counts the percentage of dialogs where every response is correct. Ultimately, if only one response is incorrect this could result in a failed dialog, i.e. failure to achieve the goal (in this case, of achieving a restaurant booking). Note that we test Memory Networks (MemNNs) with and without match type features; the results are shown in the last two columns. The hyperparameters for all models were optimized on the validation sets; values for the best performing models are given in Appendix C.

We use the MemN2N architecture of Sukhbaatar et al. (2015), with an additional modification to leverage exact matches and types, described shortly. Apart from that addition, the main components of the model are (i) how it stores the conversation in memory, (ii) how it reads from the memory to reason about the response, and (iii) how it outputs the response. The details are given in Appendix A.

Table 2: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog State Tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parenthesis. (*) For Concierge, an example is considered correctly answered if the correct response is ranked among the top 10 candidates by the bot, to accommodate the much larger range of semantically equivalent responses among candidates (see ex. in Tab. 7). (†) We did not implement MemNNs + match type on Concierge, because this method requires a KB and there is none associated with it.

Task | Rule-based Systems | TF-IDF Match (no type) | TF-IDF Match (+ type) | Nearest Neighbor | Supervised Embeddings | MemNNs (no match type) | MemNNs (+ match type)
T1: Issuing API calls        | 100 (100) | 5.6 (0) | 22.4 (0) | 55.1 (0) | 100 (100) | 99.9 (99.6) | 100 (100)
T2: Updating API calls       | 100 (100) | 3.4 (0) | 16.4 (0) | 68.3 (0) | 68.4 (0)  | 100 (100)   | 98.3 (83.9)
T3: Displaying options       | 100 (100) | 8.0 (0) | 8.0 (0)  | 58.8 (0) | 64.9 (0)  | 74.9 (2.0)  | 74.9 (0)
T4: Providing information    | 100 (100) | 9.5 (0) | 17.8 (0) | 28.6 (0) | 57.2 (0)  | 59.5 (3.0)  | 100 (100)
T5: Full dialogs             | 100 (100) | 4.6 (0) | 8.1 (0)  | 57.1 (0) | 75.4 (0)  | 96.1 (49.4) | 93.4 (19.7)
T1(OOV): Issuing API calls   | 100 (100) | 5.8 (0) | 22.4 (0) | 44.1 (0) | 60.0 (0)  | 72.3 (0)    | 96.5 (82.7)
T2(OOV): Updating API calls  | 100 (100) | 3.5 (0) | 16.8 (0) | 68.3 (0) | 68.3 (0)  | 78.9 (0)    | 94.5 (48.4)
T3(OOV): Displaying options  | 100 (100) | 8.3 (0) | 8.3 (0)  | 58.8 (0) | 65.0 (0)  | 74.4 (0)    | 75.2 (0)
T4(OOV): Providing information | 100 (100) | 9.8 (0) | 17.2 (0) | 28.6 (0) | 57.0 (0) | 57.6 (0)   | 100 (100)
T5(OOV): Full dialogs        | 100 (100) | 4.6 (0) | 9.0 (0)  | 48.4 (0) | 58.2 (0)  | 65.5 (0)    | 77.7 (0)
T6: Dialog state tracking 2  | 33.3 (0)  | 1.6 (0) | 1.6 (0)  | 21.9 (0) | 22.6 (0)  | 41.1 (0)    | 41.0 (0)
Concierge (*)                | n/a       | 1.1 (0.2) | n/a    | 13.4 (0.5) | 14.6 (0.5) | 16.7 (1.2) | n/a (†)

Table 10: Test results across all tasks for variants of supervised embeddings, compared to Memory Networks. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog State Tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parenthesis.

Task | Sup. Emb. (no match type, no bigram) | Sup. Emb. (+ match type, no bigram) | Sup. Emb. (+ bigrams, no match type) | MemNNs (no match type) | MemNNs (+ match type)
T1: Issuing API calls        | 100 (100) | 83.2 (0) | 98.6 (92.4) | 99.9 (99.6) | 100 (100)
T2: Updating API calls       | 68.4 (0)  | 68.4 (0) | 68.3 (0)    | 100 (100)   | 98.3 (83.9)
T3: Displaying options       | 64.9 (0)  | 64.9 (0) | 64.9 (0)    | 74.9 (2.0)  | 74.9 (0)
T4: Providing information    | 57.2 (0)  | 57.2 (0) | 57.3 (0)    | 59.5 (3.0)  | 100 (100)
T5: Full dialogs             | 75.4 (0)  | 76.2 (0) | 83.4 (0)    | 96.1 (49.4) | 93.4 (19.7)
T1(OOV): Issuing API calls   | 60.0 (0)  | 67.2 (0) | 58.8 (0)    | 72.3 (0)    | 96.5 (82.7)
T2(OOV): Updating API calls  | 68.3 (0)  | 68.3 (0) | 68.3 (0)    | 78.9 (0)    | 94.5 (48.4)
T3(OOV): Displaying options  | 65.0 (0)  | 65.0 (0) | 62.1 (0)    | 74.4 (0)    | 75.2 (0)
T4(OOV): Providing information | 57.0 (0) | 57.1 (0) | 57.0 (0)   | 57.6 (0)    | 100 (100)
T5(OOV): Full dialogs        | 58.2 (0)  | 64.4 (0) | 50.4 (0)    | 65.5 (0)    | 77.7 (0)
T6: Dialog state tracking 2  | 22.6 (0)  | 22.1 (0) | 21.8 (0)    | 41.1 (0)    | 41.0 (0)

The classical IR method TF-IDF Match performs the worst of all methods, and much worse than the Nearest Neighbor IR method, which is true both on the simulated tasks T1-T5 and on the real data of T6 and Concierge. Supplementing TF-IDF Match with match type features noticeably improves performance, which however still remains far behind Nearest Neighbor IR (adding bigrams to the dictionary has no effect on performance). This is in sharp contrast to other recent results on data-driven non-goal-directed conversations, e.g. over dialogs on Twitter (Ritter et al., 2011) or Reddit (Dodge et al., 2016), where it was found that TF-IDF Match outperforms Nearest Neighbor, as general conversations on a given subject typically share many words. We conjecture that the goal-oriented nature of the conversation means that the conversation moves forward more quickly, sharing fewer words per (input, response) pair, e.g. consider the example in Figure 1.

Supervised embeddings outperform classical IR methods in general, indicating that learning a mapping between words (via word embeddings) is important. However, only one task (T1, Issuing API calls) is completely successful. In the other tasks, some responses are correct, as shown by the per-response accuracy; however, there is no dialog where the goal is actually achieved (i.e., the mean dialog accuracy is 0). Typically the model can provide correct responses for greeting messages, asking to wait, making API calls and asking if there are any other options necessary. However, it fails to interpret the results of API calls to display options, provide information or update the calls with new information, resulting in most of its errors, even when match type features are provided.

Memory Networks (without match type features) outperform classical IR and supervised embeddings across all of the tasks. They can solve the first two tasks (issuing and updating API calls) adequately. On the other tasks, they give improved results, but do not solve them. While the per-response accuracy is improved, the per-dialog accuracy is still close to 0 on T3 and T4. Some examples of predictions of the MemNN for T1-4 are given in Appendix B. On the OOV tasks performance is again improved, but this is all due to better performance on known words, as unknown words are simply not used without the match type features. As stated in Appendix C, optimal hyperparameters on several of the tasks involve 3 or 4 hops, indicating that iteratively accessing and reasoning over the conversation helps: e.g., on T3 using 1 hop gives 64.8% while 2 hops yields 74.7%. Appendix B displays illustrative examples of Memory Network predictions on T1-4 and Concierge.

Memory Networks with match type features give two performance gains over the same models without match type features: (i) T4 (providing information) becomes solvable, because matches can be made to the results of the API call; and (ii) out-of-vocabulary results are significantly improved as well. Still, tasks T3 and T5 remain fail cases, performance drops slightly on T2 compared to not using match type features, and no relative improvement is observed on T6. Finally, note that matching words on its own is not enough, as evidenced by the poor performance of TF-IDF matching; this idea must be combined with types and the other properties of the MemNN model.

Unsurprisingly, perfectly coded rule-based systems can solve the simulated tasks T1-T5 perfectly, whereas our machine learning methods cannot. However, it is not easy to build an effective rule-based system when dealing with real language on real problems: on T6, which contains real dialogs, our rule-based system only reaches 33.3%.
}]
[{"section_index": "0", "section_name": "5 CONCLUSIONS", "section_text": "We train shallow nets with and without convolution to mimic state-of-the-art deep convolutiona nets. If one controls for the number of learnable parameters, nets containing a single fully-connectec non-linear layer and no convolutional layers are not able to learn functions as accurate as deepe convolutional models. This result is consistent with those reported in Ba and Caruana (2014 However, we also find that shallow nets that contain only 1-2 convolutional layers also are unable to achieve accuracy comparable to deeper models if the same number of parameters are used ir the shallow and deep models. Deep convolutional nets are significantly more accurate than shallov convolutional models, given the same parameter budget. We do, however, see evidence that mode compression allows accurate models to be trained that are shallower and have fewer convolutiona layers than the deep convolutional architectures needed to learn high-accuracy models from th original 1-hot hard-target training data. The question remains why extra layers are required to trair accurate models from the original training data.\nYes, they do. This paper provides the first empirical demonstration that deep. convolutional models really need to be both deep and convolutional, even when. trained with methods such as distillation that allow small or shallow models of high accuracy to be trained. Although previous research showed that shallow. feed-forward nets sometimes can learn the complex functions previously learnec. by deep nets while using the same number of parameters as the deep models they. mimic, in this paper we demonstrate that the same methods cannot be used to train. accurate models on CIFAR-10 unless the student models contain multiple layers. of convolution. Although the student models do not have to be as deep as the. teacher model they mimic, the students need multiple convolutional layers to learr functions of comparable accuracy as the deep convolutional teacher.."}, {"section_index": "1", "section_name": "REFERENCES", "section_text": "Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In NIPs, 2014\nFrederic Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeror Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning an Unsupervised Feature Learning NIPS 2012 Workshop, 2012.\nJames Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins. Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler In SciPy, 2010."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In KDD, 2006\nCybenko(1989) proved that a network with a large enough single hidden layer of sigmoid units car. approximate any decision boundary. Empirical work, however, suggests that it can be difficult to. train shallow nets to be as accurate as deep nets.Dauphin and Bengio (2013) trained shallow nets. on SIFT features to classify a large-scale ImageNet dataset and found that it was difficult to train. large, high-accuracy, shallow nets. A study of deep convolutional nets suggests that for vision tasks. deeper models are preferred under a parameter budget (e.g. Eigen et al.(2014); He et al.(2015) Simonyan and Zisserman(2014); Srivastava et al.[(2015)). 
Similarly, Seide et al.[(2011) and Geras et al.(2015) show that deeper models are more accurate than shallow models in speech acoustic modeling. More recently, Romero et al.[(2015) showed that it is possible to gain increases in accuracy. in models with few parameters by training deeper, thinner nets (FitNets) to mimic much wider nets. Cohen and Shashua(2016);Liang and Srikant|(2016) suggest that the representational efficiency of deep networks scales exponentially with depth, but it is unclear if this applies only to pathological. problems, or is encountered in practice on data sets such as TIMIT and CIFAR.\nNadav Cohen and Amnon Shashua. Convolutional rectifier networks as generalized tensor decomposition arXiv preprint arXiv:1603.00162, 2016.\nGeorge Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303-314, 1989.\nYann N. Dauphin and Yoshua Bengio. Big neural networks waste capacity. arXiv:1301.3583, 2013\nKrzysztof J. Geras and Charles Sutton. Scheduled denoising autoencoders. In ICLR, 2015.\nBa and Caruana (2014), however, demonstrated that shallow nets sometimes can learn the functions learned by deep nets, even when restricted to the same number of parameters as the deep nets. They did this by first training state-of-the-art deep models, and then training shallow models to mimic the deep models. Surprisingly, and for reasons that are not well understood, the shallow models learned more accurate functions when trained to mimic the deep models than when trained on the original data used to train the deep models. In some cases shallow models trained this way were as accurate as state-of-the-art deep models. But this demonstration was made on the TIMIT speech recognition benchmark. Although their deep teacher models used a convolutional layer, convolution is less important for TIMIT than it is for other domains such as image classification.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition arXiv:1512.03385, 2015.\nGeoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv:1503.02531 2015.\nBa and Caruana (2014) also presented results on CIFAR-10 which showed that a shallow mode could learn functions almost as accurate as deep convolutional nets. Unfortunately, the results on CIFAR-10 are less convincing than those for TIMIT. To train accurate shallow models on CIFAR-10\nAlex Krizhevsky. Learning multiple layers of features from tiny images, 2009"}, {"section_index": "3", "section_name": "DO DEEP CONVOLUTIONAL NETS REALLY NEED TO BE DEEP AND CONVOLUTIONAL?", "section_text": "Gregor Urban1, Krzysztof J. Geras?, Samira Ebrahimi Kahou3, Ozlem Aslan4, Shengjie Wang Abdelrahman Mohamed6, Matthai Philipose6, Matt Richardson6, Rich Caruana6"}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "Krzysztof J. Geras, Abdel-rahman Mohamed, Rich Caruana, Gregor Urban, Shengjie Wang, Ozlem Aslan,. Matthai Philipose, Matthew Richardson, and Charles Sutton. Blending LSTMs into CNNs. arXiv:1511.06433. 2015.\nJinyu Li, Rui Zhao, Jui-Ting Huang, and Yifan Gong. Learning small-size dnn with output-distribution-base criteria. In INTERSPEECH, 2014\nIn this paper we show that the methods Ba and Caruana used to train shallow students to mimic deep teacher models on TIMIT do not work as well on problems such as CIFAR-1O where multiple layer of convolution are required to train accurate teacher models. 
If the student models have a simila number of parameters as the deep teacher models, high accuracy can not be achieved without multipl layers of convolution even when the student models are trained via distillation.\nShiyu Liang and R Srikant. Why deep neural networks? arXiv preprint arXiv:1610.04161, 2016\nTo ensure that the shallow student models are trained as accurately as possible, we use Bayesian optimization to thoroughly explore the space of architectures and learning hyperparameters. Although this combination of distillation and hyperparameter optimization allows us to train the most accurate shallow models ever trained on CIFAR-10, the shallow models still are not as accurate as deep models. Our results clearly suggest that deep convolutional nets do, in fact, need to be both deep and convolutional, even when trained to mimic very accurate models via distillation (Hinton et al.]2015)\nEmilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforce ment learning. In ICLR, 2016.\nGabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey Hinton. Regularizing neura networks by penalizing output distributions. ICLR, 2017..\nAdriana Romero, Ballas Nicolas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengic FitNets: Hints for thin deep nets. ICLR, 2015.\nAndrei A. Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick. Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. In ICLR, 2016\nn this paper, we revisit the CIFAR-10 experiments in Ba and Caruana (2014). Unlike in that worl nere we compare shallow models to state-of-the-art deep convolutional models, and restrict th number of parameters in the shallow student models to be comparable to the number of parameters i he deep convolutional teacher models. Because we anticipated that our results might be differen ve follow their approach closely to eliminate the possibility that the results differ merely because o changes in methodology. Note that the goal of this paper is not to train models that are small or fas. as inBucila et al.(2006),Hinton et al.(2015), and Romero et al.(2015), but to examine if shallov nodels can be as accurate as deep convolutional models given the same parameter budget..\nFrank Seide, Gang Li, and Dong Yu. Conversational speech transcription using context-dependent deep neura networks. In INTERSPEECH. 2011\nKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition In ICLR, 2014.\nJasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. NIPS, 2012.\nThere are many steps required to train shallow student models to be as accurate as possible: train. state-of-the-art deep convolutional teacher models, form an ensemble of the best deep models, collect and combine their predictions on a large transfer set, and then train carefully optimized shallow. student models to mimic the teacher ensemble. For negative results to be informative, it is important. that each of these steps be performed as well as possible. In this section we describe the experimental. methodology in detail. Readers familiar with distillation (model compression), training deep models. on CIFAR-10, data augmentation, and Bayesian hyperparameter optimization may wish to skip to the. empirical results in Section3\nRupesh K Srivastava, Klaus Greff, and Juergen Schmidhuber. Training very deep networks. 
In NIPs, 2015\nAntonio Torralba, Robert Fergus, and William T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. TPAMI, 30(11), 2008\nChiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learnin, requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016."}, {"section_index": "5", "section_name": "2.1 MODEL COMPRESSION AND DISTILLATION", "section_text": "The key idea behind model compression is to train a compact model to approximate the functior. learned by another larger, more complex model. Bucila et al.(2006) showed how a single neural net. of modest size could be trained to mimic a much larger ensemble. Although the small neural net. contained 1o00 fewer parameters, often they were as accurate as the large ensembles they were. trained to mimic.\nModel compression works by passing unlabeled data through the large, accurate teacher model tc. collect the real-valued scores it predicts, and then training a student model to mimic these scores. Hinton et al.[(2015) generalized the methods of Bucila et al.(2006) and Ba and Caruana (2014 by incorporating a parameter to control the relative importance of the soft targets provided by the. teacher model to the hard targets in the original training data, as well as a temperature parameter tha. regularizes learning by pushing targets towards the uniform distribution.Hinton et al.(2015) alsc. demonstrated that much of the knowledge passed from the teacher to the student is conveyed as dar. knowledge contained in the relative scores (probabilities) of outputs corresponding to other classes as opposed to the scores given to just the output for the one correct class..\nSurprisingly, distillation often allows smaller and/or shallower models to be trained that are nearly as accurate as the larger, deeper models they are trained to mimic, yet these same small models are not as accurate when trained on the 1-hot hard targets in the original training set. The reason for this is not yet well understood. Similar compression and distillation methods have also successfully\nthey had to include at least one convolutional layer in the shallow model. and increased the numbei of parameters in the shallow model until it was 30 times larger than the deep teacher model. Despite this, the shallow convolutional student model was several points less accurate than a teacher model that was itself several points less accurate than state-of-the-art models on CIFAR-10.\nQuoc Le, Tamas Sarlos, and Alexander Smola. Fastfood-computing hilbert space expansions in loglinear time In ICML, 2013.\nZhouhan Lin, Roland Memisevic, Shaoqing Ren, and Kishore Konda. How far can we go without convolution: Improving fully-connected networks. arXiv:1511.02580v1, 2016.\nRoland Memisevic, Kishore Konda, and David Krueger. Zero-bias autoencoders and the benefits of co-adapting features. In ICLR. 2015.\nbeen used in speech recognition (e.g.Chan et al.[(2015); Geras et al.[(2015);Li et al.(2014)) anc reinforcement learningParisotto et al.(2016); Rusu et al.(2016).Romero et al.(2015) showed tha distillation methods can be used to train small students that are more accurate than the teacher models by making the student models deeper, but thinner, than the teacher model..\nWe train shallow mimic nets using data labeled by an ensemble of deep teacher nets trained on the original 1-hot CIFAR-10 training data. 
The deep teacher models are trained in the usual way, using softmax outputs and a cross-entropy cost function. Following Ba and Caruana (2014), the student mimic models are not trained with cross-entropy on the ten probability values p_k = e^{z_k} / Σ_j e^{z_j} output by the softmax layer from the deep teacher model, but instead are trained on the un-normalized log probability values z (the logits) before the softmax activation. Training on the logarithms of predicted probabilities (logits) helps provide the dark knowledge that regularizes students by placing emphasis on the relationships learned by the teacher model across all of the outputs.

The number of filters and hidden units for the models have the following bounds:
1 conv. layer: 50-500 filters, 200-2000 hidden units, number of units in the bottleneck is the dependent variable.
2 conv. layers: 50-500 filters, 100-400 filters, number of hidden units is the dependent variable.
3 conv. layers: 50-500 filters (layer 1), 100-300 filters (layers 2-3), number of hidden units is the dependent variable.
4 conv. layers: 50-300 filters (layers 1-2), 100-300 filters (layers 3-4), number of hidden units is the dependent variable.

As in Ba and Caruana (2014), the student is trained as a regression problem given training data {(x^(1), z^(1)), ..., (x^(T), z^(T))}:

L(W) = (1/T) Σ_t || g(x^(t); W) − z^(t) ||_2^2

where W represents all of the weights in the network, and g(x^(t); W) is the model prediction on the t-th training data sample. A minimal code sketch of this mimic objective is given further below.

Table 3: Optimization bounds for student models. (Models trained on 0/1 hard targets were described in Sections 6.1 and 6.2.) Abbreviations: fc (fully-connected layer, ReLU), c (convolutional, ReLU), linear (fully-connected bottleneck layer, linear activation function), dependent (dependent variable, chosen s.t. the parameter budget is met).
}, {"section_index": "6", "section_name": "2.3 USING A LINEAR BOTTLENECK TO SPEED UP TRAINING", "section_text": "
A shallow net has to have more hidden units in each layer to match the number of parameters in a deep net. Ba and Caruana (2014) found that training these wide, shallow mimic models with backpropagation was slow, and introduced a linear bottleneck layer between the input and non-linear layers to speed learning. The bottleneck layer speeds learning by reducing the number of parameters that must be learned, but does not make the model deeper because the linear terms can be absorbed back into the non-linear weight matrix after learning. See Ba and Caruana (2014) for details. To match their experiments we use linear bottlenecks when training student models with 0 or 1 convolutional layers, but did not find the linear bottlenecks necessary when training student models with more than 1 convolutional layer.

Model | 1st layer | 2nd layer | 3rd layer | 4th layer | 5th layer
No conv. layer (1M)    | 500-5000 (fc)   | dependent (linear)
No conv. layer (3.1M)  | 1000-20000 (fc) | dependent (linear)
No conv. layer (10M)   | 5000-30000 (fc) | dependent (linear)
No conv. layer (31M)   | 5000-45000 (fc) | dependent (linear)
1 conv. layer (1M)     | 40-150 (c)  | dependent (linear) | 200-1600 (fc)
1 conv. layer (3.1M)   | 50-300 (c)  | dependent (linear) | 100-4000 (fc)
1 conv. layer (10M)    | 50-450 (c)  | dependent (linear) | 500-20000 (fc)
1 conv. layer (31M)    | 200-600 (c) | dependent (linear) | 1000-4100 (fc)
2 conv. layers (1M)    | 20-120 (c)  | 20-120 (c)  | dependent (fc)
2 conv. layers (3.1M)  | 50-250 (c)  | 20-120 (c)  | dependent (fc)
2 conv. layers (10M)   | 50-350 (c)  | 20-120 (c)  | dependent (fc)
2 conv. layers (31M)   | 50-800 (c)  | 20-120 (c)  | dependent (fc)
3 conv. layers (1M)    | 20-110 (c)  | 20-110 (c)  | 20-110 (c)  | dependent (fc)
3 conv. layers (3.1M)  | 40-200 (c)  | 40-200 (c)  | 40-200 (c)  | dependent (fc)
3 conv. layers (10M)   | 50-350 (c)  | 50-350 (c)  | 50-350 (c)  | dependent (fc)
3 conv. layers (31M)   | 50-650 (c)  | 50-650 (c)  | 50-650 (c)  | dependent (fc)
4 conv. layers (1M)    | 25-100 (c)  | 25-100 (c)  | 25-100 (c)  | 25-100 (c)  | dependent (fc)
4 conv. layers (3.1M)  | 50-150 (c)  | 50-150 (c)  | 50-200 (c)  | 50-200 (c)  | dependent (fc)
4 conv. layers (10M)   | 50-300 (c)  | 50-300 (c)  | 50-350 (c)  | 50-350 (c)  | dependent (fc)
4 conv. layers (31M)   | 50-500 (c)  | 50-500 (c)  | 50-650 (c)  | 50-650 (c)  | dependent (fc)
}, {"section_index": "7", "section_name": "2.4 BAYESIAN HYPERPARAMETER OPTIMIZATION", "section_text": "
The goal of this work is to determine empirically if shallow nets can be trained to be as accurate as deep convolutional models using a similar number of parameters in the deep and shallow models. If we succeed in training a shallow model to be as accurate as a deep convolutional model, this provides an existence proof that shallow models can represent and learn the complex functions learned by deep convolutional models. If, however, we are unable to train shallow models to be as accurate as deep convolutional nets, we might fail only because we did not train the shallow nets well enough.

In all our experiments we employ Bayesian hyperparameter optimization using Gaussian process regression to ensure that we thoroughly and objectively explore the hyperparameters that govern learning. The implementation we use is Spearmint (Snoek et al., 2012). The hyperparameters we optimize with Bayesian optimization include the initial learning rate, momentum, scaling of the initial random weights, scaling of the inputs, and terms that determine the width of each of the network's layers (i.e. number of convolutional filters and neurons). More details of the hyperparameter optimization can be found in Sections 2.5, 2.7, 2.8 and in the Appendix.

Models in the first four rows in Table 1 are trained similarly to those in Section 6.1 and are architecturally equivalent to the four convolutional student models shown in Table 2 with 10 million parameters. The following hyperparameters are optimized: initial learning rate [0.0015, 0.025] (optimized on a log scale), momentum [0.68, 0.97] (optimized on a log scale), constants C1, C2 ∈ [0, 1] that control the number of filters or neurons in different layers, and up to four different dropout rates DO_c1 ∈ [0.05, 0.4], DO_c2 ∈ [0.1, 0.6], DO_c3 ∈ [0.1, 0.7], DO_f1 ∈ [0.1, 0.7] for the different layers. Weight decay was set to 2·10^-4 and we used the same data augmentation settings as for the student models. We use 5x5 convolutional filters, one nonlinear hidden layer in each model, and each max-pooling operation is followed by dropout with a separately optimized rate. We use 2x2 max-pooling, except in the model with only one convolutional layer where we apply 3x3 pooling, as this seemed to boost performance and reduces the number of parameters.
}, {"section_index": "8", "section_name": "2.5 TRAINING DATA AND DATA AUGMENTATION", "section_text": "
Weights of trained nets are initialized as in Glorot and Bengio (2010).
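As a concrete illustration of the mimic objective L(W) defined above, the following is a minimal sketch (ours, not the paper's implementation; the function names are made up) that evaluates the regression loss and its gradient with respect to the student's logits on a batch:

```python
import numpy as np

def mimic_loss(student_logits, teacher_logits):
    """L(W) = (1/T) * sum_t || g(x^(t); W) - z^(t) ||_2^2 over a batch.

    student_logits, teacher_logits: arrays of shape (T, 10) for CIFAR-10.
    The teacher logits z are the ensemble-averaged pre-softmax scores.
    """
    diff = student_logits - teacher_logits
    return float(np.mean(np.sum(diff * diff, axis=1)))

def mimic_loss_grad(student_logits, teacher_logits):
    # Gradient w.r.t. the student's logits, to backpropagate into g(x; W).
    T = student_logits.shape[0]
    return 2.0 * (student_logits - teacher_logits) / T
```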
The models trained in Section 2.7 contain eight convolutional layers organized into three groups (2-2-4) and two fully-connected hidden layers. The Bayesian hyperparameter optimization controls four constants C1, C2, C3, H1, all in the range [0, 1], that are then linearly transformed to the number of filters/neurons in each layer. The hyperparameters for which ranges were not shown in Section 2.7 are: the four separate dropout rates (DO_c1, DO_c2, DO_c3, DO_f) and the five constants D_h, D_s, D_v, A_s, A_v controlling the HSV data augmentation. The ranges we selected are DO_c1 ∈ [0.1, 0.3], DO_c2 ∈ [0.25, 0.35], DO_c3 ∈ [0.3, 0.44], DO_f1 ∈ [0.2, 0.65], DO_f2 ∈ [0.2, 0.65], D_h ∈ [0.03, 0.11], D_s ∈ [0.2, 0.3], D_v ∈ [0.0, 0.2], A_s ∈ [0.2, 0.3], A_v ∈ [0.03, 0.2], partly guided by Snoek et al. (2015) and visual inspection of the resulting augmentations.

All convolutional filters in the model are sized 3x3, max-pooling is applied over windows of 2x2, and we use ReLU units throughout all our models. We apply dropout after each max-pooling layer with the three rates DO_c1, DO_c2, DO_c3, and after each of the two fully-connected layers with the same rate DO_f.
}, {"section_index": "9", "section_name": "6.2 DETAILS OF TRAINING MODELS OF VARIOUS DEPTHS ON CIFAR-10 HARD 0/1 LABELS", "section_text": "
The CIFAR-10 (Krizhevsky, 2009) data set consists of a set of natural images from 10 different object classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. The dataset is a labeled subset of the 80 million tiny images dataset (Torralba et al., 2008) and is divided into 50,000 train and 10,000 test images. Each image is 32x32 pixels in 3 color channels, yielding input vectors with 3072 dimensions. We prepared the data by subtracting the mean and dividing by the standard deviation of each image vector. We train all models on a subset of 40,000 images and use the remaining 10,000 images as the validation set for the Bayesian optimization. The final trained models only used 80% of the theoretically available training data (as opposed to retraining on all of the data after hyperparameter optimization).

We employ the HSV data augmentation technique as described by Snoek et al. (2015). Thus we shift hue, saturation and value by uniform random values: Δh ~ U(−D_h, D_h), Δs ~ U(−D_s, D_s), Δv ~ U(−D_v, D_v). Saturation and value are additionally scaled globally: a_s ~ U(1/(1+A_s), 1+A_s), a_v ~ U(1/(1+A_v), 1+A_v). The five constants D_h, D_s, D_v, A_s, A_v are included as additional hyperparameters in the Bayesian hyperparameter optimization.
}, {"section_index": "10", "section_name": "6.3 DETAILS OF TRAINING STUDENT MODELS OF VARIOUS DEPTHS ON ENSEMBLE LABELS", "section_text": "
Our student models have the same architecture as the models in Section 6.2. The model without convolutional layers consists of one linear layer that acts as a bottleneck, followed by a hidden layer of ReLU units. The following hyperparameters are optimized: initial learning rate [0.0013, 0.016] (optimized on a log scale), momentum [0.68, 0.97] (optimized on a log scale), input scale ∈ [0.8, 1.25], global initialization scale (after initialization) ∈ [0.4, 2.0], and layer-width constants C1, C2 ∈ [0, 1] that control the number of filters or neurons. The exact ranges for the number of filters, and the implicitly resulting number of hidden units, were chosen for all twenty optimization experiments independently, as architectures, number of units and number of parameters strongly interact.

For the non-convolutional models we chose a slightly different hyper-parameterization. Given that all layers (in models with "two layers" or more) are nonlinear and fully connected, we treat all of them similarly from the hyperparameter optimizer's point of view. In order to smoothly enforce the parameter budgets without rejecting any samples from the Bayesian optimizer, we instead optimize the ratios of hidden units in each layer (numbers between 0 and 1), and then re-normalize and scale them to the final number of neurons in each layer to match the target parameter budget.
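One way such a budget constraint could be enforced is sketched below. This is our illustration, not the authors' code: the parameter-count model assumes a plain fully-connected net on CIFAR-10 inputs, and a common scale factor for the ratios is found by bisection so that the resulting layer widths approximately meet a target parameter budget.

```python
def param_count(widths, n_in=3072, n_out=10):
    # Total parameters (weights + biases) of an MLP with the given hidden widths.
    dims = [n_in] + list(widths) + [n_out]
    return sum((a + 1) * b for a, b in zip(dims[:-1], dims[1:]))

def widths_for_budget(ratios, budget, n_in=3072, n_out=10):
    # Bisect a common scale for the (0, 1] ratios so the widths meet the budget.
    lo, hi = 1.0, 1e6
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        widths = [max(1, int(r * mid)) for r in ratios]
        if param_count(widths, n_in, n_out) < budget:
            lo = mid
        else:
            hi = mid
    return [max(1, int(r * lo)) for r in ratios]

# e.g. a 3-hidden-layer MLP squeezed into a 10M-parameter budget
print(widths_for_budget([0.5, 1.0, 0.25], budget=10_000_000))
```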
All training images are mirrored left-right randomly with a probability of 0.5. The input images are further scaled and jittered randomly by cropping windows of size 24x24 up to 32x32 at random locations and then scaling them back to 32x32. The procedure is as follows: we sample an integer value S ~ U(24, 32) and then a pair of integers x, y ~ U(0, 32 − S). The transformed resulting image is R = f_spline,3(I[x : x + S, y : y + S]), with I denoting the original image and f_spline,3 denoting the 3rd-order spline interpolation function that maps the 2D array back to 32x32 (applied to the three color channels separately).

All data augmentations for the teacher models are computed on the fly using different random seeds. For student models trained to mimic the ensemble (see Section 2.7 for details of the ensemble teacher model), we pre-generated 160 epochs worth of randomly augmented training data, evaluated the ensemble's predictions (logits) on these samples, and saved all data and predictions to disk. All student models thus see the same training data in the same order. The parameters for HSV augmentation in this case had to be selected beforehand; we chose to use the settings found with the best single model (D_h = 0.06, D_s = 0.26, D_v = 0.20, A_s = 0.21, A_v = 0.13). Pre-saving the logits and augmented data is important to reduce the computational cost at training time, and to ensure that all student models see the same training data.

Because augmentation allows us to generate large training sets from the original 50,000 images, we use augmented data as the transfer set for model compression. No extra unlabeled data is required.
}, {"section_index": "11", "section_name": "2.6 LEARNING-RATE SCHEDULE", "section_text": "
We train all models using SGD with Nesterov momentum. The initial learning rate and momentum are chosen by Bayesian optimization. The learning rate is reduced according to the evolution of the model's validation error: it is halved if the validation error does not drop for ten epochs in a row. It is not reduced within the next eight epochs following a reduction step. Training ends if the error did not drop for 30 epochs in a row or if the learning rate was reduced by a factor of more than 2000 in total.

This schedule provides a way to train the highly varying models in a fair manner (it is not feasible to optimize all of the parameters that define the learning schedule). It also decreases the time spent to train each model compared to using a hand-selected overestimate of the number of epochs to train, thus allowing us to train more models in the hyperparameter search.

One limitation of the CIFAR-10 experiments performed in Ba and Caruana (2014) is that the teacher models were not state-of-the-art. The best deep models they trained on CIFAR-10 had only 88% accuracy, and the ensemble of deep models they used as a teacher had only 89% accuracy. The accuracies were not state-of-the-art because they did not use augmentation and because their deepest models had only three convolutional layers. Because our goal is to determine if shallow models can be as accurate as deep convolutional models, it is important that the deep models we compare to (and use as teachers) are as accurate as possible.

We train deep neural networks with eight convolutional layers, three intermittent max-pooling layers and two fully-connected hidden layers. We include the size of these layers in the hyperparameter optimization, by allowing the first two convolutional layers to contain from 32 to 96 filters each, the next two layers to contain from 64 to 192 filters, and the last four convolutional layers to contain from 128 to 384 filters. The two fully-connected hidden layers can contain from 512 to 1536 neurons. We parametrize these model sizes by four scalars (the layers are grouped as 2-2-4) and include the scalars in the hyperparameter optimization. All models are trained using Theano (Bastien et al., 2012; Bergstra et al., 2010).

We optimize eighteen hyperparameters overall: initial learning rate on [0.01, 0.05], momentum on [0.80, 0.91], l2 weight decay on [5·10^-5, 4·10^-4], an initialization coefficient on [0.8, 1.35] which scales the initial weights of the CNN, four separate dropout rates, five constants controlling the HSV data augmentation, and the four scaling constants controlling the networks' layer widths. The learning rate and momentum are optimized on a log scale (as opposed to a linear scale) by optimizing the exponent with appropriate bounds, e.g. LR = e^(−x) optimized over x on [3.0, 4.6]. See the Appendix for more details about hyperparameter optimization.

Figure 2 is similar to Figure 1 but includes preliminary results from experiments for models with 100M parameters. We are also running experiments with 300M parameters. Unfortunately, Bayesian optimization on models with 100M and 300M parameters is even more expensive than for the other points in the graph.

As expected, adding capacity to the convolutional students (top of the figure) modestly increases their accuracy. Preliminary results for the MLPs, however (too preliminary to include in the graph), may not show the same increase in accuracy with increasing model size. Models with two or three hidden layers may benefit from adding capacity to each layer, but we have yet to see any benefit from adding capacity to the MLPs with four or five hidden layers.

We trained 129 deep CNN models with Spearmint. The best model obtained an accuracy of 92.78%; the fifth best achieved 92.67%. See Table 1 for the sizes and architectures of the three best models.

We are able to construct a more accurate model on CIFAR-10 by forming an ensemble of multiple deep convolutional neural nets, each trained with different hyperparameters, and each seeing slightly different training data (as the augmentation parameters vary). We experimented with a number of ensembles of the many deep convnets we trained, using accuracy on the validation set to select the best combination. The final ensemble contained 16 deep convnets and had an accuracy of 94.0% on the validation set, and 93.8% on the final test set. We believe this is among the top published results for deep learning on CIFAR-10. The ensemble averages the logits predicted by each model before the softmax layers.
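Logit averaging itself is a one-liner; the sketch below (illustrative, not the authors' code) shows how the ensemble's averaged logits would serve both for its own predictions and, later, as the regression targets for the student models:

```python
import numpy as np

def ensemble_logits_and_predictions(per_model_logits):
    # per_model_logits: array of shape (n_models, n_examples, n_classes).
    # Average the logits *before* any softmax; the averaged logits are also
    # the distillation targets used to train the student models.
    avg_logits = per_model_logits.mean(axis=0)
    return avg_logits, avg_logits.argmax(axis=1)
```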
The ensemble averages the logits predicted by each model before. the softmax layers.\nWe used this very accurate ensemble model as the teacher model to label the data used to train the shallower student nets. As described in Section|2.2] the logits (the scores just prior to the final softmax layer) from each of the CNN teachers in the ensemble model are averaged for each class, and the average logits are used as final regression targets to train the shallower student neural nets.\n2.8 TRAINING SHALLOW STUDENT MODELS TO MIMIC AN ENSEMBLE OF DEEP CONVOLUTIONAL MODELS\nWe trained student mimic nets with 1, 3.1d' 10 and 31.6 million trainable parameters on the. pre-computed augmented training data (Section|2.5) that was re-labeled by the teacher ensemble. (Section|2.7). For each of the four student sizes we trained shallow fully-connected student MLPs containing 1, 2, 3, 4, or 5 layers of non-linear units (ReLU), and student CNNs with 1, 2, 3 or 4 convolutional layers. The convolutional student models also contain one fully-connected ReLU layer Models with zero or only one convolutional layer contain an additional linear bottleneck layer to speed up learning (cf. Section 2.3). We did not need to use a bottleneck to speed up learning for the deeper models as the number of learnable parameters is naturally reduced by the max-pooling layers\nThe student CNNs use max-pooling and Bayesian optimization controls the number of convolutiona filters and hidden units in each layer. The hyperparameters we optimized in the student models are initial learning rate, momentum, scaling of the initially randomly distributed learnable parameters scaling of all pixel values of the input, and the scale factors that control the width of all hidder and convolutional layers in the model. Weights are initialized as in|Glorot and Bengio(2010). We intentionally do not optimize and do not make use of weight decay and dropout when training studen models because preliminary experiments showed that these consistently reduced the accuracy o1 student models by several percent. Please refer to the Appendix for more details on the individua architectures and hyperparameter ranges.\nTable[1summarizes results after Bayesian hyperparameter optimization for models trained on the original 0/1 hard CIFAR-10 labels. All of these models use weight decay and are trained with the. dropout hyperparameters included in the Bayesian optimization. The table shows the accuracy of. the best three deep convolutional models we could train on CIFAR-10. as well as the accuracy of\n13.16 ~ Sqrt(10) falls halfway between 1 and 10 on log scale\nteacher ensemble 90 compression gap 85 CNN: 1 convolutional layer de6 CNN: 2 convolutional layers CNN: 3 convolutional layers CNN: 4 convolutional layers Aeeuney 80 MLP: 1 hidden layer MLP: 2 hidden layers MLP: 3 hidden layers MLP: 4 hidden layers MLP: 5 hidden layers 75 70 compression gap 65 3 10 31 100 Number of Parameters [millions] Figure 2: See figure1\nTable 1: Accuracy on CIFAR-10 of shallow and deep models trained on the original 0/1 hard clas. labels using Bayesian optimization with dropout and weight decay. Key: c = convolution layer; m. - max-pooling layer; fc = fully-connected layer; lfc = linear bottleneck layer; exponents indicat repetitions of a layer. The last two models (*) are numbers reported by Ba and Caruana[(2014). Th. models with 1-4 convolutional layers at the top of the table are included for comparison with studer. models of similar architecture in Table[2]. 
All of the student models in Table2|with 1, 2, 3, and. convolutional layers are more accurate than their counterparts in this table that are trained on th. original O/1 hard targets -- as expected distillation yields shallow models of higher accuracy tha. shallow models trained on the original training data..\nModel Architecture # parameters Accuracy 1 conv. layer c-mp-lfc-fc 10M 84.6% 2 conv. layer c-mp-c-mp-fc 10M 88.9% 3 conv. layer c-mp-c-mp-c-mp-fc 10M 91.2% 4 conv. layer c-mp-c-c-mp-c-mp-fc 10M 91.75% Teacher CNN 1st 76c2-mp-126c2-mp-148c4-mp-1200fc2 5.3M 92.78% Teacher CNN 2nd 96c2-mp-171c2-mp-128c4-mp-512fc2 2.5M 92.77% Teacher CNN 3rd 54c2-mp-158c2-mp-189c4-mp-1044fc2 5.8M 92.67% Ensemble of 16 CNNs c2-mp-c2-mp-c4-mp-fc2 83.4M 93.8% Teacher CNN (*) 128c-mp-128c-mp-128c-mp-1k fc 2.1M 88.0% Ensemble, 4 CNNs (*) 128c-mp-128c-mp-128c-mp-1k fc 8.6M 89.0%\nTable 2: Comparison of student models with varying number of convolutional layers trained to mimic. the ensemble of 16 deep convolutional CIFAR-10 models in Table[1]. The best performing student models have 3-4 convolutional layers and 10M -31.6M parameters. The student models in this table are more accurate than the models of the same architecture in Table[1|that were trained on the original O/1 hard targets -- shallow models trained with distillation are more accurate than shallow. models trained on 0/1 hard targets. The student model trained byBa and Caruana(2014) is shown in. the last line for comparison; it is less accurate and much larger than the student models trained here that also have 1 convolutional layer.\n1 M 3.16 M 10 M 31.6 M 70 M Bottleneck, 1 hidden layer 65.8% 68.2% 69.5% 70.2% 2 hidden layers. 66.2% 70.9% 73.4% 74.3% 3 hidden layers. 66.8% 71.0% 73.0% 73.9% 4 hidden layers. 66.7% 69.5% 71.6% 72.0% 5 hidden layers. 66.4% 70.0% 71.4% 71.5% 1 conv. layer, 1 max-pool, Bottleneck 84.5% 86.3% 87.3% 87.7% 2 conv. layers, 2 max-pool 87.9% 89.3% 90.0% 90.3% 3 conv. layers, 3 max-pool 90.7% 91.6% 91.9% 92.3% 4 conv. layers, 3 max-pool 91.3% 91.8% 92.6% 92.6% SNN-ECNN-MIMIC-30k 128c-p-1200L-30k 85.8% trained on ensemble (Ba and Caruana2014)\nthe ensemble of 16 deep CNNs. For comparison, the accuracy of the ensemble trained by Ba anc Caruana(2014)) is included at the bottom of the table.\nTable 2summarizes the results after Bayesian hyperparameter optimization for student mod- els of different depths and number of parameters trained on soft targets (average logits) to mimic the teacher ensemble of 16 deep CNNs. For comparison, the student model trained byBa and Caruana (2014) also is shown.\nThe first four rows in Table[1|show the accuracy of convolutional models with 10 million param- eters and 1. 2. 3. and 4 convolutional layers The accuracies of these same architectures with 1M. 3.16M, 10M, and 31.6M parameters when trained as students on the soft targets predicted by the teacher ensemble are shown in Table|2 Comparing the accuracies of the models with 10 million parameters in both tables, we see that training student models to mimic the ensemble leads to significantly better accuracy in every case. The gains are more pronounced for shal- lower models. most likely because their learn- able internal representations do not naturally lead to good generalization in this task when trained on the O/1 hard targets: the difference in accuracy for models with one convolutional layer is 2.7% (87.3% vs. 84.6%) and only 0.8% (92.6% vs. 
91.8%) for models with four convo lutional layers\nFigure [1summarizes the results in Table2 for student models of different depth, number of convolutional layers, and number of parame- ters when trained to mimic the ensemble teacher model. Student models trained on the ensemble logits are able to achieve accuracies previously unseen on CIFAR-10 for models with so few layers. Also, it is clear that there is a huge gap between the convolutional student models at the top of the figure, and the non-convolutional stu- dent models at the bottom of the figure: the most accurate student MLP has accuracy less than 75%. while the least accurate convolutional stu- dent model with the same number of parameters but only one convolutional layer has accuracy above 87%. And the accuracy of the convolu- tional student models increases further as more layers of convolution are added. Interestingly the most accurate student MLPs with no convo- lutional layers have only 2 or 3 hidden layers; the student MLPs with 4 or 5 hidden layers are not as accurate.\nComparing the student MLP with only one hidden layer (bottom of the graph) to the student CNN with 1 convolutional layer clearly suggests that convolution is critical for this problem even wher models are trained via distillation, and that it is very unlikely that a shallow non-convolutional mode with 100 million parameters or less could ever achieve accuracy comparable to a convolutional model It appears that if convolution is critical for teacher models trained on the original O/1 hard targets, i1\nteacher ensemble 90 compression gap 85 CNN: 1 convolutional layer CNN: 2 convolutional layers CNN: 3 convolutional layers . Aeennney 80 CNN: 4 convolutional layers MLP: 1 hidden layer O MLP: 2 hidden layer. O MLP: 3 hidden layer. MLP: 4 hidden layer. MLP: 5 hidden layer. 75 3 4 5 70 1 compression gap 65 3 10 31 Number of Parameters [millions]\nFigure 1: Accuracy of student models with differ- ent architectures trained to mimic the CIFAR10 ensemble. The average performance of the five best models of each hyperparameter-optimization experiment is shown, together with dashed lines indicating the accuracy of the best and the fifth best model from each setting. The short horizontal lines at 10M parameters are the accuracy of mod els trained without compression on the original 0/1 hard targets.\nis likely to be critical for student models trained to mimic these teacher models. Adding depth to th student MLPs without adding convolution does not significantly close this \"convolutional gap\"\nFurthermore, comparing student CNNs with 1, 2, 3, and 4 convolutional layers, it is clear that CNN students benefit from multiple convolutional layers. Although the students do not need as many layers as teacher models trained on the original O/1 hard targets, accuracy increases significantly as multiple convolutional layers are added to the model. For example, the best student with only on convolutional layer has 87.7% accuracy, while the student with the same number of parameters (31M and 4 convolutional layers has 92.6% accuracy.\nOne pattern that is clear in the graph is that all student models benefit when the number of parameters increases from 1 million to 31 million parameters. 
It is interesting to note, however, that the largest student (31M) with a one convolutional layer is less accurate than the smallest student (1M) with two convolutional layers, further demonstrating the value of depth in convolutional models..\nIn summary, depth-constrained student models trained to mimic a high-accuracy ensemble of deep convolutional models perform better than similar models trained on the original hard targets (the \"compression\"' gaps in Figure[1), student models need at least 3-4 convolutional layers to have high accuracy on CIFAR-10, shallow students with no convolutional layers perform poorly on CIFAR-10 and student models need at least 3-10M parameters to perform well. We are not able to compress deep convolutional models to shallow student models without significant loss of accuracy.\nWe are currently running a reduced set of experiments on ImageNet, though the chances of shallow models performing well on a more challenging problem such as ImageNet appear to be slim.."}, {"section_index": "12", "section_name": "4 DISCUSSION", "section_text": "Although we are not able to train shallow models to be as accurate as deep models, the models trained via distillation are the most accurate models of their architecture ever trained on CIFAR-10. For example, the best single-layer fully-connected MLP (no convolution) we trained achieved an accuracy. of 70.2%. We believe this to be the most accurate shallow MLP ever reported for CIFAR-10 (in. comparison to 63.1% achieved byLe et al.[(2013), 63.9% by[Memisevic et al.(2015) and 64.3% by Geras and Sutton (2015)). Although this model cannot compete with convolutional models, clearly. distillation helps when training models that are limited by architecture and/or number of parameters.. Similarly, the student models we trained with 1, 2, 3, and 4 convolutional layers are, we believe. the most accurate convnets of those depths reported in the literature. For example, the ensemble. teacher model in|Ba and Caruana[(2014) was an ensemble of four CNNs, each of which had 3. convolutional layers, but only achieved 89% accuracy, whereas the single student CNNs we train via. distillation achieve accuracies above 90% with only 2 convolutional layers, and above 92% with 3. convolutional layers. The only other work we are aware of that achieves comparable high accuracy. with non-convolutional MLPs is recent work by Lin et al.(2016). They train multi-layer Z-Lin. networks, and use a powerful form of data augmentation based on deformations that we did not use..\nInterestingly, we noticed that mimic networks perform consistently worse when trained using dropou. This surprised us, and suggests that training student models on the soft-targets from a teacher provide.. significant regularization for the student models obviating the need for extra regularization method. such as dropout. This is consistent with the observation made by Ba and Caruana[(2014) that studen. mimic models did not seem to overfit.Hinton et al. (2015) claim that soft targets convey more. information per sample than Boolean hard targets. The also suggest that the dark knowledge in the. soft targets for other classes further helped regularization, and that early stopping was unnecessary. Romero et al. (2015) extend distillation by using the intermediate representations learned by the. teacher as hints to guide training deep students, and teacher confidences further help regularizatior. by providing a measure of sample simplicity to the student, akin to curriculum learning. 
In othe. work,Pereyra et al.(2017) suggest that the soft targets provided by a teacher provide a form o. confidence penalty that penalizes low entropy distributions and label smoothing, both of whicl. improve regularization by maintaining a reasonable ratio between the logits of incorrect classe.\nFigure[1includes short horizontal lines at 10M parameters indicating the accuracy of non-student models trained on the original O/1 hard targets instead of on the soft targets. This \"compression gap\" is largest for shallower models, and as expected disappears as the student models become architecturally more similar to the teacher models with multiple layers of convolution. The benefits of distillation are most significant for shallow models, yielding an increase in accuracy of 3% or more"}]
Hk85q85ee
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Choromanska, Anna, Henaff, Mikael, Mathieu, Michael, Arous, Gerard Ben, and LeCun, Yann. The loss surfaces of multilayer networks. In A1STATS, 2015a.\nYuandong Tian\nFacebook AI Research\nFukumizu, Kenji and Amari, Shun-ichi. Local minima and plateaus in hierarchical structures oi multilayer perceptrons. Neural Networks, 13(3):317-327, 2000.\nHe, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Sur- passing human-level performance on imagenet classification. In Proceedings of the IEEE Inter-. national Conference on Computer Vision, pp. 1026-1034, 2015.\nHe, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. Computer Vision anad Pattern Recognition (CVPR), 2016\nHinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep. Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural net works for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012\nJanzamin, Majid, Sedghi, Hanie, and Anandkumar, Anima. Beating the perils of non-convexity Guaranteed training of neural networks using tensor methods. CoRR abs/1506.08473. 2015\nKawaguchi, Kenji. Deep learning without poor local minima. Advances in Neural Informatior Processing Systems, 2016\nLeCun, Yann A, Bottou, Leon, Orr, Genevieve B, and Muller, Klaus-Robert. Efficient backprop. Ir Neural networks: Tricks of the trade. pp. 9-48. Springer. 2012."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Saad, David and Solla, Sara A. Dynamics of on-line gradient descent learning for multilayer neural networks. Advances in Neural Information Processing Systems, pp. 302-308, 1996..\nSaxe, Andrew M, McClelland, James L, and Ganguli, Surya. Exact solutions to the nonlinear dy namics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013\nIn this paper, we focus on the first problem and use dynamical system to analyze the nonlinea. gradient descent dynamics of certain two-layered nonlinear network in the following form:.\nSimonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognitionIntern tions (CLR) 2015\nSoudry, Daniel and Carmon, Yair. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016.\nSutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural net works. In Advances in neural information processing systems, pp. 3104-3112, 2014.\nwhere o(x) = max(x, O) is the ReLU nonlinearity. We consider the following setting: a student. network learns the parameters that minimize the l, distance between its prediction and the super vision provided by the teacher network of the same size with a fixed set of parameters w*. We. assume all inputs x to follow Gaussian distribution and thus the network is bias-free. Eqn.1is. highly nonconvex and could contain exponential number of symmetrically equivalent solutions..\nSzegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions In Computer Vision and Pattern Recognition (CVPR), pp. 
1-9, 2015.\nTo analyze this, we first derive novel and concise gradient update rules for multilayer ReLU networks (See Lemma[2.1) in the teacher-student setting under l2 loss. Then for K = 1, we prove that the nonlinear gradient dynamics of Eqn.1|has a close form and converges to w* with at least (1 -"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper, we use dynamical system to analyze the nonlinear weight dynam ics of two-layered bias-free networks in the form of g(x; w) = j=1 (wJx), where o() is ReLU nonlinearity. We assume that the input x follow Gaussian distribution. The network is trained using gradient descent to mimic the output of. a teacher network of the same size with fixed parameters w* using l, loss. We. first show that when K = 1, the nonlinear dynamics can be written in close form. and converges to w* with at least (1 - e)/2 probability, if random weight ini-. tializations of proper standard derivation (~ 1/d) is used, verifying empirical. practice [Glorot & Bengio (2010); He et al.(2015);LeCun et al.(2012)]. For net- works with many ReLU nodes (K > 2), we apply our close form dynamics and. symmetric weight initialization yields a convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to w*. without local minima. To our knowledge, this is the first proof that shows global convergence in nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with l2 loss. Simulations verify our theoretical analysis..\nDeep learning has made substantial progress in many applications, including Computer Vision [He. et al.(2016); Simonyan & Zisserman(2015); Szegedy et al.(2015); Krizhevsky et al.(2012)], Nat- ural Language Processing [Sutskever et al.[(2014)] and Speech Recognition [Hinton et al.[(2012)] However, till now, how and why it works remains elusive due to a lack of theoretical understanding First, how simple approaches like gradient descent can solve a very complicated non-convex opti mization effectively. Second, how the deep models, especially deep convolutional models, achieve generalization power despite massive parameters.\nK g(x;w)=) 0(w[x) j=1\ng b (d) C layer c Student Teacher Network Network Wjk W c W W Xc k 8 layer c+ 1 X\n111 c (d) layer c Student Teacher Network Network Wjk W c W * W W c Xc k O Ok' layer c + 1\ne)/2 probability, if initialized randomly with standard derivation on the order of 1/d, verifying. commonly used initialization techniques [Glorot & Bengio (2010); He et al.(2015); LeCun et al (2012)],. When K 2, we prove that when the teacher parameters {w}j=1 form orthonorma. bases, (1) a symmetric initialization of a student network gets stuck at a saddle point and (2) unde a certain symmetric breaking weight initialization, the dynamics converges to w*, without gettin. stuck into any local minima. Note that in both cases, the initialization can be arbitrarily close t the origin for a fixed w*|l, showing that such a convergence behavior is beyond the local conve. structure at w*. To our knowledge, this is the first proof of its kind..\nHere we list all detailed proof for all the theorems\nPrevious works also use dynamical system to analyze deep neural networks.. [Saxe et al.(2013) analyzes the dynamics of multilayer linear network, and [Kawaguchi(2016)] shows every loca]. 
minima is global for multilinear network. Very little theoretical work has been done to analyze the dynamics of nonlinear networks, especially deep ones.. [Mei et al.(2016)] shows the globa convergence when K = 1 with activation function o(x) when its derivatives o', o\", \" are bounded. and o' > 0. Similar to our approach, [Saad & Solla[(1996)] also uses the student-teacher setting and analyzes the dynamics of student network when the teacher's parameters w* forms a orthonoma. bases; however, it uses o(x) = erf(x) as the nonlinearity and only analyzes the local behaviors of. the two critical points (the saddle point in symmetric initializations, and w*). In contrast, we prove. the global convergence behavior in certain symmetry-breaking cases..\nLemma 7.1 For neural network with ReLU nonlinearity and using l2 loss to match with a teacher network of the same size, the negative gradient inflow gj for node j at layer c has the following. form\nwhere Lj and L* are N-by-N diagonal matrices. For any k E [c +1], Lk = jE[c] WjkDjLj an similarly for Lk.\nProof We prove by induction on layer. For the first layer, there is only one node with g = u - v. therefore L; = L,, = I. Suppose the condition holds for all node j E [c]. Then for node k E [c+1] we have:\nThe paper is organized as follows. Sec.2 introduces the basic formulation and some interesting. novel properties of ReLU in multilayered ReLU networks. Sec.3|and Sec.4 then analyze the two-. layered model Eqn.1|for K = 1 and K 2, respectively. Sec.5[shows that simulation results are consistent with theoretical analysis. Finally Sec.7lgives detailed proofs for all theorems.."}, {"section_index": "3", "section_name": "2.1 NOTATION", "section_text": "gk = LkLf,Uk'- LkLk'Vk'= Lk k k'\nDenote X as a N-by-d input data matrix and w* is the parameter of the teacher network with desired N-by-1 output u = g(X; w*). Now suppose we have an estimator w and the estimated. output v = g(X;w). We want to know with l2 loss E(w) = 2||u - v||2 = ||u - g(X;w)||2 whether gradient descent will converge to the desired solution w*.\n(a x=y+e Saddle point x*X* Ne Teacher's params (e,0) (1,0) x e > 0 0 <0\nFigure 1: (a) We consider the student and teacher network as nonlinear neural networks with ReLU nonlinearity. The student network updates its weight w from the output of the teacher, whose weights w* are fixed. (b)-(c) The network structure we consider in K = 1 and K > 2 cases (d) Notations used in multilayer ReLU gradient update rule (Sec.2.2)\nFigure 6: (a)-(b) Two cases in Lemma7.2 (c) Convergence analysis in the symmetric two-layered case.\n>`(L*,u- L gj = Lj j'\nMany previous works analyze nonlinear network based on the assumption of independent activa tions: the activations of ReLU (or other nonlinear) nodes are independent of the input and/or mutu ally independent. For example, [Choromanska et al.(2015a b)] relate the nonlinear ReLU network with spin-glass models when several assumptions hold, including the assumption of independent ac- tivations (A1p and A5u). [Kawaguchi (2016)] proves that every local minimum in nonlinear network is global based on similar assumptions. [Soudry & Carmon (2016)] shows the global optimality of the local minimum in a two-layered ReLU network, by assuming small sample size and applying independent multiplicative Bernoulli noise on the activations. In practice, the activations are highly dependent due to their common input. Ignoring such dependency also misses important behaviors, and may lead to misleading conclusions. 
In this paper, no assumption of independent activation is nade. For sigmoid activation, [Fukumizu & Amari[(2o0o)] gives quite complicated conditions for a local minimum to be global when adding a new node to a two-layered network. [Janzamin et al. 2015)] gives guarantees on recovering the parameters of a 2-layered neural network learnt with ten- sor decomposition. In comparison, we analyze ReLU networks trained with gradient descent, which is a more popular setting in practice.\nWjkDjgj = gk WikD;L L*,u LjD,WjkUx-LjDjWjk'Vk WjkD;Lj i A WjkDjLjLh,D*,w*k,Uk-WjkD;LjLzDj \"Wjk'Vk Ue' Wiki WjkD;L ik Vk k k\nThe gradient descent update is w(t+1) = w(t) + nw(t), where w(t) = -VE(w(t)). If we let n -> 0, then the update rule becomes a first-order differential equation dw/dt = - E(w), or more concisely, w = -VE(w). In this case, E = VE(w)w = -|VE(w)||2 < 0, i.e., the functior value E is nonincreasing over time. The key is to check whether there exist other critical points w w* so that VE(w) = 0.\nN E[F(e, w)] = (( -0)w + w sin 0e) 2\nIn our analysis, we assume entries of input X follow Gaussian distribution. In this situation, the gra dient is a random variable and w = -E [VE(w)]. The expected E [E(w)] is also nonincreasing no matter whether we follow the expected gradient or the gradient itself, because\nProof Note that F can be written in the following form\nE E =-E[VE(w)VE(w)]<-E[VE(w)]'E[VE(w)]<0\nIn this paper, we discover a few useful properties of ReLU that make our analysis much simpler Denote D = D(w) = diag(Xw > 0) as a N-by-N diagonal matrix. The l-th diagnonal element of D is a binary variable showing whether the neuron is on for sample l. Using this notation, we could write o(Xw) = DXw. Note that D only depends on the direction of w but not its magnitude.\nNote that for ReLU, D is also \"tranparent\"' on derivatives. For example, the Jacobian Jw[o(X w)] - o'(Xw)X = DX at differentiable regions. This gives a very concise rule for gradient descent in. ReLU network: suppose we have negative gradient inflow vector g (of dimension N-by-1) on the. current ReLU node with weights w, then we can simply write the update w as:.\nLemma 2.1 For neural network with ReLU nonlinearity and using l2 loss to match with a teacher network of the same size, the negative gradient inflow gj for node j at layer c has the following form:\n290 - sin 290 1 cos 2@o 0 1 1 R(9o E X;X! 1 cos 2o 290 + sin 200 0 N 4 0 0 290Id-2 i:$iE[0,$0] sin 200 1 - cos 2o 0] 1 1 - cos 2o sin 2o 0 4 27 0 0 0]\nThe intuition here is to start from g = u - v (true for l2 loss) at the top layer and use induction With this formulation, we could write the finite dynamics for wc (all parameters in layer c). Denote the N-by-dc+1dc matrix Rc = [LjDj]je[q]Xc and R* = [L*D*]jE[q]X*. Using gradient descent rules:\nwj =X[Djgj=X[DjLj *w* XID;L;(R*w*- Rcwc)\nE[F(e,w)]=N(R() R(O)) w - sin 20 1 cos 20 0 cos 0 N 2(-0)w-||w| 1 cos 20 sin 20 0 sin 0 4 0 0 0 0 N sin 0 -0 2 N W + sin 0e\nAw(t) = XTD(t) g(t) = XTD(t)(D*Xw* _D(t)Xw\nN EF(e,w)]=N(R(+0)-R(O))w +0)w-wsin0e 27\nF(e, w) = X i:x]e>0,x[w>0\nX;x=E[X;x[i E [0,$o]]P[i E [O,o] N i:$iE[0,$0] r sin o r cos o r sin$r cos$ .xd] p(r)p(0) 1p(xk)rdrdodx3...dxd k=3 X d\nAw = Jw[o(Xw)]'g = X'Dg\nThis can be easily applied to multilayer ReLU network. Denote j E [c] if node j is in layer c dc as the width of layer c, and u; and v; as the output of teacher network and student network respectively. 
A simple deduction yields the following lemma:\nLu-LjV gj = Lj i\nAwc = RI (R*w* - Rcwc\nNotice that by abuse of notation, the 0 appears in Eqn.20 is the absolute value and Eqn.20|follows\nLinear case. In this situation D(t) D* = I (no gating in either forward or backward propagation)\nn A\nNonlinear (ReLU) case. In this case, w = Xt D(D* Xw* - DXw) in which D is a function o w. Intuitively, this term goes to zero when w -> w*, and should be approximated to be (w* w) in the i.i.d Gaussian case, since roughly half of the samples are blocked. However, once we make such approximation, we lost the nonlinear behavior of the network and would draw the wrong conclusion of global convergence.\n1 sin 20 + 2 - 20 -(2 - 0) cos 0 - sin 0 M -(2 - 0) cos 0 sin 0 2 2\nThen how should we analyze it? Notice that in w, both of the two terms have the form F(e, w) = XTD(e)D(w)Xw. Using this form, E[w] = E[F(w/l|w|l,w*)]- E[F(w/l|w],w)]. Here e is a unit vector called the \"projected\" weight. In the following, we will show that E [F(e, w)] has the following close form under i.i.d Gaussian assumption on X :.\n1det(M) 2(sin 20 + 2 20) [(2 0) cos 0 + sin 0]4 2(sin 20 + 2 - 20) - (2 - 0)2 cos2 0 + (2 - 0) sin 20 + si (42 - 1) sin? 0 - 40 + 40 cos? 0 - 02 cos? 0 + 0 sin 20 (42 40 - 1) sin? 0 + 0 cos0(2 sin 0 - 0 cos 0)\nNote that the expectation analysis smooths out the non-differentiable property of ReLU, leaving. only one singularity at e = 0. The intuition is that expectation analysis involves an integration over the data distribution. With simple algebraic manipulation, E [w] takes the following closed form:\nN N E[w] = a sin 0w - 0w*\nwhere a = ||w*Il/l|w|| and 0 e [0, ] is the angle between w and w*. The first term is expected while the last two terms show the nonlinear behavior. Using Lyapunov's method, we show that the dynamics (if treated continuously) converges to w* when w(1) e = (w : ||w w*\nSee Appendix for the proof. The intuition is to represent V as a 2-by-2 bilinear form of vectoi. w|, w*l, and the bilinear coefficient matrix is positive definite. One question arises: will the. same approach show the dynamics converges when the initial conditions lie outside the region N, ir. particular for any region that includes the origin? The answer is probably no. Note that w = O is a. singularity in which w is not continuous (if approaching from different directions towards w = 0. w is different). It is due to the fact that ReLU function is not differentiable at the origin. We could remove this singularity by \"smoothing out' ReLU around the origin. This will yield w -> 0 wher w -> 0. In this case, V(0) = 0 so Lyapunov method could only tell that the dynamics is stable bu not convergent. Note that for ReLU activation, '(x) = 0 for certain negative x even after a local. smoothing, so the global convergence claim in [Mei et al.(2016)] for l2 loss does not apply.\nVa(1) W\nRandom Initialization. Then we study how to sample w(1) so that w(1) E . We would like to sample within Q, but we don't know where is w*. Sampling around origin with big radius r 2||w* I is inefficient in particular in high-dimensional space. This is because when the sam ple is uniform, the probability of hitting the ball is proportional to (r/||w*||)d 2-d, which is exponentially small.\nwhere Vd(1) is the volume of the unit ball. 
Since the volume of d-dimensional unit ball is\nLemma 7.3 In the region ||w(1) - w*|| < ||w*|, following the dynamics (Eqn.11), the Lyapunov function V(w) = ||w - w*||2 has V < 0 and the system is asymptotically stable and thus w(t) >- w* when t -> +oo.\nIn the following we will show that M is positive definite when 0 E (0, /2]. It suffices to show that Mj1 > 0, M22 > 0 and det(M) > 0. The first two are trivial, while the last one is:.\nLemma 3.1 DenoteF(e, w) = XD(e)D(w)Xw where e is a unit vector, X X1,X2,... ,xv is N-by-d sample matrix and D(w) = diag(Xw > O) is a binary diagonal matrix. If x; ~ N(0, I) and are i.i.d (and thus bias-free), then..\nN E[F(e, w)] = [( -0)w + w sin 0e 21\n2\nLemma 3.2 When w(1) E = {w : ||w w*|| < |w*|I}, following the dynamics of Eqn.11 the Lyapunov function V(w) = ||w - w*|2 has V < 0 and the system is asymptotically stable and thus w(t) > w* when t -> +oo.\n1 1 Va(r) - 8Va-1> 2 2\nVd e 8 < 2 Vd-1\nVa(1) = r(d/2+1)\n|w-w*|I<|w*II (a) Convergent region (b) (c) W Sample Successful samples 0.0 -0.5 region\nI(x+1) (x+s s1- x>0,0<s<1\nFigure 2: (a) Sampling strategy to maximize the probability of convergence. (b) Relationship be tween sampling range r and desired probability of success (1 - e)/2. (c) Geometry of K = 1 2D case. There is a singularity at the origin. Initialization with random weights around the origin has decent probability to converge to w*\n-1/2 d+1\\ r(d/2+1/2) d 2 T(d/2+1) 2\nA better idea is to sample around the origin with very small radius (but not at w = O), so tha the convergent hypersphere behaves like a hyperplane near the origin, and thus almost half of the samples is usefu1 (Fig.2(a)), as shown in the following theorem:\nThe intution here is to lower-bound the probability of the shaded area (Fig.2(b)). From the proof. the conclusion could be made stronger to show r ~ 1/d. consistent with common initializatior techniques [Glorot & Bengio(2010);He et al.(2015);LeCun et al. (2012)]. Fig.2(c) shows ar example in the 2D case, in which there is a singularity at the origin, and sampling towards w* yields. the convergence. This is consistent with the analysis above..\nLemma 7.5 For *. 0 and defined in Eqn.[17"}, {"section_index": "4", "section_name": "4 MULTIPLE RELUS CASE", "section_text": "Now we are ready to analyze the network g(x) = j=1 (wJx) for K 2 (Fig.1(c)). Theoretical analysis of such networks is also the main topic in many previous works [Saad & Solla (1996) Soudry & Carmon(2016);Fukumizu & Amari](2000)]. In this case, L; = L* = I for 1 j K Then we have the following nonlinear dynamics from Eqn.7\nwe have the following relations in the triangular region eo = {(x, y) : x 0, y 0, x y + eo (Fig.6(c)):\n(1) $, * E [0, /2[ and 0 E 0, 0o) where 0o = arccos (2) cos = 1- a2(x-y)2 and sin =a(x-y) Q2(x - y)2 3 * > (equality holds only when y = 0) and * > 0\n2) cos = 1-a2(x-y)2 and sin = a(x-y)/2-a2(x-y)\nWj! wi l|wj|I\ncOs $ a2(2xy + (K -2)y2) a(2x+(K-2)y)>a(x+(K-1)y)>1 cos * Qy\nEqn.12|(and its expected version) gives very complicated nonlinear dynamics and could be hard to solve in general. Unlike K = 1, a similar approach with Lyaponov function does not yield a decisive conclusion. 
However, if we consider the symmetric case: w; = P,w and w* = P,w where P, is a cyclic permutation matrix that maps index j' + 1 to (j' + j mod K) + 1 (and P1 is the identity matrix), then RHS of the expected version of Eqn.12[can be simplified as follows:\nE[wj] =>`E[f(wj,Wj,w*)]=>`E[f(P;w,Pj,w,Pj,w*)] E[f(P;w,PjPj,w,PjPj,w*)] ({Pj}j1is agroup) Pj>`E[f(w,P,w,P;,w*)] (|Pw1||=||wi|l,Z(Pw1,Pw2)=(w1,w2 P;E[w1] (14\nProof We discuss the three boundaries as follows:\nCase 1: y = 0, 0 < x < 1, horizontal line. In this case, 0 = 0, = /2 and * = /2. The component of the dynamics in this line is:\nNDCC 'dt. So we have Va(1) F(d/2+1/2) (40) Vd-1(1) T(d/2+1) From Gautschi's Inequality T(x+1) x1-s (x+s x > 0, 0 < s < 1 (41) T(x+ s) with s = 1/2 and x = d/2 we have: 1/2 F(d/2+1/2) (42) T(d/2+1) Therefore, it suffices to have 2 (43) Note that this upper bound is tight when & -> 0 and d -> +oo, since all inequality involved asymp\nVa(1) r(d/2+1/2) Vd-1(1) r(d/2+1)\n2 d +\nNote that this upper bound is tight when -> 0 and d -> +oo, since all inequality involved asymp totically becomes equal.\nQ (x2+(K-1)y2)-1/2 cos 0 = ax cos * = Qy cos a?(2xy+(K -2)y2) -\nK wj=f(Wj,Wj,W j'=1\nProof Propositions (1) and (2) are computed by direct calculations. In particular, note that since cos 0 = ax = 1/1 + (K - 1)(y/x)2 and x > y 0, we have cos 0 e (1/K,1] and 0 E [0, 0o). For Preposition (3), $* = arccos ay > 0 = arccos ax because x > y. Finally, for x > y > 0, we have\nTheorem 7.6 For the dynamics defined in Eqn.16J there exists eo > 0 so that the trianglar region Neo = {(x, y) : x 0, y 0, x y + eo} (Fig.6(c)) is a convergent region. That is, the flow goes inwards for all three edges and any trajectory starting in Seo stays.\nwj]= E[f(wj,Wj,w*)]=E[f(Pw,Pj,w,Pjw*)] E[f(P;w,PjPj\"w,PjPjw*)] ({Pj}j=1is agroup) Pj>`E[f(w,Pj,w,Pj,w*)] (|Pwi|=|w1|l, Z(Pw1,Pw2)=Z(w1,w2)) PjE[w1] (14)\n2 A f1 = 1)>0 N 2\nK E[w]= E[f(w, P;w,P,w*) j=1\n2TT f2 7x -(-$)(K -1)y-0+(K -1)(asin*-sin$) +asi N -(K1) [(-$)y-(asin*-sin$)]+ (a sin0-0)\n21 )(x-1+(K-1)y)] N [(K - 1)(a sin $* - sin ) + a sin\n=(x2+(K-1)y2)-1/2 cos 0 = ax, cos * = ay, cos$ = a?(2xy+(K-2)y)\n2TT - 0 - e + (K - 1)(a sin * - sin ) + a sin 0e (56 N K - 1\nCorollary 4.2 For a bias-free two-layered ReLU network g(x; w) = , (wJx) that takes Gaus-. sian i.i.d inputs (Fig. 1), if the teacher's parameters {w*} form orthogonal bases, then when = x(1)w* + y(1) i+j Wj, where (x(1), y(1)) E = {x E (0, 1], y E [0, 1], x > y}, then the dynamics (Eqn.12) converges to { w*} without being trapped into local minima..\nLemma 7.7 (Reparametrization) Denote e = x - y > 0. The terms ax, ay and ae involved in the trigometric functions in Egn.16 has the following parameterization..\nWhen symmetry is broken, since the closure of includes the origin, there exists a path starting. at arbitrarily small neighborhood of origin to w*, regardless of how large w* is. In contrast tc traditional convex analysis that only gives the local parameter-dependent convergence basin around w*, here we obtain a convergence basin that is parameter-independent. In comparison, [Saad &. Solla(1996)] uses a different activation function (o(x) = erf(x)) and only analyzes local behaviors near the two fixed points (the symmetric saddle point and the teacher's weights w*), leaving sym metry breaking an empirical procedure. Here we show that it is possible to give global convergence. 
analysis on certain symmetry breaking cases for two-layered ReLU network..\n[y] -2 1 3 +(K -1)2 a x K K 2\nProof This transformation can be checked by simple algebraic manipulation. For example\nK ar K\nwhich means that if all w; and w* are symmetric under the action of cyclic group, so does their expected gradient. Therefore, the trajectory {w(t)} keeps such cyclic structure. Instead of solving a system of K equations, we only need to solve one:\n(-$)y+a(x-y)2-a2(x-y)2-a1-a2y2 -a2-a2(x-y)2 V2-a2(x-y)2-1-a2y2\nTT 2-a2(x-y)2 T- 2\n1 1 1 Oy /(x/y)2+(K-1) /(1+e/y)2+(K-1 /K\n0 = cos0 +vK - 1sin 0\n(a) Distribution of relative RMS error on angle. (b) Relative RMS error w.r.t #sample (Gaussian distribution) (c) Relative RMS error w.r.t #sample (Uniform distri.). 0.7 0.40 0.40 d=5 d=5 Id=5 0.6 /2 0.35 d=10 d=10 0.35 d=10 0.30 d=20 d=20 0.30 I d =20 d =50 d=50 d =50 0.25 0.25 0.20 0.20 0.15 0.15 0.2 0.10 0.10 0.1 0.05 0.05 0.0 0.00 0.00 0.0 0.5 1.0 1.5 2.0 2.5 3.0 10 10 102 106 10 10 10 10 10 107 103 10 0 10 10 Angle (in radius) #Samples #Samples #Samples\nTo prove Eqn.59 first we notice that K cos0 = Kax = + (K - 1)2. Therefore, we have (K cos 0 )2 - (K - 1)2? = 0, which gives 2 - 2 cos 0 + 1 - K sin2 0 = 0. Solving this quadratic equation and notice that 1, 0 e [0, /2] and we get:\n= cos 0 + cos2 0 + K sin? - 1 = cos 0 + K -1 sin 0\nFigure 3: (a) Distribution of relative RMS error with respect to 0 = (w, e). (b) Relative RMS error decreases with sample size, showing the asympototic behavior of the close form expression Eqn. 10] (c) Eqn.10]also works well when the input data X are generated by other zero-mean distribution X, e.g., uniform distribution in [1/2, 1/2].\n(a) Vector field in (x, y) plane (K = 2) (b) Vector field in (x, y) plane (K = 5) (c) Trajectory in (x, y) plane.. 1.0 1.0 0.6 y =x K =2 0.5 Saddle points K=5 K = 10 0.4 0.8/ 0.8 y o.3 iter200 i \"iter100 0.2 iter100 iter200 0.6 0.6 0.1 iter100 y > 0.2 0.4 X 0.6 0.8 1.0 K=2 0.4 0.4 (d) Convergence K=5 0.8 K = 10 0.6 0.2 0.2 0.4 0.2 0.0 0.0 I 0.00 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 200 400 600 800 1000 Y\nDenote f3(, e) = f31 + f32 where\nf31(, e') *-0- D + e' a sin 6. f32(, e') (K - 1)(a sin * - sin )e\nFigure 4: (a)-(b) Vector field in (x, y) plane following 2D dynamics (Eqn.16) for K = 2 and. K = 5. Saddle points are visible. The parameters of teacher's network are at (1,0). (c) Trajectory in (x, y) plane for K = 2, K = 5, and K = 10. All trajectories start from (10-3,0). Even the. starting point are aligned with w*, gradient descent dynamics takes detours. (d) Training curve When K is larger the convergence is faster..\n0 f31 = e'($* $) + (1- e')($* - 0)- e'0 + 2 sin0 -e'0 + 2 sin0 2 Sin 0\n1 f33(0) = sin0-0 = sin 20 + VK - 1sin2 0 - 0 2\nFrom the simulation shown in Fig.] we could see that gradient descent takes a detour to reach th desired solution w*, even when the initialization is aligned with w*. This is because in the firs stage, all ReLU nodes receive the residue and try to explain the data in the same way (both x anc y increases); when the \"obvious\" component has been explained away, then the residue change its direction and pushes some ReLU nodes to explain other components as well (x increases but g decreases).\nEmpirically this path also converges to w* under noise. We leave it a conjecture that the system con verges in the presence of reasonably large noise. If this conjecture is true, then with high probability a random initialization stays in the convergence basin and converges to a permutation of w*. 
The. reason is that a random initialization almost never gives ties. Without a tie, there exists one leading. component which will dominate the convergence..\nConjecture 4.3 When the initialization w(1) = x(1)w* j'+j w*, + e, where e is Gaussian. noise and (x(1), y(1)) E Q, then the dynamics Eqn.12|also converges to w* without trapped into local minima.\nQ e-1+Ky 3-a+3-2) 0 Q Q Q\n1 ((K - 1)(a sin $* - sin $) + a sin0) = -\nWe verify our close form expression of E [F(e, w)] = E [XT D(e)D(w)Xw] (Eqn.[10) with sim ulation. We randomly pick e and w so that their angle (e, w) is uniformly distributed in [0, ]. We prepare the input data X with standard Gaussian distribution and compare the close form so- lution E [F(e, w)] with F(e, w), the actual data term in gradient descent without expectation. We. use relative RMS error: err = ||E [F(e, w)] - F(e, w)|l/||F(e, w)|l. As shown in Fig.3[a), The error distribution on angles shows the properties of the close-form solution. For small 0, D(w) and\n--)e-1+Ky)-* $y + ((K - 1)(a sin $* - sin $) + a sin 0 N --e-1+Ky)- (6\n(a) Distribution of relative RMS error on angle (b) Relative RMS error w.r.t #sample (Gaussian distribution) (c) Relative RMS error w.r.t #sample (Uniform distri.) 0.7 0.40 0.40 d =5 d=5 d=5 0.6 t/2 TT 0.35 d=10 d=10 0.35 d=10 0.30 d=20 d=20 0.30 d=20 d =50 d=50 d =50 0.25 0.25 0.20 0.20 0.3 0.15 0.15 0.2 / 0.10 0.10 0.1 0.05 0.05 0.0 1.0 2.0 2.5 3.0 0.00 0.00 0.0 0.5 1.5 103 104 105 106 10 103 104 105 106 10 103 104 105 106 107\nf3 = hi()-(+(K-1) sin)\nWhen is fixed, f3 now is a monotonously decreasing function with respect to e > 0. Therefore. f3(, e) f3(, e') for 0 < e e' = 2/. If we could prove f3(, e) 0 and only attain zero at known critical point (, e) = (1, 1), the proof is complete\n1.0 1.0 1.0 1.0 0.8 noise = 0.5, top-w = 1 0.8 noise = 1.0, top-w = 1 errrr 0.8 0.8 0.6 0.6 0.6 0.6 0.4 0.4 0.4 noise = 1.5, top-w = 1 0.4 noise = 2.0, top-w = 1 0.2 0.2 0.2 0.2 0.0 0.0 0.0 0.0 60 20 40 80 100 20 40 60 80 100 20 0 20 40 80 100 0 60 0 0 40 60 80 100 #lteration #lteration #Iteration #Iteration 1.0 1.0 1.0 1.0 p noise = 0.5, top-w E [1, 2] noise = 0.5, top-w E [0.1, 1.1] 0.8 0.8 0.8 0.8 0.6 0.6 0.6 0.6 0.4 0.4 0.4 0.4 0.2 0.2 noise = 0.5, top-w E [0.01, 0.11] 0.2 noise = 0.5, top-w ~ N(0, 1) 0.2 0.0 0.0 60 0.0 0.0 0 20 40 60 80 100 20 40 80 100 0 20 40 60 80 100 0 20 0 40 60 80 100 #Iteration #Iteration #Iteration #Iteration\nTheorem 7.10 Any trajectory in Ne, converges to (y, e) = (1, 0), following the dynamics defined in Eqn.16\nProof We have Lyaponov function V = E [E] so that V = -E [ww] -E [w] E [w] 0. By Thm.7.9 other than the optimal solution w*, there is no other symmetric critical point. w 0 and thus V < 0. On the other hand, by Thm.7.6 the triangular region Neo is convergent, in. which the 2D dynamics is C differentiable. Therefore, any 2D solution curve &(t) will stay within. By PoincareBendixson theorem, when there is a unique critical point, the curve either converges to a limit circle or the critical point. However, limit cycle is not possible since V is strictly monotonous decreasing along the curve. Therefore, (t) will converge to the unique critical point, which is (y, e) = (1, 0) and so does the symmetric system (Eqn.12).\nFigure 5: Top row: Convergence when the initial weights deviates from symmetric initialization: w(1) = 10-3w* + e. Here e ~ N(0, 10-3 * noise). The 2-layered network converges to w* until experiment has 8 runs. Bottom row: Convergence when we use g2(x) = j=1 ajo(wJx). 
Here the top weights a; is fixed at different numbers (rather than 1). Large positive a, correponds to fast convergence. When a; has positive/negative components, the network does not converge to w*.\nProof The 1D system can be computed with simple algebraic manipulations (note that when x = y,. = 0 and 0 = * = arccos(1/K)). Note that the 1D system is linear and its close form solution is x(t) = xo + Ce-K/2Nt and thus convergent.\nFig.3(a) shows that the close form expression becomes more accurate with more samples. We also examine other zero-mean distributions of X, e.g., uniform distribution in [-1/2, 1/2]. As shown in Fig.3(d), the close form expression still works for large d, showing that it could be quite general. Note that the error is computed up to a scaling constant, due to the difference in normalization constants among different distributions. We leave it to the future work to prove its usability for broader distributions.\nFig.4[a) and (b) shows the 2D vector field given by the 2D dynamics (Eqn.16) and Fig.4(c) shows the 2D trajectory towards convergence to the teacher's parameters w*. Interestingly, even when we initialize the weights as (10-3, 0), aligning with w*, the gradient descent takes detours to reach the destination. One explanation is, at the beginning all nodes move similar direction trying to explain the data, once the data have been explained partly, specialization follows (y decreases).\nIn this paper, we analyze the nonlinear dynamical behavior of certain two-layered bias-free ReLl networks in the form of g(x; w) = j=1 o(wJx), where = max(x, 0) is the ReLU node. We assume that the input x follows Gaussian distribution and the output is generated by a teacher net work with parameters w*. In K = 1 we show a close-form nonlinear dynamics can be obtained and its convergence to w* can be proven, if we sample the initialization properly. Such initialization is consistent with common practice [Glorot & Bengio (2010); He et al.[(2015)] and is independent of the value of w*. For K 2, when the teacher parameters {w* } form a orthonormal bases, we prove that the trajectory from symmetric initialization is trapped into a saddle point, while certain sym- metric breaking initialization converges to w* without trapped into any local minima. Future work includes analysis of general cases (or symmetric case plus noise) for K 2, and a generalization to multilayer ReLU (or other nonlinear) networks.\n1.0 1.0 1.0 1.0 0.8 noise = 0.5, top-w = 1 0.8 noise = 1.0, top-w = 1 errrr 0.8 0.8 0.6 0.6 0.6 0.6 0.4 0.4 0.4 noise = 1.5, top-w = 1 0.4 noise = 2.0, top-w = 1 0.2 0.2 0.2 0.2 0.0 0.0 0.0 0.0 0 20 40 60 80 100 0 20 40 60 80 100 o: 20 40 60 80 100 20 40 60 80 100 #Iteration #Iteration #Iteration #lteration 1.0 1.0 1.0 1.0 p noise = 0.5, top-w E [1, 2] noise = 0.5, top-w E [0.1, 1.1] errorr 0.8 0.8 0.8 0.8 0.6 0.6 0.6 0.6 RN aareee 0.4 0.4 0.4 0.4 RRel 0.2 0.2 0.2 noise = 0.5, top-w E [0.01, 0.11] 0.2 noise = 0.5, top-w ~ N(0, 1) 0.0 0.0 0.0 0.0 0 20 40 60 80 100 0 20 40 60 80 100 0 20 40 60 80 100 20 40 60 80 0 100 #Iteration #lteration #lteration #Iteration\n211 \\x= -K(x-x* N\nFig.5 shows empirical convergence for K > 2, when the initialization deviates from symmetric initialization in Thm.4.1 Unless the deviation is large, gradient descent converges to w*. We also check the convergence of a more general network g2(x) = j=1 ajo(wJx). When aj > 0 convergence follows; however, when some a; is negative, the network does not converge to w*"}]
Skvgqgqxe
[{"section_index": "0", "section_name": "LEARNING TO COMPOSE WORDS INTO SENTENCES WITH REINFORCEMENT LEARNING", "section_text": "all our experiments. While for smaller datasets such as SiCK the overall training time is approxi. mately 6 hours, for SNLI or IMDB it takes 3-4 days for the model to reach convergence. In general the latent syntax model and semi-supervised syntax models take about two or three times longer to. converge compared to models with predefined structures..\nDani Yogatama', Phil Blunsom1,2, Chris Dyer', Edward Grefenstette', and Wang Ling 1DeepMind and 2University of Oxford"}, {"section_index": "1", "section_name": "5 CONCLUSION", "section_text": "{dyogatama, pblunsom, cdyer, etg, lingwang}@google. com"}, {"section_index": "2", "section_name": "REFERENCES", "section_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large anno tated corpus for learning natural language inference. In Proc. of EMNLP, 2015.."}, {"section_index": "3", "section_name": "1 INTRODUCTION", "section_text": "David Chiang. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228 2007.\nNoam Chomsky. Syntactic Structures. Mouton. 1957\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8) 1735-1780, 1997.\nSergio Jimenez, George Duenas, Julia Baquero, Alexander Gelbukh, Av Juan Dios Batiz, and Av Mendizabal. UNAL-NLP: Combining soft cardinality features for semantic textual similarity. relatedness and entailment. In Proc. of SemEval, 2014..\nOur work can be understood as a compromise between the first two approaches. Rather than using. explicit supervision of tree structure, we use reinforcement learning to learn tree structures (and thus, sentence-specific compositional architectures), taking performance on a downstream task that. uses the computed sentence representation as the reward signal. In contrast to sequential RNNs.. which ignore tree structure, our model still generates a latent tree for each sentence and uses it tc.\nNal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network fo. modelling sentences. In Prof. of ACL, 2014..\nWe presented a reinforcement learning method to learn hierarchical structures of natural language. sentences. We demonstrated the benefit of learning task-specific composition order on four tasks. sentiment analysis, semantic relatedness, natural language inference, and sentence generation. We. qualitatively and quantitatively analyzed the induced trees and showed that they both incorporate some linguistically intuitive structures (e.g., noun phrases, simple verb phrases) and are different than conventional English syntactic structures.."}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "We use reinforcement learning to learn tree-structured neural networks for com outing representations of natural language sentences. In contrast with prior work. on tree-structured models, in which the trees are either provided as input or pre-. dicted using supervision from explicit treebank annotations, the tree structures. n this work are optimized to improve performance on a downstream task. Ex periments demonstrate the benefit of learning task-specific composition orders.. outperforming both sequential encoders and recursive encoders based on treebank. annotations. We analyze the induced trees and show that while they discover. 
some linguistically intuitive structures (e.g., noun phrases, simple verb phrases) hey are different than conventional English syntactic structures..\nJohannes Bjerva, Johan Bos, Rob van der Goot, and Malvina Nissim. The meaning factory: Formal semantics for recognizing textual entailment and determining semantic similarity. In Proc. oj SemEval, 2014.\nanguages encode meaning in terms of hierarchical, nested structures on sequences o. vords (Chomsky 1957). However, the degree to which neural network architectures that com. ute representations of the meaning of sentences for practical applications should explicitly reflec uch structures is a matter for debate. In this work, we use reinforcement learning to learn to con. truct trees for computing sentence representations, guided by feedback from downstream tasks tha. epend on these representations. The space of structures that are considered by the learner includes. oth fully sequential structures (corresponding to traditional recurrent neural network \"encoders'') s well as all projective binary trees. Thus, although we take seriously the notion that good compo itional architectures might be tree-structured, we specify neither the form of the tree nor whether a. ree is necessary at all, and instead leave those decisions up to the learner (and the data)..\nKyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Hol ger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder. for statistical machine translation. arXiv preprint, 2014.\nYoon Kim. Convolutional neural networks for sentence classification. In Proc. EMNLP, 2014\nstructure the composition. Our hypothesis is that encouraging the model to learn tree-structured compositions will bias the model toward better generalizations about how words compose to form sentence meanings, leading to better performance on downstream tasks.\nThis work is related to unsupervised grammar induction (Klein & Manning2004]Blunsom & Cohr. 2010, Spitkovsky et al.]2011, inter alia), which seeks to infer a generative grammar of an infinit. language from a finite sample of strings from the language-but without any semantic feedbacl. Previous work on unsupervised grammar induction that incorporates semantic supervision involve designing complex models for Combinatory Categorial Grammars (Zettlemoyer & Collins||2005) c marginalizing over latent syntactic structures (Naradowsky et al.2012). Since semantic feedbac. has been proposed as crucial for the acquisition of syntax (Pinker|[1984), our model offers a simple. alternative||However, our primary focus is on improving performance on the downstream model, s. the learner may settle on a different solution than conventional English syntax. We thus also explor. what kind of syntactic structures are derivable from shallow semantics..\nDan Klein and Christopher D. Manning. Accurate unlexicalized parsing. In Proc. of ACL, 2003\nAlice Lai and Julia Hockenmaier. Illinois-lh: A denotational and distributional approach to seman tics. In Proc. of SemEval, 2014\nMarco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proc. of SemEval, 2014.\nExperiments on various tasks (i.e., sentiment analysis, semantic relatedness, natural language infer. 
ence, and sentence generation) show that reinforcement learning is a promising direction to discove. hierarchical structures of sentences. Notably, representations learned this way outperformed botl conventional left-to-right models and tree-structured models based on linguistic syntax in down. stream applications. This is in line with prior work showing the value of learning tree structures ir. statistical machine translation models (Chiang2007). Although the induced tree structures mani. fested a number of linguistically intuitive structures (e.g., noun phrases, simple verb phrases), there. are a number of marked differences to conventional analyses of English sentences (e.g., an overall. left-branching structure).\nsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. arXiv preprint, 2016a\nJason Naradowsky, Sebastian Riedel, and David A. Smith. Improving nlp through marginalizatior of hidden syntactic structure. In Proc. of EMNLP, 2012.."}, {"section_index": "5", "section_name": "2 MODEL", "section_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In Proc. of EMNLP, 2014.\nOur model consists of two components: a sentence representation model and a reinforcement learn. ing algorithm to learn the tree structure that is used by the sentence representation model.\nSteven Pinker. Lan Learnability and L Development. Harvard, 1984"}, {"section_index": "6", "section_name": "2.1 TREE LSTM", "section_text": "Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. Semantic composition ality through recursive matrix-vector spaces. In Proc. of EMNLP, 2012..\nOur sentence representation model follows the Stack-augmented Parser-Interpreter Neural Networ (SPINN; Bowman et al., 2016), SPINN is a shift-reduce parser that uses Long Short-Term Memor (LSTM; Hochreiter and Schmidhuber, 1997) as its composition function. Given an input sentenc of N words x = {x1, 2,...,xN}, we represent each word by its embedding vector x, E RD The parser maintains an index pointer p starting from the leftmost word (p = 1) and a stack. T parse the sentence, it performs a sequence of operations a = {a1, a2,..., a2v-1}, where at {sHIFT, REDUCE}. A sHIFT operation pushes xp to the stack and moves the pointer to the nex word (p++); while a REDucE operation pops two elements from the stack, composes them to single element, and pushes it back to the stack. SPINN uses Tree LSTM (Tai et al.I|2015)Zhu et al 2015) as the REDUCE composition function, which we follow. In Tree LSTM, each element of th stack is represented by two vectors, a hidden state representation h and a memory representation c Two elements of the stack (h;, c;) and (h;, c) are composed as:\ni=o(W1[hi,hj]+b1) o = o(Wo[h,h;]+ b1 fL =o(WFL[hi,hj]+bFL] fR =o(WFR[hi,hj]+ bFR g = tanh(Wg[hi,h;] + bG) c=fOc;+fROc;+iOg h =oO c\nKai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. In Proc. of ACL, 2015.\nIvan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. In Proc. of ICLR, 2016.\nwhere [h,, h,] denotes concatenation of h, and h;, and is the sigmoid activation function\nRonald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcemen earningMachinee 8:229-256.1992\nA unique sequence of {sHIFT, REDUcE} operations corresponds to a unique binary parse tree of the sentence. 
A sHIFT operation introduces a new leaf node in the parse tree, while a REDUCE operation combines two nodes by merging them into a constituent. See Figure1for an example. We note that for a sentence of length N, there are exactly N sHIFT operations and N 1 REDUCE operations that are needed to produce a binary parse tree of the sentence. The final sentence representation produced\nLuke S. Zettlemoyer and Michael Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proc. of UAI, 2005.\nXiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. Long short-term memory over recursive struc tures. In Proc. of ICML, 2015.\nDan Klein and Christopher D. Manning. Corpus-based induction of syntactic structure: Models of dependency and constituenc In Proc. of ACL. 2004\n' Our model only produces an interpretation grammar that parses language instead of a generative gramma\nS, S, R, S, S, R, R S, S, S, R, R, S, R S, S, R, S, R, S, R S, S, S, S, R, R, R\nby the Tree LSTM is the hidden state of the final element of the stack hy-1 (i.e., the topmost node of the tree).\nTracking LSTM. SPINN optionally augments Tree LSTM with another LSTM that incorporate contextual information in sequential order called tracking LSTM, which has been shown to improv performance for textual entailment. It is a standard recurrent LSTM network that takes as input th hidden states of the top two elements of the stack and the embedding vector of the word indexed b the pointer at timestep t. Every time a REDUcE operation is performed, the output of the trackin LSTM e is included as an additional input in Eq.1(i.e., the input to the REDucE compositior function is [h;, h;, e] instead of [hi, h;]).\nIn previous work (Tai et al.2015 Bowman et al.2016), the tree structures that guided compositior orders of Tree LSTM models are given directly as input (i.e., a is observed and provided as an input) Formally, each training data is a triplet {x, a, y}.Tai et al.(2015) consider models where a is alsc. given at test time, whereas|Bowman et al.[(2016) explore models where a can be either observed or. not at test time. When it is only observed during training, a policy is trained to predict a at test time. Note that in this case the policy is trained to match explicit human annotations (i.e., Penn TreeBank annotations), so the model learns to optimize representations according to structures that follows. human intuitions. They found that models that observe a at both training and test time are better than models that only observe a during training..\nOur main idea is to use reinforcement learning (policy gradient methods) to discover the best tree. structures for the task that we are interested in. We do not place any kind of restrictions wher learning these structures other than that they have to be valid binary parse trees, so it may resuli. in tree structures that match human linguistic intuition, heavily right or left branching, or othe. solutions if they improve performance on the downstream task..\nWe parameterize each action a E {sHIfT, REDucE} by a policy network (a s; W R), where s i. a representation of the current state and W r is the parameter of the network. Specifically, we use : two-layer feedforward network that takes the hidden states of the top two elements of the stack h,. 
and h, and the embedding vector of the word indexed by the pointer x, as its input:.\nIf a is given as part of the training data, the policy network can be trained--in a supervised training regime-to predict actions that result in trees that match human intuitions. Our training data, o the other hand, is a tuple {x, y}. We use REINFORcE (Williams1992), which is an instance of broader class of algorithms called policy gradient methods, to learn W r such that the sequence o actions a = { a1, ..., aT} maximizes:\nT R(W) = E(a,s;WR) rtat t=1\nFigure 1: Four examples of trees and their corresponding sHIFT (S) and REDUcE (R) sequences. In each of the examples, there are 4 input words (4 leaf nodes), so 7 operations (4 S, 3 R) are needed to construct a valid tree. The nodes are labeled with the timesteps in which they are introduced to the trees t E {1,..., 7}. A sHIFT operation introduces a leaf node, whereas a REDUCE operation introduces a non-leaf node by combining two previously introduced nodes. We can see that different S-R sequences lead to different tree structures.\nwhere rt is the reward at timestep t. We use performance on a downstream task as the reward func- tion. For example, if we are interested in using the learned sentence representations in a classification task, our reward function is the probability of predicting the correct label using a sentence represen- tation composed in the order given by the sequence of actions sampled from the policy network, so R(W) = log p(y | T-LSTM(x); W), where we use W to denote all model parameters (Tree LSTM, policy network, and classifier parameters), y is the correct label for input sentence x, and x is rep- resented by the Tree LSTM structure in 2.1| For a natural language generation task where the goal is to predict the next sentence given the current sentence, we can use the probability of predicting words in the next sentence as the reward function, so R(W) = log p(xs+1 | T-LSTM(xs); W).\nNote that in our setup, we do not immediately receive a reward after performing an action at timestej. t. The reward is only observed at the end after we finish creating a representation for the curren. sentence with Tree LSTM and use the resulting representation for the downstream task. At eacl. timestep t, we sample a valid action according to (a s; WR). We add two simple constraints t. make the sequence of actions result in a valid tree: REDUcE is forbidden if there are fewer than twc. elements on the stack, and sHiFT is forbidden if there are no more words to read from the sentence After reaching timestep 2N - 1, we construct the final representation and receive a reward that is. used to update our model parameters.\nWe experiment with two learning methods: unsupervised structures and semi-supervised structures Suppose that we are interested in a classification task. In the unsupervised case, the objective func. tion that we maximize is logp(y T-LSTM(x); W). In the semi-supervised case, the objectiv. function for the first E epochs also includes a reward term for predicting the correct sHIFT or RE. DUcE actions obtained from an external parser-in addition to performance on the downstream task so we maximize log p(y | T-LSTM(x); W) + log (a | s; W R). The motivation behind this mode. is to first guide the model to discover tree structures that match human intuitions, before letting i. explore other structures close to these ones. After epoch E, we remove the second term from our ob jective function and continue maximizing the first term. 
Note that unsupervised and semi-supervise. here refer to the tree structures, not the nature of the downstream task.."}, {"section_index": "7", "section_name": "3.1 BASELINES", "section_text": "The goal of our experiments is to evaluate our hypothesis that we can discover useful task-specific tree structures (composition orders) with reinforcement learning. We compare the following com position methods (the last two are unique to our work):.\n2We choose to include right to left as a baseline since a right-branching tree structure---which is the output of a right to left composition order--has been shown to be a reliable baseline for unsupervised grammar induction (Klein & Manning2004)\n:Right to left: words are composed from right to left|2. Left to right: words are composed from left to right. This is the standard recurrent neura. network composition order. Bidirectional: A bidirectional right to left and left to right models, where the final sentenc. embedding is an average of sentence embeddings produced by each of these models.. Balanced binary tree: words are composed according to a balanced binary parse tree o. the sentence. Supervised syntax: words are composed according to a predefined parse tree of the ser tence. When parse tree information is not included in the dataset, we use Stanford parse. (Klein & Manning2003) to parse the corpus. Semi-supervised syntax: a variant of our reinforcement learning method, where for th. first E epochs we include rewards for predicting predefined parse trees given in the supei. vised model, before letting the model explore other kind of tree structures at later epoch. (i.e., semi-supervised structures in 2.2). Latent syntax: another variant of our reinforcement learning method where there is n predefined structures given to the model at all (i.e., unsupervised structures in 2.2)..\nFor learning, we use stochastic gradient descent with minibatches of size 1 and l2 regularization con. stant tune on development data from {10-4, 10-5, 10-6, 0}. We use performance on development data to choose the best model and decide when to stop training.\nTable 1: Descriptive statistics of datasets used in our experiments\nDataset # of train # of dev # of test Vocab size SICK 4,500 500 4,927 2,172 SNLI 550,152 10,000 10,000 18,461 SST 98,794 872 1,821 8,201 IMDB 441,617 223,235 223,236 29,209\nStanford Sentiment Treebank. We evaluate our model on a sentiment classification task from the Stanford Sentiment Treebank (Socher et al.]2013). We use the binary classification task where the goal is to predict whether a sentence is a positive or a negative movie review.\nWe set the word embedding size to 100 and initialize them with Glove vectors (Pennington et al. 20143] For each sentence, we create a 100-dimensional sentence representation s E R100 with. Tree LSTM, project it to a 200-dimensional vector and apply ReLU: q = ReLU(Wps + bp), anc. compute p(y = cq; wq) x exp(wq,cq+ bq\nTable 2: Classification accuracy on Stanford Sentiment Treebank dataset. The number of parameter. includes word embedding parameters and is our approximation when not reported in previous work\nModel Acc. # params. 100D-Right to left 83.9 1.2m 100D-Left to right 84.7 1.2m 100D-Bidirectional 84.7 1.5m 100D-Balanced binary tree 85.1 1.2m 100D-Supervised syntax 85.3 1.2m 100D-Semi-supervised syntax 86.1 1.2m 100D-Latent syntax 86.5 1.2m RNTN (Socher et al.. 2013) 85.4 DCNN (Kalchbrenner et al.) 2014) 86.8 CNN-random(Kim 2014 82.7 CNN-word2vec (Kim 2014 87.2 CNN-multichannel (Kim.) 
2014 88.1 NSE (Munkhdalai & Yu) 2016a 89.7 5.4m NTI-SLSTM (Munkhdalai & Yu 2016b 87.8 4.4m NTI-SLSTM-LSTM (Munkhdala1 & Yu 2016b 89.3 4.8m Left to Right LSTM Tai et al 2015 84.9 2.8m Bidirectional LSTM Tai et al 87.5 2.8m Constituency Tree-LSTM-random Tai et al 82.0 2.8m Constituency Tree-LSTM-GloVe Tai et al 115 88.0 2.8m Dependency Tree-LSTM 85.7 2.8m Tai et al. 2015\nhttp://nlp.stanford.edu/projects/glove,\nWe evaluate our method on four sentence representation tasks: sentiment classification, semantic relatedness, natural language inference (entailment), and sentence generation. We show statistics of the datasets in Table[1land describe each task in detail in this subsection.\nWe run each model 3 times (corresponding to 3 different initialization points) and use the devel. opment data to pick the best model. We show the results in Table2 Our results agree with prior work that have shown the benefits of using syntactic parse tree information on this dataset (i.e., su- pervised recursive model is generally better than sequential models). The best model is the latent syntax model, which is also competitive with results from other work on this dataset. Both the latent and semi-supervised syntax models outperform models with predefined structures, demonstrating. the benefit of learning task-specific composition orders..\nSemantic relatedness. The second task is to predict the degree of relatedness of two sentences. from the Sentences Involving Compositional Knowledge corpus (SICK; Marelli et al., 2014) . In. this dataset, each pair of sentences are given a relatedness score on a 5-point rating scale. For each. sentence, we use Tree LSTM to create its representations. We denote the final representations by {S1, s2} E R100.We construct our prediction by computing: u = (S2.-s1)?, v =.S1 O s2,. R200, bg E R1 are model parameters, and [u, v] denotes concatenation of vectors inside the brackets. We learn the model to minimize mean squared error..\nWe run each model 5 times and use the development data to pick the best model. Our results are shown in Table 3| Similarly to the previous task, they clearly demonstrate that learning the tree structures yields better performance..\nWe also provide results from other work on this dataset for comparisons. Some of these models (La & Hockenmaier2014]Jimenez et al.]2014]Bjerva et al.]2014) rely on feature engineering and are designed specifically for this task. Our Tree LSTM implementation performs competitively wit most models in terms of mean squared error. Our best model-semi-supervised syntax-is bette than most models except LSTM models of Tai et al.(2015) which were trained with a differen objective function4Nonetheless, we observe the same trends with their results that show the benefi of using syntactic information on this dataset\nTable 3: Mean squared error on SICK dataset\nStanford Natural Language Inference. We next evaluate our model for natural language infer. ence (i.e., recognizing textual entailment) using the Stanford Natural Language Inference corpus. (SNLI; Bowman et al., 2015) . Natural language inference aims to predict whether two sentence. are entailment, contradiction, or neutral, which can be formulated as a three-way classification prob lem. Given a pair of sentences, similar to the previous task, we use Tree LSTM to create sentenc representations {S1, s2} E R100 for each of the sentences. FollowingBowman et al. 
(2016), we con- struct our prediction by computing: u = (S2-s1)2, v = S1 Os2, q = ReLU(Wp[u, v, S1, S2]+bp) and p(y =c|q;wq) x exp(wq,cq+bq), where Wp E IR200x400,bp E R200,wq E R200,bq E R are model parameters. The objective function that we maximize is the log likelihood of the correc label under the models.\nWe show the results in Table 4 The latent syntax method performs the best. Interestingly, the. sequential left to right model is better than the supervised recursive model in our experiments, which. contradicts results from Bowman et al.(2016) that show 300D-LSTM is worse than 300D-SPINN. A possible explanation is that our left to right model has identical number of parameters with the supervised model due to the inclusion of the tracking LSTM even in the left to right model (the. only difference is in the composition order), whereas the models in Bowman et al.[(2016) have.\n4Our experiments with the regularized KL-divergence objective function (Tai et al.2015) do not result ir significant improvements, so we choose to report results with the simpler mean squared error objective function\nModel MSE # params. 100D-Right to left 0.461 1.0m 100D-Left to right 0.394 1.0m 100D-Bidirectional 0.373 1.3m 100D-Balanced binary tree 0.455 1.0m 100D-Supervised syntax 0.381 1.0m 100D-Semi-supervised syntax 0.320 1.0m 100D-Latent syntax 0.359 1.0m Illinois-LH dLai & Hockenmaier 2014 0.369 UNAL-NLP(Jimenez et al 2014 0.356 Meaning Factory (Bjerva et al.. 2014 0.322 DT-RNN (Socher et al..) 2014 0.382 Mean Vectors (Tai et al. 2015 0.456 650k Left to Right LSTM. Tai et al 2015 0.283 1.0m Bidirectional LSTM Tai et al 2015 0.274 1.0m Constituency Tree-LSTM Tai et al. 2015 0.273 1.0m Dependency Tree-LSTM. Tai et al. 2015 0.253 1.0m\ndifferent number of parameters. Due to the poor performance of the supervised model relative to the unsupervised model, semi-supervised training can only mitigate the loss in accuracy, rathel than improve over unsupervised learning. Our models underperform state-of-the-art models on this dataset that have almost four times the number of parameters. We only experiment with smaller models since tree-based models with dynamic structures (e.g., our semi-supervised and latent syntax models) take longer to train. See d4|for details and discussions about training time.\nTable 4: Classification accuracy on SNLI dataset\nSentence generation. The last task that we consider is natural language generation. Given a sen- tence, the goal is to maximize the probability of generating words in the following sentence. This is a similar setup to the Skip Thought objective (Kiros et al.J|2015), except that we do not generate the previous sentence as well. Given a sentence, we encode it with Tree LSTM to obtain s E R100. We use a bag-of-words model as our decoder, so p(w; | s; V) exp(vT s), where V E R10029,209 and v; E R100 is the i-th column of V. Using a bag-of-words decoder as opposed to a recurrent neural network decoder increases the importance of producing a better representation of the current sentence, since the model cannot rely on a sophisticated decoder with a language model component to predict better. This also greatly speeds up our training time.\nWe use IMDB movie review corpus (Diao et al.]2014) for this experiment, The corpus consists of 280,593, 33,793, and 34,029 reviews in training, development, and test sets respectively. We construct our data using the development and test sets of this corpus. 
For training, we process 33,793 reviews from the original development set to get 441,617 pairs of sentences. For testing. we use 34,029 reviews in the test set (446,471 pairs of sentences). Half of these pairs is used as our development set to tune hyperparamaters, and the remaining half is used as our final test set Our results in Table|5|further demonstrate that methods that learn tree structures perform better than methods that have fixed structures.\nTable 5: Word perplexity on the sentence generation task. We also show perplexity of the mode that does not condition on the previous sentence (unconditional) when generating bags of words for comparison.\nIaatasol Model Acc. # params. 100D-Right to left 79.1 2.3m 100D-Left to right 80.2 2.3m 100D-Bidirectional 80.2 2.6m 100D-Balanced binary tree 77.4 2.3m 100D-Supervised syntax 78.5 2.3m 100D-Semi-supervised syntax 80.2 2.3m 100D-Latent syntax 80.5 2.3m 100D-LSTM (Bowman et a1. 2015 77.6 5.7m 300D-LSTM Bowman et al. 2016 80.6 8.5m 300D-SPINN (Bowman et al.. 2016 83.2 9.2m 1024D-GRU TVendrov et al. 2016 81.4 15.0m 300D-CNN (Mou et al. 2016 82.1 9m 300D-NTI (Munkhdala1 & Yu 2016b 83.4 9.5m 300D-NSE (Munkhdalai & Yu 2016a 84.6 8.5m\nModel Perplexity # params. 100D-Unconditional 105.6 30k 100D-Right to left 101.4 6m 100D-Left to right 101.1 6m 100D-Bidirectional 100.2 6.2m 100D-Balanced binary tree 103.3 6.2m 100D-Supervised syntax 100.8 6m 100D-Semi-supervised syntax 98.4 6m 100D-Latent syntax 99.0 6m\nFigure 2: Examples of tree structures learned by our model which show that the model discover. simple concepts such as noun phrases and verb phrases\nme fami stan outs hom men playi frisb mbe ding ide tWO are ng in the ee park V rs e\nFigure 3: Examples of unconventional tree structures"}, {"section_index": "8", "section_name": "4 DISCUSSION", "section_text": "LearnedStructures. Our results in 3show that our proposed method outperforms competing methods with predefined composition order on all tasks. The right to left model tends to perform worse than the left to right model. This suggests that the left to right composition order, similar to how human reads in practice, is better for neural network models. Our latent syntax method is able to discover tree structures that work reasonably well on all tasks, regardless of whether the task is better suited for a left to right or supervised syntax composition order.\nWe inspect what kind of structures the latent syntax model learned and how closely they match human intuitions. We first compute unlabeled bracketing F1 scores|for the learned structures and parses given by Stanford parser on SNLI and Stanford Sentiment Treebank. In the SNLI dataset, there are 10,000 pairs of test sentences (20,000 sentences in total), while the Stanford Sentiment Treebank test set contains 1,821 test sentences. The F1 scores for the two datasets are 41.73 and 40.51 respectively. For comparisons, F1 scores of a right (left) branching tree are 19.94 (41.37) for SNLI and 12.96 (38.56) for SST.\nWe also manually inspect the learned structures. We observe that in SNLI, the trees exhibit overall. left-branching structure, which explains why the F1 scores are closer to a left branching tree struc-. ture. Note that in our experiments on this corpus, the supervised syntax model does not perform. as well as the left-to-right model, which suggests why the latent syntax model tends to converge. towards the left-to-right model. 
We handpicked two examples of trees learned by our model and show them in Figure[2] We can see that in some cases the model is able to discover concepts such as. noun phrases (e.g., a boy, his sleds) and simple verb phrases (e.g., wearing sunglasses, is frowning). Of course, the model sometimes settles on structures that make little sense to humans. We show two. such examples in Figure[3] where the model chooses to compose playing frisbee in and outside a as. phrases.\nsun wo wea frow drag sled boy thro glas his the no man ring ning S s ugh ses W\nTraining Time. A major limitation of our proposed model is that it takes much longer to train compared to models with predefined structures. We observe that our models only outperforms mod- els with fixed structures after several training epochs; and on some datasets such as SNLI or IMDB. an epoch could take a 5-7 hours (we use batch size 1 since the computation graph needs to be recon- structed for every example at every iteration depending on the samples from the policy network). This is also the main reason that we could only use smaller 100-dimensional Tree LSTM models in"}]
BymIbLKgl
[{"section_index": "0", "section_name": "ACKNOWLEDGMENTS", "section_text": "This project has received funding from the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation program (grant agreement No 664800)\n{paigautam, twerd, ron}@cs.technion.ac.il"}, {"section_index": "1", "section_name": "REFERENCES", "section_text": "M Ackerman. Sophus Lie's 1884 Differential Invariant Paper. Math Sci Press, 1976\nWe propose a metric learning framework for the construction of invariant geo metric functions of planar curves for the Euclidean and Similarity group of trans- formations. We leverage on the representational power of convolutional neural networks to compute these geometric quantities. In comparison with axiomatic constructions, we show that the invariants approximated by the learning archi tectures have better numerical qualities such as robustness to noise, resiliency to sampling, as well as the ability to adapt to occlusion and partiality. Finally, we de velop a novel multi-scale representation in a similarity metric learning paradigm\nJane Bromley, James W Bentz, Leon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard. Sackinger, and Roopak Shah. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence. 7(04):669-688. 1993\nAlexander M Bronstein, Michael M Bronstein, and Ron Kimmel. Numerical geometry of non-rigi shapes. Springer Science & Business Media, 2008"}, {"section_index": "2", "section_name": "INTRODUCTION", "section_text": "The discussion on invariance is a strong component of the solutions to many classical problems in numerical differential geometry. A typical example is that of planar shape analysis where one desires to have a local function of the contour which is invariant to rotations, translations and reflections like the Euclidean curvature. This representation can be used to obtain correspondence between the shapes and also to compare and classify them. However, the numerical construction of such functions from discrete sampled data is non-trivial and requires robust numerical techniques for their stable and efficient computation.\nConvolutional neural networks have been very successful in recent years in solving problems ir. image processing, recognition and classification. Efficient architectures have been studied and de-. veloped to extract semantic features from images invariant to a certain class or category of transfor-. mations. Coupled with efficient optimization routines and more importantly, a large amount of data. a convolutional neural network can be trained to construct invariant representations and semanti-. cally significant features of images as well as other types of data such as speech and language. It. is widely acknowledged that such networks have superior representational power compared to more. principled methods with more handcrafted features such as wavelets, Fourier methods, kernels etc which are not optimal for more semantic data processing tasks..\nRonan Collobert, Samy Bengio, and Johnny Mariethoz. Torch: a modular machine learning software library. Technical report, Idiap, 2002.\nohn Duchi. Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning an stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.\nThomas Fidler, Markus Grasmair, and Otmar Scherzer. Identifiability and reconstruction of shape from integral invariants. 
Inverse Problems and Imaging, 2(3):341-354, 2008\nMatthew A Grayson. The heat equation shrinks embedded plane curves to round points. Journal oj Differential geometry, 26(2):285-314, 1987.\nRaia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recogni. tion (CVPR'06), volume 2, pp. 1735-1742. IEEE, 2006.\nIn Section2|we begin by giving a brief summary of the theory and history of invariant curve repre. sentations. In Section|3|we explain our main contribution of casting the problem into the form which"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 539-546. IEEE, 2005.\nFigure 1: Comparing the axiomatic and learned invariants of a curve\nYann LeCun, Sumit Chopra, and Raia Hadsell. A tutorial on energy-based learning. 2006\nJonathan Masci. Davide Boscaini, Michael Bronstein, and Pierre Vandergheynst. Geodesic con volutional neural networks on riemannian manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 37-45, 2015.\nAn invariant representation of a curve is the set of signature functions assigned to every point of. the curve which does not change despite the action of a certain type of transformation. A powerful. theorem from E. Cartan (Cartan((1983)) and Sophus Lie (Ackerman((1976)) characterizes the space of these invariant signatures. It begins with the concept of arc-length which is a generalized notion of the length along a curve. Given a type of transformation, one can construct an intrinsic arc-. length that is independent of the parameterization of the curve, and compute the curvature with respect to this arc-length. The fundamental invariants of the curve, known as differential invariants (Bruckstein & Netravali](1995), Calabi et al.(1998)) are the set of functions comprising of the curvature and its successive derivatives with respect to the invariant arc-length. These differential. invariants are unique in a sense that two curves are related by the group transformation if and only. if their differential invariant signatures are identical. Moreover, every invariant of the curve is a. c(m)\nFarzin Mokhtarian and Alan K Mackworth. A theory of multiscale, curvature-based shape repre sentation for planar curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14. (8):789-805, 1992.\nHelmut Pottmann, Johannes Wallner, Qi-Xing Huang, and Yong-Liang Yang. Integral invariants for robust geometry processing. Computer Aided Geometric Design, 26(1):37-60, 2009\nGuillermo Sapiro and Allen Tannenbaum. Area and length preserving geometric invariant scale spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(1):67-72, 1995.\nP Cp dp= + y? dp\ndet(Cp, Cpp) XpYpp -YpXpp \\Cp 3 (x3 + y?) 2\nJin Xie, Yi Fang, Fan Zhu, and Edward Wong. Deepshape: Deep learned shape descriptor for 3d shape matching and retrieval. In Proceedings of the IEEE Conference on Computer Vision ana Pattern Recognition, pp. 1275-1283, 2015.\nThe difficulty with differential invariants is their stable numerical computation. Equations [1and 2] involve non-linear functions of derivatives of the curve and this poses serious numerical issues. 
for their practical implementation where noise and poor sampling techniques are involved. Apart from methods likePajdla & Van Gool|(1995) and Weiss(1993), numerical considerations motivated the development of multi-scale representations. These methods used alternative constructions of invariant signatures which were robust to noise. More importantly, they allowed a hierarchical rep- resentation, in which the strongest and the most global components of variation in the contour of the. curve are encoded in signatures of higher scale, and as we go lower, the more localized and rapid. changes get injected into the representation. Two principal methods in this category are scale-space methods and integral invariants. In scale-space methods (Mokhtarian & Mackworth|(1992); Sapiro & Tannenbaum (1995);Bruckstein et al.(1996)), the curve is subjected to an invariant evolution pro- cess where it can be evolved to different levels of abstraction. See Figure[5] The curvature function\nNetwork-Invariant Eucledian-Curvature\nenables training a convolutional neural network for generating invariant signatures to the Euclidean and Similarity group transformations. Section 4|provides a discussion on developing a multi-scale. representation followed by the experiments and discussion in Section|5.\nSiddharth Manay, Byung-Woo Hong, Anthony J Yezzi, and Stefano Soatto. Integral invariant signa tures. In European Conference on Computer Vision, pp. 87-99. Springer, 2004..\nThus, we have the Euclidean differential invariant signatures given by the set {k, Ks, Kss ...} for. every point on the curve. Cartan's theorem provides an axiomatic construction of invariant signatures and the uniqueness property of the theorem guarantees their theoretical validity. Their importance is highlighted from the fact that any invariant is a function of the fundamental differential invariants.\nel: X e {0,1} Curve1: C1 Curve2: C2 Shared Weights Network1: Network2: Output1: Se(C1) Output2: So(C2) Cost: (e)\nFigure 10: (a) Standard 1D Gaussian filters and its derivatives used for curvature and curvature scale space calculations. (b) Some of the filters from the first layer of the network proposed in this paper One can interpret the shapes of the filters in (b) as derivative kernels which are learned from data and therefore adapted to its sampling conditions.\nat each evolved time t is then recorded as an invariant. For example, {k(s, t), ks(s, t), Kss(s, t).. would be the Euclidean-invariant representations at scale t.\nIntegral invariants (Manay et al.(2004);Fidler et al.(2008);Pottmann et al.(2009); Hong & Soatto (2015)) are invariant signatures which compute integral measures along the curve. For example, for. each point on the contour, the integral area invariant computes the area of the region obtained from. the intersection of a ball of radius r placed at that point and the interior of the contour. The integral. nature of the computation gives the signature robustness to noise and by adjusting different radii of the ball r one can associate a scale-space of responses for this invariant.Fidler et al.(2o08) and. Pottmann et al.(2009) provide a detailed treatise on different types of integral invariants and their. properties.\nIt is easy to observe that differential and integral invariants can be thought of as being obtained from non-linear operations of convolution filters. The construction of differential invariants employ. 
filters for which the action is equivalent to numerical differentiation (high pass filtering) whereas integral invariants use filters which act like numerical integrators (low pass filtering) for stabilizing the invariant. This provides a motivation to adopt a learning based approach and we demonstrate that the process of estimating these filters and functions can be outsourced to a learning framework.. We use the Siamese configuration for implementing this idea. Such configurations have been used in signature verification (Bromley et al.[(1993)), face-verification and recognition(Sun et al.(2014); Taigman et al.(2014); Hu et al.(2014)), metric learning (Chopra et al.(2005)), image descriptors (Carlevaris-Bianco & Eustice (2014)), dimensionality reduction (Hadsell et al.(2006)) and also for generating 3D shape descriptors for correspondence and retrieval (Masci et al.(2015); Xie et al. (2015)). In these papers, the goal was to learn the descriptor and hence the similarity metric from. data using notions of only positive and negative examples. We use the same framework for estima-. tion of geometric invariants. However, in contrast to these methods, we contribute an analysis of the output descriptor and provide a geometric context to the learning process. The contrastive loss function driving the training ensures that the network chooses filters which push and pull different features of the curve into the invariant by balancing a mix of robustness and fidelity..\nA planar curve can be represented either explicitly by sampling points on the curve or using an implicit representation such as level sets (Kimmel(2012)). We work with an explicit representa- tion of simple curves (open or closed) with random variable sampling of the points along the curve Thus, every curve is a N 2 array denoting the X and Y coordinates of the N points. We build a convolutional neural network which inputs a curve and outputs a representation or signature for every point on the curve. We can interpret this architecture as an algorithmic scheme of repre senting a function over the curve. However feeding in a single curve is insufficient and instead we run this convolutional architecture in a Siamese configuration (Figure|2) that accepts a curve and a\n0.5 0.5 0.5 202 2 -0.5 0.5 0.5 0.5 3 5 2 2 3 5 0.5 0.5 0.5 0.5 2 x a'x. e 202 32 0.5 -0.5 -0.5 0.5 0.5 2 o2-x2 dx2 g(x,) e 202 52 0.5 -0.5 -0.5 0.5 (a) (b)\ntransformed version (positive) of the curve or an unrelated curve (negative). By using two identica. copies of the same network sharing weights to process these two curves we are able to extract geo metric invariance by using a loss function to require that the two arms of the Siamese configuratior. must produce values that are minimally different for curves which are related by Euclidean transfor mations representing positive examples and maximally different for carefully constructed negative. examples. To fully enable training of our network we build a large dataset comprising of positive. and negative examples of the relevant transformations from a database of curves. We choose tc. 
minimize the contrastive loss between the two outputs of the Siamese network as this directs the network architecture to model a function over the curve which is invariant to the transformation.."}, {"section_index": "4", "section_name": "3.1 LOSS FUNCTION", "section_text": "where is a cross validated hyper-parameter known as margin which defines the metric threshol beyond which negative examples are penalized.."}, {"section_index": "5", "section_name": "3.2 ARCHITECTURE", "section_text": "The network inputs a N 2 array representing the coordinates of N points along the curve. The. sequential nature of the curves and the mostly 1D-convolution operations can also be looked at from. the point of view of temporal signals using recurrent neural network architectures. Here however. we choose instead to use a multistage CNN pipeline. The network, given by one arm of the Siamese. configuration, comprises of three stages that use layer units which are typically considered the basic building blocks of modern CNN architectures. Each stage contains two sequential batches of convo-. lutions appended with rectified linear units (ReLU) and ending with a max unit. The convolutional. unit comprises of convolutions with 15 filters of width 5 as depicted in Figure3 The max unit. computes the maximum of 15 responses per point to yield an intermediate output after each stage. The final stage is followed by a linear layer which linearly combines the responses to yield the final. output. Since, every iteration of convolution results in a reduction of the sequence length, sufficient. padding is provided on both ends of the curve. This ensures that the value of the signature at a point. is the result of the response of the computation resulting from the filter centered around that point.\nConv ReLU Conv ReLU Max Conv ReLU Conv ReLU Max Conv ReLU Conv ReLU Linear 15 15 15 15 15 15 Filters, Filters, Filters, Filters, Filters, Filters, Linear Width=5 Width=5 Width=5 Width=5 Width=5 Width=5 Layer Output Input Sig- Curve nature\nWe employ the contrastive loss function (Chopra et al.(2005); LeCun et al.(2006)) for training our network. The Siamese configuration comprises of two identical networks of Figure 3 computing signatures for two separate inputs of data. Associated to each input pair is a label which indicates whether or not that pair is a positive ( = 1) or a negative ( = 0) example (Figure2). Let C1i. and C2i be the curves imputed to first and second arm of the configuration for the ith example of. the data with label A. Let So(C) denote the output of the network for a given set of weights O for. input curve C. 
The contrastive loss function is given by:.\n=N Xi I|Se(C1i)-So(C2i) lI +(1-Xi) max(0, -l|Se(C1i)-Se(C2i) Il)}\n(O Xi I|Se(C1i)-Se(C2i) lL + (1-i) max(0, -I| Se(C1i)-Se(C2i) ID)} (3) vhere is a cross validated hyper-parameter known as margin which defines the metric threshold\nO } 0.45 0.4 Test Train 0.35 JOII 0.3 0.25 0.2 0.15 M 0.1 AN 0.05 10 20 30 40 50 70 80 90 100 Epoch\nFigure 4: Contours extracted from the MPEG7 Database and the error plot for training"}, {"section_index": "6", "section_name": "3.3 BUILDING REPRESENTATIVE DATASETS AND IMPLEMENTATION", "section_text": "In order to train for invariance, we need to build a dataset with two major attributes: First, it needs to contain a large number of examples of the transformation and second, the curves involved ir the training need to have sufficient richness in terms of different patterns of sharp edges, corners smoothness, noise and sampling factors to ensure sufficient generalizability of the model. To suffi- ciently span the space of Euclidean transformations, we generate random two dimensional rotations by uniformly sampling angles from -, r|. The curves are normalized by removing the mean anc dividing by the standard deviation thereby achieving invariance to translations and uniform scaling The contours are extracted from the shapes of the MPEG7 Database (Latecki et al.(200o)) as showr in first part of Figure 4 It comprises a total of 1400 shapes containing 70 different categories of objects. 700 of the total were used for training and 350 each for testing and validation. The positive examples are constructed by taking a curve and randomly transforming it by a rotation, translatior and reflection and pairing them together. The negative examples are obtained by pairing curves which are deemed dissimilar as explained in Section|4 The contours are extracted and each contour is sub-sampled to 500 points. We build the training dataset of 10, 000 examples with approximately 50% each for the positive and negative examples. The network and training is performed using the Torch libraryCollobert et al.(2002). We trained using Adagrad Duchi et al.(2011) at a learning rate of 5 10-4 and a batch size of 10. We set the contrastive loss hyperparameter margin = 1 and Figure|4 shows the error plot for training and the convergence of the loss to a minimum. The rest of this work describes how we can observe and extend the efficacy of the trained network on new data"}, {"section_index": "7", "section_name": "4 MULTI-SCALE REPRESENTATIONS", "section_text": "A valuable insight for multi-scale representations is provided in the theorems of Gage, Hamiltor and Grayson (Gage & Hamilton(1986);Grayson[(1987)). It says that if we evolve any smooth non intersecting planar curve with mean curvature flow, which is invariant to Euclidean transformations it will ultimately converge into a circle before vanishing into a point. The curvature corresponding tc this evolution follows a profile as shown in Figure[5] going from a possibly noisy descriptive feature to a constant function. In our framework, we observe an analogous behavior in a data-dependen setting. The positive part of the loss function ( = 1) forces the network to push the outputs of the positive examples closer, whereas the negative part (X = 0) forces the weights of network to push the outputs of the negative examples apart, beyond the distance barrier of . 
If the training data does not contain any negative example, it is easy to see that the weights of the network will converge tc a point which will yield a constant output that trivially minimizes the loss function in Equation |3\n0. V 0.45 0.4 Test 0.35 Train JOII 0.3 E 0.25 0.2 0.15 0.1 UN 0.05 10 20 30 40 50 60 70 80 90 100 Epoch\nInvariant representations at varying levels of abstraction have a theoretical interest as well as prac tical importance to them. Enumeration at different scales enables a hierarchical method of analysis which is useful when there is noise and hence stability is desired in the invariant. As mentioned. in Section[2] the invariants constructed from scale-space methods and integral invariants, naturally allow for such a decomposition by construction..\nFigure 5: Curve evolution and the corre- sponding curvature profile\nFigure 6: Experiments with multi-scale representations. Each signature is the output of a network trained on a dataset with training examples formed as per the rows of Table[1] Index1 indicates low and 5 indicates a higher level of abstraction.\nDesigning the negative examples of the training data provides the means to obtain a multi-scale representation. Since we are training for a local descriptor of a curve, that is, a function whose value at a point depends only on its local neighborhood, a negative example must pair curves such that corresponding points on each curve must have different local neighborhoods. One such possibility is to construct negative examples which pair curves with their smoothed or evolved versions as in Table[1 Minimizing the loss function in equation 3|would lead to an action which pushes apart the signatures of the curve and its evolved or smoothed counterpart, thereby injecting the signature with fidelity and descriptiveness. We construct separate data-sets where the negative examples are drawn as shown in the rows of Tablq1 and train a network model for each of them using the loss function 3 In our experiments we perform smoothing by using a local polynomial regression with weighted. linear least squares for obtaining the evolved contour. Figure 6|shows the outputs of these different networks which demonstrate a scale-space like behavior..\nAbility to handle low signal to noise ratios and efficiency of computation are typical qualities desired in a geometric invariant. To test the numerical stability and robustness of the invariant signatures\nPositive Example Negative Example Scale Index Low High\nPositive Example\nTable 1: Examples of training pairs for different scales. Each row indicates the pattern of training examples for a different scale.\nDifferential Invariant Integral Invariant Network Invariant Differential Invariant Integral Invariant Network Invariant\nFigure 7: Stability of different signatures in varying levels noise and Euclidean transformations. The correspondence for the shape and the signature is the color. All signatures are normalized..\nwe designed two experiments. In the first experiment, we add increasing levels of zero-mean Gaus sian noise to the curve and compare the three types of signatures: differential (Euclidean curvature) integral (integral area invariant) and the output of our network (henceforth termed as network in variant) as shown in Figure[7] Apart from adding noise, we also rotate the curve to obtain a bette assessment of the Euclidean invariance property. 
In Figure[8] we test descriptiveness of the signatur under noisy conditions in a shape retrieval task for a set of 30 shapes with 6 different categories. Fo every curve, we generate 5 signatures at different scales for the integral and the network invariar and use them as a representation for that shape. We use the Hausdorff distance as a distance measur (Bronstein et al.(2008)) between the two sets of signatures to rank the shapes for retrieval. Figure and|8|demonstrate the robustness of the network especially at high noise levels..\nWe have demonstrated a method to learn geometric invariants of planar curves. Using just positive and negative examples of Euclidean transformations, we showed that a convolutional neural network\n0.5 100 200 300 400 500 100 200 300 400 500 100 200 300 400 500 0.5 0.5 100 200 300 400 500 100 200 300 400 500 100 200 300 400 500 0.5 -0.5 0 100 200 300 400 500 100 200 300 400 500 100 200 300 400 500 0.5 0.5 -0.5 0.5 0 100 200 300 400 500 100 200 300 400 500 U 100 200 300 400 500\nIn the second experiment, we decimate a high resolution contour at successive resolutions by ran domly sub-sampling and redistributing a set of its points (marked blue in Figure 9) and observe the signatures at certain fixed points (marked red in Figure9) on the curve. Figure 9 shows that the network is able to handle these changes in sampling and compares well with the integral invariant Figures|7|and Figure9|represent behavior of geometric signatures for two different tests: large noise for a moderate strength of signal and low signal for a moderate level of noise..\nNetwork Invariant, = 0.1 Integral Invariant, = 0.1 0.9 -Network Invariant, = 0.3 --Integral Invariant, = 0.3 0.8 uo 0.7 0.6 0.5 0.3 1 0.2 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Recall\nFigure 8: 5 shape contours of 6 different categories and the shape retrieval results for this set for different noise levels.\n70% 50% 30% 20% 10% 5% : *# + Differential Invariant 70% 50% 30% 20% 10% 5% Integral Invariant. 70% 50% 30% 20% 10% 5% Network Invariant. . 70% 50% 30% 20% 0.2 10% 0.4 5% 0.6\n70% 50% 30%\n0.3 0.2 70% 0.1 50% 30% -0.1 20% 10% -0.2 5% 20 40 60 Integral Invariant 0.6 0. 70% 50% 30% 20% -0.2 10% -0.4 5% -0.6 20 40 60 Network Invariant 0.6 0.A 70% 0.2 50% 30% -0.2 20% 10% -0.4 5% -0.6 10 20 30 40 50 60\nFigure 9: Testing robustness of signatures to different sampling conditions. The signatures are evaluated at the fixed red points on each contour and the density and distribution of the blue points along the curve is varied from 70% to 5% of the total number of points of a high resolution curve.\nis able to effectively discover and encode transform-invariant properties of curves while remaining. numerically robust in the face of noise. By using a geometric context to the training process we were. able to develop novel multi-scale representations from a learning based approach without explicitly"}]
rJ8Je4clg
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Frank S. He\nDepartment of Computer Science University of Illinois at Urbana-Champaign Zhejiang University\nfrankheshibi@qmail.com\nDepartment of Electrical and Computer Engineering University of Illinois at Urbana-Champaign\nWe propose a novel training algorithm for reinforcement learning which com bines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel tech nique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improve ments in both training time and accuracy."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The recent advances of supervised deep learning techniques (LeCun et al., 2015) in computer vision speech recognition and natural language processing have tremendously improved the performance on challenging tasks, including image processing (Krizhevsky et al., 2012), speech-based transla- tion (Sutskever et al., 2014) and language modeling (Hinton et al., 2012). The core idea of deep learning is to use artificial neural networks to model complex hierarchical or compositional data abstractions and representations from raw input data (Bengio et al., 2013). However, we are still far from building intelligent solutions for many real-world challenges, such as autonomous driv- ing, human-computer interaction and automated decision making, in which software agents need tc consider interactions with a dynamic environment and take actions towards goals. Reinforcement learning (Bertsekas & Tsitsiklis, 1996; Powell, 2011; Sutton & Barto, 1998; Kaelbling et al., 1996) studies these problems and algorithms which learn policies to make decisions so as to maximize a reward signal from the environment. One of the promising algorithms is Q-learning (Watkins, 1989: Watkins & Dayan, 1992). Deep reinforcement learning with neural function approximation (Tsit- siklis & Roy, 1997; Riedmiller, 2005; Mnih et al., 2013; 2015), possibly a first attempt to combine deep learning and reinforcement learning, has been proved to be effective on a few problems which classical AI approaches were unable to solve. Notable examples of deep reinforcement learning include human-level game playing (Mnih et al., 2015) and AlphaGo (Silver et al., 2016).\nDespite these successes, its high demand of computational resources makes deep reinforcemer learning not yet applicable to many real-world problems. For example, even for an Atari game, th deep Q-learning algorithm (also called deep Q-networks, abbreviated as DQN) needs to play up t hundreds of millions of game frames to achieve a reasonable performance (van Hasselt et al., 2015 AlphaGo trained its model using a database of game records of advanced players and, in additior about 30 million self-played game moves (Silver et al., 2016). The sheer amount of required com putational resources of current deep reinforcement learning algorithms is a major bottleneck for it applicability to real-world tasks. Moreover, in many tasks, the reward signal is sparse and delayed thus making the convergence of learning even slower.\nM. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. J. of Artificial Intelligence Research, 2013. Y. Bengio, A. 
Courville, and P. Vincent. Representation Learning: A Review and New Perspectives. PAMI, 2013. D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996. C. Blundell, B. Uria, A. Pritzel, Y. Li, A. Ruderman, J. Z. Leibo, J. Rae, D. Wierstra, and D. Hassabis. Model Free Episodic Control. In http://arxiv.org/pdf/1606.04460v1.pdf, 2016. G. E. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012. L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. JMLR, 1996. A. Krizhevsky, I. Sutskever, , and G. E. Hinton. Imagenet classification with deep convolutional neural net- works. In Proc. NIPS, 2012. S. Lange and M. Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In Proc. Int. Jt. Conf. Neural. Netw., 2010. Y. LeCun, Y. Bengio, and G. E. Hinton. Deep learning. Nature, 2015 L.-J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 1992. V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with Deep Reinforcement Learning. In NIPS Deep Learning Workshop, 2013. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 2015. V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asyn- chronous Methods for Deep Reinforcement Learning. In https://arxiv.org/abs/1602.01783, 2016. R. Munos, T. Stepleton, A. Harutyunyan, and M. G. Bellemare. Safe and efficient off-policy reinforcement learning. In Proc. NIPS, 2016. A. Nair, P. Srinivasan, S. Blackwell, C. Alcicek, R. Fearon, V. Panneershelvam A. De Maria, M. Suleyman, C. Beattie, S. Petersen, S. Legg, V. Mnih, K. Kavukcuoglu, and D. Silver. Massively Parallel Methods for Deep Reinforcement Learning. In https://arxiv.org/abs/1507.04296, 2015. I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep Exploration via Bootstrapped DQN. In http://arxiv.org/abs/1602.04621, 2016. W. P. Powell. Approximate Dynamic Programming. Wiley, 2011. M. Riedmiller. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In Proc. ECML, 2005. T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized Experience Replay. In Proc. ICLR, 2016. D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 2016. I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Proc. NIPS 2014. R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. S. Thrun and A. Schwartz. Issues in using function approxima- tion for reinforcement learning. In Proc. Connectionist Models Summer School, 1993. J. N. Tsitsiklis and B. Van Roy. 
An analysis of temporal-difference learning with function approximation. 1997 H. van Hasselt. Double Q-learning. In Proc. NIPs, 2010. H. van Hasselt, A. Guez, and D. Silver. Deep Reinforcement Learning with Double Q-learning. In https://arxiv.org/abs/1509.06461, 2015. Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling Network Architectures for Deep Reinforcement Learning. In https://arxiv.org/abs/1511.06581, 2015. C. J. C. H. Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989. C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 1992. P. Wawrzynski. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural Networks, 2009.\nDepartment of Computer Science. University of Illinois at Urbana-Champaigr liu30l@illinois.edu\nDepartment of Computer Science"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Here we propose optimality tightening, a new technique to accelerate deep Q-learning by fast reward propagation. While current deep Q-learning algorithms rely on a set of experience replays, they only consider a single forward step for the Bellman optimality error minimization, which becomes highly inefficient when the reward signal is sparse and delayed. To better exploit long-term high-reward strategies from past experience, we design a new algorithm to capture rewards from both forward and backward steps of the replays via a constrained optimization approach. This encourages faster reward propagation which reduces the training time of deep Q-learning."}, {"section_index": "3", "section_name": "OPTIMALITY TIGHTENING FOR STOCHASTIC ENVIRONMENTS", "section_text": "Similar to the inequalities we obtained for deterministic environments, we can also derive the fol lowing sequence of inequalities holds for the optimal action-value function Q* (with the greedy policy), under the expectation of the environmental dynamics:.\nWe evaluate our proposed approach using the Arcade learning environment (Bellemare et al., 2013) and show that our new strategy outperforms competing techniques in both accuracy and training. time on 30 out of 49 games despite being trained with significantly fewer data frames"}, {"section_index": "4", "section_name": "2 RELATED WORK", "section_text": "So we have the following. expectation constraint. on traiectories from state s: and action a.\nV EQ* maxQ*(Sj+k+1,a))] 0 Si,aj i=0\nNonetheless, the original DQN algorithm required millions of training steps to achieve human level performance on Atari games. To improve the stability, recently, double Q-learning was com bined with deep neural networks, with the goal to alleviate the overestimation issue observed ir Q-learning (Thrun & Schwartz, 1993; van Hasselt, 2010; van Hasselt et al., 2015). The key idea is to use two Q-networks for the action selection and Q-function value calculation, respectively. The greedy action of the target is first chosen using the current Q-network parameters, then the targe value is computed using a set of parameters from a previous iteration. 
Another notable advance is 'prioritized experience replay' (Schaul et al., 2016), or 'prioritized sweeping' for deep Q-learning. The idea is to increase the replay probability of experience tuples that have a high expected learning progress, measured by temporal difference errors.

We can also use this series of inequalities to define upper bounds, on trajectories to state s_j and action a_j:

U_{j,k} = γ^{−k−1} Q*(s_{j−k−1}, a_{j−k−1}) − ∑_{i=0}^{k} γ^{i−k−1} r_{j−k−1+i},

with the corresponding expectation constraint

∀k: E[Q*(s_j, a_j) − U_{j,k}] ≤ 0.

With these expectation constraints, we can formulate a constrained optimization problem as follows:

min_θ ∑_{(s_j,a_j,r_j,s_{j+1})∈B} (Q_θ(s_j, a_j) − y_j)²
s.t. min_k E[Q_θ(s_j, a_j) − L_{j,k}] ≥ 0 ∀(s_j, a_j) ∈ B
     max_k E[Q_θ(s_j, a_j) − U_{j,k}] ≤ 0 ∀(s_j, a_j) ∈ B.

In addition to the aforementioned variants of Q-learning, other network architectures have been proposed. The dueling network architecture applies an extra network structure to learn the importance of states and uses advantage functions (Wang et al., 2015). A distributed version of the deep actor-critic algorithm without experience replay was introduced very recently (Mnih et al., 2016). It deploys multiple threads learning directly from current transitions. The approach is applicable to both value-based and policy-based methods, off-policy as well as on-policy methods, and in discrete as well as in continuous domains. The model-free episodic control approach evaluates state-action pairs based on episodic memory using k-nearest neighbors with hashing functions (Blundell et al., 2016). Bootstrapped deep Q-learning carries out temporally-extended (or deep) exploration, thus leading to much faster learning (Osband et al., 2016).

Applying the quadratic penalty function method, we obtain the objective

min_θ ∑_{(s_j,a_j,r_j,s_{j+1})∈B} (Q_θ(s_j, a_j) − y_j)² + λ((max_k E[L_{j,k} − Q_θ(s_j, a_j)])+)² + λ((max_k E[Q_θ(s_j, a_j) − U_{j,k}])+)².

It is easy to see that, since we have trajectory samples in the replay memory which were drawn under the environmental dynamics, we can perform stochastic optimization using these trajectories. In this way, a sample of this upper bound is identical to that in the deterministic setting in Eq. (4). As a result, our proposed algorithm can be used to optimize an upper bound of the above constrained optimization in stochastic environments.

Please note that here we provide a mathematical derivation of our approach for stochastic environments. We expect that it would work in practice, but due to time constraints and the lack of good stochastic simulators, we cannot provide any empirical results here.

More precisely, consider an agent operating over time t ∈ {1, ..., T}. At time t the agent is in an environment state s_t and reacts upon it by choosing action a_t ∈ A. The agent will then observe a new state s_{t+1} and receive a numerical reward r_t ∈ R. Throughout, we assume the set of possible actions, i.e., the set A, to be discrete.

There have been a number of approaches improving the stability, convergence and runtime of deep reinforcement learning since deep Q-learning, also known as deep Q-network (DQN), was first proposed (Mnih et al., 2013; 2015). DQN combined techniques such as deep learning, reinforcement learning and experience replays (Lin, 1992; Wawrzynski, 2009).

Our fast reward propagation differs from all of the aforementioned approaches. The key idea of our method is to propagate delayed and sparse rewards during Q-network training, and thus greatly improve the efficiency and performance. We formulate this propagation step via a constrained program.
Note that our program is also different from earlier work on off-policy Q*(λ) algorithms with eligibility traces and n-step Q-learning (Munos et al., 2016; Watkins, 1989; Mnih et al., 2016), which have been recently shown to perform poorly when used for training deep Q-networks on Atari games.

By applying Jensen's inequality, we are able to obtain an upper bound by first exchanging the expectation with the max and then exchanging the expectation with the rectifier function, because both the max function and the rectifier function are convex:

min_θ ∑_{(s_j,a_j,r_j,s_{j+1})∈B} (Q_θ(s_j, a_j) − y_j)² + λ E[((max_k L_{j,k} − Q_θ(s_j, a_j))+)²] + λ E[((Q_θ(s_j, a_j) − min_k U_{j,k})+)²].

Reinforcement learning considers agents which are able to take a sequence of actions in an environment. By taking actions and experiencing at most one scalar reward per action, their task is to learn a policy which allows them to act such that a high cumulative reward is obtained over time.

A well established technique to address the aforementioned reinforcement learning task is Q-learning (Watkins, 1989; Watkins & Dayan, 1992). Generally, Q-learning algorithms maintain an action-value function, often also referred to as Q-function, Q(s, a). Given a state s, the action-value function provides a 'value' for each action a ∈ A which estimates the expected future reward if action a ∈ A is taken. The estimated future reward is computed based on the current state s or a series of past states s_t if available.

Table S1: Raw scores across 49 games, using 30 no-op start evaluation (5 minutes emulator time, 18000 frames, ε = 0.05). Results of DQN are taken from Mnih et al. (2015).

Game / Random / Human / DQN 200M / Ours 10M
Alien 227.80 6875 3069 1864
Amidar 5.8 1676 739.5 565.67
Assault 222.4 1496 3359 5142.37
Asterix 210 8503 6012 5408.33
Asteroids 719.1 13157 1629 1481.67
Atlantis 12850 29028 85641 316766.67
Bank Heist 14.2 734.4 429.7 596
Battle Zone 2360 37800 26300 30800
Beam Rider 363.9 5775 6846 8069
Bowling 23.1 154.8 42.4 49.3
Boxing 0.1 4.3 71.8 81.17
Breakout 1.7 31.8 401.2 229.79
Centipede 2091 11963 8309 4470.06
Chopper Command 811 9882 6687 6360
Crazy Climber 10781 35411 114103 114146
Demon Attack 152.1 3401 9711 5738.67
Double Dunk -18.6 -15.5 -18.1 -10.07
Enduro 0 309.6 301.8 672.83
Fishing Derby -91.7 5.5 -0.8 5.27
Freeway 0 29.6 30.3 31.3
Frostbite 65.2 4335 328.3 3974.11
Gopher 257.6 2321 8520 4660
Gravitar 173 2672 306.7 346.67
H.E.R.O 1027 25763 19950 19975
Ice Hockey -11.2 0.9 -1.6 -3.43
Jamesbond 29 406.7 576.7 1088.33
Kangaroo 52 3035 6740 11716.67
Krull 1598 2395 3805 9461.1
Kung-Fu Master 258.5 22736 23270 27820
Montezuma's Revenge 0 4376 0 23.33
Ms. Pacman 307.3 15693 2311 1805
Name This Game 2292 4076 7257 7314.67
Pong -20.7 9.3 18.9 19.4
Private Eye 24.9 69571 1788 342.37
Q*Bert 163.9 13455 10596 12355
River Raid 1339 13513 8316 8028.33
Road Runner 11.5 7845 18257 29346.67
Robotank 2.2 11.9 51.6 34.5
Seaquest 68.4 20182 5286 4070
Space Invaders 148 1652 1976 995
Star Gunner 664 10250 57997 16653.95
Tennis -23.8 -8.9 -2.5 -1
Time Pilot 3568 5925 5947 5423.33
Tutankham 11.4 167.6 186.7 232
Up and Down 533.4 9082 8456 14406
Venture 0 1188 380 286.67
Video Pinball 16257 17298 42684 74873.2
Wizard of Wor 563.5 4757 3393 4716.67
Zaxxon 32.5 9173 4977 10598

The core idea of Q-learning is the use of the Bellman equation as a characterization of the optimal future reward function Q* via a state-action-value function:

Q*(s_t, a) = E[r_t + γ max_{a'} Q*(s_{t+1}, a')].   (1)

Hereby the expectation is taken w.r.t. the distribution of state s_{t+1} and reward r_t obtained after taking action a, and γ is a discount factor.
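As an illustration of Eq. (1), a minimal tabular Q-learning update that moves Q(s, a) toward the sampled one-step Bellman target; function and variable names are illustrative:

import numpy as np

def bellman_backup(Q, s, a, r, s_next, gamma, lr=0.1):
    # Sampled version of Eq. (1): the target r + gamma * max_a' Q(s', a')
    # estimates the expectation in the Bellman equation with one transition.
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += lr * (target - Q[s, a])
    return Q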
Intuitively, the reward for taking action a plus the best future reward should equal the best total return from the current state.

The choice of Q-function is crucial for the success of Q-learning algorithms. While classical methods use linear Q-functions based on a set of hand-crafted features of the state, more recent approaches use nonlinear deep neural networks to automatically mine intermediate features from the state (Riedmiller, 2005; Lange & Riedmiller, 2010; Mnih et al., 2013; 2015). This change has been shown to be very effective for many applications of reinforcement learning. However, automatic mining of intermediate representations comes at a price: larger quantities of data and more computational resources are required. Even though it is sometimes straightforward to extract large amounts of data, e.g., when training on video games, for successful optimization it is crucial that the algorithms operate on un-correlated samples from a dataset D for stability. A technique called "experience replay" (Lin, 1992; Wawrzynski, 2009) encourages this property and quickly emerged as a standard step in the well-known deep Q-learning framework (Mnih et al., 2013; 2015). Experience replays are stored as a dataset D = {(s_j, a_j, r_j, s_{j+1})} which contains state-action-reward-future-state tuples (s_j, a_j, r_j, s_{j+1}), including past observations from previous plays.

The characterization of optimality given in Eq. (1) combined with an "experience replay" dataset D results in the following iterative algorithmic procedure (Mnih et al., 2013; 2015): start an episode in the initial state s_0; sample a mini-batch of tuples B = {(s_j, a_j, r_j, s_{j+1})} ⊆ D; compute and fix the targets y_j = r_j + γ max_a Q_{θ⁻}(s_{j+1}, a) for each tuple using a recent estimate Q_{θ⁻} (the maximization is only considered if s_j is not a terminal state); update the Q-function by optimizing the following program w.r.t. the parameters θ, typically via stochastic gradient descent:

min_θ ∑_{(s_j,a_j,r_j,s_{j+1})∈B} (Q_θ(s_j, a_j) − y_j)².   (2)

After having updated the parameters of the Q-function, we perform an action simulation, either choosing an action at random with a small probability ε, or by following the strategy arg max_a Q_θ(s_t, a) which is currently estimated. This strategy is also called the ε-greedy policy. We then obtain the actual reward r_t. Subsequently we augment the replay memory with the new tuple (s_t, a_t, r_t, s_{t+1}) and continue the simulation until this episode terminates or reaches an upper limit of steps, and we restart a new episode. When optimizing w.r.t. the parameter θ, a recent Q-network is used to compute the target y_j = r_j + γ max_a Q_{θ⁻}(s_{j+1}, a). This technique is referred to as 'semi-gradient descent,' i.e., the dependence of the target on the parameter θ is ignored."}, {"section_index": "5", "section_name": "FAST REWARD PROPAGATION VIA OPTIMALITY TIGHTENING", "section_text": "We present our quantitative results in Table S1 and Table S2. We also illustrate the normalized score provided in Eq. (6) over the number of episodes in Fig. S1.

Investigating the cost function given in Eq. (2) more carefully, we observe that it operates on a set of short one-step sequences, each characterized by the tuple (s_j, a_j, r_j, s_{j+1}).
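A minimal sketch of the mini-batch objective of Eq. (2) with fixed ('semi-gradient') targets; q_theta and q_theta_minus stand for the current and older Q-networks and are assumed interfaces of this sketch, not the paper's implementation:

def bellman_loss(q_theta, q_theta_minus, batch, gamma):
    # Sum of squared Bellman errors over the mini-batch B of Eq. (2).
    # Each element of 'batch' is a tuple (s, a, r, s_next, terminal);
    # targets are computed from theta^- and treated as constants.
    loss = 0.0
    for s, a, r, s_next, terminal in batch:
        y = r if terminal else r + gamma * max(q_theta_minus(s_next))
        loss += (q_theta(s)[a] - y) ** 2
    return loss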
Intuitively, each step encourages an update of the parameters θ such that the action-value function for the chosen action a_j, i.e., Q_θ(s_j, a_j), is closer to the obtained reward plus the best achievable future value, i.e., y_j = r_j + γ max_a Q(s_{j+1}, a). As we expect from the Bellman optimality equation, it is instructive to interpret this algorithm as propagating reward information from time j + 1 backwards to time j.

To understand the shortcomings of this procedure, consider a situation where the agent only receives a sparse and delayed reward once reaching a target in a maze. Further let P characterize the shortest path from the agent's initial position to the target. For a long time, no real reward is available, and the aforementioned algorithm propagates randomly initialized future rewards. Once the target is reached, real reward information is available. Due to the cost function and its property of propagating reward time-step by time-step, it is immediately apparent that it takes at least an additional O(|P|) iterations until the observed reward impacts the initial state.

In the following we propose a technique which increases the speed of propagation and achieves improved convergence for deep Q-learning. We achieve this improvement by taking advantage of longer state-action-reward sequences which are readily available in the "experience replay" memory. Not only do we propagate information from time instances in the future to our current state, but also will we pass information from states several steps in the past. Even though we expect to see substantial improvements on sequences where rewards are sparse or only available at terminal states, we also demonstrate significant speedups for situations where rewards are obtained frequently. This is intuitive, as the Q-function represents an estimate for any reward encountered in the future. Faster propagation of future and past rewards to a particular state is therefore desirable.

Subsequently we discuss our technique for fast reward propagation, a new deep Q-learning algorithm that exploits longer state-transitions in experience replays by tightening the optimization via constraints. For notational simplicity, we assume that the environmental dynamics is deterministic, i.e., the new state and the reward are solely determined by the current state and action. It is possible to show that mathematically our proposed approach also approximately works in stochastic environments. Please see details in the appendix. From the Bellman optimality equation we know that the following series of equalities holds for the optimal Q-function Q*:

Q*(s_j, a_j) = r_j + γ max_a Q*(s_{j+1}, a) = r_j + γ max_a [r_{j+1} + γ max_{a'} [r_{j+2} + γ max_{a''} Q*(s_{j+3}, a'')]] = ...

Evaluating such a sequence exactly is not possible in a reinforcement learning setting, since the enumeration of intermediate states s_{j+i} requires exponential time complexity O(|A|^k). It is however possible to take advantage of the episodes available in the replay memory D by noting that the following sequence of inequalities holds for the optimal action-value function Q* (with the greedy policy), irrespective of whether the policy generating the sequence of actions a_j, a_{j+1}, etc., which results in rewards r_j, r_{j+1}, etc., is optimal or not:

Q*(s_j, a_j) = r_j + γ max_a Q*(s_{j+1}, a) ≥ ... ≥ ∑_{i=0}^{k} γ^i r_{j+i} + γ^{k+1} max_a Q*(s_{j+k+1}, a) ≡ L_{j,k}.

Note the definition of the lower bounds L_{j,k} for sample j and time horizon k in the aforementioned series of inequalities.
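A minimal sketch of the lower bounds L_{j,k} computed from a stored episode; rewards and states are the episode arrays, q_theta_minus maps a state to its vector of action values, and j + K + 1 is assumed to lie within the episode (illustrative interfaces):

def lower_bounds(rewards, states, q_theta_minus, j, K, gamma):
    # L_{j,k} = sum_{i=0}^{k} gamma^i * r_{j+i}
    #         + gamma^(k+1) * max_a Q_{theta^-}(s_{j+k+1}, a),  k = 1..K
    bounds = []
    for k in range(1, K + 1):
        ret = sum(gamma ** i * rewards[j + i] for i in range(k + 1))
        ret += gamma ** (k + 1) * max(q_theta_minus(states[j + k + 1]))
        bounds.append(ret)
    return bounds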
We can also use this series of inequalities to define upper bounds. To see this, note that

γ^{−k−1} Q*(s_{j−k−1}, a_{j−k−1}) − ∑_{i=0}^{k} γ^{i−k−1} r_{j−k−1+i} − Q*(s_j, a_j) ≥ 0,

which follows from the definition of the lower bound by dropping the maximization over the actions and a change of indices from j → j − k − 1. Reformulating the inequality yields an upper bound U_{j,k} for sample j and time horizon k by fixing state s_j and action a_j as follows:

U_{j,k} = γ^{−k−1} Q*(s_{j−k−1}, a_{j−k−1}) − ∑_{i=0}^{k} γ^{i−k−1} r_{j−k−1+i} ≥ Q*(s_j, a_j).

In contrast to classical techniques which optimize the Bellman criterion given in Eq. (2), we propose to optimize it subject to the constraints Q_θ(s_j, a_j) ≥ L_max = max_{k∈{1,...,K}} L_{j,k}, which defines the largest lower bound, and Q_θ(s_j, a_j) ≤ U_min = min_{k∈{1,...,K}} U_{j,k}, which specifies the smallest upper bound. Hereby, L_{j,k} and U_{j,k} are computed using the Q-function Q_{θ⁻} with a recent estimated parameter θ⁻ rather than the unknown optimal Q-function Q*, and the integer K specifies the number of future and past time steps which are considered. Also note that the target used in the Bellman equation is obtained from y_j = L_{j,0} = r_j + γ max_a Q_{θ⁻}(s_{j+1}, a). In this way, we ignore the dependence of the bounds and the target on the parameter θ to stabilize the training. Taking all the aforementioned definitions into account, we propose the following program for reinforcement learning tasks:

min_θ ∑_{(s_j,a_j,r_j,s_{j+1})∈B} (Q_θ(s_j, a_j) − y_j)²
s.t. Q_θ(s_j, a_j) ≥ L_max ∀(s_j, a_j) ∈ B
     Q_θ(s_j, a_j) ≤ U_min ∀(s_j, a_j) ∈ B.   (3)

Output: Parameters θ of a Q-function
Initialize: θ randomly, set θ⁻ = θ
for episode ← 1 to M do
    initialize s_1
    for t ← 1 to T do
        Choose action a_t according to the ε-greedy strategy
        Observe reward r_t and next state s_{t+1}
        Store the tuple (s_t, a_t, r_t, ·, s_{t+1}) in replay memory D
        Sample a mini-batch of tuples B = {(s_j, a_j, r_j, R_j, s_{j+1})} from replay memory D
        Update θ with one gradient step of the cost function given in Eq. (4)
        Reset θ⁻ = θ every C steps
    end
    for t ← T to 1 do
        Compute R_t = r_t + γ R_{t+1}
        Insert R_t into the corresponding tuple in replay memory D
    end
end
Algorithm 1: Our algorithm for fast reward propagation in reinforcement learning tasks.

Table S2: Normalized scores (in percent, cf. Eq. (6)) across 49 games for DQN (200M frames) and our method (10M frames).

Game / DQN 200M / Ours 10M
Alien 42.74% 24.62%
Amidar 43.93% 33.52%
Assault 246.27% 386.31%
Asterix 69.96% 62.68%
Asteroids 7.32% 6.13%
Atlantis 449.94% 1878.60%
Bank Heist 57.69% 80.78%
Battle Zone 67.55% 80.25%
Beam Rider 119.79% 142.39%
Bowling 14.65% 19.89%
Boxing 1707.14% 1930.24%
Breakout 1327.24% 757.77%
Centipede 62.99% 24.10%
Chopper Command 64.78% 61.17%
Crazy Climber 419.50% 419.67%
Demon Attack 294.22% 171.95%
Double Dunk 16.13% 275.16%
Enduro 97.48% 217.32%
Fishing Derby 93.52% 99.76%
Freeway 102.36% 105.74%
Frostbite 6.16% 91.55%
Gopher 400.43% 213.36%
Gravitar 5.35% 6.95%
H.E.R.O 76.50% 76.60%
Ice Hockey 79.34% 64.22%
Jamesbond 145.00% 280.47%
Kangaroo 224.20% 391.04%
Krull 276.91% 986.59%
Kung-Fu Master 102.38% 122.62%
Montezuma's Revenge 0% 0.53%
Ms. Pacman 13.02% 9.73%
Name This Game 278.31% 281.54%
Pong 132% 133.67%
Private Eye 2.54% 0.46%
Q*Bert 78.49% 91.73%
River Raid 57.31% 54.95%
Road Runner 232.92% 374.48%
Robotank 509.28% 332.99%
Seaquest 25.94% 19.90%
Space Invaders 121.54% 56.31%
Star Gunner 598.10% 166.81%
Tennis 142.95% 153.02%
Time Pilot 100.93% 78.72%
Tutankham 112.23% 141.23%
Up and Down 92.68% 162.38%
Venture 31.99% 24.13%
Video Pinball 2538.62% 5630.76%
Wizard of Wor 67.47% 99.04%
Zaxxon 54.09% 115.59%
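Reusing a lower_bounds helper like the one sketched earlier, the quantities L_max and U_min entering the constraints of Eq. (3) can be computed from a stored trajectory as follows; actions holds the episode's actions, and all interfaces are illustrative:

def l_max_u_min(rewards, states, actions, q_theta_minus, j, K, gamma):
    # Largest lower bound over k = 1..K.
    l_max = max(lower_bounds(rewards, states, q_theta_minus, j, K, gamma))
    # Smallest upper bound:
    # U_{j,k} = gamma^(-k-1) * Q_{theta^-}(s_{j-k-1}, a_{j-k-1})
    #         - sum_{i=0}^{k} gamma^(i-k-1) * r_{j-k-1+i}
    u = []
    for k in range(1, K + 1):
        q_past = q_theta_minus(states[j - k - 1])[actions[j - k - 1]]
        disc = sum(gamma ** (i - k - 1) * rewards[j - k - 1 + i]
                   for i in range(k + 1))
        u.append(gamma ** (-k - 1) * q_past - disc)
    return l_max, min(u)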
Figure S1: Convergence of the mean and median of the normalized percentages on 49 games.

This program differs from the classical approach given in Eq. (2) via the constraints, which is crucial. Intuitively, the constraints encourage faster reward propagation, as we show next, and result in tremendously better results, as we will demonstrate empirically in Sec. 5.

Before doing so we describe our optimization procedure for the constrained program in Eq. (3) more carefully. The cost function is generally non-convex in the parameters θ, and so are the constraints. We therefore make use of a quadratic penalty method to reformulate the program into

min_θ ∑_{(s_j,a_j,r_j,s_{j+1})∈B} [(Q_θ(s_j, a_j) − y_j)² + λ((L_max − Q_θ(s_j, a_j))+)² + λ((Q_θ(s_j, a_j) − U_min)+)²],   (4)

where λ is a penalty coefficient and (x)+ = max(0, x) is the rectifier function. Augmenting the cost function with λ((L_max − Q_θ(s_j, a_j))+)² and/or λ((Q_θ(s_j, a_j) − U_min)+)² results in a penalty whenever any optimality bounding constraint gets violated. The quadratic penalty function is chosen for simplicity. The penalty coefficient λ can be set as a large positive value or adjusted in an annealing scheme during training. In this work, we fix its value, due to time constraints. We optimize this cost function with stochastic (sub-)gradient descent using an experience replay memory from which we randomly draw samples, as well as their successors and predecessors. We emphasize that the derivatives correcting the prediction of Q(s_j, a_j) not only depend on the Q-function from the immediately successive time step Q(s_{j+1}, a) stored in the experience replay memory, but also on more distant time instances if constraints are violated. Our proposed formulation and the resulting optimization technique hence encourage faster reward propagation, and the number of time steps depends on the constant K and the quality of the current Q-function. We summarize the proposed method in Algorithm 1.

The computational complexity of the proposed approach increases with the number of considered time steps K, since additional forward passes are required to compute the bounds L_max and U_min. However, we can increase the memory size on the GPU to compute both the bounds and targets in a single forward pass if K is not too large. If at all a problem, we can further alleviate this increase by randomly sampling a subset of the constraints rather than exhaustively using all of them. More informed strategies regarding the choice of constraints are possible as well, since we may expect lower bounds in the more distant future to have a larger impact early in the training. In contrast, once the algorithm is almost converged we may expect lower bounds close to the considered time-step to have bigger impact.

To efficiently compute the discounted reward over multiple time steps we add a new element to the experience replay structure. Specifically, in addition to state, action, reward and next state for time-step j, we also store the real discounted return R_j, which is the discounted cumulative return achieved by the agent in its game episode. R_j is computed via R_j = ∑_{τ=j}^{T} γ^{τ−j} r_τ, where T is the end of the episode and γ is the discount factor. R_j is then inserted in the replay memory after the termination of the current episode or after reaching the limit of steps. All in all, the structure of our experience replay memory consists of tuples of the form (s_j, a_j, r_j, R_j, s_{j+1}). In practice, we also found that incorporating R_j in the lower bound calculation can further improve the stability of the training.
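The stored return R_j can be filled in with a single backward sweep once an episode finishes, exactly as in the second loop of Algorithm 1; a minimal sketch:

def discounted_returns(rewards, gamma):
    # Backward recursion R_t = r_t + gamma * R_{t+1} from Algorithm 1,
    # equivalent to R_j = sum_{tau=j}^{T} gamma^(tau-j) * r_tau.
    R, out = 0.0, [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * R
        out[t] = R
    return out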
Figure 1: Improvements of our method trained on 10M frames compared to results of 200M frame DQN training presented by Mnih et al. (2015), using the metric given in Eq. (5).

We leave the questions regarding a good choice of penalty function and a good choice of the penalty coefficients to future work. At the moment we use a quadratic penalty function and a constant penalty coefficient λ identical for both bounds. More complex penalty functions and sophisticated optimization approaches may yield even better results than the ones we report in the following."}, {"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "We evaluate the proposed algorithm on a set of 49 games from the Arcade Learning Environment (Bellemare et al., 2013), as suggested by Mnih et al. (2015). This environment is considered to be one of the most challenging reinforcement learning tasks because of its high-dimensional output. Moreover, the intrinsic mechanics vary tremendously from game to game, making it extremely demanding to find a single, general and robust algorithm and a corresponding single hyperparameter setting which works well across all 49 games.

Following existing work (Mnih et al., 2015), our agent predicts an action based on only raw image pixels and reward information received from the environment. A deep neural network is used as the function approximator for the Q-function. The game image is resized to an 84 × 84 grayscale image s_t. The first layer is a convolutional layer with 32 filters of size 8 × 8 and a stride of 4; the second layer is a convolutional layer with 64 filters of size 4 × 4 and a stride of 2; the third layer is a convolutional layer with 64 filters of size 3 × 3 and a stride of 1; the next fully connected layer transforms the input to 512 units, which are then transformed by another fully connected layer to an output size equal to the number of actions in each game. The rectified linear unit (ReLU) is used as the activation function for each layer. We used the hyperparameters provided by Mnih et al. (2015) for annealing ε-greedy exploration and also applied RMSProp for gradient descent. As in previous work, we combine four frames into a single step for processing. We chose the hyperparameter K = 4 for GPU memory efficiency when dealing with mini-batches.
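For reference, the per-sample training objective of Eq. (4) is simple to state in code; a sketch, with lam corresponding to the penalty coefficient λ (set to 4 in the experiments reported below):

def penalized_loss(q, y, l_max, u_min, lam=4.0):
    # Eq. (4) for one sample: squared Bellman error plus quadratic
    # penalties for violating the lower bound L_max or upper bound U_min.
    relu = lambda x: max(0.0, x)
    return (q - y) ** 2 + lam * relu(l_max - q) ** 2 + lam * relu(q - u_min) ** 2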
In addition, we also include the discounted return R_j = L_{j,∞} in the lower bound calculation to further stabilize the training. We use the penalty coefficient λ = 4, which was obtained by coarsely tuning performance on the games 'Alien,' 'Amidar,' 'Assault,' and 'Asterix.' Gradients are also rescaled so that their magnitudes are comparable with or without penalty. All experiments are performed on an NVIDIA GTX Titan-X 12GB graphics card.

Figure 2: Improvements of our method trained on 10M frames compared to results of 10M frame DQN training, using the metric given in Eq. (5)."}, {"section_index": "7", "section_name": "5.1 EVALUATION", "section_text": "We strictly follow the evaluation procedure in (Mnih et al., 2015), which is often referred to as '30 no-op evaluation.' During both training and testing, at the start of the episode, the agent always performs a random number of at most 30 no-op actions. During evaluation, our agent plays each game 30 times for up to 5 minutes, and the obtained score is averaged over these 30 runs. An ε-greedy policy with ε = 0.05 is used. Specifically, for each run, the game episode starts with at most 30 no-op steps, and ends with 'death' or after a maximum of 5 minutes of game-play, which corresponds to 18000 frames.

Our training consists of M = 40 epochs, each containing 250000 frames, thus 10M frames in total. For each game, we evaluate our agent at the end of every epoch, and, following common practice (van Hasselt et al., 2015; Mnih et al., 2015), we select the best agent's evaluation as the result of the game. So almost all hyperparameters are selected identical to Mnih et al. (2015) and Nair et al. (2015).

Fig. 1 shows the improvement of our algorithm over the DQN baseline proposed by Mnih et al. (2015) and trained for 200M frames, i.e., 50M steps. Even though our agent is only trained for 10M frames, we observe that our technique outperforms the baseline significantly. In 30 out of 49 games, our algorithm exceeds the baseline using only 5% of the baseline's training frames, sometimes drastically, e.g., in games such as 'Atlantis,' 'Double Dunk,' and 'Krull.' The remaining 19 games often require a long training time. Nonetheless, our algorithm still reaches a satisfactory level of performance.

In previous work (Mnih et al., 2015; van Hasselt et al., 2015; Schaul et al., 2016; Wang et al., 2015), the Q-function is trained on each game using 200 million (200M) frames or 50M training steps. We compare to those baseline results obtained after 200M frames using our proposed algorithm, which ran for only 10M frames or 2.5M steps, i.e., 20 times fewer data, due to time constraints. Instead of training more than 10 days we manage to finish training in less than one day. Furthermore, for a fair
comparison, we replicate the DQN results and compare the performance of the proposed algorithm after 10M frames to those obtained when training DQN on only 10M frames.

To compare the performance of our algorithm to the DQN baseline, we follow the approach of Wang et al. (2015) and measure the improvement in percent using

(Score_Agent − Score_Baseline) / (max{Score_Human, Score_Baseline} − Score_Random).   (5)

We select this approach because the denominator choice of either human or baseline score prevents insignificant changes or negative scores from being interpreted as large improvements.

Table 1: Mean and median human-normalized scores. The DQN baseline and D-DQN results are from Mnih et al. (2015); van Hasselt et al. (2015) and trained with 200M frames, while our method is trained with 10M frames. Note that our approach can be combined with the D-DQN method.

As suggested by van Hasselt et al. (2015), we use the following score

(Score_Agent − Score_Random) / (Score_Human − Score_Random)   (6)

to summarize the performance of our algorithm in a single number. We normalize the scores of our algorithm, the baseline reported by Mnih et al. (2015), and double DQN (D-DQN) (van Hasselt et al., 2015), and report the training time, mean and median in Table 1. We observe our technique with 10M frames to achieve comparable scores to the D-DQN method trained on 200M frames (van Hasselt et al., 2015), while it outperforms the DQN method (Mnih et al., 2015) by a large margin. We believe that our method can be readily combined with other techniques developed for DQN, such as D-DQN (van Hasselt et al., 2015), prioritized experience replay (Schaul et al., 2016), dueling networks (Wang et al., 2015), and asynchronous methods (Mnih et al., 2016) to further improve the accuracy and training speed.

In Fig. 3 we illustrate the evolution of the score for our algorithm and the DQN approach. In addition we demonstrate two additional techniques: 'DQN+return' and 'DQN(λ).' 'DQN+return' uses only the discounted future return as a bound, but does not take advantage of the additional constraints we propose. 'DQN(λ)' combines TD-λ with the DQN algorithm. We illustrate the performance of those four algorithms on the six games 'Frostbite,' 'Atlantis,' 'Zaxxon,' 'H.E.R.O,' 'Q*Bert,' and 'Chopper Command.' We observe our method to achieve higher scores than the three baselines on the majority of the games. We refer the reader to the supplementary material for additional results."}, {"section_index": "8", "section_name": "6 CONCLUSION", "section_text": "In this paper we proposed a novel program for deep Q-learning which propagates promising rewards to achieve significantly faster convergence than the classical DQN. Our method significantly outperforms competing approaches even when trained on a small fraction of the data on the Atari 2600 domain.
In the future, we plan to investigate the impact of penalty functions and advanced constrained optimization techniques, and to explore potential synergy with other methods.

Figure 3: Game scores for our algorithm (blue), DQN (black), DQN+return (red) and DQN(λ) (yellow) using 10M training frames. 30 no-op evaluation is used and a moving average over 4 points is applied.

In order to further illustrate the effectiveness of our method, we compare our results with our implementation of DQN trained on 10M frames. The results are illustrated in Fig. 2. We observe a better performance on 46 out of 49 games, demonstrating in a fair way the potential of our technique."}]
By1snw5gl
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Johannes Brust, Jennifer B. Erway, and Roummel F. Marcia. On solving 1-sr1 trust-region subprob lems. arXiv.org, 8 2016. arXiv:1506.07222v3.\nVivek Ramamurthy\nRichard H. Byrd, Jorge Nocedal, and Robert B. Schnabel. Representations of quasi-newton matrice and their use in limited-memory methods. Mathematical Programming. 63(1):129-156. 1 1994\nvivek.ramamurthy@sentient.ai\nWe describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep net- works. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these concerns. We provide experimental results showing that L-SR1 performs at least as well as Nesterov's Accelerated Gradient Descent, on the MNIST and CIFAR10 datasets. For the CIFAR10 dataset, we see competitive performance on shallow networks like LeNet5, as well as on deeper networks like residual networks. Fur- thermore, we perform an experimental analysis of L-SR1 with respect to its hyper- parameters to gain greater intuition. Finally, we outline the potential usefulness of L-SR1 in distributed training of deep neural networks.\nYann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex op timization. CoRR, abs/1406.2572, 2014. URLhttp://arxiv.0rg/abs/1406.2572"}, {"section_index": "1", "section_name": "1 MOTIVATION", "section_text": "Second order methods hold great potential for distributing the training of deep neural networks Due to their use of curvature information, they can often find good minima in far fewer steps thar first order methods such as stochastic gradient descent (SGD). Moreover, stochastic second ordei methods can benefit from larger mini-batches (Le et al.12011). This is because they estimate seconc derivatives via differences between estimated gradients. The gradient estimates need to have less variance, so that when we take their differences, the result has low variance. As a result they provid a different trade-off between number of steps and mini-batch size than do SGD-like methods. This trade-off is interesting, because while steps must be evaluated sequentially, a mini-batch may be evaluated in parallel. Thus, second order methods present an opportunity to extract more parallelisn in neural network training. In particular, when mini-batches are sufficiently large, their evaluatior may be distributed. Furthermore, there are relatively fewer hyperparameters to tune in second order methods, compared to variants of stochastic gradient descent.\nJohn E. Dennis Jr. and Robert B. Schnabel. Numerical methods for unconstrained optimization ana nonlinear equations. Prentice Hall, 1 edition, 1983.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog nition. CoRR. abs/1512.03385.2015b. URLhttp://arxiv.org/abs/1512.03385\nL-BFGS (Nocedal] [1980] Liu & Nocedal]1989) is perhaps the most commonly used second orde1 method in machine learning. BFGS is a quasi-Newton method that maintains an approximation tc. the inverse Hessian of the function being optimized. L-BFGS is a limited memory version of BFGS. 
that stores the most recent updates to the inverse Hessian approximation and can therefore be used practically for large scale problems. L-BFGS is typically combined with a line search technique to choose an appropriate step size at each iteration. L-BFGS has been used to good effect in convex optimization problems in machine learning, but has not found effective use in large scale non-convex problems such as deep learning.
Humaid Khalfan, Richard H. Byrd, and Robert B. Schnabel. A theoretical and experimental study of the symmetric rank one update. SIAM Journal on Optimization, 3(1):1-24, 1993.
Three critical weaknesses have been identified. First, we know that training deep neural networks involves minimizing non-convex error functions over continuous, high dimensional spaces. It has been argued that the proliferation of saddle points in these problems presents a deep and profound difficulty for quasi-Newton optimization methods (Dauphin et al., 2014). Furthermore, it has been argued that curvature matrices generated in second order methods are often ill-conditioned, and these need to be carefully repaired. A variety of approaches to this have been suggested, including the use of an empirical Fisher diagonal matrix (Martens, 2016). Finally, popular quasi-Newton methods rely on a line search to select the step size, which is difficult to carry out reliably in this setting.
Nigel Duffy
Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1):503-528, 1989.
We propose L-SR1, a second order method that addresses each of these concerns. SR1 (Symmetric Rank One) is a quasi-Newton method that uses a rank one update for updating the Hessian approximation of the function being optimized (Nocedal & Wright, 2006). Unlike BFGS, the SR1 update does not guarantee positive definiteness of the updated matrix. This was considered a major problem in the early days of nonlinear optimization, when only line search iterations were used, and possibly led to the obscurity of SR1 outside the optimization community. However, with the development of trust-region methods, the SR1 updating formula is potentially very useful, and its ability to generate indefinite Hessian approximations can actually prove to be advantageous.
Aryan Mokhtari and Alejandro Ribeiro. RES: regularized stochastic BFGS algorithm. IEEE Trans. Signal Processing, 62(23):6089-6104, 2014. doi: 10.1109/TSP.2014.2357775. URL http://dx.doi.org/10.1109/TSP.2014.2357775.
Jorge Nocedal. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151):773-782, 7 1980.
Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer-Verlag, New York, 2 edition, 2006.
Two other insights make L-SR1 practical by removing the requirement for a line search and addressing the conditioning problem. First, we replace the line search using a trust region approach. While L-BFGS using line search is well studied, recently, an L-BFGS method that uses a trust-region framework has also been proposed (Burke et al., 2008). Second, we combine L-SR1 with batch normalization. Batch normalization is a technique of normalizing inputs to layers of a neural network, used to address a phenomenon known as internal covariate shift during training (Ioffe & Szegedy, 2015). Our hypothesis is that batch normalization may cause parameters of a neural network to be suitably scaled so that the Hessian becomes better conditioned.
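For reference, a minimal NumPy sketch of the batch normalization transform just described (training-mode statistics only; running averages and the convolutional variant are omitted):

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch (axis 0), then apply
    # the learned scale gamma and shift beta (Ioffe & Szegedy, 2015).
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta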
We tested this hypothesis empirically and outline the results below.

We now briefly summarize some other second order approaches that have been suggested in the literature, in order to place our approach in context. Pearlmutter (1994) derived a technique that directly calculated the product of the Hessian with an arbitrary vector, and applied this technique to a few variants of backpropagation, thereby showing a way to use the full Hessian without needing to compute and store it. Martens (2010) used a generalization of this technique, introduced by Schraudolph (2002), to develop a second order optimization method based on the "Hessian-free" approach, using it to train deep auto-encoders (Martens, 2010), as well as recurrent neural networks (Martens & Sutskever, 2011). The "Hessian-free" approach is essentially a line search Newton-CG (Conjugate Gradient) method, also known as the truncated Newton method (Nocedal & Wright, 2006), in which the search direction is computed by applying CG to the Newton method, and terminating it once it has made sufficient progress. This approach differs from ours in its use of line search instead of a trust region method. Moreover, it computes Hessian-vector products using finite differencing, as opposed to the limited-memory symmetric rank one update with trust region method used in our approach. The cost of skipping the Hessian calculation in a truncated Newton method is one additional gradient evaluation per CG iteration (Nocedal & Wright, 2006). As mentioned previously, Dauphin et al. (2014) argue that in high dimensional problems of practical interest, the proliferation of saddle points poses greater difficulty than local minima. In a bid to escape these saddle points, they propose a second order optimization method called the saddle-free Newton method. Key to this"}, {"section_index": "3", "section_name": "BACKGROUND", "section_text": "In the following, we provide a brief primer on line search and trust region methods, as well as on quasi-Newton methods and their limited memory variants. Further details may be found in Nocedal & Wright (2006).

In any optimization algorithm, there are two main ways of moving from the current point x_k to a new iterate x_{k+1}. One of them is line search. In it, the algorithm picks a descent direction p_k and searches along this direction from the current iterate x_k for a new iterate with a lower function value. The distance to move along p_k can be found by solving the following one-dimensional minimization problem:

min_{α>0} f(x_k + α p_k).

Instead of an exact minimization, which may be expensive, the line search algorithm generates a limited number of trial step lengths until it finds one that generates a sufficient decrease in function
1 The reference Brust et al. (2016) describes an approach to solve the trust region sub-problem encountered in an L-SR1 method, but does not describe the L-SR1 method itself.
James Martens. Deep learning via hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pp. 735-742, 2010. URL http://www.icml2010.org/papers/458.pdf.
James Martens and Ilya Sutskever. Learning recurrent neural networks with hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pp. 1033-1040, 2011.
We believe that it is possible to overcome saddle points using rank-one update based second order methods.
The more common rank-two methods, e.g. L-BFGS, maintain a positive definite approximation to the inverse of the Hessian, by design (Nocedal & Wright, 2006). At saddle points, the true Hessian cannot be well approximated by a positive definite matrix, causing commonly used second order methods to go uphill (Dauphin et al., 2014). On the other hand, rank-one approaches such as SR1 don't maintain this invariant, so they can go downhill at saddle points. Numerical experiments (Conn et al., 1991) suggest that the approximate Hessian matrices generated by the SR1 method show faster progress towards the true Hessian than those generated by BFGS. This suggests that a limited memory SR1 method (L-SR1, if you like) could potentially outperform L-BFGS in the task of high dimensional optimization in neural network training. The building blocks needed to construct an L-SR1 method have been suggested in the literature (Byrd et al., 1994; Khalfan et al., 1993). To the best of our knowledge, however, there is no complete L-SR1 method previously described in the literature.1 This prompted us to develop and test the approach, specifically in the large scale non-convex problems that arise in deep learning.
Barak A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6:147-160, 1994.
approach is the definition of a class of generalized trust region methods. This class extends classical trust region methods in a couple of ways. A first order Taylor expansion of the function is minimized, instead of the second order Taylor expansion. Moreover, the constraint on the step norm is replaced by a generalized constraint on the distance between consecutive iterates. Our approach, by contrast, uses a classical trust-region method. Rather than compute the Hessian exactly, Dauphin et al. (2014) use an approach similar to Krylov subspace descent (Vinyals & Povey, 2012). The function is optimized in a lower-dimensional Krylov subspace, which is determined through Lanczos iteration of the Hessian (Vinyals & Povey, 2012). The Lanczos method may be considered a generalization of the CG method that can be applied to indefinite systems, and may be used to aid the CG method by gathering negative curvature information (Nocedal & Wright, 2006). The Lanczos method also involves finding an approximate solution to a trust-region subproblem in the range of a Krylov basis that it generates. This trust region problem differs from the one we solve, in that the Krylov basis generated has a special structure due to its mapping to a tridiagonal matrix (Nocedal & Wright, 2006).

value. At the new point, the process of computing the descent direction and step length is repeated. The other way is to use a trust region method. In a trust region method, the information about f is used to construct a model function m_k, which is supposed to approximate f near the current point x_k. Since the model m_k may not approximate f well when x is far from x_k, the search for a minimizer of m_k is restricted to some trust region within a radius Δ_k around x_k. To wit, the candidate step p approximately solves the following sub-problem:

min_{p: ||p|| ≤ Δ_k} m_k(x_k + p).

If the candidate solution does not produce a sufficient decrease in f, the trust region is considered too large for the model function to approximate f well. So we shrink the trust region and re-solve. Essentially, the line search and trust region approaches differ in the order in which they choose the direction and magnitude of the move to the next iterate.
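A minimal, self-contained sketch of one trust-region iteration on a quadratic model; the Cauchy-point subproblem solver and the 1/4, 3/4 radius thresholds are standard textbook choices (Nocedal & Wright, 2006), not necessarily the exact rules used in our implementation:

import numpy as np

def cauchy_point(g, B, delta):
    # Minimize the quadratic model m(p) = g.p + 0.5 p.B.p along -g,
    # subject to ||p|| <= delta (the classical Cauchy-point formula).
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(np.linalg.norm(g) ** 3 / (delta * gBg), 1.0)
    return -tau * delta / np.linalg.norm(g) * g

def trust_region_step(f, grad, B, x, delta, eta=0.1):
    # One trust-region iteration: approximately solve the subproblem,
    # then adapt the radius from actual vs. model-predicted reduction.
    g = grad(x)
    p = cauchy_point(g, B, delta)
    pred = -(g @ p + 0.5 * p @ B @ p)              # predicted decrease
    rho = (f(x) - f(x + p)) / max(pred, 1e-12)     # agreement ratio
    if rho < 0.25:
        delta *= 0.25                              # model poor: shrink region
    elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
        delta *= 2.0                               # model good: expand region
    return (x + p if rho > eta else x), delta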
In line search, the descent direction p_k is fixed first, and then the step length α_k to be taken along that direction is computed. In trust region, a maximum distance equal to the trust-region radius is first set, and then a direction is determined within this radius that achieves the best improvement in the objective value. If such a direction does not yield sufficient improvement, the model function is determined to be a poor approximation to the function, and the trust-region radius is reduced until the approximation is deemed good enough. Conversely, as long as the model function appears to approximate the objective function well, the trust region radius is increased until the approximation is not good enough.

It is worth noting that several approaches have been proposed to overcome the weaknesses of L-BFGS. First, it has been proposed to initialize L-BFGS with a number of SGD steps. However, this diminishes the potential for parallelism (Dean et al., 2012; Le et al., 2011). Second, it has been proposed to use "forgetting", where every few (say, for example, 5) steps, the history for L-BFGS is discarded. However, this greatly reduces the ability to use second order curvature information. There has also been a recent spurt of work on stochastic quasi-Newton methods for optimization. Byrd et al. (2016) propose a stochastic quasi-Newton method which uses the classical L-BFGS formula, but collects curvature information pointwise, at regular intervals, through sub-sampled Hessian-vector products, rather than at every iteration. Mokhtari & Ribeiro (2014) propose RES, a regularized stochastic version of BFGS, to solve convex optimization problems with stochastic objectives, and prove its convergence for bounded Hessian eigenvalues. Mokhtari & Ribeiro (2015) propose an online L-BFGS method for solving optimization problems with strongly convex stochastic objectives, and establish global almost sure convergence of their approach for bounded Hessian eigenvalues of sample functions. In the case of nonconvex stochastic optimization, Wang et al. (2014) propose, based on a general framework, two concrete stochastic quasi-Newton update strategies, namely the stochastic damped-BFGS update and the stochastic cyclic Barzilai-Borwein-like update, to adaptively generate positive definite Hessian approximations. They also analyze the almost sure convergence of these updates to stationary points. Keskar & Berahas (2015) propose ADAQN, a stochastic quasi-Newton algorithm for training RNNs. This approach retains a low per-iteration cost while allowing for non-diagonal scaling through a stochastic L-BFGS updating scheme. The method also uses a novel L-BFGS scaling initialization scheme and is judicious in storing and retaining L-BFGS curvature pairs. Finally, Curtis (2016) proposes a variable-metric algorithm for stochastic nonconvex optimization which exploits fundamental self-correcting properties of BFGS-type updating, and uses it to solve a few machine learning problems. As one may notice, all of these approaches adapt the BFGS-style rank two updates in different ways to solve convex and non-convex problems. In contrast, our approach uses SR1-type updates, which we think can help better navigate the pathological saddle points present in the non-convex loss functions found in deep learning, by not constraining the Hessian approximation to be positive definite, as in the case of BFGS-style updates. Comparison of our approach with one of these recent stochastic second order methods is an interesting next step.
In the Appendix, we provide a brief primer on line search and trust region methods, as well as on quasi-Newton methods and their limited memory variants."}, {"section_index": "4", "section_name": "LIMITED MEMORY QUASI-NEWTON METHODS", "section_text": "Quasi-Newton methods are a useful alternative to Newton's method in that they do not require computation of the exact Hessian, and yet still attain good convergence. In place of the true Hessian ∇²f_k, they use an approximation B_k, which is updated after each step based on information gained during the step. At each step, the new Hessian approximation B_{k+1} is required to satisfy the following condition, known as the secant equation:

B_{k+1} s_k = y_k, where s_k = x_{k+1} − x_k and y_k = ∇f_{k+1} − ∇f_k.

Typically, B_{k+1} is also required to be symmetric (like the exact Hessian), and the difference between successive approximations B_k and B_{k+1} is constrained to have low rank. One of the most popular formulae for updating the Hessian approximation B_k is the BFGS formula, named after its inventors, Broyden, Fletcher, Goldfarb, and Shanno, which is defined by

B_{k+1} = B_k − (B_k s_k s_kᵀ B_k) / (s_kᵀ B_k s_k) + (y_k y_kᵀ) / (y_kᵀ s_k).

A less well known formula, particularly in the machine learning community, is the symmetric rank-one (SR1) formula, defined by

B_{k+1} = B_k + ((y_k − B_k s_k)(y_k − B_k s_k)ᵀ) / ((y_k − B_k s_k)ᵀ s_k).

The former update is a rank-two update, while the latter is a rank-one update. Both updates satisfy the secant equation and maintain symmetry. The BFGS update always generates positive definite approximations whenever the initial approximation B_0 is positive definite and s_kᵀ y_k > 0. Often, in practical implementations of quasi-Newton methods, the inverse Hessian approximation H_k is used instead of B_k, and the corresponding update formulae can be generated using the Sherman-Morrison-Woodbury matrix identity (Hager, 1989).

Limited-memory quasi-Newton methods are useful for solving large problems where computation of Hessian matrices is costly or when these matrices are dense. Instead of storing fully dense n × n approximations, these methods save only a few vectors of length n that capture the approximations. Despite these modest storage requirements, they often converge well. The most popular limited-memory quasi-Newton method is L-BFGS, which uses curvature information from only the most recent iterations to construct the inverse Hessian approximation. Curvature information from earlier iterations, which is less likely to be useful for modeling the actual behavior of the Hessian at the current iteration, is discarded in order to save memory.

Our algorithm is synthesized as follows. We take the basic SR1 algorithm described in Nocedal & Wright (2006) (Algorithm 6.2), and represent the relevant input matrices using the limited-memory representations described in Byrd et al. (1994). The particular limited-memory representations used in the algorithm vary, depending on whether we use trust region or line search methods as subroutines to make parameter updates, as does some of the internal logic. For instance, if k updates have been performed, the resulting matrix B_k can be expressed as (Nocedal & Wright, 2006)

B_k = B_0 + (Y_k − B_0 S_k)(D_k + L_k + L_kᵀ − S_kᵀ B_0 S_k)⁻¹ (Y_k − B_0 S_k)ᵀ,

where S_k = [s_0, ..., s_{k−1}], Y_k = [y_0, ..., y_{k−1}], D_k = diag[s_0ᵀy_0, ..., s_{k−1}ᵀy_{k−1}], and (L_k)_{i,j} = s_{i−1}ᵀ y_{j−1} if i > j and 0 otherwise.
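To illustrate the compact representation, here is a sketch of applying B_k to a vector without ever forming the n × n matrix, assuming the common choice B_0 = γ_0 I (the function name and the γ_0 scaling are illustrative):

import numpy as np

def sr1_compact_apply(v, S, Y, gamma0):
    # B_k v for B_k = B_0 + W M^{-1} W^T with W = Y - B_0 S and
    # M = D + L + L^T - S^T B_0 S (Byrd et al., 1994); S, Y are n x m.
    W = Y - gamma0 * S
    SY = S.T @ Y
    D = np.diag(np.diag(SY))            # D_k: diagonal of S^T Y
    L = np.tril(SY, k=-1)               # L_k: strictly lower triangle
    M = D + L + L.T - gamma0 * (S.T @ S)
    return gamma0 * v + W @ np.linalg.solve(M, W.T @ v)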
Limited-memory quasi-Newton approximations can be used with line search or trust region methods. As described in Byrd et al. (1994), we can derive efficient limited memory implementations of several quasi-Newton update formulae, and their inverses.

The self-duality of the SR1 method (Nocedal & Wright, 2006) allows the inverse formula H_k to be obtained simply by replacing B, s, and y by H, y, and s, respectively, using standard matrix identities. Limited-memory SR1 methods can be derived exactly as in the case of the BFGS method. Additional details are present in the pseudocode provided in the Appendix. The algorithm we develop is general enough to work with any line search or trust region method. While we tested the algorithm with the line search approaches described in Dennis Jr. & Schnabel (1983), and with the trust region approach described in Brust et al. (2016), in this paper we focus our experimental investigations on the trust region approach, and the advantage that provides over using other first and second order optimization methods."}, {"section_index": "5", "section_name": "NETWORK ARCHITECTURES AND HYPERPARAMETER SETTINGS", "section_text": ""}, {"section_index": "6", "section_name": "MNIST", "section_text": "The layers of the LeNet5 architecture used are described below. All the batch normalization layers were removed in the 'without batch normalization' case.

We also make a note here about the space and time complexity of our algorithm. We respectively denote by m and n the memory size and the parameter dimension. We assume m << n. As discussed in Section 7.2 of Nocedal & Wright (2006), the limited-memory updating procedure of B_k requires approximately 2mn + O(m) operations, and matrix-vector products of the form B_k v can be performed at a cost of (4m + 1)n + O(m⁴) multiplications. Moreover, the Cholesky and eigenvalue decompositions we perform within our trust-region method for m × m matrices require O(m³) operations. It follows quite easily² from this that the space complexity of our algorithm is O(mn) and the per iteration time complexity of our algorithm is O(mn)."}, {"section_index": "7", "section_name": "4 EXPERIMENTS", "section_text": "Additionally, the network was trained with L2 regularization with parameter 0.0001. Training loss was measured as softmax cross entropy, while test loss was measured as multi-class error count. In the case of the first order methods, the learning rate was set to 0.003 where needed, and the momentum was set to 0.9, where needed. AdaDelta did not take any parameters.

In the following, we summarize the results of training standard neural networks on the MNIST and CIFAR10 datasets using our approach, and benchmarking the performance with respect to other first and second order methods. First, we compared our L-SR1 (with trust region) approach with Nesterov's Accelerated Gradient Descent (NAG), L-BFGS with forgetting every 5 steps, default SGD, AdaDelta, and SGD with momentum, by training small standard networks on the MNIST and CIFAR10 datasets. On these problems, we also studied the effect of varying the minibatch size, for L-SR1, Adam (Kingma & Ba, 2014), and NAG. Next, we compared our L-SR1 with trust region approach with default hyperparameters, with a benchmark SGD with momentum, and Adam, by training a 20-layer deep residual network on the CIFAR10 dataset.
Following that, we varied each hyperparameter of the L-SR1 with trust region approach to observe its effect on training the residual network on CIFAR10."}, {"section_index": "8", "section_name": "CIFAR10", "section_text": "The layers of the architecture used are described below. All the batch normalization layers were removed in the 'without batch normalization' case."}, {"section_index": "9", "section_name": "4.1 LENET-LIKE NETWORKS", "section_text": "For each approach, and for each dataset, we considered the case where our networks had batch normalization layers within them, and the case where they did not. The parameters of the networks were randomly initialized. All experiments were repeated 10 times to generate error bars."}, {"section_index": "10", "section_name": "4.1.1 MNIST", "section_text": "We considered the LeNet5 architecture in this case, which comprised 2 convolutional layers, followed by a fully connected layer and an outer output layer. Each convolutional layer was followed by a max-pooling layer. In the case where we used batch normalization, each convolutional and fully connected layer was followed by a spatial batch normalization layer. We used a mini-batch size of 20 for the first order methods like NAG, SGD, AdaDelta and SGD with momentum, and a mini-batch size of 400 for the second order methods like L-SR1 and L-BFGS. The memory size was set to 5 for both L-SR1 and L-BFGS. The networks were trained for 20 epochs. Further details on the network architecture and other parameter settings are provided in the Appendix.

2 Deep neural networks typically have parameter dimensions in the tens of millions, while the memory size typically does not exceed 10. So n is indeed several orders of magnitude larger than m.

MNIST (LeNet5) layers:
Convolutional Layer - filter size 5 × 5, 20 feature maps, stride 1, padding 0, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
Spatial Batch Normalization Layer
Max Pooling Layer - filter size 2
Convolutional Layer - filter size 5 × 5, 50 feature maps, stride 1, padding 0, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
Spatial Batch Normalization Layer
Max Pooling Layer - filter size 2
Fully Connected Layer - 500 hidden units, and a tangent hyperbolic activation function
Spatial Batch Normalization Layer
Outer Output Layer - 10 outputs and output standard deviation of 0.1

CIFAR10 layers:
Convolutional Layer - filter size 5 × 5, 32 feature maps, stride 1, padding 2, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.01
Spatial Batch Normalization Layer
Max Pooling Layer - filter size 2
Activation Layer - ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
Convolutional Layer - filter size 5 × 5, 32 feature maps, stride 1, padding 2, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.01
Spatial Batch Normalization Layer
Max Pooling Layer - filter size 2
Convolutional Layer - filter size 5 × 5, 64 feature maps, stride 1, padding 2, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.01
Spatial Batch Normalization Layer
Max Pooling Layer - filter size 2
Fully Connected Layer - 64 hidden units, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
Spatial Batch Normalization Layer
Outer Output Layer - 10 outputs and output standard deviation of 0.1
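A rough PyTorch rendering of the MNIST variant listed first above; the Gaussian noise injection and the output standard deviation are omitted, and the exact ordering of activation and normalization is an assumption made for this sketch:

import torch.nn as nn

class LeNet5BN(nn.Module):
    # Sketch of the LeNet5-style MNIST network with spatial batch
    # normalization after the convolutional and fully connected layers.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=5), nn.ReLU(), nn.BatchNorm2d(20),
            nn.MaxPool2d(2),
            nn.Conv2d(20, 50, kernel_size=5), nn.ReLU(), nn.BatchNorm2d(50),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(50 * 4 * 4, 500), nn.Tanh(), nn.BatchNorm1d(500),
            nn.Linear(500, 10),
        )

    def forward(self, x):
        return self.classifier(self.features(x))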
[Figure 1 shows two panels: test loss vs. epoch on MNIST, with and without batch normalization, for NAG, L-SR1, L-BFGS with forgetting, SGD, AdaDelta, and SGD with momentum.]

Additionally, the network was trained with L2 regularization with parameter 0.001. Training loss was measured as softmax cross entropy, while test loss was measured as multi-class error count. In the case of the first order methods, the learning rate was set to 0.01 where needed, and the momentum was set to 0.9 where needed. AdaDelta did not take any parameters."}, {"section_index": "10", "section_name": "PSEUDOCODE", "section_text": "Algorithm 1 provides the pseudocode for L-SR1 with the trust region method, while Algorithm 2 provides the pseudocode for L-SR1 with line search.

Figure 1: Variation of test loss with number of epochs, on the MNIST dataset, with and without batch normalization. Note that the scales on the y-axes are different."}, {"section_index": "11", "section_name": "4.1.2 CIFAR10", "section_text": "We considered a slight modification to the 'LeNet5' architecture described above. We used a mini-batch size of 96 for NAG, SGD, AdaDelta and SGD with momentum. The other mini-batch sizes and memory sizes for L-SR1 and L-BFGS were as above. As above, the networks were trained for 20 epochs. Further details on the network architecture and other parameter settings are provided in the Appendix.

[Figure 2 shows two panels: test loss vs. epoch on CIFAR10, with and without batch normalization, for the same six methods.]

Figure 2: Variation of test loss with number of epochs, on the CIFAR10 dataset, with and without batch normalization. Note that the scales on the y-axes are different."}, {"section_index": "12", "section_name": "4.1.3 VARIATION OF MINIBATCH SIZE", "section_text": "We also compared the variation of test loss between L-SR1, Adam and NAG, as we varied the mini-batch size from 500 to 1000 to 10000, in the presence of batch normalization. The network architectures were as above. For minibatch sizes 500 and 1000, we trained the networks for 50 epochs, while for the minibatch size of 10000, the networks were trained for 200 epochs.
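Algorithms 1 and 2 themselves are deferred to the paper's appendix and are not reproduced in this excerpt. As a rough orientation only, the following hedged sketch shows the shape of one trust-region iteration of the kind benchmarked here; the dense SR1 build and the Cauchy-point subproblem solution are illustrative stand-ins (the paper's implementation uses the compact form and the Cholesky/eigenvalue machinery costed in the complexity note above), and all constants except the default increase/decrease factors 2.0 and 0.5 are assumptions.

```python
import numpy as np

def sr1_matrix(S, Y, gamma=1.0):
    """Dense SR1 approximation from the stored pairs (illustration only)."""
    B = gamma * np.eye(S.shape[0])
    for i in range(S.shape[1]):
        s, y = S[:, i], Y[:, i]
        r = y - B @ s
        denom = r @ s
        if abs(denom) > 1e-8 * np.linalg.norm(r) * np.linalg.norm(s):
            B += np.outer(r, r) / denom   # SR1 update; otherwise skip (Sec. 4)
    return B

def trust_region_step(f, grad, w, B, radius, eta=1e-4, dec=0.5, inc=2.0):
    """One generic trust-region iteration using a Cauchy-point step."""
    g = grad(w)
    gn = np.linalg.norm(g) + 1e-12
    gBg = g @ B @ g
    # Cauchy point: minimizer of the quadratic model along -g within the radius.
    tau = 1.0 if gBg <= 0 else min(1.0, gn**3 / (radius * gBg))
    p = -(tau * radius / gn) * g
    pred = -(g @ p + 0.5 * p @ B @ p)            # predicted model decrease
    rho = (f(w) - f(w + p)) / max(pred, 1e-12)   # actual vs. predicted ratio
    if rho > eta:
        w = w + p                                # accept the step
    radius *= inc if rho > 0.75 else (dec if rho < 0.25 else 1.0)
    return w, radius
```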
[Figure 3 panels: MNIST with batch normalization, minibatch sizes 500, 1000 and 10000; test loss vs. epoch for NAG, L-SR1, and Adam.]

Figure 3: Variation of test loss with number of epochs, on the MNIST dataset, with batch normalization, for varying minibatch sizes. Note that the scales on the x- and y-axes across figures are different.

[Figure 4 panels: CIFAR10 with batch normalization, minibatch sizes 500, 1000 and 10000; test loss vs. epoch for NAG, L-SR1, and Adam.]

Figure 4: Variation of test loss with number of epochs, on the CIFAR10 dataset, with batch normalization, for varying minibatch sizes. Note that the scales on the x- and y-axes across figures are different."}, {"section_index": "13", "section_name": "4.1.4 DISCUSSION", "section_text": "Our first set of experiments (Figures 1, 2) suggests that L-SR1 performs as well as, or slightly better than, all the first order methods on both the MNIST and CIFAR10 datasets, with or without batch normalization. L-SR1 is substantially better than L-BFGS in all settings, with or without forgetting. Forgetting appears to be necessary in order to get L-BFGS to work. Without forgetting, the approach appears to be stuck where it is initialized. For this reason, the plots for L-BFGS without forgetting have not been included. Batch normalization appears to improve the performance of all approaches, particularly the early performance of second order approaches like L-SR1 and L-BFGS.

Table 1: Speed of convergence of NAG, L-SR1, and Adam, with varying minibatch sizes.

The experiments with variation of minibatch sizes (Figures 3, 4) seem to provide compelling evidence of the potential for distributed training of deep networks, as may be seen from Table 1. First, we note that first order methods like NAG are not as sensitive to the size of the minibatch as commonly understood. For example, a 20-fold increase in minibatch size did not decrease the speed of convergence by the same or a higher order of magnitude. Furthermore, approaches like L-SR1 and Adam appear to be much less sensitive to increasing minibatch size than NAG. This strengthens the case for their application to distributed training of deep neural networks. Finally, while Adam makes much faster initial progress than the other approaches, its final test loss by the end of training is worse than in the case of L-SR1.

One of the limitations of SR1 updating is that the denominator in the update can vanish. The literature however suggests that this happens rarely enough that the updates can be skipped when this phenomenon occurs, without affecting performance. In this regard, we had some interesting observations from our experiments. While in most cases, updates were either never skipped, or skipped less than 2.5% of the time, the cases of MNIST training with batch normalization yielded abnormally high levels of skipped updates, ranging all the way from 7% to higher than 60% (for minibatch size 10000). While this did not seem to affect performance adversely, it certainly warrants future investigation. Moreover, a better understanding of the interplay between batch normalization and optimization could help inform potential improvements in optimization approaches.
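The skipping rule referenced above has a standard textbook form; a minimal sketch, assuming the commonly suggested threshold from Nocedal & Wright (2006) rather than the paper's (unstated) value:

```python
import numpy as np

def sr1_update_allowed(s, y, Bs, r=1e-8):
    """SR1 denominator safeguard (Nocedal & Wright, 2006, Eq. 6.26):
    apply the rank-one update only if |s^T (y - Bs)| >= r ||s|| ||y - Bs||;
    otherwise the update is skipped, as counted in the experiments above."""
    resid = y - Bs
    return abs(s @ resid) >= r * np.linalg.norm(s) * np.linalg.norm(resid)
```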
"}, {"section_index": "14", "section_name": "4.2 RESIDUAL NETWORKS", "section_text": "We next considered a deeper residual network architecture described in section 4.2 of He et al. (2015b), with n = 3. This led to a 20-layer residual network including 9 shortcut connections. As in He et al. (2015b), we used batch normalization (Ioffe & Szegedy, 2015) and the same initialization method (He et al., 2015a).

We trained the residual network using the benchmark SGD with momentum, and other parameter settings as described in He et al. (2015b). We also trained the network using L-SR1 with default settings. These included a memory size of 5, a trust-region radius decrease factor of 0.5, and a trust-region radius increase factor of 2.0. Finally, we also compared with Adam, with default settings (Kingma & Ba, 2014). We used the same mini-batch size of 128 for all algorithms. Based on the learning rate schedule used, the learning rate was equal to 0.1 through the first 80 epochs, 0.01 up to 120 epochs, and 0.001 thereafter, for SGD with momentum. Figure 5 shows the variation of test loss, over epochs and by time. It needs to be noted that default L-SR1, with no parameter tuning at all, has a superior final test loss to Adam, and is competitive with SGD with momentum, which used custom parameters that were tuned carefully. L-SR1 does make slower progress over time, which can be further optimized. Finally, we note that the test loss for L-SR1 bounces around a lot more than the test loss for the other algorithms. This bears further exploration."}, {"section_index": "15", "section_name": "4.2.2 VARIATION OF L-SR1 HYPERPARAMETERS", "section_text": "We varied the hyperparameters of L-SR1 in turn, keeping the remaining ones fixed. In each case, we trained the network for 200 epochs. We first considered varying the increase and decrease factors together. We considered a trust-region radius decrease factor of 0.2, 0.5 and 0.8, and a trust-region radius increase factor of 1.2 and 2.0. The respective default values of these factors are 0.5 and 2.0. This led to six different combinations of decrease and increase factors. We kept the memory size and mini-batch size fixed at 5 and 128 respectively. Next, we considered memory sizes of 2 and 10 (in addition to 5, which we tried earlier), keeping the mini-batch size, decrease factor, and increase factor fixed at 128, 0.5, and 2.0 respectively. Finally, we considered mini-batch sizes of 512, 2048 and 8192 (in addition to 128, which we tried earlier), keeping the memory size, decrease factor, and increase factor fixed at 5, 0.5, and 2.0 respectively. Figure 6 shows the results.
The following may be noted, based on the experiments with L-SR1 for training a residual network on CIFAR10. While there is potential value in increasing and decreasing the trust region radius at different rates, our experiments suggest that it may not be necessary to tune these hyperparameters. There is no noticeable performance gain from using a higher memory size in L-SR1. Furthermore, using a smaller memory size performs at least as well as the default case. This is good news, due to the consequent savings in storage and computational resources. L-SR1 is relatively insensitive to a 4-fold increase in mini-batch size from 128 to 512, and a further 4-fold increase to 2048. The minibatch sensitivity of L-SR1 seems to be higher in the case of the residual network, compared with the LeNet-like networks seen earlier. Finally, we found the proportion of skipped updates in the case of residual networks to be less than 0.5% in all cases.

[Figure 5 panels: test loss for SGD with momentum (benchmark), L-SR1 (default), and Adam (default) on the residual network, plotted against epochs (left) and wall-clock time in seconds (right).]

Figure 5: L-SR1 vs SGD vs Adam, on the CIFAR10 dataset, using a residual network. The x-axis on the left shows the number of epochs, while the x-axis on the right shows time in seconds.

[Figure 6 panels: test loss vs. epoch on the residual network when varying (left) the trust-region radius increase and decrease factors, (middle) the mini-batch size, and (right) the memory size.]

Figure 6: Variation of trust region radius increase and decrease factors, mini-batch size and memory size with number of epochs, on the CIFAR10 dataset, using a residual network. Note that the scales on the y-axes are different.
"}, {"section_index": "16", "section_name": "5 CONCLUSIONS", "section_text": "In this paper, we have described L-SR1, a new second order method to train deep neural networks. Our experiments suggest that this approach is, at the very least, competitive with other first order methods, and substantially better than L-BFGS, a well-known second order method. Our experiments also appear to validate our intuition about the ability of L-SR1 to overcome key challenges associated with second order methods, such as inappropriate handling of saddle points and poor conditioning of the Hessian. Our experimentation with the hyperparameters of L-SR1 suggested that it is relatively robust with respect to them, and requires minimal tuning. Furthermore, we have evidence to suggest that L-SR1 is much more insensitive to larger minibatch sizes than a first order method like NAG. This suggests that L-SR1 holds promise for distributed training of deep networks, and we see our work as an important step toward that goal."}]
S1Y0td9ee
[{"section_index": "0", "section_name": "SHIFT AGGREGATE EXTRACT NETWORKS", "section_text": "Francesco Orsini, Daniele Baracchi and Paolo Frasconi"}, {"section_index": "1", "section_name": "REFERENCES", "section_text": "P Baldi and G Pollastri. The principled design of large-scale recursive neural network architectures - DAG-RNNs and the protein structure prediction problem. J Mach Learn Res, 4(Sep):575-602, 2003.

D Haussler. Convolution kernels on discrete structures. Technical report, Citeseer, 1999.

H Kashima, K Tsuda, and A Inokuchi. Marginalized kernels between labeled graphs. In ICML-03, volume 3, pp. 321-328, 2003.

M Mladenov, B Ahmadi, and K Kersting. Lifted linear programming. In AISTATS-12, pp. 788-797, 2012.

A Vullo and P Frasconi. Disulfide connectivity prediction using recursive neural networks and evolutionary information. Bioinformatics, 20(5):653-659, 2004.

P Yanardag and SVN Vishwanathan. Deep graph kernels. In Proc. of KDD-15, pp. 1365-1374, 2015."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The Shift Aggregate Extract Network (SAEN) is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying shift, aggregate and extract operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups. Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art."}, {"section_index": "3", "section_name": "1 INTRODUCTION", "section_text": "Many different problems in various fields of science require the classification of structured data, i.e. collections of objects bound together by some kind of relation. A natural way to represent such structures is through graphs, which are able to encode both the individual objects composing the collection (as vertices) and the relationships between them (as edges). A number of approaches to the graph classification problem have been studied in the graph kernel and neural network literature.

Graph kernels decompose input graphs in substructures such as shortest paths (Borgwardt & Kriegel, 2005), graphlets (Shervashidze et al., 2009) or neighborhood subgraph pairs (Costa & De Grave, 2010). The similarity between two graphs is then computed by comparing the respective sets of parts. Methods based on recursive neural networks unfold a neural network over input graphs and learn vector representations of their nodes employing backpropagation through structure (Goller & Kuchler, 1996). Recursive neural networks have been successfully applied to domains such as natural language (Socher et al., 2011) and biology (Vullo & Frasconi, 2004; Baldi & Pollastri, 2003). An advantage of recursive neural networks over graph kernels is that the vector representations of the input graphs are learnt rather than handcrafted.

We propose Shift Aggregate Extract Networks (SAEN), a neural network architecture for learning representations of input graphs. SAEN decomposes input graphs into H-hierarchies made of multiple strata of objects. Objects in each stratum are connected by \"part-of\" relations to the objects in the stratum above.

In case we wish to classify graphs we can use an H-hierarchical decomposition in which the top stratum contains the graph G that we want to classify, while the intermediate strata contain subgraphs of G, subgraphs of subgraphs of G and so on, until we reach the bottom stratum which contains the vertices v of G.

Unlike R-convolution relations in kernel methods (which decompose objects into the set of their parts), H-hierarchical decompositions are deep as they can represent the parts of the parts of an object.

We proposed SAEN, a novel architecture for learning vector representations of H-decompositions of input graphs. We applied SAEN for graph classification on 6 real world social network datasets,
outperforming the current state of the art on 4 of them and obtaining state-of-the-art classification accuracy on the others. Another important contribution of this paper is the domain compression algorithm, which greatly reduces memory usage and allowed us to speed up the training time by a factor of at least 4.

Learning on social network data can be considerably hard due to their peculiar structure: as opposed to chemical compounds and parse trees, the structure of social network graphs is highly irregular. Indeed in social networks it is common to have nodes in the same graph whose degree differs by orders of magnitude. This poses a significant challenge for the substructure matching approach used by some graph kernels, as the variability in connectivity generates a large number of unique patterns leading to diagonally dominant kernel matrices.

Recursive neural networks associate to the vertices of the input graphs vector representations, imposing that they have identical dimensions. Moreover, the propagation follows the edge connectivity and weights are shared over the whole input graph. If we consider that vector representations of nodes (whose number of parents can differ by orders of magnitude) must share the same weights, learning on social network data with recursive neural networks might be nontrivial.

SAEN compensates the limitations of recursive neural networks by adding the following degrees of flexibility:
1. the SAEN computation schema unfolds a neural network over H-decompositions instead of the input graph,
2. SAEN imposes weight sharing and fixed size of the learnt vector representations on a per-stratum basis instead of globally.

Another contribution of this paper is the introduction of a domain compression algorithm, which we use in our experiments to reduce memory usage and runtime. Domain compression collapses objects in the same stratum of an H-hierarchical decomposition into a compressed one whenever these objects are indistinguishable for the SAEN computation schema. In particular, objects made of the same sets of parts are indistinguishable. In order to obtain a lossless compression of an H-hierarchical decomposition we store counts on symmetries, adopting some mathematical results from lifted linear programming (Mladenov et al., 2012). The domain compression algorithm is also reminiscent of the work of Sperduti & Starita (1997), in which common substructures of recursive neural networks are collapsed in order to reduce the computational cost.

Most graph kernels decompose graphs into parts by using an R-convolution relation (Haussler, 1999).
We extend this approach by decomposing graphs into a hierarchy of π-parametrized \"part of\" relations. Formally, an H-hierarchical decomposition is a pair ({S_l}_{l=0}^L, {R_{l,π}}_{l=1}^L) where:

- {S_l}_{l=0}^L are disjoint sets of objects S_l called strata, or levels of the hierarchy. The bottom stratum S_0 contains non-decomposable objects (e.g. individual vertices), while the other strata S_l, l = 1, ..., L contain composite objects, o_i ∈ S_l, whose parts o_j ∈ S_{l-1} belong to the preceding stratum, S_{l-1}.
- {R_{l,π}} is a set of π-parametrized R_{l,π}-convolution relations. A pair (o_i, o_j) ∈ S_l × S_{l-1} belongs to R_{l,π} iff \"o_j is part of o_i with membership type π\". For notational convenience, the parts of o_i are denoted as R_π^{-1}(o_i) = {o_j | (o_i, o_j) ∈ R_{l,π}}.

The membership type π is used to represent the roles of the parts of an object. For example, we could decompose a graph as a multiset of π-neighborhood subgraphs¹ in which π is the radius of the neighborhoods (see Figure 1 on the left). Another possible use of the π membership type is to represent the role of a vertex in a rooted subgraph (e.g. π = ROOT vs. π = ELEM, see Figure 1 on the right).

¹ The r-neighborhood subgraph (or ego graph) of a vertex v in a graph G is the induced subgraph of G consisting of all vertices whose shortest-path distance from v is at most r.

"}, {"section_index": "4", "section_name": "APPENDIX: SHIFT AGGREGATE EXTRACT NETWORKS", "section_text": "Indeed SAEN allows to use vector representations of different sizes for different strata of objects (e.g. graphs, subgraphs, subgraphs of subgraphs, edges, vertices etc.). The SAEN schema computes the vector representation of each object by applying shift, aggregate and extract operations on the vector representations of its parts.

In Table A1 we report for each dataset: the radiuses r of the neighborhood subgraphs used in the EGNN decomposition and the number of units in the hidden layers for each stratum.

Table A1: Parameters for the neural networks used in the experiments.

DATASET          RADIUSES r   HIDDEN UNITS S0   S1       S2
COLLAB           0,1          15-5              5-2      5-3
IMDB-BINARY      0,1,2        2                 5-2      5-3-1
IMDB-MULTI       0,1,2        2                 5-2      5-3
REDDIT-BINARY    0,1          10-5              5-2      5-3-1
REDDIT-MULTI5K   0,1          10                10       6-5
REDDIT-MULTI12K  0,1          10                10       20-11
MUTAG            0,1,2,3      10                5-5      5-5-1
PTC              0,1          15                15       15-1
NCI1             0,1,2,3      15                15       15-10-1
PROTEINS         0,1,2,3      3-2               6-5-4    6-3-1
D&D              0,1,2,3      10                5-2      5-3-1

We propose a neural network architecture that takes as input an undirected attributed graph G = (V, E, X) where V is the vertex set, E ⊆ V × V is the edge set, and X = {x_v ∈ R^p}_{v∈V} is a set of p-dimensional vertex attributes. When vertices do not have associated attributes (for example this happens in some of the social network datasets of § 4.1), we can set x_v to some vertex invariant such as node centrality or betweenness.

Figure 1: Image of an H-hierarchical decomposition (in particular the EGNN explained in § 4.2). On the left we decompose a graph into rooted ego graphs of radius 0 and 1, while on the right we decompose an ego graph into the set of its vertices. The directed arrows represent \"part of\" relations labeled with their membership type π. The membership type π represents the radius π = 0, 1 of the ego graphs (decomposition on the left) and the role (i.e. π = ROOT, ELEM) of a vertex in the ego graph (decomposition on the right) respectively.
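To make the decomposition of Figure 1 concrete, here is a hedged sketch of building the EGNN strata with networkx; the function names and the data layout are ours, not the authors', and the `max_degree` clamp in the degree-based 1-hot attribute encoding (described in § 4.2 for unattributed graphs) is an assumption.

```python
import networkx as nx
import numpy as np

def egnn_decomposition(G, radiuses=(0, 1)):
    """Sketch of the 3-level EGNN H-decomposition: stratum S1 holds rooted ego
    graphs of the given radiuses, linked to stratum S0 (the vertices) with
    membership types ROOT/ELEM; the whole graph G (stratum S2) is linked to
    each ego graph with its radius as the membership type."""
    S1, R10, R21 = [], [], []
    for r in radiuses:
        for v in G.nodes():
            e = nx.ego_graph(G, v, radius=r)
            idx = len(S1)
            S1.append(e)
            R21.append((r, idx))              # ego graph idx is part of G, type r
            for u in e.nodes():
                pi = "ROOT" if u == v else "ELEM"
                R10.append((pi, idx, u))      # vertex u is part of ego graph idx
    return S1, R10, R21

def degree_one_hot(G, max_degree):
    """1-hot encoding of (clamped) vertex degrees as the attributes x_v."""
    X = np.zeros((G.number_of_nodes(), max_degree + 1))
    for i, v in enumerate(G.nodes()):
        X[i, min(G.degree(v), max_degree)] = 1.0
    return X
```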
An H-hierarchical decomposition is a multilevel generalization of R-convolution relations, and it reduces to an R-convolution relation for L = 1.

We propose Shift Aggregate Extract Network (SAEN) to learn vector representations for all the objects of an H-hierarchical decomposition. SAEN unfolds a neural network architecture over an H-hierarchical decomposition by using the Shift Aggregate Extract (SAE) schema.

According to the SAE schema the vector representation of each object in the H-hierarchical decomposition is either computed by applying a neural network on the vertex attributes (for the objects in the bottom stratum) or defined in terms of the vector representations of its parts (for the other objects).

More formally, the SAE schema associates a d_l-dimensional representation h_i ∈ R^{d_l} to each object o_i ∈ S_l of the H-hierarchical decomposition according to the following formula:

$$h_i = \begin{cases} f_0(x_{v_i};\,\Theta_0) & \text{if } o_i \in S_0 \\[4pt] f_l\Big(\sum_{\pi \in \Pi_l}\ \sum_{o_j \in R_{\pi}^{-1}(o_i)} (z_\pi \otimes h_j);\ \Theta_l\Big) & \text{otherwise} \end{cases} \tag{1}$$

where the Kronecker product z_π ⊗ h_j is the shift step, the double sum is the aggregate step, the map f_l is the extract step, and f_l(·; Θ_l), l = 0, ..., L are multilayer neural networks with parameters Θ_l.

With respect to the base case (first branch of Eq. 1) we have that each object o_i in the bottom stratum S_0 is in one-to-one correspondence with the vertices v_i ∈ V of the graph that we are decomposing. Indeed the vector representations h_i are computed by evaluating f_0(·; Θ_0) in correspondence of the vertex attributes x_{v_i} ∈ X.

The recursion step (second branch of Eq. 1) follows the Shift Aggregate Extract (SAE) schema:

- Shift: the vector representations h_j of the parts are shifted via the product z_π ⊗ h_j, to make sure that vector representations h_j of object parts will fall in the same slot if and only if they have the same membership type π.
- Aggregate: the shifted representations (z_π ⊗ h_j) of the parts o_j are then aggregated with a sum.
- Extract: the aggregated representation is compressed to a d_l-dimensional space by a Θ_l-parametrized nonlinear map f_l(·; Θ_l) : R^{|Π_l| d_{l-1}} → R^{d_l} implemented with a multilayer neural network.

Figure 2: Pictorial representation of the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see § 4.1) together with its compressed version.

The shift and aggregate steps that we have seen so far are identical to those used in kernel design when computing the explicit feature map of a kernel k(x, z) derived from a sum Σ_{π∈Π} k_π(x, z) of base kernels k_π(x, z), π ∈ Π. In principle, it would indeed be possible to turn SAEN into a kernel method by removing the extraction step E from the SAE schema. However, such an approach would increase the dimensionality of the feature space by a multiplicative factor |Π_l| for each level l of the H-hierarchical decomposition, thus leading to an exponential number of features. When using SAEN, the feature space growth is prevented by exploiting a distributed representation (via a multilayered neural network) during the E step of the SAE schema. As a result, SAEN can easily cope with H-hierarchical decompositions consisting of multiple strata."}, {"section_index": "5", "section_name": "2.3 EXPLOITING SYMMETRIES FOR DOMAIN COMPRESSION", "section_text": "In this section we propose a technique, called domain compression, which allows to save memory and speed up the SAEN computation. Domain compression exploits symmetries in H-hierarchical decompositions by collapsing equivalent objects in each stratum. The greater the number of collapsed objects, the higher the compression ratio.

Two objects a, b in a stratum S_l are collapsable, a ∼ b, if they share the same representation (i.e. h_a = h_b) for all the possible values of the parameters Θ_l; the compressed stratum S_l^comp is the quotient set of stratum S_l w.r.t. the collapsibility relation ∼. We assume that the attributes of the elements in the bottom stratum S_0 are categorical, so that the same vector representation can be shared by multiple elements with non-zero probability.² While objects in the bottom stratum S_0 are collapsable when their attributes are identical, for all the other strata S_l, l = 1, ..., L, objects are collapsable if they are made by the same sets of parts for all the membership types π.
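A minimal sketch of one SAE step (Eq. 1) for a single composite object, assuming z_π is the one-hot indicator of the membership type (which is what makes the shift place each part's representation in its type's slot); `f_extract` stands in for the multilayer network f_l:

```python
import numpy as np

def sae_layer(parts_h, parts_pi, types, f_extract):
    """Shift (z_pi ⊗ h_j), Aggregate (sum), Extract (f_l) for one object."""
    d = parts_h[0].shape[0]
    agg = np.zeros(len(types) * d)
    for h_j, pi in zip(parts_h, parts_pi):
        z = np.zeros(len(types))
        z[types.index(pi)] = 1.0      # one-hot encoding of membership type pi
        agg += np.kron(z, h_j)        # shift into pi's slot, then accumulate
    return f_extract(agg)             # compress to d_l dimensions
```

For the ego-graph stratum of EGNN, for instance, `types` would be `["ROOT", "ELEM"]` and `f_extract` a small MLP mapping R^{2 d_0} to R^{d_1}.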
In Figure 2 we provide a pictorial representation of the domain compression of an H-hierarchical decomposition (EGNN, described in § 4.2). On the left we show the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see § 4.1), together with its compressed version on the right."}, {"section_index": "6", "section_name": "2.3.1 DOMAIN COMPRESSION ALGORITHM", "section_text": "In order to compress H-hierarchical decompositions we adapt the lifted linear programming technique proposed by Mladenov et al. (2012) to the SAEN architecture. If a matrix M ∈ R^{n×p} has m ≤ n distinct rows, it can be decomposed as the product D M^comp, where M^comp is a compressed version of M in which the distinct rows of M appear exactly once. The Boolean decompression matrix D encodes the collapsibility relation among the rows of M, so that D_{ij} = 1 iff the i-th row of M falls in the equivalence class j of ∼. A pseudo-inverse C of D can be computed by dividing the rows of D^T by their sum (where D^T is the transpose of D).

² Vectors of real valued attributes could be discretized using clustering techniques. However, we leave discretization in SAEN to future works.

Example 1. If we look at matrix M in Eq. 2 we notice that rows 1 and 4 share the encoding [0, 0, 0], rows 3 and 5 share the encoding [1, 1, 0], while the encoding [1, 0, 1] appears only once at row 2. Matrix M^comp is the compressed version of M:

$$M = \begin{bmatrix} 0&0&0\\ 1&0&1\\ 1&1&0\\ 0&0&0\\ 1&1&0 \end{bmatrix}, \quad M^{comp} = \begin{bmatrix} 0&0&0\\ 1&0&1\\ 1&1&0 \end{bmatrix}, \quad D = \begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&1\\ 1&0&0\\ 0&0&1 \end{bmatrix}, \quad C = \begin{bmatrix} 1/2&0&0&1/2&0\\ 0&1&0&0&0\\ 0&0&1/2&0&1/2 \end{bmatrix} \tag{2}$$

Matrix M can be expressed as the matrix product between the decompression matrix D and the compressed version M^comp (i.e. M = D M^comp), while the matrix multiplication between the compression matrix C and M leads to the compressed matrix M^comp (i.e. M^comp = C M).

To apply domain compression we rewrite Eq. 1 in matrix form as follows:

$$H_l = \begin{cases} f_0(X;\,\Theta_0) & \text{if } l = 0\\[4pt] f_l\Big(\big[R_{l,1},\ldots,R_{l,|\Pi_l|}\big]\begin{bmatrix} H_{l-1} & & 0\\ & \ddots & \\ 0 & & H_{l-1}\end{bmatrix};\ \Theta_l\Big) & \text{otherwise} \end{cases} \tag{3}$$

where [R_{l,1}, ..., R_{l,|Π_l|}] has size |S_l| × |Π_l||S_{l-1}|, the block-diagonal matrix has size |Π_l||S_{l-1}| × |Π_l| d_{l-1}, and H_l has size |S_l| × d_l. Furthermore:

- H_l ∈ R^{|S_l|×d_l} is the matrix that represents the d_l-dimensional encodings of the objects in S_l. The rows of H_l are the vector representations h_i in Eq. 1, while the rows of H_{l-1} are the vector representations h_j in Eq. 1;
- X ∈ R^{|S_0|×p} is the matrix that represents the p-dimensional encodings of the vertex attributes in V (i.e. the rows of X are the x_{v_i} of Eq. 1);
- f_l(·; Θ_l) is unchanged w.r.t. Eq. 1 and is applied to its input matrices row-wise;
- R_{l,π} ∈ R^{|S_l|×|S_{l-1}|}, ∀π ∈ Π_l, are the matrix representations of the R_{l,π}-convolution relations of Eq. 1, whose elements are (R_{l,π})_{ij} = 1 if (o_i, o_j) ∈ R_{l,π} and 0 otherwise.

Domain compression on Eq. 3 is performed by the DOMAIN-COMPRESSION procedure (see Algorithm 3), which takes as input the attribute matrix X and the part-of matrices R_{l,π} and returns their compressed versions. It first invokes the procedure COMPUTE-CD on X to obtain the compression and decompression matrices C_0 and D_0 respectively.
The compression matrix C_0 is used to compress X (line 2); then we start iterating over the levels l = 1, ..., L of the H-hierarchical decomposition (line 4) and compress the R_{l,π} matrices. The compression of the R_{l,π} matrices is done by right-multiplying them by the decompression matrix D_{l-1} of the previous level l − 1 (line 5). In this way we collapse the parts of relation R_{l,π} (i.e. the columns of R_{l,π}), as these were identified in stratum S_{l-1} as identical objects (i.e. those objects corresponding to the rows of X or R_{l-1,π} collapsed during the previous step). The result is a list R_col_comp = [R_{l,π} D_{l-1}, ∀π = 1, ..., |Π_l|] of column-compressed R_{l,π} matrices. We proceed collapsing equivalent objects in stratum S_l, i.e. those made of identical sets of parts: we find symmetries in R_col_comp by invoking COMPUTE-CD (line 6) and obtain a new pair C_l, D_l of compression and decompression matrices respectively. Finally the compression matrix C_l is applied to the column-compressed matrices in R_col_comp in order to obtain the |Π_l| compressed matrices of stratum S_l (line 8).

Algorithm 3: DOMAIN-COMPRESSION(X, R)
1  C_0, D_0 = COMPUTE-CD(X)
2  X_comp = C_0 X                                          // compress the X matrix
3  R_comp = {}                                             // initialize an empty container for compressed matrices
4  for l = 1 to L
5      R_col_comp = [R_{l,π} D_{l-1}, ∀π = 1, ..., |Π_l|]  // column compression
6      C_l, D_l = COMPUTE-CD(R_col_comp)
7      for π = 1 to |Π_l|
8          R_comp[l, π] = C_l R_col_comp[π]                // row compression
9  return X_comp, R_comp

Algorithm 3 allows us to compute the domain compressed version of Eq. 3, replacing each H_l with H_l^comp. Willing to recover the original encodings H_l we just need to employ the decompression matrix D_l, since M = D M^comp as shown above.

As we can see by substituting S_l with S_l^comp, the more the symmetries (i.e. when |S_l^comp| ≪ |S_l|), the greater the domain compression will be.
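A minimal NumPy sketch of the COMPUTE-CD step described above, checked against Example 1; the function name is ours, and the distinct rows come out in `np.unique`'s sorted order rather than order of first appearance, which does not affect the identities M = D M^comp and M^comp = C M:

```python
import numpy as np

def compute_cd(M):
    """Return the compression matrix C and the Boolean decompression matrix D
    for the distinct rows of M, so that D @ (C @ M) reconstructs M."""
    uniq, inverse = np.unique(M, axis=0, return_inverse=True)
    n, m = M.shape[0], uniq.shape[0]
    D = np.zeros((n, m))
    D[np.arange(n), inverse] = 1.0             # D_ij = 1 iff row i is in class j
    C = D.T / D.sum(axis=0, keepdims=True).T   # rows of D^T divided by their sums
    return C, D

# Example 1 as a quick check:
M = np.array([[0, 0, 0], [1, 0, 1], [1, 1, 0], [0, 0, 0], [1, 1, 0]], dtype=float)
C, D = compute_cd(M)
M_comp = C @ M
assert np.allclose(D @ M_comp, M)              # M = D M_comp
```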
When learning with graph inputs, two fundamental design aspects must be taken into account: the choice of the pattern generator and the choice of the matching operator. The former decomposes the graph input in substructures, while the latter allows to compare the substructures.

Among the patterns considered in the graph kernel literature we have paths, shortest paths, walks (Kashima et al., 2003), subtrees (Ramon & Gärtner, 2003; Shervashidze et al., 2011) and neighborhood subgraphs (Costa & De Grave, 2010). The similarity between graphs G and G' is computed by counting the number of matches between their common substructures (i.e. a kernel on the sets of the substructures). The match between two substructures can be defined by using graph isomorphism or some other weaker graph invariant.

When the number of substructures to enumerate is infinite or exponential with the size of the graph (perhaps this is the case for random walks and shortest paths respectively), the kernel between the two graphs is computed without generating an explicit feature map. Learning with an implicit feature map is not scalable, as it has a space complexity quadratic in the number of training examples (because we need to store in memory the Gram matrix).

Other graph kernels such as the Weisfeiler-Lehman Subtree Kernel (WLST) (Shervashidze et al., 2011) and the Neighborhood Subgraph Pairwise Distance Kernel (NSPDK) (Costa & De Grave, 2010) deliberately choose a pattern generator that scales polynomially and produces an explicit feature map. However, the vector representations produced by WLST and NSPDK are handcrafted and not learned.

A recent work by Yanardag & Vishwanathan (2015) proposes to use pattern generators such as graphlets, shortest paths and WLST subtrees to transform input graphs into documents. The generated substructures are then treated as words and embedded in the Euclidean space with a CBOW or a Skip-gram model. The deep upgrade of existing graph kernels is performed by reweighing the counts of the substructures by the square root of their word-vector self similarity.

Another recent work by Niepert et al. (2016) upgrades the convolutional neural networks (CNNs) for images to graphs. While the receptive field of a CNN is usually a square window, Niepert et al. (2016) employ neighborhood subgraphs as receptive fields. As nodes in graphs do not have a specific temporal or spatial order, Niepert et al. (2016) employ vertex invariants to impose an order on the nodes of the subgraphs/receptive fields.

In order to answer the experimental questions we tested our method on six publicly available datasets first proposed by Yanardag & Vishwanathan (2015)."}, {"section_index": "7", "section_name": "4.2 EXPERIMENTS", "section_text": "Before applying EGNN we turn unattributed graphs (V, E) into attributed graphs (V, E, X) by annotating their vertices v ∈ V with attributes x_v ∈ X. We label vertices v of G with their degree and encode this information into the attributes x_v by employing the 1-hot encoding.

In our experiments we chose an H-hierarchical decomposition called Ego Graph Neural Network (EGNN), which mimics the graph kernel NSPDK with the distance parameter set to 0. EGNN decomposes attributed graphs G = (V, E, X) into a 3-level H-hierarchical decomposition with the following strata (see Figure 1 for a pictorial representation of EGNN):

- stratum S_0 contains objects o_v that are in one-to-one correspondence with the vertices v ∈ V;
- stratum S_1 contains v_root-rooted r-neighborhood subgraphs (i.e. ego graphs) e = (v_root, V_e, E_e) of radius r = 0, 1, ..., R and has part-of alphabet Π_1 = {ROOT, ELEM}. Objects o_v ∈ S_0 are \"ELEM-part-of\" ego graph e if v ∈ V_e \ {v_root}, while they are \"ROOT-part-of\" ego graph e if v = v_root;
- stratum S_2 contains the graph G that we want to classify and has part-of alphabet Π_2 = {0, 1}, which corresponds to the radius of the ego graphs e ∈ S_1 of which G is made.

E1 We experimented with SAEN applying the EGNN H-decomposition on all the datasets. For each dataset, we manually chose the parameters of SAEN, i.e. the number of hidden layers for each stratum, the size of each layer and the maximum radius R. We used the Leaky ReLU (Maas et al.) activation function on all the units. We report the chosen parameters in Table A1 of the appendix. In all our experiments we trained the neural networks by using the Adam algorithm to minimize a cross entropy loss.

The classification accuracy of SAEN was measured with 10-times 10-fold cross-validation. We manually chose the number of layers and units for each level of the part-of decomposition; the number of epochs was chosen manually for each dataset, and we kept the same value for all the 100 runs of the 10-times 10-fold cross-validation.
Table 4: Comparison of accuracy results.

DATASET          DGK (Yanardag et al., 2015)   PSCN (Niepert et al., 2016)   SAEN (our method)
COLLAB           73.09 ± 0.25                  72.60 ± 2.16                  75.63 ± 0.31
IMDB-BINARY      66.96 ± 0.56                  71.00 ± 2.29                  71.26 ± 0.74
IMDB-MULTI       44.55 ± 0.52                  45.23 ± 2.84                  49.11 ± 0.64
REDDIT-BINARY    78.04 ± 0.39                  86.30 ± 1.58                  86.08 ± 0.53
REDDIT-MULTI5K   41.27 ± 0.18                  49.10 ± 0.70                  52.24 ± 0.38
REDDIT-MULTI12K  32.22 ± 0.10                  41.32 ± 0.42                  46.72 ± 0.23

The mean accuracies and their standard deviations obtained by our method are reported in Table 4, where we compare these results with those obtained by Yanardag & Vishwanathan (2015) and by Niepert et al. (2016).

Although our method was conceived for social network data, it can also handle other types of graphs. For the sake of completeness, in Table 5 we report the mean accuracies obtained with SAEN on the molecule and protein datasets studied in previous works (e.g. Niepert et al. (2016)).

Table 5: Comparison of accuracy on bio-informatics datasets.

Table 1: Comparison of sizes and runtimes of the datasets before and after the compression.

DATASET          SIZE ORIGINAL (MB)   SIZE COMP. (MB)   RATIO   RUNTIME ORIGINAL   RUNTIME COMP.   SPEEDUP
COLLAB           1190                 448               0.38    43' 18"            8' 20"          5.2
IMDB-BINARY      68                   34                0.50    3' 9"              0' 30"          6.3
IMDB-MULTI       74                   40                0.54    7' 41"             1' 54"          4.0
REDDIT-BINARY    326                  56                0.17    TO                 2' 35"          100.0
REDDIT-MULTI5K   952                  162               0.17    OOM                9' 51"          --
REDDIT-MULTI12K  1788                 347               0.19    OOM                29' 55"         --

E2 In Table 1 we show the file sizes of the preprocessed datasets before and after the compression, together with the data compression ratio.³ We also estimate the benefit of the relational compression from a computational time point of view and report the measurement of the runtime for 1 run with and without compression, together with the speedup factor.

³ The sizes of the uncompressed files are shown for the sole purpose of computing the data compression ratio. Indeed the last version of our code compresses the files on the fly.

For the purpose of this experiment, all tests were run on a computer with two 8-core Intel Xeon E5-2665 processors and 94 GB RAM. Uncompressed datasets which exhausted our server's memory during the test are marked as \"OOM\" (out of memory) in the table, while those which exceeded the time limit of 100 times the time needed for the compressed version are marked as \"TO\" (timeout)."}, {"section_index": "8", "section_name": "4.3 DISCUSSION", "section_text": "A1 As shown in Table 4, EGNN performs consistently better than the other two methods on all the social network datasets. This confirms that the chosen H-hierarchical decomposition is effective on this kind of problems. Also the results for molecule and protein datasets (see Table 5) are in line with the current state of the art.

A2 The compression algorithm has proven to be effective in improving the computational cost of our method. Most of the datasets improved their runtimes by a factor of at least 4 while maintaining the same expressive power. Moreover, experiments on REDDIT-MULTI5K and REDDIT-MULTI12K have only been possible thanks to the size reduction operated by the algorithm, as the script exhausted the memory while executing the training step on the uncompressed files."}]
B16dGcqlx
[{"section_index": "0", "section_name": "THIRD-PERSON IMITATION LEARNING", "section_text": "How sensitive is our proposed algorithm to the selection of hyper-parameters used in deployment? Figure 6 shows the effect of the domain confusion coefficient λ, which trades off how much we should weight the domain confusion objective vs. the standard cost-recovery objective, on the final performance of the algorithm. Setting λ too low results in slower learning and features that are not domain-invariant. Setting λ too high results in an objective that is too quick to destroy information, which makes it impossible to recover an accurate cost.

For multi-time step input, one must choose the number of look-ahead frames that are utilized. If too small a window is chosen, the agent's actions have not effected a large amount of change in the environment and it is difficult to discern any additional class signal over static images. If too large a time-frame passes, causality becomes difficult to interpolate and the agent does worse than simply being trained on static frames. Figure 7 illustrates that no number of look-ahead frames is consistently optimal across tasks. However, a value of 4 showed good performance over all tasks, and so this value was utilized in all other experiments."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Reinforcement learning (RL) makes it possible to train agents capable of achieving sophisticated goals in complex and uncertain environments. A key difficulty in reinforcement learning is specifying a reward function for the agent to optimize. Traditionally, imitation learning in RL has been used to overcome this problem. Unfortunately, hitherto imitation learning methods tend to require that demonstrations are supplied in the first person: the agent is provided with a sequence of states and a specification of the actions that it should have taken. While powerful, this kind of imitation learning is limited by the relatively hard problem of collecting first-person demonstrations. Humans address this problem by learning from third-person demonstrations: they observe other humans perform tasks, infer the task, and accomplish the same task themselves.

[Figure 6 panels: final reward vs. domain confusion coefficient λ for the reacher, inverted pendulum, and point environments.]

In this paper, we present a method for unsupervised third-person imitation learning. Here third-person refers to training an agent to correctly achieve a simple goal in a simple environment when it is provided a demonstration of a teacher achieving the same goal but from a different viewpoint; and unsupervised refers to the fact that the agent receives only these third-person demonstrations, and is not provided a correspondence between teacher states and student states. Our method's primary insight is that recent advances from domain confusion can be utilized to yield domain-agnostic features which are crucial during the training process.
To validate our approach, we report successful experiments on learning from third-person demonstrations in a pointmass domain, a reacher domain, and an inverted pendulum domain.

Figure 6: Reward of final trained policy vs domain confusion weight λ for reacher, inverted pendulum, and point environments."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning (RL) is a framework for training agents to maximize rewards in large, unknown, stochastic environments. In recent years, combining techniques from deep learning with reinforcement learning has yielded a string of successful applications in game playing and robotics (Mnih et al., 2015; 2016; Schulman et al., 2015a; Levine et al., 2016). These successful applications, and the speed at which the abilities of RL algorithms have been increasing, make it an exciting area of research with significant potential for future applications.

[Figure 7 panels: final reward vs. number of look-ahead frames for the reacher, inverted pendulum, and point environments.]

One of the major weaknesses of RL is the need to manually specify a reward function. For each task we wish our agent to accomplish, we must provide it with a reward function whose maximizer will precisely recover the desired behavior. This weakness is addressed by the field of Inverse Reinforcement Learning (IRL). Given a set of expert trajectories, IRL algorithms produce a reward function under which these expert trajectories enjoy the property of optimality. Recently, there has been a significant amount of work on IRL, and current algorithms can infer a reward function from a very modest number of demonstrations (e.g., Abbeel & Ng, 2004; Ratliff et al., 2006; Ziebart et al., 2008; Levine et al., 2011; Ho & Ermon, 2016; Finn et al., 2016).

Figure 7: Reward of final trained policy vs number of look-ahead frames for reacher, inverted pendulum, and point environments.

While IRL algorithms are appealing, they impose the somewhat unrealistic requirement that the demonstrations should be provided from the first-person point of view with respect to the agent. Human beings learn to imitate entirely from third-person demonstrations - i.e., by observing other humans achieve goals. Indeed, in many situations, first-person demonstrations are outright impossible to obtain. Meanwhile, third-person demonstrations are often relatively easy to obtain.

How sensitive is our algorithm to changes in camera angle? We present graphs for the reacher and point experiments wherein we examine the final reward obtained by a policy trained with third-person imitation learning vs the camera angle difference between the first-person and third-person perspective. We omit the inverted double pendulum experiment, as the color and not the camera
angle changes in that setting, and we found the case of slowly transitioning the color to be the definition of uninteresting science.

The goal of this paper is to develop an algorithm for third-person imitation learning. Future advancements in this class of algorithms would significantly improve the state of robotics, because it will enable people to easily teach robots new skills and abilities. Importantly, we want our algorithm to be unsupervised: it should be able to observe another agent perform a task, infer that there is an underlying correspondence to itself, and find a way to accomplish the same task.

We offer an approach to this problem by borrowing ideas from domain confusion (Tzeng et al., 2014) and generative adversarial networks (GANs) (Goodfellow et al., 2014). The high-level idea is to introduce an optimizer under which we can recover both a domain-agnostic representation of the agent's observations, and a cost function which utilizes this domain-agnostic representation to capture the essence of expert trajectories. We formulate this as a third-person RL-GAN problem, and our solution builds on the first-person RL-GAN formulation by Ho & Ermon (2016).

Surprisingly, we find that this simple approach has been able to solve the problems that are presented in this paper (illustrated in Figure 1), even though the student's observations are related in a complicated way to the teacher's demonstrations (given that the observations and the demonstrations are pixel-level). As techniques for training GANs become more stable and capable, we expect our algorithm to be able to solve harder third-person imitation tasks without any direct supervision.

Figure 1: From left to right, the three domains we consider in this paper: pointmass, reacher, and pendulum. The top row is the third-person view of a teacher demonstration. The bottom row is the agent's view in its version of the environment. For the point and reacher environments, the camera angles differ by approximately 40 degrees. For the pendulum environment, the color of the pole differs."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Imitation learning (also learning from demonstrations or programming by demonstration) considers the problem of acquiring skills from observing demonstrations. Imitation learning has a long history, with several good survey articles, including (Schaal, 1999; Calinon, 2009; Argall et al., 2009). Two main lines of work within imitation learning are: 1) behavioral cloning, where the demonstrations are used to directly learn a mapping from observations to actions using supervised learning, potentially with interleaving learning and data collection (e.g., Pomerleau (1989); Ross et al. (2011)); 2) inverse reinforcement learning (Ng et al., 2000), where a reward function is estimated that explains the demonstrations as (near) optimal behavior. This reward function could be represented as nearness to a trajectory (Calinon et al., 2007; Abbeel et al., 2010), as a weighted combination of features (Abbeel & Ng, 2004; Ratliff et al., 2006; Ramachandran & Amir, 2007; Ziebart et al., 2008; Boularias et al., 2011; Kalakrishnan et al., 2013; Doerr et al., 2015), or could also involve feature learning (Ratliff et al., 2007; Levine et al., 2011; Wulfmeier et al., 2015; Finn et al., 2016; Ho & Ermon, 2016).

[Figure 9 panels: learning curves (reward vs. iteration) for the point and reacher experiments, comparing third-person imitation, first-person RL, and first-person features applied to the third-person agent.]

Figure 9: Learning curves for third-person imitation vs. three baselines: 1) RL with true reward, 2) first-person imitation, 3) attempting to use first-person features on the third-person agent.
[Figure 8 panels: final reward vs. difference in camera angle (degrees) for the point and reacher experiments.]

Figure 8: Point and reacher final reward after 20 epochs of third-person imitation learning vs the camera angle difference between the first- and third-person perspective. We see that the point follows a fairly linear slope with regard to camera angle differences, whereas the reacher environment is more stochastic against these changes.

This past work, however, is not directly applicable to the third-person imitation learning setting. In third-person imitation learning, the observations and actions obtained from the demonstration are not the same as what the imitator agent will be faced with. A typical scenario would be: the imitator agent watches a human perform a demonstration, and then has to execute that same task. As discussed in Nehaniv & Dautenhahn (2001), the \"what and how to imitate\" questions become significantly more challenging in this setting. To directly apply existing behavioral cloning or inverse reinforcement learning techniques would require knowledge of a mapping between observations and actions in the demonstrator space to observations and actions in the imitator space. Such a mapping is often difficult to obtain, and it typically relies on providing feature representations that capture the invariance between both environments (Carpenter et al. (2002); Shon et al. (2005); Calinon et al. (2007); Nehaniv (2007); Gioioso et al. (2013); Gupta et al. (2016)). Contrary to prior work, we consider third-person imitation learning from raw sensory data, where no such features are made available.

How does our method compare against reasonable baselines? We consider the following baselines for comparisons against third-person imitation learning. 1) Standard reinforcement learning using full state information and the true reward signal. This agent is trained via TRPO. 2) Standard GAIL (first-person imitation learning). Here, the agent receives first-person demonstrations and attempts to imitate the correct behavior. This is an upper bound on how well we can expect to do, since we have the correct perspective. 3) Training a policy using first-person data and applying it to the third-person environment.

We compare all three of these baselines to third-person imitation learning. As we see in Figure 9: 1) Standard RL, which (unlike the imitation learning approaches) has access to full state and true reward, helps calibrate performance of the other approaches. 2) First-person imitation learning is faced with a simpler imitation problem and accordingly outperforms third-person imitation, yet third-person imitation learning is nevertheless competitive. 3) Applying the first-person policy to the third-person agent fails miserably, illustrating that explicitly considering third-person imitation is important in these settings.

Somewhat unfortunately, the different reward function scales make it difficult to capture information on the variance of each learning curve. Consequently, in Appendix A we have included the full
learning curves for these experiments with variance bars, each plotted with an appropriate scale to examine the variance of the individual curves."}, {"section_index": "4", "section_name": "DISCUSSION AND FUTURE WORK", "section_text": "Our work also closely builds on advances in generative adversarial networks (Goodfellow et al., 2014), which are very closely related to imitation learning as explained in Finn et al. (2016); Ho & Ermon (2016). In our optimization formulation, we apply the gradient flipping technique from Ganin & Lempitsky (2014).

In this paper, we presented the problem of third-person imitation learning. We argue that this problem will be important going forward, as techniques in reinforcement learning and generative adversarial learning improve and the cost of collecting first-person samples remains high. We presented an algorithm which builds on Generative Adversarial Imitation Learning and is capable of solving simple third-person imitation tasks.

The problem of adapting what is learned in one domain to another domain has been studied extensively in computer vision in the supervised learning setting (Yang et al., 2007; Mansour et al., 2009; Kulis et al., 2011; Aytar & Zisserman, 2011; Duan et al., 2012; Hoffman et al., 2013; Long & Wang, 2015). It has also been shown that features trained in one domain can often be relevant to other domains (Donahue et al., 2014). The work most closely related to ours is Tzeng et al. (2014; 2015), who also consider an explicit domain confusion loss, forcing trained classifiers to rely on features that don't allow to distinguish between two domains. This work in turn relates to earlier work by Bromley et al. (1993); Chopra et al. (2005), which also considers supervised training of deep feature embeddings.

One promising direction of future work in this area is to jointly train policy features and cost features at the pixel level, allowing the reuse of image features. Code to train a third-person imitation learning agent on the domains from this paper is presented here: https://github.com/bstadie/"}, {"section_index": "5", "section_name": "ACKNOWLEDGEMENTS", "section_text": "This work was done partially at OpenAI and partially at Berkeley. Work done at Berkeley was supported in part by Darpa under the Simplex program and the FunLoL program.

D. Barber and F. V. Agakov. Kernelized infomax clustering. NIPS, 2005.

A discrete-time finite-horizon discounted Markov decision process (MDP) is represented by a tuple M = (S, A, P, r, ρ₀, γ, T), in which S is a state set, A an action set, P : S × A × S → R₊ a transition probability distribution, r : S × A → R a reward function, ρ₀ : S → R₊ an initial state distribution, γ ∈ [0, 1] a discount factor, and T the horizon.

In the (first-person) imitation learning setting, we are not given the reward function. Instead we are given traces (i.e., sequences of states traversed) by an expert who acts according to an unknown policy π_E. The goal is to find a policy π_θ that performs as well as the expert against the unknown reward function. It was shown in Abbeel & Ng (2004) that this can be achieved through inverse reinforcement learning by finding a policy π_θ that matches the expert's empirical expectation over the discounted sum of all features that might contribute to the reward function.
The work by Ho & Ermon (2016) generalizes this to the setting when no features are provided, as follows: find a policy π_θ that makes it impossible for a discriminator (in their work a deep neural net) to distinguish states visited by the expert from states visited by the imitator agent. This can be formalized as follows:

Standard GAIL (first-person imitation learning). Here, the agent receives first-person demonstrations and attempts to imitate the correct behavior. This is an upper bound on how well we can expect to do, since we have the correct perspective. 3) Training a policy using first-person data and applying it to the third-person environment.

The most closely related work to ours is by Finn et al. (2016); Ho & Ermon (2016); Wulfmeier et al. (2015), who also consider inverse reinforcement learning directly from raw sensory data. However, the applicability of their approaches is limited to the first-person setting. Indeed, matching raw sensory observations is impossible in the third-person setting.

Our approach to third-person imitation learning relies on reinforcement learning from raw sensory data in the imitator domain. Several recent advances in deep reinforcement learning have made this practical, including Deep Q-Networks (Mnih et al., 2015), Trust Region Policy Optimization (Schulman et al., 2015a), A3C (Mnih et al., 2016), and Generalized Advantage Estimation (Schulman et al., 2015b). Our approach uses Trust Region Policy Optimization.

Yusuf Aytar and Andrew Zisserman. Tabula rasa: Model transfer for object category detection. In 2011 International Conference on Computer Vision, pp. 2252-2259. IEEE, 2011.

    max_{π_θ} min_{D_R}  −E_{π_θ}[log D_R(s)] − E_{π_E}[log(1 − D_R(s))]        (1)

Here, the expectations are over the states experienced by the policy of the imitator agent, π_θ, and by the policy of the expert, π_E, respectively. D_R is the discriminator, which outputs the probability of a state having originated from a trace of the imitator policy π_θ. If the discriminator is perfectly able to distinguish which policy originated state-action pairs, then D_R will consistently output a probability of 1 in the first term, and a probability of 0 in the second term, making the objective attain its lowest possible value of zero. It is the role of the imitator agent π_θ to find a policy that makes it difficult for the discriminator to make that distinction. The desired equilibrium has the imitator agent making it impractical for the discriminator to distinguish, hence forcing the discriminator to assign probability 0.5 in all cases. Ho & Ermon (2016) present a practical approach for solving this type of game when representing both π_θ and D_R as deep neural networks. Their approach repeatedly performs gradient updates on each of them. Concretely, for a current policy π_θ traces can be collected, which together with the expert traces form a data-set on which D_R can be trained with supervised learning, minimizing the negative log-likelihood (in practice only performing a modest number of updates). For a fixed D_R, this is a policy optimization problem where −log D_R(s, a) is the reward, and policy gradients can be computed from those same traces. Their approach uses trust region policy optimization (Schulman et al., 2015a) to update the imitator policy from those gradients.

Malinda Carpenter, Josep Call, and Michael Tomasello. Understanding prior intentions enables two-year-olds to imitatively learn a complex task. Child Development, 73(5):1431-1441, 2002.
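The alternating procedure just described can be sketched in a few lines; the PyTorch discriminator D (assumed to output probabilities via a final sigmoid), the batch shapes, and the optimizer are illustrative placeholders rather than Ho & Ermon's implementation, and the TRPO step itself is left abstract.

    import torch
    import torch.nn.functional as F

    def discriminator_step(D, opt, imitator_states, expert_states):
        # One supervised update of D_R: label imitator states 1, expert states 0.
        states = torch.cat([imitator_states, expert_states])
        labels = torch.cat([torch.ones(len(imitator_states)),
                            torch.zeros(len(expert_states))])
        loss = F.binary_cross_entropy(D(states).squeeze(-1), labels)
        opt.zero_grad(); loss.backward(); opt.step()

    def imitation_reward(D, states):
        # Reward -log D_R(s): large when D believes the state looks expert-like.
        with torch.no_grad():
            return -torch.log(D(states).squeeze(-1) + 1e-8)
        # These rewards are then fed to the policy-gradient (TRPO) generator step.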
In our work we will have more terms in the objective, so for compactness of notation, we will realize the discriminative minimization from Eqn. (1) as follows:

Lixin Duan, Dong Xu, and Ivor Tsang. Learning with augmented features for heterogeneous domain adaptation. arXiv preprint arXiv:1206.4660, 2012.

    max_{π_θ} min_{D_R}  L_R = Σ_i CE(D_R(s_i), cℓ_i)        (2)

where s_i is state i, cℓ_i is the correct class label (was the state s_i obtained from an expert vs. from a non-expert), and CE is the standard cross entropy loss.

Formally, the third-person imitation learning problem can be stated as follows. Suppose we are given two Markov Decision Processes M_E and M_θ. Suppose further there exists a set of traces ρ = {(s_1, ..., s_n)}_{i=0}^k which were generated under a policy π_E acting optimally under some unknown reward R_E. In third-person imitation learning, one attempts to recover by proxy through ρ a policy π_θ = f(ρ) which acts optimally with respect to R_θ."}, {"section_index": "6", "section_name": "5.1 GAME FORMULATION", "section_text": "In this section, we discuss a simple algorithm for third-person imitation learning. This algorithm is able to successfully discriminate between expert and novice policies, even when the policies are executed under different environments. Subsequently, this discrimination signal can be used to train expert policies in new domains via RL by training the novice policy to fool the discriminator, thus forcing it to match the expert policy.

We begin by recalling that in the algorithm proposed by Ho & Ermon (2016) the loss in Equation (2) is utilized to train a discriminator D_R capable of distinguishing expert vs. non-expert policies. Unfortunately, (2) will likely fail in cases when the expert and non-expert act in different environments, since D_R will quickly learn these differences and use them as a strong classification signal.

Brian Kulis, Kate Saenko, and Trevor Darrell. What you saw is not what you get: Domain adaptation using asymmetric kernel transforms. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 1785-1792. IEEE, 2011.

To handle the third-person setting, where expert and novice are in different environments, we consider that D_R works by first extracting features from o_t, and then using these features to make a

Judy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, and Kate Saenko. Efficient learning of domain-invariant image representations. arXiv preprint arXiv:1301.3224, 2013.

In third-person learning, observations are more typically available rather than direct state access, so going forward we will work with observations o_t instead of states s_t as representing the expert traces. The top row of Figure 8 illustrates what these observations are like in our experiments.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2014.

classification. Suppose then that we partition D_R into a feature extractor D_F and the actual classifier which assigns probabilities to the outputs of D_F. Overloading notation, we will refer to the classifier as D_R going forward. For example, in case of a deep neural net representation, D_F would correspond to the earlier layers, and D_R to the later layers. The problem is then to ensure that D_F contains no information regarding the rollout's domain label dℓ (i.e., expert vs. novice domain). This can be realized as

    max_{π_θ} min_{D_R}  L_R = Σ_i CE(D_R(D_F(o_i)), cℓ_i)    s.t.    MI(D_F(o_i); d_i) = 0        (3)

Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. arXiv preprint arXiv:0902.3430, 2009.

where MI is mutual information and hence we have abused notation by using D_R, D_F, and dℓ to mean the classifier, feature extractor, and the domain label respectively, as well as distributions over these objects.

The mutual information term can be instantiated by introducing another classifier D_D, which takes features produced by D_F and outputs the probability that those features were produced in the expert vs. non-expert environment. (See Bridle et al. (1992); Barber & Agakov (2005); Krause et al. (2010); Chen et al. (2016) for further discussion on instantiating the information term by introducing another classifier.) If ô_i = D_F(o_i), then the problem can be written as

    max_{π_θ} min_{D_R} max_{D_D}  L_R + L_D = Σ_i CE(D_R(ô_i), cℓ_i) + CE(D_D(ô_i), dℓ_i)        (4)

Chrystopher L Nehaniv and Kerstin Dautenhahn. Like me? - measures of correspondence and imitation. Cybernetics & Systems, 32(1-2):11-51, 2001.

In words, we wish to minimize class loss while maximizing domain confusion.

Often, it can be difficult for even humans to judge a static image as expert vs. non-expert because it does not convey any information about the environmental change affected by the agent's actions. For example, if a pointmass is attempting to move to a target location and starts far away from its goal state, it can be difficult to judge if the policy itself is bad or the initialization was simply unlucky. In response to this difficulty, we give D_R access to not only the image at time t, but also at some future time t + n. Define ô_t = D_F(o_t) and ô_{t+n} = D_F(o_{t+n}). The classifier then makes a prediction D_R(ô_t, ô_{t+n}) = ĉℓ.

Dean A Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems, pp. 305-313, 1989.

This renders the following formulation:

    max_{π_θ} min_{D_R} max_{D_D}  L_R + L_D = Σ_i CE(D_R(ô_i, ô_{i+n}), cℓ_i) + CE(D_D(ô_i), dℓ_i)

N. Ratliff, D. Bradley, J. A. Bagnell, and J. Chestnutt. Boosting structured prediction for imitation learning. 2007.

Stephane Ross, Geoffrey J Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, volume 1, pp. 6, 2011.

Note we also want to optimize over D_F, the feature extractor, but it feeds both into D_R and into D_D, which are competing (hidden under ô), which we will address now.

To deal with the competition over D_F, we introduce a function G that acts as the identity when moving forward through a directed acyclic graph and flips the sign when backpropagating through the graph. This technique has enjoyed recent success in computer vision. See, for example, (Ganin & Lempitsky, 2014). With this trick, the problem reduces to its final form.

John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. arXiv preprint 1502.05477, 2015a.

To ensure sufficient signal for discrimination between expert and non-expert, we collect third-person demonstrations in the expert domain from both an expert and from a non-expert.

Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Xingchao Peng, Pieter Abbeel, Sergey Levine, Kate Saenko, and Trevor Darrell. Towards adapting deep visuomotor representations from simulated to real environments. arXiv preprint arXiv:1511.07111, 2015.
Our complete formulation is graphically summarized in Figure 2.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

    max_{π_θ} min_{D_R, D_D, D_F}  L_R + L_D = Σ_i CE(D_R(ô_i, ô_{i+n}), cℓ_i) + λ CE(D_D(G(ô_i)), dℓ_i)        (5)

In Equation (5), we flip the gradient's sign during backpropagation of D_F with respect to the domain classification loss. This corresponds to stochastic gradient ascent away from features that are useful for domain classification, thus ensuring that D_F produces domain-agnostic features. Equation (5) can be solved efficiently with stochastic gradient descent. Here λ is a hyperparameter that determines the trade-off made between the objectives that are competing over D_F.

Jun Yang, Rong Yan, and Alexander G Hauptmann. Cross-domain video concept detection using adaptive svms. In Proceedings of the 15th ACM international conference on Multimedia, pp. 188-197. ACM, 2007.

B. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI Conference on Artificial Intelligence, 2008.

Figure 2: Architecture diagram for third-person imitation learning. Images at time t and t + 4 are sent through a feature extractor to obtain F(o_t) and F(o_{t+4}). Subsequently, these feature vectors are reused in two places. First, they are concatenated and used to predict whether the samples are drawn from expert or non-expert trajectories. Second, F(o_t) is utilized to predict a domain label (expert vs. novice domain). During backpropagation, the sign on the domain loss L_D is flipped to destroy information that was useful for distinguishing the two domains. This ensures that the feature extractor F is domain agnostic. Finally, the class probabilities that were computed using this domain-agnostic feature vector are utilized as a cost signal in TRPO, which is subsequently utilized to train the novice policy to take expert-like actions and collect further rollouts.

Here, we plot the learning curves for each of the baselines mentioned in the experiments section as a standalone plot. This allows one to better examine the variance of each individual learning curve."}, {"section_index": "7", "section_name": "5.2 ALGORITHM", "section_text": "To solve the game formulation in Equation (5), we perform alternating (partial) optimization over the policy π_θ and the reward function and domain confusion encoded through D_R, D_D, D_F.

Our generator (π_θ) step is similar to the generator step in the algorithm by Ho & Ermon (2016). We simply use log D_R as the cost (equivalently, −log D_R as the reward). Using policy gradient methods (TRPO), we train the generator to minimize this cost and thus push the policy further towards replicating expert behavior. Once the generator step is done, we start again with the discriminator step. The entire process is summarized in Algorithm 1.

We seek to answer the following questions through experiments:

Figure 10: Inverted Pendulum performance under a policy trained on RL, first-person imitation learning, third-person imitation, and a first-person policy applied to a third-person agent.

The optimization over D_R, D_D, D_F is done through stochastic gradient descent with ADAM (Kingma & Ba, 2014).

1. Is it possible to solve the third-person imitation learning problem in simple settings? I.e., given a collection of expert image-based rollouts in one domain, is it possible to train a policy in a different domain that replicates the essence of the original behavior?
2. Does the algorithm we propose benefit from both domain confusion and velocity?
3. How sensitive is our proposed algorithm to the selection of hyper-parameters used in deployment?
4. How sensitive is our proposed algorithm to changes in camera angle?
5. How does our method compare against some reasonable baselines?

Algorithm 1 A third-person imitation learning algorithm

Figure 11: Reacher performance under a policy trained on RL, first-person imitation learning, third-person imitation, and a first-person policy applied to a third-person agent.

Point: A pointmass attempts to reach a point in a plane. The color of the target and the camera angle change between domains.

Reacher: A two DOF arm attempts to reach a designated point in the plane. The camera angle, the length of the arms, and the color of the target point are changed between domains. Note that changing the camera angle significantly alters the image background color from largely gray to roughly 30 percent black. This presents a significant challenge for our method.

Inverted Pendulum: A classic RL task wherein a pendulum must be made to balance via control. For this domain, we only change the color of the pendulum and not the camera angle. Since there is no target point, we found that changing the camera angle left the domain-invariant representation with too little information and resulted in a failure case. In contrast to some traditional renderings

To evaluate our algorithm, we consider three environments in the MuJoCo physics simulator. There are two different versions of each environment, an expert variant and a novice variant.
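To make the gradient-flipping function G and the combined objective of Equation (5) concrete, here is a minimal PyTorch-style sketch; the module names, feature shapes, and λ value are illustrative assumptions rather than the authors' released implementation.

    import torch
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        # G: identity on the forward pass; multiplies gradients by -lam backward.
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    def third_person_loss(D_F, D_R, D_D, o_t, o_tn, cls_labels, dom_labels, lam=0.5):
        # Combined loss of Eq. (5): class loss on the concatenated features at
        # times t and t+n, plus the domain loss routed through G so that D_F is
        # pushed away from domain-revealing features.
        f_t, f_tn = D_F(o_t), D_F(o_tn)
        class_logits = D_R(torch.cat([f_t, f_tn], dim=-1))
        domain_logits = D_D(GradReverse.apply(f_t, lam))
        return (F.cross_entropy(class_logits, cls_labels)
                + F.cross_entropy(domain_logits, dom_labels))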
Our goal is to train a cost function that is domain-agnostic, and hence can be trained with images on the expert domain but nevertheless produce a reasonable cost on the novice domain. See Figure 1 for a visualization of the differences between expert and novice environments for the three tasks.

Figure 12: Point performance under a policy trained on RL, first-person imitation learning, third-person imitation, and a first-person policy applied to a third-person agent.

of this problem, we do not terminate an episode when the agent falls but rather allow data collection to continue for a fixed horizon.

Joint Feature Extractor: Input images are of size 50 x 50 with 3 channels, RGB. Layers are 2 convolutional layers, each followed by a max pooling layer of size 2. Layers use 5 filters of size 3 each.

Is it possible to solve the third-person imitation learning problem in simple settings? In Figure 3 we see that our proposed algorithm is indeed able to recover reasonable policies for all three tasks we examined. Initially, the training is quite unstable due to the domain confusion wreaking havoc on the learned cost. However, after several iterations the policies eventually head towards reasonable local minima and the standard deviation over the reward distribution shrinks substantially. Finally, we note that the extracted feature representations used to complete this task are in fact domain-agnostic, as seen in Figure 9. Hence, the learning is properly taking place from a third-person perspective.

Domain Discriminator and the Class Discriminator: Input is the domain-agnostic output of the convolutional layers. Layers are two feed-forward layers of size 128, followed by a final feed-forward layer of size 2 and a soft-max layer to get the log probabilities.

ADAM is used for discriminator training with a learning rate of 0.001. The RL generator uses the off-the-shelf TRPO implementation available in RLLab.

Figure 3: Reward vs. training iteration for reacher, inverted pendulum, and point environments. The learning curves are averaged over 5 trials, with error bars representing one standard deviation in the reward distribution at the given point.

Figure 4: Domain accuracy vs. training iteration for reacher, inverted pendulum, and point environments."}, {"section_index": "8", "section_name": "Does the algorithm we propose benefit from both domain confusion and the multi-time step input?", "section_text": "We answer this question with the experiments summarized in Figure 5. This experiment compares our approach with: (i) our approach without the domain confusion loss; (ii) our approach without the multi-time step input; (iii) our approach without the domain confusion loss and without the multi-time step input (which is very similar to the approach in Ho & Ermon (2016)). We see that adding domain confusion is essential for getting strong performance in all three experiments. Meanwhile, adding multi-time step input marginally improves the results. See also Figure 7 for an analysis of the effects of multi-time step input on the final results.

Figure 5: Reward vs. iteration for reacher, inverted pendulum, and point environments with no domain confusion and no velocity (red), domain confusion (orange), velocity (brown), and both domain confusion and velocity (blue)."}]
HysBZSqlx
[{"section_index": "0", "section_name": "PLAYING SNES IN THE RETRO LEARNING ENVIRONMENT", "section_text": "required rather than generalize the game. Therefore, in many episodes, little interaction between the two agents occur, leading to a semi-random outcome.\nIn our second experiment, we continued the training process of a the D-DQN network by letting it. compete against the Dueling D-DQN network. We evaluated the re-trained network by playing 30. episodes against the in-game AI. After training, D-DQN was able to win 28 out of 30 games, yet. when faced again against the in-game AI its performance deteriorated drastically (from an average of 17000 to an average of -22000). This demonstrated a form of catastrophic forgetting (Goodfellow. 2012 oentcnlovedth\nNadav Bhonker*, Shai Rozenberg* and Itay Hubara\nIn our third experiment, we trained a Dueling D-DQN agent against three different rivals: the in game AI, a trained DQN agent and a trained Dueling-DQN agent, in an alternating manner, sucl that in each episode a different rival was playing as the opponent with the intention of preventing the agent from learning a policy suitable for just one opponent. The new agent was able to achieve a score of 162,966 (compared to the 'normal'' dueling D-DQN which achieved 169,633). As new and objective measure of generalization, we've configured the in-game AI difficulty to be \"'ver hard' (as opposed to the default \"medium' difficulty). In this metric the alternating version achieve 83,400 compared to -33,266 of the dueling D-DQN which was trained in default setting. Thus proving that the agent learned to generalize to other policies which weren't observed while training\n{nadavbh, shairoz}@tx.technion.ac.il itayhubara@gmail.com\nMastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer pro- gram is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were intro duced, aiming to learn how to perform human tasks such as playing video games As a result, the Arcade Learning Environment (ALE) (Bellemare et al.[|2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outper form humans. In this paper we introduce a new learning environment, the Retro Learning Environment -- RLE, that can run games on the Super Nintendo Enter- tainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE Moreover, RLE is compatible with Python and Torch. SNES games pose a signif- icant challenge to current algorithms due to their higher level of complexity and versatility."}, {"section_index": "1", "section_name": "4.4 FUTURE CHALLENGES", "section_text": "As demonstrated, RLE presents numerous challenges that have yet to be answered. In addition to being able to learn all available games, the task of learning games in which reward delay is extreme,. such as F-Zero without reward shaping, remains an unsolved challenge. Additionally, some games,. such as Super Mario, feature several stages that differ in background and the levels structure. The task of generalizing platform games, as in learning on one stage and being tested on the other, is another unexplored challenge. 
Likewise, surpassing human performance remains a challenge, since current state-of-the-art algorithms still struggle with many SNES games."}, {"section_index": "3", "section_name": "5 CONCLUSION", "section_text": "We introduced a rich environment for evaluating and developing reinforcement learning algorithms, which presents significant challenges to current state-of-the-art algorithms. In comparison to other environments, RLE provides a large amount of games with access to both the screen and the in-game state. The modular implementation we chose allows extensions of the environment with new consoles and games, thus ensuring the relevance of the environment to RL algorithms for years to come (see Table (2)). We've encountered several games in which the learning process is highly dependent on the reward definition. This issue can be addressed and explored in RLE, as reward definition can be done easily. The challenges presented in the RLE consist of: 3D interpretation, delayed reward, noisy background, stochastic AI behavior and more. Although some algorithms were able to play successfully on part of the games, to fully overcome these challenges an agent must incorporate both technique and strategy. Therefore, we believe that the RLE is a great platform for future RL research."}, {"section_index": "4", "section_name": "INTRODUCTION", "section_text": "Controlling artificial agents using only raw high-dimensional input data such as image or sound is a difficult and important task in the field of Reinforcement Learning (RL). Recent breakthroughs in the field allow its utilization in real-world applications such as autonomous driving (Shalev-Shwartz et al., 2016), navigation (Bischoff et al., 2013) and more. Agent interaction with the real world is usually either expensive or not feasible, as the real world is far too complex for the agent to perceive. Therefore, in practice the interaction is simulated by a virtual environment which receives feedback on a decision made by the algorithm. Traditionally, games were used as an RL environment, dating back to Chess (Campbell et al., 2002), Checkers (Schaeffer et al., 1992), backgammon (Tesauro, 1995) and the more recent Go (Silver et al., 2016). Modern games often present problems and tasks which are highly correlated with real-world problems. For example, an agent that masters a racing game, by observing a simulated driver's view screen as input, may be useful for the development of an autonomous driver. For high-dimensional input, the leading benchmark is the Arcade Learning Environment (ALE) (Bellemare et al., 2013), which provides a common interface to dozens of Atari 2600 games, each presenting a different challenge. ALE provides an extensive benchmarking platform, allowing a controlled experiment setup for algorithm evaluation and comparison. The main challenge posed by ALE is to successfully play as many Atari 2600 games as possible (i.e., achieving a score higher than an expert human player) without providing the algorithm any game-specific information (i.e., using the same input available to a human - the game screen and score). A key work to tackle this problem is the Deep Q-Networks algorithm (Mnih et al., 2015), which made a breakthrough in the field of Deep Reinforcement Learning by achieving human-level performance on 29 out of 49 games. In this work we present a new environment - the Retro Learning Environment (RLE).
RLE sets new challenges by providing a unified interface for Atari 2600 games as well as more advanced gaming consoles. As a start we focused on the Super Nintendo Entertainment System (SNES). Out of the five SNES games we tested using state-of-the-art algorithms, only one was able to outperform an expert human player. As an additional feature, RLE supports research of multi-agent reinforcement learning (MARL) tasks (Busoniu et al., 2010). We utilize this feature by training and evaluating the agents against each other, rather than against a pre-configured in-game AI. We conducted several experiments with this new feature and discovered that agents tend to learn how to overcome their current opponent rather than generalize the game being played. However, if an agent is trained against an ensemble of different opponents, its robustness increases. The main contributions of the paper are as follows:

• Introducing a novel RL environment with significant challenges and an easy agent evaluation technique (enabling agents to compete against each other), which could lead to new and more advanced RL algorithms.
• A new method to train an agent by enabling it to train against several opponents, making the final policy more robust.
• Encapsulating several different challenges in a single RL environment.

The authors are grateful to the Signal and Image Processing Lab (SIPL) staff for their support, Alfred Agrell and the LibRetro community for their support, and Marc G. Bellemare for his valuable inputs.

M. G. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count-based exploration and intrinsic motivation. arXiv preprint arXiv:1606.01868, 2016.

M. Campbell, A. J. Hoane, and F.-h. Hsu. Deep blue. Artificial Intelligence, 134(1):57-83, 2002.

libRetro site. Libretro. www.libretro.com. Accessed: 2016-11-03.

The Arcade Learning Environment is a software framework designed for the development of RL algorithms, by playing Atari 2600 games. The interface provided by ALE allows the algorithms to select an action and receive the Atari screen and a reward in every step. The action is the equivalent of a human's joystick button combination and the reward is the difference between the scores at time stamps t and t - 1. The diversity of games for Atari provides a solid benchmark since different games have significantly different goals. Atari 2600 has over 500 games; currently over 70 of them are implemented in ALE and are commonly used for algorithm comparison.

M. J. Mataric. Reinforcement learning in the multi-robot domain. In Robot colonies, pages 73-83. Springer, 1997.

B. Bischoff, D. Nguyen-Tuong, I.-H. Lee, F. Streichert, and A. Knoll. Hierarchical reinforcement learning for robot navigation. In ESANN, 2013.

G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.

L. Busoniu, R. Babuska, and B. De Schutter. Multi-agent reinforcement learning: An overview. In Innovations in Multi-Agent Systems and Applications-1, pages 183-221. Springer, 2010.

R. Crites and A. Barto. Improving elevator performance using reinforcement learning. In Advances in Neural Information Processing Systems 8. Citeseer, 1996.

I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013.

M. Johnson, K. Hofmann, T. Hutton, and D. Bignell. The malmo platform for artificial intelligence experimentation. In International Joint Conference On Artificial Intelligence (IJCAI), page 4246, 2016.

V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

J. Schaeffer, J. Culberson, N. Treloar, B. Knight, P. Lu, and D. Szafron. A world championship caliber checkers program. Artificial Intelligence, 53(2):273-289, 1992.

S. Shalev-Shwartz, N. Ben-Zrihem, A. Cohen, and A. Shashua. Long-term planning by short-term prediction. arXiv preprint arXiv:1602.01580, 2016.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

G. Tesauro. Temporal difference learning and td-gammon. Communications of the ACM, 38(3):58-68, 1995.

J. Togelius, S. Karakovskiy, J. Koutnik, and J. Schmidhuber. Super mario evolution. In 2009 IEEE Symposium on Computational Intelligence and Games, pages 156-161. IEEE, 2009."}, {"section_index": "5", "section_name": "2.2 INFINITE MARIO", "section_text": "Infinite Mario (Togelius et al., 2009) is a remake of the classic Super Mario game in which levels are randomly generated. On these levels the Mario AI Competition was held. During the competition, several algorithms were trained on Infinite Mario and their performances were measured in terms of the number of stages completed. As opposed to ALE, training is not based on the raw screen data but rather on an indication of Mario's (the player's) location and objects in its surroundings. This environment no longer poses a challenge for state-of-the-art algorithms. Its main shortcoming lies in the fact that it provides only a single game to be learnt. Additionally, the environment provides hand-crafted features, extracted directly from the simulator, to the algorithm. This allowed the use of planning algorithms that highly outperform any learning-based algorithm."}, {"section_index": "6", "section_name": "2.3 OPENAI GYM", "section_text": "The OpenAI gym (Brockman et al., 2016) is an open source platform with the purpose of creating an interface between RL environments and algorithms for evaluation and comparison purposes. OpenAI Gym is currently very popular due to the large number of environments supported by it, for example ALE, Go, MountainCar and VizDoom (Zhu et al., 2016), an environment for the learning of the 3D first-person-shooter game "Doom". OpenAI Gym's recent appearance and wide usage indicate the growing interest and research done in the field of RL."}, {"section_index": "7", "section_name": "2.4 OPENAI UNIVERSE", "section_text": "Universe (Universe, 2016) is a platform within the OpenAI framework in which RL algorithms can train on over a thousand games. Universe includes very advanced games such as GTA V and Portal, as well as other tasks (e.g. browser tasks). Unlike RLE, Universe doesn't run the games locally and requires a VNC interface to a server that runs the games. This leads to a lower frame rate and thus longer training times.
Universe. Universe. universe.openai.com, 2016. Accessed: 2016-12-13.

H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning. CoRR, abs/1509.06461, 2015.

Z. Wang, N. de Freitas, and M. Lanctot. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.

Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. arXiv preprint arXiv:1609.05143, 2016."}, {"section_index": "8", "section_name": "Appendices", "section_text": "Experimental Results

Table 3: Average results of DQN, D-DQN, Dueling D-DQN and a human player.

                   DQN      D-DQN    Dueling D-DQN   Human
    F-Zero         3116     3636     5161            6298
    Gradius III    7583     12343    16929           24440
    Mortal Kombat  83733    56200    169300          132441
    Super Mario    11765    16946    20030           36386
    Wolfenstein    100      83       40              2952

Malmo (Johnson et al., 2016) is an artificial intelligence experimentation platform for the famous game "Minecraft". Although Malmo consists of only a single game, it presents numerous challenges, since the "Minecraft" game can be configured differently each time. The inputs to the RL algorithms include specific features indicating the "state" of the game and the current reward."}, {"section_index": "9", "section_name": "2.6 DEEPMIND LAB", "section_text": "DeepMind Lab is a first-person 3D platform environment which allows training RL algorithms on several different challenges: static/random map navigation, collecting fruit (a form of reward) and a laser-tag challenge where the objective is to tag the opponents controlled by the in-game AI. In LAB the agent observations are the game screen (with an additional depth channel) and the velocity of the character. LAB supports four games (one game with four different modes)."}, {"section_index": "10", "section_name": "2.7 DEEP Q-LEARNING", "section_text": "In our work, we used several variants of the Deep Q-Network algorithm (DQN) (Mnih et al., 2015), an RL algorithm whose goal is to find an optimal policy (i.e., given a current state, choose the action that maximizes the final score). The state of the game is simply the game screen, and the action is a combination of joystick buttons that the game responds to (i.e., moving, jumping). DQN learns through trial and error while trying to estimate the "Q-function", which predicts the cumulative discounted reward at the end of the episode given the current state and action while following a policy π. The Q-function is represented using a convolutional neural network that receives the screen as input and predicts the best possible action at its output. The Q-function weights θ are updated according to:

    θ_{t+1} = θ_t + α (R_{t+1} + γ max_a Q_t(s_{t+1}, a; θ'_t) − Q_t(s_t, a_t; θ_t)) ∇_θ Q_t(s_t, a_t; θ_t),

where s_t, s_{t+1} are the current and next states, a_t is the action chosen, α is the step size, γ is the discounting factor, and R_{t+1} is the reward received by applying a_t at s_t. θ' represents the previous weights of the network that are updated periodically. Other than DQN, we examined two leading algorithms on the RLE: Double Deep Q-Learning (D-DQN) (Van Hasselt et al., 2015), a DQN-based algorithm with a modified network update rule, and Dueling Double DQN (Wang et al., 2015), a modification of D-DQN's architecture in which the Q-function is modeled using a state (screen)-dependent estimator and an action-dependent estimator.

The Super Nintendo Entertainment System (SNES) is a home video game console developed by Nintendo and released in 1990.
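To make the Q-learning update in Section 2.7 above concrete, here is a minimal tabular sketch of the same rule; in DQN itself Q is a convolutional network over screens, the max uses the periodically updated target weights θ', and the update is a gradient step on the TD error rather than a table write. The hyperparameter values are illustrative.

    import numpy as np

    def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
        # Move Q(s, a) toward the TD target r + gamma * max_a' Q(s', a').
        td_target = r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (td_target - Q[s, a])
        return Q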
Table (1) presents a comparison between the Atari 2600, Sega Genesis and SNES game consoles, from which it is clear that SNES and Genesis games are far more complex."}, {"section_index": "11", "section_name": "3.2 IMPLEMENTATION", "section_text": "To allow easier integration with current platforms and algorithms, we based our environment on the ALE, with the aim of maintaining as much of its interface as possible. While the ALE is highly coupled with the Atari emulator, Stella, RLE takes a different approach and separates the learning environment from the emulator. This was achieved by incorporating an interface named LibRetro (libRetro site), that allows communication between front-end programs and game-console emulators. Currently, LibRetro supports over 15 game consoles, each containing hundreds of games, at an estimated total of over 7,000 games that can potentially be supported using this interface. Examples of supported game consoles include Nintendo Entertainment System, Game Boy, N64, Sega Genesis, Saturn, Dreamcast and Sony PlayStation. We chose to focus on the SNES game console, implemented using snes9x, as its games present interesting, yet plausible to overcome, challenges. Additionally, we utilized the Genesis-Plus-Gx emulator, which supports several Sega consoles: Genesis/Mega Drive, Master System, Game Gear and SG-1000."}, {"section_index": "12", "section_name": "3.3 SOURCE CODE", "section_text": "RLE is fully available as open source software for use under GNU's General Public License. The environment is implemented in C++ with an interface to algorithms in C++, Python and Lua. Adding a new game to the environment is a relatively simple process."}, {"section_index": "13", "section_name": "3.4 RLE INTERFACE", "section_text": "RLE provides a unified interface to all games in its supported consoles, acting as an RL-wrapper to the LibRetro interface. Initialization of the environment is done by providing a game (ROM file) and a gaming-console (denoted by 'core'). Upon initialization, the first state is the initial frame of the game, skipping all menu selection screens. The cores are provided with the RLE and installed together with the environment. Actions have a bit-wise representation where each controller button is represented by a one-hot vector. Therefore a combination of several buttons is possible using the bit-wise OR operator. The number of valid button combinations is larger than 700, therefore only the meaningful combinations are provided. The environment's observation is the game screen, provided as a 3D array of 32 bits per pixel with dimensions which vary depending on the game. The reward can be defined differently per game; usually we set it to be the score difference between two consecutive frames. By setting different configurations of the environment, it is possible to alter in-game properties such as difficulty (i.e., easy, medium, hard), characters, levels, etc.

Table 1: Atari 2600, SNES and Genesis comparison.

                                     Atari 2600   SNES                  Genesis
    Number of games                  565          783                   928
    CPU speed                        1.19MHz      3.58MHz               7.6MHz
    ROM size                         2-4KB        0.5-6MB               16MB
    RAM size                         128 bytes    128KB                 72KB
    Color depth                      8 bit        16 bit                16 bit
    Screen size                      160x210      256x224 or 512x448    320x224
    Number of controller buttons     5            12                    11
    Possible buttons combinations    18           over 720              over 100
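As a small illustration of the bit-wise action encoding described in Section 3.4, the following sketch builds combined actions with the OR operator; the button-to-bit assignment here is an assumption for illustration, not RLE's actual layout.

    # Each SNES controller button gets its own bit (positions are illustrative).
    BUTTONS = {name: 1 << i for i, name in enumerate(
        ["B", "Y", "SELECT", "START", "UP", "DOWN", "LEFT", "RIGHT",
         "A", "X", "L", "R"])}

    def combine(*names):
        # Combine several buttons into a single action via bit-wise OR.
        action = 0
        for n in names:
            action |= BUTTONS[n]
        return action

    jump_right = combine("RIGHT", "B")  # e.g. press RIGHT and B simultaneously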
"}, {"section_index": "14", "section_name": "3.5 ENVIRONMENT CHALLENGES", "section_text": "Integrating SNES and Genesis with RLE presents new challenges to the field of RL, where visual information in the form of an image is the only state available to the agent. Obviously, SNES games are significantly more complex and unpredictable than Atari games. For example, in sports games such as NBA, while the player (agent) controls a single player, all the other nine players' behavior is determined by pre-programmed agents, each exhibiting random behavior. In addition, many SNES games exhibit delayed rewards in the course of their play (i.e., reward for an action is given many time steps after it was performed). Similarly, in some of the SNES games, an agent can obtain a reward that is indirectly related to the imposed task. For example, in platform games such as Super Mario, reward is received for collecting coins and defeating enemies, while the goal of the challenge is to reach the end of the level, which requires the player to keep moving to the right. Moreover, upon completing a level, a score bonus is given according to the time required for its completion. Therefore collecting coins or defeating enemies is not necessarily preferable if it consumes too much time. Analysis of such games is presented in Section 4.2. Moreover, unlike Atari, which consists of eight directions and one action button, SNES has an eight-direction pad and six action buttons. Since combinations of buttons are allowed, and required at times, the actual action space may be larger than 700, compared to the maximum of 18 actions in Atari. Furthermore, the background in SNES is very rich, filled with details which may move locally or across the screen, effectively acting as non-stationary noise since it provides little to no information regarding the state itself. Finally, we note that SNES hosted some of the first 3D games. In the game Wolfenstein, the player must navigate a maze from a first-person perspective, while dodging and attacking enemies. The SNES offers plenty of other 3D games, such as flight and racing games, which exhibit similar challenges. These games are much more realistic, thus inferring from SNES games to "real world" tasks, as in the case of self-driving cars, might be more beneficial. A visual comparison of two games, Atari and SNES, is presented in Figure (1).

Figure 1: Atari 2600 and SNES game screen comparison. Left: "Boxing", an Atari 2600 fighting game. Right: "Mortal Kombat", a SNES fighting game. Note the exceptional difference in the amount of details between the two games. Therefore, distinguishing a relevant signal from noise is much more difficult.

Table 2: Comparison between RLE and the latest RL environments.

    Characteristics        RLE              OpenAI Universe   Infinite Mario           ALE           Project Malmo            DeepMind Lab
    Number of games        8 out of 7000+   1000+             1                        74            1                        4
    In-game adjustments    Yes              No                No                       No            Yes                      Yes
    Frame rate             530fps (SNES)    60fps             5675fps                  120fps        <7000fps                 <1000fps
    Observation (input)    screen, RAM      screen            hand-crafted features    screen, RAM   hand-crafted features    screen + depth and velocity"}, {"section_index": "15", "section_name": "4.1 EVALUATION METHODOLOGY", "section_text": "The evaluation methodology that we used for benchmarking the different algorithms is the popular method proposed by (Mnih et al., 2015). Each examined algorithm is trained until either it reaches convergence or 100 epochs (each epoch corresponds to 50,000 actions), thereafter it is evaluated by performing 30 episodes of every game. Each episode ends either by reaching a terminal state or after 5 minutes. The results are averaged per game and compared to the average result of a human player.
For each game the human player was given two hours for training, and his performance was evaluated over 20 episodes. As the various algorithms don't use the game audio in the learning process, the audio was muted for both the agent and the human. From both the humans' and agents' scores, a random agent's score (an agent performing actions randomly) was subtracted, to assure that learning indeed occurred. It is important to note that DQN's ε-greedy approach (select a random action with a small probability ε) is present during testing, thus assuring that the same sequence of actions isn't repeated. While the screen dimensions in SNES are larger than those of Atari, in our experiments we maintained the same pre-processing as DQN (i.e., downscaling the image to 84x84 pixels and converting to gray-scale). We argue that downscaling the image size doesn't affect a human's ability to play the game, and it is therefore suitable for RL algorithms as well. To handle the large action space, we limited the algorithm's actions to the minimal button combinations which provide unique behavior. For example, on many games the R and L action buttons don't have any use, therefore their use and combinations were omitted."}, {"section_index": "16", "section_name": "4.1.1 RESULTS", "section_text": "A thorough comparison of the four different agents' performances on SNES games can be seen in Figure (2). The full results can be found in Table (3). Only in the game Mortal Kombat was a trained agent able to surpass an expert human player's performance, as opposed to Atari games, where the same algorithms have surpassed a human player on the vast majority of the games.

One example is the game Wolfenstein, a 3D first-person shooter, which requires solving 3D vision tasks, navigating in a maze and detecting objects. As evident from Figure (2), all agents produce poor results, indicating a lack of the required capabilities. By using the ε-greedy approach the agents weren't able to explore enough states (or even other rooms in our case). The algorithm's final policy appeared as a random walk in a 3D space. Exploration based on visited states, such as presented in Bellemare et al. (2016), might help address this issue. An interesting case is Gradius III, a side-scrolling flight-shooter game. While the trained agent was able to master the technical aspects of the game, which include shooting incoming enemies and dodging their projectiles, its final score is still far from a human's. This is due to a hidden game mechanism in the form of "power-ups", which can be accumulated and significantly increase the player's abilities. The more power-ups collected without being used, the larger their final impact will be. While this game mechanism is evident to a human, the agent acts myopically and uses the power-ups straight away."}, {"section_index": "17", "section_name": "4.2 REWARD SHAPING", "section_text": "As part of the environment and algorithm evaluation process, we investigated two case studies.
The first is a game on which DQN had failed to achieve a better-than-random score, and the second is a game on which the training duration was significantly longer than that of other games.

In the first case study, we used a 2D back-view racing game, F-Zero. In this game, one is required to complete four laps of the track while avoiding other race cars. The reward, as defined by the score of the game, is only received upon completing a lap. This is an extreme case of reward delay. A lap may last as long as 30 seconds, which spans over 450 states (actions) before a reward is received. Since DQN's exploration is a simple ε-greedy approach, it was not able to produce a useful strategy. We approached this issue using reward shaping, essentially a modification of the reward to be a function of the reward and the observation, rather than the reward alone. Here, we define the reward to be the sum of the score and the agent's speed (a metric displayed on the screen of the game). Indeed, when the reward was defined as such, the agents learned to finish the race in first place within a short training period.

The second case study is the famous game of Super Mario. In this game the agent, Mario, is required to reach the right-hand side of the screen, while avoiding enemies and collecting coins. We found this case interesting as it involves several challenges at once: a dynamic background that can change drastically within a level, sparse and delayed rewards, and multiple tasks (such as avoiding enemies and pits, advancing rightwards and collecting coins). To our surprise, DQN was able to reach the end of the level without any reward shaping; this was possible since the agent receives rewards for events (collecting coins, stomping on enemies, etc.) that tend to appear to the right of the player, causing the agent to prefer moving right. However, the training time required for convergence was significantly longer than for other games. We defined the reward as the sum of the in-game reward and a bonus granted according to the player's position, making moving right preferable. This reward

5 A video demonstration can be found at https://youtu.be/nU19XLMveEU

Figure 2: DQN, D-DQN and Dueling D-DQN performance. Results were normalized by subtracting a random agent's score and dividing by the human player's score. Thus 100 represents a human player and zero a random agent.

proved useful, as the training time required for convergence decreased significantly. The two games above can be seen in Figure (3).

Figure (4) illustrates the agent's average value function. Though both agents were able to complete the stage trained upon, the convergence rate with reward shaping is significantly quicker, due to the agent's immediate realization that it should move rightwards.

Figure 3: Left: The game Super Mario with an added bonus for moving right, enabling the agent to master the game after less training time. Right: The game F-Zero. By granting a reward for speed, the agent was able to master this game, as opposed to using solely the in-game reward.
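The two shaping schemes above amount to adding an observation-dependent bonus to the in-game score difference; here is a minimal sketch, where the weight values are illustrative placeholders (the paper does not report exact coefficients).

    def shaped_reward(score_delta, speed=0.0, x_progress=0.0,
                      w_speed=0.01, w_right=0.1):
        # F-Zero: add a bonus proportional to the on-screen speed reading.
        # Super Mario: add a bonus proportional to rightward progress.
        return score_delta + w_speed * speed + w_right * x_progress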
Figure 4: Averaged action-value (Q) for Super Mario trained with a reward bonus for moving right (blue) and without (red)."}, {"section_index": "18", "section_name": "4.3.1 MULTI-AGENT REINFORCEMENT LEARNING RESULTS", "section_text": "We chose the game Mortal Kombat, a two-character side-viewed fighting game (a screenshot of the game can be seen in Figure (1)), as a testbed for the above, as it exhibits favorable properties: both players share the same screen, and the agent's optimal policy is heavily dependent on the rival's behavior, unlike racing games, for example. In order to evaluate two agents fairly, both were trained using the same characters, maintaining the identity of rival and agent. Furthermore, to remove the impact of the starting positions of both agents on their performances, the starting positions were initialized randomly.

In this section we describe our experiments with RLE's multi-agent capabilities. We consider the case where the number of agents is n = 2 and the goals of the agents are opposite, as in r_1 = -r_2. This scheme is known as fully competitive (Busoniu et al., 2010). We used the simple single-agent RL approach (as described by Busoniu et al. (2010), Section 5.4.1), which is to apply the single-agent approach to the multi-agent case. This approach proved useful in Crites and Barto (1996) and Mataric (1997). More elaborate schemes are possible, such as the minimax-Q algorithm (Littman, 1994), (Littman, 2001). These may be explored in future works. We conducted three experiments on this setup: the first was to train two different agents against the in-game AI, as done in previous sections, and evaluate their performance by letting them compete against each other. Here, rather than achieving the highest score, the goal was to win a tournament consisting of 50 rounds, as is common in human-player competitions. The second experiment was to initially train two agents against the in-game AI, and resume the training while competing against each other. In this case, we evaluated the agent by playing again against the in-game AI, separately. Finally, in our last experiment we tried to boost the agents' capabilities by alternating their opponents, switching between the in-game AI and other trained agents.

In the first experiment we evaluated all combinations of DQN against D-DQN and Dueling D-DQN. Each agent was trained against the in-game AI until convergence. Then 50 matches were performed between the two agents. DQN lost 28 out of 50 games against Dueling D-DQN and 33 against D-DQN. D-DQN lost 26 times to Dueling D-DQN. This win balance isn't far from the random case, since the algorithms converged into a policy in which movement towards the opponent is not"}]
rkE3y85ee
[{"section_index": "0", "section_name": "CATEGORICAL REPARAMETERIZATION I GUMBEL-SOFTMAX WITH", "section_text": "Shixiang Gu\nEric Jang\nUniversity of Cambridge MPI Tubingen\nGoogle Brain\nejang@google.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Stochastic neural networks with discrete random variables are a powerful technique for representing distributions encountered in unsupervised learning, language modeling, attention mechanisms, anc reinforcement learning domains. For example, discrete variables have been used to learn probabilis tic latent representations that correspond to distinct semantic classes (Kingma et al.2014), imag regions (Xu et al.]2015), and memory locations (Graves et al.]2014] Graves et al.2016). Discrete representations are often more interpretable (Chen et al.2016) and more computationally efficien (Rae et al.|2016) than their continuous analogues.\nHowever, stochastic networks with discrete variables are difficult to train because the backprop. agation algorithm - while permitting efficient computation of parameter gradients - cannot be applied to non-differentiable layers. Prior work on stochastic gradient estimation has traditionally. focused on either score function estimators augmented with Monte Carlo variance reduction tech- niques (Paisley et al.]2012f Mnih & Gregor2014] Gu et al.[2016]Gregor et al.2013), or biased path derivative estimators for Bernoulli variables (Bengio et al.]2013). However, no existing gra. dient estimator has been formulated specifically for categorical variables. The contributions of this. work are threefold:\nThe practical outcome of this paper is a simple, differentiable approximate sampling mechanism fo categorical variables that can be integrated into neural networks and trained using standard back propagation.\nWork done during an internship at Google Brain\nBeng1o.N. eonara.and Courv1lle Estimating or propagating gradients through stochastic. neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Info-. gan: Interpretable representation learning by information maximizing generative adversarial nets.. CoRR, abs/1606.03657, 2016. J. Chung, S. Ahn, and Y. Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint. arXiv:1609.01704, 2016. P. W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the. ACM, 33(10):75-84, 1990. A. Graves. G. Wayne. M. Reynolds. T. Harley. I. Danihelka. A. Grabska-Barwinska. S. G. Col-\nBen Poole\nStanford University\npoole@cs.stanford.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables. due to the inability to backpropagate through samples. In this work, we present an. efficient gradient estimator that replaces the non-differentiable sample from a cat- egorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax esti- mator outperforms state-of-the-art gradient estimators on structured output predic tion and unsupervised generative modeling tasks with categorical latent variables. and enables large speedups on semi-supervised classification..\nGregor, I. 
Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive network arXiv preprint arXiv:1310.8499, 2013. Gu, S. Levine, I. Sutskever, and A Mnih. MuProp: Unbiased Backpropagation for Stochast Neural Networks. ICLR, 2016. J. Gumbel. Statistical theory of extreme values and some practical applications: a series lectures. Number 33. US Govt. Print. Office, 1954.. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.611 2013. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with dee generative models. In Advances in Neural Information Processing Systems, pp. 3581-3589, 201 Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, volume pp. 2, 2011. J. Maddison, D. Tarlow, and T. Minka. A* sampling. In Advances in Neural Information Pr cessing Systems, pp. 3086-3094, 2014. J. Maddison, A. Mnih, and Y. Whye Teh. The Concrete Distribution: A Continuous Relaxatic of Discrete Random Variables. ArXiv e-prints, November 2016.. Mnih and K. Gregor. Neural variational inference and learning in belief networks. ICML, 3 2014. Mnih and D. J. Rezende. Variational inference for monte carlo objectives. arXiv prepri arXiv:1602.06725, 2016. Paisley, D. Blei, and M. Jordan. Variational Bayesian Inference with Stochastic Search. ArX e-prints, June 2012. briel Pereyra, Geoffrey Hinton, George Tucker, and Lukasz Kaiser. Regularizing neural networl by penalizing confident output distributions. 2016. W Rae, J. J Hunt, T. Harley, I. Danihelka, A. Senior, G. Wayne, A. Graves, and T. P Lillicra Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes. ArXiv e-prini October 2016. Raiko, M. Berglund, G. Alain, and L. Dinh. Techniques for learning binary stochastic feedforwai\n1. We introduce Gumbel-Softmax, a continuous distribution on the simplex that can approx- imate categorical samples, and whose parameter gradients can be easily computed via the reparameterization trick. 2. We show experimentally that Gumbel-Softmax outperforms all single-sample gradient es-. timators on both Bernoulli variables and categorical variables.. 3. We show that this estimator can be used to efficiently train semi-supervised models (e.g Kingma et al.(2014)) without costly marginalization over unobserved categorical latent. variables.\nCPP1 ence in deep generative models. arXiv preprint arXiv:1401.4082, 2014a. D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate infer ence in deep generative models. In Proceedings of The 31st International Conference on Machin Learning, pp. 1278-1286, 2014b.\nWe begin by defining the Gumbel-Softmax distribution, a continuous distribution over the simplex that can approximate samples from a categorical distribution. Let z be a categorical variable with class probabilities 1, 2, ...7tk. For the remainder of this paper we assume categorical samples are. encoded as k-dimensional one-hot vectors lying on the corners of the (k - 1)-dimensional simplex. k-1. This allows us to define quantities such as the element-wise mean Ep[z] = [1, .., k] of. these vectors.\nJ. T. Rolfe. Discrete Variational Autoencoders. ArXiv e-prints, September 2016\nexp((log(;) + gi)/T) Yi for i = 1, ..., k =1 exp((log(;) + gi)/)\nk k k L I(ri/y+1) Pn,r(Y1,...,Yk) =T(k)7k-1 Ti/yi i=1 i=1\nThis distribution was independently discovered by[Maddison et al.(2016), where it is referred to as the concrete distribution. 
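To make the sampling procedure concrete, here is a minimal NumPy sketch of drawing a single Gumbel-Softmax sample; NumPy and the function name are our choices, not the paper's implementation.

```python
import numpy as np

def gumbel_softmax_sample(pi, tau, rng=np.random.default_rng()):
    """Draw one Gumbel-Softmax sample y on the simplex from class
    probabilities pi at temperature tau > 0."""
    # Gumbel(0, 1) noise via inverse transform sampling: g = -log(-log(u)).
    u = rng.uniform(size=len(pi))
    g = -np.log(-np.log(u))
    # y_i = exp((log pi_i + g_i) / tau) / sum_j exp((log pi_j + g_j) / tau)
    z = (np.log(pi) + g) / tau
    z -= z.max()          # softmax is shift-invariant; this improves stability
    y = np.exp(z)
    return y / y.sum()
```

For example, `gumbel_softmax_sample(np.array([0.3, 0.7]), tau=0.1)` returns a vector that is nearly one-hot, while `tau=10.0` yields a nearly uniform vector.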
As the softmax temperature t approaches 0, samples from the Gumbel- Softmax distribution become one-hot and the Gumbel-Softmax distribution becomes identical to the categorical distribution p(z).\n(a) (c) N(O,1) N(O,1) G(0, 1) b Deterministic, N(O,1) differentiable node Stochastic node\na) Categorical T = 0.1 T = 0.5 T = 1.0 T = 10.0 erreeeeeon 6 gadwes category\nFigure 1: The Gumbel-Softmax distribution interpolates between discrete one-hot-encoded categor ical distributions and continuous categorical densities. (a) For low temperatures (- = 0.1, t = 0.5), the expected value of a Gumbel-Softmax random variable approaches the expected value of a cate- gorical random variable with the same logits. As the temperature increases ( = 1.0, = 10.0), the expected value converges to a uniform distribution over the categories. (b) Samples from Gumbel-. Softmax distributions are identical to samples from a categorical distribution as t -> 0. At higher. temperatures, Gumbel-Softmax samples are no longer one-hot, and become uniform as t -> oo.\nFigure 6: Semi-supervised generative model proposed by Kingma et al.(2014). (a) Generative model pe(x[y, z) synthesizes images from latent Gaussian \"style\"' variable z and categorical class variable y. (b) Inference model qo(y, z|x) samples latent state y, z given x. Gaussian z can be differentiated with respect to its parameters because it is reparameterizable. In previous work, when y is not observed, training the VAE objective requires marginalizing over all values of y. (c) Gumbel- Softmax reparameterizes y so that backpropagation is also possible through y without encountering stochastic nodes."}, {"section_index": "3", "section_name": "2.1 GUMBEL-SOFTMAX ESTIMATOR", "section_text": "The Gumbel-Softmax distribution is smooth for > 0, and therefore has a well-defined gradi ent dy/a with respect to the parameters . Thus, by replacing categorical samples with Gumbel-. Softmax samples we can use backpropagation to compute gradients (see Section|3.1). We denote."}, {"section_index": "4", "section_name": "B DERIVING THE DENSITY OF THE GUMBEL-SOFTMAX DISTRIBUTION", "section_text": "Here we derive the probability density function of the Gumbel-Softmax distribution with proba bilities 1, ..., k and temperature t. We first define the logits x; = log , and Gumbel samples\n1The Gumbel(0,1) distribution can be sampled using inverse transform sampling by drawing u ~ Uniform(0, 1) and computing g = log(- log(u))\nThe Gumbel-Max trick (Gumbel!1954] Maddison et al.2 2014) provides a simple and efficient way to draw samples z from a categorical distribution with class probabilities :\nz = one hot arg max gi + log i\nwhere g1...gk are i.i.d samples drawn from Gumbel(0, 11 We use the softmax function as a continu. ous, differentiable approximation to arg max, and generate k-dimensional sample vectors y E k-1 where\nFigures|6|and7|describe the architecture used in our experiments for semi-supervised classification (Section4.3).\n(a) conv2 conv2 conv2 5x5 5x5 5x5 FC X stride=2 stride=2 stride=2 qq(y l x) 10 N=32 N=64 N=128 ReLU ReLU ReLU (b) conv2 conv2 conv2 5x5 5x5 5x5 FC [x,y] stride=2 stride=2 stride=2 qq(z 1 x) 32 N=32 N=64 N=128 ReLU ReLU ReLU c conv2_T conv2_T conv2_T conv2_T FC 3x3 3x3 3x3 3x3 [x, y] >FC pe(x y,z) 64 stride=2 stride=2 stride=2 stride=2 N=128 N=64 N=32 N=32\nWhile Gumbel-Softmax samples are differentiable, they are not identical to samples from the corre- sponding categorical distribution for non-zero temperature. 
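For reference, the exact Gumbel-Max sampler described above can be sketched in the same style; it produces true categorical samples but, because of the arg max, admits no gradient.

```python
import numpy as np

def gumbel_max_sample(pi, rng=np.random.default_rng()):
    """Exact categorical sample via the Gumbel-Max trick:
    one_hot(argmax_i (g_i + log pi_i)), with g_i ~ Gumbel(0, 1)."""
    g = -np.log(-np.log(rng.uniform(size=len(pi))))
    return np.eye(len(pi))[np.argmax(g + np.log(pi))]
```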
For learning, there is a tradeoff between small temperatures, where samples are close to one-hot but the variance of the gradients is large. and large temperatures, where samples are smooth but the variance of the gradients is small (Figure 1. In practice, we start at a high temperature and anneal to a small but non-zero temperature..\nIn our experiments, we find that the softmax temperature t can be annealed according to a variety of schedules and still perform well. If t is a learned parameter (rather than annealed via a fixed. schedule), this scheme can be interpreted as entropy regularization (Szegedy et al.2015, Pereyra. et al.[ 2016), where the Gumbel-Softmax distribution can adaptively adjust the. e\"confidence'of proposed samples during the training process."}, {"section_index": "5", "section_name": "2.2 STRAIGHT-THROUGH GUMBEL-SOFTMAX ESTIMATOR", "section_text": "Continuous relaxations of one-hot vectors are suitable for problems such as learning hidden repre sentations and sequence modeling. For scenarios in which we are constrained to sampling discrete values (e.g. from a discrete action space for reinforcement learning, or quantized compression), we discretize y using arg max but use our continuous approximation in the backward pass by approxi-. mating Vez ~ Vey. We call this the Straight-Through (ST) Gumbel Estimator, as it is reminiscent of the biased path derivative estimator described in|Bengio et al.(2013). ST Gumbel-Softmax allows. samples to be sparse even when the temperature t is high..\nFigure 7: Network architecture for (a) classification qs(y[x) (b) inference qo(z[x, y), and (c) gen erative pe(x[y, z) models. The output of these networks parameterize Categorical, Gaussian, and Bernoulli distributions which we sample from."}, {"section_index": "6", "section_name": "3 RELATED WORK", "section_text": "exp((xi+ gi)/T) for i = 1, ..., Yi j=1 exp((x; + gj)/T)\nIn this section we review existing stochastic gradient estimation techniques for discrete variables (illustrated in Figure 2). Consider a stochastic computation graph (Schulman et al.2015) with. discrete random variable z whose distribution depends on parameter 0, and cost function f(z). The objective is to minimize the expected cost L(0) = Ez~pe(z)[f(z)] via gradient descent, which requires us to estimate VEz~pe(z)[f(z)].\nThe mapping from the Gumbel samples g to the Gumbel-Softmax sample y is not invertible as the. normalization of the softmax operation removes one degree of freedom. To compensate for this, we define an equivalent sampling process that subtracts off the last element, (xk + gk)/- before the. softmax:"}, {"section_index": "7", "section_name": "3.1 PATH DERIVATIVE GRADIENT ESTIMATORS", "section_text": "For distributions that are reparameterizable, we can compute the sample z as a deterministic functior g of the parameters 0 and an independent random variable e, so that z = g(0, e). The path-wise gradients from f to 0 can then be computed without encountering any stochastic nodes:.\nexp((xi+gi-(xk+ gk))/T) for i = 1,... i=1 exp((x;+ gj- (xk+ gk))/T)\nU=Xi+gi-(xk+ gk for i = 1,..., k - 1\nBiased path derivative estimators can be utilized even when z is not reparameterizable. In general we can approximate Vez ~ Vem(0), where m is a differentiable proxy for the stochastic sample For Bernoulli variables with mean parameter 0, the Straight-Through (ST) estimator (Bengio et al. 2013) approximates m = e(z), implying Vem = 1. 
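The straight-through idea combines naturally with Gumbel-Softmax. Below is a minimal PyTorch-style sketch of the ST Gumbel-Softmax estimator from Section 2.2; the detach-based identity is a standard way to express the approximation ∇z ≈ ∇y and is our choice, not necessarily the authors' implementation.

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau):
    """Straight-Through Gumbel-Softmax: a discrete one-hot sample in the
    forward pass, continuous Gumbel-Softmax gradients in the backward pass."""
    g = -torch.log(-torch.log(torch.rand_like(logits)))
    y = F.softmax((logits + g) / tau, dim=-1)          # differentiable proxy
    y_hard = F.one_hot(y.argmax(dim=-1), logits.shape[-1]).to(y.dtype)
    # Forward value equals y_hard; gradients flow only through y.
    return (y_hard - y).detach() + y
```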
For k = 2 (Bernoulli), ST Gumbel-Softmax is similar to the slope-annealed Straight-Through estimator proposed by|Chung et al.(2016), bu uses a softmax instead of a hard sigmoid to determine the slope. Rolfe (2016) considers an al ternative approach where each binary latent variable parameterizes a continuous mixture model Reparameterization gradients are obtained by backpropagating through the continuous variables and marginalizing out the binary variables.\nOne limitation of the ST estimator is that backpropagating with respect to the sample-independent mean may cause discrepancies between the forward and backward pass, leading to higher variance\nconv2 T conv2_T conv2_T conv2 T FC 3x3 3x3 3x3 3x3 [x,y] >FC po(x y,z) 64 stride=2 stride=2 stride=2 stride=2 N=128 N=64 N=32 N=32\n., 9k, where g; ~ Gumbel(0, 1). A sample from the Gumbel-Softmax can then be computed as\na d df dg Ee[f(g(0,e))]=Ec~p Az~Pe f(z))] de ae dg d0\nTo derive the density of this equivalent sampling process, we first derive the density for the 'cen tered' multivariate Gumbel density corresponding to:\nFor example, the normal distribution z ~ N(, ) can be re-written as + : N(0, 1), making. it trivial to compute dz/a and dz/ao. This reparameterization trick is commonly applied to train ing variational autooencoders with continuous latent variables using backpropagation (Kingma & Welling][2013} Rezende et al.]2014b). As shown in Figure [2] we exploit such a trick in the con- struction of the Gumbel-Softmax estimator.\n(U1,..., Uk. dgk P(U1,..., Uk|gk)P(gk k-1 dgk P(9k) ]I p(ui|gk) i=1 k-1 X dgk f(gk,O) f(xk+ gk,Xi-Ui i=1 k-1 11 gk-e-gk i=1\n(a) (b) (c) (d) (e) f(x) f(z) f Vlog Pe(Z) f(z) f(y) df d f d f df ax d z dzV dy f(z) x(0) f(z) d x A dy de au Pe(Z) log P,(Z) Pe(Z) log Pg(Y) dPe(Z) dPe(Z) ae Pe(Z) ae Deterministic, Pe(Z) log Pg(Y) differentiable node Stochastic node e dPe(Z) a logPg(Y) a0 ae Forward pass 0 Backpropagation\n(a) (b) (c) (d) (e) f(x) f(z) f Vlog Pe(Z) f(z) f(y) df af df A df ax dz azV dy f(z) Z x(0) f (z) d x dy de du log Po(Z) Pe(Z) Po(Z) log Pg(Y) dPe(Z) dPe(Z) a0 a0 Deterministic, Pe(Z) Pe(Z) log P,(Y differentiable node 0 Stochastic node dPe(Z) d log Pg(Y) ae ae Forward pass e Backpropagation\nWe perform a change of variables with v = e-9k, so dv = -e-9k dgx and dgk = dv e9k = dv/v and define uk = 0 to simplify notation:.\n11 pu1,..., Uk,-1)=Suk=0 Xk i=1 k-1 exp Xk+) (xi i=1 k k exp L i=1 i=1 k k I(k) exp(xi-Ui exp i=1 i=1\nFigure 2: Gradient estimation in stochastic computation graphs. (1) e.f(x) can be computed via. backpropagation if x(0) is deterministic and differentiable. (2) The presence of stochastic node. z precludes backpropagation as the sampler function does not have a well-defined gradient. (3) The score function estimator and its variants (NVIL, DARN, MuProp, VIMCO) obtain an unbiased estimate of Ve f (x) by backpropagating along a surrogate loss f log pe(z), where f = f (x) - b and. b is a baseline for variance reduction. (4) The Straight-Through estimator, developed primarily for Bernoulli variables, approximates Vez ~ 1. (5) Gumbel-Softmax is a path derivative estimator for. a continuous distribution y that approximates z. Reparameterization allows gradients to flow from. f (y) to 0. 
y can be annealed to one-hot categorical variables over the course of training..\nk-1 1+ Yk = exp(u) j=1\nGumbel-Softmax avoids this problem because each sample y is a differentiable proxy of the corre sponding discrete sample z.\nVeEz[f(z)]= Ez[f(z)Ve logpe(z)\nSF only requires that pe(z) is continuous in 0, and does not require backpropagating through f or the sample z. However, SF suffers from high variance and is consequently slow to converge. Ir particular, the variance of SF scales linearly with the number of dimensions of the sample vectoi (Rezende et al.|2014a), making it especially challenging to use for categorical distributions.\nThe determinant of the Jacobian can then be computed\nOh-1(y1:k-1) k-1 k-1 k 11 11 1 - yi1 Yj Oy1:k-1 j=1 i=1 i=1\nThe variance of a score function estimator can be reduced by subtracting a control variate b(z) from the learning signal f, and adding back its analytical expectation s = Ez [b(z)Ve log pe(z)] to keep the estimator unbiased:\nk k k 1 9 k 11 p(y1,..,yk) = T(k exp ( exp Yi \\i=1 i=1 i=1 -k k k - 11 exp(xi) /y exp(xi)/yT+1 i=1 i=1\nVeEz[f(z)]= Ez[f(z)Ve logpe(z) + (b(z)Ve logpe(z) -b(z)Ve logpe(z) =Ez[(f(z) -b(z))VelogPe(z)]+ b\nk-1 I1 VeXi-Ui-xk-Ve*i-uj-xk (15) U1, ..., Uk,-1. i=1 exp (16) Xk+ k ) exp (17) k =T(k) Iexp (18) exp =1 -\nven samples u1, ..., uk,-1 from the centered Gumbel distribution, we can apply a deterministi nsformation h to yield the first k - 1 coordinates of the sample from the Gumbel-Softmax:\nexp(ui/T) Y1:k = h(u1:k-1), h = 1+ exp(uj/T)\nWe can thus compute the probability of a sample from the Gumbel-Softmax using the change of variables formula on only the first k - 1 variables:\nOh- Y1:k-1 p(y1:k) = p ( (y1:k-1) Oy1:k-1\nSo to compute the probability of the Gumbel-Softmax we need two more pieces: the inverse of h and its Jacobian determinant. The inverse of h is:.\nk-1 (y1:k-1 =TX log yi - log 1 Yj j=1\nVeEz[f(z)] =Ez[f(z)Velogpe(z) +(b(z)Ve logpe(z)-b(z)Velogpe(z)) =Ez[(f(z)-b(z))Velogpe(z)]+\nNVIL (Mnih & Gregor2014) uses two baselines: (1) a moving average f of f to center the. learning signal, and (2) an input-dependent baseline computed by a 1-layer neural network"}, {"section_index": "8", "section_name": "3.3 SEMI-SUPERVISED GENERATIVE MODELS", "section_text": "Semi-supervised learning considers the problem of learning from both labeled data (x, y) ~ D. and unlabeled data x ~ Du, where x are observations (i.e. images) and y are corresponding labe. (e.g. semantic class). For semi-supervised classification,Kingma et al.(2014) propose a variationa. autoencoder (VAE) whose latent state is the joint distribution over a Gaussian \"style'' variable and a categorical \"semantic class\"' variable y (Figure 6] Appendix). The VAE objective trains. discriminative network qo(y|x), inference network qo(z|x, y), and generative network pe(x[y, end-to-end by maximizing a variational lower bound on the log-likelihood of the observation unde. the generative model. For labeled data, the class y is observed, so inference is only done on z . q(z|x, y). The variational lower bound on labeled data is given by:.\nlogpe(x, y) -L(x, y) [log Pe(x|y,z)] - KL[q(z|x, y)||pe(y)p(z)]\nFor unlabeled data, difficulties arise because the categorical distribution is not reparameterizable Kingma et al.(2014) approach this by marginalizing out y over all classes, so that for unlabeled. data, inference is still on go(z[x, y) for each y. 
The lower bound on unlabeled data is:.\nlog pe(x) -U(x) = Ez~q+(y,z(x)[logpe(x|y,z) + logPe(y) + logp(z) - qp(y,z|x )`qg(y|x)(-L(x,y) +H(qs(y|x)))\nThe full maximization obiective is\nJ = E(x,y)~Dt [-L(x,y)] + Ex~Du [-U(x)] + Q:E(x,y)~D1[logqg(y|x)]\nwhere a is the scalar trade-off between the generative and discriminative objectives"}, {"section_index": "9", "section_name": "EXPERIMENTAL RESULTS", "section_text": "In our first set of experiments, we compare Gumbel-Softmax and ST Gumbel-Softmax to other stochastic gradient estimators: Score-Function (SF), DARN, MuProp, Straight-Through (ST), and\nfitted to f - f (a control variate for the centered learning signal itself). Finally, variance. normalization divides the learning signal by max(1, f), where o? is a moving average of. Var[f]. DARN (Gregor et al.] 2013) uses b = f(z) + f'(z)(z - z), where the baseline corre-. sponds to the first-order Taylor approximation of f(z) from f(z). z is chosen to be 1/2 for Bernoulli variables, which makes the estimator biased for non-quadratic f, since it ignores the correction term , in the estimator expression.. MuProp (Gu et al.[2016) also models the baseline as a first-order Taylor expansion: b = f(z) + f'(z)(z - z) and b = f'(z)VeE, [z]. To overcome backpropagation through discrete sampling, a mean-field approximation fmF(e(z)) is used in place of f(z) to compute the baseline and derive the relevant gradients.. VIMCO (Mnih & Rezende]2016) is a gradient estimator for multi-sample objectives that uses the mean of other samples b = 1/m ji f (z) to construct a baseline for each sample. Zi E z1:m. We exclude VIMCO from our experiments because we are comparing estimators for single-sample objectives, although Gumbel-Softmax can be easily extended to multi- sample objectives\nOne limitation of this approach is that marginalization over all k class values becomes prohibitively expensive for models with a large number of classes. If D, I, G are the computational cost of sam- pling from qo(y[x), qs(z[x, y), and pe(x[y, z) respectively, then training the unsupervised objective requires O(D + k(I + G)) for each forward/backward step. In contrast, Gumbel-Softmax allows us to backpropagate through y ~ qo(y[x) for single sample gradient estimation, and achieves a cost of O(D + I + G) per training step. Experimental comparisons in training speed are shown in Figure|5\nSlope-Annealed ST. Each estimator is evaluated on two tasks: (1) structured output prediction an (2) variational training of generative models. We use the MNIST dataset with fixed binarizatior for training and evaluation, which is common practice for evaluating stochastic gradient estimators Salakhutdinov & Murray 2008Larochelle & Murray2011)\nLearning rates are chosen from {3e-5, 1e-5, 3e-4, 1e-4, 3e-3, 1e-3}; we select the best learn ing rate for each estimator using the MNIST validation set, and report performance on the tes set. Samples drawn from the Gumbel-Softmax distribution are continuous during training, but ar. discretized to one-hot vectors during evaluation. We also found that variance normalization was nec essary to obtain competitive performance for SF, DARN, and MuProp. 
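As a reference point for the score-function (SF) estimator of Section 3.2, here is a minimal NumPy sketch for a categorical variable with a simple baseline; the variance-normalization heuristic mentioned above would additionally divide the learning signal by max(1, σ̂_f). The function is illustrative only.

```python
import numpy as np

def sf_gradient(logits, f, n=1000, rng=np.random.default_rng()):
    """Monte Carlo estimate of grad_logits E_z[f(z)] via the score function
    (REINFORCE) with a baseline b. Using the in-batch mean as b is a common
    heuristic; strictly, b should come from independent samples."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    z = rng.choice(len(p), size=n, p=p)
    fs = np.array([f(zi) for zi in z])
    b = fs.mean()
    score = np.eye(len(p))[z] - p   # grad_logits log p(z) for a categorical
    return ((fs - b)[:, None] * score).mean(axis=0)
```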
We used sigmoid activation functions for binary (Bernoulli) neural networks and softmax activations for categorical variables. Models were trained using stochastic gradient descent with momentum 0.9."}, {"section_index": "10", "section_name": "4.1 STRUCTURED OUTPUT PREDICTION WITH STOCHASTIC BINARY NETWORKS", "section_text": "The objective of structured output prediction is to predict the lower half of a 28 × 28 MNIST digit given the top half of the image (14 × 28). This is a common benchmark for training stochastic binary networks (SBN) (Raiko et al. 2014; Gu et al. 2016; Mnih & Rezende 2016). The minimization objective for this conditional generative model is an importance-sampled estimate of the likelihood objective, E_{h ~ p_θ(h|x_upper)}[(1/m) Σ_{i=1}^{m} log p_θ(x_lower|h_i)], where m = 1 is used for training and m = 1000 is used for evaluation.

We trained an SBN with two hidden layers of 200 units each. This corresponds to either 200 Bernoulli variables (denoted as 392-200-200-392) or 20 categorical variables (each with 10 classes) with binarized activations (denoted as 392-(20 × 10)-(20 × 10)-392).

Figure 3: Test loss (negative log-likelihood) on the structured output prediction task with binarized MNIST using a stochastic binary network with (a) Bernoulli latent variables (392-200-200-392) and (b) categorical latent variables (392-(20 × 10)-(20 × 10)-392).

As shown in Figure 3, ST Gumbel-Softmax is on par with the other estimators for Bernoulli variables and outperforms on categorical variables. We found that it was not necessary to anneal the softmax temperature for this task, and used a fixed τ = 1."}, {"section_index": "11", "section_name": "4.2 GENERATIVE MODELING WITH VARIATIONAL AUTOENCODERS", "section_text": "We train variational autoencoders (Kingma & Welling 2013), where the objective is to learn a generative model of binary MNIST images. In our experiments, we modeled the latent variable as a single hidden layer with 200 Bernoulli variables or 20 categorical variables (20 × 10). We use a learned categorical prior rather than a Gumbel-Softmax prior in the training objective. Thus, the minimization objective during training is no longer a variational bound if the samples are not discrete. In practice, we find that optimizing this objective in combination with temperature annealing still minimizes actual variational bounds on validation and test sets. Like the structured output prediction task, we use a multi-sample bound for evaluation with m = 1000.

The temperature is annealed using the schedule τ = max(0.5, exp(−rt)) of the global training step t, where τ is updated every N steps.
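A small sketch of this schedule; implementing the every-N-steps update by flooring the step counter is our reading of the description.

```python
import numpy as np

def annealed_tau(step, r, N, tau_min=0.5):
    """tau = max(tau_min, exp(-r * t)), with t held fixed between
    updates that occur every N training steps."""
    t = (step // N) * N
    return max(tau_min, float(np.exp(-r * t)))
```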
N E {500, 1000} and r E {1e-5, 1e-4} are hyperparameters for which we select the best-performing estimator on the validation set and report test performance\nAs shown in Figure4] ST Gumbel-Softmax outperforms other estimators for Categorical variables and Gumbel-Softmax drastically outperforms other estimators in both Bernoulli and Categorical variables.\nBernoulli VAE Categorical VAE SF SF DARN DARN ST ST Slope-Annealed ST Slope-Annealed ST 120 120 MuProp MuProp Gumbel-Softmax Gumbel-Softmax ST Gumbel-Softmax ST Gumbel-Softmax 110 110 105 100 200 0 400 500 100 200 400 500 Steps (xle3) Steps (xle3) (a) (b)\nFigure 4: Test loss (negative variational lower bound) on binarized MNIST VAE with (a) Bernoulli latent variables (784 - 200 784) and (b) categorical latent variables (784 - (20 10) 200).\nWe apply the Gumbel-Softmax estimator to semi-supervised classification on the binary MNIST dataset. We compare the original marginalization-based inference approach (Kingma et al.|2014) to single-sample inference with Gumbel-Softmax and ST Gumbel-Softmax\nWe trained on a dataset consisting of 100 labeled examples (distributed evenly among each of the 10 classes) and 50,000 unlabeled examples, with dynamic binarization of the unlabeled examples for each minibatch. The discriminative model qo(y|x) and inference model q(z|x, y) are each im- plemented as 3-layer convolutional neural networks with ReLU activation functions. The generative model pe(x[y, z) is a 4-layer convolutional-transpose network with ReLU activations. Experimental details are provided in Appendix A\nEstimators were trained and evaluated against several values of a = {0.1, 0.2,0.3, 0.8,1.0} ang the best unlabeled classification results for test sets were selected for each estimator and reportec\nBernoulli VAE Categorical VAE SF SF DARN DARN ST ST Slope-Annealed ST Slope-Annealed ST MuProp MuProp Gumbel-Softmax Gumbel-Softmax ST Gumbel-Softmax ST Gumbel-Softmax 100 400 400 500 Steps (x1e3) Steps (x1e3) (a) (b)\nTable 1: The Gumbel-Softmax estimator outperforms other estimators on Bernoulli and Categorical latent variables. For the structured output prediction (SBN) task, numbers correspond to negative log-likelihoods (nats) of input images (lower is better). For the VAE task, numbers correspond to. negative variational lower bounds (nats) on the log-likelihood (lower is better)..\nSF DARN MuProp ST Annealed ST Gumbel-S ST Gumbel-S. SBN (Bern.) 72.0 59.7 58.9 58.9 58.7 58.5 59.3 SBN (Cat.) 73.1 67.9 63.0 61.8 61.1 59.0 59.7 VAE (Bern.) 112.2 110.9 109.7 116.0 111.5 105.0 111.5 VAE (Cat.) 110.6 128.8 107.0 110.9 107.8 101.5 107.8\nTable 2: Marginalizing over y and single-sample variational inference perform equally well when applied to image classification on the binarized MNIST dataset (Larochelle & Murray) 2011).We report variational lower bounds and image classification accuracy for unlabeled data in the test set.\n35 Gumbel 30 Marginalization (rte/sdees) 25 20 15 5 0 K=5 K=10 K=100 Number of classes y (a) (b)\nsppeeeeeee) peeee 25 20 15 10 5 0"}, {"section_index": "12", "section_name": "5 DISCUSSION", "section_text": "The primary contribution of this work is the reparameterizable Gumbel-Softmax distribution, whose. corresponding estimator affords low-variance path derivative gradients for the categorical distri-. bution. We show that Gumbel-Softmax and Straight-Through Gumbel-Softmax are effective on structured output prediction and variational autoencoder tasks, outperforming existing stochastic. 
gradient estimators for both Bernoulli and categorical latent variables. Finally, Gumbel-Softmax enables dramatic speedups in inference over discrete latent variables."}, {"section_index": "13", "section_name": "ACKNOWLEDGMENTS", "section_text": "in Table 2. We used an annealing schedule of τ = max(0.5, exp(−3e−5 · t)), updated every 2000 steps.

In Kingma et al. (2014), inference over the latent state is done by marginalizing out y and using the reparameterization trick for sampling from q_φ(z|x, y). However, this approach has a computational cost that scales linearly with the number of classes. Gumbel-Softmax allows us to backpropagate directly through single samples from the joint q_φ(y, z|x), achieving drastic speedups in training without compromising generative or classification performance (Table 2, Figure 5).

In Figure 5, we show how Gumbel-Softmax versus marginalization scales with the number of categorical classes. For these experiments, we use MNIST images with randomly generated labels. Training the model with the Gumbel-Softmax estimator is 2× as fast for 10 classes and 9.9× as fast for 100 classes.

Figure 5: Gumbel-Softmax allows us to backpropagate through samples from the posterior q_φ(y|x), providing a scalable method for semi-supervised learning for tasks with a large number of classes. (a) Comparison of training speed (steps/sec) between Gumbel-Softmax and marginalization (Kingma et al. 2014) on a semi-supervised VAE. Evaluations were performed on a GTX Titan X GPU. (b) Visualization of MNIST analogies generated by varying the style variable z across each row and the class variable y across each column.

We sincerely thank Luke Vilnis, Vincent Vanhoucke, Luke Metz, David Ha, Laurent Dinh, George Tucker, and Subhaneil Lahiri for helpful discussions and feedback."}]
B1KBHtcel
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Peter Potash, Alexey Romanov & Anna Rumshisky\n{ppotash, aromanov, arum}@cs.uml.edu\nDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointl learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.\nSamuel R Bowman, Christopher Potts, and Christopher D Manning. Recursive neural networks can learn logical semantics. arXiv preprint arXiv:1406.1827, 2014.\nAmparo Elizabeth Cano-Basave and Yulan He. A study of the impact of persuasive argumentatior in political debates. In Proceedings of NAACL-HLT, pp. 1405-1413, 2016.\nRobin Cohen. Analyzing the structure of argumentative discourse. Computational linguistics, 13 (1-2):11-24, 1987.\nTrudy Govier. A practical study of argument. Cengage Learning, 2013"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Computational approaches to argument mining/understanding have become very popular (Persing & Ng]2016f Cano-Basave & He2016]|Wei et al.|2 2016Ghosh et al. 2016Palau & Moens2009 Habernal & Gurevych2016). One important avenue in this work is to understand the structure ir argumentative text (Persing & Ng]2016] Peldszus & Stede2015] Stab & Gurevych2016] Nguyer & Litman2016). One fundamental assumption when working with argumentative text is the pres ence of Arguments Components (ACs). The types of ACs are generally characterized as a claim o a premise (Govier2013), with premises acting as support (or possibly attack) units for claims. Tc model more complex structures of arguments, some annotation schemes also include a major clain AC type (Stab & Gurevych]2016}2014b)\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8) 1735-1780, 1997.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014\nNamhee Kwon, Liang Zhou, Eduard Hovy, and Stuart W Shulman. Identifying and classifying sub- jective claims. In Proceedings of the 8th annual international conference on Digital government research: bridging disciplines & domains, pp. 76-81. Digital Government Society of North Amer- ica, 2007.\nThere are two key assumptions our work makes going forward. First, we assume subtask 1 has. been completed, i.e. ACs have already been identified. Second, we follow previous work that assumes a tree structure for the linking of ACs (Palau & Moens]2009] Cohen1987]Peldszus & Stede 2015} Stab & Gurevych 2016) Specifically, a given AC can only have a single outgoing. link, but can have numerous incoming links. Furthermore, there is a 'head' component that has.\nHuy V Nguyen and Diane J Litman. Context-aware argumentative relation mining. 2016\nRaquel Mochales Palau and Marie-Francine Moens. Argumentation mining: the detection, classifi cation and structure of arguments in text. In Proceedings of the 12th international conference o artificial intelligence and law, pp. 98-107. ACM, 2009.\nMartin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew. Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath. Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-. cent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Watten-. 
berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning. on heterogeneous systems, 2015. URLhttp: //tensorf1ow. org/| Software available from. tensorflow.org."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "One of the major goals in automated argumentation mining is to uncover the argu- ment structure present in argumentative text. In order to determine this structure, one must understand how different individual components of the overall argument are linked. General consensus in this field dictates that the argument components form a hierarchy of persuasion, which manifests itself in a tree structure. This work provides the first neural network-based approach to argumentation mining,. focusing on extracting links between argument components, with a secondary fo- cus on classifying types of argument components. In order to solve this problem.. we propose to use a modification of a Pointer Network architecture. A Pointer. Network is appealing for this task for the following reasons: 1) It takes into ac- count the sequential nature of argument components; 2) By construction, it en- forces certain properties of the tree structure present in argument relations; 3) The hidden representations can be applied to auxiliary tasks. In order to extend the contribution of the original Pointer Network model, we construct a joint model. that simultaneously attempts to learn the type of argument component, as well as continuing to predict links between argument components. The proposed model. achieves state-of-the-art results on two separate evaluation corpora. Furthermore. our results show that optimizing for both tasks, as well as adding a fully-connected layer prior to recurrent neural network input, is crucial for high performance.\nSamuel R Bowman, Christopher D Manning, and Christopher Potts. Tree-structured composition in. neural networks without tree-structured architectures. arXiv preprint arXiv:1506.04834, 2015.\nZhengping Che, David Kale, Wenzhe Li, Mohammad Taha Bahadori, and Yan Liu. Deep compu tational phenotyping. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 507-516. ACM, 2015.\nAlex Graves and Jurgen Schmidhuber. Offline handwriting recognition with multidimensional re current neural networks. In Advances in neural information processing systems, pp. 545-552 2009.\nAnkit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter On druska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285, 2015.\nGenerally, the task of processing argument structure encapsulates four distinct subtasks: 1) Given a sequence of tokens that represents an entire argumentative text, determine the token subsequences that constitute non-intersecting ACs; 2) Given an AC, determine the type of AC (claim, premise etc.); 3) Given a set/list of ACs, determine which ACs have a link that determine overall argument structure; 4) Given two linked ACs, determine whether the link is of a supporting or attacking relation. In this work. we focus on subtasks 2 and 3.\nFirst, [cloning will be beneficial for many people who are in need of organ transplants]Ac1. In addition, [it shortens the healing process]ac2. 
Usually, [it is very rare to find an appropriate organ donor]Ac3 and [by using cloning in orderto raise required organs the waiting time can be shortened tremendouslylAc4\nJeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for worc representation. In EMNLP, volume 14, pp. 1532- 43. 2014.\nAnthony J Robinson. An application of recurrent nets to phone probability estimation. IEEE transactions on Neural Networks. 5(2):298-305. 1994\nFigure 1: An example of argument structure with four ACs. The left side shows raw text that has been annotated for the presence of ACs. Squiggly and straight underlining means an AC is a claim or premise, respectively. The ACs in the text have also been annotated for links to other ACs, which. is show in the right figure. ACs 3 and 4 are premises that link to another premise, AC2. Finally, AC2 links to a claim, AC1. AC1 therefore acts as the central argumentative component..\nNiall Rooney, Hui Wang, and Fiona Browne. Applying kernel methods to argumentation mining. Ir FLAIRS Conference, 2012.\nno outgoing link (the top of the tree). Figure[1 shows an example that we will use throughout the paper to concretely explain how our approach works. First, the left side of the figure presents the raw text of a paragraph in a persuasive essay (Stab & Gurevych]2016), with the ACs contained in square brackets. Squiggly verse straight underlining differentiates between claims and premises, respectively. The ACs have been annotated as to how the ACs are linked, and the right side of the figure reflects this structure. The argument structure with four ACs forms a tree, where AC2 has two incoming links, and AC1 acts as the head, with no outgoing links. We also specify the type of AC, with the head AC marked as claim and the remaining ACs marked as premise. Lastly, we note that the order of arguments components can be a strong indicator of how components should related. Linking to the first argument component can provide a competitive baseline heuristic (Peldszus & Stede2015} Stab & Gurevych2016).\nIlya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks In Advances in neural information processing systems, pp. 3104-3112, 2014.\nGiven the task at hand, we propose a modification of a Pointer Network (PN) (Vinyals et al.]2015b A PN is a sequence-to-sequence model that outputs a distribution over the encoding indices at eac. decoding timestep. The PN is a promising model for link extraction in argumentative text becaus. it inherently possesses three important characteristics: 1) it is able to model the sequential nature 0. ACs; 2) it constrains ACs to have a single outgoing link, thus partly enforcing the tree structure; 3. the hidden representations learned by the model can be used for jointly predicting multiple subtask.. We also note that since a PN is a type of sequence-to-sequence model (Sutskever et al.2014), allows the entire sequence to be seen before making prediction. This is important because if th. problem were to be approached as standard sequence modeling (Graves & Schmidhuber2009 Robinson 1994), making predictions at each forward timestep, it would only allow links to AC. hat have already been seen. This is equivalent to only allowing backward links. 
We note that we d test a simplified model that only uses hidden states from an encoding network to make predictions as opposed to the sequence-to-sequence architecture present in the PN (see Section5).\nPNs were originally proposed to allow a variable length decoding sequence (Vinyals et al.]2015b) Alternatively, the PN we implement differs from the original model in that we decode for the same number of timesteps as there are input components. We also propose a joint PN for both extracting links between ACs and predicting the type of AC. The model uses the hidden representation of ACs produced during the encoding step (see Section |3.4). Aside from the partial assumption o1 tree structure in the argumentative text, our models do not make any additional assumptions abou the AC types or connectivity, unlike the work of Peldszus(2014). We evaluate our models on the corpora of Stab & Gurevych (2016) and Peldszus (2014), and compare our results with the results of the aformentioned authors.\nRecent work in argumentation mining offers data-driven approaches for the task of predicting links between ACs. Stab & Gurevych (2014b) approach the task as a binary classification problem. The\nAC1 Claim AC2 Premise AC3 AC4 Premise Premise\nIsaac Persing and Vincent Ng. End-to-end argumentation mining in student essays. In Proceedings of NAACL-HLT, pp. 1384-1394, 2016\nauthors train an SVM with various semantic and structural features.Peldszus & Stede (2015 have also used classification models for predicting the presence of links. Various authors hav. also proposed to jointly model link extraction with other subtasks from the argumentation mining. pipeline, using either an Integer Linear Programming (ILP) framework (Persing & Ng] 2016] Stal & Gurevych2016) or directly feeding previous subtask predictions into another model. The forme. joint approaches are evaluated on annotated corpora of persuasive essays (Stab & Gurevych]2014a 2016), and the latter on a corpus of microtexts (Peldszus2014). The ILP framework is effectiv in enforcing a tree structure between ACs when predictions are made from otherwise naive base. classifiers.\nUnrelated to argumentation mining specifically, recurrent neural networks have previously beer proposed to model tree/graph structures in a linear manner.Vinyals et al.(2015c) use a sequence to-sequence model for the task of syntactic parsing. The authors linearize input parse graphs using a depth-first search, allowing it to be consumed as a sequence, achieving state-of-the-art results on several syntactic parsing datasets.Bowman et al.(2015) experiment on an artificial entailmen dataset that is specifically engineered to capture recursive logic (Bowman et al.[2014). The text is annotated with brackets, in an original attempt to provide easy input into a recursive neural network However, standard recurrent neural networks can take in complete sentence sequences, brackets included, and perform competitively with a recursive neural network.\nIn this section we will describe how we use a PN for the problem of extracting links between ACs We begin by giving a general description of the PN model."}, {"section_index": "3", "section_name": "3.1 POINTER NETWORK", "section_text": "A PN is a sequence-to-sequence model (Sutskever et al.]2014) with attention (Bahdanau et al. 2014) that was proposed to handle decoding sequences over the encoding inputs, and can be ex tended to arbitrary sets (Vinyals et al.2015a). 
The original motivation for a pointer network wa to allow networks to learn solutions to algorithmic problems, such as the traveling salesperson an convex hull, where the solution is a sequence over candidate points. The PN model is trained o input/output sequence pairs (E, D), where E is the source and D is the target (our choice of E,D i meant to represent the encoding, decoding steps of the sequence-to-sequence model). Given mode parameters O, we apply the chain rule to determine the probability of a single training example:\nm(E) 11 p(DE;O) = p(Di|D1,..., Di-1, E;O i=1\nwhich is the sum over all training example pairs\nThe PN uses Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber 1997) for sequentia. modeling, which produces a hidden layer h at each encoding/decoding timestep. In practice, the PI. has two separate LSTMs, one for encoding and one for decoding. Thus, we refer to encoding hidde layers as e, and decoding hidden layers as d..\ntanh(Wie;+ Wdi)\nwhere the function m signifies that the number of decoding timesteps is a function of each individual training example. We will discuss shortly why we need to modify the original definition of m for. our application. By taking the log-likelihood of Equation 1] we arrive at the optimization objective:\n9* * = argmax logp(D E;O E,D\nThe PN uses a form of content-based attention (Bahdanau et al.|2014) to allow the model to produce a distribution over input elements. This can also be thought of as a distribution over input indices. wherein a decoding step 'points' to the input. Formally, given encoding hidden states (e1,..., en). The model calculates p(D|D1, ..., D-1, E) as follows:\nFigure 2: Applying a Pointer Network to the example paragraph in Figure[1|with LSTMs unrolled Over time.\np(Di|D1,.., D-1, E) = softmax(u\nIn order to make the PN applicable to the problem of link extraction, we explicitly set the number of decoding timesteps to be equal to the number of input components. Using notation from Equation|1 the decoding sequence length for an encoding sequence E is simply m(E) = {C1, ..., Cn}, which is trivially equal to n. By constructing the decoding sequence in this manner, we can associate decoding timestep i with AC Ci.\nFrom Equation4] decoding timestep D; will output a distribution over input indices. The result of. this distribution will indicate to which AC component C; links. Recall there is a possibility that an AC has no outgoing link, such as if it's the root of the tree. In this case, we state that if AC C; does not have an outgoing link, decoding step D, will output index i. Conversely, if D, outputs index. j, such that j is not equal to i, this implies that C, has an outgoing link to Cs. For the argument. structure in Figure [1] the corresponding decoding sequence is (1, 1, 2, 2). The topology of this decoding sequence is illustrated in Figure2 Note how C1 points to itself since it has no outgoing. link.\nFinally, we note that we modify the PN structure to have a Bidirectional LSTM as the encoder. Thus e; is the concatenation of forward and backward hidden states e , and 'e n-i+1, produced by two. separate LSTMs. The decoder remains a standard forward LSTM.\nAt each timestep of the decoder, the network takes in the representation of an AC. Each AC is itself a sequence of tokens, similar to the recently proposed Question-Answering dataset (Weston et al. 2015). 
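Before turning to how each AC is represented, here is a minimal NumPy sketch of the pointing mechanism of Equations 3 and 4, applied greedily so that each decoding step i selects the index of the AC that C_i links to (the random matrices stand in for trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 4, 8                                  # 4 ACs, hidden size 8
E = rng.normal(size=(n, h))                  # encoder states e_1..e_n
D = rng.normal(size=(n, h))                  # decoder states d_1..d_n
W1, W2 = rng.normal(size=(h, h)), rng.normal(size=(h, h))
v = rng.normal(size=h)

links = []
for i in range(n):
    u = np.tanh(E @ W1.T + D[i] @ W2.T) @ v  # u_j = v . tanh(W1 e_j + W2 d_i)
    a = np.exp(u - u.max())
    a /= a.sum()                             # softmax over input indices
    links.append(int(a.argmax()))            # index i itself means "no outgoing link"
# A trained model would output links = [0, 0, 1, 1] (0-based)
# for the tree in Figure 1.
```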
We follow the work of[Stab & Gurevych|(2016) and focus on three different types of features\nWe also experimented with relu and elu activations, but found sigmoid to yeild the best performance\nLSTM LSTM LSTM LSTM LSTM LSTM LSTM LSTM E1 E2 E3 E4 D1 D2 D3 D4 A A A Component 1 Component 2 Component 3 Component 4\nwhere matrices W1, W2 and vector v are parameters of the model (along with the LSTM parameters used for encoding and decoding). In Equation[3] prior to taking the dot product with v, the resulting. transformation can be thought of as creating a joint, hidden representation of inputs i and j. Vector u' in equation|4|is of length n, and index j corresponds to input element j. Therefore, by taking the. softmax of u', we are able to create a distribution over the input..\nA given piece of text has a set of ACs, which occur in a specific order in the text, (C1, ..., Cn).. Therefore, at encoding timestep i, the model is fed a representation of C,. Since the representation is large and sparse (see Section|3.3|for details on how we represent ACs), we add a fully-connected layer before the LSTM input. Given a representation R, for AC C, the LSTM input A, becomes:.\nA; = o(WrepRi + brep\nwhere Wrep, brep in turn become model parameters, and o is the sigmoid function' (similarly, the decoding network applies a fully-connected layer with sigmoid activation to its inputs, see Figure 3j. At encoding step i, the encoding LSTM produces hidden layer e, which can be thought of as a hidden representation of AC C .\nClaim Premise Premise Premise A A FC2 FC2 FC2 FC2 A Bi-LSTM Bi-LSTM Bi-LSTM Bi-LSTM LSTM LSTM LSTM LSTM E1 E2 E3 E4 D1 D2 D3 D4 Bidirectional LSTM Encoder FC3 FC3 FC3 FC1 FC1 FC1 FC1 4 ^ 4 Component 1 Component 2 Component 3 Component 4\nFigure 3: Architecture of the joint model applied to the example in Figure1\nto represent our ACs: 1) Bag-of-Words of the AC; 2) Embedding representation based on GloVe embeddings (Pennington et al.[2014); 3) Structural features: Whether or not the AC is the first AC in a paragraph, and Whether the AC is in an opening, body, or closing paragraph. See Section|6|for an ablation study of the proposed features\nUp to this point, we focused on the task of extracting links between ACs. However, recent worl has shown that joint models that simultaneously try to complete multiple aspects of the subtasl pipeline outperform models that focus on a single subtask (Persing & Ng2016f Stab & Gurevych 2014bf Peldszus & Stede2015). Therefore, we will modify the architecture we proposed in Sectior 3 so that it would allow us to perform AC classification (Kwon et al.]2007] Rooney et al.2012 together with link prediction. Knowledge of an individual subtask's predictions can aid in othe subtasks. For example, claims do not have an outgoing link, so knowing the type of AC can aid ir the link prediction task. This can be seen as a way of regularizing the hidden representations fron the encoding component (Che et al.2015).\nPredicting AC type is a straightforward classification task: given AC Ct, we need to predict whether it is a claim or premise. Some annotation schemes also include the class major claim (Stab & [Gurevych]2014a), which means this can be a multi-class classification task. For encoding timestep i, the model creates hidden representation e;. 
This can be thought of as a representation of AC C; Therefore, our joint model will simply pass this representation through a fully connected layer as follows:\nZi = Wclsei + bcls\nConsequently, the probability of predicting component type at timestep i is defined as\nFinally, combining this new prediction task with Equation|2] we arrive at the new training objective\n= arg max Q logp(DE;O) +(1-)) logp(EO) E,D E\nwhich simply sums the costs of the individual prediction tasks, and the second summation is the cost for the new task of predicting argument component type. Q E 0, 1is a hyperparameter that\np(C)=p(E|Ei,Ei;O)\np(E;|Ei, Ei;O) =softmax(zi)\nspecifies how we weight the two prediction tasks in our cost function. The architecture of the join model, applied to our ongoing example, is illustrated in Figure[3."}, {"section_index": "4", "section_name": "EXPERIMENTAL DESIGN", "section_text": "As we have previously mentioned, our work assumes that ACs have already been identified. That. is, the token sequence that comprises a given AC is already known. The order of ACs corresponds directly to the order in which the ACs appear in the text. Since ACs are non-overlapping, there. is no ambiguity in this ordering. We test the effectiveness of our proposed model on a dataset of. persuasive essays (Stab & Gurevych2016), as well as a dataset of microtexts (Peldszus2014) The feature space for the persuasive essay corpus has roughly 3,ooo dimensions, and the microtext. corpus feature space has between 2,500 and 3,000 dimensions, depending on the data split (see. below).\nThe persuasive essay corpus contains a total of 402 essays, with a frozen set of 80 essays held out. for testing. There are three AC types in this corpus: major claim, claim, and premise. We follow the. creators of the corpus and only evaluate ACs within a given paragraph. That is, each training/test. example is a sequence of ACs from a paragraph. This results in a 1,405/144 training/test split. The. microtext corpus contains 112 short texts. Unlike, the persuasive essay corpus, each text in this. corpus is itself a complete example. Since the dataset is small, the authors have created 10 sets of 5-fold cross-validation, reporting the the average across all splits for final model evaluation. This. corpus contains only two types of ACs (claim and premise) The annotation of argument structure of. the microtext corpus varies from the persuasive essay corpus; ACs can be linked to other links, as. opposed to ACs. Therefore, if AC C; is annotated to be linked to link l, we create a link to the source. AC of l. On average, this corpus has 5.14 ACs per text. Lastly, we note that predicting the presence. of links is directional (ordered): predicting a link between the pair C, C,(i j) is different than. Ci, Ci.\nWe implement our models in TensorFlow (Abadi et al.]2015). Our model has the following param eters: hidden input dimension size 512, hidden layer size 256 for the bidirectional LSTMs, hidder layer size 512 for the LSTM decoder, equal to 0.5, and dropout (Srivastava et al.2014) of 0.9 We believe the need for such high dropout is due to the small amounts of training data (Zarrella & Marsh! 2016), particularly in the Microtext corpus. All models are trained with Adam optimizer (Kingma & Ba]2014) with a batch size of 16. For a given training set, we randomly select 10% tc become the validation set. Training occurs for 4,o00 epochs. 
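With α = 0.5 as above, the joint training objective reduces to an evenly weighted sum of the two negative log-likelihoods. A hedged PyTorch-style sketch follows (the tensor names are ours; the paper's implementation is in TensorFlow):

```python
import torch
import torch.nn.functional as F

def joint_loss(link_logits, link_targets, type_logits, type_targets, alpha=0.5):
    """alpha-weighted sum of the link and type cross-entropies.
    link_logits: [n, n] pointer scores (row i = distribution over indices);
    type_logits: [n, n_classes] scores z_i from the AC-type classifier."""
    link_nll = F.cross_entropy(link_logits, link_targets)
    type_nll = F.cross_entropy(type_logits, type_targets)
    return alpha * link_nll + (1.0 - alpha) * type_nll
```

For the paragraph in Figure 1, `link_targets` would be `torch.tensor([0, 0, 1, 1])` and `type_targets` would mark AC1 as a claim and the remaining ACs as premises.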
Once training is completed, we seleci the model with the highest validation accuracy (on the link prediction task) and evaluate it on the held-out test set. At test time, we take a greedy approach and select the index of the probability distribution (whether link or type prediction) with the highest value."}, {"section_index": "5", "section_name": "5 RESULTS", "section_text": "The results of our experiments are presented in Tables [1and 2 For each corpus, we present f1 scores for the AC type classification experiment, with a macro-averaged score of the individual class f1 scores. We also present the f1 scores for predicting the presence/absence of links between ACs, as well as the associated macro-average between these two values..\nWe implement and compare four types of neural models: 1) The previously described PN-based model depicted in Figure 3 (called PN in the tables); 2) The same as 1), but without the fully. connected input layers; 3) The same as 1), but the model only predicts the link task, and is therefore not optimized for type prediction; 4) A non-sequence-to-sequence model that uses the hidden layers produced by the BLSTM encoder with the same type of attention as the PN (called BLSTM in the table). That is, d, in Equation[3|is replaced by ei.\nIn both corpora we compare against the following previously proposed models: Base Classifier (Stab & Gurevych]2016) is feature-rich, task-specific (AC type or link extraction) SVM classifier. Neither of these classifiers enforce structural or global constraints. Conversely, the ILP Joint Model (Stab & Gurevych] 2016) provides constrains by sharing prediction information between the base classifier. For example, the model attempts to enforce a tree structure among ACs within a given paragraph, as well as using incoming link predictions to better predict the type class claim. For the\nTable 1: Results on persuasive essay corpus\nTable 2: Results on microtext corpus\nType prediction Link prediction Model Macro f1 Cl f1 Pr f1 Macro f1 Link f1 No Link f1 Simple .817 - - .663 .478 .848 Best EG .869 1 .693 .502 .884 MP+p .831 .720 .546 .894 - Base Classifier .830 .712 .937 .650 .446 .841 ILP Joint Model .857 .770 .943 .683 .486 .881 PN .813 .692 .934 .740 .577 .903\nmicrotext corpus only, we have the following comparative models: Simple (Peldszus & Stede||2015 is a feature-rich logistic regression classifier. Best EG (Peldszus & Stede]2015) creates an Evidenc Graph (EG) from the predictions of a set of base classifier. The EG models the potential argumen structure, and offers a global optimization objective that the base classifiers attempt to optimize b adjusting their individual weights. Lastly, MP+p (Peldszus & Stede] 2015) combines prediction from base classifiers with a MSTParser, which applies 1-best MIRA structured learning."}, {"section_index": "6", "section_name": "6 DISCUSSION", "section_text": "First, we point out that the PN model achieves state-of-the-art on 10 of the 13 metrics in Tables|1. and 2] including the highest results in all metrics on the Persuasive Essay corpus, as well as link. prediction on the Microtext corpus. The performance on the Microtext corpus is very encouraging. for several reasons. First, the fact that the model can perform so well with only a hundred training. examples is rather remarkable. Second, although we motivate the use of a PN due to the fact that it partially enforces the tree structure in argumentation, other models explicitly contain further con- straints. 
For example, only premises can have outgoing links, and there can be only one claim in an AC. As for the other neural models, the BLSTM model performs competitively with the ILP Joint Model on the persuasive essay corpus, but trails the performance of the PN model. We believe this. is because the PN model is able to create two different representations for each AC, one each in the encoding/decoding state, which benefits performance in the dual tasks, whereas the BLSTM model must encode information relating to type as well as link prediction in a single hidden representation. On one hand, the BLSTM model outperforms the ILP model on link prediction, yet it is not able to match the ILP Joint Model's performance on type prediction, primarily due to the BLSTM's poor. performance on predicting the major claim class. Another interesting outcome is the importance of the fully-connected layer before the LSTM input. The results show that this extra layer of depth is crucial for good performance on this task. Without it, the PN model is only able to perform com. petitively with the Base Classifier. The results dictate that even a simple fully-connected layer with sigmoid activation can provide a useful dimensionality reduction for feature representation. Finally the PN model that only extracts links suffers a large drop in performance, conveying that the joint aspect of the PN model is crucial for high performance in the link prediction task..\nTable 3 shows the results of an ablation study for AC feature representation. Regarding link pre diction, BOw features are clearly the most important, as their absence results in the highest drop in performance. Conversely, the presence of structural features provides the smallest boost in perfor- mance, as the model is still able to record state-of-the-art results compared to the ILP Joint Model. This shows that, one one hand, the PN model is able to capture structural ques through sequence\nType prediction Link prediction Model Macro f1 MC f1 Cl f1 Pr f1 Macro f1 Link f1 No Link f1 Base Classifier .794 .891 .611 .879 .717 .508 .917 ILP Joint Model .826 .891 .682 .903 .751 .585 .918 BLSTM .810 .830 .688 .912 .754 .589 .919 PN No FC Input .791 .826 .642 .906 .708 .514 .901 PN No Type .709 .511 .906 - - - - PN .849 .894 .732 .921 .767 .608 .925\nTable 3: Feature ablation study. * indicates that both BOw and Structural are present, as well as th stated embedding type.\nTable 4: Results of binning test data by length of AC sequence. * indicates that this bin does not contain any major claim labels, and this average only applies to claim and premise classes. However. we do not disable the model from predicting this class: the model was able to avoid predicting this class on its own.\nTable 4 shows the results on the Persuasive Essay test set with the examples binned by sequenc length. First, it is not a surprise to see that the model performs best when the sequences are th shortest. As the sequence length increases, the accuracy on link prediction drops. This is possibl due to the fact that as the length increases, a given AC has more possibilities as to which other AC i can link to, making the task more difficult. Conversely, there is actually a rise in no link predictio accuracy from the second to third row. This is likely due to the fact that since the model predicts a most one outgoing link, it indirectly predicts no link for the remaining ACs in the sequence. 
Since the chance probability of a link involving a given AC is low in a long sequence, the no link performance is actually better in longer sequences."}, {"section_index": "7", "section_name": "7 CONCLUSION", "section_text": "In this paper we have proposed how to use a modified PN (Vinyals et al., 2015b) to extract links between ACs in argumentative text. We evaluate our models on two corpora: a corpus of persuasive essays (Stab & Gurevych, 2016), and a corpus of microtexts (Peldszus, 2014). The PN model records state-of-the-art results on the persuasive essay corpus, as well as achieving state-of-the-art results for link prediction on the microtext corpus, despite only having 90 training examples. The results show that jointly modeling the two prediction tasks is crucial for high performance, as well as the presence of a fully-connected layer prior to the LSTM input. Future work can attempt to learn the AC representations themselves, such as in Kumar et al. (2015). Lastly, future work can integrate subtasks 1 and 4 into the model. The representations produced by Equation 3 could potentially be used to predict the type of link connecting ACs, i.e. supporting or attacking; this is the fourth subtask in the pipeline. In addition, a segmenting technique, such as the one proposed by Weston et al. (2014), can accomplish subtask 1.

|                | Type prediction                  | Link prediction                |
| Model          | Macro f1 | MC f1 | Cl f1 | Pr f1 | Macro f1 | Link f1 | No Link f1 |
| No Structural  | .808     | .824  | .694  | .907  | .760     | .598    | .922       |
| No BOW         | .796     | .833  | .652  | .902  | .728     | .543    | .912       |
| No Embeddings  | .827     | .874  | .695  | .911  | .750     | .581    | .918       |
| Only Avg Emb*  | .832     | .873  | .717  | .917  | .751     | .583    | .918       |
| Only Max Emb*  | .843     | .874  | .732  | .923  | .766     | .608    | .924       |
| Only Min Emb*  | .838     | .878  | .719  | .918  | .763     | .602    | .924       |
| All features   | .849     | .894  | .732  | .921  | .767     | .608    | .925       |

|              | Type prediction                  | Link prediction                |
| Bin          | Macro f1 | MC f1 | Cl f1 | Pr f1 | Macro f1 | Link f1 | No Link f1 |
| 1 ≤ len < 4  | .863     | .902  | .798  | .889  | .918     | .866    | .969       |
| 4 ≤ len < 8  | .680     | .444  | .675  | .920  | .749     | .586    | .912       |
| 8 ≤ len < 12 | .862*    | .000* | .762  | .961  | .742     | .542    | .941       |

modeling and semantics (the ILP Joint Model directly integrates these structural features); however, the PN model still does benefit from their explicit presence in the feature representation. When considering type prediction, both BOW and structural features are important, and it is the embedding features that provide the least benefit. The ablation results also provide an interesting insight into the effectiveness of different 'pooling' strategies for using individual token embeddings to create a multi-word embedding. The popular method of averaging embeddings (which is used by Stab & Gurevych (2016) in their system) is in fact the worst method, although its performance is still competitive with the previous state-of-the-art. Conversely, max pooling produces results that are on par with the PN results from Table 1."}]
ryWKREqxx
[{"section_index": "0", "section_name": "EMERGENT PREDICATION STRUCTURE IN VECTOR REPRESENTATIONS OF NEURAL READERS", "section_text": "Hai Wang. Takeshi Onishi Kevin Gimpel David McAllester\nThe performance of various recent readers on CNN, DailyMail and CBTest are summarized in Table 3] For purposes of comparison we only present results on single models. Model ensembles generally. perform better than single models but are require more computation to train making comparisons more difficult. More experimental details can be found in appendix..\nReading comprehension is a question answering task where the answer is to be. found in a given passage about entities and events not mentioned in general knowl-. edge sources. A significant number of neural architectures for this task (neural. readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of \"predication structure'' in. the hidden state vectors of a class of neural readers including the Attentive Reader. and Stanford Reader. We posits that the hidden state vectors can be viewed as (a representation of) a concatenation [P, c] of a \"predicate vector' P and a \"constant. symbol vector\"' c and that the hidden state represents the atomic formula P(c). This predication structure plays a conceptual role in relating \"aggregation read-. ers\" such as the Attentive Reader and the Stanford Reader to \"explicit reference. readers'' such as the Attention-Sum Reader. the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show thai. the addition of linguistics features to the input to existing neural readers signifi. cantly boosts performance yielding the best results to date on the Who-did-What. dataset\nTable 3: Accuracy on CNN, DailyMail, CBTest NE and CBTest CN. All results are based on a singl model. Results other than those involving pointer or linguistic feature annotations are taken fron. the original publications. Readers in the first group are explicit reference readers. Readers in th second group are aggregation readers. The final reader defies this classification.."}, {"section_index": "1", "section_name": "INTRODUCTION AND OVERVIEW", "section_text": "Reading comprehension is a type of question answering task where the answer is to be found in a passage about particular entities and events not otherwise familiar to the reader. In particular, the entities and events should not be mentioned in structured databases of general knowledge. Reading comprehension problems are intended to measure a systems ability to extract semantic information about entities and relations directly from unstructured text. Several large scale reading comprehen- sion datasets have been introduced recently. In particular the CNN & DailyMail datasets (Hermann et al.l2015), the Children's Book Test (CBT) (Hill et al.l|2016), and the Who-did-What dataset (On- ishi et al.|2016). The large sizes of these datasets enable the application of deep learning. These are all cloze-style datasets where a question is constructed by deleting a word or phrase from an article summary (in CNN/DailyMail), from a sentence in a Children's story (in CBT), or by delet ing a person from the first sentence of a different news article on the same entities and events (in Who-did-What).\nFigure 9: Heat map a for Attention Sum Reader\nIn table 3] all the high-performance approaches are proposed very recently. 
Blue color represents the second highest accuracy and bold font indicates the state-of-the-art accuracy. Note that the result of the Stanford Reader we report here is the one without relabeling, since the relabeling procedure doesn't follow the protocol used in Hermann et al. (2015).

In this paper we present empirical evidence for the emergence of predication structure in a certain class of neural readers. To understand predication structure it is helpful to review the anonymization performed in the CNN/DailyMail dataset. In this dataset named entities are replaced by anonymous entity identifiers such as "entity37". The passage might contain "entity52 gave entity24 a rousing applause" and the question might be "X received a rousing applause from entity52". The task is to fill in X from a given multiple choice list of candidate entity identifiers. A fixed, relatively small set of the same entity identifiers is used over all the problems, and the same problem is presented many times with the entity identifiers shuffled. This prevents a given entity identifier from having any semantically meaningful vector embedding. The embeddings of the entity identifiers are"}, {"section_index": "2", "section_name": "7 DISCUSSION", "section_text": "Explicit reference architectures rely on reference resolution - a specification of which phrases in the given passage refer to candidate answers. Our experiments indicate that all existing readers benefit greatly from this externally provided information. Aggregation readers seem to demonstrate a stronger learning ability in that they essentially learn to mimic explicit reference readers by identifying reference annotation and using it appropriately. This is done most clearly in the pointer reader architectures. Furthermore, we have argued for, and given experimental evidence for, an interpretation of aggregation readers as learning emergent logical structure - a factoring of neural representations into a direct sum of a statement (predicate) representation and an entity (argument) representation.

Real value feature: position of the token's first occurrence in the passage as a percentage of the passage length.
Binary feature: whether the text surrounding the token matches the text surrounding the placeholder in the question. We only include features for matching one word to the left and one word to the right.
One hot vector: Part-of-speech (POS) tagging. We only use this feature on the CBT dataset.
One hot vector: Named Entity Recognition (NER). We only use this feature on the CBT dataset."}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "( @entity3 ) suspected @entity2 militants this week attacked civilians inside @entity5 for the first time in a month , killing at least 16 villagers , a military spokesman told @entity3 saturday . six attackers were killed by @entity5 forces , said maj. @entity10 , an operations officer with a special military unit set up to fight @entity2 . the attackers came thursday " in the hundreds ... torched @entity14 village in the @entity15 , " he said . @entity14 is a village that borders @entity17 and has been identified as a recruiting ground for @entity2 . regional gov. @entity19 said the insurgents have been attacking border villages in @entity5 in search of supplies . @entity5 troops retook cattle that was stolen by the attackers in @entity14 , @entity10 said . the last attack in @entity5 by the @entity29 - based militants was march 10 , when the assailants struck the locality of @entity32 in a failed attempt to overrun a military base .
@entity2 , whose name translates as " @entity44 education is sin , " has been waging a years - long campaign of terror aimed at instituting its extreme version of @entity42 law in @entity29 . @entity2 's tactics have intensified in recent years , from battling @entity29 government soldiers to acts disproportionately affecting civilians -- such as raids on villages , mass kidnappings , assassinations , market bombings and attacks on churches and unaffiliated mosques . much of this violence has taken place in @entity29 , but neighboring countries -- @entity5 included -- have also been hit increasingly hard . journalist @entity61 in @entity63 , @entity5 , contributed to this report .

query: @placeholder is based in @entity29 but has attacked across the border of several neighbor

presumably just pointers to semantics-free tokens. We will write entity identifiers as logical constant symbols such as c rather than strings such as "entity37".

At a very high level our analysis and experiments support a central role for reference resolution in reading comprehension. Automating reference resolution in neural models, and demonstrating its value on appropriate datasets, would seem to be an important area for future research.

Aggregation readers, including Memory Networks (Weston et al.; Sukhbaatar et al., 2015), the Attentive Reader (Hermann et al., 2015) and the Stanford Reader (Chen et al., 2016), use bidirectional LSTMs or GRUs to construct a contextual embedding h_t of each position t in the passage and also an embedding q of the question. They then select an answer c using a criterion similar to

Of course there is great interest in "learning representations". The current state of the art in reading comprehension is such that systems still benefit from externally provided linguistic features including externally annotated reference resolution. It would seem desirable to develop fully automated neural readers that perform as well as readers using externally provided annotations. It is of course important to avoid straw man baselines when making any such claim.

a* = argmax_c Σ_t ⟨h_t, q⟩ e_o(c)^T h_t    (1)

a* = argmax_c e_o(c)^T Σ_t α_t h_t    (2)

Here Σ_t α_t h_t is viewed as a vector representation of the passage."}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank NVIDIA Corporation for the donation of GPUs used for this work."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the ACL, 2016.

We argue that for aggregation readers, roughly defined by (2), the hidden state h_t of the passage at position (or word) t can be viewed as a vector concatenation h_t = [e(Φ_t), e'(c_t)] where Φ_t is a property (or statement or predicate) being stated of a particular constant symbol c_t. A logician might write this as h_t = Φ_t[c_t]. Furthermore, the question can be interpreted as having the form Ψ[x] where the problem is to find a constant symbol c such that the passage implies Ψ[c]. Assuming h_t = [e(Φ_t), e'(c_t)], q = [e(Ψ), 0] and e_o(c) = [0, e'(c)], we can rewrite (1) as (3).

Zewei Chu, Hai Wang, Kevin Gimpel, and David McAllester. Broad context language modeling as reading comprehension. arXiv, 2016.

a* = argmax_c Σ_t ⟨e(Φ_t), e(Ψ)⟩ ⟨e'(c_t), e'(c)⟩    (3)

The first inner product in (3) is interpreted as measuring the extent to which Φ_t[x] implies Ψ[x] for any x.
The second inner product is interpreted as restricting t to positions talking about the constant symbol c.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv, 2016.

Note that the posited decomposition of h_t is not explicit in (2) but instead must emerge during training. We present empirical evidence that this structure does emerge. The empirical evidence is somewhat tricky as the direct sum structure that divides h_t into its two parts need not be axis-aligned and therefore need not literally correspond to vector concatenation.

Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the ACL, 2016.

We also consider a second class of neural readers that we call explicit reference readers. Explicit reference readers avoid (2) and instead use

a* = argmax_c Σ_{t∈R(c)} α_t    (4)

Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv, 2016.

where R(c) is the subset of the positions where the constant symbol (entity identifier) c occurs. Note that if we identify α_t with ⟨e(Φ_t), e(Ψ)⟩ and assume that ⟨e'(c), e'(c_t)⟩ is either 0 or 1 depending on whether c = c_t, then (3) and (4) agree. In explicit reference readers the hidden state h_t need not carry a pointer to c_t as the restriction on t is independent of learned representations. Explicit reference readers include the Attention-Sum Reader (Kadlec et al., 2016), the Gated-Attention Reader (Dhingra et al., 2016), the Attention-over-Attention Reader (Cui et al., 2016) and others (a list can be found in Section 6).

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of Advances in Neural Information Processing Systems (NIPS), 2015.

So far we have only considered anonymized datasets that require the handling of semantics-free constant symbols. However, even for non-anonymized datasets such as Who-did-What, it is helpful to add features which indicate which positions in the passage are referring to which candidate answers. This indicates, not surprisingly, that reference is important in question answering. The fact that explicit reference features are needed in aggregation readers on non-anonymized data indicates that reference is not being solved by the aggregation readers. However, as reference seems to be important for cloze-style question answering, these problems may ultimately provide training data from which reference resolution can be learned.

Sections 2 and 3 review various existing datasets and models respectively. Section 4 presents the logical structure interpretation of aggregation readers in more detail and the empirical evidence supporting it. Section 5 proposes new models that enforce the direct sum structure of the hidden

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, 2015.

We are hesitant to make any more detailed comments on the differences between the architectural details of the readers discussed in this paper.
The differences in scores between the leading readers are comparable to differences in scores that can be achieved by aggressive search over meta-parameters or the statistical fluctuations in the quality of models learned by noisy statistical training procedures. More careful experiments over a longer period of time are needed. More dramatic improvements in performance would of course provide better support for particular innovations.

where e_o(c) is the vector embedding of the constant symbol (entity identifier) c. In practice the inner product ⟨h_t, q⟩ is normalized over t using a softmax to yield an attention α_t over t, and (1) becomes (2).

Tsendsuren Munkhdalai and Hong Yu. Reasoning with memory augmented neural networks for language comprehension. arXiv, 2016.

Before presenting various models for machine comprehension we give a general formulation of the machine comprehension task. We take an instance of the task to be a four-tuple (q, p, a, A), where q is a question given as a sequence of words containing a special token for a "blank" to be filled in, p is a document consisting of a sequence of words, A is a set of possible answers and a ∈ A is the ground truth answer. All words are drawn from a vocabulary V. We assume that all possible answers are words from the vocabulary, that is A ⊆ V, and that the ground truth answer appears in the document, that is a ∈ p. The problem can be described as that of selecting the answer a ∈ A that answers question q based on information from p.

Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Who did what: A large-scale person-centered cloze dataset. In Proceedings of the EMNLP, 2016.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the International Conference on Empirical Methods in Natural Language Processing, 2016.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of ICML, pp. 1310-1318, 2013.

CNN & DailyMail: Hermann et al. (2015) constructed these datasets from a large number of news articles from the CNN and Daily Mail news websites. The main article is used as the context, while the cloze-style question is formed from one short highlight sentence appearing in conjunction with the published article. To avoid the model using external world knowledge when answering the question, the named entities in the entire dataset were replaced by anonymous entity IDs which were then further shuffled for each example. This forces models to rely on the context document to answer each question. In this anonymized corpus the entity identifiers are taken to be a part of the vocabulary and the answer set A consists of the entity identifiers occurring in the passage.

Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv, 2013.

Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. ReasoNet: Learning to stop reading in machine comprehension. arXiv, 2016.

Who-did-What (WDW): The Who-did-What dataset (Onishi et al., 2016) contains 127,000 multiple choice cloze questions constructed from the LDC English Gigaword newswire corpus (David & Cieri, 2003). In contrast with CNN and Daily Mail, it avoids using article summaries for question formation.
Instead, each problem is formed from two independent articles: one is given as the passage to be read and a different article on the same entities and events is used to form the question. Further, Who-did-What avoids anonymization, as each choice is a person named entity. In this dataset the answer set A consists of the person named entities occurring in the passage. Finally, the problems have been filtered to remove a fraction that are easily solved by simple baselines. It has two training sets. The larger training set ("relaxed") is created using less baseline filtering, while the smaller training set ("strict") uses the same filtering as the validation and test sets.

Alessandro Sordoni, Philip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv, 2016.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. arXiv, 2016.

Bart van Merrienboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and Fuel: Frameworks for deep learning. arXiv, 2015.

Children's Book Test (CBT): Hill et al. (2016) developed the CBT dataset in a slightly different fashion to the CNN/DailyMail datasets. They take any sequence of 21 consecutive sentences from a children's book: the first 20 sentences are used as the passage, and the goal is to infer a missing word in the 21st sentence. The task complexity varies with the type of the omitted word (verb, preposition, named entity, or common noun). According to the original study on this dataset (Hill et al., 2016), n-gram and recurrent neural network language models are sufficient for predicting verbs or prepositions. However, for named entities and common nouns, current solvers are still far from human performance.

Dirk Weissenborn. Separating answers from queries for neural reading comprehension. arXiv, 2016.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of ICLR, 2015.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merrienboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. In Proceedings of the 4th International Conference on Learning Representations, 2016.

Other Related Datasets: It is also worth mentioning several related datasets. The MCTest dataset (Richardson et al., 2013) consists of children's stories and questions written by crowdsourced workers. The dataset only contains 660 documents and is too small to train deep models. The bAbI dataset (Weston et al., 2016) is constructed automatically using synthetic text generation and can be perfectly answered by hand-written algorithms (Lee et al., 2016). The SQuAD dataset (Rajpurkar et al., 2016) consists of passage-question pairs where the passage is a Wikipedia article and the questions are written by crowdsourced workers. Although crowdsourcing is involved, the dataset contains over 200,000 problems. But the answer is often a word sequence which is difficult to handle with the reader models considered here. The LAMBADA dataset (Paperno et al., 2016) is a word prediction dataset which requires a broad discourse context, and the correct answer might not be in the context. Nonetheless, when the correct answer is in the context, neural readers can be applied effectively (Chu et al., 2016).

state vectors. It is shown that these new models perform well on the Who-did-What dataset provided that reference annotations are added as input features.
Section 5 also describes additional linguistic features that can be added to the input embeddings and shows that these improve the performance of existing models, resulting in the best single-model performance to date on the Who-did-What dataset.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In Proceedings of NIPS, 2015.

Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2012."}, {"section_index": "6", "section_name": "AGGREGATION READERS AND EXPLICIT REFERENCE READERS", "section_text": "Here we classify readers into aggregation readers and explicit reference readers. Aggregation readers appeared first in the literature and include Memory Networks (Weston et al.; Sukhbaatar et al., 2015), the Attentive Reader (Hermann et al., 2015), and the Stanford Reader (Chen et al., 2016). Aggregation readers are defined by equations (8) and (10) below. Explicit reference readers include the Attention-Sum Reader (Kadlec et al., 2016), the Gated-Attention Reader (Dhingra et al., 2016), and the Attention-over-Attention Reader (Cui et al., 2016). Explicit reference readers are defined by equation (14) below. We first present the Stanford Reader as a paradigmatic aggregation reader and the Attention-Sum Reader as a paradigmatic explicit reference reader.

For the Stanford Reader and One-Hot Pointer Reader, we simply follow the Stanford Reader's settings and didn't tune them on each dataset. For the Gated-Attention Reader, the lookup table was randomly initialized with a uniform distribution over the interval [-0.2, 0.2] on the CBT dataset, but on CNN & DailyMail the lookup table was initialized with GloVe vectors (Pennington et al., 2014) trained on the train & validation set (we found that the pre-trained word vectors don't improve the accuracy but do accelerate training). On the WDW dataset, the lookup table was initialized with pre-trained GloVe vectors². It should be noted that if we initialize the lookup table with pre-trained GloVe vectors from http://nlp.stanford.edu/data/glove.6B.zip, it slightly boosts the accuracy compared with using GloVe vectors trained on the train & validation set. Input-to-hidden-state weights were initialized with random orthogonal matrices (Saxe et al., 2013) and biases were initialized to zero. Hidden-to-hidden-state weights were initialized with identity matrices so that the model can remember longer-range information. To compute the attention weight, we use α_t = h_t^T W_a q and initialize W_a with a random uniform distribution. We also used gradient clipping (Pascanu et al., 2013) with a threshold of 10 and batches of size 32."}, {"section_index": "7", "section_name": "3.1 AGGREGATION READERS", "section_text": "h = biLSTM(e(p))    (5)

q = [fLSTM(e(q))_|q|, bLSTM(e(q))_1]    (6)

In equations (5) and (6) we have that e(p) is the sequence of word embeddings e(w_i) for w_i ∈ p and similarly for e(q). The expression biLSTM(s) denotes the sequence of hidden state vectors resulting from running a bi-directional LSTM on the vector sequence s. We write biLSTM(s)_i for the ith vector in this sequence. Similarly fLSTM(s) and bLSTM(s) denote the sequences of vectors resulting from running a forward LSTM and a backward LSTM respectively, and [·, ·] denotes vector concatenation.
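As a concrete illustration of equations (5) and (6), the following minimal sketch (assumed shapes and stand-in encoder callables; not the authors' implementation) builds the passage representation h and the question representation q:

```python
# Sketch of equations (5)-(6). `fLSTM`/`bLSTM` stand in for trained forward/
# backward LSTMs that return one hidden vector per input position.
import numpy as np

def encode(e_p, e_q, fLSTM, bLSTM):
    # Equation (5): biLSTM(e(p)) concatenates, per position, the forward and
    # backward hidden states of the passage.
    h = np.concatenate([fLSTM(e_p), bLSTM(e_p)], axis=1)   # shape (|p|, 2d)
    # Equation (6): q concatenates the last forward state and the first
    # backward state of the question.
    q = np.concatenate([fLSTM(e_q)[-1], bLSTM(e_q)[0]])    # shape (2d,)
    return h, q
```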
The Stanford Reader, and various other readers, then compute a bilinear attention over the passage which is then used to construct a single weighted vector representation of the passage.

During training we randomly shuffled all examples within each epoch. To speed up training, we always pre-fetched 10 batches worth of examples and sorted them according to document length, as done by Kadlec et al. (2016). When training on the CNN, DailyMail and WDW (anonymized case) datasets, we randomly reshuffled the entity identifiers to match the procedure proposed in Hermann et al. (2015).

During training we evaluated the accuracy after each epoch and stopped the training when the accuracy on the validation set started decreasing. We tried limiting the vocabulary to the most frequent tokens but didn't observe any performance improvement compared with using all the distinct tokens as the vocabulary. Since part of our experiments needs to examine the word embedding assignment, we finally use all the distinct tokens as the vocabulary. To find the optimal embedding and hidden state dimensions, we tried several different combinations; the optimal values and corresponding training statistics for the Gated-Attention Readers are summarized in Table 4. When anonymizing the Who-did-What dataset, we can either use simple string matching to replace the answer in the question and story with an entity identifier, or we can use a Named Entity Recognition (NER) tool³ to detect named entities and then replace the answer named entities in the question and story with entity identifiers; we found the latter generally brings a 2% improvement compared with simple string matching. More experimental details can be found in the code.

Here e_o(a) is an "output embedding" of the answer a. On the CNN dataset the Stanford Reader trains an output embedding for each of the roughly 500 entity identifiers used in the dataset. In cases where the answer might be any word in V, an output embedding must be trained for the entire vocabulary.

Table 4: Training details on different datasets.

Memory Networks. Memory Networks (Weston et al.; Sukhbaatar et al., 2015) use (8) and (10) but have more elaborate methods of constructing "memory vectors" h_t that do not involve LSTMs. Memory networks use (8) and (10) but replace (9) with

| Dataset     | Embedding | Hidden State | Time Per Epoch | Trained Epochs | K |
| CNN         | 128       | 256          | 18 hours       | 5              | 3 |
| DailyMail   | 128       | 256          | 2 days         | 5              | 3 |
| WDW Relaxed | 200       | 384          | 2.5 hours      | 8              | 1 |
| CBT NE      | 384       | 384          | 1 hour         | 8              | 1 |
| CBT CN      | 384       | 256          | 1 hour         | 7              | 1 |

P(w|p,q,A) = P(w|p,q) = softmax_{w∈V} e_o(w)^T o    (11)

It should be noted that (11) trains output vectors over the whole vocabulary rather than just those items occurring in the choice set A. This is empirically significant in non-anonymized datasets such as CBT and Who-did-What where choices at test time may never have occurred as choices in the training data.

Attentive Reader. The Stanford Reader was derived from the Attentive Reader (Hermann et al., 2015). The Attentive Reader uses α_t = softmax_t MLP([h_t, q]) instead of (7). Here MLP(x) is the output of a multi-layer perceptron (MLP) given input x. Also, the answer distribution in the Attentive Reader is defined over the full vocabulary rather than just the candidate answer set A.

We randomly choose one article from the CNN dataset and show softmax_t(e_o(a)^T h_t) for t ∈ [0, |p|] for each answer candidate a in Figures 2, 3, 4, 5 and 6. Red color indicates

²http://nlp.stanford.edu/data/glove.6B.zip
³http://nlp.stanford.edu/software/CRF-NER.shtml

Stanford Reader.
The Stanford Reader (Chen et al., 2016) computes a bi-directional LSTM representation of both the passage and the question.

α_t = softmax_t h_t^T W_a q    (7)

o = Σ_t α_t h_t    (8)

P(a|p,q,A) = softmax_{a∈A} e_o(a)^T o    (9)

a* = argmax_{a∈A} e_o(a)^T o    (10)

The reader is trained with log-loss ln 1/P(a|p,q,A) where a is the correct answer. At test time the reader is scored on the percentage of problems where a* = a.

P(w|p,q,A) = P(w|p,q) = softmax_{w∈V} e_o(w)^T MLP([o, q])    (12)

Equation (12) is similar to (11) in that it leads to the training of output vectors for the full vocabulary rather than just those items appearing in choice sets in the training data. As in memory networks, this leads to improved performance on non-anonymized datasets.

larger probability and orange indicates smaller probability, and the remaining indicates very low probability that can be ignored. From those figures, we can see that our assumption that e_o(a) is used to pick up its occurrences is reasonable.

Here we think of R(a,p) as the set of references to a in the passage p. It is important to note that (13) is an equality and that P(a|p,q,A) is not normalized to the members of R(a,p). When training with the log-loss objective this drives the attention α_t to be normalized - to have support only on the positions t with t ∈ R(a,p) for some a. See the heat maps in the appendix.

Gated-Attention Reader. The Gated-Attention Reader (Dhingra et al., 2016) involves a K-layer biGRU architecture defined by the following equations.

q^ℓ = [fGRU(e(q))_|q|, bGRU(e(q))_1]    1 ≤ ℓ ≤ K
h^1 = biGRU(e(p))
h^ℓ = biGRU(h^{ℓ-1} ⊙ q^{ℓ-1})    2 ≤ ℓ ≤ K

Figure 2: Heat map of softmax_t(e_o(a)^T h_t) when a = entity0.

We randomly choose one article from the CNN dataset and show the attention map α_t = softmax_t(q^T W_a h_t) for different readers (in the Attention Sum and Gated Attention Readers, W_a is the identity matrix). From Figures 7, 8 and 9 we can see that the different readers essentially put the weights on the entity identifiers.

Attention-over-Attention Reader. The Attention-over-Attention Reader (Cui et al., 2016) uses a more elaborate method to compute the attention α_t. We will use t to range over positions in the passage and j to range over positions in the question. The model is then defined by the following equations.

β_j = (1/|p|) Σ_t α_{t,j}        α_t = Σ_j β_j α_{t,j}

Note that the final equation defining α_t can be interpreted as applying the attention β_j to the attentions α_{t,j}. This reader uses (13) and (14).

As discussed in the introduction, the entity identifiers such as "entity37" introduced in the CNN/DailyMail dataset cannot be assigned any semantics other than their identity. We should think of them as pointers or semantics-free constant symbols. Despite this undermining of semantics, aggregation readers using (8) and (10) are able to perform well. Here we posit that this is due to an emergent predication structure in the hidden vectors h_t. Intuitively we want to think of the hidden state vector h_t as a concatenation [e(Φ_t), e'(a_t)] where Φ_t carries semantic information true of a_t. We think of h_t as representing Φ_t[a_t] for a semantic statement Φ_t[x] asserted of the constant symbol a_t.

@entity0 ( @entity1 ) six survivors of the @entity0 kosher supermarket siege in january are suing a @entity5 media outlet for what they call dangerous live broadcasting during the hostage - taking . according to @entity0 prosecutor 's spokeswoman @entity10 , the lawsuit was filed march 27 and a preliminary investigation was opened by the prosecutor 's office wednesday . the media outlet ,
@entity1 affiliate @entity16 , is accused of endangering the lives of the hostages , who were hiding in a cold room during the attack , by broadcasting their location live during the siege . @entity23 in a statement friday said one of its journalists " mentioned only once the presence of a woman hidden inside the @entity27 , on the basis of police sources on the ground . " " immediately , the chief editor felt that this information should not be released . it therefore has subsequently never been repeated on air or posted on - screen . @entity16 regrets that the mention of this information could cause concern to the hostages , as well as their relatives , that their lives were in danger , " the statement said . gunman @entity47 , also suspected in the slaying of a police officer , stormed the @entity27 @entity51 supermarket on january 9 , killing four people and taking others hostage . he was killed in the police operation to end the siege . a 24 - year - old supermarket employee , @entity57 - born @entity56 , was hailed as a hero afterward when it emerged that he had risked his life to hide 15 customers from @entity47 in the cold room . the hostage - taking was the culmination of three days of terror in @entity0 that began with the january 7 shooting of 12 people at the offices of @entity5 satirical magazine @entity69 . the two brothers blamed for that attack , @entity72 and @entity73 , were killed on january 9 after a violent standoff at an industrial site . the terror attacks claimed the lives of 17 people and put @entity5 on a heightened state of alert . @entity1 's @entity80 reported from @entity0 , and @entity81 wrote from @entity82 . @entity1 's @entity83 contributed to this report .

query: they hid in a cold room during the attack in @entity0 by gunman @placeholder

Attention-Sum Reader. In the Attention-Sum Reader (Kadlec et al., 2016), h and q are computed with equations (5) and (6) as in the Stanford Reader but using GRUs rather than LSTMs. The attention α_t is computed similarly to (7) but using a simple inner product α_t = softmax_t h_t^T q rather than a trained bilinear form. Most significantly, however, equations (9) and (10) are replaced by the following, where t ∈ R(a,p) indicates that a reference to candidate answer a occurs at position t in p.

P(a|p,q,A) = Σ_{t∈R(a,p)} α_t    (13)

a* = argmax_{a∈A} Σ_{t∈R(a,p)} α_t    (14)

Here the question embeddings q^ℓ for different values of ℓ are computed with different GRU model parameters. Here h ⊙ q abbreviates the sequence h_1 ⊙ q, h_2 ⊙ q, ..., h_|p| ⊙ q. Note that for K = 1 we have only q^1 and h^1 as in the Attention-Sum Reader. An attention is then computed over the final layer h^K with α_t = softmax_t (h_t^K)^T q^K as in the Attention-Sum Reader. This reader uses (13) and (14).

We also think of the vector representation q of the question as having the form [e(Ψ), 0] and the vector embedding e_o(a) as having the form [0, e'(a)].

e_o(a)^T h_t ≈ c if t ∈ R(a,p), and ≈ 0 otherwise    (16)

and hence (10) and (14) agree - the aggregation readers and the explicit reference readers are using essentially the same answer selection criterion.

Empirical evidence for (16) is given in the first three rows of Table 1. The first row empirically measures the "constant" c in (16) by measuring e_o(a)^T h_t for those cases where t ∈ R(a,p). The second row measures the "0" in (16) by measuring e_o(a)^T h_t in those cases where t ∉ R(a,p).
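The following sketch (hypothetical variable names; not the paper's code) shows how statistics like the first two rows of Table 1 can be computed, averaging e_o(a)^T h_t separately over reference and non-reference positions:

```python
# Sketch of the Table 1 measurement: mean of e_o(a)^T h_t over positions
# that do / do not refer to candidate a; under (16) these should be near
# the constant c and near 0 respectively.
import numpy as np

def reference_statistics(h, e_o, R):
    # h:   (T, d) hidden states of one passage
    # e_o: dict mapping candidate a -> (d,) output embedding
    # R:   dict mapping candidate a -> set of positions referring to a
    on_ref, off_ref = [], []
    for a, emb in e_o.items():
        scores = h @ emb                      # e_o(a)^T h_t for all t
        refs = R.get(a, set())
        for t, s in enumerate(scores):
            (on_ref if t in refs else off_ref).append(s)
    return np.mean(on_ref), np.mean(off_ref)
```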
Additional evidence for (16) is given in Figure 1, showing that the output vectors e_o(a) for different entity identifiers a are nearly orthogonal. Orthogonality of the output vectors is required by (16) provided that each output vector e_o(a) is in the span of the hidden state vectors h_{t,p} for which t ∈ R(a,p). Intuitively, the mean of all vectors h_{t,p} with t ∈ R(a,p) should be approximately equal to e_o(a). Of course empirically this will only be approximately true.

Equation (16) would suggest that the vector embedding of the constant symbols should have dimension at least as large as the number of distinct constants. However, in practice it is sufficient that e'(a)^T e'(a') is small for a ≠ a'. This allows the vector embeddings of the constants to have dimension much smaller than the number of constants. We have experimented with two-sparse constant symbol embeddings where the number of embedding vectors in dimension d is 2d(d-1) (d choose 2 times the four ways of setting the signs of the non-zero coordinates). Although we do not report results here, these designed and untrained constant embeddings worked reasonably well.

Figure 4: Heat map of softmax_t(e_o(a)^T h_t) when a = entity16, over the same passage and query shown with Figure 2.
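The two-sparse designed embeddings mentioned above are easy to construct explicitly; the following sketch enumerates all 2d(d-1) vectors with exactly two nonzero coordinates in {+1, -1} (distinct vectors have pairwise cosine at most 1/2):

```python
# Construct the 2*d*(d-1) two-sparse constant symbol embeddings: one vector
# per pair of coordinates (d choose 2) and per choice of the two signs (4).
import itertools
import numpy as np

def two_sparse_embeddings(d):
    vecs = []
    for i, j in itertools.combinations(range(d), 2):
        for si, sj in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
            v = np.zeros(d)
            v[i], v[j] = si, sj
            vecs.append(v)
    return np.stack(vecs)   # shape (2*d*(d-1), d)
```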
Unfortunately, the decomposition of h_t into this predication structure need not be axis-aligned. Rather than posit an axis-aligned concatenation, we posit that the hidden vector space H is a possibly non-aligned direct sum

H = S ⊕ E    (15)

where S is a subspace of "statement vectors" and E is an orthogonal subspace of "entity pointers". Each hidden state vector h ∈ H then has a unique decomposition as h = Φ + e for Φ ∈ S and e ∈ E. This is equivalent to saying that the hidden vector space H is some rotation of a concatenation of the vector spaces S and E.

We now present empirical evidence for this decomposition structure. We first note that the predication decomposition implies that e_o(a)^T h_t equals e_o(a)^T e_o(a_t). This suggests the following for some fixed positive constant c.

Assuming the predication structure we have c = ||e_o(a)||². We note that if different entity constants had different norms then answers would be biased toward occurrences of the constant symbols of larger norm. But we need to have that all constant symbols are equivalent. We note that (16) gives

a* = argmax_a e_o(a)^T Σ_t α_t h_t
   = argmax_a Σ_t α_t e_o(a)^T h_t
   = argmax_a Σ_{t∈R(a,p)} α_t
Table 1: Inner product and cosine statistics for the Stanford Reader on the CNN development and test sets.

|                                | CNN Dev                       | CNN Test                      |
|                                | samples    | mean  | variance | samples    | mean  | variance |
| e_o(a)^T h_t, t ∈ R(a,p)       | 222,001    | 10.66 | 2.26     | 164,746    | 10.70 | 2.45     |
| e_o(a)^T h_t, t ∉ R(a,p)       | 93,072,682 | -0.57 | 1.59     | 68,451,660 | -0.58 | 1.65     |
| e_o(a)^T h_{t±1}, t ∈ R(a,p)   | 443,878    | 2.32  | 1.79     | 329,366    | 2.25  | 1.84     |
| cosine(q, h_t), ∃a: t ∈ R(a,p) | 222,001    | 0.22  | 0.11     | 164,746    | 0.22  | 0.12     |
| cosine(q, e_o(a)), ∀a          | 103,909    | -0.03 | 0.04     | 78,411     | -0.03 | 0.04     |

Figure 1: Plot of e_o(a_i)^T e_o(a_j) from the Stanford Reader trained on the CNN dataset. Off-diagonal values have mean 25.6 and variance 17.2 while diagonal values have mean 169 and variance 17.3.

This interpretation is exactly correct if some of the dimensions of the vector space correspond to predicates, Φ is a 0-1 vector representing a conjunction of predicates, and Ψ is also 0-1 on these dimensions indicating whether a predicate is implied by the context. Of course in practice one expects the dimension to be smaller than the number of possible predicates."}, {"section_index": "8", "section_name": "5 POINTER ANNOTATION READERS", "section_text": "It is of course important to note that anonymization provides reference information - anonymization assumes that one can determine coreference so as to replace coreferent phrases with the same entity identifier. Anonymization allows the reference set R(a,p) to be directly read off of the passage. Still, an aggregation reader must learn to recover this explicit reference structure.

Aggregation readers can have difficulty when anonymization is not done. The Stanford Reader achieves just better than 45% on the Who-did-What dataset while the Attention Sum Reader can get near 60%. But if we anonymize the Who-did-What dataset and then re-train the Stanford Reader, the accuracy jumps to near 65%. Anonymization has two effects. First, it greatly reduces the number of output word embeddings e_o(a) to be learned - we need only learn output embeddings for the relatively small number of entity identifiers needed. Second, anonymization suppresses the semantics of the reference phrases and leaves only a semantics-free entity identifier. This suppression of semantics may facilitate the separation of the hidden state vector space H into a direct sum S ⊕ E with q ∈ S and e_o(a) ∈ E.

Figure 6: Heat map of softmax_t(e_o(a)^T h_t) when a = entity47, over the same passage and query shown with Figure 2.

We can think of anonymization as providing additional linguistic input for the reader - it explicitly marks positions of candidate answers and establishes coreference. A natural question is whether
As another testable prediction, we note that the posited decomposition of the hidden state vectors implies

q^T (h_t + e_o(a)) = q^T h_t    (17)

This equation is equivalent to q^T e_o(a) = 0. Experimentally, however, we cannot expect q^T e_o(a) to be exactly zero, and (17) seems to provide a more experimentally meaningful test.

Empirical evidence for (17) is given in the fourth and fifth rows of Table 1. The fourth row measures the cosine of the angle between the question vector q and the hidden state h_t averaged over passage positions t at which some entity identifier occurs. The fifth row measures the cosine of the angle between q and e_o(a) averaged over the entity identifiers a.
A question asks for a value of x such that a statement Ψ[x] is implied by the passage. For a question Ψ we might even suggest the following vectorial interpretation of entailment:

Φ[x] implies Ψ[x] iff e(Φ)^T e(Ψ) ≥ 1.

One-Hot Pointer Annotation: The Stanford Reader involves both input embeddings of words and output embeddings of entity identifiers. In the Who-did-What dataset each problem has at most five choices in the multiple choice answer list. This means that we need only five entity identifiers and we can use a five-dimensional one-hot vector representation for answer identifiers. If an answer choice exists at position t in the passage, let i_t be the index of that choice on the choice list. If no choice occurs at t, take i_t to be zero. Take e'(i) to be the zero vector if i = 0 and otherwise to be the one-hot vector for i. We define pointer annotation to be the result of adding e'(i_t) as additional features to the input embedding.

We then define a one-hot pointer reader by designating five dimensions of the hidden state as indicators of the answer and taking the probability of choice i to be defined as

P(i|p,q) = softmax_i o_i    (19)

General Pointer Annotation: In the CNN dataset there are roughly 500 entity identifiers and a one-hot representation is not desirable. Instead we can let e'(i) be a fixed set of "pointer vectors" - vectors distributed widely on the unit sphere so that for i ≠ j we have that e'(i)^T e'(j) is small. We again use (18) but replace (19) with

P(i|p,q) = softmax_i [0, e'(i)]^T o    (20)

In the general pointer reader the pointer embeddings e'(i) are held fixed and not trained.

Figure 8: Heat map α_t for the Gated Attention Reader.

Binary feature: whether the current token occurs in the question.
Real value feature: the frequency of the current token in the passage.

Table 2: Accuracy on the WDW dataset. All these results are based on a single model. Results for neural readers other than NSE are based on replications of those systems. All models were trained on the relaxed training set, which uniformly yields better performance than the restricted training set. The first group of models are explicit reference models and the second group are aggregation models. + indicates anonymization with better reference identifiers.
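A minimal sketch (hypothetical function; not the paper's code) of the pointer annotation defined above, extending each input embedding e(w_t) with the indicator vector e'(i_t) as in equation (18):

```python
# Sketch of one-hot pointer annotation, equation (18): append e'(i_t) to
# each token embedding, where i_t indexes the answer choice at position t
# (0 if none, mapped to the zero vector).
import numpy as np

def pointer_annotate(token_embs, choice_index, n_choices=5):
    # token_embs:   (T, d) input word embeddings e(w_t)
    # choice_index: length-T list; choice_index[t] in {0, 1, ..., n_choices}
    T, _ = token_embs.shape
    pointers = np.zeros((T, n_choices))
    for t, i in enumerate(choice_index):
        if i > 0:
            pointers[t, i - 1] = 1.0          # one-hot e'(i); e'(0) = 0
    return np.concatenate([token_embs, pointers], axis=1)
```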
this information can be provided without anonymization by simply adding additional coreference features to the input. Here we evaluate two architectures inspired by this question. This evaluation is done on the Who-did-What dataset, which is not anonymized. In each architecture we add features to the input to mark the occurrences of candidate answers. These models are simpler than the Stanford Reader but perform comparably. This comparable performance in Table 2 further supports our analysis of logical structure in aggregation readers.

ē(w_t) = [e(w_t), e'(i_t)]    (18)

Linguistic Features. Each model can be modified to include additional input features for each input token in the question and passage. More specifically we can add the following features to the word embeddings."}]
r1LXit5ee
[{"section_index": "0", "section_name": "EPISODIC EXPLORATION FOR DEEP DETERMINISTIC POLICIES FOR STARCRAFT MICROMANAGEMENT", "section_text": "Table 2: Test win rates over 1oo0 battles for the training scenarios, for all methods and for heuristic baselines. The best result for a given map is in bold..\nNicolas Usunier*, Gabriel Synnaeve*, Zeming Lin, Soumith Chintala Facebook AI Research\n{usunier,gab, zlin, soumith}@fb.com\nTable 3: Win rates over 1000 games for out-of-training-domain maps, for all methods. The map on. which this method was trained on is indicated on the left. The best result is in bold, the best result out of the reinforcement learning methods is in italics.\nWe consider scenarios from the real-time strategy game StarCraft as benchmarks. for reinforcement learning algorithms. We focus on micromanagement, that is, the. short-term, low-level control of team members during a battle. We propose several scenarios that are challenging for reinforcement learning algorithms because the. state- action space is very large, and there is no obvious feature representation for the value functions. We describe our approach to tackle the micromanagement. scenarios with deep neural network controllers from raw state features given by. the game engine. We also present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This. algorithm collects traces for learning using deterministic policies, which appears much more efficient than, e.g., e-greedy exploration. Experiments show that this. algorithm allows to successfully learn non-trivial strategies for scenarios with. armies of up to 15 agents, where both Q-learning and REINFORCE struggle..\ntrain map test map best heuristic DQN PG ZO train map test map best heuristic DQN PG zO m15v16 m5v5 .96 (wc/c) .96 .79 .80 w15v17 w5v5 .78 (c) .70 .70 .74 m15v15 .97 (c) .27 .16 .80 w15v13 1. (rand_nc/c) 1. .99 1. m18v18 .98 (c/noop) .18 .25 .82 w15v15 .95 (c) .87 .61 .99 m18v20 .63 (noop) .00 .01 .17 w18v18 .99 (c) .92 .56 1. w18v20 .71 (c) .31 .24 .76\na focus firing heuristic (e.g. \"attack weakest') by identifying and locking on a feature, than to alsc learn not to \"overkill''. We interpret the learned behaviors in Appendix[F.\nWe then studied how well a model trained on one map performs on maps with a different number of. units, to test generalization. Table 3 contains the results for this experiment. We observe that DQN performs the best on m5v5 when trained on m15v16, because it learned a simpler (but more efficient on m5v5) heuristic. \"Noop\"' and \"attack closest\"' are quite good with the large Marines map because they generate less moves (and less collisions). Overall, ZO is consistently significantly better than other RL algorithms on these generalization tasks, even though it does not reach an optimal strategy"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "We also played the best model on each map against each other. We modify the maps in this case such that they are all symmetric, but with the same army composition. Table|4|shows the results for this experiment. It seems that PG and DQN learned very different strategies on wXvY, DQN beats PG consistently when trained on w15v17, while the PG model trained on w15v15 has an edge over DQN Overall, ZO comes out ahead in every match-up except for m5v5, often by a significant margin.\nStarCraf[is a real-time strategy (RTS) game in which each player must build an army and control. 
individual units to destroy the opponent's army. As of today, StarCraft is considered one of the most difficult games for computers, and the best bots only reach the level of high amateur human players (Churchill 2015). The main difficulty comes from the need to control a large number of units in a partially observable environment, with very large state and action spaces: for example, in a typical game, there are at least 10^1685 possible states, whereas the game of Go has about 10^170 states. Because of simultaneous and durative actions, StarCraft provides an ideal environment to study the control of many agents at large scale, and an opportunity to define tasks of increasing difficulty, from micromanagement, which concerns the short-term, low-level control of fighting units during battles, to long-term strategic and hierarchical planning under uncertainty. While building a controller for the full game based on machine learning is out-of-reach with current methods, we propose, as a first step, to study reinforcement learning (RL) algorithms in micromanagement scenarios in StarCraft."}, {"section_index": "2", "section_name": "8 CONCLUSION", "section_text": "This paper presents two main contributions. First, it establishes StarCraft micromanagement scenarios as complex benchmarks for reinforcement learning: with durative actions, delayed rewards, and large action spaces making random exploration infeasible. Second, it introduces a new reinforcement learning algorithm that performs better than prior work (DQN, PG) for discrete action spaces in these micromanagement scenarios, with robust training (see Figure 2) and episodically consistent exploration (exploring in the policy space).

Both the work on Atari games (Mnih et al. 2013) and the recent Minecraft scenarios studied by researchers (Abel et al. 2016; Oh et al. 2016) focus on the control of a single agent, with a fixed limited set of actions. Coherently controlling multiple agents (units) is the main challenge of reinforcement learning for micromanagement tasks. This comes with two main challenges. The first one is to efficiently explore the large action space. The implementation of a coherent strategy requires the units to take actions that depend on each other, but it also implies that any small alteration of a strategy must be maintained for a sufficiently long time to properly evaluate the long-term effect of that change. In contrast to this requirement of consistency in exploration, the reinforcement learning algorithms that have been successful in training deep neural network policies, such as Q-learning (Watkins & Dayan 1992; Sutton & Barto 1998) and REINFORCE (Williams 1992; Deisenroth et al. 2013), perform exploration by randomizing actions. In the case of micromanagement, randomizing actions mainly disorganizes the units, which then rapidly lose the battle without collecting relevant

This work leaves several doors open and calls for future work. Simpler embedding models of state and actions, and variants of the model presented here, have been tried, none of which produced efficient unit movement (e.g. taking a unit out of the fight when its hit points are low). There is ongoing

(body of Table 2)
                    heuristics                          RL
map                 rand_nc  noop  c    wc   nok_nc    DQN  PG   ZO
dragoons_zealots    .14      .49   .67  .83  .50       .61  .69  .90
m5v5                .49      .84   .94  .96  .83       .99  .92  1.
m15v16              .00      .81   .81  .10  .68       .13  .19  .79
w15v17              .19      .10   .20  .02  .12       .16  .14  .49

Table 4: Win rates over 2000 games against each other

trained on    dragoons_zealots   m15v16           m5v5   w15v15          w15v17
tested on     dragoons_zealots   m15v15  m18v18   m5v5   w15v15  w18v18  w15v15  w18v18
PG > DQN      .74                .46     .47      .49    .61     .69     .09     .04
ZO > PG       .76                .82     .79      .44    .82     .77     .98     .99
ZO > DQN      .93                .85     .86      .39    .88     .90     .79     .80
"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "work on convolutional networks based models that conserve the 2D geometry of the game (while embedding the discrete components of the state and actions). The zero order optimization technique presented here should be studied more in depth, and empirically evaluated on domains other than StarCraft (e.g. Atari). As for StarCraft scenarios specifically, the subsequent experiments will include self-play in training, multi-map training (more generic models), and more complex scenarios which include several types of advanced units with actions other than move and attack. Finally, the goal of playing full games of StarCraft should not get lost, so future scenarios would also include the actions of "recruiting" units (deciding which types of unit to use), and how to best make use of them.

feedback. The second challenge of micromanagement is that there is no obvious way to parameterize the policy given the state and the actions, because actions are relations between entities of the state, e.g. (unit A, attack, unit B) or (unit A, move, position B), and are not restricted to a few constant symbols such as "move left" or "move right". Multi-class architectures, such as those used for Atari games (Mnih et al. 2015), cannot evaluate actions that are parameterized by an entity of the state.

The contribution of this paper is twofold. First, we propose several micromanagement tasks from StarCraft (Section 3), then we describe our approach to tackle them and evaluate well known reinforcement learning algorithms on these tasks (Section 4). In particular, we present an approach of greedy inference to break out the complexity of taking the actions at each step. We also describe the features used to jointly represent states and actions, as well as a deep neural network model for the policy (Section 5). Second, we propose the zero order (ZO) reinforcement learning algorithm to address the difficulty of exploration in these tasks (Section 6). Compared to algorithms for efficient direct exploration in parameter space, the novelty of our algorithm is to explore directly in policy space by mixing parameter randomization and plain gradient descent."}, {"section_index": "4", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Y-Lan Boureau, Antoine Bordes, Florent Perronnin, Dave Churchill, Léon Bottou and Alexander Miller for helpful discussions and feedback about this work and earlier versions of the paper. We thank Timothée Lacroix and Alex Auvolat for technical contributions to our StarCraft/Torch bridge. We thank Davide Cavalca for his support on Windows virtual machines in our cluster environment."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Lucian Busoniu, Robert Babuska, and Bart De Schutter. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 38(2), 2008.

David Churchill, Abdallah Saffidine, and Michael Buro. Fast heuristic search for RTS game combat scenarios. In AIIDE, 2012.

Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter.
Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.

Marc Peter Deisenroth, Gerhard Neumann, and Jan Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1-142, 2013.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341-2368, 2013.

Mohammad Ghavamzadeh, Sridhar Mahadevan, and Rajbala Makar. Hierarchical multi-agent reinforcement learning. Autonomous Agents and Multi-Agent Systems, 13(2):197-229, 2006.

Junling Hu and Michael P Wellman. Multiagent reinforcement learning: theoretical framework and an algorithm. In ICML, volume 98, pp. 242-250, 1998.

David Abel, Alekh Agarwal, Fernando Diaz, Akshay Krishnamurthy, and Robert E Schapire. Exploratory gradient boosting for reinforcement learning in complex domains. arXiv preprint arXiv:1603.04119, 2016.

Multi-agent reinforcement learning has been an active area of research (Busoniu et al. 2008). Most of the focus has been on learning agents in competitive environments with adaptive adversaries (Littman 1994; Hu & Wellman 1998; Tesauro 2003). Some work has looked at learning control policies for individual agents in a collaborative setting with communication constraints (Tan 1993; Bernstein et al. 2002), with applications such as soccer robot control (Stone & Veloso 1999), and methods such as hierarchical reinforcement learning for communicating high-level goals (Ghavamzadeh et al. 2006), or learning an efficient communication protocol (Sukhbaatar et al. 2016). While the decentralized control framework is most likely relevant for playing full games of StarCraft, here we avoid the difficulty of imperfect information, therefore we use the multi-agent structure only as a means to structure the action space. As in the approach of (Maes et al. 2009) with reinforcement learning for structured output prediction, we use a greedy sequential inference scheme at each time frame: each unit decides on its action based solely on the state combined with the actions of units that came before it in the sequence.

Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, (5):834-846, 1983.

Daniel S Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27(4):819-840, 2002.

Algorithms that have been used to train deep neural network controllers in reinforcement learning include Q-learning (Watkins & Dayan 1992; Mnih et al. 2015), the method of temporal differences (Sutton 1988; Tesauro 1995), policy gradient and their variants (Williams 1992; Deisenroth et al. 2013), and actor/critic architectures (Barto et al. 1983; Silver et al. 2014; 2016). Except for the deterministic policy gradient (DPG) (Silver et al. 2014), these algorithms rely on randomizing the actions at each step for exploration. DPG collects traces by following deterministic policies that remain constant throughout an episode, but can only be applied when the action space is continuous.
Hausknecht & Stone(2015) apply DPG with paramterized action spaces, in which discrete actions (e.g. \"move') are parameterized by continuous variables (e.g. the target location). Our work is most closely related to works that explore the parameter space of policies rather than the action space Several approaches have been proposed that randomize the parameters of the policy at the beginning of an episode and run a deterministic policy throughout the entire episode, borrowing ideas from gradient-free optimization, e.g. (Mannor et al.[2003] Sehnke et al.]2008] Szita & Lorincz2006). However, these algorithms rely on gradient-free optimization for all parameters, which does not scale well with the number of parameters. Osband et al.(2016b) describe another type of algorithm where the parameters of a deterministic policy are randomized at the beginning of an episode, and learn a posterior distribution over the parameters as in Thomson sampling (Thompson1933). Their approach was proved to be efficient, but applies only to linear functions and scales quadratically with the number of parameters. The bootstrapped deep Q-networks (BDQN) (Osband et al.|2016a) are a practical implementation of the ideas of (Osband et al.|[2016b) for deep neural networks. However, BDQN still performs exploration in the action space at the beginning of the training, and there is no randomization of the parameters. BDQN keeps several versions of the last layer of the deep neural network, and selects a single version per episode to perform Q-learning updates, while it ensembles all such \"heads\"' as test time. In contrast, we randomize the parameters of the last layer once at the\nRonan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376. 2011..\nohn C Duchi. Michael I Jordan. Martin J Wainwright. and Andre Wibisono. Optimal rates for zero-order convex optimization: the power of two function evaluations. arXiv preprint arXiv:1312.2139, 2013\nSylvain Gelly and Yizao Wang. Exploration exploitation in go: Uct for monte-carlo go. In NIPs: Neura. Information Processing Systems Conference On-line trading of Exploration and Exploitation Workshop, 2006\nMatthew Hausknecht and Peter Stone. Deep reinforcement learning in parameterized action space. arXiv preprint arXiv:1511.04143, 2015.\nIn the context of RTS micromanagement, a large spectrum of AI approaches have been studied There has been work on Bayesian fusion of hand-designed influence maps (Synnaeve & Bessiere 2011), fast heuristic search in a simplified simulator (Churchill et al.2012), and even evolutionary optimization (Liu et al.]2014). Overmind (Klein et al.2010) used threat-aware A* pathing and RL-tuned potential fields. Closer to this work, Marthi et al.[(2005) employ concurrent hierarchical Q-learning (units Q-functions are combined at the group level),Wender & Watson(2012) successfully applied tabular Q-learning (Watkins & Dayan|[1992) and SARSA (Sutton & Bartol|1998), with and without experience replay (\"eligibility traces''), with a reward similar to the one used in several of our experiments. However, the action space was reduced to pre-computed \"meta-actions': fight and retreat, and the features were hand-crafted. None of these approaches are used as is in existing StarCraft bots, for a lack of robustness, completeness (both can be attributed to hand-crafting), or computational efficiency. 
For a more detailed overview of AI research on StarCraft, the reader should consult (Ontanon et al. 2013).

Jack Kiefer, Jacob Wolfowitz, et al. Stochastic estimation of the maximum of a regression function. The Annals of Mathematical Statistics, 23(3):462-466, 1952.

Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the Eleventh International Conference on Machine Learning, volume 157, pp. 157-163, 1994.

Francis Maes, Ludovic Denoyer, and Patrick Gallinari. Structured prediction with reinforcement learning. Machine Learning, 77(2-3):271-301, 2009.

Bhaskara Marthi, Stuart J Russell, David Latham, and Carlos Guestrin. Concurrent hierarchical reinforcement learning. In IJCAI, pp. 779-785, 2005.

We focus on micromanagement, which consists of optimizing each unit's actions during a battle. The tasks presented in this paper represent only a subset of the complexity of playing StarCraft. As StarCraft is a real-time strategy (RTS) game, actions are durative (they are not fully executed on the next frame), and there are approximately 24 frames per second. As we take an action for each unit every few frames (e.g. every 9 frames here, more details can be found in Appendix D), we only consider actions that can be executed in this time frame, which are: the 8 move directions, holding the current position, and an attack action for each of the existing enemy units. During training, we always control all units from one side, and the opponent (built-in AI in the experiments) is attacking us:

Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception and action in Minecraft. arXiv preprint arXiv:1605.09128, 2016.

Santiago Ontanon, Gabriel Synnaeve, Alberto Uriarte, Florian Richoux, David Churchill, and Mike Preuss. A survey of real-time strategy game AI research and competition in StarCraft. Computational Intelligence and AI in Games, IEEE Transactions on, 5(4):293-311, 2013.

Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. arXiv preprint arXiv:1602.04621, 2016a.

Frank Sehnke, Christian Osendorfer, Thomas Rückstieß, Alex Graves, Jan Peters, and Jürgen Schmidhuber. Policy gradients with parameter-based exploration for control. In Artificial Neural Networks - ICANN 2008, pp. 387-396. Springer, 2008.

Frank Sehnke, Christian Osendorfer, Thomas Rückstieß, Alex Graves, Jan Peters, and Jürgen Schmidhuber. Parameter-exploring policy gradients. Neural Networks, 23(4):551-559, 2010.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.

For all these scenarios, a human expert can win 100% of the time against the built-in AI, by moving away units that are hurt (thus conserving firepower) and with proper focus firing.

James C Spall. A one-measurement form of simultaneous perturbation stochastic approximation. Automatica, 33(1):109-112, 1997.

Peter Stone and Manuela Veloso. Team-partitioned, opaque-transition reinforcement learning. In Proceedings of the Third Annual Conference on Autonomous Agents, pp. 206-212. ACM, 1999.

Formalism The environment is approximated as a Markov Decision Process (MDP), with a finite set of states denoted by S. Each state s has a set of units U(s), and a policy has to issue a command c ∈ C to each of them. The set of commands is finite.
An action in that MDP is represented as a sequence of (unit, command) pairs a = ((u_1, c_1), ..., (u_{|s|}, c_{|s|})) such that {u_1, ..., u_{|s|}} = U(s). |s| denotes the number of units in state s, and A(s) = (U(s) × C)^{|s|} is the set of actions in state s. We denote by p(s'|s, a) the transition probability of the MDP and by ρ_1 the probability distribution of initial states. When there is a transition from state s^t to a state s^{t+1}, the agent receives the reward r^{t+1} = r(s^t, s^{t+1}), where r : S × S → R is the reward function. We assume that commands are received and

Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.

Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057-1063, 1999.

Dan Klein, David Burkett, David Hall, Taylor Berg-Kirkpatrick, John Blitzer, John DeNero, Haomiao Huang, Eugene Ma, Yewen Pu, Jie Tang, Nicholas Hay, Oriol Vinyals, and Jason Wolfe. The Berkeley Overmind project, 2010. URL http://overmind.cs.berkeley.edu/

Siming Liu, Sushil J Louis, and Christopher Ballinger. Evolving effective micro behaviors in RTS game. In Computational Intelligence and Games (CIG), 2014 IEEE Conference on, pp. 1-8. IEEE, 2014.

Shie Mannor, Reuven Y Rubinstein, and Yohai Gat. The cross entropy method for fast policy search. In ICML, pp. 512-519, 2003.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. In Proceedings of NIPS, 2013.

m5v5: a task in which we control 5 Marines (ranged ground unit), against 5 opponent Marines. A good strategy here is to focus fire, e.g. order all Marines to attack a single opponent.

m15v16: same as above, except we have 15 Marines and the opponent has 16. A good strategy here is also to focus fire, while avoiding "overkill": 7 Marines attacking simultaneously kill an opponent in a single volley, so using more Marines to simultaneously target an enemy causes attacks to be wasted, resulting in "overkill".

dragoons_zealots: symmetric armies with two types of units: 3 Zealots (melee ground unit) and 2 Dragoons (ranged ground unit). Here a strategy requires focus fire, and if possible to 1) not spend too much time having the Zealots walk instead of fight, 2) focus the Dragoons, who die more easily but deal more damage.

w15v17: we control 15 Wraiths (ranged flying unit) while the opponent has 17. Flying units have no "collision", so multiple units can occupy the same tile and reach their target more quickly. It only takes 6 Wraiths to kill an opponent in a single volley. Hence, it is important not to "overkill" on this map.

other mXvY or wXvY scenarios: the 4 scenarios above are the ones on which we train our models, but they can learn strategies that overfit a given number of units, so we have similar scenarios but with different numbers of units (on each side).

Ian Osband, Benjamin Van Roy, and Zheng Wen. Generalization and exploration via randomized value functions. In ICML, 2016b.

Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Learning multiagent communication with backpropagation. arXiv preprint arXiv:1605.07736, 2016.

Richard S Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9-44, 1988.

Gabriel Synnaeve and Pierre Bessiere. A Bayesian model for RTS units control applied to StarCraft.
In Computational Intelligence and Games (CIG), 2011 IEEE Conference on, pp. 190-196. IEEE, 2011.

The "greedy" MDP One way to break out the complexity of jointly inferring the commands to each individual unit is to perform greedy inference at each step: at each state, units choose a command one by one, knowing the commands that were previously taken by other units. Learning a greedy policy boils down to learning a policy in another MDP with fewer actions per state but exponentially more states, where the additional states correspond to the intermediate steps of the greedy inference. This reduction was previously proposed in the context of structured prediction by Maes et al. (2009), who proved that an optimal policy in this new MDP has the same cumulative reward as an optimal policy in the original MDP. We expand on this in Appendix B.

Gerald Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58-68, 1995.

T. Tieleman and G. Hinton. Lecture 6.5 - RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.

Normalized cumulative rewards Immediate rewards are necessary to provide feedback that guides exploration. In the case of micromanagement, a natural reward signal is the difference between damage inflicted and incurred between two states. The cumulative reward over an episode is the total damage inflicted minus the total damage incurred along the episode. However, the scale of this quantity heavily depends on the number of units (both our units and enemy units, which significantly decreases along an episode) that are present in the state. Without proper normalization with respect to the number of units in the current state z(s), learning will be artificially biased towards the large immediate rewards at the beginning of the episode. Then, instead of considering cumulative rewards from a starting state s^t, we define normalized cumulative rewards n^{t..T} as the following recursive computation over an episode:

Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. arXiv preprint arXiv:1509.06461, 2015.

Christopher JCH Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279-292, 1992.

n^{T..T} = 0 and ∀t ∈ {1, ..., T−1}, n^{t..T} = (r^{t+1} + z(s^{t+1}) n^{t+1..T}) / z(s^t)    (1)

We use the sum of maximum hit points of all units in the state s^t as the normalization factor z(s^t), which implies that n^{t..T} ∈ [−0.5, 0.5]. One way to look at this normalization process is to consider that the reward is r^{t+1}/z(s^t), and that z(s^{t+1})/z(s^t) plays the role of an (adaptive) discount factor, which is chosen to be at most 1, and strictly smaller than 1 when the number of units changes.

For policy gradient and our algorithm described in Section 6, we directly use n^{t..T}. We describe in Appendix C how we adapted the update rule for Q-learning."}, {"section_index": "6", "section_name": "FEATURES AND MODEL FOR MICROMANAGEMENT IN STARCRAFT", "section_text": "We describe in this section the features and the neural network architecture we use to parameterize the policy. Since we consider the greedy inference described in the previous section, the underlying MDP will contain states of the form s̃ = (s, a_{1..k}, u_{k+1}), where: s is the current state of the game given by the game engine, k is the number of units which already "played" at this frame, a_{1..k} is the sequence of the k pairs (unit, command) that correspond to the k commands that have already been chosen, and finally u_{k+1} is the unit to play.
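To make the recursion (1) for the normalized cumulative rewards above concrete, here is a minimal sketch computing n^{t..T} backwards over one episode. It is an illustration, not the authors' implementation; rewards and normalization factors are assumed to be collected as plain lists.

```python
# Minimal sketch of the normalized cumulative rewards n^{t..T} defined in (1).
def normalized_returns(rewards, z):
    """rewards[t] = r^{t+1} for t = 0..T-2; z[t] = z(s^t) for t = 0..T-1.
    Returns n[t] = n^{t..T}, with n[T-1] = n^{T..T} = 0."""
    T = len(z)
    n = [0.0] * T
    for t in range(T - 2, -1, -1):
        n[t] = (rewards[t] + z[t + 1] * n[t + 1]) / z[t]
    return n

# Example: 3 states, damage-based rewards, shrinking armies (z = total max HP).
# Satisfies the invariant n[0] * z[0] = sum of rewards: 0.0125 * 400 = 5.
print(normalized_returns([10.0, -5.0], [400.0, 320.0, 280.0]))
```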
For each unit, we consider two types of commands: (1) attack a given enemy unit, and (2) move to a specific position. In order to reduce the number of possible move commands, we only consider 9 move commands, which either correspond to a move in one of the 8 basic directions, or staying at the same position.

There are several challenges to represent states and actions in RTS games:

The number of units and actions is not bounded a priori and varies in time.
Commands must be evaluated in the context of all currently executing commands.
Attack actions must resolve the reference to their target.

executed concurrently, so that the order of commands in an action does not alter the transition probabilities. Finally, we consider the episodic reinforcement learning scenario, with finite horizon T and undiscounted rewards. The learner has to learn a (stochastic) policy π(a|s), which defines a probability distribution over actions in A(s) for every s ∈ S. The objective is to maximize the expected cumulative reward E[Σ_{t=1}^{T−1} r(s^t, s^{t+1})], where the expectation is taken with respect to s^1 ~ ρ_1, s^{t+1} ~ p(·|s^t, a^t) and a^t ~ π(·|s^t).

István Szita and András Lőrincz. Learning Tetris using the noisy cross-entropy method. Neural Computation, 18(12):2936-2941, 2006.

Ming Tan. Multi-agent reinforcement learning: Independent vs. cooperative agents. In Proceedings of the Tenth International Conference on Machine Learning, pp. 330-337, 1993.

Gerald Tesauro. Extending Q-learning to general adaptive multi-agent systems. In Advances in Neural Information Processing Systems, 2003.

William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

To address the first two challenges, we adopt an approach based on a joint encoding of states and commands. Denoting by s̃ = (s, a_{1..k}, u_{k+1}) the current state of the greedy MDP and c a

Table 1: Unit features as given by the game engine, their abbreviated name and their type: cat. means the feature is categorical and 1-hot encoded; real-valued features come with their re-scaling constant."}, {"section_index": "7", "section_name": "We here briefly describe the two algorithms we use as baseline, Q-learning (Sutton & Barto 1998) and REINFORCE (Williams 1992).", "section_text": "hit points (hp, ∈ R, /20); shield (shield, ∈ R, /20); cooldown (cd, ∈ R, /10); is enemy (nmy, bool); unit type (type, cat.)
position (pos, ∈ R², /20); previous target (tgt_pos, ∈ R², /20); chosen target (next_pos, ∈ R², /20); prev. cmd type (prev_cmd, cat.); chosen cmd type (next_cmd, cat.)

Q-learning The Q-learning algorithm in the finite-horizon setting learns an action-value function Q by solving the Bellman equation

∀s ∈ S, ∀a ∈ A(s), Q_t(s, a) = Σ_{s'∈S} p(s'|s, a) (r(s, s') + max_{a'∈A(s')} Q_{t+1}(s', a'))    (2)

candidate action, we learn the parameters w and θ of a (state, command) value function of the form f(s̃, c) = ⟨w, φ_θ(s̃, c)⟩, where w ∈ R^d and φ_θ(s̃, c) is the output of an embedding network that maps (state, command) pairs to R^d, with parameters θ.
In Q-learning and our algorithm presented in the next section, we directly use f as the state/action value function, whereas in policy gradient the probability to take command c in state s̃ is given by the Gibbs distribution over f(s̃, c) with temperature τ, i.e., proportional to exp(f(s̃, c)/τ).

Exploration is ε-greedy: at state s and stage t, an action in argmax_{a∈A(s)} Q_t(s, a) is chosen with probability 1−ε, or an action in A(s) is chosen uniformly at random with probability ε. In practice, we use stationary Q functions (i.e., Q_t = Q_{t+1}), which are neural networks, as described in Section 5. Training is carried out using the standard online update rule for Q-learning with function approximation (see (Mnih et al. 2015) for DQN), which we apply in mini-batches (hyper-parameters are detailed in Appendix E).

To tackle the last challenge, we identify units with their (x, y) coordinates in the map. We add two fields to the unit features that contain the coordinates of their corresponding target, or the unit's own location if it does not have a target. To evaluate a command c = (<actor unit>, <attack or move>, <target>), we compute pairwise distances between the actor and the target. Note that with this kind of representation, the input of the embedding network φ_θ is a joint representation of the state s̃ and the command c to evaluate. A complete list of unit features is given in Table 1. Hit points are the remaining life points of the unit, shield corresponds to additional hit points that are not affected by armor and regenerate slowly, and cooldown is the time to wait until damage can be inflicted.

REINFORCE The algorithm REINFORCE belongs to the family of policy gradient algorithms (Sutton et al. 1999). Given a stochastic policy π_Θ parameterized by Θ, learning is carried out by generating traces (s^t, a^t, s^{t+1}, r^{t+1})_{t=1,...,T−1} by following the current policy. Then, stochastic gradient updates are performed, using the gradient estimate:

Σ_{t=1}^{T−1} r^{t..T} ∇_Θ log(π_Θ(a^t|s^t))    (3)

The full scoring approach is depicted in Figure 1. In our approach, a state is represented as a list of units. The raw features are transformed by a featurizer that takes the 3 unit features (pos, tgt_pos and next_pos) and computes their distances to the position of the acting unit and of its target (pos_c and tgt_c). All 4 categorical variables are passed through a 10-dimensional linear embedding (not shown in figure). In addition to the 4 real-valued unit features, we have a 40-dimensional feature vector per unit as input to our network.

We use a Gibbs policy (with temperature parameter τ) as the stochastic policy:

π_Θ(a|s) = exp(φ_Θ(a, s)/τ) / Σ_{b∈A(s)} exp(φ_Θ(b, s)/τ)

where φ_Θ is a neural network with parameters Θ that gives a real-valued score to each (state, action) pair. For testing, we use the deterministic policy π(s) = argmax_{a∈A(s)} φ_Θ(a, s).

Each unit feature vector then goes through the unit-level embedding network. We then concatenate the max and mean poolings across units with an embedding of the command type. Then the resulting 210-dimensional vector is passed through a final state-command embedding network. Both the unit-level and state-command embedding networks have a hidden dimension of 100, and ELU nonlinearities in the intermediate layer (Clevert et al. 2015). We use tanh for the final unit-level network nonlinearity, and a ReLU for the final state-command network nonlinearity.
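The architecture paragraph above maps onto a few layers; the following PyTorch sketch is an illustrative reconstruction (the original implementation used Torch7, and the number of command types and the initialization are assumptions, not values from the paper).

```python
# A minimal sketch of the joint (state, command) scoring network described above.
import torch
import torch.nn as nn

class CommandScorer(nn.Module):
    def __init__(self, unit_feat_dim=40, hidden=100, n_cmd_types=10, cmd_emb=10):
        super().__init__()
        self.unit_net = nn.Sequential(             # unit-level embedding network
            nn.Linear(unit_feat_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.Tanh())
        self.cmd_emb = nn.Embedding(n_cmd_types, cmd_emb)
        self.state_cmd_net = nn.Sequential(        # state-command embedding network
            nn.Linear(2 * hidden + cmd_emb, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.w = nn.Parameter(torch.randn(hidden) / hidden ** 0.5)  # last layer w

    def forward(self, unit_feats, cmd_type):
        # unit_feats: (n_units, 40), joint featurization of state and command
        h = self.unit_net(unit_feats)                              # (n_units, 100)
        pooled = torch.cat([h.max(dim=0).values, h.mean(dim=0)])   # (200,)
        phi = self.state_cmd_net(torch.cat([pooled, self.cmd_emb(cmd_type)]))
        return torch.dot(self.w, phi)          # f(s, c) = <w, phi_theta(s, c)>

scorer = CommandScorer()
score = scorer(torch.randn(15, 40), torch.tensor(3))  # 15 units, command type 3
```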
We did not extensively experiment with the structure of the network, but we found the maxpooling and tanh nonlinearity to be particularly important."}, {"section_index": "8", "section_name": "B THE GREEDY MDP", "section_text": "We settled on iteratively choosing a unit, then a command to apply to that unit, which yields an algorithm with 2|s| steps for state s, linear in the number of units. Since the commands are executed concurrently by the environment after all commands have been decided, the cumulative reward does not depend on the order in which we choose the units, for instance: uniformly at random among remaining units. More formally, using the notation a_{1..k} to denote the k first (unit, command) pairs of an action a (with the convention a_{1..0} = ∅), the state space S̃ of the greedy MDP is defined by

The advantage of this approach is to rely on raw features only, and it does not require any encoding of the game dynamics, in contrast to previous works on RL for micromanagement (see e.g. (Wender & Watson 2012)) that used domain knowledge handcrafted in the features (such as the damage inflicted by an attack). The distance-based encoding is also a simple way to represent the different relationships between units that correspond to previous/chosen attacks.

²The policy may not be deterministic if we break ties randomly in the argmax.

∀s ∈ S, ∀a ∈ A(s), Q_t(s, a) = Σ_{s'∈S} p(s'|s, a) (r(s, s') + max_{a'∈A(s')} Q_{t+1}(s', a'))

where Q_t is the state-action value function at stage t of an episode, and Q_T(s, a) = 0 by convention. Q_t(s, a) is also 0 whenever a terminal state is reached, and transitions from a terminal state only go to the same terminal state.

[Figure 1 content: each unit's raw features go through the featurizer (distances d(pos, pos_c) and d(pos, tgt_c); categorical features through 10-dimensional embeddings), then Linear (40x100), ELU, Linear (100x100), tanh; max and mean pooling across units; then Linear (210x100), ELU, Linear (100x100), ReLU.]

This training phase is distinct from the test phase, in which we record the average cumulative reward of the deterministic policy s ↦ argmax_{a∈A(s)} Q(s, a).

A natural way to define the greedy MDP (Section 4) is to define the set of atomic actions of the greedy policy as all possible (unit, command) pairs for the units whose command is still not decided. This would lead to an inference with quadratic complexity with respect to the number of units, which is undesirable.

Figure 1: Representation of the joint (state, command) featurization and scoring process.

S̃ = {(s, a_{1..k}, u_{k+1}) | s ∈ S, 0 ≤ k < |s|, a = ((u_1, c_1), ..., (u_{|s|}, c_{|s|})) ∈ A(s)}"}, {"section_index": "9", "section_name": "COMBINING BACKPROPAGATION AND ZERO-ORDER OPTIMIZATION", "section_text": "Our preliminary experiments with Q-learning or REINFORCE made it clear that structured exploration was necessary to learn non-trivial strategies with substantial armies. The randomization of actions leads to the disorganization of the army and a rapid defeat, which prevents the algorithms from evaluating alterations to the current policy in the long run. Whereas gradient-free optimization that performs episode-based exploration (e.g. Mannor et al. (2003); Sehnke et al. (2010)) would be a valid choice, it only scales to few parameters. Preliminary experiments with direct exploration in the parameter space of the deep neural network confirmed that a more efficient scheme was needed.

Finally, using the same notation as above, the reward function r̃ between states that represent intermediate steps of the algorithm is 0 and the last unit to play receives the reward:

The deterministic policy
we consider takes action a in state s̃ according to the rule

π(s̃) = argmax_{a∈A(s̃)} ⟨w, φ_θ(s̃, a)⟩

r̃((s, a_{1..k−1}, u_k), (s, a_{1..k}, u_{k+1})) = 0, and r̃((s, a_{1..|s|−1}, u_{|s|}), (s', ∅, u')) = r(s, s')

It can be shown that an optimal policy for this greedy MDP chooses actions that are optimal for the original MDP, because the immediate reward in the original MDP does not depend on the order in which the actions are taken. This result only applies if the family of policies has enough capacity. In practice, some orderings may be easier to learn than others, but we did not investigate this issue because the gain, in terms of computation time, of the random ordering was critical for the experiments.

We use the notation (s, a) for states and actions in an MDP for the presentation of the algorithm, even though in our experiments we use it with states s̃ of the greedy MDP and unit-level commands c. Likewise, we describe the algorithm in the standard cumulative reward setup, while in our experiments we use the normalized cumulative rewards.

This form of policy naturally allows to perform structured exploration by only randomizing parts of the network. More specifically, the parameters w of the last layer affect all states and actions in a similar way along an episode. The approach we follow is then to perform gradient-free optimization on these parameters w only. Following stochastic methods for zeroth-order optimization (Kiefer et al. 1952; Nemirovsky et al. 1982; Spall 1997; Duchi et al. 2013; Ghadimi & Lan 2013), the gradient of a differentiable function x ∈ R^d ↦ f(x) can be estimated by

∇f(x) ≈ (d/δ) E_u[f(x + δu) u]

The normalized rewards (from Section 4) maintain the invariant n^{t..T} z(s^t) = Σ_{k=t+1}^{T} r^k; but more importantly, the normalization can be applied to the Bellman equation (2), which becomes

∀s ∈ S, ∀a ∈ A(s), Q(s, a) = Σ_{s'∈S} p(s'|s, a) (r(s, s')/z(s) + (z(s')/z(s)) max_{a'∈A(s')} Q(s', a'))

where the expectation is taken over the vector u sampled on the unit sphere (Nemirovsky et al. 1982, chapter 9.3). The constant d is absorbed by learning rates, so we ignore it in the following. Given the (state, action) pair at step k of an episode of length t and the observed cumulative reward r^{1..t}, an estimate of the gradient of the expected cumulative reward with respect to w is thus r^{1..t} u. In practice, we use r^{k..t} rather than r^{1..t}.

The stochastic gradient updates for Q-learning can easily be modified accordingly, as well as the gradient estimate in REINFORCE (3), in which we replace r by n.

The overall algorithm is described in Algorithm 1. At the beginning of an episode, a perturbation u is sampled from the unit sphere of R^d and the policy s ↦ π_{w+δu,θ}(s) is run through the entire episode (δ is a hyperparameter of the algorithm). The perturbation vector plays the dual role of performing structured exploration and providing the gradient estimate of the cumulative reward with respect to w. The algorithm performs a minibatch update at the end of the episode. The second loop in Algorithm 1 accumulates these updates over the steps of the episode."}, {"section_index": "10", "section_name": "D STARCRAFT SPECIFICS", "section_text": "We advocate that using existing video games for RL experiments is interesting because the simulators are oftentimes complex, and we (the AI programmers) do not have control over the source code of the simulator. In RTS games like StarCraft, we do not have access to a simulator (and writing one would be a daunting task), so we cannot use (Monte Carlo) tree search (Gelly & Wang 2006) directly, even less so in the setting of full games (Ontanon et al. 2013). In this paper, we consider the problem
of micromanagement scenarios, a subset of full RTS play. Micromanagement is about making good use of a given set of units in an RTS game. Units have different features, like range, cooldown, hit points (health), attack power, move speed, collision box, etc. These numerous features and the dynamics of the game advantage players who take the right actions at the right times. Specifically for the game(s) StarCraft, for which there are professional players, very good competitive and professional players perform more than 300 actions per minute during intense battles.

We ran all our experiments on simple scenarios of battles of an RTS game: StarCraft: Broodwar. These scenarios can be considered small scale for StarCraft, but they are already challenging for existing RL approaches. The joint action space is in O((#commands per unit)^{#units}), with a peak number of units of about 400 (Synnaeve & Bessiere 2011). For an example scenario of 15 units (that we control) against 16 enemy units, even while reducing the action space to "atomic" actions (surrounding moves, and attacks), we obtain 24 (= 8 + 16) possible discrete actions per unit for our controller to choose from (24^15 actions total) at the beginning of the battle. Battles last for tens of seconds, with durative actions, simultaneous moves, and at 24 frames per second. The strategies that we need to learn consist in coordinated sets of actions that may need to be repeated, e.g. focus firing without overkill. We use a featurization that gives access only to the state from the game; we do not encode any knowledge of the game dynamics in the features.

backprop_θ(s̃)(z) denotes the gradient with respect to θ when the network input is s̃ and the backward step uses z as input.

The action space A(s̃) of each state s̃ ∈ S̃ is constant and equal to the set of commands C. Moreover, for each state s of the original MDP and any action a = ((u_1, c_1), ..., (u_{|s|}, c_{|s|})) ∈ A(s), the transition probabilities p̃ in the greedy MDP are defined by

∀k ∈ {0, ..., |s|−1}, p̃((s, a_{1..k}, u_{k+1}) | (s, a_{1..k−1}, u_k), c_k) = 1/(|s| − k), and ∀s' ∈ S, ∀u' ∈ U(s'), p̃((s', ∅, u') | (s, a_{1..|s|−1}, u_{|s|}), c_{|s|}) = p(s'|s, a)/|s'|

The normalization does not change the optimal policy because it maintains the invariant that the expected normalized cumulative reward from a given state s to the end of an episode (by following the optimal deterministic policy) is the expected cumulative reward from this s divided by a value that depends only on s.

The deterministic exploration along an episode does not provide any update rule for the parameters of the embedding network, because the randomization is the same for every (state, action) pair. We propose a heuristic rule to update the parameters θ of the embedding network, motivated by the following remark: given a function (w ∈ R^d, v ∈ R^d) ↦ F(⟨w, v⟩) ∈ R, we have ∇_w F = F'(⟨w, v⟩)v and ∇_v F = F'(⟨w, v⟩)w. Denoting by ⊘ the term-by-term division of vectors (assuming v contains only non-zero values) and ⊙ the term-by-term multiplication operator, we obtain:

∇_v F = ((∇_w F) ⊘ v) ⊙ w

so that an update direction for θ can be obtained by backpropagation. In practice, we use only the sign of this quantity to avoid exploding gradients due to the term-by-term division.

Our tasks ("maps") represent battles with homogeneous types of units, or with little diversity (2 types of unit for each of the players). For instance, they may use a unit of type Marine, that is one soldier with 40 hit points, an average move speed, an average range (approximately 10 times its collision size), 15 frames of cooldown, and 6 of attack power of normal damage type (so a damage per second of 6 × 24/15 = 9.6 hit points per second, on a unit without armor).
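Pulling together the zero-order pieces from Section 6 above (the episode-wide perturbation of w, the deterministic argmax policy, and the r^{k..t} u gradient estimate), here is a minimal NumPy sketch of one training episode. The env and phi interfaces are stand-ins, not the paper's API, and the heuristic rule (**) for θ is only indicated in a comment.

```python
# Minimal sketch of one episode of the zero-order (ZO) algorithm; assumptions:
# phi(state, action, theta) returns the embedding phi_theta(s, a) as an array.
import numpy as np

def zo_episode(env, phi, w, theta, delta=0.01, lr=0.01):
    u = np.random.randn(w.size)
    u /= np.linalg.norm(u)              # perturbation sampled on the unit sphere
    w_explore = w + delta * u           # fixed for the whole episode
    rewards = []
    state = env.reset()
    while not env.done():
        acts = env.actions(state)
        embeds = [phi(state, a, theta) for a in acts]
        k = int(np.argmax([w_explore @ e for e in embeds]))  # deterministic policy
        state, r = env.step(acts[k])
        rewards.append(r)               # normalized rewards n would be used here
    # r^{k..t}: cumulative reward from step k to the end of the episode
    tails = np.cumsum(rewards[::-1])[::-1]
    grad_w = tails.sum() * u            # sum_k r^{k..t} u, since u is fixed
    # theta would be accumulated with the heuristic rule (**), i.e. a
    # backprop_theta(s)(sign-based backward input) term per step (not shown).
    return w + lr * grad_w              # plain ascent step (Adagrad in the paper)
```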
On symmetric and/or monotyped maps, strategies that are required to win (on average) are "focus firing", without overkill (no more units targeting a unit than what is needed to kill it). For perfect win rates, some maps may require that the AI moves its units out from the focus firing of the opponent."}, {"section_index": "11", "section_name": "E HYPER-PARAMETERS", "section_text": "[Algorithm 1 box residue; its parameter-accumulation step has the form G(θ) ← G(θ) + backprop_θ(s̃)(·).]

Taking an action on every frame (24 times per second at the speed at which humans play StarCraft) for every unit would spam the game needlessly, and it would actually prevent the units from moving³. We take actions for all units synchronously on the same frame, every skip_frames frames. We tried several values of this hyper-parameter (5, 7, 9, 11, 13, 17) and we only saw smooth changes in performance. We ran all the following experiments with a skip_frames of 9 (meaning that we take about 2.6 actions per unit per second). We also report the strongest numbers for the baselines over all these skip frames. We optimize all the models after each battle (episode), with RMSProp (momentum 0.99 or 0.95), except for zero-order, for which we optimized with Adagrad (Adagrad did not seem to work better for DQN nor REINFORCE). In any case, the learning rate was chosen among {10^-2, 10^-3, 10^-4}.

Algorithm 1: Zero-order (ZO) backpropagation algorithm

The reasoning above is only an intuitive motivation of the update rule (**) of Algorithm 1, because we neglected that a single u is sampled for an entire episode. We also neglected the argmax operator that chooses the actions. Nonetheless, considering (**) as a crude approximation to some real estimator of the gradient seems to work very well in practice, as we shall see in our experiments. Finally, we use Adagrad (Duchi et al. 2011) to update the parameters of the different layers. We found the use of Adagrad's update scheme fairly important in practice, compared to other approaches such as e.g. RMSProp (Tieleman & Hinton 2012), even though RMSProp tended to work slightly better with Q-learning or REINFORCE in our experiments.

For all methods, we tried experience replay, either with episodes (battles) as batches (of sizes 20, 50, 100), or additionally with random batches of (s^t, a^t, r^{t+1}, s^{t+1}, terminal?) quintuplets in the case of Q-learning. It did not seem to help compared to batching with the last battle. So, for consistency, we only present results where the training batches consisted of the last episode (battle).

For REINFORCE we searched over τ ∈ {0.1, 0.5, 1, 10}.

We use Torch7 (Collobert et al. 2011) for all our experiments. We connect our Torch code and models to StarCraft through a socket server, as described in (Synnaeve et al. 2016). We ran experiments with deep Q-networks (DQN) (Mnih et al. 2013), policy gradient (PG) (Williams 1992) (detailed in Appendix A), and zero order (ZO). We did an extensive hyper-parameters search, in particular over ε (for ε-greedy exploration in DQN), τ (for policy gradient's softmax), learning rates, optimization methods, RL algorithm variants, and potential annealings (detailed in Appendix E).

For zero-order, we tried δ ∈ {0.1, 0.01, 0.001}."}, {"section_index": "12", "section_name": "7.2 BASELINE HEURISTICS", "section_text": "As all the results that we report are against the built-in AI, we compare our win rates to the ones of baseline heuristics.
Some of these heuristics often perform the micromanagement in full-fledged StarCraft bots (Ontanon et al. 2013), and are the basis of heuristic search (Churchill et al. 2012). The baselines are the following:

³Because several actions are durative, including moves. Moves have a dynamic consisting of per-unit-type turn rate, max speed, and acceleration parameters.

For most of these tasks ("maps"), the number of units that our RL agent has to consider changes over an episode (a battle), as does its number of actions. A specificity of playing in this adversarial environment is that if the units do not follow a coherent strategy for a sufficient amount of time, they will suffer an unrecoverable loss, and the game will be in a state where the units die very rapidly and deal little damage, independently of how they play: a state that is mostly useless for learning.

For Q-learning (DQN), we tried two schemes of annealing for ε-greedy exploration: ε = ε_0 / √(1 + ε_a · ε_0 · t), with t the optimization batch, and ε = max(0.01, ε_0 − ε_a · t), both with ε_0 ∈ {0.1, 1}, and respectively ε_a ∈ {0, ε_0} and ε_a ∈ {10^-5, 10^-4, 10^-3}. We found that the first works marginally better and used it in the subsequent experiments with ε_0 = 1 and ε_a = 1 for most of the scenarios. We also used Double DQN as in (Van Hasselt et al. 2015) (thus implemented as target DQN). For the target/double network, we used a lag of 100 optimizations, thus a lag of 100 battles in all the following experiments. According to our initial runs/sweep, it seems to slightly help for some cases of over-estimation of the Q value.

We visually inspected the model's performance on large battles. On the larger Marines map (m15v16), DQN learned to focus fire. Because this map has many units, focus firing leads to units bumping into each other to try to focus on a single unit. The PG player seemed to have a policy that attacks the closest Marine, though it doesn't do a good job switching targets. The Marines that are not in range often bump into each other. Our zero order optimization learns a hybrid between focus firing
the player still assigns more than 6 Wraiths to an enemy target (maybe for robustness to the loss of one of our units), and occasionally will not focus fire when only a few Wraiths are remaining. This is similar to what the zero order player learned during the Marines scenario..\nrandom no change (rand_nc): select a random target for each of our units and do not change t) target before it dies (or our unit dies). This spreads damage over several enemy units, but wh there are collisions, it may make our units to move a lot to be in range of their target. noop: send no action. In this case, the built-in AI will control our units, so this exhibit t symmetry (or not!) of a given scenario. As we are always in a defensive position, with the ene commanded to walk towards us, all other things considered equal, it should be easier for t defending built-in AI than for the attacking one. Our models cannot send a noop command. closest (c): each of our units targets the enemy unit closest to it. This is not a bad heuristic enemy units formation will make it so that several of our units have the same opponent unit closest unit (some form of focus firing), but not all of them (no overkill). It is also quite robust melee units (e.g. Zealots) as it means they spend less time moving and more time attacking weakest closest (wc): each of our units targets the weakest enemy unit. The distance of the ene. unit to the center of mass of our units is used for tie-breaking. This may overkill. no overkill no change (nok_nc): same as the weakest closest heuristic, but register the number our units that target each opponent unit, choosing another target to focus fire when it becon overkill to keep targeting a given unit. Each of our units keep firing on their target without changi (that would lead to erratic behavior). Our implementation of the \"no overkill'' component dc not take all the dynamics of the game into account, and so if our units die without doing th expected damage on their target, \"no overkill' can be detrimental."}, {"section_index": "13", "section_name": "7.3 RESULTS", "section_text": "Overall, the zero order optimization outperforms both DQN and PG (REINFORCE) on most of the maps. The only map on which DQN and PG perform well is m5v5. It seems to be easier to learn\nThe first thing that we looked at were sliding average win rates over 400 battles during training against the built-in AI of the various models. In Figure[2] we can see than DQN is much more dependent on initialization and variable than zero order (ZO). DQN can unlearn, reach suboptimal plateau, or overall need a lot of exploration to start learning (high sample complexity).\nFor all the results that we present in Tables2|and[3] we ran the models in \"test mode\" by making them deterministic. For DQN we remove the epsilon-greedy exploration (set e = 0), for PG we do not sample from the Gibbs policy but instead take the value-maximizing action, and for ZO we do not add noise to the last layer.\nWe can see in Table|2|that m15v16 is at the advantage of our player's side (noop is at 81% win rate), whereas w15v17 is hard (c is at 20% win rate). By looking just at the results of the heuristics, we can see that overkill is a problem on m15v16 and w15v17 (nok_nc is better than wc). \"Attack closest\"' (c) is approximatively as good as nok_nc at spreading damage, and thus better on m15v16 because there are lots of collisions (and attacking the closest unit is going to trigger less movements)."}]
By14kuqxx
[{"section_index": "0", "section_name": "BIT-PRAGMATIC DEEP NEURAL NETWORK COMPUT ING", "section_text": "jorge, juddpatr, delmasll, sayeh, moshovos}@ece.utoronto.ca\nTable 2: Per convolutional layer activation precision profiles\nconfiguration. Section |6.4|analyzes the contribution of the software provided precisions. Finally Section|6.5|reports performance for designs using an 8-bit quantized representation.\nWe quantify a source of ineffectual computations when processing the multiplica tions of the convolutional layers in Deep Neural Networks (DNNs) and propose Pragmatic (PRA), an architecture that exploits it improving performance and en ergy efficiency. The source of these ineffectual computations is best understood in the context of conventional multipliers which generate internally multiple terms that is, products of the multiplicand and powers of two, which added together pro duce the final product|Wallace(1964). At runtime, many of these terms are zero as they are generated when the multiplicand is combined with the zero-bits of the multiplicator. While conventional bit-parallel multipliers calculate all terms in parallel to reduce individual product latency, PRA calculates only the non- zero terms resulting in a design whose execution time for convolutional layers is ideally proportional to the number of activation bits that are 1. Measurements demonstrate that for the convolutional layers on Convolutional Neural Networks and during inference, PRA improves performance by 4.3x over the DaDiaNao (DaDN) accelerator Chen et al.(2014) and by 4.5x when DaDN uses an 8-bit quantized representation Warden (2016). DaDN was reported to be 300x faster than commodity graphics processors.\nMethodology: The same methodology is used for all systems for consistency. A custom cycle. accurate simulator models execution time. For all systems, computation was scheduled to minimize energy, which led to the same schedule for all. To estimate power and area, the designs were synthe sized with the Synopsis Design CompilerSynopsys for a TSMC 65nm library. The NBin and NBou. SRAM buffers were modeled using CACTI|Muralimanohar & Balasubramonian The eDRAM are. and energy were modeled with Destiny Poremba et al.(2015). To compare against STR, the pe. layer numerical representation requirements reported in Table[2|were found using the methodology. of Judd et al.Judd et al.(2016b). All PRA configurations studied exploit software provided preci sions as per Section[5.1] Section|6.4|analyzes the impact of this information on overall performance All performance measurements are for the convolutional layers only which account for more thar 92% of the overall execution time in DaDN Chen et al.(2014). PRA does not affect the executior. time of the remaining layers."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Performance: Figure7|shows the performance of STR (leftmost bars) and of PRA variants relative to DaDN. The PRA systems are labelled with the number of bits used to operate the first-stage weight shifters, e.g., the weight shifters of \"2-bit\" , or PRA2b, are able to shift to four bit positions (0-3). \"4-bit' or PRA4b, is the single-stage Pragmatic, or PRAsingle of Sections 55.1|whose weight shifters can shift to 16 bit positions (0-15). It has no second stage shifter.\nDeep Neural Network (DNN) hardware typically uses either 16-bit fixed-point Chen et al. (2014 or quantized 8-bit numbers|Warden (2016) and bit-parallel compute units. 
For convolutional layers that account for most of the execution time in Convolutional Neural Networks (CNNs) during image classification, these bit-parallel engines perform many ineffectual computations. Specifically, these layers perform several inner products, where multiple pairs of weights and activations are multiplied and then reduced into an output activation. Any time a zero bit of an activation or a weight is multiplied, it adds nothing to the final output activations. These ineffectual bits are introduced by the conventional positional number representation and, if avoided, it would take even less time to calculate each product, improving energy and performance. As a first step, this work targets the ineffectual bits of activations only. Section 2 shows that in recent image classification networks, 93% and 69% of activation bit and weight products are ineffectual when using respectively 16-bit fixed-point and 8-bit quantized representations.

PRAsingle improves performance by 2.59x on average over DaDN, compared to the 1.85x average improvement with STR. Performance improvements over DaDN vary from 2.11x for VGG19 to 2.97x for VGGM. As expected, the 2-stage PRA variants offer slightly lower performance than PRAsingle; however, performance with PRA2b and PRA3b is always within 0.2% of PRAsingle. Even PRA0b, which does not include any weight shifters, outperforms STR by 20% on average. Given a set of oneffsets, PRA0b will accommodate the minimum non-zero oneffset per cycle via its second-level shifter.

Area and Power: Table 3 shows the absolute and relative to DaDN area and power. Two area measurements are reported: 1) for the unit excluding the SB, NBin and NBout memory blocks, and 2) for the whole chip comprising 16 units and all memory blocks. Since SB and NM dominate chip area, the per-chip area overheads are much smaller than the per-unit ones. Given the performance advantage of PRA, the area and power overheads are justified. PRA2b is particularly appealing, as its overall area cost over BASE is only 1.35x and its power 2.03x, while its performance is 2.59x on average. Accordingly, we restrict attention to this configuration in the rest of this evaluation.

This work presents Pragmatic (PRA), a DNN accelerator whose goal is to process only the essential (non-zero) bits of the input activations. PRA employs the following four key techniques: 1) on-the-fly conversion of activations from a storage representation (e.g., conventional positional numbers or quantized) into an explicit representation of the essential bits only, 2) bit-serial activation/bit-parallel weight processing, an idea borrowed from STR Judd et al. (2016b;a) but adapted for the aforementioned representation, 3) judicious SIMD (single instruction multiple data) lane grouping to maintain wide memory accesses and to avoid fragmenting and enlarging the multi-MB on-chip weight memories (Sections 5 and 5.1), and 4) computation re-arrangement (Section 5.1) to reduce datapath area. All evaluated PRA variants maintain wide memory accesses and use highly-parallel SIMD-style (single-instruction multiple-data) computational units. PRA introduces an additional dimension upon which software can improve performance and energy efficiency by controlling activation values judiciously in order to reduce their essential bit content while maintaining accuracy. This work explores such an alternative, where the software explicitly communicates how many prefix and suffix bits to discard after each layer.

Performance: Figure 8 reports the relative performance for PRA2b with column synchronization and as a function of the number of SSRs as per Section 5.1.

Network and per-layer activation precision in bits:
Table 2: Per convolutional layer activation precision profiles.

Network    | Per layer activation precision in bits
AlexNet    | 9-8-5-5-7
NiN        | 8-8-8-9-7-8-8-9-9-8-8-8
GoogLeNet  | 10-8-10-9-8-10-9-8-9-10-7
VGG_M      | 7-7-7-8-7
VGG_S      | 7-8-9-7-9
VGG_19     | 12-12-12-11-12-10-11-11-13-12-13-13-13-13-13-13

Jorge Albericio, Patrick Judd, Alberto Delmas Lascorz, Sayeh Sharify & Andreas Moshovos
Electrical and Computer Engineering."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: Sources of ineffectual computation with conventional positional representation and fixed length hardware precision.

Figure 7: Pragmatic's performance relative to DaDianNao using 2-stage shifting and per-pallet synchronization.

Experimental measurements with recent CNNs for image classification demonstrate that the most straightforward PRA variant boosts average performance for the convolutional layers to 2.59x over the state-of-the-art DaDN accelerator. Pragmatic's average energy efficiency is 1.48x over DaDN and its area overhead is 1.35x. Another variant further boosts performance to 3.1x over DaDN at the expense of an additional 0.7% area.

            DaDN   STR    0-bit  1-bit  2-bit  3-bit  4-bit
Area U.     1.55   3.05   3.11   3.16   3.54   4.41   5.75
Area U.  /x 1.00   1.97   2.01   2.04   2.29   2.85   3.71
Area T.     90     114    115    116    122    136    157
Area T.  /x 1.00   1.27   1.28   1.29   1.35   1.51   1.75
Power T.    18.8   30.2   31.4   34.5   38.2   43.8   51.6
Power T. /x 1.00   1.60   1.67   1.83   2.03   2.33   2.74

{"section_index": "3", "section_name": "2 MOTIVATION", "section_text": "Table 3: Area [mm2] and power [W] for the unit and the whole chip; for each metric the second row (/x) is relative to DaDN. Pallet synchronization.

Area and Power: Table 4 reports the area per unit, and the area and power per chip. The bes

With such a hardware arrangement there are two sources of ineffectual computations that result from: 1) an Excess of Precision (EoP), and 2) Lack of Explicitness (LoE). Figure 1 shows an example illustrating these sources with a bit-parallel multiplier using an 8-bit unsigned fixed-point number with 4 fractional and 4 integer bits. While 10.101(2) requires just five bits, our 8-bit bit-parallel multiplier will zero-extend it with two prefix and one suffix bits. This is an example of EoP and is due to the fixed-precision hardware. Two additional ineffectual bits appear at positions 1 and -2 as a result of LoE in the positional number representation. In total, five ineffectual bits will be processed, generating five ineffectual terms.

Energy Efficiency: Figure 10 shows the energy efficiency of various configurations of Pragmatic. Energy Efficiency, or simply efficiency, for a system NEW relative to BASE is defined as the ratio E_BASE/E_NEW of the energy required by BASE to compute all of the convolution layers over that of NEW. For the selected networks, STR is 16% more efficient than DaDN. The power overhead of PRAsingle (PRA4b) is more than the speedup, resulting in a circuit that is 5% less efficient than DaDN. PRA2b reduces that power overhead while maintaining performance, yielding an efficiency of 28%. PRA2b-1R yields the best efficiency at 48% over DaDN.

Our number could be represented with an explicit list of its three constituent powers of 2: (1, -1, -3). While such a representation may require more bits and thus be undesirable for storage, coupled with the abundant parallelism that is present in DNN layers, it provides an opportunity to revisit hardware design, improving performance and energy efficiency.
Table 1 reports the essential bit content of the activation stream of recent CNNs for two commonly used fixed length representations: 1) 16-bit fixed-point of DaDianNao Chen et al. (2014), 2) 8-bit quantized of Tensorflow Warden (2016). The essential bit content is the average number of bits that are 1. Two measurements are presented per representation: over all neuron values ("All"), and over the non-zero neurons ("NZ"), as accelerators that can skip zero activations for fixed-point representations have been recently proposed Han et al. (2016); Albericio et al. (2016).

When considering all activations, the essential bit-content is at most 12.7% and 38.4% for the fixed-point and the quantized representations respectively. Even when considering the non-zero activations the essential bit content remains well below 50%, suggesting that the potential exists to improve performance and energy efficiency over approaches that target zero valued activations only.

Figure 9: Relative performance of Pragmatic using Improved Oneffset Encoding for different configurations. Marked: performance not using IOE.

This section illustrates the idea behind Pragmatic via a simplified example.

Figure 8: Relative performance of PRA2b with column synchronization and as a function of the SB registers used.

Let us assume a p-bit bit-parallel multiplier using a straightforward implementation of the "Shift and Add" algorithm, where n x s is calculated as sum_{i=0}^{p-1} n_i * (s << i), with n_i the i-th bit of n. The multiplier computes p terms, each a product of s and of a bit of n, and adds them to produce the final result. The terms and their sum can be calculated concurrently to reduce latency Wallace (1964).

Figure 10: Relative energy efficiency.

Table 1: Average fraction of non-zero bits per activation for two fixed-length representations: 16-bit fixed-point, and 8-bit quantized. All: over all activations. NZ: over non-zero activations only.

Table 4: Area [mm2] and power [W] for the unit and the whole chip for column synchronization and PRA2b.

Table 5: Performance benefit due to software guidance.

{"section_index": "4", "section_name": "6.3 IMPROVED ONEFFSET ENCODING", "section_text": "Figure 9 reports performance for Pragmatic when using the enhanced oneffset generator described in Section 5.1. The considered configurations include PRA0b, PRA1b and PRA2b (with pallet synchronization), with improvements of 26%, 48%, and 41% respectively.
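To make the two encodings concrete, here is a minimal Python sketch (our own illustration; the function names are not from the paper) that contrasts the conventional oneffset count with the run-compressed signed encoding of Section 5.1, for the activation pair discussed next:

```python
def oneffsets(n, bits=16):
    """Conventional oneffset encoding: the positions of the 1 bits of n."""
    return [i for i in range(bits) if (n >> i) & 1]

def improved_oneffsets(n, bits=16):
    """Improved (signed) encoding: a run of adjacent ones spanning bit
    positions b..a (length >= 2) becomes the pair (+(a+1), -b); isolated
    ones keep a single positive oneffset."""
    out, i = [], 0
    while i < bits:
        if (n >> i) & 1:
            j = i
            while j + 1 < bits and (n >> (j + 1)) & 1:
                j += 1                        # extend the run of ones
            if j > i:
                out += [(j + 1, +1), (i, -1)]  # run b..a -> +(a+1), -b
            else:
                out += [(i, +1)]               # isolated one
            i = j + 1
        else:
            i += 1
    return out

# The activation pair discussed in Section 6.3:
for n in (0b011101, 0b010101):
    conv, impr = oneffsets(n), improved_oneffsets(n)
    assert n == sum(s * (1 << p) for p, s in impr)  # both encodings agree
    print(f"{n:06b}: conventional {len(conv)} oneffsets, improved {len(impr)}")
# Total oneffset count drops from 4 + 3 = 7 to 3 + 3 = 6.
```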
A cause of degradation for PRA0b is the increased spread of oneffset values (for example, the pair of neurons 011101 and 010101 takes 4 cycles with conventional encoding and 5 with enhanced encoding, even though the total count of oneffsets is reduced from 7 to 6).

Figure 2: An Example Illustrating How Pragmatic Skips Ineffectual Activation Bits Yet Exceeds the Performance of a Bit-Parallel Engine.

{"section_index": "5", "section_name": "6.4 THE IMPACT OF SOFTWARE", "section_text": "The bit-parallel unit of Figure 2a multiplies two activations with their respective weights and via an adder reduces the two products. The unit reads all activations and weights, (n0 = 001(2), n1 = 010(2)) and (s0 = 001(2), s1 = 111(2)) respectively, in a single cycle. As a result, the two sources of inefficiency, EoP and LoE, manifest here: n0 and n1 are represented using 3 bits instead of 1 and 2 bits respectively due to EoP. Even in 2 bits, they each contain a zero bit due to LoE. As a result, four ineffectual terms are processed when using standard multipliers such as those derived from the Shift and Add algorithm. In general, given N activation and weight pairs, this unit will take ceil(N/2) cycles to process them regardless of their precision and the essential bit content of the activations.

Figure 2b shows a simplified PRA engine. In this example, activations are no longer represented as vectors of bits but as vectors of offsets of the essential bits. For example, activation n0 = 001(2) is represented as on0 = (0), and an activation value of 111(2) would be represented as (2, 1, 0). An out-of-band bit (wire), not shown, indicates the activation's end. A shifter per activation uses the offsets to effectively multiply the corresponding weight with the respective power of 2 before passing it to the adder tree. As a result, PRA processes only the non-zero terms, avoiding all ineffectual computations that were due to EoP or LoE. To match the throughput of the bit-parallel engine of Figure 2a, we take advantage of weight reuse and process multiple activation groups in parallel. In this example, six activations (n0 = 001(2), n1 = 010(2), n'0 = 000(2), n'1 = 010(2), n''0 = 010(2), n''1 = 000(2)) are combined with the two weights as shown. For this example, PRA would process the six activation and weight pairs in a single cycle, a speedup of 3x over the bit-parallel engine.

Figure 11 reports performance for DaDN and PRA configurations using the 8-bit quantized representation used in Tensorflow Warden (2016); Google (2016). This quantization uses 8 bits to specify arbitrary minimum and maximum limits per layer for the activations and the weights separately, and maps the 256 available 8-bit values linearly into the resulting interval. This representation has higher flexibility and better utilization than the reduced precision approach of Stripes since the range doesn't have to be symmetrical and the limits don't have to be powers of two, while still allowing straightforward multiplication of the values. The limit values are set to the maximum and the minimum activation values for each layer and the quantization uses the recommended rounding mode.

{"section_index": "6", "section_name": "4 BASELINE SYSTEM: DADIANNAO", "section_text": "Pragmatic is demonstrated as a modification of the DaDianNao accelerator (DaDN) proposed by Chen et al. (2014). Figure 3a shows a DaDN tile which processes 16 filters concurrently, calculating 16 activation and weight products per filter for a total of 256 products per cycle. To do so, each cycle the tile accepts 16 weights per filter for a total of 256 weights, and 16 input activations. The tile multiplies each weight with only one activation whereas each activation is multiplied with 16 weights, one per filter.
The tile reduces the 16 products into a single partial output activation per filter, for a total of 16 partial output activations for the tile. Each DaDN chip comprises 16 such tiles, each processing a different set of 16 filters per cycle. Accordingly, each cycle, the whole chip processes 16 activations and 256 x 16 = 4K weights, producing 16 x 16 = 256 partial output activations.

Figure 11: Performance: 8-bit quantized representation (marked: without IOE).

(Figure 2 datapath drawing: (a) Bit-Parallel Unit, (b) Pragmatic Unit.)

All PRA configurations studied thus far used software provided per layer activation precisions to reduce essential bit content. PRA does not require these precisions to operate. Table 5 shows the performance benefit that software guidance provides for each configuration studied. The results demonstrate that: 1) PRA would outperform the other architectures even without software guidance, and 2) on average, software guidance improves performance by 19%.

(Figure 3 block diagram: NBin/NBout buffers, SB (eDRAM), neuron and synapse lanes, IP/PIP arrays.)

designs is left for future work, however, the absolute area and energy needed by all will be lower due to the narrower representation. Moreover, given that the tile logic will occupy relatively less area for the whole chip and given that the SB and NM account for significant area and energy, the overall overheads of the PRA designs over DaDN will be lower than that measured for the 16-bit fixed-point configurations.

{"section_index": "7", "section_name": "7 RELATED WORK", "section_text": "Figure 3: a) DaDianNao Tile. b) Pragmatic Tile.

The acceleration of Deep Learning is an active area of research and has yielded numerous proposals for hardware acceleration. DaDianNao (DaDN) is the de facto standard for high-performance DNN acceleration Chen et al. (2014). In the interest of space, this section restricts attention to methods that are either directly related to DaDN, or that follow a value-based approach to DNN acceleration, as Pragmatic falls under this category of accelerators. Value-based accelerators exploit the properties of the values being processed to further improve performance or energy beyond what is possible by exploiting computation structure alone. Cnvlutin Albericio et al. (2016) and Stripes Judd et al. (2016a;b) are such accelerators and they have already been discussed and compared against in this work.

Internally, each tile has: 1) a synapse buffer (SB) that provides 256 weights per cycle, one per synapse lane, 2) an input neuron buffer (NBin) which provides 16 activations per cycle through 16 neuron lanes, and 3) a neuron output buffer (NBout) which accepts 16 partial output activations per cycle. In the tile's datapath, or the Neural Functional Unit (NFU), each neuron lane is paired with 16 synapse lanes, one from each filter. Each synapse and neuron lane pair feeds a multiplier, and an adder tree per filter lane reduces the 16 per-filter products into a partial sum.
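A minimal functional sketch of this per-cycle tile computation (illustrative Python, not the hardware; all names are ours):

```python
import random

# Weights: 16 filters x 16 synapse lanes; activations: 16 neuron lanes.
weights = [[random.randint(-8, 7) for _ in range(16)] for _ in range(16)]
acts    = [random.randint(0, 15) for _ in range(16)]
partial = [0] * 16   # one running partial sum per filter (an NBout entry)

def dadn_tile_cycle(weights, acts, partial):
    """One DaDN-style tile cycle: 256 multiplies (16 filters x 16 lanes),
    then a 16-input adder-tree reduction per filter lane."""
    for f in range(16):                        # filter lanes
        products = [weights[f][i] * acts[i]    # each activation feeds 16 filters
                    for i in range(16)]        # 16 products per filter
        partial[f] += sum(products)            # adder tree reduction
    return partial

dadn_tile_cycle(weights, acts, partial)
```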
In all, each filter lane produces a partial sum per cycle, for a total of 16 partial output activations per NFU. Once a full window is processed, the 16 resulting sums are fed through a non-linear activation function, f, to produce the 16 final output activations. The multiplications and reductions needed per cycle are implemented via 256 multipliers, one per synapse lane, and sixteen 17-input (16 products plus the partial sum from NBout) adder trees, one per filter lane.

PuDianNao is a hardware accelerator that supports seven machine learning algorithms including DNNs Liu et al. (2015). ShiDianNao is a camera-integrated low power accelerator that exploits integration to reduce communication overheads and to further improve energy efficiency Du et al. (2015). Cambricon is the first instruction set architecture for Deep Learning Liu et al. (2016). Minerva is a highly automated software and hardware co-design approach targeting ultra low-voltage, highly-efficient DNN accelerators Reagen et al. (2016). Eyeriss is a low power, real-time DNN accelerator that exploits zero valued activations for memory compression and energy reduction Chen et al. (2016). The Efficient Inference Engine (EIE) exploits efficient activation and weight representations and pruning to greatly reduce communication costs, to improve energy efficiency and to boost performance by avoiding certain ineffectual computations Han et al. (2015; 2016). EIE targets fully-connected (FC) layers and was shown to be 12x more efficient than DaDN on FC layers, and 2x less efficient for convolutional layers. All aforementioned accelerators use bit-parallel units. While this work has demonstrated Pragmatic as a modification of DaDN, its computation units and, potentially, its general approach could be compatible with all aforementioned accelerator designs. This investigation is interesting future work.

DaDN's main goal was minimizing off-chip bandwidth while maximizing on-chip compute utilization. To avoid fetching weights from off-chip, DaDN uses a 2MB eDRAM SB per tile for a total of 32MB eDRAM. All inter-layer activations except for the initial input and the final output are stored in a 4MB shared central eDRAM Neuron Memory (NM) which is connected via a broadcast interconnect to the 16 NBin buffers. Off-chip accesses are needed only for reading the input image, the filter weights once per layer, and for writing the final output.

Profiling has been used to determine the precision requirements of a neural network for a hardwired implementation Kim et al. (2014). EoP has been exploited in general purpose hardware and other application domains. For example, Brooks & Martonosi (1999) exploit the prefix bits due to EoP to turn off parts of the datapath, improving energy. Park et al. (2010) use a similar approach to trade off image quality for improved energy efficiency. Neither approach directly improves performance.

Processing Approach: Processing starts by reading from external memory the first layer's weights (synapses) and the input image. The weights are distributed over the SBs and the input is stored into NM. Each cycle an input activation brick is broadcast to all units. Each unit reads 16 weight bricks from its SB and produces a partial output activation brick which it stores in its NBout. Once computed, the output activations are stored through NBout to NM and then fed back through the NBins when processing the next layer.
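A toy functional sketch of this per-layer flow (our own illustration using a dense stand-in for the convolutional arithmetic; none of these names come from DaDN):

```python
import numpy as np

def process_network(layer_weights, input_acts):
    """Layer-by-layer flow: weights are loaded once per layer, while
    activations ping-pong through a stand-in "Neuron Memory"."""
    neuron_memory = input_acts                     # initial input stored in NM
    for W in layer_weights:                        # weights fetched once per layer
        products = W @ neuron_memory               # the tiles' multiply/reduce work
        neuron_memory = np.maximum(products, 0.0)  # f(): non-linear activation
    return neuron_memory                           # final output written off-chip

out = process_network([np.ones((4, 8)), np.ones((2, 4))], np.ones(8))
```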
Loading the next set of activations from external memory can be overlapped with the processing of the current layer as necessary.

{"section_index": "8", "section_name": "8 CONCLUSION", "section_text": "To the best of our knowledge Pragmatic is the first DNN accelerator that exploits not only the per layer precision requirements of CNNs but also the essential bit information content of the activation values. While this work targeted high-performance implementations, Pragmatic's core approach should be applicable to other hardware accelerators. We have investigated Pragmatic only for inference and with image classification convolutional neural networks. While desirable, applying the same concept to other network types and to layers other than the convolutional ones is left for future work. It would also be interesting to study how the Pragmatic concepts can be applied to more general purpose accelerators or even graphics processors.

Chen et al. (2014) used the terms neuron and synapse to refer to activations and weights respectively and named the various components accordingly. We maintain this terminology for the design's components.

Terminology: For clarity, in what follows n(x, y, i) and o(x, y, i) refer to an input and an output activation at coordinates (x, y, i) respectively. The weight of filter f at coordinates (x, y, i) is denoted as s^f(x, y, i). The term brick refers to a set of 16 elements of a 3D activation or weight array which are contiguous along the i dimension, e.g., n(x, y, i)...n(x, y, i + 15). Bricks will be denoted by their origin element with a B subscript, e.g., nB(x, y, i). The term pallet refers to a set of 16 bricks corresponding to adjacent, using a stride S, windows along the x or y dimensions, e.g., nB(x, y, i)...nB(x, y + 15 x S, i), and will be denoted as nP(x, y, i). The number of activations per brick, and bricks per pallet, are design parameters.

{"section_index": "9", "section_name": "5 PRAGMATIC", "section_text": "Jorge Albericio, Patrick Judd, Tayler Hetherington, Tor Aamodt, Natalie Enright Jerger, and Andreas Moshovos. Cnvlutin: Ineffectual-neuron-free deep neural network computing. In 2016 IEEE/ACM International Conference on Computer Architecture (ISCA), 2016.

PRA's goal is to process only the essential bits of the activations. To do so PRA a) converts, on-the-fly, the input activation representation into one containing only the essential bits, and b) processes one essential bit per activation and a full 16-bit weight per cycle. Since PRA processes activation bits serially, it may take up to 16 cycles to produce a product of an activation and a weight. To always match or exceed the performance of the bit-parallel units of DaDN, PRA processes more activations concurrently, exploiting the abundant parallelism of the convolutional layers. The remainder of this section describes in turn: 1) an appropriate activation representation, 2) the way PRA calculates terms, 3) how multiple terms are processed concurrently to maintain performance on par with DaDN in the worst case, and 4) how PRA's units are supplied with the necessary activations from NM.
David Brooks and Margaret Martonosi. Dynamically exploiting narrow width operands to improve processor power and performance. In Proceedings of the 5th International Symposium on High Performance Computer Architecture, HPCA '99, pp. 13-, Washington, DC, USA, 1999. IEEE Computer Society. ISBN 0-7695-0004-8. URL http://dl.acm.org/citation.cfm?id=520549.822763

That is, each cycle, the weight s is multiplied by 2^f, where f is the next constituent power of two of n, and the result is accumulated. This multiplication can be implemented as a shift and an AND.

Boosting Compute Bandwidth over DaDN: To match DaDN's performance PRA needs to process the same number of effectual terms per cycle. Each DaDN tile calculates 256 activation and weight products per cycle, or 256 x 16 = 4K terms. While most of these terms will in practice be ineffectual, to guarantee that PRA always performs as well as DaDN it should process 4K terms per cycle. For the time being let us assume that all activations contain the same number of essential bits so that when processing multiple activations in parallel, all units complete at the same time and thus can proceed with the next set of activations in sync. The next section will relax this constraint.

Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. EIE: Efficient Inference Engine on Compressed Deep Neural Network. arXiv:1602.01528 [cs], February 2016. URL http://arxiv.org/abs/1602.01528

Since PRA processes activation bits serially, it produces one term per activation bit and weight pair, and thus needs to process 4K such pairs concurrently. The choice of which 4K activation bit and weight pairs to process concurrently can adversely affect complexity and performance. For example, it could force an increase in SB capacity and width, or an increase in NM width, or be ineffective due to unit underutilization given the commonly used layer sizes.

Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor Aamodt, Natalie Enright Jerger, Raquel Urtasun, and Andreas Moshovos. Reduced-Precision Strategies for Bounded Memory in Deep Neural Nets, arXiv:1511.05236v4 [cs.LG]. arXiv.org, 2015.

Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor Aamodt, and Andreas Moshovos. Stripes: Bit-serial Deep Neural Network Computing. In Proceedings of the 49th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-49, 2016a.

Fortunately, it is possible to avoid increasing the capacity and the width of the SB and the NM while keeping the units utilized as in DaDN. Specifically, a PRA tile can read 16 weight bricks and the equivalent of 256 activation bits as DaDN's tiles do (DaDN processes 16 16-bit activations or 256 activation bits per cycle). Specifically, as in DaDN, each PRA tile processes 16 weight bricks concurrently, one per filter. However, differently than DaDN where the 16 weight bricks are combined with just one activation brick which is processed bit-parallel, PRA combines each weight brick with 16 activation bricks, one from each of 16 windows, which are processed bit-serially. The same 16 activation bricks are combined with all weight bricks. These activation bricks form a pallet, enabling the same weight brick to be combined with all.
For example, in a single cycle a PRA tile processing filters 0 through 15 could combine s^0(x, y, 0), ..., s^15(x, y, 0) with nPRA(x, y, 0), nPRA(x + 2, y, 0), ..., nPRA(x + 31, y, 0), assuming a layer with a stride of 2. In this case, s^4(x, y, 2) would be paired with nPRA(x, y, 2), nPRA(x + 2, y, 2), ..., nPRA(x + 31, y, 2) to produce the output activations o(x, y, 4) through o(x + 15, y, 4).

Patrick Judd, Jorge Albericio, and Andreas Moshovos. Stripes: Bit-serial Deep Neural Network Computing. Computer Architecture Letters, 2016b.

As the example illustrates, this approach allows each weight to be combined with one activation per window, whereas in DaDN each weight is combined with one activation only. In total, 256 essential activation bits are processed per cycle and, given that there are 256 weights and 16 windows, PRA processes 256 x 16 = 4K activation bit and weight pairs, or terms, per cycle, producing 256 partial output activations, 16 per filter, or 16 partial output activation bricks per cycle.

Input Activation Representation: PRA starts with an input activation representation where it is straightforward to identify the next essential bit each cycle. One such representation is an explicit list of oneffsets, that is, of the constituent powers of two. For example, an activation n = 5.5(10) = 0101.1(2) would be represented as nPRA = (2, 0, -1). In the implementation described herein, activations are stored in 16-bit fixed-point in NM, and converted on-the-fly into the PRA representation as they are broadcast to the tiles. A single oneffset is processed per activation per cycle. Each oneffset is represented as (pow, eon) where pow is a 4-bit value and eon a single bit which, if set, indicates the end of an activation. For example, n = 101(2) is represented as nPRA = ((0010, 0), (0000, 1)).

Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, and O. Temam. Dadiannao: A machine-learning supercomputer. In Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pp. 609-622, Dec 2014. doi: 10.1109/MICRO.2014.58

Calculating a (weight, activation) product: PRA calculates the product of weight s and activation n as:

s x n = sum over all f in nPRA of s x 2^f, that is, sum over all f in nPRA of (s << f)

Daofu Liu, Tianshi Chen, Shaoli Liu, Jinhong Zhou, Shengyuan Zhou, Olivier Teman, Xiaobing Feng, Xuehai Zhou, and Yunji Chen. PuDianNao: A Polyvalent Machine Learning Accelerator. In Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS '15, pp. 369-381, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-2835-7. doi: 10.1145/2694344.2694358. URL http://doi.acm.org/10.1145/2694344.2694358

Supplying the Inputs: Thus far it was assumed that all input activations have the same number of essential bits. Under this assumption, all neuron lanes complete processing their terms at the same time, allowing PRA to move on to the next activation pallet and the next set of weight bricks in one step. This allows PRA to reuse STR's approach for fetching the next pallet from the single-ported NM Judd et al. (2016a;b). Briefly, with unit stride the 256 activations would typically all be stored in the same NM row, or at most over two adjacent NM rows, and thus can be fetched in at most two cycles. When the stride is more than one, the activations will be spread over multiple rows and thus multiple cycles will be needed to fetch them all. Fortunately, fetching the next pallet can be overlapped with processing the current one.
Accordingly, if it takes NM_C cycles to access the next pallet from NM, while the current pallet requires P_C cycles to process, the next pallet will begin processing after max(NM_C, P_C) cycles. When NM_C > P_C, performance is lost waiting for NM.

Naveen Muralimanohar and Rajeev Balasubramonian. Cacti 6.0: A tool to understand large caches.

In practice it is highly unlikely that all activations will have the same number of essential bits. In general, each neuron lane, if left unrestricted, will advance at a different rate. In the worst case, each neuron lane may end up needing activations from a different activation brick, thus breaking PRA's ability to reuse the same weight brick. This is undesirable if not impractical, as it would require partitioning and replicating the SB so that 4K unrelated weights could be read per cycle, and it would also increase NM complexity and bandwidth.

Synopsys. Design Compiler. http://www.synopsys.com/Tools/Implementation/RTLSynthesis/DesignCompiler/Pages

Fortunately, these complexities can be avoided with pallet-level neuron lane synchronization where all neuron lanes "wait" (a neuron lane that has detected the end of its activation forces zero terms while waiting) for the one with the most essential bits to finish before proceeding with the next pallet. Under this approach it does not matter which bits are essential per activation, only how many exist. Since it is unlikely that most pallets will contain an activation with 16 essential terms, PRA will improve performance over DaDN. Section 5.1 will discuss finer-grain synchronization schemes that lead to even better performance. Before doing so, however, we detail PRA's design.

This appendix complements the analysis of Section 2 by estimating the potential of an idealized Pragmatic accelerator that can skip any term (product of a full precision weight and one input activation bit) while also improving execution time proportionally. Note the number of terms is considered before the Improved Oneffset Encoding described in Section 5.1 is applied.

To estimate PRA's potential, this section compares the number of terms that would be processed by various computing engines for the convolutional layers of recent CNNs (see Section 6) for the two aforementioned baseline activation representations.

Figure 4: Pragmatic Inner Product Unit.

{"section_index": "10", "section_name": "5.1 STRUCTURE AND PERFORMANCE AND AREA OPTIMIZATIONS", "section_text": "16-bit Fixed-Point Representation: The following computing engines are considered: 1) the baseline representative of DaDN using 16-bit fixed-point bit-parallel units Chen et al. (2014), 2) a hypothetical enhanced baseline ZN that can skip all zero valued activations, 3) Cnvlutin (CVN), a practical design that can skip zero value activations for all but the first layer Albericio et al. (2016), 4) STR that avoids EoP (see Table 2, Section 6) Judd et al. (2016b), 5) an ideal, software-transparent PRA, PRA-fp16, that processes only the essential activation bits, and 6) an ideal PRA, PRA-red, where software communicates in advance how many prefix and suffix bits can be zeroed out after each layer (see Section 5.1).

Figure 3b shows the Pragmatic tile architecture, which comprises an array of 16 x 16 = 256 pragmatic inner product units (PIPs).
PIP(i,j) processes an activation oneffset from the j-th window and its corresponding weight from the i-th filter. Specifically, all the PIPs along the i-th row receive the same weight brick belonging to the i-th filter, and all PIPs along the j-th column receive an oneffset from each activation of one activation brick belonging to the j-th window. The necessary activation oneffsets are read from NBin where they have been placed by the Dispatcher and the Oneffset generator units as Section 5.1 explains. Every cycle NBin sends 256 oneffsets, 16 per window lane. All the PIPs in a column receive the same 16 oneffsets, corresponding to the activations of a single window. When the tile starts to process a new activation pallet, 256 weights are read from SB through its 256 synapse lanes as in DaDN and are stored in the synapse registers (SR) of each PIP. The weights and oneffsets are then processed by the PIPs.

On average, STR reduces the number of terms to 53% compared to DaDN, while skipping just the zero valued activations could reduce them to 39% if ZN was practical and to 63% in practice with CVN. PRA-fp16 can ideally reduce the number of additions to just 10% on average, while with software provided precisions per layer, PRA-red reduces the number of additions further to 8% on average. The potential savings are robust across all CNNs, remaining above 87% for all DNNs with PRA-red.

Figure 5: 2-stage shifting. a) Modified PIP. b) Example: Processing three 9-bit weight and activation pairs with L = 2.

Figure 12a reports the number of terms normalized over DaDN, where each multiplication is accounted for using an equivalent number of terms, or equivalently additions: 16 for DaDN, ZN, and CVN, p for a layer using a precision of p bits for STR, and the number of essential activation bits for PRA-fp16 and for PRA-red. For example, for n = 10.001(2), the number of additions counted would be 16 for DaDN and CVN, 5 for STR as it could use a 5-bit fixed-point representation, and 2 for PRA-fp16 and PRA-red.

Figure 12: Convolutional layer computational demands. (a) 16-bit fixed-point. (b) 8-bit Quantized.

Figure 6: Per-column synchronization example: one extra synapse register and a 1x2 PIP array capable of processing two windows in parallel. The two numbers per brick show: the first from the top is the brick's index, (0, 1, 2) and (0', 1', 2') for the bricks of the first and second window. The second is the maximum count of oneffsets in its activations, (2, 4, 4) and (5, 2, 2) respectively. The numbers in the registers indicate the index of the corresponding bricks, i.e., a synapse register containing a K stores the weights corresponding to activation bricks with indexes K and K'. In cycles 3 to 8, thicker lines indicate registers being loaded or wires being used.
Dispatcher and Oneffset Generators: The Dispatcher reads 16 activation bricks from NM, as expected by the PRA tiles. The oneffset generator converts their activations on-the-fly to the oneffset representation, and broadcasts one oneffset per activation per cycle, for a total of 256 oneffsets, to all tiles. Fetching and assembling the 16 activation bricks from NM is akin to fetching words with a stride of S from a cache structure. Once the 16 activation bricks have been collected, 256 oneffset generators operate in parallel to locate and communicate the next oneffset per activation. A straightforward 16-bit leading one detector is sufficient. The latency of the oneffset generators and the dispatcher can be readily hidden as they can be pipelined as desired, overlapping them with processing in the PRA tiles.

8-bit Quantized Representation: Figure 12b shows the relative number of terms processed for: 1) a bit-parallel baseline, 2) an ideal, yet impractical, bit-parallel engine that skips all zero activations, and 3) PRA. In the interest of space, and since PRA subsumes STR and CVN, they are not considered. Pragmatic's benefits are significant even with an 8-bit quantized representation. On average, skipping all the zero valued activations would eliminate only 30% of the terms whereas Pragmatic would remove up to 71% of the terms.

9.2 ESSENTIAL BIT CONTENT DISTRIBUTIONS

Reducing Tile Area with 2-Stage Shifting: Any shift can be performed in two stages as two smaller shifts: a << K = a << (K' + C) = ((a << K') << C). Thus, to shift and add T weights by different offsets K_0, ..., K_T, we can decompose the offsets into sums with a common term C, e.g., K_i = K'_i + C. Accordingly, PIP processing can be rearranged using two-stage processing where the first stage uses a per-weight specific offset K'_i, and the second stage the offset C common across all weights. This arrangement can be used to reduce the width of the weight shifters and of the adder tree by sharing one common shifter after the adder tree as Figure 5a shows. A design parameter, L, defines the number of bits controlling the weight shifters so that the design can process oneffsets which differ by less than 2^L in a single cycle. This reduces the size of the weight shifters and reduces the size of the adder tree to support terms of 16 + 2^L - 1 bits only.

This section reports the distributions of the essential bit count for the activations processed per convolutional layer for the networks studied. Three distributions are shown per network for the activations for three different representations: 1) 16-bit fixed-point, 2) per layer fixed-point, and 3) 8-bit Quantized. A peak appears for values having four bits that are 1 for the quantized representation since the value zero is mapped to a non-zero index having four bits that are one (114). Note that, as in Section 9.1, the distributions are taken before Improved Oneffset Encoding.

Increasing Performance with Per-Column Neuron Lane Synchronization: The pallet neuron lane synchronization scheme of Section 5 is one of many possible synchronization schemes. Finer-grain neuron lane synchronization schemes are possible, leading to higher performance albeit at a cost.
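Before turning to those schemes, here is a minimal sketch of the two-stage shift decomposition described above (illustrative Python; function and variable names are ours):

```python
def two_stage_shift_add(weights, offs, L=2):
    """Split each offset K_i as K_i = K'_i + C so the per-weight shifters
    need only L bits (K'_i < 2**L) and a single shared shifter applies the
    common offset C after the adder tree. Assumes the offsets handled
    together differ by less than 2**L, as in the text."""
    C = min(offs)                                       # common second-stage offset
    small = [k - C for k in offs]                       # first-stage, per-weight offsets
    assert all(k < (1 << L) for k in small)
    tree = sum(w << k for w, k in zip(weights, small))  # narrow shifters + adder tree
    return tree << C                                    # one wide shifter after the tree

ws, ks = [3, 5, 7], [6, 7, 8]
assert two_stage_shift_add(ws, ks) == sum(w << k for w, k in zip(ws, ks))
```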
Among these finer-grain schemes, per-column neuron lane synchronization is an appealing one, offering a good balance of cost vs. performance. Here each PIP column operates independently but all the PIPs along the same column synchronize before moving to the next activation brick. Since the PIPs along the same column operate in sync, they all process one set of 16 weight bricks which can be read using the existing SB interface. However, given that different PIP columns now operate out-of-sync, the SB would be accessed more frequently and could become a bottleneck. There are two concerns: 1) different PIP columns may need to perform two independent SB reads while there is only one SB port and one common bus connecting the PIP array to the SB, and 2) there will be repeat accesses to SB that will increase SB energy, while the SB is already a major consumer of energy. These concerns are addressed as follows: 1) only one SB access can proceed per cycle, thus a PIP column may need to wait when collisions occur; 2) a set of registers, or synapse set registers (SSRs), are introduced in front of the SB, each holding a recently read set of 16 weight bricks. Since all PIP columns will eventually need the same set of weight bricks, temporarily buffering them avoids fetching them repeatedly from the SB. Once a weight set has been read into an SSR, it stays there until all PIP columns have copied it (a 4-bit down counter is sufficient for tracking how many PIP columns have yet to read the weight set). This policy guarantees that the SB is accessed the same number of times as in DaDN. However, stalls may occur as a PIP column has to be able to store a new set of weights into an SSR when it reads it from the SB. Figure 6 shows an example. Since each neuron lane advances independently, in the worst case, the dispatcher may need to fetch 16 independent activation bricks, each from a different pallet. The Dispatcher can buffer those pallets to avoid rereading NM, which would, at worst, require a 256 pallet buffer. However, given that the number of SSRs restricts how far apart the PIP columns can be, and since Section 6.2 shows that only one SSR is sufficient, a two pallet buffer in the dispatcher is all that is needed.

Figure 13: AlexNet: Per Layer '1'-bit Count Distributions.

Pragmatic Inner-Product Unit: Figure 4 shows the PIP internals. Every cycle, 16 weights are combined with their corresponding oneffsets. Each oneffset controls a shifter, effectively multiplying the weight with a power of two. The shifted weights are reduced via the adder tree. An AND gate per weight supports the injection of a null term when necessary. In the most straightforward design, the oneffsets use 4 bits, each shifter accepts a 16-bit weight and can shift it by up to 15 bit positions producing a 31-bit output. Finally, the adder tree accepts 31-bit inputs. Section 5.1 presents an enhanced design that requires narrower components, improving area and energy.
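To make the PIP's per-cycle operation concrete, here is a small functional sketch (our own Python illustration, not the RTL; names are ours):

```python
def pip_cycle(weights, pows, done):
    """One PIP cycle: each of the 16 weights is shifted by its activation's
    current oneffset and the shifted terms are reduced by the adder tree;
    an AND gate injects a null term for lanes whose activation has ended."""
    total = 0
    for w, pow_, fin in zip(weights, pows, done):
        term = 0 if fin else (w << pow_)   # AND gate: null term when done
        total += term                      # adder tree reduction
    return total

def pip_product(weights, activations, bits=16):
    """Process 16 (weight, activation) pairs bit-serially, one oneffset per
    activation per cycle, until the lane with the most essential bits ends."""
    offsets = [[i for i in range(bits) if (a >> i) & 1] for a in activations]
    acc, cycle = 0, 0
    while any(cycle < len(o) for o in offsets):
        pows = [o[cycle] if cycle < len(o) else 0 for o in offsets]
        done = [cycle >= len(o) for o in offsets]
        acc += pip_cycle(weights, pows, done)
        cycle += 1                         # cycle count = max essential-bit count
    return acc

ws, ns = list(range(1, 17)), list(range(16))
assert pip_product(ws, ns) == sum(w * n for w, n in zip(ws, ns))
```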
Figure 14: NiN: Per Layer '1'-bit Count Distributions.

Figure 15: GoogLeNet: Per Layer '1'-bit Count Distributions.

This improved generator reduces runs of adjacent oneffsets a...b into pairs of the form (a + 1, -b). Single oneffsets or gaps inside runs are represented by a positive or negative oneffset, respectively. For example, a neuron value of 11011 that would normally be encoded with oneffsets (4, 3, 1, 0) can instead be represented with (+5, -3, +2, -0) or, even more economically, with (+5, -2, -0). This is equivalent to a Radix-4 Booth encoding and will never emit more than x/2 + 1 oneffsets, where x is the neuron precision.

This encoding will never produce more oneffsets compared to the baseline encoding. However, because of the 2-stage shifting, it is possible that this encoding will increase the number of cycles needed. This will happen when the oneffset distribution among the bit groups being processed together during 2-stage shifting changes.

Finally, Booth encoding is conventionally used to reduce the number of cycles needed to perform multiplication in single shift-and-add multipliers, typically reserved for low cost, low performance designs, or to reduce the depth of bit-parallel multipliers. Pragmatic, with its 2-stage shifting and judicious lane synchronization, enables its practical use in a massively data-parallel accelerator, boosting performance beyond what is possible with bit-parallel units.

Figure 16: VGG_M: Per Layer '1'-bit Count Distributions.

The Role of Software: PRA enables an additional dimension upon which hardware and software can attempt to further boost performance and energy efficiency, that of controlling the essential activation value content. This work investigates a software guided approach where the precision requirements of each layer are used to zero out a number of prefix and suffix bits at the output of each layer. Using the profiling method of Judd et al. (2015), software communicates the precisions needed by each layer as meta-data. The hardware trims the output activations before writing them to NM using AND gates and precision derived bit masks.
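As an illustration of this trimming step (a minimal sketch under our own naming; the prefix/suffix widths come from the software-provided precisions):

```python
def trim_activation(x, prefix_bits, suffix_bits, bits=16):
    """Zero out the prefix_bits most-significant and suffix_bits
    least-significant bits of a bits-wide activation with a single AND
    mask, as the hardware does before writing outputs to NM."""
    mask = ((1 << bits) - 1) >> prefix_bits      # clear the prefix
    mask &= ~((1 << suffix_bits) - 1)            # clear the suffix
    return x & mask

# e.g. a layer whose outputs only need bits 2..12 of the 16-bit value:
assert trim_activation(0b1111_1111_1111_1111, prefix_bits=3, suffix_bits=2) \
       == 0b0001_1111_1111_1100
```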
Figure 17: VGG_S: Per Layer '1'-bit Count Distributions.

After reviewing the experimental methodology, the rest of this section is organized as follows: Sections 6.1 and 6.2 explore the PRA design space considering respectively single- and 2-stage shifting configurations, and column synchronization. Section 6.2 reports energy efficiency for the best configuration.

Figure 18: VGG_19: Per Layer '1'-bit Count Distributions.

Further Increasing Performance with Improved Oneffset Encoding: Since PIPs in Pragmatic can negate any input term, it is possible to enhance the oneffset generator to generate fewer oneffsets for neuron values containing runs of ones by allowing signed oneffsets Booth (1951).

The performance, area and energy efficiency of Pragmatic is compared against DaDN Chen et al. (2014) and Stripes Judd et al. (2016b), two state-of-the-art DNN accelerators. DaDN is the fastest bit-parallel accelerator proposed to date that processes all activations regardless of their values, and STR improves upon DaDN by exploiting the per layer precision requirements of DNNs. Cnvlutin improves upon DaDN by skipping most zero- or near-zero-valued activations Albericio et al. (2016); however, Stripes has been shown to outperform it.
Bk3F5Y9lx
[{"section_index": "0", "section_name": "EPITOMIC VARIATIONAL AUTOENCODER", "section_text": "layers counteract the perils of the over-pruning. However, this comes with the cost of substantial increase in the number of model parameters to be learned\nIn contrast, for any given model configuration, eVAE is able to avoid the over-pruning effect in the numbe of active units and outperform VAE. While both VAE and eVAE approach what appears to be a ceiling ir generative performance with large models for MNIST, the difference between VAE and eVAE is significant for all TFD models.\nSerena Yeung\nStanford University\nTable 1 also shows results for mVAE, the ablative version of eVAE where parameters are not shared. The number of deterministic units per layer in each mVAE component is computed so that the total number of parameters is comparable to eVAE. While mVAE and eVAE perform comparably on MNIST especially with larger models (reaching a limit in performance that VAE also nears), eVAE demonstrates an advantage or smaller models and when the data is more complex (TFD). These settings are in line with the intuition that parameter sharing is helpful in more challenging settings when each epitome can also benefit from general features learned across the training set.\nStanford University\nfeifeili}@cs.stanford.edu\nIn this paper, we propose epitomic variational autoencoder (eVAE), a probabilis. tic generative model of high dimensional data. eVAE is composed of a number of sparse variational autoencoders called 'epitome' such that each epitome par tially shares its encoder-decoder architecture with other epitomes in the composi- tion. We show that the proposed model greatly overcomes the common problem in variational autoencoders (VAE) of model over-pruning. We substantiate that eVAE is efficient in using its model capacity and generalizes better than VAE, by. presenting qualitative and quantitative results on MNIST and TFD datasets.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The recently proposed variational autoencoder (VAE) (Kingma & Welling, 2014) is an example of one such generative model. VAE pairs a top down generative model with a bottom up recognition network for amortized probabilistic inference. Both networks are jointly trained to maximize a variational lower bound on the data likelihood. A number of recent works use VAE as a modeling framework, including iterative conditional generation of images (Gregor et al., 2015) and conditional future frame prediction (Xue et al., 2016).\nA commonly known problem with the VAE lower bound is that it is known to self-prune or un- der utilize the model's capacity (Mackay, 2001). This can lead to poor generalization. A common approach to alleviate this problem is to resort to optimization schedules and regularization tech-. niques (Bowman et al., 2015; Kaae Sonderby et al., 2016) that trade-off two competing terms, latent. cost and data reconstruction, in the bound. Fig. 1 provides a quick insight into this problem of over-pruning and how commonly used regularization techniques may not be sufficient. Detailed. discussion is provided in 2.1.\nTable 1: Parzen log-densities in nats of VAE, mVAE and eVAE for increasing model parameters, trained or. MNIST and TFD with different dimensions D of latent variable z. For mVAE and eVAE models on MNIST the maximum over epitomes of size K = 3 and K = 4 is used, and on TFD epitomes of size K = 5 are used All epitomes are non-overlapping. 
Across each row, performance is shown as the number of encoder and decoder layers L increases for a fixed number of hidden units H in each layer, and as H increases. Numbers of active units are indicated in parentheses.

{"section_index": "2", "section_name": "4.4 COMPARISON WITH OTHER MODELS", "section_text": "In Table 2 we compare the generative performance of eVAE with other models, using Parzen log-density. VAE-, mVAE-, and eVAE- refer to models trained using the same architecture as Adversarial Autoencoders, for comparison. Encoders and decoders have L = 2 layers of H = 1000 deterministic units. D = 8 for MNIST, and D = 15 for TFD. VAE, mVAE, and eVAE refer to the best performing models over all architectures from Table 1. For MNIST, the VAE model is (L, H, D) = (3, 500, 8), mVAE is (3, 1000, 24) and eVAE is (3, 500, 48). For TFD, the VAE model is (3, 500, 15), mVAE is (3, 1000, 50), and eVAE is (3, 500, 25).

We observe that eVAE significantly improves over VAE and is competitive with several state-of-the-art models, notably Adversarial Autoencoders. Samples from eVAE on MNIST and TFD are shown in Fig. 7.

*Work done during an internship at Facebook AI Research

Anitha Kannan & Yann Dauphin

{akannan, ynd}@fb.com

                        H = 500                          H = 1000
                L = 1     L = 2     L = 3      L = 1     L = 2     L = 3
MNIST
D = 8   VAE     283 (8)   292 (8)   325 (8)    283 (8)   290 (8)   322 (6)
        mVAE    300 (8)   328 (8)   337 (8)    309 (8)   333 (8)   335 (8)
        eVAE    300 (8)   330 (8)   337 (8)    312 (8)   331 (8)   334 (8)
D = 24  VAE     213 (22)  273 (11)  305 (8)    219 (24)  270 (12)  311 (7)
        mVAE    309 (24)  330 (24)  336 (24)   313 (24)  333 (24)  338 (24)
        eVAE    311 (24)  331 (24)  336 (24)   317 (24)  332 (24)  336 (24)
D = 48  VAE     213 (24)  267 (13)  308 (8)    224 (24)  273 (12)  309 (8)
        mVAE    314 (48)  334 (48)  336 (48)   315 (48)  333 (48)  337 (48)
        eVAE    319 (48)  334 (48)  337 (48)   321 (48)  334 (48)  332 (48)
TFD
D = 15  VAE     -         2173 (15) 2180 (15)  -         2149 (15) 2116 (15)
        mVAE    -         2276 (15) 2314 (15)  -         2298 (15) 2343 (15)
        eVAE    -         2298 (15) 2353 (15)  -         2278 (15) 2367 (15)
D = 25  VAE     -         2067 (25) 2085 (25)  -         2037 (25) 2101 (25)
        mVAE    -         2287 (25) 2306 (25)  -         2332 (25) 2351 (25)
        eVAE    -         2309 (25) 2371 (25)  -         2297 (25) 2371 (25)
D = 50  VAE     -         1920 (50) 2062 (29)  -         1886 (50) 2066 (30)
        mVAE    -         2253 (50) 2327 (50)  -         2280 (50) 2358 (50)
        eVAE    -         2314 (50) 2359 (50)  -         2302 (50) 2365 (50)

{"section_index": "3", "section_name": "1 INTRODUCTION", "section_text": "Unsupervised learning holds the promise of learning the inherent structure in data so as to enable many future tasks including generation, prediction and visualization. Generative modeling is an approach to unsupervised learning wherein an explicit stochastic generative model of data is defined,
such that independent draws from this model are likely to produce the original data distribution, while the learned latent structure itself is useful in prediction, classification and visualization tasks.

The recently proposed variational autoencoder (VAE) (Kingma & Welling, 2014) is an example of one such generative model. VAE pairs a top-down generative model with a bottom-up recognition network for amortized probabilistic inference. Both networks are jointly trained to maximize a variational lower bound on the data likelihood. A number of recent works use VAE as a modeling framework, including iterative conditional generation of images (Gregor et al., 2015) and conditional future frame prediction (Xue et al., 2016).

A commonly known problem with the VAE lower bound is that it is known to self-prune or under-utilize the model's capacity (Mackay, 2001). This can lead to poor generalization. A common approach to alleviate this problem is to resort to optimization schedules and regularization techniques (Bowman et al., 2015; Kaae Sonderby et al., 2016) that trade off two competing terms, latent cost and data reconstruction, in the bound. Fig. 1 provides a quick insight into this problem of over-pruning and how commonly used regularization techniques may not be sufficient. Detailed discussion is provided in § 2.1.

In this paper, we take a model-based approach to directly address this problem. We present an extension of variational autoencoders called epitomic variational autoencoder (Epitomic VAE, or eVAE, for short) that automatically learns to utilize its model capacity more effectively, leading to better generalization. Consider the task of learning a D-dimensional representation for the examples in a given dataset. The motivation for our model stems from the hypothesis that a single example in the dataset can be sufficiently embedded in a smaller K-dimensional (K < D) subspace of D. However, different data points may need different subspaces, hence the need for D. Sparse coding methods also exploit a similar hypothesis. Epitomic VAE exploits sparsity using an additional categorical latent variable in the encoder-decoder architecture of the VAE. Each value of the variable activates only a contiguous subset of latent stochastic variables to generate an observation. This enables learning multiple shared subspaces such that each subspace specializes, and also increases the use of model capacity (Fig. 4), enabling better representation.

The rest of the paper is organized as follows. We first describe variational autoencoders and mathematically show the model pruning effect in § 2. We then present our epitomic VAE model in § 3 that overcomes these shortcomings. Experiments showing qualitative and quantitative results are presented in § 4. We finally provide more general context of our work in the related work in § 5, and conclude with discussions.

p_theta(x|z) = N(x; f_1(z), exp(f_2(z)))

Given a dataset X of T i.i.d. samples, the model is learned such that it maximizes the likelihood of the parameters to have generated the data, p(X|theta). This maximization requires marginalizing the unobserved z. However, computing p(z|x) is intractable due to dependencies induced between the z_i when conditioned on x.

Variational autoencoders, as the name suggests, use variational inference to approximate the exact posterior with a surrogate parameterized distribution. However, instead of having separate parameters for the posterior distribution of each observation, VAE amortizes the cost by learning a neural network with parameters phi that outputs the posterior distribution of the form q_phi(z|x) = prod_i q(z_i|x). This results in the lower bound given by

log p_theta(X) >= sum_{t=1}^{T} ( E_{q_phi(z|x^(t))}[log p(x^(t)|z)] - KL(q_phi(z|x^(t)) || p(z)) )

Figure 7: eVAE samples for MNIST (left) and TFD (right).

C_vae = sum_{t=1}^{T} ( -E_{q_phi(z|x^(t))}[log p(x^(t)|z)] + sum_{i=1}^{D} KL(q_phi(z_i|x^(t)) || p(z_i)) )

{"section_index": "4", "section_name": "5 RELATED WORK", "section_text": "A number of applications use variational autoencoders as a building block. In Gregor et al. (2015), a generative model for images is proposed in which the generator of the VAE is an attention-based recurrent model that is conditioned on the canvas drawn so far. Eslami et al. (2016) proposes a VAE-based recurrent generative model that describes images as formed by sequentially choosing an object to draw and adding it to a canvas that is updated over time. In Kulkarni et al. (2015), VAEs are used for rendering 3D objects. Conditional variants of VAE are also used for attribute specific image generation (Yan et al., 2015) and future frame synthesis (Xue et al., 2016). All these applications suffer from the problem of model over-pruning and hence have adopted strategies that take away the clean mathematical formulation of VAE. We have discussed these in § 2.1.

Of particular interest is the KL term. Since the KL term is the sum of independent contributions from each dimension d of D, it provides unduly freedom for the model in how it minimizes this term. In particular, the model needs only to ensure that the overall KL term is minimized, on average, and not component-wise. The easiest way for the model to do this is to have a large number of components that satisfy the KL term effectively, by turning off the units so that the posterior for those units becomes the same as the prior¹. This effect is quite pronounced in the early iterations of training: the model for log p(x|z) is quite impoverished and hence the easiest way to improve the bound is by turning off the KL terms. However, once the units have become inactive, it is almost impossible for them to resurrect, and hence the full capacity of the model is not utilized.

¹Since log variance is modeled using the neural network, turning it off will lead to a variance of 1.
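To see this pruning mechanism numerically, the per-dimension KL of the diagonal Gaussian posterior against the standard normal prior has a closed form; a unit that sets its posterior to the prior (mu = 0, sigma = 1) contributes exactly zero. A small illustrative sketch, not the paper's code:

```python
import numpy as np

def kl_per_dim(mu, log_var):
    """Closed-form KL(q(z_i|x) || p(z_i)) for a diagonal Gaussian posterior
    against N(0, 1), one value per latent dimension:
    0.5 * (mu^2 + sigma^2 - log sigma^2 - 1)."""
    return 0.5 * (mu**2 + np.exp(log_var) - log_var - 1.0)

mu      = np.array([1.3, -0.7, 0.0])   # the third unit is "turned off":
log_var = np.array([-2.0, -1.0, 0.0])  # mu = 0, sigma = 1 => posterior = prior
print(kl_per_dim(mu, log_var))         # last entry is exactly 0: no KL cost
```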
The choice of the name Epitomic VAE comes from the fact that multiple miniature models with shared parameters are trained simultaneously.

Method            MNIST (10K)    TFD (10K)
DBN               138 ± 2        1909 ± 66
Deep CAE          121 ± 1        2110 ± 50
Deep GSN          214 ± 1        1890 ± 29
GAN               225 ± 2        2057 ± 26
GMMN + AE         282 ± 2        2204 ± 20
Adversarial AE    340 ± 2        2252 ± 16
VAE-              290 ± 2        2149 ± 23
mVAE-             333 ± 2        2298 ± 23
eVAE-             331 ± 2        2278 ± 26
VAE               325 ± 2        2180 ± 20
mVAE              338 ± 2        2358 ± 20
eVAE              337 ± 2        2371 ± 20

p(z) = N(z; 0, I)

Table 2: Parzen log-densities in nats on MNIST and TFD. VAE-, mVAE-, and eVAE- refer to models trained using the same architecture as Adversarial Autoencoders, for comparison. VAE, mVAE, and eVAE refer to the best performing models over all architectures from Table 1.

VAE is trained with standard backpropagation using minibatch gradient descent to minimize the negative of the lower bound

A complementary approach to the problem of model pruning in VAE was proposed in Burda et al. (2015); the idea is to improve the variational bound by using multiple weighted posterior samples. Epitomic VAE provides improved latent capacity even when only a single sample is drawn from the posterior.

Related is the research in unsupervised sparse overcomplete representations, especially with group sparsity constraints, c.f. (Gregor et al., 2011; Jenatton et al., 2011). In the epitomic VAE, we have similar motivations that enable learning better generative models of data."}, {"section_index": "5", "section_name": "6 CONCLUSION", "section_text": "Figure 1: Sorted activity level of latent units and corresponding generations on MNIST, for a 50-d VAE with a hidden layer of 500 units. Shown for varying values of the KL weight λ. When λ = 1, only 30 units are active. As λ is decreased, more units are active; however generation does not improve, since the model uses the capacity to model increasingly well only regions of the posterior manifold near training samples (see reconstructions in Fig. 8).

This paper introduces Epitomic VAE, an extension of variational autoencoders, to address the problem of model over-pruning, which has limited the generation capability of VAEs in high-dimensional spaces. Based on the intuition that subconcepts can be modeled with fewer dimensions than the full latent space, epitomic VAE models the latent space as multiple shared subspaces that have learned specializations. We show how this model addresses the model over-pruning problem in a principled manner, and present qualitative and quantitative analysis of how eVAE enables increased utilization of the model capacity to model greater data variability. We believe that modeling the latent space as multiple structured subspaces is a promising direction of work, and allows for increased effective capacity that has potential to be combined with methods for increasing the flexibility of posterior inference."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "Figure 2: Only active units contribute to generation, whereas units that have "died" have no effect. Shown for a 50-d VAE with λ = 1.

We thank the reviewers for constructive comments. Thanks to helpful discussions with Marc'Aurelio Ranzato, Joost van Amersfoort and Ross Girshick. We also borrowed the term 'epitome' from an earlier work of Jojic et al. (2003).
training: the model for log p(x|z) is quite impoverished and hence the easiest way to improve the bound is by turning off the KL terms. However, once the units have become inactive, it is almost impossible for them to resurrect, and hence the full capacity of the model is not utilized."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "A quantity that is useful in understanding this effect is the activity level of a unit. Following Burda et al. (2015), we define a unit u to be used, or "active", if A_u = Cov_x(E_{u~q(u|x)}[u]) > 0.02.

Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. ICLR, 2015.

A commonly used approach to overcome this problem is to use a trade-off between the two terms, using a parameter λ, so that the cost is

C = −E_{q_φ(z|x)}[log p(x|z)] + λ Σ_{i=1}^{D} KL(q_φ(z_i|x) || p(z_i))

Fig. 1 shows the effect of λ on unit activity and generation, with λ = 1 being the correct objective to optimize. While tuning down λ increases the number of active units, samples generated from the model are still poor. Fig. 2 shows generation using all units, active units only, and dead units only, for λ = 1. The model spends its capacity in ensuring that reconstruction of the training set is optimized (reconstruction visualizations are shown in § 8.1), at the cost of generalization. This has led to more sophisticated schemes, such as using an annealed optimization schedule for λ (Bowman et al., 2015; Kaae Sonderby et al., 2016) or enforcing a minimum KL contribution from subsets of the latent units (Kingma et al., 2016).

In this paper, we present a model-based approach called "epitomic variational autoencoder" to address the problem of over-pruning."}, {"section_index": "8", "section_name": "3 MODEL", "section_text": "D.P. Kingma and M. Welling. Auto-encoding variational bayes. ICLR, 2014.

We propose epitomic variational autoencoder (eVAE) to overcome the shortcomings of VAE by enabling more efficient use of model capacity to gain better generalization. We base this on the observation that while we may need a D-dimensional representation to accurately represent every example in a dataset, each individual example can be represented with a smaller K-dimensional subspace. As an example, consider MNIST with its variability in terms of digits, strokes and thick-

Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten digits. 1998.

Methods to increase the flexibility of posterior inference are proposed in (Salimans et al., 2015; Rezende & Mohamed, 2016; Kingma et al., 2016). In Rezende & Mohamed (2016), posterior approximation is constructed by transforming a simple initial density into a complex one with a sequence of invertible transformations. In a similar vein, Kingma et al. (2016) augments the flexibility of the posterior through autoregression over projections of stochastic latent variables. However, the problem of over-pruning still persists: for instance, Kingma et al. (2016) enforces a minimum information constraint to ensure that all units are used.
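As a concrete illustration of the activity statistic A_u = Cov_x(E_{u~q(u|x)}[u]) defined above, the following minimal sketch estimates it from encoder means over a dataset; the function name and array layout are our own assumptions.

    import numpy as np

    def active_units(posterior_means, threshold=0.02):
        # posterior_means: array of shape (num_examples, D) holding
        # E_{u ~ q(u|x)}[u] for each example x, i.e. the encoder means.
        # A_u is the variance across examples of the posterior mean of unit u;
        # units whose means barely move with x are considered "dead".
        a_u = posterior_means.var(axis=0)
        return a_u, int((a_u > threshold).sum())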
S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geoffrey E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. CoRR, abs/1603.08575, 2016.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

D.J.C. Mackay. Local minima, symmetry-breaking, and model pruning in variational free energy minimization. 2001.

Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2016.

Salah Rifai, Yoshua Bengio, Yann Dauphin, and Pascal Vincent. A generative process for sampling contractive auto-encoders. arXiv preprint arXiv:1206.6434, 2012.

Josh M Susskind, Adam K Anderson, and Geoffrey E Hinton. The Toronto face database. Department of Computer Science, University of Toronto, Toronto, ON, Canada, Tech. Rep, 3, 2010.

Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation from visual attributes. CoRR, abs/1512.00570, 2015.

ness of ink, to name a few. While the overall D is large, it is likely that only a few K dimensions of D are needed to capture the variability in strokes of some digits (see Fig. 3).

Epitomic VAE can be viewed as a variational autoencoder with latent stochastic dimension D that is composed of a number of sparse variational autoencoders called epitomes, such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. In this paper, we assume simple structured sparsity for each epitome: in particular, only K contiguous dimensions of D are active².

The generative process can be described as follows: a D-dimensional stochastic variable z is drawn from a standard multivariate Gaussian p(z) = N(z; 0, I). In tandem, an epitome is implicitly chosen through an epitome selector variable y, which has a uniform prior over possible epitomes. The N-dimensional observation x is then drawn from a Gaussian distribution:

m_y enforces the epitome constraint: it is also a D-dimensional vector that is zero everywhere except in the active dimensions of the epitome. ⊙ is element-wise multiplication between the two operands. Thus, m_y masks the dimensions of z other than those dictated by the choice of y. Fig. 3 illustrates this for an 8-d z with epitome size K = 2, so that there are four possible epitomes (the model also allows for overlapping epitomes, but this is not shown for illustration purposes). Epitome structure is defined using size K and stride s, where s = 1 corresponds to full overlap in D dimensions³. Our model generalizes the VAE and collapses to a VAE when D = K = s.

²The model also allows for incorporating other forms of structured sparsity.

³The strided epitome structure allows for learning O(D) specialized subspaces that, when sampled during generation, can each produce good samples. In contrast, if only a simple sparsity prior is introduced over arbitrary subsets (e.g., with Bernoulli latent units to specify if a unit is active for a particular example), it can lead to poor generation results, which we confirmed empirically but do not report. The reason for this is as follows: due to an exponential number of potential combinations of latent units, sampling a subset from the prior during generation cannot be straightforwardly guaranteed to be a good configuration for a subconcept in the data, and often leads to uninterpretable samples.

Tianfan Xue, Jiajun Wu, Katherine L. Bouman, and William T. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. arXiv preprint arXiv:1607.02586, 2016.

Figure 3: Left: Illustration of an epitomic VAE with dimension D = 8, epitome size K = 2 and stride s = 2. In this depiction, the second epitome is active. Right: Learned manifolds on MNIST for 4 different epitomes in a 20-d eVAE with size K = 2 and stride s = 1. We observe that each epitome specializes on a coherent subset of examples.

p_θ(x|y, z) = N(x; f_1(m_y ⊙ z), exp(f_2(m_y ⊙ z)))

f_1(·) and f_2(·) define non-linear deterministic transformations of z modeled using neural networks.
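A minimal sketch of the mask construction in eq. 7 follows, assuming the contiguous strided layout described above; the function and variable names are illustrative assumptions.

    import numpy as np

    def epitome_masks(D, K, stride):
        # Binary masks m_y: epitome y activates K contiguous latent
        # dimensions starting at y * stride; D = K = stride recovers a plain VAE.
        starts = list(range(0, D - K + 1, stride))
        masks = np.zeros((len(starts), D))
        for y, s in enumerate(starts):
            masks[y, s:s + K] = 1.0
        return masks

    masks = epitome_masks(D=8, K=2, stride=2)  # the four-epitome example of Fig. 3
    z = np.random.randn(8)
    y = 1                                      # epitome selector variable
    z_masked = masks[y] * z                    # m_y ⊙ z, the input to f_1 and f_2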
Note that the model does not snip off the K dimensions corresponding to an epitome, but instead deactivates the D − K dimensions that are not part of the chosen epitome. While the same deterministic functions f_1 and f_2 are used for any choice of epitome, the functions can still specialize due to the sparsity of their inputs. Neighboring epitomes will have more overlap than non-overlapping ones, which manifests itself in the representation space; an intrinsic ordering in the variability is learned."}, {"section_index": "9", "section_name": "3.1 OVERCOMING OVER-PRUNING", "section_text": "Following Kingma & Welling (2014), we use a recognition network q(z, y|x) for approximate posterior inference, with the functional form

q(z, y|x) = q(y|x) q(z|y, x) = q(y|x) N(z; m_y ⊙ μ, exp(m_y ⊙ σ))

where μ = h_1(x) and σ = h_2(x) are neural networks that map x to D-dimensional space.

We use a similar masking operation to deactivate units, as decided by the epitome y. Unlike the generative model (eq. 7), the masking operation defined by y operates directly on outputs of the recognition network that characterizes the parameters of q(z|y, x).

As in VAE, we can derive the lower bound on the log probability of a dataset, and hence the cost function (negative bound) is

C_evae = Σ_{t=1}^{T} ( −E_{q_φ(y,z|x^(t))}[log p(x^(t)|y, z)] + KL(q_φ(y|x^(t)) || p_θ(y)) + Σ_y q_φ(y|x^(t)) KL(q_φ(z|y, x^(t)) || p_θ(z)) )

Figure 8: Reconstructions for a 50-d VAE with KL weight λ = 1, 0.5, and 0.2. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions.

The epitomic VAE departs from the VAE in how the contribution from the KL term is constrained. Let us consider the third term in eq. 10, substituting in eq. 9:
Σ_{t=1}^{T} Σ_y q_φ(y|x^(t)) KL( q_φ(z|y, x^(t)) || p_θ(z) )
  = Σ_{t=1}^{T} Σ_y q_φ(y|x^(t)) KL( N(z; m_y ⊙ μ^(t), exp(m_y ⊙ σ^(t))) || N(z; 0, I) )
  = Σ_{t=1}^{T} Σ_y q_φ(y|x^(t)) Σ_{d=1}^{D} 1[m_{d,y} = 1] KL( N(z_d; μ_d^(t), exp(σ_d^(t))) || N(0, 1) )

where 1[·] is an indicator variable that evaluates to 1 if and only if its operand is true.

For a training example x^(t) and for a fixed y (and hence the corresponding epitome), the number of KL terms that will contribute to the bound is exactly K. The dimensions of z that are not part of the corresponding epitome will have zero KL, because their posterior parameters are masked to be a unit Gaussian, the same as the prior.

This is quite in contrast to how VAE optimizes C_vae (§ 2.1). For C_vae to have a small contribution from the KL term of a particular z_d, it has to infer that unit to have zero mean and unit variance for many examples in the training set. In practice, this results in VAE completely deactivating units, leading to many dead units. Epitomic VAE chooses the epitome based on x^(t) and ensures that the dimensions that are not useful in explaining x^(t) are ignored in C_evae. This means that the unit is still active, but by design, only a fraction of examples in the training set contributes a possible non-zero value to z_d's KL term in C_evae. This added flexibility gives the model the freedom to use more total units without deactivating them, while optimizing the bound. With these characteristics, during training, the data points will naturally group themselves to different epitomes, leading to a more balanced use of z.

In Fig. 4 we compare the activity levels of VAE, dropout VAE and our model. We see that, compared with VAE, our model is able to better use the model capacity. In the same figure, we also compare with adding dropout to the latent variable z of the VAE (Dropout VAE). While this increases the number of active units, it generalizes poorly, as it uses the dropout layers to merely replicate representation, in contrast to eVAE. See Fig. 5 along with the explanation in § 4.1, where we compare generation results for all three models.

We visualize VAE reconstructions as the KL term weight λ is tuned down to keep latent units active. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. While reconstruction performance is good, generation is poor (Fig. 1). This illustrates that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model well the full posterior manifold."}, {"section_index": "10", "section_name": "8.2 EFFECT OF INCREASING LATENT DIMENSION ON RECONSTRUCTION", "section_text": "In § 4.1, Fig. 5 shows the effect of increasing latent dimension on generation for VAE, Dropout VAE, and eVAE models. Here we show the effect of the same factor on reconstruction quality for the models. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. As the dimension of the latent variable z increases from 2-d to 20-d, reconstruction becomes very sharp (the best model), but generation degrades (Fig. 5). Dropout VAE has poorer reconstruction but still blurred generation, while eVAE is able to achieve both good reconstruction and generation.
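To ground the masked KL decomposition shown above, here is a minimal sketch of the third term of eq. 10 for one example and one epitome; the helper name and argument layout are our own assumptions.

    import numpy as np

    def evae_kl(mu, log_var, mask, q_y):
        # Only the K dimensions active under epitome y (binary mask m_y)
        # contribute a KL term; the masked-out dimensions match the prior
        # exactly and contribute zero. q_y is the posterior weight q(y|x).
        kl_per_dim = 0.5 * (np.exp(log_var) + mu ** 2 - 1.0 - log_var)
        return q_y * float(np.sum(mask * kl_per_dim))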
Figure 4: Adding dropout to a VAE (here, dropout rate 0.5 is shown) can prevent the model from pruning units, shown for MNIST. However, in contrast to eVAE, it uses the additional units to encode redundancy, not additional information, and therefore does not address the problem. Generation results are shown in Fig. 5."}, {"section_index": "11", "section_name": "3.2 TRAINING", "section_text": "The generative model and the recognition network are trained simultaneously, by minimizing C_evae in eq. 10.

For the stochastic continuous variable z, we use the reparameterization trick as in VAE. The trick involves reparameterizing the recognition distribution in terms of auxiliary variables with fixed distributions. This allows efficient sampling from the posterior distribution, as samples are deterministic functions of the inputs and auxiliary variables.

For the discrete variable y, we cannot use the reparameterization trick. We therefore approximate q(y|x) by a point estimate y*, so that q(y|x) = δ(y = y*), where δ evaluates to 1 only if y = y*, and the best y* = arg min C_evae. We also explored modeling q(y|x) = Cat(h(x)) as a discrete distribution, with h being a neural network. In this case, the backward pass requires either using REINFORCE or passing through gradients for the categorical sampler. In our experiments, we found that these approaches did not work well, especially when the number of possible values of y becomes large. We leave this as future work to explore.

Figure 9: Reconstructions from VAE, Dropout VAE, and eVAE models for different dimensions of latent variable z. Across each row are 2-d, 5-d, 10-d, and 20-d models. The top half of each figure are the original digits, and the bottom half are the corresponding reconstructions. The eVAE models multiple shared subspaces by maintaining 2-d (overlapping) epitomes as the latent dimension is increased. eVAE is the only model that achieves both good reconstruction and generation.
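As a concrete illustration of the training procedure of § 3.2 (Algorithm 1 below gives the full loop), the two per-example pieces might look like the following sketch; the function names are hypothetical, and the candidate losses are assumed to be computed with the bound C_evae.

    import numpy as np

    def reparameterized_sample(mu, log_var, mask):
        # z = mu + sigma * eps under the chosen epitome's mask, so sampling
        # is a deterministic function of inputs and auxiliary noise eps.
        eps = np.random.randn(*mu.shape)
        return mask * (mu + np.exp(0.5 * log_var) * eps)

    def assign_epitome(losses_per_epitome):
        # q(y|x) is approximated by a point estimate: the epitome whose
        # candidate bound C_evae is smallest for this example.
        return int(np.argmin(losses_per_epitome))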
Algorithm 1 Learning Epitomic VAE
1: Initialize parameters (θ, φ)
2: for until convergence of parameters (θ, φ) do
3:    Assign each x to its best y* = arg min_y C_evae
4:    Randomize and then partition data into minibatches, with each minibatch having a proportionate number of examples ∀ y
5:    for k ∈ numbatches do
6:        Update model parameters using the k-th minibatch, consisting of (x, y) pairs
7:    end for
8: end for"}, {"section_index": "12", "section_name": "4 EXPERIMENTS", "section_text": "We present experimental results on two datasets, MNIST (LeCun et al., 1998) and the Toronto Face Database (TFD) (Susskind et al., 2010). We show generation results that illustrate eVAE's ability to better utilize model capacity for modeling data variability, and then evaluate the effect of epitome choice and model complexity. Finally we present quantitative comparison with other models and qualitative samples from eVAE. We emphasize that in all experiments, we keep the weight of the KL term λ = 1, to evaluate performance under optimizing the true derived lower bound, without introducing an additional hyperparameter to tune.

The recognition network first computes μ and σ. It is then combined with the optimal y* for each example, to arrive at the final posterior. The model is trained using a simple algorithm outlined in Algo. 1. Backpropagation with minibatch updates is used, with each minibatch constructed to be balanced with respect to epitome assignment."}, {"section_index": "13", "section_name": "8.3 EVALUATION METRIC FOR GENERATION", "section_text": "We use standard splits for both MNIST and TFD. In our experiments, the encoder and decoder are fully connected networks, and we show results for different depths and numbers of units per layer. ReLU non-linearities are used, and models are trained using the Adam update rule (Kingma & Ba, 2014) for 200 epochs (MNIST) and 250 epochs (TFD), with base learning rate 0.001.

Table 3 shows the log-likelihood bound and log-density for VAE and eVAE models as the dimension D of latent variable z is increased. For VAE, as D increases, the likelihood bound improves, but the log-density decreases. Referring to the corresponding generation samples in Fig. 11, we see that sample quality in fact decreases, counter to the likelihood bound but consistent with log-density. The reported VAE bounds and sample quality also match Figs. 2 and 5 in Kingma & Welling (2014). On the other hand, eVAE log-density first decreases and then improves with larger D.
We see that this is also consistent with Fig. 11, where eVAE samples for D = 8 are the most interpretable overall, and D = 48 improves over D = 24 but still has some degenerate or washed-out digits. (Note that these models are consistent with Kingma & Welling (2014) but are not the best-performing models reported in our experiments.) Since our work is motivated by the generation task, we therefore use log-density as the evaluation metric in our experiments.

Intuitively, the reason why VAE improves the likelihood bound while generation quality still decreases can be seen in the breakdown of the bound into the reconstruction and KL terms (Table 3 and Fig. 10). The improvement of the bound is due to a large improvement in reconstruction, but the KL becomes significantly worse. This has a negative effect on generation, since the KL term is closely related to generation. On the other hand, eVAE reconstruction improves to a lesser extent, but the KL is also not as strongly affected, so generation ability remains stronger overall. As a result of this, simply tuning the KL weight λ in the training objective is insufficient to improve VAE generation, as shown in Fig. 1 in the main paper.

Figure 5: Generations from VAE, Dropout VAE, and eVAE models for different dimensions of latent variable z. Across each row are 2-d, 5-d, 10-d, and 20-d models. VAE generation quality (1st row) degrades as latent dimension increases, and it is unable to effectively use added capacity to model greater variability. Adding dropout to the VAE (2nd row) fails to solve the problem, since additional units are used to encode redundancy, not additional information. eVAE (3rd row) overcomes the problem by modeling multiple shared subspaces; here 2-d (overlapping) epitomes are maintained as the latent dimension is increased. Learned epitome manifolds from the 20-d model are shown in Fig. 3. Boxed digits highlight the difference in variability that the VAE vs. eVAE model is able to achieve.

Table 3: Likelihood bound and log-density for VAE and eVAE as dimension D of latent variable z is increased. The encoder and decoder for all models consist of a single deterministic layer with 500 units. eVAE models have epitomes of size K = 4 for D = 8, and K = 8 for D = 24 and D = 48. The breakdown of the likelihood bound into reconstruction term and KL term is also shown."}, {"section_index": "14", "section_name": "4.1 OVERCOMING OVER-PRUNING", "section_text": "We first qualitatively illustrate the ability of eVAE to overcome over-pruning and utilize latent capacity to model greater variability in data. Fig. 5 compares generation results for VAE, Dropout VAE, and eVAE for different dimensions D of latent variable z. With D = 2, VAE generates realistic digits but suffers from lack of diversity. When D is increased to 5, the generation exhibits some greater variability but also begins to degrade in quality. As D is further increased to 10 and 20, the degradation continues. As explained in Sec. 2.1, this is due to VAE's propensity to use only a portion of its latent units for modeling the training data and the rest to minimize the KL term. The under-utilization of model capacity means that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model the space of possible generations. The effect of this is good reconstruction (examples are shown in Fig. 9) but poor generation samples.
Adding dropout to the latent variable z of the VAE (row 2 of Fig. 5) encourages increased usage of model capacity, as shown in Fig. 4 and the discussion in Sec. 2. However, due to the stochastic nature of dropout, the model is forced to use the additional capacity to encode redundancy in the representation. It therefore does not achieve the desired effect of encoding additional data variability, and furthermore leads to blurred samples due to the redundant encoding. Epitomic VAE addresses the crux of the problem by learning multiple specialized subspaces. Since the effective dimension of any example is still small, eVAE is able to model each subspace well, while encoding variability through multiple possibly shared subspaces. This enables the model to overcome the over-pruning from which VAE suffered. Fig. 5 shows that as the dimension D of z is increased

There have been multiple approaches for evaluation of variational autoencoders, in particular the log-likelihood lower bound and log-density (using the Parzen window estimator, Rifai et al. (2012)). Here we show that for the generation task, log-density is a more appropriate measure than the log-likelihood lower bound. Models are trained on binarized MNIST, to be consistent with literature reporting likelihood bounds. The encoder and decoder for all models consist of a single deterministic layer with 500 units.

                 Rec. term    KL term    Likelihood bound    Log-density
VAE    D = 8     -89.4        -16.6      -106.0              278
       D = 24    -61.1        -29.3      -90.4               152
       D = 48    -59.1        -30.3      -89.4               151
eVAE   D = 8     -110.1       -9.6       -119.7              298
       D = 24    -84.2        -15.7      -99.9               274
       D = 48    -82.8        -14.2      -97.0               284

Figure 10: Likelihood bound for VAE and eVAE as D increases (shown as NLL). VAE improvement of the bound is due to significant reduction of reconstruction error, but at high cost of KL, which is closely related to generation. eVAE improves reconstruction more moderately, but also maintains lower KL, and has stronger generation overall.

while maintaining epitomes of size K = 2, eVAE is able to model greater variability in the data. Highlighted digits in the 20-d eVAE show multiple styles, such as crossed versus un-crossed 7, and pointed, round, thick, and thin 4s. Additional visualizations of the variability in the learned 2-d manifolds are shown in Fig. 3. In contrast, the 2-d VAE generates similar-looking digits, and is unable to increase variability and maintain sample quality as the latent dimension is increased."}, {"section_index": "15", "section_name": "4.2 CHOICE OF EPITOME SIZE", "section_text": "We next investigate how the choice of epitome size, K, affects generation performance. We evaluate the generative models quantitatively through their samples by measuring the log-density with a Parzen window estimator (Rifai et al., 2012). Fig. 6 shows the Parzen log-density for different choices of epitome size on MNIST, with encoder and decoder consisting of a single deterministic layer of 500 units. Epitomes are non-overlapping, and the results are grouped by total dimension D of the latent variable z. For comparison, we also show the log-density for VAE models with the same dimension D, and for mixture VAE (mVAE), an ablative version of eVAE where parameters are not shared. mVAE can also be seen as a mixture of independent VAEs trained in the same manner as eVAE. The number of deterministic units in each mVAE component is computed so that the total number of parameters is comparable to eVAE.
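For reference, a minimal sketch of the Parzen window log-density estimator used above, assuming an isotropic Gaussian kernel of width sigma fit on generated samples; the function name and shapes are our own assumptions.

    import numpy as np
    from scipy.special import logsumexp

    def parzen_log_density(test_x, samples, sigma):
        # Mean log-density of test points under a Gaussian Parzen window
        # centred on generated samples (Rifai et al., 2012).
        n, d = samples.shape
        diffs = test_x[:, None, :] - samples[None, :, :]        # (T, n, d)
        exponents = -0.5 * np.sum(diffs ** 2, axis=2) / sigma ** 2
        log_norm = np.log(n) + 0.5 * d * np.log(2 * np.pi * sigma ** 2)
        return float(np.mean(logsumexp(exponents, axis=1) - log_norm))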
As we increase D, the performance of VAE drops significantly, due to over-pruning. In fact, the numbers of active units for VAE are 8, 22 and 24, respectively, for D values of 8, 24 and 48. In contrast, eVAE performance increases as we increase D, with an epitome size K that is significantly smaller than D. Table 1 provides more comparisons. This confirms the advantage of using eVAE to avoid over-pruning and effectively capture the data distribution.

Figure 6: Epitome size vs. Parzen log-density (nats) on MNIST, grouped by different dimensions D of latent variable z. VAE performance for equivalent D is shown for comparison, as well as mVAE (ablative version of eVAE without parameter sharing). For each D, the optimal epitome size is significantly smaller than D.

Figure 11: Generation samples for VAE and eVAE as dimension D of latent variable z is increased. VAE sample quality decreases, which is consistent with log-density but not the likelihood bound."}, {"section_index": "16", "section_name": "4.3 INCREASING COMPLEXITY OF ENCODER AND DECODER", "section_text": "Here, we would like to understand the role of encoder and decoder architectures on over-pruning and generative performance. We control model complexity through the number of layers L of deterministic hidden units, and the number of hidden units H in each deterministic layer.

Table 1 shows the Parzen log-densities of VAE, mVAE and eVAE models trained on MNIST and TFD with different latent dimension D. For mVAE and eVAE models on MNIST, the maximum over epitomes of size K = 3 and K = 4 is used, and on TFD epitomes of size K = 5 are used. All epitomes are non-overlapping.

We observe that for VAE, increasing the number of hidden units H (e.g. from 500 to 1000) for a fixed network depth L has a negligible effect on the number of active units and performance. On the other hand, as the depth of the encoder and decoder L is increased, the number of active units in VAE decreases, though performance is still able to improve. This illustrates that increase in the complexity of the interactions through use of multiple

eVAE also performs comparably or better than mVAE at all epitome sizes. Intuitively, the advantage of parameter sharing in eVAE is that each epitome can also benefit from general features learned across the training set."}]
ry2YOrcge
[{"section_index": "0", "section_name": "LEARNING A NATURAL LANGUAGE INTERFACI WITH NEURAL PROGRAMMER", "section_text": "Operation Program in Table3 Amount (%) Scalar Answer Only Count 1 6.5 Comparison + Count 2 2.1 Select + Count 3 22.1 Scalar Answer 1,2,3 30.7 Lookup Answer Most Frequent Entry + Print. 4 1.7 First/Last + Print 5 9.5 Superlative + Print 6,7 13.5 Select + Print 8 17.5 Select + {first, last, previous, next, superlative} + Print 9-11 27.1 Lookup Answer 4-11 69.3\nArvind Neelakantan\nUniversity of Massachusetts Amherst\nUniversity of Massachusetts Amhers mccallum@cs.umass.edu\nTable 4: Statistics of the different sequence of operations among the examples answered correctly by the model in the development set. For each sequence of operations in the table, we also poin to corresponding example programs in Table3] Superlative operations include argmax and argmin while comparison operations include greater than, less than, greater than or equal to and less than or equal to. The model induces a program that results in a scalar answer 30.7% of the time while the induced program is a table lookup for the remaining questions. print and select are the two most common operations used 69.3% and 66.7% of the time respectively."}, {"section_index": "1", "section_name": "3.4.2 INDUCED PROGRAMS", "section_text": "Table 3 shows few examples of programs induced by Neural Programmer that yield the correc answer in the development set. The programs given in Table 3|show a few characteristics of the learned model. First, our analysis indicates that the model can adapt to unseen column names at tes time. For example in Question 3, the word outcome occurs only 8 times in the training set and i replaced with the unknown word token. Second, the model does not always induce the most efficien (with respect to number of operations other than the reset operation that are picked) program to solve a task. The last 3 questions in the table can be solved using simpler programs. Finally, the mode does not always induce the correct program to get the ground truth answer. For example, the last 2 programs will not result in the correct response for all input database tables. The programs woulc produce the correct response only when the select operation matches one entry in the table."}, {"section_index": "2", "section_name": "BACKGROUND AND INTRODUCTION", "section_text": "Databases are a pervasive way to store and access knowledge. However, it is not straightforward for users to interact with databases since it often requires programming skills and knowledge abou database schemas. Overcoming this difficulty by allowing users to communicate with databases via natural language is an active research area. The common approach to this task is by semantic parsing, which is the process of mapping natural language to symbolic representations of meaning In this context, semantic parsing yields logical forms or programs that provide the desired response when executed on the databases (Zelle & Mooney|1996). Semantic parsing is a challenging problem that involves deep language understanding and reasoning with discrete operations such as counting and row selection (Liang2016)."}, {"section_index": "3", "section_name": "3.4.3 CONTRIBUTION OF DIFFERENT OPERATIONS", "section_text": "Table4 shows the contribution of the different operations. The model induces a program that results in a scalar answer 30.7% of the time while the induced program is a table lookup for the remaining questions. 
The two most commonly used operations by the model are print and select."}, {"section_index": "4", "section_name": "3.4.4 ERROR ANALYSIS", "section_text": "To conclude this section, we suggest ideas to potentially improve the performance of the model. First, the oracle performance with 15 Neural Programmer models is 50.5% on the development set while averaging achieves only 37.5%, implying that there is still room for improvement. Next, the accuracy of a single model on the training set is 53%, which is about 20% higher than the accuracy in both the development set and the test set. This difference in performance indicates that the model suffers from significant overfitting even after employing strong regularization. It also suggests that the performance of the model could be greatly improved by obtaining more training data. Nevertheless, there are limits to the performance improvements we may reasonably expect: in particular, as shown in previous work (Pasupat & Liang, 2015), 21% of questions on a random set of 200 examples in the considered dataset are not answerable because of various issues such as annotation errors and tables requiring advanced normalization.

Recently, many neural network models have been developed for program induction (Andreas et al., 2016; Jia & Liang, 2016; Reed & Freitas, 2016; Zaremba et al., 2016; Yin et al., 2015), despite

Work done at Google Brain

Martin Abadi

Google Brain

abadi@google.com

qvl@google.com

damodei@openai.com"}, {"section_index": "5", "section_name": "ABSTRACT", "section_text": "Learning a natural language interface for database tables is a challenging task that involves deep language understanding and multi-step reasoning. The task is often approached by mapping natural language queries to logical forms or programs that provide the desired response when executed on the database. To our knowledge, this paper presents the first weakly supervised, end-to-end neural network model to induce such programs on a real-world dataset. We enhance the objective function of Neural Programmer, a neural network with built-in discrete operations, and apply it on WikiTableQuestions, a natural language question-answering dataset. The model is trained end-to-end with weak supervision of question-answer pairs, and does not require domain-specific grammars, rules, or annotations that are key elements in previous approaches to program induction. The main experimental result in this paper is that a single Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with weak supervision. An ensemble of 15 models, with a trivial combination technique, achieves 37.7% accuracy, which is competitive to the current state-of-the-art accuracy of 37.1% obtained by a traditional natural language semantic parser.

The first learning methods for semantic parsing require expensive annotation of question-program pairs (Zelle & Mooney, 1996; Zettlemoyer & Collins, 2005). This annotation process is no longer necessary in the current state-of-the-art semantic parsers that are trained using only question-answer pairs (Liang et al., 2011; Kwiatkowski et al., 2013; Krishnamurthy & Kollar, 2013; Pasupat & Liang, 2015). However, the performance of these methods still heavily depends on domain-specific grammars or pruning strategies to ease program search. For example, in a recent work on building semantic parsers for various domains, the authors hand-engineer a separate grammar for each domain (Wang et al., 2015).
et al.2 2015)."}, {"section_index": "6", "section_name": "OTHER RELATED WORK", "section_text": "While we discuss in detail various semantic parsing and neural program induction techniques in Section 1, here we briefly describe other relevant work. Recently, Kocisky et al.(2016) develop a semi-supervised semantic parsing method that uses question-program pairs as supervision. Con- currently to our work,Liang et al.(2016) propose neural symbolic machine, a model very similar to Neural Programmer but trained using the REINFORCE algorithm (Williams1992). They use only 2 discrete operations and run for a total of 3 timesteps, hence inducing programs that are much simpler than ours. Neural networks have also been applied on question-answering datasets that do not require much arithmetic reasoning (Bordes et al.]2014) Iyyer et al.]2014) Sukhbaatar et al. 2015] [Peng et al.]2015] Hermann et al.]2015f Kumar et al.|2016).Wang & Jiang(2016) use a neu- ral network model to get state-of-the-art results on a reading comprehension task (Rajpurkar et al. 2016)."}, {"section_index": "7", "section_name": "5 CONCLUSION", "section_text": "In this paper, we enhance Neural Programmer to work with weaker supervision signals to mak it more broadly applicable. Soft selection during training enables the model to actively explore the space of programs by backpropagation with superior sample complexity. In our experiments we show that the model achieves performance comparable to a state-of-the-art traditional semantic parser even though the training set contains only 10,o00 examples. To our knowledge, this is th first instance of a weakly supervised, end-to-end neural network model that induces programs on real-world dataset."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "the notorious difficulty of handling discrete operations in neural networks (Joulin & Mikolov|2015 Kaiser & Sutskever 2016). Most of these approaches rely on complete programs as supervision (Jia & Liang2016) Reed & Freitas 2016) while others (Zaremba et al. 2016] Yin et al.]2015 have been tried only on synthetic tasks. The work that is most similar to ours is that of|Andreas et al. (2016) on the dynamic neural module network. However, in their method, the neural network is employed only to search over a small set of candidate layouts provided by the syntactic parse of the question, and is trained using the REINFORCE algorithm (Williams|1992). Hence, theii method cannot recover from parser errors, and it is not trivial to adapt the parser to the task at hand Additionally, all their modules or operations are parametrized by a neural network, so it is difficult to apply their method on tasks that require discrete arithmetic operations. Finally, their experiments concern a simpler dataset that requires fewer operations, and therefore a smaller search space, thar WikiTableQuestions which we consider in our work. We discuss other related work in Section 4.\nJacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. NAACL, 2016..\nNeural Programmer (Neelakantan et al.||2016) is a neural network augmented with a set of discrete operations. It produces both a program, made up of those operations, and the result of running the program against a given table. The operations make use of three variables: row selector, scalar answer, and lookup answer, which are updated at every timestep. 
lookup answer and scalar answer store answers, while row selector is used to propagate information across time steps. As input, a model receives a question along with a table (Figure 1). The model runs for a fixed number of time steps, selecting an operation and a column from the table as the argument to the operation at each time step. During training, soft selection (Bahdanau et al., 2014) is performed so that the model can be trained end-to-end using backpropagation. This approach allows Neural Programmer to explore the search space with better sample complexity than hard selection with the REINFORCE algorithm (Williams, 1992) would provide. All the parameters of the model are learned from a weak supervision signal that consists of only the final answer; the underlying program, which consists of a sequence of operations and of selected columns, is latent.

Antoine Bordes, Sumit Chopra, and Jason Weston. Question answering with subgraph embeddings. EMNLP, 2014.

Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. NIPS, 2015.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

(Figure 1 diagram: at timestep t, the question "What was the total number of goals scored in 2005" feeds a neural network that selects an operation (Count, Select, ArgMax, ArgMin, >, <, Print, ...) and a column, updating the scalar answer, lookup answer, and row selector over an example table with columns Season, Team, Country, Competition, Matches, and Goals.)

Acknowledgements: We are grateful to Panupong Pasupat for answering numerous questions about the dataset, and for providing a pre-processed version of the dataset and the output of the semantic parser. We thank David Belanger, Samy Bengio, Greg Corrado, Andrew Dai, Jeff Dean, Nando de Freitas, Shixiang Gu, Navdeep Jaitly, Rafal Jozefowicz, Ashish Vaswani, Luke Vilnis, Yuan Yu and Barret Zoph for their suggestions, and the Google Brain team for the support. Arvind Neelakantan is supported by a Google PhD fellowship in machine learning.

Figure 1: Neural Programmer is a neural network augmented with a set of discrete operations. The model runs for a fixed number of time steps, selecting an operation and a column from the table at every time step. The induced program transfers information across timesteps using the row selector variable, while the output of the model is stored in the scalar answer and lookup answer variables.

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viegas, Oriol Vinyals,
Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. ArXiv, 2016.

Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. NIPS, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. NIPS, 2015.

In this work, we develop an approach to semantic parsing based on Neural Programmer. We show how to learn a natural language interface for answering questions using database tables, thus integrating differentiable operations that are typical of neural networks with the declarative knowledge contained in the tables and with discrete operations on tables and entries. For this purpose, we make several improvements and adjustments to Neural Programmer, in particular adapting its objective function to make it more broadly applicable.

Mohit Iyyer, Jordan L. Boyd-Graber, Leonardo Max Batista Claudino, Richard Socher, and Hal Daume III. A neural network for factoid question answering over paragraphs. EMNLP, 2014.

Robin Jia and Percy Liang. Data recombination for neural semantic parsing. ACL, 2016.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. NIPS, 2015.

Lukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. ICLR, 2016.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2014.

Our main experimental results concern WikiTableQuestions (Pasupat & Liang, 2015), a real-world question-answering dataset on database tables, with only 10,000 examples for weak supervision. This dataset is particularly challenging because of its small size and the lack of strong supervision, and also because the tables provided at test time are never seen during training, so learning requires adaptation at test time to unseen column names. A state-of-the-art, traditional semantic parser that relies on pruning strategies to ease program search achieves 37.1% accuracy. Standard neural network models like sequence-to-sequence and pointer networks do not appear to be promising for this dataset, as confirmed in our experiments below, which yield single-digit accuracies. In comparison, a single Neural Programmer model using minimal text pre-processing, and trained end-to-end, achieves 34.2% accuracy. This surprising result is enabled primarily by the sample efficiency of Neural Programmer, by the enhanced objective function, and by reducing overfitting via strong regularization with dropout (Srivastava et al., 2014; Iyyer et al., 2015; Gal & Ghahramani, 2016) and weight decay. An ensemble of 15 models, even with a trivial combination technique, achieves 37.7% accuracy.

Jayant Krishnamurthy and Thomas Kollar. Jointly learning to parse and perceive: Connecting natural language to the physical world. TACL, 2013.

Percy Liang. Learning executable semantic parsers for natural language understanding. ACM, 2016.

Percy Liang, Michael I. Jordan, and Dan Klein. Learning dependency-based compositional semantics. ACL, 2011."}, {"section_index": "9", "section_name": "2 NEURAL PROGRAMMER", "section_text": "In this section we describe in greater detail the Neural Programmer model and the modifications we made to the model. Neural Programmer is a neural network augmented with a set of discrete operations. The model consists of four modules:

Arvind Neelakantan, Quoc V.
Le, and Ilya Sutskever. Neural Programmer: Inducing latent programs with gradient descent. ICLR, 2016.

Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. ACL, 2015.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. ArXiv, 2016.

Scott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR, 2016.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.

A more detailed description of the basic model can be found in Neelakantan et al. (2016). The model runs for a fixed total of T timesteps. The parameters of the operations, selector module, question and

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. NIPS, 2015.

Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daume III. Deep unordered composition rivals syntactic methods for text classification. ACL, 2015.

In earlier work, Neural Programmer is applied only on a synthetic dataset. In that dataset, when the expected answer is an entry in the given table, its position is explicitly marked in the table. However, real-world datasets certainly do not include those markers, and lead to many ambiguities (e.g., (Pasupat & Liang, 2015)). In particular, when the answer is a number that occurs literally in the table, it is not known, a priori, whether the answer should be generated by an operation or selected from the table. Similarly, when the answer is a natural language phrase that occurs in multiple positions in the table, it is not known which entry (or entries) in the table is actually responsible for the answer. We extend Neural Programmer to handle the weaker supervision signal by backpropagating through decisions that concern how the answer is generated when there is an ambiguity.

Tomas Kocisky, Gabor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. Semantic parsing with semi-supervised sequential autoencoders. ArXiv, 2016.

Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. Scaling semantic parsers with on-the-fly ontology matching. EMNLP, 2013.

Chen Liang, Jonathan Berant, Quoc Le, Kenneth Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. NAMPI Workshop, NIPS, 2016.

Question RNN that processes the question and converts the tokens to a distributed representation. We use an LSTM network (Hochreiter & Schmidhuber, 1997) as the question RNN.

A list of discrete operations, such as counting and entry selection, that are manually defined. Each operation is parameterized by a real-valued vector that is learned during training.

A selector module that induces two probability distributions at every time step, one over the set of operations and another over the set of columns (a sketch is given after this list). The input to the selector is obtained by concatenating the last hidden state of the question RNN, the hidden state of the history RNN from the current timestep, and the attention vector obtained by performing soft attention (Bahdanau et al., 2014) on the question using the history vector. Following Neelakantan et al. (2016), we employ hard selection at test time.

History RNN modeled by a simple RNN (Werbos, 1990) with tanh activations, which remembers the previous operations and columns selected by the model. The input to the history RNN at each timestep is the result of concatenating the weighted representations of operations and columns with their corresponding probability distributions produced by the selector at the previous timestep.
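A minimal sketch of the selector's two softmax distributions follows, using generic vectors for the question state, history state, and attention vector; all names, shapes, and weight matrices are our own illustrative assumptions, not the published architecture.

    import numpy as np

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    def selector(q_state, h_state, attn, W_op, W_col, op_embed, col_embed):
        # Concatenate question, history, and attention vectors (the selector's
        # input described above), then score the learned operation and column
        # embeddings to obtain the two distributions alpha_op and alpha_col.
        s = np.concatenate([q_state, h_state, attn])
        alpha_op = softmax(op_embed @ (W_op @ s))
        alpha_col = softmax(col_embed @ (W_col @ s))
        return alpha_op, alpha_col

At training time these distributions weight every operation and column softly; at test time, the argmax of each distribution gives the hard selection.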
Baolin Peng, Zhengdong Lu, Hang Li, and Kam-Fai Wong. Towards neural network-based reasoning. ArXiv, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. NIPS, 2014.

history RNNs are all learned with backpropagation using a weak supervision signal that consists of the final answer. Below, we discuss several modifications to the model to make it more broadly applicable, and easier to train.

Yushi Wang, Jonathan Berant, and Percy Liang. Building a semantic parser overnight. ACL, 2015."}, {"section_index": "10", "section_name": "2.1 OPERATIONS", "section_text": "We use 15 operations in the model that were chosen to closely match the set of operations used in the baseline model (Pasupat & Liang, 2015). All the operations except select and most frequent entry operate only on the set of selected rows, which is given by the row selector variable. Before the first timestep, all the rows in the table are set to be selected. The built-in operations are:

Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. Neural enquirer: Learning to query tables with natural language. ArXiv, 2015.

John M. Zelle and Raymond J. Mooney. Learning to parse database queries using inductive logic programming. AAAI/IAAI, 1996.

Luke S. Zettlemoyer and Michael Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. UAI, 2005.

All the operations are defined to work with soft selection so that the model can be trained with backpropagation. The operations along with their definitions are discussed in the Appendix."}, {"section_index": "11", "section_name": "2.2 OUTPUT AND ROW SELECTOR", "section_text": "Neural Programmer makes use of three variables: row selector, scalar answer and lookup answer, which are updated at every timestep. The variable lookup answer stores answers that are selected from the table, while scalar answer stores numeric answers that are not provided in the table¹. The induced program transfers information across timesteps using the row selector variable, which contains rows that are selected by the model.

Given an input table Π containing M rows and C columns (M and C can vary across examples), the output variables at timestep t are given by:

scalar answer_t = α_t^op(count) output_t(count),
lookup answer_t[i][j] = α_t^col(j) α_t^op(print) row_select_{t-1}[i], ∀(i, j), i = 1, ..., M, j = 1, ..., C

where α_t^op(op) and α_t^col(j) are the probabilities assigned by the selector to operation op and column j at timestep t respectively, and output_t(count) is the output of the count operation at timestep t. The row selector variable at timestep t is obtained by taking the weighted average of the outputs of the remaining operations and is discussed in the Appendix. lookup answer_T[i][j] is the probability that the element (i, j) in the input table is in the final answer predicted by the model."}, {"section_index": "12", "section_name": "2.3 TRAINING OBJECTIVE", "section_text": "We modify the training objective of Neural Programmer to handle the supervision signal available in real-world settings. In previous work, the positions of the answers are explicitly marked in the table when the answer is an entry from the table. However, as discussed in Section 1, in real-world datasets (e.g., (Pasupat & Liang, 2015)) the answer is simply written down, introducing two kinds of ambiguities.
First, when the answer is a number and if the number is in the table, it is not known whether the loss should be computed using the scalar answer variable or the lookup answer variable. Second, when the answer is a natural language phrase and if the phrase occurs in multiple positions in the table, we again do not know which entry (or entries) in the table is actually responsible for generating the answer. We extend Neural Programmer to handle this weaker supervision signal during training by computing the loss only on the prediction that is closest to the desired response.

¹It is possible to extend the model to generate natural language responses using an RNN decoder but it is not the focus of this paper and we leave it for further work.

The built-in operations are:

- count returns the number of selected rows in row selector.
- select and most frequent entry are operations which are computed only once for every question and output a boolean tensor with size same as the size of the input table. An entry in the output of the select operation is set to 1 if the entry matches some phrase in the question. The matched phrases in the question are anonymized to prevent overfitting. Similarly, for most frequent entry, it is set to 1 if the entry is the most frequently occurring one in its column.
- argmax, argmin, greater than, less than, greater than or equal to, less than or equal to are all operations that output a tensor with size same as the size of the input table.
- first, last, previous and next modify the row selector.
- print operation assigns row selector on the selected column of lookup answer.
- reset resets row selector to its initial value. This operation also serves as no-op when the model needs to induce programs whose complexity is less than T.

For scalar answers we compute the square loss:

L_scalar(scalar answer_T, y) = (scalar answer_T − y)²,

where y is the ground truth answer. We divide L_scalar by the number of rows in the input table and do not backpropagate on examples for which the loss is greater than a threshold since it leads to instabilities in training.

When the answer is a list of items y = (a_1, a_2, ..., a_N), for each element in the list (a_i, i = 1, 2, ..., N) we compute all the entries in the table that match that element, given by S_i = {(r, c) : Π[r][c] = a_i}. We tackle the ambiguity introduced when an answer item occurs at multiple entries in the table by computing the loss only on the entry which is assigned the highest probability. Let g[i, j] indicate whether the element (i, j) in the input table is part of the output. We compute log-loss for each entry and the final loss is given by:

L_lookup(lookup answer_T, y) = Σ_{i=1}^{N} min_{(r,c)∈S_i} (−log(lookup answer_T[r, c]))
    − (1/MC) Σ_{i=1}^{M} Σ_{j=1}^{C} 1[g[i, j] == 0] log(1 − lookup answer_T[i, j]),

where 1[cond] is 1 when cond is True, and 0 otherwise.

We deal with the ambiguity that occurs when the ground truth is a number and the number also occurs in the table by computing the final loss as the soft minimum of L_scalar and L_lookup. Otherwise, the loss for an example is L_scalar when the ground truth is a number and L_lookup when the ground truth matches some entries in the table. The two loss functions L_scalar and L_lookup are in different scales, so we multiply L_lookup by a constant factor, which we set to 50.0 after a small exploration in our experiments.
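The following is a minimal NumPy sketch of this training objective. The shapes, helper names, and the temperature of the soft minimum are illustrative assumptions, not details taken from the original TensorFlow implementation.

```python
import numpy as np

def scalar_loss(scalar_answer, y, num_rows):
    """Square loss on numeric answers, normalized by table size."""
    return (scalar_answer - y) ** 2 / num_rows

def lookup_loss(lookup_answer, matches, gold_mask):
    """Log-loss on table-selection answers.

    lookup_answer: (M, C) probabilities that each table entry is in the answer.
    matches: for each gold answer item a_i, the set S_i of (row, col) entries
             that literally match it; only the most confident match is penalized.
    gold_mask: (M, C) binary mask, 1 where the entry is part of some answer item.
    """
    eps = 1e-8
    loss = 0.0
    for entry_set in matches:  # one set S_i per answer item a_i
        # min over matching entries of -log p: penalize the highest-prob match
        loss += min(-np.log(lookup_answer[r, c] + eps) for (r, c) in entry_set)
    # penalize probability mass on entries that are not part of the answer
    neg = (1.0 - gold_mask) * -np.log(1.0 - lookup_answer + eps)
    return loss + neg.sum() / lookup_answer.size

def soft_min(a, b, temperature=1.0):
    """Differentiable approximation of min(a, b), used to resolve the
    number-vs-table ambiguity; as temperature -> 0 it approaches min(a, b)."""
    x = np.array([a, b]) / temperature
    return -temperature * np.log(np.sum(np.exp(-x)))
```

For an ambiguous numeric answer, the combined loss would then be computed as `soft_min(L_scalar, 50.0 * L_lookup)`.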
Table 5 shows the list of operations built into the model along with their definitions.

Table 5: List of all operations provided to the model along with their definitions. mfe is abbreviation for the operation most frequent entry. 1[cond] is 1 when cond is True, and 0 otherwise. Comparison, select, reset and mfe operations are independent of the timestep while all the other operations are computed at every time step. Superlative operations and most frequent entry are computed within a column. The operations calculate the expected output with respect to the membership probabilities given by the row selector so that they can work with probabilistic selection.

Type: Aggregate
  count: count_t = Σ_{i=1}^{M} row_select_{t-1}[i]
Type: Superlative
  argmax: max_t[i][j] = max(0.0, row_select_{t-1}[i] − Σ_{k=1}^{M} 1[Π[k][j] > Π[i][j]] row_select_{t-1}[k]), i = 1, ..., M, j = 1, ..., C
  argmin: min_t[i][j] = max(0.0, row_select_{t-1}[i] − Σ_{k=1}^{M} 1[Π[i][j] > Π[k][j]] row_select_{t-1}[k]), i = 1, ..., M, j = 1, ..., C
Type: Comparison
  >: g[i][j] = 1[Π[i][j] > pivot_g], ∀(i, j), i = 1, ..., M, j = 1, ..., C
  <: l[i][j] = 1[Π[i][j] < pivot_l], ∀(i, j), i = 1, ..., M, j = 1, ..., C
  ≥: ge[i][j] = 1[Π[i][j] ≥ pivot_ge], ∀(i, j), i = 1, ..., M, j = 1, ..., C
  ≤: le[i][j] = 1[Π[i][j] ≤ pivot_le], ∀(i, j), i = 1, ..., M, j = 1, ..., C
Type: Table Ops
  select: s[i][j] = 1.0 if Π[i][j] appears in question else 0.0, ∀(i, j)
  mfe: mfe[i][j] = 1.0 if Π[i][j] is the most common entry in column j else 0.0, ∀(i, j)
  first: f_t[i] = max(0.0, row_select_{t-1}[i] − Σ_{j=1}^{i-1} row_select_{t-1}[j]), i = 1, ..., M
  last: la_t[i] = max(0.0, row_select_{t-1}[i] − Σ_{j=i+1}^{M} row_select_{t-1}[j]), i = 1, ..., M
  previous: p_t[i] = row_select_{t-1}[i + 1], i = 1, ..., M − 1; p_t[M] = 0
  next: n_t[i] = row_select_{t-1}[i − 1], i = 2, ..., M; n_t[1] = 0
Type: Print
  print: lookup answer_t[i][j] = row_select_{t-1}[i], ∀(i, j), i = 1, ..., M, j = 1, ..., C
Type: Reset
  reset: r_t[i] = 1, ∀i = 1, 2, ..., M

As discussed in Section 2.3, the output variables scalar answer and lookup answer are calculated using the output of the count operation and print operation respectively. The row selector is computed using the output of the remaining operations and is given by:

row selector_t[i] = Σ_{j=1}^{C} { α_t^col(j) α_t^op(>) g[i][j] + α_t^col(j) α_t^op(<) l[i][j]
    + α_t^col(j) α_t^op(≥) ge[i][j] + α_t^col(j) α_t^op(≤) le[i][j]
    + α_t^col(j) α_t^op(argmax) max_t[i][j] + α_t^col(j) α_t^op(argmin) min_t[i][j]
    + α_t^col(j) α_t^op(select) s[i][j] + α_t^col(j) α_t^op(mfe) mfe[i][j] }
    + α_t^op(previous) p_t[i] + α_t^op(next) n_t[i] + α_t^op(reset) r_t[i]
    + α_t^op(first) f_t[i] + α_t^op(last) la_t[i],   ∀i, i = 1, 2, ..., M,

where α_t^op(op) and α_t^col(j) are the probabilities assigned by the selector to operation op and column j at timestep t respectively.

Since we employ hard selection at test time, only one among scalar answer and lookup answer is modified at the last timestep. We use the variable that is set at the last timestep as the final output of the model.
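A small sketch of this soft row-selector update, assuming NumPy arrays and a dictionary mapping operation names to their row-wise outputs (the names and shapes are assumptions for illustration):

```python
import numpy as np

def update_row_selector(row_select, op_probs, col_probs, op_outputs):
    """One soft update of the row selector (shape (M,)) as the
    probability-weighted average of the non-output operations.

    op_outputs maps an operation name to either an (M,) vector (row operations
    such as first/last/previous/next/reset) or an (M, C) tensor
    (column-dependent operations such as comparisons, superlatives, select, mfe).
    """
    M = row_select.shape[0]
    new_select = np.zeros(M)
    for op, out in op_outputs.items():
        if out.ndim == 2:  # column-dependent: also weight by column probabilities
            new_select += op_probs[op] * (out @ col_probs)
        else:              # column-independent row operation
            new_select += op_probs[op] * out
    return new_select
```

The matrix-vector product `out @ col_probs` computes Σ_j α^col(j) · op[i][j], matching the equation above.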
"}, {"section_index": "13", "section_name": "3 EXPERIMENTS", "section_text": "We apply Neural Programmer on the WikiTableQuestions dataset (Pasupat & Liang, 2015) and compare it to different non-neural baselines including a natural language semantic parser developed by Pasupat & Liang (2015). Further, we also report results from training the sequence-to-sequence model (Sutskever et al., 2014) and a modified version of the pointer networks (Vinyals et al., 2015). Our model is implemented in TensorFlow (Abadi et al., 2016) and the model takes approximately a day to train on a single Tesla K80 GPU. We use double-precision format to store the model parameters since the gradients become undefined values in single-precision format.

Our code is available at https://github.com/tensorflow/models/tree/master/neural

Table 1: Performance of Neural Programmer compared to baselines from (Pasupat & Liang, 2015). The performance of an ensemble of 15 models is competitive to the current state-of-the-art natural language semantic parser.

Table 1 shows the performance of our model in comparison to baselines from Pasupat & Liang (2015). The best result from Neural Programmer is achieved by an ensemble of 15 models. The only difference among these models is that the parameters of each model are initialized with a different random seed. We combine the models by averaging the predicted softmax distributions of the models at every timestep. While it is generally believed that neural network models require a large number of training examples compared to simpler linear models to get good performance, our model is competitive despite the small size of the training set."}, {"section_index": "14", "section_name": "3.1 DATA", "section_text": "We use the train, development, and test split given by Pasupat & Liang (2015). The dataset contains 11321, 2831, and 4344 examples for training, development, and testing respectively. We use their tokenization, number and date pre-processing. There are examples with answers that are neither number answers nor phrases selected from the table. We ignore these questions during training but the model is penalized during evaluation following Pasupat & Liang (2015). The tables provided in the test set are unseen at training, hence requiring the model to adapt to unseen column names at test time. We train only on examples for which the provided table has less than 100 rows since we run out of GPU memory otherwise, but consider all examples at test time."}, {"section_index": "15", "section_name": "3.2 TRAINING DETAILS", "section_text": "We use T = 4 timesteps in our experiments. Words and operations are represented as 256 dimensional vectors, and the hidden vectors of the question and the history RNN are also 256 dimensional. The parameters are initialized uniformly randomly within the range [−0.1, 0.1]. We train the model using the Adam optimizer (Kingma & Ba, 2014) with mini-batches of size 20. The ε hyperparameter in Adam is set to 1e-6 while others are set to the default values. Since the training set is small compared to other datasets in which neural network models are usually applied, we rely on strong regularization:

- We clip the gradients to norm 1 and employ early-stopping.
- The occurrences of words that appear less than 10 times in the training set are replaced by a single unknown word token.
- We add a weight decay penalty with strength 0.0001.
- We use dropout with a keep probability of 0.8 on input and output vectors of the RNN, and selector, operation and column name representations (Srivastava et al., 2014).
- We use dropout with keep probability of 0.9 on the recurrent connections of the question RNN and history RNN using the technique from Gal & Ghahramani (2016).
- We use word-dropout (Iyyer et al., 2015) with keep probability of 0.9. Here, words in the question are randomly replaced with the unknown word token while training. A minimal sketch of this configuration follows.
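A compact summary of the training configuration described above, with assumed key names (the original TensorFlow implementation is linked above):

```python
import random

# Assumed configuration names; values are taken from the text above.
CONFIG = {
    "timesteps_T": 4,
    "embedding_dim": 256,
    "rnn_hidden_dim": 256,
    "init_range": (-0.1, 0.1),
    "optimizer": "adam",
    "adam_epsilon": 1e-6,          # non-default; other Adam settings default
    "batch_size": 20,
    "grad_clip_norm": 1.0,
    "min_word_count": 10,          # rarer words -> unknown token
    "weight_decay": 1e-4,
    "dropout_keep_io": 0.8,        # RNN input/output and selector vectors
    "dropout_keep_recurrent": 0.9, # variational recurrent dropout
    "word_dropout_keep": 0.9,      # replace question words with <unk>
}

def word_dropout(tokens, keep_prob=0.9, unk="<unk>", rng=None):
    """Randomly replace question words with the unknown token (training only)."""
    rng = rng or random.Random(0)
    return [t if rng.random() < keep_prob else unk for t in tokens]
```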
We tune the dropout rates, regularization strength, and the ε hyperparameter using grid search on the development data; we fix the other hyperparameters after a small exploration during initial experiments.

Table 2: Model ablation studies. We find that dropout and weight decay, along with the boolean feature indicating a matched table entry for column selection, have a significant effect on the performance of the model.

We did not get better results either by using pre-trained word vectors (Mikolov et al., 2013) or by pre-training the question RNN with a language modeling objective (Dai & Le, 2015). A possible explanation is that the word vectors obtained from unsupervised learning may not be suitable to the task under consideration. For example, the learned representations of words like maximum and minimum from unsupervised learning are usually close to each other, but for our task this is counter-productive. We consider replacing soft selection with hard selection and training the model with the REINFORCE algorithm (Williams, 1992). The model fails to learn in this experiment, probably because the model has to search over millions of symbolic programs for every input question, making it highly unlikely to find a program that gives a reward. Hence, the parameters of the model are not updated frequently enough."}, {"section_index": "16", "section_name": "3.3.1 NEURAL NETWORK BASELINES", "section_text": "To understand the difficulty of the task for neural network models, we experiment with two neural network baselines: the sequence-to-sequence model (Sutskever et al., 2014) and a modified version of the pointer networks (Vinyals et al., 2015). The input to the sequence-to-sequence model is a concatenation of the table and the question, and the decoder produces the output one token at a time. We consider only examples whose input length is less than 400 to make the running time reasonable. The resulting dataset has 8,857 and 1,623 training and development examples respectively. The accuracy of the best model on this development set after hyperparameter tuning is only 8.9%. Next, we experiment with pointer networks to select entries in the table as the final answer. We modify pointer networks to have two attention heads: one to select the column and the other to select entries within a column. Additionally, the model performs multiple pondering steps on the table before returning the final answer. We train this model only on lookup questions, since the model does not have a decoder to generate answers. We consider only examples whose tables have less than 100 rows, resulting in training and development sets consisting of 7,534 and 1,829 examples respectively. The accuracy of the best model on this development set after hyperparameter tuning is only 4.0%. These results confirm our intuition that discrete operations are hard to learn for neural networks, particularly with small datasets in real-world settings."}, {"section_index": "17", "section_name": "3.4.1 MODEL ABLATION", "section_text": "Table 2 shows the impact of different model design choices on the final performance. While anonymizing phrases in the question that match some table entry seems to have a small positive effect, regularization has a much larger effect on the performance. Column selection is performed in Neelakantan et al. (2016) using only the name of a column; however, this selection procedure is insufficient in real-world settings. For example, the column selected in question 3 in Table 3 does not have a corresponding phrase in the question. Hence, to select a column we additionally use a boolean feature that indicates whether an entry in that column matches some phrase in the question. Table 2 shows that the addition of this boolean feature has a significant effect on performance.
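A minimal sketch of such a boolean match feature, assuming exact token-level string matching against the question (the actual phrase matching may be more elaborate):

```python
def column_match_features(table_columns, question_tokens):
    """One boolean feature per column: does any entry in that column match
    some token in the question? `table_columns` is a list of columns, each a
    list of cell values; matching here is lowercase exact match (an assumption).
    """
    q = {t.lower() for t in question_tokens}
    return [any(str(cell).lower() in q for cell in column)
            for column in table_columns]
```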
Table 3: A few examples of programs induced by Neural Programmer that generate the correct answer in the development set. mfe is abbreviation for the operation most frequent entry. The model runs for 4 timesteps, selecting an operation and a column at every step. The model employs hard selection during evaluation. The column name is displayed only when the operation picked at that step takes in a column as input, while the operation is displayed only when it is other than the reset operation. Programs that choose count as the final operation produce a number as the final answer, while programs that select print as the final operation produce entries selected from the table as the final answer.

ID  Question                                                  Step 1           Step 2           Step 3               Step 4
1   what is the total number of teams?                        -                -                -                    count
2   how many games had more than 1,500 in attendance?         -                -                > (attendance)       count
3   what is the total number of runner-ups listed             -                -                select (outcome)     count
    on the chart?
4   which year held the most competitions?                    -                -                mfe (year)           print (year)
5   what opponent is listed last on the table?                last             -                last                 print (opponent)
6   which section is longest?                                 -                -                argmax (kilometers)  print (name)
7   which engine(s) has the least amount of power?            -                -                argmin (power)       print (engine)
8   what was claudia roll's time?                             -                -                select (swimmer)     print (time)
9   who had more silver medals, cuba or brazil?               argmax (nation)  select (nation)  argmax (silver)      print (nation)
10  who was the next appointed director after lee p. brown?   select (name)    next             last                 print (name)
11  what team is listed previous to belgium?                  select (team)    previous         first                print (team)
"}]
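To make the evaluation protocol concrete, here is a hedged sketch of greedy hard selection at test time, turning the per-step selector distributions into a program of (operation, column) pairs like those in Table 3; the function and argument names are assumptions:

```python
import numpy as np

def decode_program(op_prob_seq, col_prob_seq, op_names, col_names):
    """Greedy hard selection: at each of the T timesteps, pick the argmax
    operation and argmax column from the selector's distributions."""
    program = []
    for op_p, col_p in zip(op_prob_seq, col_prob_seq):
        program.append((op_names[int(np.argmax(op_p))],
                        col_names[int(np.argmax(col_p))]))
    return program
```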
SyWvgP5el
[{"section_index": "0", "section_name": "EPOPT: LEARNING ROBUST NEURAL NETWORK POLICIES USING MODEL ENSEMBLES", "section_text": "Aravind Rajeswaran', Sarvjeet Ghotra?, Balaraman Ravindran3, Sergey Levine4"}, {"section_index": "1", "section_name": "ACKNOWLEDGMENTS", "section_text": "Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are repre sented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning/adaptation\nThe authors would like to thank Emo Todorov, Sham Kakade, and students of Emo Todorov's researcl group for insightful comments about the work. The authors would also like to thank Emo Todoroy. for the MuJoCo simulator. Aravind Rajeswaran and Balaraman Ravindran acknowledge financia support from ILDS, IIT Madras."}, {"section_index": "2", "section_name": "REFERENCES", "section_text": "Pieter Abbeel, Morgan Quigley, and Andrew Y. Ng. Using inaccurate models in reinforcement learning. In ICML, 2006.\nBrenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems. 57(5):469 - 483. 2009"}, {"section_index": "3", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning with powerful function approximators like deep neural networks (deep RI. has recently demonstrated remarkable success in a wide range of tasks like games (Mnih et al.]2015 Silver et al.] 2016), simulated control problems (Lillicrap et al.2015] Mordatch et al.[[2015b), an graphics (Peng et al.||2016). However, high sample complexity is a major barrier for directly applyin. model-free deep RL methods for physical control tasks. Model-free algorithms like Q-learning. actor-critic, and policy gradients are known to suffer from long learning times (Kakade]2003), whic. is compounded when used in conjunction with expressive function approximators like deep neura. networks (DNNs). The challenge of gathering samples from the real world is further exacerbate. by issues of safety for the agent and environment, since sampling with partially learned policie. could be unstable (Garcia & Fernandez| 2015). Thus, model-free deep RL methods often require. prohibitively large numbers of potentially dangerous samples for physical control tasks.\nErick Delage and Shie Mannor. Percentile optimization for markov decision processes with paramete. uncertainty. Operations Research, 58(1):203-213, 2010.\nYan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In ICML, 2016..\nMichael O. Duff. 
Design for an optimal probe. In ICML, 2003\nModel-based methods, where the real-world target domain is approximated with a simulated source domain, provide an avenue to tackle the above challenges by learning policies using simulated data The principal challenge with simulated training is the systematic discrepancy between source and target domains, and therefore, methods that compensate for systematic discrepancies (modeling errors) are needed to transfer results from simulations to real world using RL. We show that the impact of such discrepancies can be mitigated through two key ideas: (1) training on an ensemble of models in an adversarial fashion to learn policies that are robust to parametric model errors, as well as to unmodeled effects; and (2) adaptation of the source domain ensemble using data fron the target domain to progressively make it a better approximation. This can be viewed either as an instance of model-based Bayesian RL (Ghavamzadeh et al.|[2015); or as transfer learning from a collection of simulated source domains to a real-world target domain (Taylor & Stone]2009). While a number of model-free RL algorithms have been proposed (see, e.g.,Duan et al.(2016) for a survey) their high sample complexity demands use of a simulator, effectively making them model-based. We\nTom Erez, Yuval Tassa, and Emanuel Todorov. Infinite-horizon model predictive control for periodic tasks with contacts. In Proceedings of Robotics: Science and Systems, 2011.\nMohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, and Aviv Tamar. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning. 8(5-6):359 483. 2015\nSham Kakade. A natural policy gradient. In NIPS, 2001\nSham Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003."}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "Javier Garcia and Fernando Fernandez. A comprehensive survey on safe reinforcement learning Journal of Machine Learning Research, 2015..\nSergey Levine and Vladlen Koltun. Guided policy search. In ICML, 2013.\nIn this paper, we propose the Ensemble Policy Optimization (EPOpt-e) algorithm for finding policies. that are robust to model mismatch. In line with model-based Bayesian RL, we learn a policy for the target domain by alternating between two phases: (i) given a source (model) distribution (i.e. ensemble. of models), find a robust policy that is competent for the whole distribution; (ii) gather data from the target domain using said robust policy, and adapt the source distribution. EPOpt uses an ensemble of models sampled from the source distribution, and a form of adversarial training to learn robusi policies that generalize to a broad range of models. By robust, we mean insensitivity to parametric . model errors and broadly competent performance for direct-transfer (also referred to as jumpstari like in Taylor & Stone (2009)). Direct-transfer performance refers to the average initial performance (return) in the target domain, without any direct training on the target domain. By adversarial training. we mean that model instances on which the policy performs poorly in the source distribution are sampled more often in order to encourage learning of policies that perform well for a wide range of model instances. This is in contrast to methods which learn highly optimized policies for specific model instances, but brittle under model perturbations. In our experiments, we did not observe. 
significant loss in performance by requiring the policy to work on multiple models (for example, through adopting a more conservative strategy). Further, we show that policies learned using EPOpt are robust even to effects not modeled in the source domain. Such unmodeled effects are a major issue when transferring from simulation to the real world. For the model adaptation step (ii), we present a simple method using approximate Bayesian updates, which progressively makes the source distribution a better approximation of the target domain. We evaluate the proposed methods on the hopper (12 dimensional state space; 3 dimensional action space) and half-cheetah (18 dimensional state space; 6 dimensional action space) benchmarks in MuJoCo. Our experimental results suggest that adversarial training on model ensembles produces robust policies which generalize better than policies trained on a single, maximum-likelihood model (of source distribution) alone.

Lennart Ljung. System Identification, pp. 163-173. Birkhauser Boston, Boston, MA, 1998.

Igor Mordatch, Kendall Lowrey, Galen Andrew, Zoran Popovic, and Emanuel V. Todorov. Interactive control of diverse complex characters with neural networks. In NIPS, 2015b.

Arnab Nilim and Laurent El Ghaoui. Robust control of markov decision processes with uncertain transition matrices. Operations Research, 53(5):780-798, 2005.

Josep M. Porta, Nikos A. Vlassis, Matthijs T. J. Spaan, and Pascal Poupart. Point-based value iteration for continuous POMDPs. Journal of Machine Learning Research, 7:2329-2367, 2006.

We consider parametrized Markov Decision Processes (MDPs), which are tuples of the form: M(p) = < S, A, T_p, R_p, γ, S_{0,p} >, where S, A are (continuous) states and actions respectively; T_p, R_p, and S_{0,p} are the state transition, reward function, and initial state distribution respectively, all parametrized by p; and γ is the discount factor. Thus, we consider a set of MDPs with the same state and action spaces. Each MDP in this set could potentially have different transition functions, rewards, and initial state distributions. We use transition functions of the form s_{t+1} = T_p(s_t, a_t), where T_p is a random process and s_{t+1} is a random variable.

Stephane Ross and Drew Bagnell. Agnostic system identification for model-based reinforcement learning. In ICML, 2012.

John Schulman, Sergey Levine, Philipp Moritz, Michael Jordan, and Pieter Abbeel. Trust region policy optimization. In ICML, 2015.

David Silver et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, Jan 2016.

We distinguish between source and target MDPs using M and W respectively. We also refer to M and W as source and target domains respectively, as is common in the transfer learning set-up. Our objective is to learn the optimal policy for W; and to do so, we have access to M(p). We assume that we have a distribution (D) over the source domains (MDPs) generated by a distribution over the parameters P = P(p) that captures our subjective belief about the parameters of W. Let P be parametrized by ψ (e.g. mean, standard deviation). For example, M could be a hopping task with reward proportional to hopping velocity, where falling down corresponds to a terminal state. For this task, p could correspond to parameters like torso mass, ground friction, and damping in joints, all of which affect the dynamics. Ideally, we would like the target domain to be in the model class, i.e., ∃ p such that M(p) = W. However, in practice, there are likely to be unmodeled effects, and we analyze this setting in our experiments.
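As a concrete illustration, the sketch below samples an ensemble of model instances from a Gaussian source distribution P_ψ. The parameter names and values mirror the hopper setup described in the appendix (Table 1), but the code itself is an assumption, not the authors' implementation:

```python
import numpy as np

# Gaussian source distribution over physical parameters (hopper values
# from Table 1 in the appendix; parameter names are assumptions).
SOURCE_PSI = {  # parameter: (mean, std)
    "torso_mass":      (6.0, 1.5),
    "ground_friction": (2.0, 0.25),
    "joint_damping":   (2.5, 1.0),
    "armature":        (1.0, 0.25),
}

def sample_model_params(psi, rng):
    """Draw one model instance p ~ P_psi from the source distribution."""
    return {k: rng.normal(mu, sigma) for k, (mu, sigma) in psi.items()}

rng = np.random.default_rng(0)
ensemble = [sample_model_params(SOURCE_PSI, rng) for _ in range(240)]
```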
We wish to learn a policy π*_θ(s) that performs well for all M ~ D. Note that this robust policy does not have an explicit dependence on p, and we require it to perform well without knowledge of p.

Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10:1633-1685, December 2009.

3 LEARNING PROTOCOL AND EPOPT ALGORITHM

We follow the round-based learning protocol of Bayesian model-based RL. We use the term rounds when interacting with the target domain, and episodes when performing rollouts with the simulator. In each round, we interact with the target domain after computing the robust policy on the current (i.e.

Nikos Vlassis, Mohammad Ghavamzadeh, Shie Mannor, and Pascal Poupart. Bayesian Reinforcement Learning, pp. 359-386. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.

show in our experiments that such methods learn policies which are highly optimized for the specific models used in the simulator, but are brittle under model mismatch. This is not surprising, since deep networks are remarkably proficient at exploiting any systematic regularities in a simulator. Addressing robustness of DNN-policies is particularly important to transfer their success from simulated tasks to physical systems.

Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In ICML, 2002.

Volodymyr Mnih et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, Feb 2015.

Pascal Poupart, Nikos A. Vlassis, Jesse Hoey, and Kevin Regan. An analytic solution to discrete bayesian reinforcement learning. In ICML, 2006.

posterior) simulated source distribution. Following this, we update the source distribution using data from the target domain collected by executing the robust policy. Thus, in round i, we update two sets of parameters: θ, the parameters of the robust policy (neural network); and ψ, the parameters of the source distribution. The two key steps in this procedure are finding a robust policy given a source distribution, and updating the source distribution using data from the target domain. In this section, we present our approach for both of these steps.

Pawel Wawrzynski. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural Networks, 22:1484-1497, 2009.

3.1 ROBUST POLICY SEARCH

We introduce the EPOpt algorithm for finding a robust policy using the source distribution. EPOpt is a policy gradient based meta-algorithm which uses batch policy optimization methods as a subroutine. Batch policy optimization algorithms (Williams, 1992; Kakade, 2001; Schulman et al., 2015) collect a batch of trajectories by rolling out the current policy, and use the trajectories to make a policy update. The basic structure of EPOpt is to sample a collection of models from the source distribution, sample trajectories from each of these models, and make a gradient update based on a subset of sampled trajectories. We first define evaluation metrics for the parametrized policy π_θ:

η_M(θ, p) = E_τ [ Σ_{t=0}^{T−1} γ^t r_t | p ],                                (1)
η_D(θ) = E_{p~P} [η_M(θ, p)] = E_τ [ Σ_{t=0}^{T−1} γ^t r_t ].

In (1), η_M(θ, p) is the evaluation of π_θ on the model M(p), with the expectation over trajectories generated by M(p) and π_θ: τ = {s_t, a_t, r_t}_{t=0}^{T−1}, where s_{t+1} ~ T_p(s_t, a_t), s_0 ~ S_{0,p}, r_t ~ R_p(s_t, a_t), and a_t ~ π_θ(s_t). Similarly, η_D(θ) is the evaluation of π_θ over the source domain distribution.
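A Monte Carlo sketch of these evaluation metrics, assuming a gym-style model interface (`reset`/`step`) and treating the horizon, discount, and rollout counts as placeholder values:

```python
import numpy as np

def eta_M(policy, model, gamma=0.99, horizon=1000, n_rollouts=10):
    """Monte Carlo estimate of eta_M(theta, p): expected discounted return of
    `policy` on a fixed model instance. `model.reset()` returns a state and
    `model.step(a)` returns (state, reward, done) -- an assumed interface."""
    returns = []
    for _ in range(n_rollouts):
        s, ret, disc = model.reset(), 0.0, 1.0
        for _ in range(horizon):
            s, r, done = model.step(policy(s))
            ret += disc * r
            disc *= gamma
            if done:
                break
        returns.append(ret)
    return float(np.mean(returns))

def eta_D(policy, sample_model, n_models=50, **kw):
    """eta_D(theta) = E_{p~P}[eta_M(theta, p)], estimated by sampling models."""
    return float(np.mean([eta_M(policy, sample_model(), **kw)
                          for _ in range(n_models)]))
```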
The corresponding expectation is over trajectories τ generated by D and π_θ: τ = {s_t, a_t, r_t}_{t=0}^{T−1}, where s_{t+1} ~ T_{p_t}(s_t, a_t), p_{t+1} = p_t, s_0 ~ S_{0,p_0}, r_t ~ R_{p_t}(s_t, a_t), a_t ~ π_θ(s_t), and p_0 ~ P. With this modified notation of trajectories, batch policy optimization can be invoked for policy search.

Optimizing η_D allows us to learn a policy that performs best in expectation over models in the source domain distribution. However, this does not necessarily lead to a robust policy, since there could be high variability in performance for different models in the distribution. To explicitly seek a robust policy, we use a softer version of the max-min objective suggested in robust control, and optimize for the conditional value at risk (CVaR) (Tamar et al., 2015):

max_{θ, y} ∫_{F(θ)} η_M(θ, p) P(p) dp   s.t.   P(η_M(θ, P) ≤ y) = ε,          (2)

where F(θ) = {p | η_M(θ, p) ≤ y} is the set of parameters corresponding to models that produce the worst ε percentile of returns, and provides the limit for the integral; η_M(θ, P) is the random variable of returns, which is induced by the distribution over model parameters; and ε is a hyperparameter which governs the level of relaxation from the max-min objective. The interpretation is that (2) maximizes the expected return for the worst ε-percentile of MDPs in the source domain distribution. We adapt the previous policy gradient formulation to approximately optimize the objective in (2). The resulting algorithm, which we call EPOpt-ε, generalizes learning a policy using an ensemble of source MDPs which are sampled from a source domain distribution.

In Algorithm 1, R(τ_k) = Σ_{t=0}^{T−1} γ^t r_{t,k} denotes the discounted return obtained in trajectory sample τ_k. In line 7, we compute the ε-percentile value of returns from the N trajectories. In line 8, we find the subset of sampled trajectories which have returns lower than Q_ε. Line 9 calls one step of an underlying batch policy optimization subroutine on the subset of trajectories from line 8. For the CVaR objective, it is important to use a good baseline for the value function. Tamar et al. (2015) show that without a baseline, the resulting policy gradient is biased and not consistent. We use a linear function as the baseline with a time-varying feature vector to approximate the value function, similar to Duan et al. (2016). The parameters of the baseline are estimated using only the subset of trajectories with return less than Q_ε. We found that this approach led to empirically good results.

For small values of ε, we observed that using the sub-sampling step from the beginning led to unstable learning. Policy gradient methods adjust parameters of the policy to increase the probability of trajectories with high returns and reduce the probability of poor trajectories. EPOpt-ε, due to the sub-sampling step, emphasizes penalizing poor trajectories more. This might constrain the initial exploration needed to find good trajectories. Thus, we initially use a setting of ε = 1 for a few iterations before setting epsilon to the desired value. This corresponds to exploring initially to find promising trajectories and rapidly reducing the probability of trajectories that do not generalize.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229-256, 1992.

Kemin Zhou, John C. Doyle, and Keith Glover. Robust and Optimal Control. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1996. ISBN 0-13-456567-3.

A APPENDIX

Algorithm 1: EPOpt-ε for Robust Policy Search
1: Input: ψ, θ_0, n_iter, N, ε
2: for iteration i = 0, 1, 2, ..., n_iter do
3:     for k = 1, 2, ..., N do
4:         sample model parameters p_k ~ P_ψ
5:         sample a trajectory τ_k = {s_t, a_t, r_t, s_{t+1}}_{t=0}^{T−1} from M(p_k) using policy π(θ_i)
6:     end for
7:     compute Q_ε = ε-percentile of the returns {R(τ_k)}_{k=1}^{N}
8:     select the sub-set T = {τ_k : R(τ_k) ≤ Q_ε}
9:     update policy: θ_{i+1} = BatchPolOpt(θ_i, T)
10: end for

A.1 DESCRIPTION OF SIMULATED ROBOTIC TASKS CONSIDERED IN THIS WORK

Hopper: The hopper task is to make a 2D planar hopper with three joints and 4 body parts hop forward as fast as possible (Erez et al., 2011). This problem has a 12 dimensional state space and a 3 dimensional action space that corresponds to torques at the joints. We construct the source domain by considering a distribution over 4 parameters: torso mass, ground friction, armature (inertia), and damping of foot.

Half Cheetah: The half-cheetah task (Wawrzynski, 2009) requires us to make a 2D cheetah with two legs run forward as fast as possible. The simulated robot has 8 body links with an 18 dimensional state space and a 6 dimensional action space that corresponds to joint torques. Again, we construct the source domain using a distribution over the following parameters: torso and head mass, ground friction, damping, and armature (inertia) of foot joints.

Figure 5: Illustrations of the 2D simulated robot models used in the experiments. The hopper (a) and half-cheetah (b) tasks present the challenges of under-actuation and contact discontinuities. These challenges, when coupled with parameter uncertainties, lead to dramatic degradation in the quality of policies when robustness is not explicitly considered.

A video demonstration of the trained policies on these tasks can be viewed here: Supplementary video (https://youtu.be/w1YJ9vwaoto)

Reward functions: For both tasks, we used the standard reward functions implemented with OpenAI gym (Brockman et al., 2016), with minor modifications. The reward structure for the hopper task is:

r(s, a) = v_x − 0.001 ||a||² + b,

where s are the states comprising of joint positions and velocities; a are the actions (controls); v_x is the forward velocity; and b is a bonus for being alive (b = 1). The episode terminates when z_torso < 0.7 or when |θ_y| < 0.2, where θ_y is the forward pitch of the body.

Our implementation of the algorithms and environments is public in this repository to facilitate reproduction of results: https://github.com/aravindr93/robustRL

4 EXPERIMENTS

Supplementary video: https://youtu.be/w1YJ9vwaoto
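A minimal Python sketch of one EPOpt-ε iteration (Algorithm 1 above), with `rollout` and `batch_pol_opt` assumed as helpers wrapping trajectory sampling and one step of the underlying batch method (e.g., TRPO):

```python
import numpy as np

def epopt_iteration(theta, sample_params, rollout, batch_pol_opt,
                    N=240, epsilon=0.1):
    """One EPOpt-epsilon iteration: sample N model instances, roll out the
    current policy once on each, keep the worst epsilon-fraction of
    trajectories by discounted return, and take one policy-gradient step
    on that sub-set."""
    trajs = [rollout(theta, sample_params()) for _ in range(N)]
    returns = np.array([R for (_, R) in trajs])
    q_eps = np.percentile(returns, 100 * epsilon)   # epsilon-percentile cutoff
    subset = [tau for (tau, R) in trajs if R <= q_eps]
    return batch_pol_opt(theta, subset)
```

Setting `epsilon=1.0` recovers plain ensemble training (no adversarial sub-sampling), which is how the first iterations are run before switching to the desired ε.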
In line with model-based Bayesian RL, we can adapt the ensemble distribution after observing trajectory data from the target domain. The Bayesian update can be written as:

P(p | τ_k) = (1/Z) × P(p) × Π_{t=0}^{T−1} P(s_{t+1}^{(k)} | s_t^{(k)}, a_t^{(k)}, p),

where the transition probability P(s_{t+1} | s_t, a_t, p) is given by the model T_p(s_t, a_t).

We follow a sampling based approach to calculate the posterior, by sampling a set of model parameters [p_1, p_2, ..., p_m] from a sampling distribution P_S(p_i). Consequently, using Bayes rule and importance sampling, we have:

P̂(p_i | τ_k) ∝ L(τ_k | p_i) × P_P(p_i) / P_S(p_i),

where P_P(p_i) is the probability of drawing p_i from the prior distribution, and L(τ_k | p_i) is the likelihood of generating the observed trajectory with model parameters p_i. The weighted samples from the posterior can be used to estimate a parametric model, as we do in this paper. Alternatively, one could approximate the continuous probability distribution using discrete weighted samples like in the case of particle filters. In cases where the prior has very low probability density in certain parts of the parameter space, it might be advantageous to choose a sampling distribution different from the prior. The likelihood can be factored using the Markov property as: L(τ_k | p_i) = Π_t P(s_{t+1} = s_{t+1}^{(k)} | s_t^{(k)}, a_t^{(k)}, p_i). This simple model adaptation rule allows us to illustrate the utility of EPOpt for robust policy search, as well as its integration with model adaptation to learn policies in cases where the target model could be very different from the initially assumed distribution.

For the cheetah task, we use the reward function:

r(s, a) = v_x − 0.1 ||a||² + b,

where the alive bonus is 1 if the head of the cheetah is above −0.25 (relative to torso), and similarly the episode terminates if the alive condition is violated.

1. Neural network architecture: We used a neural network with two hidden layers, each with 64 units and tanh non-linearity. The policy updates are implemented using TRPO.
2. Trust region size in TRPO: The maximum KL divergence between successive policy updates is constrained to be 0.01.
3. Number and length of trajectory rollouts: In each iteration, we sample N = 240 models from the ensemble; one rollout is performed on each such model. This was implemented in parallel on multiple (6) CPUs. Each trajectory is of length 1000 - same as the standard implementations of these tasks in gym and rllab.

The results in Fig. 1 and Fig. 2 were generated after 150 and 200 iterations of TRPO respectively, with each iteration consisting of 240 trajectories as specified in (3) above.

Figure 2 illustrates the performance of the three considered policies: viz. TRPO on mean parameters, EPOpt(ε = 1), and EPOpt(ε = 0.1). We similarly analyze the 10th percentile of the return distribution as a proxy for worst-case analysis, which is important for a robust control policy (here, the distribution of returns for a given model instance is due to variations in initial conditions). The corresponding results are presented below:
Figure 6: 10th percentile of return distribution for the hopper task. EPOpt(ε = 0.1) clearly outperforms the other approaches. The 10th percentile of the return distribution for EPOpt(ε = 0.1) also nearly overlaps with the expected return, indicating that the policies trained using EPOpt(ε = 0.1) are highly robust and reliable.

In all the comparisons, performance refers to the average undiscounted return per trajectory or episode (we consider finite horizon episodic problems). In addition to the previously defined performance, we also use the 10th percentile of the return distribution as a proxy for the worst-case return.

4.1 COMPARISON TO STANDARD POLICY SEARCH

In Figure 1, we evaluate the performance of standard TRPO and EPOpt(ε = 0.1) on the hopper task, in the presence of a simple parametric discrepancy in the physics of the system between the training (source) and test (target) domains. The plots show the performance of various policies on test domains with different torso mass. The first three plots show policies that are each trained on a single torso mass in the source domain, while the last plot illustrates the performance of EPOpt, which is trained on a Gaussian mass distribution. The results show that no single torso mass value produces a policy that is successful in all target domains. However, the EPOpt policy succeeds almost uniformly for all tested mass values. Furthermore, the results show that there is almost no degradation in the performance of EPOpt for any mass setting, suggesting that the EPOpt policy does not suffer substantially from adopting a more robust strategy.

Figure 1: Performance of hopper policies when testing on target domains with different torso masses. The first three plots (blue, green, and red) show the performance of policies trained with TRPO on source domains with torso mass 3, 6, and 9, respectively (denoted by m = in the legend). The rightmost plot shows the performance of EPOpt(ε = 0.1) trained on a Gaussian source distribution with mean mass μ = 6 and standard deviation σ = 1.5. The shaded regions show the 10th and 90th percentile of the return distribution. Policies trained using traditional approaches on a single mass value are unstable for even slightly different masses, making the hopper fall over when trying to move forward. In contrast, the EPOpt policy is stable and achieves a high level of performance on the entire range of masses considered. Further, the EPOpt policy does not suffer from degradation in performance as a consequence of adopting a more robust policy.

A.4 ROBUSTNESS ANALYSIS FOR HALF-CHEETAH TASK

Figure 7: Performance of policies for various model instances for the half-cheetah domain, similar to Figure 2. Again, it is observed that the adversarially trained policy is robust and generalizes well to all models in the source distribution.

high dimensionality, and contact discontinuities make these tasks challenging reinforcement learning benchmarks. These challenges, when coupled with systematic parameter discrepancies, can quickly degrade the performance of policies and make them unstable, as we show in the experiments. The batch policy optimization sub-routine is implemented using TRPO. We parametrize the stochastic policy using the scheme presented in Schulman et al. (2015). The policy is represented with a Gaussian distribution, the mean of which is parametrized using a neural network with two hidden layers. Each hidden layer has 64 units, with a tanh non-linearity, and the final output layer is made of linear units. Normally distributed independent random variables are added to the output of this neural network, and we also learn the standard deviation of their distributions. Our experiments are aimed at answering the following questions:

1. How does the performance of standard policy search methods (like TRPO) degrade in the presence of systematic physical differences between the training and test domains, as might be the case when training in simulation and testing in the real world?
2. Does training on a distribution of models with EPOpt improve the performance of the policy when tested under various model discrepancies, and how much does ensemble training degrade overall performance (e.g. due to acquiring a more conservative strategy)?
3. How does the robustness of the policy to physical parameter discrepancies change when using the robust EPOpt-ε variant of our method?
4.
Can EPOpt learn policies that are robust to unmodeled effects - that is, discrepancies in physical parameters between source and target domains that do not vary in the source domain ensemble?
5. When the initial model ensemble differs substantially from the target domain, can the ensemble be adapted efficiently, and how much data from the target domain is required for this?

Figure 2: On the left is an illustration of the simulated 2D hopper task studied in this paper. On the right, we depict the performance of policies for various model instances of the hopper task. The performance is depicted as a heat map for various model configurations, parameters of which are given on the x and y axes. The adversarially trained policy, EPOpt(ε = 0.1), is observed to generalize to a wider range of models and is more robust.

A.5 DIFFERENT SETTINGS FOR ε

Here, we analyze how different settings for ε influence the robustness of learned policies. The policies in this section have been trained for 200 iterations with 240 trajectory samples per iteration. Similar to the description in Section 3.1, the first 100 iterations use ε = 1, and the final 100 iterations use the desired ε. The source distribution is described in Table 1. We test the performance on a grid over the model parameters. Our results, summarized in Table 2, indicate that decreasing ε decreases the variance in performance, along with a small decrease in average performance, and hence enhances robustness.

Table 2: Performance statistics for different ε settings for the hopper task. The rightmost columns give the 5th through 90th percentiles of the return distribution.

ε        mean   std    5th   10th  25th  50th  75th  90th
0.05     2889   502    1662  2633  2841  2939  2966  3083
0.1      3063   579    1618  2848  3223  3286  3336  3396
0.2      3097   665    1527  1833  3259  3362  3423  3483
0.3      3121   706    1461  1635  3251  3395  3477  3513
0.4      3126   869    1013  1241  3114  3412  3504  3546
0.5      3122   1009   984   1196  1969  3430  3481  3567
0.75     3133   952    1005  1516  2187  3363  3486  3548
1.0      3224   1060   1198  1354  1928  3461  3557  3604
Max-Lik  1710   1140   352   414   646   1323  3088  3272
A.6 IMPORTANCE OF BASELINE FOR BATCHPOLOPT

As described in Section 3.1, it is important to use a good baseline estimate for the value function for the batch policy optimization step. When optimizing for the expected return, we can interpret the baseline as a variance reduction technique. Intuitively, policy gradient methods adjust parameters of the policy to improve the probability of trajectories in proportion to their performance. By using a baseline for the value function, we make updates that increase the probability of trajectories that perform better than average and vice versa. In practice, this variance reduction is essential for getting policy gradients to work. For the CVaR case, Tamar et al. (2015) showed that without using a baseline, the policy gradient is biased. To study the importance of the baseline, we first consider the case where we do not employ the adversarial sub-sampling step, and fix ε = 1. We use a linear baseline with a time-varying feature vector as described in Section 3.1. Figure 8(a) depicts the learning curve for the source distribution in Table 1. The results indicate that use of a baseline is important to make policy gradients work well in practice.

Next, we turn to the case of ε < 1. As mentioned in Section 3.1, setting a low ε from the start leads to unstable learning. The adversarial nature encourages penalizing poor trajectories more, which constrains the initial exploration needed to find promising trajectories. Thus we will "pre-train" by using ε = 1 for some iterations, before switching to the desired ε setting. From Figure 8(a), it is clear that pre-training without a baseline is unlikely to help, since the performance is poor. Thus, we use the following setup for comparison: for 100 iterations, EPOpt(ε = 1) is used with the baseline. Subsequently, we switch to EPOpt(ε = 0.1) and run for another 100 iterations, totaling 200 iterations. The results of this experiment are depicted in Figure 8(b). This result indicates that use of a baseline is crucial for the CVaR case, without which the performance degrades very quickly. We repeated the experiment with 100 iterations of pre-training with ε = 1 and without baseline, and observed the same effect. These empirical results reinforce the theoretical findings of Tamar et al. (2015).

Table 1: Initial source domain distribution.

Hopper           μ      σ     low   high
mass             6.0    1.5   3.0   9.0
ground friction  2.0    0.25  1.5   2.5
joint damping    2.5    1.0   1.0   4.0
armature         1.0    0.25  0.5   1.5

Half-Cheetah     μ      σ     low   high
mass             6.0    1.5   3.0   9.0
ground friction  0.5    0.1   0.3   0.7
joint damping    1.5    0.5   0.5   2.5
armature         0.125  0.04  0.05  0.2

As emphasized previously, EPOpt is a generic policy gradient based meta-algorithm for finding robust policies. The BatchPolOpt step (line 9, Algorithm 1) calls one gradient step of a policy gradient method, the choice of which is largely orthogonal to the main contributions of this paper.
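A sketch of such a linear baseline with a time-varying feature vector follows. The particular features (observations, their squares, and powers of normalized time) are a common rllab-style choice and an assumption here, not necessarily the exact parametrization used in this work:

```python
import numpy as np

def fit_linear_baseline(paths):
    """Fit a linear value-function baseline on a set of trajectories.
    Each path is a dict with "obs" of shape (T, d) and "returns" of shape (T,).
    Returns a callable mapping observations to predicted values."""
    def features(obs):
        T = obs.shape[0]
        t = np.arange(T)[:, None] / 100.0  # normalized timestep
        return np.concatenate([obs, obs ** 2, t, t ** 2, t ** 3,
                               np.ones((T, 1))], axis=1)
    X = np.concatenate([features(p["obs"]) for p in paths])
    y = np.concatenate([p["returns"] for p in paths])
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares fit
    return lambda obs: features(obs) @ w
```

For the CVaR objective, this fit would be computed only on the sub-set of trajectories with return below Q_ε, as described in Section 3.1.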
Figure 3: Comparison between policies trained on a fixed maximum-likelihood model with mass (6), and an ensemble where all models have the same mass (6) and other parameters varying as described in Table 1.

4.3 ROBUSTNESS TO UNMODELED EFFECTS

To analyze the robustness to unmodeled effects, our next experiment considers the setting where the source domain distribution is obtained by varying friction, damping, and armature as in Table 1, but does not consider a distribution over torso mass. Specifically, all models in the source domain distribution have the same torso mass (value of 6), but we will evaluate the policy trained on this distribution on target domains where the torso mass is different. Figure 3 indicates that the EPOpt(ε = 0.1) policy is robust to a broad range of torso masses even when its variation is not considered. However, as expected, this policy is not as robust as the case when mass is also modeled as part of the source domain distribution.

4.4 MODEL ADAPTATION

The preceding experiments show that EPOpt can find robust policies, but the source distribution in these experiments was chosen to be broad enough such that the target domain is not too far from high-density regions of the distribution. However, for real-world problems, we might not have the domain knowledge to identify a good source distribution in advance. In such settings, model (source) adaptation allows us to change the parameters of the source distribution using data gathered from the target domain. Additionally, model adaptation is helpful when the parameters of the target domain could change over time, for example due to wear and tear in a physical system. To illustrate model adaptation, we performed an experiment where the target domain was very far from the high density regions of the initial source distribution, as depicted in Figure 4(a). In this experiment, the source distribution varies the torso mass and ground friction. We observe that progressively, the source distribution becomes a better approximation of the target domain and consequently the performance improves. In this case, since we followed a sampling based approach, we used a uniform sampling distribution, and weighted each sample with the importance weight as described in Section 3.2. Eventually, after 10 iterations, the source domain distribution is able to accurately match the target domain. Figure 4(b) depicts the learning curve, and we see that a robust policy with return more than 2500, which roughly corresponds to a situation where the hopper is able to move forward without falling down for the duration of the episode, can be discovered with just 5 trajectories from the target domain. Subsequently, the policy improves near monotonically, and EPOpt finds a good policy with just 11 episodes worth of data from the target domain. In contrast, to achieve the same level of performance on the target domain, completely model-free methods like TRPO would require more than 2 × 10⁴ trajectories when the neural network parameters are initialized randomly.
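A sketch of this importance-weighted posterior update, fitting a Gaussian source distribution to the weighted samples. Log-space weighting is used for numerical stability, and all function and argument names are assumptions:

```python
import numpy as np

def adapt_source_distribution(samples, log_liks, log_prior, log_sampling):
    """Importance-weighted posterior update of the source distribution.

    samples:      (m, d) model parameters drawn from the sampling distribution
    log_liks[i]:  log L(tau | p_i) for the observed target-domain trajectory
    log_prior:    (m,) log P_P(p_i) under the current prior
    log_sampling: (m,) log P_S(p_i) under the sampling distribution

    Weights w_i are proportional to L(tau|p_i) * P_P(p_i) / P_S(p_i); the new
    Gaussian (mean, cov) is fit to the weighted samples."""
    logw = log_liks + log_prior - log_sampling
    w = np.exp(logw - logw.max())  # shift for numerical stability
    w /= w.sum()
    mean = w @ samples
    centered = samples - mean
    cov = (centered * w[:, None]).T @ centered
    return mean, cov
```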
Figure 9: Learning curves for EPOpt(ε = 1) when using the TRPO and REINFORCE methods for the BatchPolOpt step.

For the reported results, we have used TRPO as the policy gradient method. Here, we compare the results to the case when using the classic REINFORCE algorithm. For this comparison, we use the same value function baseline parametrization for both TRPO and REINFORCE. Figure 9 depicts the learning curve when using the two policy gradient methods. We observe that performance with TRPO is significantly better. When optimizing over probability distributions, the natural gradient can navigate the warped parameter space better than the "vanilla" gradient. This observation is consistent with the findings of Kakade (2001), Schulman et al. (2015), and Duan et al. (2016).

Figure 4: (a) Visualizes the source distribution during model adaptation on the hopper task, where mass and friction coefficient are varied in the source domain. The red cross indicates the unknown parameters of the target domain. The contours in the plot indicate the distribution over models (we assume a Gaussian distribution). Lighter colors and more concentrated contour lines indicate regions of higher density. Each iteration corresponds to one round (episode) of interaction with the target domain. The high-density regions gradually move toward the true model, while maintaining probability mass over a range of parameters which can explain the behavior of the target domain. Figure 4(b) presents the corresponding learning curve, where the shaded region describes the 10th and 90th percentiles of the performance distribution, and the solid line is the average performance.

5 RELATED WORK

Robust control is a branch of control theory which formally studies development of robust policies (Zhou et al., 1996; Nilim & Ghaoui, 2005; Lim et al., 2013). However, typically no distribution over source or target tasks is assumed, and a worst case analysis is performed. Most results from this field have been concentrated around linear systems or finite MDPs, which often cannot adequately model complexities of real-world tasks.
The set-up of model-based Bayesian RL maintains a belief over models for decision making under uncertainty (Vlassis et al., 2012; Ghavamzadeh et al., 2015). In Bayesian RL, through interaction with the target domain, the uncertainty is reduced to find the correct or closest model. Application of this idea in its full general form is difficult, and requires either restrictive assumptions like finite MDPs (Poupart et al., 2006), Gaussian dynamics (Ross et al., 2008), or task-specific innovations. Previous methods have also suggested treating uncertain model parameters as unobserved state variables in a continuous POMDP framework, and solving the POMDP to get an optimal exploration-exploitation trade-off (Duff, 2003; Porta et al., 2006). While this approach is general, and allows automatic learning of epistemic actions, extending such methods to large continuous control tasks like those considered in this paper is difficult.
Risk-sensitive RL methods (Delage & Mannor, 2010; Tamar et al., 2015) have been proposed to act as a bridge between robust control and Bayesian RL. These approaches allow for using subjective model belief priors, prevent overly conservative policies, and enjoy some strong guarantees typically associated with robust control. However, their application in high-dimensional continuous control tasks has not been sufficiently explored. We refer readers to Garcia & Fernandez (2015) for a survey of related risk-sensitive RL methods in the context of robustness and safety.
Learning of parametrized skills (da Silva et al., 2012) is also concerned with finding policies for a distribution of parametrized tasks. However, this is primarily geared towards situations where task parameters are revealed during test time. Our work is motivated by situations where target task parameters (e.g., friction) are unknown. A number of methods have also been suggested to reduce sample complexity when provided with either a baseline policy (Thomas et al., 2015; Kakade & Langford, 2002), expert demonstration (Levine & Koltun, 2013; Argall et al., 2009), or an approximate simulator (Tamar et al., 2012; Abbeel et al., 2006). These are complementary to our work, in the sense that our policy, which has good direct-transfer performance, can be used to sample from the target domain, and other off-policy methods could be explored for policy improvement."}, {"section_index": "17", "section_name": "6 CONCLUSIONS AND FUTURE WORK", "section_text": "In this paper, we presented the EPOpt-e algorithm for training robust policies on ensembles of source domains. Our method provides for training of robust policies, and supports an adversarial training regime designed to provide good direct-transfer performance. We also describe how our approach can be combined with Bayesian model adaptation to adapt the source domain ensemble to a target domain using a small amount of target domain experience. Our experimental results demonstrate that the ensemble approach provides for highly robust and generalizable policies in fairly complex simulated robotic tasks. Our experiments also demonstrate that Bayesian model adaptation can produce distributions over models that lead to better policies on the target domain than more standard maximum likelihood estimation, particularly in the presence of unmodeled effects."}]
B1gtu5ilg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The power of the human mind in inference and generalization rests on our brain's ability to develo models of abstract knowledge of the natural world (Tenenbaum et al.]2011). When shown nove objects, both children and adults can rapidly generalize from just a few examples to classify and grou them based on their perceptual similarity. Understanding the processes that give rise to perceptua similarity will provide insight into the development of abstract models in our brain. In this paper, w explored computational models for understanding the neural basis of human perceptual similarit judgment.\nWe then average over all the categories to get a score for each network. The higher the score is, the. larger the inter-object distance is compared to intra object distance and the more closely different views of the same object are grouped together. In the experiment with the novel instance test set. OPnet's score is 0.535 whereas AlexNet's is 0.328, showing the different views of the same object. are more similar than that between different objects due to the object persistence constraint..\nRecent deep convolutional neural networks (DCNNs) have produced feature representations in the . hidden layers that can match well with neural representations observed in the primate and human. visual cortex. It was found that there is a strong correspondence between neural activities (neuronal spikes or fMRI signals) and the activities of the deep layers of deep networks (Agrawal et al.|2014. Khaligh-Razavi & Kriegeskorte2014] Yamins et al.2014), suggesting that deep neural networks have in fact learned meaningful representations that are close to humans', even though the neural.\nIn this work, we fine-tune AlexNet with object persistence constraints in the framework of distance metric learning with a Siamese triplet. This fine-tuning modifies the view-manifold of the objec"}, {"section_index": "1", "section_name": "TRANSFER OF VIEW-MANIFOLD LEARNING TO SIMI LARITY PERCEPTION OF NOVEL OBJECTS", "section_text": "Zhihao Li, Yimeng Zhang\nDepartment of Computer Science Carnegie Mellon University. OA\nzhihaol, yimengzh}@andrew.cmu.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "1 1|c-x||2 |Si Ointer_instance corei C 1 Ointra_instance 1 x- c |Sil c cESi xEc\nnetworks are trained for object classification in computer vision. Cognitive neuroscientists have. started to explore how the representations learned by deep networks can be used to model various aspects of human perception such as memorability of objects in images (Dubey et al.]2015), objec1. typicality (Lake et al.]2015), and similarity judgment (Peterson et al.]2016[Kubilius et al.] 2016 Certain correspondence between deep net representations and human experimental results are found. In particular, Peterson et al.(2016) found that human similarity judgment on a set of natural images. might be similar to the feature representations in deep networks after some transformation..\nrepresentation, bringing closer together the representations of an object in different views, driving. apart representations of different objects in the same category, resulting in better intra-categorica object recognition, without compromising inter-categorical discrimination. We investigated whethe. this view-manifold learning results in an improvement in the network's ability to recognize th. 
similarity of novel objects that have never been seen before by performing instance and categorical image retrieval on artificial novel objects or novel object classes, including a set tested in human similarity judgment. Interestingly, we find that AlexNet, with its rich feature representations, already performs similarity judgment significantly above chance, in the sense that different views of the same object are considered more similar than the views of another object in the same category, and objects in the same category are considered to be more similar than objects in different categories. Fine-tuning with the object persistence constraint significantly improves this "similarity judgment" among a variety of novel objects, suggesting that the view-manifold learning in the OPnet is accompanied by feature embeddings with more general and abstract attributes that are transferable, likely at the level of local object parts.
The DCNNs that neuroscientists and cognitive scientists have studied so far, such as AlexNet (Krizhevsky et al., 2012), were trained with static images with the goal of classifying objects in static images into different categories. Perceptual similarity judgment is obviously closely related to the mechanisms used in object classification---we classify objects with similar attributes and appearances into the same class, and thus object classification rests in part on our perceptual similarity judgment and relies on physical, semantic abstract attributes common to objects in each class. Our perceptual similarity judgment might also be tied to our need for individual object recognition---after all, we might want to recognize an individual person or object, not just a class. It is obviously important to be able to recognize one's own child or the cup one is using. The need to recognize an individual object, independent of viewpoints, requires fine discrimination of details, and might also be a very potent force for shaping our perceptual similarity judgment's machinery.
From a technical point of view, our OPnet performs better than earlier approaches (Li et al., 2015) in instance and categorical retrieval of novel objects. We have tested our approach with a real image database (Geusebroek et al., 2005) and found it only yields a slight improvement over AlexNet. That database contains 1000 objects with different views but without categorical labels. OPnet's superiority over AlexNet lies in its better discrimination of objects within the same category. When objects are not organized in categories, i.e. when each object is essentially treated as a category, OPnet loses its advantages. In addition, there are more complex variations such as lighting and scale in real scene environments that our current OPnet has not considered. We plan to develop this model to discount additional nuisance variables and to develop or find a database to explore the transferability of its view-manifold learning in more general settings.
Our work was motivated by our hypothesis that the object persistence/continuity constraint in our visual experience might play a role in the development of neural representations that shape our similarity judgment of objects that we have not seen before. The fact that fine-tuning AlexNet with this additional constraint automatically yields a new view-manifold that matches human similarity judgment data better than AlexNet lends some support to our hypothesis.
However, more extensive testing with human perception ground-truth will be needed to fully confirm our hypothesis."}, {"section_index": "3", "section_name": "ACKNOWLEDGMENTS", "section_text": "
We retrain a DCNN with object persistence constraints, using rendered 3D objects. We call this retrained network Object Persistence Net (OPnet). During training, we utilize a Siamese network architecture for incorporating object persistence constraints into the network. We demonstrated that multi-view association training with a relatively small set of objects directly affects similarity judgment across many classes of objects, including novel objects that the network has not seen before. Our contribution is to demonstrate the surprising transfer of learning of similarity judgment to untrained classes of objects and a variety of completely artificial novel objects. We analyze the view-manifold fine-tuned with object persistence constraints to understand what changes have taken place in the feature representation of the OPnet that has resulted in the development of this remarkable transfer of perceptual similarity judgment to novel objects.
Xingyu Lin and Hao Wang were supported by the PKU-CMU summer internship program. This work is supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00007. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
We thank Kalina Ko for helping us to construct part of the synthesized object database.
Creating large sets of human-labeled data on object similarity judgment is expensive. There has been a recent trend in exploring inherent information as a supervisory signal, including using cycle consistency for learning dense correspondence (Zhou et al., 2015), camera motion for foreground segmentation (Zeng et al., 2016), and context information (Doersch et al., 2015). Among these, most related to our study is the work of Wang & Gupta (2015) utilizing visual tracking results as supervisory signals, which is an object persistence or continuity assumption, to learn deep networks without explicit object labels. While the tracked patches can be partly regarded as multi-view images, the changes in views tend to be very limited. In comparison, we used graphics-rendered multi-view images as the object persistence constraint. Such a clean setup is necessary for us to study the effect of the object persistence constraint on novel objects, as well as the transferability of view-manifold learning to similarity perception.
The development of invariant object recognition has often been attributed to object continuity or persistence in our visual experience. When we see an object, we tend to see it from different angles over time, as we walk by or around it, or directly manipulate it. This temporal persistence of objects allows our visual system to associate one view of an object with another view of the same object experienced in temporal proximity, as were proposed in slow-feature analysis (Wiskott & Sejnowski, 2002) or memory trace models (Perry et al.,
2006) in computational neuroscience for learning translation and rotation invariance in object recognition. Object persistence as a term in psychology sometimes refers to people's knowledge or belief in the continual existence of an object even when it is occluded and invisible from view. Here, we use it more generally to denote the temporal persistence of an object in our visual experience. We propose to incorporate the object continuity or persistence constraint in the training of DCNNs, and investigate what new abstraction and capability such a network would develop as a consequence. We also evaluate the behaviors of the resulting network to see if they match the data on human perceptual similarity judgment of novel objects in an earlier study (Tenenbaum et al., 2011).
Recent approaches in representation learning of 3D shapes are also related to our work. Generative models such as Wu et al. (2016) and Tatarchenko et al. (2015) learn a vector representation for generation of 3D shapes. Other approaches learn an embedding space for multi-view object retrieval"}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "
[Figure 1 schematic: a Siamese triplet with shared weights and a rank loss layer, trained on multi-view renderings from 3D models under object persistence, and example test-time similarity scores (A=0.83, B=0.41, C=0.31, D=0.23, E=0.14, F=0.07).]
Figure 1: Framework for training and testing the network utilizing object persistence. For training (upper panel) we first render multiple views for each object and arrange them into triplets containing a similar pair and a dissimilar pair as input to a Siamese network architecture. For testing (lower panel), when given a query image the network computes a similarity score for each of the candidate images. The lower panel shows some example similarity scores given by our OPnet, where different views of the same object are considered the most similar, followed by different objects in the same category, and finally those objects belonging to different categories, of least similarity with the query image.
Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 539-546. IEEE, 2005.
(Guo et al., 2016) or for cross-view image and shape retrieval (Li et al., 2015). While these works explored training with multi-view images, they did not constrain the viewpoints in a continuous way, and most importantly, the transferability to judgment of novel objects of novel classes was not studied. We evaluate the performance of the approach of Li et al. (2015) in our tasks for comparison. That approach learned an embedding space of 3D shapes and used a CNN for image embedding for the purpose of image purification.
We take a standard CNN (AlexNet) that has already learned good feature representations for object classification, and retrain the network in a Siamese triplet architecture with object persistence constraints, using multi-view images rendered from a set of 3D object models in ShapeNet."}, {"section_index": "5", "section_name": "2.1 OBJECT PERSISTENT NET (OPNET)", "section_text": "
Day S. B. Goldstone, R. L. Similarity. The Encyclopedia of Mind, pp. 696-699, 2013.
$$\min_W \frac{\lambda}{2}\|W\|_2^2 + \sum_{i=1}^{N} \max\{0,\, D(X_i, X_i^+) - D(X_i, X_i^-) + M\}$$
$$D(X_1, X_2) = 1 - \frac{f(X_1) \cdot f(X_2)}{\|f(X_1)\|\,\|f(X_2)\|}$$
where λ is the weight decay and W denotes the weights of the network.
f(·) is the CNN representation output as a function of an input image, and M denotes the margin parameter. The margin is a threshold
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105, 2012.
Jane Bromley, James W. Bentz, Leon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Sackinger, and Roopak Shah. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.
Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 886-893. IEEE, 2005.
J. Deng, W. Dong, R. Socher, L. J. Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248-255, June 2009. doi: 10.1109/CVPR.2009.5206848.
Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422-1430, 2015.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, pp. 647-655, 2014.
To study the impact of the object persistence constraint in the development of perceptual similarity judgment, OPnet utilizes a Siamese triplet architecture. This triplet architecture can be visualized as three baseline CNN towers that share the same parameters (Figure 1). In implementation, it is just one single CNN applied to three images, two of which are considered more "similar" than the third, "different" one. Conceptually, our OPnet tries to bring the feature representations of the two "similar" images together, and drive apart the representation corresponding to the third "different" image. The architecture and the initial weights of the baseline CNN are the same as those of AlexNet trained on ImageNet (Deng et al., 2009). To train our OPnet with triplet input (X_i, X_i^+, X_i^-), we present two views of the same 3D object to two base networks as (X_i, X_i^+), and a view of a different object to the third base network as X_i^-. Object persistence means that given (X_i, X_i^+, X_i^-), we try to push the representations for views of the same object (X_i, X_i^+) to be close and move them away from the representation for the different object X_i^-. We minimize the loss function with a hinge loss term:
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675-678. ACM, 2014.
Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
to decide whether the two views are considered similar or not.
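The hinge objective above is compact enough to state directly in code. The following numpy sketch is illustrative (the names are ours, not the paper's Caffe implementation); anchor, positive and negative stand for mini-batches of fc7 feature vectors from the shared towers, and the λ/2 ||W||² term is assumed to be handled by the optimizer's weight decay.

```python
import numpy as np

def cosine_distance(a, b):
    # D(X1, X2) = 1 - cos(f(X1), f(X2)), computed row-wise over a batch.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - np.sum(a * b, axis=1)

def triplet_hinge_loss(anchor, positive, negative, margin=0.1):
    # max{0, D(X, X+) - D(X, X-) + M}, summed over the mini-batch.
    d_pos = cosine_distance(anchor, positive)
    d_neg = cosine_distance(anchor, negative)
    return np.maximum(0.0, d_pos - d_neg + margin).sum()
```

With margin=0.1 (the cross-validated value reported below), the loss is zero exactly when every positive pair is closer than its negative pair by at least the margin, which is what drives the view-manifold of each object together.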
The higher the margin, the more we are forcing the network to develop a uniform representation for multiple views of the same object relative to views of another object. D is the cosine distance function for a pair of features.
The different objects in principle could be from the same category or from different categories. During training, we constrain the "different object" to be another 3D object from the same category, to push apart more forcefully the feature representations of objects from the same category, resulting in view-invariant object discrimination within the same category. We expect the result of this training to create a view-manifold for each individual object---views within the same manifold are considered to be "similar" and closer together because they belong to the same object.
Yangyan Li, Hao Su, Charles Ruizhongtai Qi, Noa Fish, Daniel Cohen-Or, and Leonidas J. Guibas. Joint embeddings of shapes and images via cnn image purification. ACM Trans. Graph., 2015."}, {"section_index": "6", "section_name": "2.2 DISTANCE METRIC LEARNING", "section_text": "
Francisco Massa, Bryan Russell, and Mathieu Aubry. Deep exemplar 2d-3d detection by adapting from real to rendered views. arXiv preprint arXiv:1512.02497, 2015.
DCNNs, such as AlexNet, pre-trained on large datasets, have developed useful feature representations that can be fine-tuned for other specific tasks (Donahue et al., 2014; Qian et al., 2015; Karpathy et al., 2014). However, the pre-training of a DCNN involves class labels as teaching signals. During pretraining, the network learns to throw away much information to extract invariants for classification. On the other hand, DML approaches are able to develop feature representations that preserve more fine-grained features, as well as intra- and inter-class variations.
Joshua B. Tenenbaum, Charles Kemp, Thomas L. Griffiths, and Noah D. Goodman. How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022):1279-1285, 2011.
To allow the network to learn features under the object persistence constraints and develop a similarity judgment that can transfer, we create one set of data for training and five sets of novel objects for testing of the transferability. To focus our study on the network's ability to perceive 3D spatial relations and features of individual objects, we grayscale our images during rendering to eliminate the impact of color. For the same reason, we do not add any backgrounds.
We render multi-view images of individual objects from 7K 3D CAD models of objects in ShapeNet (Chang et al., 2015). The 7K models belong to 55 categories, such as cars and chairs. For each model, we render 12 different views by rotating the cameras along the equator at a 30° elevation angle and taking photos of the object at 12 equally separated azimuthal angles (see Fig. 1). We use the rendering pipeline in Blender, an open-source 3D graphics software, with a spotlight that is static relative to the camera.
For training, we sample 200 object models from 29 categories of ShapeNet. 20 of these object models from each category are saved for cross-validation. For testing, we make the assumptions that (1) views of the same object are perceived to be more similar when compared to views of a different object, and (2) views of objects in the same category are perceived to be more similar than views of objects from different categories. These assumptions are consistent with findings in earlier studies on
similarity judgment in humans (Quiroga et al., 2005; Erdogan et al., 2014; Goldstone, 2013). Since we render images based on CAD models, we can control the variations to create a large dataset that can approximate ground-truth data for similarity judgment for our experiments without resorting to large-scale human judgment evaluation. All the objects in the following five test sets are novel objects in the sense that they are not used in training.
Our Siamese triplet approach transforms the view-manifold of the original baseline network, so that different views of the same object are considered similar and become closer in the feature representation space. Thus, it can be viewed as a form of distance metric learning (DML), which is a set of methods that learn a transformation from the input space to a feature space. The Siamese network has been a popular distance metric learning method, used in signature verification (Bromley et al., 1993), learning invariant mappings (Hadsell et al., 2006), face verification (Chopra et al., 2005), unsupervised learning (Wang & Gupta, 2015), and image similarity ranking (Wang et al., 2014). In these works, the definition of similarity for DML comes from semantic labeling like class labels. In our work, the similarity is defined by the object persistence constraints, obtained during the rendering of 3D models and providing a continuous trajectory for each single object. Besides, the large variation of the 2D appearance induced by 3D rotation prevents our network from learning trivial global templates, but induces it to learn features that are more generalized and thus transferable more easily to novel objects.
Gavin Perry, Edmund T. Rolls, and Simon M. Stringer. Spatial vs temporal continuity in view invariant visual object recognition learning. Vision Research, 46, November 2006.
Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. Learning fine-grained image similarity with deep ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1386-1393, 2014.
Novel instance: Created by rendering additional 20 novel objects from each of the 29 categories used in training the OPnet. This is used to test the transfer of view-manifold learning to novel objects of the same category. The task is not trivial due to the large intra-category variation existing in ShapeNet.
APPENDIX A EXAMPLES OF SOME TOP RANKING RESULTS
Novel category: Created by rendering objects from 26 untrained categories. This is a more challenging test of the transfer of view-manifold learning to novel categories.
Synthesized objects: Created by rendering a set of 3D models we synthesized. These are textureless objects with completely novel shapes. The dataset consists of 5 categories, with 10 instances for each category. Within each category, the objects either have similar local parts, or have the same global configuration, based on human judgment. This is an even more challenging test, as these synthesized objects are not in ImageNet or ShapeNet.
Pokemon: Created from 3D models of the Pokemon dataset. Pokemons are cartoon characters with certain evolution relationships with each other, which provides an alternative measurement of similarity. This test evaluates the transfer of learning to novel objects with different styles and more complicated textures. We collected 438 CAD models of Pokemon from an online database. We divide these models into 251 categories according to their evolution relationships, with most of these categories containing only 2 to 4 objects. Pokemons of the same category look more similar on average due to their "genetic linkage"."}, {"section_index": "7", "section_name": "APPENDIX B INSTANCE RETRIEVAL RESULTS USING FEATURES FROM DIFFERENT LAYERS", "section_text": "
The similarity score between a query image and a candidate image is computed as 1 minus the cosine distance of the feature representations of the query and candidate pair, and a higher score means higher similarity. Given a test set containing objects of multiple categories, we evaluate the OPnet via two retrieval tasks: object instance retrieval and categorical retrieval. In the object instance retrieval task, for each image P containing object O of category C in the test set, the network is asked to rank all other images in C, such that images for O should have higher similarity scores than images for other objects in C. In the categorical retrieval task, for each image P of category C, the network is asked to rank all other images, such that images in category C should have higher scores than images not in C. Here we are indirectly utilizing the human perception information, as categories are defined by human perception based on their similarity in shapes or functions.
As shown in many works (Massa et al., 2015; Aubry & Russell, 2015), features from different layers sometimes perform differently for a given task. For the instance retrieval task on the novel instance dataset of ShapeNet, we compare OPnet and AlexNet using features from different layers, as shown in Figure 8. The accuracy of AlexNet is pretty flat up to conv3, and then keeps increasing until layer fc8, where the feature becomes a categorical probability and is not appropriate for instance-level discrimination. On the other hand, the object persistence training gives a significant increase in accuracy in the fully connected layers."}, {"section_index": "8", "section_name": "2.5 IMPLEMENTATION DETAILS", "section_text": "
[Figure 8 plot: mean average precision vs. network layer (data, conv1, pool1, norm1, conv2, pool2, conv3, conv4, conv5, and the fully connected layers) for AlexNet+CosDis and OPnet.]
We use Caffe (Jia et al., 2014) for training the networks. The base network of the OPnet is modified from the AlexNet architecture, where we drop the last fully connected layer (fc8) and replace the softmax loss with our triplet hinge loss. The network is initialized by weights pre-trained on ImageNet. The objective is optimized using mini-batch stochastic gradient descent (SGD) and we fine-tune the network for all layers. For each pair of positive examples (X_i, X_i^+), we select two hard negative examples X_i^- which give the highest loss (similar to Wang & Gupta (2015)) and another two randomly from within the mini-batch. Starting with a learning rate of 0.01, we decrease it by a factor of 10 every 8K iterations, with a momentum of 0.9. We stop the training at 20K iterations. Weight decay is set to 0.0005. We set the margin parameter M to 0.1 by cross-validation.
We compare the HoG feature representation (Dalal & Triggs, 2005) and four deep learning networks: 1) OPnet, 2) AlexNet pre-trained on ImageNet, 3) an AlexNet fine-tuned for classification on ShapeNet data, denoted as "AlexNetFT", 4) the joint embedding model by Li et al. (2015).
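As a small aside on the rendering geometry described in the previous subsection (12 equally spaced azimuths at a fixed 30° elevation), the camera centres are straightforward to generate. The numpy sketch below is illustrative only; the camera radius is an assumed value, not one reported in the paper, and the Blender-side details are omitted.

```python
import numpy as np

def camera_positions(radius=2.0, elevation_deg=30.0, n_views=12):
    # A ring of n_views equally spaced azimuths at a fixed elevation,
    # with the object assumed to sit at the origin.
    elev = np.deg2rad(elevation_deg)
    azimuths = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    x = radius * np.cos(elev) * np.cos(azimuths)
    y = radius * np.cos(elev) * np.sin(azimuths)
    z = np.full(n_views, radius * np.sin(elev))
    return np.stack([x, y, z], axis=1)  # shape (n_views, 3)
```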
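Before turning to the individual baselines, note that the instance retrieval protocol described above reduces to ranking candidates by cosine similarity and scoring the ranking with average precision. The numpy sketch below is a hedged illustration (function names are ours, not the authors' evaluation code), and it assumes at least one relevant candidate per query.

```python
import numpy as np

def similarity(query, candidates):
    # Similarity = 1 - cosine distance = cosine similarity of fc7 features.
    q = query / np.linalg.norm(query)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    return c @ q

def average_precision(scores, relevant):
    # relevant: boolean mask marking candidates showing the same object
    # (instance retrieval) or the same category (categorical retrieval).
    order = np.argsort(-scores)
    hits = relevant[order]
    precision_at_hit = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return precision_at_hit[hits].mean()
```

Averaging the per-query AP over all query images gives the mean Average Precision reported in Table 1.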
In AlexNetFT, we replace the original fc8 layer with a fully connected layer with 29 output units and fine-tune the last two fully connected layers (fc7, fc8) with a cross-entropy loss. The AlexNetFT model is trained with the same data we used for training the OPnet. The joint embedding model was pre-trained on 6700 shapes in the chair category of ShapeNet. For the first three deep models, we use the fc7 layer as the feature representation and cosine distance to compute the distance between feature representations. We
Figure 8: Instance Retrieval Results Using Features From Different Layers
[Figure 7 grid: query images from ShapeNet, Novel Category, Synthesized Objects, and Pokemon, with OPnet and AlexNet retrieval results.]
Figure 7: Examples of top instance retrieval results for AlexNet and OPnet. Images that are different views of the same object (which are considered more similar) are marked with a red solid rectangle, while views of other objects are marked with a gray dashed rectangle. From the gun example we can clearly see how the retrieval results for AlexNet are highly view-dependent.
[Figure 2 plots: precision-recall curves on (a) ShapeNet novel instance, (b) ShapeNet chair category, (c) ShapeNet novel category, (d) synthesized objects, and (e) the Pokemon dataset, for HoG, AlexNetFT, AlexNet+CosDis, OPnet, and Joint Embedding.]
Figure 2: The precision-recall curves for the object instance retrieval task on different datasets.
                 Novel instance  Novel category  Synthesized objects  Pokemon  Chair
HoG                       0.316           0.391                0.324    0.332  0.322
AlexNetFT                 0.437           0.503                0.356    0.287  0.478
AlexNet+CosDis            0.529           0.623                0.517    0.607  0.686
AlexNet+EucDis            0.524           0.617                0.514    0.591  0.677
OPnet                     0.856           0.855                0.574    0.697  0.938
Joint-embedding           0.429           0.513                0.443    0.387  0.814
Table 1: Mean Average Precision for the object instance retrieval task over all test sets.
also show results based on the AlexNet feature representation in terms of both Euclidean distance and cosine distance measures, denoted as AlexNet+EucDis and AlexNet+CosDis. A comparison of feature representations from different layers is shown in Appendix B. We show the results for the instance retrieval task in Figure 2 and Table 1. The precision measure reflects the accuracy of the model's similarity judgment, under the two assumptions given in Section 2.3.
On similarity judgment of novel objects from both the trained and untrained categories, OPnet significantly outperforms AlexNet and AlexNetFT, with an increased Mean Average Precision of at least 23%. The improvement is due to OPnet's gains in ability in discriminating different objects inside one category regardless of their viewpoints, while recognizing different views of the same object to be similar. For novel shapes in artificial synthesized objects and Pokemons, OPnet still shows an increased MAP of at least 6% (or a 15% decreased error rate for the Pokemon test). This shows that the similarity judgment resulting from view-manifold learning is valid not only for the trained objects or just the objects in the same dataset, but is generalizable to other classes of objects.
This suggests the learned feature representations are more abstract and general, allowing the transfer of the learning to substantially different datasets and novel objects, to a degree that is not well known or well studied in computer vision.
We compare OPnet with the joint embedding approach on the chair category of ShapeNet, shown in Figure 2b. Both networks are trained with the chair category and are tested on novel chairs. OPnet outperforms the joint embedding approach by a large margin, showing that a better instance-level
[Figure 3 plots: precision-recall curves for category-level retrieval on ShapeNet novel instance, ShapeNet novel category, and synthesized objects, for AlexNet+CosDis, AlexNetFT, HoG, and OPnet.]
Figure 3: The precision-recall curves for the category level retrieval task. The three figures show the network's performance on the ShapeNet dataset with novel instances, novel categories and synthesized objects respectively.
discrimination is achieved using object persistence training, compared to using known shapes as anchor points for image embedding. Furthermore, because the joint embedding approach would need to be trained for each specific category, it does not perform well on novel categories.
When we fine-tuned AlexNet for classification of the 29 trained categories, the resulting AlexNetFT's feature representation actually performs the worst, compared to OPnet and the original AlexNet, on the instance similarity judgment or retrieval tasks. When a network is trained to perform classification, it learns to ignore subtle differences among objects in the same category. The fewer categories a network is trained on, the more the instance-level similarity judgment will be compromised. This loss of the generality of its feature representation compromises its transferability to novel objects in other classes.
We notice that the performance gain for the OPnet is most significant on the ShapeNet dataset, and the gap becomes small for the synthesized and Pokemon datasets. This suggests that OPnet overfits somewhat to the biases of ShapeNet, as the synthesized object dataset contains textureless objects and the Pokemon dataset contains mainly human-like characters that are not in ShapeNet.
Categorical retrieval provides another measure of the network's performance in similarity judgment. In this test, we randomly sample 20 categories each from the novel instance test and the novel category test, with 20 object instances drawn from each category. For the synthesized object test set, we test all 5 categories, each with 10 instances. For each instance, a single random view is provided. The results are shown in Figure 3. Despite the fact that AlexNet knows more about the semantic features of each category, our OPnet still achieves comparable results. OPnet here shows an improved ability in similarity judgment at the categorical level. On our artificially synthesized object dataset, where all three networks have no prior experience, OPnet performs better than AlexNet. AlexNetFT performs extremely well on trained categories, likely because it is overfitted to the limited trained objects, even though it uses the same amount of data.
This overfitting problem shows that training with only class labels might not preserve the essential information needed to develop transferable, general and abstract feature representations, especially with a limited training dataset."}, {"section_index": "9", "section_name": "3.1 CORRELATION WITH HUMAN PERCEPTION", "section_text": "Using the novel objects from Tenenbaum et al. (2011), we are able to compare our networks with human similarity perception. We collect 41 images from the paper, one image per object. A pairwise similarity matrix is calculated based on the cosine distance of their feature representations. We can then perform hierarchical agglomerative clustering to obtain a tree structure, using the Nearest Point Algorithm. That is, for all points i in cluster u and points j in cluster v, the distance of the two clusters is calculated by dist(u, v) = min(D(u[i], v[j])), where D(·) is the cosine distance function. We merge the two clusters with the shortest distance successively to construct the tree. The tree based on human perception is constructed by giving human subjects all the images and asking them to merge the two clusters that are most similar each time, similar to the hierarchical agglomerative clustering algorithm. Results are shown in Figure 4.
In order to quantitatively measure the similarity between the trees output by the neural networks and the one based on human perception, we calculate the cophenetic distances on the tree for each pair of objects. For objects i and j, the cophenetic distance is defined as $t_{i,j} = \mathrm{dist}(u, v)$, $i \in u$, $j \in v$, where u, v are clusters connected by a U-link. Finally, we can evaluate the similarity of the two trees by calculating Spearman's rank correlation coefficient. In the experiment, the Spearman correlation is 0.460 between AlexNet and the human perception and 0.659 between OPnet and the human perception, meaning that our OPnet, trained with object persistence constraints on a relatively small set of objects, automatically yielded a higher match to the human perceptual similarity data. This finding provides some support to our conjecture that object persistence might play an important role in shaping human similarity judgment.
We study the feature representations in these networks and their transformation induced by the object persistence constraints to understand how the changes in similarity judgment performance come about. As our network uses cosine distance in the feature space as the similarity measure, we study how this measure changes in the view-manifold of the same object and between views of different objects.
[Figure 5 heatmaps: pairwise cosine distances for OPnet (middle) and AlexNet (right) over 5 cabinet objects with 12 views each.]
Figure 5: Distance measures for 5 cabinet objects. Lighter pixels mean larger distance. On the left are the objects, each with 12 views, whose similarity distances we are interested in. In the middle and on the right are the cosine distances of the output features of OPnet and AlexNet respectively. The element on the ith row and jth column stands for the cosine distance between the ith and jth images. The ith image is rendered from the [i/12]th object and the (i mod 12)th view.
[Figure 4 dendrograms: (a) grouping by human perception; (b) grouping by AlexNet features; (c) grouping by OPnet features.]
Figure 4: Hierarchical clustering of the alien objects, based on (a) human perceptions, (b) AlexNet features, and (c) OPnet features. The dendrograms illustrate how each cluster is composed by drawing a U-shaped link between a cluster and its children.
The height of each U-link denotes the distance between its children clusters when they are merged."}]
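The tree-construction and tree-comparison procedure of Section 3.1 can be reproduced with standard SciPy tools. The sketch below is an illustration under our own naming, not the authors' code: given two (41, d) feature arrays it builds a single-linkage tree (the Nearest Point Algorithm) on pairwise cosine distances for each, then correlates their cophenetic distances. Comparing a network against the human tree would substitute the cophenetic distances of the manually built dendrogram for one of the two inputs.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def tree_agreement(features_a, features_b):
    coph = []
    for feats in (features_a, features_b):
        dists = pdist(feats, metric='cosine')   # pairwise 1 - cos similarity
        tree = linkage(dists, method='single')  # Nearest Point Algorithm
        coph.append(cophenet(tree))             # t_ij for every object pair
    rho, _ = spearmanr(coph[0], coph[1])        # rank correlation of trees
    return rho
```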
HJDBUF5le
[{"section_index": "0", "section_name": "TOWARDS A NEURAL STATISTICIAN", "section_text": "Our architecture for this problem is based on one presented in|Lamb et al.(2016). We used a single stochastic layer with 500 dimensional latent c and 16 dimensional z variable. The statistic network and the inference network q(z|x, c; $) share a common convolutional encoder, and the deocder uses deconvolutional layers. For full details see Appendix [B.2] The likelihood function is a Gaussian but where the variance parameters are shared across all datapoints, this was found to make training faster and more stable.\nHarrison Edwards\nlallsonlDuwalas School of Informatics University of Edinburgh Edinburgh, UK\n.L.Edwards@sms.ed.ac.uk\nAn efficient learner is one who reuses what they already know to tackle a new problem. For a machine learner, this means understanding the similarities amongst datasets. In order to do this, one must take seriously the idea of working with datasets, rather than datapoints, as the key objects to model. Towards this goal, we demonstrate an extension of a variational autoencoder that can learn a method for computing representations, or statistics, of datasets in an unsupervised fash- ion. The network is trained to produce statistics that encapsulate a generative model for each dataset. Hence the network enables efficient learning from new datasets for both unsupervised and supervised tasks. We show that we are able to learn statistics that can be used for: clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets and classify- ing previously unseen classes. We refer to our model as a neural statistician, and by this we mean a neural network that can learn to compute summary statistics of datasets without supervision.\nWe have demonstrated a highly flexible model on a variety of tasks. Going forward our approach wil. naturally benefit from advances in generative models as we can simply upgrade our base generative. model, and so future work will pursue this. Compared with some other approaches in the literature. for few-shot learning, our requirement for supervision is weaker: we only ask at training time that we. are given datasets, but we do not need labels for the datasets, nor even information on whether twc. datasets represent the same or different classes. It would be interesting then to explore applicatior. areas where only this weaker form of supervision is available. There are two important limitations tc this work, firstly that the method is dataset hungry: it will likely not learn useful representations ol datasets given only a small number of them. Secondly at test time the few-shot fit of the generative. model will not be greatly improved by using larger datasets unless the model was also trained or similarly large datasets. The latter limitation seems like a promising future research direction . bridging the gap between fast adaptation and slow training.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the Uni. versity of Edinburgh.\nThe machine learning community is well-practised at learning representations of data-points and se- quences. A middle-ground between these two is representing, or summarizing, datasets - unordered. 
collections of vectors, such as photos of a particular person, recordings of a given speaker, or a document as a bag-of-words. Where these sets take the form of i.i.d. samples from some distribution, such summaries are called statistics. We explore the idea of using neural networks to learn statistics, and we refer to our approach as a neural statistician.
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, and Zhifeng Chen et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
Veronika Cheplygina, David M.J. Tax, and Marco Loog. On classification with bags, groups and sets. Pattern Recognition Letters, 59:11-17, 2015.
Kenji Fukumizu, Le Song, and Arthur Gretton. Kernel Bayes' rule: Bayesian inference with positive definite kernels. The Journal of Machine Learning Research, 14(1):3753-3783, 2013.
Thomas Gartner, Peter A. Flach, Adam Kowalczyk, and Alex J. Smola. Multi-instance kernels. In Proc. 19th International Conf. on Machine Learning, pp. 179-186. Morgan Kaufmann, 2002.
The key result of our approach is a statistic network that takes as input a set of vectors and outputs a vector of summary statistics specifying a generative model of that set - a mean and variance specifying a Gaussian distribution in a latent space we term the context. The advantages of our approach are that it is:
- Unsupervised: It provides a principled and unsupervised way to learn summary statistics as the output of a variational encoder of a generative model.
- Data efficient: If one has a large number of small but related datasets, modelling the datasets jointly enables us to gain statistical strength.
- Parameter efficient: By using summary statistics instead of, say, categorical labellings of each dataset, we decouple the number of parameters of the model from the number of datasets.
- Capable of few-shot learning: If the datasets correspond to examples from different classes, class embeddings (summary statistics associated with examples from a class) allow us to handle new classes at test time.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pp. 448-456, 2015.
We are given datasets D_i for i ∈ I. Each dataset D_i = {x_1, ..., x_{k_i}} consists of a number of i.i.d. samples from an associated distribution p_i over R^n. The task can be split into learning and inference components. The learning component is to produce a generative model p̂_i for each dataset D_i. We assume there is a common underlying generative process p such that p_i = p(·|c_i) for c_i ∈ R^l drawn from p(c).
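A toy numpy sketch of this assumed generative process may help fix ideas; the Gaussian choices below for p(c) and p(x|c) are illustrative stand-ins, not the model's actual distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dataset(k, context_dim=1, x_dim=1):
    # One dataset from the hierarchical process: a context c is drawn once
    # per dataset, then k datapoints are drawn i.i.d. given c.
    c = rng.standard_normal(context_dim)            # c ~ p(c)
    x = c + 0.1 * rng.standard_normal((k, x_dim))   # x_i ~ p(x|c), i.i.d.
    return c, x
```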
The inference component is to give an approximate posteric over the context q(c|D) for a given dataset produced by a statistic network..\nBai Jiang, Tung-yu Wu, Charles Zheng, and Wing H Wong. Learning summary statistic for approx. imate Bayesian computation via deep neural network. arXiv preprint arXiv:1510.02175. 2015.\nIn order to exploit the assumption of a hierarchical generative process over datasets we will use a parameter-transfer approach' (seePan & Yang2010) to extend the variational autoencoder model ofKingma & Welling(2013)\nX1 X2 x3 C 0\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprin arXiv:1412.6980, 2014.\nGregory Koch. Siamese neural networks for one-shot image recognition. Doctoral dissertation University of Toronto, 2015.\nBrenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learnin through probabilistic program induction. Science, 350(6266):1332-1338. 2015.\nFigure 1: Left: basic hierarchical model, where the plate encodes the fact that the context variable c is shared across each item in a given dataset. Center: full neural statistician model with three latent layers 21, 22, 23. Each collection of incoming edges to a node is implemented as a neural. network, the input of which is the concatenation of the edges' sources, the output of which is a parameterization of a distribution over the random variable represented by that node. Right: The statistic network, which combines the data via an exchangeable statistic layer..\nAlex Lamb, Vincent Dumoulin, and Aaron Courville. Discriminative regularization for generativ models. arXiv preprint arXiv:1602.03220, 2016."}, {"section_index": "4", "section_name": "3.1 VARIATIONAL AUTOENCODER", "section_text": "Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied tc document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998\nThe variational autoencoder is a latent variable model p(x|z; 0) (often called the decoder) with parameters 0. For each observed x, a corresponding latent variable z is drawn from p(z) so that.\np(x[z;0)p(z) dz\nThe generative parameters 0 are learned by introducing a recognition network (also called an en- coder) q(z|x; $) with parameters $. The recognition network gives an approximate posterior over the latent variables that can then be used to give the standard variational lower bound (Saul & Jordan. 1996) on the single-datum log-likelihood. I.e. log P(x(0) > Lr, where.\nLx =Eq(z|x,g) [logp(x|z;0)]- DkL(q(z|x;$)|p(z))\nLikewise the full-data log likelihood is lower bounded by the sum of the Lr terms over the whole dataset. We can then optimize this lower bound with respect to and 0 using the reparameterization trick introduced byKingma & Welling(2013) and Rezende et al.(2014) to get a Monte-Carlo estimate of the gradient.\nSinno Jialin Pan and Qiang Yang. A survey on transfer learning. Knowledge and Data Engineering IEEE Transactions on, 22(10):1345-1359, 2010.\nBarnabas Poczos, Liang Xiong, Dougal J Sutherland, and Jeff Schneider. Support distribution ma chines. Technical Report, 2012. URLhttp://arxiv.0rg/abs/1202.0302"}, {"section_index": "5", "section_name": "3.2 BASIC MODEL", "section_text": "Rajesh Ranganath, Sean Gerrish, and David M Blei. Black box variational inference. In AISTATS pp. 814-822, 2014.\nDanilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. 
In Proceedings of The 31st International Conference on Machine Learning, pp. 1278-1286, 2014.
$$p(D|\theta) = \int p(c) \left[ \prod_{x \in D} \int p(x|z;\theta)\, p(z|c;\theta)\, dz \right] dc$$
Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. arXiv preprint arXiv:1603.05106, 2016.
The prior p(c) is chosen to be a spherical Gaussian with zero mean and unit variance. The conditional p(z|c;θ) is Gaussian with diagonal covariance, where all the mean and variance parameters depend on c through a neural network. Similarly, the observation model p(x|z;θ) will be a simple likelihood function appropriate to the data modality, with dependence on z parameterized by a neural network. For example, with real-valued data, a diagonal Gaussian likelihood could be used, where the mean and log variance of x are created from z via a neural network.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2013.
We extend the variational autoencoder to the model depicted on the left in Figure 1. This includes a latent variable c, the context, that varies between different datasets but is constant, a priori, for items within the same dataset. Now, the likelihood of the parameters θ for one single particular dataset D is given by the marginal above, and the corresponding single-dataset variational lower bound is
$$L_D = E_{q(c|D;\phi)}\left[ \sum_{x \in D} \left( E_{q(z|c,x;\phi)}\left[\log p(x|z;\theta)\right] - D_{KL}\left(q(z|c,x;\phi)\,\|\,p(z|c;\theta)\right) \right) \right] - D_{KL}\left(q(c|D;\phi)\,\|\,p(c)\right)$$
The full-data variational bound is given by summing the variational bound for each dataset in our collection of datasets. It is by learning the difference of the within-dataset and between-dataset distributions that we are able to discover an appropriate statistic network.
Lior Wolf, Tal Hassner, and Itay Maoz. Face recognition in unconstrained videos with matched background similarity. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 529-534. IEEE, 2011."}, {"section_index": "6", "section_name": "3.3 FULL MODEL", "section_text": "The basic model works well for modelling simple datasets, but struggles when the datasets have complex internal structure. To increase the sophistication of the model, we use multiple stochastic layers z_1, ..., z_L and introduce skip-connections for both the inference and generative networks. The generative model is shown graphically in Figure 1 in the center. The probability of a dataset D is then given by
$$p(D|\theta) = \int p(c) \prod_{x \in D} \int p(x|c,z_{1:L};\theta)\, p(z_L|c;\theta) \prod_{i=1}^{L-1} p(z_i|z_{i+1},c;\theta)\, dz_{1:L}\, dc$$
Algorithm 1 Sampling a dataset of size k
The full approximate posterior factorizes analogously as
$$q(c,z_{1:L}|D;\phi) = q(c|D;\phi) \prod_{x \in D} q(z_L|x,c;\phi) \prod_{i=1}^{L-1} q(z_i|z_{i+1},x,c;\phi)$$
Algorithm 2 Sampling a dataset of size k conditioned on a dataset of size m
For convenience, we give the variational lower bound as the sum of three parts: a reconstruction term R_D, a context divergence C_D, and a latent divergence L_D:
$$\mathcal{L}_D = R_D - C_D - L_D \quad \text{with} \quad R_D = E_{q(c|D;\phi)} \sum_{x \in D} E_{q(z_{1:L}|c,x;\phi)} \log p(x|z_{1:L},c;\theta)$$
$$C_D = D_{KL}\left(q(c|D;\phi)\,\|\,p(c)\right)$$
$$L_D = E_{q(c,z_{1:L}|D;\phi)} \sum_{x \in D} \left[ D_{KL}\left(q(z_L|c,x;\phi)\,\|\,p(z_L|c;\theta)\right) + \sum_{i=1}^{L-1} D_{KL}\left(q(z_i|z_{i+1},c,x;\phi)\,\|\,p(z_i|z_{i+1},c;\theta)\right) \right]$$
Algorithm 3 Selecting a representative sample of size k
The skip-connections p(z_i|z_{i+1},c;θ) and q(z_i|z_{i+1},x;φ) allow the context to specify a more precise distribution for each latent variable by explaining away more generic aspects of the dataset at each stochastic layer.
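All the divergence terms in this bound are KLs between diagonal Gaussians, which have a familiar closed form; the minimal numpy sketch below (names ours) is the building block one would use to evaluate C_D and L_D.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # Closed-form KL(q || p) between diagonal Gaussians, summed over dims.
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )
```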
This architecture was inspired by recent work on probabilistic ladder networks in Kaae Sønderby et al. (2016). Complementing these are the skip-connections from each latent variable to the observation p(x|z_{1:L},c;θ); the intuition here is that each stochastic layer can focus on representing a certain level of abstraction, since its information does not need to be copied into the next layer. A similar approach was used in Maaløe et al. (2016).
We use approximate inference networks q(z|x,c;φ), q(c|D;φ), with parameters collected into φ, to once again enable the calculation and optimization of a variational lower bound on the log likelihood. The single-dataset log likelihood lower bound is given above.
As with the generative distributions, the likelihood forms for q(z|x,c;φ) and q(c|D;φ) are diagonal Gaussian distributions, where all the mean and log variance parameters in each distribution are produced by a neural network taking the conditioning variables as inputs. Note that q(c|D;φ) accepts as input a dataset D, and we refer to this as the statistic network. We describe this in Subsection 3.4.
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. arXiv preprint arXiv:1606.04080, 2016b.
Algorithm 1 Sampling a dataset of size k
sample c ~ p(c)
for i = 1 to k do
    sample z_{i,L} ~ p(z_L|c;θ)
    for j = L-1 to 1 do
        sample z_{i,j} ~ p(z_j|z_{i,j+1},c;θ)
    end for
    sample x_i ~ p(x|z_{i,1},...,z_{i,L},c;θ)
end for
where the p(z_i|z_{i+1},c;θ) are again Gaussian distributions, where the mean and log variance are given as the output of neural networks. The generative process for the full model is described in Algorithm 1.
Once again, note that we are maximizing the lower bound to the log likelihood over many datasets D: we want to maximize the expectation of L_D over all datasets. We do this optimization using stochastic gradient descent. In contrast to a variational autoencoder, where a minibatch would consist of a subsample of datapoints from the dataset, we use minibatches consisting of a subsample of datasets - tensors of shape (batch size, sample size, number of features).
Algorithm 4 K-way few-shot classification
D_1, ..., D_K sets of labelled examples for each class
x datapoint to be classified
N_x ← q(c|x;φ)  {approximate posterior over c given query point}
for i = 1 to K do
    N_i ← q(c|D_i;φ)
end for
ŷ ← argmin_i D_KL(N_i || N_x)"}, {"section_index": "7", "section_name": "3.4 STATISTIC NETWORK", "section_text": "In addition to the standard inference networks, we require a statistic network q(c|D;φ) to give an approximate posterior over the context c given a dataset D = {x_1, ..., x_k}. This inference network must capture the exchangeability of the data in D.
We use a feedforward neural network consisting of three main elements:
- An instance encoder E that takes each individual datapoint x_i to a vector e_i = E(x_i).
- An exchangeable instance pooling layer that collapses the matrix (e_1, ..., e_k) to a single pre-statistic vector v. Examples include elementwise means, sums, products, geometric means and maximum. We use the sample mean for all experiments.
- A final post-pooling network that takes v to a parameterization of a diagonal Gaussian.
2 x { conv2d 64 feature maps with 3x3 kernels and ELU activations }
conv2d 64 feature maps with 3x3 kernels, stride 2 and ELU activations
2 x { conv2d 128 feature maps with 3x3 kernels and ELU activations }
conv2d 128 feature maps with 3x3 kernels, stride 2 and ELU activations
2 x { conv2d 256 feature maps with 3x3 kernels and ELU activations }
conv2d 256 feature maps with 3x3 kernels, stride 2 and ELU activations
We note that the humble sample mean already gives the statistic network a great deal of representational power, due to the fact that the instance encoder can learn a representation where averaging makes sense.
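A minimal sketch of this three-element statistic network follows; encode and post_pool are assumed stand-ins for the instance encoder and the post-pooling network (the names are ours, and the numpy framing is illustrative rather than the paper's TensorFlow implementation).

```python
import numpy as np

def statistic_network(dataset, encode, post_pool):
    # dataset: array of shape (k, n_features).
    e = np.stack([encode(x) for x in dataset])   # instance encodings e_i
    v = e.mean(axis=0)                           # exchangeable pooling: sample mean
    mu_c, logvar_c = post_pool(v)                # parameters of q(c|D)
    return mu_c, logvar_c
```

Because the mean is invariant to permutations of the rows of the dataset, the resulting posterior q(c|D) respects the exchangeability requirement by construction.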
"}, {"section_index": "8", "section_name": "4 RELATED WORK", "section_text": "Due to the general nature of the problem considered, our work touches on many different topics which we now attempt to summarize.

Topic models and graphical models The form of the graphical model in Figure 1 on the left is equivalent to that of a standard topic model. In contrast to traditional topic models we do not use discrete latent variables, or restrict to discrete data. Work such as that by Ranganath et al. (2014) has extended topic models in various directions, but importantly we use flexible conditional distributions and dependency structures parameterized by deep neural networks. Recent work has explored neural networks for document models (see e.g. Miao et al., 2015) but has been limited to modelling datapoints with little internal structure. Along related lines are structured variational autoencoders (see Johnson et al., 2016), where they treat the general problem of integrating graphical models with variational autoencoders.

Inference network q(z|x, c; φ): h, c -> μ_z, log σ_z^2

Transfer learning There is a considerable literature on transfer learning; for a survey see Pan & Yang (2010). There they discuss 'parameter-transfer' approaches whereby parameters or priors are shared across datasets, and our work fits into that paradigm. For examples see Lawrence & Platt (2004), where they share priors between Gaussian processes, and Evgeniou & Pontil (2004), where they take an SVM-like approach to share kernels.

3x {fully-connected layer with 256 units and ELU activations}
fully-connected linear layers to μ_z and log σ_z^2

One-shot Learning Learning quickly from small amounts of data is a topic of great interest. Lake et al. (2015) use Bayesian program induction for one-shot generation and classification, and Koch (2015) train a Siamese (Chopra et al., 2005) convolutional network for one-shot image classification. We note the relation to the recent work (Rezende et al., 2016) in which the authors use a conditional recurrent variational autoencoder capable of one-shot generalization by taking as extra input a conditioning data point. The important differences here are that we jointly model datasets and datapoints and consider datasets of any size. Recent approaches to one-shot classification are matching networks (Vinyals et al., 2016b) (which was concurrent with the initial preprint of this work), and related previous work (Santoro et al., 2016). The former can be considered a kind of differentiable nearest neighbour classifier, and the latter augments their network with memory to store information about the classification problem. Both are trained end-to-end for the classification problem, whereas the present work is a general approach to learning representations of datasets. Probably the closest previous work is by Salakhutdinov et al. (2012) where the authors learn a topic model over the activations of a DBM for one-shot learning. Compared with their work we use modern architectures and easier-to-train VAEs; in particular we have fast and amortized feedforward inference for test (and training) datasets, avoiding the need for MCMC.

Observation decoder network p(x|c, z; θ): c, z -> x

concatenate z and c
fully-connected linear layers with 4 x 4 x 256 units
2x {conv2d 256 feature maps with 3x3 kernels and ELU activations}
deconv2d 256 feature maps with 2x2 kernels, stride 2, ELU activations
2x {conv2d 128 feature maps with 3x3 kernels and ELU activations}
deconv2d 128 feature maps with 2x2 kernels, stride 2, ELU activations
2x {conv2d 64 feature maps with 3x3 kernels and ELU activations}
deconv2d 64 feature maps with 2x2 kernels, stride 2, ELU activations
conv2d 1 feature map with 1x1 kernels, sigmoid activations

2x {conv2d 32 feature maps with 3x3 kernels and ELU activations}
conv2d 32 feature maps with 3x3 kernels, stride 2 and ELU activations
2x {conv2d 64 feature maps with 3x3 kernels and ELU activations}
conv2d 64 feature maps with 3x3 kernels, stride 2 and ELU activations
2x {conv2d 128 feature maps with 3x3 kernels and ELU activations}
conv2d 128 feature maps with 3x3 kernels, stride 2 and ELU activations
2x {conv2d 256 feature maps with 3x3 kernels and ELU activations}
conv2d 256 feature maps with 3x3 kernels, stride 2 and ELU activations

Multiple-Instance Learning There is previous work on classifying sets in multiple-instance learning; for a useful survey see Cheplygina et al. (2015). Typical approaches involve adapting kernel based methods such as support measure machines (Muandet et al., 2012), support distribution machines (Poczos et al., 2012) and multiple-instance-kernels (Gartner et al., 2002). We do not consider applications to multiple-instance learning type problems here, but it may be fruitful to do so in the future.

Set2Seq In very related work, Vinyals et al. (2016a) explore architectures for mapping sets to sequences. There they use an LSTM to repeatedly compute weighted-averages of the datapoints and use this to tackle problems such as sorting a list of numbers. The main difference between their work and ours is that they primarily consider supervised problems, whereas we present a general unsupervised method for learning representations of sets of i.i.d instances. In future work we may also explore recurrently computing statistics.

ABC There has also been work on learning summary statistics for Approximate Bayesian Computation, by either learning to predict the parameters generating a sample as a supervised problem, or by using kernel embeddings as infinite dimensional summary statistics. See the work by Fukumizu et al. (2013) for an example of kernel-based approaches. More recently Jiang et al. (2015) used deep neural networks to predict the parameters generating the data. The crucial differences are that their problem is supervised, they do not leverage any exchangeability properties the data may have, nor can their method deal with varying sample sizes.

Inference network q(z|x, c; φ): h, c -> μ_z, log σ_z^2

concatenate h and c
fully-connected layer with 1000 units and ELU activations
fully-connected linear layers to μ_z and log σ_z^2
Given an input set x_1, ..., x_k we can use the statistic network to calculate an approximate posterior over contexts q(c|x_1, ..., x_k; φ). Under the generative model, each context c specifies a conditional model p(x|c; θ). To get samples from the model corresponding to the most likely posterior value of c, we set c to the mean of the approximate posterior and then sample directly from the conditional distributions. This is described in Algorithm 2. We use this process in our experiments to show samples. In all experiments, we use the Adam optimization algorithm (Kingma & Ba, 2014) to optimize the parameters of the generative models and variational approximations. Batch normalization (Ioffe & Szegedy, 2015) is implemented for convolutional layers and we always use a batch size of 16. We primarily use the Theano (Theano Development Team, 2016) framework with the Lasagne (Dieleman et al., 2015) library, but the final experiments with face data were done using Tensorflow (Abadi et al., 2015). In all cases experiments were terminated after a given number of epochs when training appeared to have sufficiently converged (300 epochs for omniglot, youtube and spatial MNIST examples, and 50 epochs for the synthetic experiment).

Observation decoder network p(x|c, z; θ): c, z -> x

concatenate z and c
fully-connected layer with 1000 units and ELU activations
fully-connected linear layer with 8 x 8 x 256 units
2x {conv2d 256 feature maps with 3x3 kernels and ELU activations}
deconv2d 256 feature maps with 2x2 kernels, stride 2, ELU activations
2x {conv2d 128 feature maps with 3x3 kernels and ELU activations}
deconv2d 128 feature maps with 2x2 kernels, stride 2, ELU activations
2x {conv2d 64 feature maps with 3x3 kernels and ELU activations}
deconv2d 64 feature maps with 2x2 kernels, stride 2, ELU activations
2x {conv2d 32 feature maps with 3x3 kernels and ELU activations}
deconv2d 32 feature maps with 2x2 kernels, stride 2, ELU activations
conv2d 3 feature maps with 1x1 kernels, sigmoid activations"}, {"section_index": "9", "section_name": "5.1 SIMPLE 1-D DISTRIBUTIONS", "section_text": "In our first experiment we wanted to know if the neural statistician will learn to cluster synthetic 1-D datasets by distribution family. We generated a collection of synthetic 1-D datasets, each containing 200 samples. Datasets consist of samples from either an Exponential, Gaussian, Uniform or Laplacian distribution with equal probability. Means and variances are sampled from U[-1, 1] and U[0.5, 2] respectively. The training data contains 10K sets.

The architecture for this experiment contains a single stochastic layer with 32 units for z and 3 units for c. The model p(x|z, c; θ) and variational approximation q(z|x, c; φ) are each a diagonal Gaussian distribution with all mean and log variance parameters given by a network composed of three dense layers with ReLU activations and 128 units. The statistic network determining the mean and log variance parameters of the posterior over context variables is composed of three dense layers before and after pooling, each with 128 units with Rectified Linear Unit (ReLU) activations.
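For concreteness, a NumPy sketch of one way to generate this synthetic collection follows. The paper does not specify exactly how each family is re-parameterized to match the sampled mean and variance, so the parameterizations below (and the shift applied to the Exponential) are assumptions.

```python
import numpy as np

def sample_synthetic_datasets(n_datasets=10000, n_samples=200, seed=0):
    """1-D datasets from Exponential/Gaussian/Uniform/Laplace families,
    with means ~ U[-1, 1] and variances ~ U[0.5, 2] as described above."""
    rng = np.random.default_rng(seed)
    datasets, labels = [], []
    for _ in range(n_datasets):
        family = rng.integers(4)
        mean = rng.uniform(-1.0, 1.0)
        var = rng.uniform(0.5, 2.0)
        if family == 0:    # Exponential: scale = std, then shift to the mean
            x = rng.exponential(np.sqrt(var), n_samples)
            x += mean - np.sqrt(var)
        elif family == 1:  # Gaussian
            x = rng.normal(mean, np.sqrt(var), n_samples)
        elif family == 2:  # Uniform: variance = (width/2)^2 / 3
            half_width = np.sqrt(3.0 * var)
            x = rng.uniform(mean - half_width, mean + half_width, n_samples)
        else:              # Laplace: variance = 2 b^2
            x = rng.laplace(mean, np.sqrt(var / 2.0), n_samples)
        datasets.append(x)
        labels.append(family)
    return np.stack(datasets), np.array(labels)
```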
Figure 2 shows 3-D scatter plots of the summary statistics learned. Notice that the different families of distribution cluster. It is interesting to observe that the Exponential cluster is differently orientated to the others, perhaps reflecting the fact that it is the only non-symmetric distribution. We also see that between the Gaussian and Laplacian clusters there is an area of ambiguity, which is as one might expect. We also see that within each cluster the mean and variance are mapped to orthogonal directions.

Figure 2: Three different views of the same data. Each point is the mean of the approximate posterior over the context q(c|D; φ) where c ∈ R^3. Each point is a summary statistic for a single dataset with 200 samples. Top plot shows points colored by distribution family, left plot colored by the mean and right plot colored by the variance. The plots have been rotated to illustrative angles."}, {"section_index": "10", "section_name": "5.2 SPATIAL MNIST", "section_text": "Building on the previous experiments we investigate 2-D datasets that have complex structure, but the datapoints contain little information by themselves, making it a good test of the statistic network. We created a dataset called spatial MNIST. In spatial MNIST each image from MNIST (LeCun et al., 1998) is turned into a dataset by interpreting the normalized pixel intensities as a probability density and sampling coordinate values. An example is shown in Figure 3. This creates two-dimensional spatial datasets. We used a sample size of 50. Note that since the pixel coordinates are discrete, it is necessary to dequantize them by adding uniform noise u ~ U[0, 1] to the coordinates if one models them as real numbers, else one can get arbitrarily high densities (see Theis et al. (2016) for a discussion of this point).

Figure 3: An image from MNIST on the left, transformed to a set of 50 (x, y) coordinates, shown as a scatter plot on the right.

The generative architecture for this experiment contains 3 stochastic z layers, each with 2 units, and a single c layer with 64 units. The means and log variances of the Gaussian likelihood for p(x|z_{1:3}, c; θ), and each subnetwork for z in both the encoder and decoder, contained 3 dense layers with 256 ReLU units each. The statistic network also contained 3 dense layers pre-pooling and 3 dense layers post-pooling with 256 ReLU units.

In addition to being able to sample from the model conditioned on a set of inputs, we can also summarize a dataset by choosing a subset S ⊂ D to minimise the KL divergence of q(c|D; φ) from q(c|S; φ). We do this greedily by iteratively discarding points from the full sample. Pseudocode for this process is given in Algorithm 3. The results are shown in Figure 4. We see that the model is capable of handling complex arrangements of datapoints. We also see that it can select sensible subsets of a dataset as a summary.

Figure 4: Conditioned samples from spatial MNIST data. Blue and red digits are the input sets, black digits above correspond to samples given the input. Red points correspond to a 6-sample summary of the dataset.
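A small NumPy sketch of the spatial MNIST construction described in this section (an illustrative implementation, not the authors' code):

```python
import numpy as np

def image_to_spatial_dataset(image, n_points=50, seed=0):
    """Turn a 28x28 MNIST image into a 2-D point set.

    Pixel intensities are normalized into a probability mass over pixel
    locations; sampled integer coordinates are dequantized with U[0,1] noise.
    """
    rng = np.random.default_rng(seed)
    probs = image.astype(np.float64).ravel()
    probs /= probs.sum()
    idx = rng.choice(probs.size, size=n_points, p=probs)
    ys, xs = np.unravel_index(idx, image.shape)
    coords = np.stack([xs, ys], axis=1).astype(np.float64)
    coords += rng.uniform(0.0, 1.0, coords.shape)   # dequantization
    return coords
```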
"}, {"section_index": "11", "section_name": "5.3 OMNIGLOT", "section_text": "Next we work with the OMNIGLOT data (Lake et al., 2015). This contains 1628 classes of handwritten characters, but with just 20 examples per class. This makes it an excellent test-bed for transfer / few-shot learning. We constructed datasets by splitting each class into datasets of size 5. We train on datasets drawn from 1200 classes and reserve the remaining classes to test few-shot sampling and classification. We created new classes by rotating and reflecting characters. We resized the images to 28 x 28. We sampled a binarization of each image for each epoch. We also randomly applied the dilation operator from computer vision as further data augmentation, since we observed that the stroke widths are quite uniform in the OMNIGLOT data whereas there is substantial variation in MNIST; this augmentation improved the visual quality of the few-shot MNIST samples considerably and increased the few-shot classification accuracy by about 3 percent. Finally we used 'sample dropout', whereby a random subset of each dataset was removed from the pooling in the statistic network, and then included the number of samples remaining as an extra feature. This was beneficial since it reduced overfitting and also allowed the statistic network to learn to adjust the approximate posterior over c based on the number of samples.

We used a single stochastic layer with 16 units for z, and 512 units for c. We used a shared convolutional encoder between the inference and statistic networks and a deconvolutional decoder network. Full details of the networks are given in Appendix B.1. The decoder used a Bernoulli likelihood.

Figure 5: Few-shot learning. Left: Few-shot learning from OMNIGLOT to MNIST. Left rows are input sets, right rows are samples given the inputs. Right: Few-shot learning within OMNIGLOT data to unseen classes. Left rows are input sets, right rows are samples given the inputs. Black-white inversion is applied for ease of viewing.

In Figure 5 we show two examples of few-shot learning by conditioning on samples of unseen characters from OMNIGLOT, and conditioning on samples of digits from MNIST. The samples are mostly of a high quality, and this shows that the neural statistician can generalize even to new datasets.

As a further test we considered few-shot classification of both unseen OMNIGLOT characters and MNIST digits. Given sets of labelled examples of each class D_0, ..., D_9 (for MNIST say), we computed the approximate posteriors q(c|D_i; φ) using the statistic network. Then for each test image x we also computed the posterior q(c|x; φ) and classified it according to the training dataset D_i minimizing the KL divergence from the test context to the training context. This process is described in Algorithm 4. We tried this with either 1 or 5 labelled examples per class and either 5 or 20 classes. For each trial we randomly select K classes, randomly select training examples for each class, and test on the remaining examples. This process is repeated 100 times and the results averaged. The results are shown in Table 1. We compare to a number of results reported in Vinyals et al. (2016b), including Santoro et al. (2016) and Koch (2015).
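Since both posteriors are diagonal Gaussians, the classification rule of Algorithm 4 reduces to a few lines of NumPy (an illustrative sketch, not the authors' code):

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between diagonal Gaussians."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0)

def few_shot_classify(class_posteriors, query_posterior):
    """Pick the class whose context posterior is closest (in KL) to the
    query point's posterior, as in Algorithm 4.

    class_posteriors: list of (mu, logvar) from q(c|D_i) for each class i
    query_posterior:  (mu, logvar) from q(c|x) for the test point
    """
    mu_x, logvar_x = query_posterior
    scores = [kl_diag_gaussians(mu_i, logvar_i, mu_x, logvar_x)
              for mu_i, logvar_i in class_posteriors]
    return int(np.argmin(scores))
```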
Overall we see that the neural statistician model can be used as a strong classifier, particularly for the 5-way tasks, but performs worse than matching networks for the 20-way tasks. One important advantage that matching networks have is that, whilst each class is processed independently in our model, the representation in matching networks is conditioned on all of the classes in the few-shot problem. This means that it can exaggerate differences between similar classes, which are more likely to appear in a 20-way problem than a 5-way problem.

Table 1: The table shows the classification accuracies of various few-shot learning tasks. Models are trained on OMNIGLOT data and tested on either unseen OMNIGLOT classes or MNIST with varying numbers of samples per class (k-shot) and varying numbers of classes (K-way). Comparisons are to Vinyals et al. (2016b) (Matching), Santoro et al. (2016) (MANN) and Koch (2015) (Siamese). 5-shot MNIST results are included for completeness.

Figure 6: Few-shot learning for face data. Samples are from a model trained on the Youtube Faces Database. Left: Each row shows an input set of size 5. Center: Each row shows 5 samples from the model corresponding to the input set on the left. Right: Imagined new faces generated by sampling contexts from the prior. Each row consists of 5 samples from the model given a particular sampled context.

Finally, we provide a proof of concept for generating faces of a particular person. We use the Youtube Faces Database from Wolf et al. (2011). It contains 3,245 videos of 1,595 different people. We use the aligned and cropped-to-face version, resized to 64 x 64. The validation and test sets contain 100 unique people each, and there is no overlap of persons between data splits. The sets were created by sampling frames randomly without replacement from each video; we use a set size of 5 frames. We resample the sets for the training data each epoch."}]
SJNDWNOlg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Image retrieval is an important problem both for academic research and for industrial applications Although it has been studied for many years (Sivic & Zisserman 2003} Philbin et al.]2007Tolias et al.] 2015), it is still a challenging task. Generally, image retrieval is divided into two groups. The first one is the category-level image retrieval (Sharma & Schiele|2015), in which an image in the dataset is deemed to be similar to the query image if they share the same class or they are similar ir shape and local structures. The other group is the instance-level image retrieval (Tolias et al.]2015) in which an image is considered to match the query if they contain the same object or the same scene. The instance-level image retrieval is harder in that the retrieval method need to encode th local and detailed information in order to tell two images apart, e.g., the algorithm should be able to detect the differences between the Eiffel Tower and other steel towers although they have similar shapes. In this paper, we focus on the instance-level image retrieval."}, {"section_index": "1", "section_name": "5.3 COMPARISON WITH OTHER METHODS", "section_text": "Based on the previous experimental results and our analysis of different impacting factors on the. retrieval performances, we propose a new multi-scale image feature representation. For a giver. image in the dataset, the whole process of image feature representation is divided into two steps First, the input image is fed into the network without the resizing operation (the free way) and . 4-scale feature representation is built on top of the feature maps of layer conv5_4. During the multi. scale representation step, max-pooling of feature maps are used and regional vectors from the same. scale are added together and l2-normalized. After that, features from different scales are summec. and l2-normalized again. The second step involves applying the PCA and whitening operations or. features from the first step. The PCA and whitening matrix used are either learned from differen. or same dataset: specifically, for the Oxford5k and Oxford105k, it is learned in the Paris6k, while. for Paris6k and UKB, it is learned on Oxford5k and UKB respectively. The final PCA and whitenec. image features are used for reporting our method's performances..\nTraditionally, visual instance retrieval is mainly addressed by the BoF (bag of features) based meth ods using the local feature descriptors such as SIFT (Lowe]2004). In order to boost the retrieval performances, post-processing techniques such as query expansion (Chum et al.]2007) and spatial. verification (Philbin et al.|2007) are also employed.\nWith the decisive victory (Krizhevsky et al.]2012) over traditional models in the ImageNet (Rus. sakovsky et al.]2015) image classification challenge, convolutional neural networks (Lecun et al. 1998) continue to achieve remarkable success in diverse fields such as object detection (Liu et al.. 2015 , Shaoqing Ren2015), semantic segmentation (Dai et al.2016) and even image style trans fer (Gatys et al.]2016). Networks trained on the Imagenet classification task can generalize quite well to other tasks, which are either used off-the-shelf (Razavian et al.f2014a) or fine-tuned on the task-specific datasets (Azizpour et al.|2014) Long et al.| 2015). Inspired by all these, researchers in the field of image retrieval also shift their interest to the CNNs. 
Their experiments have shown promising and surprising results (Babenko et al., 2014; Razavian et al., 2014c; Tolias et al., 2015), which are on par with or surpass the performances of conventional methods like BoF and VLAD (vector of locally aggregated descriptors) (Jegou et al., 2010; Arandjelovic & Zisserman, 2013).

Layer ensemble. Inspired by previous work on model ensembles to boost classification performances (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), we consider fusing the similarity scores from different layers to improve the retrieval performances. Specifically, for two images, their similarity score is computed as the weighted sum of the scores from different layers (these weights sum to 1 so that the overall similarity score between two images is still in the range [0, 1]). We have evaluated various combinations of layers and find that the best performance is achieved by combining the scores from conv5_4 and fc6-conv. For the fc6-conv features of an image, we use a 3-scale representation as the sizes of the output feature maps are already very small."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Despite all these previous advances (Babenko et al., 2014; Babenko & Lempitsky, 2015; Tolias et al., 2015) on using CNNs for image feature representation, the underlying factors that contribute to the success of off-the-shelf CNNs on the image retrieval tasks are still largely unclear and unexplored, e.g., which layer is the best choice for instance retrieval, the convolutional layer or the fully-connected layer? What is the best way to represent the multi-scale information of an image? Clarifying these questions will help us advance a further step towards building a more robust and accurate retrieval system. Also, in situations where a large number of training samples is not available, instance retrieval using an unsupervised method is still preferable and may be the only option.

The fc6-conv features are compressed to low dimensional vectors for faster computation. Our layer ensemble achieves 75.6% and 73.7% on Oxford5k for the full and cropped queries respectively, showing a large improvement over previous methods. This suggests that features from fc6-conv and conv5_4 are complementary. See Table 5 for the complete results on all four datasets.

Comparison. We compare the performance of our method with several state-of-the-art methods which use small-footprint representations and do not employ complicated post-processing techniques such as geometric re-ranking (Philbin et al., 2007) and query expansion (Arandjelovic & Zisserman, 2012). The results are shown in Table 5. In all the datasets and different scenarios (full or cropped), our method achieves the best performance with comparable cost. For the Oxford5k (cropped) and UKB datasets, the relative improvements of our best results over previous methods (from Tolias et al. (2015) and Babenko & Lempitsky (2015)) are 10.3% and 4.4%.
Second, by combining the insights obtained during the individual experiments, we are able to propose a new multi-scale image representation, which is compact yet effective. Finally, we evaluate our method on four challenging datasets, i.e., Oxford5k, Paris6k, Oxford105k and UKB. Experimental results show that our method is generally applicable and outperforms all previous methods on compact image representations by a large margin.

In this paper, we focus on instance retrieval based on features extracted from CNNs. We have conducted extensive experiments to evaluate the impact of five factors on the performances of image retrieval and analysed their particular impacts. Based on the insights gained from these experiments, we have proposed a new multi-scale image representation which shows superior performances over previous methods on four datasets. When combined with the technique "layer ensemble", our method can achieve further improvements. Overall, we have provided a viable and efficient solution to apply CNNs in an unsupervised way to datasets with a relatively small number of images.

R. Arandjelovic and A. Zisserman. Three things everyone should know to improve object retrieval. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2911-2918, June 2012. doi: 10.1109/CVPR.2012.6248018.

R. Arandjelovic and A. Zisserman. All about VLAD. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 1578-1585, June 2013. doi: 10.1109/CVPR.2013.207.

R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic. NetVLAD: CNN architecture for weakly supervised place recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. From generic to specific deep representations for visual recognition. CoRR, abs/1406.5774, 2014. URL http://arxiv.org/abs/1406.5774.

Artem Babenko and Victor Lempitsky. Aggregating local deep features for image retrieval. In The IEEE International Conference on Computer Vision (ICCV), December 2015.

Image representation using off-the-shelf CNNs. Gong et al. (2014) propose the MOP (multi-scale orderless pooling) method to represent an image, in which VLAD is used to encode the level 2 and level 3 features. Then features from different scales are PCA-compressed and concatenated to form the image features. This method is rather complicated and time-consuming. At the same time, Babenko et al. (2014) use Alexnet (Krizhevsky et al., 2012) trained on the ImageNet 1000-class classification task and retrain the network on task-related datasets. The retraining procedure gives a boost to the retrieval performances. Instead of using the output of the fully-connected layers as the image feature representations, Babenko & Lempitsky (2015) use the output feature maps of the last convolutional layer to compute the image features. Recently, instead of sum-pooling the convolutional features, Tolias et al. (2015) use max-pooling to aggregate the deep descriptors. Their multi-scale method, called R-MAC (regional maximum activation of convolutions), further improves the previous results on four common instance retrieval datasets. Our work differs from these papers in that we explicitly explore the various factors that underpin the success of unsupervised instance retrieval, which have not been fully explored and analysed.
By carefully choosing the different setting for each factor and combining them in a complementary way, we show that a large improvement can be achieved without additional cost.

Jifeng Dai, Kaiming He, and Jian Sun. Instance-aware semantic segmentation via multi-task network cascades. In CVPR, 2016.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.

Ross Girshick. Fast R-CNN. In International Conference on Computer Vision (ICCV), 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Yunchao Gong, Liwei Wang, Ruiqi Guo, and Svetlana Lazebnik. Multi-scale Orderless Pooling of Deep Convolutional Activation Features, pp. 392-407. Springer International Publishing, Cham, 2014. ISBN 978-3-319-10584-0. doi: 10.1007/978-3-319-10584-0_26. URL http://dx.doi.org/10.1007/978-3-319-10584-0_26."}, {"section_index": "3", "section_name": "3 IMPACTING FACTORS", "section_text": "H. Jegou and A. Zisserman. Triangulation embedding and democratic aggregation for image search. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3310-3317, June 2014. doi: 10.1109/CVPR.2014.417.

H. Jegou, M. Douze, C. Schmid, and P. Perez. Aggregating local descriptors into a compact image representation. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 3304-3311, June 2010. doi: 10.1109/CVPR.2010.5540039."}, {"section_index": "4", "section_name": "3.1 CNN FEATURES FOR INSTANCE RETRIEVAL", "section_text": "He Kaiming, Zhang Xiangyu, Ren Shaoqing, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In European Conference on Computer Vision, 2014.

In this paper, we are mainly interested in extracting compact and discriminative image features using off-the-shelf CNNs in an efficient way. For a given image I, we simply subtract the mean value of the RGB channels from the original image and do not do other sophisticated preprocessing. Then the image is fed into the convolutional network and goes through a series of convolutions, non-linear activations and pooling operations. The feature activation maps of a certain layer can be interpreted as the raw image features, based on which we build the final image features. These feature maps form a tensor of size K x H x W, where K is the number of feature channels, and H and W are the height and width of a feature map. Each feature map represents a specific pattern which encodes a small part of information about the original image. If we represent the set of feature maps as F = {F_i}, i = 1, 2, ..., K, where F_i is the i-th activation feature map, then the most simple image feature is formulated as:

f = [f_1, f_2, ..., f_i, ..., f_K]^T    (1)

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

In the above equation (1), f_i is obtained by applying the feature aggregation method (see Section 3.2) over the i-th feature map F_i. Throughout this paper, we use feature maps after the non-linear activations (ReLU) so that the elements in each feature map are all non-negative. We also experiment with feature maps prior to ReLU, but find that they lead to inferior performances.
After the image feature representation is obtained, post-processing techniques such as PCA and whitening can be further applied."}, {"section_index": "5", "section_name": "3.2 IMPACTING FACTORS ON PERFORMANCE", "section_text": "Feature aggregation and normalization. After the feature maps of a certain layer are obtained, it is still challenging to aggregate the 3-dimensional feature maps to get compact vector representations for images. Previous papers use either sum-pooling (Babenko & Lempitsky, 2015) or max-pooling (Tolias et al., 2015) followed by l2-normalization. Sum-pooling over a particular feature map F_i is expressed as

f_i = \sum_{m=1}^{H} \sum_{n=1}^{W} F_i(m, n), \quad i \in \{1, 2, ..., K\}    (2)

while max-pooling is given by

f_i = \max_{m,n} F_i(m, n)    (3)

where m, n range over all the possible values of the spatial coordinate of size H x W. In this paper, for the first time, different combinations of aggregation and normalization methods (l2, and l1 in the manner of RootSIFT (Arandjelovic & Zisserman, 2012)) are evaluated and their results are reported.

Output layer selection. Zeiler & Fergus (2014) have shown that image features aggregated from the feature activation maps of certain layers have interpretable semantic meanings. Gong et al. (2014) and Babenko et al. (2014) use the output of the first fully-connected layer to obtain the image features, while Babenko & Lempitsky (2015) and Tolias et al. (2015) use the output feature maps of the last convolutional layer. But these choices are somewhat subjective. In this paper, we extract dataset image features from the output feature maps of different layers and compare their retrieval performances. Based on the findings in this experiment, we choose the best-performing layer and also come up with a layer ensemble approach which outperforms state-of-the-art methods (see Section 5.3).

Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single shot multibox detector. arXiv preprint arXiv:1512.02325, 2015.

Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015.

David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004. ISSN 1573-1405. doi: 10.1023/B:VISI.0000029664.99615.94.

D. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pp. 2161-2168, June 2006.

J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Lost in quantization: Improving particular object retrieval in large scale image databases. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pp. 1-8, June 2008. doi: 10.1109/CVPR.2008.4587635.

Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. Visual instance retrieval with deep convolutional networks. CoRR, abs/1412.6574, 2014b. URL http://arxiv.org/abs/1412.6574.

Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. Visual instance retrieval with deep convolutional networks. CoRR, abs/1412.6574, 2014c. URL http://arxiv.org/abs/1412.6574.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
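Returning to the aggregation options above, a minimal NumPy sketch of Eqs. (1)-(3) plus the two normalizations follows (an illustrative implementation, not the paper's code). The l1 branch follows the RootSIFT manner described above (l1-normalize, then take the element-wise square root), which assumes the non-negative post-ReLU activations used throughout the paper.

```python
import numpy as np

def aggregate(feature_maps, pooling="max", norm="l2"):
    """Aggregate K x H x W activations into a K-dim image descriptor."""
    if pooling == "sum":
        f = feature_maps.sum(axis=(1, 2))      # Eq. (2)
    else:
        f = feature_maps.max(axis=(1, 2))      # Eq. (3)
    if norm == "l1":                           # RootSIFT-style: l1 then sqrt
        f = np.sqrt(f / (np.abs(f).sum() + 1e-12))
    else:
        f = f / (np.linalg.norm(f) + 1e-12)
    return f
```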
Image resizing. Famous models such as Alexnet (Krizhevsky et al., 2012) and VGGnet (Simonyan & Zisserman, 2014) all require that the input images have fixed size. In order to meet this requirement, previous papers (Gong et al., 2014; Babenko & Lempitsky, 2015) usually resize the input images to the fixed size. We postulate that the resizing operation may lead to the distortion of important information about the objects in the natural images. Ultimately, this kind of operation may hurt the discriminative power of image features extracted from the network, thus degrading the retrieval performances. For the task of image retrieval, we think it is best to keep the images their original sizes and feed them directly to the network whenever possible. In this paper, three image resizing strategies are explored:

Both the height and width of the dataset images are set to the same fixed value (denoted as two-fixed).
The minimum side of each dataset image is set to a fixed value, keeping the aspect ratio of the original image (denoted as one-fixed).
Images are kept their original size (denoted as free).

Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497, 2015.

When we employ off-the-shelf CNNs for the task of instance-level image retrieval, a natural question is: what kind of design choices should we make in order to make full use of the representational power of existing models? In this section, we summarize the five factors that may greatly impact the performance of the final image retrieval system. In Section 5.2, we will show our experimental results on each key factor. Before we delve into the impacting factors, first we will give a brief introduction about how to represent an image using the activation feature maps of a certain layer.

Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW '14, pp. 512-519, Washington, DC, USA, 2014a. IEEE Computer Society. ISBN 978-1-4799-4308-1. doi: 10.1109/CVPRW.2014.131. URL http://dx.doi.org/10.1109/CVPRW.2014.131.

Gaurav Sharma and Bernt Schiele. Scalable nonlinear embeddings for semantic category-based image retrieval. In ICCV, 2015.

Josef Sivic and Andrew Zisserman. Video google: A text retrieval approach to object matching in videos. In Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on, pp. 1470-1477. IEEE, 2003.

C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, June 2015. doi: 10.1109/CVPR.2015.7298594.

G. Tolias, R. Sicre, and H. Jegou. Particular object retrieval with integral max-pooling of CNN activations. ArXiv e-prints, November 2015.

Figure 1: An illustration of the multi-scale representation of an image ((a) level 1, (b) level 2, (c) level 3). The whole image is divided into 3 levels from the coarsest (level 1) to the finest (level 3). At each level, the image is divided into a different number of equal-sized regions.

Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision - ECCV 2014, pp. 818-833. Springer, 2014.

Multi-scale feature representation. Unlike local feature descriptors such as SIFT (Lowe, 2004), the feature vector extracted from the deep convolutional networks for an image is a global descriptor which encodes the holistic information. When used for image retrieval, this kind of features still lacks the detailed and local information desired to accurately match two images. Inspired by spatial pyramid matching (Lazebnik et al., 2006) and SPP (Kaiming et al., 2014), we explore the feasibility of applying this powerful method to obtain discriminative image features. An image is represented by an L-level pyramid, and at each level, the image is divided evenly into several overlapping or non-overlapping regions.
The vector representations of these small regions are computed, then the regional vectors are combined to form the image feature vectors. The single-scale representation of an image is just a special case of the multi-scale method in which the number of levels L equals 1.

Figure 1 shows an example of 3-level representations of an image. The time cost of re-feeding those small regions into the network to compute the regional vectors would be huge, thus unacceptable for instance retrieval tasks. Inspired by the work of Girshick (2015) and Tolias et al. (2015), we assume a linear projection between the original image regions and the regions in the feature maps of a certain layer. Then the regional feature vectors can be efficiently computed without re-feeding the corresponding image regions. In Section 5.2, various settings for the multi-scale and scale-level feature combination methods are explored and their retrieval performances are reported and analysed.

PCA and whitening. Principal Component Analysis (PCA) is a simple yet efficient method for reducing the dimensionality of feature vectors and decorrelating the feature elements. Previous work (Babenko et al., 2014; Jegou et al., 2010) has shown evidence that PCA and whitened features can actually boost the performances of image retrieval. In this paper, we further investigate the usefulness of PCA and whitening within our pipeline and give some recommendations."}, {"section_index": "6", "section_name": "APPENDIX A THE NETWORK TRANSFORMATIONS", "section_text": "We use the open source deep learning framework Caffe (Jia et al., 2014) for all our experiments. The aim of this research is to investigate the most effective ways to exploit the feature activations of existing deep convolutional models. Based on past practices for networks to go deeper (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2015), a consideration for moderate computational cost, and also the results from Tolias et al. (2015) that deeper networks work better than shallower ones, we decide to use the popular VGG-19 model (Simonyan & Zisserman, 2014) trained on ImageNet as our model.

In order for the network to process images of varying sizes, we change the layers fc6, fc7 and fc8 from the original model to fc6-conv, fc7-conv and fc8-conv. It should be noted that there are certain constraints on the input image size due to the network's inherent design. The original network accepts an image of fixed size (224 x 224), so the output feature maps of the last convolutional layer conv5_4 are of size 512 x 7 x 7. As a result, when we change the operation between layer conv5_4 and fc6 from inner product to convolution, each filter bank kernel between conv5_4 and fc6-conv has size 7 x 7. This in turn means that if we are to extract features from layer fc6-conv and above, the minimum size of an input image must be equal to or greater than 224. For output feature maps of layer conv5_4 and below, there are no restrictions on the input image size. During the experiments, when we are extracting features from layer fc6-conv and above, the minimum size of an image is set to be 224 if it is less than 224.
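The fc-to-convolution change described above can be made concrete with a small PyTorch sketch (the paper used Caffe, so this is an assumed re-implementation for illustration):

```python
import torch.nn as nn

def fc_to_conv(fc: nn.Linear, in_channels: int, kernel_size: int) -> nn.Conv2d:
    """Convert a fully-connected layer into an equivalent convolution.

    For VGG-19's fc6 this would be in_channels=512, kernel_size=7, giving a
    7x7x512x4096 filter bank (fc7/fc8 use kernel_size=1), after which the
    network can process inputs of varying spatial size.
    """
    assert fc.in_features == in_channels * kernel_size ** 2
    conv = nn.Conv2d(in_channels, fc.out_features, kernel_size)
    conv.weight.data = fc.weight.data.view(
        fc.out_features, in_channels, kernel_size, kernel_size)
    conv.bias.data = fc.bias.data.clone()
    return conv
```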
Network transformation. The original VGG-19 network only accepts an image of fixed size (224 x 224), which is not the optimal choice when extracting image features for retrieval tasks. In order for the network to be able to process an image of arbitrary size (of course, the image size cannot exceed the GPU's memory limit) and for us to experiment with different input image resizing strategies, we adapt the original VGG-19 network and change the fully-connected layers to convolutional (Long et al., 2015) layers. For more details about the network transformations, see Appendix A.

In this paper, the overlaps between different regions occur in the 3 and 4 scale pyramids. A single region in each scale can be specified as the combination of a slice from the width and a slice from the height of the feature map. If a scale has N x N regions, then the numbers of slices in the width and height of the feature map are both N. We use the same set of slices for both the width and height in this experiment. In the 3 scale setting (see Table 3 (b3)), overlap occurs only in scale 2, and the slices (in proportion to the length of the feature map width or height) are {(0, ?), (3, 1)}. In 4 scale v1 (Table 3 (c1)-(c3)), the slices for scales 2 and 3 are {(0, 3), (, 1)} and {(0, 3), (4, 3), (3, 1)}. In 4 scale v2 (Table 3 (c4)(c5)), the slices for scales 2 and 3 are {(0, 5), (3, 1)} and {(0, ), (5, 5), (?, 1)}. In 4 scale v3 (Table 3 (c6)-(c8)), the slices are {(0, ), (?, 1)} and {(0, ), (, ), (3, 1)} for scales 2 and 3, respectively."}, {"section_index": "7", "section_name": "5 EXPERIMENTS", "section_text": "In this section, we first introduce the datasets used and the evaluation metrics. Then we report our experimental results for the different impacting factors and give detailed analysis. In the last part, we show the performance of our method considering all these impacting factors and compare our method with the state-of-the-art methods on four datasets.

The Oxford5k dataset (Philbin et al., 2007) contains 5062 images crawled from Flickr by using 11 Oxford landmarks as queries. A total of 11 groups of queries - each having 5 queries with their ground truth relevant image list - are provided. For each query, a bounding box annotation is also provided to denote the query region. During experiments, we report results using the full query images (denoted as full-query) and image regions within the bounding boxes of the query images (denoted as cropped-query). The performance on this dataset is measured by mAP (mean average precision) over all queries.

The Paris6k dataset (Philbin et al., 2008) includes 6412 images(1) from Flickr which contain 11 landmark buildings and general scenes from Paris. Similar to the Oxford5k dataset, a total of 55 queries belonging to 11 groups and the ground truth bounding boxes for each query are provided. The performance is reported as mAP over the 55 queries.

The Oxford105k(2) dataset contains the original Oxford5k dataset and an additional 100,000 images (Philbin et al., 2007) from Flickr. The 100,000 images are disjoint from the Oxford5k dataset and are used as distractors to test the retrieval performance when the dataset scales to a larger size. We use the same evaluation protocol as for Oxford5k on this dataset.

The UKB dataset (Nister & Stewenius, 2006) consists of 10200 photographs of 2550 objects, each object having exactly 4 images. The pictures of these objects are all taken indoors with large variation in orientation, scale, lighting and shooting angles. During experiments, each image is used to query the whole dataset. The performance is measured by the average number of same-object images in the top-4 results.

(1) Following conventions, 20 corrupted images from this dataset are removed, leaving 6392 valid images.
(2) The image named "portrait_000801.jpg" was corrupted and manually removed from this dataset."}, {"section_index": "8", "section_name": "5.2 RESULTS AND DISCUSSION", "section_text": "In this section, we report the results of experiments on the impact of the different factors and analyse their particular impact.
The experiments in this section are conducted on the Oxford5k dataset.

Feature aggregation and normalization. In this experiment, we compare the different combinations of feature aggregation (sum-pooling and max-pooling) and normalization methods (l2 and l1) in terms of their retrieval performances. We use features from the layer conv5_4 with the free input image size. The results (%) are shown in Table 1. Sum-pooling followed by l1 normalization leads to slightly better results than the other combinations, especially for the cropped-query. However, after a preliminary experiment with multi-scale versions of sum-l1 and max-l2, we find that max-l2 is much better than sum-l1. For example, employing a 4-level representation of images in the Oxford5k dataset, for the case of full-query, we find that the mAP for the max-l2 method is 65.1, while the mAP for sum-l1 is only 51.3 (even lower than the single-scale representation). Based on these results, we stick to max-l2 in computing the final image features.

Table 1: Comparison between different combinations of feature aggregation and normalization methods.

Method   full-query   cropped-query
max-l1   52.4         48.0
sum-l2   58.0         52.6
sum-l1   60.3         56.3
max-l2   60.1         53.5

Output layer selection. In order to verify their feasibility for instance retrieval, we extract from the network the output feature maps of different layers and aggregate them to get the image feature vectors. We evaluate the performances using features from layer conv3_3 up to the highest fc7-conv layer (except the pooling layers, i.e. pool3, pool4 and pool5). Single-scale representations of the dataset images are used in this experiment.

Figure 2 shows the retrieval performances of image features corresponding to different layers. The retrieval performances for both the full and cropped queries increase from the lower layer conv3_3 to the higher layers, plateau at layers conv5_4 and fc6-conv, and then begin to decrease towards fc7-conv. The result shows that features from lower layers such as conv3_3 and conv3_4 are too generic and lack the semantic meanings of the object in the image, thus rendering them unsuitable for instance retrieval. On the other hand, features from the highest layer (fc7-conv) contain the semantic meaning of objects but lack the detailed and local information needed to match two similar images.
The best results are obtained at layer conv5_4 (0.601) and fc6-conv (0.618), where the feature vectors combine both the low-level detailed information and the high-level semantic meanings of the image. Based on these observations and the requirement for keeping the image features compact, we mainly focus on image features from the layer conv5_4 (dimensionality = 512, compared to 4096 for layer fc6-conv).

Figure 2: Performance comparison between different layers. This experiment is conducted using the free input image size.

Image resizing. We experiment with the 3 kinds of image resizing strategies which are detailed in Section 3.2. We use grid search to find the optimal size for the two-fixed and one-fixed strategies. As is shown in Table 2, the free input strategy outperforms or is close to the other two strategies: it performs especially well in the cropped-query case. This experiment shows that changing the image aspect ratio (two-fixed) distorts the image information, thus reducing the performance dramatically. The one-fixed way is better than the two-fixed method, but information loss still occurs due to the resizing operation. The free method is able to capture more natural and un-distorted information from the images, which explains its superior performance over the other two methods. It is best to keep the images their original sizes for the instance retrieval tasks.

Table 2: Comparison between different image resizing strategies. The numbers in the parentheses denote the sizes at which the maximum mAPs are achieved.

Method      full-query   cropped-query
two-fixed   55.5 (864)   38.7 (896)
one-fixed   59.0 (800)   39.3 (737)
free        58.0         52.6

The benefit of multi-scale representation. In our multi-scale approach, the regional vectors from each scale are simply added together and l2-normalized to form the scale-level feature vectors. Then features from different scales are combined and l2-normalized to form the image representations. In fact, we also experimented with two methods which concatenate features from different scales. The first method is in the same vein as spatial pyramid pooling (Kaiming et al., 2014), i.e., region-level as well as scale-level features are all concatenated to form a high dimensional vector. In the second method, region-level features are added while scale-level features are concatenated. We find that these two methods all lead to inferior results. The performance drop for the first in the case of cropped-query can be as large as 41%. The high dimensionality of the concatenated features (larger than 1.5k) will also lead to longer running times. Considering all these, we do not use concatenation of features in the following experiments.
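A NumPy sketch of this multi-scale aggregation follows (an illustrative implementation, not the paper's code). For simplicity it uses non-overlapping regions, whereas the paper's best settings also use overlapping slices in levels 2 and 3.

```python
import numpy as np

def multi_scale_descriptor(feature_maps, levels=(1, 2, 3)):
    """Split the K x H x W map into n x n regions per level, max-pool each
    region to a K-dim vector, sum and l2-normalize within each level, then
    sum the level vectors and l2-normalize again."""
    K, H, W = feature_maps.shape
    total = np.zeros(K)
    for n in levels:
        level_vec = np.zeros(K)
        hs = np.linspace(0, H, n + 1).astype(int)
        ws = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                region = feature_maps[:, hs[i]:hs[i+1], ws[j]:ws[j+1]]
                level_vec += region.max(axis=(1, 2))   # max-pool each region
        level_vec /= (np.linalg.norm(level_vec) + 1e-12)
        total += level_vec
    return total / (np.linalg.norm(total) + 1e-12)
```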
\"weighing\"' means if the features from each level are added using same weight or different weight \"version' means the different choice of the number of regions in each scale.\nscale overlap weighing version full-query cropped-query (a1) 2 x x 63.5 59.0 1 (a2) 2 x 1 63.9 61.0 (b1) 3 x x 1 64.2 60.9 (b2) 3 x 1 62.6 61.0 (b3) 3 s2 x 1 64.8 60.8 (c1) 4 s3 x v1 65.1 61.4 (c2) 4 s3 V v1 64.8 60.7 (c3) 4 s2,s3 x v1 65.5 60.8 (c4) 4 s2,s3 x v2 65.9 61.5 (c5) 4 s2,s3 v2 65.4 61.2 (c6) 4 x x v3 64.5 61.3 (c7) 4 s3 x v3 65.8 62.2 (c8) 4 s2,s3 x v3 66.3 62.6\nWe conduct extensive experiments to decide the best configurations for the multi-scale approach anc report our results in Table3] First, we explore the impact of the number of scales on the retrieva. performances. For the 2 and 3 scale representations, The region number for each level are {1 1 2 2 }, {1 1, 2 2, 3 3}. For the 4 scale representation, 3 versions are used and they differ in the. number of regions in each scale: for \"v1\", \"v2\", and \"v3', the number of regions are {1 1, 2 2. 3 x 3, 4 4},{1 1, 2 2, 3 3, 5 5} and {1 1, 2 2, 3 3, 6 6}. Table3|(a1)(b1)(c6) show the performances of using 2, 3, and 4 scales to represent the dataset images, respectively. Clearly. more scale levels improve the results and in the case of cropped-query, increase the performance by. an absolute 2%.\nWe also conduct experiments to find whether the weighing of different scales leads to improved performance. The weighing method for features from different scales is similar to the manner of spatial pyramid matching (Lazebnik et al.2006) - features from coarser level are given less weight while features from the finer levels are given more weight. Suppose the features of different scales for an L scale representation are f1 f L, then the image representation f is expressed as:\nMore details can be found in|Lazebnik et al.[(2006). Comparing the results of row (a1) and (a2), it seems that weighing different scales leads to better performance. But after more experiments, we. find that the weighing method generally leads to inferior results as the number of scales increase,.\nL 1 i=2\n0.75 0.65 0.55 0.45 crop-paris crop-self 0.35 full-paris A full-self 0.25 16 80 144 208 272 336 400 464 528 number of principal component reserved.\nFigure 3: The number of principal component reserved VS mAP. We show the results of full and cropped. query using the PCA and whitening matrix learned from the Oxford5k itself and Paris6k, denoted as \"full-self\" 'full-paris\" and \"crop-self\", \"crop-paris\".\nNext, we look into the issue of overlapping between different scales and try to verify its usefulness. For each scale and its different versions, we set some overlapping areas between the neighboring. regions in either one or two scales of the pyramid (For the exact configurations of overlap in all cases in Table[3] see appendixB|for the complete descriptions). From the row pair (b1)(b3) and (c1)(c3) we can see that overlap increase the performance for full-query but decrease a little the performance for cropped-query. But for 4 scale v3 (note the pair(c7)(c8)), we see a consistent improvement for. both the full and cropped queries. So we decided to use overlap in level 2 and 3 in computing our. final features.\nPCA and whitening. We perform PCA and whitening for the features extracted from the Oxford5k. 
The retrieval results for 3 groups of features (from Table 3 (b3)(c1)(c8)) are shown in Table 4. Clearly, PCA and whitening lead to better performances for all 3 groups of features.

Table 4: The impact of PCA and whitening. "PCA on self" and "PCA on Paris" mean that the corresponding features are post-processed by the PCA and whitening matrices learned on the Oxford5k and Paris6k datasets, respectively. The numbers in the parentheses indicate the dimensionality of the features used for obtaining the corresponding results."}]
rJJRDvcex
[{"section_index": "0", "section_name": "AYER RECURRENT NEURAL NETWORKS", "section_text": "Architecture & Training. In the FCN-32s, input images are passed through the whole networks. and end up with predictions of 12 12 21, then, up-sampling layers are directly used to map. the predictions back to 384 384 (32 times). In the FCN-16s, instead of directly up-sampling 32 times, the predictions are first up-sampled by 2, and summed up with stream predictions from pool (named after VGG16), then up-sampled by 16 times. In the FCN-8s, the stream predictions fron pool3 are further added to the results from FCN-16s, thus, up-sampling layers with only factor 8 is. needed.(AppendixC)\nThe complete FCNs architecture used in the paper\nWeidi Xie. Alison Noble & Andrew Zisserman\nDepartment of Engineering Science, University of Oxford, Uk\n38 4096 4096 feats feats [C fc2 fc3 up32x FCN- Kernel size = Kernel size = 7x7x512x4096 1x1x4096x4096 up2x fc5 fc6 up16x FCN- up2x fc7 up8x FCN- fc8 C9"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "For all the architectures, the base net(VGG16) is pre-trained on ImageNet (Deng et al.. 2009), we further train on Pascal VOC2012 for 50 epochs, similar to the experiment for CIFAR-10, we iter. atively increase or decrease the learning rate between 10-3 and 10-5 after every 10 epochs. The. 4096 channel architectures are trained first, and then the number of channels is gradually reduced in. the FC layer by randomly cutting them (e.g. from 4096 to 2048), and re-training the networks.\nIn this paper, we propose a Layer-RNN (L-RNN) module that is able to learn contextual information adaptively using within-layer recurrence. Our contribu- tions are three-fold: (i) we propose a hybrid neural network architecture that in- terleaves traditional convolutional layers with L-RNN module for learning long- range dependencies at multiple levels; (ii) we show that a L-RNN module can be seamlessly inserted into any convolutional layer of a pre-trained CNN, and the entire network then fine-tuned, leading to a boost in performance; (iii) we report experiments on the CIFAR-10 classification task, showing that a network with interleaved convolutional layers and L-RNN modules, achieves comparable re- sults (5.39% top1 error) using only 15 layers and fewer parameters to ResNet-164 (5.46%); and on the PASCAL VOC2012 semantic segmentation task, we show that the performance of a pre-trained FCN network can be boosted by 5% (mean IOU) by simply inserting Layer-RNNs.\nResults & Discussion. Table|3|shows the performance of the six baselines: FCN-32s and FCN 8s with the number of channels varying from 512 to 4096. We observe that reducing the nodes ir the FC layers does produce a performance drop (from 4096 to 1024 nodes, 1% mean IOU) in botl FCN-32s and FCN-8s. Although from 1024 to 4096 nodes, the improvement is tiny, the difference in the number of parameters is over 64 million. Consequently, in the following experiments we choose to perform experiments based on networks with 512, 1024 or 2048 channels only (i.e. no 4096). In comparison to the original performance for the FCN-8s architecture in (Long et al.]2015) we exceed this (by 64.4 to 61.3 mean IOU) in our training. Thus, we use our trained networks as a baseline."}, {"section_index": "2", "section_name": "5.2.2 FCN-32s WITH L-RNN MODULES", "section_text": "Architecture & Training. 
Architecture & Training. The architecture FCN-32s (L-RNN) is shown in Figure 4; the convolutional part of the architecture is initialized with the pre-trained FCN-32s (2048 channels in the FC layers) baseline. Then, two 1D spatial RNNs are inserted into the fc1 layer in the horizontal direction, and two 1D spatial RNNs are inserted into the fc2 layer in the vertical direction. The convolution activations of fc1 are shared for both left-right and right-left scanning. Similarly for fc2, the convolution activations are shared for up-down and down-up scanning. Thus the fc1 and fc2 layers together with the added 1D spatial RNNs form a complete L-RNN module.

During training, as described in Section 4, the 1D spatial RNNs are initialized with a zero recurrence matrix. The entire network is then fine-tuned end-to-end with the PASCAL VOC2012 data. We adopt RMS-prop (Tieleman & Hinton, 2012) for 30 epochs with hyper-parameters lr = 10^-4, ρ = 0.9, ε = 10^-8, then decrease the learning rate to lr = 10^-5 for 10 epochs.

In this paper we introduce an alternative 'module' for learning multi-scale spatial contextual information by using Recurrent Neural Networks (RNNs) within layers. This approach is inspired by the ReNet architecture of Visin et al. (2015), which we extend here into a hybrid architecture that interleaves traditional convolutional neural network (CNN) modules with layer recurrent modules, and which we term a Layer Recurrent Neural Network (L-RNN). A L-RNN module is a combination of 1D RNNs, and is able to learn contextual information adaptively, with the effective receptive field able to reach across the entire feature map or image, if that is required for the task. The hybrid network combines the best of both worlds: canonical CNNs are composed of filters that are efficient in capturing features in a local region, whilst the L-RNNs are able to learn long-range dependencies across a layer efficiently with only a small number of parameters.

Results & Discussion. The results are shown in Table 3. Compare the 32s rows with and without the L-RNN for the FC layers with 512, 1024, and 2048 channels. As can be seen, the addition of the L-RNN always improves the segmentation performance over the pre-trained FCN-32s baselines. However, the improvement is not large, about 1-1.5% mean IOU. This is because the receptive field in the fully connected layers of FCN-32s is sufficiently large to cover 224×224 pixels of the input patch, and consequently the networks are not able to benefit much from the context provided by the L-RNN. The benefit is greater when L-RNNs are added to the lower layers (where the receptive fields of the convolutions are much smaller), and we turn to that case next.
"}, {"section_index": "3", "section_name": "5.2.3 FCN-8s WITH L-RNN MODULES", "section_text": "We describe the basic L-RNN module in Section 2, and discuss different fusion choices for the hybrid architecture obtained by incorporating the L-RNN into residual blocks (He et al., 2016b) in Section 3. In addition, in Section 4, we explain how L-RNN modules can be inserted into pre-trained CNNs seamlessly. This means that the entire network does not have to be trained from scratch; only the added L-RNNs are fine-tuned together with the pre-trained networks, and the experiments show that this addition always improves performance. In Section 5, we experiment on CIFAR-10 classification with hybrid networks of increasing depth; by using Layer Normalization (Ba et al., 2016), we are able to train vanilla RNNs to match the performance of GRU (Chung et al., 2015), while using fewer parameters.
Architecture & Training. The architecture FCN-8s (L-RNN) is shown in Figure 4; as with the FCN-32s architecture, 1D spatial RNNs are inserted into the fc1 and fc2 layers to form a L-RNN module. L-RNNs are also inserted into the lower layers, namely the pool3 and pool4 layers. Unlike the FC layers in the FCN-32s, where the prediction for each central pixel comes from image patches of size 224×224, the predictions from pool3 and pool4 are based on receptive fields of much smaller size on the image (around 44×44 and 100×100 pixels respectively). Thus, the inserted L-RNN modules must be able to model relatively long-range dependencies.

[Figure: the baseline FCN-32s, FCN-16s and FCN-8s up-sampling streams (fc layers of kernel size 7×7×512×4096 and 1×1×4096×4096, with up2x, up8x, up16x and up32x modules).]

In computer vision tasks, such as image classification or pixel level prediction, multi-scale contextual information plays a very important role in achieving high performance. The original architectures for these tasks (e.g. He et al. (2016a); Krizhevsky et al. (2012); Long et al. (2015); Ronneberger et al. (2015); Simonyan & Zisserman (2015); Szegedy et al. (2015)) were able to obtain multi-scale context with a large spatial footprint by the combination of filters through the layers of the network, so that a large receptive field was effectively built up. Indeed, the final layers of these networks use average pooling or fully connected layers (convolution with a large kernel) so that the effective receptive field covers the entire input image patch. More recent pixel prediction architectures have used dilated convolutions (Yu & Koltun, 2016; Chen et al., 2016), which are able to aggregate multi-scale contextual information without losing resolution (due to the spatial pooling and strides in the original architectures), and without incurring the penalty of having to learn many parameters for convolutions with very large kernels.

It is worth noting that (broadly) recurrence can be used in feed-forward multi-layer convolutional neural network architectures in two ways: between layers, and within layers. For example, between-layer recurrence was used for scene labelling in (Liang et al., 2015; Pinheiro & Collobert, 2014), with convolutions applied recursively on top of feature maps from different layers or raw input images. And in (Zheng et al., 2015), spatial dependencies are modelled explicitly for semantic segmentation with densely connected Gaussian CRFs, by iterated application of bilateral filtering using between-layer recurrence.

Figure 4: FCN-32s (above the blue dash line) and FCN-8s with L-RNN modules. Spatial RNNs are inserted into the fully connected (FC) layers in all FCNs; every two FC layers construct a complete L-RNN module. {384, 192, 96} indicate the spatial sizes of the feature maps. Kernel sizes for the fully connected layers (n is an experimental variable, the number of channels): fc1: 7×7×512×n, fc2: 1×1×n×n, fc3: 1×1×n×21, fc4: 1×1×512×1024, fc5: 1×1×1024×1024, fc6: 1×1×1024×21, fc7: 1×1×256×1024, fc8: 1×1×1024×1024, fc9: 1×1×1024×21.

The architecture of the network (Figure 1) is composed of two parts. Local features are calculated by the low-level CNN module; the Layer-RNN (L-RNN) module, consisting of several 1D spatial RNNs, is applied to capture the spatial dependencies.
By scanning across the feature maps in different directions, the complete L-RNN is able to learn the receptive field in an adaptive way, up to the size of the entire image. These two modules can be combined to build networks in various ways; for example, an L-RNN module can be stacked on top of several CNN modules at the final layer, or CNN and L-RNN modules can be interleaved at multiple levels.

During training, the network is initialized from the FCN-8s baseline, and then fine-tuned using segmentation data. Again the PASCAL VOC dataset is used. Furthermore, when comparing to the other previously published methods, the network is further trained on the COCO trainval dataset and we use a densely connected CRF as post-processing (Krähenbühl & Koltun, 2012).

[Figure 1: (A) a CNN module producing input CNN features; (B), (C) spatial recurrent modules whose outputs are combined by concatenation or sum; (B) and (C) together form the Layer-RNN module.]

Results on PASCAL VOC Validation set. The experimental results are shown in Table 3.

Table 3: Comparison of FCN networks on the PASCAL VOC2012 segmentation validation set.

Comparing the rows for 32s with and without L-RNN, to those for 8s with and without L-RNN, we can draw the following conclusions:

As shown in Figure 1, the Layer-RNN (L-RNN) module is a combination of the 1D spatial recurrent modules (B) and (C). In each module, there are two 1D RNNs scanning across the feature maps horizontally or vertically from two directions (bidirectional spatial RNNs), and their hidden states are updated at every spatial step. Consequently, for each of the horizontal and vertical directions, two output feature maps are obtained with the same width and height as the input feature maps. In our implementation, we simply sum up these output feature maps (an alternative is to concatenate the output feature maps, but that would increase the number of parameters).

Improvement due to the skip layers. It can be seen (for IOU) that going from FCN-32s(2048) to FCN-8s(2048), where there are additional skip layers, the performance is boosted from 62.7 to 64.1. The skip layers in the FCN-8s architecture introduce more parameters, but this is not the reason for the performance boost, since FCN-8s(2048) and FCN-32s(4096) have a similar number of parameters though they perform very differently (64.1 vs. 62.9). This observation confirms that the performance gain is brought by the skip layers, rather than the increased number of parameters.

More formally, assume the feature maps (layer L) coming into the L-RNN module are X^L ∈ R^{m×n×d} and the output is X^{L+1} (layer L+1), where m, n, d refer to the width, height, and number of feature maps respectively for the input layer. For simplicity, assume the input to the 1D spatial RNNs from X^L is a feature vector at each spatial location; each row or column of the feature maps is treated as one sequence. When scanning from left to right, the feature responses for location (i, j) can be calculated as:

x_{i,j}^{L+1} = f(U x_{i,j}^{L} + V x_{i,j-1}^{L+1} + b)    (left to right)    (1)

where U ∈ R^{D×d}, V ∈ R^{D×D}, b ∈ R^{D×1}, x_{i,j}^{L+1} ∈ R^{D×1}, and D denotes the number of nodes used in the 1D spatial RNN; f refers to the non-linearity function.

Improvement due to L-RNN module. Inserting a L-RNN to the FC layers of FCN-32s(2048) only improves the performance from 62.7 to 64.2. However, as noted earlier, since the nodes in the FC layers already have receptive fields covering the whole 224×224 input patch, the gain from the additional context is limited.

[Figure 4 diagram: FCN-32s (L-RNN) with L-RNN Module 1 over fc1-fc3 and up32x; FCN-16s (L-RNN) with L-RNN Module 2 over fc5-fc6 and up16x; FCN-8s (L-RNN) with L-RNN Module 3 over fc8-fc9 and up8x.]

By contrast, our Layer-RNN architecture falls into the second category, where within-layer recurrence is used to capture dependencies. Others have learnt contextual information from within-layer recurrence for tasks such as object detection (Bell et al., 2016), and low-level vision problems, such as de-noising, colourization and smoothing (Liu et al., 2016). We postpone discussing in detail the relationships of the proposed Layer-RNN modules to these architectures, and to that of ReNet (Visin et al., 2015) and ReSeg (Visin et al., 2016), until we have introduced the L-RNN in Section 2.
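To make Eqn. (1) and the module construction concrete, the following is a minimal numpy sketch (not the authors' implementation; all names and shapes are illustrative): one directional scan per call, with the bidirectional row outputs summed and then fed to the column scans, as described above.

```python
import numpy as np

def scan_rows(X, U, V, b, reverse=False):
    """One 1D spatial RNN over the rows of X (m x n x d), as in Eqn. (1):
    x^{L+1}_{i,j} = f(U x^L_{i,j} + V x^{L+1}_{i,j-1} + b), with f = ReLU.
    All rows are processed in parallel; the scan runs over the columns."""
    m, n, d = X.shape
    D = U.shape[0]
    out = np.zeros((m, n, D))
    h = np.zeros((m, D))  # zero initial hidden state per row
    cols = range(n - 1, -1, -1) if reverse else range(n)
    for j in cols:
        h = np.maximum(X[:, j] @ U.T + h @ V.T + b, 0.0)
        out[:, j] = h
    return out

def lrnn_module(X, p_row, p_col):
    """Module (B): bidirectional row scans, summed; module (C): bidirectional
    column scans on top, via a transpose of the two spatial axes."""
    U, Vf, Vb, b = p_row
    Y = scan_rows(X, U, Vf, b) + scan_rows(X, U, Vb, b, reverse=True)
    U2, Vf2, Vb2, b2 = p_col
    Yt = Y.transpose(1, 0, 2)
    Z = scan_rows(Yt, U2, Vf2, b2) + scan_rows(Yt, U2, Vb2, b2, reverse=True)
    return Z.transpose(1, 0, 2)

rng = np.random.RandomState(0)
X = rng.rand(8, 8, 16)  # an 8x8 feature map with d = 16 channels, D = 16 nodes
mk = lambda: (rng.randn(16, 16) * 0.1, rng.randn(16, 16) * 0.1,
              rng.randn(16, 16) * 0.1, np.zeros(16))
print(lrnn_module(X, mk(), mk()).shape)  # (8, 8, 16)
```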
Type                               # of channels in FC   L-RNNs added   Pixel Acc %   Mean IOU %
32s                                512                   NO             90.4          61.5
32s                                1024                  NO             90.5          62.1
32s                                2048                  NO             90.7          62.7
32s                                4096                  NO             90.7          62.9
8s                                 1024                  NO             91.3          63.8
8s                                 2048                  NO             91.2          64.1
8s                                 4096                  NO             91.3          64.4
8s (original (Long et al., 2015))  4096                  -              -             61.3
32s                                512                   YES            90.8          62.7
32s                                1024                  YES            90.9          63.4
32s                                2048                  YES            91.1          64.2
8s                                 2048                  YES            92.6          69.1

In (B), two 1D spatial RNNs are applied to scan along each row independently from different directions; hidden states are calculated at every spatial step, and the output feature maps can either be concatenated or summed up. The receptive field for the black pixel in (B) is labelled in orange.

In (C), two 1D spatial RNNs are applied to scan along each column from two directions. The combination of (B) and (C) defines the L-RNN module, which is able to propagate information over the entire image.

In contrast, adding L-RNNs to FCN-8s brings a substantial improvement from 64.1 (FCN-8s) to 69.1 (FCN-8s-LRNN). This process will introduce more parameters due to the recurrence term in the RNNs, but it is clear that the improvement is mainly from the inserted L-RNN modules after pool3 and pool4 in FCN-8s, rather than from the increased number of parameters. The reason is that, when comparing FCN-8s (2048 channels without L-RNN) to FCN-8s (4096 channels without L-RNN), although the number of parameters is increased dramatically, the performance is only increased from 64.1 to 64.4. While FCN-8s (4096 channels without L-RNN) has roughly the same number of parameters as FCN-8s (2048 channels with L-RNN), the performance gain for the latter is from 64.4 to 69.1. In conclusion, the L-RNN is able to learn contextual information over a much larger range than the receptive field of pure local convolutions.

1D spatial RNNs scanning the other directions can be calculated similarly. Notice that the first term of equation (1) encodes local information independently, resembling the normal convolutional layer, and the second term characterizes the within-layer recurrence (U is a convolution matrix, V a recurrence matrix). We make use of this observation in Section 4.
"}, {"section_index": "4", "section_name": "2.2 DISCUSSION AND RELATION TO OTHER WORK", "section_text": "Results on PASCAL VOC Test set. Table 4 shows the results of the FCN-8s with L-RNNs on the PASCAL VOC test data, and also compares to others who have published on this dataset. The performance is far superior to the original result (Long et al., 2015) using a FCN-8s with 4096 channels (whereas only 2048 channels are used here). We also compare to the dilated convolution network of (Yu & Koltun, 2016), obtaining comparable, though slightly better performance. Note that in (Yu & Koltun, 2016), multi-scale contextual information is captured by explicitly designing dilated convolution kernels, while the L-RNN is able to learn contextual information implicitly. Finally, we compare to (Zheng et al., 2015), who add a densely connected CRF to FCN-8s. If we also add a dense CRF as post-processing, we boost the performance by 1% in IOU (the same boost as obtained by (Yu & Koltun, 2016)).
As can be seen in Figure 1C, the effective receptive field can cover the entire image. However, the actual receptive field depends on the parameters of the RNNs, and can be learnt adaptively. As an insight into what is learnt, consider a separable filter, such as an axis-aligned 2D Gaussian. Such filters can be applied exactly by a composition of 1D Gaussian convolutions in the horizontal and vertical directions. The 1D spatial RNNs can approximate finite 1D convolutions of this type.

We next discuss the relation of the L-RNN to prior work. First, ReNets (Visin et al., 2015), which is an architecture completely made of 1D RNNs (i.e. no CNNs). In ReNets, the input images are first split into non-overlapping patches of size m×n×d, where m, n, d refer to width, height and feature channels respectively. The 1D RNNs take the flattened patch (mn×d) as input, and output a feature vector of size D×1, where D refers to the number of nodes used in the RNNs. In contrast, we interleave the L-RNN and CNN modules. There are two benefits of this: first, CNNs are more efficient at capturing local features than RNNs, and the L-RNN stacked upon them is able to learn dependencies between local features (rather than the input channel reformatted); second, we are able to introduce more non-linearities between the hierarchical layers (through the convolutional+ReLU and pooling layers), and a RNN provides non-linearities within the same layer.

                                    Mean IOU %
Methods                             P      P+CRF   P+COCO   P+COCO+CRF
FCN-8s (Long et al., 2015)          62.2   n/a     n/a      n/a
CRF-RNNs (Zheng et al., 2015)       n/a    72.0    n/a      74.7
Dilated Conv. (Yu & Koltun, 2016)   n/a    n/a     73.5     74.7
FCN-8s-LRNN (2048)                  71.9   72.7    74.2     75.7

The 2D-RNN, proposed in (Graves & Schmidhuber, 2009; Theis & Bethge, 2015), is able to scan across the image or feature maps row-by-row, or column-by-column sequentially, with each RNN node accepting input from three sources, namely, projections of the current input, and feedbacks from the two neighbour nodes. By contrast, we use unidirectional 1D spatial RNNs, with each hidden node only accepting feedback from its previous node. Another advantage of our model is that rows or columns can be processed in parallel on GPUs, and training time is shortened.

Bell et al. (2016) (Inside-Outside Net) and Visin et al. (2016) (ReSeg) describe similar ideas for object detection and semantic segmentation. Both architectures follow a pipeline that consists of a CNN feature extractor (VGG Net) followed by spatial RNNs at the final prediction stage. In contrast, we treat the L-RNN module as a general computational layer that can be inserted into any layer of modern architectures, and interleaved with CNN modules. This enables a network to be capable of learning contextual information in a flexible way at multiple levels, rather than with hand-crafted kernel sizes and receptive fields.

In Figure 5, we show samples of semantic segmentations on the PASCAL VOC2012 validation set. In each figure, we show our predictions and the results after CRF post-processing. Comparing with the end-to-end trainable CRF-RNN (Zheng et al., 2015), our predictions miss the small details, like the wheel of the bicycle, but show much better performance in determining the class of the segmented regions, something that context can really contribute to.

Note that the vanilla RNN unit consists of two terms, a local term and a recurrence term, where the local term is exactly the convolution operation. Therefore, the spatial RNN can be seen as a generalisation of the convolutional layer; in the worst case, when the RNN learns no context, the layer simply becomes a convolutional one. For tasks with limited data (semantic segmentation in our case), we propose a regime for inserting the L-RNN into the pre-trained FCN and fine-tuning the entire network end-to-end. This means that we directly increase the representational power of the model, and set the pre-trained model free to learn contextual information if it is needed.
This paper has shown that the proposed L-RNN module is an alternative way of adding multi-level spatial context to a network. In fact, L-RNNs can be interleaved with convolutional layers to learn context at any stage. When the L-RNN is only used at the final stage after the CNNs, it gives shallow networks the receptive fields of far deeper networks. Furthermore, we have demonstrated that inserting L-RNNs can boost the performance of pre-trained networks, and given an initialization procedure that makes this training a simple matter of end-to-end fine-tuning.

There is much left to investigate using L-RNNs as a new building block, and we suggest some avenues here: (i) training the hybrid architectures on larger datasets, such as ImageNet (Deng et al., 2009), and learning representations that can be transferred to other vision tasks; (ii) a similar investigation for deep residual networks where the residual blocks are either convolutional or L-RNNs; and (iii) including a CRF final layer in end-to-end training.

In this section, we describe the architecture for incorporating 1D spatial RNNs into the computational block of Residual Networks (He et al., 2016b), and also discuss fusion methods for such blocks.

[Figure 5 panels: Input Image, CRF-RNN, FCN(8s)-LRNN, LRNN+CRF, Ground-truth; class legend: B-ground, Aeroplane, Bicycle, Bird, Bottle, Bus, Cat, Chair, Cow, Dining-table, Dog, Horse, Motorbike, Person, Potted-Plant, Sheep, Sofa, Train, TV/Monitor.]

We start with the standard residual block of He et al. (2016b) (Figure 2(a)), and then replace the included CNN layer with bidirectional spatial RNNs, to include a L-RNN module instead.

[Figure 2: (a) CNN module and (b) L-RNN module; each stacks BN, ReLU and a convolution (or bidirectional spatial RNNs plus a linear layer), followed by forward / sum / concatenate fusion at the output.]

Figure 5: Qualitative Results. First column: input image. Second column: prediction from Zheng et al. (2015). Third column: prediction from our networks. Fourth column: CRF post-processing. Fifth column: ground-truth annotation.
"}, {"section_index": "5", "section_name": "ADDING A LAYER-RNN TO A PRE-TRAINED CNN", "section_text": "In this section, we describe how a Layer-RNN module can be seamlessly inserted into a pre-trained CNN. In a typical scenario, the CNN would be trained for classification on ImageNet (where there are copious annotations). After inserting the L-RNN modules, the hybrid L-RNN network can then be fine-tuned for a new task such as pixel-level prediction, e.g. semantic segmentation (where the annotated data is usually more limited). This trick naturally allows multi-level contextual information to be effortlessly incorporated.
Avoiding training the network from scratch means the entire network can be re-purposed with the available annotations and trained end-to-end for the new task, whilst benefiting from the earlier classification training.

Bell, Sean, Zitnick, C Lawrence, Bala, Kavita, and Girshick, Ross. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. CVPR, 2016.

Hariharan, Bharath, Arbelaez, Pablo, Girshick, Ross, and Malik, Jitendra. Simultaneous detection and segmentation. ECCV, 2014.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. CVPR, 2016a.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Identity mappings in deep residual networks. ECCV, 2016b.

Huang, Gao, Liu, Zhuang, and Weinberger, Kilian Q. Densely connected convolutional networks. https://arxiv.org/abs/1608.06993, 2016.

Krähenbühl, Philipp and Koltun, Vladlen. Efficient inference in fully connected crfs with gaussian edge potentials. NIPS, 2012.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. NIPS, 2012.

Liang, Ming, Hu, Xiaolin, and Zhang, Bo. Convolutional neural networks with intra-layer recurrent connections for scene labeling. NIPS, 2015.

We consider three fusion options for combining the features from such blocks with the input to subsequent layers, namely forward, sum and concatenation. Forward refers to the traditional feed-forward architectures:

X^{L+1} = F(X^L, W)

i.e. the block simply becomes a new layer; sum denotes the method of the original Residual Networks:

X^{L+1} = X^L + F(X^L, W)

so that the L-RNN module acts as a residual block; whilst, in concatenation, features from multiple layers (of the same spatial size) are concatenated:

X^{L+1} = [X^L; F(X^L, W)]    ([;] refers to concatenation)

Therefore, the channels of the output feature maps will be the sum of the channels of the two concatenated layers (the number of parameters will be increased for the next layers). In the experimental evaluation of Section 5.1 we compare these options.

We illustrate the idea using 1D convolution, but the same principles hold for the entire L-RNN module. As shown in Figure 3, the canonical CNN architecture for a 1D convolution can be denoted as:

X^{L+1} = f(W * X^L + b)

whilst a 1D spatial RNN additionally incorporates a recurrence term:

X_t^{L+1} = f(U * X_t^L + V X_{t-1}^{L+1} + b)

where U, V, b refer to the parameters that are shared across the whole scan-line. Notice that the 1D spatial RNN is designed to incorporate two terms: projections from the local region (input-to-hidden) and a recurrence term from the previous hidden unit (hidden-to-hidden). In fact, it is the presence of a non-zero recurrence matrix V that characterizes the 1D spatial RNN, and it can be calculated in a two-step way as:

x_i^{L+1} = f(x_{inter,i})                      (i = 1, zero initial state)
x_i^{L+1} = f(x_{inter,i} + V x_{i-1}^{L+1})    (i > 1)

[Figure 3: Convolutional Neural Networks (CNNs): X_inter = W * X^L + b, X^{L+1} = f(X_inter). Spatial Recurrent Neural Networks (Spatial RNNs): X_inter = U * X^L + b, X^{L+1} = f(X_inter + V X^{L+1}).]

By interpreting the recurrence in this way, 1D spatial RNNs can be constructed by inserting recurrence directly into any convolutional layer right after the convolution. If the recurrence matrix V is initialized as zero, and ReLU is the activation function, then the 1D spatial RNN will be initialized exactly as the pre-trained CNNs. The complete L-RNN can be constructed by inserting two bidirectional spatial RNNs into subsequent layers of the pre-trained CNNs.
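The two-step view above suggests a simple check. The following is a minimal numpy sketch (assumptions: a single 1D scan-line, x_inter pre-computed by the existing convolution, ReLU activation; names are illustrative) showing that with V initialized to zero the spatial RNN reproduces the pre-trained convolutional response exactly:

```python
import numpy as np

def spatial_rnn_scanline(x_inter, V):
    """Two-step evaluation along one scan-line: x_inter = U * x^L + b comes
    from the existing (pre-trained) convolution; recurrence is applied after."""
    T, D = x_inter.shape
    out = np.zeros((T, D))
    h = np.zeros(D)  # zero initial state
    for i in range(T):
        h = np.maximum(x_inter[i] + V @ h, 0.0)  # f(x_inter_i + V x^{L+1}_{i-1})
        out[i] = h
    return out

x_inter = np.random.randn(10, 8)  # pre-computed convolution responses, one row
V0 = np.zeros((8, 8))             # zero recurrence matrix at initialization
# with V = 0 and ReLU, the scan equals the pre-trained CNN output exactly
print(np.allclose(spatial_rnn_scanline(x_inter, V0), np.maximum(x_inter, 0)))  # True
```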
We derive the expression of the within-layer gradient for use in back-prop fine-tuning in Appendix B.

Liu, Sifei, Pan, Jinshan, and Yang, Ming-Hsuan. Learning recursive filters for low-level vision via hybrid neural network. ECCV, 2016.

Long, Jonathan, Shelhamer, Evan, and Darrell, Trevor. Fully convolutional networks for semantic segmentation. CVPR, 2015.

We test the proposed Layer-RNN on two supervised learning tasks: CIFAR-10 classification in Section 5.1, and PASCAL VOC 2012 segmentation in Section 5.2.

Pinheiro, Pedro HO and Collobert, Ronan. Recurrent convolutional neural networks for scene labeling. ICML, 2014.

Romero, Adriana, Ballas, Nicolas, Kahou, Samira Ebrahimi, Chassang, Antoine, Gatta, Carlo, and Bengio, Yoshua. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.

In this section, we investigate classification performance under variations in an architecture containing L-RNN modules. We vary the depth of the network, the number and position of the L-RNN modules, the type of recurrent units in the RNNs, the pooling mechanism for the last pooling layer, and the method of fusing the block outputs.

Ronneberger, Olaf, Fischer, Philipp, and Brox, Thomas. U-net: Convolutional networks for biomedical image segmentation. MICCAI, 2015.

Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.

There are two principal architectural variations. The first variation is that from Network A to D, we gradually increase the network depth by adding CNN Modules, with the L-RNN module always stacked at the final stage to capture global information over the entire image, in a similar manner to the fully connected layers or average pooling in other networks. Network A has 5 convolutional layers.

The second principal variation, in Networks E and F, is to interleave CNN and L-RNN modules. This means that the network is capable of learning representations across large spatial footprints at any stage in the network. To show the effectiveness of adding L-RNN modules, we include a Baseline-CNN composed of only convolutional layers (7 layers, with concatenation used at every skip layer). Network E is built upon the Baseline-CNN by inserting L-RNN modules before CNN modules at multiple stages. To make sure the performance gain is not from the increased number of parameters, we cut down the number of filters in the last CNN module to 128 (this number is 256 in the Baseline-CNN). Network F uses more convolutional layers interleaved with L-RNN modules.

Theis, Lucas and Bethge, Matthias. Generative image modeling using spatial lstms. NIPS, 2015.

Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.

X_inter = U * X^L + b    (convolution)
x_i^{L+1} = f(x_{inter,i})                      (i = 1, zero initial state)
x_i^{L+1} = f(x_{inter,i} + V x_{i-1}^{L+1})    (i > 1)

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.

Table 1: Network architectures for CIFAR-10 experiments. In Network A, a variety of selections are tested (coded as blue). In Feature Fusion, we may choose Forward, Sum, or Concatenation; in the L-RNN module, GRU and vanilla RNNs are tested; max pooling or average pooling can be used for global pooling. From Network A to D, the depth of the networks is gradually increased by adding CNN modules; for example, comparing C to B, two more CNN modules are added based on B (coded as red). Comparing Networks E and F with the Baseline-CNN, L-RNN modules (green) are interleaved with CNN modules.
Other variations of the architectures include: firstly, we may use Forward, Sum, or Concatenation to fuse features; secondly, GRU and vanilla RNN units are compared for the L-RNN modules, with ReLU used in both cases as the non-linear activation; thirdly, both max pooling and average pooling are tested as the global pooling. For clarity, we name the networks by these variations in Table 2: when Forward is selected to fuse features, Network A-Forward simply follows the traditional CNN with pure feed-forward layers. A-Concat uses concatenation as an alternative, and A-Sum follows the idea of residual networks proposed in (He et al., 2016b), where the number of filters is gradually increased as the networks get deeper. To match dimensions for summation, a 1×1 convolution is used in A-Sum. In our experiments, we found that concatenation works better than sum (Table 2). Therefore, in all other architectures (B, C, D), as we gradually increase the network depth by adding CNN modules, we fuse the skip layers by only alternating between concatenation and forward; a sketch of the three fusion operations follows the next paragraph.

[Table 1 (flattened in extraction): column-wise architecture listings for Baseline-CNN and Networks A-F. All columns start from a 32×32×3 input and a 3×3×64 convolution, stack CNN modules (3×3, 64 or 128 filters) with Forward/Concatenate fusion and MaxPooling (2), place L-RNN modules (128 or 256 nodes) at the final stage (A-D) or interleaved with the CNN modules (E, F), and end with Global Pooling (8), Dropout (0.5) and Softmax (10).]

Visin, Francesco, Ciccone, Marco, Romero, Adriana, Kastner, Kyle, Cho, Kyunghyun, Bengio, Yoshua, Matteucci, Matteo, and Courville, Aaron. Reseg: A recurrent neural network-based model for semantic segmentation. CVPR, 2016.

Zheng, Shuai, Jayasumana, Sadeep, Romera-Paredes, Bernardino, Vineet, Vibhav, Su, Zhizhong, Du, Dalong, Huang, Chang, and Torr, Philip HS. Conditional random fields as recurrent neural networks. ICCV, 2015."}, {"section_index": "6", "section_name": "Appendices", "section_text": "Following the VGG-net (Simonyan & Zisserman, 2015), in all architectures, convolutional kernels in the CNN Module are of size 3×3. Maxpoolings (2×2) are used as intermediate pooling, and 8×8 global poolings (average or max) are applied at the end. To avoid overfitting, we use dropout (0.5). Training details and recurrent units are described in Appendix A. Implementations are mostly based on Theano (Theano Development Team, 2016) with a single NVIDIA Titan X.
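As referenced above, a minimal numpy sketch of the three fusion choices (Forward, Sum, Concatenation); this is illustrative only, assuming a block output F(X) of the same spatial size as the input:

```python
import numpy as np

def fuse(x, fx, mode):
    """Combine a block input x (H x W x C1) with its output fx (H x W x C2)."""
    if mode == "forward":      # plain feed-forward: pass on fx only
        return fx
    if mode == "sum":          # residual-style; A-Sum matches C1 == C2
        return x + fx          # (a 1x1 convolution is used to match dims)
    if mode == "concat":       # skip-layer concatenation; channels add up
        return np.concatenate([x, fx], axis=-1)
    raise ValueError(mode)

x, fx = np.zeros((8, 8, 64)), np.zeros((8, 8, 64))
print(fuse(x, fx, "concat").shape)  # (8, 8, 128)
```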
In the Layer-RNN, we test gated recurrent units (GRU) for the RNN blocks (Chung et al., 2015). The GRU has two gates, namely a reset gate r and an update gate z. Intuitively, the reset gate determines how to combine the new input with the previous memory, and the update gate defines how much of the previous memory to use; thus, the hidden state s_t of the GRU at time t can be computed as:

z_t = σ(x_t U^z + s_{t-1} W^z)
r_t = σ(x_t U^r + s_{t-1} W^r)
h_t = f(x_t U^h + (s_{t-1} ⊙ r_t) W^h)
s_t = (1 - z_t) ⊙ h_t + z_t ⊙ s_{t-1}

Dataset & Evaluation. We conducted experiments on the CIFAR-10 dataset, which consists of 40k training images, 10k validation and 10k testing images in 10 classes; each image is of 32×32 pixels with RGB channels. We augment the training data with simple transformations (rotation, flipping, scaling) on the fly. The mean image over the whole training set is subtracted from each image during training. Following the standard evaluation protocol, we report the top1 error on the testing set.

Results & Discussion. We present detailed comparisons with other published methods in Table 2.

During training, we iteratively increase and decrease the learning rate (learning rate restart) between 10^-3 and 10^-5, based on the conjecture (Figure 6) that networks tend to get trapped in regions with small derivatives, such as saddle points or bad local minima (Dauphin et al., 2014). Traditionally, the learning rate is decreased every several epochs, and the gradients used to update parameters depend on both the learning rate and the derivatives w.r.t. the loss function. At the end of training, both of these terms tend to be very small, so it becomes difficult for the networks to escape from these regions. During our training, we restart the learning rate every several epochs (we try 60 or 80 in our training), and decrease it gradually.

Table 2: Comparison with previously published methods on CIFAR-10. The networks are named by the chosen operation at every step; for instance, A-Forward-GRU-Max refers to architecture A with Forward feature fusion, GRU in the L-RNN Module, and max pooling as the final global pooling.

From the experimental results, we can draw the following conclusions:

In our experiments for shallow networks, the summing of residual connections shows no benefit compared to feed-forward or concatenation. This observation is made from the results of A-Forward-GRU-Max (7.57%), A-Concat-GRU-Max (7.35%) and A-Sum-GRU-Max (7.69%). Thus, as also employed in U-Net or DenseNet (Ronneberger et al., 2015; Huang et al., 2016), concatenation can be used as an alternative to summation in building deeper networks.
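As an aside on the optimization, a hedged sketch of the learning-rate restart schedule described above; the exact decay shape is not specified in the text, so a log-linear ramp between the stated endpoints is assumed here:

```python
import numpy as np

def restart_lr(epoch, cycle=60, lr_max=1e-3, lr_min=1e-5):
    """Within each cycle the rate decays (log-linearly, an assumption) from
    lr_max to lr_min, then jumps back to lr_max at the next cycle."""
    frac = (epoch % cycle) / float(cycle)
    log_lr = np.log10(lr_max) + frac * (np.log10(lr_min) - np.log10(lr_max))
    return float(10.0 ** log_lr)

print([round(restart_lr(e), 6) for e in (0, 30, 59, 60)])
# [0.001, 0.0001, 1.1e-05, 0.001]  <- the rate restarts at epoch 60
```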
CIFAR-10                           # Params   # Conv Layers   Approx. Time / Epoch (s)   Top1 Error (%)
ReNet (Visin et al., 2015)         -          0               -                          12.35
NIN (Lin et al., 2013)             -          -               -                          8.81
FitNet (Romero et al., 2014)       2.5M       19              -                          8.39
Highway (Srivastava et al., 2015)  2.3M       19              -                          7.54
ResNet-110 (He et al., 2016a)      1.7M       110             -                          6.61
ResNet-164 (He et al., 2016b)      1.7M       164             -                          5.46
DenseNet (Huang et al., 2016)      27.2M      100             -                          3.74
Baseline-CNN-Avg                   1.56M      7               331                        9.07
Baseline-CNN-Max                   1.56M      7               331                        8.48
A-Concat-RNN-Avg                   0.9M       5               293                        7.65
A-Concat-RNN-Max                   0.9M       5               293                        7.43
A-Forward-GRU-Max                  1.68M      5               315                        7.57
A-Concat-GRU-Max                   1.95M      5               377                        7.35
A-Sum-GRU-Max                      1.99M      5               383                        7.69
B-GRU-Max                          2.3M       9               542                        6.62
B-RNN-Max                          1.27M      9               483                        6.78
C (GRU-Max)                        2.5M       13              726                        6.21
D (GRU-Max)                        3M         19              1321                       5.73
E (RNN-Max)                        0.97M      7               462                        5.96
F (RNN-Max)                        1.55M      15              394 (Tensorflow on 2 GPUs) 5.39

To simplify the training process and reduce the number of parameters, we also test vanilla RNNs for the RNN blocks, with Layer Normalization (Ba et al., 2016). In a standard RNN, the outputs in the recurrent layer are calculated from the current input x_t and the previous hidden states h_{t-1}, denoted as a_t = U x_t + V h_{t-1}. The layer-normalized layer is computed as:

h_t = f((g / σ_t) ⊙ (a_t - μ_t) + b),  μ_t = (1/H) Σ_{i=1}^{H} a_{t,i},  σ_t = sqrt((1/H) Σ_{i=1}^{H} (a_{t,i} - μ_t)^2)

where U is the current input-to-hidden term and V is the hidden-to-hidden recurrence term; b and g are defined as the bias and gain parameters, of the same dimension as h_t.

Figure 6: Intuitive loss surfaces (panels: saddle point; side view; bad local minima). Deep neural networks may easily be trapped in saddle points or bad local minima."}, {"section_index": "7", "section_name": "B FINE-TUNING LAYER-RNNS WITH ZERO RECURRENCE MATRIX", "section_text": "Comparison of basic choices. Max pooling consistently performs better when used as the global pooling in our case; this is seen in the results of Baseline-CNN-Avg (9.07%) vs. Baseline-CNN-Max (8.48%), and A-Concat-RNN-Avg (7.65%) vs. A-Concat-RNN-Max (7.43%). One possible explanation would be that for classification tasks, decisions are based on the most salient features.

It can be seen that vanilla RNN units trained with Layer Normalization (Ba et al., 2016) can perform almost as well as GRU, while saving a large number of parameters (compare the results of A-Concat-RNN-Max with 0.9M parameters (7.43%) and A-Concat-GRU-Max with 1.95M parameters (7.35%), or B-RNN-Max with 1.27M parameters (6.78%) vs. B-GRU-Max with 2.3M parameters (6.62%)).

Networks with an L-RNN module stacked at the final stage. Even shallow networks with L-RNN modules (architectures A) can achieve comparable or superior performance to deep architectures with 19 layers that require more parameters (e.g. Network A-Concat-RNN-Max (0.9M) vs. Highway (2.3M)). This confirms that when a L-RNN module is stacked on top of CNNs, it is able to capture global information, avoiding the multiple-layer route to increasing receptive fields in standard architectures, e.g. in (Romero et al., 2014; Srivastava et al., 2015).

Assume E denotes the loss function for a specific task. Since V is shared for the whole 1D sequence (length denoted by T), the back-propagation within layer L+1 can then be derived as:

∂E/∂V = Σ_{t ≤ T} (∂E/∂x_T^{L+1}) (∂x_T^{L+1}/∂s_t) (∂s_t/∂V)

As expected, networks can always improve classification performance by adding more CNN modules (going from architecture A to D). Network D with 19 convolutional layers performs better than ResNet-110 (by 0.3% top1 error), though Network D has more parameters than ResNet-110, and is slightly worse than ResNet-164 (by 0.25% top1 error). Thus, following this trend, it is reasonable to expect a benefit if L-RNN Modules are combined with very deep networks, like the residual variants.

∂x_{t+1}^{L+1}/∂x_t^{L+1} = V^T · diag(f'(s_{t+1})),  so that  ∂x_T^{L+1}/∂x_t^{L+1} = Π_{k=t}^{T-1} ∂x_{k+1}^{L+1}/∂x_k^{L+1}
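To sanity-check the back-propagation recursions above, here is a small numpy sketch (ours, not the paper's code; a toy loss E = sum of the final state is assumed) that accumulates dE/dV through the scan and verifies it against numerical differentiation:

```python
import numpy as np

def forward(U, V, X):
    """Scan: s_t = U x_t + V x_{t-1}^{L+1}, x_t^{L+1} = ReLU(s_t), zero init."""
    h, H = np.zeros(V.shape[0]), []
    for x in X:
        h = np.maximum(U @ x + V @ h, 0.0)
        H.append(h)
    return H

def grad_V(U, V, X):
    """dE/dV for E = sum(x_T^{L+1}): ds_t/dV contributes outer(dE/ds_t,
    x_{t-1}^{L+1}); dE/ds_{t-1} chains backwards through V^T diag(f')."""
    H, dV = forward(U, V, X), np.zeros_like(V)
    ds = (H[-1] > 0).astype(float)  # dE/ds_T with f = ReLU
    for t in range(len(X) - 1, -1, -1):
        h_prev = H[t - 1] if t > 0 else np.zeros(V.shape[0])
        dV += np.outer(ds, h_prev)
        if t > 0:
            ds = (V.T @ ds) * (H[t - 1] > 0)
    return dV

rng = np.random.RandomState(0)
U, V = rng.randn(4, 3), rng.randn(4, 4) * 0.1
X = [rng.randn(3) for _ in range(5)]
num, eps = np.zeros_like(V), 1e-6  # central-difference check
for i in range(4):
    for j in range(4):
        Vp, Vm = V.copy(), V.copy()
        Vp[i, j] += eps
        Vm[i, j] -= eps
        num[i, j] = (forward(U, Vp, X)[-1].sum()
                     - forward(U, Vm, X)[-1].sum()) / (2 * eps)
print(np.allclose(grad_V(U, V, X), num, atol=1e-5))  # True
```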
Networks with L-RNN modules interleaved with CNN modules. Comparing the performance of Baseline-CNN-Max (8.48%) with that of Network E (5.96%), there is a significant performance boost (2.5%), brought by simply inserting L-RNN modules. Network E also has other advantages over the networks A to D: the number of parameters, network depth, and running time. Furthermore, when we continue increasing the network depth and interleaving L-RNN modules, Network F achieves comparable results (5.39%) to ResNet-164 (5.46%) with fewer parameters (1.55M vs. 1.7M). This confirms that, firstly, L-RNN modules can be combined with very deep networks, and secondly, rather than hand-crafting the kernel size, we should set the model free and let it learn contextual information at any stage.

For the final step, ∂E/∂V = (∂E/∂x_T^{L+1}) (∂x_T^{L+1}/∂s_T) (∂s_T/∂V), where ∂s_T/∂V = x_{T-1}^{L+1}; the terms for t < T follow by chaining the factors above.

Rather than initializing the recurrence matrix V randomly or as an identity matrix, we actually initialize it based on the features in a local neighbourhood (equation 20). During the back-propagation of spatial RNNs, gradients flow within layers; the between-layer gradient is calculated in the same way as for normal convolutional layers.
"}, {"section_index": "8", "section_name": "5.2 SEMANTIC SEGMENTATION", "section_text": "In this section, we insert L-RNN modules into the VGG-16 network (pre-trained on ImageNet (Deng et al., 2009)) and fine-tune the entire network for the PASCAL VOC 2012 segmentation task. The objective is to boost the segmentation performance by providing contextual information via the L-RNNs. In particular, we consider the two FCN segmentation architectures originally introduced by Long et al. (2015), FCN-32s and FCN-8s; these are described below.

We proceed in three steps: first, we establish baselines by training our own FCN-32s and FCN-8s (Appendix C), and comparing their performance to those of (Long et al., 2015). We also investigate the loss in performance as the fully connected (FC) layer is gradually reduced from 4096 to 512 channels. The reason for doing this is that when we insert the L-RNN module, its complexity (the dimension of the hidden units) depends on this number of channels, and so the overall complexity can be varied. In the second step, we insert L-RNNs into the FCN-32s architecture and evaluate the change in performance. Finally, we insert L-RNNs into the FCN-8s architecture and compare with previously published methods.

Dataset & Evaluation. We used a training set consisting of the VOC2012 training data (1464 images provided by the challenge organizers), augmented with training and validation data from Hariharan et al. (2014), which further extends the training set to a total of 11,685 images with pixel-level annotation. After removing the overlapping images between the VOC2012 validation data and this dataset, we are left with 346 images from the original VOC2012 validation set to validate our model. In all the following experiments, we use a single scale for the input images (384×384), and only horizontal flipping is used for data augmentation. The performance is measured in terms of pixel intersection-over-union (IOU) averaged across the 21 classes.

In this section, we derive the procedure for fine-tuning the recurrence matrix when it is initialized as zeros. We will only consider 1D scan-lines of the spatial RNN, and therefore simplify the derivation to a 1D sequence.
Consider the fully connected layer for simplicity; L and L+1 denote the layers, t refers to the index of the input, f refers to ReLU, and U, V refer to the input-to-hidden matrix and the recurrence matrix, respectively.

s_t = U x_t^L + V x_{t-1}^{L+1},  x_t^{L+1} = f(s_t)

V = V_0 - α ∂E/∂V    (gradient descent at the first iteration)"}]
rJq_YBqxx
[{"section_index": "0", "section_name": "DEEP CHARACTER-LEVEL NEURAL MACHINE TRANSLATION BY LEARNING MORPHOLOGY", "section_text": "token before translating. Thus, it will produce an <unk> token or just take the word from source sentence (Gulcehre et al.]2016} Luong et al.| 2015). More interestingly, DCNMT could translate 'convenienter\"' correctly as shown in Table2(b). By concatenating \"convenient' and \"er\", we get the. comparative adjective form of \"convenient' which never appears in the training set; however, our. model guessed it correctly based on the morphemes and the rules..\nShenjian Zhao\nDepartment of Computer Science and Engineering Shanghai Jiao Tong University Shanghai 200240, China\nIn this paper we have proposed an hierarchical architecture to train the deep character-level neural machine translation model by introducing a novel word encoder and a multi-leveled decoder. We have demonstrated the efficiency of the training process and the effectiveness of the model in comparison with the word-level and other character-level models. The BLEU score implies that our deep character level neural machine translation model likely outperforms the word-level models and is competitive with the state-of-the-art character-based models. It is possible to further improve performance by using deeper recurrent networks (Wu et al.]2016), training for more epochs and training with longer sentence pairs.\nAs a result of the character-level modeling, we have solved the out-of-vocabulary (OOV) issue tha1 word-level models suffer from, and we have obtained a new functionality to translate the misspelled o1 the nonce words. More importantly, the deep character-level is able to learn the similar embedding of the words with similar meanings like the word-level models. Finally, it would be potentially possible that the idea behind our approach could be applied to many other tasks such as speech recognition and text summarization.\nNeural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems. However, the use of large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose a novel architecture which learns morphology by using two recurrent networks and a hierarchical decoder which translates at character level. This gives rise to a deep character-level model consisting of six recurrent networks Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is more efficient in training than word-based models Our model obtains a higher BLEU score than the bpe-based model after training for one epoch on En-Fr and En-Cs translation tasks. Further analyses show that our model is able to learn morphology."}, {"section_index": "1", "section_name": "REFERENCES", "section_text": "Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holge. Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 2014.\nDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. 
International Conference on Learning Representations, 2015."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Neural machine translation (NMT) attempts to build a single large neural network that reads a sentence and outputs a translation (Sutskever et al., 2014). Most of the extant neural machine translation models belong to a family of word-level encoder-decoders (Sutskever et al., 2014; Cho et al., 2014). Recently, Bahdanau et al. (2015) proposed a model with an attention mechanism which automatically searches the alignments and greatly improves the performance. However, the use of a large vocabulary seems necessary for the word-level neural machine translation models to improve performance (Sutskever et al., 2014; Cho et al., 2015).

Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. A character-level decoder without explicit segmentation for neural machine translation. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016a.

Chung et al. (2016a) listed three reasons behind the wide adoption of word-level modeling: (i) a word is a basic unit of a language, (ii) data sparsity, (iii) vanishing gradients in character-level modeling. Consider that a language itself is an evolving system, so it is impossible to cover all words in the language. The problem of rare words that are out of vocabulary (OOV) is a critical issue which can affect the performance of neural machine translation. In particular, using a larger vocabulary does improve performance (Sutskever et al., 2014; Cho et al., 2015). However, the training becomes much harder and the vocabulary is often filled with many similar words that share a lexeme but have different morphology.

There are many approaches to dealing with the out-of-vocabulary issue. For example, Gulcehre et al. (2016); Luong et al. (2015); Cho et al. (2015) proposed to obtain the alignment information of target unknown words, after which a simple word dictionary lookup or identity copy can be performed to replace the unknown words in translation. However, these approaches ignore several important properties of languages such as monolinguality and crosslinguality, as pointed out by Luong and Manning (2016).
In fact, such RNNs often struggle with separating words that have similar morphologies but very different meanings.

Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017, 2016.

In order to address the issues mentioned earlier, we introduce a novel architecture by exploiting the structure of words. It is built on two recurrent neural networks: one for learning the representation of preceding characters and another for learning the weight of this representation over the whole word. Unlike subword-level models based on the byte pair encoding (BPE) algorithm (Sennrich et al., 2016), we learn the subword units automatically. Compared with the CNN word encoder (Kim et al., 2016; Lee et al., 2016), our model is able to generate a meaningful representation of the word. To decode at character level, we devise a hierarchical decoder which sets the state of the second-level RNN (character-level decoder) to the output of the first-level RNN (word-level decoder), which will generate a character sequence until generating a delimiter. In this way, our model almost keeps the same encoding length for the encoder as word-based models but eliminates the use of a large vocabulary. Furthermore, we are able to efficiently train the deep model, which consists of six recurrent networks, achieving higher performance.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Frederic Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.

In summary, we propose a hierarchical architecture (character -> subword -> word -> source sentence -> target word -> target character) to train a deep character-level neural machine translator. We show that the model achieves a high translation performance which is comparable to the state-of-the-art neural machine translation models on the tasks of En-Fr, En-Cs and Cs-En translation. The experiments and analyses further support the statement that our model is able to learn the morphology.

James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.

Neural machine translation is often implemented as an encoder-decoder architecture. The encoder usually uses a recurrent neural network (RNN) or a bidirectional recurrent neural network (BiRNN) (Schuster and Paliwal, 1997) to encode the input sentence x = {x_1, ..., x_{T_x}} into a sequence of hidden states h = {h_1, ..., h_{T_x}}:

h_t = f_1(e(x_t), h_{t-1})

p(y_t | {y_1, ..., y_{t-1}}) = g(e(y_{t-1}), s_t, c_t)"}, {"section_index": "4", "section_name": "where", "section_text": "s_t = f_2(e(y_{t-1}), s_{t-1}, c_t)

and g is a nonlinear and potentially multi-layered function that computes the probability of y_t. The context c_t depends on the sequence {h_1, ..., h_{T_x}}. Sutskever et al. (2014) encoded all information in the source sentence into a fixed-length vector, i.e., c_t = h_{T_x}.
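For concreteness, a toy numpy sketch of the encoder recursion h_t = f_1(e(x_t), h_{t-1}) above; this is illustrative only (a plain tanh RNN stands in for the actual gated units, and all names are hypothetical):

```python
import numpy as np

def encode(tokens, emb, W_in, W_rec):
    """h_t = f1(e(x_t), h_{t-1}); here f1 = tanh(W_in e(x_t) + W_rec h_{t-1})."""
    h, states = np.zeros(W_rec.shape[0]), []
    for tok in tokens:
        h = np.tanh(W_in @ emb[tok] + W_rec @ h)
        states.append(h)
    return states  # h_1 .. h_Tx, later consumed by the decoder / attention

rng = np.random.RandomState(1)
emb = {w: rng.randn(6) * 0.1 for w in ("hello", "world", "</s>")}
hs = encode(["hello", "world", "</s>"], emb,
            rng.randn(10, 6) * 0.1, rng.randn(10, 10) * 0.1)
print(len(hs), hs[-1].shape)  # 3 (10,)
```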
Bahdanau et al. (2015) computed c_t by the alignment model, which handles the bottleneck that the former approach meets.

θ* = argmax_θ Σ_{t=1}^{T_y} log p(y_t | {y_1, ..., y_{t-1}}, x; θ)

where e(x_t) ∈ R^m is an m-dimensional embedding of x_t. The decoder, another RNN, is often trained to predict the next word y_t given the previously predicted words {y_1, ..., y_{t-1}} and the context vector c_t; that is,

The whole model is jointly trained by maximizing the conditional log-probability of the correct translation given a source sentence with respect to the parameters of the model θ:"}, {"section_index": "5", "section_name": "4 DETAILED DESCRIPTION OF THE MODEL", "section_text": "We consider two problems in the word-level neural machine translation models. First, how can we map a word to a vector? It is usually done by a lookup table (embedding matrix) where the size of the vocabulary is limited. Second, how do we map a vector to a word when predicting? It is usually done via a softmax function. However, a large vocabulary will make the softmax computationally intractable.

Here we describe the implementation using Theano; it should be applicable to other symbolic deep learning frameworks. We use f to denote the transition of the recurrent network."}, {"section_index": "6", "section_name": "A.1 SOURCE WORD ENCODER", "section_text": "As illustrated in Section 3.1, the word encoder is based on two recurrent neural networks. We compute the representation of the word 'anyone' as

r_{anyone} = tanh(Σ_{t=1}^{6} w_t r_t)
"}, {"section_index": "7", "section_name": "3.1 LEARNING MORPHOLOGY IN A WORD ENCODER", "section_text": "Many words can be subdivided into smaller meaningful units called morphemes, such as \"any-one\", \"any-thing\" and \"every-one\". At the basic level, words are made of morphemes, which are recognized as grammatically significant or meaningful. Different combinations of morphemes lead to different meanings. Based on these facts, we introduce a word encoder to learn the morphemes and the rules of how they are combined. Even if the word encoder had never seen \"everything\" before, with an understanding of English morphology, the word encoder could gather the meaning easily. Thus, learning morphology in a word encoder might speed up training.

The word encoder is based on two recurrent neural networks, as illustrated in Figure 1. We compute the representation of the word 'anyone' as

r_{anyone} = tanh(Σ_{t=1}^{6} w_t r_t)

r_t = f(e(x_t), r_{t-1})

Each r_t contains information about the preceding characters. The weight w_t of each representation r_t is computed by

The backward state of the BiRNN is computed similarly, but in reverse order.

After encoding the words by the source word encoder, we feed the representations to the source sentence encoder. For example, the source \"Hello world </s>\" is encoded into a vector [r_{Hello}, r_{world}, r_{</s>}]; then the BiRNN sentence encoder encodes this vector into [v_1, v_2, v_3]. The computation is the same as Eqn. (9) and Eqn. (10); however, the input now changes to the representations of the words.

w_t = exp(aff(h_t))

where h_t is another RNN hidden state at time t and aff(·) is an affine function which maps h_t to a scalar.
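A small numpy sketch of the word encoder just described; this is illustrative only (toy shapes, and plain tanh RNNs with a simple forward RNN standing in for the BiRNN of Figure 1; aff(·) is a learned affine map):

```python
import numpy as np

def word_repr(chars, E, Wr, Wh, w_aff, b_aff):
    """One RNN builds prefix states r_t; a second RNN builds h_t whose affine
    projection gives the scalar energy w_t = exp(aff(h_t)); the word vector
    is tanh of the weighted sum of the r_t."""
    d = Wr.shape[0]
    r = h = np.zeros(d)
    acc = np.zeros(d)
    for c in chars:
        e = E[c]
        r = np.tanh(Wr @ np.concatenate([e, r]))
        h = np.tanh(Wh @ np.concatenate([e, h]))
        acc += np.exp(w_aff @ h + b_aff) * r  # w_t weights the prefix state r_t
    return np.tanh(acc)

rng = np.random.RandomState(0)
E = {c: rng.randn(8) * 0.1 for c in "anyoe"}  # toy character embeddings
Wr, Wh = rng.randn(16, 24) * 0.1, rng.randn(16, 24) * 0.1
print(word_repr("anyone", E, Wr, Wh, rng.randn(16) * 0.1, 0.0).shape)  # (16,)
```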
"}, {"section_index": "8", "section_name": "A.3 FIRST-LEVEL DECODER", "section_text": "We can regard the weight w_j as the energy that determines whether r_j is the representation of a morpheme and how it contributes to the representation of the word. Compared with an embedding lookup table, the decoupled RNNs learn the representation of morphemes and the rules of how they are combined, respectively, which may be viewed as learning distributed representations of words explicitly. For example, we are able to translate "convenienter" correctly, which validates our idea.

r_t = f(e(x_t), r_{t-1})

h_t = f(e(x_t), h_{t-1})

w_t = exp(W_w h_t + b_w)

r_anyone = tanh(Σ_{t=1}^{6} w_t r_t)

[Figure 1 (diagram): two character-level RNNs reading "a n y o n e"; the top RNN produces the representations r_1, ..., r_6 and the bottom BiRNN produces the weights w_1, ..., w_6.]

Figure 1: The representation of the word 'anyone'.

After obtaining the representation of the word, we could encode the sentence using a bidirectional RNN, as in RNNsearch (Bahdanau et al., 2015). The detailed architecture is shown in Figure 2.

The first-level decoder is similar to Bahdanau et al. (2015) and utilizes the attention mechanism. Given the context vector c_t from the encoder, the hidden state u_t ∈ R^m of the GRU is computed by

u_t = (1 - z_t) ∘ u_{t-1} + z_t ∘ ũ_t

z_t = σ(W_z r_{y_{t-1}} + U_z u_{t-1} + C_z c_t)

q_t = σ(W_q r_{y_{t-1}} + U_q u_{t-1} + C_q c_t)

r_{y_{t-1}} is the representation of the previous target word, which is produced by an ordinary RNN (we take the last state). The context vector c_t is computed by the attention mechanism at each step:

c_t = Σ_{j=1}^{T_x} α_{tj} h_j,   α_{tj} = exp(e_{tj}) / Σ_{k=1}^{T_x} exp(e_{tk}),   e_{tj} = E tanh(W_e u_{t-1} + H_e h_j)

where E ∈ R^{1×m} maps the vector into a scalar. The hidden state u_t is then further processed as Eqn. (8) before being fed to the second-level decoder.
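Here is a minimal NumPy sketch of the attention step just described (e_{tj}, α_{tj} and c_t); the sizes and random parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
m, Tx = 10, 7                        # decoder state size, source length

u_prev = rng.normal(size=m)          # u_{t-1}
H = rng.normal(size=(Tx, m))         # sentence-encoder states h_1 .. h_Tx
W_e = rng.normal(scale=0.3, size=(m, m))
H_e = rng.normal(scale=0.3, size=(m, m))
E = rng.normal(scale=0.3, size=m)    # the row vector E in R^{1 x m}

# e_tj = E tanh(W_e u_{t-1} + H_e h_j)
e = np.array([E @ np.tanh(W_e @ u_prev + H_e @ h) for h in H])
alpha = np.exp(e) / np.exp(e).sum()  # alpha_tj, a distribution over source positions
c_t = alpha @ H                      # c_t = sum_j alpha_tj h_j
print(alpha.sum(), c_t.shape)        # 1.0, (10,)
```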
"}, {"section_index": "9", "section_name": "3.2 HIERARCHICAL DECODER", "section_text": "To decode at the character level, we introduce a hierarchical decoder. The first-level decoder is similar to RNNsearch and contains the information of the target word. Specifically, s_t in Eqn. (1) contains the information of the target word at time t. Instead of using a multi-layer network followed by a softmax function to compute the probability of each target word from s_t, we employ a second-level decoder which generates a character sequence based on s_t.

We correspondingly devise two novel architectures: a word encoder which utilizes the morphology, and a hierarchical decoder which decodes at the character level. Accordingly, we propose a deep character-level neural machine translation model (DCNMT).

The second-level decoder is an HGRU (one can use LSTM (Hochreiter and Schmidhuber, 1997) units instead of the GRU described here). HGRU has a settable state and generates a character sequence based on the given state until it generates a delimiter. In our model, the state is initialized by the output of the first-level decoder. Once HGRU generates a delimiter, it sets its state to the next output of the first-level decoder. Given the previous output character sequence {y_0, y_1, ..., y_{t-1}}, where y_0 is a token representing the start of sentence, and the auxiliary sequence {a_0, a_1, ..., a_{t-1}} which contains only 0s and 1s to indicate whether y_j is a delimiter (a_0 is set to 1), HGRU updates the state as follows:

g̃_{t-1} = (1 - a_{t-1}) g_{t-1} + a_{t-1} s_{i_t}
q_t = σ(W_q e(y_{t-1}) + U_q g̃_{t-1})
z_t = σ(W_z e(y_{t-1}) + U_z g̃_{t-1})
ĝ_t = φ(W e(y_{t-1}) + U(q_t ∘ g̃_{t-1}))
g_t = z_t g̃_{t-1} + (1 - z_t) ĝ_t

where s_{i_t} is the output of the first-level decoder, calculated as in Eqn. (8). We can compute the probability of each target character y_t based on g_t with a softmax function:

p(y_t | {y_1, ..., y_{t-1}}, x) = softmax(g_t)

Once a delimiter is generated, the next state is computed as

s_{t+1} = W_1 c_{t+1} + W_2 r_{y_t} + W_3 u_t + b

The current problem is that the number of outputs of the first-level decoder is much smaller than the length of the target character sequence. It would be intractable to conditionally pick outputs from the first-level decoder when training in batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks to build symbolic expressions). Luong and Manning (2016) use two forward passes (one for the word level and another for the character level) in batch training, which is less efficient. In our model, we instead use a matrix to unfold the outputs of the first-level decoder, which makes the batch training process more efficient. It is a T_y × T matrix R, where T_y is the number of delimiters (number of words) in the target character sequence and T is the length of the target character sequence. R[i, j_1 + 1] to R[i, j_2] are set to 1 if j_1 is the index of the (i-1)-th delimiter and j_2 is the index of the i-th delimiter in the target character sequence; the index of the 0-th delimiter is set to 0. For example, when the target output is "g o _ ! _" and the output of the first-level decoder is s_1, s_2, the unfolding step will be:

R = [[1, 1, 1, 0, 0], [0, 0, 0, 1, 1]]

therefore {s_{i_1}, s_{i_2}, s_{i_3}, s_{i_4}, s_{i_5}} is correspondingly set to {s_1, s_1, s_1, s_2, s_2} in the HGRU iterations. After this procedure, we can compute the probability of each target character by the second-level decoder according to Eqns. (2) to (7).

As described in Section 3.2, the number of outputs of the first-level decoder is much smaller than the length of the target character sequence. We use a matrix R ∈ R^{T_y × T} to unfold the outputs [s_1, ..., s_{T_y}] of the first-level decoder (T_y is the number of words in the target sentence and T is the number of characters). R is a symbolic matrix in the final loss; it is constructed according to the delimiters in the target sentences during training (see Section 3.2 for the detailed construction; note that R is a tensor in batch training). After unfolding, the input of HGRU becomes [s_{i_1}, ..., s_{i_T}], that is

[s_{i_1}, ..., s_{i_T}] = [s_1, ..., s_{T_y}] R

Finally, we can compute the cross-entropy loss and train with the SGD algorithm."}, {"section_index": "10", "section_name": "3.3 MODEL ARCHITECTURES", "section_text": "There are in total six recurrent neural networks in our model, which can be divided into four layers, as shown in Figure 2. Figure 2 illustrates the training procedure of a basic deep character-level neural machine translation. It is possible to use multi-layer recurrent neural networks to make the model deeper. The first layer is a source word encoder which contains two RNNs, as shown in Figure 1. The second layer is a bidirectional RNN sentence encoder which is identical to that of Bahdanau et al. (2015). The third layer is the first-level decoder. It takes the representation of the previous target word as a feedback, which is produced by the target word encoder in our model. As the feedback is less important, we use an ordinary RNN to encode the target word. The feedback r_{y_{t-1}} then combines the previous hidden state u_{t-1} and the context c_t from the sentence encoder to generate the vector s_t:

s_t = W_1 c_t + W_2 r_{y_{t-1}} + W_3 u_{t-1} + b     (8)

With the state of HGRU in the second-level decoder set to s_t and the information of the previously generated character, the second-level decoder generates the next character until generating an end of sentence token (denoted as </s> in Figure 2). With such a hierarchical architecture, we can train our character-level neural translation model perfectly well in an end-to-end fashion.
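As a concrete illustration of the unfolding matrix R from Section 3.2, the following NumPy sketch builds R from the 0/1 delimiter indicators and applies it to the first-level outputs; the helper name unfold_matrix and the toy values are assumptions for illustration.

```python
import numpy as np

def unfold_matrix(delimiters):
    """Build R in R^{Ty x T} from the delimiter indicators of the target
    character sequence: R[i, j1+1 : j2+1] = 1, where j1 and j2 are the
    indices of the (i-1)-th and i-th delimiters (the 0-th is before 0)."""
    T = len(delimiters)
    ends = [j for j, d in enumerate(delimiters) if d == 1]
    R = np.zeros((len(ends), T))
    start = 0
    for i, end in enumerate(ends):
        R[i, start:end + 1] = 1.0
        start = end + 1
    return R

# Target "g o _ ! _": delimiters after "go" and after "!"
delims = [0, 0, 1, 0, 1]
R = unfold_matrix(delims)
print(R)                    # [[1 1 1 0 0], [0 0 0 1 1]]

# [s_{i_1}, ..., s_{i_T}] = [s_1, ..., s_{Ty}] R, written here as R^T @ s
s = np.stack([np.full(4, 1.0), np.full(4, 2.0)])   # toy s_1, s_2
unfolded = R.T @ s
print(unfolded[:, 0])       # [1. 1. 1. 2. 2.] -> s1, s1, s1, s2, s2
```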
We show additional sample translations in the following tables.

Table 3: Sample translations of En-Fr

Source: This " disturbance " produces an electromagnetic wave ( of light , infrared , ultraviolet etc . ) , and this wave is nothing other than a photon - and thus one of the " force carrier " bosons .
Reference: Quand , en effet , une particule ayant une charge electrique accelere ou change de direction , cela " derange " le champ electromagnetique en cet endroit precis , un peu comme un caillou lance dans un etang .
DCNMT: Lorsque , en fait , une particule ayant une charge electrique accelere ou change de direction , cela " perturbe " le champ electromagnetique dans cet endroit specifique , plutot comme un galet jete dans un etang .
Source: Since October , a manifesto , signed by palliative care luminaries including Dr Balfour Mount and Dr Bernard Lapointe , has been circulating to demonstrate their opposition to such an initiative .
Reference: Depuis le mois d' octobre , un manifeste , signe de sommites des soins palliatifs dont le Dr Balfour Mount et le Dr Bernard Lapointe , circule pour temoigner de leur opposition a une telle initiative .
DCNMT: Depuis octobre , un manifeste , signe par des liminaires de soins palliatifs , dont le Dr Balfour Mount et le Dr Bernard Lapointe , a circule pour demontrer leur opposition a une telle initiative .
Figure 2: Deep character-level neural machine translation. The HGRUs with red border indicate that the state should be set to the output of the first-level decoder."}, {"section_index": "11", "section_name": "3.4 GENERATION PROCEDURE", "section_text": "We first encode the source sequence as in the training procedure, then we generate the target sequence character by character based on the output s_t of the first-level decoder. Once we generate a delimiter, we compute the next vector s_{t+1} according to Eqn. (8) by combining the feedback r_{y_t} from the target word encoder, the context c_{t+1} from the sentence encoder and the hidden state u_t. The generation procedure terminates once an end of sentence (EOS) token is produced.

Table 5: Sample translations of Cs-En

We implement the model using Theano (Bergstra et al., 2010; Bastien et al., 2012) and Blocks (van Merrienboer et al., 2015); the source code and the trained models are available at github. We train our model on a single GTX Titan X with 12GB RAM. First we evaluate our model on the English-to-French translation task, where the languages are morphologically poor. For fair comparison, we use the same dataset as RNNsearch, which is the bilingual, parallel corpora provided by ACL WMT'14. In order to show the strengths of our model, we conduct experiments on the English-to-Czech and Czech-to-English translation tasks, where Czech is a morphologically rich language. We use the same dataset as (Chung et al., 2016a; Lee et al., 2016), which is provided by ACL WMT'15."}, {"section_index": "12", "section_name": "4.1 DATASET", "section_text": "We use the parallel corpora for two language pairs from WMT: En-Cs and En-Fr. They consist of 15.8M and 12.1M sentence pairs, respectively. In terms of preprocessing, we only apply the usual tokenization. We choose a list of the 120 most frequent characters for each language, which covers nearly 100% of the training data. Those characters not included in the list are mapped to a special token (<unk>).

[Figure 2 (diagram): the full architecture unrolled on the source "H e l l o </d> w o r l d </d> </s>" and target "B o n j o u r </d> m o n d e </d> </s>": source word encoder, bidirectional RNN sentence encoder, first-level decoder with attention, unfolding matrix R, target word encoder, and the second-level HGRU decoder.]

Table 4: Sample translations of En-Cs
Source \" There are so many private weapons factories now , which do not endure. competition on the international market and throw weapons from under the counter to the black market , including in Moscow , \" says the expert. Reference \" V soucasnosti vznikaji soukrome zbrojarske podniky , ktere nejsou. konkurenceschopne na mezinarodnim trhu , a vyrazuji zbrane , ktere. dodavaji na cerny trh vcetne Moskvy , \" rika tento odbornik .. DCNMT \" V soucasnosti existuje tolik soukromych zbrani , ktere nevydrzi. hospodarskou soutez na mezinarodnim trhu a hodi zbrane pod pultem k cernemu trhu , vcetne Moskvy , \" rika odbornik .\nSource French troops have left their area of responsibility in Afghanistan (. Kapisa and Surobi ) .. Reference Francouzske jednotky opustily svou oblast odpovednosti v Afghanistanu ( Kapisa a Surobi) . DCNMT Francouzske jednotky opustily svou oblast odpovednosti v Afghanistanu. ( Kapisa a Surois ) . Source \" All the guests were made to feel important and loved \" recalls the top. model , who started working with him during Haute Couture Week Paris , in 1995 . Reference Vsichni pozvani se diky nemu mohli citit duleziti a milovani, \" vzpomina top modelka , ktera s nim zacala pracovat v prubehu Parizskeho tydne vrcholne mody v roce 1995 . DCNMT \" Vsichni hoste byli provedeni , aby se citili duleziti a milovani \". pripomina nejvyssi model, ktery s nim zacal pracovat v pribehu ty- deniku Haute Coutupe v Parizi v roce 1995 .. Source \" There are so many private weapons factories now , which do not endure. competition on the international market and throw weapons from under the counter to the black market , including in Moscow , \" says the expert. Reference \" V soucasnosti vznikaji soukrome zbrojarske podniky , ktere nejsou. konkurenceschopne na mezinarodnim trhu , a vyrazuji zbrane , ktere. dodavaji na cerny trh vcetne Moskvy , \" rika tento odbornik .. DCNMT \" V soucasnosti existuje tolik soukromych zbrani , ktere nevydrzi. hospodarskou soutez na mezinarodnim trhu a hodi zbrane pod pultem k cernemu trhu , vcetne Moskvy , \" rika odbornik .\nSource Prezident Karzai nechce zahranicni kontroly , zejmena ne pri prilezitosti voleb planovanych na duben 2014 . Reference President Karzai does not want any foreign controls , particularly on the occasion of the elections in April 2014 . DCNMT President Karzai does not want foreign controls , particularly in the opportunity of elections planned on April 2014 . Source Manzelsky par mel dve deti , Prestona a Heidi , a dlouhou dobu zili v kalifornskem meste Malibu , kde pobyva mnoho celebrit . Reference The couple had two sons , Preston and Heidi , and lived for a long time in the Californian city Malibu , home to many celebrities . DCNMT The married couple had two children , Preston and Heidi , and long lived in the California city of Malibu , where many celebrities resided . Source Trestny cin rouhani je zachovan a urazka je nadale zakazana , coz by mohlo mit vazne dusledky pro svobodu vyjadrovani , zejmena pak pro tisk . Reference The offence of blasphemy is maintained and insults are now prohibited , which could have serious consequences on freedom of expression , particularly for the press . DCNMT The criminal action of blasphemy is maintained and insult is still prohib- ited , which could have serious consequences for freedom of expression , especially for the press .\n(<unk>). We use newstest2013(Dev) as the development set and evaluate the models on newstest201 (Test). 
We do not use any monolingual corpus."}, {"section_index": "13", "section_name": "4.2 TRAINING DETAILS", "section_text": "We follow (Bahdanau et al., 2015) and use similar hyperparameters. The bidirectional RNN sentence encoder and the hierarchical decoder both consist of two-layer RNNs, each with 1024 hidden units. We choose the 120 most frequent characters for DCNMT, and the character embedding dimensionality is 64. The source word is encoded into a 600-dimensional vector. The other GRUs in our model have 512 hidden units.

We use the ADAM optimizer (Kingma and Ba, 2015) with a minibatch of 56 sentences to train each model (for En-Fr we use a minibatch of 72 examples). The learning rate is first set to 10^-3 and then annealed to 10^-4.

We use a beam search to find a translation that approximately maximizes the conditional log-probability, which is a commonly used approach in neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2015). In our DCNMT model, it is reasonable to search directly at the character level to generate a translation.

We conduct a comparison of quantitative results on the En-Fr, En-Cs and Cs-En translation tasks in Section 5.1. Apart from measuring translation quality, we analyze the efficiency of our model and the effects of character-level modeling in more detail.

We illustrate the efficiency of the deep character-level neural machine translation by comparing with the bpe-based subword model (Sennrich et al., 2016) and other character-level models. We measure the performance by BLEU score (Papineni et al., 2002).

Table 1: BLEU scores of different models on three language pairs

En-Fr:
Model | Size | Src | Trgt | Length | Epochs | Days | Dev | Test
bpe2bpe(1) | - | bpe | bpe | 50 50 | - | - | 26.91 | 29.70
C2W(2) | ~54 M | char | char | 300 300 | ~2.8 | ~27 | 25.89 | 27.04
CNMT | ~52 M | char | char | 300 300 | ~3.8 | ~21 | 28.19 | 29.38
DCNMT | ~54 M | char | char | 300 300 | 1 | ~7 | 27.02 | 28.13
DCNMT | ~54 M | char | char | 300 300 | ~2.8 | ~19 | 29.31 | 30.56

En-Cs:
bpe2bpe(1) | - | bpe | bpe | 50 50 | - | - | 15.90 | 13.84
bpe2char(3) | - | bpe | char | 50 500 | - | - | - | 16.86
char(5) | - | char | char | 600 600 | >4 | ~90 | 17.5 | -
hybrid(5) | ~250 M | hybrid | hybrid | 50 50 | >4 | ~21 | 19.6 | -
DCNMT | ~54 M | char | char | 450 450 | 1 | ~5 | 15.50 | 14.87
DCNMT | ~54 M | char | char | 450 450 | ~2.9 | ~15 | 17.89 | 16.96

Cs-En:
bpe2bpe(1) | - | bpe | bpe | 50 50 | - | - | 21.24 | 20.32
bpe2char(3) | ~76 M | bpe | char | 50 500 | ~6.1 | ~14 | 23.27 | 22.42
char2char(4) | ~69 M | char | char | 450 450 | ~7.9 | ~30 | 23.38 | 22.46
DCNMT | ~54 M | char | char | 450 450 | 1 | ~5 | 20.50 | 19.75
DCNMT | ~54 M | char | char | 450 450 | ~4.6 | ~22 | 23.24 | 22.48

In Table 1, "Length" indicates the maximum sentence length in training (based on the number of words or characters) and "Size" is the total number of parameters in the models. We report the BLEU scores of DCNMT when trained for one epoch in the upper line and the final scores in the following line. The results of other models are taken from (1) Firat et al. (2016), (3) Chung et al. (2016a), (4) Lee et al. (2016) and (5) Luong and Manning (2016) respectively, except that (2) is trained according to Ling et al. (2015b). The only difference between CNMT and DCNMT is that CNMT uses an ordinary RNN to encode source words (taking the last hidden state). The training time for (3) and (4) is calculated based on the training speed in (Lee et al., 2016). For each test set, the best scores among the models per language pair are bold-faced. Clearly, character-level models are better than the subword-level models, and our model is comparable to the state-of-the-art character-level models. Note that the purely character model of (5) (Luong and Manning, 2016) took 3 months to train and yielded +0.5 BLEU points compared to our result.
We have analyzed the efficiency of our decoder in Section 3.2. Besides, our model is the simplest and the smallest one in terms of model size.

Figure 3: Two-dimensional PCA projection of the 600-dimensional representations of the words.

In this section, we investigate whether our model can learn morphology. First we want to figure out the difference between an ordinary RNN word encoder and our word encoder. We choose some words with similar meanings but different morphology, as shown in Figure 3. We find in Figure 3(a) that the words ending with "ability", which are encoded by the ordinary RNN word encoder, are jammed together. In contrast, the representations produced by our encoder are more reasonable and the words with similar meaning are closer.

[Figure 3 (scatter plots): (a) ordinary RNN word encoder; (b) our word encoder. Word pairs such as reliable/reliability, notable/notability, solvable/solvability, flexible/flexibility, capable/capability and possible/possibility projected in two dimensions.]

Then we analyze how our word encoder learns morphemes and the rules of how they are combined. We demonstrate the encoding details on "any*" and "every*". Figure 4(a) shows the energy of each character, more precisely, the energy of the preceding characters. We can see that the last character of a morpheme results in a relatively large energy (weight), like "any" and "every" in these words. Moreover, even when the preceding characters differ, the encoder produces a similar weight for the same morpheme, like "way" in "anyway" and "everyway". The two-dimensional PCA projection in Figure

[Figure 4: (a) energy of each character; (b) two-dimensional PCA projection, for the words anyone, anybody, anything, anyway, anywhere, everyone, everybody, everything, everyway, everywhere.]

4(b) further validates our idea. The word encoder may be able to guess the meaning of "everything" even if it had never seen "everything" before, thus speeding up learning. More interestingly, we find that not only the ending letter has high energy, but the beginning letter is also important. This matches the behavior of human perception (White et al., 2008).

Figure 5: Subword-level boundaries detected by our word encoder.

As analyzed in Section 5.2, learning morphology can speed up learning. This is also shown in Table 1 (En-Fr and En-Cs tasks), from which we see that when we train our model for just one epoch, the obtained result already outperforms the final result of the bpe baseline.

Another advantage of our model is the ability to translate misspelled words or nonce words. A character-level model has a much better chance of recovering the original word or sentence. In Table 2, we list some examples where the source sentences are taken from newstest2013 but we change some words to misspelled words or nonce words. We also list the translations from Google translate and the online neural machine translation demo by LISA.

Table 2: Sample translations

(a) Misspelled words

(b) Nonce words (morphological change)

As listed in Table 2(a), DCNMT is able to translate the misspelled words correctly. For a
word-based translator, it is never possible because the misspelled words are mapped into <unk>\n3The translations by Google translate were made on Nov 4, 2016\nMoreover, we apply our trained word encoder to Penn Treebank Line 1. Unlike Chung et al. (2016b) we are able to detect the boundary of the subword units. As shown in Figure 5l \"consumers\"' \"monday\", \"football'' and \"greatest' are segmented into \"consum-er-s\",\"mon-day\", \"foot-ball' and 'great-est' respectively. Since there are no explicit delimiters, it may be more difficult to detect the subword units.\nSource For the time being howeve their research is unconclusive. Reference Leurs recherches ne sont toutefois pas concluantes pour 1'instant. Google translate Pour le moment, leurs recherches ne sont pas concluantes. LISA Pour le moment UNK leur recherche est UNK. DCNMT Pour le moment, cependant, leur recherche n'est pas concluante.\nSource Then we will be able to supplement the real world with virtual objects in. a much convenienter form . Reference Ainsi , nous pourrons completer le monde reel par des objets virtuels. dans une forme plus pratique . Google translate Ensuite, nous serons en mesure de completer le monde reel avec des objets virtuels dans une forme beaucoup plus pratique.. LISA Ensuite, nous serons en mesure de completer le vrai monde avec des objets virtuels sous une forme bien UNK.. DCNMT Ensuite, nous serons en mesure de completer le monde reel avec des objets virtuels dans une forme beaucoup plus pratique.."}]
BybtVK9lg
[{"section_index": "0", "section_name": "AUTOENCODING VARIATIONAL INFERENCE FOR TOPIC MODELS", "section_text": "Akash Srivastava\nISrlVaslaVa Informatics Forum, University of Edinburg 10, Crichton St Edinburgh, EH89AB, UK\nTopic models are one of the most popular methods for learning representations of. text, but a major challenge is that any change to the topic model requires mathe- matically deriving a new inference algorithm. A promising approach to address. this problem is autoencoding variational Bayes (AEVB), but it has proven diffi-. cult to apply to topic models in practice. We present what is to our knowledge the. first effective AEVB based inference method for latent Dirichlet allocation (LDA) which we call Autoencoded Variational Inference For Topic Model (AVITM). This. model tackles the problems caused for AEVB by the Dirichlet prior and by com-. ponent collapsing. We find that AVITM matches traditional methods in accuracy. with much better inference time. Indeed, because of the inference network, we find that it is unnecessary to pay the computational cost of running variational. optimization on test data. Because AVITM is black box, it is readily applied. to new topic models. As a dramatic illustration of this, we present a new topic. model called ProdLDA, that replaces the mixture model in LDA with a product. of experts. By changing only one line of code from LDA, we find that ProdLDA. yields much more interpretable topics, even if LDA is trained via collapsed Gibbs. sampling.\n0 10 20 30 40 Standard Gaussian+softmax 50 Dirichlet with alpha=1/10 Dirichlet with alpha=1/50 - Dirichlet with alpha=1/200 60 0 50 100 150 20 Topic Index"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Figure 1: Effect of prior assumptions on 0 on. oarsity of 0 in neural topic models\nTable 5: Average topic coherence for different choices of prior and optimization strategies o PRODLDA on 20 Newsgroup for k = 50.\nBoth mean-field and collapsed Gibbs have the drawback that applying them to new topic models. even if there is only a small change to the modeling assumptions, requires re-deriving the infer. ence methods, which can be mathematically arduous and time consuming, and limits the ability of practitioners to freely explore the space of different modeling assumptions. This has motivated the development of black-box inference methods (Ranganath et al.]2014) Mnih & Gregor2014] Ku- cukelbir et al.[[2016;Kingma & Welling[2014) which require only very limited and easy to compute information from the model, and hence can be applied automatically to new models given a simple. declarative specification of the generative process.\nThe inference network architecture can be found in figure2lin the appendix\nWe present what is to our knowledge the first effective AEVB inference algorithm for latent Dirich- let allocation. Although this combination may seem simple in principle, in practice this method is difficult to train because of the Dirichlet prior and because of the component collapsing problem By addressing both of these problems, we presented a black-box inference method for topic models with the notable advantage that the neural network allows computing topic proportions for new doc- uments without the need to run any variational optimization. 
As an illustration of the advantages of\nAutoencoding variational Bayes (AEVB) (Kingma & Welling 2014}Rezende et al.2014) is particularly natural choice for topic models, because it trains an inference network (Dayan et al 1995), a neural network that directly maps a document to an approximate posterior distribution\nAdditional affiliation: Alan Turing Institute, British Library, 96 Euston Road, London NW1 2DI\nTable 4: Evaluation of inference network of VAE-LDA on 20 Newsgroups test set. \"Inference network only\"' shows the test perplexity when the inference network is trained on the training set, but no variational optimization is performed on the test set. \"Inference Network + Optimization' shows the standard approach of optimizing the ELBO on the test set. The neural network effectively learns to approximate probabilistic inference effectively.\nCharles Sutton\ncsutton@inf.ed.ac.uk\nsparsity for the standard Gaussian prior used by NVDM to the Laplace approximation of Dirichlet. priors with different hyperparameters. Clearly the Laplace approximation to the Dirichlet prior sig. nificantly promotes sparsity, providing support for our hypothesis that preserving the Dirichlet prior explains the the increased topic coherence in our method.."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Topic models (Blei 2012) are among the most widely used models for learning unsupervised repre sentations of text, with hundreds of different model variants in the literature, and have have founc applications ranging from the exploration of the scientific literature (Blei & Lafferty2007) tc. computer vision (Fei-Fei & Perona]2005), bioinformatics (Rogers et al. 2005), and archaeology Mimno!2o09). A major challenge in applying topic models and developing new models is the. computational cost of computing the posterior distribution. Therefore a large body of work has. considered approximate inference methods, the most popular methods being variational methods specially mean field methods, and Markov chain Monte Carlo, particularly methods based on col. apsed Gibbs sampling\nwithout the need to run further variational updates. This is intuitively appealing because in topic models, we expect the mapping from documents to posterior distributions to be well behaved, that is, that a small change in the document will produce only a small change in topics. This is exactly the type of mapping that a universal function approximator like a neural network should be good at. representing. Essentially, the inference network learns to mimic the effect of probabilistic inference.. so that on test data, we can enjoy the benefits of probabilistic modeling without paying a further cost. for inference.\nProdLDA\nHowever, despite some notable successes for latent Gaussian models, black box inference methods are significantly more challenging to apply to topic models. For example, in initial experiments we tried to apply ADVI (Kucukelbir et al.]2016), a recent black-box variational method, but it was difficult to obtain any meaningful topics. Two main challenges are: first, the Dirichlet prior is not a location scale family, which hinders reparameterisation, and second, the well known problem of component collapsing (Dinh & Dumoulin]2016), in which the inference network becomes stuck in a bad local optimum in which all topics are identical.\nLDA Collapsed Gibbs\nIn this paper, we present what is, to our knowledge, the first effective AEVB inference method fo. 
topic models, which we call Autoencoded Variational Inference for Topic Models or AVITM1| On. several data sets, we find that AVITM yields topics of equivalent quality to standard mean-field inference, with a large decrease in training time. We also find that the inference network learns to mimic the process of approximate inference highly accurately, so that it is not necessary to run. variational optimization at all on test data..\nNVDM\nBut perhaps more important is that AVITM is a black-box method that is easy to apply to new models. To illustrate this, we present a new topic model, called ProdLDA, in which the distribution over individual words is a product of experts rather than the mixture model used in LDA. We find that ProdLDA consistently produces better topics than standard LDA, whether measured by auto- matically determined topic coherence or qualitative examination. Furthermore, because we perform probabilistic inference using a neural network, we can fit a topic model on roughly a one million documents in under 80 minutes on a single GPU, and because we are using a black box inference method, implementing ProdLDA requires a change of only one line of code from our implementation of standard LDA.\nTable 6: Five randomly selected topics from all the models\nTable 7: VAE-LDA fails to learn any meaningful topics when component collapsing occurs. The table shows five randomly sampled topics (, which are essentially slight variants of each other) from when the VAE-LDA model is trained without BN and high momentum training\nTo summarize. the main advantages of our methods are\nblack box inference techniques, we presented a new topic model, ProdLDA, which achieves signif. icantly better topics than LDA, while requiring a change of only one line of code from AVITM for. LDA. Our results suggest that AVITM inference is ready to take its place alongside mean field and. collapsed Gibbs as one of the workhorse inference methods for topic models. Future work could include extending our inference methods to handle dynamic and correlated topic models.\nOverall, our results suggest that AVITM is ready to take its place alongside mean field and collapse. Gibbs as one of the workhorse inference methods for topic models.."}, {"section_index": "3", "section_name": "ACKNOWLEDGMENTS", "section_text": "To fix notation, we begin by describing topic modelling and AVITM"}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "David Blei. Probabilistic topic models. Communications of the ACM, 55(4):77-84, 2012\nDavid M. Blei and John D. Lafferty. Correlated topic models. In Advances in Neural Informatio. Processing Systems, 2006.\nWe describe the most popular topic model, latent Dirichlet allocation (LDA). In LDA, each doc- ument of the collection is represented as a mixture of topics, where each topic k is a probability distribution over the vocabulary. We also use to denote the matrix = (1 ... k). The generative process is then as described in Algorithm[1] Under this generative model, the marginal likelihood of\nDavid M. Blei and John D. Lafferty. A correlated topic model of science. 
Annals of Appliec Statistics, 1(1):17-35, 2007.\nModel Topics motherboard meg printer quadra hd windows processor vga mhz connector armenian genocide turks turkish muslim massacre turkey armenians armenia greek ProdLDA voltage nec outlet circuit cable wiring wire panel motor install season nhl team hockey playoff puck league flyers defensive player israel israeli lebanese arab lebanon arabs civilian territory palestinian militia db file output program line entry write bit int return drive disk get card scsi use hard ide controller one LDA game team play win year player get think good make NVLDA use law state health file gun public issue control firearm people say one think life make know god man see. write article dod ride right go get night dealer like. gun law use drug crime government court criminal firearm control LDA lunar flyers hitter spacecraft power us existence god go mean DMFVI stephanopoulos encrypt spacecraft ripem rsa cipher saturn violate lunar crypto file program available server version include software entry ftp use get right back light side like see take time one. list mail send post anonymous internet file information user message LDA thanks please know anyone help look appreciate get need email Collapsed Gibbs jesus church god law say christian one christ day come bike dod ride dog motorcycle write article bmw helmet get light die burn body life inside mother tear kill christian insurance drug different sport friend bank owner vancouver buy prayer NVDM input package interface output tape offer component channel level model price quadra hockey slot san playoff jose deal market dealer christian church gateway catholic christianity homosexual resurrection modem mouse sunday\n1. Topic coherence: ProdLDA returns consistently better topics than LDA, even when LDA is trained using Gibbs sampling 2. Computational efficiency: Training AVITM is fast and efficient like standard mean-field. On. new data, AVITM is much faster than standard mean field, because it requires only one forward pass through a neural network. 3. Black box: AVITM does not require rigorous mathematical derivations to handle changes in. the model, and can be easily applied to a wide range of topic models..\nWe thank Andriy Mnih, Chris Dyer, Chris Russell, David Blei, Hannah Wallach, Max Welling Mirella Lapata and Yishu Miao for helpful comments, discussions and feedback\nMichael Collins, Sanjoy Dasgupta, and Robert E Schapire. A generalization of principal compo nent analysis to the exponential family. In Advances in Neural Information Processing Systems volume 13, pp. 23, 2001.\nPeter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The helmholtz machine Neural Computation, 7(5):889_904. 1995\nJames M Dickey. Multiple hypergeometric functions: Probabilistic interpretations and statistical uses. Journal of the American Statistical Association, 78(383):628-637, 1983.\nLi Fei-Fei and Pietro Perona. A Bayesian hierarchical model for learning natural scene categories In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) volume 2, pp. 524-531. IEEE, 2005.\nPosterior inference over the hidden variables 0 and z is intractable due to the coupling between the 0 and under the multinomial assumption (Dickey1983).\nThomas L Griffiths and Mark Steyvers. Finding scientific topics. Proceedings of the Nationa. 
academy of Sciences, 101(suppl 1):5228-5235, 2004."}, {"section_index": "5", "section_name": "2.2 MEAN FIELD AND AEVB", "section_text": "Philipp Hennig, David H Stern, Ralf Herbrich, and Thore Graepel. Kernel topic models. In AISTATS, pp. 511-519, 2012.

Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.

Geoffrey E Hinton and Ruslan R Salakhutdinov. Replicated softmax: an undirected topic model. In Advances in Neural Information Processing Systems, pp. 1607-1614, 2009.

L(γ, φ | α, β) = D_KL[q(θ, z | γ, φ) || p(θ, z | w, α, β)] - log p(w | α, β)

In fact the above equation is a lower bound to the marginal log likelihood, sometimes called an evidence lower bound (ELBO), a fact which can be easily verified by multiplying and dividing (1) by the variational posterior and then applying Jensen's inequality on its logarithm. Note that the mean field method optimizes over an independent set of variational parameters for each document. To emphasize this, we will refer to this standard method by the non-standard name of Decoupled Mean-Field Variational Inference (DMFVI).

Thomas Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, pp. 50-57. ACM, 1999.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. pp. 448-456, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations (ICLR), 2015.

AEVB (Kingma & Welling, 2014; Rezende et al., 2014) is one of several recent methods that aim at "black box" inference to sidestep this issue. First, rewrite the ELBO as

Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M Blei. Automatic differentiation variational inference. arXiv preprint arXiv:1603.00788, 2016.

L(γ, φ | α, β) = -D_KL[q(θ, z | γ, φ) || p(θ, z | α)] + E_{q(θ,z|γ,φ)}[log p(w | z, θ, α, β)]

This form is intuitive. The first term attempts to match the variational posterior over latent variables to the prior on the latent variables, while the second term ensures that the variational posterior favors values of the latent variables that are good at explaining the data. By analogy to autoencoders, this second term is referred to as a reconstruction term.

Jey Han Lau, David Newman, and Timothy Baldwin. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In EACL, pp. 530-539, 2014.

What makes this method "autoencoding", and in fact the main difference from DMFVI, is the parameterization of the variational distribution. In AEVB, the variational parameters are computed by using a neural network called an inference network that takes the observed data as input. For example, if the model prior p(θ) were Gaussian, we might define the inference network as a feed-forward neural network (μ(w), v(w)) = f(w, γ), where μ(w) and v(w) are both vectors of length k, and γ are the network's parameters. Then we might choose a Gaussian variational distribution q(θ) = N(θ; μ(w), diag(v(w))), where diag(...) produces a diagonal matrix from a column vector. The variational parameters γ can then be chosen by optimizing the ELBO (3). Note that we have

David JC MacKay. Choice of basis for Laplace approximation. Machine Learning, 33(1):77-86, 1998.

Yishu Miao, Lei Yu, and Phil Blunsom. Neural variational inference for text processing. pp. 1727-1736, 2016.

Laurent Dinh and Vincent Dumoulin.
Training neural Bayesian nets. http://www.iro.umontreal.ca/~bengioy/cifar/NCAP2014-summerschool/slides/Laurent_dinh_cifar_presentation.pdf, August 2016.

p(w | α, β) = ∫_θ ( ∏_{n=1}^{N} Σ_{z_n=1}^{k} p(w_n | z_n, β) p(z_n | θ) ) p(θ | α) dθ     (1)

A popular approximation for efficient inference in topic models is mean field variational inference, which breaks the coupling between θ and z by introducing free variational parameters γ over θ and φ over z and dropping the edges between them. This results in an approximate variational posterior q(θ, z | γ, φ) = q_γ(θ) ∏_n q_φ(z_n), which is optimized to best approximate the true posterior p(θ, z | w, α, β). The optimization problem is to minimize

Matthew Hoffman, Francis R Bach, and David M Blei. Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems, pp. 856-864, 2010.

For LDA, this optimization has closed form coordinate descent equations due to the conjugacy between the Dirichlet and multinomial distributions. Although this is a computationally convenient aspect of DMFVI, it also limits its flexibility. Applying DMFVI to new models relies on the practitioner's ability to derive the closed form updates, which can be impractical and sometimes impossible.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. The International Conference on Learning Representations (ICLR), Banff, 2014.

Hugo Larochelle and Stanislas Lauly. A neural autoregressive topic model. In Advances in Neural Information Processing Systems, pp. 2708-2716, 2012.

now, unlike DMFVI, coupled the variational parameters for different documents because they are all computed from the same neural network. To compute the expectations with respect to q in (3), Kingma & Welling (2014) and Rezende et al. (2014) use a Monte Carlo estimator which they call the "reparameterization trick" (RT; it appears also in Williams (1992)). In the RT, we define a variate U with a simple distribution that is independent of all variational parameters, like a uniform or standard normal, and a reparameterization function F such that F(U, γ) has distribution q_γ. This is always possible, as we could choose F to be the inverse cumulative distribution function of q_γ, although we will additionally want F to be easy to compute and differentiable. If we can determine a suitable F, then we can approximate (3) by taking Monte Carlo samples of U and optimize using stochastic gradient descent.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. pp. 1791-1799, 2014.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. pp. 1278-1286, 2014.

Simon Rogers, Mark Girolami, Colin Campbell, and Rainer Breitling. The latent process decomposition of cDNA microarray data sets. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 2(2):143-156, 2005.

Although simple conceptually, applying AEVB to topic models raises several practical challenges. The first is the need to determine a reparameterization function for q(θ) and q(z_n) to use the RT. The z_n are easily dealt with, but θ is more difficult; if we choose q(θ) to be Dirichlet, it is difficult to apply the RT, whereas if we choose q to be Gaussian or logistic normal, then the KL divergence in (3) becomes more problematic. The second issue is the well known problem of component collapsing (Dinh & Dumoulin, 2016), which is a type of bad local optimum that is particularly endemic to AEVB and similar methods. We describe our solutions to each of these problems in the next few subsections.
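A toy NumPy illustration of the reparameterization trick described above; the one-dimensional Gaussian, the integrand f and the sample size are assumptions chosen only to make the estimator checkable by hand.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo estimate of d/dmu E_q[f(theta)] for q = N(mu, sigma^2),
# via the reparameterization theta = F(eps, (mu, sigma)) = mu + sigma * eps.
mu, sigma = 0.5, 0.8
f = lambda th: th ** 2            # toy integrand
df = lambda th: 2.0 * th          # its derivative

eps = rng.standard_normal(100_000)
theta = mu + sigma * eps          # F(U, gamma), with U ~ N(0, 1)
grad_mu = df(theta).mean()        # d/dmu f(mu + sigma*eps) = f'(theta)
print(grad_mu)                    # ~ 2*mu = 1.0, the exact gradient
```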
Hanna Wallach, David Mimno, and Andrew McCallum. Rethinking LDA: Why priors matter. In NIPS, 2009.

Max Welling, Michal Rosen-Zvi, and Geoffrey E Hinton. Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems, volume 4, pp. 1481-1488, 2004.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

Dealing with discrete variables like z using reparameterization can be problematic, but fortunately in LDA the variable z can be conveniently summed out. By collapsing z we are left with having to sample from θ only, reducing (1) to

p(w | α, β) = ∫_θ ( ∏_{n=1}^{N} p(w_n | β, θ) ) p(θ | α) dθ

where the distribution of w_n | β, θ is Multinomial(1, βθ), recalling that β denotes the matrix of all topic-word probability vectors.

[Figure 2 (diagram): the inference network, from the input through a 100-unit softplus FC layer and a 100x100 softplus FC layer into separate mean and sigma outputs of size k, followed by a batch-normalization layer.]

Figure 2: Architecture of the inference network used in the experiments.

LDA gets its name from the Dirichlet prior on the topic proportions θ, and the choice of Dirichlet prior is important to obtaining interpretable topics (Wallach et al., 2009). But it is difficult to handle the Dirichlet within AEVB because it is difficult to develop an effective reparameterization function for the RT. Fortunately, an RT does exist for the Gaussian distribution and has been shown to perform quite well in the context of the variational autoencoder (VAE) (Kingma & Welling, 2014).

We resolve this issue by constructing a Laplace approximation to the Dirichlet prior. Following MacKay (1998), we do so in the softmax basis instead of the simplex. There are two benefits of this choice. First, Dirichlet distributions are unimodal in the softmax basis, with their modes coinciding with the means of the transformed densities. Second, the softmax basis also allows for carrying out unconstrained optimization of the cost function without the simplex constraints. The Dirichlet probability density function in this basis over the softmax variable h is given by

P(θ(h) | α) = ( Γ(Σ_k α_k) / ∏_k Γ(α_k) ) ∏_k θ_k^{α_k} g(1^T h)

Here θ = σ(h), where σ(·) represents the softmax function. Recall that the Jacobian of σ is proportional to ∏_k θ_k, and g(·) is an arbitrary density that ensures integrability by constraining the redundant degree of freedom. We use the Laplace approximation of Hennig et al. (2012), which has the property that the covariance matrix becomes diagonal for large k (number of topics). This approximation to the Dirichlet prior p(θ | α) results in a distribution over the softmax variables h that is multivariate normal with mean μ_1 and covariance matrix Σ_1, where

μ_{1k} = log α_k - (1/K) Σ_i log α_i

Σ_{1kk} = (1/α_k)(1 - 2/K) + (1/K^2) Σ_i (1/α_i)
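A minimal NumPy sketch of this Laplace approximation: it computes μ_1 and the diagonal of Σ_1 from α using the formulas above, and draws a prior sample θ = σ(μ_1 + Σ_1^{1/2} ε). The helper names and the choice α = 0.02, K = 50 are illustrative assumptions.

```python
import numpy as np

def dirichlet_laplace(alpha):
    """Diagonal Laplace approximation (softmax basis) to Dirichlet(alpha):
    mu_k      = log a_k - mean_i log a_i
    Sigma_kk  = (1/a_k)(1 - 2/K) + (1/K^2) sum_i 1/a_i"""
    alpha = np.asarray(alpha, dtype=float)
    K = alpha.size
    mu = np.log(alpha) - np.log(alpha).mean()
    var = (1.0 / alpha) * (1.0 - 2.0 / K) + (1.0 / alpha).sum() / K ** 2
    return mu, var

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(4)
mu1, var1 = dirichlet_laplace(np.full(50, 0.02))    # a sparse Dirichlet prior
eps = rng.standard_normal(50)
theta = softmax(mu1 + np.sqrt(var1) * eps)          # one draw from the approximation
print(theta.sum(), (theta > 0.01).sum())            # 1.0, and only a few active topics
```

Sampling repeatedly from this approximation reproduces the sparse topic proportions shown in Figure 1, in contrast to a standard Gaussian prior followed by softmax.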
To optimize (7), we use stochastic gradient descent using Monte Carlo samples from e, following the Law of the Unconscious Statistician..\n3.4 TRAINING AND PRACTICAL CONSIDERATIONS: DEALING WITH COMPONENT COLLAPSING\nAEVB is prone to component collapsing (Dinh & Dumoulin]2016), which is a particular type of local optimum very close to the prior belief, early on in the training. As the latent dimensionality of the model is increased, the KL regularization in the variational objective dominates, so that the outgoing decoder weights collapse for the components of the latent variable that reach close to the prior and do not show any posterior divergence. In our case, the collapsing specifically occurs because of the inclusion of the softmax transformation to produce 0. The result is that the k inferred topics are identical as shown in table7\nWe were able to resolve this issue by tweaking the optimization. Specifically, we train the network with the ADAM optimizer (Kingma & Ba]2015) using high moment weight (1) and learning rate (n). Through training at higher rates, early peaks in the functional space can be easily avoided. The\nFinally, we approximate p(0|) in the simplex basis with p(0|1, 1) = LV(0|1, 1) where LV is a logistic normal distribution with parameters 1, 1. Although we approximate the Dirichlet. prior in LDA with a logistic normal, this is not the same idea as a correlated topic model (Blei & Lafferty2006), because we use a diagonal covariance matrix. Rather, it is an approximation to. standard LDA.\nNow we can write the modified variational objective function. We use a logistic normal variational distribution over 0 with diagonal covariance. More precisely, we define two inference networks as feed forward neural networks f and f with parameters o; the output of each network is a vector in RK. Then for a document w, we define q(0) to be logistic normal with mean o = f(w,). and diagonal covariance o = diag(f(w, )), where diag converts a column vector to a diagonal. matrix. Note that we can generate samples from q(0) by sampling e ~ N(0, I) and computing.\nwhere O represents the set of all the model and variational parameters and w1 . .. w p are the docu ments in the corpus. The first line in this equation arises from the KL divergence between the two logistic normal distributions q and p, while the second line is the reconstruction error.\nproblem is that momentum based training coupled with higher learning rate causes the optimizer tc diverge. While explicit gradient clipping helps to a certain extent, we found that batch normalization. (Ioffe & Szegedy|2015) does even better by smoothing out the functional space and hence curbing sudden divergence.\nFinally, we also found an increase in performance with dropout units when applied to 0 to force the network to use more of its capacity.\nWhile more prominent in the AEVB framework, the collapsing can also occurs in DMFVI if the learning offset (referred to as the t parameter (Hofmann1999)) is not set properly. Interestingly, a similar learning offset or annealing based approach can also be used to down-weight the KL term in early iterations of the training to avoid local optima"}, {"section_index": "7", "section_name": "4.1 MODEL", "section_text": "The connection to a product of experts is straightforward, as for the multinomial, a mixture of natural parameters corresponds to a weighted geometric average of the mean parameters. That is, consider two N dimensional multinomials parametrized by mean vectors p and q. 
Define the corresponding. natural parameters as p = (r) and q = o(s), and let E [0, 1]. It is then easy to show that.\nN N x|8r+(1-0)s xo(8ri+(1-8)si)*i x I[r i=1 i=1\nSo the ProDLDA model can be simply described as a product of experts, that is, p(wn[0,3) o p(wn|zn = k, )0r. PRoDLDA is an instance of the exponential-family PCA (Collins et al. 2001) class, and relates to the exponential-family harmoniums (Welling et al.| 2004) but with non- Gaussian priors."}, {"section_index": "8", "section_name": "5 RELATED WORK", "section_text": "For an overview of topic modeling, seeBlei(2012). There are several examples of topic mod-. els based on neural networks and neural variational inference (Hinton & Salakhutdinov2009 Larochelle & Lauly2012] Mnih & Gregor2014] Miao et al.2016) but we are unaware of meth- ods that apply AEVB generically to a topic model specified by an analyst, or even of a successful. application of AEVB to the most widely used topic model, latent Dirichlet allocation..\nIn LDA, the distribution p(w|0, ) is a mixture of multinomials. A problem with this assumption. is that it can never make any predictions that are sharper than the components that are being mixed. (Hinton & Salakhutdinov2009). This can result in some topics appearing that are poor quality. and do not correspond well with human judgment. One way to resolve this issue is to replace this. word-level mixture with a weighted product of experts which by definition is capable of making. sharper predictions than any of the constituent experts (Hinton|2002). In this section we present a. novel topic model ProDLDA that replaces the mixture assumption at the word-level in LDA with. a weighted product of experts, resulting in a drastic improvement in topic coherence. This is a good illustration of the benefits of a black box inference method, like AVITM, to allow exploration of. new models.\nThe ProDLDA model can be simply described as latent Dirichlet allocation where the word-level mixture over topics is carried out in natural parameter space, i.e. the topic matrix is not constrained to exist in a multinomial simplex prior to mixing. In other words, the only changes from LDA are that is unnormalized, and that the conditional distribution of wn is defined as wn[,0 ~ Multinomial(1, o(0)).\nRecently, Miao et al.[(2016) introduced a closely related model called the Neural Variational Docu ment Model (NVDM). This method uses a latent Gaussian distribution over topics, like probabilistic latent semantic indexing, and averages over topic-word distributions in the logit space. However.\nthey do not use either of the two key aspects of our work: explicitly approximating the Dirichlet. prior using a Gaussian, or high-momentum training. In the experiments we show that these aspects lead to much improved training and much better topics..\nQualitative evaluation of topic models is a challenging task and consequently a large body of worl. has developed automatic evaluation metrics that attempt to match human judgment of topic quality. Traditionally, perplexity has been used to measure the goodness-of-fit of the model but it has been. repeatedly shown that perplexity is not a good metric for qualitative evaluation of topics (Newman. et al.2010). Several new metrics of topic coherence evaluation have thus been proposed; seeLau. et al. 72014) for a comparative review.Lau et al.(2014) showed that among all the competing. metrics, normalized pointwise mutual information (NPMI) between all the pairs of words in a set of. 
topics matches human judgment most closely, so we adopt it in this work. We also report perplexity primarily as a way of evaluating the capability of different optimizers. Following standard practice. (Blei et al.]2003), for variational methods we use the ELBO to calculate perplexity. For AEVB. methods, we calculate the ELBO using the same Monte Carlo approximation as for training..\nWe run experiments on both the 20 Newsgroups (11,000 training instances with 2000 word vocab. ulary) and RCV1 Volume 2 ( 800K training instances with 10000 word vocabulary) datasets. Ou preprocessing involves tokenization, removal of some non UTF-8 characters for 20 Newsgroups. and English stop word removal. We first compare our AVITM inference method with the stan. dard online mean-field variational inference (Hoffman et al.2010) and collapsed Gibbs sampling. (Griffiths & Steyvers]2004) on the LDA model. We use standard implementations of both meth. ods, scikit-1earn for DMFVI and ma11et (McCallum2002) for collapsed Gibbs. Ther. we compare two autoencoding inference methods on three different topic models: standard LDA ProDLDA using our inference method and the Neural Variational Document Model (NVDM (Miao et al.2016), using the inference described in the paper2\nTable 1: Average topic coherence on the 20 News, roups dataset. Higher is better.\nTables[1and[2|show the average topic coherence values for all the models for two different settings of k, the number of topics. Comparing the different inference methods for LDA, we find that, consistent with previous work, collapsed Gibbs sampling yields better topics than mean-field methods. Among. the variational methods, we find that VAE-LDA model (AVITM)|yields similar topic coherence. and perplexity to the standard DMFVI (although in some cases, VAE-LDA yields significantly bette: topics). However, AVITM is significantly faster to train than DMFVI. It takes 46 seconds on 20 Newsgroup compared to 18 minutes for DMFVI. Whereas for a million document corpus of RCV1. it only under 1.5 hours while scikit-learn's implementation of DMFVI failed to return any results even after running for 24 hours4\nComparing the new topic models than LDA, it is clear that ProDLDA finds significantly better topics than LDA, even when trained by collapsed Gibbs sampling. To verify this qualitatively, we display examples of topics from all the models in Table|6 The topics from ProdLDA appear visually more coherent than NVDM or LDA. Unfortunately, NVDM does not perform comparatively to LDA\nTable 2: Average topic coherence on the RCV1 dataset. Higher is better. Results not reported for LDA DMFVI, as inference failed to converge in 24 hours..\nTable 3: Perplexity scores for 20 Newsgroups. Lower is better.\nfor any value of k. To avoid any training dissimilarities we train all the competing models until we reach the perplexities that were reported in previous work. These are reported in Table|3f.\nA major benefit of AVITM inference is that it does not require running variational optimization. which can be costly, for new data. Rather, the inference network can be used to obtain topic pro. portions for new documents for new data points without running any optimization. We evaluate. whether this approximation is accurate, i.e. whether the neural network effectively learns to mimic. probabilistic inference. We verify this by training the model on the training set, then on the test set. holding the topics ( matrix) fixed, and comparing the test perplexity if we obtain topic proportions. 
As shown in Table 4, the perplexity remains practically unchanged. The computational benefits of this are remarkable: on both datasets, computing perplexity using the neural network takes well under a minute, while running the standard variational approximation takes ~3 minutes even on the smaller 20 Newsgroups data. Finally, we investigate the reasons behind the improved topic coherence in ProdLDA. First, Table 5 explores the effects of each of our two main ideas separately. In this table, "Dirichlet" means that the prior is the Laplace approximation to Dirichlet(α = 0.02), while "Gaussian" indicates that we use a standard Gaussian as the prior. "High Learning Rate" training means we use β1 > 0.8 and 0.1 > η > 0.001 with batch normalization, whereas "Low Learning Rate" means β1 > 0.8 and 0.0009 > η > 0.00009 without batch normalization. (For both parameters, the precise value was chosen by Bayesian optimization. We found that the values in the "with BN" cases were close to the default settings of the Adam optimizer.) We find that the high topic coherence that we achieve in this work is only possible if we use both tricks together. In fact, high learning rates with momentum are required to avoid local minima at the beginning of training, and batch normalization is required to be able to train the network at these values without diverging. If trained with a lower momentum value or at a lower learning rate, ProdLDA shows component collapsing. Interestingly, if we choose a Gaussian prior rather than the logistic-normal approximation used in ProdLDA or NVLDA, the model is easier to train even with a low learning rate, without any momentum or batch normalization.

The main advantage of AVITM topic models over NVDM is that the Laplace approximation allows us to match a specific Dirichlet prior of interest. As pointed out by Wallach et al. (2009), the choice of Dirichlet hyperparameter is important to the topic quality of LDA. Following this reasoning, we hypothesize that AVITM topics are of higher quality than those of NVDM because they are much more focused, i.e., apply to a more specific subset of documents of interest. We provide support for this hypothesis in Figure 1, by evaluating the sparsity of the posterior proportions over topics, that is, how many of the model's topics are typically used to explain each document. In order to estimate the sparsity in topic proportions, we project samples from the Gaussian latent spaces of ProdLDA and NVDM onto the simplex and average them across documents. We compare the topi

5 We note that much recent work follows Hinton & Salakhutdinov (2009) in reporting perplexity for the LDA Gibbs sampler on only a small subset of the test data. Our results are different because we use the entire test dataset.

6 β1 is the weight on the average of the gradients from the previous time step and η refers to the learning rate."}]
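As a sketch of the sparsity estimate described above: samples from the model's Gaussian latent space (a hypothetical array here) are projected onto the simplex with a softmax and averaged across documents; a peaked average indicates sparser, more focused topic proportions.

```python
import numpy as np

def mean_topic_proportions(z_samples):
    """z_samples: (num_docs, num_topics) draws from the model's Gaussian latent space.
    Project each draw onto the simplex with a softmax, then average over documents."""
    z = np.asarray(z_samples, dtype=float)
    z = z - z.max(axis=1, keepdims=True)  # for numerical stability
    theta = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return theta.mean(axis=0)
```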
HJKkY35le
[{"section_index": "0", "section_name": "5 CONCLUSIONS", "section_text": "Although GANs achieve state-of-the-art results on a large variety of unsupervised learning tasks training them is considered highly unstable, very difficult and sensitive to hyper-parameters, all the while, missing modes from the data distribution or even collapsing large amounts of probability mass on some modes. Successful GAN training usually requires large amounts of human and com puting efforts to fine tune the hyper-parameters, in order to stabilize training and avoid collapsing Researchers usually rely on their own experience and published tricks and hyper-parameters instead of systematic methods for training GANs.\nWe introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data gener- ating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem\nWe provide systematic ways to measure and avoid the missing modes problem and stabilize training. with the proposed autoencoder-based regularizers. The key idea is that some geometric metrics car provide more stable gradients than trained discriminators, and when combined with the encoder they can be used as regularizers for training. These regularizers can also penalize missing mode. and encourage a fair distribution of probability mass on the generation manifold.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Generative adversarial networks (GAN) (Goodfellow et al.|2014) have demonstrated their potentia on various tasks, such as image generation, image super-resolution, 3D object generation, and vide. prediction (Radford et al.2015f Ledig et al.]2016] Sonderby et al.2016f Nguyen et al.2016] W et al.[2016f Mathieu et al.2015). The objective is to train a parametrized function (the generator which maps noise samples (e.g., uniform or Gaussian) to samples whose distribution is close to tha. of the data generating distribution. The basic scheme of the GAN training procedure is to trai. a discriminator which assigns higher probabilities to real data samples and lower probabilities t generated data samples, while simultaneously trying to move the generated samples towards the rea. data manifold using the gradient information provided by the discriminator. In a typical setting, th. generator and the discriminator are represented by deep neural networks.."}, {"section_index": "2", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Naiyan Wang, Jianbo Ye, Yuchen Ding, Saboya Yang for their GPU support. We also want to thank Huiling Zhen for helpful discussions, Junbo Zhao for providing the details of grid search experiments on the EBGAN model, as well as Anders Boesen Lindbo Larsen for kindly helping us on running VAEGAN experiments. We appreciate for the valuable suggestions and comments from the anonymous reviewers. The work described in this paper was partially supported by NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs, CIFAR, National Natural Science Foundation of China (61672445 and 61272291), Research Grants Council of Hong Kong (PolyU 152094/14E), and The Hong Kong Polytechnic University (G-YBP6).\nDespite their success, GANs are generally considered as very hard to train due to training instabilit and sensitivity to hyper-parameters. 
On the other hand, a common failure pattern observed while training GANs is the collapsing of large volumes of probability mass onto a few modes. Namely, although the generators produce meaningful samples, these samples are often from just a few modes (small regions of high probability under the data distribution). Behind this phenomenon is the missing modes problem, which is widely conceived as a major problem for training GANs: many modes of the data generating distribution are not at all represented in the generated samples, yielding a much lower entropy distribution, with less variety than the data generating distribution.

This issue has been the subject of several recent papers proposing several tricks and new architectures to stabilize GAN training and encourage sample diversity. However, we argue that a general cause behind these problems is the lack of control over the discriminator during GAN training. We would like to encourage the manifold of the samples produced by the generator to move towards that of the real data, using the discriminator as a metric. However, even if we train the discriminator to distinguish between these two manifolds, we have no control over the shape of the discriminator function in between these manifolds. In fact, the shape of the discriminator function in the data space can be very non-linear, with bad plateaus and wrong maxima, and this can therefore hurt the training of GANs (Figure 1).

*Authors contributed equally"}, {"section_index": "3", "section_name": "MODE REGULARIZED GENERATIVE ADVERSARIAL NETWORKS", "section_text": "Tong Che*†, Yanran Li*‡, Athul Paul Jacob†§, Yoshua Bengio†, Wenjie Li‡
†Montreal Institute for Learning Algorithms, Universite de Montreal, Montreal, QC H3T 1J4, Canada
‡Department of Computing, The Hong Kong Polytechnic University, Hong Kong
§David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
{tong.che, ap.jacob, yoshua.bengio}@umontreal.ca, {csyli, cswjli}@comp.polyu.edu.hk

Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training get stuck or push probability mass in the wrong direction, towards regions of higher concentration than that of the data generating distribution.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv, 2016.

Figure 1: Samples with very high discrimination values (D=1.0) in a DCGAN model trained on the CelebA dataset.

Regularizers usually bring a trade-off between model variance and bias. Our results have shown that, when correctly applied, our regularizers can dramatically reduce model variance, stabilize the training, and fix the missing mode problem all at once, with positive or at least no negative effects on the generated samples.
We also discuss a variant of the regularized GAN algorithm, which can even improve sample quality as compared to the DCGAN baseline."}, {"section_index": "4", "section_name": "2 RELATED WORK", "section_text": "Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

In Goodfellow et al. (2014), the GAN is able to generate interesting local structure but globally incoherent images on various datasets. Mirza & Osindero (2014) enlarge GAN's representation capacity by introducing an extra vector to allow the generator to produce samples conditioned on other beneficial information. Motivated by this, several conditional variants of GAN have been applied to a wide range of tasks, including image prediction from a normal map (Wang & Gupta, 2016), image synthesis from text (Reed et al., 2016) and from edge maps (Isola et al., 2016), real-time image manipulation (Zhu et al., 2016), temporal image generation (Zhou & Berg, 2016; Saito & Matsumoto, 2016; Vondrick et al., 2016), texture synthesis, style transfer, and video stylization (Li & Wand, 2016).

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.

Masaki Saito and Eiichi Matsumoto. Temporal generative adversarial nets. arXiv preprint arXiv:1611.06624, 2016.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.

Casper Kaae Sonderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszar. Amortised map inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.

Researchers also aim at stretching GAN's limits to generate higher-resolution, photo-realistic images. Denton et al. (2015) initially apply a Laplacian pyramid framework to GANs to generate images of high resolution. At each level of their LAPGAN, both the generator and the discriminator are convolutional networks. As an alternative to LAPGAN, Radford et al. (2015) successfully design a class of deep convolutional generative adversarial networks which has led to significant improvements on unsupervised image representation learning. Another line of work aimed at improving GANs is feature learning, including features from the latent space and image space. The motivation is that features from different spaces are complementary for generating perceptual and natural-looking images. With this perspective, some researchers use distances between learned features as losses in training objectives for generative models. Larsen et al. (2015) combine a variational autoencoder objective with a GAN and utilize the learned features from the discriminator in the GANs for better image similarity metrics. It is shown that the learned distance from the discriminator is of great help for sample visual fidelity. Recent literature has also shown impressive results on image super-resolution, inferring photo-realistic natural images for 4x upscaling factors (Ledig et al., 2016; Sonderby et al., 2016; Nguyen et al., 2016).

Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016.

Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling.
In Neural Information Processing Systems (NIPS), 2016.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.

Yipin Zhou and Tamara L Berg. Learning temporal transformations from time-lapse videos. In European Conference on Computer Vision, pp. 262-277. Springer, 2016.

Despite these promising successes, GANs are notably hard to train. Although Radford et al. (2015) provide a class of empirical architectural choices that are critical to stabilizing GAN training, it would be even better to train GANs more robustly and systematically. Salimans et al. (2016) propose a feature matching technique to stabilize GAN training: the generator is required to match the statistics of intermediate features of the discriminator. A similar idea is adopted by Zhao et al. (2016).

Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016.

To remedy this problem, we propose a novel regularizer for the GAN training target. The basic idea is simple yet powerful: in addition to the gradient information provided by the discriminator, we want the generator to take advantage of other similarity metrics with much more predictable behavior, such as the L2 norm. Differentiating these similarity metrics will provide us with more stable gradients to train our generator. Combining this idea with an approach meant to penalize the missing modes, we propose a family of additional regularizers for the GAN objective. We then design a set of metrics to evaluate the generated samples in terms of both the diversity of modes and the distribution fairness of the probability mass. These metrics are shown to be more robust in judging complex generative models, including those which are well-trained and collapsed ones.

Anders Boesen Lindbo Larsen, Soren Kaae Sonderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015."}, {"section_index": "5", "section_name": "APPENDIX: PSEUDO CODE FOR MDGAN", "section_text": "In addition to feature distances, Dosovitskiy & Brox (2016) found that a counterpart loss in image space further improves GAN training stability. Furthermore, some researchers make use of information in both spaces in a unified learning procedure (Dumoulin et al., 2016; Donahue et al., 2016).
In Dumoulin et al. (2016), one trains not just a generator but also an encoder, and the discriminator is trained to distinguish between two joint distributions over the image and latent spaces, produced either by applying the encoder to the training data or by applying the generator (decoder) to the latent prior. This is in contrast with regular GAN training, in which the discriminator only attempts to separate the distributions in the image space. In parallel, Metz et al. (2016) stabilize GANs by unrolling the optimization of the discriminator, which can be considered as work orthogonal to ours.

In this Appendix, we give the detailed training procedure of the MDGAN example we discuss in Section 3.3.

Our work is related to VAEGAN (Larsen et al., 2015) in terms of training an autoencoder or VAE jointly with the GAN model. However, the variational autoencoder (VAE) in VAEGAN is used to generate samples, whereas our autoencoder-based losses serve as a regularizer to penalize missing modes and thus improve GAN training stability and sample quality. We demonstrate the detailed differences from various aspects in Appendix D."}, {"section_index": "6", "section_name": "3 MODE REGULARIZERS FOR GANS", "section_text": "The GAN training procedure can be viewed as a non-cooperative two-player game, in which the discriminator D tries to distinguish real and generated examples, while the generator G tries to fool the discriminator by pushing the generated samples towards the direction of higher discrimination values. Training the discriminator D can be viewed as training an evaluation metric on the sample space. The generator G then has to take advantage of the local gradient ∇ log D(G) provided by the discriminator to improve itself, namely to move towards the data manifold.

We now take a closer look at the root cause of the instabilities while training GANs. The discriminator is trained on both generated and real examples. As pointed out by Goodfellow et al. (2014); Denton et al. (2015); Radford et al. (2015), when the data manifold and the generation manifold are disjoint (which is true in almost all practical situations), it is equivalent to training a characteristic function to be very close to 1 on the data manifold and 0 on the generation manifold. In order to pass good gradient information to the generator, it is important that the trained discriminator produces stable and smooth gradients. However, since the discriminator objective does not directly depend on the behavior of the discriminator in other parts of the space, training can easily fail if the shape of the discriminator function is not as expected. As an example, Denton et al. (2015) noted a common failure pattern for training GANs, the vanishing gradient problem, in which the discriminator D perfectly classifies real and fake examples, such that around the fake examples D is nearly zero. In such cases, the generator will receive no gradient to improve itself.

Figure 8: The detailed training procedure of an MDGAN example.

Another important problem while training GANs is mode missing. In theory, if the generated data and the real data come from the same low-dimensional manifold, the discriminator can help the generator distribute its probability mass, because the missing modes will not have near-0 probability under the generator, and so the samples in these areas can be appropriately concentrated towards regions where D is closer to 1.
However, in practice, since the two manifolds are disjoint, D tends to be near 1 on all the real data samples, so large modes usually have a much higher chance of attracting the gradient of the discriminator. For a typical GAN model, since all modes have similar D values, there is no reason why the generator cannot collapse to just a few major modes. In other words, since the discriminator's output is nearly 0 and 1 on fake and real data respectively, the generator is not penalized for missing modes.

One has to pay particular attention to batch normalization layers. In DCGAN, there are batch normalization layers both in the generator and the discriminator. However, two classes of data go through the batch normalization layers in the generator: one comes from the sampled noise z, the other comes from the encoder. In our implementation, we separate the batch statistics for these two classes of data in the generator, while keeping the parameters of the BN layers shared. In this way, the batch statistics of these two kinds of batches cannot interfere with each other."}, {"section_index": "7", "section_name": "3.1 GEOMETRIC METRICS REGULARIZER", "section_text": "Compared with the objective for the GAN generator, the optimization targets for supervised learning are more stable from an optimization point of view. The difference is clear: the optimization target for the GAN generator is a learned discriminator, while in supervised models the optimization targets are distance functions with nice geometric properties. The latter usually provides much easier training gradients than the former, especially at the early stages of training.

The data is sampled from a mixture of 6 Gaussians, with a standard deviation of 0.1. The means of the Gaussians are placed around a circle with radius 5. The generator network has two ReLU hidden layers with 128 neurons. It generates 2D output samples from 3D uniform noise in [0,1]. The discriminator consists of only one fully connected layer of ReLU neurons, mapping the 2D input to a real 1D number. Both networks are optimized with the Adam optimizer with a learning rate of 1e-4.

1 This problem exists even when we use log D(G(z)) as the target for the generator, as noted by Denton et al. (2015) and in our experiments.

Update D1 using SGD with gradient ascent:
$$\nabla \frac{1}{m}\sum_{i=1}^{m}\big[\log D_1(x_i) + \log(1 - D_1(G(E(x_i))))\big]$$
Update G and E using SGD with gradient ascent:
$$\nabla \frac{1}{m}\sum_{i=1}^{m}\big[\lambda_2 \log D_1(G(E(x_i))) - \lambda_1 d(x_i, G(E(x_i)))\big]$$
Update D2 using SGD with gradient ascent:
$$\nabla \frac{1}{m}\sum_{i=1}^{m}\big[\log D_2(G(E(x_i))) + \log(1 - D_2(G(z_i)))\big]$$
Update generator G using SGD with gradient ascent:
$$\nabla \frac{1}{m}\sum_{i=1}^{m}\big[\log D_2(G(z_i))\big]$$

We use similar architectures for the Compositional MNIST and CelebA experiments. The architecture is based on that found in DCGAN (Radford et al., 2015). Apart from the discriminator and generator, which are the same as in DCGAN, we add an encoder which is the "inverse" of the generator, obtained by reversing the order of layers and replacing the de-convolutional layers with convolutional layers.

Inspired by this observation, we propose to incorporate a supervised training signal as a regularizer on top of the discriminator target. Assume the generator G(z) : Z → X generates samples by sampling first from a fixed prior distribution in space Z, followed by a deterministic trainable transformation G into the sample space X. Together with G, we also jointly train an encoder E(x) : X → Z. Assume d is some similarity metric in the data space; we add E_{x∼p_d}[d(x, G∘E(x))] as a regularizer, where p_d is the data generating distribution. The encoder itself is trained by minimizing the same reconstruction error.

In practice, there are many options for the distance measure d. For instance, the pixel-wise L2 distance, or the distance of learned features by the discriminator (Dumoulin et al., 2016) or by other networks, such as a VGG classifier (Ledig et al., 2016).
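To make the regularized targets concrete, here is a minimal PyTorch-style sketch of the generator and encoder losses with the metric and mode regularizers; the module names, the choice of d as the pixel-wise L2 distance, and the sign convention (losses to be minimized) are our assumptions, not the paper's reference code.

```python
import torch
import torch.nn.functional as F

def regularized_gan_losses(G, E, D, x, z, lam1=0.005, lam2=0.005, eps=1e-8):
    """Losses (to minimize) for the generator and encoder of a Regularized GAN.
    G: generator, E: encoder, D: discriminator with outputs in (0, 1).
    x: minibatch of real data, z: minibatch of prior noise.
    d(x, G(E(x))) is the pixel-wise L2 distance; the mode-regularizer term is
    signed so that minimizing the loss increases D(G(E(x)))."""
    rec = G(E(x))
    reg = lam1 * F.mse_loss(rec, x) - lam2 * torch.log(D(rec) + eps).mean()
    g_loss = -torch.log(D(G(z)) + eps).mean() + reg
    e_loss = reg
    return g_loss, e_loss
```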
The geometric intuition for this regularizer is straightforward. We are trying to move the generated manifold to the real data manifold using gradient descent. In addition to the gradient provided by the discriminator, we can also try to match the two manifolds by other geometric distances, say, an L2 metric. The idea of adding an encoder is equivalent to first training a point-to-point mapping G(E(x)) between the two manifolds and then trying to minimize the expected distance between the points on these two manifolds.

Figure 9: Comparison results on a toy 2D mixture of Gaussians dataset. The columns on the left show heatmaps of the generator distributions as the number of training epochs increases, whereas the rightmost column presents the target, the original data distribution. The top row shows the standard GAN result. The generator has a hard time oscillating among the modes of the data distribution, and is only able to "recover" a single data mode at once. In contrast, the bottom row shows results of our regularized GAN. Its generator quickly captures the underlying multiple modes and fits the target distribution.

In addition to the metric regularizer, we propose a mode regularizer to further penalize missing modes. In traditional GANs, the optimization target for the generator is the empirical sum $\sum_i \nabla_\theta \log D(G_\theta(z_i))$. The missing mode problem is caused by the conjunction of two facts: (1) the areas near missing modes are rarely visited by the generator, by definition, thus providing very few examples to improve the generator around those areas, and (2) both missing modes and non-missing modes tend to correspond to a high value of D, because the generator is not perfect, so the discriminator can take strong decisions locally and obtain a high value of D even near non-missing modes."}, {"section_index": "8", "section_name": "D APPENDIX: COMPARISON WITH VAEGAN", "section_text": "In this appendix section, we demonstrate the effectiveness and uniqueness of the mode-regularized GANs proposed in this paper as compared to Larsen et al. (2015), in terms of theoretical difference, sample quality, and number of missing modes.

The first assumption does not necessarily hold for GANs. We have found that in some trained DCGAN models, the real posterior p(z|x) is not even guaranteed to have only one mode, let alone be anything close to a factorized Gaussian. We believe that this difference in probabilistic framework is an essential obstacle when one tries to use the objective of VAEGAN as a regularizer. In our algorithm, we instead use a plain autoencoder as the objective. Plain autoencoders work better than VAEs for our purposes because, as long as the model G(z) is able to generate the training samples, there always exists a function E*(x) such that G(E*(x)) = x. Our encoder can therefore be viewed as being trained to approximate this real encoder E*. There are no conflicts between a good GAN generator and our regularization objective. Hence, our objectives can be used as regularizers for encoding the prior knowledge that good models should be able to generate the training samples. This is why our work is essentially different from VAEGAN.
In our experiments, we also believe that this is the reason why VAEGAN generates worse samples than a carefully tuned regularized GAN.

Figure 2: Illustration of the missing modes problem.

In short, our regularized optimization targets for the generator and the encoder become

$$T_G = -\mathbb{E}_z[\log D(G(z))] + \mathbb{E}_{x\sim p_d}\big[\lambda_1 d(x, G\circ E(x)) + \lambda_2 \log D(G\circ E(x))\big]$$
$$T_E = \mathbb{E}_{x\sim p_d}\big[\lambda_1 d(x, G\circ E(x)) + \lambda_2 \log D(G\circ E(x))\big]$$

In terms of sample quality and missing modes, we run the official code of VAEGAN with its default settings. We train VAEGAN for 30 epochs and our models for only 20 epochs. For fairness

In the regularized version, we choose λ1 = λ2 = 0.005. The comparison between the generator distributions from the standard GAN and our proposed regularized GAN is shown in Figure 9.

As an example, consider the situation in Figure 2. For most z, the gradient of the generator ∇θ log D(Gθ(z)) pushes the generator towards the major mode M1. Only when G(z) is very close to the mode M2 can the generator get gradients to push itself towards the minor mode M2. However, it is possible that such z is of low or zero probability in the prior distribution p0.

With regard to the theoretical difference, the optimization of VAEGAN relies on the probabilistic variational bound, namely $\log p(x) \geq \mathbb{E}_{q(z|x)}[\log p(x|z)] - \mathrm{KL}(q(z|x)\,\|\,p(z))$. This variational bound together with a GAN loss is optimized with several assumptions imposed in VAEGAN:

1. In general, VAE is based on the assumption that the true posterior p(z|x) can be well approximated by a factorized Gaussian distribution q.
2. As to VAEGAN, it is also assumed that the maximum likelihood objective does not conflict with the GAN objective in terms of the probabilistic framework.

Given this observation, consider a regularized GAN model with the metric regularizer. Assume M0 is a minor mode of the data generating distribution. For x ∈ M0, we know that if G∘E is a good autoencoder, G(E(x)) will be located very close to mode M0. Since there are sufficient training examples of mode M0 in the training data, we add the mode regularizer $\mathbb{E}_{x\sim p_d}[\log D(G\circ E(x))]$ to our optimization target for the generator, to encourage G(E(x)) to move towards a nearby mode of the data generating distribution. In this way, we can achieve a fair probability mass distribution across different modes."}, {"section_index": "9", "section_name": "3.3 MANIFOLD-DIFFUSION TRAINING FOR REGULARIZED GANS", "section_text": "The generated samples are shown in Figure 10. The most obvious difference between our samples and VAEGAN's samples is the face distortion, which is consistent with our experimental results in Section 4.2.2.
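As an illustration of the manifold and diffusion steps that this section goes on to describe (see also the update rules reconstructed above for Figure 8), here is a minimal PyTorch-style sketch of one MDGAN iteration. The networks, optimizers, and λ are placeholder assumptions, and each ascent rule is written as a negated loss to minimize.

```python
import torch
import torch.nn.functional as F

def mdgan_step(G, E, D1, D2, x, z, opts, lam=1e-2):
    """One manifold step followed by one diffusion step.
    G, E, D1, D2: generator, encoder, and the two discriminators (outputs in (0, 1)).
    x: minibatch of real data; z: minibatch of prior noise.
    opts: dict of optimizers keyed by 'D1', 'GE', 'D2', 'G'."""
    eps = 1e-8
    # Manifold step: D1 separates x from G(E(x)); G and E match the two manifolds.
    rec = G(E(x))
    d1_loss = -(torch.log(D1(x) + eps)
                + torch.log(1 - D1(rec.detach()) + eps)).mean()
    opts['D1'].zero_grad(); d1_loss.backward(); opts['D1'].step()
    rec = G(E(x))
    ge_loss = lam * F.mse_loss(rec, x) - torch.log(D1(rec) + eps).mean()
    opts['GE'].zero_grad(); ge_loss.backward(); opts['GE'].step()
    # Diffusion step: D2 separates G(E(x)) from G(z); G spreads mass on the manifold.
    d2_loss = -(torch.log(D2(G(E(x)).detach()) + eps)
                + torch.log(1 - D2(G(z).detach()) + eps)).mean()
    opts['D2'].zero_grad(); d2_loss.backward(); opts['D2'].step()
    g_loss = -torch.log(D2(G(z)) + eps).mean()
    opts['G'].zero_grad(); g_loss.backward(); opts['G'].step()
```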
We conjecture that the distortions of VAEGAN's samples are due to the conflicts between the two objectives, as we presented above. In other words, the way we introduce autoencoders as regularizers for GAN models is different from VAEGAN's. The difference is that the second assumption mentioned above is not required in our approach. In our framework, the autoencoder helps alter the generation manifold, leading to fewer distortions in fine-grained details in our generated samples.

The proposed algorithm divides the training procedure of GANs into two steps: a manifold step and a diffusion step. In the manifold step, we try to match the generation manifold and the real data manifold with the help of an encoder and the geometric metric loss. In the diffusion step, we try to distribute the probability mass on the generation manifold fairly according to the real data distribution.

An example of manifold-diffusion training of GANs (MDGAN for short) is as follows: we train a discriminator D1 which separates between the samples x and G∘E(x), for x from the data, and we optimize G with respect to the regularized GAN loss E[log D1(G∘E(x)) + λd(x, G∘E(x))] in order to match the two manifolds. In the diffusion step we train a discriminator D2 between the distributions G(z) and G∘E(x), and we train G to maximize log D2(G(z)). Since these two distributions are now nearly on the same low-dimensional manifold, the discriminator D2 provides much smoother and more stable gradients. The detailed training procedure is given in Appendix A. See Figure 6 for the quality of the generated samples."}, {"section_index": "10", "section_name": "3.4 EVALUATION METRICS FOR MODE MISSING", "section_text": "In order to estimate both the missing modes and the sample qualities in our experiments, we use several different metrics for different experiments instead of human annotators.

The inception score (Salimans et al., 2016) was considered a good assessment of sample quality from a labelled dataset:

$$\exp\big(\mathbb{E}_x\, \mathrm{KL}(p(y|x)\,\|\,p^*(y))\big)$$

where x denotes one sample, p(y|x) is the softmax output of a trained classifier over the labels, and p*(y) is the overall label distribution of the generated samples. The intuition behind this score is that a strong classifier usually has high confidence for good samples. However, the inception score is sometimes not a good metric for our purpose. Assume a generative model that collapses to a very bad image. Although the model is very bad, it can have a perfect inception score, because p(y|x) can have a high entropy and p*(y) can have a low entropy. So instead, for labelled datasets, we propose another assessment of both visual quality and variety of samples, the MODE score:

$$\exp\big(\mathbb{E}_x\, \mathrm{KL}(p(y|x)\,\|\,p(y)) - \mathrm{KL}(p^*(y)\,\|\,p(y))\big)$$

where p(y) is the distribution of labels in the training data. According to our human evaluation experience, the MODE score successfully measures two important aspects of generative models, i.e., variety and visual quality, in one metric.
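A minimal sketch of the MODE score defined above, given classifier outputs on generated samples; the array names are illustrative.

```python
import numpy as np

def mode_score(p_y_given_x, p_y_train, eps=1e-12):
    """p_y_given_x: (n_samples, n_classes) classifier softmax outputs on generated samples.
    p_y_train: (n_classes,) label distribution p(y) of the training data.
    Implements exp(E_x KL(p(y|x) || p(y)) - KL(p*(y) || p(y))), where p*(y) is the
    marginal label distribution of the generated samples."""
    p = np.asarray(p_y_given_x, dtype=float) + eps
    p_train = np.asarray(p_y_train, dtype=float) + eps
    p_star = p.mean(axis=0)  # marginal over generated samples
    kl_per_sample = (p * (np.log(p) - np.log(p_train))).sum(axis=1)
    kl_marginals = (p_star * (np.log(p_star) - np.log(p_train))).sum()
    return float(np.exp(kl_per_sample.mean() - kl_marginals))
```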
In terms of the missing modes problem, we use the same method described in Section 4.2.1 for computing the number of images with missing modes. The results are shown below.

However, in datasets without labels (LSUN) or where the labels are not sufficient to characterize every data mode (CelebA), the above metric does not work well. We instead train a third-party discriminator between the real data and the generated data from the model. It is similar to the GAN discriminator but is not used to train the generator. We can view the output of the discriminator as an estimator for the quantity (see Goodfellow et al. (2014) for a proof):

$$D^*(s) \approx \frac{p_g(s)}{p_g(s) + p_d(s)}$$

where p_g is the probability density of the generator and p_d is the density of the data distribution. To prevent D* from learning a perfect 0-1 separation of p_g and p_d, we inject zero-mean Gaussian noise into the inputs when training D*. After training, we test D* on the test set T of the real dataset. If for any test sample t ∈ T the discrimination value D(t) is close to 1, we can conclude that the mode corresponding to t is missing. In this way, although we cannot measure exactly the number of modes that are missing, we have a good estimator of the total probability mass of all the missing modes.

In conclusion, we demonstrate that our proposed mode-regularized GANs, i.e., Reg-GAN and MDGAN, are different from VAEGAN theoretically, as discussed above. Such differences empirically result in better sample quality and mode-preserving ability, which are our main contributions.

On some large-scale datasets, CelebA for example, the regularizers we have discussed do improve the diversity of generated samples, but the quality of samples may not be as good without carefully tuning the hyperparameters. Here we propose a new algorithm for training metric-regularized GANs, which is very stable and much easier to tune for producing good samples.

Figure 10: Samples generated by our models and VAEGAN (rows from top: MDGAN, Regularized GAN, VAEGAN self-trained, VAEGAN as reported). The third row shows samples generated by our self-trained VAEGAN model, with default settings. The last row shows generated samples reported in the original VAEGAN paper. We depict both of them here for a fair comparison.

Table 4: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of the prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to VAEGAN.

σ     VAEGAN (100)   Reg-GAN (100)   Reg-GAN (200)   MDGAN (200)
3.5   9720           754             3644            74
4.0   5862           42              391             13

We see that using our proposed regularizers results in a huge drop in the number of missing modes. We conjecture that the reason why VAEGAN performs very badly on our metric for missing modes is that the samples it generates are of low quality, so the discriminator classifies the samples as "not on mode". Namely, the generated data is too far away from many real data modes. Essentially, if a model generates very bad samples, we can say that the model misses all or most modes.

To conduct a fairer evaluation between VAEGAN and our methods, we also performed a blind human evaluation. Again we instructed five individuals to conduct this evaluation of sample variability. Without telling them which samples were generated by VAEGAN and which were generated by our methods, four people agreed that our method wins in terms of sample diversity. One person thinks the samples are equally diverse."},
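As a sketch of the missing-mode estimate described above: given the noise-regularized third-party discriminator's outputs on the real test set, we simply count test samples whose discrimination value is close to 1. The threshold is our assumption; the text does not fix one.

```python
import numpy as np

def missing_mode_count(d_star_on_test, threshold=0.95):
    """d_star_on_test: (n_test,) outputs of the noise-regularized third-party
    discriminator D* on real test samples. Following the text's convention,
    test samples with discrimination value close to 1 are counted as lying
    on missing modes."""
    d = np.asarray(d_star_on_test, dtype=float)
    return int((d > threshold).sum())
```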
{"section_index": "11", "section_name": "4.1 MNIST", "section_text": "We perform two classes of experiments on MNIST. For the MNIST dataset, we can assume that the data generating distribution can be approximated with ten dominant modes, if we define the term "mode" here as a connected component of the data manifold."}, {"section_index": "12", "section_name": "4.1.1 GRID SEARCH FOR MNIST GAN MODELS", "section_text": "In order to systematically explore the effect of our proposed regularizers on GAN models in terms of improving stability and sample quality, we use a large-scale grid search over different GAN hyper-parameters on the MNIST dataset. The grid search is based on a pair of randomly selected loss weights: λ1 = 0.2 and λ2 = 0.4. We use the same hyper-parameter settings for both GAN and Regularized GAN, and list the search ranges in Table 1. Our grid search is similar to those proposed in Zhao et al. (2016); please refer to it for detailed explanations regarding these hyper-parameters.

Table 1: Grid search for hyper-parameters.

nLayerG   [2, 3, 4]
nLayerD   [2, 3, 4]
sizeG     [400, 800, 1600, 3200]
sizeD     [256, 512, 1024]
dropoutD  [True, False]
optimG    [SGD, Adam]
optimD    [SGD, Adam]
lr        [1e-2, 1e-3, 1e-4]

For evaluation, we first train a 4-layer CNN classifier on the MNIST digits, and then apply it to compute the MODE scores for the generated samples from all these models. The resulting distribution of MODE scores is shown in Figure 3. Clearly, our proposed regularizer significantly improves the MODE scores and thus demonstrates its benefits for stabilizing GANs and improving sample quality.

Figure 3: The distributions of MODE scores for GAN and regularized GAN.

To illustrate the effect of regularizers with different coefficients, we randomly pick an architecture and train it with different λ1 = λ2. The results are shown in Figure 4.

Figure 4: (Left 1-5) Different hyperparameters for MNIST generation, with λ1 = λ2 set to 0.000, 0.0005, 0.0009, 0.002, and 0.01; the values in our Regularized GAN are listed below the corresponding samples. (Right 6-7) Best samples through grid search for GAN and Regularized GAN (architectures 3-3-800-256-T-SGD-Adam-0.001 and 3-3-1600-512-T-Adam-SGD-0.001).

In order to quantitatively study the effect of our regularizers on the missing modes, we concatenate three MNIST digits to form a number in [0,999] in a single 64x64 image, and then train DCGAN as a baseline model on this 1000-mode dataset. The digits in the image are sampled with different probabilities, in order to test the model's capability to preserve small modes in generation. We again use a pre-trained classifier for MNIST instead of a human to evaluate the models.

The performance on the compositional experiment is measured by two metrics. #Miss represents the classifier-reported number of missing modes, which is the size of the set of numbers that the model never generates. KL stands for the KL divergence between the classifier-reported distribution of generated numbers and the distribution of numbers in the training data (as for the Inception score).
The results are shown in Table 2. With the help of our proposed regularizer, both the number of missing modes and the KL divergence drop dramatically on all of the sets of the compositional MNIST dataset, which again proves the effectiveness of our regularizer for preventing the missing modes problem.

Table 2: Results for compositional MNIST with 1000 modes. The proposed regularization (Reg-DCGAN) substantially reduces the number of missed modes as well as the KL divergence that measures the plausibility of the generated samples (as in the Inception score).

             Set 1          Set 2          Set 3          Set 4
             #Miss   KL     #Miss   KL     #Miss   KL     #Miss   KL
DCGAN        204.7   77.9   204.3   60.2   103.4   75.9   89.3    77.8
Reg-DCGAN    32.1    62.3   71.5    58.9   42.7    68.4   31.6    67.8
"}, {"section_index": "13", "section_name": "4.2.1 MISSING MODES ESTIMATION ON CELEBA", "section_text": "To test the effectiveness of our proposal on harder problems, we implement an encoder for the DCGAN algorithm and train our model with different hyper-parameters, together with the DCGAN baseline, on the CelebA dataset. We provide the detailed architecture of our regularized DCGAN in Appendix B.

We also employ a third-party discriminator trained with injected noise as a metric for missing mode estimation. To implement this, we add noise to the input layer of the discriminator network. For each GAN model to be estimated, we independently train this noisy discriminator, as a mode estimator, with the same architecture and hyper-parameters on the generated data and the training data. We then apply the mode estimator to the test data. The images which have high mode estimator outputs can be viewed as lying on the missing modes.

Table 3: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of the prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to other methods.

σ     DCGAN (100)   DCGAN (200)   Reg-GAN (100)   Reg-GAN (200)   MDGAN (200)
3.5   5463          17089         754             3644            74
4.0   590           15832         42              391             13

The comparison results are shown in Table 3. Both our proposed Regularized GAN and MDGAN outperform the baseline DCGAN models in all settings. In particular, MDGAN surpasses the other models, showing its superiority at mode preserving. We also find that, although sharing the same architecture, the DCGAN with 200-dimensional noise performs considerably worse than that with 100-dimensional noise as input. On the contrary, our regularized GAN performs more consistently.

To get a better understanding of the models' performance, we want to figure out when and where these models miss modes. Visualizing the test images associated with missed modes is instructive. In Figure 5, the left three images are missed by all models. It is rare to see in the training data the cap in the second image and the type of background in the third, which can thus be viewed as small modes in this situation. These three images should be considered as the hardest test data

Figure 5: Test set images that are on missing modes. Left: both MDGAN and DCGAN missing. Right: only DCGAN missing.

After the quantitative evaluation, we manually examine the generated samples of our regularized GAN to see whether the proposed regularizer has side effects on sample quality. We compare our model with ALI (Dumoulin et al., 2016), VAEGAN (Larsen et al., 2015), and DCGAN (Radford et al.,
2015) in terms of sample visual quality and mode diversity. Samples generated from these models are shown in Figure 6.

Figure 6: Samples generated from different generative models (rows from top: MDGAN, Regularized GAN, ALI, VAEGAN, DCGAN). For each compared model, we directly take ten decent samples reported in their corresponding papers and code repositories. Note how MDGAN samples are both globally more coherent and locally have sharper textures.

Both MDGAN and Regularized GAN generate clear and natural-looking face images. Although ALI's samples are plausible, they are slightly deformed in comparison with those from MDGAN. The samples from VAEGAN and DCGAN seem globally less coherent and locally less sharp.

As to sample quality, it is worth noting that the samples from MDGAN enjoy fewer distortions. With all four other models, the majority of generated samples suffer from some sort of distortion. However, for the samples generated by MDGAN, the level of distortion is lower compared with the other four models. We attribute this to the help of the autoencoder as the regularizer altering the generation manifolds. In this way, the generator is able to learn fine-grained details such as face edges. As a result, MDGAN is able to reduce distortions.

for GAN to learn. Nonetheless, our best model, MDGAN, still captures certain small modes. The seven images on the right in Figure 5 are only missed by DCGAN. The sideface, paleface, black and the berets are special attributes among these images, but our proposed MDGAN performs well on all of them."}]
Hy-lMNqex
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "It is only recently that commodity computing hardware in the form of graphics processors delivered the performance necessary for practical, large scale Deep Neural Network applications Krizhevsky et al.(2012). At the same time, the end of Dennard Scaling in semiconductor technology Es-. maeilzadeh et al.(2011) makes it difficult to deliver further advances in hardware performance using existing general purpose designs. It seems that further advances in DNN sophistication would have to rely mostly on algorithmic and in general innovations at the software level which can be. helped by innovations in hardware design. Accordingly, hardware DNN accelerators have emerged.. The DianNao accelerator family was the first to use a wide single-instruction single-data (SIsD) architecture to process up to 4K operations in parallel on a single chip Chen et al.[(2014a b) out-. performing graphics processors by two orders of magnitude. Development in hardware accelerators has since proceeded in two directions: either toward more general purpose accelerators that can. support more machine learning algorithms while keeping performance mostly on par with DaDian-. Nao (DaDN) Chen et al.(2014b), or toward further specialization of specific layers or classes of. DNNs with the goal of outperforming DaDN in execution time and/or energy efficiency, e.g., Han. et al.(2016); |Albericio et al.(2016a); Judd et al.(2016a); Chen, Yu-Hsin and Krishna, Tushar and Emer, Joel and Sze, Vivienne[(2016); Reagen et al.(2016). This work is along the second direction. Section5[reviews several other accelerator designs.\nWe have also performed an evaluation of NeuralTalk LSTM Karpathy & Li|(2014) which uses long short-term memory to automatically generate image captions. Precision can be reduced down to 11 bits withouth affecting the accuracy of the predictions (measured as the BLEU score when comparec to the ground truth) resulting in a ideal performance improvement of 1.45 translating into a 1.38 speedup with TRT.\nWhile DaDN's functional units process 16-bit fixed-point values, DNNs exhibit varying precision requirements across and within layers, e.g.,Judd et al.(2015). Accordingly, it is possible to use"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Tartan TRT a hardware accelerator for inference with Deep Neural Networks (DNNs) is presented and evaluated on Convolutional Neural Networks. TRT ex-. ploits the variable per layer precision requirements of DNNs to deliver execution. time that is proportional to the precision p in bits used per layer for convolutional and fully-connected layers. Prior art has demonstrated an accelerator with the same execution performance only for convolutional layersJudd et al.[(2016a c) Experiments on image classification CNNs show that on average across all net-. works studied, TRT outperforms a state-of-the-art bit-parallel accelerator Chen et al.[(2014b) by 1.90 without any loss in accuracy while it is 1.17 more en- ergy efficient. TRT requires no network retraining while it enables trading off. accuracy for additional improvements in execution performance and energy effi-. ciency. For example, if a 1% relative loss in accuracy is acceptable, TRT is on. average 2.04 faster and 1.25 more energy efficient than a conventional bit- parallel accelerator. 
A Tartan configuration that processes 2 bits at a time requires less area than the 1-bit configuration, improves efficiency to 1.24× over the bit-parallel baseline, and is 73% faster for convolutional layers and 60% faster for fully-connected layers.

This work presents Tartan (TRT), a massively parallel hardware accelerator whose execution time for fully-connected and convolutional layers scales with the precision p used to represent the input values. TRT uses hybrid bit-serial/bit-parallel functional units and exploits the abundant parallelism of typical DNN layers with the goal of exceeding DaDN's execution time performance and energy efficiency. Ideally, Tartan can improve execution time by 16/p, where p is the precision used for the activations in convolutional layers, and for the activations and weights in fully-connected layers. Every bit of precision that can be eliminated ideally reduces execution time and increases energy efficiency. TRT builds upon the Stripes (STR) accelerator (Judd et al., 2016c;a), which improves execution time and energy efficiency only for convolutional layers.

This work evaluates TRT on a set of convolutional neural networks (CNNs) for image classification. On average TRT reduces inference time by 1.61×, 1.91× and 1.90× over DaDN for the fully-connected, the convolutional, and all layers respectively. Energy efficiency compared to DaDN with TRT is 0.92×, 1.18× and 1.17× respectively. TRT enables trading off accuracy for improving execution time and energy efficiency. For example, on average for the fully-connected layers, accepting a 1% loss in accuracy improves performance to 1.73× and energy efficiency to 1.00× compared to DaDN.

The rest of this document is organized as follows: Section 2 illustrates the key concepts behind TRT via an example. Section 3 reviews the DaDN architecture and presents an equivalent Tartan configuration. Section 4 presents the experimental results. Section 5 reviews related work and discusses the limitations of this study and the potential challenges with TRT. Section 6 concludes.
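To make the ideal 16/p scaling above concrete, here is a minimal sketch of the network-wide speedup implied by a set of per-layer precisions, assuming a 16-bit bit-parallel baseline and that each layer's time shrinks exactly by 16/p; the example numbers are illustrative, not measured results.

```python
def ideal_speedup(layer_times, layer_precisions, base_bits=16):
    """layer_times: baseline (DaDN) execution time per layer.
    layer_precisions: precision p used per layer; each layer ideally runs
    base_bits / p times faster, so its TRT time is t * p / base_bits."""
    base = sum(layer_times)
    trt = sum(t * p / base_bits for t, p in zip(layer_times, layer_precisions))
    return base / trt

# e.g. two equal-time layers at 9 and 11 bits: ideal_speedup([1, 1], [9, 11]) == 1.6
```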
Energy Efficiency: This section compares the energy efficiency, or simply efficiency, of TRT and DaDN. Energy efficiency is the inverse of the relative energy consumption of the two designs. The average efficiency improvement with TRT across all networks and layers for the 100% profile is 1.17×. In the FCLs, TRT is not as efficient as DaDN; however, the energy efficiency for CVLs more than compensates when whole networks are considered, except for VGG_19. Regardless, performance would not scale linearly if DaDN were to include more tiles in an attempt to match TRT's performance: under-utilization for most layers in these networks would severely reduce any performance improvements delivered via additional tiles under DaDN. Overall, efficiency primarily comes from the reduction in effective computation following the use of reduced-precision arithmetic for the inner product operations. Furthermore, the amount of data that has to be transmitted from the SB and the traffic between the central eDRAM and the SIPs is decreased proportionally to the chosen precision. When the per-layer precisions are reduced adequately, TRT becomes more efficient than DaDN.

Table 3: Area Breakdown for TRT and DaDN

                      TRT area (mm²)    TRT 2-bit area (mm²)    DaDN area (mm²)
Inner-Product Units   57.27 (47.71%)    37.66 (37.50%)          17.85 (22.20%)
Synapse Buffer        48.11 (40.08%)    48.11 (47.90%)          48.11 (59.83%)
Input Neuron Buffer   3.66 (3.05%)      3.66 (3.64%)            3.66 (4.55%)
Output Neuron Buffer  3.66 (3.05%)      3.66 (3.64%)            3.66 (4.55%)
Neuron Memory         7.13 (5.94%)      7.13 (7.10%)            7.13 (8.87%)
Dispatcher            0.21 (0.17%)      0.21 (0.21%)            -
Total                 120.04 (100%)     100.43 (100%)           80.41 (100%)
Normalized Total      1.49              1.25                    1.00

Area Overhead: Table 3 reports the area breakdown of TRT and DaDN. Over the full chip, TRT needs 1.49× the area compared to DaDN while delivering on average a 1.90× improvement in speed. Generally, performance would scale sublinearly with area for DaDN due to underutilization. The 2-bit variant, which has a lower area overhead, is described in detail in the next section."}, {"section_index": "2", "section_name": "4.3 TWO-BIT AT ONCE PERFORMANCE EVALUATION", "section_text": "We evaluate the performance of a multi-bit design as described in Section 3.4, where 2 bits are processed every cycle in half as many total SIPs. The precisions used are the same as indicated in Table 1 for 100% accuracy, rounded up to the next multiple of two. The results are shown in Table 4.

Table 4: Relative performance of the 2-bit TRT variation compared to DaDN and the 1-bit TRT

The 2-bit TRT always improves performance compared to DaDN, as the "vs. DaDN" columns show. Compared to the 1-bit TRT, performance is slightly lower; however, given that the area of the 2-bit TRT is much lower, this can be a good trade-off. Overall, there are two forces at work that shape performance relative to the 1-bit TRT. There is performance potential lost due to rounding all precisions to an even number, and there is performance benefit from requiring less parallelism. The time needed to serially load the first bundle of weights is also reduced. In VGG_19 the performance benefit due to the lower parallelism requirement outweighs the performance loss due to precision rounding. In all other cases, the reverse is true.

A hardware synthesis and layout of both DaDN and TRT's 2-bit variant using TSMC 65nm typical case libraries shows that the total area overhead can be as low as 24.9%, with an improved energy efficiency in fully connected layers of 1.24× on average."}, {"section_index": "3", "section_name": "RELATED WORK AND LIMITATIONS OF THIS WORK", "section_text": "The recent success of Deep Learning has led to several proposals for hardware acceleration of DNNs. This section reviews some of these recent efforts. However, specialized hardware design for neural networks is a field with a relatively long history.
Relevant to TRT, bit-serial processing hardware for neural networks was proposed several decades ago, e.g., Svensson & Nordstrom (1990); Murray et al. (1988). While the performance of these designs scales with precision, it would be lower than that of an equivalently configured bit-parallel engine. For example, Svensson & Nordstrom (1990) use an interesting bit-serial multiplier which requires O(4 × p) cycles, where p is the precision in bits. Furthermore, as semiconductor technology has progressed, the number of resources that can be put on chip and the trade-offs (e.g., relative speed of memory vs. transistors vs. wires) are today vastly different, facilitating different designs. However, truly bit-serial processing such as that used in the aforementioned proposals needs to be revisited with today's technology constraints due to its potentially high compute density (compute bandwidth delivered per area).

In general, hardware acceleration for DNNs has recently progressed in two directions: 1) considering more general-purpose accelerators that can support additional machine learning algorithms, and 2) considering further improvements primarily for convolutional neural networks and the two layer types most dominant in terms of execution time: convolutional and fully-connected. In the first category there are accelerators such as Cambricon (Liu et al., 2016) and Cambricon-X (Zhang et al., 2016). While targeting support for more machine learning algorithms is desirable, work on further optimizing performance for specific algorithms such as TRT is valuable and needs to be pursued, as it will affect such general-purpose accelerators.

TRT is closely related to Stripes (Judd et al., 2016c;a), whose execution time scales with precision but only for CVLs. STR does not improve performance for FCLs.
TRT improves upon STR by enabling: 1) performance improvements for FCLs, and 2) slicing the activation computation across multiple SIPs thus preventing underutilization for layers with fewer than 4K outputs. Pragmatic use a similar in spirit organization to STR but its performance on CVLs depends only on the number o activation bits that are 1|Albericio et al.(2016b). It should be possible to apply the TRT extension to Pragmatic, however, performance in FCLs will still be dictated by weight precision. The area an energy overheads would need to be amortized by a commensurate performance improvement.\nv0 v1| AR BR OR AR BR OR AR BR OR AR BR OR w/1 w/1 w/0 w/0 i1 w/1 i0 w/0 (a) Engine Structure (b) Cycle 1: Parallel Load w on BRs a1/0| a2/0 a1/1| a2/1 AR BR OR AR BR OR AR BR OR AR BR OR w/1 w/1 w/1 c1/1 w/1 c2/1 w/0 w/0 w/0 c1/0 w/0 c2/0 (c) Cycle 2: Multiply w with bits O of (d) Cycle 3: Multiply w with bits 1 of the activations the activations\nThe Efficient Inference Engine (EIE) uses synapse pruning, weight compression, zero activation. elimination, and network retraining to drastically reduce the amount of computation and data com-. munication when processing fully-connected layers|Han et al.(2016). An appropriately configured EIE will outperform TRT for FCLs, provided that the network is pruned and retrained. However. the two approaches attack a different component of FCL processing and there should be synergy be. tween them. Specifically, EIE currently does not exploit the per layer precision variability of DNNs. and relies on retraining the network. It would be interesting to study how EIE would benefit from a TRT-like compute engine where EIE's data compression and pruning is used to create vectors of weights and activations to be processed in parallel. EIE uses single-lane units whereas TRT uses a. coarser-grain lane arrangement and thus would be prone to more imbalance. A middle ground may. be able to offer some performance improvement while compensating for cross-lane imbalance..\nEyeriss uses a systolic array like organization and gates off computations for zero activations Chen Yu-Hsin and Krishna, Tushar and Emer, Joel and Sze, Vivienne(2016) and targets primarily high. energy efficiency. An actual prototype has been built and is in full operation. Cnvlutin is a SIMD accelerator that skips on-the-fly ineffectual activations such as those that are zero or close to zero A1 bericio et al.(2016a). Minerva is a DNN hardware generator which also takes advantage of zero. activations and that targets high-energy efficiencyReagen et al.(2016). Layer fusion can furthe. reduce off-chip communication and create additional parallelism|Alwani et al.(2016). As multipl layers are processed concurrently, a straightforward combination with TRT would use the maximum. of the precisions when layers are fused..\nFigure 2: Processing the example Convolutional Layer Using TRT's Approach\nGoogle's Tensor Processing Unit uses quantization to represent values using 8 bits Jouppi(2016) tc support TensorFlow|Abadi et al.(2015). As Table[1shows, some layers can use lower than 8 bits of precision which suggests that even with quantization it may be possible to use fewer levels and tc potentially benefit from an engine such as TRT..\nLimitations: As in DaDN this work assumed that each layer fits on-chip. However, as networks evolve it is likely that they will increase in size thus requiring multiple TRT nodes as was suggested in DaDN. However, some newer networks tend to use more but smaller layers. 
Regardless, it would be desirable to reduce the area cost of TRT most of which is due to the eDRAM buffers. We have noi explored this possibility in this work. Proteus Judd et al.(2016b) is directly compatible with TR7 and can reduce memory footprint by about 60% for both convolutional and fully-connected layers Ideally, compression, quantization and pruning similar in spirit to EIE Han et al.(2016) would be used to reduce computation, communication and footprint. General memory compresion Mittal & Vetter(2016) techniques offer additional opportunities for reducing footprint and communication.\nWe evaluated TRT only on CNNs for image classification. Other network architectures are impor. tant and the layer configurations and their relative importance varies. TRT enables performance\nFigure 3: Processing the example Fully-Connected Layer using TRT's Approach\nAR BR OR AR BR OR AR BR OR BR OR AR BR OR AR BR OR w1/1 w2/1 0 >> w1/1 w1/1 w2/1 w2/1 + w1/0 w2/0 w2/1 w1/0 w1/0 w2/0 w2/0 w1/0 w2/0 ycle 1: Shift in bits 1 of (b) Cycle 2: Shift in bits 0 of (c) Cycle 3: Copy AR into BI Its into the ARs weights into the ARs a/0 a/0 a/1 a/1 AR BR OR AR BR OR AR BR OR AR BR OR w1/1 w1/1 w2/1 w2/1 w1/1 w1/1 f1/1 w2/1 w2/1 f2/1 > >> > w1/0 w1/0 w2/0 w2/0 w1/0 w1/0 f1/0 w2/0 w2/0 f2/0 (d) Cycle 4: Multiply weights with(e) Cycle 5: Multiply weights with first bit of a second bit of a\nEach subunit contains three 2-bit registers: a shift-register AR, a parallel load register BR, and ar parallel load output register OR. Each cycle each subunit can calculate the product of its single bi v, input with BR which it can write or accumulate into its OR. There is no bit-parallel multipliei since the subunits process a single activation bit per cycle. Instead, two AND gates, a shift-and-adc functional unit, and OR form a shift-and-add multiplier/accumulator. Each AR can load a single bi per cycle from one of the i wires, and BR can be parallel loaded from AR or from the i wires.\nimprovements for two of the most dominant layer types. We have also provided some preliminary evidence that TRT works well for NeuralTalk LSTM|Karpathy & Li|(2014). Moreover, by enabling output activation computation slicing it can accommodate relatively small layers as well.\nConvolutional Layer: Figure2b|through Figure[2d|show how the CVL is processed. The figures abstract away the unit details showing only the register contents. As Figure|2b shows, during cycle 1, the w synapse is loaded in parallel to the BRs of both subunits via the i1 and i0 inputs. During cycle 2, bits O of a1 and of a2 are sent via the v0 and v1 inputs respectively to the first and second subunit. The subunits calculate concurrently a1/0 w and a2/0 w and accumulate these results into their ORs. Finally, in cycle 3, bit 1 of a1 and a2 appear respectively on v0 and v1. The subunits calculate respectively a1/1 w and a2/1 w accumulating the final output activations c1 and c2 into their ORs.\nWe have evaluated TRT only for inference only. Using an engine whose performance scales with. precision would provide another degree of freedom for network training as well. However, TRT. needs to be modified accordingly to support all the operations necessary during training and the training algorithms need to be modified to take advantage of precision adjustments..\nThis section commented only on related work on digital hardware accelerators for DNNs. Advances. at the algorithmic level would impact TRT as well or may even render it obsolete. 
For example, work on using binary weights Courbariaux et al.(2015) would obviate the need for an accelerator whose. performance scales with weight precision. Investigating TRT's interaction with other network types. and architectures and other machine learning algorithms is left for future work..\nIn total it took 3 cycles to process the layer. However, at the end of the third cycle, another w could have been loaded into the BRs (the i are idle) allowing a new set of outputs to commence computation during cycle 4. That is loading a new weight can be hidden during the processing of the current output activation for all but the first time. In the steady state, when the input activations are represented in two bits, this engine will be producing two 2b 2b terms every two cycles thus matching the bandwidth of the bit-parallel engine.\nIf the activations a1 and a2 could be represented in just one bit, then this engine would be pro ducing two output activations per cycle, twice the bandwidth of the bit-parallel engine. The latter is incapable of exploiting the reduced precision. In general, if the bit-parallel hardware was using Pbase bits to represent the activations while only Pa bits were enough, TRT would outperform the bit-parallel engine by Pease\nThis work presented Tartan an accelerator for inference with Deep Learning Networks whose perfor. mance scales inversely linearly with the number of bits used to represent values in fully-connected and convolutional layers. TRT also enables on-the-fly accuracy vs. performance and energy ef-. ficiency trade offs and its benefits were demonstrated over a set of popular image classification. networks. The new key ideas in TRT are: 1) Supporting both the bit-parallel and the bit-serial. loading of weights into processing units to facilitate the processing of either convolutional or fully-. connected layers, and 2) cascading the adder trees of various subunits (SIPs) to enable slicing the output computation thus reducing or eliminating cross-lane imbalance for relatively small layers.\nFully-Connected Layer: Figure|3|shows how a TRT-like unit would process the example FCL. As Figure3a shows, in cycle 1, bit 1 of w1 and of w2 appear respectively on lines i1 and i0. The left subunit's AR is connected to i1 while the right subunit's AR is connected to i0. The ARs shift in the corresponding bits into their least significant bit sign-extending to the vacant position (shown as a O bit on the example). During cycle 2, as Figure 3b[shows, bits O of w1 and of w2 appear on the respective i lines and the respective ARs shift them in. At the end of the cycle, the left subunit's AR contains the full 2-bit w1 and the right subunit's AR the full 2-bit w2. In cycle 3, Figure|3cshows that the contents of AR are copied to BR in each subunit. From the next cycle, calculating the products can now proceed similarly to what was done for the CVL. In this case, however, each BR contains a different weight whereas in the CVL all BRs held the same w value. The shift capability of the ARs coupled with the different i wire per subunit connection allowed us to load a different weight bit-serially over two cycles. Figure|3d and Figure|3e|show cycles 4 and 5 respectively. During cycle 4, bit O of a1 appears on both v inputs and is multiplied with the BR in each subunit. In cycle 5, bit 1 of a1 appears on both v inputs and the subunits complete the calculation of f1 and f2. 
It takes two cycles to produce the two 2b 2b products once the correct inputs appear into the BRs.\nTRT opens up a new direction for research in inference and training by enabling precision adjust- ments to translate into performance and energy savings. These precisions adjustments can be done statically prior to execution or dynamically during execution. While we demonstrated TRT for in- ference only, we believe that TRT, especially if combined with Pragmatic, opens up a new direction for research in training as well. For systems level research and development, TRT with its ability to trade off accuracy for performance and energy efficiency enables a new degree of adaptivity for operating systems and applications."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "While in our example no additional inputs nor outputs are shown, it would have been possible to. overlap the loading of a new set of w inputs into the ARs while processing the current weights storec into the BRs. That is the loading into ARs, copying into BRs, and the bit-serial multiplication of the. BRs with the activations is a 3-stage pipeline where each stage can take multiple cycles. In general. assuming that both activations and weights are represented using 2 bits, this engine would match the. performance of the bit-parallel engine in the steady state. When both set of inputs i and v can be. represented with fewer bits, 1 in this case, the engine would produce two terms per cycle, twice the. bandwidth of the bit-parallel engine of the previous section..\nSummary: In general, if Pbase the precision of the bit-parallel engine, and PL and PL, the preci. sions that can be used respectively for activations and weights for layer L, a TRT engine can ideally Pba se -for FCLs. This P example used the simplest TRT engine configuration. Since typical layers exhibit massive paral lelism, TRT can be configured with many more subunits while exploiting weight reuse for CVLs. and activation reuse for FCLs. The next section describes the baseline state-of-the-art DNNs accel erator and presents an equivalent TRT configuration..\nJorge Albericio, Patrick Judd, Alberto Delmas Lascorz, Sayeh Sharify, and Andreas Moshovos Bit-pragmatic deep neural network computing. Arxiv, arXiv:1610.06920 [cs.LG], 2016b.\nApplying some of the concepts that underlie the TRT design to other more general purpose acceler ators such as Cambricon Liu et al. (2016) or graphics processors would certainly be more preferable than a dedicated accelerator in most application scenarios. However, these techniques are best first nvestigated into specific designs and then can be generalized appropriately.\nHadi Esmaeilzadeh, Emily Blem, Renee St. Amant, Karthikeyan Sankaralingam, and Doug Burger Dark silicon and the end of multicore scaling. In Proceedings of the 38th Annual Internationa Symposium on Computer Architecture, ISCA '11, pp. 365-376, New York, NY, USA, 2011. ACM ISBN 978-1-4503-0472-6. doi: 10.1145/2000064.2000108\nYangqing Jia. Caffe model zoo. https://github.com/BVLC/caffe/wiki/Model-Zoo, 2015\nYangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Ser gio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embed. ding. arXiv preprint arXiv:1408.5093. 2014.\nFigure 5: Overview of the system components and their communication. a) DaDN. b) Tartan\nThis work presents TRT as a modification of the state-of-the-art DaDianNao accelerator. 
Accordingly, Section 3.1 reviews DaDN's design and how it can process FCLs and CVLs. For clarity, in what follows the term brick refers to a set of 16 elements of a 3D activation or weight array which are contiguous along the i dimension, e.g., a(x, y, i)...a(x, y, i + 15). Bricks will be denoted by their origin element with a B subscript, e.g., a_B(x, y, i). The size of a brick is a design parameter.
Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor Aamodt, Natalie Enright Jerger, Raquel Urtasun, and Andreas Moshovos. Reduced-precision strategies for bounded memory in deep neural nets. arXiv:1511.05236v4 [cs.LG], 2015."}, {"section_index": "7", "section_name": "3.1 BASELINE SYSTEM: DADIANNAO", "section_text": "TRT is demonstrated as a modification of the DaDianNao accelerator (DaDN) proposed by Chen et al. (2014b). Figure 4a shows a DaDN tile which processes 16 filters concurrently, calculating 16 activation and weight products per filter for a total of 256 products per cycle. Each cycle the tile accepts 16 weights per filter, for a total of 256 synapses, and 16 input activations. The tile multiplies each weight with only one activation, whereas each activation is multiplied with 16 weights, one per filter. The tile reduces the 16 products into a single partial output activation per filter, for a total of 16 partial output activations for the tile. Each DaDN chip comprises 16 such tiles, each processing a different set of 16 filters per cycle. Accordingly, each cycle, the whole chip processes 16 activations and 256 x 16 = 4K weights, producing 16 x 16 = 256 partial output activations, 16 per tile.
Patrick Judd, Jorge Albericio, and Andreas Moshovos. Stripes: Bit-serial deep neural network computing. Computer Architecture Letters, 2016c.
Andrej Karpathy and Fei-Fei Li. Deep visual-semantic alignments for generating image descriptions. CoRR, abs/1412.2306, 2014. URL http://arxiv.org/abs/1412.2306.
Internally, each tile has: 1) a synapse buffer (SB) that provides 256 weights per cycle, one per weight lane, 2) an input neuron buffer (NBin) which provides 16 activations per cycle through 16 neuron lanes, and 3) a neuron output buffer (NBout) which accepts 16 partial output activations per cycle. In the tile's datapath each activation lane is paired with 16 weight lanes, one from each filter. Each synapse and neuron lane pair feeds a multiplier, and an adder tree per filter lane reduces the 16 per-filter products into a partial sum. In all, the filter lanes each produce a partial sum per cycle, for a total of 16 partial output activations per tile.
1 An FCL can be thought of as a CVL where the input activation array has unit x and y dimensions, there are as many filters as output activations, and the filter dimensions are identical to the input activation array.
Figure 4: Processing tiles: (a) DaDianNao, (b) Tartan. [Only the caption and labels of this figure are recoverable: each panel shows the NBin activation (bit) lanes, the per-filter weight lanes fed from the SB (eDRAM), the IP0-IP15 or SIP(0,0)-SIP(15,15) units with their SWR and WR registers, and NBout connected to the central eDRAM.]
[Figure 5 panels: (a) DaDN and (b) Tartan, each with Tile 0 through Tile 15 connected to the central NM over 256-bit links; the Tartan panel adds a Dispatcher and per-tile Reducers. See the Figure 5 caption.]
Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor M. Aamodt, Natalie Enright Jerger, and Andreas Moshovos. Proteus: Exploiting numerical precision variability in deep neural networks. In Proceedings of the 2016 International Conference on Supercomputing, ICS '16, pp. 23:1-23:12, New York, NY, USA, 2016b. ACM. doi: 10.1145/2925426.2926294. URL http://doi.acm.org/10.1145/2925426.2926294.
Figure 5a shows an overview of the DaDN chip. There are 16 processing tiles connected via an interconnect to a shared central eDRAM Neuron Memory (NM). DaDN's main goal was minimizing off-chip bandwidth while maximizing on-chip compute utilization. To avoid fetching weights from off-chip, DaDN uses a 2MB eDRAM Synapse Buffer (SB) for weights per tile, for a total of 32MB of eDRAM. All inter-layer activation outputs except for the initial input and the final output are stored in NM, which is connected via a broadcast interconnect to the 16 Input Neuron Buffers (NBin). All values are 16-bit fixed-point, hence a 256-bit wide interconnect can broadcast a full activation brick in one step. Off-chip accesses are needed only for: 1) reading the input image, 2) reading the weights once per layer, and 3) writing the final output.
Naveen Muralimanohar and Rajeev Balasubramonian. Cacti 6.0: A tool to understand large caches.
Alan F. Murray, Anthony V. W. Smith, and Zoe F. Butler. Bit-serial neural networks. In Neural Information Processing Systems, pp. 573-583, 1988.
M. Poremba, S. Mittal, Dong Li, J. S. Vetter, and Yuan Xie. Destiny: A tool for modeling emerging 3D NVM and eDRAM caches. In Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 1543-1546, March 2015.
Brandon Reagen, Paul Whatmough, Robert Adolf, Saketh Rama, Hyunkwang Lee, Sae Kyu Lee, Jose Miguel Hernandez-Lobato, Gu-Yeon Wei, and David Brooks. Minerva: Enabling low-power, highly-accurate deep neural network accelerators. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), pp. 267-278, 2016. doi: 10.1109/ISCA.2016.32.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. arXiv:1409.0575 [cs], September 2014.
Bertil Svensson and T. Nordstrom. Execution of neural network algorithms on an array of bit-serial processors. In Pattern Recognition, 1990. Proceedings., 10th International Conference on, volume 2, pp. 501-505. IEEE, 1990.
Synopsys. Design Compiler. http://www.synopsys.com/Tools/Implementation/RTLSynthesis/DesignCompiler/Pages.
Processing starts by reading from external memory the first layer's filter weights and the input image. The weights are distributed over the SBs and the input is stored into NM. Each cycle an input activation brick is broadcast to all units. Each unit reads 16 weight bricks from its SB and produces a partial output activation brick which it stores in its NBout.
Once computed, the output activations are stored through NBout to NM and then fed back through the NBins when processing the next layer. Loading the next set of weights from external memory can be overlapped with the processing of the current layer as necessary.
As Section 2 explained, TRT processes activations bit-serially, multiplying a single activation bit with a full weight per cycle. Each DaDN tile multiplies 16 16-bit activations with 256 weights each cycle. To match DaDN's computation bandwidth, TRT needs to multiply 256 1-bit activations with 256 weights per cycle. Figure 4b shows the TRT tile. It comprises 256 Serial Inner-Product Units (SIPs) organized in a 16 x 16 grid. Similar to DaDN, each SIP multiplies 16 weights with 16 activations and reduces these products into a partial output activation. Unlike DaDN, each SIP accepts 16 single-bit activation inputs. Each SIP has two registers, each a vector of 16 16-bit subregisters: 1) the Serial Weight Register (SWR), and 2) the Weight Register (WR). These correspond to AR and BR of the example of Section 2. NBout remains as in DaDN; however, it is distributed along the SIPs as shown.
Shijin Zhang, Zidong Du, Lei Zhang, Huiying Lan, Shaoli Liu, Ling Li, Qi Guo, Tianshi Chen, and Yunji Chen. Cambricon-X: An accelerator for sparse neural networks. In Proceedings of the 49th International Symposium on Microarchitecture, 2016.
Convolutional Layers: Processing starts by reading in parallel 256 weights from the SB as in DaDN, and loading the 16 per-SIP-row weights in parallel to all SWRs in the row. Over the next P_L^a cycles, the weights are multiplied by the bits of an input activation brick per column. TRT exploits weight reuse across 16 windows, sending a different input activation brick to each column. For example, for a CVL with a stride of 4, a TRT tile will process 16 activation bricks a_B(x, y, i), a_B(x + 4, y, i) through a_B(x + 63, y, i) in parallel, a bit per cycle. Assuming that the tile processes filters f_i through f_{i+15}, after P_L^a cycles it would produce the following partial output activations: o_B(x/4, y/4, f_i) through o_B(x/4 + 15, y/4, f_i), that is, 16 output activation bricks contiguous on the x dimension. Whereas DaDN would process 16 activation bricks over 16 cycles, TRT processes them concurrently but bit-serially over P_L^a cycles. If P_L^a is less than 16, TRT will outperform DaDN by 16/P_L^a, and when P_L^a is 16, TRT will match DaDN's performance.
Fully-Connected Layers: Processing starts by loading bit-serially and in parallel, over P_L^w cycles, 4K weights into the SWRs. Each SWR per row gets a different set of 16 weights, as each subregister is connected to one out of the 256 wires of the SB output bus for the SIP row. Once the weights have been loaded, the SWRs are copied to the WRs and multiplication with the input activations can then proceed bit-serially over P_L^a cycles. Assuming that there are enough output activations so that a different output activation can be assigned to each SIP, the same input activation brick can be broadcast to all SIP columns. For example, for an FCL a TRT tile will process one activation brick a_B(i) bit-serially to produce 16 output activation bricks, one per SIP column. Loading the next set of weights can be done in parallel with processing the current set; thus execution time is constrained by P_L^max = max(P_L^a, P_L^w). Thus, a TRT tile produces 256 partial output activations every P_L^max cycles, a speedup of 16/P_L^max over DaDN, since a DaDN tile always needs 16 cycles to do the same.
Once a full window is processed, the 16 resulting sums are fed through a non-linear activation function, f, to produce the 16 final output activations. The multiplications and reductions needed per cycle are implemented via 256 multipliers, one per weight lane, and sixteen 17-input (16 products plus the partial sum from NBout) adder trees, one per filter lane.
Figure 6: TRT's SIP. [Only the caption and signal labels of this figure are recoverable: each SIP takes 16 single-bit activation inputs 1(a0)-1(a15) and 16 weights, holds them in the SWR and WR registers, multiplies via AND gates into a 16 x 16b adder tree with MSB negation, accumulates through a shift (<<1) adder, and provides i_nbout/o_nbout cascade ports, a prec input, and a max unit.]
For TRT to be fully utilized an FCL must have at least 4K output activations. Some of the networks studied have a layer with as few as 2K output activations. To avoid underutilization, the SIPs along each row are cascaded into a daisy-chain, where the output of one can feed into an input of the next via a multiplexer. This way, the computation of an output activation can be sliced over the SIPs along the same row. In this case, each SIP processes only a portion of the input activations, resulting in several partial output activations along the SIPs on the same row. Over the next np cycles, where np is the number of slices used, the np partial outputs can be reduced into the final output activation. The user can choose any number of slices up to 16, so that TRT can be fully utilized even with fully-connected layers of just 256 outputs. For example, in NeuralTalk Karpathy & Li (2014) the smallest layers can have 600 outputs or fewer.
SIP: Bit-Serial Inner-Product Units: Figure 6 shows TRT's Bit-Serial Inner-Product Unit (SIP). Each SIP multiplies 16 activations by 16 weights to produce an output activation. Each SIP has two registers, a Serial Weight Register (SWR) and a Weight Register (WR), each containing 16 16-bit subregisters. Each SWR subregister is a shift register with a single-bit connection to one of the weight bus wires that is used to read weights bit-serially for FCLs. Each WR subregister can be parallel loaded from either the weight bus or the corresponding SWR subregister, to process CVLs or FCLs respectively. Each SIP includes 256 2-input AND gates that multiply the weights in the WR with the incoming activation bits, and a 16 x 16b adder tree that sums the partial products. A final adder plus a shifter accumulate the adder tree results into an output register. In each SIP, a multiplexer at the first input of the adder tree implements the cascade mode, supporting slicing the output activation computation along the SIPs of a single row. To support signed 2's complement neurons, the SIP can subtract the weight corresponding to the most significant bit (MSB) from the partial sum when the MSB is 1. This is done with negation blocks for each weight before the adder tree. Each SIP also includes a comparator (max) to support max pooling layers.
Dispatcher and Reducers: Figure 5b shows an overview of the full TRT system. As in DaDN, there is a central NM and 16 tiles. A Dispatcher unit is tasked with reading input activations from NM, always performing eDRAM-friendly wide accesses. It transposes each activation and communicates each a bit at a time over the global interconnect.
For CVLs the dispatcher has to maintain a pool o multiple activation bricks, each from different window, which may require fetching multiple row from NM. However, since a new set of windows is only needed every PL cycles, the dispatcher cai keep up for the layers studied. For FCLs one activation brick is sufficient. A Reducer per title i tasked with collecting the output activations and writing them to NM. Since output activations tak multiple cycles to produce, there is sufficient bandwidth to sustain all 16 tiles.\nOther Layers: TRT like DaDN can process the additional layers needed by the studied networks For this purpose the tile includes additional hardware support for max pooling similar to DaDN. An activation function unit is present at the output of NBout in order to apply nonlinear activations before the output neurons are written back to NM."}, {"section_index": "8", "section_name": "3.4 PROCESSING SEVERAL BITS AT ONCE", "section_text": "In order to improve TRT's area and power efficiency, the number of bits processed at once can be parameterized. In this case, the weights are multiplied with several activation bits at once, and th multiplication results are partially shifted before they are inserted into their corresponding adde tree.\nIn order to load the weights on time, the SwR subregister has to be modified so it can load sev. eral bits in parallel, and shift that number of positions every cycle. The negation block (for 2's complement support) will operate only over the most significant product result.\nThe chief advantage of such a design is that less SIPs are needed in order to achieve the same throughput - for example, processing 2 bits at once allows reducing the number of columns from 16 to 8. Although the total number of bus wires is similar, the distance they have to cover is significantly reduced. Likewise, the total number of adders required stays similar, but they are clustered closer together.\nA drawback of this design is the limitation to precisions that are exact multiples of the number of bits processed at once.\nThis section evaluates TRT's performance, energy and area and explores the trade-off between ac curacy and performance comparing to DaDN"}, {"section_index": "9", "section_name": "4.1 METHODOLOGY", "section_text": "Numerical Representation Requirements Analysis: The per layer precision profiles are found via the methodology of Judd et al. Judd et al.(2015). Caffe Jia et al.(2014) was used to measure hov reducing the precision of each FCL affects the network's overall top-1 prediction accuracy over 5000 images. The network definitions and pre-trained synaptic weights are taken from the Caffe Mode Zoo [Jia(2015). Since TRT's performance for FCLs is bound by the maximum of the weight an activation precisions, our exploration was limited to the cases where both are the same. The searcl procedure is a gradient descent where a given layer's precision is iteratively decremented one bit a a time, until the network's accuracy drops. For weights, the fixed point numbers are set to represen values between -1 and 1. For activations, the number of fractional bits is fixed to a previously determined value known not to hurt accuracy, as per Judd et al.(2015). While both activations anc weights use the same number of bits, their precisions and ranges differ.\nPerformance, Area and Energy: DaDN, STR and TRT were modeled using the same methodol. ogy for consistency. A custom cycle-accurate simulator models execution time. Computation was. 
scheduled as described by Judd et al.(2016a) to maximize energy efficiency for DaDN. The logic components of the both systems were synthesized with the Synopsys Design Compiler Synopsys. for a TSMC 65nm library to report power and area. The circuit is clocked at 980 MHz. The NBin and NBout SRAM buffers were modelled using CACTI Muralimanohar & Balasubramonian The eDRAM area and energy were modelled with Destiny|Poremba et al.(2015).\nFully-Connected Layer Precisions: Table 1reports the per layer precisions for the CVLs and. FCLs of the networks studied along with the speedup over DaDN that would be ideally possible.. The discussion in this section focuses solely on FCLs. The precisions that can be used vary from 8 up to 10 bits vs. the 16 bits DaDN uses. The ideal speedup ranges from 63% to 66% with. no accuracy loss. Additional exploration of the precision space may yield even shorter precisions without sacrificing accuracy. Modest additional improvements are possible with a loss of 1% in. accuracy.\nExecution Time: Table2reports TRT's performance and energy efficiency relative to DaDN for the precision profiles in Table 1 separately for the fully-connected layers, for the convolutional layers"}]
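To make the preceding performance discussion concrete, below is a small Python sketch, our own illustration rather than TRT's RTL or the cycle-accurate simulator described above, of the two ideas the speedup claims rest on: a shift-and-add inner product whose cycle count equals the activation precision, and the resulting ideal speedup over a 16-bit bit-parallel tile. All names and values here are hypothetical.

```python
# Bit-serial processing model: weights stay resident in the WRs while
# activations are fed one bit per cycle (MSB first), so an inner product
# completes in P_a cycles instead of a fixed 16.
def bit_serial_inner_product(weights, activations, p_a):
    acc = 0
    for cycle in range(p_a):                      # one activation bit per cycle
        bit_pos = p_a - 1 - cycle
        partial = sum(w * ((a >> bit_pos) & 1)    # AND gates: weight x 1 bit
                      for w, a in zip(weights, activations))
        acc = (acc << 1) + partial                # shift-and-add accumulation
    return acc

weights = [3, 1, 2, 0]
activations = [2, 3, 1, 1]                        # all fit in 2 bits
reference = sum(w * a for w, a in zip(weights, activations))
assert bit_serial_inner_product(weights, activations, p_a=2) == reference

# Ideal speedup over a P_base-bit bit-parallel engine, per the Summary
# paragraph: P_base / P_a for CVLs, P_base / max(P_a, P_w) for FCLs.
def ideal_speedup(p_base, p_a, p_w, fully_connected):
    return p_base / (max(p_a, p_w) if fully_connected else p_a)

print(ideal_speedup(16, 10, 10, fully_connected=True))  # 1.6x for a 10-bit profile
```

The printed 1.6x for a 10-bit profile is in line with the ideal speedups quoted for the precision profiles of Table 1.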
[{"section_index": "0", "section_name": "ACKNOWLEDGMENTS", "section_text": "Special thanks to Carlos Florensa for his implementation tips and to Jaime F. Fisac for helping ii the process of writing this work\nVicenc Rubies Royo, Claire Tomlin"}, {"section_index": "1", "section_name": "REFERENCES", "section_text": "Department of Electrical Engineering and Computer Science. UC Berkeley Rorzol. IISA\nJohn Schulman, Sergey Levine, Michael Jordan, and Pieter Abbeel. Trust Region Policy Optimiza tion. Icml-2015, page 16, 2015. ISSN 2158-3226. doi: 10.1063/1.4927398.\nMost machine learning applications using neural networks seek to approximate some function g(x) by minimizing some cost criterion. In the simplest case, if one has access to pairs of the form (x, y) where y = g(x), the problem can be framed as a regression problem. Beyond this family of problems, we find many cases where the unavailability of data pairs makes this approach unfeasible. However, similar to what we find in the reinforcement learning literature, if we have some known properties of the function we are seeking to approximate, there is still hope to frame the problem as a regression problem. In this context, we present an algorithm that approximates the solution to a partial differential equation known as the Hamilton-Jacobi-Isaacs partial differential equation (HJI PDE) and compare it to current state of the art tools. This PDE, which is found in the fields of control theory and robotics, is of particular importance in safety critical systems where guarantees of performance are a must.\nIan Mitchell. A toolbox of level set methods. Technical report, 2007."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Artificial neural networks are remarkable function approximators used in a myriad of applications. ranging from complex controllers for robotic actuation (Levine et al.]2016) (Schulman et al.]2015) to simple image classifiers for digit recognition (LeCun et al.||1989) . They even find uses in physics. to find approximations to solutions of PDEs and systems of coupled ordinary differential equations. (ODEs) (Lagaris et al.||1998). Their success is in part achieved by their property of being universal. function approximators (Hornik et al.1989). In order to train a neural network one usually defines. a cost function which captures the 'goodness\"' of the choice of parameters in our model, and uses. gradient descent/ascent algorithms to improve them. In supervised learning, for example, input out- put data pairs are used to define a cost function such as the mean squared error or the mean absolute. error; unfortunately, in many cases the function we want to approximate is unkown. For instance,. in many reinforcement learning settings one wants to find the optimal policy, a function from state variables to actions'I which maximizes the expected sum of discounted rewards of an agent in some. environment. This function is usually unkown a priori, so this problem can't readily be framed. as a regression problem using input-output pairs. This assertion becomes blurred, however, when. looking at the work of[Mnih et al.(2013), where a deep Q-network learns by generating targets and. minimizing a cost of the form."}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Y. LeCun, B. Boser. J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel Backpropagation Applied to Handwritten Zip Code Recognition, 1989. ISSN 0899-7667.\nBadis Djeridane and John Lygeros. 
$$L_i(\theta_i) = \mathbb{E}_{s,a\sim\rho(\cdot)}\big[(y_i - Q(s,a;\theta_i))^2\big] \qquad (1)$$
Here, the targets $y_i$ are generated from the same Q-network that is being used to approximate the Q-function, hence the neural network has two purposes: approximation and data generation. In this work, we show that this same idea can be extended to the domain of approximating solutions to partial differential equations, and in particular the solution to the Hamilton-Jacobi-Isaacs PDE.
This experiment was designed to test the applicability of the method to problems beyond those presented in the previous sections. In particular, we show that with small changes we can also compute an accurate approximation to a pursuit-evasion problem in 3 dimensions. Similar to the previous examples, we frame the problem in relative coordinates with the x-axis aligned with the evader's heading, and give the pursuer and evader control over the rate of rotation. This can be written as follows:
$$f(x,a,b) = \begin{bmatrix} -v_e + v_p\cos(\theta_r) + a\,y_r \\ v_p\sin(\theta_r) - a\,x_r \\ b - a \end{bmatrix}$$
In control theory and robotics we often want to know how a system evolves in time given some input signal. In particular, one would like to know whether there exists an (optimal) input signal that drives our system to a particular region of interest in our state space, and what that input is. For a deterministic system with continuous states and inputs, this problem can be succinctly expressed as a partial differential equation known as the Hamilton-Jacobi-Isaacs (HJI) PDE.
Let $V: \mathbb{R}^n \times \mathbb{R}_- \to \mathbb{R}$. Then, given a time-invariant system of the form $\frac{dx}{dt} = f(x, a, b)$ and boundary condition $V(x,0) = l(x)$, where $x \in \mathbb{R}^n$ is the state vector and $a \in A \subset \mathbb{R}^{m_a}$ and $b \in B \subset \mathbb{R}^{m_b}$ are inputs to the system, we wish to find the solution to the minimum-payoff HJI PDE associated to the reachability problem:
$$\frac{\partial V(x,t)}{\partial t} = -\min\{0,\ H(x, \nabla_x V)\} \qquad (2)$$
For this problem the capture condition is encoded in the boundary condition $V(x,0) = \|[x_r\ y_r]^T\|_2 - 1$ (where we ignore $\theta_r$ since the capture condition only depends on the distance), and we consider the time horizon T = 1.0s. For this problem we give both pursuer and evader the same speed $v_p = v_e = 1.0$ and the same turning rates $a, b \in [-1, 1]$. Unlike the previous experiments, we used a neural network with two hidden layers with 10 and 5 units respectively and sigmoid activations. The number of points sampled was chosen to be N = 2000, uniformly picked over the set $S := \{(x_r, y_r, \theta_r) \mid x_r, y_r \in [-5, 5],\ \theta_r \in [-\pi, \pi]\}$ and over $t \in [-T, 0]$. The batches were picked to be of size K = 25, momentum decay $\gamma = 0.999$ and learning rate $\eta = 0.001$. The interval to renew the regression points was chosen to be 1000 iterations and the program was halted at 500,000 iterations.
The function
$$H(x, \nabla_x V) := \max_{a \in A}\ \min_{b \in B}\ \nabla_x V^T f(x, a, b) \qquad (3)$$
is known as the Hamiltonian. The boundary condition V(x, 0) = l(x) encodes in its zero sub-level set (i.e., l(x) < 0) the region of interest in our state space known as the target set T. Lastly, the solution V(x, t) to (2) encodes the information about all the starting states whose induced trajectories will enter (and possibly leave) T within |t|, given the dynamics and input signals. More precisely, for some starting state $x_0$ and t < 0, $V(x_0, t) < 0$ if and only if the trajectory starting from $x_0$ enters T within |t|.
To give some intuition as to why V(x,t) encodes the starting states whose trajectories enter T within |t|, let us consider the simpler problem where $\dot{x} = f(x)$ is an autonomous system without any inputs. Further, let us write (2) as a finite difference in t. With some rearranging, and absorbing the gradient into V (i.e., $\nabla_x V^T f(x)\Delta t + V(x,t) \approx V(x + f(x)\Delta t,\ t)$), one can obtain the following approximation:
$$V(x, t - \Delta t) \approx \min\{\, V(x,t),\ V(x + f(x)\Delta t,\ t) \,\} \qquad (4)$$
Figure 6: From left to right: the first figure shows the mean absolute error E1, the second figure shows the mean absolute PDE error E2 and the third figure shows the loss $L_\theta$ as defined in Algorithm 4.1 over all the data.
As shown in Fig. 6, both error metrics decrease as the algorithm progresses, reaching an average error for E1 on the order of 5.0 x 10^-2 and an average error for E2 on the order of 1.0 x 10^-1. The points used to compute E1 were taken from a 51 x 51 x 50 approximation grid at t = -0.5s. This set of experiments was run on a different machine (see footnote 4) using 8 threads, and the total time for all threads to finish was 1000 seconds. Finally, Fig. 7 shows the zero level-set contour at t = -0.5, which is now a 3D surface, from side and top perspectives. The first row shows the output of the LevelSet Toolbox from each perspective, and the second row shows a 3D scatter plot of points on the zero level-set obtained from one of the 8 neural networks that were trained.
For the case of one input trying to drive our system into T, the approximation becomes
$$V(x, t - \Delta t) \approx \min\{\, V(x,t),\ \min_{b \in B} V(x + f(x,b)\Delta t,\ t) \,\} \qquad (5)$$
and, for the case of two inputs with competing objectives,
$$V(x, t - \Delta t) \approx \min\{\, V(x,t),\ \max_{a \in A}\ \min_{b \in B} V(x + f(x,a,b)\Delta t,\ t) \,\} \qquad (6)$$
It is straightforward to see from (4) that at time t = 0 all the states outside of T (i.e., V(x, 0) > 0) but near its boundary, whose induced trajectories enter the target (i.e., $V(x + f(x)\Delta t, 0) < 0$) within $\Delta t$, will become negative in $V(x, -\Delta t)$. Thinking of this update recursively, one can intuitively see how the zero sub-level set of V grows backward in time to include more and more states.
Using the previous analogy of the autonomous system, one can see how (5) and (6) are essentially different ways to expand the zero sub-level set backward in time: (5) can be seen as an input trying to expand the set as fast as possible; (6) can be seen as two inputs with competing goals, where one input tries to expand the set and the other seeks to prevent its growth. Moreover, this last setting shows the relevance of the HJI PDE in safety-critical systems. By treating input b as a bounded worst-case disturbance and T as some unsafe region, one can establish safety guarantees about the system and claim which states won't be driven into T within some time horizon.
4 Due to heavy usage of the first machine we had to switch to a different one.
[Figure 7 panels: side and top views of the 3D zero level-set surface at t = -0.5s, over $x_r, y_r \in [-5, 5]$; see the Figure 7 caption.]
Lastly, it is important to note that V(x, t) contains useful information in its gradient $\nabla_x V(x,t)$. In particular, solving the problem
$$b^* = \operatorname{argmin}_{b \in B}\ \nabla_x V(x_0, t)^T f(x_0, b) \qquad (7)$$
yields the instantaneous optimal input for state $x_0$ at time t to guide the trajectory into T as fast as possible. Using this fact one can generate an optimal control policy based on the gradient of V.
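As a concrete illustration of (7), the following Python sketch (our addition, not the authors' code) picks the pursuer heading that minimizes the inner product of the value gradient with the dynamics over a sampled grid of inputs. The dynamics are those of the single-input game of Section 5.2 with $v_p = v_e = 2$; the closed-form gradient used here is a stand-in for the gradient of the learned network, which would normally be obtained via backpropagation.

```python
import numpy as np

def optimal_input(x, t, grad_V, f, b_grid):
    """Return b* = argmin_b grad_V(x,t) . f(x,b) over a sampled grid of b."""
    g = grad_V(x, t)
    return min(b_grid, key=lambda b: float(np.dot(g, f(x, b))))

# Single-input pursuit-evasion dynamics (Section 5.2), v_p = v_e = 2.
f = lambda x, b: np.array([2.0 * np.cos(b) - 2.0, 2.0 * np.sin(b)])

# Stand-in gradient: for V(x, 0) = ||x||_2 - 1 it points away from the origin.
grad_V = lambda x, t: x / np.linalg.norm(x)

x0 = np.array([3.0, 1.0])
b_star = optimal_input(x0, 0.0, grad_V, f,
                       b_grid=np.linspace(0.0, 2.0 * np.pi, 361))
print(b_star)  # heading that drives (x_r, y_r) toward the capture set fastest
```

Sampling b over a grid is only adequate because the input set is one-dimensional and bounded; for richer input sets one would substitute a proper optimizer.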
This idea can then be easily extended to the case of two competing inputs to obtain competing control policies. Finally, even though (7) need not be a convex problem, in this work we will only deal with simple dynamical systems, making the optimization problem easy to solve.
The problem presented in Section 2 (as in many other cases with PDEs) is in general not straightforward to solve. For this reason, trying to find a good approximation instead of the actual solution can be a reasonable approach. Many current state-of-the-art tools used to approximate solutions of PDEs, including (2), use gridding techniques (Mitchell, 2007) whereby finite differences are used to iteratively update values on a grid. Another approach (Lagaris et al., 1998) is to train a feedforward neural network by minimizing the following loss:
$$L_\theta := \sum_{i=1}^{N} G\big(x_i, \Psi_\theta(x_i), \nabla\Psi_\theta(x_i), \nabla^2\Psi_\theta(x_i)\big)^2 \qquad (8)$$
where $G(x, \Psi(x), \nabla\Psi(x), \nabla^2\Psi(x)) = 0$ is the PDE whose solution $\Psi(x)$ we are trying to approximate and $x_i$ are points taken from the discretization of our domain. In (8), the function $\Psi_\theta(x) := A(x) + F(x, N_\theta(x))$ is a candidate approximation which by construction satisfies the boundary condition, where $N_\theta(x)$ is a feedforward neural network. In order to ensure that the conditions at the boundary are satisfied, $F(x, N_\theta(x)) = 0$ at the boundary, and A(x) is a fixed function which satisfies them.
Figure 7: The first column shows the first side view, perpendicular with respect to the x-z plane. The second column shows the second side view, perpendicular with respect to the y-z plane. Finally, the third column shows the top view, which is perpendicular with respect to the x-y plane.
For this experiment, only 111 numbers were needed to store the approximation, as opposed to 51 x 51 x 50 x 10 = 1,300,500 numbers (i.e., 51 in $x_r$, 51 in $y_r$, 50 in $\theta_r$ and 10 in t) for a 51 x 51 x 50 x 10 grid approximation.
Following this procedure, the loss for the HJI PDE would be written as
$$L_\theta := \sum_{i=1}^{N}\left( \frac{\partial V_\theta(x_i,t_i)}{\partial t} + \min\{0,\ H(x_i, \nabla_x V_\theta)\} \right)^2 \qquad (9)$$
In this work, we try to tackle the problem of finding an approximate solution to (2) from a different perspective. We show that a poor approximation to our solution is enough to generate "good enough" new data for regression, which can in turn be used to improve our model.
In this section we present a simple method for approximating the solution to (2) by utilizing a feedforward neural network in two ways: as a function approximator and a data generator. We believe that this parametric approach is better suited for finding good approximations by avoiding some of the limitations found in gridding/tabular techniques due to the curse of dimensionality. To that end, we start by defining our candidate approximation $V_\theta(x,t)$ to be of the same form as in (Lagaris et al., 1998); that is, a sum of two terms which help satisfy our boundary condition V(x, 0):
$$V_\theta(x,t) = V(x,0) + t\,N_\theta(x,t) \qquad (10)$$
where $N_\theta(x,t)$ is a neural network mapping from our states and time variables to the real numbers. Next, we sample N points in the state variable x chosen uniformly at random over some set S which includes T (the target set), and similarly, sample N points in the time variable t uniformly at random over the set [-T, 0], where T > 0 is the desired time horizon. By sampling from these distributions, we seek to find a good approximation to V(x, t) over the set S x [-T, 0]. Once these points have been gathered, we make use of the update (4), (5) or (6) (depending on our problem) and use $V_\theta(x,t)$, the approximation itself, to generate the new regression points. The complete algorithm 4.1 is shown using update equation (6), but it should be clear how to modify it for the other cases.
Algorithm 1 Recursive Regression via SGD with Momentum"}, {"section_index": "4", "section_name": "4.2 COMMENTS", "section_text": "Algorithm 4.1 is a type of bootstrapping method in that lines 12 and 13 make use of $V_\theta(x,t)$ to generate points for regression to train $N_\theta(x,t)$, which in turn modifies $V_\theta(x,t)$ itself. At first glance, it is unclear whether the generated pairs $((x_j, t_j), y_j)$ will result in a good approximation to the solution of our PDE after regression; however, given the form of our candidate function (10), we expect that points sampled near t = 0 will in fact be reasonable approximations of V(x, t) for small t. Given this assumption, we hypothesize that despite the presence of misleading data, our network will be able to do a good job at regressing over all points, thus improving our initial model and allowing the generation of improved data. By repeating this procedure, we expect the accuracy of the boundary condition to "propagate" backward in time (possibly with some minor error) in the form of better and better points for regression.
Another important aspect of line 13 is that we are simulating our dynamics forward in time using the Euler approximation step $x_j + f(x_j, a^*, b^*)\Delta t$. In practice, depending on the variability and complexity of the dynamics, one might use a Runge-Kutta method or a more involved integration procedure. For the experiments in the next sections a Runge-Kutta method with 4 stages (RK4) was used."}, {"section_index": "5", "section_name": "5 EXPERIMENTS", "section_text": "In this section we present a few 2-dimensional experiments to demonstrate the validity of our claim and the effectiveness of the algorithm. To measure the performance of the algorithm, we compare the difference between our computed approximation and the true analytical solution. In case it is not straightforward to obtain the solution, a very accurate approximation taken from state-of-the-art tools is used instead. In particular, we make use of the LevelSet Toolbox from Mitchell (2007), a powerful computational tool for obtaining good approximations to Hamilton-Jacobi (HJ) PDEs.
The first error metric to be used will be
$$E_1(V_\theta(x,t)) := \frac{1}{M}\sum_{i=1}^{M} \big|\,V(x_i,t_i) - V_\theta(x_i,t_i)\,\big|$$
where M is the number of points chosen from our domain to compute the average absolute error, and V(x, t) can denote either the true solution or an accurate approximation. In the case where the analytical solution is known, the points are taken uniformly at random over S; otherwise, they are taken over some grid in S and [-T, 0]. Lastly, we also use a second error metric,
$$E_2(V_\theta(x,t)) := \frac{1}{M}\sum_{i=1}^{M} \left|\, \frac{\partial V_\theta(x_i,t_i)}{\partial t} + \min\{0,\ H(x_i, \nabla_x V_\theta)\} \,\right|$$
similar to the one defined in (9), which denotes the extent by which (on average) the approximation is violating the PDE equality. For all experiments M = 3000, all chosen uniformly at random over S x [-T, 0]. In Section 5.4 we also show a visual representation of the approximations."}, {"section_index": "6", "section_name": "5.1 A LINEAR SYSTEM", "section_text": "In this experiment we study the performance of the algorithm on an autonomous system of the form
$$\dot{x} = f(x) = \begin{bmatrix} -1 & -2 \\ 2 & -1 \end{bmatrix} x$$
with $V(x, 0) = \|x\|_2 - 1$ and T = 1.0. For this simple system, the solution to the HJI PDE can be found analytically to be $V(x, t) = e^{t}\|x\|_2 - 1$. One can easily verify this by checking that it satisfies the boundary condition and (2).
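That verification is quick to script. The check below is our addition, written under the t <= 0 convention used throughout and under the reconstructed dynamics matrix above; it evaluates the analytic gradient and time derivative in closed form and confirms that the PDE residual vanishes.

```python
import numpy as np

# Numerical check that V(x,t) = e^t * ||x||_2 - 1 satisfies (2) for the
# linear system above. At t = 0 it reduces to ||x||_2 - 1, matching the
# boundary condition; since H = grad_V . f(x) is negative for this system,
# (2) reduces to dV/dt = -H.
A = np.array([[-1.0, -2.0], [2.0, -1.0]])

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-5.0, 5.0, size=2)
    t = rng.uniform(-1.0, 0.0)
    norm_x = np.linalg.norm(x)
    dV_dt = np.exp(t) * norm_x                 # analytic time derivative
    grad_V = np.exp(t) * x / norm_x            # analytic spatial gradient
    H = grad_V @ (A @ x)                       # Hamiltonian (no inputs)
    assert H <= 1e-9                           # so min{0, H} = H
    assert abs(dV_dt + min(0.0, H)) < 1e-9     # residual of PDE (2) is ~0

print("V(x,t) = e^t ||x||_2 - 1 satisfies the boundary condition and (2).")
```

The same residual computation, with the analytic derivatives replaced by automatic differentiation of the network, is exactly what the E2 metric above averages over sampled points.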
For this experiment, a feedforward neural network with a single hidden layer of 10 units and sigmoid activation functions was used. The number of points sampled was chosen to be N = 500, uniformly picked over the set $S := \{(x_1, x_2) \mid x_1, x_2 \in [-5, 5]\}$ and over $t \in [-T, 0]$. The batches were picked to be of size K = 10, momentum decay $\gamma = 0.95$ and learning rate $\eta = 0.1$. The interval to renew the regression points was chosen to be 1000 iterations and the program was halted at 500,000 iterations.
Figure 1: From left to right: the first figure shows the mean absolute error E1, the second figure shows the mean absolute PDE error E2 and the third figure shows the loss $L_\theta$ as defined in Algorithm 4.1 over all the data. The horizontal axis represents the iteration number.
The results shown in Fig. 1 were taken over 10 runs of the algorithm concurrently executed over multiple threads. The overall time to run the 500,000 iterations for all threads was 1521 seconds. The average E1 error at halting time was on the order of 7 x 10^-2, whereas the E2 error was on the order of 3 x 10^-1. The sharp jumps appearing in the loss figure in the majority of cases correspond to the error after new points are generated and used for regression."}, {"section_index": "7", "section_name": "5.2 PURSUIT-EVASION GAME: SINGLE INPUT", "section_text": "In this experiment we explore a pursuit-evasion game where a pursuer has to intercept an evader. In a first simplified approach, we assume the evader has a fixed heading and speed, whereas the pursuer has the same speed as the evader but has the liberty to change the direction of its heading. Fixing the evader at the origin with its heading aligned with the x-axis, we frame the problem in relative coordinates between the evader and pursuer, that is $x = [x_r\ y_r]^T$, where $x_r$ and $y_r$ represent the x and y position of the pursuer relative to the evader. This system's dynamics are readily encoded in the following equation:
$$f(x,b) = \begin{bmatrix} \dot{x}_r \\ \dot{y}_r \end{bmatrix} = \begin{bmatrix} v_p\cos(b) - v_e \\ v_p\sin(b) \end{bmatrix}$$
where $v_p = v_e = 2.0$ represent the speed of the pursuer and evader respectively, and $b \in [0, 2\pi]$ represents the input available to the pursuer, which is the angle with respect to the x-axis. In this simplified pursuit-evasion game we say the pursuer has captured the evader if they are within 1 unit of distance from each other. Thus, we define our capture condition by setting $V(x, 0) = \|x\|_2 - 1$, which will ensure that our approximation captures all the states from which the pursuer can capture the evader within T = 1.0. As in the previous example, we choose the same network architecture and the same values for the halting time, renewal interval, N, K, $\gamma$ and $\eta$.
Figure 2: From left to right: the first figure shows the mean absolute error E1, the second figure shows the mean absolute PDE error E2 and the third figure shows the loss $L_\theta$ as defined in Algorithm 4.1 over all the data. The horizontal axis denotes the iteration number.
The results shown in Fig. 2 were also taken over 10 runs of the algorithm, as before. The overall time to run the 500,000 iterations was 1952 seconds. The average E1 error at halting time was also on the order of 7 x 10^-2, whereas the E2 error was on the order of 1.5 x 10^-1. The points used to compute E1 were taken from a 51 x 51 grid at t = -0.5 (half of the time horizon), using a previously computed approximation from the LevelSet Toolbox. The reason why a single time instance was used to compute E1 was purely to reduce the amount of computation of the error at run-time."}, {"section_index": "8", "section_name": "5.3 PURSUIT-EVASION GAME: TWO INPUTS", "section_text": "The last experimental example also consists of a pursuit-evasion game, but in this case the evader has access to a range of speeds through an input $a \in [-2, 2]$. The system dynamics thus become
$$f(x,a,b) = \begin{bmatrix} \dot{x}_r \\ \dot{y}_r \end{bmatrix} = \begin{bmatrix} v_p\cos(b) - a \\ v_p\sin(b) \end{bmatrix}$$
and, similarly, $V(x, 0) = \|x\|_2 - 1$ and T = 1.0. As before, $v_p = 2.0$. The interesting behavior we expect to see from this experiment, in comparison to the single input counterpart, is that this new available action to the evader will make it more difficult for the pursuer to intercept. This should then be evident by looking at our approximation $V_\theta$ and its zero sub-level sets at different times. For this experiment we also chose the same architecture for the network as in the previous experiments and the same parameters, except for the halting time, which was 300,000 iterations.
Figure 3: From left to right: the first figure shows the mean absolute error E1, the second figure shows the mean absolute PDE error E2 and the third figure shows the loss $L_\theta$ as defined in Algorithm 4.1 over all the data.
The results shown in Fig. 3 were also taken over 10 runs of the algorithm. The overall time to run the 300,000 iterations over all threads was 1028 seconds. The average E1 error at halting time was on the order of 6 x 10^-2, whereas the E2 error was on the order of 1.5 x 10^-1. Like in the single input case, the points used to compute E1 were taken from a 51 x 51 grid at t = -0.5 of a pre-computed approximation."}, {"section_index": "9", "section_name": "5.4 CONTOUR VISUALIZATION", "section_text": "In this section we briefly display some of the contours for a neural network picked at random from those computed in the experimental section. Each line corresponds to the set of states where $V_\theta(x, t) = 0$ for $t = 0, -0.25, -0.5, -0.75, -1.0$. These contours enclose within them the states from which our system can reach the target set T within the absolute value of its associated time.
Figure 4: From left to right: contours for experiment one, experiment two and experiment three. As one can appreciate, the contours grow according to the specified dynamical model.
As expected, the linear system's contours expand radially in all directions since the origin is a stable equilibrium point (with the same negative real part for the eigenvalues) where all trajectories converge. For the pursuit-evasion game of one input, we also see that the contours grow toward the right, which is a sensible outcome given that the pursuer can't catch up with the evader if it starts somewhere where $x_r < -1.0$. Finally, the last set of contours, associated with the pursuer-evader game of two competing inputs, also makes sense: since starting states $x_r < -1.0$ or $x_r > 1.0$ should not permit the pursuer to intercept the evader, the contours should not expand in those directions. As a last comparison, in Fig. 5 we display the actual contours that would be obtained using the LevelSet Toolbox.
Figure 5: Contours obtained from the LevelSet Toolbox in Matlab.
By comparing Fig. 5 and Fig. 4 one can qualitatively see that the neural network has learned an accurate approximation of V(x, t).
The first advantage of using this method over gridding techniques is a dramatic improvement in memory requirements. For instance, using a standard grid with [51, 51, 10] discretization points per axis (i.e., 51 in $x_r$, 51 in $y_r$ and 10 in t), each of the three previous experiments requires the storage of 26,010 numbers, as opposed to 51 weights for our neural network. For the gridding approach this memory requirement must increase exponentially with the number of dimensions, whereas this need not be the case for our method. Furthermore, points that do not fall exactly on the grid have to be interpolated, whereas the neural network is an approximation that assigns values to all points in the domain. To this we can also add the fact that the neural network can yield the gradient at any point directly via backpropagation, whereas the gradient must once again be approximated for gridding techniques.
The main disadvantage of this method, for small dimensional systems in particular, is the time requirement. Computing values over a grid with the LevelSet Toolbox for the previous systems took less than 10 seconds. This advantage of gridding/tabular procedures, however, quickly disappears in higher dimensions (4D, 5D, ...) due to the curse of dimensionality. Finally, another disadvantage of using this method is the necessity to tune hyperparameters.
In this work we focus our attention on the idea that recursive/bootstrapped regression can be used in some problems where the function we wish to approximate has some known characteristics. In particular, we show that accurate approximations to the HJI PDE solution can be found by assigning a neural network two roles, one of them being function approximation, and the other data generation. To validate our hypothesis, three different experiments with three distinct dynamical systems were performed with satisfactory results.
In this work we did not focus on the architecture of the neural network, but rather on its ability to perform well on three distinct tasks using the same algorithm. In future work we will try to find whether one can construct wider or deeper neural networks and obtain better results. We also want to investigate how well this method scales with the number of state and input dimensions. Positive results on that front could suppose an important step to further alleviate the effects of the curse of dimensionality, which are pervasive in gridding methods."}]
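As a back-of-the-envelope check of the memory comparison made in this paper, the short computation below (our own sketch) reproduces the quoted parameter counts of the networks against the corresponding grid sizes; the layer sizes follow the architectures described in the experiments.

```python
# Parameter count of a fully-connected network (weights + biases per layer)
# versus the number of values a dense grid approximation must store.
def mlp_params(layer_sizes):
    """e.g. [3, 10, 1] = inputs (x1, x2, t), 10 hidden units, scalar output."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# 2D experiments: 3 inputs, one hidden layer of 10 units.
print(mlp_params([3, 10, 1]))     # 51 numbers, vs 51*51*10 = 26,010 grid values

# 3D experiment: 4 inputs (xr, yr, theta_r, t), hidden layers of 10 and 5.
print(mlp_params([4, 10, 5, 1]))  # 111 numbers, vs 51*51*50*10 = 1,300,500
```

Both counts match the figures quoted in the text (51 weights versus 26,010 grid values, and 111 versus 1,300,500), which is where the claimed memory advantage comes from.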
S13wCE9xx
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. Multimodal distributional semantics. J. Artif Intell. Res.(JAIR), 49(1-47), 2014\nLev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. Placing search in context: The concept revisited. In Www, pp. 406-414, 2001.\nOthmar Koch and Christian Lubich. Dynamical low-rank approximation. SIAM J. Matrix Ana Appl., 29(2):434 454, 2007.\nDaniel Kressner, Michael Steinlechner, and Bart Vandereycken. Low-rank tensor completion by riemannian optimization. B1T Numerical Mathematics, 54(2):447-468, 2014\nSiwei Lai, Kang Liu, Shi He, and Jun Zhao. How to generate a good word embedding? arXiv preprint arXiv:1507.05523, 2015."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representa tions of words and phrases and their compositionality. In NIPS, pp. 3111-3119, 2013.\nStep 1. Search for a low-rank matrix X that provides a ood SGNS objective value:\nXin Rong. word2vec parameter learning explained. arXiy p reprint arXiv:1411.2738, 2014.\nMingkui Tan, Ivor W Tsang, Li Wang, Bart Vandereycken, and Sinno Jialin Pan. Riemannian pursui for big matrix recovery. In ICML, volume 32, pp. 1539-1547, 2014.\nAlexander Fonarev123, Alexey Grinchuk12, Gleb Gusev2, Pavel Serdyukov2, Ivan Oseledets14. 1Skolkovo Institute of Science and Technology, Moscow, Russia. 2Yandex LLC, Moscow, Russia 3sBDA Group, Dublin, Ireland 4Institute of Numerical Mathematics, Russian Academy of Sciences, Moscow, Russia. newo@newo.su, oleksii.hrinchuk@skolkovotech.ru, gleb57@yandex-team.r\nnewo@newo.su, oleksii.hrinchuk@skolkovotech.ru, gleb57@yandex-team.ru pavser@yandex-team.ru, ioseledets@skoltech.ru"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Skip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in \"word2vec\"' software, is usually optimized by stochastic gra- dient descent. It can be shown that optimizing for SGNS objective can be viewed as an optimization problem of searching for a good matrix with the low-rank con- straint. The most standard way to solve this type of problems is to apply Rieman- nian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes. SGNS objective using Riemannian optimization and demonstrates its superiority. over popular competitors, such as the original method to train SGNS and SVD. over SPPMI matrix.\nIn this paper, we consider the problem of embedding words into a low-dimensional space in order. to measure the semantic similarity between them. As an example, how to find whether the word \"table\" is semantically more similar to the word \"stool\"' than to the word \"sky'? That is achieved. by constructing a low-dimensional vector representation for each word and measuring similarity. between the words as the similarity between the corresponding vectors..\nOne of the most popular word embedding models by Mikolov et al. (2013) is a discriminative neural network that optimizes Skip-Gram Negative Sampling (SGNS) objective (see Equation 3). It aims at predicting whether two words can be found close to each other within a text. As shown in Section 2. 
the process of word embedding training using SGNS can be divided into two general steps with clear objectives:

Step 1. Search for a low-rank matrix X that provides a good SGNS objective value;
Step 2. Compute embeddings W and C from the obtained low-rank solution X = WC^T that perform well in terms of linguistic metrics.

Unfortunately, most previous approaches mixed these two steps into a single one, which entails a not completely correct formulation of the optimization problem. For example, popular approaches to train embeddings (including the original "word2vec" implementation) do not take into account that the objective from Step 1 depends only on the product X = WC^T: instead of straightforwardly computing the derivative w.r.t. X, these methods are explicitly based on the derivatives w.r.t. W and C, which complicates the optimization procedure. Moreover, such approaches do not take into account that the parametrization WC^T of the matrix X is non-unique and Step 2 is required. Indeed, for any invertible matrix S, we have X = W_1 C_1^T = W_1 S S^{-1} C_1^T = W_2 C_2^T; therefore, the solutions W_1 C_1^T and W_2 C_2^T are equally good in terms of the SGNS objective but entail different cosine similarities between embeddings and, as a result, different performance in terms of linguistic metrics (see Section 4.2 for details).

A successful attempt to follow the above described steps, which outperforms the original SGNS optimization approach in terms of various linguistic tasks, was proposed by Levy & Goldberg (2014). In order to obtain a low-rank matrix X on Step 1, the method reduces the dimensionality of the Shifted Positive Pointwise Mutual Information (SPPMI) matrix via Singular Value Decomposition (SVD). On Step 2, it computes embeddings W and C via a simple formula that depends on the factors obtained by SVD. However, this method has one important limitation: SVD provides a solution to a surrogate optimization problem, which has no direct relation to the SGNS objective. In fact, SVD minimizes the Mean Squared Error (MSE) between X and the SPPMI matrix, which does not lead to minimization of the SGNS objective in general (see Section 6.1 and Section 4.2 in Levy & Goldberg (2014) for details).

These issues bring us to the main idea of our paper: while keeping the low-rank matrix search setup of Step 1, optimize the original SGNS objective directly. This leads to an optimization problem over the matrix X with a low-rank constraint, which is often (Mishra et al. (2014)) solved by applying the Riemannian optimization framework (Udriste (1994)). In our paper, we use the projector-splitting algorithm (Lubich & Oseledets (2014)), which is easy to implement and has low computational complexity. Of course, Step 2 may be improved as well, but we regard this as a direction of future work.

To summarize, the main contributions of our paper are:

- We reformulated the problem of SGNS word embedding learning as a two-step procedure with clear objectives;
- For Step 1, we developed an algorithm based on the Riemannian optimization framework that optimizes the SGNS objective over the low-rank matrix X directly;
- Our algorithm outperforms state-of-the-art competitors in terms of the SGNS objective and the semantic similarity linguistic metric (Levy & Goldberg (2014); Mikolov et al. (2013); Schnabel et al. (2015)).

As a result, our approach achieves a significant improvement in terms of SGNS optimization on Step 1 and, moreover, the improvement on Step 1 entails an improvement on Step 2 in terms of linguistic metrics. That is why the proposed two-step decomposition of the problem makes sense and, most importantly, opens the way to applying even more advanced approaches based on it (e.g., more advanced Riemannian optimization techniques for Step 1 or a more sophisticated treatment of Step 2).

In this paper, we consider the Skip-Gram Negative Sampling (SGNS) word embedding model (Mikolov et al. (2013)), which is a probabilistic discriminative model. Assume we have a text corpus given as a sequence of words w_1, ..., w_n, where n may be larger than 10^12 and w_i ∈ V_W belongs to a vocabulary of words V_W. A context c ∈ V_C of the word w_i is a word from the set {w_{i-L}, ..., w_{i-1}, w_{i+1}, ..., w_{i+L}} for some fixed window size L. Let w, c ∈ R^d be the word embeddings of the word w and the context c, respectively. Assume they are specified by the following mappings:

W : V_W → R^d, 
C : Vc ->Rd\nThe ultimate goal of SGNS word embedding training is to fit good mappings W and C\nIn the SGNS model, the probability that pair (w, c) is observed in the corpus is modeled as a follow ing function:\n1\nIn order to collect a training set, we take all pairs (w, c) from D as positive examples and k randomly generated pairs (w, c) as negative ones. Let #(w, c) be the number of times the pair (w, c) appears.\nAs a result, our approach achieves the significant improvement in terms of SGNS optimization on Step 1 and, moreover, the improvement on Step 1 entails the improvement on Step 2 in terms of linguistic metrics. That is why, the proposed two-step decomposition of the problem makes sense, what, most importantly, opens the way to applying even more advanced approaches based on it (e.g., more advanced Riemannian optimization techniques for Step 1 or a more sophisticated treatment of Step 2).\nWe reformulated the problem of SGNS word embedding learning as a two-step procedure with clear objectives; For Step 1, we developed an algorithm based on Riemannian optimization framework that optimizes SGNS objective over low-rank matrix X directly; Our algorithm outperforms state-of-the-art competitors in terms of SGNS objective and the semantic similarity linguistic metric (Levy & Goldberg (2014); Mikolov et al. (2013); Schnabel et al. (2015)).\nwhere D is the multiset of all word-context pairs (w, c) observed in the corpus and (x, y) is the scalar product of vectors x and y. Number d is a hyperparameter that adjusts the flexibility of the model. It usually takes values from tens to hundreds\n(w,c)(logo((w,c)) + k: Ec'~Pp logo(-(w,c)))\nl =) #(w,c)(logo((w,c)) + k: Ec'~Pp logo(-(w,c))) -> max W, wEVw cEVc\nUsually, this optimization is done via the stochastic gradient descent procedure that is performed during passing through the corpus (Mikolov et al. (2013); Rong (2014))."}, {"section_index": "3", "section_name": "2.2 OPTIMIZATION OVER LOW-RANK MATRICES", "section_text": "Relying on the prospect proposed by Levy & Goldberg (2014), let us show that the optimization problem given by (3) can be considered as a problem of searching for a matrix that maximizes a certain objective function and has the rank-d constraint (Step 1 in the scheme described in Section 1)"}, {"section_index": "4", "section_name": "2.2.1 SGNS LOSS FUNCTION", "section_text": "As shown by Levy & Goldberg (2014), the logarithmic likelihood (3) can be represented as the sun of lw,c(w, c) over all pairs (w, c), where lw,c(w, c) has the following form:\n#(w)#(c) lw,c(w,c) =#(w,c) logo((w,c)) + k log(-(w,c)) D"}, {"section_index": "5", "section_name": "2.2.2 MATRIX NOTATION", "section_text": "X =(xw,c), w E Vw,c E Vc\nF(X) = LL wEVw cEVc\nProposition1 SGNS optimization problem given by (3) can be rewritten in the following con\nmaximize F(X) XERn X m subject to X E Md\nMa ={X E Rnxm:rank(X) = d}\nA crucial observation is that this loss function depends only on the scalar product (w, c) but not on embeddings w and c separately:\nDenote |Vw| as n and |Vc] as m. Let W E Rnxd and C E Rmxd be matrices, where each row w E Rd of matrix W is the word embedding of the corresponding word w and each row c E Rd of matrix C is the context embedding of the corresponding context c. 
Then the elements of the product of these matrices,

$$X = WC^T,$$

are the scalar products x_{w,c} = ⟨w, c⟩ of all pairs (w, c):

$$X = (x_{w,c}), \quad w \in V_W, \; c \in V_C.$$

Denote by F(X) the SGNS objective written as a function of X:

$$F(X) = \sum_{w \in V_W} \sum_{c \in V_C} l_{w,c}(x_{w,c}). \qquad (5)$$

Proposition 1. The SGNS optimization problem given by (3) can be rewritten in the following constrained form:

$$\underset{X \in \mathbb{R}^{n \times m}}{\text{maximize}} \; F(X), \quad \text{subject to} \; X \in M_d, \qquad (6)$$

where

$$M_d = \{X \in \mathbb{R}^{n \times m} : \mathrm{rank}(X) = d\}.$$

The key idea of this paper is to solve the optimization problem given by (6) via the framework of Riemannian optimization, which we introduce in Section 3.

It is important to note that this prospect does not suppose the optimization over the parameters W and C directly. This entails the optimization in a space with ((n + m - d) · d) degrees of freedom (Mukherjee et al. (2015)) instead of ((n + m) · d), which simplifies the optimization process (see Section 5 for the experimental results)."}, {"section_index": "6", "section_name": "2.3 COMPUTING EMBEDDINGS FROM A LOW-RANK SOLUTION", "section_text": "Once X is found, we need to recover W and C such that X = WC^T (Step 2 in the scheme described in Section 1). This problem does not have a unique solution, since if (W, C) satisfy this equation, then WS^{-1} and CS^T satisfy it as well for any non-singular matrix S. Moreover, different solutions may achieve different values of the linguistic metrics (see Section 4.2 for details). While our paper focuses on Step 1, we use, for Step 2, a heuristic approach that was proposed by Levy et al. (2015) and shows good results in practice. We compute the SVD of X in the form X = UΣV^T, where U and V have orthonormal columns and Σ is the diagonal matrix, and use

$$W = U\sqrt{\Sigma}, \quad C = V\sqrt{\Sigma}$$

as matrices of embeddings.

A simple justification of this solution is the following: we need to map words into vectors in a way that similar words have similar embeddings in terms of cosine similarity:

$$\cos(w_1, w_2) = \frac{\langle w_1, w_2 \rangle}{\|w_1\| \cdot \|w_2\|}.$$

It is reasonable to assume that two words are similar if they share contexts. Therefore, we can estimate the similarity of two words w_1, w_2 as s(w_1, w_2) = Σ_{c ∈ V_C} x_{w_1,c} · x_{w_2,c}, which is the element of the matrix XX^T with indices (w_1, w_2). Note that XX^T = UΣV^T VΣU^T = UΣ^2 U^T. If we choose W = UΣ, we exactly obtain ⟨w_1, w_2⟩ = s(w_1, w_2), since WW^T = XX^T in this case. That is, the cosine similarity of the embeddings w_1, w_2 coincides with the intuitive similarity s(w_1, w_2). However, scaling by √Σ instead of Σ was shown by Levy et al. (2015) to be a better solution in experiments.

The main idea of Riemannian optimization (Udriste (1994)) is to consider (6) as a constrained optimization problem. Assume we have an approximated solution X_i at a current step of the optimization process, where i is the step number. In order to improve X_i, the next step of standard gradient ascent outputs the point X_i + ∇F(X_i), where ∇F(X_i) is the gradient of the objective F at the point X_i. Note that the gradient ∇F(X_i) can be naturally considered as a matrix in R^{n×m}. The point X_i + ∇F(X_i) leaves the manifold M_d, because its rank is generally greater than d. That is why Riemannian optimization methods map the point X_i + ∇F(X_i) back onto the manifold. The standard Riemannian gradient method first projects the gradient step onto the tangent space at the current point X_i and then retracts it back to the manifold:

$$X_{i+1} = R\left(P_{T_{X_i} M_d}\left(X_i + \nabla F(X_i)\right)\right),$$

where R is the retraction operator and P_{T_{X_i} M_d} is the projection onto the tangent space at X_i.

In our paper, we use a much simpler version of such an approach that retracts the point X_i + ∇F(X_i) directly to the manifold, as illustrated in Figure 1: X_{i+1} = R(X_i + ∇F(X_i)).

Intuitively, the retraction R finds a rank-d matrix on the manifold M_d that is similar to the high-rank matrix X_i + ∇F(X_i) in terms of the Frobenius norm. How can we do it? 
The most straightforward way to reduce the rank of X_i + ∇F(X_i) is to perform the SVD, which keeps the d largest singular values:

$$U_{i+1}, S_{i+1}, V_{i+1}^T \leftarrow \mathrm{SVD}_d\left(X_i + \nabla F(X_i)\right).$$

However, it is computationally expensive. Instead of this approach, we use the projector-splitting method (Lubich & Oseledets (2014)), which is a second-order retraction onto the manifold (for details, see the review by Absil & Oseledets (2015)). Its practical implementation is also quite intuitive: instead of computing the full SVD of X_i + ∇F(X_i) according to the gradient projection method, we use just one step of the block power numerical method (Bentbib & Kanber (2015)), which approximates the SVD and thus reduces the computational complexity:

$$U_{i+1}, S_{i+1} \leftarrow \mathrm{QR}\left((X_i + \nabla F(X_i)) V_i\right),$$
$$V_{i+1}, S_{i+1}^T \leftarrow \mathrm{QR}\left((X_i + \nabla F(X_i))^T U_{i+1}\right),$$
$$X_{i+1} \leftarrow U_{i+1} S_{i+1} V_{i+1}^T.$$

In this way, we always keep the solution X_{i+1} = U_{i+1} S_{i+1} V_{i+1}^T on the manifold M_d and in the form (8).

What is important, we only need to compute ∇F(X_i), so the gradients with respect to U, S and V are never computed explicitly, thus avoiding the subtle case where S is close to singular (a so-called singular (critical) point on the manifold). Indeed, the gradient with respect to U (while keeping the orthogonality constraints) can be written (Koch & Lubich (2007)) as:

$$\frac{\partial F}{\partial U} = \frac{\partial F}{\partial X} V S^{-1},$$

which means that the gradient will be large if S is close to singular. The projector-splitting scheme is free from this problem.

In the case of the SGNS objective given by (5), an element of the gradient ∇F has the form:

$$\left(\nabla F(X)\right)_{w,c} = \frac{\partial l_{w,c}(x_{w,c})}{\partial x_{w,c}} = \#(w, c) \cdot \sigma(-x_{w,c}) - k \frac{\#(w)\#(c)}{|D|} \cdot \sigma(x_{w,c}).$$

The whole optimization procedure is summarized in Algorithm 1.

Figure 1: Geometric interpretation of one step of the projector-splitting optimization procedure: the gradient step and the retraction of the high-rank matrix X_i + ∇F(X_i) to the manifold of low-rank matrices M_d.

Algorithm 1
Require: Dimensionality d, initialization W_0 and C_0, step size λ, gradient function ∇F : R^{n×m} → R^{n×m}, number of iterations K
Ensure: Factor W ∈ R^{n×d}
1: X_0 ← W_0 C_0^T  # get an initial point at the manifold
2: U_0, S_0, V_0^T ← SVD(X_0)  # compute the first point satisfying the low-rank constraint
3: i ← 0
4: while i < K do
5:   U_{i+1}, S_{i+1} ← QR((X_i + λ∇F(X_i)) V_i)  # perform one step of the block power method with two QR-decompositions
6:   V_{i+1}, S_{i+1}^T ← QR((X_i + λ∇F(X_i))^T U_{i+1})
7:   X_{i+1} ← U_{i+1} S_{i+1} V_{i+1}^T  # update the point at the manifold
8:   i ← i + 1
9: end while
10: U, Σ, V^T ← SVD(X_K)
11: W ← U√Σ  # compute word embeddings
12: return W"}, {"section_index": "8", "section_name": "4.1 TRAINING MODELS", "section_text": "We compare our method ("RO-SGNS" in the tables) to two baselines: SGNS embeddings optimized via stochastic gradient descent, implemented in the original "word2vec" ("SGD-SGNS" in the tables), by Mikolov et al. (2013), and embeddings obtained by SVD over the SPPMI matrix ("SVD-SPPMI" in the tables), by Levy & Goldberg (2014). We have also experimented with blockwise alternating optimization over the factors W and C, but the results are almost the same as the SGD results, which is why we do not include them into the paper. The source code of our experiments is available online^1.

The models were trained on the English Wikipedia "enwik9" corpus^2, which was previously used in most papers on this topic. Like in previous studies, we counted only the words which occur more than 200 times in the training corpus (Levy & Goldberg (2014); Mikolov et al. (2013)). As a result, we obtained a vocabulary of 24292 unique tokens (the set of words V_W and the set of contexts V_C are equal). 
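To make Algorithm 1 concrete, here is a minimal NumPy sketch of its inner update: our own illustration under simplifying assumptions (dense matrices, a placeholder zero gradient), not the authors' released code. A real run would plug in the SGNS gradient of Eq. (5) for `grad_F`:

```python
import numpy as np

def projector_splitting_step(X, V, grad_F, lr):
    """One Riemannian step: retract X + lr * grad_F(X) onto the rank-d manifold."""
    G = X + lr * grad_F(X)                  # gradient step in the ambient space
    U_new, _ = np.linalg.qr(G @ V)          # Algorithm 1, line 5
    V_new, S_t = np.linalg.qr(G.T @ U_new)  # line 6: S_t equals S_{i+1}^T
    X_new = U_new @ S_t.T @ V_new.T         # line 7: the new point stays rank-d
    return X_new, V_new

# Usage sketch (toy sizes; grad_F is a placeholder):
n, m, d, lr, K = 500, 500, 100, 5e-5, 25
W0, C0 = np.random.randn(n, d), np.random.randn(m, d)
X = W0 @ C0.T                                    # initial point on the manifold
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt[:d, :].T                                  # orthonormal right factor V_0
grad_F = lambda Z: np.zeros_like(Z)              # placeholder for the gradient of Eq. (5)
for _ in range(K):
    X, V = projector_splitting_step(X, V, grad_F, lr)
```

The two reduced QR decompositions cost O((n + m)d^2), which is what makes this retraction cheaper than recomputing a full SVD at every step.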
The size of the context window was set to 5 for all experiments, as it was done by Levy & Goldberg (2014); Mikolov et al. (2013). We conduct two series of experiments: for dimensionality d = 100 and d = 200.

The optimization step size λ is chosen to be small enough to avoid huge gradient values. However, a thorough choice of λ does not result in a significant difference in performance (this parameter was tuned on the training data only; the exact values used in the experiments are reported below)."}, {"section_index": "9", "section_name": "4.2 EVALUATION", "section_text": "We evaluate word embeddings via the word similarity task. We use the following popular datasets for this purpose: "wordsim-353" (Finkelstein et al. (2001); 3 datasets), "simlex-999" (Hill et al. (2016)) and "men" (Bruni et al. (2014)). The original "wordsim-353" dataset is a mixture of word pairs for both the word similarity and the word relatedness tasks. This dataset was split (Agirre et al. (2009)) into two intersecting parts, "wordsim-sim" ("ws-sim" in the tables) and "wordsim-rel" ("ws-rel" in the tables), to separate the words from the different tasks. In our experiments, we use both of them on a par with the full version of "wordsim-353" ("ws-full" in the tables). Each dataset contains word pairs together with assessor-assigned similarity scores for each pair. As a quality measure, we use Spearman's correlation between these human ratings and the cosine similarities for each pair. We call this quality metric linguistic in our paper.

Table 1: Comparison of the SGNS objective values obtained by the models. The larger is better.

Algorithm | d = 100 | d = 200
SGD-SGNS | -1.68 · 10^9 | -1.67 · 10^9
SVD-SPPMI | -1.65 · 10^9 | -1.65 · 10^9
RO-SGNS | -1.44 · 10^9 | -1.43 · 10^9

Table 2: Comparison of the methods in terms of the semantic similarity task. Each entry represents the Spearman's correlation between the predicted similarities and the manually assessed ones.

Dim. d | Algorithm | ws-sim | ws-rel | ws-full | simlex | men
d = 100 | SGD-SGNS | 0.719 | 0.570 | 0.662 | 0.288 | 0.645
d = 100 | SVD-SPPMI | 0.722 | 0.585 | 0.669 | 0.317 | 0.686
d = 100 | RO-SGNS | 0.729 | 0.597 | 0.677 | 0.322 | 0.683
d = 200 | SGD-SGNS | 0.733 | 0.584 | 0.677 | 0.317 | 0.664
d = 200 | SVD-SPPMI | 0.747 | 0.625 | 0.694 | 0.347 | 0.710
d = 200 | RO-SGNS | 0.757 | 0.647 | 0.709 | 0.353 | 0.701

We see that the SGD-SGNS and SVD-SPPMI methods provide quite similar results; however, the proposed method obtains significantly better SGNS objective values, which demonstrates the feasibility of using the Riemannian optimization framework for the SGNS optimization problem. It is interesting to note that the SVD-SPPMI method, which does not optimize the SGNS objective directly, obtains better results than the SGD-SGNS method, which aims at optimizing SGNS. This fact additionally confirms the idea described in Section 2.2.2 that the independent optimization over the parameters W and C may decrease the performance.

However, the target performance measure of embedding models is the correlation between semantic similarity and human assessment (Section 4.2). Table 2 presents the comparison of the methods in terms of it. We see that our method outperforms the competitors on all datasets except for the "men" dataset, where it obtains slightly worse results. Moreover, it is important that a higher dimension entails a higher performance gain of our method in comparison to the competitors.

Regarding the optimal number K of iterations in the optimization procedure and the step size λ, we found that they depend on the particular value of the dimensionality d. For d = 100, we have K = 25 and λ ≈ 5 · 10^-5; for d = 200, we have K = 13 and λ = 10^-4. 
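As an illustration of this evaluation protocol, the following sketch (ours; the helper name and inputs are hypothetical) computes the linguistic metric for a set of human-scored word pairs:

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(W, vocab_index, pairs, human_scores):
    """Spearman correlation between human ratings and embedding cosine similarities.

    W            : (n, d) matrix of word embeddings, rows indexed by vocab_index
    vocab_index  : dict mapping a word to its row in W
    pairs        : list of (word1, word2) tuples, e.g. from "wordsim-353"
    human_scores : assessor-assigned similarity score for each pair
    """
    cosines = []
    for w1, w2 in pairs:
        u, v = W[vocab_index[w1]], W[vocab_index[w2]]
        cosines.append(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return spearmanr(cosines, human_scores).correlation
```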
Moreover, it is interesting that the best results were obtained when the SVD-SPPMI embeddings were used as an initialization of the Riemannian optimization process.

Table 3: Examples of the semantic neighbors obtained for the words "five", "he" and "main" by our method and SVD-SPPMI.

Word | SVD-SPPMI (neighbor: dist.) | RO-SGNS (neighbor: dist.)
five | lb: 0.748, kg: 0.731, mm: 0.670, mk: 0.651, lbf: 0.650, per: 0.644 | four: 0.999, three: 0.999, six: 0.997, seven: 0.997, eight: 0.996, and: 0.985
he | she: 0.918, was: 0.797, promptly: 0.742, having: 0.731, dumbledore: 0.731, him: 0.730 | when: 0.904, had: 0.903, was: 0.901, who: 0.892, she: 0.884, by: 0.880
main | major: 0.631, busiest: 0.621, principal: 0.607, nearest: 0.607, connecting: 0.591, linking: 0.588 | major: 0.689, important: 0.661, line: 0.631, external: 0.624, principal: 0.618, primary: 0.612

Skip-Gram Negative Sampling was introduced by Mikolov et al. (2013). The "negative sampling" approach was thoroughly described by Goldberg & Levy (2014), and the learning method is explained by Rong (2014). There are several open-source implementations of the SGNS neural network, which is widely known as "word2vec"^{3,4}.

As shown in Section 2.2, Skip-Gram Negative Sampling optimization can be reformulated as a problem of searching for a low-rank matrix. In order to be able to use out-of-the-box SVD for this task, Levy & Goldberg (2014) used a surrogate version of SGNS as the objective function. There are two general assumptions made in their algorithm that distinguish it from the SGNS optimization:

1. SVD optimizes the Mean Squared Error (MSE) objective instead of the SGNS loss function.
2. In order to avoid infinite elements in the SPMI matrix, it is transformed in an ad-hoc manner (into the SPPMI matrix) before applying SVD.

This makes the objective not interpretable in terms of the original task (3). As mentioned by Levy & Goldberg (2014), the SGNS objective weighs different (w, c) pairs differently, unlike the SVD, which works with the same weight for all pairs, which may entail a performance drop. A comprehensive explanation of the relation between the SGNS, SPPMI, and SVD-over-SPPMI methods is provided by Keerthi et al. (2015). Lai et al. (2015) and Levy et al. (2015) give a good overview of highly practical methods to improve these word embedding models.

An introduction to optimization over Riemannian manifolds can be found in the paper of Udriste (1994). An overview of retractions of high-rank matrices to low-rank manifolds is provided by Absil & Oseledets (2015). The projector-splitting algorithm was introduced by Lubich & Oseledets (2014) and was also mentioned by Absil & Oseledets (2015) as the "Lie-Trotter retraction".

Riemannian optimization is successfully applied to various data science problems: for example, matrix completion (Vandereycken (2013)), large-scale recommender systems (Tan et al. (2014)), and tensor completion (Kressner et al. (2014)).

It seems to be an interesting direction of future work to apply more advanced optimization techniques to Step 1 of the scheme proposed in Section 1 and to further explore Step 2, obtaining embeddings from a given low-rank matrix.

^3 Original Google word2vec: https://code.google.com/archive/p/word2vec/
^4 Gensim word2vec: https://radimrehurek.com/gensim/models/word2vec.html

In our paper, we proposed a general two-step scheme of training the SGNS word embedding model and introduced an algorithm that performs the search for a solution in the low-rank form via the Riemannian optimization framework. We also demonstrated the superiority of the proposed method by providing an experimental comparison to existing state-of-the-art approaches."}]
SywUHFcge
[{"section_index": "0", "section_name": "A THEORETICAL FRAMEWORK FOR ROBUSTNESS OF (DEEP CLASSIFIERS AGAINST ADVERSARIAL EXAMPLES", "section_text": "behavioral signature of the original PDF malware and the manipulated variant, this oracle successfull determines if the malicious behavior is preserved from x to x'. One may argue that \"since Cuckoc sandbox works well for PDF-malware identification, why a machine-learning based detection systen is even necessary?\". This is because Cuckoo sandbox is computationally expensive and runs slow For many security-sensitive applications about machines, oracles f2 do exist, but machine-learning classifiers f1 are used popularly due to speed or efficiency."}, {"section_index": "1", "section_name": "this thinking is wrong and can lead to their classifiers vulnerable to adversarial examples(Xu et al. 2016).", "section_text": "Table 5: Accuracy of the deep residual network(He et al.]2015) obtained from two noise-perturbec testing cases. The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples\nAttack power Test accuracy Test accuracy (defined in on randomly on adversari- Eq. (12.6)) perturbed sam- ally perturbed ples samples 0 0.9411 0.9411 1 0.9409 0.5833 5 0.9369 0.3943 10 0.9288 0.3853\nBeilun Wang, Ji Gao, Yanjun Q. Department of Computer Science. University of Virginia. Charlottesville. VA 22901. USA\nBeilun Wang, Ji Gao, Yanjun Qi\nRainer Dahlhaus. Fitting Time Series Models to Nonstationary Processes. The Annals of Statistics 25(1):1-37. 1997\nIt is difficult to decompose an arbitrary f1 into g1 o c1. However, since in our context, f1 is a machine learning classifier, we can enumerate many possible g1 functions to cover classic machine learning classifiers.\n[bw4mw, jg6yd,yanjun}@virginia.edu\nVarious feature selection methods are potential g1. For DNN, g1 includes all the layers from input layer to the layer before the classification layer In SVM, X1, d1 is decided by the chosen reproducing Hilbert kernel space.. Regularization is another popular implicit feature extraction method. For example, l1 regularizatior. can automatically do the feature extraction by pushing some parameters to be 0..\nUsing Theorem (3.3), we obtain another corollary as follows (proof in Section 10.3.1):\nJia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009 IEEE Conference on, pp. 248-255. IEEE, 2009.\nFor cases when f1 is not continuous a.e., obtaining more samples is clearly a good way to learn a better decision boundary that might improve the adversarial robustness of the classifier at the same. time.\nMost machine learning classifiers, including deep neural networks, are vulnerable to adversarial examples. Such inputs are typically generated by adding small but purposeful modifications that lead to incorrect outputs while imperceptible to human eyes. The goal of this paper is not to introduce a single method, but to make theoretical steps towards fully understanding adversarial examples. By using concepts from topology, our theoretical analysis brings forth the key reasons why an adversarial example can fool a classifier (.f1) and adds its oracle (.f2, like human eyes) in such analysis. 
By investigating the topological relationship between two (pseudo)metric spaces corresponding to predictor f1 and oracle f2, we develop necessary and sufficient conditions that can determine if f1 is always robust (strong robust) against adversarial examples according to f2. Interestingly our theorems indicate that just one unnecessary feature can make f1 not strong-robust, and the right feature representation learning is the key to getting a classifier that is both accurate and strong robust.\nThis corollary shows that if a learned classifier and its oracle share the same derived feature space. (Xj = X), the learned classifier is strong-robust when two metrics are both norm functions (even if not the same norm). We can call this corollary as \"norm doesn't matter\"'.\nJames J DiCarlo and David D Cox. Untangling invariant object recognition. Trends in cognitive sciences, 11(8):333-341, 2007."}, {"section_index": "2", "section_name": "1 2 MORE ABOUT DNNS' ROBUSTNESS AGAINST ADVERSARIAL SAMPLES", "section_text": "Many interesting phenomena can be answered by Corollary (4.2. For instance, for a norm regularized. classifier, this corollary answers an important question that whether a different norm function will influence its robustness against adversarial examples. The corollary indicates that changing to a. different norm function may not improve the robustness of the model under adversarial perturbation\nJames J DiCarlo, Davide Zoccolan, and Nicole C Rust. How does the brain solve visual objec recognition? Neuron, 73(3):415-434. 2012\nMost previous studies (Table[2) have made an important and implicit assumption about f1 and f2: J is almost everywhere (a.e.) continuous. i E {1, 2}.\nDefinition 9.1. Suppose f is the classification function. fi is continuous a.e., i E {1,2}, if Vx E X a.e., 38, > 0, such that Vx' E X,d(g(x),gi(x')) < dj, fi(x) = fi(x')\nf1(): f1() is a DNN classifier with multiple layers, including linear perceptron layers, activation. layers, convolutional layers and softmax decision layer.. (X1, d1): X1 denotes the feature space discovered by the layer right before the last fully connected. layer. This feature space is automatically extracted from the original image space (e.g., RGB. representation) by the DNN. (X, d) is defined by d1 using Eq. (10.2). (X2, d2): X2 denotes the feature space that oracle (e.g., human annotators) used to decide ground. truth labels of training images. For example, a human annotator needs to recognize a hand-written digit \"O. X includes what patterns he/she needs for such a decision. (X, d) is defined by d2. using Eq. (10.3)\nRichard O Duda, Peter E Hart, and David G Stork. Pattern classification. John Wiley & Sons, 2012\nIllustrated in Figure[1] d, is the metric function (details in Section[3) f; uses to measure the similarity among samples in the space X,. For notation simplicity, we use the term \"continuous a.e.\" for \"continuous almost everywhere'|11in the rest of the paper. The above definition is a special case of almost everywhere continuity defined in (Folland]2013) (see Definition (9.2) in Section[9.1), since we decompose fi in a certain way (see Figure[1). The a.e. continuity has a few indications, like: D/"}, {"section_index": "3", "section_name": "4.3 ROBUSTNESS AND GENERALIZATION", "section_text": "Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. 
Fundamental limits on adversarial robustness In Proceedings of ICML, Workshop on Deep Learning, number EPFL-CONF-214923, 2015."}, {"section_index": "4", "section_name": "1 INTRODUCTION", "section_text": "In Table[3] we provide four situations in which the proposed theorems can be used to determin whether a classifier f1 is strong-robust against adversarial examples or not.\nDeep Neural Networks (DNNs) can efficiently learn highly accurate models and have been demon strated to perform exceptionally well (Krizhevsky et al.]2012) Hannun et al.]2014). However, recent studies show that intelligent attackers can force many machine learning models, including DNNs, to. misclassify examples by adding small and hardly visible modifications on a regular test sample..\nf2 is assumed continuous a.e. previously: Most previous studies find \"adversarial examples\" by solving Eq. (2.1), instead of Eq. (2.2). This made an implicit assumption that if the adversarial example x' is similar to the seed sample x, they belong to the same class according to f2. This assumption essentially is: f2 is almost everywhere (a.e.) continuous.\n12.1 MORE ABOUT ARE STATE-OF-THE-ART DEEP NEURAL NETS STRONG-ROBUST ?\nWe can observe some properties of d1 through experimental results. Table5lTable [6lTable7|anc Table 8 show properties of dj (and d) resulting from performing testing experiments on four state-of-art DNN networks.\nThe maliciously generated inputs are called \"adversarial examples\" (Goodfellow et al.]2014)Szegedy et al.[2013) and are commonly crafted by carefully searching small perturbations through an optimization procedure. Several recent studies proposed algorithms for solving such optimization to fool DNN classifiers. (Szegedy et al.] [2013) firstly observe that convolution DNNs are vulnerable to small artificial perturbations. They use box-constrained Limited-memory BFGS (L-BFGS) to create adversarial examples and find that adversarial perturbations generated from one DNN network can also force other networks to produce wrong outputs. Then, (Goodfellow et al.]2014) try to. clarify that the primary cause of such vulnerabilities may be the linear nature of DNNs. They then propose the fast gradient sign method for generating adversarial examples quickly. Subsequent papers (Fawzi et al.[2015] Papernot et al.[2015a] [Nguyen et al.2015] have explored other ways to explore adversarial examples for DNN (details in Section 2.1). The goal of this paper is to analyze the robustness of machine learning models in the face of adversarial examples..\nf1 is continuous a.e.: Almost all popular machine learning classifiers satisfy the a.e. continuity assumption. For instance, a deep neural network is certainly continuous a.e.. Similarly to the results shown by (Szegedy et al.]2013), DNNs satisfy that [f1(x) - f1(x')] W |I x - x' |2 where W I W; and W, (wi, bi)||oo. Here i = {1, 2, ..., L} representing i-th linear layer in NN Therefore, Ve > 0, let & = e/W. Then |f1(x) f1(x')] < e when d1(x, x') =|I x - x' ||2< 8. This shows that a deep neural network is almost everywhere continuous when d1 (.\nIn Table 9] the model we use is a 200-layer residual network (He et al.] 2015) trained on Imagenet. dataset (Deng et al.2009) by Facebool15We generate two types of test samples from 50000 images in the validation set of Imagenet: (1) 50oo0 randomly perturbed images. 
The random perturbations on each image are generated by first fixing the perturbation value on every dimension to be the same, and then randomly assigning the sign on every dimension as + or - (with probability 1/2). In this way, the size of the perturbation can be described by x - x'Ioo that we name as the level of. attacking power ( later defined in Eq. (12.6)). (2) 50000 adversarially perturbed images. We use the. fast-gradient sign method (introduced in Section[8.2) to generate such adversarial perturbations on each seed image. The \"attacking power' of such adversarial perturbations uses the same formula. as Eq. (12.6). The first column of Table 9|shows different attack powers (Eq. (12.6)) we use in the. experiment. The second column shows the accuracy of running the DNN model on the first group of image samples and the third column shows the accuracy of running the DNN model on the second group of image samples.\nTable[3|provides a much better understanding of the relationship between robustness and accuracy Two interesting cases from Table[3|are worth to emphasize again: (1) If f1 misses features used by f2 and does not include unnecessary features (according to X2), f1 is strong-robust (even though it may not be accurate). (2) If f1 extracts some extra unnecessary features, it will not be strong-robust (though it may be a very accurate predictor).\nKathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick McDaniel Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435. 2016\nWe want to emphasize that \"f1 is strong-robust' does not mean it is a good classifier. For example, a trivial example for strong-robust models is f1(x) = 1, Vx E X. However, it is a useless model since it doesn't have any prediction power. In an adversarial setting, we should aim to get a classifier that is both strong-robust and precise. A better feature learning function g1 is exactly the solution that may achieve both goals.\nFor the rare cases when f1 is not continuous a.e., see next Section 11discussing \"boundary points that matter for analyzing adversarial perturbations.\nIn response to progress in generating adversarial examples, researchers attempt to design strategies for making machine-learning systems robust to various noise, in the worst case as adversarial examples. For instance, denoising NN architectures (Vincent et al.2008Gu & Rigazio]2014) Jin et al.]2015 can discover more robust features by using a noise-corrupted version of inputs as training samples. A modified distillation strategy (Papernot et al.2015b) is proposed to improve the robustness of. DNNs against adversarial examples, though it has been shown to be unsuccessful recently (Carlini &. Wagner| 2016a). The most generally successful strategy to date is adversarial training (Goodfellow. et al.[[2014) Szegedy et al.[2013) which injects adversarial examples into training to improve the generalization of DNN models. More recent techniques incorporate a smoothness penalty (Miyato.\nTable3lindicates that c1 and c2 do not influence the strong-robustness of f1 when f1 is continuou. a.e.[10] Figure4and Figure5|further show two concrete example cases in which f1 is strong-robus according to f2. However, in both figures, f1 is not accurate according to f2.\nThe a.e. 
continuity has a few indications\nX is not a finite space; and Vx,x' E X,P(fi(x) = fi(x')|d;(gi(x),gi(x')) < 0) = 1 It does not mean the function f1 is continuous in every point in its feature space X;\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.\n5https://github.com/facebook/fb.resnet.torch\n10When f1 is not continuous a.e., c1 matters for the strong-robustness. See Section|11|for det:\n11The measure (e.g., Lebesgue measure) of discontinuous set is 0.\nAs another example, multiple DNN studies about adversarial examples claim that adversarial examples are transferable among different DNN models. This can be explained by Figure[2|(when X1 is a much higher-dimensional space). Since different DNN models learn over-complete feature spaces {Xj}, there is a high chance that these different Xj involve a similar set of unnecessary features (e.g., the different learned features are correlated with others). Therefore the adversarial examples are generated along similar gradient directions. That is why many such samples can evade multiple DNN models.\nNicolo Cesa-Bianchi and Gabor Lugosi. Prediction, learning, and games. Cambridge University Press, 2006."}, {"section_index": "5", "section_name": "ABSTRACT", "section_text": "Summarizing Theorem (3.2), Theorem (3.4), Corollary (4.2) and Corollary (4.1), the robustness of a. learned classifier is decided by two factors: (1) the difference between two derived feature spaces:. and (2) the difference between the metric functions. Two corollaries show that the difference between the feature spaces is more important than the difference between the two metric functions..\nCase (I): If f1 uses some unnecessary features, it will not be strong-robust to adversarial examples It may not be an accurate predictor if f1 misses some necessary features used by f2 Case (II): If f1 uses some unnecessary features, it will not be strong-robust to adversarial examples It may be an accurate predictor if f1 uses all the features used by f2. Case (III): If f1 and f2 use the same set of features and nothing else, f1 is strong-robust and may be accurate. Case (IV): If f1 misses some necessary features and does not extract unnecessary features, f1 is strong-robust (even tough its accuracy may not be good).\nIn Section [9.1] we show that if f1 is not continuous a.e., it is not robust to any types of noise Considering the generalization assumption of machine learning, machine learning classifiers should satisfy the continuity a.e. assumption. Section 9.2|provides two examples of how popular machine learning classifiers satisfy this assumption.\nTable 6|Table 7 and Table 8repeat similar experiments on three other DNN models: overfeat network(Sermanet et al.|2013), the residual network(He et al.]2015) and the VGG mode1 (Simonyan & Zisserman2014). The conclusion is consistent across all four models.\nIf a probability distribution admits a density, then the probability of every one-point set {a} is zero the same holds for finite and countable sets and the same conclusion holds for zero measure sets[12 for instance, straight lines or circle in Rn. . The a.e. continuity follows the same property as density function: the probability of picking one-point set {x} from the whole feature space is zero; the same holds for zero measure sets. This means: the probability of picking the discontinuous points (e.g., points on the decision boundary is zero, because they are null sets. 
: Most machine learning methods focus on X = RP or space equivalent to RP (e.g., [0, 1|P) (see Appendix: Section [11.1). Most machine learning methods assume f1 is continuous a.e. (see Appendix: Section9.2).\nTable 1: A list of important notations used in the paper\nTable 4: Connecting to relevant DNN hardening solutions. The experimental results of comparing different hardening solutions are shown in Figure[9] Figure[10] Table[10|and Table[11\nTable 6: Accuracy of the overfeat network(Sermanet et al.]2013) obtained from two noise-perturbed testing cases. The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.\nAttack power Test accuracy Test accuracy (defined in on randomly on adversari- Eq. (12.6) perturbed sam- ally perturbed ples samples 0 0.7944 0.7944 1 0.7923 0.5922 5 0.7844 0.4270 10 0.7762 0.3485\nrandom perturbation\nDefinition 9.2. Suppose (X,F, P) is a probability space(for general definition, (X,, ) is a. measure space), where X is the sample space, a -algebra F is a collection of all the events and P is. a probability measure defined in X and F. A property holds \"almost everywhere\" (a.e.) in X if anc only if the probability measure of the set for which the property holds equals 1..\nrandom perturbation\nLemma 9.3. If the a.e. continuity assumption doesn't hold, there exists a non-zero measure set D\nJohn L Kelley. General topology. Springer Science & Business Media, 1975\nVx E D,3x' s.t. f1(x)f1x d1(x,x') < dj\nAlex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009\nAlex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems, pp 1097-1105, 2012.\nAttack power Test accuracy Test accuracy (defined in on randomly on adversari- Eq. (12.6)) perturbed sam- ally perturbed ples samples 0 0.9431 0.9431 1 0.9431 0.9294 5 0.9429 0.6815 10 0.943 0.2961\nProof. Without it, for any test sample x, you can easily find a very similar sample x' (i.e. for any small d1, d1(x, x') < 81) such that [f1(x) - f1(x')] > e. In classification problems, this means that f1(x) f1(x' (i.e. there exist very similar pair of two samples x and x' that have different labels for most x E X1).\nAlexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arX preprint arXiv:1611.01236, 2016.\nFor DNN, it is difficult to derive a precise analytic form of d1 (or d). But we can observe some properties of d] through experimental results. Table 5|Table|6[Table[7and Table[8|show properties oi d1 (and d) resulting from performing testing experiments on four state-of-art DNN networks (detail in Section 12.1). All four tables indicate that the accuracy of DNN models in the adversarial setting are quite bad. The performance on randomly perturbed inputs is much better than performance or maliciously perturbed adversarial examples.\n9.2 MOST MACHINE-LEARNING CLASSIFIERS SATISFY THE A.E. CONTINUITY ASSUMPTION\nTaehoon Lee, Minsuk Choi, and Sungroh Yoon. Manifold regularized deep neural networks using adversarial examples. arXiv preprint arXiv:1511.06381, 2015.\nAlmost all popular machine learning classifiers satisfy the a.e. continuity assumption. For exampl\nBo Li and Yevgeniy Vorobeychik. Feature cross-substitution in adversarial classification. In Advances in Neural Information Processing Systems. pp. 2087-2095. 
2014."}, {"section_index": "6", "section_name": "12.2 CONNECTING PREVIOUS STUDIES HARDENING DNNS", "section_text": "Multiple hardening solutions (Zheng et al.]2016} Miyato et al.]2016]Lee et al.] 2015) exist in the DNN literature. They mostly aim to learn a better g1 by minimizing different loss functions L f1 (x, x') so that when d2(g2(x), g2(x')) < e, this loss Lf1 (x, x') is small. This might improve the the topological equivalence (or finer topology). Two major variations exist among related methods the choice of L f (x, x') and the way to generate pairs of (x, x').\nThis paper tries to answer above questions and makes the following contributions:\nWei Liu and Sanjay Chawla. Mining adversarial patterns via regularized loss minimization. Machin learning, 81(1):69-83, 2010.\nSection[2|points out that previous definitions of adversarial examples for a classifier (f1) have. overlooked the importance of an oracle function (f2) of the same task.. Section[3|formally defines when a classifier f1 is always robust (\"strong-robust\") against adversarial examples. It proves four theorems about sufficient and necessary conditions that make f1 always robust against adversarial examples according to f2. Our theorems lead to a number of interesting insights, like that the feature representation learning controls if a DNN is strong-robust or not.. Section[12|is dedicated to provide practical and theoretically grounded directions for understanding and hardening DNN models against adversarial examples..\nShike Mei and Xiaojin Zhu. The security of latent dirichlet allocation. 2015a"}, {"section_index": "7", "section_name": "5.2 TOWARDS PRINCIPLED SOLUTIONS", "section_text": "1 0 APPENDIX: USING METRIC SPACE AND PSEUDO METRIC SPACES TO UNDERSTAND CLASSIFIERS' ROBUSTNESS AGAINST ADVERSARIAL EXAMPLES\nShike Mei and Xiaojin Zhu. Some submodular data-poisoning attacks on machine learners. 2015b\nTable1provides a list of important notations we use in the paper.\nOur theorems suggest a list of possible solutions that may improve the robustness of DNN classifier against adversarial samples. Options include such as:.\nTakeru Miyato, Shin-ichi Maeda, and Koyama Masanori. Distributional smoothing with virtua adversarial training. ICLR' 16, 2016.\nBy learning a better g1: Methods like DNNs directly learn the feature extraction function g1. Table summarizes multiple hardening solutions (Zheng et al.2016) [Miyato et al.]2016] Lee et al.]2015 in the DNN literature. They mostly aim to learn a better g1 by minimizing different loss functions L f1 (x, x') so that when d2(g2(x), g2(x)) < e (approximated by (X, |L : ID), this loss L f1 (x, x') is small. Two major variations exist among related methods: the choice of Lf (x, x') and the way to generate pairs of (x, x'). For instance, to reach the strong-robustness we can force to learn a g1 that helps (X, d) to be a finer topology than (X2, d2). Section|12.4|explores this option (\"Siamese training\" in Table[4) through Siamese architecture. Experimentally Section[12.5[compares adversarial training, stability training and Siamese training on two state-of-the-art DNN image-classification\nBesides, (Zheng et al.[2016) uses L f1 (x, x) = K L(f1(x), f1(x)) and uses it as a regularization term adding onto the original training loss function. Its samples x' are generated from original samples x adding a small Gaussian noise. (Miyato et al.2016) uses the similar loss function as (Zheng et al.]2016).But(Miyato et al.]2016) uses adversarial perturbed x' from x. 
(Lee et al.2015) uses Lf1 (x, x') = d1(g1(x),g1(x')) and x's are generated xs by adding a small Gaussian noise. Recently proposed adversarial training (Goodfellow et al.|2014, Kurakin et al. 2016) uses L f (x, x') = L(f1(x), f2(x)) and uses adversarial perturbed x' from x. These studies are summarized and compared in Table4\nSeyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. arXiv preprint arXiv:1511.04599, 2015.\n10.1 METRIC SPACES AND TOPOLOGICAL EQUIVALENCE OF TWO METRIC SPACES\nAnh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR. IEEE, 2015.\nVarious definitions of \"adversarial examples\"' exist in the recent literature, with most following. Eq. (2.1). See more detailed reviews in Section[8] The basic idea is to generate a misclassified sample\nx Loss L f1 (x, x On Layer Stability training (Zheng random perturbation KL(f1(x),f1(x)) Classification layer et al.]2016) (Miyato et al.[(2016) adversarial perturbation KL(f1(x),f1(x)) Classification layer Adversarial train- adversarial perturbation L(f1(x'),f2(x)) Loss function ing(Goodfellow et al. 2014) Large Adversarial train- adversarial perturbation L(f1(x'),f2(x)) Loss function ing(Kurakin et al.[2016) (Lee et al.[[2015) adversarial perturbation ll g1(x) - g1(x') lI2 Layer before classification layer Siamese Training random perturbation g1(x)-g1(x') |2 Layer before classification layer\nTable 7: Accuracy of the residual network(He et al.2015) obtained from two noise-perturbed testing cases in CIFAR-10 dataset (Krizhevsky & Hinton2009). The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.\nOur theoretical analysis uncovers fundamental properties to explain the adversarial examples. In this section, we apply them to analyze DNN classifiers. More specifically, (1) we find that DNNs are not. strong-robust against adversarial examples; and (ii) we connect to possible hardening solutions and introduce principled understanding of these solutions.."}, {"section_index": "8", "section_name": "et al.]2016] Zheng et al.]2016) or a layer-wise penalty (Carlini & Wagner]2016b) as a regularization term in the loss function to promote the smoothness of the DNN model distributions.", "section_text": "Recent studies (reviewed by (Papernot et al.|2016b)) are mostly empirical and provide little under-. standing of why an adversary can fool machine learning models with adversarial examples. Several important questions have not been answered yet:.\nThe Lemma (9.3) shows that f1 is not robust to a random noise if we don't assume f1 is continuous\nWhat makes a classifier always robust to adversarial examples? Which parts of a classifier influence its robustness against adversarial examples more, compared with the rest? What is the relationship between a classifier's generalization accuracy and its robustness against adversarial examples? Why (many) DNN classifiers are not robust against adversarial examples ? How to improve?\nLogistic regression for text categorization with a bag of word representation. A classifier on a multivariate feature representation in which each feature representing (modified) counts of a word is naturally a.e. continuous. Since {x'[d1(x, x') < d1, x x'} = 0 when 1 is small and x, x' are mostly sparse vectors. 
Logistic regression with a bag of word representation is a continuous a.e. predictor. Support Vector Machine with continuous feature representation. Suppose we define (X1, d1) by the d?(x, x) = k(x, x) + k(x', x') - 2k(x, x). Then support vector machine is a linear classifier on (X1, d1). Thus, the SVM prediction function is continuous a.e. with d1.\nThe phenomenon we observed can be explained by Figure[3] Comparing the second column and. the third column in four tables we can conclude that d1 (and d') in a random direction is larger than d1 (and d) in the adversarial direction. This indicates that a round sphere in (X1, d1) (and (X, d)) corresponds to a very thin high-dimensional ellipsoid in (X, : ) (illustrated by the left half. of Figure[3). Figure[3[(I) shows a sphere in (X, d) and Figure[3|(III) shows a sphere in (X1, d1) They correspond to the very thin high-dimensional ellipsoid in (X, || : |) in Figure 3(V). The norm. function |I : || is defined in space X and is application-dependent. All four tables uses |I : = |I : oo.\nChoice of loss function L f, (x, x'): Siamese training (G) (Section[12.4) and (Lee et al.2015) use Lf1(x, x') = d1(g1(x), g1(x')). Siamese training (F) chooses Lf1(x, x') = dist(f1(x), f1(x')) where dist(.,:) is a distance function measuring the difference between f1(x) and f1(x' ). If f1 is. continuous a.e., when d1(g1(x), g1(x')) is small -> we get dist(f1(x), f1(x')) is small. However,. the reverse direction may not hold. Therefore, Lf1 (x, x') = dist(f1(x), f1(x')) may not work for. cases. Generating pairs of (x, x'): Another variation is the way of generating pairs of (x, x') so that. d2(g2(x), g2(x')) is small. There exist two common ways. One is generating x' by adding a. random (e.g. Gaussian) perturbation on x. The other one is generating the adversarial perturbation to get x' from x.\nDifferently, for human oracles, a sphere in (X, d2) (shown in Figure[3|(II)) or in (X2, d2) (shown in Figure[3|(IV)) corresponds to an ellipsoid in (X, II : I) not including very-thin directions (shown in Figure[3|(VI)). When the attackers try to minimize the perturbation size using the approximated distance function d2 = || : ||, the thin direction of ellipsoid in Figure[3|(V) is exactly the adversarial direction.\nMost machine learning methods focus on the Rn space or the space equivalent to Rn (e.g., [0, 1]n). For example, the sample space of image classification task intuitively is 255p, where p is the number of features (e.g., 3 224 224). However, people mostly rescale the raw image data samples into. X = [0, 1]p. Therefore, the sample space X for f1 for this case is [0, 1]p..\nThis section provides a general definition of adversarial examples , by including the notion of an oracle. For a particular classification task, a learned classifier is represented as f1 : X -> Y, where X represents the input sample space and Y is the output space representing a categorical set\nThis subsection briefly introduces the concept of metric space and topological equivalence. A metric. on a set/space X is a function d : X X -> 0, oo satisfying four properties: (1) non-negativity, (2). identity of indiscernibles, (3) symmetry and (4) triangle inequality. In machine learning, for example. the most widely used metric is Euclidean distance. Kernel based methods, such as SVM, kernel\nMachine-learning f1 (X,d1) classifier (X1,d1) Y 91 C1 x 0 Feature Extraction Classification 0 g2 C2 (X2,d2) (X, d'2. f2 Oracle\nX,d II (X,d2') Not a Finer Tppology. a a Far! 
Close 1 1 Human oracle Deep Neural Nets. d'(a, b) Large d2'(a, b) small III (X1,d IV Not Topological Equivalent @ 0 a Far! Clbse Human oracle Deep Neural Nets. d1(a, b) Large d2(a, b) small V (X, III) VI (X, II II) Adversarial direction II a - b IIthe same. @ Class 1 a Class 2 Class 3\nregression and Gaussian process, consider samples in a Reproducing kernel Hilbert space (RKHS) The metric in a RKHS is naturally defined as: d2(x, y) = K(x, x) + K(y, y) - 2K(x, y), in which K (:, :) is a kernel function\nTable 8: Accuracy of the wide residual network(Zagoruyko & Komodakis, 2016) obtained fron two noise-perturbed testing cases in CIFAR-10 dataset (Krizhevsky & Hinton2009). The second column shows the result on randomly perturbed samples, and the third column shows the result or adversarially perturbed samples.\nNow we present an important definition, namely that of \"topological equivalence\", that can represe. a special relationship between two metric spaces.\nAttack power Test accuracy Test accuracy (defined in on randomly on adversari- Eq. (12.6)) perturbed sam- ally perturbed ples samples 0 0.953 0.953 1 0.953 0.8527 5 0.953 0.4718 10 0.953 0.2529\nA function or mapping h() from one topological space to another is continuous if the inverse image of any open set is open. If this continuous function is one-to-one and onto, and the inverse of the function is also continuous, then the function is called a homeomorphism and the domain of the function, in our case (X1, d1), is said to be homeomorphic to the output range, e.g., here (X2, d2) In other words, metric space (X1, d1) is topologically equivalent to the metric space (X2, d2).\nWe can state this definition as the following equation:\nh(x1) = x2,h(x1) = x2\nTable 9: Accuracy of the VGG model (Simonyan & Zisserman. 2014) obtained from two noise perturbed testing cases in CIFAR-10 dataset (Krizhevsky & Hinton2. 2009). The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.\n10.2 PSEUDOMETRIC SPACES AND FINER TOPOLOGY AMONG PSEUDOMETRIC SPACES\nWe have briefly reviewed the concept of metric space in Section 10.1and proposed the related Theo rem (3.2) in Section[3.3] This is partly because the concept of metric space has been widely used in many machine learning models, such as metric learning (Xing et al.|2003). Theorem (3.2) anc related analysis indicate that feature spaces X1 and X2 (See Figure[1) are key determining factors for deciding learning model's strong-robustness.\nAttack power Test accuracy Test accuracy (defined in on randomly on adversari- Eq. (12.6)) perturbed sam- ally perturbed ples samples 0 0.9395 0.9395 1 0.938 0.7807 5 0.938 0.3767 10 0.9377 0.2092\nKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014\nFigure 1: Example of a machine-learning classifier (predictor) and a human annotator (oracle) for classifying images of hand-written \"0'. Both include two steps: feature extraction and classification The upper half is about the learned machine classifier f1 and the lower half is about the oracle f2. f1 transforms samples from the original space X to an embedded metric space (X1, d) using its feature extraction step. Here, dj is the similarity measure on the feature space X1. 
Classification models like DNNs cover the feature extraction step inside the model, though many other models like decision trees need hand-crafted or domain-specific feature extraction. Then f1 can use a linear function to decide the classification prediction y ∈ Y. Similarly, the human oracle f2 transforms data samples from the original space X into an embedded metric space (X2, d2) by its feature extraction. Here, d2 is the corresponding similarity measure. Then the oracle gets the classification result y ∈ Y using the feature representation of the samples in (X2, d2).

Arunesh Sinha, Debarun Kar, and Milind Tambe. Learning adversary behavior in security games: A PAC model perspective. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pp. 214-222. International Foundation for Autonomous Agents and Multiagent Systems, 2016.

However, it is difficult to get the analytic form of X2 in most applications (e.g., when the oracle f2 is a human annotator). In fact, most previous studies (reviewed in Section 2.2) assume (X2, d2) equals (X, ‖·‖), where ‖·‖ is a norm function. Therefore, we want to extend our analysis and results from the implicit feature space X2 to the original feature space X.

Figure 3: This figure shows one situation in which (X, d'1) is not a finer topology than (X, d'2) (therefore (X1, d1) and (X2, d2) are not topologically equivalent). According to Theorem (3.4), in this case the DNN is vulnerable to adversarial examples. The two sample points a and b are close with regard to (w.r.t.) a norm ‖·‖ in X. They are also close w.r.t. d2 in the (X2, d2) space and close w.r.t. d'2 in the (X, d'2) space. But they are far from each other in the space (X, d'1) and in the space (X1, d1). In other words, while d2(a, b), d'2(a, b) and ‖a − b‖ are small, d1(a, b) and d'1(a, b) are large. Clearly, the DNN can be easily evaded by adding the small perturbation a − b to sample a or sample b. NOTE: it is normally difficult to get the analytic form of (X2, d2) for most applications. Most previous studies (reviewed in Section 2.2) assume (X2, d2) equals (X, ‖·‖), where ‖·‖ is a norm function.

Ben Stoddard, Yan Chen, and Ashwin Machanavajjhala. Differentially private algorithms for empirical machine learning. arXiv preprint arXiv:1411.5428, 2014.

When we extend the analysis to the original space X, it is important to point out that the distance function measuring sample similarity for a learned predictor f1 in the original space X may not be a metric. The distance function in the original feature space X for the oracle f2 may not be a metric either. This is because the distance between two different samples in the original space X may equal 0, since two different samples may be projected onto the same point in X1 or X2. For example, a change in one background pixel of an image does not affect the prediction of f1 or f2, since g1 and g2 have already eliminated that (irrelevant) feature. This property contradicts the identity-of-indiscernibles assumption for a metric function. Therefore, we need a more general concept of distance function for performing the theoretical analysis in the original space X. By using the concept of a pseudometric space, we derive another important theorem about strong-robustness.

Our theoretical analysis indicates that strong-robustness is a strong condition for machine learning classifiers and requires a thorough understanding of the oracle.
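To make the pseudometric construction concrete, here is a minimal Python sketch (the feature extractor and the metric below are hypothetical toy stand-ins, not the paper's models): it builds the induced distance d'1(x, x') = d1(g1(x), g1(x')) on the original space X and shows that two distinct samples can sit at distance zero, violating the identity of indiscernibles.

```python
import numpy as np

def g1(x):
    # Hypothetical feature extractor: keeps only the first three features,
    # mimicking a model that ignores the remaining (irrelevant) dimensions.
    return x[:3]

def d1(z, z_prime):
    # Metric on the feature space X1 (Euclidean distance here).
    return np.linalg.norm(z - z_prime)

def d1_prime(x, x_prime):
    # Induced pseudometric on the original space X:
    # d'1(x, x') = d1(g1(x), g1(x')).
    return d1(g1(x), g1(x_prime))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x_prime = np.array([1.0, 2.0, 3.0, -9.0, 0.5])  # differs only in ignored features

print(d1_prime(x, x_prime))  # 0.0 although x != x', so d'1 is not a metric
```

The RKHS distance from Section 10.1 can be coded analogously, with d1 replaced by sqrt(K(x, x) + K(y, y) − 2K(x, y)).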
Since many state-of-the-art learning models, including many DNNs, are not strong-robust, it is important to understand and quantify how far away they are from strong-robustness.

An adversarial example is often generated as x' by "slightly" perturbing a correctly classified sample x with an adversarial perturbation Δ(x, x'). Formally, when given x ∈ X, Eq. (2.1) searches for an x' that changes f1's prediction while keeping the perturbation Δ(x, x') small.

William Uther and Manuela Veloso. Adversarial reinforcement learning. Technical report, Carnegie Mellon University, 1997. Unpublished.

This section proposes a new evaluation measure, "Adversarial Robustness of Classifiers (ARC)", to quantify how far a classifier is from strong-robustness. This quantitative measure considers both the predictor f1 and the oracle f2. By design, a classifier f1's ARC achieves the maximum (1, since ARC is rescaled to [0, 1]) if and only if f1 is strong-robust (see Theorem (12.3)).

Pseudometric: If a distance function d' : X × X → [0, ∞) has the following three properties: (1) non-negativity, (2) symmetry and (3) triangle inequality, we call d' a pseudometric or generalized metric. The space (X, d') is a pseudometric space or generalized metric space. It is worth pointing out that a generalized metric space is a special case of a topological space, and a metric space is a special case of a pseudometric space.

Here x, x' ∈ X. Δ(x, x') represents the difference between x and x', which depends on the specific data type that x and x' belong to.¹ Table 2 summarizes the different choices of f1 and Δ(x, x') used in the recent literature, in which norm functions on the original space X are mostly used to calculate Δ(x, x'). Multiple algorithms have been implemented to solve Eq. (2.1) as a constrained optimization (summarized by the last column of Table 2). More details are included for three such studies in Section 8.2.

These strategies are evaluated on classification tasks through performance against adversarial samples (details in Section 12.5). The hardening effects of these strategies vary from task to task; however, they all improve the base DNN model's performance in the adversarial setting.

Why pseudometric spaces? As shown in Figure 1, we can decompose a common machine learning classifier as f1 = c1 ∘ g1, where g1 : X → X1 represents the feature extraction and c1 : X1 → Y performs the operation of classification. Assume there exists a pseudometric d'1(·,·) on X and a metric d1(·,·) defined on X1 such that ∀x, x' ∈ X:

d'1(x, x') = d1(g1(x), g1(x'))    (10.2)

We name such situations "weak-robustness" and propose a quantitative measure to describe how robust a classification model is against adversarial examples. The proposed measure, "Adversarial Robustness of Classifiers (ARC)", considers both the predictor f1 and the oracle f2 (introduced in Section 2.2). By design, a classifier f1's ARC achieves the maximum (1, since ARC is rescaled to [0, 1]) if and only if f1 is strong-robust against adversarial examples, and it is based on the expectation of how difficult it is to generate adversarial examples.

By modifying unnecessary features: As shown by Table 3, unnecessary features ruin the strong-robustness of learning-based classifiers. A simple way to remove the unrelated features is to identify which features are unnecessary. In (Gao et al., 2017) the authors compare the difference between g1(x') and g1(x) from a DNN. They hypothesize that the learned DNN feature dimensions (in X1) that change rapidly are those utilized by an adversary, and thus can be removed to improve the robustness of the DNN model. Another efficient method is to map the different values of each feature into a few equivalence classes, as in the sketch below.
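A minimal sketch of this value-squeezing idea (the bucketing scheme is a hypothetical illustration, not a method from the paper): every feature value is projected onto the representative of its equivalence class, so small adversarial shifts inside a class are erased.

```python
import numpy as np

def squeeze_features(x, n_classes=8, lo=0.0, hi=1.0):
    """Project each feature value onto the center of its equivalence class.

    Values that differ only by a small (adversarial) shift inside one
    class collapse to the same representative, so the perturbation is
    removed before classification.
    """
    width = (hi - lo) / n_classes
    bucket = np.clip(((x - lo) // width).astype(int), 0, n_classes - 1)
    return lo + (bucket + 0.5) * width

x = np.array([0.40, 0.81, 0.10])
x_adv = x + np.array([0.02, -0.03, 0.01])   # small adversarial shift

print(squeeze_features(x))
print(squeeze_features(x_adv))               # identical: the shift is squeezed out
```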
In this way, the adversarial perturbation in the unnecessary feature dimensions can be squeezed out by projecting onto the same equivalence class. A recent study (Li & Vorobeychik, 2014) explored a similar strategy, using an equivalent-feature-group to replace each word feature in a group, in order to improve the robustness of spam-email classifiers against evasion attacks.

When searching for adversarial examples, one important property has not been fully captured by Eq. (2.1). That is, an adversarial example has been modified very slightly from its seed, and these modifications can be so subtle that, for example in image classification, a human observer does not even notice the modification at all. We define the role of the "human observer" more formally as follows.

Eric P. Xing, Michael I. Jordan, Stuart J. Russell, and Andrew Y. Ng. Distance metric learning with application to clustering with side-information. In S. Becker, S. Thrun, and K. Obermayer (eds.), Advances in Neural Information Processing Systems 15, pp. 521-528. MIT Press, 2003.

Definition 2.1. An "oracle" represents a decision process generating ground-truth labels for a task of interest. Each oracle is task-specific, with finite knowledge and noise-free.²

Definition 12.1. Adversarial Robustness of Classifiers (ARC)

By adding the constraint d2(x, x') < δ2 into Eq. (2.2) (our general definition of adversarial examples) and taking the expectation of d2 between the adversarial example and the seed sample, we define a measure

¹For example, in the case of strings, Δ(x, x') represents the difference between two strings.
²We leave all detailed analysis of when an oracle contains noise as future work.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096-1103. ACM, 2008.

s.t. f1(x) ≠ f1(x')

Pengtao Xie, Misha Bilenko, Tom Finley, Ran Gilad-Bachrach, Kristin Lauter, and Michael Naehrig. Crypto-nets: Neural networks over encrypted data. arXiv preprint arXiv:1412.6181, 2014.

Since d1 is a metric in X1, d'1 fulfills the (1) non-negativity, (2) symmetry and (3) triangle inequality properties. However, d'1 may not satisfy the identity-of-indiscernibles property (i.e., making it not a
This paper is divided into three parts. The first section. provides a revised definition of adversarial examples by taking into account of the oracle of the task The second section defines strong-robustness and provides the principled understanding of wha. makes a classifier strong-robust. The third section examines practical and theoretically groundec. directions for understanding and hardening DNN models against adversarial examples. Future steps. will include an empirical comparison to analyze recent literature using our theorems..\nStephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. Improving the robustness of deep neural networks via stability training. arXiv preprint arXiv:1604.04326, 2016\nMachine classifier f\nBefore introducing this condition, we need to briefly introduce the definition of topology and finer/coarser topology here:\nDefinition 10.2. A topology t is a collection of en sets in a space X.."}, {"section_index": "10", "section_name": "RELATED WORKS IN A BROADER CONTEXT", "section_text": "A topology t is generated by a collection of open balls {B(x, 01)} where x E X and B(x, 01) =. {z[d(x, z) < 01}. The collection contains {B(x, 01)}, the infinite/finite number of the union of balls and the finite number of intersection of them..\nDefinition 10.3. Suppose T1 and T2 are two topologies in space X. If T2 C T1, the topology T2 is. called a coarser (weaker or smaller) topology than the topology T1, and T1 is called a finer (stronger or larger) topology than T2.\nThis motivates us to design a computable criteria to estimate Definition (12.1). For instance, for image classification tasks, we can choose d2 = || : ||. as an example. Then in Eq. (12.1), to estimate of IE[||x - x'I[o], we need to make some assumptions. Assume that there exists a threshold 2, that any perturbation larger than d2 will change the classification of oracle f2. That is if x - x'Ioo 02 then f2(x) f2(x'). More concretely, for image classification tasks, as the input space is discrete (with every dimension ranging from 0 to 255), ARC can be estimated by the following Eq. (12.2):\nFigure 2: An example showing that f1 with one unnecessary feature (according to f2) is prone to. adversarial examples. The red circle denotes an adversarial example (e.g. generated by some attack. similar as JSMA (Papernot et al.] 2015a) (details in Section[8.2). Each adversarial example is very close to its seed sample in the oracle feature space (according to d2), but it is comparatively far from. its seed sample in the feature space (according to d1) of the trained classifier and is at the different. side of the decision boundary of f1. Essentially \"adversarial examples'' can be easily found for all. seed samples in this Figure. We only draw cases for two seeds. Besides, for each seed sample, we. can generate a series of \"adversarial examples\" (by varying attacking power) after the attacking line. crosses the decision boundary of f1. We only show one case of such an adversarial example for each seed sample.\nd2-1 ARCx(f1,f2)=E[l x-x']=>~iP(l| x-x'llx=i i=1 +82P(f1(x) =f1(t),Vlx-to< 82) x' = argmin d2(x,t) tEX\nS2. S1 and S2 are bases of (X, T1) and (X, T2). First, we want to prove that given 02 > 0, s1 > 0 such that if d,(x, x) d2, then d (x, x) < 01 Consider a pair of samples (x, x) and d,(x, x) < d2. x, x' E B2(x, 02). Of course, B2(x, 02) E T2. Suppose the (X, d) is a finer topology than (X, d2). Then B(x, 02) E T1. You can. 
find B1(x0, δ1/2) ∈ τ1 such that B̄2(x, δ2) ⊆ B1(x0, δ1/2), where B̄2(x, δ2) is the closure of B2(x, δ2). Therefore d'1(x, x') ≤ δ1. Based on the a.e. continuity assumption of f1, since d'1(x, x') < δ1, f1(x) = f1(x') a.e. This means that P(f1(x) = f1(x') | d2(g2(x), g2(x')) < δ2) = 1, which is our definition of strong-robustness. Next, we want to show that if f1 is strong-robust, then τ1 is a finer topology than τ2. Suppose f1 is strong-robust; we need to prove that ∀δ2 > 0, ∃δ1 > 0 such that if d'2(x, x') < δ2, then d'1(x, x') < δ1. Assume τ1 is not a finer topology than τ2. This means there exists a B2(x, δ2) such that B2(x, δ2) ∉ τ1. Therefore ∀δ1 > 0, there exists x' ∈ B2(x, δ2) such that d'2(x, x') < δ2 and d'1(x, x') > δ1. Based on the a.e. continuity assumption of f1, d'1(x, x') > δ1 indicates that f1(x) ≠ f1(x'). This contradicts the strong-robustness assumption. Thus, τ1 is a finer topology than τ2.

Table 2: Summary of the previous studies defining adversarial examples.

In the broader secure machine learning field, researchers have also made attempts to harden learning systems. For instance: (1) (Barreno et al., 2010) and (Biggio et al., 2008) propose methods that introduce some randomness in the selection of classification boundaries; (2) a few recent studies (Xiao et al., 2015; Zhang et al., 2015) consider the impact of using reduced feature sets on classifiers under adversarial attacks. (Xiao et al., 2015) proposes an adversary-aware feature selection model that can improve a classifier's robustness against adversarial attacks by incorporating specific assumptions about the adversary's data manipulation strategy. (3) Another line of work, named adversarial training (Goodfellow et al., 2014), designs a new loss function for training neural networks, which is a linear interpolation of the loss function of the original sample and the loss function of the adversarial example generated from the original sample. A scalable version of adversarial training (Kurakin et al., 2016) was recently proposed: by applying several tricks, the authors apply adversarial training to deeper networks trained on the ImageNet dataset. (4) Multiple studies model adversarial scenarios with formal frameworks representing the interaction between the classifier and the adversary. Related efforts include perfect-information assumptions (Dalvi et al., 2004), assuming a polynomial number of membership queries (Lowd & Meek, 2005), formalizing the attack process as a two-person sequential Stackelberg game (Bruckner & Scheffer, 2011; Liu & Chawla, 2010), a min-max strategy (training a classifier with the best performance under the worst perturbation) (Dekel et al., 2010; Globerson & Roweis, 2006), exploring online and non-stationary learning (Dahlhaus, 1997; Cesa-Bianchi & Lugosi, 2006), and formalizing the problem as adversarial reinforcement learning (Uther & Veloso, 1997). (5) A PAC-model study about learning adversary behavior in security games (Sinha et al., 2016) also investigated computing the best defender strategy against the learned adversary behavior.

ARCA(f1) = Accuracy(f1) · ARC(f1, f2) / δ2    (12.3)

Theorem 12.3. f1 is strong-robust against adversarial examples if and only if ARC(f1)/δ2 = 1.

Proof.
If ARC(f1)/δ2 = 1, then based on Definition (12.1), the expected distance E[d2(x, x')] between a seed and its nearest adversarial example reaches its maximum value δ2, where

x' = argmin_{t∈X} d2(x, t)
subject to: f1(x) ≠ f1(t), d2(x, t) < δ2.

Hence no adversarial example exists within the δ2-ball a.e., which is the definition of strong-robustness.

To analyze the strong-robustness problem in the original feature space X, we assume it to be a generalized metric (pseudometric) space (X, d'1) for f1 and a generalized metric (pseudometric) space (X, d'2) for f2. Now we can analyze f1 and f2 on the same feature space X with respect to two different pseudometrics. This makes it possible to define a sufficient and necessary condition for determining the strong-robustness of f1 against adversarial perturbations.

Two recent studies (Moosavi-Dezfooli et al., 2015; Papernot et al., 2015b) propose two similar measures, both assuming d2 to be a norm function, but neither considers the importance of an oracle. More importantly, (Papernot et al., 2015b) does not provide any computable way to calculate the measure. In (Moosavi-Dezfooli et al., 2015), the measure is normalized by the size of the test samples, while no evidence exists to show that the size of the perturbation is related to the size of the test samples.

The fact that previous measures neglect the oracle f2 leads to a severe problem: the generated adversarial examples are not necessarily valid. This is because, if the size of the perturbation is too large, the oracle f2 may classify the perturbed sample into a different class (different from the class of the seed sample).

Investigating the behavior of machine learning systems in adversarial environments is an emerging topic (Huang et al., 2011; Barreno et al., 2006, 2010; Globerson & Roweis, 2006; Biggio et al., 2013; Kantchelian et al., 2015; Zhang et al., 2015). Recent studies can be roughly categorized into three types: (1) poisoning attacks, in which specially crafted attack points are injected into the training data. Multiple recent papers (Alfeld et al., 2016; Mei & Zhu, 2015b; Biggio et al., 2014, 2012; Mei & Zhu, 2015a) have considered the problem of an adversary being able to pollute the training data with the goal of influencing learning systems, including support vector machines (SVM), autoregressive models and topic models. (2) Evasion attacks, in which the adversary's goal is to create inputs that are misclassified by a deployed target classifier. Related studies (Szegedy et al., 2013; Goodfellow et al., 2014; Xu et al., 2016; Kantchelian et al., 2015; Rndic & Laskov, 2014; Biggio et al., 2013; Papernot et al., 2016b; Sinha et al., 2016) assume the adversary does not have an opportunity to influence the training data, but instead finds "adversarial examples" to evade a trained classifier like a DNN, SVM or random forest. (3) Privacy-aware machine learning (Duchi et al., 2014) is another important category relevant to data security in machine learning systems. Recent studies have proposed various strategies (Xie et al., 2014; Bojarski et al., 2014; Stoddard et al., 2014; Li & Zhou, 2015; Rajkumar & Agarwal, 2012; Dwork, 2011; Nock et al., 2015) to preserve the privacy of data, such as differential privacy. This paper focuses on evasion attacks, which are mostly used to attack classifiers that try to distinguish malicious behaviors from benign behaviors. Here we extend the term to a broader meaning: adversarial manipulation of test samples. Evasion attacks may be encountered during system deployment of machine learning methods in adversarial settings.
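The minimization x' = argmin_{t∈X} d2(x, t) s.t. f1(x) ≠ f1(t) above can be approximated for discrete inputs by scanning perturbation radii. A minimal brute-force sketch with hypothetical names (the classifier handle f1 and the l∞-sphere sampling are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def nearest_adversarial_radius(f1, x, delta2=10, n_trials=500, rng=None):
    """Estimate min ||x - x'||_inf over x' with f1(x') != f1(x).

    Scans radii 1..delta2-1; at each radius, samples random integer
    perturbations touching the l_inf sphere. Returns delta2 if no label
    change is found (the 'no adversarial example within the ball' case
    of Eq. (12.2)).
    """
    rng = rng or np.random.default_rng(0)
    y = f1(x)
    for radius in range(1, delta2):
        for _ in range(n_trials):
            noise = rng.integers(-radius, radius + 1, size=x.shape)
            # force at least one coordinate onto the sphere surface
            i = rng.integers(0, x.size)
            noise.flat[i] = radius * rng.choice([-1, 1])
            x_t = np.clip(x + noise, 0, 255)
            if f1(x_t) != y:
                return radius
    return delta2
```

ARC then averages this radius over test seeds and rescales by δ2.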
Previous studies               f1                              Δ(x, x')      Formulation of f1(x) ≠ f1(x')
(Goodfellow et al., 2014)      Convolutional neural networks   l∞            argmax_{x'} Loss(f1(x'), f1(x))
(Szegedy et al., 2013)         Convolutional neural networks   l2            argmin_{x'} Loss(f1(x'), l), subject to: l ≠ f1(x')
(Biggio et al., 2013)          Support vector machine (SVM)    l2            argmin_{x'} Loss(f1(x'), −1), subject to: f1(x) = 1
(Kantchelian et al., 2015)     Decision tree and random forest l2, l1, l∞    argmin_{x'} Loss(f1(x'), −1), subject to: f1(x) = 1
(Papernot et al., 2016a)       Convolutional neural networks   l0            argmax_{x'} Loss(f1(x'), f1(x))
(Grosse et al., 2016)          Convolutional neural networks   l0            argmax_{x'} Loss(f1(x'), f1(x))
(Xu et al., 2016)              Random forest and SVM           l1, l∞        argmin_{x'} Loss(f1(x'), −1), subject to: f1(x) = 1

As we have discussed in Section 4, both accuracy and robustness are important properties in determining whether a classification model is preferred or not. Therefore we combine accuracy and ARC into the unified measure ARCA (Eq. (12.3)).

The goal of machine learning is to train a learning-based predictor function f1 : X → Y to approximate an oracle classifier f2 : X → Y for the same classification task. For example, in image classification tasks, the oracle f2 is often a group of human annotators. Adding the notion of the oracle, we revise Eq. (2.1) into

s.t. f1(x) ≠ f1(x'),
Δ2(x, x') < ε,
f2(x) = f2(x')    (2.2)

S1 and S2 are bases of (X, τ1) and (X, τ2), respectively. First, we want to prove that given δ2 > 0, there exists δ1 > 0 such that if d'2(x, x') < δ2, then d'1(x, x') < δ1. Consider a pair of samples (x, x') with d'2(x, x') < δ2; then x, x' ∈ B2(x, δ2). Of course, B2(x, δ2) ∈ τ2. Suppose (X, d'1) is a finer topology than (X, d'2). Then B2(x, δ2) ∈ τ1. You can find B1(x0, δ1/2) ∈ τ1 such that B̄2(x, δ2) ⊆ B1(x0, δ1/2), where B̄2(x, δ2) is the closure of B2(x, δ2). Therefore d'1(x, x') ≤ δ1. Based on the a.e. continuity assumption of f1, since d'1(x, x') < δ1, f1(x) = f1(x') a.e. This means that P(f1(x) = f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2) = 1, which is our definition of strong-robustness. Next, we want to show that if f1 is strong-robust, then τ1 is a finer topology than τ2. Suppose f1 is strong-robust; we need to prove that ∀δ2 > 0, ∃δ1 > 0 such that if d'2(x, x') < δ2, then d'1(x, x') < δ1. Assume τ1 is not a finer topology than τ2. This means there exists a B2(x, δ2) such that B2(x, δ2) ∉ τ1. Therefore ∀δ1 > 0, there exists x' ∈ B2(x, δ2) such that d'2(x, x') < δ2 and d'1(x, x') > δ1. Based on the a.e. continuity assumption of f1, d'1(x, x') > δ1 indicates that f1(x) ≠ f1(x'). This contradicts the strong-robustness assumption. Thus, τ1 is a finer topology than τ2.

Δ2(x, x') < ε reflects that adversarial examples add "small modifications" that are almost imperceptible to the oracle of the task. Clearly, calculating Δ2(x, x') must accord with the oracle f2. For most classification tasks, an oracle does not measure the sample difference in the original input space X. We want to emphasize that sample difference is with regard to the classification purpose.
For instance, when labeling images for hand-written digit recognition, human annotators do not need to consider the background pixels to decide whether an image is a "0" or not.

12.4 USING "SIAMESE ARCHITECTURE" TO IMPROVE DNNS' ADVERSARIAL ROBUSTNESS

One intuitive formulation that we can use to improve a DNN's adversarial robustness is to solve the following:

argmin_w d1(g1(x; w), g1(x'; w)),  ∀x, x' ∈ X with d2(g2(x), g2(x')) < ε.    (12.5)

This essentially forces the DNN to achieve the finer-topology relationship between (X1, d1) and (X2, d2) by learning a better g1. We name the strategy that minimizes the loss defined in Eq. (12.5) "Siamese training", because this formulation uses the Siamese architecture (Bromley et al., 1993), a classical deep learning approach proposed for learning embeddings. We feed a slightly perturbed input x' together with its original seed x to the Siamese network, which contains two copies (sharing the same weights) of the DNN model we want to improve. By penalizing the difference between the middle-layer (g1(·)) outputs of (x, x'), Siamese training can push the two spaces (X1, d1) and (X2, d2) to approach the finer-topology relationship, and thus increase the robustness of the model. This can be concluded from Figure 8. By assuming d2(g2(x), g2(x')) is (approximately) equal to ‖Δ(x, x')‖, previous studies (summarized in Table 2) normally assume d2 is a norm function ‖·‖. For a pair of inputs (x, x') that are close to each other (i.e., ‖x − x'‖ is small) in (X, ‖·‖), Siamese training pushes them to also be close in (X1, d1). As a result, a sphere in (X1, d1) maps to a not-too-thin high-dimensional ellipsoid in (X, ‖·‖), so the adversarial robustness of the DNN model may improve after Siamese training. In experiments, we choose the Euclidean distance ‖·‖2 for d1(·) (however, many other choices are possible).

In Section 3, our theoretical analysis uses (X2, d2) to bring forth the fundamental causes of adversarial examples, leading to a set of novel insights for understanding such examples. To the best of the authors' knowledge, this theoretical analysis has not been covered by the literature.

P(f1(x) = f1(x') | f2(x) = f2(x'), d2(x, x') < δ2)
= 1 − P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d2(x, x') < δ2)
= 1 − P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d1(x, x') < δ1, d2(x, x') < δ2)
≥ 1 − η

Modeling Oracle f2: One may argue that it is hard to model f2 and (X2, d2) for a real application, since if such oracles could be easily modeled, a machine-learning-based f1 would seem unnecessary. In Section 8.3, we provide examples of modeling oracles for real applications. For many security-sensitive applications about machines, oracles f2 do exist.³ For artificial intelligence tasks like image classification, humans are f2. As illustrated by cognitive neuroscience papers (DiCarlo & Cox, 2007; DiCarlo et al., 2012), human brains perform visual object recognition using the ventral visual stream, and this stream is considered to be a progressive series of visual re-representations, from V1 to V2 to V4 to IT cortex (DiCarlo & Cox, 2007). Experimental results support that the human visual system makes the classification decision at the final IT cortex layer.
This process is captured exactly by our decomposition f2 = c2 ∘ g2.

P(f1(x) = f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2)
= 1 − P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2)
= 1 − P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d1(g1(x), g1(x')) < δ1, d2(g2(x), g2(x')) < δ2)
≥ 1 − η

Datasets: Currently, we use the following two image datasets to evaluate our models:

MNIST: MNIST, released in (LeCun et al., 1998), includes a task to classify handwritten digits. It has a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28×28-pixel black-and-white image of a handwritten digit.
CIFAR-10: CIFAR-10 is an image classification dataset released by (Krizhevsky & Hinton, 2009). The training set contains 50,000 32×32 color images in 10 classes, and the test set contains 10,000 32×32 color images.
VGG model: We choose a VGG model (Simonyan & Zisserman, 2014) as the base DNN model. The VGG model in our experiment has 16 weight layers (55 layers in total).

Definition 2.2. Adversarial example: Suppose we have two functions f1 and f2. f1 : X → Y is the classification function learned from a training set and f2 : X → Y is the classification function of the oracle that generates ground-truth labels for the same task. Given a sample x ∈ X, an adversarial example x' ∈ X satisfies Eq. (2.3):

s.t. f1(x) ≠ f1(x'),
d2(g2(x), g2(x')) < δ2,
f2(x) = f2(x')    (2.3)

Baseline: Four different strategies are compared through testing on adversarial examples (details in Section 12.2): (1) the original model; (2) stability training (Zheng et al., 2016); (3) Siamese training (alone); (4) adversarial training (Goodfellow et al., 2014; Kurakin et al., 2016), which uses adversarially perturbed x' together with the original samples x to train a DNN model.

Figure 4: An example figure illustrating Table 3, Case (III), when f1 is strong-robust. We assume c1 and c2 are linear classification functions. We show one case with X1 = X2 = R² where f1 and f2 are continuous a.e. In terms of classification, f1 (green boundary line) is not accurate according to f2 (red boundary line). All pairs of test samples (x, x') can be categorized into the three cases shown in this figure. Test-case (a): f1 and f2 assign the same classification label (yellow circle) to x and x'; x and x' are predicted as the same class by both. Test-case (b): f1 assigns the class "blue square" to both x and x', while f2 assigns the class "yellow circle" to both. Test-case (c): f2 assigns the class "yellow circle" to both x and x'; however, f1 assigns the class "blue square" to x and a different class, "yellow circle", to x'. This case is explained in Section 11.

The first column of Table 10 and Table 11 shows different levels of attack power (defined in Eq. (12.6)). The test accuracy reported in Figure 9(a), Figure 10(a), Table 10 and Table 11 shows that the different hardening approaches can reduce the effectiveness of the adversarial attacks. Details of our experimental set-up and datasets are included in Section 12.2.

Evaluation Metrics:

³Oracles f2 do exist in many security-sensitive applications about machines. But machine-learning classifiers f1 are used popularly due to speed or efficiency.

Illustrated in Figure 1, we denote the feature space an oracle uses to consider difference among samples for the purpose of the classification decision as X2. The sample difference uses a distance function d2 in this space.
An oracle function f2 : X → Y can be decomposed as f2 = c2 ∘ g2, where g2 : X → X2 represents the operations for feature extraction from X to X2 and c2 : X2 → Y denotes the simple operation of classification in X2. Essentially, g2 includes the operations that (progressively) transform input representations into an informative form of representations X2; c2 applies relatively simple functions (like linear ones) in X2 for the purpose of classification. d2 is the metric function (details in Section 3) an oracle uses to measure the similarity among samples (relying on the representations learned in the space X2). We illustrate the modeling and decomposition in Figure 1.

Most previous studies (Table 2) have made an important and implicit assumption about f2 (through using Δ(x, x') < ε): f2 is almost everywhere (a.e.) continuous. We explain the a.e. continuity assumption and its implications in Section 9. Basically, when f2 is assumed continuous a.e.,

P(f2(x) = f2(x') | d2(g2(x), g2(x')) < δ2) = 1.    (3.3)

s.t. f1(x) ≠ f1(x'), d2(g2(x), g2(x')) < δ2

¹⁶Stability training was shown to improve model robustness against Gaussian noise in (Zheng et al., 2016). Differently, our experiments focus on testing a learning model's robustness against "adversarial perturbations". The sole purpose of including this baseline is to show where state-of-the-art hardening strategies stand in our experimental setting.

Proof. Suppose n1 > n2 and X2 ⊂ X1. Then (X, d'2) is a strictly finer topology than (X, d'1). Therefore (X, d'1) is not a finer topology than (X, d'2), which indicates that f1 is not strong-robust against adversarial examples.

3.1 MODELING AND DECOMPOSING f1

As shown in Figure 1, we decompose f1 in a similar way to the decomposition of f2. This is to answer another key question: "which parts of a learned classifier influence its robustness against adversarial examples more, compared with the rest?". A machine-learning classifier f1 = c1 ∘ g1, where g1 : X → X1 represents the feature extraction operations and c1 : X1 → Y performs a simple operation (e.g., linear) of classification. Section 8.4 provides multiple examples of decomposing state-of-the-art f1. d1 denotes the distance function f1 uses to measure the difference among samples in X1.

All pairs of test samples (x, x') can be categorized into the three cases shown in both figures.

Almost all popular machine learning classifiers satisfy the a.e. continuity assumption. It means that, for almost every pair x, x' ∈ X, a small d1 distance between g1(x) and g1(x') implies f1(x) = f1(x').

Clearly from the two figures, c1 does not determine the strong-robustness of f1.

For the rare cases in which f1 is not continuous a.e., Section 11 discusses "boundary points" of f1. Roughly speaking, when f1 is not continuous a.e., the probability mass of pairs straddling f1's decision boundary is non-negligible and must be accounted for.

10.4.2 MORE ABOUT HOW EXTRA UNNECESSARY FEATURES RUIN STRONG-ROBUSTNESS

Figure 8: Sketch of Siamese training. Inputs are pairs of a seed sample and its randomly perturbed version, where we assume the d2 distance between the pair is small. By forwarding a pair into the Siamese network and penalizing the distance between the outputs of the pair, this training intuitively limits the d1 distance between two similar samples. Backpropagation is used to update the weights of the network.
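A minimal PyTorch sketch of this Siamese-training loss (the toy network, the perturbation scale and the penalty weight are hypothetical placeholders, not the exact recipe used in the experiments): two weight-sharing copies of the network embed a seed and its perturbed twin, and the ‖·‖2 distance between the embeddings g1(x) and g1(x') is penalized alongside the usual classification loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy CNN split into feature extractor g1 and linear classifier c1."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.g1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 64), nn.ReLU(),
        )
        self.c1 = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.c1(self.g1(x))

def siamese_step(model, opt, x, y, eps=0.05, lam=1.0):
    # Perturbed twin x' with small d2 (approximated here by a norm ball).
    x_prime = (x + eps * torch.randn_like(x)).clamp(0, 1)
    # Both inputs pass through the SAME g1, i.e., shared weights.
    z, z_prime = model.g1(x), model.g1(x_prime)
    # Classification loss + penalty on d1(g1(x), g1(x')) = ||z - z'||_2.
    loss = F.cross_entropy(model.c1(z), y) \
         + lam * (z - z_prime).norm(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

model = SmallNet()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(siamese_step(model, opt, x, y))
```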
In real-world applications, such attacks can be, for example, adding words with a very tiny font size in a spam e-mail, invisible to a human annotator. When a learning-based classifier tries to utilize such extra words (unnecessary for a human), this can lead to many easily generated adversarial emails.

P(f1(x) ≠ f1(x') | f2(x) = f2(x'))

As another example, one previous study (Xu et al., 2016) shows that a genetic-programming-based adversarial example strategy can always evade two state-of-the-art learning-based PDF-malware classifiers (with "100%" evasion rates). The reason behind such good evasion rates is Condition (4.1). Both state-of-the-art PDF-malware classifiers use many superficial features (e.g., a feature representing "is there a long comment section") that are not relevant to "the malicious property" of a PDF sample at all!

Test accuracy: We use top-1 test accuracy as the performance metric. It is defined as the number of successfully classified samples divided by the number of all test samples. The base model achieves this accuracy when there is no adversarial attack.
ARC (Eq. (12.2)): We use ARC to measure the adversarial robustness of each model; n is chosen to be 10.
ARCA (Eq. (12.3)): We use ARCA to measure the total performance of each model.

3.2 {δ2, η}-STRONG-ROBUSTNESS AGAINST ADVERSARIAL EXAMPLES

We generate adversarial examples using the fast gradient sign method, in which the power of the adversarial attack can be easily controlled. By controlling the power of fast-gradient-sign attacks, we can obtain a complete view of how the accuracy changes according to different attack powers.

∀x, x' ∈ X: P(f1(x) = f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2) > 1 − η    (3.2)

In the following analysis, the attack power is defined as the magnitude ε of the fast-gradient-sign perturbation (Eq. (12.6)).

For image classification tasks, we control the perturbed sample so that it still lies in the valid input space, so that every dimension of the perturbed samples is in the range of integers between 0 and 255.

Figure 5: An example figure illustrating Table 3, Case (IV), when f1 is strong-robust. We assume c1 and c2 are linear classification functions. We show one case with 1 = n1 < n2 = 2, X1 ⊂ X2, where f1 and f2 are continuous a.e. In terms of classification, f1 (green boundary line) is not accurate according to f2 (red boundary line). All pairs of test samples (x, x') can be categorized into the three cases shown in this figure. Test-case (a): f1 and f2 assign the same classification label (yellow circle) to x and x'; x and x' are predicted as the same class by both. Test-case (b): f1 assigns the class "yellow circle" to both x and x', while f2 assigns the class "blue square" to both. Test-case (c): f2 assigns the class "yellow circle" to both x and x'; however, f1 assigns the class "blue square" to x and a different class, "yellow circle", to x'. This case is explained in Section 11.

Eq. (3.2) defines the "{δ2, η}-strong-robustness" as a claim that holds with high probability. To simplify notation, in the rest of this paper we use "strong-robust" to represent "{δ2, η}-strong-robust". Also, in the rest of this paper we propose and prove theorems and corollaries using the more general form in Eq. (3.2). For all cases, if f2 is continuous a.e., all proofs and equations can be simplified by using only the term d2(g2(x), g2(x')) < δ2 (i.e., removing the term f2(x) = f2(x')) according to Eq. (3.3).
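As a concrete illustration of the attack protocol described above, here is a minimal fast-gradient-sign sketch in PyTorch (the model handle is a hypothetical placeholder); the attack power corresponds to the integer step size on the 0-255 pixel scale, and the result is clipped back to the valid input range:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, power):
    """Fast gradient sign attack with integer 'attack power' on 0-255 pixels.

    x: float tensor of raw pixel values in [0, 255], shape (N, C, H, W)
    y: ground-truth labels; power: perturbation size in pixel units.
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step 'power' pixel levels along the sign of the gradient,
    # then clip back into the valid discrete input space.
    x_adv = x + power * x.grad.sign()
    return x_adv.detach().round().clamp(0, 255)
```

Sweeping power from 0 to 10 reproduces the accuracy-versus-power protocol behind Figures 9 and 10.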
When f1 is not continuous a.e., the analysis of adversarial examples needs to consider "boundary points" of f1 with certain properties. This section tries to clarify the definition and the related scope.

The "strong-robustness" definition leads to four important theorems in the next two subsections.

With a more accurate definition of "adversarial examples", we now aim to answer the first central question: "What makes a classifier always robust against adversarial examples?". Section 3.2 defines the concept "strong-robust", describing a classifier that is always robust against adversarial examples. Sections 3.3 and 3.4 present sufficient and necessary conditions for "strong-robustness". Section 4 then provides a set of theoretical insights to understand "strong-robustness".

Figure 4 uses an example to illustrate Table 3, Case (III), when f1 is strong-robust. We show one case with X1 = X2 = R² where f1 and f2 are continuous a.e. In terms of classification, f1 (green boundary line) is not accurate according to f2 (red boundary line).

Figure 5 uses an example figure to illustrate Table 3, Case (IV), when f1 is strong-robust. We show one case with 1 = n1 < n2 = 2, X1 ⊂ X2, where f1 and f2 are continuous a.e. In terms of classification, f1 (green boundary line) is not accurate according to f2 (red boundary line).

Test-case (a) is when x and x' are predicted as the same class by both, and f1's predictions are correct according to f2. There exist no adversarial examples. Test-case (b) is when x and x' are predicted as the same class by both, but f1's predictions are incorrect according to f2. There still exist no adversarial examples. Test-case (c) shows f1(x) ≠ f1(x'), d2(x, x') < δ2 and f2(x) = f2(x'). This case is explained in Section 11. Essentially, this is about "boundary-based adversarial examples", which can only attack points whose distance to the boundary of f1 is smaller than δ2 (f1(x) ≠ f1(x'), d2(x, x') < δ2 and f2(x) = f2(x')). When f1 is continuous a.e., the probability of this set is 0.

We then apply reverse thinking to Definition (2.2) and derive the following definition of strong-robustness for a machine learning classifier against adversarial examples:

Definition 3.1. {δ2, η}-strong-robustness of a machine-learning classifier: A machine-learning classifier f1(·) is {δ2, η}-strong-robust against adversarial examples if ∀x, x' ∈ X a.e., (x, x') satisfies Eq. (3.2).

Table 3 indicates that training a strong-robust and accurate classifier in practice is extremely difficult. For instance, Figure 2 shows that only one extra irrelevant feature, which does not hurt accuracy, makes the classifier not robust to adversarial perturbation at all (i.e., for samples a.e. in X, it is easy to find adversarial examples).
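Definition 3.1 suggests a direct Monte-Carlo check. A minimal sketch (the classifier f1 and the oracle-metric surrogate d2 are hypothetical placeholders): estimate P(f1(x) = f1(x') | d2(x, x') < δ2) over sampled pairs and compare it against 1 − η.

```python
import numpy as np

def estimate_strong_robustness(f1, d2, X, delta2, n_pairs=10000, rng=None):
    """Monte-Carlo estimate of P(f1(x) = f1(x') | d2(x, x') < delta2).

    X: array of test samples; pairs are formed by jittering each sampled
    seed, keeping only pairs that the oracle metric deems close. Assumes
    the jitter scale produces enough d2-close pairs to terminate.
    """
    rng = rng or np.random.default_rng(0)
    agree, total = 0, 0
    while total < n_pairs:
        x = X[rng.integers(len(X))]
        x_prime = x + rng.normal(scale=delta2 / 4, size=x.shape)
        if d2(x, x_prime) < delta2:          # condition on oracle closeness
            total += 1
            agree += int(f1(x) == f1(x_prime))
    return agree / total

# f1 is (empirically) {delta2, eta}-strong-robust if the estimate > 1 - eta.
```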
One case with this probability as 0 is illustrated by Figure[2] Case (III) and Case (IV) from Table[3|have this probability equaling to 1."}, {"section_index": "14", "section_name": "1 1 BOUNDARY POINTS OF f1 MATTER FOR ADVERSARIAL EXAMPLES WHEN f1 IS NOT CONTINUOUS A.E.", "section_text": "Our definition of the boundary points describes such points as pairs of samples that are across the. classification boundary. This format of definition makes the following analysis (notation-wise) easy and concise.\nFor the purpose of \"fooling\" a classifier, naturally, the attacker wants to control the size of the perturbation (x, x') to ensure the perturbed sample x' still stays close enough to the original sample\nx to satisfy the intended \"fooling\" purpose. For example, in the image classification case, Eq. (2.1] can use the gradient information to find a (x, x' ) that makes human annotators still recognize x' as almost the same as x. though the classifier will predict x' into a different class. In another example with more obvious security implications about PDF malware (Xu et al.|2016), x' in Eq. (2.1) is found by genetic programming. A modified PDF file from a malicious PDF seed will still be recognized as malicious by an oracle machine (i.e., a virtual machine decides if a PDF file is malicious or not by actually running it), but are classified as benign by state-of-art machine learning classifiers (Xu et al. 2016).\n3.3 TOPOLOGICAL EQUIVALENCE OF TwO METRIC SPACES (X1, d1) AND (X2, d2) IS SUFFICIENT IN DETERMINING STRONG-ROBUSTNESS\nTable 10: Test accuracy for different training strategies on CIFAR-10 dataset Attack power (Eq. (12.6) Original model Stability Training Siamese Training\nAttack power (Eq. (12.6)) Original model. Stability Training Siamese Training 0 93.95% 93.81% 93.96% 1 78.07% 78.01% 93.88% 2 61.38% 60.34% 90.13% 3 50.07% 49.21% 86.73% 4 42.86% 41.51% 83.85% 5 37.67% 36.33% 81.21% 6 33.60% 32.08% 78.61% 7 29.70% 28.09% 76.09% 8 26.23% 25.11% 73.21% 9 23.53% 22.43% 69.67% 10 20.92% 20.25% 65.98% ARC 4.9798 4.8717 8.9332 ARCA 0.4253 0.4155 0.7631\nIf the topological equivalence ( Eq. (10.1) exists between (X1, d1) and (X2, d2), it means that for all pair of samples from X, we have the following relationship:.\nd1(g1(x),g1(x))< 01 H> d2(g2(x),g2(x)) < 0z\nSubject to: f1(x) f1(x)\nTheorem 3.2. When f1 is continuous a.e., if (X1, d1) and (X2, d2) are topologically equivalent then the learned classifier f1() is strong-robust to adversarial examples.\nTable Iest accuracy Slldlegles oll Mi dalasel. Attack power Original model Adversarial Training Stability Training Siamese Training 0 98.98% 98.96% 99.06% 99.03% 1 98.75% 98.84% 98.94% 98.84% 2 98.44% 98.63% 98.60% 98.47% 3 98.10% 98.41% 98.29% 98.16% 4 97.56% 98.12% 97.80% 97.78% 5 97.09% 97.80% 97.47% 97.26% 6 96.23% 97.38% 97.01% 96.56% 7 95.43% 96.96% 96.23% 95.81% 8 94.22% 96.47% 95.37% 95.01% 9 92.95% 96.06% 94.49% 93.89% 10 91.53% 95.57% 93.30% 92.76% ARC 10.5928 10.732 10.6656 10.6357 ARCA 0.953159 0.96549 0.960486 0.957503\nBesides, in the field of computer security, machine learning has been popular in classifying the malicious (y = 1) behavior versus benign behavior (y = -1). For such a context, two different definitions of adversarial examples exist in the literature:.\nFor instance. Biggio et al 2013) uses a formula as follows:\nP(f1(x)= f1(x)[f2(x)=f2(x)\nBattista Biggio, Giorgio Fumera, and Fabio Roli. Adversarial pattern classification using multi ple classifiers and randomisation. 
3.4 FINER TOPOLOGY OF (X, d'1) THAN (X, d'2) IS SUFFICIENT AND NECESSARY IN DETERMINING STRONG-ROBUSTNESS

To fool classifiers at test time, several approaches have been implemented to generate "adversarial perturbations" by solving Eq. (2.2). According to Eq. (2.2), an adversarial example should be able to change the classification result f1(x), which is a discrete value. To solve Eq. (2.2), we need to transform the constraint f1(x) ≠ f1(x') into an optimizable formulation. Then we can easily use a Lagrangian multiplier to solve Eq. (2.2). All the previous studies define a loss function Loss(·,·) to quantify the constraint f1(x) ≠ f1(x'). This loss function can be the same as the training loss, or it can be chosen differently, such as a hinge loss or cross-entropy loss.

This lemma shows that a case with a probability of boundary points larger than 0 is exactly the situation in which f1 is not continuous a.e.

Battista Biggio, Samuel Rota Bulo, Ignazio Pillai, Michele Mura, Eyasu Zemene Mequanint, Marcello Pelillo, and Fabio Roli. Poisoning complete-linkage hierarchical clustering. In Structural, Syntactic, and Statistical Pattern Recognition, pp. 42-52. Springer Berlin Heidelberg, 2014.

The third column of Figure 6 describes "boundary-based adversarial examples" that can only attack seed samples whose distance to the boundary of f1 is smaller than δ2. Essentially, this attack is about those boundary points of f1 that are treated as similar and belonging to the same class by f2; that is, f1(x) ≠ f1(x'), d2(g2(x), g2(x')) < δ2 and f2(x) = f2(x').

Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, and Yann LeCun. Differentially-and non-differentially-private random decision trees. arXiv preprint arXiv:1410.6973, 2014.

∀x, x' ∈ X: d2(g2(x), g2(x')) < δ2 ⇒ d1(g1(x), g1(x')) < δ1    (3.7)

Jane Bromley, James W Bentz, Leon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Sackinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.

Using Eq. (3.7) and the continuity a.e. assumption, we can derive the following theorem about the sufficient and necessary condition for f1 being strong-robust:

Theorem 3.4. When f1 is continuous a.e., f1 is strong-robust against adversarial examples if and only if the topology in (X, d'1) is a finer topology than the topology in (X, d'2).

We summarize four common attacking studies as follows:

Gradient ascent method (Biggio et al., 2013): Machine learning has been popular for classifying malicious (y = 1) versus benign (y = −1) behavior in computer security tasks. For such contexts, a simple way to solve Eq. (2.2) is through gradient ascent. To minimize the size of the perturbation and maximize the adversarial effect, the perturbation should follow the gradient direction (i.e., the direction providing the largest increase of the function value, here from y = −1 to 1). Therefore, the perturbation r in each iteration is calculated as a step along the gradient of the discriminant, r = ε · ∇x f1(x), as in the sketch below.
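A minimal sketch of this gradient-ascent evasion loop (the differentiable discriminant f1, its gradient oracle and the step size are hypothetical placeholders): starting from a benign-scored sample, repeatedly step along ∇x f1 until the score crosses the decision threshold or the perturbation budget d_max is spent.

```python
import numpy as np

def gradient_ascent_attack(f1, grad_f1, x, step=0.1, d_max=1.0, max_iter=100):
    """Evade a score-based classifier by ascending its discriminant f1.

    f1(x) > 0 is classified as one class and f1(x) < 0 as the other;
    grad_f1 returns the gradient of the discriminant at x. Stops when the
    label flips or the l2 perturbation budget d_max is exhausted.
    """
    x_adv = x.copy()
    for _ in range(max_iter):
        if f1(x_adv) > 0:                      # label flipped: attack done
            return x_adv
        r = step * grad_f1(x_adv)              # follow the gradient direction
        if np.linalg.norm((x_adv + r) - x) > d_max:
            break                              # budget Delta(x, x') < d_max
        x_adv = x_adv + r
    return x_adv
```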
In the appendix, Section 10.1 briefly introduces the concept of metric space and the definition of topological equivalence between two metric spaces. As shown in Figure 1, f1 defines a metric space (X1, d1) on X1 with the metric function d1. Similarly, f2 defines a metric space (X2, d2) on X2 with the metric function d2:

d1(g1(x), g1(x')) < δ1 ⇔ d2(g2(x), g2(x')) < δ2

When f1 is continuous a.e., this yields the following important theorem, indicating that topological equivalence between (X1, d1) and (X2, d2) is a sufficient condition for determining whether f1 is strong-robust against adversarial examples.

Eq. (8.1) tries to find the x' minimizing Δ(x, x') under some constraints. Eq. (2.1) is a more general formulation than Eq. (8.1) and can summarize most relevant studies. For example, in (Xu et al., 2016), "adversarial examples" are those generated PDFs that can fool PDFRate (a learning-based classifier for detecting malicious PDFs) into classifying them as benign. The distances of these variant PDFs to the seed PDF are not necessarily minimal. For such cases, Eq. (2.1) still fits, while Eq. (8.1) does not.

Figure 6: An example showing boundary points of f1 and boundary points of f2. We assume f1 and f2 are continuous a.e., and c1 and c2 are linear classification functions. The first two columns show boundary points of f2, which are not considered in this paper. The third column describes "boundary-based adversarial attacks" that can only attack seed samples whose distance to the boundary of f1 is smaller than ε. Essentially, this attack is about those boundary points of f1 that are treated as similar and belonging to the same class by f2.

The other definition requires the adversarial variant to be misclassified:

x' = argmin_{x'} Δ(x, x')   s.t. f1(x') < 0, f1(x) > 0

Here d_max is a small positive constant. These definitions of "adversarial examples" are special cases of Eq. (8.1) and Eq. (2.1).

For more general cases, including when f1 might not be continuous a.e., we need to consider the probability of boundary-point attacks (Eq. (3.1)). Therefore, we get a more general theorem as follows.

Now we extend the discussion from two metric spaces to two pseudometric spaces. This extension finds the sufficient and necessary condition that determines the strong-robustness of f1. The related two pseudometrics are d'1 (for f1) and d'2 (for f2), both directly defined on X. Appendix Section 10.2 includes detailed descriptions of pseudometrics, pseudometric spaces, topology and the finer-topology relationship between two pseudometric spaces.

Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases, pp. 387-402. Springer, 2013.

In addition, we want to point out that all boundary pairs of f2 (satisfying f2(x) ≠ f2(x') and d2(g2(x), g2(x')) < δ2) are not considered in our analysis of adversarial examples. Figure 6 illustrates three types of boundary points, using the first two columns to show boundary points of f2.

The value of this probability is critical for our analysis in Theorem (3.3) and in Theorem (3.5). Again, we want to emphasize that most machine learning methods assume f1 is continuous a.e., and therefore "boundary-based adversarial attacks" are not crucial.
When f1 is not continuous a.e., we need to consider the probability of boundary-point-based adversarial examples (Eq. (3.1)). For such a case, we get a sufficient condition⁸ for the strong-robustness:

Theorem 3.5. When f1 is not continuous a.e., if the topology in (X, d'1) is a finer topology than the topology in (X, d'2) and P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d1(g1(x), g1(x')) < δ1, d2(g2(x), g2(x')) < δ2) < η, then f1 is strong-robust against adversarial examples.

x' = x + ε · sign(∇x Loss(f1(x), f1(x))). Here the loss function is the function used to train the neural network. A recent paper (Kurakin et al., 2016) shows that adversarial examples generated by the fast gradient sign method are misclassified even after these images have been recaptured by cameras.

Figure 7: When f1 is not continuous a.e., the strong-robustness of f1 is determined by both g1 and c1. We assume c1 and c2 are linear classification functions. This figure shows that when (1) the sample space X is finite, (2) f1 learns a wrong decision boundary, and (3) the probability of test samples around f1's decision boundary is large, f1 is not strong-robust against adversarial examples. However, we want to emphasize that this situation is very rare for a well-trained classifier f1.

Figure 9: Results on CIFAR-10: (a) Test accuracy under adversarial example attacks; three different colors for three different training strategies (details in Section 12.2). We do not include the result of adversarial training because previous adversarial training cannot be used on networks with batch normalization; some tricks for training such networks were released in a recent paper (Kurakin et al., 2016). (b) ARC and ARCA for three different training strategies under adversarial example attacks.

When f1 is not continuous a.e., its strong-robustness is significantly influenced by its boundary points and therefore relates to the c1 function. Section 11.2 provides some discussion, and we omit covering such cases in the rest of this paper.

Jacobian-based saliency map approach (Papernot et al., 2015a): (Papernot et al., 2015a) proposed the Jacobian-based saliency map approach (JSMA) to search for adversarial samples while limiting the number of pixels to modify in the image. As a targeted attack, JSMA iteratively perturbs pixels in an input that have large adversarial saliency scores. The adversarial saliency map is calculated from the Jacobian (gradient) matrix ∇x f1(x) of the DNN model at the current input x. The (i, j)-th component of the Jacobian matrix ∇x f1(x) describes the derivative of output class j with respect to feature pixel i. For each pixel i, its adversarial saliency score is calculated to reflect how this pixel will increase the output score of class j while changing the scores of the other possible output classes. The process is repeated until misclassification into the target class is achieved or the maximum number of perturbed pixels has been reached. Essentially, JSMA optimizes Eq. (2.2) by measuring the perturbation Δ(x, x') through the l0-norm. A simplified sketch of the saliency computation follows.
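A simplified PyTorch sketch of the saliency map underlying JSMA (the scoring rule below is the basic increase-target/decrease-others form; the model handle is a hypothetical placeholder):

```python
import torch

def jsma_saliency(model, x, target):
    """Compute a simplified JSMA saliency score per input feature.

    Returns, for each pixel i, a positive score when increasing it both
    raises the target-class score and lowers the sum of the other class
    scores; zero otherwise (the basic saliency rule from JSMA).
    """
    x = x.clone().requires_grad_(True)
    logits = model(x.unsqueeze(0)).squeeze(0)      # shape: (n_classes,)
    n_classes = logits.shape[0]
    # Jacobian: gradient of each class score w.r.t. every input feature.
    jac = torch.stack([
        torch.autograd.grad(logits[j], x, retain_graph=True)[0].flatten()
        for j in range(n_classes)
    ])                                             # (n_classes, n_features)
    d_target = jac[target]
    d_others = jac.sum(dim=0) - d_target
    # Saliency is positive only where the target score rises and others fall.
    mask = (d_target > 0) & (d_others < 0)
    return torch.where(mask, d_target * d_others.abs(),
                       torch.zeros_like(d_target))
```

JSMA then perturbs the highest-saliency pixel(s) and repeats until the target class is reached or the l0 budget is exhausted.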
TOWARDS PRINCIPLED UNDERSTANDING

4.1 UNNECESSARY FEATURES RUIN STRONG-ROBUSTNESS

Though difficult, we want to argue that it is possible to theoretically model "oracles" for some state-of-the-art applications. For instance, as illustrated by the seminal cognitive neuroscience paper "Untangling invariant object recognition" (DiCarlo & Cox, 2007) and its follow-up study (DiCarlo et al., 2012), the authors show that one can view the information processing of visual object recognition by human brains as the process of finding operations that progressively transform retinal representations into a new form of representation (X2 in this paper), followed by the application of relatively simple decision functions (e.g., linear classifiers (Duda et al., 2012)). More specifically, in humans and other primates, such visual recognition takes place along the ventral visual stream, and this stream is considered to be a progressive series of visual re-representations, from V1 to V2 to V4 to IT cortex (DiCarlo & Cox, 2007). Multiple relevant studies (e.g., (DiCarlo & Cox, 2007; Johnson, 1980; Hung et al., 2005)) have argued that this viewpoint of representation learning plus a simple decision function is more productive than hypothesizing that brains directly learn very complex (highly non-linear) decision functions that operate on the retinal image representation. This is because the experimental evidence suggests that this view takes the problem apart in a way that is consistent with the architecture and response properties of the ventral visual stream. Besides, simple decision functions can easily be implemented in a single step of biologically plausible neuronal processing (i.e., a thresholded sum over weighted synapses).

P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2)
= #{(x, x') | f2(x) = f2(x') & d2(g2(x), g2(x')) < δ2 & f1(x) ≠ f1(x')} / #{(x, x') | f2(x) = f2(x') & d2(g2(x), g2(x')) < δ2}    (11.4)

Corollary 4.1. When f1 is continuous a.e., if X1 = R^{n1}, X2 = R^{n2}, n1 > n2, X2 ⊂ X1, and d1, d2 are norm functions, then f1(·) is not strong-robust against adversarial examples.

This corollary shows that if unnecessary features (with regard to X2) are selected in the feature selection step, then no matter how accurately the model is trained, it is not strong-robust to adversarial examples.

Figure 2 shows a situation in which the oracle for the current task only needs one feature to classify samples correctly. A machine learning classifier extracts two features, one used by the oracle and the other an extra unnecessary feature.⁹ In X1, f1 (actually c1) successfully classifies all the test inputs. However, it is very easy to find adversarial examples satisfying Eq. (2.4) by only adding a small perturbation along the unnecessary feature dimension. In Figure 2, red circles show a few such adversarial examples. The adversarial examples are very close to the seed samples in the oracle space, but they are predicted into a different class by f1. A minimal numeric sketch of this failure mode is given after the figure caption below.

Figure 10: (a) Test accuracy under adversarial example attacks on the MNIST dataset: four different colors for four different training strategies (details in Section 12.2). (b) ARC and ARCA for four different training strategies under adversarial example attacks.
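A minimal numeric sketch of the failure mode in Figure 2 (the classifier weights and the oracle rule are hypothetical toy choices): the oracle reads only feature 0, while the learned linear classifier also weights an irrelevant feature 1, so a tiny push along feature 1 flips f1 without changing the oracle's label.

```python
import numpy as np

def oracle_f2(x):
    return 1 if x[0] > 0.5 else -1         # uses only the relevant feature

def learned_f1(x, w=np.array([1.0, 3.0]), b=-0.5):
    return 1 if w @ x + b > 0 else -1      # also trusts irrelevant feature 1

x = np.array([0.6, 0.0])                   # correctly classified seed
x_adv = x + np.array([0.0, -0.04])         # tiny step along feature 1 only

print(oracle_f2(x), oracle_f2(x_adv))      # 1 1  (oracle unchanged)
print(learned_f1(x), learned_f1(x_adv))    # 1 -1 (f1 flipped: adversarial)
```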
(Details in Section|12.2) (b) ARC and ARCA for four different training strategies under adversarial example attacks\nFor many security sensitive applications, previous studies using state-of-art learning-based classifiers normally believe that adding more features is always helpful. Apparently, our corollary indicates that\nAs another example, the authors of (Xu et al.]2016) used genetic programming to find \"adversarial examples\" (by solving Eq. (2.2)) for a learning-based malicious-PDF classifier. This search needs an oracle to determine if a variant x' preserves the malicious behavior of a seed PDF x (i.e., f2(x) = f2(x')). The authors of (Xu et al.[2016) therefore used the Cuckoo sandbox (a malware analysis system through actual execution) to run a variant PDF sample in a virtual machine installed with a PDF reader and reported the behavior of the sample including network APIs calls. By comparing the\n8When f1 is not continuous a.e., it is difficult to find the necessary and sufficient condition for strong robustness of f1. We leave this to future research. 9Two features of X1 actually positively correlate in Figure[2 However, the oracle does not need to use the second feature for making classifcation decisior.\nNicholas Carlini and David Wagner. Defensive distillation is not robust to adversarial examples arXiv preprint arXiv:1607.04311, 2016a.\nTable 3: Summary of theoretical conclusions that we can derive. Here Xj = Rn1 and X, = Rn2 The strong-robustness is determined by feature extraction function g1. The accuracy is determined by both the classification function c1 and the feature extraction function g1..\nCases: d1&d2 are norms Can be accurate? Based on Illustration (I) Xi\\(XiNX2)F0, Not Strong-robust may not be accurate Theorem Figure2 (3.4 X2 X1 (II) n1 > n2,X2 C X1 Not strong-robust may be accurate Corollary (4.1) Figure2 (III) n2,X1 = X2 Strong-robust. may be accurate Corollary n1 = (4.2] Figure (IV) n1< n2,X1 C X2 Strong-robust. may not be accurate Theorem (3.4) Figure[5\nHere p is the total number of features, c is a term added for the Lagrange multiplier. (for an image classification task, it is 3 times the total number of pixels of an RGB image) l is a target label, which. is different from the original label. The constraint x + r E [0, 1|P means that the adversarial example. is still in the range of sample space..\nFast gradient sign method (Goodfellow et al. 2014) The fast gradient sign method proposed by (Goodfellow et al.][2014) views d2 as the loo-norm. In this case, a natural choice is to make the. attack strength at every feature dimension the same. The perturbation is obtained by solving the following equation:\nnin(c d2(x,x+r)- Loss(f1(x+r),f1(x))),x+r E [0,1\nThe four theorems proposed above lead to a set of key insights about why and how an adversarial can fool a machine-learning classifier using adversarial examples. One of the most valuable insights is. feature learning step decides whether a predictor is strong-robust or not in an adversarial test setting All the discussions in the subsection assume f1 is continuous a.e..\nTheorem (3.2) and Theorem (3.4) indicate that when f1 is continuous a.e., the two feature spaces (X1, d1) and (X2, d2) or the functions g1 and g2 determine the strong-robustness of f1. Based on. Theorem (3.4), we can derive a corollary as follows (proof in Section10.3.1):\nThis is exactly the proportion of those pairs of points for which f1 classifies them into different classes and f2 treats them as similar and \"same-class\" samples. 
For this case, both g1 and c1 matter for the strong-robustness of f1. See Appendix Section|11.2|for an example showing how c1 makes f1 not strong robust.\nBased on Eq. (11.4), when f1 is not continuous a.e., the strong-robustness of f1 is determined by both g1 and c1. Figure7|shows an exemplar case in which X has only ten samples (i.e. X = 10) We assume the learned f1 and the oracle f2 derive the same feature space, i.e., X1 = X2. And we also assume f1 performs the classification very badly because the decision boundary (by c1) on X1 is largely different from the decision boundary on X2. The probability of \"adversarial examples\" in this case can be calculated by using Eq. q11.4). We get P(f1(x) f1(x')|f2(x) = f2(x'), d1(91(x), g1(x')) < 01) = 2*3 = 0.6. 5*2\nClearly in this case, c1 matters for the strong-robustness (when f1 is not a.e. continuous). This figure. indicates that when (1) sample space X is finite, (2) f1 learns a wrong decision boundary and (3) the. probability of test samples around f1's decision boundary is large, f1 is not strong-robust against adversarial examples. However, we want to point out that this situation is very rare for a well-trained classifier f1."}]
H12GRgcxg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The presence of class label noise inherent to training samples has been reported to deteriorate the. performance of even the best classifiers in a broad range of classification problems (Nettleton et al. (2010), Pechenizkiy et al.(2006),Zhu & Wu(2004)). Noisy labels also tend to be more harmfu than noisy attributes (Zhu & Wu (2004)). Noisy data are usually related to the data collectioi process. Typically, the labels used to train a classifier are assumed to be unambiguous and accurate. However, this assumption often does not hold since labels that are provided by human judgments. are subjective. Many of the largest image datasets have been extracted from social networks. These. images are labeled by non-expert users and building a consistent model based on a precisely labelec. training set is very tedious. Mislabeling examples have been reported even in critical application. such as biomedical datasets where the available data are restricted (Alon et al.(1999)). A very. common approach to noisy datasets is to remove the suspect samples in a preprocessing stage or have. them relabeled by a data expert (Brodley & Friedl(1999)). However, these methods are not scalable. and may run the risk of removing crucial examples that can impact small datasets considerably.."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "D. Nettleton, A. Orriols-Puig, and A. Fornells. A study of the effect of different types of noise on. the precision of supervised learning techniques. Artificial intelligence review, 2010.. M. Pechenizkiy, A. Tsymbal, S. Puuronen, and O. Pechenizkiy. Class noise and supervised learn- ing in medical domains: The effect of feature extraction. In Computer-Based Medical Systems (CBMS), 2006. S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich. Training deep neural networks on noisy labels with bootstrapping. In arXiv preprint arXiv:1412.6596, 2014.. S. Sukhbaatar and R. Fergus. Learning from noisy labels with deep neural networks. In arXiv preprint arXiv:1406.2080, 2014. X. Zhu and X. Wu. Class noise vs. attribute noise: A quantitative study. Artificial Intelligence Review, 22(3):177-210, 2004.\nVariants that are noise robust have been proposed for the most common classifiers such as logistic- regression and SVM (Frenay & Verleysen(2014),Jakramate & Kaban(2012),Beigman & Klebanov (2009)). However, classifiers based on label-noise robust algorithms are still affected by label noise. From a theoretical point of view, Bartlett et al.[(2006) showed that most loss functions are not com. pletely robust to label noise. Natarajan et al.(2013) proposed a generic unbiased estimator for binary classification with noisy labels. They developed a surrogate cost function that can be expressed by a weighted sum of the original cost functions, and provided asymptotic bounds for performance Grandvalet & Bengio (2005) addressed the problem of missing labels that can be viewed as an ex- treme case of noisy label data. They suggested a semi-supervised algorithm that encourages the classifier to predict the non-labeled data with high confidence by adding a regularization term to the cost function. The problem of classification with label noise is an active research area. 
Comprehen sive up-to-date reviews of both the theoretical and applied aspects of classification with label noise can be found inFrenay & Kaban(2014) and Frenay & Verleysen(2014)."}, {"section_index": "2", "section_name": "A PROBABILISTIC FRAMEWORK FOR NOISY LABELS", "section_text": "Assume we want to train a multi-class neural-network soft-classifier p(y = i[x; w) where x is the feature vector, w is the network parameter-set and i is a member of the class-set {1, ..., k}. We further assume that in the training process we cannot directly observe the correct label y. Instead. we only have access to a noisy version of it denoted by z. Here we follow the probabilistic modeling and the EM learning approach described in Bekker & Goldberger(2016). In this approach noise generation is assumed to be independent of the features and is modeled by a parameter 0(i, j) = p(z = j|y = i). The noise distribution is unknown and we want to learn it as part of the training phase. The probability of observing a noisy label z given the feature vector x is:\nwhere k is the number of classes. The model is illustrated in the following diagram\nIn the training phase we are given n feature vectors x1,..., xn with the corresponding noisy la bels z1, ..., n which are viewed as noisy versions of the correct hidden labels y1, .., Yn. The log likelihood of the model parameters is:\nn k (w,0) = ) log p(zt|yt=i;O)p(yt =i|xt;W t=1 i=1\nBased on the training data, the goal is to find both the noise distribution 0 and the Neural Network parameters w that maximize the likelihood function. Since the random variables y1, ..., yn are hid- den, we can apply the EM algorithm to find the maximum-likelihood parameter set. In the E-step of\nn spite of the huge success of deep learning there are not many studies that have explicitly attempted to address the problem of Neural Net (NN) training using data with unreliable labels.Larsen et al.. (1998) introduced a single noise parameter that can be calculated by adding a new regularization. term and cross validation.Minh & Hinton(2012) proposed a more realistic noise model that de-. pends on the true label. However, they only considered the binary classification case. Sukhbaatar. & Fergus(2014) recently proposed adding a constrained linear layer at the top of the softmax layer,. and showed that only under some strong assumptions can the linear layer be interpreted as the tran-. sition matrix between the true and noisy (observed) labels and the softmax output layer as the true. probabilities of the labels. Reed et al.(2014) suggested handling the unreliability of the training data. labels by maximizing the likelihood function with an additional classification entropy regularization. term.\nThe correct unknown label can be viewed as a hidden random variable. Hence, it is natural to apply. the EM algorithm where in the E-step we estimate the true label and in the M-step we retrain the network. Several variations of this paradigm have been proposed (e.g.Minh & Hinton (2012). Bekker & Goldberger(2016)). However, iterating between EM-steps and neural network training. does not scale well. In this study we use latent variable probabilistic modeling but we optimize the. likelihood score function within the framework of neural networks. Current noisy label approaches. assume either implicitly or explicitly that, given the correct label, the noisy label is independent. of the feature vector. This assumption is probably needed to simplify the modeling and derive. applicable learning algorithms. 
However, in many cases this assumption is not realistic since a wrong annotation is more likely to occur in cases where the features are misleading. By contrast. our framework makes it easy to extend the proposed learning algorithm to the case where the noise. s dependent on both the correct label and the input features. In the next section we describe a model formulation and review the EM based approach. In Section 3 we described our method which is. based on adding another softmax layer to the network and in Section 4 we present our results..\nk p(z=j|y=i;0)p(y=i|x;w) o(z =j[x;w,0) = i=1\neach EM iteration we estimate the hidden true data labels based on the noisy labels and the curren parameters:\nCti = p(yt = i|xt, Zt;W0,0o) i = 1,..., k. t = 1,...,n\n`tCtil{zt=j} 0(i,j) = i,j e{1,..., k} t Cti\nThe k k matrix 0 can be viewed as a confusion matrix between the soft estimates of the true label {ct[i = 1, ..., k} and the observed noisy labels zt. As part of the EM M-step, to find the updated NN parameter w we need to maximize the following function:.\nas n (P(yt =i|xt,Zt;w0,0o) -p(Yt =i|xt;w))h(xt) dui t=1\nsuch that h is the final hidden layer and u1 tr are the parameters of the soft-max output layer\nThe method reviewed here is closely related to the work of Minh & Hinton (2012). They addresse. the problem of mislabeled data points in a particular type of dataset (aerial images). The maii. difference is that in their approach they assumed that they do not learn the noise parameter. Instea. they assume that the noise model can be separately tuned using a validation set or set by hand. Note that even if the true noise parameters are given, we still need the apply the EM iterative procedure. However, this assumption makes the interaction between the E-step and the NN learning mucl. easier since each time a data-point xt is visited we can compute the p(yt = i[xt, Zt) based on th current network parameters and the pre-defined noise parameters. Motivated by the need for mode compression, Hinton et al.(2014) introduced an approach to learn a \"distilled\" model by training. a more compact neural network to reproduce the output of a larger network. Using the notatio defined above, in the second training stage they actually optimized the cost function: S(w) = network that was trained using the labels z1, ..., 2n, w is the parameter of the smaller network anc. 0o(i, j) in this case is a non-informative distribution (i.e. 0o(i, j) = 1/k)..\nThere are several drawbacks to the EM-based approach described above. The EM algorithm is a greedy optimization procedure that is notoriously known to get stuck in local optima. Another potential issue with combining neural networks and EM direction is scalability. The framework requires training a neural network in each iteration of the EM algorithm. For real-world, large-scale networks, even a single training iteration is a non-trivial challenge. Moreover, in many domains (e.g. object recognition in images) the number of labels is very large, so many EM iterations are likely to be needed for convergence. Another drawback of the probabilistic models is that they are based on the simplistic assumption that the noise error is only based on the true labels but not on the input features. In this study we propose a method for training neural networks with noisy labels that successfully addresses all these problems.\nIn the previous section we utilized the EM algorithm to optimize the noisy-label likelihood functior (2). 
In this section we describe an algorithm that optimizes the same function within the framework of neural networks. Assume the neural network classifier we are using is based on non-linear inter mediate layers followed by a soft-max output layer used for soft classification. Denote the non-linea\nwhere wo and 0o are the current parameter estimations. In the M-step we update both the NN and the noisy channel parameters. The updated noise distribution has a closed-form solution\nn k S(w) = Cti logp(yt = i|xt; W t=1 i=1\nwhich is a soft-version of the likelihood function of the fully observed case, based on the current estimate of the true labels. The back-propagation derivatives of the function (5) that we maximize in the M-step are:\nfunction applied on an input x by h = h(x) and denote the soft-max layer that predicts the true y label by:\nexp(u[h+ bi) p(y = i(x;w) = =1. .K =1 exp(ufh + bl\nexp(uT;h+ bij p(z =j[y=i,x) , exp(uh+ bil\np(z =j|x) =)`p(z=j|y=i,x)p(y=i|x) ~\nexp(bij) .\np(z =j|x) =)`p(z=j|y=i)p(y=i|x)\nWe denote the two noise modeling variants as the complex model (c-model) (8) and the simple model (s-model) q10h. Hereafter we use the notation wnoise for all the parameters of the second softmax layer which can be viewed as a noise adaptation layer.\nIn the training phase we are given n feature vectors x1,..., xn with corresponding noisy labels Z1, ..., ~n which are viewed as noisy versions of the correct hidden labels y1,..., yn. The log likelihood of the model parameters is:\nS(w, Wnoise) l0g p(zt[xt) log P(Zt|Yt = i,Xt;Wnoise)P(Yt = i|Xt;W))\nSince the noise is modeled by adding another layer to the network, the score S(w, wnoise) can be optimized using standard techniques for neural network training. By setting.\nexp(bij) p(z=j|y=i)=0(i,j)= ,exp(bil)\nit can easily verified that, by using either the EM algorithm (2) or the s-model neural network scheme (12), we are actually optimizing exactly the same function. Thus the neural network with the s-model noise adaptation layer provides an alternative optimization strategy to the EM algorithm. Instead of alternating between optimizing the noisy model and the network classifier, we consider them as components of the same network and optimize them simultaneously..\nWnoise W W h, y X h Z non-linear function soft-max soft-max W W x h y non-linear function soft-max\nFigure 1: An illustration of the noisy-label neural network architecture for the training phase (above and test phase (below).\nwhere w is the network parameter-set (including the softmax layer). We next add another softmax output layer to predict the noisy label z based on both the true label and the input features:\nWe can also define a simplified version where the noisy label only depends on the true label; i.e. we assume that labels flips are independent of x:\nThere are degrees of freedom in the two softmax layer model. Hence, a careful initialization of the. parameters of the noise adaptation layer is crucial for successful convergence of the network into. a good classifier of the correct labels at test time. We used the parameters of the original network. to initialize the parameters of the s-model network that contains the noise adaptation level. We can initialize the softmax parameters of the s-model by assuming a small uniform noise:.\nsuch that k is the number of different classes. A better procedure is to first train the original NN. without the noise-adaptation layer, ignoring the fact that the labels are noisy. We can then treat the. 
labels produced by the NN as the true labels and compute the confusion matrix on the train set and used it as an initial value for the bias parameters:.\n1{zt=j}P(Yt=i|xt) tP(Yt =i|xt)\nThe computational complexity of the proposed method is quadratic in the size of the class-set. Sup pose there are k classes to predict, in this case the proposed methods require k+1 sets of softmax operations with a size of k each. Hence there are scalability problems when the class set is large. As we explained in the previous paragraph, we initialized the second soft-max layer using the confusion matrix of the baseline system. The confusion matrix is a good estimation of the label noise. Assume the rows of the matrix correspond to the true labels and the matrix columns correspond to the noisy labels. The l largest elements in the i-th row are the most frequent noisy class values when the true class value is i. We can thus connect the i-th element in the first softmax layer only to its l most probable noisy class candidates. Note that if we connect the i-th label in the first softmax only to the i-th label in the second softmax layer, the second softmax layer collapses to identity and we obtain the standard baseline model. Taking the l most likely connections to the second softmax layer, we allow an additional l - 1 possible noisy labels for each correct label. We thus obtain a data driven sparsifying of the second softmax layer which solves the scalability problem since the complexity becomes linear in the number of classes instead of quadratic. In the experiment section we show that by using this approach there is not much deference in performance.\nOur architecture, which is based on a concatenation of softmax layers, resembles the hierarchical softmax approach Morin & Bengio (2005) that replaces the flat softmax layer with a hierarchical. layer that has the classes as leaves. This allowed them to decompose calculating the probability of the class into a sequence of probability calculations, which saves us from having to calculate the expensive normalization over all classes. The main difference between our approach and theirs. (apart from the motivation) is that in our approach the true-label softmax layer is fully connected to the noisy-label layer.Sukhbaatar & Fergus (2014) suggested adding a linear layer to handle. noisy labels. Their approach is similar to our s-model. In their approach, however, they proposed a. different learning procedure."}, {"section_index": "3", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we evaluate the robustness of deep learning to training data with noisy labels with and without explicit noise modeling. We first show results on the MNIST data-set with injected label\nNote that in the c-model, where the noise is also dependent on the input features, we can still apply the EM algorithm to learn the parameters of the additional noise layer. However, there is no closed form solution in the M-step for the optimal parameters and we need to apply neural-network training in the M-step to find the noise-layer parameters..\nAt test time we want to predict the true labels. Hence, we remove the last softmax layer that aims to get rid of the noise in the training set. We compute the true-label softmax estimation p(y = i[x; w) 7). 
The proposed architecture for training the neural network based on training data with noisy labels is illustrated in Figure|1\nbi = log((1- e)1{i=j} -\nsuch that x1, ..., xn are the feature vectors of the training dataset and z1, ..., 2n are the corresponding noisy labels. So far we have concentrated on parameter initialization for the s-model. The strategy that works best to initialize the c-model parameters is to use the parameters that were optimized for the s-model. In other words we set linear terms u; to zero and initialize the bias terms b; with the values that were optimized by the s-model.\nteeeenre eeey teeeenre eeey 0.7 0.6 0. 0.5 Complex 0.5 Complex Simple Simple Reed hard Reed hard Reed soft Reed soft Baseline Baseline 0.40.30 0.403 0.35 0.40 0.45 ).50 0.35 0.40 0.5 noise fraction noise fraction (a) 20% dataset (b) 50% dataset Eeeeeere eeeyy 0.6 0.5 Complex Simple Reed hard Reed soft Baseline 0.40.30 0.35 0.40 0.45 0.50 noise fraction (c) 100% dataset\nFigure 2: Test classification accuracy results on the MNIST dataset as a function of the noise level The results are shown for several training data sizes (20%,50%,100%) of the training subset.\nnoise in our experiments. The MNIST is a database of handwritten digits, which consists of 28 28 images. The dataset has 60k images for training and 10k images for testing. We used a two hidden layer NN comprised of 500 and 300 neurons. The non-linear activation we used was ReLU and we used dropout with parameter O.5. We trained the network using the Adam optimizer (Kingma & Ba(2014)) with default parameters, which we found to converge more quickly and effectively than SGD. We used a mini-batch size of 256. These settings were kept fixed for all the experiments described below. In addition to a network that is based on fully connected layers, we also applied a network based on a CNN architecture. The results we obtained in the two architectures were similar The network we implemented is publicly available\nWe generated noisy data from clean data by stochastically changing some of the labels. We con verted each label with probability p to a different label according to a predefined permutation. We used the same permutation as in Reed et al.(2014). The labels of the test data remained, of course, unperturbed to validate and compare our method to the regular approach.\nWe compared the proposed noise robust models to other model training strategies. The first network. was the baseline approach that ignores the fact that the labels of the training data are unreliable Denote the observed noisy label by z and the softmax decision by q1, ..., qk. The baseline log-. likelihood score (for a single input) is:.\nS = Og(qi\n0.40 Complex CNN 20% Complex CNN 50% Complex CNN 100% Simple CNN 20% 0.35 Simple CNN 50% Simple CNN 100% Reed hard 20% Reed hard 50% Reed hard 100% 0.30 Baseline CNN 20% Baseline CNN 50% teeennee teey Baseline CNN 100% 0.25 0.20 0.15 0.10 0.30 0.35 0.40 0.45 0.50 noise fraction\nComplex CNN 20% Complex CNN 50% Complex CNN 100% Simple CNN 20% 0.35 Simple CNN 50% Simple CNN 100% Reed hard 20% Reed hard 50% Reed hard 100% 0.30 Baseline CNN 20% Baseline CNN 50% Leeennne eeey Baseline CNN 100% 0.25 0.20 0.15 0.10 0.30 0.35 0.40 0.45 0.50 noise fraction\nFigure 3: Test classification accuracy results on the CIFAR-100 dataset as a function of the noise level. 
The results are shown for several training data sizes (20%,50%,100%) of the training subse for a CNN network architecture).\nWe also implemented two variants of the noise robust approach proposed by Reed et al. (2014) They suggested a soft version\n3S-(1-)Hq)= >1{z=i} log(qi)+1) ) qi log(qi) i\nFigure|2|depicts the comparative test errors results as a function of the fractions of noise. The results are shown for three different sizes of training data i.e. (20%,50%,100%) of the MNIST training subset. Bootstrapping was used to compute confidence intervals around the mean. For 1o00 times, N = 10 samples were randomly drawn with repeats from the N available samples and mean was computed. The confidence interval was taken to be the 2.5% and 97.5% percentiles of this process.\nThe results show that all the methods that are explicitly aware of the noise in the labels are bette. than the baseline which is the standard training approach. We revalidated the results reported inReec. et al.(2014) and showed that the hard version of their method performs better than the soft version. In all cases our models performed better than the alternatives. In most cases the c-model was bette than the s-model. In the case where the entire dataset was used for training, we can see from the. results that there was a phase transition phenomenon. We obtained almost perfect classificatioi results until the noise level was high and there was a sudden strong performance drop. Analyzing. why this effect occurred is left for future research..\nWe next show the results on the CIFAR-100 image dataset Krizhevsky & Hinton(2009) which con-. sists of 32 32 color images arranged in 100 classes containing 600 images each. There are 500. training images and 100 testing images per class. We used raw images directly without any pre-. processing or augmentation. We generated noisy data from clean data by stochastically changing. some of the labels. We converted each one of the 100 labels with probability p to a different label. according to a predefined permutation. The labels of the test data remained, of course, unperturbed. to validate and compare our method to the regular approach. We used a CNN network with two. convolutional layers combined with ReLU activation and max-pooling, followed by two fully con nected layers. Figure 3] depicts the comparative test errors results as a function of the fractions. of noise for three different sizes of training data i.e. (20%,50%,100%) of the CIFAR-100 training.\nBS + (1 - ) maxlog(qi\nIn their experiments they took = 0.8 for the hard version and = 0.95 for the soft version, and. observed that the hard version provided better results. Finally we implemented the two variants of our approach; namely, the noise modeling based only on the labels (s-model) and the noise modeling. that was also based on the features (c-model).\n0.40 0.40 0.35 0.30 0.30 eeeeenre eeeyy Leeeenne eeeyy 0.25 0.25 0.20 0.20 Simple CNN sparse 5 100% Complex CNN 100% 0.15 Simple CNN 100% 0.15 Complex CNN sparse 5 100% Simple CNN sparse 5 50% Complex CNN sparse 5 50% Simple CNN 50% Complex CNN 50% Simple CNN 20% Complex CNN 20% Simple CNN sparse 5 20% Complex CNN sparse 5 20% 0.10.30 0.10.30 0.35 0.40 0.50 0.35 0.40 0.45 0.50 noise fraction noise fraction\nFigure 4: Test classification accuracy results on the CIFAR-100 dataset as a function of the noise level. The results of regular and sparse second softmax layers are shown for several training data sizes (20%,50%,100%) of the training subset\nsubset. 
Bootstrapping was used to compute confidence intervals around the mean in the same way as for the MNIST experiment. The results showed that the proposed method works better than the alternatives. The simple model consistently provided the best results but when the noise level was very high the complex method tended to perform better.\nWe next report experimental results for the sparse variant of our method that remains efficient even when the class set is large. We demonstrate this on the case of the CIFAR-100 dataset which consists of 100 possible classes. For each class we only took the five most probable classes in the confusion matrix which is used to initialize the model parameter (see Section 3). As can be seen in Figure sparsifying the second softmax layer did not not result in a drop in performance"}, {"section_index": "4", "section_name": "5 CONCLUSION", "section_text": "In this paper we investigated the problem of training neural networks that are robust to label noise. We proposed an algorithm for training neural networks based solely on noisy data where the noise. distribution is unknown. We showed that we can reliably learn the noise distribution from the noisy. data without using any clean data which, in many cases, are not available. The algorithm can be. easily combined with any existing deep learning implementation by simply adding another softmax. output layer. Our results encourage collecting more data at a cheaper price, since mistaken data. labels can be less harmful to performance. One possible future research direction would be tc. generalize our learning scheme to cases where both the features and the labels are noisy. We showec. results on datasets with small and medium sized class-sets. Future research direction would be tc evaluate the performance and efficiency of the proposed method on tasks with large class-sets.."}, {"section_index": "5", "section_name": "ACKNOWLEDGMENTS", "section_text": "U. Alon, N. Barkai, D. Notterman, K. Gish, S.and D. Mack, and A. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proceedings of the National Academy of Sciences, 96(12):6745-6750, 1999. P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, pp. 138-156, 2006.\nE. Beigman and B. B. Klebanov. Learning with annotation noise. In ACL-IJCNLP, 2009"}]
HJStZKqel
[{"section_index": "0", "section_name": "LIFELONG PERCEPTUAL PROGRAMMING BY EXAMPLE", "section_text": "The challenge for LML with neural networks is the problem of catastrophic forgetting: if the dis tribution of examples changes during training, then neural networks are prone to forget knowledge gathered from early examples. Solutions to this problem involve instantiating a knowledge repository (KR) either directly storing data from earlier tasks or storing (sub)networks trained on the earlier tasks with their weights frozen. This knowledge base allows either (1) rehearsal on historical examples (Robinsl|1995), (2) rehearsal on virtual examples generated by the frozen networks (Silver & Mercer 2002||Silver & Poirier2006) or (3) creation of new networks containing frozen sub networks from the historical tasks (Rusu et al. 2016 Shultz & Rivest2001)\nAlexander L. Gaunt, Marc Brockschmidt, Nate Kushman, Daniel Tarlow\nTo frame our approach in these terms, our KR contains partially-trained neural network classifiers which we call from learned source code. Crucially, we never freeze the weights of the networks ir the KR: all parts of the KR can be updated during the training of all tasks - this allows us to improve performance on earlier tasks by continuing training on later tasks (so-called reverse transfer). Reverse transfer has been demonstrated previously in systems which assume that each task can be solved by a model parametrized by an (uninterpretable) task-specific linear combination of shared basis weights Ruvolo & Eaton[2013). The representation of task-specific knowledge as source code, learning fron weak supervision, and shared knowledge as a deep neural networks distinguishes this work from the linear model used inRuvolo & Eaton (2013).\nNeural Networks Learning Algorithms. Recently, extensions of neural networks with primitive. such as memory and discrete computation units have been studied to learn algorithms from input output data (Graves et al. 2014} Weston et al 2014 Joulin & Mikolov 2015 Grefenstette et al. 2015} Kurach et al.|2015 Kaiser & Sutskever 2016 Reed & de Freitas. 2016 Bunel et al.[|2016 Andrychowicz & Kurach |2016fZaremba et al. 2016f Graves et al.[ 2016 Riedel et al.[2 2016 Gaunt et al.[[2016f Feser et al.f2016). Whereas many of these works use a neural network controller manag ing a differentiable computer architecture, we flip this relationship. In our approach, a differentiable interpreter that is expressible as source code and makes calls to neural network components."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A goal of artificial intelligence is to build a single large neural network model that can be trained in a lifelong learning setting; i.e., on a sequence of diverse tasks over a long period of time, and gain cumulative knowledge about different domains as it is presented with new tasks. The hope is that such systems will learn more accurately and from less data than existing systems, and that they will exhibit more flexible intelligence. 
However, despite some work showing promise towards multitask learning (training on many tasks at once) and transfer learning (using source tasks to improve learning in a later target task) (Caruana[1997) Luong et al.[2015] Parisotto et al.[2015f |Rusu et al.2016), most successes of neural networks today come from training a single network on a single task, indicating that this goal is highly challenging to achieve.\nThe methods above, with the exception of Reed & de Freitas[(2016) and|Graves et al.[(2016), operate on inputs of (arrays of) integers. However, Reed & de Freitas (2016) requires extremely strong supervision, where the learner is shown all intermediate steps to solving a problem; our learner only observes input-output examples.Reed & de Freitas(2016) also show the performance of their system. in a multitask setting. In some cases, additional tasks harm performance of their model and they freeze parts of their model when adding to their library of functions. OnlyBunel et al.(2016), Riedel. et al.[(2016) and[Gaunt et al.[(2016) aim to consume and produce source code that can be provided by a human (e.g. as sketch of a solution) to or returned to a human (to potentially provide feedback).. DISCUSSION\nWe argue for two properties that such systems should have in addition to the ability to learn from sequence of diverse tasks. First is the ability to learn from weak supervision. Gathering high-qualit labeled datasets is expensive, and this effort is multiplied if all tasks require strong labelling. I this work, we focus on weak supervision in the form of pairs of input-output examples that com from executing simple programs with no labelling of intermediate states. Second is the ability t distill knowledge into subcomponents that can be shared across tasks. If we can learn models wher the knowledge about shared subcomponents is disentangled from task-specific knowledge, then th sharing of knowledge across tasks will likely be more effective. Further, by isolating shared subcon ponents, we expect that we could develop systems that exhibit reverse transfer (i.e., performance o earlier tasks automatically improves by improving the shared components in later tasks).\nWe have presented NeuRAL TeRPRET, a framework for building end-to-end trainable models that. structure their solution as a library of functions represented as source code or neural networks Experimental results show that these models can successfully be trained in a lifelong learning context. and they are resistant to catastrophic forgetting; in fact, they show that even after instances of earlier tasks are no longer presented to the model, performance still continues to improve.\nLearning neural network models within differentiable interpreters has several benefits. First, learning. programs imposes a bias that favors learning models that exhibit strong generalization, as illus. trated by many works on program-like neural networks. Second, the source code components are. interpretable by humans, allowing incorporation of domain knowledge describing the shape of the problem through the source code structure. Third, source code components can be inspected, anc. the neural network components can be queried with specific instances to inspect whether the sharec. classifiers have learned the expected mappings. A final benefit is that the differentiable interprete. can be seen as focusing the supervision. If a component is un-needed for a given task, then the. 
differentiable interpreter can choose not to use the component, which shuts off any gradients fron. flowing to the component. We speculate that this could be a reason for the models being resistant tc. catastrophic forgetting, as the model either chooses to use a classifier, or ignores it (which leaves the. component unchanged).\nA key challenge in achieving these goals with neural models is the difficulty in interpreting weights. inside a trained network. Most notably, with a purely neural model, subcomponents of knowledge. gained after training on one task cannot be easily transferred to related tasks. Conversely, traditional computer programs naturally structure solutions to diverse problems in an interpretable, modular. form allowing (1) re-use of subroutines in solutions to new tasks and (2) modification or error. correction by humans. Inspired by this fact, we develop end-to-end trainable models that structure. their solutions as a library of functions, some of which are represented as source code, and some of. which are neural networks.\nIt is known that differentiable interpreters are difficult to train (Kurach et al.]2015]Neelakantan et al. 2016, [Gaunt et al.2016), and being dependent on differentiable interpreters is the primary limitation of this work. However, if progress can be made on more robust training of differentiable interpreters (perhaps extending ideas in|Neelakantan et al.(2016); Feser et al.(2016)), then we believe there tc be great promise in using the models we have presented here to build large lifelong neural networks\nMethodologically, we start from recent work on programming by example (PBE) with differentiable. interpreters, which shows that it is possible to use gradient descent to induce source code operating. on basic data types (e.g. integers) from input-output examples (Gaunt et al.]2016)Riedel et al.]2016 Bunel et al.f 2016). In this work we combine these differentiable interpreters with neural network classifiers in an end-to-end trainable system that learns programs that manipulate perceptual data.\nWe introduce and develop solutions for the problem of Lifelong Perceptual Pro gramming By Example (LPPBE). The problem is to induce a series of programs. that require understanding perceptual data like images or text. LPPBE systems learn from weak supervision (input-output examples) and incrementally construct a shared library of components that grows and improves as more tasks are solved Methodologically, we extend differentiable interpreters to operate on perceptual. data and to share components across tasks. Empirically we show that this leads to a lifelong learning system that transfers knowledge to new tasks more effectively. than baselines, and the performance on earlier tasks continues to improve even as the system learns on new, different tasks.."}, {"section_index": "2", "section_name": "REFERENCES", "section_text": "Rich Caruana. Multitask learning. Machine Learning, 28:41-75, 1997.\nJohn K. Feser, Marc Brockschmidt, Alexander L. Gaunt, and Daniel Tarlow. Neural functional programming. 2016. Submitted to ICLR 2017.\nAlexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathar Taylor, and Daniel Tarlow. Terpret: A probabilistic programming language for program induction. CoRR.abs/1608.04428.2016. URLhttp://arxiv.0rg/abs/1608.04428\nIn addition, we make our interpreter modular, which allows lifelong learning on a sequence of re. 
lated tasks: rather than inducing one fresh program per task, the system is able to incrementally. build a library of (neural) functions that are shared across task-specific programs. To encapsulate. the challenges embodied in this problem formulation, we name the problem Lifelong Perceptual Programming By Example (LPPBE). Our extension of differentiable interpreters that allows per ceptual data types, neural network function definitions, and lifelong learning is called NeuRAL. TERPRET (NTPT).\nEdward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning tc. transduce with unbounded memory. In Proceedings of the 28th Conference on Advances in Neura Information Processing Systems (NIPS), pp. 1828-1836, 2015"}, {"section_index": "3", "section_name": "2.1 TERPRET", "section_text": "TeRpReT programs describe differentiable interpreters by defining the relationship between Input s and Outputs via a set of inferrable P arams that define an executable program and Vars that store intermediate results. TeRPRET requires all of these variables to be finite integers. To learn using gradient descent, the model is made differentiable by a compilation step that lifts the relationships\nMichael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psvchology. of learning and motivation. 24:109-165. 1989\nFigure 1: (NEURAL) TERPRET programs for counting symbols on a tape, with input-output examples. Both programs describe an interpreter with instructions to MOvE on the tape and READ the tape according to source code parametrized by instr. (left) A TeRpReT program that counts '1's. (right) A NeURAL TeRPRET program that additionally learns a classifier is_dinosaur.\nAlex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska- Barwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.\nEmpirically, we show that a NTPT-based model learns to perform a sequence of tasks based on images of digits and mathematical operators. In early tasks, the model learns the concepts of digits and mathematical operators from a variety of weak supervision, then in a later task it learns to compute the results of variable-length mathematical expressions. The approach is resilient to catastrophic forgetting (McCloskey & Cohen||1989| Ratcliff||1990); on the contrary, results show that performance continues to improve on earlier tasks even when only training on later tasks. In total, the result is a method that can gather knowledge from a variety of weak supervision, distill it into a cumulative re-usable library, and use the library within induced algorithms to exhibit strong generalization.\nAbhishek Kumar and Hal Daume III. Learning task grouping and overlap in multi-task learning arXiv preprint arXiv:1206.6417. 2012\nWe briefly review the TeRpRET language (Gaunt et al.] 2016) for constructing differentiable in terpreters. To address LPPBE, we develop NeURAL TERPRET, an extension to support lifelong learning, perceptual data types, and neural network classifiers. We also define our tasks.\nMinh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task se quence to sequence learning. In International Conference on Learning Representations (ICLR) 2015.\n(b) C 4 +>8 A>14 ?3 ? J2 8 >10 >3 A 11 7 11 5 14\nArvind Neelakantan, Quoc V. Le, and Ilya Sutskever. 
Neural programmer: Inducing latent pro grams with gradient descent. In Proceedings of the 4th International Conference on Learning. Representations 2016, 2016.\nFigure 2: Overview of tasks in the (a) ADD2x2, (b) ApPLy2x2 and (c) MATH scenarios. 'A' denotes the APPLY operator which replaces the ? tiles with the selected operators and executes the sum. We show two MATH examples of different length\nEric Price, Wojciech Zaremba, and Ilya Sutskever. Extensions and limitations of the neural gpu 2016. Submitted to ICLR 2017.\nbetween integers specified by the TeRpRET code to relationships between marginal distributions over integers in finite ranges. There are two key operations in this compilation process:\nRoger Ratcliff. Connectionist models of recognition memory: constraints imposed by learning anc forgetting functions. Psychological review, 97(2):285, 1990\nScott E. Reed and Nando de Freitas. Neural programmer-interpreters. 2016\nSebastian Riedel, Matko Bosnjak. and Tim Rocktaschel. Programming with a differentiable fort interpreter. CoRR, abs/1605.06640, 2016. URLhttp://arxiv.0rg/abs/1605.06640\nThis compilation process yields a TensorFlow (Abadi et al.] 2016) computation graph containing many of these two operations, which can then be trained using standard methods."}, {"section_index": "4", "section_name": "2.2 NEURAL TERPRET", "section_text": "To handle perceptual data, we relax the restriction that all variables need to be finite integers. We intro duce a new tensor type whose dimensions are fixed at declaration, and which is suitable to store per ceptual data. Additionally, we introduce learnable functions that can process vector variables. A learn able function is declared using @Learn([d1,...,dp], dout, hid_sizes=[l1,...,l]) where the first component specifies the dimensions d1,..., dp of the inputs (which can be finite integers or tensors) and the second the dimension of the output. NTPT compiles such functions intc a fully-connected feed-forward neural network whose layout can be controlled by the hi d_s i zes component, which specifies the number of layers and neurons in each layer. The inputs of the functior are simply concatenated. Vector output is generated by learning a mapping from the last hidden layer and finite integer output is generated by a softmax layer producing a distribution over integers up tc the declared bound. Learnable parameters for the generated network are shared across every use ir the NTPT program, and as they naturally fit into the computation graph for the remaining TeRPRET program, the whole system is trained end-to-end.\nDaniel L Silver and Ryan Poirier. Machine life-long learning with csmtl networks. In AAAI, 2006\nDaniel L Silver, Qiang Yang, and Lianghao Li. Lifelong machine learning systems: Beyond learning algorithms. In AAAI Spring Symposium: Lifelong Machine Learning, pp. 49-55, 2013.\nA simple TeRpRET program counting bits on a tape, and a related NTPT program that counts up images of a particular class on a tape are displayed in Fig.1\nSebastian Thrun. Is learning the n-th thing any easier than learning the first? In Advances in Neura Information Processing Systems 8 (NIPS), pp. 640-646, 1995.\nTo demonstrate the benefits of our approach for combining neural networks with program-like archi tecture, we consider three toy scenarios consisting of several related tasks depicted in Fig.2\nADD2x2 scenario: The first scenario in Fig.2(a) uses of a 2 2 grid of MNIST digits. 
We set 4 tasks based on this grid: compute the sum of the digits in the (1) top row, (2) left column, (3) botton row, (4) right column. All tasks require classification of MNIST digits, but need different programs. to compute the result. As training examples, we supply only a grid and the resulting sum. Thus, we. never directly label an MNIST digit with its class..\nAppLy2x2 scenario: The second scenario in Fig.2(b) presents a 2 2 grid of of handwritten arithmetic operators. Providing three auxiliary random integers d1, d2, d3, we again set 4 tasks\n(a) (b) (c) >14 03 2 8 >10 A>3 11 7 11 5 14\nSinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345-1359, 2010.\nFunction application. The statement z. set to (foo (x, y)) is translated into ? = jk Iijk % where o represents the marginal distribution for the variable a and I is. an indicator tensor 1[i = foo(j, k)]. This approach extends to all functions mapping any. number of integer arguments to an integer output.. Conditional statements The statements if x == O: z.set_to(a) ; elif x 1 : z. set_to (b) are translated to = a + . More complex statements follow. a similar pattern, with details given inGaunt et al.(2016).\nAnthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2) 123. 146.1995\nThomas R Shultz and Francois Rivest. Knowledge-based cascade-correlation: Using knowledge to speed learning. Connection Science, 13(1):43-72, 2001.\nDaniel L Silver and Robert E Mercer. The task rehearsal method of life-long learning: Overcom ing impoverished data. In Conference of the Canadian Society for Computational Studies of Intelligence, pp. 90-101. Springer, 2002.\nFigure 3: Example solutions for the tasks on the right columns of the (a) ADD2x2 and (b) AppLy2x2. scenarios. The read head is initialized READing the top left cell and any auxiliary Input Ints are loaded into memory. Instructions and arguments shown in black must be learned..\nbased on this grid, namely to evaluate the expression'[d1 op1 d2 op2 d3 where (op1, op2) are the operators represented in the (1) top row, (2) left column, (3) bottom row, (4) right column. In comparison to the first scenario, the dataset of operators is relatively small and consistentl making the perceptual task of classifying operators considerably easier. However, the algorithmic part is more difficult, requiring non-linear operations on the supplied integers.\nMATH scenario: The final task in Fig.2(c) requires combination of the knowledge gained from the weakly labeled data in the first two scenarios to execute a handwritten arithmetic expression"}, {"section_index": "5", "section_name": "3 MODELS", "section_text": "We design one NTPT model for each of the three scenarios outlined above. Knowledge transfer is achieved by defining a library of 2 neural networks shared across all tasks and scenarios. Training on each task should produce a task-specific source code solution (from scratch) and improve the overall usefulness of the shared networks. Below we outline the details of the specific models foi each scenario along with baseline models."}, {"section_index": "6", "section_name": "3.2 ADD2x2 MODEL", "section_text": "For the ADD2x2 scenario we build a model capable of writing short straight line algorithms with up. to 4 instructions. The model consists of a read head containing net_0 and net_1 (with the exception. 
of the very first task, which only has access to net_0, as discussed above) which are connected to a set of registers each capable of holding integers in the range 0, . . . , M, where M = 18. The head is. initialized reading the top left cell of the 2 2 grid, and at each step in the program, one instruction. can be executed from the following instruction set:.\nNOOP: a trivial no-operation instruction\nWe refer to the 2 networks in the shared library as net_0 and net_1. Both networks have similar. architectures: they take a 28 28 monochrome image as input and pass this sequentially through two fully connected layers each with 256 neurons and ReLU activations. The last hidden vector is. passed through a fully connected layer and a softmax to produce a 10 dimensional output (net _0). or 4 dimensional output (net_1) to feed to the differentiable interpreter. Note that the output sizes are chosen to match the number of classes of MNIST digits and arithmetic operators respectively..\nIf we create an interpreter model which is allowed to make calls to N untrained networks, and part of the interpreter uses a parameter net_choice = Param (N) to deciding which network to apply then the system effectively sees one large untrained network, which cannot usefully be split apart into the N components after training. To avoid this, we enforce that no more than one untrained network is introduced at a time (i.e. the first task has access to only net_0, and all other tasks have access to both nets). We find that this breaks the symmetry sufficiently to learn separate, useful classifiers.\nLO: MOVE Label: R0 = READ(net_0) instr, GOTO_IF L1 if is MOVE: pos++ net_choice 1: R = READ R1 = APPLYR1R0 R2) GOTO IF L2 else: arg1j arg2 arg3 R = APPLY L2: MOVE GOTO IF R2 = READ(net_1) GOTO IF L0 Label,: halt: return_addr halt: return (a) (b) return R1\nFigure 4: Overview of the MATH model. (a) The general form of a block in the model. Blue element are learnable. (b) A loop-based solution to the task in the MATH scenario.\nwhere the parameter net _choi ce is to be learned and decides which of net_0 and net_1 to apply"}, {"section_index": "7", "section_name": "3.3 APPLY2x2 MODEL", "section_text": "We adapt the ADD2x2 model to the ApPLy2x2 scenario by initializing three immutable registers with the auxiliary integers supplied with each 2 2 operator grid [see Fig.2[b)]. In addition, we swap the ADD (,.) instruction for APPLY (,:,.). The action of APPLY (a, b, op) is to interpret the. integer stored at op as an arithmetic operator and to compute a op b. All operations are performed modulo (M + 1) and division by zero returns M. In total, this model exposes a program space of. size ~ 1012 syntactically distinct programs."}, {"section_index": "8", "section_name": "3.4 MATH MODEL", "section_text": "We design the final scenario to investigate the synthesis of more complex control flow than straight. line code. A natural solution to execute the expression on the tape is to build a loop with a body tha alternates between moving the head and applying the operators [see Fig.4[b)]. This loopy solution has the advantage that it generalizes to handle arbitrary length arithmetic expressions.."}, {"section_index": "9", "section_name": "4 BASELINES", "section_text": "MOVE_NORTH, MOVE_EAST, MOVE_SOUTH, MOVE_WEST: translate the head (if po. 
sible) and return the result of applying the neural network chosen by net_choi ce to the image in the new cell ADD (., :) : accepts two register addresses and returns the sum of their contents.\nTo construct each line of code requires choosing an instruction and (in the case of sum) addresses of arguments for that instruction. We follow|Feser et al. (2016) and allow each line to store its result in a separate immutable register. Finally, we learn a parameter specifying which register to return after execution of the program. An example program in this model is shown in Fig.3[a). Even this simple model permits ~ 107 syntactically distinct programs for the differentiable interpreter to search over.\nFig.4(a) shows the basic architecture of the interpreter used in this scenario. We provide a set of blocks each containing the instruction MOVE or APPLY. A MOVE instruction increments the position of the head and loads the new symbol into a block specific immutable register using either net_0 or net_1 as determined by a block specific net_choice. After executing the instruction, the interpreter executes a GOTO_IF statement which checks whether the head is over the end of the tape and if not then it passes control to the block specified by goto_addr, otherwise control passes to a ha1t block which returns a chosen register value and exits the program. This model describes a space of ~ 106 syntactically distinct programs.\nNTPT aims to combine neural networks and differentiable interpreters for handling perceptual and algorithmic parts of a task respectively. A natural baseline is to replace the differentiable interpreter with a neural network to create a purely neural solution. In this spirit we define a column as the following architecture for handling the 2 2 tasks (see Fig.5(a)):\n(a) indep. (b) PNN (c) MTNN (d) NTPT TASK 1 TASK 2 TASK 3 19 RO= RE R0 = In R0 = RE R1 = MO R1 In R1 = MOI 128 R2 = SU R2 = In R2 = MO 128 R3 = NO R3 = MO R3 = SUI R4 = NO R4 = MO R4 = NO 128 return R5 = AP return concat concat concat concat Library concat A A 10 + 256 256 4,3,2 ++ 4,32+x+ 03 4,3,2 + 02 ?3 ?2\nFigure 5: Cartoon illustration of all models used in the experiments. See text for detail.\nWe construct 3 different neural baselines derived from this column architecture (see Fig.5)\nFor the MATH task, we build a purely neural baseline by replacing the task-specific part of the MTNN network with an LSTM. At each step, this network takes in the shared embeddings of the current symbol, updates an LSTM hidden state and then proceeds to the next symbol. We make a classification of the final answer using the last hidden states of the LSTM. We find that we achieve best performance with a 3 layer LSTM with 1024 elements in each hidden state and dropout between layers. In addition, we investigate a Neural GPU baseline based on|Kaiser & Sutskever(20163\nEach of the images in the 2 2 grid is passed through an embedding network with 2 layers of 256 neurons (c.f. net_0/1) to produce a 10-dimensional embedding. The weights of the embedding network are shared across all 4 images. These 4 embeddings are concatenated into a 40-dimensional vector and for the AppLy2x2 the auxiliary integers are represented as one-hot vectors and concatenated with this 40- dimensional vector. This is then passed through a network consisting of 3 hidden layers of 128 neurons to produce a 19-dimensional output\n1. Indep.: Each task is handled by an independent column with no mechanism for transfer. 2. 
We construct 3 different neural baselines derived from this column architecture (see Fig. 5):

1. Indep.: Each task is handled by an independent column with no mechanism for transfer.
2. Progressive Neural Network (PNN): We follow Rusu et al. (2016) and build lateral connections linking each task-specific column to columns from tasks appearing earlier in the learning lifetime. Weights in all columns except the active task's column are frozen during a training update. Note that the number of layers in each column must be identical to allow lateral connections, meaning we cannot tune the architecture separately for each task.
3. Multitask neural network (MTNN): We split the column into a shared perceptual part and a task-specific part. The perceptual part consists of net_0 and net_1 embedding networks. In an ideal case the symmetry between these embedding networks will be broken and one will become specialized to handle handwritten digits while the other will handle handwritten operators. In order to encourage this symmetry breaking we zero out one of the networks when training on the first task (cf. the symmetry breaking technique mentioned in Sec. 3.1). The task-specific part consists of a neural network that maps the perceptual embeddings to a 19-dimensional output. Note that unlike PNNs, the precise architecture of the task-specific part of the MTNN can be tuned for each individual task. We consider two MTNN architectures:
   (a) MTNN-1: All task-specific parts are 3 layer networks comparable to the PNN case.
   (b) MTNN-2: We manually tune the number of layers for each task and find best performance when the task-specific part contains 1 hidden layer for the ADD2x2 tasks and 3 hidden layers for the APPLY2x2 tasks.

For the MATH task, we build a purely neural baseline by replacing the task-specific part of the MTNN network with an LSTM. At each step, this network takes in the shared embeddings of the current symbol, updates an LSTM hidden state and then proceeds to the next symbol. We make a classification of the final answer using the last hidden states of the LSTM. We find that we achieve best performance with a 3 layer LSTM with 1024 elements in each hidden state and dropout between layers. In addition, we investigate a Neural GPU baseline based on Kaiser & Sutskever (2016).
"}, {"section_index": "10", "section_name": "5.1 LIFELONG LEARNING", "section_text": "Reverse transfer: Fig. 6(a) focuses on the performance of NTPT on the first task (ADD2x2:top). The red bars indicate times where the system was presented with an example from this task. Note that even when we have stopped presenting examples, the performance on this task continues to increase as we train on later tasks - an example of reverse transfer. We verify that this is due to continuous improvement of net_0 in later tasks by observing that the accuracy on the ADD2x2:top task closely tracks measurements of the accuracy of net_0 directly on the digit classification task.

Avoidance of catastrophic forgetting: Fig. 6(b) shows the performance of the NTPT on the remaining ADD2x2 tasks. Both Fig. 6(a) and (b) include results for the MTNN-2 baseline (the best baseline for the ADD2x2 tasks). Note that whenever the dominant training task swaps from an ADD2x2 task to an APPLY2x2 task the baseline's performance on ADD2x2 tasks drops. This is because the shared perceptual network becomes corrupted by the change in task - an example of catastrophic forgetting. To try to limit the extent of catastrophic forgetting and make the shared components more robust, we have a separate learning rate for the perceptual networks in both the MTNN baseline and NTPT which is 100-fold smaller than the learning rate for the task-specific parts. With this balance of learning rates we find empirically that NTPT does not display catastrophic forgetting.

Figure 6: Lifelong learning with NTPT. (a) top: the sequential learning schedule for all 8 tasks; bottom: performance of NTPT (solid) and the MTNN-2 baseline (dashed) on the first ADD2x2 task. (b) performance on the remaining ADD2x2 tasks. (c) Performance of all the baselines on the *:left tasks.
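The two-speed learning-rate balance described above can be set up with optimizer parameter groups; a minimal PyTorch sketch, where perceptual and task_head are placeholders standing in for the shared embedding networks and a task-specific part:

import torch

perceptual = torch.nn.Linear(784, 10)   # stand-in for the shared nets
task_head = torch.nn.Linear(40, 19)     # stand-in for a task-specific part

base_lr = 1e-2
optimizer = torch.optim.Adam([
    # Shared perceptual networks: 100-fold smaller learning rate, which
    # empirically limits catastrophic forgetting when the dominant task changes.
    {"params": perceptual.parameters(), "lr": base_lr / 100},
    # Task-specific parts train at the full rate.
    {"params": task_head.parameters(), "lr": base_lr},
])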
First we create a data set in a regime which best demonstrates the LPPBE problem. The most convincing demonstration of LPPBE requires a series of tasks for which there is insufficient data to learn independent solutions to all tasks and instead, success requires transferring knowledge from one task to the next. Empirically, we find that training on any individual ADD2x2 task with only 1k distinct 2 × 2 examples produces low accuracies of around 40 ± 20% (measured on a held-out test set of 10k examples) for both the purely neural baselines and NTPT methods. Since none of our models can satisfactorily solve an ADD2x2 task independently in this regime, we work with this limited data set and argue that any success on these tasks during a lifetime of learning can be attributed to successful knowledge transfer. In addition, we check that in a data-rich regime (e.g. >4k examples) all of the baseline models and NTPT can independently solve each task with >80% accuracy. This indicates that the models all have sufficient capacity to represent satisfactory solutions and the challenge is to find these solutions during training.

To test knowledge transfer between tasks we train on batches of data drawn from a time-evolving probability distribution over all 8 tasks in the ADD2x2 and APPLY2x2 scenarios (see the top of Fig. 6(a)). During training, we observe the following key properties of the knowledge transfer achieved by NTPT:

              task      indep.   PNN    MTNN-1   MTNN-2   NTPT
  ADD2x2:     top         35%    35%      26%      24%     87%
              left        32%    36%      38%      47%     87%
              bottom      34%    33%      40%      56%     86%
              right       32%    35%      44%      60%     86%
  APPLY2x2:   top         38%    39%      40%      38%     98%
              left        39%    51%      41%      39%    100%
              bottom      39%    48%      41%      40%    100%
              right       39%    51%      42%      37%    100%

Figure 7: Final accuracies on all 2 × 2 tasks for all models at the end of lifelong learning.

Final performance: Fig. 6(c) focuses on the ADD2x2:left and APPLY2x2:left tasks to illustrate the relative performance of the baselines described in Sec. 4. Note that although PNNs avoid catastrophic forgetting, there is no clear overall winner between the MTNN and PNN baselines. NTPT learns faster and to a higher accuracy than all baselines for all the tasks considered here. For clarity we only plot results for the *:left tasks: the other tasks show similar behavior and the accuracies for all tasks at the end of the lifetime of learning are presented in Fig. 7.
"}, {"section_index": "11", "section_name": "5.2 GENERALIZATION", "section_text": "In the final experiment we take net_0/1 from the end of the NTPT 2 × 2 training and start training on the MATH scenario. For the NTPT model we train on arithmetic expressions containing only 2 digits. The loopy structure of the MATH model introduces many local optima into the optimization landscape and only 2/100 random restarts converge on a correct program. We detect convergence to the correct program by a rapid increase in the accuracy on a validation set (typically occurring after around 30k training examples). Once the correct program is found, continuing to train the model mainly leads to further improvement in the accuracy of net_0, which saturates at 97.5% on the digit classification task. The learned source code generalizes to expressions containing more digits, so the remaining errors on long expressions come from the repeated application of the imperfect perceptual networks.
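The convergence check mentioned above (a rapid increase in validation accuracy) can be implemented as a simple sliding-window test over the accuracy history; the window size and jump threshold below are illustrative assumptions, not values from the paper.

def has_converged(val_acc_history, window=5, jump=0.3):
    # Flag a restart as converged when validation accuracy rises by more
    # than `jump` within `window` consecutive evaluations.
    h = val_acc_history
    return any(h[i + window] - h[i] > jump for i in range(len(h) - window))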
To pick a strong baseline for the MATH problem, we first perform a preliminary experiment with two simplifications from the case above: (1) rather than expecting strong generalization from just 2-digit training examples, we train candidate baselines with supervision on examples up to 5 digits in length, and (2) we remove the perceptual component of the task, presenting the digits and operators as one-hot vectors rather than images. Fig. 8(a) shows the generalization performance of the LSTM and Neural GPU (512-filter) baselines in this simpler setting after training to convergence⁴. Based on these results, we restrict attention to the LSTM baseline and return to the full task including the perceptual component. In the full MATH task, we initialize the embedding networks of each model using net_0/1 from the end of the NTPT 2 × 2 training. Fig. 8(b) shows generalization of the NTPT and LSTM models on expressions of up to 16 digits after training to convergence. We find that even though the LSTM shows surprisingly effective generalization when supplied supervision up to 5 digits, NTPT trained on only 2-digit expressions still offers better results.

Lifelong Machine Learning. We operate in the paradigm of Lifelong Machine Learning (LML) (Thrun 1994; 1995; Thrun & O'Sullivan 1996; Silver et al. 2013; Chen et al. 2015), where a learner is presented a sequence of different tasks and the aim is to retain and re-use knowledge from earlier tasks to more efficiently and effectively learn new tasks. This is distinct from the related paradigms of multitask learning (presentation of a finite set of tasks simultaneously rather than in sequence; Caruana 1997; Kumar & Daumé III 2012; Luong et al. 2015; Rusu et al. 2016), transfer learning (transfer of knowledge from a source to a target domain without a notion of knowledge retention; Pan & Yang 2010), and curriculum learning (training a single model for a single task of varying difficulty; Bengio et al. 2009).

[Figure 8: (a) accuracy of the non-perceptual baselines, Neural GPU (43.8M parameters), LSTM (21.1M) and TerpreT (32), with marked values including 92.8% and 25.0%; (b) accuracy vs. digits in expression for LSTM (2-digit), LSTM (5-digit) and NTPT (2-digit), with marked values 87.1% and 82.8%.]

Figure 8: Generalization behavior on MATH expressions. Solid dots indicate expression lengths used in training. We show results on (a) a simpler non-perceptual MATH task (numbers in parentheses indicate parameter count in each model) and (b) the MATH task including perception.

⁴ Note that Price et al. (2016) find similarly poor generalization performance for a Neural GPU applied to the similar task of evaluating arithmetic expressions involving binary numbers.
"}]
HkYhZDqxg
[{"section_index": "0", "section_name": "TREE-STRUCTURED DECODING WITH DOUBLY-RECURRENT NEURAL NETWORKS", "section_text": "[Figure 6: bar chart of the relative change (%) in log-likelihood under structural paraphrases, on a 0-100 scale, for DRNN (Small), DRNN (Large), Seq2Seq (Large) and Seq2Seq (Small).]

David Alvarez-Melis & Tommi S. Jaakkola
Computer Science and Artificial Intelligence Lab, MIT
{davidam,tommi}@csail.mit.edu

We propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly recurrent neural network model comprised of separate width and depth recurrences that are combined inside each cell (node) to generate an output. The topology of the tree is modeled explicitly together with the content. That is, in response to an encoded vector representation, co-evolving recurrences are used to realize the associated tree and the labels for the nodes in the tree. We test this architecture in an encoder-decoder framework, where we train a network to encode a sentence as a vector and then generate a tree structure from it. The experimental results show the effectiveness of this architecture at recovering latent tree structure in sequences and at mapping sentences to simple functional programs.

First, we analyze the quality of translations as a function of the maximum allowed target sentence "size". The notion of size for a sequence decoder is simply the length, while for DRNNs we use depth instead so as to tap into the inherent granularity at which sentences can be generated from this architecture. Two such examples are shown in Table 2. Since the DRNN topology has been trained to mimic dependency parses top-down, the decoder tends to first generate the fundamental aspects of the sentence (verb, nouns), leaving less important refinements for deeper structures down in the tree. The sequence decoder, in contrast, is trained for left-to-right sequential generation, and thus produces less informative translations under max-length restrictions.

In our second experiment we investigate the decoders' ability to entertain natural paraphrases of sentences. If we keep the semantic content of a sentence fixed and only change its grammatical structure, it is desirable that the decoder would assign nearly the same likelihood to the new sentence. One way to assess this invariance is to compare the relative likelihood that the model assigns to the gold sentence in comparison to its paraphrase. To test this, we take 50 examples from the WMT test split and manually generate paraphrases with various types of structural alterations (see details in the Appendix). For each type of decoder, we measure the relative change (in absolute value) of the log-likelihood resulting from the perturbation. All the models we compare have similar standard deviation (40 ± 20) of log-likelihood scores over these examples, so the relative changes in the log-likelihood remain directly comparable. For each architecture we train two versions of different sizes, where the sizes are balanced in terms of the number of parameters across the architectures. The results in Figure 6 show that DRNNs exhibit significantly lower log-likelihood change, suggesting that, as language models, they are more robust to natural structural variation than their SEQ2SEQ counterparts.
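This robustness measure is simple to state in code; a minimal sketch, where log_likelihood is a stand-in for the score a trained decoder assigns to a sentence:

def relative_ll_change(log_likelihood, gold_sentence, paraphrase):
    # Relative change (in absolute value, as a percentage) of the decoder's
    # log-likelihood when the gold sentence is replaced by its paraphrase.
    ll_gold = log_likelihood(gold_sentence)
    ll_para = log_likelihood(paraphrase)
    return 100.0 * abs(ll_para - ll_gold) / abs(ll_gold)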
"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recurrent neural networks have become extremely popular for modeling structured data. Key to their success is their ability to learn long-range temporal dependencies, their flexibility, and ease of customization. These architectures are naturally suited for modeling sequences since the underlying state evolution resulting from successive operations follows an inherently linear order (Williams & Zipser 1995; Hochreiter & Schmidhuber 1997). Indeed, they have been successfully adapted to language modeling (Zaremba et al. 2015), machine translation (Sutskever et al. 2014) and conversational agents (Vinyals & Le 2015), among other applications.

Although sequences arise frequently in practice, other structures such as trees or graphs do not naturally conform to a linear ordering. For example, natural language sentences or associated parse trees, programs, hierarchical structures in biology, or molecules are not inherently linear structures. While sentences in natural language can be modeled as if they were linear sequences, the underlying process is compositional (Frege 1892). Models that construct sentences compositionally should derive an advantage from adopting a more appropriate inductive bias.
"}, {"section_index": "2", "section_name": "5 DISCUSSION AND FUTURE WORK", "section_text": "We have presented doubly recurrent neural networks, a natural extension of (sequential) recurrent architectures to tree-structured objects. This architecture models the information flow in a tree with two separate recurrent modules: one carrying ancestral information (received from parent and passed on to offspring) and the other carrying fraternal information (passed from sibling to sibling). The topology of the tree is modeled explicitly and separately from the label prediction, with modules that given the state of a node predict whether it has children and siblings.

The flexibility and success of recurrent neural networks in modeling and generating sequential data has prompted efforts to adapt them to non-sequential data too. Recent work has focused on the application of neural architectures to hierarchical structures, albeit in limited ways. Much of this work has assumed that either the full tree structure is given (Socher et al. 2012; Tai et al. 2015) or at least the nodes are (Socher & Lin 2011; Chen & Manning 2014; Kiperwasser & Goldberg 2016). In the former scenario, the network aggregates the node information in a manner that is coherent with a given tree structure while, in the latter, generation is reduced to an attachment problem, i.e., sequentially deciding which pairs of nodes to join with an edge until a tree is formed.

Figure 9: Selected trees generated by the DRNN decoder from vector-encoded descriptions for test examples of the synthetic tree dataset. Trees in the same row correspond to predictions by models trained on randomly sampled subsets of size N of the training split. We present cases for which the prediction is accurate (a, c) and cases for which it is not (b, d). Note how in (d) the model predicts many of the labels correctly, but confuses some of the dependencies (edges) in the tree.

The experimental results show that the proposed method is able to predict reasonable tree structure from encoded vector representations. Despite the simple structure of the IFTTT trees, the results on that task suggest a promising direction of using DRNNs for generating programs or executable queries from natural language. On the other hand, the results on the toy machine translation task
show that even when used to generate sequences, DRNNs exhibit desirable properties, such as invariance over structural modifications and the ability to perform coarse-to-fine decoding. In order to truly use this architecture for machine translation, the approach must be scaled by resorting to batch processing on GPU. This is possible since forward and backward propagation are computed sequentially along tree traversal paths, so that inputs and hidden states of parents and siblings can be grouped into tensors and operated on in batch. We leave this as an avenue for future work.

The full problem of decoding with structure, i.e., generating a tree-structured object with node labels from a given vector representation, has remained largely unexplored until recently. Recent efforts to adapt RNNs to this context have so far remained relatively close to their sequential counterparts. For example, in order to capture depth and branching in the tree, one can introduce special tokens (Dong & Lapata 2016) or use alternating RNNs coupled with external classifiers to predict branching (Zhang et al. 2016).

Example 1. Source: "produit differentes reponses qui changent avec le temps selon nos experiences et nos relations"
  SEQ2SEQ:  l = 1: "a"  |  l = 4: "with the different actions ."  |  l = 8: "with the different actions who change with"
  DRNN:     d = 1: "answers"  |  d = 2: "different answers change"  |  d = 3: "product the different answers change ."

Example 2. Source: "je ne sais jamais quoi dire dans ces cas la"
  SEQ2SEQ:  l = 1: "I"  |  l = 4: "I do"  |  l = 8: "I do not know what to say ."
  DRNN:     d = 1: "know"  |  d = 2: "but i do not know"  |  d = 3: "but i do not know to say ."

Table 2: Translations at different resolutions (size constraints imposed during decoding) for two example sentences.

[Figure 9, panel (a): encoder sentence input "ROOT P R C"; predicted trees for N = 500, 1000, 1500, 3500 training examples, next to the gold tree.]
[Figure 9, panel (b): encoder sentence input "ROOT Z T Y Q"; predicted trees for N = 500, 1000, 1500, 3500, next to the gold tree.]
[Figure 9, panel (c): encoder sentence input "ROOT K T V"; predicted trees for N = 500, 1500, 2500, 4000, next to the gold tree.]
[Figure 9, panel (d): encoder sentence input "ROOT Q F V R G D A"; predicted trees for increasing training-set sizes N, next to the gold tree.]
"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": ""}, {"section_index": "4", "section_name": "ACKNOWLEDGEMENTS", "section_text": "In this work, we propose a novel architecture tailored specifically to tree-structured decoding. At the heart of our approach is a doubly-recurrent (breadth and depth-wise recurrent) neural network which separately models the flow of information between parent and children nodes, and between siblings. Each of these relationships is modeled with a recurrent module whose hidden states are updated upon observing node labels. Every node in the tree receives two hidden states, which are then combined and used to predict a label for that node. Besides maintaining separate but simultaneous fraternal and paternal recurrences, the proposed architecture departs from previous methods in that it explicitly models tree topology. Each node in the network has modules that predict, based on the cell state, whether the node is terminal, both in terms of depth and width. Decoupling these decisions from the label prediction allows for a more concise formulation, which does not require artificial tokens to be added to the tree to simulate branching.

DA-M acknowledges support from a CONACYT fellowship.
The authors would like to thank the anonymous reviewers for their constructive comments.

Gottlob Frege. Über Sinn und Bedeutung. Zeitschrift für Philos. und Philos. Krit., (1):25-50, 1892.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735-1780, 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735.

Recursive Neural Networks. Recursive neural networks (Socher & Lin 2011; Socher et al. 2012) were proposed to model data with hierarchical structures, such as parsed scenes and natural language sentences. Though they have been most successfully applied to encoding objects when their tree-structured representation is given (Socher et al. 2013), the original formulation by Socher & Lin (2011) also considered using them to predict the structure (edges), albeit for the case where nodes are given. Thus, besides their limited applicability due to their assumption of binary trees, recursive neural networks are not useful for fully generating trees from scratch.

Diederik Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. Int. Conf. Learn. Represent., pp. 1-13, 2014. URL http://arxiv.org/abs/1412.6980.

Tree-structured encoders. The Tree-LSTM of Tai et al. (2015) is a generalization of long short-term memory networks (Hochreiter & Schmidhuber 1997) to tree-structured inputs. Their model constructs a sentence representation bottom-up, obtaining at every step the representation of a node in the tree from those of its children. In this sense, this model can be seen as a generalization of recursive neural networks to trees with degree potentially greater than two, with the additional long-range dependency modeling provided by LSTMs. They propose two methods for aggregating the states of the children, depending on the type of underlying tree: N-ary trees or trees with unknown and potentially unbounded branching factor. TreeLSTMs have shown promising results for compositional encoding of structured data, though by construction they cannot be used for decoding, since they operate on a given tree structure.

Tree-structured decoders. Proposed only very recently, most tree-structured decoders rely on stacked or intertwined RNNs, and use heuristic methods for topological decisions during generation. Closest to our method is the Top-down Tree LSTM of Zhang et al. (2016), which generates a tree from an encoded representation. Their method relies on 4 independent LSTMs, which act in alternation---as opposed to simultaneously in our approach---yielding essentially a standard LSTM that changes the weights it uses based on the position of the current node.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence Level Training with Recurrent Neural Networks. In ICLR, pp. 1-15, 2016. URL http://arxiv.org/

We test this novel architecture in various encoder-decoder frameworks, coupling it with sequential encoders to predict tree structure from encoded vector representations of sequences. The experimental results show the effectiveness of this approach at recovering latent structure in flattened string representations of trees (Section 4.1) and at mapping from natural language descriptions of simple programs to abstract syntax trees (Section 4.2). In addition, we show that even for sequence-to-sequence tasks such as machine translation, the proposed architecture exhibits desirable properties,
such as invariance to structural changes and coarse-to-fine generation (Section 4.3).

Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. Proc. SSST-8, Eighth Work. Syntax. Semant. Struct. Stat. Transl., pp. 103-111, 2014. URL http://arxiv.org/pdf/1409.1259v2.pdf.

To summarize, the main contributions of this paper are as follows:
- We propose a novel neural network architecture specifically tailored to tree-structured decoding, which maintains separate depth and width recurrent states and combines them to obtain hidden states for every node in the tree.
- We equip this novel architecture with a mechanism to predict tree topology explicitly (as opposed to implicitly by adding nodes with special tokens).
- We show experimentally that the proposed method is capable of recovering trees from encoded representations and that it outperforms state-of-the-art methods in a task consisting of mapping sentences to simple functional programs.

Rj Kate, Yw Wong, and Rj Mooney. Learning to transform natural to formal languages. In Proc. Natl. Conf. Artif. Intell., volume 20, pp. 1062-1068, 2005. ISBN 1-57735-236-x. URL http://www.aaai.org/Library/AAAI/2005/aaai05-168.php.

R Socher and Cc Lin. Parsing natural scenes and natural language with recursive neural networks. In EMNLP, pp. 129-136, 2011. ISBN 9781450306195. doi: 10.1007/978-3-540-87479-9.

In addition, their method provides children with asymmetric parent input: "younger" children receive information from the parent state only through the previous sibling's state. Though most of their experiments focus on the case where the nodes are given, they mention how to use their method for full prediction by introducing additional binary classifiers which predict which of the four LSTMs is to be used. These classifiers are trained in isolation after the main architecture has been trained. Contrary to this approach, our method can be trained end-to-end in only one pass, has a simpler formulation and explicitly incorporates topological prediction as part of the functioning of each neuron.

A similar approach is proposed by Dong & Lapata (2016). They propose SEQ2TREE, an encoder-decoder architecture that maps sentences to tree structures. For the decoder, they rely on hierarchical use of an LSTM, similar to Tai et al. (2015), but in the opposite direction: working top-down from the root of the tree. To decide when to change levels in the hierarchy, they augment the training trees with nonterminal nodes labeled with a special token <n>, which when generated during decoding trigger the branching out into a lower level in the tree. Similar to our method, they feed nodes with hidden representations of their parent and sibling, but they do so by concatenating both states and running them through a single recurrent unit, as opposed to our method, where these two sources of information are handled separately. A further difference is that our approach does not require artificial nodes with special tokens to be added to the tree, resulting in smaller trees.

Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In Proc. 53rd Annu. Meet. Assoc. Comput. Linguist. 7th Int. Jt. Conf. Nat. Lang. Process., pp. 1556-1566, 2015. ISBN 9781941643723. URL http://arxiv.org/abs/1503.00075.

Arun Venkatraman, Martial Hebert, and J Andrew Bagnell. Improving Multi-step Prediction of Learned Time Series Models. Twenty-Ninth AAAI Conf. Artif. Intell., pp. 3024-3030,
2015.

Hierarchical Neural Networks for Parsing. Neural networks have also been recently introduced to the problem of natural language parsing (Chen & Manning 2014; Kiperwasser & Goldberg 2016). In this problem, the task is to predict a parse tree over a given sentence. For this, Kiperwasser & Goldberg (2016) use recurrent neural networks as a building block, and compose them recursively to obtain a tree-structured encoder. Starting from the leaves (words) they predict a parse tree with a projective bottom-up strategy, which sequentially updates the encoded vector representation of the tree and uses it to guide edge-attaching decisions. Though conceptually similar to our approach, their method relies on having access to the nodes of the tree (words) and only predicts its topology, so---similar to recursive neural networks---it cannot be used for a fully generative decoding.

Oriol Vinyals and Quoc V. Le. A Neural Conversational Model. arXiv, 37, 2015.

Ronald J. Williams and David Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. Back-propagation: Theory, Archit. Appl., pp. 433-486, 1995. doi: 10.1080/02673039508720837.

Xingxing Zhang, Liang Lu, and Mirella Lapata. Top-down Tree Long Short-Term Memory Networks. In NAACL-HLT 2016, pp. 310-320, 2016.
"}, {"section_index": "5", "section_name": "3 DOUBLY RECURRENT NEURAL NETWORKS", "section_text": "Generating a tree-structured object from scratch using only an encoded representation poses several design challenges. First, one must decide in which order to generate the tree. If the nodes on the decoder side were given (such as in parsing), it would be possible to generate a tree bottom-up from these nodes (e.g. as Kiperwasser & Goldberg 2016 do). In the setting we are interested in, however, not even the nodes are known when decoding, so the natural choice is a top-down decoder, which starting from an encoded representation generates the root of the tree and then recursively generates the children (if any) of every node.

The second challenge arises from the asymmetric hierarchical nature of trees. Unlike the sequence-to-sequence setting where encoding and decoding can be achieved with analogous procedures, when dealing with tree-structured data these two involve significantly different operations. For example, an encoder that processes a tree bottom-up using information of a node's children to obtain its representation cannot be simply reversed and used as a decoder, since when generating the tree top-down, nodes have to be generated before their children are.

An additional design constraint comes from deciding what information to feed to each node. For sequences, the choice is obvious: a node should receive information from the node preceding or succeeding it (or both), i.e. there is a one-dimensional flow of information. In trees, there is an evident flow of information from parent to children (or vice-versa), but when generating nodes in a top-down order it seems unnatural to generate children in isolation: the label of one of them will likely influence what the states of the other children might be. For example, in the case of parse trees, generating a verb will reduce the chances of other verbs occurring in that branch.

With these considerations in mind, we propose an architecture tailored to tree decoding from scratch: top-down, recursive and doubly-recurrent, i.e. where both the ancestral (parent-to-children) and fraternal (sibling-to-sibling) flows of information are modeled with recurrent modules. Thus, the building block of a doubly recurrent neural network (DRNN) is a cell with two types of input states: one coming from its parent, updated and passed on to its descendants, and another one received from its previous sibling¹, updated and passed on to the next one. We model the flow of information in the two directions with separate recurrent modules.
"}, {"section_index": "6", "section_name": "A VARIATIONS ON TOPOLOGY PREDICTION", "section_text": "Besides the topology prediction approach presented in Section 3.1, we experimented with two additional variations of the proposed doubly-recurrent neuron: (i) using tokens to trigger both depth and width termination (i.e. implicit topology prediction) and (ii) using tokens for the width-stopping decision, but predicting depth termination explicitly (single topology prediction). Recall that in the model proposed in Section 3.1 both decisions are explicit (double topology prediction). The neurons in each of these alternative formulations are depicted in Figure 7. In order to train these two alternative models, we add special stopping tokens to the vocabulary, and we pad the training trees with additional nodes labeled with this token. Besides requiring larger trees and resulting in slower training, we empirically observed alternatives (i) and (ii) to result in worse performance. We hypothesize that this has to do with the fact that when using token-based stopping, topological and label prediction decisions are confounded, which results in less efficient learning. In these token-based variants, the output of node i is computed simply as

o_i = softmax(W h_i^(pred))
"}, {"section_index": "7", "section_name": "3.1 TOPOLOGICAL PREDICTION", "section_text": "As mentioned before, the central issue with free-form tree construction is to predict the topology of the tree. When constructing the tree top-down, for each node we need to decide: (i) whether it is a leaf node (and thus it should not produce offspring) and (ii) whether there should be additional siblings produced after it. Answering these two questions for every node allows us to construct a tree from scratch and eventually stop growing it.

Sequence decoders typically rely on special tokens to terminate generation (Sutskever et al. 2014). The token is added to the vocabulary and treated as a regular word. During training, the examples are padded with this token at the end of the sequence, and during testing, generation of this token signals termination. These ideas have been adopted by most tree decoders (Dong & Lapata 2016). There are two important downsides of using a padding strategy for topology prediction in trees. First, the size of the tree can grow considerably. While in the sequence framework only one stopping token is needed, a tree with n nodes might need up to O(n) padding nodes to be added. This can have important effects on training speed. The second reason is that a single stopping token selected competitively with other tokens requires one to continually update the associated parameters in response to any changes in the distribution over ordinary tokens so as to maintain topological control.
"}, {"section_index": "8", "section_name": "B.1 BACKPROPAGATION WITH DRNN'S", "section_text": "The gradients of the input ancestral and fraternal hidden states are then passed on to the previous sibling and parent. When nodes have more than one child, we combine gradients from multiple children by averaging them. This procedure is repeated until the root node is reached, after which a single (ancestral state) gradient is passed to the encoder.
Formally, let T = {V, E, X} be a connected labeled tree, where V is the set of nodes, E the set of edges and X are node labels². Let g^a and g^f be functions which apply one step of the two separate RNNs. For a node i ∈ V with parent p(i) and previous sibling s(i), the ancestral and fraternal hidden states are updated via

h_i^a = g^a(h_{p(i)}^a, x_{p(i)})    (1)
h_i^f = g^f(h_{s(i)}^f, x_{s(i)})    (2)

where x_{s(i)}, x_{p(i)} are the vectors representing the previous sibling's and parent's values, respectively. Once the hidden depth and width states have been updated with these observed labels, they are combined to obtain a predictive hidden state:

h_i^(pred) = tanh(U^f h_i^f + U^a h_i^a)    (3)

where U^f ∈ R^{n×Df} and U^a ∈ R^{n×Da} are learnable parameters. This state contains combined information of the node's neighborhood in the tree, and is used to predict a label for it. In its simplest form, the network could compute the output of node i by sampling from the distribution

o_i = softmax(W h_i^(pred))    (4)

In the next section, we propose a slight modification to (4) whereby topological information is included in the computation of cell outputs. After the node's output symbol x_i has been obtained by sampling from o_i, the cell passes h_i^a to all its children and h_i^f to the next sibling (if any), enabling them to apply Eqs (1) and (2) to realize their states. This procedure continues recursively, until termination conditions (explained in the next section) cause it to halt.

Figure 7: A single unit in each of the three alternative versions of the doubly-recurrent neural network, for node i with parent p and sibling s. Left: no explicit topology prediction; Middle: single (ancestral) topology prediction; Right: double (ancestral and fraternal) topology prediction. The top (left) incoming arrows represent the input and state received from the parent node (previous node, respectively).

During training, we do the forward pass over the trees in breadth-first preorder, feeding into every node an ancestral and a fraternal state. For computational efficiency, before passing on the ancestral state to the offspring, we update it through the RNN using the current node's label, so as to avoid repeating this step for every child node. After the forward pass is complete, we compute label (cross-entropy) and topological (binary cross-entropy) loss for every node. In the backward pass, we compute in this order:

1. Gradient of the current node's label prediction loss with respect to softmax layer parameters W, v^a, v^f: ∇_θ L(x̂_i, x_i).
2. Gradients of the topological prediction variable losses with respect to sigmoid layer parameters: ∇_θ L(p_i^a, α_i) and ∇_θ L(p_i^f, φ_i).
3. Gradient of the predictive state layer parameters with respect to h_i^(pred).
4. Gradient of the predicted ancestral and fraternal hidden states with respect to g^f and g^a's parameters.

Based on these observations, we propose an alternative approach to stopping, in which topological decisions are made explicitly (as opposed to implicitly, with stopping tokens). For this, we use the predictive hidden state of the node h_i^(pred) with a projection and sigmoid activation:

p_i^a = σ(u^a · h_i^(pred))    (5)
p_i^f = σ(u^f · h_i^(pred))    (6)

¹ Unlike the "ancestral" line, the order within sibling nodes is ambiguous. While in abstract trees it is assumed that there is no such ordering, we assume that for the structures we are interested in learning there is always one: either chronological (the temporal order in which the nodes were generated) or latent (e.g. the grammatical order of the words in a parse tree with respect to their sentence representation).
² We assume throughout that these values are given as class indicators x_i ∈ {1, . . . , N}.
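A plain NumPy sketch of one cell evaluation follows, combining the state updates of Eqs. (1)-(3) with the topology heads of Eqs. (5)-(6) and the topology-adjusted output derived below as Eq. (7). Here g_a and g_f stand for one step of the ancestral and fraternal RNNs (e.g. an LSTM or GRU update), and thresholding the topology probabilities at 0.5 is one simple decoding choice (the text also mentions sampling or beam search).

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def drnn_cell(h_a_parent, x_parent, h_f_sibling, x_sibling,
              U_a, U_f, W, u_a, u_f, v_a, v_f, g_a, g_f):
    h_a = g_a(h_a_parent, x_parent)              # Eq. (1): ancestral state
    h_f = g_f(h_f_sibling, x_sibling)            # Eq. (2): fraternal state
    h_pred = np.tanh(U_f @ h_f + U_a @ h_a)      # Eq. (3): predictive state
    p_a = sigmoid(u_a @ h_pred)                  # Eq. (5): has children?
    p_f = sigmoid(u_f @ h_pred)                  # Eq. (6): has next sibling?
    alpha, phi = float(p_a > 0.5), float(p_f > 0.5)
    o = softmax(W @ h_pred + alpha * v_a + phi * v_f)   # Eq. (7): label dist.
    return o, p_a, p_f, h_a, h_f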
"}, {"section_index": "9", "section_name": "B.2 MODEL SPECIFICATION AND TRAINING PARAMETERS", "section_text": "The best parameters for all tasks are chosen by performance on the validation sets. We perform early stopping based on the validation loss. For the IFTTT task, we initialize word embeddings with pretrained GloVe vectors (Pennington et al. 2014). For both tasks we clip gradients when the absolute value of any element exceeds 5. We regularize with a small penalty ρ on the l2 norm of the parameters. We train all methods with ADAM (Kingma & Ba 2014), with initial learning rate chosen by cross-validation. The parameter configurations that yielded the best results and were used for the final models are shown in Table 3. Details about the four models used for the machine translation task are shown in Table 4.

Table 3: Hyperparameter choice for DRNNs in the synthetic and IFTTT tasks

Task        Encoder   Dim   Batch   Learning Rate   Regularization ρ
synthetic   LSTM       50      20            0.05            1x10^-5
IFTTT       GRU       150      35            0.06            1x10^-4
IFTTT       LSTM      150      35            0.05            5x10^-4

Figure 1: Left: A cell of the doubly-recurrent neural network corresponding to node i with parent p and sibling s. Right: Structure-unrolled DRNN network in an encoder-decoder setting. The nodes are labeled in the order in which they are generated. Solid (dashed) lines indicate ancestral (fraternal) connections. Crossed arrows indicate production halted by the topology modules.

Table 4: Models used in the machine translation task

Model             Encoder   Decoder                  Dim RNN   Layers   Batch
SEQ2SEQ (Small)   LSTM      LSTM                         150        1      64
SEQ2SEQ (Large)   LSTM      LSTM                         300        3      64
DRNN (Small)      LSTM      DRNN-GRU (Left-Right)        150        1      32
DRNN (Large)      LSTM      DRNN-GRU (Left-Right)        300        1      32

Note that these stopping strategies depart from the usual padding methods in a fundamental property: the decision to stop is made before instead of in conjunction with the label prediction. The rationale behind this is that the label of a node will likely be influenced not only by its context, but also by the type of node (terminal or non-terminal) where it is to be assigned. This is the case in language, for example, where syntactic constraints restrict the type of words that can be found in terminal nodes. For this purpose, we include the topological information as inputs to the label prediction layer. Thus, (4) takes the form

o_i = softmax(W h_i^(pred) + α_i v^a + φ_i v^f)    (7)

where α_i, φ_i ∈ {0, 1} are binary variables indicating the topological decisions and v^a, v^f are learnable offset parameters. During training, we use gold-truth values in (7), i.e. α_i = 1 if node i has children and φ_i = 1 if it has a succeeding sibling. During testing, these values are obtained from p_i^a, p_i^f by sampling or beam-search. A schematic representation of the internal structure of a DRNN cell and the flow of information in a tree are shown in Figure 1.

We generate trees in a top-down fashion, conditioning the label and topology of every node on the state of its ancestors and siblings. For simplicity, we use a Markovian assumption on these dependencies, modeling the probability of a node's label as depending only on the label of its parent p(i) and the last sibling s(i) generated before it (if any). Conditioned on these two inputs, we model the label of the node as coming from a multinomial distribution over the alphabet:

x_i ~ P(w | w_{p(i)}, w_{s(i)}) = Multi(θ_{w_{p(i)},w_{s(i)}})
"}, {"section_index": "10", "section_name": "3.2 TRAINING DRNNS", "section_text": "We train DRNNs with (reverse) back-propagation through structure (BPTS) (Goller & Kuechler 1996). In the forward pass, node outputs are computed in a top-down fashion on the structure-unrolled version of the network, following the natural³ dependencies of the tree. We obtain error signal at the node level from the two types of prediction: label and topology. For the former, we compute cross-entropy loss of o_i with respect to the true label of the node x_i. For the topological
values p_i^a and p_i^f, we compute binary cross-entropy loss with respect to the gold topological indicators α_i, φ_i ∈ {0, 1}. In the backward pass, we proceed in the reverse (bottom-up) direction, feeding into every node the gradients received from child and sibling nodes and computing internally gradients with respect to both topology and label prediction. Further details on the backpropagation flow are provided in the Appendix.

Note that the way BPTS is computed implies an underlying decoupled loss function

L(x̂) = Σ_{i∈V} [ L_label(x̂_i, x_i) + L_topo(p̂_i, p_i) ]

The decoupled nature of this loss allows us to weigh these two objectives differently, to emphasize either topology or label prediction accuracy. Investigating the effect of this is left for future work.

As is common with sequence generation, during training we perform teacher forcing: after predicting the label of a node and computing its corresponding loss, we replace it with its gold value, so that children and siblings receive the correct label for that node.

³ The traversal is always breadth-first starting from the root, but the order in which sibling nodes are visited might depend on the specific problem. If the nodes of the tree have an underlying order (such as in dependency parse trees), it is usually desirable to preserve this order.

where θ_{w_{p(i)},w_{s(i)}} are class probabilities drawn from a Dirichlet prior with parameter α_V. On the other hand, we denote by b_i^a the binary variable indicating whether node i has descendants, and by b_i^f that indicating whether it has an ensuing sibling. We model these variables as depending only on the label of the current node and its position in the tree:

P(b_i^a | T) = P(b_i^a | w_i, D_i) = Bernoulli(p_{w_i}^a · g^a(D_i))
P(b_i^f | T) = P(b_i^f | w_i, W_i) = Bernoulli(p_{w_i}^f · g^f(W_i))

where D_i is the depth of node i and W_i its width, defined as its position among the children of its parent p(i). Intuitively, we want to make P(b^a = 1 | T) decrease as we go deeper and further along the branches of the tree, so as to control its growth. Thus, we model g^a and g^f as decreasing functions with geometric decay, namely g^a(D) = (γ^a)^D and g^f(W) = (γ^f)^W, with γ^a, γ^f ∈ (0, 1). For the label-conditioned branching probabilities P(b^a | w_i) and P(b^f | w_i), we use Bernoulli distributions with probabilities drawn from beta priors with parameters (α_a, β_a) and (α_f, β_f), respectively.

In summary, we use the following generative procedure to grow the trees:

1. For each w ∈ V, draw p_w^a ~ Beta(α_a, β_a) and p_w^f ~ Beta(α_f, β_f).
2. For each pair (w_i, w_j), draw θ_{w_i,w_j} ~ Dir(α_V).
3. While there is an unlabeled non-terminal node i, do:
   - Sample a label for i from w* ~ P(w | w_{p(i)}, w_{s(i)}) = Multi(θ_{w_{p(i)},w_{s(i)}}).
   - Draw b^a ~ P(b^a | w*, D) = Bernoulli(g^a(D) · p_{w*}^a), where D is the current depth. If b^a = 1, generate a node k, set p(k) = i, and add it to the queue.
   - Draw b^f ~ P(b^f | w*, W) = Bernoulli(g^f(W) · p_{w*}^f), where W is the current width. If b^f = 1, generate a node k, set s(k) = i, and add it to the queue.

[Figure 2: predicted trees for models trained on N = 500, 1000, 3500 and 4000 examples, shown next to the gold tree.]

Figure 2: Trees generated by the DRNN decoder trained on subsets of size N of the synthetic dataset for a test example with description "ROOT B W F J V".
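A runnable sketch of this generative procedure follows, with the prior parameters from the text ((α_a, β_a) = (0.25, 1), (α_f, β_f) = (7, 2), γ_a = 0.6, γ_f = 0.9); the uniform label sampler is a stand-in for the Dirichlet-multinomial step, and the dict-based tree representation is an assumption.

import random

ALPHABET = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
GAMMA_A, GAMMA_F = 0.6, 0.9                                 # geometric decay
p_a = {w: random.betavariate(0.25, 1) for w in ALPHABET}    # child priors
p_f = {w: random.betavariate(7, 2) for w in ALPHABET}       # sibling priors

def grow_siblings(sample_label, parent_label, depth, width=1, prev=None):
    # Grow one node, its subtree, and any following siblings; returns a list.
    # The decay factors shrink geometrically, so growth stops almost surely.
    label = sample_label(parent_label, prev)
    node = {"label": label, "children": []}
    # b^a: first child with probability p_a[label] * gamma_a^depth
    if random.random() < p_a[label] * GAMMA_A ** depth:
        node["children"] = grow_siblings(sample_label, label, depth + 1)
    nodes = [node]
    # b^f: next sibling with probability p_f[label] * gamma_f^width
    if random.random() < p_f[label] * GAMMA_F ** width:
        nodes += grow_siblings(sample_label, parent_label, depth, width + 1, label)
    return nodes

def sample_tree():
    uniform = lambda parent, prev: random.choice(ALPHABET)  # stand-in for Multi(theta)
    return {"label": "ROOT", "children": grow_siblings(uniform, "ROOT", 1)}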
Analogously, we obtain the probabilities p_i^a and p_i^f, compute their loss, and replace them with the ground truth variables α_i, φ_i for all downstream computations. Addressing this exposure bias by mixing ground truth labels with model predictions during training (Venkatraman et al. 2015) or by incremental hybrid losses (Ranzato et al. 2016) is left as an avenue for future work.

Note that this generative process does create a dependence between the topology and content of the trees (since the variables b^a and b^f depend on the content of the tree via their dependence on the label of their corresponding node). However, the actual process by which labels and topological decisions are generated relies on separate mechanisms. This is a natural assumption which is reasonable to expect in practice.

The choice of prior parameters is done drawing inspiration from natural language parse trees. We want nodes to have low but diverse probabilities of generating children, so we seek a slow-decaying distribution with most mass allocated in values close to 0. For this, we use (α_a, β_a) = (0.25, 1). For sibling generation, we use (α_f, β_f) = (7, 2), which yields a distribution concentrated in values close to 1, so that nodes have on average a high and similar probability of producing siblings. Since we seek trees that are wider than they are deep, we use decay parameters γ^a = 0.6, γ^f = 0.9. Finally, we use α_V = 10 · 1 for the parent-sibling probability prior, favoring non-uniform interactions. Using this configuration, we generate 5000 sentence-tree pairs, which we split into training (4000 examples), validation (500) and test (500) sets. The characteristics of the trees in the dataset are summarized in Table 5.
"}, {"section_index": "11", "section_name": "4.1 SYNTHETIC TREE RECOVERY", "section_text": "In our first set of experiments we evaluate the effectiveness of the proposed architecture to recover trees from flattened string representations. For this, we first generate a toy dataset consisting of simple labeled trees. To isolate the effect of label content from topological prediction, we take a small vocabulary consisting of the 26 letters of the English alphabet. We generate trees in a top-down fashion, conditioning the label and topology of every node on the state of its ancestors and siblings. For simplicity, we use a Markovian assumption on these dependencies, modeling the probability of a node's label as depending only on the label of its parent and the last sibling generated before it (if any). Conditioned on these two inputs, we model the label of the node as coming from a multinomial distribution over the alphabet with a Dirichlet prior. To generate the topology of the tree, we model the probability of a node having children and a next-sibling as depending only on its label and the depth of the tree. For each tree we generate a string representation by traversing it in breadth-first preorder, starting from the root. The labels of the nodes are concatenated into a string in the order in which they were visited, resulting in a string of |T| symbols. We create a dataset of 5,000 trees with this procedure, and split it randomly into train, validation and test sets (with a 80%, 10%, 10% split). Further details on the construction of this dataset are provided in the Appendix.

Table 5: Synthetic tree dataset statistics. Tree size is measured in number of nodes, depth is the largest path from the root node to a leaf and width is the maximum number of children for any node in the tree.
The values reported correspond to means with one standard deviation in parentheses.

Fold    Examples   Size          Depth         Width
train       4000   3.94 (3.38)   1.42 (0.66)   2.89 (1.71)
dev          500   4.13 (3.21)   1.46 (0.67)   2.91 (1.76)
test         500   3.64 (3.21)   1.32 (0.61)   2.80 (1.71)

The task consists of learning a mapping from strings to trees, and using this learned mapping to recover the tree structure of the test set examples, given only their flattened representation. To do so, we use an encoder-decoder framework, where the strings are mapped to a fixed-size vector representation using a recurrent neural network. For the decoder, we use a DRNN with LSTM modules, which given the encoded representation generates a tree. We choose hyper-parameters with cross-validation. Full training details are provided in the Appendix.

The IFTTT dataset comes with a script to generate the data by crawling and parsing the recipes. Unfortunately, by the time we ran the script many recipes had been removed or changed. We therefore resorted to the original dataset used by Quirk et al. (2015). We converted these recipes into our tree format, assigning a node to each element in the first three levels (channels, functions and arguments, see Figure 5). For the parameters level, many recipes have sentences instead of single tokens, so we broke these up creating one node per word. The last two layers are therefore the most topologically diverse, whereas the structure of the first two layers is constant (all trees have channels and functions). A very small fraction (< 1%) of trees that could not be parsed into our format was excluded from the dataset.

Measuring performance only in terms of exact recovery would likely yield near-zero accuracies for most trees. Instead, we opt for a finer-grained metric of tree similarity that gives partial credit for correctly predicted subtrees. Treating tree generation as a retrieval problem, we evaluate the quality of the predicted tree in terms of the precision and recall of recovering nodes and edges present in the gold tree. Thus, we penalize both missing and superfluous components. As baseline, we induce a probabilistic context-free grammar (PCFG) on the full training data and use it to parse the test sentences. Note that unlike the DRNN, this parser has direct access to the sentence representation and thus its task is only to infer the tree structure on top of it, so this is indeed a strong baseline.

Table 6 shows various statistics about the topological characteristics of the recipes in the IFTTT dataset. The middle columns show the percentage of trees that contain nonempty arguments and parameters in trigger (IF) and action (THEN) branches. Almost all recipes have nonempty arguments and parameters (and thus depth 4, excluding the root), and a lower percentage--but still a majority--has arguments and parameters on the trigger side too. The last two columns show tree statistics pertaining to the complexity of trees after conversion to our format. The distribution of tree sizes is mostly concentrated between 4 and 30 nodes, with a slow-decaying tail of examples above this range (see Figure 8).

Figure 3 shows the results on the test set. Training on the full data yields node and edge retrieval F1-Scores of 75% and 71%, respectively, the latter considerably above the baseline⁴. This 4% gap can be explained by correct nodes being generated in the wrong part of the tree, as in the example in Figure 2.

⁴ Since the PCFG parser has access to the nodes by construction, node accuracy for the baseline method is irrelevant and thus omitted from the analysis.
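The retrieval-style scoring described above reduces to set precision/recall over nodes and edges; a minimal sketch, assuming predicted and gold trees are given as sets of (identifiable) labeled nodes and edges:

def prf(pred, gold):
    # Precision, recall and F1 of a predicted set against a gold set.
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

def tree_scores(pred_nodes, pred_edges, gold_nodes, gold_edges):
    # Penalizes both missing and superfluous nodes/edges.
    return (prf(set(pred_nodes), set(gold_nodes)),
            prf(set(pred_edges), set(gold_edges)))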
Table 6: IFTTT dataset statistics. The middle columns show the percentage of trees that contain nonempty arguments and parameters in trigger (IF) and action (THEN) branches. The last columns show average (with standard deviation) tree size and depth.

                   Has args. (%)         Has params. (%)       Tree Size
Fold    Examples   Trigger    Action     Trigger    Action     # Nodes          Depth
train     67,444     69.10     98.46       65.47     96.77     16.93 (31.71)    3.99 (.13)
dev        4,038     69.44     98.46       66.42     96.31     16.55 (8.75)     3.99 (.11)
test       3,725     68.38     98.66       65.64     97.50     16.43 (8.18)     3.99 (.12)

[Figure 8: histogram of tree sizes (# nodes, 0-100) by frequency for the train, dev and test folds.]

Figure 3: Left: F1-Score for models trained on randomly sampled subsets of varying size, averaged over 5 repetitions. Right: Node (first column) and edge (second) precision as a function of tree size.

Figure 4: Node and edge precision as a function of tree depth (left figure) and width (right).

Figure 8: Tree size distribution in the IFTTT dataset.

Regarding the content of the trees, the labels of the nodes in the first two levels (channels and functions) come from somewhat reduced vocabularies: 111 and 434 unique symbols for the trigger branch, respectively, and 157 and 85 for the action branch. The lower layers of the tree have a much more diverse vocabulary, with about 60K unique tokens in total. On the source side, the vocabulary over the sentence descriptions is large too, with about 30K unique tokens. The average sentence size is 6.07 tokens, with 80% of the sentences having at most 12 tokens.

Tree structures arise naturally in the context of programs. A typical compiler takes human-readable source code (expressed as sequences of characters) and transforms it into an executable abstract syntax tree (AST). Source code, however, is already semi-structured. Mapping natural language sentences directly into executable programs is an open problem, which has received considerable interest in the natural language processing community (Kate et al. 2005; Branavan et al. 2009).

The IFTTT dataset (Quirk et al. 2015) is a simple testbed for language-to-program mapping. It consists of if-this-then-that programs (called recipes) crawled from the IFTTT website, paired with natural language descriptions of their purpose. The recipes consist of a trigger and an action, each defined in terms of a channel (e.g. "Facebook"), a function (e.g. "Post a status update") and potentially arguments and parameters. An example of a recipe and its description are shown in Figure 5. The data is user-generated and extremely noisy, which makes the task significantly challenging.
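In the tree format used here, a recipe like the one in Figure 5 can be pictured as a small nested structure; the field names below are illustrative, and the pairing of parameter strings to arguments is omitted because it is not recoverable from the figure.

recipe_ast = {
    # Trigger and action branches, each with a channel and a function node.
    "IF":   {"channel": "Facebook",
             "function": "You_are_tagged_in_a_photo"},
    "THEN": {"channel": "Dropbox",
             "function": "Add_file_from_URL",
             "arguments": ["File_URL", "File_name", "Dropbox_Folder_Path"],
             "parameters": ["{{CreatedAt}}", "{{ImageSource}}", "{{From}}",
                            "{{Facebook}}", "{{Caption}}"]},
}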
For the perturbation experiments, we randomly selected 50 sentences from among those in the test set that could be easily restructured without significantly altering their meaning. The types of alterations we perform are: subordinate clause swapping, alternative construction substitution, and passive/active voice change. In doing this, we try to keep the number of added/deleted words to a minimum, to minimize vocabulary-induced likelihood variations. When inserting new words, we verify that they are contained in the original vocabulary of 20K words. In Table 7 we show a few examples of the source, original target and perturbed target sentences.

[Figure 5: the recipe "Save photos you're tagged in on Facebook to Dropbox"; Root with an IF (TRIGGER) branch (channel Facebook, function You_are_tagged_in_a_photo) and a THEN (ACTION) branch (channel Dropbox, function Add_file_from_URL, with arguments File_URL, File_name and Dropbox_Folder_Path and parameter strings built from {{CreatedAt}}, {{ImageSource}}, {{From}}, {{Facebook}} and {{Caption}}).]

Figure 5: Example recipe from the IFTTT dataset. The description (above) is a user-generated natural language explanation of the if-this-then-that program (below).

The second plot in Figure 3 shows that although small trees are recovered more accurately, precision decays slowly with tree size, with depth accounting for the largest effect (Figure 4).

Starting from a preprocessed 2% sub-selection of the English-French section of the WMT14 dataset, we further prune down the data by keeping only sentences of length between 5 and 20 words, and for which every word is within the 20K most frequent. The reason for this is to simplify the task by keeping only common words and avoiding out-of-vocabulary tokens. After this filtering, we are left with 53,607, 918 and 371 sentences for the train, validation and test sets. After tokenizing, we obtain dependency parses for the target (English) sentences using the Stanford CoreNLP toolkit (Manning et al. 2014).

Table 1: Results on the IFTTT task. Left: non-English and unintelligible examples removed (2,262 recipes). Right: examples for which at least 3+ humans agree with gold (758 recipes).

                 non-English/unintelligible removed      3+ human agreement
Method            Channel    +Func      F1               Channel    +Func      F1
retrieval            36.8     25.4     49.0                 43.3     32.3     56.2
phrasal              27.8     16.4     39.9                 37.2     23.5     45.5
sync                 26.7     15.4     37.6                 36.5     23.5     45.5
classifier           64.8     47.2     56.5                 79.3     66.2     65.0
posclass             67.2     50.4     57.7                 81.4     71.0     66.5
SEQ2SEQ              68.8     50.5     60.3                 87.8     75.2     73.7
SEQ2TREE             69.6     51.4     60.4                 89.7     78.4     74.2
GRU-DRNN             70.1     51.2     62.7                 89.9     77.6     74.1
LSTM-DRNN            74.9     54.3     65.2                 90.1     78.2     77.4

We approach this task using an encoder-decoder framework. We use a standard RNN encoder, either an LSTM or a GRU (Cho et al. 2014), to map the sentence to a vector representation, and we use a DRNN decoder to generate the AST representation of the recipe. We use the original data split, which consists of 77,495 training, 5,171 development and 4,294 test examples. For evaluation, we use the same metrics as Quirk et al. (2015), who note that computing exact accuracy on such a noisy dataset is problematic, and instead propose to evaluate the generated AST in terms of F1-score on the set of recovered productions. In addition, they compute accuracy at the channel level (i.e. when both channels are predicted correctly) and at the function level (both channels and both functions predicted correctly).
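Channel- and function-level accuracy as defined above are straightforward to compute on this representation; a small sketch, using the illustrative dict format shown earlier:

def channel_func_correct(pred, gold):
    # Channel level: both trigger and action channels match the gold tree.
    chan = all(pred[b]["channel"] == gold[b]["channel"] for b in ("IF", "THEN"))
    # Function level: channels and functions all match.
    func = chan and all(pred[b]["function"] == gold[b]["function"]
                        for b in ("IF", "THEN"))
    return chan, func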
"}, {"section_index": "12", "section_name": "4.3 MACHINE TRANSLATION", "section_text": "In our last set of experiments, we offer a qualitative evaluation of DRNNs in the context of machine translation. Obtaining state-of-the-art results in machine translation requires highly optimized architectures and large parallel corpora. This is not our goal. Instead, we investigate whether decoding with structure can bring benefits to a task traditionally approached as a sequence-to-sequence problem. For this reason, we consider a setting with limited data: a subset of the WMT14 dataset consisting of about 50K English-French sentence pairs (see the Appendix for details), along with dependency parses of the target (English) side.

We train a sequence-to-tree model using an LSTM encoder and a DRNN decoder as in the previous experiments. A slight modification here is that we distinguish left and right children in the tree, using two symmetric width-modules g^L, g^R that produce children from the parent outwards. With this, children are lexically ordered, and therefore trees can be easily and unambiguously projected back into sentences. We compare our model against a sequence-to-sequence architecture of similar complexity (in terms of number of parameters) trained on the same data using the optimized OpenNMT library (Klein et al., 2017). For decoding, we use a simple best-of-k sampling scheme for our model, and beam search for the SEQ2SEQ models.

Table 7: Example structural perturbations for likelihood robustness experiments.

source:       "après un accord de paix signé en 1992 elle est devenue un parti d'opposition."
target:       "after a 1992 peace deal it became an opposition party"
perturbation: "it became an opposition party after a 1992 peace deal."

source:       "cela représente environ 9 milliards de grains de maïs."
target:       "that's about 9 billion individual kernels of corn"
perturbation: "this amounts to about 9 billion kernels of corn"

source:       "l'exercice de fonctions publiques est une question de service public."
target:       "public office is about public service."
perturbation: "the exercise of public functions is a matter of public service."

source:       "nous avons ainsi effectué depuis la fin de l'hiver dernier 64 interventions."
target:       "hence we have carried out 64 operations since last winter"
perturbation: "we have therefore carried out 64 operations since last winter."

source:       "on estime qu'un enfant sur 2000 nés chaque année n'est ni un garçon ni une fille."
target:       "an estimated one in 2000 children born each year is neither boy nor girl."
perturbation \"it is estimated that one in every 2000 children born every year is neither a boy nor a girl.\"\nWe compare our methods against the various extraction and phrased-based machine translation base lines ofQuirk et al.(2015) and the the methods ofDong & Lapata (2016): SEQ2SEQ, a sequence- to-sequence model trained on flattened representations of the AST, and SEQ2TREE, a token-driven hierarchical RNN. Following these two works, we report results on two noise-filtered subsets of the data: one with all non-English and unintelligible recipes removed and the other one with recipes for which at least three humans agreed with the gold AST. The results are shown in Table[1 In both subsets, DRNNs perform on par or above previous approaches, with LsTm-DRNN achieving. significantly better results. The improvement is particularly evident in terms of F1-score, which is the only metric used by previous approaches that measures global tree reconstruction accuracy. To. better understand the quality of the predicted trees beyond the function level (i.e. (b) in Figure [5) we computed node accuracy on the arguments level. Our best performing model, LsTm-DRNN,. achieves a Macro F1 score of 51% (0.71 precision, 0.40 recall) over argument nodes, which shows that the model is reasonably successful at predicting structure even beyond depth three. The best. performing alternative model, SEQ2TREE, achieves a corresponding F1 score of 46%.."}]
HkzuKpLgg
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Linnan Wang\nSchool of Computer Science Georgia Institute of Technology\nSchool of Computational Science & Engineering Georgia Institute of Technology.\nWe consider the problem of how to reduce the cost of communication that is required for the parallel training of a neural network. The state-of-the-art method Bulk Synchronous Parallel Stochastic Gradient Descent (BSP-SGD), requires many collective communication operations, like broadcasts of parameters or reductions for partial gradient aggregations, which for large messages quickly dominates overall execution time and limits parallel scalability. To address this problem, we develop a new technique for collective operations, referred to as Linear Pipelining (LP). It is tuned to the message sizes that arise in BSP-SGD, and works effectively on multi-GPU systems. Theoretically, the cost of LP is invariant to P, where P is the number of GPUs, while the cost of the more conventional Minimum Spanning Tree (MST) scales like O(log P). LP also demonstrates up to 2x higher bandwidth than Bidirectional Exchange (BE) techniques that are widely adopted by current MPI implementations. We apply these collectives to BSP-SGD, showing that the proposed implementations reduce communication bottlenecks in practice while preserving the attractive convergence properties of BSP-SGD.\nEdgar Gabriel, Graham E Fagg, George Bosilca, Thara Angskun, Jack J Dongarra, Jeffrey M Squyres Vishal Sahay, Prabhanjan Kambadur, Brian Barrett, Andrew Lumsdaine, et al. Open mpi: Goals concept, and design of a next generation mpi implementation. In European Parallel Virtual Machine/Message Passing Interface Users' Group Meeting, pp. 97-104. Springer, 2004.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Scaling up neural networks with respect to parameter sizes, training sets, or both has drasticall improved the state-of-the-art performance in several domains ranging from scene understanding speech recognition, even to playing Go against professional players. Although training a larg network saturated with nonlinearities is extremely time-consuming, the benefits brought forth b large-scale models has sparked a surge of interest in parallelizing training on multi-GPUs. The parallelization of SGD demands synchronizations to exchange gradients and parameters per iteration and this introduces significant communication overhead. Previous studies have focused on trading th SGD convergence rate for fast gradient updates, such as stale or asynchronous SGD, 1-bit compressec gradient, etc. However, these methods are rarely adopted by Deep Learning frameworks as the depend on the balance between the enhanced iteration throughput and the decelerated convergenc rate. Since BSP retains the convergence properties of SGD, its optimization should be of interest.\nThe gradient aggregations and parameter exchanges in BSP SGD are typical operations of commu. nication collectives (Chan et al. 2007). Messages in the large-scale neural networks training are. dense, long, and fixed-length, while the performance of collective algorithms is drastically sensitive to these attributes. Besides, the processing speed is several orders of magnitude faster than the. network unidirectional transmission rate. These prioritize the utilization of network bandwidth in the collective design. However, we have seen sub-optimal collective algorithms, e.g. MST and BE.. 
widely adopted by the deep learning community (Agarwal et al., 2014; Jia et al., 2014; Duchi et al., 2011). MST is only suitable for the latency-dominant case, such as frequent short message exchanges, while the bandwidth term of BE can be further improved (Thakur et al., 2005).

Alekh Agarwal, Olivier Chapelle, Miroslav Dudik, and John Langford. A reliable effective terascale linear learning system. Journal of Machine Learning Research, 15(1):1111-1133, 2014.

Wei Wu & George Bosilca

Big Data Research Center, Univ. of Electr. Sci. & Tech. of China
zlxu@uestc.edu.cn
"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Galen M. Shipman, Timothy S. Woodall, Richard L. Graham, Arthur B. Maccabe, and Patrick G. Bridges. InfiniBand scalability in Open MPI. In Proceedings 20th IEEE International Parallel & Distributed Processing Symposium, pp. 10 pp. IEEE, 2006.

[Figure 1 panels: (a) legend (black blocks: computation; white blocks: communication; sync); (b) Reference SGD; (c) ASGD; (d) Stale SGD; (e) CUDNN; (f) ours.]

Figure 1: Illustrations of various methods to accelerate the training. Black blocks stand for computations, and white blocks stand for communications. CUDNN reduces the computation cost, while we reduce the communication cost.

In this paper, we introduce new Linear Pipeline based collectives for multi-GPU training. The collectives demonstrate O(log(P)) speedups over MST collectives and up to 2x speedups over BE based ones; the bounds only hold in training large neural networks. In particular, the theoretical analysis and the implementation yield an interesting insight: the cost of our design is invariant to the number of GPUs, i.e., the cost of a collective operation on 2 GPUs is similar to that on 20 GPUs. The design explores message granularity to maximize simultaneous bidirectional data exchanges. Specifically, it divides a message into fine-grained blocks as the basic communication element. A GPU sends a block (via DMA 1) while receiving (via DMA 2) a new block from a neighbor. The copies are asynchronously launched on two GPU streams, and numerical operations further overlap the data copies. As a result, our method yields a highly efficient pipeline over which messages for neural network training may be exchanged.

Martin Zinkevich, Markus Weimer, Lihong Li, and Alex J. Smola. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 2595-2603, 2010.

The proposed collective design achieves 2.3x to 360.55x speedups over Open MPI alternatives on 6 GPUs. In training GoogLeNet, we set up the same BSP SGD implementation with different underlying collectives. Our design demonstrates up to 1.7x convergence speedup over MST based Caffe.

The first group of approaches relaxes the synchronous model of SGD to increase the iteration throughput (Dean et al., 2012; Zinkevich et al., 2010). In this case, the relaxed SGD enables computations on one GPU to partially overlap with communications on others, as demonstrated in Fig. 1c and Fig. 1d. Recht et al. (2011) proposed a lock-free Asynchronous SGD (ASGD) that entirely gets rid of the synchronization requirement by allowing free concurrent parameter updates, but the relaxation only works well on sparse learning problems. In response, Ho et al. (2013) introduced the concept of staleness by bounding the fastest and the slowest machines to within a few iterations of each other to ensure correctness. These relaxations claim to be effective, as the enhanced iteration throughput offsets the disadvantages of the degraded convergence rate.
However, recent advances in deep learning frameworks (Cui et al., 2016) have reestablished the advantages of BSP over relaxed schemes in training neural networks. This reiterates the importance of studying BSP SGD.

The second group of approaches tries to reduce the overall communication volume. Seide et al. (2014) quantized gradients from 32 bits to 1 bit to reduce the message length, but the lost gradient information decelerates the convergence rate. Another approach is to accelerate the convergence with a large batch. Dekel et al. (2012) show that the convergence rate of mini-batch SGD is O(1/√(Tb) + 1/T), with b being the batch size. This result indicates that a large batch needs fewer iterations to find a solution, and thereby fewer overall synchronizations. However, excessively increasing the batch size is also unfavorable under limited computing resources, as demonstrated by Wang et al. (2016b). Please note that these methods still need synchronizations, and our work will further improve their performance.

The communication overhead has been widely identified as the major bottleneck in data-parallel SGD (Shamir, 2014; Li et al., 2014). Data parallelism linearly adds processing power through concurrent gradient computations on multiple GPUs, but it also requires synchronizations to collect partial gradients or to broadcast parameters. In practice, the communication rate is several orders of magnitude slower than the computation (Coates et al., 2013). Various approaches have been proposed to reduce the overhead.

[Figure 2 panels: (a) broadcast; (b) reduce; (c) allreduce — block-level data flow among GPU0, GPU1 and GPU2, with per-step MEMCPY/COMPUTE annotations on two GPU streams.]

Figure 2: The data flow of broadcast, reduce and allreduce on 3 GPUs.

The third group of approaches conducts system optimizations to minimize the communication cost (Wang et al., 2016a). Agarwal & Duchi (2011) and Agarwal et al. (2014) presented partial gradient aggregations guided by an MST that takes log(P) steps to fully synchronize the model. Deep learning frameworks such as Caffe (Jia et al., 2014) also adopt this approach. Unfortunately, MST is only suitable for latency-dominant scenarios (i.e., highly frequent short messages). Although collective algorithms have been thoroughly discussed in the HPC community (Almasi et al., 2005; Gabriel et al., 2004; Shipman et al., 2006), few have studied their performance for deep learning. The performance of collectives varies significantly with different message lengths and network topologies, while messages in deep network training are dense, long and fixed-length. Therefore, it is imperative to address such peculiarities in the collectives. Worringen (2003) proposed a pipeline collective model for CPU data in a shared-memory environment, but there the communications of different MPI processes share the same CPU memory bus within one CPU socket. This causes bandwidth competition among processes, and thereby poor collective performance on CPU data in shared memory. In contrast, PCI-E is bi-directional, and the latest GPUs also feature two independent DMA engines for simultaneous, independent in/out communications. These hardware updates pave the way for LP based GPU communications.
This section presents a new LP based multi-GPU collective design, followed by a concrete proof of its performance in training neural networks. The general idea of LP is as follows: (a) we dissect a long message into fine-grained blocks; (b) a GPU receives a block from the prior GPU via DMA 1 while sending a block to the next one via DMA 2. Please note that each block exchange utilizes an independent physical link, and the entire network is fully utilized once the pipeline is filled.

Broadcast tackles the synchronization of parameters among multiple GPUs. It copies the source vector to every GPU. Fig. 2a illustrates the data flow of the broadcast collective on 3 GPUs. GPU0 is the source, and the rest are destinations. Broadcast starts with filling the pipe by copying block a on GPU0 to GPU1 at step 1. Let's focus on GPU1: at each step, GPU1 receives a block from GPU0 via DMA 1, while GPU1 is also sending a block to GPU2 via DMA 2. The data exchange in either direction utilizes an independent link and DMA engine to achieve the maximal unidirectional rate. Hence the bandwidth is fully exploited.

Reduce aggregates the partial gradients to reconstruct the global one. It combines the elements provided in the vector of each GPU, and returns the combined value in the receive vector of a specific GPU. It supports basic arithmetic operations such as summations and multiplications. Fig. 2b illustrates the data flow of the reduce collective. GPU2 is the root that aggregates the vectors across all GPUs. Reduce starts with filling the pipe by writing block a0 to a buffer on GPU1. Then, GPU1 reduces the received block a0 with a1 to yield a' (within the rectangle of Fig. 2b). Since the computation is much faster than the communication, we assume it incurs no latency; in practice, computations are further overlapped with communications. In the next step, GPU1 retrieves b0 from GPU0 to reduce to b' via DMA 1, while GPU1 is also sending a' to GPU2 to reduce to a'' via DMA 2. b'', c'', d'' are reduced at steps 3, 4, 5 in a similar fashion.

AllReduce enables us to collect partial gradients and broadcast the latest parameters with only one synchronization point per SGD iteration. It combines vectors from all GPUs and distributes the result back to them. Mathematically, it is equivalent to a reduce followed by a broadcast. However, allreduce is more efficient than two separate calls, as it only needs to fill the pipeline once. For example, it takes 9 timesteps to allreduce 4 message blocks, while broadcast + reduce costs 10. Fig. 2c illustrates the data flow of the allreduce collective. It starts with reducing a'', after which a'' is broadcast to GPU1 and GPU2 at steps 5 and 6, respectively. Please note that d0 utilizes the outbound DMA at step 4; therefore a'' has to wait until step 5. b'', c'', d'' are processed in a similar fashion.

Table 1: The estimated costs of 3 collective communications.

            Bidirectional Exchange (BE)                        Minimal Spanning Tree (MST)    Linear Pipeline (LP)
broadcast   (log p + p - 1)α + 2((p-1)/p)nβ                    log p (α + nβ)                 (p - 1 + n/b)α + (b(p-1) + n)β
reduce      2 log p · α + 2((p-1)/p)nβ + ((p-1)/p)nγ           log p (α + nβ + nγ)            (p - 1 + n/b)α + (b(p-1) + n)(β + γ)
allreduce   (log p + p - 1)α + 2((p-1)/p)nβ + ((p-1)/p)nγ      log p (2α + 2nβ + nγ)          2(p - 1 + n/b)α + (b(p-1) + n)(2β + γ)

Our collectives are also specifically designed to accommodate GPU features such as asynchronous kernel launches and multi-stream processing. The rectangle of Fig. 2a demonstrates that the data transfers are asynchronously launched on two separate streams: the copies happening in the red steps are scheduled on one stream, while copies in the black steps are scheduled on another. This overlaps the overhead of GPU kernel launches, further improving the pipeline. We illustrate the data flow of the collectives on 3 GPUs; if there are k GPUs, GPU n, 1 ≤ n ≤ k - 1, duplicates the communication pattern of GPU 1.
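As a toy illustration of the pipelining just described, the following host-side sketch (our own, not the paper's CUDA implementation) prints which block each GPU forwards to its right neighbor at every timestep; with two DMA engines, a GPU's send and receive at a given step proceed concurrently:

def lp_broadcast_schedule(n_blocks, n_gpus):
    """Per-timestep (src_gpu, dst_gpu, block_id) copies of a Linear Pipeline
    broadcast along the chain GPU0 -> GPU1 -> ... -> GPU(n_gpus-1)."""
    steps = []
    for t in range(n_blocks + n_gpus - 2):   # pipeline fill + drain
        copies = [(g, g + 1, t - g)          # GPU g forwards block t - g
                  for g in range(n_gpus - 1)
                  if 0 <= t - g < n_blocks]
        steps.append(copies)
    return steps

# Four blocks (a, b, c, d) on 3 GPUs, as in Fig. 2a: 5 timesteps in total;
# from step 2 on, GPU1 receives one block while forwarding another.
for t, copies in enumerate(lp_broadcast_schedule(4, 3), start=1):
    print(t, copies)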
"}, {"section_index": "3", "section_name": "3.1 ARCHITECTURE ANALYSIS", "section_text": "LP is the optimal collective algorithm to fully exploit the network bandwidth of a multi-GPU system. Even though PCI-E supports full-duplex communication between any two endpoints, each PCI-E endpoint device only has one input and one output port. This results in bandwidth competition if a GPU is receiving from multiple GPUs. Similarly, each PCI-E switch only contains one input and one output port for inter-switch communication, and inter-switch communications in the same direction also compete for the PCI-E bus. It is known that any delay in data movement between two GPUs interrupts the pipelining in the collectives. In such an architecture, the communications from parents to children in MST based collective algorithms compete for the same PCI-E bus, thereby breaking pipelining. The data exchange of BE also suffers from inter-switch communication congestion in one direction. In contrast, LP connects all GPUs into a chain, and data always flow in one direction. Hence, data movements between two GPUs exclusively occupy the entire PCI-E bus, ensuring uninterrupted pipelining.

We model the cost of exchanging a message of n bytes as

T = α + βn + γn,

where α is the latency or startup time of sending a message, β and γ are the transmission rate and the reduce rate measured in time per byte (the γn term only applies to collectives that perform reductions), and n is the message size in bytes. We also denote p as the node count, and b as the block size (in bytes) in the pipeline.

Proposition 1. If the network latency α → 0, Linear Pipeline collectives provide an O(log p) speedup over Minimal Spanning Tree collectives and up to a 2 times speedup over Bidirectional Exchange collectives as the message size n → ∞.

Proof. First, we derive the costs of the three Linear Pipeline collectives. According to Fig. 2, the length of the pipeline is p - 1 + n/b blocks, assuming each block to be b bytes. A block exchange takes α + βb + γb (with reduce) or α + βb (without reduce). Consequently, broadcast essentially costs (α + βb)(p - 1 + n/b) = (p - 1 + n/b)α + (b(p - 1) + n)β, and reduce costs (α + βb + γb)(p - 1 + n/b) = (p - 1 + n/b)α + (b(p - 1) + n)(β + γ). allreduce is approximately equivalent to a reduce followed by a broadcast; therefore, allreduce's cost is broadcast's cost plus reduce's cost, i.e. 2(p - 1 + n/b)α + (b(p - 1) + n)(2β + γ).

Secondly, we derive the costs of the three Minimal Spanning Tree collectives. MPI adopts MST to broadcast or reduce short messages (Thakur et al., 2005), the length of which is less than 12 KB. The core concept of MST is to organize the p GPUs into a balanced tree of height ⌈log p⌉. It then takes log p steps to traverse all GPUs in the tree, and each step carries a message of length n, so the cost of broadcast is the tree height times the cost per step, i.e. log p(α + nβ) (we omit the ceiling for simplicity). Similarly, MST reduce is log p(α + nβ + nγ), and MST allreduce is again a combination of broadcast and reduce. Please note that the latency term, log p · α, is the smallest among the algorithms in Table 1, while the bandwidth term, log p · nβ, is the slowest, as log p · nβ > nβ. Therefore,
MST is widely used for highly frequent exchanges of short messages.

Finally, we present the costs of the three Bidirectional Exchange collectives. MPI broadcast handles long messages with an MST scatter followed by a BE allgather (please refer to Chan et al. (2007)): the scatter costs log p · α + ((p-1)/p)nβ, while the allgather costs (p - 1)α + ((p-1)/p)nβ. The cost of broadcast is the sum of these two. The MPI long-message reduce consists of a reducescatter plus a gather, while allreduce consists of a reducescatter and an allgather. The cost of reducescatter is log p · α + ((p-1)/p)nβ + ((p-1)/p)nγ, and the cost of gather is log p · α + ((p-1)/p)nβ (also in Chan et al. (2007)). Table 1 summarizes the costs of broadcast, reduce and allreduce for the three different underlying algorithms.

The proposition holds under the assumptions α → 0 and n → ∞, and these assumptions are legitimate for the training of large-scale neural networks on multi-GPUs. Nowadays, PCI Express x16 effectively reduces the latency down to 10^-7 s. The current two-socket shared-memory machine supports up to 8 GPUs, indicating limited p in practice. Let's take an appropriate block size b to ensure bp ≪ n and α ≈ 0. This enables us to safely ignore the latency term, e.g. log p · α in MST broadcast. On the other hand, current deep convolutional neural networks use a tremendous number of parameters; AlexNet, for example, uses roughly 250 MB of parameters. The transmission rate is around 10^9 bytes/second. Compared to the trivial latency term, the bandwidth term dominates the entire cost T. This result leads us to simplify the costs of BE, MST, and LP based broadcast (Table 1) to 2((p-1)/p)nβ, nβ log p and (b(p - 1) + n)β, obtaining the following equations:

T_broadcast_BE / T_broadcast_LP = 2((p-1)/p)n / (b(p-1) + n) ≈ 2(1 - 1/p) ≤ 2
T_broadcast_MST / T_broadcast_LP = n log p / (b(p-1) + n) ≈ log p

Compared with broadcast, reduce has the additional γ term. Please note that the processing speed of GPUs exceeds TFLOPs, implying that the term γ · n → 0. Therefore, it is also legitimate to ignore this term, and it yields the same results, T_reduce_BE / T_reduce_LP ≤ 2 and T_reduce_MST / T_reduce_LP ≈ log p. This completes the proof of Proposition 1.

Another interesting point is that the cost of Linear Pipeline is invariant to the GPU count p regardless of the message length n. This implies that broadcasting a vector to 8 GPUs should cost about the same as broadcasting to 2 GPUs. In practice, we set the block size b around 64 KB, and p stays within 10^1, which suggests that the bandwidth term, e.g. the cost of LP broadcast, satisfies (b(p - 1) + n)β ≈ nβ. Hence, the cost of LP collectives is unlikely to be affected by the GPU count p."}, {"section_index": "4", "section_name": "3.3 DEEP LEARNING WITH EFFICIENT BSP SGD", "section_text": "We formulate neural network training as the following optimization problem. Let ψ be a loss function with weight vector w as function parameters, which takes randomly sampled images d_t as the input.

Algorithm 1: BSP SGD with communications/computations overlapping
1   while not converge do
2       broadcast(w_t)
3       for i ∈ [0, 1, ..., max_layers] do
4           nonblocking_broadcast(w_{i+1})
5           Forward(i)
6           sync_broadcast()
7       Backward(max_layers)
8       for i ∈ [max_layers - 1, ..., 1, 0] do
9           nonblocking_reduce(∇ψ_{i+1})
10          Backward(i)
11          sync_reduce()
12      w_{t+1} = GradientUpdate()

Algorithm 2: BSP SGD using broadcast + reduce
1   while not converge do
2       ∇ψ_sub = ForwardBackward(d_t)
3       ∇ψ = reduce(∇ψ_sub)
4       if root then
5           w_{t+1} = GradientUpdate()
6       broadcast(w_{t+1})
7       barrier   /* sync new w */
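For concreteness, here is a minimal host-side sketch of the fork-and-join step of Alg. 2, written with mpi4py and NumPy (our own illustration; the paper's implementation runs the collectives on GPUs, and forward_backward and lr are assumed placeholders):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def bsp_sgd_step(w, local_batch, forward_backward, lr=0.01):
    """One Alg. 2 iteration: partial gradient, reduce to root, update, broadcast."""
    g_sub = forward_backward(w, local_batch)   # partial gradient on this rank
    g = np.empty_like(g_sub)
    comm.Reduce(g_sub, g, op=MPI.SUM, root=0)  # aggregate partial gradients
    if rank == 0:
        w -= lr * (g / size)                   # global gradient = average
    comm.Bcast(w, root=0)                      # synchronize the new weights
    return w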
The objective of training is to find an approximate solution to the following problem:

min_w E{ψ_w(d_t)} = ∫ ψ_w(d_t) dP

In Alg. 2, synchronizations rely on broadcast and reduce. Each GPU calculates a partial gradient, referred to as ∇ψ_sub. The master GPU reconstructs ∇ψ by reducing all ∇ψ_sub. Then, the GPUs synchronize the latest weights, w, by broadcasting.

In Alg. 3, synchronizations rely only on allreduce. The differences between this and Alg. 2 are that 1) there is only one synchronization point, and 2) every GPU computes the gradient update. However, the parameters are not consistent after several iterations, due to the precision issues of float multiplications in GradientUpdate. We synchronize w every 5 iterations to enforce consistency, while still retaining the benefit of efficient pipelining in allreduce (lines 6-7, Alg. 3).

Algorithm 3: BSP SGD using allreduce
1   while not converge do
2       ∇ψ_sub = ForwardBackward(d_t)
3       ∇ψ = allreduce(∇ψ_sub)
4       barrier   /* collect ∇ψ_sub */
5       w_{t+1} = GradientUpdate()
6       if iter % 5 == 0 then
7           broadcast(w_{t+1})   /* sync new w */

A typical neural network training iteration consists of a forward and a backward pass. The forward pass yields a loss that measures the discrepancy between the current predictions and the target; the backward pass calculates the gradient, the negative of which points in the steepest descent direction. Gradient descent updates the parameters w as follows:

w_{t+1} = w_t - η_t ∇ψ_w(d_t)

Guided by data parallelism, BSP SGD evenly divides d_t into p slices d_t^1, d_t^2, ..., d_t^p, so that every GPU computes a partial gradient from d_t^i in parallel. The global gradient is equivalent to the average of the partial gradients. After finishing the gradient update, w_{t+1} is synchronized to all GPUs. We integrate the proposed collectives into this process to harness the parallel processing capabilities of a multi-GPU system. In this paper, we discuss two approaches to BSP SGD implementations.

[Figure 3 panels: (a) Broadcast; (b) Reduce; (c) AllReduce — time vs. message size (MB) for BE, MST and LP on 4 K40m.]

Figure 3: The performance of different collective algorithms at different message sizes on 4 K40m.

[Figure 4 panels: (a) Broadcast; (b) Reduce; (c) AllReduce — cost vs. GPU (K40m) count for BE, MST and LP.]

Figure 4: The scalability experiment: it measures performance variations with increasing GPUs.

fork and join: This approach forks the gradient computations, and joins the partial gradients with communications. In this case, communications do not overlap with computations. Alg. 2 and Alg. 3 demonstrate two collective based implementations, using 2 and 1 synchronization points, respectively.

overlapping communications with computations: Another approach is to overlap communications and computations for each network layer. In the forward pass, GPUs broadcast the network parameters of layer t+1 during forward computations at layer t. In the backward pass, GPUs reduce the partial gradients of layer t+1 during backward computations at layer t. As a result, layer-wise computations partially overlap with communications, further improving SGD efficiency. Alg. 1 outlines the general idea of overlapping communications and computations during network training. We use nonblocking collectives to achieve the overlap.
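A minimal sketch of this layer-wise overlap, again with mpi4py (our own illustration, not the paper's code): it assumes layers expose a forward(weights, input) method and that weights are preallocated arrays. The broadcast of the next layer's weights is issued nonblockingly, computation proceeds, and the request is waited on just before those weights are needed:

from mpi4py import MPI

comm = MPI.COMM_WORLD

def forward_with_overlap(layers, weights, x0):
    """Forward pass in the spirit of Alg. 1: prefetch w_{i+1} while computing layer i."""
    x, req = x0, None
    for i, layer in enumerate(layers):
        if i + 1 < len(layers):
            req = comm.Ibcast(weights[i + 1], root=0)  # nonblocking_broadcast
        x = layer.forward(weights[i], x)               # compute current layer
        if req is not None:
            req.Wait()                                 # sync_broadcast
            req = None
    return x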
"}, {"section_index": "5", "section_name": "4.1 COLLECTIVES EVALUATION", "section_text": "The MST and BE implementations used in the benchmarks are Caffe² and Open MPI. Caffe optimizes the GPU placement in an MST to fully utilize inter-GPU peer-to-peer (P2P) access. Open MPI and our implementation, like Caffe, also take advantage of P2P. We set up AlexNet and GoogLeNet training using the three BSP SGD algorithms proposed in Section 3.3.

² Caffe implements an MST based broadcast and reduce for multi-GPU training.

pros and cons of both approaches: The cost of Alg. 2 or Alg. 3 is comm + compt, while the cost of Alg. 1 is max(comm, compt). If the network has over a few hundred MB of parameters, the overlapping will be significantly better than the fork-and-join approach. However, Alg. 2 and Alg. 3 are relatively easy to implement, and their performance on networks < 100 MB is similar to that of Alg. 1.

Fig. 3 presents the performance of the LP, MST, and BE based collectives at different message sizes on 4 K40m. The LP broadcast demonstrates an average of 29.2x and 2.3x speedup over the BE and MST based alternatives in Open MPI and Caffe; the LP reduce demonstrates an average of 360.55x and 8.7x speedup over the BE and MST reduce; and the LP allreduce demonstrates an average of 109.2x and 7.9x speedup over the BE and MST allreduce. In theory, LP is approximately 2x faster than both the MST (p = 4 → log p = 2) and BE approaches. The extraordinary speedup against Open MPI is due to inefficient data movement in Open MPI, which moves data to host RAM to perform reduce operations on the CPU before copying it to the target GPU. Instead, we perform reduce on the GPUs, and data blocks flow directly to the target GPU via P2P access. The overlap of reduce computations with communications enables our reduce and allreduce to be 8x faster than those of MST: at each step of MST, GPUs reduce the incoming data only after all of it is available, whereas our fine-grained block design lets communications and computations overlap by reducing a block while receiving a new one in the pipeline. broadcast only involves data copies, and both we and Caffe use P2P to transmit the data; therefore, the speedup over MST broadcast (2.3x) conforms to the 2.0x theoretical prediction.

The theoretical analysis indicates that the costs of both the LP and BE collectives are invariant to the GPU count p, while the cost of MST increases with p by a factor of log p. This is also noticeable in the scalability experiment demonstrated in Fig. 4.

[Figure 5 panels: (a) AlexNet (256 MB, iters = 30000, batch size = 1000); (b) GoogLeNet (51 MB, iters = 67000, batch size = 80) — training loss vs. seconds for BE-Alg.1, MST-Alg.1, BE-Overlap-Alg.3, LP-Alg.1, LP-Alg.2 and LP-Overlap-Alg.3.]

Figure 5: The training losses in fixed iterations on 4 K40m. We set GoogLeNet lr = 0.01. AlexNet starts at lr = 0.015, set to 0.0015 after the average loss < 2. The solver is SGD + momentum, and the dataset is ImageNet."}, {"section_index": "6", "section_name": "4.2 IMPACT ON THE NEURAL NETWORK TRAINING", "section_text": "Fig. 5 demonstrates that the LP collectives effectively reduce the total training time without affecting SGD's
convergence properties in training large-scale neural networks. We use inspurCaffe, Caffe and CUHK's Caffe branch to benchmark the performance of BE-Alg.1, MST-Alg.1 and BE-Overlap-Alg.3, and we implement Alg. 1, 2, 3, integrated with the LP collectives, in Caffe to ensure consistency. Please note that the model size affects the communication time, while the batch size affects the computation time; we carefully set these parameters to cover as many cases as possible. Please refer to the captions of Table 2 and Fig. 5 for experiment details. We assume these algorithms have similar convergence speeds in iterations, as the losses of AlexNet are approximately 1 after 30000 iterations and the losses of GoogLeNet are approximately 2 after 67000 iterations. However, the time taken to reach the target loss varies dramatically. For example, the speedups of LP-Overlap-Alg.3 over BE-Alg.1 in training AlexNet and GoogLeNet are 2.12x and 2.19x, respectively.

The experiments demonstrate that the speed of the three proposed BSP SGD algorithms is Alg.3 > Alg.2 > Alg.1. The result conforms to our expectations, as the cost of Alg.3 is max(comm, compt) while the cost of Alg.1 and Alg.2 is comm + compt. However, the performance gain from Alg.2 to Alg.3 is quite limited, as there is little room left for reducing communications from LP Alg.2 to Alg.3, as demonstrated in Table 2. If model parameter counts keep increasing, we expect Alg.3 to be more efficient than Alg.2.

Table 2: The iteration profile. comm stands for communications, and compt stands for computations; % represents the percentage of communications in an iteration. The statistics are the averages of 30000 AlexNet iterations and 67000 GoogLeNet iterations. We set the batch size of AlexNet to 1000, and of GoogLeNet to 80. AlexNet and GoogLeNet are 256 MB and 51 MB, respectively.

In the scalability experiment (Fig. 4), please note that there is a cost jump between 4 and 5 GPUs: communications have to go through QPI after 4 GPUs, incurring the additional cost of copying through host RAM. The cost of the Linear Pipeline method robustly stays the same for GPU counts in [2, 3, 4] or [5, 6], and QPI explains the inconsistency. The communication steps of MST for 2, 3, 4, 5, 6 GPUs are 1, 2, 2, 3, 3, respectively; the MST experiments verify the log p cost increase with GPU count through evident cost jumps at 3 and 5 GPUs. The data flow of Open MPI between two GPUs follows GPU RAM → host RAM → GPU RAM; this inefficient data flow inside Open MPI contributes to its near-linear cost increase with the GPU count p.

Under Alg.1, but using different underlying collective algorithms, LP-Alg.1 presents 1.91x and 1.74x speedup over BE-Alg.1 and MST-Alg.1 on AlexNet, and 1.6x and 1.1x speedup over BE-Alg.1 and MST-Alg.1 on GoogLeNet. The iteration profiles of these 3 algorithms in Table 2 indicate that the communication cost of LP-Alg.1 is only 10% of BE-Alg.1 and 11% of MST-Alg.1 on AlexNet, and 6% of BE-Alg.1 and 43% of MST-Alg.1 on GoogLeNet."}]
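To make the Table 1 cost model above concrete, here is a minimal sketch (our own, with assumed hardware constants for α, β and the block size) that evaluates the three broadcast costs and recovers the asymptotic ratios from the proof:

import math

# Hypothetical parameters: alpha (latency, s), beta (s/byte), block size b,
# following the cost model T = alpha + beta*n from Sec. 3.2.
ALPHA, BETA, BLOCK = 1e-7, 1e-9, 64 * 1024

def broadcast_costs(n, p, alpha=ALPHA, beta=BETA, b=BLOCK):
    """Broadcast costs from Table 1 for a message of n bytes on p GPUs."""
    be  = (math.log2(p) + p - 1) * alpha + 2 * ((p - 1) / p) * n * beta
    mst = math.log2(p) * (alpha + n * beta)
    lp  = (p - 1 + n / b) * alpha + (b * (p - 1) + n) * beta
    return be, mst, lp

# A 256 MB message (roughly AlexNet-sized) on 4 GPUs: since LP's bandwidth
# term is ~ n*beta, BE/LP -> 2(1 - 1/p) and MST/LP -> log p as n grows.
be, mst, lp = broadcast_costs(256 * 2**20, p=4)
print(f"BE/LP = {be / lp:.2f}, MST/LP = {mst / lp:.2f}")  # ~1.50 and ~2.00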
r17RD2oxe
[{"section_index": "0", "section_name": "APPENDIX", "section_text": "Yan Wang* Kun He\nComputer Science Department\nSiamese cat kit fox tiger cat red fox Egyptian cat Jackal tabby jaguar Persian cat leopard African elephant snow leopard Indian elephant tiger cat snow leopard tabby leopard Siamese cat jaguar Persian cat coati Egyptian cat Alaskan brown bear giant panda grizzly lesser panda Jackal Bassarisk kit fox coati red fox African elephant Bassarisk Indian elephant giant panda Alaskan brown bear lesser panda grizzly MDS method WordNet\nIn Evolutionary Biology, species close in the tree of evolution are identified by similar visual features. In computer vision, deep neural networks perform image classification by learning to identify similar visual features. This leads to an in- teresting question: is it possible to leverage the advantage of deep networks to construct a tree of life? In this paper, we make the first attempt at building the phylogenetic tree diagram by leveraging the high-level features learned by deep neural networks. Our method is based on the intuition that if two species share similar features, then their cross activations in the softmax layer should be high Based on the deep representation of convolutional neural networks trained for im- age classification, we build a tree of life for species in the image categories of ImageNet. Further, for species not in the ImageNet categories that are visually similar to some category, the cosine similarity of their activation vectors in the same layer should be high. By applying the inner product similarity of the activa- tion vectors at the last fully connected layer for different species, we can roughly build their tree of life. Our work provides a new perspective to the deep repre- sentation and sheds light on possible novel applications of deep representation to other areas like Bioinformatics."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep learning transforms the data into compact intermediate representations akin to principal com ponents, and derives layered structures by removing the redundancy in representations (Li Deng. 2014). In recent years, deep learning has demonstrated great success with significant improve. ment in various artificial intelligence applications, including speech recognition (Sak et al.| 2015) image recognition (Ciresan et al.]2012]Cir]Krizhevsky et al.]2012), and natural language process- ing (Vinyals et al. 2015} Socher et al.2013).\nConvolutional Neural Networks (CNNs) are mainly designed for image and video recognition. Typ. ical CNN architecture alternates convolutional layers and pooling layers, followed by several fully. connected or sparsely connected layers with a final softmax as the classification layer. Milestone include the 16-layer AlexNet (Krizhevsky et al.||2012), the 19-layer VGG (Simonyan & Zisserman. 2014), and the 22-layer GoogleNet (Szegedy et al.]2015). By adding identity function as a shor. cut,He et al.(2015) are able to build a substantially deeper ResNet with 152 layers, which receivec. the first place on the ILSVRC 2015 image classification task (Russakovsky et al.[|2015). Other very. deep networks include the highway network with depths up to 100 layers (Srivastava et al.|2015). Eldan & Shamir(2016) provide a theoretical justification that reveals the utility of having deepe. networks rather than wider networks, implying that future progress will lead to the development oi. 
even deeper networks.

Understanding the deep representations of neural networks has become increasingly difficult as the state-of-the-art models have gained more layers. This problem is important because it helps us understand the intrinsic mechanism of deep neural networks and explore possible novel applications based on that understanding. Ballester & de Araujo (2016) show how CNNs, trained to identify objects primarily in photos, could be used for abstract sketch recognition. Gatys et al. (2015a;b) utilize

Figure 6: Constructing a tree of life containing some species not in the training set (marked by pink points). We use the inner product method to build the distance matrix. Only coati is in the wrong leaf of the tree.

* The first three authors contribute equally. † Corresponding author.

To test the inner product method in Section 2.2, which can build the tree for species not in the training set, we select 5 species not in the training set and 14 species in the training set. We choose 1000 images for each species, except for Bassarisk, which only contains 694 images. We show the results on ResNet using the MDS based method. Figure 6 illustrates the result.

John E. Hopcroft Yu Sun
Computer Science Department, Cornell University
{jeh, ys646}@cs.cornell.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "the correlations between feature maps to synthesize natural textures and transfer artistic style with high perceptual quality. In Bioinformatics, deep neural networks are used for the analysis of medical images for cancer detection (Cireşan et al., 2013) as well as for drug discovery and toxicology (Dahl et al., 2014; Ramsundar et al., 2015; Wallach et al., 2015). A deep-learning approach based on the autoencoder architecture has been adopted to predict Gene Ontology annotations and gene-function relationships (Chicco et al., 2014).

The Tree of Life refers to the compilation of a comprehensive phylogenetic (or evolutionary) database rooted at the last universal common ancestor of life on Earth. Over the course of hundreds of millions of years, the splitting and subsequent divergence of lineages has produced the tree of life, which has as its leaves the many species of organisms (Darwin, 1859). Here we refer to a phylogenetic tree, evolutionary tree or tree of life as a branching diagram showing the inferred genealogical relationships (i.e., how close two species are in evolutionary history, as evaluated by observed heritable traits such as DNA sequences) among various biological species (Hug et al., 2016). This is an important problem in evolutionary biology, and many attempts have been made (Darwin, 1859; Doolittle & Bapteste, 2007; Bapteste et al., 2009; Edwards, 2009). Originally, the tree of life was manually built based on an understanding of evolutionary history or on the visual similarity of the species. Today, modern techniques are applied based on gene similarity."}, {"section_index": "3", "section_name": "Our contributions are two-fold", "section_text": "1) Provides a potential solution to the important problem of constructing a biological evolutionary tree.

We propose a novel approach to constructing a tree of life using the deep representation of CNNs trained for image classification. We conjecture that the hierarchical feature representation learned by deep networks can be leveraged to quantify the visual similarity of the species.
In this way, we might be able to construct a tree of life using their feature similarity.

2) Gives insight into the representations produced by deep neural networks.

We conjecture that if images of two training categories share some similar features, then their cross activations in the softmax layer should be high. Hence, we could evaluate the genetic distance of species within the training categories. Based on the deep representations of several typical CNNs trained for ImageNet classification, AlexNet (Krizhevsky et al., 2012), VGG (Simonyan & Zisserman, 2014) and ResNet (He et al., 2015), we construct a tree of life for dozens of species among the thousand ImageNet categories of the training dataset.

For species not in the training categories that are visually similar to some species in the training dataset, could we still utilize their deep representations to judge the relationships among different species? We conjecture that they show high cosine similarity of their activation vectors in high-level layers. By applying the inner product similarity of the activation vectors at the last fully connected layer for different species, we present empirical evidence that, through transfer learning, we can roughly construct their tree of life.

We have two important criteria in mind while constructing our image dataset. 1) We would like each image category, which corresponds to a node in the tree (i.e. a species), to have enough samples such that a statistic computed from the network activations is reasonably robust to noise. 2) There should exist a ground truth hierarchy on the image categories, so we can objectively evaluate the effectiveness of our method.

Experiments show that the proposed method using deep representations is very competitive with human beings at building the tree of life based on the visual similarity of the species. We also try networks at different epochs during training, and the quality of the tree of life increases over the course of training. The performance of the three networks, AlexNet, VGG and ResNet, improves with the improvement of their classification quality.

Fortunately, the ImageNet 2012 Classification dataset provides the raw material we need. This dataset contains 1000 categories of common life objects, and each category contains 1000 images as training data. Moreover, these categories correspond exactly to nodes in the WordNet hierarchy. WordNet (Miller, 1995) is a large lexical database of English, where words are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept, and synsets are interlinked by means of conceptual-semantic and lexical relations.

For the ground truth, in the smallest WordNet subtree that contains A: 1) we could consider only the categories in A and their positions in this WordNet subtree, and build a smallest ground truth tree T¹_A; 2) we could additionally consider some categories outside A in this WordNet subtree. Then the ground truth tree T²_A contains some categories outside the ImageNet training categories. Note that the nodes of T¹_A are basically the intersection of the nodes of T²_A and the 1000 ImageNet categories. For each category outside the 1000 training categories, we also use 1000 images from the ImageNet database.
"}, {"section_index": "4", "section_name": "2.2 SIMILARITY EVALUATION", "section_text": "We input all selected images for species in T¹_A or T²_A to a reference network and execute the feed-forward pass. The feature maps (i.e.
the activation vectors) of the last fully connected (FC) layer and of the softmax layer are used to build the distance matrix.

1) The Probability Method. For T¹_A, each class is in the training set, and its ground truth label is among those represented by the softmax layer. So we utilize the probability distribution of the images at the softmax layer to build a distance matrix. Specifically, for two classes of images A and B in the categories of A, we consider their cross activations in the softmax layer. For each image a ∈ A, we obtain the predicted probability P_{a2B} that this image belongs to node B, and we calculate the average of these values, named P_{A2B}:

P_{A2B} = (1/|A|) Σ_{a∈A} P_{a2B}

For each image b ∈ B, we obtain the predicted probability P_{b2A} that this image belongs to node A, and we calculate the average of these values, named P_{B2A}:

P_{B2A} = (1/|B|) Σ_{b∈B} P_{b2A}

The closer the genealogical relationship of A and B, the higher the cross predicted probability values should be. As the cross confidence is close to zero, we use the log function to enlarge the values, and we add a minus sign to assign lower values to closer species and to keep the values nonnegative:

D_{AB} = 0                                   if A = B
D_{AB} = -log(0.5 P_{A2B} + 0.5 P_{B2A})     if A ≠ B

2) The Inner Product Method. For T²_A, as some species are not among the 1000 classification categories, we use the centroid vector of the activations at the last fully connected (FC) layer for each species, and calculate the dot product of the two unitized centroid vectors to get their cosine similarity. We then add a minus sign to assign lower values to closer species:

D_{AB} = -(v_A · v_B) / (||v_A|| ||v_B||)

For the reference network, we select three popular CNNs (AlexNet, VGG-16 and ResNet-152) trained on ImageNet. The top-5 classification errors of AlexNet, VGG and ResNet are 15.3%, 9.9% and 6.7%, respectively. They all learn the features of the images very well, so we can leverage their deep representations for the ToL construction.

To find a small branch of the phylogenetic tree for the reconstruction, we choose a set A of genealogically close species (species close in the evolutionary tree of life, as evaluated by the branch distance) from the 1000 ImageNet categories. For each category A ∈ A, we use all 1000 images from the training dataset to get a robust result. The only exception is Bassarisk, which only contains 694 images.

Based on the distance matrix, we have three methods, namely "Approximation Central Point", "Minimum Spanning Tree", and "Multidimensional Scaling", to construct a tree of life (a small sketch of the distance construction follows the method descriptions):

1) The "Approximation Central Point" (ACP) based method. In the ACP based method, we build a tree bottom-up by recursively merging the two species points, say A and B, with the smallest distance, and setting the distance of the new point to any other point as the average of the distances of A and B to that point.

2) The "Minimum Spanning Tree" (MST) based method. In the MST based method, we first construct a Minimum Spanning Tree (MST) based on the distance matrix. We then build a tree from the root to the leaves by recursively splitting the current MST subtree into two parts, removing its longest edge, until there is only one node in each subtree. In this way we build a "tree" with all leaves corresponding to the species, where the closest species are split last.

3) The "Multidimensional Scaling" (MDS) based method. In the MDS based method, from D we know the distances among the points corresponding to the species. We first apply the MDS (Multi-Dimensional Scaling) algorithm (Borg & Groenen, 2005) for dimension reduction, projecting the species points into a two-dimensional subspace. Then we build a tree bottom-up by recursively merging the two points with the smallest Euclidean distance in the two-dimensional subspace, regarding the midpoint of the two merged points as the new representative point.
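As referenced above, here is a minimal sketch of the probability-method distance matrix. The code is our own; the array shapes and argument names are assumptions, with probs_x holding softmax outputs for the images of class x:

import numpy as np

def probability_distance(probs_a, probs_b, idx_a, idx_b):
    """D_AB of the probability method: probs_x is (n_images, n_classes)
    softmax output for class x's images; idx_x is x's softmax column."""
    p_a2b = probs_a[:, idx_b].mean()  # avg. prob. that A-images belong to B
    p_b2a = probs_b[:, idx_a].mean()  # avg. prob. that B-images belong to A
    return -np.log(0.5 * p_a2b + 0.5 * p_b2a)

def distance_matrix(probs, idx):
    """Full symmetric D over the chosen classes (zero on the diagonal)."""
    n = len(probs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = probability_distance(
                probs[i], probs[j], idx[i], idx[j])
    return D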
Our following experiments show that MST and MDS have similar performance, but ACP is considerably weaker.

We conduct a rich set of experiments to build several branches of the phylogenetic tree at different granularities. To test whether our method can distinguish tiny visual differences, we first choose genealogically very close species, such as a set of fish species or a set of canine species, and construct their tree of life. Then, to test whether our method scales well to larger clades, such as dog, cat, fish, etc., we choose 39 different large species to build a more general tree of life, and verify whether different breeds of one large species, like dogs, are grouped together. In addition, to evaluate the ability to construct hierarchical trees based on the visual similarity of images outside Biology, we choose some vehicle categories from the ImageNet dataset (Russakovsky et al., 2015) and build a vehicle tree.

For the methods, we use the probability method of Section 2.2 to build the distance matrix, and apply the ACP, MST, and MDS based methods to build the tree of life. For the inner product method of Section 2.2, the results are slightly weaker, but it can deal with species or categories outside the training set; for details of the inner product method, the reader is referred to the Appendix.

To construct a fine-grained tree of life, we select several fish species of high visual similarity and test whether we can identify their tiny feature differences. We pick six fish species from the ImageNet training set and, for each species, input all 1000 images from the training dataset to the ResNet network.

Figure 1 shows that the trees of life constructed by MST and MDS coincide with the hierarchical tree built on WordNet. The hierarchical tree constructed by ACP does not coincide with the ground truth at all. The reason may be that, in any triangle ABC, the edge length from A to the median of BC, say D, is shorter than the average length of edges AB and AC. If A is far from symmetric with respect to edge BC, the recalculated distance AD does not accurately represent the distance of A to the merged set {B, C}.

Our results demonstrate that deep CNNs capture local features as well as global features simultaneously. To rebuild the tree of life for genealogically close species, we need features of different granularities, like the animal's size, skin texture and shape.

As another example, we choose 11 very similar canine species and build a relatively larger tree, as illustrated in Figure 3. We can correctly build the canine tree, possibly according to their fur texture and shape features. The reconstructed quality is as good as what human beings could reconstruct based on visual similarity.
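To pin down the ACP merging step whose failure mode is analyzed above, here is a minimal sketch operating on a toy distance matrix (our own code; the nested-tuple tree encoding is an implementation choice):

import numpy as np

def acp_tree(D, labels):
    """Bottom-up ACP merging: repeatedly fuse the closest pair and set the
    new node's distance to every other point as the average of the pair's
    distances, the step argued above to distort asymmetric configurations."""
    D = np.asarray(D, dtype=float).copy()
    nodes = list(labels)
    while len(nodes) > 1:
        n = len(nodes)
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
        i, j = min(pairs, key=lambda p: D[p])
        keep = [k for k in range(n) if k not in (i, j)]
        row = (D[i, keep] + D[j, keep]) / 2.0        # averaged distances
        new_D = np.empty((len(keep) + 1, len(keep) + 1))
        new_D[:-1, :-1] = D[np.ix_(keep, keep)]
        new_D[-1, :-1] = new_D[:-1, -1] = row
        new_D[-1, -1] = 0.0
        nodes = [nodes[k] for k in keep] + [(nodes[i], nodes[j])]
        D = new_D
    return nodes[0]   # nested tuples encode the binary tree

# acp_tree([[0, 1, 4], [1, 0, 5], [4, 5, 0]], ["A", "B", "C"])
# merges A and B first, yielding ("C", ("A", "B")).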
[Figure 1 leaves per method — ACP: lionfish, tench, puffer, tiger shark, great white shark, goldfish; MST / MDS / WordNet: tiger shark, great white shark, lionfish, puffer, tench, goldfish.]

Figure 1: Trees of life for fish species. The first three trees are constructed by our methods, and the fourth tree is the ground truth using WordNet. The hierarchies of MST and MDS coincide with that of WordNet.

Figure 2 shows the coarse-grained tree of life for clustering species of different families by different networks: ResNet, VGG and AlexNet. We pick 38 species from five families: bird, canine, plant, fish and feline. ResNet and VGG correctly cluster the species by family, while AlexNet makes some mistakes. This result indicates that deep networks with higher classification quality learn better deep representations, such that the trees of life built on these representations also differ in reconstruction quality.

[Figure 2 panels: ResNet, VGG, AlexNet — species of the five families (bird, canine, plant, fish, feline) in different colors.]

Figure 2: Constructed tree of life for families of species by different networks. Species of the five families are in different colors. ResNet and VGG correctly cluster the species but AlexNet does not. Built by the MST based method.

To show that we not only correctly cluster the species, but also ensure the correct hierarchy within each family, we further construct a tree containing 20 species of five families, as illustrated in Figure 4.

[Figure 3 leaves: Japanese spaniel, Border collie, Shetland sheepdog, collie, Greater Swiss Mountain dog, Great Dane, Rottweiler, Doberman, briard, schipperke, German shepherd.]

Figure 3: A constructed tree of life for 11 canine species. Closer species show shorter distances. Built by the MDS based method.

[Figure 4 leaves: toucan, brambling, house finch, jacamar, lorikeet, pufferfish, goldfish, sea lion, dugong, sturgeon, Shetland sheepdog, Japanese spaniel, Greater Swiss Mountain dog, German sheepdog, great dane, tabby, Persian cat, cougar, leopard, jaguar.]

Figure 4: A constructed small tree of life for different families of species. We not only correctly cluster each family of species, but also present the correct hierarchy of the species within each family. Built by the MDS based method.

To show the ability to build hierarchical trees for objects other than animals, we pick eight vehicle categories from the ImageNet training set. Vehicles are very different from animals: their shapes are fairly fixed, and they only perform certain motions, like going forward or turning around. Images of vehicles do not embed features as abundant as those of animal images.

[Figure 5 leaves (MST method vs. WordNet): mountain bike, tandem bicycle, garbage truck, fire truck, ambulance, Model T, convertible, cab.]

Figure 5: A constructed vehicle tree. Our result looks more reasonable than that of WordNet. Built by the MDS method.

Nevertheless, our method still outputs good results, as shown in Figure 5. We cluster the ambulance, fire truck and garbage truck together, all of which have big carriages, while in WordNet the ambulance is close to Model T, convertible and cab, even though those three have no carriage and are much smaller than an ambulance.
Our result is more reasonable than what WordNet provides."}, {"section_index": "5", "section_name": "4 CONCLUSION", "section_text": "By leveraging the similarity of features extracted automatically by deep learning techniques, we build a tree of life for various biological species, either belonging to the training categories or not. The results are highly competitive with the level of human beings at building the tree of life based on the visual similarity of the images. Our work provides new understanding of the deep representations of neural networks and sheds light on possible novel applications of deep learning in the area of Bioinformatics. An intriguing direction for future work is how to utilize deep learning techniques to build a more delicate tree of life based on the gene similarity of the species."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "This research work was supported by the US Army Research Office (W911NF-14-1-0477) and the National Science Foundation of China (61472147).

Pedro Ballester and Ricardo Matsumura de Araujo. On the performance of GoogLeNet and AlexNet applied to sketches. In AAAI, pp. 1124-1128, 2016.

Eric Bapteste, Maureen A. O'Malley, Robert G. Beiko, Marc Ereshefsky, J. Peter Gogarten, Laura Franklin-Hall, Francois-Joseph Lapointe, John Dupre, Tal Dagan, Yan Boucher, et al. Prokaryotic evolution and the tree of life are two different things. Biology Direct, 4(1):1, 2009.

Ingwer Borg and Patrick J. F. Groenen. Modern multidimensional scaling: Theory and applications. Springer Science & Business Media, 2005.

Dan C. Ciresan, Ueli Meier, Jonathan Masci, and Jürgen Schmidhuber. Multi-column deep neural network for traffic sign classification. Neural Networks, 32:333-338, 2012.

Dan C. Cireşan, Alessandro Giusti, Luca M. Gambardella, and Jürgen Schmidhuber. Mitosis detection in breast cancer histology images using deep neural networks. In MICCAI, pp. 411-418, 2013.

Charles Darwin. On the origin of species by means of natural selection. 1859.

Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In COLT, pp. 907-940, 2016.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In NIPS, pp. 262-270, May 2015b.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.

Li Deng and Dong Yu. Deep learning: Methods and applications. Technical report, May 2014.

George A. Miller. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41, 1995.

Bharath Ramsundar, Steven M. Kearnes, Patrick Riley, Dale Webster, David E. Konerding, and Vijay S. Pande. Massively multitask networks for drug discovery. CoRR, abs/1502.02072, 2015.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.

Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. Parsing with compositional vector grammars. In ACL, pp. 455-465, 2013."}]
HJGODLqgx
[{"section_index": "0", "section_name": "RECURRENT HIDDEN SEMI-MARKOV MODEI", "section_text": "Hanjun Dai1, Bo Dai1, Yan-Ming Zhang?, Shuang Li1, Le Song\nFigure 4: Reconstruction illustration. The generative RNNs (decoders) are asked to reconstruct th signals from only the discrete labels and durations (which are generated from encoder)..\nsmooth, here the signals depict high variance. Different activities exhibit quite different duratior and patterns. Also, the activity types changes frequently. The R-HSMM almost captured each. changing point of activities with both long and short durations. The corresponding mean accuracy. also outperforms the baselines. However, we observed there are some correct segmentations witl. wrong labels. This happens mostly to the short segments, in which the RNN doesn't have enougl. history established for distinguishing similar activity types..\nPhysionet The heart sound records, usually represented graphically by phonocardiogram (PCG) are key resources for pathology classification of patients. We collect data from PhysioNet Challenge 2016 (Springer et al.] 2015), where each observation has been labeled with one of the four states namely Diastole, S1, Systole and S2. We experiment with both the raw signals and the signals afte feature extraction. Regarding the raw signals (Heart dataset), we collect 7 1-dimensional sequences of length around 40000. The feature-rich dataset (PN-Full) contains 2750 sequences, where each of them consists of 1500 4-dimensional observations. We do 5-fold cross validation for PN-Full. The visualization of segmentation results are shown in AppendixB.4 As the results shown in Table|1 our algorithm still outperforms the baselines significantly. Also for such long raw signal sequences the speed advantage of bi-RNN encoder over Viterbi is more significant. Viterbi takes 8min to do one inference, while bi-RNN only takes several seconds. Our framework is also flexible to incorporate prior knowledge, like the regularity of heart state transition into HSMM."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Segmentation and labeling of time series data is an important problem in machine learning an signal processing. Given a sequence of observations {x1, x2,..., xT}, we want to divide the . observations into several segments and label each segment simultaneously, where each segmer consists of consecutive observations. The supervised sequence segmentation or labeling technique have been well studied in recent decades (Sutskever et al.]2014] Kong et al.]2015] Chen et al. 2015). However, for complicated signals, like human activity sensor data, accurately annotating th segmentation boundary or the activity type would be prohibitive. Therefore, it is urgent to develo. unsupervised algorithms that can jointly learn segmentation and labeling information directly fror. the data without supervisions. Figure|1|provides an illustration which we are focus on.."}, {"section_index": "2", "section_name": "5.2 RECONSTRUCTION", "section_text": "Figure 14: More segmentation results on Heart Sound dataset\nThe Hidden Semi-Markov Model (HSMM) (Murphy2002) is a powerful model for such task. I. eliminates the implicit geometric duration distribution assumptions in HMM (Yu]2010), thus allow. the state to transit in a non-Markovian way. Most of the HSMM variants make strong parametri. assumptions on the observation model (Rabiner1989| Johnson & Willsky2013]Yu]2010). 
This makes the learning and inference simple, but ignores the nonlinear and long-range dependencies within a segment. Take human activity signals as an example: the movements a person performs at a certain time step rely heavily on the previous movements, like the interleaving actions of the left hand and right hand in swimming, or more complicated dependencies like shooting after jumping in playing basketball. Some models have been proposed to tackle this problem (Ghahramani & Hinton, 2000; Fox et al., 2009; Linderman et al., 2016), but they are limited to the linear case.

From Fig. 4 we can see that the generative RNN correctly captures different characteristics of signals with different segment labels, such as the different frequencies and scales in the Sine dataset, or the different variance patterns in the GP dataset. This is essential for distinguishing between different segments.

We presented the R-HSMM, a generalization of the HSMM obtained by incorporating a recurrent neural generative model as the emission probability. To eliminate the inference difficulty caused by such a flexible and powerful model, we introduced the bi-RNN as the encoding distribution via the variational autoencoder framework, to mimic the forward-backward algorithm. To deal with the difficulty of training a VAE containing discrete latent variables, we proposed a novel stochastic distributional penalty method. We justified the modeling power of the proposed R-HSMM via segmentation accuracy and reconstruction visualization. In a comprehensive comparison, the proposed model significantly outperforms the existing models. It should be emphasized that the structured bi-RNN encoder yields similar performance to the exact MAP inference while being 400 times faster. Future work includes further speeding up our algorithm, as well as generalizing our learning algorithm to other discrete variational autoencoders.

Since RNNs have a well-established ability to model nonlinear and complicated dependencies (Sutskever et al., 2014; Du et al., 2016), we introduce a recurrent neural emission model into the HSMM to capture the various dependencies within each segment. However, the flexibility of the recurrent neural model comes at a price: it makes the exact Expectation-Maximization (EM) algorithm computationally too expensive.

To speed up learning and inference, we exploit the variational autoencoder (VAE) framework (Kingma & Welling, 2013). Specifically, we propose to use a bidirectional RNN (bi-RNN) encoder. Such"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Segmentation and labeling of high dimensional time series data has wide applications in behavior understanding and medical diagnosis. Due to the difficulty of obtaining a large amount of label information, realizing this objective in an unsupervised way is highly desirable. The Hidden Semi-Markov Model (HSMM) is a classical tool for this problem. However, existing HSMMs and their variants typically make strong generative assumptions on the observations within each segment, so their ability to capture the nonlinear and complex dynamics within each segment is limited. To address this limitation, we propose to incorporate a Recurrent Neural Network (RNN) as the generative process of each segment, resulting in the Recurrent HSMM (R-HSMM). To accelerate inference while preserving accuracy, we design a structured encoding function to mimic the exact inference. By generalizing the penalty method to distribution space, we are able to train the model
and the encoding function simultaneously. We also demonstrate that the R-HSMM significantly outperforms the previous state-of-the-art on both synthetic and real-world datasets.

(Figure 14 panels: PCG signals over time with segment labels Diastole, S1, Systole, S2.)

In this section, we examine the ability of the learned generative model by visualizing the reconstructed signals. Given a sequence x, we use the recognition model to get the latent variables z and d, then use the K learned generative RNNs to generate the signals within each segment. For ease of visualization, we show the results on the 1D signal datasets in Fig. 4a and Fig. 4b.

(Figure 1 panels: (a) Sine, (b) Gaussian Process; each panel shows the raw signal with segmentations labeled S1, S2, S3.)

This project was supported in part by NSF IIS-1218749, NIH BIGDATA 1R01GM108341, NSF CAREER IIS-1350983, NSF IIS-1639792 EAGER, ONR N00014-15-1-2340, Nvidia and Intel.

Yoshua Bengio, Nicholas Leonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, second edition, 1999.

Figure 1: Synthetic experiment results. Different background colors represent segmentations with different labels. In the top row, the black curve shows the raw signal. (a) The Sine dataset is generated by an HSMM with 3 hidden states, where each state has a corresponding sine function; (b) similar to 1a, but the segments are generated from Gaussian processes with different kernel functions. The first two rows are our algorithms, which locate almost every segment exactly.

architecture will mimic the forward-backward algorithm, and hence is expected to capture similar information as in the exact posterior calculation.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.

It should be emphasized that due to the discrete nature of the latent variables in our model, the algorithm proposed in Kingma & Welling (2013) and its extensions to time-series models (Gao et al., 2016; Krishnan et al., 2015) are not directly applicable. There is plenty of work based on stochastic neurons (Tang & Salakhutdinov, 2013; Bengio et al., 2013; Mnih & Gregor, 2014; Raiko et al., 2014; Gu et al., 2015; Chung et al., 2016) that could remedy this issue. However, none of these off-the-shelf methods easily achieve good performance according to our experiments: the hundreds or thousands of layers of stochastic neurons (equal in number to the length of the sequence), together with the switching generative RNN, make the encoding function very sensitive, and thus extremely difficult to train in the fully unsupervised setting. We propose a solution, the stochastic distributional penalty method, which introduces auxiliary distributions to separate the decoding R-HSMM and the encoding bi-RNN during training, and thus reduces the learning difficulty of each component.
This novel algorithm is general enough to be applied to other VAEs with discrete latent variables, and can be of independent interest. We emphasize that the proposed algorithm maximizes exactly the negative Helmholtz variational free energy. It is different from Johnson et al. (2016), in which a lower bound of the variational free energy is proposed as a surrogate to be maximized for convenience.

We experimentally justify our algorithm on synthetic datasets and three real-world datasets, namely the segmentation tasks for human activity, fruit fly behavior and heart sound records. The R-HSMM with exact Viterbi inference significantly outperforms the basic HSMM and its variants, demonstrating that the generative model is indeed more flexible. Moreover, the trained bi-RNN encoder achieves similar state-of-the-art performance to the exact inference, but with 400 times faster inference speed, showing that the proposed structured encoding function is able to mimic the exact inference efficiently.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997."}, {"section_index": "4", "section_name": "2 MODEL ARCHITECTURE", "section_text": "Given a sequence $x = [x_1, x_2, \ldots, x_{|x|}]$, where $x_t \in \mathbb{R}^m$ is an $m$-dimensional observation at time $t$, our goal is to divide the sequence into meaningful segments. Thus, each observation $x_t$ will have a corresponding label $z_t \in Z$, where $Z = \{1, 2, \ldots, K\}$ is a finite discrete label set and $K$ is predefined. The label sequence $z = [z_1, z_2, \ldots, z_{|x|}]$ has the same length as $x$.

Matthew J. Johnson and Alan S. Willsky. Bayesian nonparametric hidden semi-Markov models. The Journal of Machine Learning Research, 14(1):673-701, 2013.

Besides labels, the HSMM associates each position $t$ with an additional variable $d_t \in D = \{1, 2, \ldots, D\}$, where $d_t$ is known as the duration variable and $D$ is the maximum possible duration. The duration variable controls the number of steps the current hidden state will remain. We use $d$ to denote the duration sequence. We also use the notation $x_{t_1:t_2}$ to denote the substring $[x_{t_1}, x_{t_1+1}, \ldots, x_{t_2}]$ of $x$. Without ambiguity, we use $z$ as a segment label and $d$ as a duration.

Jamey Kain, Chris Stokes, Quentin Gaudry, Xiangzhi Song, James Foley, Rachel Wilson, and Benjamin de Bivort. Leg-tracking and automated behavioural classification in drosophila. Nature Communications, 4:1910, 2013.

Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: Embedding event history to vector. In KDD, 2016.

Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. MuProp: Unbiased backpropagation for stochastic neural networks. arXiv preprint arXiv:1511.05176, 2015.

Lingpeng Kong, Chris Dyer, and Noah A. Smith. Segmental recurrent neural networks. arXiv preprint arXiv:1511.06018, 2015.

Scott W. Linderman, Andrew C. Miller, Ryan P. Adams, David M. Blei, Liam Paninski, and Matthew Johnson. Recurrent switching linear dynamical systems. arXiv preprint arXiv:1610.08466, 2016.

In this paper, we focus on one of the variants of the HSMM, namely the explicit duration HMM (EDHMM) (Rabiner, 1989), and use Decreasing Count Variables (Chiappa, 2014) for the notation.

Explicit Duration Hidden Markov Model. Similar to the HMM, this model treats the pair $(z, d)$ as a "macro hidden state". The probability of the initial macro state is defined as $P(z, d) = P(z)P(d|z)$.
We use the notation $\pi_z \triangleq P(z)$ and $B_{z,d} \triangleq P(d|z)$ to parametrize the initial probability and the duration probability, respectively. $A_{i,j} = P(z_t = i \mid z_{t-1} = j, d_{t-1} = 1)$ is the state transition probability on the segment boundary. Here $\pi \in \mathbb{R}^K$ lies in the $K$-dimensional probability simplex. For each hidden state $z$, the corresponding rows $B_{z,:}$ and $A_{z,:}$ also lie in the probability simplex. We assume a multinomial distribution for $P(d|z)$.

In the EDHMM, the transition probability of the macro hidden state $P(z_t, d_t \mid z_{t-1}, d_{t-1})$ is decomposed as $P(z_t \mid z_{t-1}, d_{t-1})\, P(d_t \mid z_t, d_{t-1})$ and thus can be defined as:

$$P(z_t \mid z_{t-1}, d_{t-1}) = \begin{cases} A_{z_{t-1}, z_t}, & d_{t-1} = 1 \\ \mathbb{1}[z_t = z_{t-1}], & d_{t-1} > 1 \end{cases} \qquad P(d_t \mid z_t, d_{t-1}) = \begin{cases} B_{z_t, d_t}, & d_{t-1} = 1 \\ \mathbb{1}[d_t = d_{t-1} - 1], & d_{t-1} > 1 \end{cases}$$

Kevin P. Murphy. Hidden semi-Markov models (HSMMs). 2002.

Kevin P. Murphy. Machine learning: a probabilistic perspective. MIT Press, 2012.

Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989.

Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.

Jorge-L. Reyes-Ortiz, Luca Oneto, Albert Sama, Xavier Parra, and Davide Anguita. Transition-aware human activity recognition using smartphones. Neurocomputing, 171:754-767, 2016.

P. M. Williams. Bayesian conditionalisation and the principle of minimum information. British Journal for the Philosophy of Science, 31(2):131-144, 1980.

Shun-Zheng Yu. Hidden semi-Markov models. Artificial Intelligence, 174(2):215-243, 2010.

Shun-Zheng Yu and Hisashi Kobayashi. An efficient forward-backward algorithm for an explicit-duration hidden Markov model. Signal Processing Letters, IEEE, 10(1):11-14, 2003.

Arnold Zellner. Optimal information processing and Bayes's theorem. The American Statistician, 42(4), November 1988.

Andriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. arXiv preprint arXiv:1602.06725, 2016.

$$h_t = \sigma\!\left(W^{(z_{s_i})\top} x_{t-1} + V^{(z_{s_i})} h_{t-1} + b^{(z_{s_i})}\right)$$

Finally, in this model, $P(x|z,d) = \prod_{i=1}^{|s|} P(x_{s_i:s_i+d_{s_i}-1} \mid z_{s_i}, d_{s_i})$ is computed as the product of the generative probabilities for each segment. In Eq. 4, $W \in \mathbb{R}^{m \times h}$ is a weight matrix capturing the last observation $x_{t-1}$, and $V \in \mathbb{R}^{h \times h}$ propagates the history $h_{t-1}$; $b$ is a bias term. The superscript $z_{s_i}$ indexes the RNN used for the corresponding segment. Segments with different labels are generated using different RNNs, so we maintain $K$ RNNs in total. $\sigma(\cdot)$ is a nonlinear activation function; we use tanh in our experiments.
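To make the macro-state dynamics above concrete, here is a minimal Python sketch (our own illustration, not the authors' code). It assumes `pi` is a length-K initial distribution, the rows of `A` hold $P(z_t \mid z_{t-1})$ (an assumption about orientation), and the rows of `B` hold the duration distributions.

```python
# Minimal sketch: sample a macro-state trajectory (z_t, d_t) from the
# EDHMM transition structure defined above.
import numpy as np

def sample_macro_states(pi, A, B, T, rng=np.random.default_rng(0)):
    K, D = B.shape
    z = np.empty(T, dtype=int)
    d = np.empty(T, dtype=int)
    z[0] = rng.choice(K, p=pi)              # P(z_1) = pi
    d[0] = rng.choice(D, p=B[z[0]]) + 1     # P(d_1 | z_1) = B_{z_1, d_1}
    for t in range(1, T):
        if d[t - 1] == 1:                   # segment boundary: resample
            z[t] = rng.choice(K, p=A[z[t - 1]])
            d[t] = rng.choice(D, p=B[z[t]]) + 1
        else:                               # inside a segment: countdown
            z[t] = z[t - 1]
            d[t] = d[t - 1] - 1
    return z, d
```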
Recurrent Hidden Semi-Markov Model. For simplicity of explanation, we first focus our algorithm on a single sequence; it is straightforward to apply the algorithm to a dataset with multiple sequences. Given the parameters $\{\pi, A, B\}$, the log-likelihood of a single observation sequence $x$ can be written as below,

$$\mathcal{L}(x) = \log \sum_{z,d} \pi_{z_1} B_{z_1, d_1} \prod_{t=2}^{|x|} P(z_t \mid z_{t-1}, d_{t-1})\, P(d_t \mid z_t, d_{t-1})\; P(x \mid z, d)$$

where each segment is generated as

$$P(x_{s_i:s_i+d_{s_i}-1} \mid z_{s_i}, d_{s_i}) = \prod_{t=s_i}^{s_i+d_{s_i}-1} P(x_t \mid x_{s_i:t-1}, z_{s_i}) = \prod_{t=s_i}^{s_i+d_{s_i}-1} P(x_t \mid h_t, z_{s_i})$$"}, {"section_index": "5", "section_name": "Appendix", "section_text": "At time step $t$, we assume a diagonal multivariate Gaussian distribution for the conditional likelihood, where the mean and covariance matrix are outputs of the RNN, i.e.,

$$P(x_t \mid h_t, z_{s_i}) = \mathcal{N}(x_t;\, \mu,\, \Sigma), \qquad \Sigma = \mathrm{Diag}(\exp(\sigma))$$

where $\mu$ and $\sigma$ are the outputs of the RNN at time $t$."}, {"section_index": "6", "section_name": "OPTIMIZING DYNAMIC PROGRAMMING", "section_text": "The above formulation indicates that the generative model $P(x_t \mid h_t, z_{s_i})$ depends not only on the last observation $x_{t-1}$, but also on the last hidden state $h_{t-1}$, which are together captured in Eq. 4. In summary, we denote all the parameters of the proposed R-HSMM as $\theta = \{\pi, A, B, \theta_{rnn}\}$. The corresponding graphical model is shown in Figure 2b.

It is easy to see that the memory consumption is $O(|x|K)$.

To obtain the posterior or MAP in the proposed R-HSMM, the classical forward-backward algorithm or Viterbi algorithm needs to solve one dynamic program per sample, which makes inference costly, especially for long sequences with thousands of timestamps. Instead, we treat Bayesian inference from an optimization perspective, and obtain the posterior by maximizing the negative Helmholtz variational free energy (Williams, 1980; Zellner, 1988; Dai et al., 2016),

$$\max_{Q(z,d|x) \in \mathcal{P}} \; \mathcal{L}_Q(x) := \mathbb{E}_{Q(z,d|x)}\left[\log P_\theta(x, z, d) - \log Q(z, d \mid x)\right]$$

Caching emission probability. At each time step $t$, we compute $P(x_{t+r} \mid x_{t:t+r-1}, z = j)$ for each $j \in Z$ and $r \in D$; that is, we compute all the emission probabilities of observations starting from time $t$ and within the maximum possible duration $D$. This can be done by performing feed-forward passes of the $K$ RNNs. Storing these results requires $O(KD)$ space. For simplicity, we let $e^t_{j,r} = P(x_{t+r} \mid x_{t:t+r-1}, z = j)$, where $e^t \in \mathbb{R}^{K \times D}$.

over the space of all valid densities $\mathcal{P}$. To make the optimization (6) tractable, the variational autoencoder restricts the feasible set to some parametrized density $Q_\psi$, which can be executed efficiently compared to the forward-backward or Viterbi algorithm. However, such a restriction introduces extra approximation error. To reduce the approximation error, we use a structured model, i.e., a bidirectional RNN, to mimic the dynamic programming of the forward-backward algorithm. Specifically, in the forward-backward algorithm the forward message $\alpha_t(z_t, d_t)$ and the backward message $\beta_t(z_t, d_t)$ are computed recursively, and the marginal posterior at position $t$ depends on both $\alpha_t(z_t, d_t)$ and $\beta_t(z_t, d_t)$. Similarly, in the bi-RNN we embed the posterior message in the RNN's latent vector, and the marginal posterior is obtained from the latent vectors of the two RNNs. With the bi-RNN encoder, $Q_\psi$ is decomposed as:

$$Q_\psi(z, d \mid x) = Q(z_1 \mid h_1; \psi)\, Q(d_1 \mid z_1, h_1; \psi) \prod_{t=2}^{|x|} Q(z_t \mid d_{t-1}, h_t; \psi)\, Q(d_t \mid z_t, d_{t-1}, h_t; \psi)$$

where $h_t = [\mathrm{RNN}_1(x_{1:t}), \mathrm{RNN}_2(x_{t:|x|})]$ is computed by the bi-RNN. We use multinomial distributions $Q(z_t \mid h_t; \psi) = \mathcal{M}(\mathrm{softmax}(W_z^\top h_t))$ and $Q(d_t \mid z_t, h_t; \psi) = \mathcal{M}(\mathrm{softmax}(W_d^{(z_t)\top} h_t))$. The dependency on $d_{t-1}$ ensures that the generated segmentation $(z, d)$ is valid according to Eq. 1: for example, if we sampled a duration $d_{t-1} > 1$ from $Q_\psi$ at time $t-1$, then $d_t$ and $z_t$ are deterministic. In our experiments, we use the LSTM (Hochreiter & Schmidhuber, 1997) as the recursive unit in the bi-RNN.
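A minimal sketch of the bi-RNN encoder behind the factorization of $Q_\psi$ above, in Python with PyTorch. This is an assumed architecture consistent with the description (bidirectional LSTM plus per-step softmax heads), not the authors' code; the way the sampled $z_t$ is fed to the duration head is our simplification, and the deterministic countdown for $d_{t-1} > 1$ is left to the sampler.

```python
# Minimal sketch: bi-RNN encoder producing per-step posteriors
# Q(z_t | h_t) and Q(d_t | z_t, h_t), with h_t = [RNN1(x_{1:t}), RNN2(x_{t:|x|})].
import torch
import torch.nn as nn

class BiRNNEncoder(nn.Module):
    def __init__(self, obs_dim, hidden, K, D):
        super().__init__()
        self.rnn = nn.LSTM(obs_dim, hidden, bidirectional=True, batch_first=True)
        self.z_head = nn.Linear(2 * hidden, K)      # logits for Q(z_t | h_t)
        self.d_head = nn.Linear(2 * hidden + K, D)  # logits for Q(d_t | z_t, h_t)

    def forward(self, x, z_onehot):
        # x: (batch, T, obs_dim); z_onehot: (batch, T, K) sampled labels
        h, _ = self.rnn(x)                          # (batch, T, 2 * hidden)
        qz = torch.softmax(self.z_head(h), dim=-1)
        qd = torch.softmax(self.d_head(torch.cat([h, z_onehot], dim=-1)), dim=-1)
        return qz, qd
```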
"}, {"section_index": "7", "section_name": "A.2 SQUEEZE THE TIME COMPLEXITY", "section_text": "In Eq. 13, the most expensive part is the case $r = 1$, $t > 1$. Solved naively, this step would require $O(|x|K^2 D)$ time, which is quite expensive.

Here we adopt a technique similar to Yu & Kobayashi (2003). Let $\gamma_t(i) = \max_{r' \in D} \tilde{\alpha}_{t-1}(i, r')$; then we can get

$$\tilde{\alpha}_t(j, 1) = \max_{i \in Z \setminus j} \max_{r' \in D} \tilde{\alpha}_{t-1}(i, r') + \tfrac{\lambda}{1+\lambda}\log\!\left(A_{i,j} B_{j,1} P(x_t \mid z = j)\right) + \tfrac{1}{1+\lambda}\log Q_\psi(z_t = j, d_t = 1 \mid x) = \max_{i \in Z \setminus j} \gamma_t(i) + \tfrac{\lambda}{1+\lambda}\log\!\left(A_{i,j} B_{j,1} P(x_t \mid z = j)\right) + \tfrac{1}{1+\lambda}\log Q_\psi(z_t = j, d_t = 1 \mid x)$$

This reduces the complexity to $O(|x|K^2)$.

It should be emphasized that due to the discrete nature of the latent variables in our model, the algorithm proposed in Kingma & Welling (2013) is not directly applicable, and its extensions with stochastic neuron reparametrization (Bengio et al., 2013; Raiko et al., 2014; Gu et al., 2015; Chung et al., 2016) cannot provide satisfactory results for our model according to our experiments. Therefore, we extend the penalty method to distribution space to solve optimization (9),

$$\max_{\theta, \psi} \frac{1}{N} \sum_{n=1}^{N} \mathcal{L}(\theta, \psi; x^{(n)})$$

In this section, we show that Eq. 13 can be computed in a memory-efficient way. Specifically, the dynamic programming procedure can be done with an $O(|x|K)$ memory requirement, and caching the precomputed emission probabilities requires $O(D^2 K)$ memory.

Update forward variable $\tilde{\alpha}$. Note that in Eq. 13, when $r > 1$ we can update $\tilde{\alpha}_t(j, r)$ deterministically, so it is not necessary to keep the records for $r > 1$.

Specifically, we only record $\tilde{\alpha}_t(j, 1)$ and do the updates in a similar way as in Eq. 13. The only difference is that, when constructing the answer, i.e., the last segment of the solution, we need to loop over all possible $z$ and $d$ in order to find the best overall segmentation.

Note that, at a certain time step $t$, we require the emission probabilities $P(x_t \mid x_{t-r+1:t-1}, z = j)$ for some $j \in Z$ and $r \in D$. In this case, the corresponding first observation is at position $t - r + 1$, so the cached matrices $e^{t-r+1}, \ldots, e^t$ are needed at time step $t$. This makes the memory consumption go to $O(KD^2)$.

(Figure 5 panels: original and reconstructed Sine signals.)

Algorithm 1 Learning sequential VAE with the stochastic distributional penalty method
1: Input: sequences $\{x^{(n)}\}_{n=1}^N$
2: Randomly initialize $\psi^{(0)}$ and $\theta = \{\pi, A, B, \theta_{rnn}\}$
3: for $\lambda = 0, \ldots, \infty$ do
4:   for $t = 0$ to $T$ do
5:     Sample $\{x^{(n)}\}_{n=1}^M$ uniformly from the dataset with mini-batch size $M$.
6:     Get $\{z^{(n)}, d^{(n)}\}_{n=1}^M$ as the MAP or samples of $Q(z, d \mid x^{(n)})$ via Eq. 13.
7:     Update $\theta$ following the update rules for $\theta$ below.
8:     Update $\psi$ following the update rule for $\psi$ below.
9:   end for
10: end for

Figure 5: More reconstruction illustration on the Sine dataset.

(Figure 6 panels: original and reconstructed Gaussian Process signals.)

As we discussed, learning the sequential VAE with stochastic neuron reparametrization in the unsupervised setting is extremely difficult, and none of the off-the-shelf techniques provide satisfactory results. In this section, we introduce an auxiliary distribution into (9) and generalize the penalty method (Bertsekas, 1999) to distribution space,

$$\max_{\theta, \psi, \{Q(z,d|x^{(n)})\}_{n=1}^N} \frac{1}{N} \sum_{n=1}^{N} \mathbb{E}_{Q(z,d|x^{(n)})}\!\left[\log P_\theta(x^{(n)}, z, d) - \log Q(z, d \mid x^{(n)})\right] \quad \text{s.t.} \quad \mathrm{KL}\!\left(Q(z,d|x^{(n)}) \,\|\, Q_\psi(z,d|x^{(n)})\right) = 0, \;\forall n$$

Figure 6: More reconstruction illustration on the Gaussian Process dataset.

$$\max_{\theta, \psi, \{Q(z,d|x^{(n)})\}_{n=1}^N} \frac{1}{N} \sum_{n=1}^{N} \mathcal{L}_\lambda(\theta, \psi \mid x^{(n)})$$
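The alternating structure of Algorithm 1 can be summarized in a minimal Python sketch. All helper functions here (`sample_batch`, `map_inference`, `update_theta`, `update_psi`) are assumed placeholders for the components described in the surrounding text, not an actual implementation.

```python
# Minimal sketch of the stochastic distributional penalty training loop:
# alternate between inferring latents under the lambda-weighted auxiliary
# distribution and updating the decoder (theta) and encoder (psi).
def train(data, theta, psi, lam_schedule, inner_steps,
          sample_batch, map_inference, update_theta, update_psi):
    for lam in lam_schedule:                 # outer loop: increase lambda
        for _ in range(inner_steps):         # inner stochastic updates
            batch = sample_batch(data)
            # MAP (or a sample) of (z, d), via the dynamic program (Eq. 13)
            latents = [map_inference(x, theta, psi, lam) for x in batch]
            theta = update_theta(theta, batch, latents)  # decoder update
            psi = update_psi(psi, batch, latents)        # encoder update
    return theta, psi
```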
The reconstructed signals are shown in Fig. 5 and Fig. 6 for the Sine and Gaussian Process datasets, respectively. We can see that the reconstructed signals almost recover the original signals. The RNNs captured the key differences between the states, such as frequency and scale; on the Gaussian Process dataset, they also recovered the complicated patterns involving long-term dependencies.

The per-sample penalized objective is

$$\mathcal{L}_\lambda(\theta, \psi \mid x) = \mathbb{E}_{Q(z,d|x)}\!\left[\log P_\theta(x, z, d) - \log Q(z, d \mid x)\right] - \lambda\, \mathrm{KL}\!\left(Q(z,d|x) \,\|\, Q_\psi(z,d|x)\right)$$

We show the confusion matrices of all methods on the synthetic Sine and Gaussian Process datasets in Figure 7 and Figure 8, respectively."}, {"section_index": "8", "section_name": "B.2 HUMAN ACTIVITY", "section_text": "The confusion matrices of our method and two baseline algorithms are shown in Figure 9.

In Figure 10, we also show several other segmentation results on different testing sequences."}, {"section_index": "9", "section_name": "B.3 DROSOPHILA", "section_text": "The confusion matrices of our method and two baseline algorithms are shown in Figure 11.

Since each sequence is too long to be clearly shown in one figure, we split the segmentation results of one sequence into four parts, and show them in Figure 12.

The confusion matrices of our method and two baseline algorithms are shown in Figure 13.

Also, we split the segmentation results of one sequence into four parts, and show them in Figure 14.

Specifically, we first introduce an auxiliary distribution $Q(z, d|x)$ for each $x$ and reformulate the optimization (9) as (10). We enforce the introduced $Q(z, d|x)$ to equal $Q_\psi(z, d|x)$ in terms of KL-divergence, so that the optimization problems (9) and (10) are equivalent. Because of the non-negativity of the KL-divergence, it can itself be viewed as the penalty function, and we arrive at the alternative formulation (11).

(Figure 7 and Figure 8 panels: (a) rHSMM-dp, (b) rHSMM-fw, (c) subHSMM, (d) HSMM, (e) HDP-HSMM, (f) CRF-AE; axes: true class vs. predicted class.)

Figure 7: Confusion matrix on the synthetic Sine dataset.

Figure 8: Confusion matrix on the synthetic Gaussian Process dataset.

In fact, because we use stochastic gradients for updating $\theta$ and $\psi$ later, $Q^*(z, d|x)$ is never explicitly computed and only samples from it are required. Recalling that $Q_\psi(z, d|x)$ has the nice decomposition (7), we can multiply its factors into each recursion step and still get the same complexity as the original Viterbi algorithm for MAP or sampling. Specifically, let us define $\tilde{\alpha}_t(j, r)$ to be the best joint log probability of the prefix $x_{1:t}$ and its corresponding segmentation whose last segment has label $j$ and duration $r$, i.e.,

$$\tilde{\alpha}_t(j, r) = \max_{z_{1:t}, d_{1:t}} \log \tilde{Q}(z_{1:t}, d_{1:t} \mid x_{1:t}), \quad \text{s.t.} \;\; z_t = j, \; d_{t-r} = 1, \; d_{t-r+1} = r$$

$$\tilde{\alpha}_t(j, r) = \begin{cases} \tilde{\alpha}_{t-1}(j, r-1) + \frac{\lambda}{1+\lambda}\log\!\left(\frac{B_{j,r}}{B_{j,r-1}}\, P(x_t \mid x_{t-r+1:t-1}, z = j)\right) + \frac{1}{1+\lambda}\log \frac{Q_\psi(d_{t-r+1} = r \mid z = j, x)}{Q_\psi(d_{t-r+1} = r-1 \mid z = j, x)}, & r > 1, \, t > 1 \\[4pt] \max_{i \in Z \setminus j} \max_{r' \in D} \tilde{\alpha}_{t-1}(i, r') + \frac{\lambda}{1+\lambda}\log\!\left(A_{i,j} B_{j,1} P(x_t \mid z = j)\right) + \frac{1}{1+\lambda}\log Q_\psi(z_{t-r+1} = j, d_{t-r+1} = r \mid x), & r = 1, \, t > 1 \\[4pt] \frac{\lambda}{1+\lambda}\log\!\left(\pi_j B_{j,1} P(x_1 \mid z = j)\right) + \frac{1}{1+\lambda}\log Q_\psi(z_1 = j, d_1 = 1 \mid x), & r = 1, \, t = 1 \\[4pt] 0, & \text{otherwise} \end{cases}$$

Without considering the complexity of computing the emission probabilities, the dynamic programming needs $O(|x|K^2 + |x|KD)$ time (Yu & Kobayashi, 2003) and $O(|x|K)$ memory. We explain the details of optimizing the time and memory requirements in Appendix A.
Remark: When $\lambda = \infty$, $Q(z, d|x)$ is exactly $Q_\psi(z, d|x)$ and the algorithm reduces to working directly on $Q_\psi(z, d|x)$ without the effect of $P_\theta(x, z, d)$. It is then equivalent to obtaining the MAP or samples of the latent variables $z, d$ from $Q_\psi(z, d|x)$, whose cost is $O(|x|K)$. In practice, to further accelerate the computation, we can follow this strategy to generate samples once $\lambda$ is large enough that the effect of $P_\theta(x, z, d)$ is negligible.

With the fixed $Q(z, d|x)$, we can update $\theta$ and $\psi$ with a stochastic gradient descent algorithm, to avoid scanning the whole training set. Sampling a mini-batch of sequences $\{x^{(n)}\}_{n=1}^M$ with size $M \ll N$, we proceed to update $\{\theta, \psi\}$ by optimizing the Monte Carlo approximation of (11),

$$\max_{\theta, \psi} \frac{1}{M} \sum_{n=1}^{M} \left[\log P_\theta(x^{(n)}, z^{(n)}, d^{(n)}) + \lambda \log Q_\psi(z^{(n)}, d^{(n)} \mid x^{(n)})\right]$$

where $\{z^{(n)}, d^{(n)}\}$ is the MAP or a sample of $Q(z, d|x^{(n)})$. Note that the two parts related to $\theta$ and $\psi$ are now separated, so we can optimize them easily.

Update $\theta$: Finding the parameters that maximize the likelihood requires solving the constrained optimization

$$\max_{\theta} \frac{1}{M} \sum_{n=1}^{M} \log P_\theta(x^{(n)}, z^{(n)}, d^{(n)})$$

where $\{\pi, A, B\}$ are constrained to be valid probability distributions. We use stochastic gradient descent to update $\theta_{rnn}$ of the $K$ RNNs in total. For the parameters $\pi, A, B$, which are restricted to the simplex, a stochastic gradient update would involve an extra projection step. To avoid this potentially costly operation, we use the closed-form update rule derived from the Lagrangian: each of $\pi$, $A$ and $B$ is set to the corresponding normalized counts of initial states, boundary transitions and segment durations collected from $\{z^{(n)}, d^{(n)}\}_{n=1}^M$.

Since we already have the segmentation solution, the total number of samples used for training is equal to the number of observations in the dataset. The different RNNs use different parameters and train on different parts of the observations, which makes parallelized training easy.

Remark: We can draw multiple samples $\{z, d\}$ for each $x$ from $Q(z, d|x)$ to reduce the variance of the stochastic gradient. In our algorithm, the samples of the latent variables come naturally from the auxiliary distributions (which are integrated with the penalty method), rather than being derived from a lower bound of the objective (Tang & Salakhutdinov, 2013; Raiko et al., 2014; Mnih & Rezende, 2016).

Figure 9: Confusion matrix on the Human Activity dataset."}, {"section_index": "10", "section_name": "5 EXPERIMENTS", "section_text": "Baselines. We compare with the classical HSMM and two popular HSMM variants. The first is the Hierarchical Dirichlet-Process HSMM (HDP-HSMM) (Johnson & Willsky, 2013), the nonparametric Bayesian extension of the traditional HSMM that allows an infinite number of hidden states. The second, called subHSMM (Johnson & Willsky, 2014), uses an infinite HMM as the emission model for each segment. This model also has two levels of latent structure; it considers the dependency within each segment and is thus a stronger baseline than HDP-HSMM. We also compare with the CRF autoencoder (CRF-AE) (Ammar et al., 2014), which uses a Markovian CRF as the recognition model and a conditional i.i.d. model for the reconstruction. Compared to the HSMM, this model ignores the segmentation structure in its modeling and is more similar to an HMM.
(Figure 10 panels: Human Activity segmentation results with labels Walk, Walk upstairs, Walk downstairs, Sitting, Standing, Laying, Stand to sit, Sit to stand, Sit to lie, Lie to sit, Stand to lie, Lie to stand.)

Evaluation Metric. We evaluate the performance of each method via the labeling accuracy. Specifically, we compare the labels of every single observation in each testing sequence. Since the labels are unknown during training, we use the KM algorithm (Munkres, 1957) to find the best mapping between predicted labels and ground-truth labels.

Settings. Unless explicitly mentioned otherwise, we use a leave-one-sequence-out protocol to evaluate the methods: each time we test on one held-out sequence and train on the other sequences. We report the mean accuracy in Table 1. We set the truncation of the maximum possible duration D to 400 for all tasks, and we set the number of hidden states K to the ground truth.

For CRF-AE, we extend the original model to continuous observations, and learn all parameters similarly to M. Schmidt (2008). We use a mixture of Gaussians to model the emission, where the number of mixtures is tuned in {1, ..., 10}.

For the proposed R-HSMM, we use Adam (Kingma & Ba, 2014) to train the K generative RNNs and the bi-RNN encoder. To make learning tractable for long sequences, we use back propagation through time (BPTT) with a limited budget. We also tune the dimension of the hidden vector in the RNN, the L2-regularization weights and the step size. Our implementation uses CUDA, parallelized over the different RNNs, and experiments were conducted on a K20-enabled cluster. We include both the R-HSMM with exact MAP via dynamic programming (rHSMM-dp) and the sequential VAE with a forward pass (rHSMM-fw) in the experiments. In all tasks, rHSMM-fw achieves almost the same performance as rHSMM-dp but is 400 times faster, showing that the bi-RNN is able to mimic the forward-backward algorithm very well with efficient computation.

Figure 10: More segmentation results on the Human Activity dataset.

(Figure 11 panels: (a) rHSMM-dp, (b) rHSMM-fw, (c) subHSMM, (d) HSMM, (e) HDP-HSMM, (f) CRF-AE; axes: true class vs. predicted class.)

Figure 11: Confusion matrix on the Drosophila dataset.

Update $\psi$: Given fixed $\lambda$, $\log Q_\psi(z^{(n)}, d^{(n)} \mid x^{(n)})$ is essentially a sequence-to-sequence likelihood, where the input sequence is $x$ and the output sequence is $\{z, d\}$. Using the form of $Q_\psi$ in Eq. 7, this likelihood decomposes over positions. Thus we can conveniently train the bi-RNN to maximize the conditional likelihood of the latent variables by stochastic gradient descent.

For the HDP-HSMM and subHSMM, the observation distributions are initialized as standard multivariate Gaussian distributions. The duration is modeled by a Poisson distribution. We tune the concentration parameters $\alpha, \gamma \in \{0.1, 1, 3, 6, 10\}$. The hyperparameters are learned automatically. For subHSMM, we tune the truncation threshold of the second-level infinite HMM in {2, ..., 15}.
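The evaluation metric above amounts to a maximum-weight bipartite matching between predicted and ground-truth labels. A minimal Python sketch (our own illustration; `pred` and `truth` are assumed integer label arrays of equal length):

```python
# Minimal sketch: labeling accuracy after the best label mapping,
# found with the Hungarian (Kuhn-Munkres) algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def mapped_accuracy(pred, truth, K):
    # confusion[i, j] = number of frames predicted i with true label j
    confusion = np.zeros((K, K), dtype=int)
    for p, t in zip(pred, truth):
        confusion[p, t] += 1
    rows, cols = linear_sum_assignment(-confusion)  # maximize agreement
    return confusion[rows, cols].sum() / len(truth)
```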
Synthetic Experiments. We first evaluate the proposed method on two 1D synthetic sequential datasets. The first dataset is generated by an HSMM with 3 hidden states, where $\pi, A, B$ are designed beforehand. A segment with hidden state $z$ is a sine function $\lambda_z \sin(\omega_z x + \epsilon_1) + \epsilon_2$, where $\epsilon_1$ and $\epsilon_2$ are Gaussian random noises. Different hidden states use different scale parameters $\lambda_z$ and frequency parameters $\omega_z$. The second dataset also has 3 hidden states, where a segment with hidden state $z$ is sampled from a Gaussian process (GP) with kernel function $k_z(x, y)$; different hidden states employ different kernel functions. The specific kernels used here are $k_1(x, y) = \exp\{-\min(|x - y|, |x + y|)^2 / 10\}$, $k_2(x, y) = \exp\{-(x - y)^2 / 10\}$ and $k_3(x, y) = (5 - |x - y|)\,\mathbb{1}\{|x - y| < 5\}$. For both the Sine and GP datasets, the duration of a segment is randomly sampled from a distribution defined on {1, ..., 100} that depends on the hidden state. Thus, the segmentation task corresponds to finding the different functions embedded in the sequences.

(Figure 3 panels: (a) Human activity, (b) Drosophila; raw signals with per-method segmentations.)

Figure 3: Segmentation results on the Human Activity and Drosophila datasets. Different background colors represent segmentations with different labels. In the top row, the black curve shows the signal sequence projected onto the first principal component. The following two rows are our algorithms, which locate almost every segment exactly. (a) The Human Activity dataset contains 12 hidden states, each of which corresponds to a human action; (b) the Drosophila dataset contains 11 hidden states, each of which corresponds to a drosophila action.

Table 1: Error rate of segmentation. We report the mean and standard deviation of the error rate.

Methods    | SINE           | GP             | HAPT           | Drosophila     | Heart          | PN-Full
rHSMM-dp   | 2.67 ± 1.13%   | 12.46 ± 2.79%  | 16.38 ± 5.03%  | 36.21 ± 1.37%  | 33.14 ± 7.87%  | 31.95 ± 4.32%
rHSMM-fw   | 4.02 ± 1.37%   | 13.13 ± 2.89%  | 17.74 ± 7.64%  | 35.79 ± 0.51%  | 33.36 ± 8.10%  | 32.34 ± 3.97%
HSMM       | 41.85 ± 2.38%  | 41.15 ± 1.99%  | 41.59 ± 8.58%  | 47.37 ± 0.27%  | 50.62 ± 4.20%  | 45.04 ± 1.87%
subHSMM    | 18.14 ± 2.63%  | 24.81 ± 4.63%  | 22.18 ± 4.45%  | 39.70 ± 2.21%  | 46.67 ± 4.22%  | 43.01 ± 2.35%
HDP-HSMM   | 42.74 ± 2.73%  | 41.90 ± 1.58%  | 35.46 ± 6.19%  | 43.59 ± 1.58%  | 47.56 ± 4.31%  | 42.58 ± 1.54%
CRF-AE     | 44.87 ± 1.63%  | 51.43 ± 2.14%  | 49.26 ± 10.63% | 57.62 ± 0.22%  | 53.16 ± 4.78%  | 45.73 ± 0.66%

We visualize the segmentation results of the ground truth and the three competitors on the Sine and GP datasets in Figure 1a and Figure 1b respectively, and report the numerical results in Table 1. As we can see, R-HSMM provides much better results even on small segments, dramatically outperforming the HSMM variants and CRF-AE. Also note that the sine function exhibits short-term dependencies, while the Gaussian process has long-range dependencies determined by the kernel bandwidth. This demonstrates the ability of R-HSMM to capture both long- and short-term dependencies.

Figure 12: More segmentation results on the Drosophila dataset.
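A minimal Python sketch of the Sine construction above (our own reading of it; the noise scales and the uniform duration distribution are assumptions, since the text only specifies Gaussian noises and a state-dependent distribution on {1, ..., 100}):

```python
# Minimal sketch: generate a synthetic Sine sequence with segment labels.
import numpy as np

def make_sine_sequence(T, lambdas, omegas, rng=np.random.default_rng(0)):
    xs, zs, t = [], [], 0
    while t < T:
        z = rng.integers(len(lambdas))        # hidden state for the segment
        dur = rng.integers(1, 101)            # duration on {1, ..., 100}
        u = np.arange(dur)
        # lambda_z * sin(omega_z * x + e1) + e2, with Gaussian noises e1, e2
        seg = lambdas[z] * np.sin(omegas[z] * u + rng.normal()) \
              + 0.1 * rng.normal(size=dur)
        xs.append(seg); zs.append(np.full(dur, z)); t += dur
    return np.concatenate(xs)[:T], np.concatenate(zs)[:T]
```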
Human activity. This dataset, collected by Reyes-Ortiz et al. (2016), consists of signals from a waist-mounted smartphone with accelerometers and gyroscopes. Each of the volunteers is asked to perform a protocol composed of 12 activities (see Figure 3a for the details). Since the signals within an activity type exhibit high correlation, it is natural for an RNN to model this dependency. We use these 61 sequences, where each sequence has length around 3000. Each observation is a 6-dimensional vector, consisting of triaxial measurements from the accelerometer and the gyroscope.

Figure 3a shows the ground truth and the segmentation results of all methods. Both rHSMM-dp and rHSMM-fw almost perfectly recover the true segmentation. They can also capture the transition activity types, e.g., stand-to-lie or sit-to-lie. The HSMM, HDP-HSMM and CRF-AE make fragmentary but periodic segmentations for walking, caused by the lack of dependency modeling within a segment. The subHSMM has a similar problem, possibly due to the limited capacity of its HMM generative model.

Drosophila. Here we study the behavior patterns of drosophilas. The data was collected by Kain et al. (2013) with two dyes, two cameras and some optics, to track each leg of a spontaneously behaving fruit fly. The dimension of the observation at each timestamp is 45, consisting of the raw features and some higher-order features. See Figure 3b for the details of the 11 behavior types. We perform leave-one-sequence-out experiments on 10 sequences of length 10000 each. Figure 3b shows the segmentation results on the prefix of one sequence, while Table 1 gives the mean accuracy on all sequences. Different from the previous experiment, where the human activity signals are relatively

(Figure 12 panels: Drosophila segmentations with labels Nothing, Postural adjustment, Running, Turning in place, Crabwalking, Cplx motion, L1 groom, Head groom, L2-L3 groom, Ab. groom, L3 groom.)

Figure 13: Confusion matrix on the Heart Sound dataset."}]
rky3QW9le
[{"section_index": "0", "section_name": "TRANSFORMATIONAL SPARSE CODING", "section_text": "Marc'Aurelio Ranzato. Fu-Jie Huang. Y-Lan Boureau, and Yann LeCun. Unsupervised learning o1 invariant feature hierarchies with applications to object recognition. In Proc. Computer Vision and Pattern Recognition Conference (CVPR'07). IEEE Press, 2007.\nDimitrios C. Gklezakos & Raiesh P. N. Rao\nDepartment of Computer Science and Center for Sensorimotor Neural Engineering University of Washington Seattle. WA 98105. USA\ngklezd, rao}@cs.washington.edu\nRajesh P. N. Rao and Daniel L. Ruderman. Learning lie groups for invariant visual perception In Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems. II, pp. 810-816, Cambridge, MA, USA, 1999. MIT Press. ISBN 0-262-11245-0. URL http: //dl.acm.0rg/citation.cfm?id=340534.340807\nA fundamental problem faced by object recognition systems is that objects anc. their features can appear in different locations, scales and orientations. Current. deep learning methods attempt to achieve invariance to local translations via pool ing, discarding the locations of features in the process. Other approaches explic. itly learn transformed versions of the same feature, leading to representations tha1 quickly explode in size. Instead of discarding the rich and useful information. about feature transformations to achieve invariance, we argue that models should learn object features conjointly with their transformations to achieve equivariance We propose a new model of unsupervised learning based on sparse coding tha. can learn object features jointly with their affine transformations directly fron images. Results based on learning from natural images indicate that our approach. matches the reconstruction quality of traditional sparse coding but with signifi cantly fewer degrees of freedom while simultaneously learning transformations from data. These results open the door to scaling up unsupervised learning tc allow deep feature+transformation learning in a manner consistent with the ven tral+dorsal stream architecture of the primate visual cortex.."}, {"section_index": "1", "section_name": "B DEEPER TREES AND STRUCTURE", "section_text": "Figure6|presents an example of structure learned by deeper trees. This example consists of vertical and horizontal lines. Each image patch is either blank, contains one vertical or one horizontal line or both. A patch is blank with probability , contains exactly one line with probability ? or two lines with probability ?. Each line is then generated at one of eight positions at random. Fitting two binary trees results in some continuity in the features. whereas flat trees provide no such structure"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "A challenging problem in computer vision is the reliable recognition of objects under a wide range of transformations. Approaches such as deep learning that have achieved success in recent years usually require large amounts of labeled data, whereas the human brain has evolved to solve the problem using an almost unsupervised approach to learning object representations. During early development, the brain builds an internal representation of objects from unlabeled images that can be used in a wide range of tasks.\nMuch of the complexity in learning efficient and general-purpose representations comes from the fact that objects can appear in different poses, at different scales, locations, orientations and lighting conditions. 
Models have to account for these transformed versions of objects and their features. Current successful approaches to recognition use pooling to allow limited invariance to two-dimensional translations (Ranzato et al. (2007)). At the same time, pooling discards information about the location of the detected features. This can be problematic because scaling to large numbers of objects requires modeling objects in terms of parts and their relative pose, requiring the pose information to be retained.

Previous unsupervised learning techniques such as sparse coding (Olshausen & Field (1997)) can learn features similar to the ones in the visual cortex, but these models have to explicitly learn large numbers of transformed versions of the same feature and, as such, quickly succumb to combinatorial explosion, preventing hierarchical learning. Other approaches focus on computing invariant object signatures (Anselmi et al. (2013; 2016)), but are completely oblivious to pose information.

Ideally, we want a model that allows object features and their relative transformations to be simultaneously learned, endowing itself with a combinatorial explanatory capacity by being able to apply learned object features with object-specific transformations across large numbers of objects. The goal of modeling transformations in images is two-fold: (a) to facilitate the learning of

Jascha Sohl-Dickstein, Jimmy C. Wang, and Bruno A. Olshausen. An unsupervised algorithm for learning Lie group transformations. CoRR, abs/1001.1027, 2010. URL http://arxiv.org/abs/1001.1027.

$$I \approx Fw \quad \text{s.t. } w \text{ is sparse}$$

Sparsity is usually enforced by an appropriate penalty; a typical choice is $S_1(w) = \|w\|_1$. We can enhance sparse coding with affine transformations by transforming features before combining them. The vectorized input image $I$ is then modeled as:

$$I = \sum_{k=1}^{K} w_k\, T(x_k)\, F_k$$

where $w_k$ and $F_k$ denote the $k$-th weight specific to the image and the $k$-th feature respectively, and $T(x_k)$ is a feature- and image-specific transformation.

In modeling image transformations we follow the approach of Rao & Ruderman (1999) and Miao & Rao (2007). We consider the 2D general affine transformations. These include rigid motions such as vertical and horizontal translations and rotations, as well as scaling, parallel hyperbolic deformations along the X/Y axis, and hyperbolic deformations along the diagonals. A discussion of why these are good candidates for inclusion in a model of visual perception can be found in Dodwell (1983). Figure 5 in Appendix A shows the effects of each transformation.

Any subset of these transformations forms a Lie group with the corresponding number of dimensions (6 for the full set). Any transformation in this group can be expressed as the matrix exponential of a weighted combination of matrices (the group generators) that describe the behaviour of infinitesimal transformations around the identity:

$$T(x) = e^{\sum_j x_j G_j}$$

For images of $M$ pixels, $T(x)$ is a matrix of size $M \times M$. Note that the generator matrices and the features used are common across all images. The feature weights and transformation parameters can be inferred (and the features learned) by gradient descent on the regularized MSE objective:

$$L(w, x, F) = \frac{1}{N} \sum_{i=1}^{N} \left\| I_i - \sum_{k=1}^{K} w_{ik}\, T(x_{ik})\, F_k \right\|_2^2 + \lambda_w S_1(w) + \lambda_F \|F\|_2^2$$

Although this model ties sparse coding with transformations elegantly, learning large transformations with it is intractable. The error surface of the loss function is highly non-convex, with many shallow local minima. Figures 1(a), 1(b), 1(c) show the surface of $L$ as a function of the horizontal and vertical translation, horizontal translation and rotation, and vertical translation and rotation parameters. The model tends to settle for small transformations around the identity.
Due to the size of the parameters that we need to maintain, a random-restart approach would be infeasible.

We introduce Transformational Sparse Coding Trees to circumvent this problem using hierarchies of transformed features. The main idea is to gradually marginalize over an increasing range of transformations. Each node in the tree represents a feature derived as a transformed version of its parent, with the root being the template of the feature. The leaves are equivalent to a set of sparse basis features and are combined to reconstruct the input as described above. A version of the model using a forest of trees of depth one (flat trees) is given by:

$$I \approx \sum_{v=1}^{V} \sum_{b \in ch(v)} w_b\, U_b$$

where $U_b = T(x_{v \to b})\, F_v$ and $ch(v)$ are the children of root $v$. The feature $U_b$ is a leaf, derived from the root feature $F_v$ via the fixed (across all data points) transformation $T(x_{v \to b})$. Deeper trees can be built accordingly (Section 3.3). A small example of a tree learned from natural image patches is shown in Figure 2.

Figure 5: Effects of each individual transformation on the template (a): (b) horizontal translation, (c) vertical translation, (d) rotation, (e) scaling, (f) parallel hyperbolic deformation along the X/Y axis, (g) hyperbolic deformation along the diagonals. To compute the generators, we used the sinc interpolation function.

There are multiple advantages to such a hierarchical organization of sparse features. Some transformations are more common in data than others, and each path in the tree corresponds to a transformation that is common across images; such a path can be viewed as a "transformation feature" learned from the data. Each additional node in the tree "costs" a fixed set of new parameters, equal in size to the dimension of the underlying Lie group (six in our case), while at the same time contributing a whole new feature to the sparse code. Averaging over many data points smoothens the surface of the error function and makes larger transformations more accessible to optimization. Figures 1(d), 1(e), 1(f) show the error surface averaged over a batch of 2000 patches.
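A minimal Python sketch of the flat-tree construction above (our own illustration; the precomputed `generators` array and the parameter shapes are assumptions):

```python
# Minimal sketch: materialize the leaves of one flat tree by pushing the
# root feature through T(x) = exp(sum_j x_j G_j) for each branch.
import numpy as np
from scipy.linalg import expm

def tree_leaves(root, branch_params, generators):
    # root: (M,) vectorized feature; branch_params: (B, 6) Lie parameters;
    # generators: (6, M, M) affine group generators.
    leaves = []
    for x in branch_params:
        T = expm(np.tensordot(x, generators, axes=1))  # weighted generator sum
        leaves.append(T @ root)
    # The input is then reconstructed as I ~ sum_b w_b * leaves[b].
    return np.stack(leaves)
```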
For every leaf that is activated, the root template represents the identity of the feature, and the transformation associated with the path to the root represents the pose. In other words, the tree is an equivariant representation of the feature over the parameter region defined by the set of paths to the leaves, very similar to the concept of a capsule introduced by Hinton et al. (2011). In fact, every increasing subtree corresponds to a capsule of increasing size.

Figure 1: Normalized reconstruction error for individual vs. batch 8 × 8 natural image patches. (a), (b), (c) show the surface of the reconstruction error for horizontal and vertical translations, horizontal translations and rotation, and vertical translations and rotation, for an individual data point and feature. (d), (e), (f) show the same, averaged over a batch of 2000 data points. The error is normalized between 0 and 1 for comparison. The global minimum in the range is marked in red. In the batch case, averaging makes the error surface smoother and learning easier.

Figure 6: Features learned for the double-line example: (a) input, (b) features learned by a forest of two flat trees of size eight, (c) features learned by two binary trees of the same size. For (c) the leaves have been reordered with subtree permutations to reveal the order. Each subtree learns features corresponding to an area of the input.

Figure 2: Example of a tree learned from natural image patches. The leaves correspond to rigid transformations of the root.

The reconstruction mean squared error (MSE) for a forest of flat trees is given by:

$$L_{MSE}(w, x, F) = \frac{1}{N} \sum_{i=1}^{N} \left\| I_i - \sum_{v=1}^{V} \sum_{b \in ch(v)} w_{ib}\, T(x_{v \to b})\, F_v \right\|_2^2$$

Increasing the feature magnitudes and decreasing the weights would result in a decrease in loss, so we constrain the root feature magnitudes to be of unit $\ell_2$ norm. Consider different transformed versions of the same root template: for every such version there is a set of tree parameters that compensates for the intrinsic transformation of the root and results in the same leaves. To make the solution unique, we directly penalize the transformation parameter magnitudes. Since scaling and parallel deformation can also change the magnitude of the filter, we penalize them more, to keep the features/leaves close to unit norm. The full loss function of the model is:

$$L(w, x, F) = L_{MSE}(w, x, F) + \lambda_w S_1(w) + \sum_{j=1}^{6} \lambda_j \|\mathbf{x}_j\|_2^2 \quad \text{s.t.} \quad \forall v,\; \|F_v\|_2 = 1$$

where $\mathbf{x}_j$ is the vector of the collective parameters for generator $G_j$.

Lee et al. (2007) use an alternating optimization approach to sparse coding: first the weights are inferred using the feature-sign algorithm, and then the features are learned using a Lagrange dual approach. We use the same approach for the weights. We then optimize the transformation parameters using gradient descent. The root features can be optimized using the analytical solution, projecting to unit norm.

The matrix exponential gradient $\frac{\partial L}{\partial x}$ can be computed using the following formula (Ortiz et al., 2001):

$$\frac{\partial e^{A(t)}}{\partial t} = \int_0^1 e^{\alpha A(t)}\, \frac{\partial A(t)}{\partial t}\, e^{(1-\alpha) A(t)}\, d\alpha$$

where $D(\alpha) = e^{\alpha A(t)}\, \frac{\partial A(t)}{\partial t}\, e^{(1-\alpha) A(t)}$. For our experiments we approximated the gradient by drawing a few samples $\{\alpha_s\}$ and computing $\mathbb{E}_{\alpha \sim U(0,1)}[D(\alpha)]$.¹ This can be regarded as a stochastic version of the approach used by Culpepper & Olshausen (2009).

¹In practice even a single sample works well. The computation over samples is easily parallelizable.
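The sampled estimator described above is easy to state in code. A minimal Python sketch (our own illustration; `A` is the generator combination and `dA` its derivative with respect to the parameter of interest):

```python
# Minimal sketch: estimate d/dt exp(A(t)) by averaging D(alpha) at
# alphas drawn uniformly from [0, 1], as in the formula above.
import numpy as np
from scipy.linalg import expm

def expm_grad_estimate(A, dA, n_samples=1, rng=np.random.default_rng(0)):
    alphas = rng.uniform(0.0, 1.0, size=n_samples)
    D = [expm(a * A) @ dA @ expm((1.0 - a) * A) for a in alphas]
    return sum(D) / n_samples
```

Even `n_samples=1` gives a usable stochastic gradient, which is what makes this cheaper than evaluating the integral exactly.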
Figure 7: Learned features for 16 trees with branching factor 16. Each row corresponds to leaves/transformations of the same root.

Some features might get initialized near shallow local optima (i.e., close to the borders or outside the receptive field). These features eventually become under-used by the model.² We periodically check for under-used features and re-initialize their transformation parameters. For re-initialization, we select another feature in the same tree at random, with probability proportional to the fraction of data points that used it in that batch. We then reset the transformation parameters at random, with small variance, centered around the chosen filter's parameters.

²A feature is under-used when the total number of data points using it in a batch drops close to zero."}, {"section_index": "4", "section_name": "3.1 LEARNING REPRESENTATIONS", "section_text": "We apply transformational sparse coding (TSC) with forests of flat trees to natural image patches. Our approach allows us to learn features resembling those of traditional sparse coding. Apart from reconstructing the input, the model also extracts transformation parameters from the data. Figure 3 shows a reconstruction example. Figure 4 shows the root features learned from 10 × 10 natural image patches using a forest of size 8 with branching factor 8, equipped with the full six-dimensional group. The forest has a total of 64 features. Figure 4(a) shows the features corresponding to the roots; Figure 4(b) shows the corresponding leaves. Each row contains features derived from the same root. More examples of learned features are shown in Figures 7, 8, 9 and 10 in the Appendix.

Figure 8: Learned features for 8 trees with branching factor 32. Each row corresponds to leaves/transformations of the same root.

Figure 3: Reconstruction example. The root features are transformed and combined with different weights (0.0527, 0.4183, -0.4114) to reconstruct (bottom right) the 8 × 8 natural image patch in the top right corner."}, {"section_index": "5", "section_name": "3.2 COMPARISON WITH SPARSE CODING", "section_text": "We compare transformational sparse coding forests of various layouts and choices of $\lambda_w$ with traditional sparse coding on 10 × 10 natural image patches. Some transformations change the feature magnitudes and therefore the sparsity pattern of the weights. To make the comparison clearer, for each choice of layout and penalty coefficient we run sparse coding, constraining the feature magnitudes to be equal to the average feature magnitude of our model. The results are shown in Table 1. The reconstruction error of our model is close to that of sparse coding, albeit with slightly less sparse solutions, even though it has significantly fewer degrees of freedom. Our model additionally extracts pose information in the form of group parameters.

Figure 9: Learned features for 4 trees with branching factor 16. Each row corresponds to leaves/transformations of the same root.

Even though derivative features have to be explicitly constructed for inference, the degrees of freedom of our model are significantly lower than those of traditional sparse coding. Specifically:

$$df_{TSC} = (\#\text{ of roots}) \times (\#\text{ of pixels} - 1 + \text{branching factor} \times \text{group dimension})$$

Note that the group dimension is equal to 3 for rigid motions and 6 for general 2D affine transformations.
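As a concrete check of this count against Table 1, take the 8 × 8 layout on 10 × 10 patches (8 roots, branching factor 8, the full six-dimensional group):

$$df_{TSC} = 8 \times (100 - 1 + 8 \times 6) = 8 \times 147 = 1176, \qquad df_{SC} = 64 \times (100 - 1) = 6336, \qquad \frac{df_{SC}}{df_{TSC}} = \frac{6336}{1176} \approx 5.38$$

which matches the $df_{TSC}$, $df_{SC}$ and ratio columns of Table 1 for that layout.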
Figure 4: Learned features for 8 trees with a branching factor of 8. (a) Features corresponding to the roots. (b) Features/Leaves: each row corresponds to leaves/transformations of the same root.

Table 1: Comparison of transformational sparse coding (TSC) with sparse coding (SC) for 10 × 10 natural image patches. We compare the error (MSE) and the degrees of freedom (df) over 40000 data points. "Sparsity" is the average number of non-zero weights. $\lambda_w$ is the penalty coefficient for the weights and controls the sparseness of the solution.

λ_w  | Layout  | TSC MSE | TSC Sparsity | df_TSC | SC MSE | SC Sparsity | df_SC | # of features | df_SC / df_TSC
0.4  | 1 × 64  | 2.13    | 13.3         | 447    | 1.71   | 12.3        | 6336  | 64            | 14.17
0.5  | 1 × 128 | 2.28    | 12.1         | 867    | 1.96   | 10.3        | 12672 | 128           | 14.62
0.4  | 8 × 8   | 1.89    | 13.3         | 1176   | 1.72   | 12.5        | 6336  | 64            | 5.38
0.4  | 4 × 16  | 1.91    | 13.3         | 780    | 1.69   | 12.3        | 6336  | 64            | 8.12
0.5  | 8 × 8   | 2.36    | 10.4         | 1176   | 2.15   | 9.9         | 6336  | 64            | 5.38
0.5  | 4 × 16  | 2.38    | 11.0         | 780    | 2.12   | 10.0        | 6336  | 64            | 8.12
0.4  | 16 × 16 | 1.66    | 14.3         | 3120   | 1.56   | 13.2        | 25344 | 256           | 8.12
0.4  | 8 × 32  | 1.67    | 14.6         | 2328   | 1.56   | 13.2        | 25344 | 256           | 10.88

Figure 10: Learned features for 1 tree with branching factor 64. All features are transformations of the same root.

We can define deeper trees by associating a set of transformation parameters with each branch. These correspond to additive contributions to the complete transformation that yields the leaf when applied to the root:

$$I \approx \sum_{v=1}^{V} \sum_{b \in ch(v)} w_b\, T(x_{p_b})\, F_v, \qquad x_{p_b} = \sum_{e \in path(b, v)} x_e$$

Optimizing deeper trees is more demanding due to the increased number of parameters. Their advantage is that they lend structure to the model: the parameters corresponding to the subtree of an internal node tend to explore the parameter subspace close to the transformation defined by that internal node. In tasks where it is disadvantageous to marginalize completely over transformations, equivariant representations corresponding to intermediate tree layers can be used. An example of such structure is shown in Figure 6 in Appendix B.

Sohl-Dickstein et al. (2010) present a model for fitting Lie groups to video data. Their approach only works for estimating a global transformation between consecutive video frames, and supports transformations of a single kind (e.g., only rotations): different single-parameter transformations have to be chained together to produce the global one, the corresponding transformation parameters have to be inferred and stored in memory, and they cannot be directly converted to the parameters of a single transformation. Kokiopoulou & Frossard (2009) present an approach to optimally estimating transformations between pairs of images; they support rigid motions and isotropic scaling. Culpepper & Olshausen (2009) focus on learning the group operators and transformation parameters from pairs of images, but do not learn features from data. Our model supports all six transformations and learns object parts and their individual transformations. In contrast with those approaches, our model learns object parts jointly with their transformations within the same image, utilizes the full six-dimensional general affine Lie group, and captures the pose of each object part in the form of a single set of six transformation parameters.

Grimes & Rao (2005) propose a bilinear model that combines sparse coding with transformations. That model accounts for global transformations that apply to the entire image region; our model accounts for individual transformations of image parts. Rao & Ballard (1998) propose a model that captures small image transformations with Lie groups using a first-order Taylor approximation.
Sohl-Dickstein et al. (2010) present a model for fitting Lie groups to video data. Their approach only works for estimating a global transformation between consecutive video frames, and they only support transformations of a single kind (i.e., only rotations). Different such single-parameter transformations have to be chained together to produce the global one. The corresponding transformation parameters also have to be inferred and stored in memory, and cannot be directly converted to parameters of a single transformation. Kokiopoulou & Frossard (2009) present an approach to optimally estimating transformations between pairs of images; they support rigid motions and isotropic scaling. Culpepper & Olshausen (2009) focus on learning the group operators and transformation parameters from pairs of images, but do not learn features from data. Our model supports all six transformations and learns object parts and their individual transformations. In contrast with those approaches, our model learns object parts jointly with their transformations within the same image. Our model utilizes the full, six-dimensional, general affine Lie group and captures the pose of each object part in the form of a single set of six transformation parameters.
Grimes & Rao (2005) propose a bilinear model that combines sparse coding with transformations. The model accounts for global transformations that apply to the entire image region, whereas our model accounts for individual transformations of image parts. Rao & Ballard (1998) propose a model that captures small image transformations with Lie groups using a first-order Taylor approximation. Our model estimates larger transformations of image parts using the full exponential model. Rao & Ruderman (1999) and Miao & Rao (2007) use a first-order Taylor approximation to learn the group operators and the transformation parameters for small transformations.
The work closest to ours is that of Hinton et al. (2011) on capsules. A capsule learns to recognize its template (feature) over a wide range of poses. The pose is computed by a neural network (encoder). The decoder, resembling a computer graphics engine, combines the capsule templates in different poses to reconstruct the image. Each transformational sparse coding tree can be thought of as a capsule: the template corresponds to the root, and the tree learns to "recognize" transformed versions of that template. Our work arrives at the concept of a capsule from a sparse coding perspective. A major difference is that our approach allows us to reuse each feature multiple times, in different transformed versions, for each data point."}, {"section_index": "6", "section_name": "5 CONCLUSION", "section_text": "In this paper, we proposed a sparse coding based model that learns object features jointly with their transformations, from data. Naively extending sparse coding for data-point specific transformations makes inference intractable. We introduce a new technique that circumvents this issue by using a tree structure that represents common transformations in data. We show that our approach can learn interesting features from natural image patches with performance comparable to that of traditional sparse coding.
Investigating the properties of deeper trees, learning the tree structure dynamically from the data, and extending our model into a hierarchy are subjects of ongoing research.
Gens & Domingos (2014) propose a convolutional network that captures symmetries in the data by modeling symmetry groups. Experiments with rigid motions or various affine transformations show reduced sample complexity. Cohen & Welling (2016) propose a convolutional network that can handle translations, reflections and rotations of 90 degrees. Dieleman et al. (2016) propose a network that handles translations and rotations. All the above are supervised learning models and, apart from the first, can handle a limited set of transformations. Our model is completely unsupervised, extends sparse coding and can handle all transformations given by the first-order differential equation:

\frac{dI(\theta)}{d\theta} = A \, I(\theta)
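For reference, this linear ODE has the standard matrix-exponential solution (a textbook fact rather than a claim from the paper), which is exactly the exponential model used for the transformations above:

```latex
\frac{dI(\theta)}{d\theta} = A\, I(\theta)
\;\;\Longrightarrow\;\;
I(\theta) = e^{\theta A} I(0) = \Big(\sum_{k=0}^{\infty} \frac{\theta^k A^k}{k!}\Big) I(0).
```

A one-parameter family of transformations is therefore generated by a single matrix A, and the six-dimensional affine case combines six such generators.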
"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, and Tomaso A. Poggio. Unsupervised learning of invariant representations in hierarchical architectures. CoRR, abs/1311.4158, 2013. URL http://arxiv.org/abs/1311.4158
Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, and Tomaso Poggio. Unsupervised learning of invariant representations. Theor. Comput. Sci., 633(C):112-121, June 2016. ISSN 0304-3975. doi: 10.1016/j.tcs.2015.06.048. URL http://dx.doi.org/10.1016/j.tcs.2015.06.048
David B. Grimes and Rajesh P. N. Rao. Bilinear sparse coding for invariant vision. Neural Comput., 17(1):47-73, January 2005. ISSN 0899-7667. doi: 10.1162/0899766052530893. URL http://dx.doi.org/10.1162/0899766052530893
Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311-3325, 1997.
M. Ortiz, R. A. Radovitzky, and E. A. Repetto. The computation of the exponential and logarithmic mappings and their first and second linearizations. International Journal for Numerical Methods in Engineering, 52:1431, December 2001. doi: 10.1002/nme.263.
Robert Gens and Pedro Domingos. Deep symmetry networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems, NIPS'14, pp. 2537-2545, Cambridge, MA, USA, 2014. MIT Press. URL http://dl.acm.org/citation.cfm?id=2969033.2969110
Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. In B. Scholkopf, J. C. Platt, and T. Hoffman (eds.), Advances in Neural Information Processing Systems 19, pp. 801-808. MIT Press, 2007. URL http://papers.nips.cc/paper/2979-efficient-sparse-coding-algorithms.pdf"}]
SJIMPr9eg
[{"section_index": "0", "section_name": "BOOSTED RESIDUAL NETWORKS", "section_text": "Alan Mosca & George D. Magoulas\nDepartment of Computer Science and Information Systems Birkbeck, University of London Malet Street. WC1E 7HX. London. UK\na.mosca,gmagoulas}@dcs.bbk.ac.uk"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Residual Networks have been developed with many more layers than traditional Deep Networks. in some cases with over 1000 blocks, such as the networks in He et al. (2016). A recent study ir Veit et al. (2016) compares Residual Networks to an ensemble of smaller networks. This is done. by unfolding the shortcut connections into the equivalent tree structure, which closely resembles ar ensemble. An example of this can be shown in Figure1\n>1\nFigure 1: A Residual Network of N blocks can be unfolded into an ensemble of 2N _ 1 smaller networks.\nDense Convolutional Neural NetworksHuang et al. (2016) are another type of network that make use of shortcuts, with the difference that each layer is connected to all its ancestor layers directly b a shortcut. Similarly, these could be also unfolded into an equivalent ensemble.\nTrue ensemble methods are often left as an afterthought in Deep Learning models: it is generally considered sufficient to treat the Deep Learning method as a \"black-box'' and use a well-known generic Ensemble method to obtain marginal improvements on the original results. Whilst this is an effective way of improving on existing results without much additional effort, we find that it can amount to a waste of computations. Instead, it would be much better to apply an Ensemble method that is aware, and makes us of, the underlying Deep Learning algorithm's architecture.\nWe define such methods as \"white-box\"' Ensembles, which allow us to improve on the generalisation. and training speed compared to traditional Ensembles, by making use of particular properties of the"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper we present a new ensemble method, called Boosted Residual Net works, which builds an ensemble of Residual Networks by growing the member network at each round of boosting. The proposed approach combines recent de- velopements in Residual Networks - a method for creating very deep networks by ncluding a shortcut layer between different groups of layers - with the Deep Incre. nental Boosting, which has been proposed as a methodology to train fast ensem les of networks of increasing depth through the use of boosting. We demonstrate hat the synergy of Residual Networks and Deep Incremental Boosting has better ootential than simply boosting a Residual Network of fixed structure or using the equivalent Deep Incremental Boosting without the shortcut layers\nResidual Networks, a type of deep network recently introduced in He et al. (2015a), are character. ized by the use of shortcut connections (sometimes also called skip connections), which connect. the input of a layer of a deep network to the output of another layer positioned a number of levels. 'above' it. The result is that each one of these shortcuts shows that networks can be build in blocks. which rely on both the output of the previous layer and the previous block..\nThe next section presents the background on Deep Incremental Boosting. Then the proposed Boosted Residual Networks method is described. 
This practically enables the ensemble algorithm to train the subsequent rounds for a considerably smaller number of epochs, consequently reducing the overall training time by a large factor. The original paper also provides a conjecture-based justification for why it makes sense to extend the previously trained network to learn the "corrections" taught by the boosting algorithm. A high-level description of the method is shown in Algorithm 1, and the structure of the network at each round is illustrated in Figure 2.
Algorithm 1 Deep Incremental Boosting
We propose a new such method, which we call Boosted Residual Networks, which makes use of developments in Deep Learning and previous white-box Ensembles, and combines several ideas to achieve improved results on benchmark datasets.
Using a white-box ensemble allows us to improve on the generalisation and training speed by making use of the knowledge of the base classifier's structure and architecture. Experimental results show that Boosted Residual Networks achieve improved results on benchmark datasets.
Figure 2: Illustration of subsequent rounds of DIB.
In this section we propose a method for generating Boosted Residual Networks. This works by increasing the size of an original residual network by one residual block at each round of boosting. The method achieves this by selecting an injection point index p_i at which the new block is to be added, which is not necessarily the last block in the network, and by transferring the weights from the layers below p_i in the network trained at the previous round of boosting.
Because the boosting method performs iterative re-weighting of the training set, skewing the resample at each round to emphasize the training examples that are harder to train, it becomes necessary to utilise the entire ensemble at test time, rather than just use the network trained in the last round. This has the effect that the Boosted Residual Networks cannot be used as a way to train a single Residual Network incrementally. However, as we will discuss later, it is possible to alleviate this situation by deriving an approach that uses bagging instead of boosting, therefore removing the necessity to use the entire ensemble at test time. It is also possible to delete individual blocks from a Residual Network at training and/or testing time, as presented in He et al. (2015a); however this issue is considered out of the scope of this paper.
The iterative algorithm used in the paper is shown in Algorithm 2. At the first round, the entire training set is used to train a network of the original base architecture, for a number of epochs n_0. After the first round, the following steps are taken at each subsequent round t:
The ensemble constructed so far is evaluated on the training set to obtain the set errors e, so that a new training set can be sampled from the original training set. This is a step common to all boosting algorithms.
A new network is created, with the addition of a new block of layers B_new immediately after position p_t, which is determined as an initial pre-determined position p_0 plus an offset i * delta for all the blocks added at previous rounds. This puts the new block of layers immediately after the block of layers added at the previous round, so that all new blocks are effectively added sequentially.
The weights from the layers below p_t are copied from the network trained at round t - 1 to the new network. This step allows to considerably shorten the training thanks to the transfer of learning shown in Yosinski et al. (2014).
The newly created network is subsequently trained for a reduced number of epochs n_{t>0}.
The new network is added to the ensemble following the traditional rules and weight alpha_t used in AdaBoost.
A sketch of this loop is given below.
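This is a compact sketch of the BRN round loop (ours); the model methods train, misclassified, add_block_after, and the data.sample helper are placeholders, and the AdaBoost bookkeeping is the standard weighted-error update rather than code from the paper.

```python
import copy
import numpy as np

def boosted_residual_networks(base_net, data, rounds, n0, nt, p0, delta):
    """Sketch of BRN: grow the ResNet by one block per boosting round,
    transferring weights from the previous round's network."""
    weights = np.full(len(data), 1.0 / len(data))        # example weights
    net, epochs = base_net, n0
    ensemble = []
    for t in range(rounds):
        net = net.train(data.sample(weights), epochs=epochs)
        miss = net.misclassified(data)                   # 0/1 vector, placeholder API
        err = float(np.dot(weights, miss))               # weighted training error
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        ensemble.append((alpha, net))                    # AdaBoost member weight
        weights = weights * np.exp(alpha * miss)         # emphasise hard examples
        weights = weights / weights.sum()
        p_t = p0 + t * delta                             # injection point for next round
        net = copy.deepcopy(net)                         # keeps weights below p_t
        net.add_block_after(p_t)                         # fresh residual block B_new
        epochs = nt                                      # later rounds train briefly
    return ensemble
```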
Figure 3 shows a diagram of how the Ensemble is constructed by deriving the next network at each round of boosting from the network used in the previous round.
Algorithm 2 Boosted Residual Networks
Figure 3: Illustration of subsequent rounds of BRN.
We identified a number of optional variations to the algorithm that may be implemented in practice, which we have empirically established as not having an impact on the overall performance of the network. We report them here for completeness:
Freezing the layers that have been copied from the previous round.
Only utilising the weights distribution for the examples in the training set instead of resampling, as an input to the training algorithm.
Inserting the new block always at the same position, rather than after the previously inserted block (we found this to affect performance negatively)."}, {"section_index": "3", "section_name": "3.1 COMPARISON TO APPROXIMATE ENSEMBLES", "section_text": "While both Residual Networks and Densely Connected Convolutional Networks may be unfolded into an equivalent ensemble, we note that there is a differentiation between an actual ensemble method and an ensemble "approximation". During the creation of an ensemble, one of the principal factors is the creation of diversity: each base learner is trained independently, on variations (resamples in the case of boosting algorithms) of the training set, so that each classifier is guaranteed to learn a different function that represents an approximation of the training data. This is the enabling factor for the ensemble to perform better in aggregate.
In the case of Densely Connected Convolutional Networks (DCCN) specifically, one may argue that a partial unfolding of the network could be, from a schematic point of view, very similar to an ensemble of incrementally constructed Residual Networks. We make the observation that, although this would be correct, on top of the benefit of diversity, our method also provides a much faster training methodology: the only network that is trained for a full schedule is the network created at the first round, which is also the smallest one. All subsequent networks are trained for a much shorter schedule, saving a considerable amount of time. Additionally, while the schematics may seem identical, there is a subtle difference: each member network outputs a classification of its own, which is then aggregated by weighted averaging, whilst in a DCCN the input of the final aggregation layer is the output of each underlying set of layers. We conjecture that this aggressive dimensionality reduction before the aggregation will have a regularising effect on the ensemble.
Table 1: Test accuracy in the three benchmarks for the methods compared.

            Single Net   AdaBoost   DIB       BRN
MNIST       99.41 %      99.41 %    99.47 %   99.53 %
CIFAR-10    89.12 %      89.74 %    90.83 %   90.85 %
CIFAR-100   67.25 %      68.18 %    68.56 %   69.04 %

In the experiments we used the MNIST, CIFAR-10 and CIFAR-100 datasets, and compared Boosted Residual Networks (BRN) with an equivalent Deep Incremental Boosting (DIB) without the skip-connections, AdaBoost with the equivalent Residual Network as its base classifier (AdaBoost), and the single Residual Network (Single Net). In order to reduce noise, we aligned the random initialisation of all networks across experiments by fixing the seeds for the random number generators, and no dataset augmentation was used, either online or offline. Results are reported in Table 1, while Figure 4 shows a side-by-side comparison of accuracy levels at each round of boosting for both DIB and BRN on the MNIST and CIFAR-100 test sets.
This figure illustrates how BRNs are able to consistently outperform DIB, regardless of ensemble size. Although such differences still fall within a Bernoulli confidence interval of 95%, we note that this does not take account of the fact that all the random initialisations were aligned, so both methods started with the exact same network.
Table 2 shows that this is achieved without significant changes in the training time.1 The main speed increase is due to the fact that the only network being trained with a full schedule is the first network, which is also the smallest, whilst all other derived networks are trained for a much shorter schedule (in this case only 10% of the original training schedule).
Table 2: Training times comparison.

            ResNet    AdaBoost   DIB       BRN
MNIST       115 min   442 min    202 min   199 min
CIFAR-10    289 min   1212 min   461 min   449 min
CIFAR-100   303 min   1473 min   407 min   448 min

The initial network architectures for the first round of boosting are shown in Table 3a for MNIST and Table 3b for CIFAR-10 and CIFAR-100. It is worth mentioning that we used relatively simple network architectures that were fast to train, which still perform well on the datasets at hand, with accuracy close to, but not comparable to, the state of the art. This enabled us to test larger Ensembles within an acceptable training time.
Table 3: Network structures used in experiments. The layers marked with "*" indicate the location after which we added the residual blocks.
(a) MNIST:
64 conv, 5 x 5
2 x 2 max-pooling
128 conv, 5 x 5
2 x 2 max-pooling *
Dense, 1024 nodes
50% dropout
(b) CIFAR-10 and CIFAR-100:
2 x 96 conv, 3 x 3
96 conv, 3 x 3, 2 x 2 strides
2 x 2 max-pooling
2 x 192 conv, 3 x 3
192 conv, 3 x 3, 2 x 2 strides
2 x 2 max-pooling *
192 conv, 3 x 3
192 conv, 1 x 1
10 conv, 1 x 1
global average pooling
10-way softmax
Training used the WAME method (Mosca & Magoulas, 2016b), which has been shown to be faster than Adam and RMSprop, whilst still achieving comparable generalisation. This is thanks to a specific weight-wise learning rate acceleration factor that is determined based only on the sign of the current and previous partial derivative \partial E / \partial w_{ij}.
1 In some cases BRN is actually faster than DIB, but we believe this to be just noise due to external factors such as system load.
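The update below is a generic sign-based, per-weight acceleration in the spirit described above; it is a sketch for intuition, not the exact WAME algorithm of Mosca & Magoulas (2016b), and the growth/shrink constants and bounds are assumptions.

```python
import numpy as np

def sign_accelerated_step(w, grad, prev_grad, zeta, lr=1e-3,
                          up=1.2, down=0.5, z_min=0.01, z_max=100.0):
    """Per-weight acceleration factor zeta grows while the gradient sign is
    stable and shrinks when it flips, then scales a plain gradient step."""
    same_sign = np.sign(grad) == np.sign(prev_grad)
    zeta = np.clip(np.where(same_sign, zeta * up, zeta * down), z_min, z_max)
    w = w - lr * zeta * grad
    return w, zeta
```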
For the networks in AdaBoost, we trained each member for 100 epochs. For Deep Incremental Boosting and Boosted Residual Networks, we trained the first round for 50 epochs, and every subsequent round for 10 epochs, and ran all the algorithms for 10 rounds of boosting, except for the single network. The structure of each incremental block added to Deep Incremental Boosting and Boosted Residual Networks at each round is shown in Table 4a for MNIST, and in Table 4b for CIFAR-10 and CIFAR-100. All layers were initialised following the recommendations in He et al. (2015b).
Table 4: Structure of blocks added at each round of DIB and BRN.
(a) MNIST:
64 conv, 3 x 3
Batch Normalization
ReLU activation
(b) CIFAR-10 and CIFAR-100:
192 conv, 3 x 3
Batch Normalization
ReLU activation
192 conv, 3 x 3
Batch Normalization
ReLU activation
Distilled Boosted Residual Network (DBRN). In another set of experiments we tested the performance of a Distilled Boosted Residual Network (DBRN). Distillation has been shown to be an effective process for regularising large Ensembles of Convolutional Networks in Mosca & Magoulas (2016c), and we have applied the same methodology to the proposed Boosted Residual Network. For the distilled network structure we used the same architecture as that of the Residual Network from the final round of boosting. Accuracy results in testing are presented in Table 5, and for completeness of comparison we also report the results for the distillation of DIB, following the same procedure, as DDIB.
[Figure 4: Round-by-round comparison of DIB vs BRN on the test set. (a) MNIST. (b) CIFAR-100.]
Table 5: Comparative results in terms of testing accuracy.

            DBRN      DDIB
MNIST       99.49 %   99.44 %
CIFAR-10    91.11 %   90.66 %
CIFAR-100   66.63 %   65.91 %

Bagged Residual Networks (BARN). We experimented with substituting the boosting algorithm with a simpler bagging algorithm (Breiman, 1996) to evaluate whether it would be possible to only use the network from the final round of bagging as an approximation of the Ensemble. We called this the Bagged Approximate Residual Networks (BARN) method. We then also tested the performance of the Distilled version of the whole Bagging Ensemble for comparison; these results are reported as "DBARN". The results are reported in Table 6. It is clear that trying to use the last round of bagging is not comparable to using the entire Bagging ensemble at test time, or to deriving a new distilled network from it.
Table 6: Test accuracy for BARN.

            BRN       Bagging   BARN      DBARN
MNIST       99.50 %   99.55 %   99.29 %   99.36 %
CIFAR-10    90.56 %   91.43 %   88.47 %   90.63 %
CIFAR-100   69.04 %   68.15 %   69.42 %   66.16 %

In this paper we have derived a new ensemble algorithm specifically tailored to Convolutional Networks to generate Boosted Residual Networks. We have shown that this surpasses the performance of a single Residual Network equivalent to the one trained at the last round of boosting, of an ensemble of such networks trained with AdaBoost, and of Deep Incremental Boosting, on the MNIST and CIFAR datasets, without using augmentation techniques.
We then derived and looked at a distilled version of the method, and how this can serve as an effective way to reduce the test-time cost of running the Ensemble. We used Bagging as a proxy to test generating the approximate Residual Network, which, with the parameters tested, does not perform as well as the original Residual Network, BRN or DBRN.
Further experimentation of the Distilled methods presented in the paper, namely DBRN and DBARN, is necessary to fully investigate their behaviour; this is indeed part of our work in the near future. Additionally, the Residual Networks built in our experiments were comparatively smaller than those that achieve state-of-the-art performance.
Reaching state-of-the-art performance on specific benchmark datasets was not our goal; instead, we intended to show that we developed a methodology that makes it feasible to create ensembles of Residual Networks following a "white-box" approach, to significantly improve the training times and accuracy levels. Nevertheless, it might be appealing in the future to evaluate the performance improvements obtained when creating ensembles of larger, state-of-the-art networks. Additional further investigation could also be conducted on the creation of Boosted Densely Connected Convolutional Networks, by applying the same principle to DCCN instead of Residual Networks."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "L. Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015a.
Alan Mosca and George Magoulas. Deep incremental boosting. In Christoph Benzmuller, Geoff Sutcliffe, and Raul Rojas (eds.), GCAI 2016, 2nd Global Conference on Artificial Intelligence, volume 41 of EPiC Series in Computing, pp. 293-302. EasyChair, 2016a.
Alan Mosca and George D. Magoulas. Training convolutional networks with weight-wise adaptive learning rates. In Under Review, 2016b.
R. E. Schapire. The strength of weak learnability. Machine Learning, 5:197-227, 1990.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016."}]
HyWWpw5ex
[{"section_index": "0", "section_name": "RECURRENT COEVOLUTIONARY FEATURE EMBEDDING PROCESSES FOR RECOMMENDATION", "section_text": "Hanjun Dai, Yichen Wang, Rakshit Trivedi & Le Song\nRecommender systems often use latent features to explain the behaviors of users and capture the properties of items. As users interact with different items over time, user and item features can influence each other, evolve and co-evolve over time. To accurately capture the fine grained nonlinear coevolution of these features, we propose a recurrent coevolutionary feature embedding process model, which combines recurrent neural network (RNN) with a multi-dimensional point process model. The RNN learns a nonlinear representation of user and item embeddings which take into account mutual influence between user and item features, and the feature evolution over time. We also develop an efficient stochastic gradient algorithm for learning parameters. Experiments on diverse real-world datasets demonstrate significant improvements in user behavior prediction compared to state-of-the-arts.\nFigure 5: Visualization of the sparsity property in each dataset. The first row shows the distribution of number of events per user. The second row shows the user-item interaction graph. It is generated as follows. For each dataset, we randomly pick 10 users with 100 history events each user and collect all items they have interacted with. The interaction graph itself is a bipartite graph, and we put users on left side, and items on the right side\nSparsity in terms of the number of events per user. Typically, the more user history data we have. the better results we will obtain in the prediction tasks. We can see in IPTV dataset, users typically. have longer length of history than the users in Reddit and Yelp datasets. Thus our algorithm and all. other baseline methods have their best performance on this dataset. However, the Reddit dataset and. Yelp dataset are hard to tell the performance based only on the distribution of history length, thus we do a more detailed visualization."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "E-commerce platforms and social service websites, such as Reddit, Amazon, and Netflix, attracts thousands of users every second. Effectively recommending the appropriate service items to users is a fundamentally important task for these online services. It can significantly boost the user activities on these sites and leads to increased product purchases and advertisement clicks.\nSparsity in terms of diversity of items to recommend. From the bipartite graph, it is easy to see that Yelp dataset has higher density than the other two datasets. The density of the interaction graph reflects the variety of history per each user. For example, the users in IPTV only has 385 programs to watch, but they can have 47,924 businesses to choose in Yelp dataset. Also, the Yelp dataset has 9 times more items than IPTV and Reddit dataset in the bipartite graph. This means the users in Yelp dataset has more diverse tastes than users in other two datasets. This is because if users has similar tastes, the distinct number of items in the union of their history should be small.\nThe interactions between users and items play a critical role in driving the evolution of user interests and item features. For example, for music streaming services, a long-time fan of Rock music listens to an interesting Blues one day, and starts to listen to more Blues instead of Rock music. 
Similarly, a single piece of music may also serve different audiences at different times, e.g., a song initially targeted at an older generation may become popular among the young, and the features of this music need to be updated. Furthermore, as users interact with different items, users' interests and items' features can also co-evolve over time, i.e., their features are intertwined and can influence each other:
User -> item. In online discussion forums such as Reddit, although a group (item) is initially created for statistics topics, users with very different interest profiles can join this group. Hence the participants can shape the features of the group through their postings. It is likely that this group can finally become one about deep learning because most users care about deep learning.
Item -> user. As the group is evolving towards topics on deep learning, some users may become more interested in deep learning topics, and they may participate in other specialized groups on deep learning. On the opposite side, some users may gradually gain interest in pure math groups, lose interest in statistics and become inactive in this group.
Based on the above two facts, we can see that the Yelp dataset is the most sparse: since it has a shorter history per user and much more diversity of items, it is not surprising that this dataset is much harder than the IPTV and Reddit datasets."}, {"section_index": "2", "section_name": "6.4.2 ROBUSTNESS OF THE ALGORITHM", "section_text": "With the case study on the most challenging Yelp dataset, we further evaluate how each algorithm performs with a lower level of sparsity as compared to the one used in Figure 4(c). We use this to demonstrate that our work is the most robust and performs well across different levels of sparsity.
Such a co-evolutionary nature of user-item interactions raises very important questions on how to learn them from the increasingly available data. However, existing methods either treat the temporal user-item interaction data as a static graph or use epoch-based methods such as tensor factorization to learn the latent features (Chi & Kolda, 2012; Koren, 2009; Yang et al., 2011). These methods are not able to capture the fine-grained temporal dynamics of user-item interactions. Recent point process based models treat time as a random variable and improve over the traditional methods significantly (Du et al., 2015; Wang et al., 2016b). However, these works make strong assumptions about the functional form of the generative processes, which may not reflect reality or be accurate enough to capture the complex and nonlinear user-item influence in the real world.
On this dense dataset, Figure 6(b) and (c) show that all the algorithms' performances improve with more history events, compared to the performance on the original Yelp dataset. For example, LowRankHawkes has similar rank prediction results as our DeepCoevolve on this dense dataset. However, as the dataset becomes sparse, the performance of LowRankHawkes drops significantly, as shown in Figure 4(c).
For example, the rank prediction error goes from 90 to 2128, and the time error goes from 724 to 11043.5. We think it is because this model relies more on the history information per user-item pair."}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "We first create Yelp100, a more dense dataset, by filtering the original Yelp dataset to keep the top 100 users. Each user has at least 200 events. Figure 6(a) shows the statistics of this dataset. On average the users have more history events than in the original Yelp dataset in Figure 5(c).
[Figure 1 graphic: user and item feature embeddings are initialized from the user/item profiles and updated at each interaction event, combining the drift, self-evolution, co-evolution, and interaction-context terms.]
[Figure 6 graphic: (a) distribution of # events per user in Yelp100; (b) MAR; (c) MAE.]
On the contrary, our DeepCoevolve still has superior performance at such a high level of sparsity: the rank error only changes from 87 to 107, and the time error changes from 72 to 884 as the data becomes sparse. This shows that our work is the most robust to sparsity in the data. We think it is because our work accurately captures the nonlinear multidimensional dependencies between users' and items' latent features."}, {"section_index": "4", "section_name": "7 CONCLUSION", "section_text": "We have proposed an efficient framework to model the nonlinear co-evolution nature of users' and items' latent features. The users' and items' evolving and co-evolving processes are captured by the RNN. The framework is based on temporal point processes and models time as a random variable; hence it is in sharp contrast to prior epoch-based works. We demonstrate the superior performance of our method on both the time and item prediction tasks, which is not possible with most prior work. Future work includes extending to other social applications, such as group dynamics in message services.
In particular, our work makes the following contributions:
Novel model. We propose a novel model that captures the nonlinear co-evolution nature of users' and items' embeddings. It assigns an evolving feature embedding process to each user and item, and the co-evolution of these latent feature processes is modeled with two parallel components: (i) item -> user component, a user's latent features are determined by the nonlinear embedding of the latent features of the items he interacted with; and (ii) user -> item component, an item's latent features are also determined by the latent features of the users who interact with the item.
Technical challenges. We use an RNN to parametrize the interdependent and intertwined user and item embeddings. The increased flexibility and generality further introduce technical challenges for training the RNN on the co-evolving graphs. The co-evolutionary nature of the model makes the samples inter-dependent and not identically distributed, which is contrary to the assumptions in the traditional setting and significantly more challenging. We are the first to propose an efficient stochastic training algorithm that makes the BPTT tractable in the co-evolving graph.
Strong performance. We evaluate our method over multiple datasets, verifying that our method can lead to significant improvements in user behavior prediction compared to the previous state of the art. Precise time prediction is especially novel and not possible with most prior work.
Recent work predominantly fixes the latent features assigned to each user and item (Salakhutdinov & Mnih, 2008; Chen et al., 2009; Agarwal & Chen, 2009; Ekstrand et al., 2011; Koren & Sill, 2011; Yang et al., 2011; Yi et al., 2014; Wang & Pal, 2015). In more sophisticated methods, the time is divided into epochs, and static latent feature models are applied to each epoch to capture some temporal aspects of the data (Koren, 2009; Karatzoglou et al., 2010; Xiong et al., 2010; Chi & Kolda, 2012; Gultekin & Paisley, 2014; Charlin et al., 2015; Preeti Bhargava, 2015; Gopalan et al., 2015; Hidasi & Tikk, 2015; Wang et al., 2016a). For such methods, it is not clear how to choose the epoch length parameter. First, different users may have very different timescales when they interact with those service items, making it difficult to choose a unified epoch length. Second, it is not easy for these methods to answer time-sensitive queries such as when a user will return to the service item; the predictions are only at the resolution of the chosen epoch length. Recently, Du et al. (2015) proposed a low-rank point process based model for time-sensitive recommendations from recurrent user activities. However, it fails to capture the heterogeneous coevolutionary properties of user-item interactions. Wang et al. (2016b) model the co-evolutionary property, but use a simple linear representation of the users' and items' latent features, which might not be expressive enough to capture real-world patterns. As demonstrated in Du et al. (2016), the nonlinear RNN is quite flexible to approximate many point process models. We will also show that our model only has O(#user + #item) parameters regardless of the RNN-related parameters, and can potentially be applied in the online setting as well.
Figure 6: Comparison of performance with different amounts of history.
In this paper, we propose a recurrent coevolutionary feature embedding process framework. It combines a recurrent neural network (RNN) with point process models, and efficiently captures the co-evolution of user and item features. Our model can automatically find an efficient representation of the underlying user and item latent features without assuming a fixed parametric form in advance. Figure 1 summarizes our framework.
D.R. Cox and V. Isham. Point processes, volume 12. Chapman & Hall/CRC, 1980.
Hanjun Dai, Bo Dai, and Le Song. Discriminative embeddings of latent variable models for structured data. In ICML, 2016.
D.J. Daley and D. Vere-Jones. An introduction to the theory of point processes: volume II: general theory and structure, volume 2. Springer, 2007.
Nan Du, Yichen Wang, Niao He, and Le Song. Time sensitive recommendation from recurrent user activities. In NIPS, 2015.
In the deep learning community, Wang et al. (2015a) proposed a hierarchical Bayesian model that jointly performs learning for the content features and collaborative filtering for the ratings matrix. Hidasi et al. (2016) applied RNNs and adopted an item-to-item recommendation approach with session-based data.
Tan et al. (2016) improved this model with techniques like data augmentation and temporal change adaptation. Ko et al. (2016) proposed collaborative RNNs that extend collaborative filtering methods to capture the history of user behavior. Specifically, they used static global latent factors for items and assign separate latent factors for users that depend on their past history. Song et al. (2016) extended the deep semantic structured model to capture the multi-granularity temporal preferences of users. They use a separate RNN for each temporal granularity and combine them with a feed-forward network which models users' and items' long-term static features. However, none of these works model the coevolution of users' and items' latent features; they are still extensions of epoch-based methods. Our work is unique since we explicitly treat time as a random variable and capture the coevolution of users' and items' latent features using temporal point processes. Finally, our work is inspired by the recurrent marked temporal point process model (Du et al., 2016). However, that work only focuses on learning a one-dimensional point process. Our work is significantly different since we focus on the recommendation system setting with the novel idea of feature coevolution, and we use multi-dimensional point processes to capture user-item interactions.
Eric C Chi and Tamara G Kolda. On tensors, sparsity, and nonnegative factorizations. SIAM Journal on Matrix Analysis and Applications, 33(4):1272-1299, 2012."}, {"section_index": "5", "section_name": "3 BACKGROUND ON TEMPORAL POINT PROCESSES", "section_text": "A temporal point process (Cox & Isham, 1980; Cox & Lewis, 2006; Aalen et al., 2008) is a random process whose realization consists of a list of discrete events localized in time, {t_i} with t_i in R+. Equivalently, a given temporal point process can be represented as a counting process, N(t), which records the number of events before time t. An important way to characterize temporal point processes is via the conditional intensity function \lambda(t), a stochastic model for the time of the next event given all the previous events. Formally, \lambda(t) dt is the conditional probability of observing an event in a small window [t, t + dt) given the history H(t) up to t and that the event has not happened before t, i.e.,

\lambda(t) dt := P{event in [t, t + dt) | H(t)} = E[dN(t) | H(t)]

Michael D Ekstrand, John T Riedl, and Joseph A Konstan. Collaborative filtering recommender systems. Foundations and Trends in Human-Computer Interaction, 4(2):81-173, 2011.
Y. Koren. Collaborative filtering with temporal dynamics. In KDD, 2009.
Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: Embedding event history to vector. In KDD, 2016.
Prem Gopalan, Jake M Hofman, and David M Blei. Scalable recommendation with hierarchical Poisson factorization. In UAI, 2015.
The functional form of the intensity \lambda(t) is often designed to capture the phenomena of interest. Some commonly used forms include:
Hawkes processes (Hawkes, 1971; Wang et al., 2016c), whose intensity models the mutual excitation between events, \lambda(t) = \eta + \alpha \sum_{t_i < t} \kappa_\omega(t - t_i), where \kappa_\omega(t) := exp(-\omega t) is an exponential triggering kernel and \eta >= 0 is a baseline intensity. Here, the occurrence of each historical event increases the intensity by a certain amount determined by the kernel \kappa_\omega and the weight \alpha >= 0, making the intensity history-dependent and a stochastic process by itself.
Rayleigh processes, whose intensity function is \lambda(t) = \alpha t, where \alpha > 0 is the weight parameter.
4 RECURRENT COEVOLUTIONARY FEATURE EMBEDDING PROCESSES
In this section, we present the generative framework for modeling the temporal dynamics of user-item interactions. We first use an RNN to explicitly capture the co-evolving nature of users' and items' latent features. Then, based on the compatibility between the users' and items' latent features, we model the user-item interactions by a multi-dimensional temporal point process, and we further parametrize the intensity function by the compatibility between users' and items' latent features.
Figure 1: Model illustration. (a) User-item interaction events data. Each edge stands for a tuple and contains the information of user, item, interaction time, and interaction feature. (b) The latent features of the user and item are updated at each event time by a nonlinear activation function \sigma(.), and contain four terms: self evolution, co-evolution, context (interaction feature), and self drift.
Balazs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. Session-based recommendations with recurrent neural networks. In ICLR, 2016.
Komal Kapoor, Karthik Subbian, Jaideep Srivastava, and Paul Schrater. Just in time recommendations: Modeling the dynamics of boredom in activity streams. In WSDM, 2015.
EVENT REPRESENTATION. Given m users and n items, we denote the ordered list of N observed events as O = {e_j = (u_j, i_j, t_j, q_j)}, with 0 < t_1 < t_2 < ... < T. Each event represents the interaction between user u_j and item i_j at time t_j, with the interaction context q_j in R^d. Here q_j can be a high-dimensional vector such as a text review, or simply the embedding of static user/item features such as the user's profile and the item's categorical features. For notation simplicity, we define O^u = {e_k^u = (i_k^u, t_k^u, q_k^u)} as the ordered list of all events related to user u, and O^i = {e_k^i = (u_k^i, t_k^i, q_k^i)} as the ordered list of all events related to item i. We also set t_0^u = t_0^i = 0 for all users and items. t^- denotes the time point just before time t.
Alexandros Karatzoglou, Xavier Amatriain, Linas Baltrunas, and Nuria Oliver. Multiverse recommendation: n-dimensional tensor factorization for context-aware collaborative filtering. In RecSys, 2010.
Yong K Tan, Xinxing Xu, and Yong Liu. Improved recurrent neural networks for session-based recommendations. arXiv:1606.08117v2, 2016.
Yichen Wang and Aditya Pal. Detecting emotions in social media: A constrained optimization approach. In IJCAI, 2015.
We associate feature embeddings u_u(t) in R^k with each user u and i_i(t) in R^k with each item i. These features represent the subtle properties which cannot be directly observed, such as the interests of a user and the semantic topics of an item. Specifically, we model the drift, evolution, and co-evolution of u_u(t) and i_i(t) as piecewise constant functions of time that have jumps only at event times. We define:
User latent feature embedding process. For each user u, the corresponding embedding after user u's k-th event e_k^u = (i_k^u, t_k^u, q_k^u) can be formulated as:

u_u(t_k^u) = \sigma( W_1 (t_k^u - t_{k-1}^u) + W_2 u_u(t_{k-1}^u) + W_3 i_{i_k^u}((t_k^u)^-) + W_4 q_k^u )

with the four terms capturing, respectively: temporal drift, self evolution, co-evolution (item feature), and the interaction feature.
Item latent feature embedding process. Symmetrically, the embedding of item i after its k-th event is:

i_i(t_k^i) = \sigma( V_1 (t_k^i - t_{k-1}^i) + V_2 i_i(t_{k-1}^i) + V_3 u_{u_k^i}((t_k^i)^-) + V_4 q_k^i )

with the four terms capturing: temporal drift, self evolution, co-evolution (user feature), and the interaction feature.
Here t^- means the time point just before time t, W_4, V_4 in R^{k x d} are the embedding matrices mapping from the explicit high-dimensional feature space into the low-rank latent feature space, and W_1, V_1 in R^k, W_2, V_2, W_3, V_3 in R^{k x k} are weight parameters. \sigma(.) is a nonlinear activation function, such as the commonly used Tanh or Sigmoid for RNNs. For simplicity, we use a basic recurrent neural network to formulate the recurrence, but it is also straightforward to extend it using GRU or LSTM units to gain more expressive power. Figure 1 summarizes the basic setting of our model.
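A minimal numpy sketch of one such embedding update (ours; the parameter shapes follow the definitions above, and the sigmoid choice mirrors the gradient derivation in the appendix):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_user_embedding(u_prev, i_at_event, q, dt, W1, W2, W3, W4):
    """One jump of the piecewise-constant user embedding at an event time:
    temporal drift + self evolution + co-evolution (item feature) + interaction feature."""
    return sigmoid(W1 * dt + W2 @ u_prev + W3 @ i_at_event + W4 @ q)

k, d = 8, 16
rng = np.random.default_rng(0)
W1 = rng.normal(size=k)                              # drift weights, R^k
W2, W3 = rng.normal(size=(k, k)), rng.normal(size=(k, k))
W4 = rng.normal(size=(k, d))                         # maps context q in R^d to R^k
u = update_user_embedding(rng.normal(size=k), rng.normal(size=k),
                          rng.normal(size=d), dt=0.5, W1=W1, W2=W2, W3=W3, W4=W4)
# The item update is symmetric, with V1..V4 and the roles of u and i swapped.
```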
Next we discuss the rationale of each term in detail.
Temporal drift. The first term is defined based on the time difference between consecutive events of a specific user or item. It allows the basic features of users (e.g., a user's self-crafted interests) and items (e.g., textual categories and descriptions) to smoothly drift through time. Such changes of basic features are normally caused by external influences.
Self evolution. The current user feature should also be influenced by its feature at an earlier time. This captures the intrinsic evolution of user/item features. For example, a user's current taste should be more or less similar to his/her taste two days ago.
User-item coevolution. Users' and items' latent features can mutually influence each other. This term captures the two parallel processes. First, a user's embedding is determined by the latent features of the items he interacted with: at each time t_k, the latent feature of the interacted item is i_{i_k}(t_k^-), and we capture both the temporal influence and the feature of each history item as a latent embedding. Conversely, an item's embedding is determined by the feature embedding of the user who just interacted with the item.
Evolution with interaction features. Users' and items' features can evolve and be influenced by the characteristics of their interactions. For instance, the genre changes of movies indicate the changing tastes of users, and the theme of a chatting group can easily shift to certain topics of the involved discussions. In consequence, this term captures the influence of the current interaction features on the changes of the latent user (item) features.
Interaction feature. This is the additional information generated in the user-item interactions. For example, in online discussion forums such as Reddit, the interaction features are the posts and comments. In online review sites such as Yelp, it is the reviews of the businesses.
Yichen Wang, Bo Xie, Nan Du, and Le Song. Isotonic Hawkes processes. In ICML, 2016c.
Liang Xiong, Xi Chen, Tzu-Kuo Huang, Jeff G. Schneider, and Jaime G. Carbonell. Temporal collaborative filtering with Bayesian probabilistic tensor factorization. In SDM, 2010.
Shuang-Hong Yang, Bo Long, Alex Smola, Narayanan Sadagopan, Zhaohui Zheng, and Hongyuan Zha. Like like alike: joint friendship and interest propagation in social networks. In WWW, 2011.
Xing Yi, Liangjie Hong, Erheng Zhong, Nanthan Nan Liu, and Suju Rajan. Beyond clicks: Dwell time for personalization. In RecSys, 2014.
Young-Jun Ko, Lucas Maystre, and Matthias Grossglauser. Collaborative recurrent neural networks for dynamic recommender systems. Journal of Machine Learning Research, pp. 1-16, 2016.
Yehuda Koren and Joe Sill. OrdRec: an ordinal model for predicting personalized item rating distributions. In RecSys, 2011.
Andriy Mnih and Koray Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems, pp. 2265-2273, 2013.
Preeti Bhargava, Thomas Phan, Jiayu Zhou, and Juhan Lee. Who, what, when, and where: Multi-dimensional collaborative recommendations using tensor factorization on sparse user-generated data. In WWW, 2015.
R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In ICML, 2008.
Yang Song, Ali Mamdouh Elkahky, and Xiaodong He. Multi-rate deep learning for temporal recommendation. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 909-912, 2016.
Here both the user's and item's feature embedding processes are piecewise constant functions of time and are only updated when an interaction event happens. A user's attributes change only when he has a new interaction with some item. For example, a user's taste for music changes only when he listens to some new or old music. Similarly, an item's attributes change only when some user interacts with it. Chen et al. (2013) also model the time change with piecewise constant functions, but their work has no coevolution modeling and is not capable of predicting the future time point.
Yichen Wang, Nan Du, Rakshit Trivedi, and Le Song. Coevolutionary latent feature processes for continuous-time user-item interactions. In NIPS, 2016b."}, {"section_index": "6", "section_name": "DETAILS ON GRADIENT COMPUTATION", "section_text": "To summarize, each feature embedding process evolves according to the respective base temporal user (item) features, and the processes are also mutually dependent on each other due to the endogenous influences from the interaction features and the entangled latent features.
Computing the gradient. For illustration purposes, we use Sigmoid as the nonlinear activation function \sigma. In order to get the gradient with respect to the parameters W, we first compute the gradients with respect to each jump point of the embeddings. For user u's embedding after his k-th event, the corresponding partial derivative is computed by:

\partial l / \partial u_u(t_k^u) = W_2^T [ \partial l / \partial u_u(t_{k+1}^u) \odot (1 - u_u(t_{k+1}^u)) \odot u_u(t_{k+1}^u) ] + [ \partial l / \partial u_u(t_k^u) ]_{from intensity} + ...     (7)

where \odot denotes element-wise multiplication; the first term back-propagates through the next update of u_u, the second term collects the contribution of the intensity terms in the objective, and analogous terms collect the contributions of the item embeddings that depend on u_u(t_k^u).
The gradient coming from the second term (i.e., the survival term) is also easy to compute, since the Rayleigh distribution has a closed-form survival function. For a certain item i, if its feature does not change within the time interval [t_k^u, t_{k+1}^u], then we have

\partial / \partial u_u(t_k^u) \int_{t_k^u}^{t_{k+1}^u} \lambda^{u,i}(\tau | \tau') d\tau = ((t_{k+1}^u - t_k^u)^2 / 2) exp( u_u(t_k^u)^T i_i(t_k^u) ) i_i(t_k^u)

On the other hand, if the embedding of item i changes during this time interval, then we should break this interval into segments and compute the sum of the gradients in each segment in a way similar to (7). Thus, we are able to compute the gradients with respect to W_i, i in {1, 2, 3, 4}, as follows:

\partial l / \partial W_1 = \sum_u \sum_k [ \partial l / \partial u_u(t_k^u) \odot (1 - u_u(t_k^u)) \odot u_u(t_k^u) ] (t_k^u - t_{k-1}^u)^T
\partial l / \partial W_2 = \sum_u \sum_k [ \partial l / \partial u_u(t_k^u) \odot (1 - u_u(t_k^u)) \odot u_u(t_k^u) ] u_u(t_{k-1}^u)^T
\partial l / \partial W_3 = \sum_u \sum_k [ \partial l / \partial u_u(t_k^u) \odot (1 - u_u(t_k^u)) \odot u_u(t_k^u) ] i_{i_k^u}((t_k^u)^-)^T
\partial l / \partial W_4 = \sum_u \sum_k [ \partial l / \partial u_u(t_k^u) \odot (1 - u_u(t_k^u)) \odot u_u(t_k^u) ] (q_k^u)^T

Since the items are treated symmetrically to users, the corresponding derivatives for V_1, ..., V_4 can be obtained in a similar way.
With the parameterized intensity function, we can further estimate the parameters using maximum likelihood estimation over all events. The joint negative log-likelihood is (Daley & Vere-Jones, 2007):

l = - \sum_{j=1}^{N} log( \lambda^{u_j,i_j}(t_j | t_j') ) + \sum_{u=1}^{m} \sum_{i=1}^{n} \int_0^T \lambda^{u,i}(\tau | \tau') d\tau     (5)

"}, {"section_index": "7", "section_name": "5 PARAMETER LEARNING", "section_text": "In this section, we propose an efficient algorithm to learn the parameters {V_i}_{i=1}^4 and {W_i}_{i=1}^4. The batch objective function is presented in (5). Back Propagation Through Time (BPTT) is the standard way to train an RNN. To make the back propagation tractable, one typically needs to truncate during training. However, due to the novel co-evolutionary nature of our model, all the events are related to each other by the user-item bipartite graph (Figure 2), which makes it hard to decompose.
Hence, in sharp contrast to works (Hidasi et al., 2016; Du et al., 2016) on sequential data, where one can easily break the sequences into multiple segments to make the BPTT tractable, it is a challenging task to design BPTT in our case. To efficiently solve this problem, we first order all the events globally and then do mini-batch training in a sliding-window fashion. Each time when conducting feed-forward and back propagation, we take the consecutive events within the current sliding window to build the computational graph. Thus in our case the truncation is on the global timeline, instead of over individual independent sequences as in prior works.
Next, we explain our procedure in detail. Given a mini-batch of M ordered events O = {e_j}_{j=1}^M, we set the time span to be [T_0 = t_1, T = t_M]. Below we show how to compute the intensity and the survival probability term in the objective function (5), respectively.

\lambda^{u,i}(t | t') = exp( u_u(t')^T i_i(t') ) . (t - t')     (4)

where the first factor is the user-item compatibility and the second is the time lapse; t > t', and t' is the last time point where either user u's embedding or item i's embedding changed before time t. The rationale behind this formulation is three-fold:
Time as a random variable. Instead of discretizing time into epochs as traditional methods do (Charlin et al., 2015; Preeti Bhargava, 2015; Gopalan et al., 2015; Hidasi & Tikk, 2015; Wang et al., 2016a), we explicitly model the timing of each interaction event as a random variable, which naturally captures the heterogeneity of the temporal interactions between users and items.
Short-term preference. The probability for user u to interact with item i depends on the compatibility of their instantaneous embeddings, which is evaluated through the inner product at the last event time t'. Because u_u(t) and i_i(t) co-evolve through time, their inner product measures a general representation of the cumulative influence from the past interactions on the occurrence of the current event. The exp(.) function ensures the intensity is positive and well defined.
Rayleigh time distribution. The user and item embeddings are piecewise constant, and we use the time-lapse term to make the intensity piecewise linear. This form leads to a Rayleigh distribution for the time intervals between consecutive events in each dimension. It is well adapted to modeling fads, where the event-happening likelihood f(.) in (1) rises to a peak and then drops extremely rapidly. Furthermore, it is computationally easy to obtain an analytic form of f(.). One can then use f(.) to make item recommendations by finding the dimension in which f(.) reaches the peak.
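The following is a small numerical sketch of this parameterization (ours): given the two embeddings at the last event time, it evaluates the intensity, the Rayleigh survival probability and density, and the density's peak, which is what the item ranking in the experiments relies on.

```python
import numpy as np

def intensity(u, i, dt):
    """lambda^{u,i}(t | t') = exp(u . i) * (t - t'), the Rayleigh-type intensity of Eq. (4)."""
    return np.exp(u @ i) * dt

def survival(u, i, dt):
    """S(t) = exp(-exp(u . i) * dt^2 / 2): probability of no event in (t', t]."""
    return np.exp(-np.exp(u @ i) * dt**2 / 2.0)

def density(u, i, dt):
    """f(t) = lambda(t) * S(t): likelihood that the next event lands exactly at t."""
    return intensity(u, i, dt) * survival(u, i, dt)

u, i = np.array([0.3, -0.1, 0.5]), np.array([0.2, 0.4, -0.3])
dts = np.linspace(1e-3, 10, 1000)
peak_dt = dts[np.argmax(density(u, i, dts))]
# For this Rayleigh density the analytic peak is at dt = exp(-(u . i) / 2).
print(peak_dt, np.exp(-(u @ i) / 2.0))
```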
The rationale of the objective is two-fold: (i) the negative intensity summation term ensures that the probability of all interaction events is maximized; (ii) the second, survival probability term penalizes the non-presence of an interaction between all possible user-item pairs over the observation window. Hence, our framework not only explains why an event happens, but also why an event did not happen.
[Figure 2 graphic: (a) Graph of embedding computation. (b) Dependency between events.]
Figure 2: Intensity computation. (a) Each arrow means the flow of feature embedding computation; e.g., Jacob interacts with basketball at 10:15am. Then the embeddings are updated: his feature at 10:15am is influenced by his feature and the basketball feature at 9:45am (arrows 1 and 2); the basketball's feature is influenced by Jacob's feature and its own feature (arrows 3 and 4). (b) The event dependency for two users and two forums (items). It shows how an event in one dimension influences other dimensions. Each orange arrow represents the dependency within a dimension, and each black arrow denotes a cross-dimension dependency; e.g., Sophie interacts with volleyball at 2:30pm, and this event changes the volleyball embedding, thus affecting Jacob's visit at 3:30pm.
[Figure 3 graphic: (a) Piecewise constant embedding visualization. (b) Survival probability computation.]
Figure 3: Survival probability computation. (a) A user's or item's feature embedding is piecewise constant and changes only after an interaction event happens. Only one dimension of the feature embedding is shown. (b) Survival probability for a user-item pair (u, i). The integral \int_{T_0}^{T} \lambda^{u,i}(\tau | \tau') d\tau is decomposed into 4 inter-event intervals separated by {t_0, ..., t_3}, with a closed form on each interval.
Computing the intensity function. Each time a new event e_j happens between u_j and i_j, their corresponding feature embeddings evolve according to a computational graph, as illustrated in Figure 2a. Due to the change of feature embeddings, all the dimensions related to u_j or i_j will be influenced, and the intensity function for each such dimension changes consequently. This cross-dimension influence dependency is shown in Figure 2b. In our implementation, we first compute the corresponding intensity \lambda^{u_j,i_j}(t_j | t_j') according to (4), and then update the embeddings of u_j and i_j. This operation takes O(M) complexity and is independent of the number of users or items.
Computing the survival function. To compute the survival probability - \int_{T_0}^{T} \lambda^{u,i}(\tau | \tau') d\tau for each pair (u, i), we first collect all the time stamps {t_k} that have events related to either u or i. For notation simplicity, let |{t_k}| = n_{u,i}, with t_1 = T_0 and t_{n_{u,i}} = T. Since the embeddings are piecewise constant, the corresponding intensity function is piecewise linear, according to (4). Thus, the integration is decomposed into the time intervals on which the embeddings are constant, i.e.,

\int_{T_0}^{T} \lambda^{u,i}(\tau | \tau') d\tau = \sum_{k=1}^{n_{u,i}-1} \int_{t_k}^{t_{k+1}} \lambda^{u,i}(\tau | \tau') d\tau = \sum_{k=1}^{n_{u,i}-1} ((t_{k+1} - t_k)^2 / 2) exp( u_u(t_k)^T i_i(t_k) )

Figure 3 visualizes the computation.
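A direct transcription of this decomposition (ours; embeddings are passed as step functions over the collected time stamps):

```python
import numpy as np

def survival_integral(times, u_steps, i_steps):
    """Integrate lambda^{u,i}(tau | tau') over [times[0], times[-1]] for piecewise
    constant embeddings: on [t_k, t_{k+1}] the integral is exp(u_k . i_k) * dt^2 / 2."""
    total = 0.0
    for k in range(len(times) - 1):
        dt = times[k + 1] - times[k]
        total += np.exp(u_steps[k] @ i_steps[k]) * dt**2 / 2.0
    return total

# Example with 4 inter-event intervals, as in Figure 3(b).
times = np.array([0.0, 1.0, 2.5, 3.0, 4.0])
u_steps = [np.array([0.1, 0.2])] * 4      # u's embedding on each interval
i_steps = [np.array([0.3, -0.1])] * 4     # i's embedding on each interval
print(survival_integral(times, u_steps, i_steps))
```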
Although the survival probability term exists in closed form, we still need to address two challenges. First, it is expensive to compute it for each user-item pair. Moreover, since the user-item interaction bipartite graph is very sparse, it is not necessary to monitor each dimension in the stochastic training setting. To speed up the computation, we propose a novel random-sampling scheme as follows.
Note that the intensity term in the objective function (5) tries to maximize the inner product between each user and item that have an interaction event, while the survival term penalizes all other pairs of inner products. We observe that this is similar to the softmax computation in classification problems. Hence, inspired by the noise-contrastive estimation method (Gutmann & Hyvärinen, 2012) that is widely used in language models (Mnih & Kavukcuoglu, 2013), we keep the dimensions that have events on them, while randomly sampling dimensions without events in the current mini-batch.
The second challenge lies in the fact that the user-item interactions vary a lot across mini-batches, hence the corresponding computational graph also changes greatly. To make the learning efficient, we use the graph embedding framework (Dai et al., 2016), which allows training deep learning models where each term in the objective has a different computational graph but with shared parameters. The Adam optimizer (Kingma & Ba, 2014) together with gradient clipping is used in our experiments."}, {"section_index": "8", "section_name": "6 EXPERIMENTS", "section_text": "We evaluate our model on real-world datasets. For each sequence of user activities, we use all the events up to time T · p as the training data, and the remaining events as the testing data, where T is the observation window. We tune the latent rank of the other baselines using 5-fold cross validation with grid search. We vary the proportion p ∈ {0.7, 0.72, 0.74, 0.76, 0.78} and report the averaged results over five runs on two tasks (we will release code and data once published):

• LowRankHawkes (Du et al., 2015): a low-rank Hawkes process model which assumes user-item interactions to be independent of each other and does not capture the co-evolution of user and item features.
• Coevolving (Wang et al., 2016b): a multi-dimensional point process model which uses a simple linear embedding to model the co-evolution of user and item features.
• PoissonTensor (Chi & Kolda, 2012): Poisson Tensor Factorization has been shown to perform better than factorization methods based on squared loss (Karatzoglou et al., 2010; Xiong et al., 2010; Wang et al., 2015b) on recommendation tasks. The performance for this baseline is reported using the average of the parameters fitted over all time intervals.
• TimeSVD++ (Koren, 2009) and FIP (Yang et al., 2011): these two methods are only designed for explicit ratings; the implicit user feedback (in the form of a series of interaction events) is converted into explicit ratings by the respective frequency of interactions with users.
• STIC (Kapoor et al., 2015): it fits a semi-hidden Markov model (HMM) to each observed user-item pair and is only designed for time prediction.

We use three real-world datasets as follows.

IPTV. It contains 7,100 users' watching history of 385 TV programs over 11 months (Jan 1 – Nov 30, 2012), with around 2M events, and 1,420 movie features (including 1,073 actors, 312 directors, 22 genres, 8 countries and 5 years).
Yelp. This data was available in Yelp Dataset Challenge Round 7.
It contains reviews for various businesses from October 2004 to December 2015. The dataset we use here contains 1,005 users and 47,924 businesses, with 291,716 reviews in total.

Table 1: Comparison with different methods.

| | DeepCoevolve | LowRankHawkes | Coevolving | PoissonTensor | TimeSVD++ | FIP | STIC |
| Continuous time | ✓ | ✓ | ✓ | | | | ✓ |
| Predict Item | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| Predict Time | ✓ | ✓ | ✓ | | | | ✓ |
| Computation | RNN | Factorization | Factorization | Factorization | Factorization | Factorization | HMM |

• Item prediction. At each test time t, we predict the item that the user u will interact with. We rank all the items in descending order of the conditional density f_{u,i}(t) = \lambda_{u,i}(t)\, S_{u,i}(t). We report the Mean Average Rank (MAR) of each test item at the test time. Ideally, the item associated with the test time t should rank one; hence a smaller value indicates better predictive performance.
• Time prediction. We predict the expected time when a testing event will occur between a given user-item pair. We report the Mean Absolute Error (MAE) between the predicted and true time.

[Figure 4: MAR (item prediction, top row) and MAE (time prediction, bottom row) of all methods on (a) IPTV, (b) Reddit, (c) Yelp.]

Reddit. We collected discussion-related data on different subreddits (groups) for the month of January 2014. We filtered out all bot users and their posts from this dataset. Furthermore, we randomly selected 1,000 users, 1,403 groups, and 10,000 discussion events.

Figure 4 shows that DeepCoevolve significantly outperforms both epoch-based baselines and state-of-the-art point process based methods. LowRankHawkes has good performance on item prediction but not on time prediction, while Coevolving has good performance on time prediction but not on item prediction. We discuss the performance with regard to the two metrics below.

Item prediction. Note that the best possible MAR one can achieve is 1, and our method obtains quite accurate results: with a value of 1.7 on IPTV and 1.9 on Reddit. Note that LowRankHawkes achieves comparable item prediction performance, but is not as good on the time prediction task. We think the reason is as follows. Since one only needs the rank of the conditional density f(·) in (1) to conduct item prediction, LowRankHawkes may still be good at differentiating the conditional density functions, but does not learn their actual values accurately, as shown in the time prediction task, where the value of the conditional density function is needed for precise prediction.

Time prediction. The second row of Figure 4 shows that DeepCoevolve outperforms the other methods. Compared with LowRankHawkes, which achieves comparable item prediction performance, our method has a 6× improvement on Reddit, a 10× improvement on Yelp, and a 30× improvement on IPTV. The time unit is the hour; hence this amounts to roughly 2 weeks of accuracy improvement on IPTV and 2 days on Reddit. This is important for online merchants to make time-sensitive recommendations.
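As a concrete illustration of the MAR metric used for item prediction, here is a minimal sketch; `conditional_density` is a hypothetical callable returning f_{u,i}(t) for every item, standing in for whatever model is being evaluated.

```python
import numpy as np

def mean_average_rank(test_events, conditional_density):
    """MAR over test events: rank all items by f_{u,i}(t) = lambda * survival,
    in descending order, and record the rank of the true item (1 is best)."""
    ranks = []
    for u, true_item, t in test_events:
        scores = conditional_density(u, t)   # one score per candidate item
        order = np.argsort(-scores)          # descending by density
        ranks.append(int(np.where(order == true_item)[0][0]) + 1)
    return float(np.mean(ranks))
```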
An intuitive explanation is that our method accurately captures the nonlinear patterns of user-item interactions. The competitor LowRankHawkes assumes specific parametric forms of the user-item interaction process, and hence may not be accurate or expressive enough to capture real-world temporal patterns. Furthermore, it models each user-item interaction dimension independently, which may lose the important influence of a user's interactions with other items when predicting the current item's recurrence time. Our work also outperforms Coevolving, e.g., with around a 3× MAE improvement on IPTV. Moreover, the item prediction performance is also much better than Coevolving's. This shows the importance of using an RNN to capture the nonlinear embedding of user and item latent features, instead of the simple parametrized linear embedding in Coevolving."}, {"section_index": "9", "section_name": "6.4 INSIGHT OF RESULTS", "section_text": "We will look deeper and provide the rationale behind the prediction results in the following two subsections. First, to understand the difficulty of conducting prediction tasks in each dataset, we study their different sparsity properties. For multidimensional point process models, the fewer events we observe in each dimension, the sparser the dataset is. Our approach alleviates the sparsity problem via the modeling of dependencies among dimensions, and thus consistently does better than the other baseline algorithms.
Next, we fix one dataset and evaluate how different levels of sparsity in the training data influence each algorithm's performance.
Figure 4: Prediction results on three real world datasets."}]
BJrFC6ceg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "We presented PixelCNN++, a modification of PixelCNN using a discretized logistic mixture like lihood on the pixels among other modifications. We demonstrated the usefulness of these mod- ifications with state-of-the-art results on CIFAR-10. Our code is made available at https: qi + hub. cOm nixe1 cnnand can easily be adanted for use on other data sets\nThe PixelCNN, introduced byvan den Oord et al.(2016b), is a generative model of images with a tractable likelihood. The model fully factorizes the probability density function on an image x ovei. all its sub-pixels (color channels in a pixel) as p(x) = 1 I, p(x;[x<i). The conditional distributions. p(x;[x<i) are parameterized by convolutional neural networks and all share parameters. The Pixel CNN is a powerful model as the functional form of these conditionals is very flexible. In additior it is computationally efficient as all conditionals can be evaluated in parallel on a GPU for an ob. served image x. Thanks to these properties, the PixelCNN represents the current state-of-the-art in. generative modeling when evaluated in terms of log-likelihood. Besides being used for modeling images, the PixelCNN model was recently extended to model audio (van den Oord et al.]2016a). video (Kalchbrenner et al.2016b) and text (Kalchbrenner et al.[2016a)."}, {"section_index": "1", "section_name": "REFERENCES", "section_text": "Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components esti mation. arXiv preprint arXiv:1410.8516, 2014.\nFor use in our research, we developed our own internal implementation of PixelCNN and made a number of modifications to the base model to simplify its structure and improve its performance We now release our implementation at https : //github. com/openai/pixe1-cnn hoping that it will be useful to the broader community. Our modifications are discussed in Section [2l and evaluated experimentally in Section 3 State-of-the-art log-likelihood results confirm their useful- ness."}, {"section_index": "2", "section_name": "MODIFICATIONS TO PIXELCNN", "section_text": "We now describe the most important modifications we have made to the PixelCNN model archite cure as described by van den Oord et al. (2016c). For complete details see our code release at h+ + n\nNal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016b\nThe standard PixelCNN model specifies the conditional distribution of a sub-pixel, or color channel. of a pixel, as a full 256-way softmax. This gives the model a lot of flexibility, but it is also very costly. in terms of memory. Moreover, it can make the gradients with respect to the network parameters"}, {"section_index": "3", "section_name": "PIXELCNN++: IMPROVING THE PIXELCNN WITH DISCRETIZED LOGISTIC MIXTURE LIKELIHOOD AND OTHER MODIFICATIONS", "section_text": "sub-pixel. At no point during training does the unregularized model get a test-set log-likelihood. below 3.0 bits per sub-pixel. Contrary to what we might naively expect, the perceptual quality of the generated images by the overfitted model is not great, as shown in Figure[8."}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. 
arXiv preprint arXiv:1610.10099, 2016a.
Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations, 2014.
Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, 2016.
very sparse, especially early in training. With the standard parameterization, the model does not know that a value of 128 is close to a value of 127 or 129, and this relationship first has to be learned before the model can move on to higher-level structures. In the extreme case where a particular sub-pixel value is never observed, the model will learn to assign it zero probability. This would be especially problematic for data with higher accuracy on the observed pixels than the usual 8 bits: in the extreme case where very high precision values are observed, the PixelCNN, in its current form, would require a prohibitive amount of memory and computation, while learning very slowly. We therefore propose a different mechanism for computing the conditional probability of the observed discretized pixel values. In our model, like in the VAE of Kingma et al. (2016), we assume there is a latent color intensity ν with a continuous distribution, which is then rounded to its nearest 8-bit representation to give the observed sub-pixel value x. By choosing a simple continuous distribution for modeling ν (like the logistic distribution as done by Kingma et al. (2016)) we obtain a smooth and memory-efficient predictive distribution for x. Here, we take this continuous univariate distribution to be a mixture of logistic distributions, which allows us to easily calculate the probability of the observed discretized value x, as shown in equation (2). For all sub-pixel values x excepting the edge cases 0 and 255 we have:

\nu \sim \sum_{i=1}^{K} \pi_i\, \mathrm{logistic}(\mu_i, s_i)
P(x|\pi, \mu, s) = \sum_{i=1}^{K} \pi_i \big[\sigma\big((x + 0.5 - \mu_i)/s_i\big) - \sigma\big((x - 0.5 - \mu_i)/s_i\big)\big] \quad (2)

Lucas Theis and Matthias Bethge. Generative image modeling using spatial lstms. In Advances in Neural Information Processing Systems, pp. 1927-1935, 2015.
Lucas Theis, Reshad Hosseini, and Matthias Bethge. Mixtures of conditional gaussian scale mixtures applied to multiscale image representations. PloS one, 7(7):e39857, 2012.
where σ(·) is the logistic sigmoid function. For the edge case of 0, replace x − 0.5 by −∞, and for 255 replace x + 0.5 by +∞. Our provided code contains a numerically stable implementation for calculating the log of the probability in equation (2).
Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a.
Our approach follows earlier work using continuous mixture models (Domke et al., 2008; Theis et al., 2012; Uria et al., 2013; Theis & Bethge, 2015), but avoids allocating probability mass to values outside the valid range of [0, 255] by explicitly modeling the rounding of ν to x. In addition, we naturally assign higher probability to the edge values 0 and 255 than to their neighboring values, which corresponds well with the observed data distribution as shown in Figure 1. Experimentally, we find that only a relatively small number of mixture components, say 5, is needed to accurately model the conditional distributions of the pixels. The output of our network is thus of much lower dimension, yielding much denser gradients of the loss with respect to our parameters.
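A minimal numpy sketch of equation (2), including the edge cases at 0 and 255; unlike the authors' released code it is not numerically stabilized, and the function name is ours.

```python
import numpy as np

def discretized_logistic_logpmf(x, pi, mu, s):
    """log P(x | pi, mu, s) for an 8-bit sub-pixel value x in {0, ..., 255}:
    CDF differences of a K-component logistic mixture at x +/- 0.5."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    # Edge cases: integrate to +inf above 255 and to -inf below 0.
    upper = sigmoid((x + 0.5 - mu) / s) if x < 255 else np.ones_like(mu)
    lower = sigmoid((x - 0.5 - mu) / s) if x > 0 else np.zeros_like(mu)
    return np.log(np.sum(pi * (upper - lower)) + 1e-12)
```

Because the network only has to emit the K triples (π_i, μ_i, s_i) rather than 256 logits, the output layer is far smaller, which is the source of the denser gradients mentioned above.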
In our experiments this greatly sped up convergence during optimization, especially early on in training. However, due to the other changes in our architecture compared to that of van den Oord et al. (2016c), we cannot say with certainty that this would also apply to the original PixelCNN model.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning (ICML), 2016b.
Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with pixelcnn decoders. arXiv preprint arXiv:1606.05328, 2016c.
Figure 1: Marginal distribution of all sub-pixel values in CIFAR-10. The edge value of 255 is much more frequent than its neighbouring values: this is easy to model using our rounding-based approach, but harder using continuous or truncated distributions.
Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pp. 1278-1286, 2014.
Benigno Uria, Iain Murray, and Hugo Larochelle. Rnade: The real-valued neural autoregressive density-estimator. In Advances in Neural Information Processing Systems, pp. 2175-2183, 2013.
The pixels in a color image consist of three real numbers, giving the intensities of the red, blue and green colors. The original PixelCNN factorizes the generative model over these 3 sub-pixels. This allows for a very general dependency structure, but it also complicates the model: besides keeping track of the spatial location of feature maps, we now have to separate out all feature maps in 3 groups depending on whether or not they can see the R/G/B sub-pixel of the current location. This added complexity seems to be unnecessary, as the dependencies between the color channels of a pixel are likely to be relatively simple and do not require a deep network to model. Therefore, we instead condition only on whole pixels up and to the left in an image, and output joint predictive distributions over all 3 channels of a predicted pixel. The predictive distribution on a pixel itself can be interpreted as a simple factorized model: we first predict the red channel using a discretized mixture of logistics, as described in Section 2.1. Next, we predict the green channel using a predictive distribution of the same form. Here we allow the means of the mixture components to depend linearly on the value of the red sub-pixel. Finally, we model the blue channel in the same way, where we again only allow linear dependency on the red and green channels. For the pixel (r_{i,j}, g_{i,j}, b_{i,j}) at location (i, j) in our image, the distribution conditional on the context C_{i,j}, consisting of the mixture indicator and the previous pixels, is thus

p(r_{i,j}, g_{i,j}, b_{i,j} | C_{i,j}) = P(r_{i,j} | \mu_r(C_{i,j}), s_r(C_{i,j})) \times P(g_{i,j} | \mu_g(C_{i,j}, r_{i,j}), s_g(C_{i,j})) \times P(b_{i,j} | \mu_b(C_{i,j}, r_{i,j}, g_{i,j}), s_b(C_{i,j}))
\mu_g(C_{i,j}, r_{i,j}) = \mu_g(C_{i,j}) + \alpha(C_{i,j})\, r_{i,j}
\mu_b(C_{i,j}, r_{i,j}, g_{i,j}) = \mu_b(C_{i,j}) + \beta(C_{i,j})\, r_{i,j} + \gamma(C_{i,j})\, g_{i,j}

with α, β, γ scalar coefficients depending on the mixture component and previous pixels.
The mixture indicator is shared across all 3 channels; i.e., our generative model first samples a mixture indicator for a pixel, and then samples the color channels one-by-one from the corresponding mixture component. Had we used a discretized mixture of univariate Gaussians for the sub-pixels instead of logistics, this would have been exactly equivalent to predicting the complete pixel using a (discretized) mixture of 3-dimensional Gaussians with full covariance.
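A tiny sketch of the channel-autoregressive means just defined; it only illustrates the linear dependencies between channels, not the full sampling procedure, and the function name is ours.

```python
def channel_means(mu_r, mu_g, mu_b, alpha, beta, gamma, r, g):
    """Per-component means for one pixel: green depends linearly on the
    already-sampled red value, blue on both red and green."""
    mean_r = mu_r
    mean_g = mu_g + alpha * r
    mean_b = mu_b + beta * r + gamma * g
    return mean_r, mean_g, mean_b
```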
The logistic and Gaussian distributions are very similar, so this is indeed very close to what we end up doing. For full implementation details we refer to our code at https://github.com/openai/pixel-cnn.
The original PixelCNN only uses convolutions with a small receptive field. Such convolutions are good at capturing local dependencies, but not necessarily at modeling long-range structure. Although we find that capturing these short-range dependencies is often enough for obtaining very good log-likelihood scores (see Table 2), explicitly encouraging the model to capture long-range dependencies can improve the perceptual quality of generated images (compare Figure 3 and Figure 5). One way of allowing the network to model structure at multiple resolutions is to introduce dilated convolutions into the model, as proposed by van den Oord et al. (2016a) and Kalchbrenner et al. (2016b). Here, we instead propose to use downsampling by using convolutions of stride 2. Downsampling accomplishes the same multi-resolution processing afforded by dilated convolutions, but at a reduced computational cost: where dilated convolutions operate on input of ever-increasing size (due to zero padding), downsampling reduces the input size by a factor of 4 (for a stride of 2 in 2 dimensions) at every downsampling. The downside of using downsampling is that it loses information, but we can compensate for this by introducing additional short-cut connections into the network, as explained in the next section. With these additional short-cut connections, we found the performance of downsampling to be the same as for dilated convolution."}, {"section_index": "5", "section_name": "2.4 ADDING SHORT-CUT CONNECTIONS", "section_text": "For input of size 32 × 32 our suggested model consists of 6 blocks of 5 ResNet layers. In between the first and second block, as well as the second and third block, we perform subsampling by strided convolution. In between the fourth and fifth block, as well as the fifth and sixth block, we perform upsampling by transposed strided convolution. This subsampling and upsampling process loses information, and we therefore introduce additional short-cut connections into the model to recover this information from lower layers in the model. The short-cut connections run from the ResNet layers in the first block to the corresponding layers in the sixth block, and similarly between blocks two and five, and blocks three and four. This structure resembles the VAE model with top-down inference used by Kingma et al. (2016), as well as the U-net used by Ronneberger et al. (2015) for image segmentation. Figure 2 shows our model structure graphically.
[Figure 2 legend: spatial scales 32x32 → 16x16 → 8x8 → 8x8 → 16x16 → 32x32; sequences of 6 layers; downward stream; downward and rightward stream; identity (skip) connection; convolutional connection.]
Figure 2: Like van den Oord et al. (2016c), our model follows a two-stream (downward, and downward+rightward) convolutional architecture with residual connections; however, there are two significant differences in connectivity. First, our architecture incorporates downsampling and upsampling, such that the inner parts of the network operate over larger spatial scale, increasing computational efficiency. Second, we employ long-range skip-connections, such that each k-th layer provides a direct input to the (K − k)-th layer, where K is the total number of layers in the network. The network is grouped into sequences of six layers, where most sequences are separated by downsampling or upsampling.
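A minimal PyTorch sketch of the down/up-sampling-with-skips pattern described above. This is our own toy illustration, not the released architecture: it omits the masked two-stream convolutions and ResNet blocks entirely, and only shows strided (transposed) convolutions with a long-range skip from layer k to layer K − k.

```python
import torch
import torch.nn as nn

class TinyDownUpNet(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.down1 = nn.Conv2d(3, c, 3, stride=2, padding=1)          # 32x32 -> 16x16
        self.down2 = nn.Conv2d(c, c, 3, stride=2, padding=1)          # 16x16 -> 8x8
        self.up1 = nn.ConvTranspose2d(c, c, 4, stride=2, padding=1)   # 8x8  -> 16x16
        self.up2 = nn.ConvTranspose2d(c, 3, 4, stride=2, padding=1)   # 16x16 -> 32x32

    def forward(self, x):
        h1 = torch.relu(self.down1(x))
        h2 = torch.relu(self.down2(h1))
        u1 = torch.relu(self.up1(h2)) + h1  # long-range skip recovers detail
        return self.up2(u1)                 # lost to strided downsampling
```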
"}, {"section_index": "6", "section_name": "2.5 REGULARIZATION USING DROPOUT", "section_text": "We apply our model to modeling natural images in the CIFAR-10 data set. We achieve state-of-the-art results in terms of log-likelihood, and generate images with coherent global structure."}, {"section_index": "7", "section_name": "3.1 UNCONDITIONAL GENERATION ON CIFAR-10", "section_text": "We apply our PixelCNN model, with the modifications as described above, to generative modeling of the images in the CIFAR-10 data set. For the encoding part of the PixelCNN, the model uses 3 ResNet blocks consisting of 5 residual layers, with 2 × 2 downsampling in between. The same architecture is used for the decoding part of the model, but with upsampling instead of downsampling in between blocks. All residual layers use 192 feature maps and a dropout rate of 0.5. Table 1 shows the state-of-the-art test log-likelihood obtained by our model. Figure 3 shows some samples generated by the model.
The PixelCNN model is powerful enough to overfit on training data. Moreover, rather than just reproducing the training images, we find that overfitted models generate images of low perceptual quality, as shown in Figure 8. One effective way of regularizing neural networks is dropout (Srivastava et al., 2014). For our model, we apply standard binary dropout on the residual path after the first convolution. This is similar to how dropout is applied in the wide residual networks of Zagoruyko & Komodakis (2016). Using dropout allows us to successfully train high-capacity models while avoiding overfitting and producing high-quality generations (compare Figure 8 and Figure 3).
Figure 3: Samples from our PixelCNN model trained on CIFAR-10.
Table 1: Negative log-likelihood for generative models on CIFAR-10, expressed as bits per sub-pixel.
Next, we follow van den Oord et al. (2016c) in making our generative model conditional on the class-label of the CIFAR-10 images. This is done by linearly projecting a one-hot encoding of the class-label into a separate class-dependent bias vector for each convolutional unit in our network. We find that making the model class-conditional makes it harder to avoid overfitting on the training data: our best test log-likelihood is 2.94 in this case. Figure 4 shows samples from the class-conditional model, with columns 1-10 corresponding to the 10 classes in CIFAR-10. The images clearly look qualitatively different across the columns, and for a number of them we can clearly identify their class label.
Figure 4: Class-conditional samples from our PixelCNN for CIFAR-10 (left) and real CIFAR-10 images for comparison (right).
It is hypothesized that the size of the receptive field, and additionally the removal of blind spots in the receptive field, are important for PixelCNN's performance (van den Oord et al., 2016b). Indeed, van den Oord et al. (2016c) specifically introduced an improvement over the previous PixelCNN model to remove the blind spot in the receptive field that was present in their earlier model.
Here we present the surprising finding that in fact a PixelCNN with a rather small receptive field can attain competitive generative modelling performance on CIFAR-10, as long as it has enough capacity.
Specifically, we experimented with our proposed PixelCNN++ model without downsampling blocks and reduced the number of layers to limit the receptive field size. We investigate two receptive field sizes: 11×5 and 15×8. A receptive field size of 11×5, for example, means that the conditional distribution of a pixel can depend on a rectangle of size 11×5 above the pixel, as well as an ⌊11/2⌋ = 5 × 1 block to the left of the pixel.
As we limit the size of the receptive field, the capacity of the network also drops significantly, since it contains many fewer layers than a normal PixelCNN. We call the type of PixelCNN that is simply limited in depth the "Plain" Small PixelCNN. Interestingly, this model already has better performance than the original PixelCNN in van den Oord et al. (2016b), which had a blind spot. To increase capacity, we introduced two simple variants that make the Small PixelCNN more expressive without growing the receptive field:

• NIN (Network in Network): insert additional gated ResNet blocks with 1×1 convolution between the regular convolution blocks that grow the receptive field. In this experiment, we inserted 3 NIN blocks between every other layer.
• Autoregressive Channel: skip connections between sets of channels via a 1×1-convolution gated ResNet block.

Both modifications increase the capacity of the network, resulting in improved log-likelihood, as shown in Table 2. Although the model with a small receptive field already achieves an impressive likelihood score, its samples do lack global structure, as seen in Figure 5.
Table 2: CIFAR-10 bits per sub-pixel for Small PixelCNN.
Figure 5: Samples from 3.03 bits/dim Small PixelCNN.
In order to test the effect of our modifications to PixelCNN, we run a number of ablation experiments, where for each experiment we remove a specific modification.
In order to test the contribution of our logistic mixture likelihood, we re-run our CIFAR-10 experiment with the 256-way softmax as the output distribution instead. We allow the 256 logits for each sub-pixel to depend linearly on the observed value of previous sub-pixels, with coefficients that are given as output by the model. Our model with softmax likelihood is thus strictly more flexible than our model with logistic mixture likelihood, although the parameterization is quite different from that used by van den Oord et al. (2016c). The model now outputs 1536 numbers per pixel, describing the logits on the 256 potential values for each sub-pixel, as well as the coefficients for the dependencies between the sub-pixels. Figure 6 shows that this model trains more slowly than our original model. In addition, the running time per epoch is significantly longer for our tensorflow implementation. For our architecture, the logistic mixture model thus clearly performs better. Since our architecture differs from that of van den Oord et al. (2016c) in other ways as well, we cannot say whether this would also apply to their model."}, {"section_index": "8", "section_name": "3.4.2 CONTINUOUS MIXTURE LIKELIHOOD INSTEAD OF DISCRETIZATION", "section_text": "Instead of directly modeling the discrete pixel values in an image, it is also possible to de-quantize them by adding noise from the standard uniform distribution, as used by Uria et al. (2013) and others, and modeling the data as being continuous. The resulting model can be interpreted as a variational autoencoder (Kingma & Welling, 2013; Rezende et al., 2014), where the dequantized pixels z form a latent code whose prior distribution is captured by our model.
Since the original discrete pixels x can be perfectly reconstructed from z under this model, the usual reconstruction term vanishes from the variational lower bound.
Figure 6: Training curves for our model with logistic mixture likelihood versus our model with softmax likelihood.
The entropy of the standard uniform distribution is zero, so the term that remains is the log-likelihood of the dequantized pixels, which thus gives us a variational lower bound on the log-likelihood of our original data.
We re-run our model for CIFAR-10 using the same model settings as those used for the 2.92 bits per dimension result in Table 1, but now we remove the discretization in our likelihood model and instead add standard uniform noise to the image data. The resulting model is a continuous mixture model in the same class as that used by Theis et al. (2012); Uria et al. (2013); Theis & Bethge (2015) and others. After optimization, this model gives a variational lower bound on the data log-likelihood of 3.11 bits per dimension. The difference with the reported 2.92 bits per dimension shows the benefit of using discretization in the likelihood model."}, {"section_index": "9", "section_name": "3.4.3 NO SHORT-CUT CONNECTIONS", "section_text": "Next, we test the importance of the additional parallel short-cut connections in our model, indicated by the dotted lines in Figure 2. We re-run our unconditional CIFAR-10 experiment, but remove the short-cut connections from the model. As seen in Figure 7, the model fails to train without these connections. The reason for needing these extra short-cuts is likely to be our use of sub-sampling, which discards information that otherwise cannot easily be recovered.
Figure 7: Training curves for our model with and without short-cut connections."}, {"section_index": "10", "section_name": "3.4.4 NO DROPOUT", "section_text": "We re-run our CIFAR-10 model without dropout regularization. The log-likelihood we achieve on the training set is below 2.0 bits per sub-pixel, but the final test log-likelihood is above 6.0 bits per sub-pixel."}]
rJqFGTslg
[{"section_index": "0", "section_name": "PRUNING FILTERS FOR EFFICIENT CONVNETS", "section_text": "ImageNet, ResNet-34, prune smallest filters. ImageNet, ResNet-34, prune the second layer of the basicblock 75 conv_2 64 70 : 1 - 7, step=2 conv_4 64 9 - 15, step=2 conv 6 64 17 - 27, step=2 70 60 conv_8 128 29 - 33, step=2 conv_10 128 50 conv_12 128 65 ACeunrey conv_14 128 40 conv_16 256 conv_18 256 60 30 conv_20 256 conv_22 256 conv_24 256 20 55 conv_26 256 conv 28 512 10 conv 30 512 50 conv 32 512 0 20 40 60 80 40 0 20 40 60 80 100 Filters Pruned Away(%) Parameter Pruned Away(%) (a) Pruning the first layer of residual blocks. (b) Pruning the second layer of residual blocks.\nAsim Kadav\nUniversity of Marylanc\nFigure 7: Sensitivity to pruning for the residual blocks of ResNet-34\nThe success of CNNs in various applications is accompanied by a significant. increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers. due to irregular sparsity in the pruned networks. We present an acceleration method. for CNNs, where we prune filters from CNNs that are identified as having a small. effect on the output accuracy. By removing whole filters in the network together. with their connecting feature maps, the computation costs are reduced significantly In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and. can work with existing efficient BLAS libraries for dense matrix multiplications We show that even simple filter pruning techniques can reduce inference costs for. VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks..\nWe compare our approach with pruning random filters and largest filters. As shown in Figure [8 pruning the smallest filters outperforms pruning random filters for most of the layers at different pruning ratios. For example, smallest filter pruning has better accuracy than random filter pruning fo all layers with the pruning ratio of 90%. The accuracy of pruning filters with the largest l1-norms drops quickly as the pruning ratio increases, which indicates the importance of filters with larger l1-norms."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The ImageNet challenge has led to significant advancements in exploring various architectural. choices in CNNs (Russakovsky et al.. (2015);Krizhevsky et al.(2012); Simonyan & Zisserman (2015); Szegedy et al.(2015a); He et al.(2016)). The general trend since the past few years has been that the networks have grown deeper, with an overall increase in the number of parameters and convolution operations. These high capacity networks have significant inference costs especially. when used with embedded sensors or mobile devices where computational and power resources. may be limited. For these applications, in addition to accuracy, computational efficiency and small. network sizes are crucial enabling factors (Szegedy et al.[(2015b)). In addition, for web services. that provide image search and image classification APIs that operate on a time budget often serving. 
hundreds of thousands of images per second, benefit significantly from lower inference times.
Figure 8: Comparison of three pruning methods for VGG-16 on CIFAR-10: pruning the smallest filters, pruning random filters and pruning the largest filters. In random filter pruning, the order of filters to be pruned is randomly permuted.
There has been a significant amount of work on reducing the storage and computation costs by model compression (Le Cun et al. (1989); Hassibi & Stork (1993); Srinivas & Babu (2015); Han et al. (2015); Mariet & Sra (2016)). Recently, Han et al. (2015; 2016b) report impressive compression rates on AlexNet (Krizhevsky et al. (2012)) and VGGNet (Simonyan & Zisserman (2015)) by pruning weights with small magnitudes and then retraining without hurting the overall accuracy. However, pruning parameters does not necessarily reduce the computation time, since the majority of the parameters removed are from the fully connected layers where the computation cost is low; e.g., the fully connected layers of VGG-16 occupy 90% of the total parameters but contribute less than 1% of the overall floating point operations (FLOP). They also demonstrate that the convolutional layers can be compressed and accelerated (Iandola et al. (2016)), but this additionally requires sparse
The activation-based feature map pruning method removes the feature maps with weak activation patterns together with their corresponding filters and kernels (Polyak & Wolf (2015)), which needs sample data. The feature map x_{i+1,j} is generated by applying filter F_{i,j} ∈ R^{n_i × k × k} to the feature maps of the previous layer x_i ∈ R^{n_i × w_i × h_i}, i.e., x_{i+1,j} = F_{i,j} * x_i. Given N randomly selected images {x^n}_{n=1}^{N} from the training set, the statistics of each feature map can be estimated with a one-epoch forward pass over the N sampled data. Note that we calculate statistics on the feature maps generated from the convolution operations, before batch normalization or non-linear activation. We compare our ℓ1-norm based filter pruning with feature map pruning using the following criteria: σ_mean-mean(x_{i,j}) = (1/N) Σ_{n=1}^{N} mean(x^n_{i,j}), σ_mean-std(x_{i,j}) = (1/N) Σ_{n=1}^{N} std(x^n_{i,j}),
Igor Durdanovic
Hans Peter Graf
NEC Labs America"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "than pruning the second layer. This finding also correlates with the bottleneck block design for deeper
ResNets, which first reduce the dimension of input feature maps for the residual layer and then increase the dimension to match the identity mapping.
[Figure 8 plots: accuracy vs. percentage of filters pruned away for each VGG-16 layer (conv_1 through conv_13) on CIFAR-10, under (left) smallest-ℓ1-norm, (middle) random, and (right) largest-ℓ1-norm filter pruning.]
Recent work on CNNs has yielded deep architectures with a more efficient design (Szegedy et al. (2015a;b); He & Sun (2015); He et al. (2016)), in which the fully connected layers are replaced with average pooling layers (Lin et al. (2013); He et al. (2016)), which reduces the number of parameters significantly. The computation cost is also reduced by downsampling the image at an early stage to reduce the size of the feature maps (He & Sun (2015)). Nevertheless, as the networks continue to become deeper, the computation costs of the convolutional layers continue to dominate.
CNNs with large capacity usually have significant redundancy among different filters and feature channels. In this work, we focus on reducing the computation cost of well-trained CNNs by pruning filters. Compared to pruning weights across the network, filter pruning is a naturally structured way of pruning without introducing sparsity, and therefore does not require using sparse libraries or any specialized hardware. The number of pruned filters correlates directly with acceleration by reducing the number of matrix multiplications, which is easy to tune for a target speedup. In addition, instead of layer-wise iterative fine-tuning (retraining), we adopt a one-shot pruning and retraining strategy to save retraining time when pruning filters across multiple layers, which is critical for pruning very deep networks. Finally, we observe that even ResNets, which have significantly fewer parameters and inference costs than AlexNet or VGGNet, still allow about 30% FLOP reduction without sacrificing too much accuracy. We conduct sensitivity analysis for convolutional layers in ResNets that improves the understanding of ResNets.
Figure 9: Comparison of activation-based feature map pruning for VGG-16 on CIFAR-10.
σ_var-ℓ2(x_{i,j}) = var({||x^n_{i,j}||_2}_{n=1}^{N}), where mean, std and var are standard statistics (average, standard deviation and variance) of the input. Here, σ_var-ℓ2 is the contribution-variance-of-channel criterion proposed in Polyak & Wolf (2015), which is motivated by the intuition that an unimportant feature map has almost similar outputs for the whole training data and acts like an additional bias.
The early work by Le Cun et al. (1989) introduces Optimal Brain Damage, which prunes weights with a theoretically justified saliency measure.
Later, Hassibi & Stork (1993) propose Optimal Brain Surgeon to remove unimportant weights determined by second-order derivative information. Mariet & Sra (2016) reduce the network redundancy by identifying a subset of diverse neurons that does not require retraining. However, this method only operates on the fully-connected layers and introduces sparse connections.
To reduce the computation costs of the convolutional layers, past work has proposed to approximate convolutional operations by representing the weight matrix as a low-rank product of two smaller matrices without changing the original number of filters (Denil et al. (2013); Jaderberg et al. (2014); Zhang et al. (2015b;a); Tai et al. (2016); Ioannou et al. (2016)). Other approaches to reduce the convolutional overheads include using FFT-based convolutions (Mathieu et al. (2013)) and fast convolution using the Winograd algorithm (Lavin & Gray (2016)). Additionally, quantization (Han et al. (2016b)) and binarization (Rastegari et al. (2016); Courbariaux & Bengio (2016)) can be used to reduce the model size and lower the computation overheads. Our method can be used in addition to these techniques to reduce computation costs without incurring additional overheads."}, {"section_index": "3", "section_name": "5 CONCLUSIONS", "section_text": "Modern CNNs often have high capacity with large training and inference costs. In this paper we present a method to prune filters with relatively low weight magnitudes to produce CNNs with reduced computation costs without introducing irregular sparsity. It achieves about 30% reduction in FLOP for VGGNet (on CIFAR-10) and deep ResNets without significant loss in the original accuracy. Instead of pruning with specific layer-wise hyperparameters and time-consuming iterative retraining, we use the one-shot pruning and retraining strategy for simplicity and ease of implementation. By performing lesion studies on very deep CNNs, we identify layers that are robust or sensitive to pruning, which can be useful for further understanding and improving the architectures.
Several works have studied removing redundant feature maps from a well-trained network (Anwar et al. (2015); Polyak & Wolf (2015)). Anwar et al. (2015) introduce a three-level pruning of the weights and locate the pruning candidates using particle filtering, which selects the best combination from a number of randomly generated masks. Polyak & Wolf (2015) detect the less frequently activated feature maps with sample input data for face detection applications. We choose to analyze the filter weights and prune filters with their corresponding feature maps using a simple magnitude-based measure, without examining possible combinations.
We also introduce network-wide holistic approaches to prune filters for simple and complex convolutional network architectures."}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank the anonymous reviewers for their valuable feedback."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "[Figure 9 plots: accuracy vs. percentage of filters/feature maps pruned away for each VGG-16 layer on CIFAR-10, under the criteria (a) ||F_{i,j}||_1, (b) σ_mean-mean, (c) σ_mean-std, (d) σ_mean-ℓ1, (e) σ_mean-ℓ2, (f) σ_var-ℓ2.]
The estimation of the criteria becomes more accurate when more sample data is used. Here we use the whole training set (N = 50,000 for CIFAR-10) to compute the statistics. The performance of feature map pruning with the above criteria for each layer is shown in Figure 9. Smallest-filter pruning outperforms feature map pruning with the criteria σ_mean-mean, σ_mean-ℓ1, σ_mean-ℓ2 and σ_var-ℓ2. The σ_mean-std criterion has better or similar performance to the ℓ1-norm up to a pruning ratio of 60%. However, its performance drops quickly after that, especially for the layers conv_1, conv_2 and conv_3. We find the ℓ1-norm is a good heuristic for filter selection considering that it is data-free.
Concurrently with our work, there is a growing interest in training compact CNNs with sparse constraints (Lebedev & Lempitsky (2016); Zhou et al. (2016); Wen et al. (2016)). Lebedev & Lempitsky (2016) leverage group-sparsity on the convolutional filters to achieve structured brain damage, i.e., prune the entries of the convolution kernel in a group-wise fashion. Zhou et al. (2016) add group-sparse regularization on neurons during training to learn compact CNNs with reduced filters. Wen et al. (2016) add a structured sparsity regularizer on each layer to reduce trivial filters, channels or even layers.
In filter-level pruning, all of the above work uses the ℓ2,1-norm as a regularizer.
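To make the activation-based criteria compared above concrete, here is a minimal numpy sketch computing all five statistics for one feature map; names and shapes are our own conventions, not the paper's code.

```python
import numpy as np

def feature_map_criteria(acts):
    """acts: array of shape (N, h, w) holding feature map x_{i,j} over N
    images (taken pre-batch-norm, pre-activation). Returns the criteria."""
    flat = acts.reshape(len(acts), -1)
    per_image_mean = flat.mean(axis=1)
    per_image_std = flat.std(axis=1)
    per_image_l1 = np.abs(flat).sum(axis=1)
    per_image_l2 = np.sqrt((flat ** 2).sum(axis=1))
    return {
        "mean-mean": per_image_mean.mean(),
        "mean-std": per_image_std.mean(),
        "mean-l1": per_image_l1.mean(),
        "mean-l2": per_image_l2.mean(),
        "var-l2": per_image_l2.var(),   # contribution variance of channel
    }
```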
In the filter-level pruning, all above work use l2.1-norm as a regularizer.\nSimilar to the above work, we use l1-norm to select unimportant filters and physically prune them Our fine-tuning process is the same as the conventional training procedure, without introducing additional regularization. Our approach does not introduce extra layer-wise meta-parameters for the regularizer except for the percentage of filters to be pruned, which is directly related to the desirec speedup. By employing stage-wise pruning, we can set a single pruning rate for all layers in one. Sta ge.\nMatthieu Courbariaux and Yoshua Bengio. Binarynet: Training deep neural networks with weights and activations constrained to+ 1 or-1. arXiv preprint arXiv:1602.02830, 2016.\nMisha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In NIPS, 2013."}, {"section_index": "6", "section_name": "3 PRUNING FILTERS AND FEATURE MAPS", "section_text": "Let n; denote the number of input channels for the ith convolutional layer and hi/w, be the height/width of the input feature maps. The convolutional layer transforms the input feature maps x; E Rn;xh;xwi into the output feature maps Xi+1 E IRni+1hi+1wi+1, which are used as in- put feature maps for the next convolutional layer. This is achieved by applying ni+1 3D filters filter is composed by n, 2D kernels K E Rkk (e.g., 3 3). All the filters, together, constitute the kernel matrix F, E Rnn+1 xkxk. The number of operations of the convolutional layer is ni+1n;k2hi+1Wi+1. As shown in Figure[1] when a filter Fi,, is pruned, its corresponding feature map xi+1,j is removed, which reduces nyk2hi+1Wi+1 operations. The kernels that apply on the removed feature maps from the filters of the next convolutional layer are also removed, which saves an additional ni+2k2hi+2Wi+2 operations. Pruning m filters of layer i will reduce m/ni+1 of the computation cost for both layers i and i + 1.\nSong Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. EIE: Efficient Inference Engine on Compressed Deep Neural Network. In ISCA, 2016a\nSong Han, Huizi Mao, and William J Dally. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding. In ICLR, 2016b..\nBabak Hassibi and David G Stork. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon. In NIPS. 1993\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016\nYani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Trainin CNNs with Low-Rank Filters for Efficient Image Classification. In ICLR, 2016.\nSergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. 2015\nFigure 1: Pruning a filter results in removal of its corresponding feature map and related kernels i the next layer.\nMax Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014\nAlex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet Classification with Deep Conve lutional Neural Networks. In NIPS, 2012.\nOur method prunes the less useful filters from a well-trained model for computational efficiency while minimizing the accuracy drop. We measure the relative importance of a filter in each laye. by calculating the sum of its absolute weights |Fi,], i.e., its l1-norm ||Fi,||1. 
Since the number of input channels, n_i, is the same across filters, Σ|F_{i,j}| also represents the average magnitude of its kernel weights. This value gives an expectation of the magnitude of the output feature map. Filters with smaller kernel weights tend to produce feature maps with weak activations compared to the other filters in that layer. Figure 2(a) illustrates the distribution of the filters' absolute weight sums for each convolutional layer in a VGG-16 network trained on the CIFAR-10 dataset, where the distribution varies significantly across layers. We find that pruning the smallest filters works better in comparison with pruning the same number of random or largest filters (Section 4.4). Compared to other criteria for activation-based feature map pruning (Section 4.5), we find the ℓ1-norm is a good criterion for data-free filter selection.
Yann Le Cun, John S Denker, and Sara A Solla. Optimal Brain Damage. In NIPS, 1989.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in Network. arXiv preprint arXiv:1312.4400, 2013.
Zelda Mariet and Suvrit Sra. Diversity Networks. In ICLR, 2016.
The procedure of pruning m filters from the ith convolutional layer is as follows.
Adam Polyak and Lior Wolf. Channel-Level Acceleration of Deep Face Representations. IEEE Access, 2015.
[Figure 1 schematic: kernel matrix F_i maps x_i (n_i channels, height h_i) to x_{i+1} (n_{i+1} channels) and on to x_{i+2} (n_{i+2} channels); pruning filter F_{i,j} removes feature map x_{i+1,j} and the corresponding kernels in the next layer.]
Forrest Iandola, Matthew Moskewicz, Khalid Ashraf, Song Han, William Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360, 2016.
Andrew Lavin and Scott Gray. Fast Algorithms for Convolutional Neural Networks. In CVPR, 2016.
Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse Convolutional Neural Networks. In CVPR, 2015.
1. For each filter F_{i,j}, calculate the sum of its absolute kernel weights s_j = Σ_{l=1}^{n_i} Σ|K_l|.
2. Sort the filters by s_j.
3. Prune the m filters with the smallest sum values and their corresponding feature maps. The kernels in the next convolutional layer corresponding to the pruned feature maps are also removed.
4. A new kernel matrix is created for both the ith and (i + 1)th layers, and the remaining kernel weights are copied to the new model.
Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR, 2015.
Figure 2: (a) Sorting filters by absolute weights sum for each layer of VGG-16 on CIFAR-10. The x-axis is the filter index divided by the total number of filters. The y-axis is the filter weight sum divided by the max sum value among filters in that layer. (b) Pruning filters with the lowest absolute weights sum and their corresponding test accuracies on CIFAR-10. (c) Prune and retrain for each single layer of VGG-16 on CIFAR-10. Some layers are sensitive and it can be harder to recover accuracy after pruning them.
Relationship to pruning weights. Pruning filters with a low absolute weights sum is similar to pruning low-magnitude weights (Han et al. (2015)). Magnitude-based weight pruning may prune away whole filters when all the kernel weights of a filter are lower than a given threshold. However, it requires a careful tuning of the threshold and it is difficult to predict the exact number of filters that will eventually be pruned.
Furthermore, it generates sparse convolutional kernels which can be hard to accelerate given the lack of efficient sparse libraries, especially in the case of low sparsity.
Relationship to group-sparse regularization on filters. Recent work (Zhou et al. (2016); Wen et al. (2016)) applies group-sparse regularization (Σ_{j=1}^{n_{i+1}} ||F_{i,j}||_2, or the ℓ2,1-norm) on convolutional filters, which also favors zeroing out filters with small ℓ2-norms, i.e., F_{i,j} = 0. In practice, we do not observe a noticeable difference between the ℓ2-norm and the ℓ1-norm for filter selection, as the important filters tend to have large values for both measures (Appendix 6.1). Zeroing out the weights of multiple filters during training has a similar effect to pruning filters with the strategy of iterative pruning and retraining as introduced in Section 3.4.
Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating Very Deep Convolutional Networks for Classification and Detection. IEEE T-PAMI, 2015a.
Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. In CVPR, 2015b.
Hao Zhou, Jose Alvarez, and Fatih Porikli. Less Is More: Towards Compact CNNs. In ECCV, 2016."}, {"section_index": "7", "section_name": "3.2 DETERMINING SINGLE LAYER'S SENSITIVITY TO PRUNING", "section_text": "To understand the sensitivity of each layer, we prune each layer independently and evaluate the resulting pruned network's accuracy on the validation set. Figure 2(b) shows that layers that maintain their accuracy as filters are pruned away correspond to layers with larger slopes in Figure 2(a). On the contrary, layers with relatively flat slopes are more sensitive to pruning. We empirically determine the number of filters to prune for each layer based on their sensitivity to pruning. For deep networks such as VGG-16 or ResNets, we observe that layers in the same stage (with the same feature map size) have a similar sensitivity to pruning. To avoid introducing layer-wise meta-parameters, we use the same pruning ratio for all layers in the same stage. For layers that are sensitive to pruning, we prune a smaller percentage of these layers or completely skip pruning them."}, {"section_index": "8", "section_name": "3.3 PRUNING FILTERS ACROSS MULTIPLE LAYERS", "section_text": "To prune filters across multiple layers, we consider two strategies for layer-wise filter selection.
[Figure 2 plots for VGG-16 on CIFAR-10: (a) Filters are ranked by s_j. (b) Prune the smallest filters. (c) Prune and retrain (20 epochs).]
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 2014.
Sergey Zagoruyko. 92.45% on CIFAR-10 in Torch. http://torch.ch/blog/2015/07/30/cifar.html, 2015.
We now discuss how to prune filters across the network. Previous work prunes the weights on a layer
by layer basis, followed by iteratively retraining and compensating for any loss of accuracy (Han et al.. (2015)). However, understanding how to prune filters of multiple layers at once can be useful: 1) For deep networks, pruning and retraining on a layer by layer basis can be extremely time-consuming 2). Pruning layers across the network gives a holistic view of the robustness of the network resulting in a smaller network 3) For complex networks, a holistic approach may be necessary. For example, for. the ResNet, pruning the identity feature maps or the second layer of each residual block results in. additional pruning of other layers..\nWe compare l1-norm with l2-norm for filter pruning. As shown in Figure[10] l1-norm works slightly better than l2-norm for layer conv_2. There is no significant difference between the two norms for other layers.\nFigure 3|illustrates the difference between two approaches in calculating the sum of absolute weights The greedy approach, though not globally optimal, is holistic and results in pruned networks with higher accuracy especially when many filters are pruned.\nCIFAR10,VGG-16,prune filters with smallest l-norm CIFAR10,VGG-16,prune filters with smallest l-norm 100 100 conv_1 64 . conv_1 64 80 80 conv_2 64 . conv_2 64 conv_3 128 . conv_3 128 11 conv_4 128 . conv_4 128 11 60 60 ACeunrey C conv 5 256 ACeunrey conv 5 256 conv_6 256 conv_6 256 =0 conv_7 256 conv_7 256 40 conv 8 512 40 conv_8 512 conv_9 512 conv_9 512 conv_10 512 conv_10 512 20 conv_11 512 20 conv_11 512 conv 12 512 conv 12 512 conv 13 512 conv 13 512 0 0 O 20 40 60 80 100 0 20 40 60 80 100 Filters Pruned Away(%) Filters Pruned Away(%) (a) l|Fi,j|1 (b) l|Fi,j|2\nNi+1 ni+2 Xi+1 Xi+2\nFigure 3: Pruning filters across consecutive layers. The independent pruning strategy calculates the filter sum (columns marked in green) without considering feature maps removed in previous layer (shown in blue), so the kernel weights marked in yellow are still included. The greedy pruning strategy does not count kernels for the already pruned feature maps. Both approaches result in a (n+1 - 1) (n+2 - 1) kernel matrix.\nFigure 10: Comparison of l1-norm and l2-norm based filter pruning for VGG-16 on CIFAR-10"}, {"section_index": "9", "section_name": "6.2 FLOP AND WALL-CLOCK TIME", "section_text": "projection shortcut Xi Xi+1 Xi+2 Pxi\nFLOP is a commonly used measure to compare the computation complexities of CNNs. It is easy to. compute and can be done statically, which is independent of the underlying hardware and software implementations. Since we physically prune the filters by creating a smaller model and then copy the weights, there are no masks or sparsity introduced to the original dense BLAS operations. Therefore the FLOP and wall-clock time of the pruned model is the same as creating a model with smaller. number of filters from scratch..\nWe report the inference time of the original model and the pruned model on the test set of CIFAR-10 and the validation set of ILSVRC 2012, which contains 10,000 32 32 images and 50,000 224 224 images respectively. The ILSVRC 2012 dataset is used only for ResNet-34. The evaluation is conducted in Torch7 with Titan X (Pascal) GPU and cuDNN v5.1, using a mini-batch size 128. As shown in Table [3] the saved inference time is close to the FLOP reduction. 
Note that the FLOP number only considers the operations in the Conv and FC layers, while some calculations such as Batch Normalization and other overheads are not accounted.\nFigure 4: Pruning residual blocks with the projection shortcut. The filters to be pruned for the second layer of the residual block (marked as green) are determined by the pruning result of the shortcut projection. The first layer of the residual block can be pruned without restrictions..\nFor simpler CNNs like VGGNet or AlexNet, we can easily prune any of the filters in any convolutiona. layer. However, for complex network architectures such as Residual networks (He et al.(2016)). pruning filters may not be straightforward. The architecture of ResNet imposes restrictions and the. filters need to be pruned carefully. We show the filter pruning for residual blocks with projectior. mapping in Figure4] Here, the filters of the first layer in the residual block can be arbitrarily pruned. as it does not change the number of output feature maps of the block. However, the correspondence. between the output feature maps of the second convolutional layer and the identity feature maps. makes it difficult to prune. Hence, to prune the second convolutional layer of the residual block, the. corresponding projected feature maps must also be pruned. Since the identical feature maps are more. important than the added residual maps, the feature maps to be pruned should be determined by the. pruning results of the shortcut layer. To determine which identity feature maps are to be pruned, we. use the same selection criterion based on the filters of the shortcut convolutional layers (with 1 . kernels). The second layer of the residual block is pruned with the same filter index as selected by. the pruning of the shortcut layer..\nTable 3: The reduction of FLOP and wall-clock time for inference\nAfter pruning the filters, the performance degradation should be compensated by retraining the network. There are two strategies to prune the filters across multiple layers:.\nIndependent pruning determines which filters should be pruned at each layer independent of. other layers. Greedy pruning accounts for the filters that have been removed in the previous layers This strategy does not consider the kernels for the previously pruned feature maps while. calculating the sum of absolute weights.\nNi+1 ni+2 Xi+1 Xi+2\nModel FLOP Pruned % Time (s) Saved % VGG-16 3.13 108 1.23 VGG-16-pruned-A 2.06 108 34.2% 0.73 40.7% ResNet-56 1.25 108 1.31 ResNet-56-pruned-B 9.09 107 27.6% 0.99 24.4% ResNet-110 2.53 108 2.38 ResNet-110-pruned-B 1.55 108 38.6% 1.86 21.8% ResNet-34 3.64 109 36.02 ResNet-34-pruned-B 2.76 109 24.2% 22.93 28.0%\nWe find that for the layers that are resilient to pruning, the prune and retrain once strategy can be used to prune away significant portions of the network and any loss in accuracy can be regained by retraining for a short period of time (less than the original training time). However, when some filters from the sensitive layers are pruned away or large portions of the networks are pruned away, it may not be possible to recover the original accuracy. Iterative pruning and retraining may yield better results, but the iterative process requires many more epochs especially for very deep networks."}, {"section_index": "10", "section_name": "4 EXPERIMENTS", "section_text": "Table 1: Overall results. 
The best test/validation accuracy during the retraining process is reportec Training a pruned model from scratch performs worse than retraining a pruned model, which may indicate the difficulty of training a network with a small capacity..\nVGG-16 is a high-capacity network originally designed for the ImageNet dataset (Simonyan &. Zisserman (2015)). Recently,Zagoruyko(2015) applies a slightly modified version of the model on CIFAR-10 and achieves state of the art results. As shown in Table [2] VGG-16 on CIFAR-10 consists of 13 convolutional layers and 2 fully connected layers, in which the fully connected layers do not occupy large portions of parameters due to the small input size and less hidden units. We use. the model described inZagoruyko(2015) but add Batch Normalization (Ioffe & Szegedy(2015))\nWe prune two types of networks: simple CNNs (VGG-16 on CIFAR-1O) and Residual networks (ResNet-56/110 on CIFAR-10 and ResNet-34 on ImageNet). Unlike AlexNet or VGG (on ImageNet that are often used to demonstrate model compression, both VGG (on CIFAR-1O) and Residual networks have fewer parameters in the fully connected layers. Hence, pruning a large percentage of parameters from these networks is challenging. We implement our filter pruning method in Torch7 (Collobert et al.[(2011)). When filters are pruned, a new model with fewer filters is created and the remaining parameters of the modified layers as well as the unaffected layers are copied into the new model. Furthermore, if a convolutional layer is pruned, the weights of the subsequent batch normalization layer are also removed. To get the baseline accuracies for each network, we train each model from scratch and follow the same pre-processing and hyper-parameters as ResNet (He et al (2016). For retraining, we use a constant learning rate 0.001 and retrain 40 epochs for CIFAR-10 and 20 epochs for ImageNet, which represents one-fourth of the original training epochs. Past work has reported up to 3 original training times to retrain pruned networks (Han et al.(2015)).\nModel Error(%) FLOP Pruned % Parameters Pruned % VGG-16 6.75 3.13 108 1.5 10 VGG-16-pruned-A 6.60 2.06 108 34.2% 5.4 106 64.0% VGG-16-pruned-A scratch-train 6.88 ResNet-56 6.96 1.25 108 8.5 10 ResNet-56-pruned-A 6.90 1.12 108 10.4% 7.7 105 9.4% ResNet-56-pruned-B 6.94 9.09 107 27.6% 7.3 105 13.7% ResNet-56-pruned-B scratch-train 8.69 ResNet-110 6.47 2.53 108 1.72 106 ResNet-110-pruned-A 6.45 2.13 108 15.9% 1.68 106 2.3% ResNet-110-pruned-B 6.70 1.55 108 38.6% 1.16 106 32.4% ResNet-110-pruned-B scratch-train 7.06 ResNet-34 26.77 3.64 109 2.16 107 ResNet-34-pruned-A 27.44 3.08 109 15.5% 1.99 107 7.6% ResNet-34-pruned-B 27.83 2.76 109 24.2% 1.93 107 10.8% ResNet-34-pruned-C 27.52 3.37 109 7.5% 2.01 107 7.2%\nTable 2: VGG-16 on CIFAR-10 and the pruned model. 
The last two columns show the number of feature maps and the reduced percentage of FLOP from the pruned model..\nlayer type Wi X hi #Maps FLOP #Params #Maps FLOP% Conv_1 32 32 64 1.8E+06 1.7E+03 32 50% Conv_2 32 x 32 64 3.8E+07 3.7E+04 64 50% Conv_3 16 16 128 1.9E+07 7.4E+04 128 0% Conv_4 16 16 128 3.8E+07 1.5E+05 128 0% Conv_5 8 x 8 256 1.9E+07 2.9E+05 256 0% Conv_6 8 x 8 256 3.8E+07 5.9E+05 256 0% Conv_7 8 x 8 256 3.8E+07 5.9E+05 256 0% Conv_8 4 x 4 512 1.9E+07 1.2E+06 256 50% Conv_9 4 x 4 512 3.8E+07 2.4E+06 256 75% Conv_10 4 x 4 512 3.8E+07 2.4E+06 256 75% Conv_11 2 x 2 512 9.4E+06 2.4E+06 256 75% Conv_12 2 x 2 512 9.4E+06 2.4E+06 256 75% Conv_13 2 x 2 512 9.4E+06 2.4E+06 256 75% Linear 1 512 2.6E+05 2.6E+05 512 50% Linear 1 10 5.1E+03 5.1E+03 10 0% Total 3.1E+08 1.5E+07 34%\nAs shown in Figure[2(b)] each of the convolutional layers with 512 feature maps can drop at least 60% of filters without affecting the accuracy. Figure 2(c)|shows that with retraining, almost 90% of the filters of these layers can be safely removed. One possible explanation is that these filters operate on 4 4 or 2 2 feature maps, which may have no meaningful spatial connections in such small dimensions. For instance, ResNets for CIFAR-10 do not perform any convolutions for feature maps below 8 8 dimensions. Unlike previous work (Zeiler & Fergus(2014);Han et al.[(2015)), we observe that the first layer is robust to pruning as compared to the next few layers. This is possible for a simple dataset like CIFAR-10, on which the model does not learn as much usefu1 filters as or ImageNet (as shown in Figure.5). Even when 80% of the filters from the first layer are pruned, the number of remaining filters (12) is still larger than the number of raw input channels. However, wher removing 80% filters from the second layer, the layer corresponds to a 64 to 12 mapping, which may lose significant information from previous layers, thereby hurting the accuracy. With 50% of the filters being pruned in layer 1 and from 8 to 13, we achieve 34% FLOP reduction for the same accuracy.\nFigure 5: Visualization of filters in the first convolutional layer of VGG-16 trained on CIFAR-10. Filters are ranked by l1-norm"}, {"section_index": "11", "section_name": "4.2 RESNET-56/110 ON CIFAR-10", "section_text": "ResNets for CIFAR-10 have three stages of residual blocks for feature maps with sizes of 32 32 16 16 and 8 8. Each stage has the same number of residual blocks. When the number of feature maps increases, the shortcut layer provides an identity mapping with an additional zero padding for the increased dimensions. Since there is no projection mapping for choosing the identity feature maps, we only consider pruning the first layer of the residual block. As shown in Figure[6l most of the layers are robust to pruning. 
For ResNet-110, pruning some single layers without retraining even\n94 94 conv_216 conv 20 32 conv_38 64 Ceunee conv_4 16 conv_22 32 conv_40 64 conv_6 16 conv_24 32 conv_42 64 91 conv_8 16 conv_26 32 conv_44 64 conv_10 16 conv_28 32 conv_46 64 conv_12 16 conv_30 32 conv 48 64 90 conv_14 16 conv_32 32 90 conv_50 64 conv_16 16 conv_34 32 conv_52 64 conv_18 16 conv_36 32 conv_54 64 20 40 60 80 100 20 40 60 80 100 20 40 60 80 100 Filters Pruned Away(%) Filters Pruned Away(%) Filters Pruned Away(%) CIFAR10, ResNet-110, prune smallest filters CIFAR10, ResNet-110, prune smallest filters CIFAR10, ResNet-110, prune smallest filters 94 conv_2 16 conv_38 32 conv_74 64 conv_4 16 conv 40 32 conv_76 64 conv_6 16 conv 42 32 conv_78 64 conv_8 16 conv_44 32 conv_80 64 conv 10 16 conv_46 32 conv_8264 conv 12 16 conv_48 32 conv_84 64 conv_14 16 CCuueey conv_50 32 conv_86 64 conv_16 16 conv 52 32 conv_88 64 conv 18 16 conv 54 32 conv_90 64 91 conv_20 16 conv_56 32 91 conv_92 64 conv_22 16 conv_58 32 conv_94 64 conv_24 16 conv_60 32 conv_96 64 conv_26 16 90 conv_62 32 90 conv_98 64 conv_28 16 conv_64 32 conv_100 64 conv_30 16 conv_66 32 conv_102 64 conv_32 16 40 conv_68 32 :o conv_104 64 40 20 40 60 80 20 40 60 80 40 20 40 60 80 Filters Pruned Away(%) conv_34 16 Filters Pruned Away(%) conv_70 32 Filters Pruned Away(%) conv_106 64 . conv_36 16 . conv_72 32 conv_10864\nFigure 6: Sensitivity to pruning for the first layer of each residual block of ResNet-56/110\nimproves the performance. In addition, we find that layers that are sensitive to pruning (layers 20 38 and 54 for ResNet-56. layer 36. 38 and 74 for ResNet-110) lie at the residual blocks close to the layers where the number of feature maps changes, e.g., the first and the last residual blocks for each stage. We believe this happens because the precise residual errors are necessary for the newly added empty feature maps."}, {"section_index": "12", "section_name": "4.3 RESNET-34 oN ILSVRC2012", "section_text": "ResNets for ImageNet have four stages of residual blocks for feature maps with sizes of 56 56 28 28, 14 14 and 7 7. ResNet-34 uses the projection shortcut when the feature maps are down-sampled. We first prune the first layer of each residual block. Figure7|shows the sensitivity o the first layer of each residual block. Similar to ResNet-56/110, the first and the last residual blocks of each stage are more sensitive to pruning than the intermediate blocks (i.e., layers 2, 8, 14, 16, 26 28, 30, 32). We skip those layers and prune the remaining layers at each stage equally. In Table[1|we compare two configurations of pruning percentages for the first three stages: (A) p1=30%, p2=30% p3=30%; (B) p1=50%, p2=60%, p3=40%. Option-B provides 24% FLOP reduction with about 1% loss in accuracy. As seen in the pruning results for ResNet-50/110, we can predict that ResNet-34 is relatively more difficult to prune as compared to deeper ResNets.\nWe also prune the identity shortcuts and the second convolutional layer of the residual blocks. As these layers have the same number of filters, they are pruned equally. As shown in Figure 7(b) these layers are more sensitive to pruning than the first layers. 
With retraining, ResNet-34-pruned-C prunes the third stage with p3=20% and results in 7.5% FLOP reduction with 0.75% loss in accuracy Therefore, pruning the first layer of the residual block is more effective at reducing the overall FLOP\nCIFAR10, ResNet-56, prune smallest filters CIFAR10, ResNet-56, prune smallest filters CIFAR10, ResNet-56, prune smallest filters 94 94 94 93 93 93 92 92 conv_2 16 conv_20 32 92 ACeenney ACeunrey ACeuneey conv_38 64 conv 4 16 conv 22 32 conv 40 64 conv_6 16 conv 24 32 conv 42 64 9 conv_8 16 91 conv 26 32 conv 44 64 conv_10 16 oconv_28 32 conv_46 64 conv_12 16 conv_30 32 conv_4864 90 conv_14 16 90 conv 32 32 90 conv_50 64 conv_16 16 conv_34 32 conv_52 64 conv_18 16 conv_36 32 conv_54 64 RO 89 20 40 100 20 40 80 100 89 60 80 60 20 40 60 80 100 Filters Pruned Away(%) Filters Pruned Away(%) Filters Pruned Away(%) CIFAR10, ResNet-110, prune smallest filters. CIFAR10, ResNet-110, prune smallest filters. CIFAR10, ResNet-110, prune smallest filters. 94 94 94 conv_2 16 conv 38 32 conv_74 64 conv_4 16 conv_40 32 conv_76 64 93 conv_6 16 93 conv_42 32 93 conv_78 64 conv_8 16 conv_44 32 conv_80 64 conv_10 16 conv_46 32 conv_82 64 92 conv_12 16 92 conv_48 32 conv_84 64 conv_14 16 92 ACeunrey ACeeuney conv_50 32 ACeunrey conv_86 64 conv_16 16 conv_52 32 conv_88 64 conv_18 16 conv_54 32 conv_90 64 91 91 conv_20 16 conv_56 32 91 conv_92 64 conv_22 16 conv_58 32 conv_94 64 conv_24 16 conv_60 32 conv_96 64 90 90 conv_26 16 conv_62 32 90 conv_98 64 conv_28 16 conv_64 32 conv_100 64 conv_30 16 conv_66 32 conv_102 64 conv_32 16 40 89 conv_104 64 40 20 40 60 80 20 40 60 80 conv_68 32 40 20 conv_34 16 conv_70 32 40 60 80 Filters Pruned Away(%) Filters Pruned Away(%) Filters Pruned Away(%) conv_106 64 conv_36 16 conv_72 32 .. conv_108 64\nThe retraining performance can be improved by skipping these sensitive layers. As shown in Table1 ResNet-56-pruned-A improves the performance by pruning 10% filters while skipping the sensitive layers 16, 20, 38 and 54. In addition, we find that deeper layers are more sensitive to pruning than layers in the earlier stages of the network. Hence, we use a different pruning rate for each stage. We use pi to denote the pruning rate for layers in the ith stage. ResNet-56-pruned-B skips more layers (16 18, 20, 34, 38, 54) and prunes layers with p1=60%, p2=30% and p3=10%. For ResNet-110, the firs1 pruned model gets a slightly better result with p1=50% and layer 36 skipped. ResNet-110-pruned-B skips layers 36, 38, 74 and prunes with p1=50%, p2=40% and p3=30%. When there are more than two residual blocks at each stage, the middle residual blocks may be redundant and can be easily pruned. This might explain why ResNet-110 is easier to prune than ResNet-56."}]
H1GEvHcee
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Jorg Bornschein and Yoshua Bengio. Reweighted wake-sleep. In ICLR, 2015.\nChun-Liang Li Siamak Ravanbakhsh Barnabas Poczos\nchunlial,mravanba,bapoczos}@cs.cmu.edu\nA. Fischer and C. Igel. An introduction to restricted boltzmann machines. In CIARP, 2012\nRestricted Boltzmann Machine (RBM) is a bipartite graphical model that is usec. as the building block in energy-based deep generative models. Due to its numer. ical stability and quantifiability of its likelihood, RBM is commonly used witl Bernoulli units. Here, we consider an alternative member of the exponential fam ily RBM with leaky rectified linear units - called leaky RBM. We first study the joint and marginal distributions of the leaky RBM under different leakiness, which leads to interesting interpretation of the leaky RBM model as truncated Gaussiar distribution. We then propose a simple yet efficient method for sampling fron this model, where the basic idea is to anneal the leakiness rather than the energy. - i.e., start from a fully Gaussian/Linear unit and gradually decrease the leakiness over iterations. This serves as an alternative to the annealing of the temperature parameter and enables numerical estimation of the likelihood that are more effi. cient and far more accurate than the commonly used annealed importance sam pling (AIS). We further demonstrate that the proposed sampling algorithm enjoys. relatively faster mixing than contrastive divergence algorithm, which improves the training procedure without any additional computational cost.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "D. P. Kingma and M. Welling. Auto-encoding variational bayes. CoRR, 2013\nIn this paper, we are interested in deep generative models. One may naively classify these model into a family of directed deep generative models trainable by back-propagation (e.g., Kingma 8 Welling, 2013; Goodfellow et al., 2014), and deep energy-based models, such as deep belief net work (Hinton et al., 2006) and deep Boltzmann machine (Salakhutdinov & Hinton, 2009). Th building block of deep energy-based models is a bipartite graphical model called restricted Boltz mann machine (RBM). The RBM model consists of two layers, visible and hidden. The resultin graphical model which can account for higher-order interactions of the visible units (visible layer using the hidden units (hidden layer). It also makes the inference easier that there are no interaction between the variables in each layer.\nThe conventional RBM uses Bernoulli units for both the hidden and visible units (Smolensky, 1986) One extension is using Gaussian visible units to model general natural images (Freund & Haussler 1994). For hidden units, we can also generalize Bernoulli units to the exponential family (Welling et al., 2004; Ravanbakhsh et al., 2016).\nNair & Hinton (2010) propose a variation using Rectified Linear Unit (ReLU) for the hidden laye. with a heuristic sampling procedure, which has promising performance in terms of reconstructior. error and classification accuracy. Unfortunately, due to its lack of strict monotonicity, ReLU RBM does not fit within the framework of exponential family RBMs (Ravanbakhsh et al., 2016). In stead we study leaky-ReLU RBM (leaky RBM) in this work and address two important issues i) a better training (sampling) algorithm for ReLU RBM and; ii) a better quantification of leaky RBM. -i.e., evaluation of its performance in terms of likelihood..\nN. Parikh and S. 
Boyd. Proximal algorithms. Found. Trends Optim., 2014.\nWe study some of the fundamental properties of leaky RBM, including its joint and marginal dis tributions (Section 2). By analyzing these distributions, we show that the leaky RBM is a union oJ\nR. Salakhutdinov and G. Hinton. Deep Boltzmann machines. In AISTATS, 2009\nD. E. Carlson, P. Stinson, A. Pakman, and L. Paninski. Partition functions from rao-blackwellized tempered sampling. In ICML, 2016. KyungHyun Cho, Tapani Raiko, and Alexander Ilin. Enhanced gradient for training restricted boltz- mann machines. Neural Computation, 2013. A. Fischer and C. Igel. An introduction to restricted boltzmann machines. In CIARP, 2012. Y. Freund and D. Haussler. Unsupervised learning of distributions on binary vectors using two layer networks. Technical report, 1994.. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In ICML. 2014. R. B. Grosse, C. J. Maddison, and R. Salakhutdinov. Annealing between distributions by averaging moments. In NIPS, 2013. G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computa- tion, 2002. G. E. Hinton. A practical guide to training restricted boltzmann machines. In Neural Networks. Tricks of the Trade (2nd ed.). 2012. G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006. C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estima- tion using quadratic approximation. In NIPS, 2011.. D. P. Kingma and M. Welling. Auto-encoding variational bayes. CoRR, 2013. D D 1 o1ohle"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009. Q. Liu, J. Peng, A. Ihler, and J. Fisher II. Estimating the partition function by discriminance sampling. In UAI, 2015. A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML Workshop on Deep Learning for Audio, Speech, and Language Processing. 2013. V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010. A. Pakman and L. Paninski. Exact hamiltonian monte carlo for truncated multivariate gaussians. Journal of Computational and Graphical Statistics, 2014\nM. Ranzato and G. E. Hinton. Modeling pixel means and covariances using factorized third-order. boltzmann machines. In CVPR, 2010. S. Ravanbakhsh, B. Poczos, J. G. Schneider, D. Schuurmans, and R. Greiner. Stochastic neural networks with monotonic activation functions. In A1STATS, 2016..\ntruncated Gaussian distributions. In this paper, we show that training leaky RBM involves under lying positive definite constraints. Because of this, the training can diverge if these constrains are. not satisfied. This is an issue that was previously ignored in ReLU RBM, as it was mainly used for. pre-training rather than generative modeling\nOur contribution in this paper is three-fold: I) we systematically identify and address model con. straints in leaky RBM (Section 3); II) for the training of leaky RBM, we propose a meta algo. rithm for sampling, which anneals leakiness during the Gibbs sampling procedure (Section 3) anc. empirically show that it can boost contrastive divergence with faster mixing (Section 5); III) We. 
demonstrate the power of the proposed sampling algorithm on estimating the partition function. Ir particular, comparison on several benchmark datasets shows that the proposed method outperform: the conventional AIS (Salakhutdinov & Murray, 2008) in terms of efficiency and accuracy (Sec tion 4). Moreover, we provide an incentive for using leaky RBM by showing that the leaky ReLt hidden units perform better than the Bernoulli units in terms of the model log-likelihood (Section 4).\nThe Boltzmann distribution is defined as p(x) = e-E(x) /Z where Z = x e-E(x) is the partition function. Restricted Boltzmann Machine (RBM) is a Boltzmann distribution with a bipartite struc- ture It is also the building block for many deep models (e.g., Hinton et al., 2006; Salakhutdinov & Hinton, 2009; Lee et al., 2009), which are widely used in numerous applications (Bengio, 2009). The. conventional Bernoulli RBM, models the joint probability p(v, h) for the visible units v E [0, 1]' and. the hidden units h E [0,1]J as p(v, h) x exp(-E(v, h)), where E(v, h) = aTv - vTWh + bTh.. The parameters are a E RI, b E RI and W RIJ. We can derive the conditional probabilities as.\nL p(vi =1|h) = Wijhj+ai and p(h;=1v)= j=1 i=1\nOne extension of Bernoulli RBM is replacing the binary visible units by linear units v E RI with independent Gaussian noise. The energy function in this case is given by\nFor leaky RBM, the activation function of hidden units is defined as f(nj) = max(cnj, nj), where c E (0,1) and nj = i=1 WU; + bj. The inverse function of f is f-1(hj) = min(hj, hj/c) Therefore, the anti-derivatives are.\nThe conditional distributions are as follows:\nif nj>O h3 else.\nWighj,1 p(v;[h) =N and p(h=1v)= Wii j=1 i=1\nThe activation function of Gaussian visible units can be treated as the linear unit f(v) = V, where V; = j=1 Wighj. Following the similar steps for deriving F and F*, we get the anti-derivatives F(vi) = 3v? and F*(vi) = v?\nFrom Ravanbakhsh et al. (2016), the conditional distribution is defined as\nBy plugging F' and F* into (12). we.. get the conditional distribution for leaky RBM\nFrom (1) and (2), we can see that the mean of the p(h;[v) is the nonlinearity of the hidden unit at nj = i=1 Wijui + b - e.g., mean of the Bernoulli unit is the sigmoid function. From this perspective, we can extend the sigmoid function to other functions and thus allow RBM to have more expressive power (Ravanbakhsh et al., 2016). In particular, it would be interesting to use rectified linear unit (ReLU) nonlinearity, f(n) = max(0, n), for generative modeling.\nJN(nj,1)with g(hj) = -log(2), if nj > 0 N(cn;,c)with g(h;) = -log(2cr), if nj 0. .1 with a(w\nI I J E(v,h)= Whj 20? 0 i i=1 i=1 j=1\nTo simplify the notation, we assume a normalized data so that a; and o; 1s no longer required elimination does not influence the discussion and one can easily extend all the results in this paper. to the model that includes a; and oi.)..\nif nj > O 2n? 2nj, else,\nwhere N(, V) is a Gaussian distribution with mean and variance V. To simplify the notation, in the following we define nj = WigUi + b, - that is nj is the input to the jth hidden layer neuron and similarly define v = 1 Wih, + ai. 
Using this notation the conditionals in the (2) are p(vi|vi) = N(vi,1) and p(h; = 1|nj) = o(nj).\np(h;[nj) = exp(-njh;+ F(ni) + F*(hj))"}, {"section_index": "3", "section_name": "A.2 JOINT AND MARGINAL DISTRIBUTIONS", "section_text": "Given the conditional distributions p(v|h) and p(h|v), the joint distribution p(v, h) from the general treatment for MRF model given by Yang et al. (2012) is\nvWh-(F*(vi)+g(vi))-(F*(hj)+g(hj) ) x exp OU. i=1 j=1\nBy Ravanbakhsh et al. (2016), the conditional probability of the activation, assuming the nonlinear ity f(nj), is generally defined as p(h;[v) = exp(-Df(nj||hj) + g(hj)), where Df(n;||hj) is the Bregman Divergence associated with f, and g(h;) is the base (or carrier) measure in the exponential family which ensures the distribution is well-defined. The Bergman divergence, for strictly mono the anti-derivative (integral) of f and F* is the anti-derivative of f-1 (i.e., f-1(f(n)) = n); Note that due to the strict monotonicity of f, f-1 is well-defined, and F and F* are commonly referred to as conjugate duals.\nN(nj,1), if nj > O N(cnj,c), if nj 0.\nHaving these two conditional distributions is enough for training a leaky RBM model using con trastive divergence (Hinton, 2002) or some other alternatives (e.g., Tieleman, 2008; Tieleman & Hinton, 2009).\nI ||w -W| =|UsvT-USvT|(Su-Su)2 i=1\nGiven the conditional distributions p(v|h) and p(h|v), the joint distribution p(v, h) from the genera treatment for MRF model is (Yang et al., 2012; Ravanbakhsh et al., 2016).\n(vWh-(F*(vi)+g(vi))-(F*(hj)+g(hj) p(v, h) x exp i=1 j=1"}, {"section_index": "4", "section_name": "D NECESSITY OF THE PROJECTION STEP", "section_text": "We conduct a short comparison to demonstrate the projection step is necessary for the leaky RBM. on generative tasks. We train two leaky RBM as follows. The first model is trained by the same. setting in Section 4. We use the convergence of log likelihood as the stopping criteria. The second. model is trained by CD-1 with weight decay and without the projection step. We stop the training when the reconstruction error is less then 10-2. After we train these two models, we run Gibbs. sampling with 1000 independent chains for several steps and output the average value of the visible units. Note that the visible units are normalized to zero mean. The results on SVHN and CIFAR10 are shown in Figure 5.\nv`Wh p(v,h) x exp 2 2c nj>0 nj0\nFrom Figure 5, the model trained by weight decay without projection step is suffered by the problen of the diverged values. It confirms the study shown in Section 3.1. It also implies that we canno\nNair & Hinton (2010) use an RBM with visible Gaussian unit and ReLU hidden activation functions for pretraining. They suggest sampling from max(0, n +N(0, o(n)) for conditional sampling from the hidden units (compare to (2)). However, this sampling heuristic does not suggest the parametric form of the joint ReLU-Gaussian distribution. This also means we cannot evaluate it using methods such as Annealed Importance Sampling that require access to this parametric form. In fact, only strictly monotonic activation functions can derive feasible joint and conditional distributions in the exponential familly RBM and ReLU is not strictly monotonic Ravanbakhsh et al. (2016). Similar activation functions that are monotonic are Softplus, f(n) = log(1 + e) and leaky ReLU (Maas et al., 2013), defined as f(n) = max(cnj, nj), where c E (0, 1) is the leakiness parameter. 
In con- trast to the ReLU RBM the joint parametric form of these two distributions are available. However, the energy (logarithm of the joint probability) in the case of Softplus activation function contains a polylogarithmic term that requires evaluation of an infinite series; see Table 1 in Ravanbakhsh et al. (2016). For this reason, here we focus on Leaky-ReLU activation function.\nlu|2 h v`wh x exp 2 nj>0 nj0\nh)dh X ex exp 2CT dh 2 2 2c nj>0 nj<0 lu|2 n3 I cna 11 X exp exp 2 2 2 nj>0 nj0 I- w,w-cw,w b;Wv+cb;W X exp u+ nj>0 nj0 nj>0 nj0\nConsidering the leaky ReLU activation function f(n) = max(cn,n), using this formalism, the conditional distributions of hidden units in the leaky RBM simplifies to (see Appendix A.1 for details)\nSince the visible units uses the identity function, the corresponding conditional distribution is a Gaussian'\nf. Since WwT-,a;W;W; = ,(1-a;)W;W 0, we have WWT , a;W;Wj efore, I -, a,W,W' I-WW' 0.\nJ p(vi|h) =N Wijhj,1 j=1\nG Weight Decay G Weight Decay * Projection * Projection 5 3 2 *-*-*-*-* 1 *-*-*-*-**-*-*-* 2 3 20 40 60 80 100 0 2 3 4 5 Gibbs Sampling Iterations Gibbs Sampling Iterations x104 (a) SVHN (b) CIFAR10\nand the corresponding visible marginal distribution is\nFigure 5: Divergence results on two datasets\ntrain leaky RBM with larger CD steps when we do not do projection; otherwise, we would have the diverged gradients. Therefore, the projection is necessary for training leaky RBM for the generative purpose. However, we also oberseve that the projection step is not necessary for the classification and reconstruction tasks. he reason may be the independency of different evaluation criteria (Hinton. 2012; Theis et al., 2016) or other implicit reasons to be studied.\nFrom (6) we see that the marginal probability is determined by the affine constraints n; > 0 or nj O for all hidden units j. By combinatorics, these constraints divide RI (the visible domain). into at most M = i=1 () ) convex regions R1, ... Rm. An example with I = 2 and J = 3 is. shown in Figure 1. If I > J, then we have at most 2J regions..\nWe analyze the performance gap between AIS-Leaky and AIS-Energy. One major difference is the initial distribution. The intermediate marginal distribution of AIS-Energy has the following form:\nWe discuss the two types of these regions. For bounded regions, such as Ri in Figure 1, the integra. tion of (6) is also bounded. which results in a valid distribution. Before we discuss the unboundec cases, we define = I - j=1 a,W,WJ, where a; = 1n >o + clln,o. For the unbounded region, if E R I I is a positive definite (PD) matrix, then the probability density is proportional to. (covariance matrix -1) but over an affine-constrained region. Therefore, the distribution of each. unbounded region can be treated as a truncated Gaussian distribution. The marginal distrubution can be treated as a union of truncated Gaussain distribution. Note that leaky RBM is different from Su . et al. (2017), which use single truncated Gaussian distribution to model joint (conditional) distribu. tions and require approximated and more complicated sampling algorithms for truncated Gaussian. 1istributior vhile 1eakv R BM onlv re mnlefrom distrihutio\nI(1k) wW-(1-Bk)cW;W, x exp nj>0 nj0\na multivariate Gaussian distribution with mean and precision matrix .\n(covariance matrix -1) but over an affine-constrained region. Therefore, the distribution of each unbounded region can be treated as a truncated Gaussian distribution. 
The marginal distrubution can be treated as a union of truncated Gaussain distribution. Note that leaky RBM is different from Su et al. (2017), which use single truncated Gaussian distribution to model joint (conditional) distribu- tions and require approximated and more complicated sampling algorithms for truncated Gaussian distribution, while leaky RBM only requires to sample from Gaussian distributions.\nTo address the higher bias problem of AIS-Energy, we replace the initial distribution with the one used in Algorithm 2. By elementary calculation, the marginal distribution becomes\nOn the other hand, if is not PD, and the region R, contains the eigenvectors with negative eigen values of , the integration of (6) over R, is divergent (infinite), which can not result in a valic. probability distribution. In practice, with this type of parameter, when we do Gibbs sampling on the conditional distributions, the sampling will diverge. However, it is unfeasible to check exponentially many regions for each gradient update..\nI-WW-(Bk+(1-Bk)c) x exp )) WW U nj>0 nj0\nTheorem 1. If I WwT is positive definite, then I - , ;W;WJ is also positive definite, fo all Q; E [0, 1].\nWe show the sampled images from leaky RBM train on CIFAR10 and SVHN datasets. We randomly initialize 20 chains and run Gibbs sampling for 1000 iterations. The sampled results are shown in Figure 6 The results shows that single layer RBM does not adequately model CIFAR10 and SVHN\nTheorem 2. The above projection step (7) can be done by shrinking the singular values to be less than 1.\nW3 R3 R5 V 2 y R6 R4 R1 W3 W R7 R3 R2 W2 Figure 2: An one dimensional Figure 3: A three dimensional example of truncated Gaussian. 1 example with 3 hidden units,. Figure 1: A two dimensional. distributions with different vari- where W, are orthogonal to example with 3 hidden units.. ances. each other. and the corresponding visible marginal distribution is - wwI-c wW v+bjWv+cbjW exp nj>0 nj0 nj>0 nj0 (6)\nI-ww-cww v+b;Wfv+cb;W exp X nj>0 nj<0 nj>0 nj<0\nwhich recovers the proposed Algorithm 2. From this analysis, we understand AIS-Leaky is a special. case of conventional AIS-Energy with better initialization inspired by the study in Section 3. Also, by this connection between AIS-Energy and AIS-Leaky, we note that AIS-Leaky can be combined. with other extensions of AIS (Grosse et al., 2013; Burda et al., 2015) as well..\nThe proof is shown in Appendix 1. From Theorem 1 we can see that if the constraint I - WwT is PD, then one can guarantee that the distribution of every region is a valid truncated Gaussian distribution. Therefore, we introduce the following projection step for each W after the gradient update.\nargmin W I-WWT 0 s.t.\nThe proof is shown in Appendix C. The training algorithm of the leaky RBM is shown in Algo rithm 1. By using the projection step (7), we could treat the leaky RBM as the union of truncate Gaussian distributions, which uses weight vectors to divide the space of visible units into severa regions and use a truncated Gaussian distribution to model each region. Note that the leaky RBM model is different from Su et al. (2016), which uses a truncated Gaussian distribution to model th conditional distribution p(h|v) instead of the marginal distribution.\nFigure 6: Sampled images from leaky RBM\nThe empirical study about the divergent values and the necessity of the projection step is shown i. Appendix D. Without the projection step, when we run Gibbs sampling for several iterations from the. 
model, the sampled values will diverge because the model does not have a valid marginal distributio. p(v). It also implies that we cannot train leaky RBM with larger CD steps without projection, whicl. would result in divergent gradients. The detailed discussion is shown in Appendix D..\nFigure 7: Sampled images in gray-scale from Bernoulli-Gaussian RBM trained on CIFAR10 (Ran zato & Hinton, 2010).\nwhen compared to multilayer models. The similar results for single layer Bernoulli-Gaussian RBM from Ranzato & Hinton (2010) (in gray scale) is shown in Figure 7. Therefore, we instead focused on quantitative evaluation of the log-likelihood in Table 3.."}, {"section_index": "5", "section_name": "F.2 COMPUTATIONAL TIME BETWEEN DIFFERENT SAMPLING STRATEGIES", "section_text": "The comparison in terms of CPU time of different sampling algorithms discussed in Section 5 is shown in Figure 8. Please note that the complexity of CD and Mix are the almost the same. Mix only need a few more constant time steps which can be ignored compared with sampling steps. Leaky is more time-consuming because of computing and decomposing the covariance matrix as we discussed in Section 5. We also report the execution time of each step of algorithms in Table 4."}, {"section_index": "6", "section_name": "F.3 STUDY ON RELU-BERNOULLI RBM", "section_text": "If we set the leakiness c to be 1, then (6) becomes a simple multivariate Gaussian distribution. N ((I - WWT)-1Wb, (I WWT)-1), which can be easily sampled without Gibbs sampling.. Also, the projection step (7) guarantees it is a valid Gaussian distribution. Then we decrease the. leakiness with a small e, and use samples from the multivariate Gaussian distribution when c = 1. as the initialization to do Gibbs sampling. Note that the distribution of each region is a truncated. Gaussian distribution. When we only decrease the leakiness with a small amount, the resulted dis- tribution is a \"similar' truncated Gaussian distribution with more concentrated density. From this. observation, we could expect the original multivariate Gaussian distribution serves as a good initial-. ization. The one-dimensional example is shown in Figure 2. We then repeat this procedure until we. reach the target leakiness. The algorithm can be seen as annealing the leakiness during the Gibbs. sampling procedure. The meta algorithm is shown in Algorithm 2. Next, we show the proposed. sampling algorithm can help both the partition function estimation and the training of leaky RBM..\nWe study the idea of annealing leakiness on the RBM model with leaky ReLU hidden units anc. Bernoulli visible units. We create the toy dataset with 20, 25 and 30 visible units as shown i1 Figure 9. The small datasets allow exact computation of the partition function. For each dataset, we. sample 60,000 images for training and 10,000 images for testing. We use 100 hidden units and PCI. to train the model. The log likelihood results are shown in Table 5..\nCompared to the Gaussian visible units case we study in Section 3, where p(v) is a multi-variate. Gaussian distribution when c = 1, the partition function of p(v) in ReLU-Bernoulli when c = 1 does not have the analytical form. Therefore, we do the following two-stage alternative. We first. run the standard AIS algorithm, which anneals the energy, to the distribution with leakiness c = 1.. We then change to anneals the leakiness from 1 to the target value. For the typical AIS algorithm. (AIS-Energy), we use 104 chains with 2 104 intermediate distributions. 
For the proposed two-. staged algorithm (AIS-Leaky), we use 104 chains with 104 intermediate distributions for annealing. to c = 1 and the other 104 distributions for annealing the leakiness. The results are shown in Table 6..\nIn Table 6, the standard AIS algorithm (AIS-Energy) has unsatisfactory performance. We show the performance of AIS for estimating the partition function of models with different leakiness on Toy20. We use the 104 independent chains and 2 104 intermediate distributions. The results are shown in Table 7. From Table 7, we observe that the AIS performances worse when the leakiness is closer to 0. Although we observed that increasing chains and intermediate distributions could improve the performance, but the improvements are limited. The study demonstrates when the"}, {"section_index": "7", "section_name": "PARTITION FUNCTION ESTIMATION", "section_text": "Table 4: The execution time (s) of each step of algorithms (1000 iterations)\nGibbs sampling is the core procedure for RBM, including training, inference, and estimating the partition function (Fischer & Igel, 2012; Tieleman, 2008; Salakhutdinov & Murray, 2008). For ev- ery task, we start from randomly initializing v by an arbitrary distribution q, and iteratively sample from the conditional distributions. Gibbs sampling guarantees the procedure result in the stationary distribution in the long run for any initialized distribution q. However, if q is close to the target dis- tribution p, it can significantly shorten the number of iterations to achieve the stationary distribution\nSannpnng1on DDIVI Sample v from N ((I - WWT)-1Wb,(I - WWT)-1) e = (1 - c)/T and c = 1 for t = 1,..., T do Decrease c' = c' e and perform Gibbs sampling by using (13) and (4) with leakiness c' end for\nIt is known that estimating the partition function of RBM is intractable (Salakhutdinov & Murray, 2008). Existing approaches, including Salakhutdinov & Murray (2008); Grosse et al. (2013); Liu et al. (2015); Carlson et al. (2016) focus on using sampling to approximate the partition function of the conventional Bernoulli RBM instead of the RBM with Gaussian visible units and non-Bernoulli hidden units. In this paper, we focus on extending the classic annealed importance sampling (AIS) algorithm (Salakhutdinov & Murray, 2008) to leaky RBM.\n-1520 -2000 -1540 -2020 -1560 -2040 C -1580 -2060 -1600 -2080 60 607 -1620 -2100 Q Q CD Q CD -1640 x x Mix -2120 -x Mix C & * Leaky * Leaky -1660 -2140 0 2000 4000 6000 8000 10000 0 2000 4000 6000 8000 10000 Running Time (s) Running Time (s) (a) SVHN (b) CIFAR10\nTable 1: The true partition function for Leaky-ReLU RBM with different number of hidden units\nand = 0\nFigure 8: Training leaky RBM with different sampling algorithms"}, {"section_index": "8", "section_name": "4.1 STUDY ON TOY EXAMPLES", "section_text": "(a) I = 20 (b) I = 25 (c) I = 30\nAs we discussed in Section 3.1, leaky RBM with J hidden units is a union of 2J truncated Gaussian. distributions. Here we perform a study on the leaky RBM with a small number hidden units. Since. in this example the number of hidden units is small, we can integrate out all possible configurations of h. However, integrating a truncated Gaussian distribution with general affine constraints does. not have analytical solutions, and several approximations have been developed (e.g., Pakman & Paninski, 2014). To compare our results with the exact partition function, we consider a special case. 
that has the following form:\nFigure 9: Toy Datasets with different number of visible units\nJ 1 1- Z (2)- Qj 2J QjE{1,c},Vj j=1\nWe randomly initialize W and use SVD to make columns orthogonal. Also, we scale |W l tc satisfy I - WWT 0. The leakiness parameter is set to be O.01. For Salakhutdinov & Murray (2008) (AIS-Energy), we use 105 particles with 105 intermediate distributions. For the proposed method (AIS-Leaky), we use only 104 particles with 103 intermediate distributions. In this small problem we study the cases when the model has 5, 10, 20 and 30 hidden units and 3072 visible units The true log partition function log Z is shown in Table 1 and the difference between log Z and the estimates given by the two algorithms are shown in Table 2.\nTable 5: The log lokelihood and true partition function for ReLU-Bernoulli RBM with different number of visible units.\nTable 6: The difference between the true partition function and the estimations of two algorithms with standard deviation.\nAssuming that we want to estimate the partition function Z of p(v) with p(v) = p*(v)/Z and p*(v) x _n exp(-E(v,h)), Salakhutdinov & Murray (2008) start from a initial distribution. Po(v) x h exp(-Eo(v, h)), where computing the partition Zo of po(v) is tractable and we can. draw samples from po(v). They then use the \"geometric path' to anneal the intermediate distribution. as pk(v) p(v) = n exp(-Eo(v, h) - (1 - k)E(v, h)), where they grid k from 1 to 0. If we let o = 1, we can draw samples vk from pk(v) by using samples Uk-1 from pk-1(v) for k 1 .. (i)\nSalakhutdinov & Murray (2008) use the initial distribution with independent visible units and with-. which results in a multivariate Gaussian distribution po(v). Compared with the meta algorithm. shown in Algorithm 2 which anneals between leakiness, AIS anneals between energy functions.\nw,wf-cw,w p(v) x exp I _ U nj>0 nj0\nCompared to (6), it is equivalent to the setting where b = 0. Geometrically, every W, passes through the origin. We further put the additional constraint W, I W;, Vi / j. Therefore. we divide the whole space into 2 equally-sized regions. A three dimensional example is shown in Figure 3. Then the partition function of this special case has the analytical form\nFrom Table 1, we observe that AIS-Leaky has significantly better and more stable estimations than AIS-Energy especially and this gap increases as we increase the number of hidden units.. AIS-Leaky achieves this with orders magnitude reduced computation -e.g., here it uses ~.1%. of resources used by conventional AIS. For example, when we increase J from 5 to 30, the bias (dif-. ference) of AIS-Leaky only increases from 0.02 to 0.13; however, the bias of AIS-Energy increases from 1.76 to 9.6. We further study the implicit connection between the proposed AIS-Leaky and. AIS-Energy in Appendix E, which shows AIS-Leaky is a special case of AIS-Energy under certain. conditions.\nnon-linearity of the distribution increases (the leakiness value c decreases), the standard AIS canno. effectively estimate the partition function within feasible computational time. On the other hand, i1. also confirm the proposed idea, annealing the leakiness, can serve as an effective building block for. algorithms without enhancing the algorithm complexity. Note that the unsatisfactory performance of AIS may be addressed by Grosse et al. (2013). From Appendix E, the two-stage algorithm used. here can also be improved by applying Grosse et al. 
(2013).\nTable 2: The difference between the true partition function and the estimations of two algorithn with standard deviation.\nTable 7: The difference (with standard deviation) between the true partition function and the esti mations of AIS-Energy under different leakiness..\nTable 3: The log-likelihood performance of Bernoulli-Gaussian RBM and leaky RBM"}, {"section_index": "9", "section_name": "F.3.1 MNIST AND CALTECH DATASETS", "section_text": "It is known that the reconstruction error is not a proper approximation of the likelihood (Hinton, 2012). One commonly adopted way to compare generative models is to sample from the model, and visualize the images to check the quality. However, Theis et al. (2016) show the better visu- alization does not imply better likelihood. Also, the single layer model cannot adequately model the complicated natural images (the result for Bernoulli-Gaussian RBM has been shown in Ran- zato & Hinton (2010)), which makes the visualization comparison difficult (Appendix F has few visualization results).\nWe study MNIST and Caltech 101 Silhouettes datasets with 500 hidden units and train the mode with CD-25. The results are shown in Table 8 and Table 9. The leaky RBM is better than con ventional Bernoulli RBM and some deep models on MNIST data. Although leaky RBM deos no outperform Su et al. (2017), but it enjoys the advantage of the simpler sampling procedure (Gaussia1 distribution vs truncated Gaussian distribution) in the binary visible unit case.\nFortunately, our accurate estimate of the partition function for leaky RBM can produce a reli able quantitative estimate of the representation power of leaky RBM. We compare the Bernoulli. Gaussian RBM2, which has Bernoulli hidden units and Gaussian visible units. We trained both. nodels with CD-203 and momentum. For both model, we all used 500 hidden units. We initializec W by sampling from Unif(0, 0.01), a = 0, b = 0 and o = 1. The momentum parameter was 0.9 anc. the batch size was set to 100. We tuned the learning rate between 10-1 and 10-6. We studied twc. benchmark data sets, including CIFAR10 and SVHN. The data was normalized to have zero mear. and standard deviation of 1 for each pixel. The results of the log-likelihood are reported in Table 3.\nTable 8: The testing log-likelihood result on MNIST\nFrom Table 3, leaky RBM outperforms Bernoulli-Gaussian RBM significantly. The unsatisfactory performance of Bernoulli-Gaussian RBM may be in part due to the optimization procedure. If we tune the decay schedule of the learning-rate for each dataset in an ad-hoc way, we observe the performance of Bernoulli-Gaussian RBM can be improved by ~ 300 nats for both datasets. Also. increasing CD-steps brings slight improvement. The other possibility is the bad mixing during the CD iterations. The advanced algorithms Tieleman (2008); Tieleman & Hinton (2009) may help Although Nair & Hinton (2010) demonstrate the power of ReLU in terms of reconstruction error and classification accuracy, it does not imply its superior generative capability. Our study confirms leaky RBM could have much better generative performance compared to Bernoulli-Gaussian RBM.\nTable 9: The testing log-likelihood result on Caltech 101 Silhouettes.\nIn this section, we show the idea of annealing between leakiness benefit the mixing in Gibbs sam pling in other settings. A common procedure for comparison of sampling methods for RBM is through visualization. 
Here, we are interested in more quantitative metrics and the practical benefits of improved sampling. For this. we consider optimization performance as the evaluation metric.\nThe gradient of the log-likelihood function L(0|vd of general RBM models is\naL(0\\vdata) aE(v,h) dE(v,h) Eh\\vdata de de de\nSince the second expectation in (9) is usually intractable, different approximation algorithms are used (Fischer & Igel, 2012).\n-1520 -2000 -1540 -2020 -1560 -2040 -1580 -2060 -1600 2080 607 607 -1620 -Q CD -2100 Q CD -x Mix -x Mix -1640 X - Leaky -2120 & - Leaky - PCD PCD -1660 -2140 0 0.5 1 1.5 2 0 0.5 1 1.5 2 Iterations 104 Iterations x104 (a) SVHN (b) CIFAR10\nFigure 4: Training leaky RBM with different sampling algorithms\nThe results are shown in Figure 4. The proposed sampling procedure is slightly better than typical CD steps. The reason is we only anneals the leakiness for 20 steps. To get accurate estimation requires thousands of steps as shown in Section 4 when we estimate the partition function. There- fore, the estimated gradient is still inaccurate. However, it still outperforms the conventional CD algorithm. On the other hand, unlike the binary RBM case shown in Tieleman (2008), PCD does not outperform CD with 20 mixing steps for leaky RBM.\nThe drawback of Algorithm 2 is that sampling v from N ((I - WW')-1Wb, (I - WW')-1. requires computing mean, covariance and the Cholesky decomposition of the covariance matrix in every iteration, which are computationally expensive. We study a mixture algorithm by combin ing CD and the idea of annealing leakiness. The mixture algorithm replaces the sampling fron N ((I - WW ')-1Wb, (I - WW ')-1) with sampling from the empirical data distribution. The. resulted mix algorithm is almost the same as CD algorithm while it anneals the leakiness over the iterations as Algorithm 2. The results of the mix algorithm is also shown in Figure 4..\nIn this paper, we study the properties of the exponential family distribution produced by leaky RBM This study relates the leaky RBM model and truncated Gaussian distribution and reveals an under lying positive definite constraint of training leaky RBM. We further proposed a meta sampling algo rithm, which anneals between leakiness during the Gibbs sampling procedure. We first demonstrate the proposed sampling algorithm is significantly more effective and efficient in estimating the par tition function than the conventional AIS algorithm. Second, we show that the proposed sampling algorithm has comparatively better mixing properties (compared to CD). A few direction are worth further study; in particular we are investigating on speeding up the naive projection step; either us ing the barrier function as shown in Hsieh et al. (2011) or by eliminating the need for projection by artificially bounding the domain via additional constraints..\n4We studied the PCD extension of the proposed sampling algorithm. However, the performance is not a stable as CD.\nIn this section, we compare two gradient approximation procedures. The baselines are the conven- tional contrastive divergence (CD) (Hinton, 2002) and persistent contrastive divergence (Tieleman 2008) (PCD). The second method is using Algorithm 2 (Leaky) with the same number of mixing steps as CD. The experiment setup is the same as that of Section 4.\nThe mix algorithm is slightly worse than the original leaky algorithm, but it also outperforms the conventional CD algorithm without additional computation cost. 
The comparison in terms of CPU time is shown in Appendix F. Annealing the leakiness helps the mix algorithm explore different modes of the distribution, thereby improves the training. The idea could also be combined with more advanced algorithms (Tieleman, 2008; Tieleman & Hinton, 2009)4."}]
HyenWc5gx
[{"section_index": "0", "section_name": "REPRESENTATION STABILITY AS A REGULARIZER FOR IMPROVED TEXT ANALYTICS TRANSFER LEARNING", "section_text": "Matthew Riemer. Elham Khabiri. and Richard Goodwin\nAlthough neural networks are well suited for sequential transfer learning tasks, the catastrophic forgetting problem hinders proper integration of prior knowledge. In this work, we propose a solution to this problem by using a multi-task objective based on the idea of distillation and a mechanism that directly penalizes forget- ting at the shared representation layer during the knowledge integration phase of training. We demonstrate our approach on a Twitter domain sentiment analysis task with sequential knowledge transfer from four related tasks. We show that our technique outperforms networks fine-tuned to the target task. Additionally, we show both through empirical evidence and examples that it does not forget useful knowledge from the source task that is forgotten during standard fine-tuning. Sur- prisingly, we find that first distilling a human made rule based sentiment engine into a recurrent neural network and then integrating the knowledge with the target. task data leads to a substantial gain in generalization performance. Our experi- ments demonstrate the power of multi-source transfer techniques in practical text analytics problems when paired with distillation. In particular, for the SemEval 2016 Task 4 Subtask A (Nakov et al.]2016) dataset we surpass the state of the art established during the competition with a comparatively simple model archi- tecture that is not even competitive when trained on only the labeled task specific. data.\nTable 4: Some transfer learning examples from each knowledge source to SemEval 2016 where the GRU model successfully predicts sentiment when using the forgetting cost paradigm, but not with. fine-tuning based integration.\nConsidering that we have shown a neural network can distill and improve a representation learnec by a logical rule engine, how the final representation differs from the logic of the original engin. is of practical interest. We thus compare the agreement of our fine-tuned rule based GRU with the original rule model on the SemEval testing set. We find that the transferred model achieves 78.7%. agreement with the rule model when the rule model is right. This clearly indicates that our fina. model is not deterministic based on the rule engine, and has a probability of adding errors ever. when the original rule model works well. However, our model actually has 44.7% accuracy on the. examples the rule model got wrong. Our approach yields significant gains in comparison to the. original rule classifiers, improving from 57.8% to 64.4% test set accuracy before even incorporating in auxiliary knowledge sources."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In our experiments we tried to find a balance between an ensemble model that is powerful enougl. to have an adaptive weighted average decision function and not so powerful that it overfits on our. limited training and validation data. Our model is quite similar in architecture to the gating network. component of a hierarchical mixture of experts model (Jacobs et al.[1991), (Jordan & Jacobs|[1994) We tried our model over all four representations at once and found that it overfits. Our experiments. showed it is more effective to adopt a greedy ensembling strategy where all models are combined. 
with the best performing model on the validation set at each phase until only two models are left. Finally, these two models are combined with the same mechanism. (Riemer et al., 2016) suggests that a many element gating network can be improved with a sparsity constraint, but this did not work as well as the greedy strategy for our model and experiments.

More formally, for any two models A and B combined in an ensemble, we train the following mechanism using Stochastic Gradient Descent:

Source | Tweet | Label | Fine-Tuning | Forgetting Cost
Logical Rules | John Kasich should feel proud of his performance at the #GOPDebate Thursday night. He looked more presidential than the rest of the field. | Positive | Neutral | Positive
Logical Rules | @BrunoMars I'm so tired of you dressing like you ain't got no money. You went from wearing Gucci loafers to 6th grade boy Sketchers. | Negative | Neutral | Negative
Logical Rules | @DavidVonderhaar loving the beta Vahn, even playing it on PC with a PS4 controller without aim assist, can't wait for November 6. | Positive | Neutral | Positive
Movie Reviews | Selena Gomez presented Amy Schumer with an award and a heap of praise at the Hollywood Film Awards on November 1. | Positive | Negative | Positive
Movie Reviews | mailjet: It's Fri...we mean Star Wars Day. May the force be with all of your emails! https://t.co/FbDdjiJVUT | Positive | Neutral | Positive
Movie Reviews | Straight Outta Compton's success hopefully convinces New Line Cinema to give Ice Cube the right budget for the last Friday movie. | Positive | Neutral | Positive
Emoticons | That ball Kris Bryant just hit is the 2nd farthest ball I've ever seen hit. He is officially ridiculous. | Positive | Neutral | Positive
Emoticons | This fandom's a mess omg, I wouldn't be surprised if tomorrow there's a trend who says Niall's going to marry his cousin #WeKnowTheTruth | Negative | Positive | Negative
Emoticons | Christians snapchat story makes me want to kill myself..like I feel like a depressed 8th grader going through that emo phase | Negative | Neutral | Negative

Sequential transfer learning methodologies leverage knowledge representations from a source task in order to improve performance for a target task. A significant challenge faced when transferring neural network representations across tasks is that of catastrophic forgetting (or catastrophic interference). This is where a neural network experiences the elimination of important old information when learning new information. The very popular strategy of fine-tuning a neural network involves first training a neural network on a source task and then using the model to simply initialize the weights of a target task network up to the highest allowable common representation layer. However, it is highly susceptible to catastrophic forgetting, because in training for the target task it has no explicit incentive to retain what it learned from the source task. While one can argue that forgetting the source task should not matter if only the target task is of interest, our paper adds to the recent empirical evidence across problem domains (Li & Hoiem, 2016), (Rusu et al., 2016) that show additional network stability can lead to empirical benefits over the fine-tuning algorithm. It seems as though for many Deep Learning problems we can benefit from an algorithm that promotes more stability to tackle the well known stability-plasticity dilemma. One popular approach for addressing this problem is rehearsals (Murre, 1992), (Robins, 1995). Rehearsals refers to a neural network training
strategy where old examples are relearned as new examples are learned. In the transfer setting it can be seen as related to multi-task learning (Caruana, 1997) where two tasks are trained at the same time, rather than sequentially, while sharing a common input encoder to a shared hidden representation. However, in rehearsals the representation is biased in favor of the source task representation through initialization. This technique is very sensible because while fine-tuning is susceptible to catastrophic forgetting, multi-task learning is not (Caruana, 1997).

Model Description | Accuracy on SemEval Test Set
Distilled GRU Trained on Full Ensemble | 66.0%
Full Ensemble | 65.9%
Ensemble with Logical Rules and Both Movie Review Tasks | 65.7%
Ensemble with Logical Rules and Binary Movie Reviews | 65.4%
Ensemble with Logical Rules and Five Class Movie Reviews | 65.1%
Ensemble with Logical Rules and Emoticon Prediction | 65.0%
Ensemble with Both Movie Review Tasks | 62.1%
GRU Trained on Only SemEval Data | 53.6%
SwissCheese (Bethard et al., 2016) | 64.6%
NTNUSentEval (Jahren et al., 2016) | 64.3%
UniPI (Attardi & Sartiano, 2016) | 63.9%
CUFE (Nabil et al., 2016) | 63.7%
INSIGHT-1 (Ruder et al., 2016) | 63.5%

One of the biggest issues with the standard rehearsals paradigm is that it requires a cached memory of training examples that have been seen in the past. This can be a massive requirement as the number of source tasks and training data sizes scale. One compelling technique for addressing this problem is the concept of pseudorehearsals (Robins, 1995), (Robins, 1996), where relearning is performed on an artificially constructed population of pseudoitems instead of the actual old examples. Unfortunately, current automatic techniques in the text analytics domain have not yet mastered producing linguistically plausible data. As such, the pseudorehearsals paradigm is likely to waste computational time that could be spent on learning realistic patterns that may occur during testing. In our work, we extend the Learning without Forgetting (LwF) paradigm of (Li & Hoiem, 2016) to the text analytics domain using Recurrent Neural Networks. In this approach, the target task data is used both for learning the target task and for rehearsing information learned from the source task by leveraging synthetic examples generated for the target task input by the model that only experienced training on the source task data. As argued by Li & Hoiem (2016), this setup strikes an important balance between classification performance, computational efficiency, and simplicity in deployment.

Table 5: Empirical three way sentiment classification results on the SemEval 2016 Task 4 Subtask A test set.

Regardless of whether they are applied to real source task examples, real target task examples or synthetic examples, paradigms in the style of rehearsals all address the shortcomings of neural network forgetting by casting target task integration as a multi-task learning problem. However, this is not quite the purpose of the multi-task learning architecture, which was designed for joint learning of tasks from scratch at the same time. The key disconnect is that in multi-task learning, the transformation from the shared hidden layer to the outputs for each task are all learned and updated with the changing hidden representation. This would imply that, in the framework of rehearsals, it is possible for there to be significant changes during learning of the network's representation, and thus its abilities on the source task itself.
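To make the disconnect concrete, here is a minimal sketch of the rehearsal-style joint objective this paragraph describes; every name is hypothetical. The point is that the same jointly trained parameters produce both terms, so nothing anchors the source-task behavior.

```python
import numpy as np

def joint_rehearsal_loss(y_pred, y_true, y_src_pred, y_src_soft, mix=0.5):
    """Rehearsal/multi-task style objective: a weighted sum of the target-task
    loss and a loss against soft labels produced by the source-task model.
    Because the head producing y_src_pred is itself trained jointly here, the
    shared representation (and hence source-task skill) is free to drift,
    which is the failure mode discussed above."""
    mse = lambda a, b: float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
    return (1.0 - mix) * mse(y_pred, y_true) + mix * mse(y_src_pred, y_src_soft)

# Toy demo on 3-class probability vectors.
p = np.array([0.2, 0.5, 0.3])
print(joint_rehearsal_loss(p, np.array([0.0, 1.0, 0.0]),
                           p, np.array([0.1, 0.6, 0.3])))
```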
While it would be desirable to claim we were allowing our source task network to become even better based on the target task than it was before, this motivation seems idealistic in practice. One reason this is idealistic is because multi-task learning generally only works well when tasks are sampled at different rates or alternatively given different priority in the neural network loss function (Caruana, 1997). As a result, it is most likely that auxiliary source tasks will receive less priority from the network for optimization than the target task. Additionally, we observe in our experiments, and it has been observed by others in (Rusu et al., 2015), that it is generally not possible to distill multiple complex tasks into a student network at full teacher performance for all tasks. This seems to imply the degradation of the source task performance during training is somewhat inevitable in a multi-task learning paradigm.

where y_ensemble is the prediction vector of the combined ensemble, and y_A and y_B are the output vectors of the individual models."}, {"section_index": "2", "section_name": "6.2 ENSEMBLE RESULTS", "section_text": "We address this issue with our proposed forgetting cost technique. We demonstrate that it, in fact, can be valuable to keep the hidden to output transformation of the source tasks fixed during knowledge integration with the target task. This way, we impose a stronger regularization on the hidden representation during target task integration by not allowing it to change aspects that were important to the source task's performance without direct penalization in the neural network's loss function. We demonstrate empirically both that freezing the source task specific weights leads to less deterioration in the accuracy on the source task after integration, and that it achieves better generalization performance in our setting. The forgetting cost is practical and easy to implement in training any kind of neural network. In our experiments, we explore application of the forgetting cost in a recurrent neural network to the three way Twitter sentiment analysis task of SemEval 2016 Task 4 Subtask A and find it to achieve consistently superior performance to reasonable baseline transfer learning approaches in four examples of knowledge transfer for this task.

Our ensemble model was trained on what was set aside as the validation data during the initial training with early stopping. In the first phase of combining, the model transferred from the logical rule source task was combined with each model. In the second phase, the model based on transfer from the binary movie review sentiment model was combined with each model. In the third phase, the two remaining models were combined. The results of our ensemble in Table 5 suggest that it is possible to further improve the performance of a single sequential transfer model by intelligently combining its predictions with models that have other perspectives. This is because they are modeled using different source tasks for prior knowledge. Impressively, our final distilled model surpasses results from all prior models on the SemEval 2016 benchmark using the same final architecture of a 50 hidden unit GRU model that is clearly not even competitive when trained simply on the task specific labeled data. The prior best model SwissCheese (Bethard et al., 2016) consists of a random forests ensemble built utilizing multiple convolutional neural network models and distant supervision.
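As a concrete illustration of the two-model gated averaging trained for these ensembles (the m, a, and y_ensemble equations reproduced below), here is a minimal NumPy sketch. Treating the gate output m as a scalar and the toy dimensions are my assumptions; the paper leaves those details implicit.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_ensemble(y_a, y_b, W_a, b_a, W_b, b_b):
    """Two-model gated average from Section 6.2: each model's output vector is
    mapped through a learned sigmoid unit to a confidence m, and the
    normalized confidences weight the final prediction."""
    m_a = sigmoid(W_a @ y_a + b_a)   # m_A = sigma(W_A y_A + b_A)
    m_b = sigmoid(W_b @ y_b + b_b)   # m_B = sigma(W_B y_B + b_B)
    a_a = m_a / (m_a + m_b)          # a_A = m_A / (m_A + m_B)
    a_b = m_b / (m_a + m_b)          # a_B = m_B / (m_A + m_B)
    return a_a * y_a + a_b * y_b     # y_ensemble = a_A y_A + a_B y_B

# Toy usage with random (untrained) gate parameters over 3-class outputs.
rng = np.random.default_rng(0)
y_a, y_b = rng.dirichlet(np.ones(3)), rng.dirichlet(np.ones(3))
W_a, W_b = rng.normal(size=(1, 3)), rng.normal(size=(1, 3))
print(gated_ensemble(y_a, y_b, W_a, 0.0, W_b, 0.0))
```

In practice W_a, b_a, W_b, b_b would be trained with SGD against the target labels, exactly as the surrounding text describes.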
In fact, we achieve superior results despite using over an order of magnitude less total data for training our model.

We also demonstrate how powerful distillation can be in the domain of text analytics when paired with the idea of the forgetting cost. Significantly, we show that a high quality gazetteer based logical rule engine can be distilled using unlabeled data into a neural network and used to significantly improve performance of the neural network on the target task. This is achieved with a novel extension of the LwF paradigm by Li & Hoiem (2016) to the scenario of a source task with the same output space as the target task. This can be a very promising direction for improving the ability of humans to directly convey knowledge to deep learning algorithms. Indeed, a human defined rule can contain far more information than a single training example, as that rule can be projected on to many unlabeled examples that the neural network can learn from. This is the reason human teachers generally begin teaching human students tasks by going over core rules at the onset of learning. Moreover, we showcase that multiple expert networks trained on the target task with prior knowledge from different source tasks can be effectively combined in an ensemble and then distilled into a single GRU model (Cho et al., 2014), (Chung et al., 2014). Leveraging this combination of distillation

We would also like to underscore that our total improvement of 1.5% as a result of creating an ensemble with our best transferred model from the logical rule source task can be viewed as quite disappointing, despite achieving state of the art results. In fact, in the theoretical limit of having a decision model that switches to the best already learned model at each point, our four transferred representations would achieve 85.1% accuracy together. For the combination of the movie review based models and logical rule based model we can get to 81.4% accuracy. Moreover, we can get 76.5% accuracy with just the logical rule based transfer model and the emoticon prediction based transfer model. Unfortunately, we achieve nowhere near these theoretical results despite representations that are apparently quite diverse. This seems indicative that there are significant gains yet to be uncovered in integrating these representations.

m_A = σ(W_A y_A + b_A)
m_B = σ(W_B y_B + b_B)
a_A = m_A / (m_A + m_B)
a_B = m_B / (m_A + m_B)
y_ensemble = a_A y_A + a_B y_B"}, {"section_index": "3", "section_name": "7 CONCLUSION", "section_text": "We consider a new methodology called the forgetting cost for preventing the catastrophic forgetting problem of neural network sequential transfer learning. The forgetting cost is practical and easy to implement. We have demonstrated for the challenging task of Twitter sentiment analysis that it can uncover significant gains in generalization performance and that it seems to not forget knowledge traditionally forgotten from the source task during fine-tuning. Our strong empirical results still motivate multiple avenues with high potential for continued exploration in text analytics. Using logical rules to improve neural network models is a promising direction for humans to efficiently contribute to increased model performance.
Additionally, the large diversity of representations learned from multiple classifiers with the same target task but different source tasks seems to indicate there is. potential to see even much greater gains when integrating multiple sources of knowledge transfer.."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Giuseppe Attardi and Daniele Sartiano. Unipi at semeval-2016 task 4: Convolutional neural net works for sen-timent classification. Proceedings of SemEval, pp. 220-224, 2016.\nYoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157-166, 1994.\nTianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641, 2015\nArtur S d'Avila Garcez, Krysia Broda, and Dov M Gabbay. Neural-symbolic learning system. foundations and applications, 2012\nThe combination of logic rules and neural networks has been explored in a variety of different archi. tectures and settings. These neural-symbolic systems (Garcez et al.[2012) include early examples. such as KBANN (Towell et al.][1990) that construct network architectures from given rules to per- form reasoning. (Hu et al.|2016) very recently also looked at the problem of distilling logical rules. into a neural network text analytics classifier. However, our approach is much more generic as it can. be applied to integrate knowledge from any kind of pre-made classifier and treats the rule engine as. a black box. In (Hu et al.[2016) they consider the individual rules and leverage an iterative convex. optimization algorithm alongside the neural network to regularize the subspace of the network. In our work we demonstrate that, by guarding against catastrophic forgetting, it is possible to efficiently. leverage rules for transfer by utilizing a generic sequential knowledge transfer framework. We do\nAlec Go, Richa Bhayani, and Lei Huang. Twitter sentiment classification using distant supervision 2009.\nGeoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.\nSince the work of (Bucilu et al.|2006) and (Hinton et al.|. 2015) showed that an ensemble of neural network classifier can be distilled into a single model, knowledge distillation from a teacher network to a student network has become a growing topic of neural network research. In (Ba & Caruana. 2014) it was shown that a deep teacher neural network can be learned by a shallow student network. This idea was extended in (Romero et al.2014), where it was demonstrated that a deep and nar- row neural network can learn a representation that surpasses its teacher. The use of distillation as. a means of sharing biases from multiple tasks was explored in (Lopez-Paz et al.]2016), where the. teacher network is trained with the output of the other tasks as input. It is not obvious how to extend a recurrent neural network to best use this kind of capability over a sequence. The idea of distill- ing from multiple source task teachers into a student network was highlighted in the reinforcement learning setting in (Rusu et al.]2015). Additionally, the concept of using distillation for knowledge. transfer was also explored in (Chen et al.|2015), where function preserving transformations from smaller to bigger neural network architectures were outlined. 
This technique could also provide value in some instances for our approach where wider or deeper neural networks are needed for the task being transferred to than was needed for the original task. Distillation over target task data was first proposed as a means of alleviating catastrophic forgetting in sequential knowledge transfer as applied to image classification in (Li & Hoiem, 2016). We extend this approach for its first application to our knowledge for text analytics problems, with a recurrent neural network architecture, and in the setting where the source task and target task have the same output. The chief distinction of our proposed forgetting cost is that source task specific parameters are held fixed during integration with the target task as opposed to the joint training of all parameters used by Li & Hoiem (2016). Our experiments empirically support the intuition that freezing these parameters leads to greater retention of source task performance after target task integration and better generalization to the target task.

Steven Bethard, Daniel M. Cer, Marine Carpuat, David Jurgens, Preslav Nakov, and Torsten Zesch (eds.). Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016, 2016. The Association for Computer Linguistics. ISBN 978-1-941643-95-2. URL http://aclweb.org/anthology/s/s16/

An ensemble over multiple diverse models trained for the same sentiment analysis task was also considered in (Mesnil et al., 2014) for the IMDB binary movie reviews sentiment dataset (Maas et al., 2011). We tried this ensemble model in our work and found that it gave very limited improvement. Our ensemble technique learns a more powerful weighted average based on the soft targets of each task and a multi-step greedy binary fusion approach that works better for the Twitter sentiment analysis task in our experiments. Knowledge transfer from multiple tasks was considered to estimate the age of Twitter users based on the content of their tweets in (Riemer et al., 2015). We experimented with the hidden layer sharing approach outlined in that work and found that even when using just a single softmax combining layer, it would overfit on our limited training and validation data. Progressive neural networks (Rusu et al., 2016) is a recently proposed method very similar in motivation to our forgetting cost as it is directly trying to solve the catastrophic forgetting problem. The idea is that learned weight matrices relate the fixed representations learned on the source task to the construction of representations for the target task. In our experiments, the progressive neural networks approach consistently fails to even match the results achieved with fine-tuning. We hypothesize that although using fixed representations to aid learning addresses catastrophic forgetting, it suffers from the curse of dimensionality. As such, when training data is relatively small given the complexity of the task, it is prone to overfitting as it effectively increases the input dimension size through shared fixed representations.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555,
2014.

not need to make any modification to the architecture of the neural network during testing and do not need iterative convex optimization during training.

In the sequential knowledge transfer problem setting explored in this paper, training is first conducted solely on the source task examples S, including K_S training examples (x_Si, y_Si) ∈ S, where x_Si is the input representation and y_Si is the output representation. After training is complete on S, we would like to now use prior knowledge obtained in the model trained on S to improve generalization on a new target task with examples T, which includes K_T training examples (x_Ti, y_Ti) ∈ T. Here we assume that the input representations x_Si and x_Ti are semantically aligned in the same representation space. As such, if there is useful knowledge in S that applies in some direct or indirect way to the target task that is not present in T, we would expect a good knowledge integration approach to generalize better to the target task than is possible using the training data in T alone. Strong performance for the sequential knowledge transfer problem is a first step towards the greater goal of a mechanism for effective lifelong learning (Thrun, 1996).

Brage Ekroll Jahren, Valerij Fredriksen, Bjorn Gamback, and Lars Bungum. Ntnusenteval at semeval-2016 task 4: Combining general classifiers for fast twitter sentiment analysis. Proceedings of SemEval, pp. 103-108, 2016.

Michael I Jordan and Robert A Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181-214, 1994."}, {"section_index": "5", "section_name": "3.2 FORGETTING COST FOR TUNING A TARGET TASK MODEL", "section_text": "David Lopez-Paz, Leon Bottou, Bernhard Scholkopf, and Vladimir Vapnik. Unifying distillation and privileged information. stat, 1050:26, 2016.

Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pp. 142-150. Association for Computational Linguistics, 2011.

where L is some loss function (we use mean squared error in our experiments) and y_init is the soft label generated for the target task input x_Ti based on the model after training just on S. The model trained just on S is also used to initialize the weights of the target task model before integration with T, as we do in the standard fine-tuning paradigm. α_f is a hyperparameter that can be utilized to control the extent of allowed forgetting. Of course, a very similar way to express this idea would be to mix synthetic training examples T' with the same input as T and output generated by the model trained just on S with the true target task training examples T. In this case, the mixing rate of the teacher generated training examples is analogous to our forgetting parameter α_f determining the prioritization. These techniques perform quite similarly in our experiments, but we actually find that the formulation in equations 1 and 3 performs slightly better on the test set. For example, this formulation is superior by 0.4% accuracy in tuning a distilled representation of a logical rule engine. We conjecture that learning tasks in the same gradient step when they are related to the same input data results in slightly less noisy gradients.

Jacob MJ Murre. Learning and categorization in modular neural networks. 1992.

Mahmoud Nabil, Mohamed Aly, and Amir F Atiya.
Cufe at semeval-2016 task 4: A gated recurrent model for sentiment classification. Proceedings of SemEval, pp. 52-57, 2016.

Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-1543, 2014."}, {"section_index": "6", "section_name": "3.3 FORGETTING COST FOR KNOWLEDGE TRANSFER FROM A RELATED TASK", "section_text": "The assumption in section 3.2 that the output of the source task data S should be in the same representation space as the output for the target task data T is quite a big one. It rules out the vast majority of knowledge sources that we can potentially leverage. As such, we propose an extension that does not make this restriction for application in sequential knowledge transfer of tasks that are not directly semantically aligned. We update our model to include another predicted output separate from ŷ:

ŷ_init = f_init(W_fixed h_shared + b_fixed)

Anthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2):123-146, 1995.

where ŷ_init is a predicted output attempting to recreate the soft labels of the original model trained just on S. f_init is the non-linearity used in the final layer of the source task model. Weight matrix W_fixed and bias b_fixed are taken from the final layer of the source task model and are not updated

Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. Harnessing deep neural networks with logic rules. arXiv preprint arXiv:1603.06318, 2016.

Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285, 2015.

Zhizhong Li and Derek Hoiem. Learning without forgetting. In European Conference on Computer Vision, pp. 614-629. Springer, 2016.

The most straightforward application of our proposed forgetting cost paradigm is for the case of integrating a neural network that has been trained on source task data S, which has outputs in the same representation space as the outputs for the target task data T. In this case, the forgetting cost amounts to the addition of a regularization term in the objective function during the integration phase when we train using T. This promotes the neural network to be able to recreate the soft labels of the initialized model found after training on S before integration is started with T. More formally:

Loss = L(ŷ, y) + α_f L(ŷ_init, y_init)

Anthony Robins. Consolidation in neural networks and in the sleeping brain. Connection Science, 8(2):259-276, 1996.

during integration with the target task data T. As a result, the loss function is updated from section 3.2.

Sebastian Ruder, Parsa Ghaffari, and John G Breslin. Insight-1 at semeval-2016 task 5: Deep learning for multilingual aspect-based sentiment analysis. arXiv preprint arXiv:1609.02748, 2016.

where the hidden state is shared between both terms in the objective function. Up to the shared hidden layer, we initialize the model for the target task with the weights learned just using S. Random matrices and bias vectors are now used to initialize the prediction of ŷ based on the shared hidden representation. This can be seen as a weak form of restricting the model parameters that can be useful for regularization. The hidden representation is in effect constrained so that it is promoted
not to change in key areas that have a large effect on the output vector of the source task model. On the other hand, there is little regularization for parameters that have little effect on the output vector for the source task model.

Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.

Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.

Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), volume 1631, pp. 1642. Citeseer, 2013."}, {"section_index": "7", "section_name": "RECURRENT NEURAL NETWORK MODEL", "section_text": "In recent years, recurrent neural network models have become a tool of choice for many NLP tasks. In particular, the LSTM variant (Hochreiter & Schmidhuber, 1997) has become popular as it alleviates the vanishing gradients problem (Bengio et al., 1994) known to stop recurrent neural networks from learning long term dependencies over the input sequence. In our experiments we use the simpler GRU network (Cho et al., 2014), (Chung et al., 2014) that generally achieves the same accuracy despite a less complex architecture. Each time step t is associated with an input x_t and a hidden state h_t. The mechanics of the GRU are defined with the following equations:

Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.

Sebastian Thrun. Is learning the n-th thing any easier than learning the first? Advances in neural information processing systems, pp. 640-646, 1996.

z_t = σ(W_xz x_t + W_hz h_{t-1})
r_t = σ(W_xr x_t + W_hr h_{t-1})
h̃_t = tanh(W_xh x_t + r_t ∘ W_hh h_{t-1})
h_t = z_t ∘ h_{t-1} + (1 - z_t) ∘ h̃_t

Geoffrey G Towell, Jude W Shavlik, and Michiel O Noordewier. Refinement of approximate domain theories by knowledge-based neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence. Citeseer, 1990."}, {"section_index": "8", "section_name": "A MAPPING SENTIMENT RULES TO SOFT TARGETS", "section_text": "The gazetteer based logical rule engine separates sentences and phrases in the text. It then applies dictionaries of positive and negative sentiment words and phrases to the corresponding text. For each positive or negative phrase found, it checks to see if negation or double negation are applied, and modifies the polarity of the sentiment accordingly. The result for any piece of text is a count of positive and negative sentiment occurrences.
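As a sanity check on the GRU update equations reconstructed above, here is a minimal NumPy sketch of a single step. The parameter-dictionary naming and the toy dimensions (300-d GloVe input, 50-d hidden state, matching the sizes quoted elsewhere in this paper) are my own; like the equations as printed, it omits bias terms.

```python
import numpy as np

def gru_step(x_t, h_prev, P):
    """One GRU update following the equations above (sigma = logistic
    function, 'o' = element-wise product). P holds the six weight matrices."""
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = sig(P["Wxz"] @ x_t + P["Whz"] @ h_prev)              # update gate z_t
    r = sig(P["Wxr"] @ x_t + P["Whr"] @ h_prev)              # reset gate r_t
    h_tilde = np.tanh(P["Wxh"] @ x_t + r * (P["Whh"] @ h_prev))
    return z * h_prev + (1.0 - z) * h_tilde                  # new state h_t

# Toy run over a 4-token sequence.
rng = np.random.default_rng(1)
P = {k: rng.normal(scale=0.1, size=(50, 300 if k.startswith("Wx") else 50))
     for k in ["Wxz", "Whz", "Wxr", "Whr", "Wxh", "Whh"]}
h = np.zeros(50)
for _ in range(4):
    h = gru_step(rng.normal(size=300), h, P)
print(h.shape)
```

The final hidden state h after the last word is what the paper treats as the shared representation h_shared for the forgetting cost.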
For this task, we simply count the total number of positive and negative indicators to give an overall positive, negative or neutral score. To be concrete,. we have a simple procedure for mapping positive and negative word counts to soft labels that could. be used for distillation. If there are no positive or negative words, the output vector is a one hot. vector corresponding to a neutral label. If there are an unequal number of positive and negative sentiment words, the neutral label is zero and the raw counts are sent to the softmax function to create a soft label over the positive and negative word occurrences. Finally, if there are an equal amount of positive and negative words, we consider the added total sentiment words plus one in the neutral label as well as the number of positive words and negative words before sending these totals through a softmax function.\nThe prediction goes through one other non-linear function f after the final hidden state is derived. In our experiments we use the softmax function, but others are useful in different settings. A mode that builds on top of GRUs with an external memory storage paradigm (Kumar et al.]2015) currently. holds the state of the art on movie review sentiment analysis. However, we focus just on the straight. forward single layer GRU model in our experiments so that we can more easily disentangle factors. of influence on performance. Our GRU model was fed a sequence of fixed 300 dimensional Glove. vectors (Pennington et al.|2014), representing words based on analysis of 840 billion words from a. common crawl of the internet, as the input xt for all tasks. It has been shown in a number of paper. that tuning the word embeddings during training could increase performance, and it is possible ou. approach could have performed better had we done so.."}, {"section_index": "9", "section_name": "B SIZE SELECTION FOR THE RULE DISTILLATION TASK", "section_text": "In Table|6 we detail the performance of distilling a logical rule engine into a GRU based recurren. neural network by imposing soft labels over unlabeled tweets. The fact that we keep our word rep. resentations fixed with general purpose unsupervised data makes it difficult for the GRU to distil. the entire model without a large number of examples. Additionally, as there were a large numbe. of examples in our distillation experiments, we did not experience high run to run variation anc. only trained a single GRU model for each distillation experiment (as opposed to picking the best. validation error of 10 parallel training routines as in our transfer experiments). Our distilled GRU i\nOur neural network models were implemented in Theano (Theano Development Team 2016) an trained with Stochastic Gradient Descent. As we did not use an advanced optimization method an\nAdriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014\nLoss = L(y,y) + QfL(yinit, Yinit\nZt = o(Wxzxt+ Wnzht-1 rt = o(Wxrxt+ Wn t = tanh(Wxhxt + rt o Wnhht- h+ =Zt O ht-1 1-z\ny = f(WyhhL+by\nnoticed run to run variation in performance, for all of our transfer learning models we trained 10 parallel versions and chose the one with the highest validation accuracy. The SemEval 2016 Task 4 Subtask A training set consists of 10,000 total training examples, but we were only able to receive 8,906 because of tweet removals when we used the downloading script. 
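To make the rule-to-soft-label mapping of Appendix A concrete, here is a direct sketch of the three cases just described. The class ordering [neutral, positive, negative] is my choice; the paper does not fix one.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - np.max(v))
    return e / e.sum()

def rule_counts_to_soft_label(n_pos, n_neg):
    """Map rule-engine sentiment counts to a [neutral, positive, negative]
    soft target, following Appendix A."""
    if n_pos == 0 and n_neg == 0:
        return np.array([1.0, 0.0, 0.0])             # one-hot neutral
    if n_pos != n_neg:
        pos_p, neg_p = softmax(np.array([n_pos, n_neg], float))
        return np.array([0.0, pos_p, neg_p])         # neutral mass is zero
    # Tie: neutral gets (total sentiment words + 1) before the softmax.
    return softmax(np.array([n_pos + n_neg + 1, n_pos, n_neg], float))

print(rule_counts_to_soft_label(0, 0))  # pure neutral
print(rule_counts_to_soft_label(3, 1))  # positive-leaning soft label
print(rule_counts_to_soft_label(2, 2))  # tie -> neutral-dominated soft label
```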
For the target task data across our experiments, 7,600 examples of the SemEval training set examples were used for training and the rest for validation. The GRU model achieves only 53.6% accuracy on the SemEval testing data when just training with the target task data and random initialization. In order to improve, we consider knowledge transfer from GRUs trained for the following source tasks to the SemEval target task data:

Distilling Logical Rules: Knowledge distillation can be performed using teacher models that are very different in structure than their neural network based student models. We demonstrate with this task that a compilation of logical linguistic rules can be used as an effective teacher for a GRU by having the GRU attempt to create the output of the rule engine generated over unlabeled in domain data. Specifically, our gazetteer based logical rule engine separates sentences and phrases in the text. It then applies dictionaries of positive and negative sentiment words and phrases to the corresponding text. For each positive or negative phrase found, it checks to see if negation or double negation are applied, and modifies the polarity of the sentiment accordingly. The result for any piece of text is a count of positive and negative sentiment occurrences. For this task, we simply count the total number of positive and negative indicators to give an overall positive, negative or neutral score. We provide additional details on how we mapped rules to soft targets for the student network to recreate in Appendix A. We utilized a GRU model with 50 hidden units and 50,000 unlabeled examples for our source task model. We distill off the soft labels as in (Hinton et al., 2015), but set our temperature fixed at 1.0. It is possible that our performance could have improved by tuning this parameter. Additional details about the selection of the network and data size are included in Appendix B. The logical rule model itself achieves 57.8% accuracy on the SemEval testing data and the rules distilled into a GRU as explained in section 4 achieve 58.9% accuracy before any integration with the SemEval target task data. We leverage this task for comparison of knowledge transfer techniques when the source task and target task share an output space as discussed in section 3.2.

Table 6: Logical rule engine distillation performance and SemEval 2016 Task 4 Subtask A accuracy as a function of the number of hidden units in the GRU and the number of training examples. The 50 hidden unit and 50,000 training example model performs the best on the SemEval training set.

better on the testing set than the original classifier, likely because this input representation prevents the model from overfitting to the idiosyncrasies of the rule engine. This actually underscores an important point for the distillation of abstract knowledge. If the target task is known during distillation, it may be beneficial to stop short of totally distilling the original knowledge as it may hurt downstream performance past a certain point. We impose a simple policy where the best hidden unit and training example combination is selected based on performance on the training data of the target task. As a result, we use the model with 50 hidden units based on 50,000 training examples in our experiments integrating with other knowledge.
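For reference, here is a minimal sketch of the soft-label distillation objective in the style of Hinton et al. (2015) that the logical-rule source task uses (temperature fixed at 1.0 in the experiments above). The helper names are mine.

```python
import numpy as np

def softened_targets(logits, temperature=1.0):
    """Teacher soft targets: a softmax over teacher scores divided by the
    distillation temperature."""
    z = np.asarray(logits, float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_probs, teacher_probs, eps=1e-12):
    """Cross-entropy of the student against the teacher's soft distribution."""
    return float(-np.sum(teacher_probs * np.log(np.asarray(student_probs) + eps)))

teacher = softened_targets([2.0, 0.5, -1.0])   # e.g. scores from the rule model
student = np.array([0.6, 0.3, 0.1])
print(distillation_loss(student, teacher))
```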
This model is a pretty good one to choose, and achieves high transfer performance relative to models that overfit on the teacher network.

Binary Movie Reviews: For knowledge transfer from related tasks as discussed in section 3.3 we first consider the Stanford Sentiment Treebank (Socher et al., 2013), which is a popular sentiment dataset based on the movie review domain. We consider one source task to be the binary (positive and negative) sentence level sentiment subtask which contains 6,920 training examples, 872 validation examples, and 1,821 testing examples. Our GRU model with 40 hidden units achieves 85.5% accuracy on this task.

Five Class Movie Reviews: We also consider another source task leveraging the Stanford Sentiment Treebank data from the fine grained (very positive, positive, neutral, negative, and very negative) sentence level sentiment subtask which contains 8,544 training examples, 1,101 validation examples, and 2,210 testing examples. We use a GRU model with 200 hidden units to accommodate for the increased task complexity and achieve 45.9% accuracy. This fine grained model can actually be assessed directly on the SemEval task by projecting from five classes to three classes, but it only achieves 44.2% accuracy with no tuning on the target task data. Our performance on these two movie review source tasks is quite similar to what was reported in (Tai et al., 2015) when using a similar setup, but with LSTMs for both subtasks.

Emoticon Heuristic: Finally, we consider a semi-supervised task based on emoticon prediction motivated by the successful work in (Go et al., 2009), leveraging it in the twitter sentiment domain and its use as a vital component of the SemEval competition winning system (Bethard et al., 2016). We find unlabelled tweets that contain smileys, frowns, or laughing emoticons. We remove emoticons from the tweet before prediction and compile a dataset of 250,000 training examples, 50,000 validation examples, and 100,000 testing examples for each of the three classes. This is multiple orders of magnitude smaller than the 90 million tweets used in (Bethard et al., 2016) to allow for quick experimentation. Our GRU model with 50 hidden units achieves 63.4% accuracy on the emoticon prediction test set.

Hidden Units | Examples | Alignment with Teacher | Accuracy on SemEval Test Set
25 | 50,000 | 88.3% | 59.1%
25 | 300,000 | 91.9% | 58.6%
50 | 50,000 | 88.6% | 58.9%
50 | 300,000 | 93.0% | 58.5%
75 | 50,000 | 88.7% | 58.9%
75 | 300,000 | 93.6% | 58.3%
100 | 50,000 | 88.6% | 58.7%
100 | 300,000 | 93.8% | 58.1%
125 | 50,000 | 88.5% | 58.7%
125 | 300,000 | 93.7% | 58.3%
150 | 50,000 | 88.5% | 59.0%
150 | 300,000 | 94.0% | 58.5%

Fine-Tuning: The representation is simply initialized with the representation found after training on the source task and then trained as usual on the target task. This approach was pioneered in (Hinton & Salakhutdinov, 2006), in application to unsupervised source tasks, and applied to transfer learning in (Bengio et al., 2012) and (Mesnil et al.). The learning rate is tuned by a grid search based on the validation set performance.

Progressive Networks: We also compare with our implementation of a progressive neural network (Rusu et al., 2016), where the representation learned for the source task is held fixed and integrated with a target task specific model via lateral connections trained using the target task data. The learning rate is also tuned based on a grid search using the validation set.

Learning without Forgetting (LwF): In the LwF paradigm, joint training is performed after parameter initialization.
This is achieved by treating the target task data and the output generated by the source task model based on the target task input data as two jointly learned tasks as in (Caruana, 1997). As opposed to our proposed forgetting cost, the source task specific parameters are not held fixed while training on the target task data. The learning rate and mixing rate between the tasks are tuned by a grid search based on validation set performance. We first consider a version of the LwF model that leverages a random initialization of the target task specific parameters and initialization of all parameters learned on the source task with the learned values. We also consider another formulation that we call Greedy LwF. This is actually more closely aligned with the original paper (Li & Hoiem, 2016). All source task parameters are first held fixed, and the target task specific parameters are learned alone before joint training with all of the parameters unfrozen as a second step. For the case of source tasks with output in the space of the target task output, there are no source task specific parameters, so the forgetting cost can be viewed as a viable interpretation of the LwF paradigm appropriate in that setting.

Forgetting Cost: Finally, we compare each baseline model with our proposed forgetting cost described in section 3. The learning rate as well as α_f from equations 1 and 3 were tuned by a grid search based on the validation set performance.

Our experimental results on the SemEval data validate our intuition that the forgetting cost should lead to stronger regularization and better generalization performance. One thing to note about our progressive neural networks implementation is that it effectively has only one hidden layer, because we hold our embeddings fixed during model training and the same embeddings are shared among the models used for all of the tasks. It is possible that having multiple layers of lateral connections is important to achieving good performance. However, this setting was not applicable in our experiments. Our results for sequential knowledge transfer on the SemEval benchmark are quite encouraging as the forgetting cost outperforms baselines significantly in all cases.

We additionally have validated the intuition that equation 1 should perform stronger regularization than equation 3 when equation 1 is applicable. In fact, for our distilled logical rule model tuning experiments, we found that equation 1 performs 3% better on the test set. In an attempt to understand more about what caused this performance difference, we monitored testing set performance at each epoch and noticed that equation 3 is actually prone to overfitting away from a good solution on the test set. However, it often finds a pretty good one comparable to equation 1 early in training. When equation 1 could be applied, it seems to be a useful regularization to constrain both the hidden layer and the output layer to align with the model learned on the source task. In equation 3, the

We consider multiple sequential knowledge transfer algorithms for experimental comparison. Each uses only the source task data for learning the source task and only the target task data for integrating with the target task.
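Under my reading of equations 1 and 3 (mean squared error, per the paper), the two forgetting cost variants discussed above can be sketched as follows; all function and argument names are mine.

```python
import numpy as np

def mse(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.mean((a - b) ** 2))

def forgetting_cost_shared_head(y_hat, y, y_init, alpha_f):
    """Equation 1 (source and target share an output space): the single
    prediction y_hat is fit to the target labels y while being regularized
    toward the soft labels y_init recorded before integration began."""
    return mse(y_hat, y) + alpha_f * mse(y_hat, y_init)

def forgetting_cost_frozen_head(y_hat, y, h_shared, W_fixed, b_fixed,
                                f_init, y_init, alpha_f):
    """Equation 3 (related-task transfer): a frozen copy of the source head
    (W_fixed, b_fixed are never updated) re-predicts the source soft labels
    from the shared hidden state, so drift is penalized only where it matters
    to the source task's output."""
    y_hat_init = f_init(W_fixed @ h_shared + b_fixed)
    return mse(y_hat, y) + alpha_f * mse(y_hat_init, y_init)

# Toy demo of the frozen-head variant with a 3-way softmax source head.
softmax = lambda v: (lambda e: e / e.sum())(np.exp(v - np.max(v)))
rng = np.random.default_rng(2)
W, b, h = rng.normal(size=(3, 4)), np.zeros(3), rng.normal(size=4)
print(forgetting_cost_frozen_head(np.array([0.2, 0.5, 0.3]),
                                  np.array([0, 1, 0]),
                                  h, W, b, softmax, softmax(W @ h), 0.5))
```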
This way integration is fast and simple, because it does not incorporate storage and replay of examples from the potentially very large source task as argued in (Li & Hoiem, 2016).

We empirically evaluate the generalization performance of the forgetting cost for sequential knowledge transfer from four different source tasks in Table 1 and Table 2. The source task considered in Table 1 is distilling a logical rule model, leveraging the technique outlined in equation 1. In Table 2 we leverage the forgetting cost for related task knowledge transfer as outlined in equation 3.

hidden to output transformation learned for the target task can in contrast learn to deviate from the transformation learned for the source task."}, {"section_index": "10", "section_name": "5.4 SOURCE TASK PERFORMANCE AFTER TARGET TASK INTEGRATION", "section_text": "In Table 3 we explore the retention of empirical performance on the source task for knowledge transfer algorithms after integration with the target task is complete. Apparently in these cases allowing relearning of the source task model during integration with the target task data is indeed destructive to source task performance. LwF outperforms Fine-Tuning significantly in knowledge retention for movie reviews, but interestingly does not for the emoticon heuristic. The effect of the greedy target task initialization strategy also appears inconsistent. It seems it is possible that this greedy initialization could improve our proposed forgetting cost paradigm in some cases as well. However, a rigorous analysis of the tradeoffs for this initialization approach is beyond the scope of this paper.

As the source task representation is literally stored fixed as part of the target task representation in progressive neural networks, it is not clear how to assess any effective forgetting of the source task during target task integration. As a result, we omit them from our source task forgetting experiments."}, {"section_index": "11", "section_name": "5.5 INSPECTION OF LEARNED REPRESENTATIONS", "section_text": "Now that we have established the empirical benefits of our proposed forgetting cost, we will demonstrate what it achieves qualitatively through examples. In Table 4 we include a sample of examples that are predicted correctly by transferring the knowledge source with the forgetting cost paradigm and not with fine-tuning based integration. The effect is, perhaps, easiest to understand for the rule based and movie review based transfer scenarios. For the rule based transfer setting you can literally map insights that are not forgotten to their respective logical rule in the model, as is the case in these examples. Moreover, we can see movie domain specific terminology such as 'May the force be with' is seemingly forgotten with standard fine-tuning, but not when the forgetting cost regularization is applied.

Table 3: Evaluation of accuracy on the source task after integration with the target task data on SemEval 2016 Task 4 Subtask A.
The accuracy after only source task training prior to integration with the target task is included for reference as a baseline.

Table 1: Evaluation of target task tuning methodologies for a distilled rule model to the task of SemEval 2016 Task 4 Subtask A.

Source Task | Fine-Tuning | Progressive Networks | LwF | Greedy LwF | Forgetting Cost
Binary Movie Reviews | 57.3% | 54.5% | 58.1% | 58.8% | 59.7%
Five Class Movie Reviews | 57.4% | 54.6% | 57.1% | 56.6% | 58.2%
Emoticon Heuristic | 55.8% | 53.2% | 57.7% | 56.7% | 58.6%

Table 2: Evaluation of knowledge transfer from three source tasks to the task of SemEval 2016 Task 4 Subtask A."}]
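Closing out the ensemble discussion from this paper, here is one possible reading of the greedy pairwise fusion strategy it describes, as a sketch; fuse and validate are abstract stand-ins (fuse would train the gated combiner sketched earlier, validate would score a model on the held-out set).

```python
def greedy_ensemble(models, fuse, validate):
    """Start from the best single model on the validation set, then repeatedly
    fuse in whichever remaining model improves validation performance most,
    until every model has been absorbed."""
    pool = sorted(models, key=validate, reverse=True)
    best = pool.pop(0)
    while pool:
        cand = max(pool, key=lambda m: validate(fuse(best, m)))
        best = fuse(best, cand)
        pool.remove(cand)
    return best

# Toy demo: "models" are numbers, fusing averages them, and validation
# prefers values near 1.0 (all stand-ins, purely illustrative).
demo = greedy_ensemble([0.2, 0.8, 1.4],
                       fuse=lambda a, b: (a + b) / 2,
                       validate=lambda m: -abs(1.0 - m))
print(demo)
```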
HJjiFK5gx
[{"section_index": "0", "section_name": "A.6.1 S2S-EaSy BASELINE", "section_text": "Chengtao Li\nIn our initial seq2seq baseline tests for ADDiTioN we represented the data for 90 + 160 = 250 as the sequence: 90X160X250 However, we found that such a model was not able to fit the training data even when trained with 32 samples per number of digits. So we instead compared to the much stronger S2s-Easy baseline presented in Reed & de Freitas (2016). This baseline makes it much easier to learn addition through the following two modifications to the model: 1) reverse input digits and 2) generate reversed output digits immediately at each time step, such that the data sequence looks like: output: 052 input 1: 090 input 2: 061 This model is quite specific to the ADDiTiON task (and would not work on the NANoCRAFT task for instance) and results in a very strong baseline None-the-less, as we showed in Figure 7 our model still significantly outperforms this baseline.\nMassachusetts Institute of Technology Cambridge. MA 02139. USA\nMOVE MANY(dOWn),PUSH ACT_MOVE(right),STAY <END>,POP\n{dtarlow, algaunt, mabrocks, nkushman}@microsoft.com"}, {"section_index": "1", "section_name": "A.6.2 BOOTSTRAPPING", "section_text": "On the ADDiTioN task we found that both our model and the original NPI model were somewhat. sensitive to the choice of initial seed. To test this sensitivity we ran our experiments for this task. using a bootstrapping process (Efron & Tibshirani]1994). We ran all models using 100 different. seeds for each model. We then sampled 25 seed subsets, with replacement. For each subset, we. choose the best seed using a validation set which was one-quarter the size of the original dataset but consisted only of 10-digit samples. We performed this resampling procedure 100 times, and in. Figure 7|we report the mean and standard deviation across the resampled seed sets..\nWe propose the Neural Program Lattice (NPL), a neural network that learns to per form complex tasks by composing low-level programs to express high-level pro grams. Our starting point is the recent work on Neural Programmer-Interpreters (NPI), which can only learn from strong supervision that contains the whole hi- erarchy of low-level and high-level programs. NPLs remove this limitation by providing the ability to learn from weak supervision consisting only of sequences of low-level operations. We demonstrate the capability of NPL to learn to perform long-hand addition and arrange blocks in a grid-world environment. Experiments show that it performs on par with NPI while using weak supervision in place of most of the strong supervision, thus indicating its ability to infer the high-level program structure from examples containing only the low-level operations."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "A critical component of learning to act in a changing and varied world is learning higher-level. abstractions of sequences of elementary tasks. Without such abstractions we would be forced to reason at the level of individual muscle contractions, making everyday tasks such as getting ready. for work and making dinner almost impossible. Instead, as humans, we learn a hierarchy of skills. starting with basic limb movements and eventually getting to the level of tasks such as get ready. for work or drive to the airport. These abstractions have many different names. For example, in. computer programming they are called functions or subroutines and in reinforcement learning they. 
are called options or temporally extended actions. They facilitate learning in two important ways.. First, they enable us to learn faster, i.e. with lower sample complexity. Second, they enable us to. strongly generalize from our prior experience so that we can, for example, drive to a new location. once we have learned how to drive to a few other locations..\nA primary mechanism used for learning is watching others perform a task. During such demon-. strations, one typically observes the elementary operations performed, such as the movements of. individual limbs or the mouse clicks in a computer interface. In some cases, the demonstrations can. also provide supervision of the abstract operations (i.e., the abstraction hierarchy) that generated the elementary operations, either through a formal annotation process or through informal natural. language descriptions. Recent work on Neural Programmer-Interpreters, NPI (Reed & de Freitas.. 2016), has shown that when the training data includes both elementary and abstract operations,. learning the abstractions results in strong generalization capabilities. This enables, for example, the. ability to add very large numbers when trained only on the addition of relatively small numbers.."}, {"section_index": "3", "section_name": "4.1 SAMPLE COMPLEXITY", "section_text": "Task: We study the sample complexity using a task we call NANoCRAFT. In this task we consider an environment similar to those utilized in the reinforcement learning literature. The perceptual input comes from a 2-D grid world where each grid cell can be either empty or contain a block with both color and material attributes. The task is to move around the grid world and place blocks in the appropriate grid cells to form a rectangular building. The resulting building must have a set of provided attributes: (1) color, (2) material, (3) location, and sizes in the (4) X and (5) Y dimensions As shown in the example in Figure4 at each step the agent can take one of two primitive actions, place a block at the current grid cell with a specific color and material, or move in one of the four\nwith additional indexes for i and l on all of the inputs and outputs\n*Work done primarily while author was an intern at Microsoft Research.\nF1gure 4: ANORAE: An illustrative example program, where the agent (denoted as \"*\") is required to build 34 rectangular red wooden build-. ing at a certain location in a 66 grid world. We can see that some of the blocks are already in place in the initial world-state. Tobuild the building, the agent (pro- gram) first makes two calls to MOVE_MANY to move into place in the X and Y dimensions. and. then calls BUILD_WALL four times to build the four walls of the building."}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "Figure 5: NANoCRAFT Sample Complexity: The x-axis varies the number of samples containing. full program abstractions, while the y-axis shows the accuracy. NPL-{64,128,256} shows the accu- racy of our model when trained with 64/128/256 training samples. NPI shows the accuracy of NPI. which can utilize only the samples containing full program abstractions. Finally, Seq-{64,128,256} shows the accuracy of a seq2seq baseline when trained on 64/128/256 samples. It's performance does not change as we vary the number of samples with full program abstractions since it cannot. utilize the additional supervision they provide..\nProviding supervision of the abstract operations during a demonstration requires significant addi. 
tional effort, however, and so in typical real-world scenarios we will observe only the elementary. operations. For example, we can see a person's limbs move (elementary operations), but we can- not see the mental states that led to these movements (abstract operations). In the same vein, we\ncan easily capture a user's clicks in an online application or their real-world movements using a skeletal tracking depth camera (Microsoft Corp. Redmond WA). NPI cannot directly be applied on data like this, however, because the data does not contain the abstraction hierarchy. This motivates the desire for a model which can learn an abstraction hierarchy from only sequences of elementary operations, but this is an ill-posed problem that requires either additional modeling assumptions or some strongly supervised data. In this work, we take a first step by assuming access to a small number of strongly supervised samples that provide the components of the abstraction hierarchy and disambiguate which of infinitely many abstraction hierarchies are preferred. While we currently only consider domains without noise, we believe our work provides a starting point for future re- search on adding additional modeling assumptions that could remove the need for strong supervision altogether.\n4 8* ADD, PUSH 0 ADD1, PUSH ACT WRITE(3),STAY 0 2 5 CARRY, PUSH ACT PTR MOVE(1, 1eft),STAY 0 0 0 0 4 8* Act_wRIte(1),sTAy *s ACT_PTR_MOVE(1, right),STAY .... 0 0 0 0 2 <END>,POP LSHIFT, PUSH 1 0 0 ACT PTR MOVE(0, 1eft),STAY ACT_PTR_MOVE(1, 1eft),STAY *m 0 0 ACT PTR MOVE(2, 1eft),STAY ACT_PTR_MOVE(3, 1eft),STAY <END>,POP 0 4* 8 <END>,POP ADD1, PUSH 0 2* 5 AcT_WRITe(7),STAy LSHIFT, PUSH 0 1 0 0* 4 8 ACT_PTR_MOVE(0, 1eft),STAY ACT PTR MOVE(1, 1eft),STAY 0 0 3 0* 2 5 ACT PTR MOVE(2, 1eft),STAY .... ACT PTR MOVE(3, 1eft),STAY 0* 1 0 <END>,POP <END>,POP 0* 7 3 <END>,POP\ncardinal directions. We explored both a fully observable setting, and a partially observable setting In the fully observable setting, the world is presented as a stack of 3 grids, one indicating the material of the block at each location (or empty), a similar one for color and a final one-hot grid indicating the agent's location. In the partially observable setting, the agent is provided only two integers. indicating the color and material of the block (if any) at the current location. Finally, in both settings the world input state contains an auxiliary vector specifying the five attributes of the building to be built. In each sample, a random subset of the necessary blocks have already been placed in the world, and the agent must walk right over these locations without placing a block.\nOur key contributions can be summarized as follows:\nExperiment Setup: We assume that data with full programmatic abstractions is much more diffi cult to obtain than data containing only flat operation sequences!2|so we study the sample complexity in terms of the number of such samples. All experiments were run with 10 different random seeds, and the best model was chosen using a separate validation set which is one-quarter the size of the training set.\nResults: Figure5|shows the sample complexity for the NANoCRAFT task in the fully observable setting. We can see that NPL significantly outperforms the NPI baseline (NPI) when only a subset the total training samples have full abstractions. NPL similarly outperforms a sequence-to-sequence baseline (Seq-*) trained on all of the available data. 
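For clarity, the bootstrapped seed-selection protocol of Appendix A.6.2 above (draw seed subsets with replacement, pick each subset's best seed on validation, report mean and standard deviation of its test score) can be sketched as follows; the array and argument names are mine.

```python
import numpy as np

def bootstrap_seed_scores(test_scores, val_scores, subset_size=25,
                          resamples=100, rng=None):
    """Resample `subset_size` seeds with replacement, select the seed with the
    best validation score in each subset, and aggregate its test score over
    `resamples` repetitions."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(test_scores)
    picked = []
    for _ in range(resamples):
        idx = rng.integers(0, n, size=subset_size)   # sample with replacement
        best = idx[np.argmax(np.asarray(val_scores)[idx])]
        picked.append(test_scores[best])
    return float(np.mean(picked)), float(np.std(picked))

# Toy run over 100 fake seeds whose validation score is a noisy proxy for test.
rng = np.random.default_rng(42)
test = rng.uniform(0.6, 0.9, size=100)
val = test + rng.normal(scale=0.02, size=100)
print(bootstrap_seed_scores(test, val))
```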
We also performed preliminary experiments for the partially observable setting, and obtained similar results."}, {"section_index": "5", "section_name": "4.2 GENERALIZATION ABILITY", "section_text": "The NPI model is based on a Recurrent Neural Network (RNN) which, at each step, either calls an abstract program, performs an elementary operation, or returns from the current program. To make this decision, each step of the RNN takes as input: (1) a learnable embedding of the program to execute, (2) embedded arguments for this program, and (3) an embedding of the current world state. Calling an abstract program resets the LSTM hidden state to zero and updates the program and arguments provided as input to the following steps. Returning from an abstract program inverts this process, restoring the hidden state and input program and arguments to those from before the program was called. Performing an elementary operation updates the world state, but leaves the current program and arguments in place, and performs the standard LSTM update of the hidden state.

Task: We study generalization ability using the ADDITION task from Reed & de Freitas (2016). The objective of this task is to read in two numbers represented as digit sequences and compute the digit sequence resulting from the summation of these two numbers. The goal is to let the model learn the basic procedure of long-hand addition: repeatedly add two one-digit numbers, write down the result (and the carry bit if necessary) and move to the left until the beginning of the numbers is reached. The whole procedure is represented using a four-row scratch pad, where the first and second rows are input digit sequences, the third row is the carry digit and the fourth row the result. The model is provided a world-state observation which only provides a partial view into the full scratchpad state. Specifically, it is provided the integers at the location of four different pointers, each in one row of the scratchpad. The model has two possible elementary operations: either move a pointer left or right, or write a single digit into one of the four pointer locations. All four pointers start at the rightmost location (the least significant digit), and are gradually moved to the left by the

Rather than present the details of the NPI model as in Reed & de Freitas (2016), we will cast it in the formulation that we will use throughout the paper. The main difference is that our presentation will explicitly maintain a call stack, which we will refer to as Stack-based NPI. Morally, this does not change the model, but it will enable the extension to weaker supervision described in section 3.

The basic structure of the reformulated model can be seen in Figure 1. The model learns a library of programs, G, and arguments, R, to these programs, where each program g ∈ R^n and each argument

²Operation sequences can be obtained by observing a human demonstrating a task, whereas full abstractions require additional effort to annotate such traces.

Figure 6: ADDITION: An illustrative example program of the addition of 25 to 48. We have four pointers (denoted "*"), one for each row of the scratch pad. We repeatedly call ADD1 until we hit the left-most entry in the scratch pad. In each call to ADD1 we call ACT_WRITE to write the result, CARRY to write the carry digit (if necessary) and LSHIFT to shift all four pointers to the left to work on the next digit.
The digit sequence on the fourth row of the scratch pad is the result of the addition.

There are several technical issues that arise in developing NPL, which are addressed in this paper. In section 2 we reformulate the NPI model to explicitly include a program call stack, which is necessary for the later modeling developments. Next we need to formulate a training objective for weakly supervised data instances. Ideally we could treat the abstract operations as latent quantities and optimize the marginalized log probability that arises from summing out the abstract operations. However, there are exponentially many such abstraction hierarchies, and so this is computationally intractable. To overcome this challenge, we compute an approximate dynamic program by building on two ideas from the literature. First, we draw inspiration from Connectionist Temporal Classification, CTC (Graves et al., 2006), observing that it provides a method for learning with latent alignments. In section 3.1 we reformulate the CTC objective into a feedforward process that executes a dynamic program. Applying this to our problem, however, requires handling the program call stack. In section 3.2 we do this through an approximation analogous to that of Stack-Augmented Recurrent Nets, StackRNNs (Joulin & Mikolov, 2015), resulting in a fully-differentiable feedforward process that executes a dynamic program to approximately compute the marginalized log probability that we desire. Finally, we observe in section 3.3 that there are alternative dynamic programs for approximating the desired marginalized log probability and present one that uses more computation to more closely resemble the exact (exponentially expensive) dynamic program while remaining tractable.

We show how ideas from CTC and StackRNNs can be adapted and extended to enable the training of NPI-like models from only flat sequences of elementary operations and world states. We introduce a method to compute a more accurate approximation of marginalized log probabilities in such models. On the long-hand addition task from Reed & de Freitas (2016) and a new task involving arranging blocks in a grid-world, we demonstrate empirically that using NPL to train with elementary operation sequences combined with only a few training samples with full program traces can achieve similar performance to NPI but with weaker supervision.

(Figure 7 plot omitted: generalization accuracy on ADDITION versus the number of test digits, 5 to 500, for S2S-Easy-16, S2S-Easy-32, NPI-1, NPI-16, and NPL-16-1; see the Figure 7 caption.)

Figure 1: Stack-based NPI: Four time steps from the execution of the stack-based NPI model. Each color/hash pattern represents a unique set of unchanging data values which, over time, move up and down (and in and out of) the stack. Operations below the dotted line to calculate the new world state are executed only at test time, since we do not have access to f_world at training time, and the training data contains the correct sequence of world states.

Figure 7: ADDITION Generalization Performance: The x-axis varies the number of input digits for the samples in the test set, while the y-axis shows the accuracy. All models are trained on addition programs with inputs of 1 to 10 digits. NPL-16-1 shows the accuracy of our model when trained with 16 total samples (per number of digits), of which 1 sample (per number of digits) includes full program abstractions.
NPI-1 and NPI-16 show the accuracy of the NPI model when trained with 1 total sample and 16 total samples respectively (per number of digits), all containing full program abstractions. S2S-Easy-16 and S2S-Easy-32 show the performance of the S2S-Easy baseline when trained with 16 and 32 samples respectively (per number of digits).

r ∈ R^m is represented as an embedding, with n and m as the embedding dimensions. When a program is called with a list of arguments it performs a sequence of actions, where each action is one of: OP, PUSH, or POP. OP performs an elementary operation, e.g. move one step. PUSH calls to another program. POP returns from the current program back to the parent program.

program throughout the execution. Figure 6 gives an example of a full program trace as well as the state of the scratch pad at a particular timestep.

An LSTM-based controller, shown in Figure 2, is used to generate the sequence of actions, deciding the action at timestep t based on the currently running program and arguments, g_in^t, the LSTM's internal state h_in^t, and an observation of the current world state, w^t. To support calls to and returns from subprograms, the controller state contains two call stacks: one for the internal RNN state, which we denote as M (green in Figure 1), and one for the program and arguments, which we denote as S (red in Figure 1). M_d^t and S_d^t refer to the elements at depth d of the stacks at timestep t.

Experiment Setup: A primary advantage of learning programmatic abstractions over sequences is an increased generalization capability. To evaluate this, we train our model on samples ranging from 1 to 10 input digits. The training data contains an equal number of samples of each length (number of digits), and includes full program abstractions for only one randomly chosen sample for each length such that |FULL| = 10. We then test NPL using samples containing a much larger number of digits, ranging up to 1,000. On this task we found that both our model and the original NPI model were somewhat sensitive to the choice of initial seed, so we sample many different seeds and report both the mean and standard deviation, using a bootstrapping setup (Efron & Tibshirani (1994)) which is detailed in Appendix A.6.2.

Compared Models: We originally compared to a standard flat LSTM sequence model. However, we found that even with 32 samples per digit such a model was not able to fit even the training data for samples with more than 4 or 5 digits, so we did not present these results³. Instead, we compare to a model called S2S-Easy, which is the strongest baseline for this task from (Reed & de Freitas, 2016). This model is custom-designed for learning addition and so it represents a very strong baseline. We discuss the model details in Appendix A.6.1. For completeness we also compare to a reimplementation of NPI in two different training regimes.

The training data for NPI requires full execution traces. We use τ to denote all the observations recorded in a single full execution trace. Specifically, for timestep t in the execution we define τ_w^t to be the input world state, and τ_a^t to be the decision of which of the following
actions to take:

(Figure 1 graphic omitted: the stacks S and M over timesteps t through t+3, showing a PUSH at time t, an OP at t+1, and a POP at t+2, with f_world producing each next world state; see the Figure 1 caption.)

(Figure 2 graphic omitted: the internals of an RNN cell, in which f_enc combines g_in^t and w^t into an input for the LSTM, whose output feeds the decoders p_a^t, p_o^t, and p_g^t; see the Figure 2 caption.)

Figure 2: RNN Cell: A zoomed-in view of the internals of an RNN cell from Figure 1.

Results: Figure 7 shows the generalization capabilities of our model on the ADDITION task. Our model with "one-shot" strong supervision (NPL-16-1) significantly outperforms the S2S-Easy baseline even when the baseline is provided twice as many training samples (S2S-Easy-32). This is particularly notable given that the S2S-Easy model is specifically designed for the addition task. This result highlights the generalization capabilities our model brings by learning the latent structures which generate the observed sequences of elementary operations. Furthermore, we can see that

Note that, as with the original NPI model, we also include arguments for both the operation and program calls, but for notational simplicity we subsume those into τ_o^t and τ_g^t respectively.

these latent structures are learned mostly from the unlabeled sequences, since the vanilla NPI model trained with only 1 sample per digit (NPI-1) cannot generalize beyond the 10-digit data on which it was trained. Finally, we can see that just a single fully supervised sample is sufficient since it enables our model to perform comparably with a vanilla NPI model trained with FULL supervision for all samples (NPI-16).

The stack updates are formally defined as

M_0^{t+1} = [τ_a^t = POP] M_1^t + [τ_a^t = OP] h_out^t + [τ_a^t = PUSH] 0
M_1^{t+1} = [τ_a^t = POP] M_2^t + [τ_a^t = OP] M_1^t + [τ_a^t = PUSH] h_out^t
M_d^{t+1} = [τ_a^t = POP] M_{d+1}^t + [τ_a^t = OP] M_d^t + [τ_a^t = PUSH] M_{d-1}^t,   d > 1          (2.1)
S_0^{t+1} = [τ_a^t = POP] S_1^t + [τ_a^t = OP] S_0^t + [τ_a^t = PUSH] g_out^t
S_d^{t+1} = [τ_a^t = POP] S_{d+1}^t + [τ_a^t = OP] S_d^t + [τ_a^t = PUSH] S_{d-1}^t,   d > 0

"}, {"section_index": "6", "section_name": "We have already discussed the most relevant past work upon which we directly build: CTC (Graves et al., 2006), StackRNNs (Joulin & Mikolov, 2015) and NPI (Reed & de Freitas, 2016)", "section_text": "The conditions in the Iverson brackets choose which type of update should be performed based on the action type. POPing from the stack moves all items up one location in the stack. Performing an elementary OP updates the top element of stack M to contain the new RNN hidden state but otherwise leaves the stacks unchanged. PUSHing onto the stack pushes the new program and arguments, g_out, onto stack S, pushes a default (zero) hidden state onto stack M, and moves all of the other elements in the stacks down one location (a code sketch of these updates follows below).

Neural Programs: Training neural networks to perform algorithmic tasks has been the focus of much recent research. This work falls into two main categories: weakly supervised methods that learn from input-output examples, and strongly supervised methods that additionally have access to the sequence of elementary actions performed to generate the output.

The work on learning neural programs from input-output data was sparked by the surprising effectiveness of the Neural Turing Machine (NTM) (Graves et al., 2014). Similar to NTMs, many of the proposed architectures have used differentiable memory (Kurach et al., 2016; Graves et al., 2016; Weston et al., 2014; Sukhbaatar et al., 2015b; Neelakantan et al., 2016; Gaunt et al., 2016; Feser et al., 2016), while others have used REINFORCE (Williams, 1992) to train neural networks that use sampling-based components to model memory access (Andrychowicz & Kurach, 2016; Zaremba & Sutskever, 2015).
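As a concrete rendering of equation (2.1), here is a minimal list-based Python sketch of the hard (non-differentiable) stack semantics. It is an illustration under the obvious simplifications, not the authors' code; the index-0 end of each list is the top of the stack.

```python
def stack_update(M, S, action, h_out=None, g_out=None):
    """One timestep of the Stack-based NPI stack semantics (equation 2.1).
    M holds hidden states, S holds program/argument embeddings."""
    if action == "OP":        # elementary op: only the running frame's state changes
        return [h_out] + M[1:], S
    if action == "PUSH":      # call: save caller state at depth 1, fresh state on top
        return [0, h_out] + M[1:], [g_out] + S
    if action == "POP":       # return: discard the current frame
        return M[1:], S[1:]

# Example trace: call a subprogram, run one elementary op, then return.
M, S = [0], ["main"]
M, S = stack_update(M, S, "PUSH", h_out="h1", g_out="add1")   # M=[0,'h1'], S=['add1','main']
M, S = stack_update(M, S, "OP", h_out="h2")                   # M=['h2','h1']
M, S = stack_update(M, S, "POP")                              # M=['h1'], S=['main']
```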
Some of this work has considered learning addition from input-output samples, a similar, but more challenging setup than our ADDITION domain. Zaremba & Sutskever (2014) makes use of a few training tricks to enable a standard LSTM to learn to add numbers up to length 9 when training on numbers of the same length. Kalchbrenner et al. (2015) proposes an architecture that is able to learn to add 15-digit numbers when trained on numbers of the same length. The Neural GPU model from (Kaiser & Sutskever, 2015) learns to add binary numbers 100 times longer than those seen during training, but requires tens of thousands of training samples and extensive hyperparameter searches. Additionally, using a decimal instead of binary representation with the Neural GPU model (as in our ADDITION task) is also reported to have a significant negative impact on performance.

The LSTM output is passed in parallel through four different decoder networks to generate the following probability distributions: p_a^t over the action to take, p_r^t over the arguments for the called program, p_g^t over the program to be called, and p_o^t over the elementary operation to be performed. At test time the program to call is chosen greedily, g_out^t = argmax_{g ∈ G} p_g^t(g).

At training time our objective is to find neural network parameters θ which maximize the following (log) likelihood function:

p(τ^t) = [τ_a^t = OP] p_a^t(OP) p_o^t(τ_o^t) + [τ_a^t = PUSH] p_a^t(PUSH) p_g^t(τ_g^t) + [τ_a^t = POP] p_a^t(POP)          (2.2)

p(τ) = ∏_{t=1}^{T} p(τ^t),    L(θ) = log p(τ)          (2.3)

NPI (Reed & de Freitas, 2016) and NPL distinguish themselves from the above work with the explicit modeling of functional abstractions. These abstractions enable our model, with only 16 samples, to perfectly generalize to data sequences about 100 times as long as those in the training data. Furthermore, concurrent work (Cai, 2016) has shown that an unmodified NPI model can be trained to perform more complex algorithms such as BubbleSort, QuickSort and topological sorting by learning recursive procedures, and we expect that our method can be directly applied to reduce the amount of needed supervision for these tasks as well.

In this section we introduce our core contribution, a new framework for training NPI-like models when the training data contains only sequences of elementary actions instead of full program abstractions. The basis of our framework is the Neural Program Lattice, which approximately computes marginal probabilities using an end-to-end differentiable neural network.

In this section, the training data is an elementary operation trace λ, which includes a sequence of elementary steps, λ_o, and a corresponding sequence of world states, λ_w. For each elementary step λ^i, the elementary operation performed is λ_o^i and the input world state is λ_w^i. We define Φ as a many-to-one map from a full execution trace τ to its elementary operation trace λ. With these

Reinforcement Learning: In the reinforcement learning domain the most related work to ours is the options framework, for building abstractions over elementary actions (Sutton et al., 1999). This framework bears many similarities to both our model and to NPI. Specifically, at each time step the
The most related techniques have augmented RNNs with various attention and memory architectures. In addition to those we have discussed earlier (Reed & de Freitas, 2016; Joulin & Mikolov, 2015), Grefenstette et al. (2015) proposes an alternative method for augmenting RNNs with a stack. From a task perspective, the most related work has considered variants of the scratchpad model for long-hand addition, similar to our ADDITION domain. This work has focused largely on more standard RNN architectures, starting with Cottrell & Tsung (1993), which showed that the standard RNN architectures at the time (Jordan, 1997; Elman, 1990) could successfully generalize to test samples approximately 5 times as long as those seen during training, if a few longer samples were included in the training set. More recently, Zaremba et al. (2015) showed that an RNN architecture using modern LSTM or GRU controllers can perfectly generalize to inputs 20 times as long as those seen in the training data when trained in either a supervised or reinforcement learning setting. However this work was focused on trainability rather than data efficiency and so they utilized hundreds of thousands of samples for training.

definitions and p(τ) as defined in equation 2.3, our desired (log) marginal likelihood for a single example becomes

L(θ) = log Σ_{τ ∈ Φ^{-1}(λ)} p(τ)          (3.1)

agent can choose either a one-step primitive action or a multi-step action policy called an option. As with our procedures, each option defines a policy over actions (either primitive or other options) and terminates according to some function. Much of the work on options has focused on the tabular setting where the set of possible states is small enough to consider them independently. More recent work has developed option discovery algorithms where the agent is encouraged to explore regions that were previously out of reach (Machado & Bowling, 2016) while other work has shown the benefits of manually chosen abstractions in large state spaces (Kulkarni et al., 2016). However, option discovery in large state spaces where non-linear state approximations are required is still an open problem, and our work can be viewed as a method for learning such options from expert trajectories.

Computing this quantity is intractable because the number of possible executions |Φ^{-1}(λ)| is exponential in the maximum length of τ, and each execution may have unique stack states (a brute-force rendering is sketched below). In the following sections, we describe how to approximately compute this quantity so as to enable learning from weak supervision. To also learn from strong supervision, we simply add log p(τ) terms to the objective for each strongly supervised example τ.

Much work in reinforcement learning has also considered domains similar to ours. Specifically, grid-world domains similar to NANOCRAFT are quite standard environments in the reinforcement learning literature. One recent example is Sukhbaatar et al. (2015a), which showed that even the strongest technique they considered struggled to successfully perform many of the tasks. Their results highlight the difficulty of learning complex tasks in a pure reinforcement learning setup. In future work we would like to explore the use of our model in setups which mix supervised learning with reinforcement learning.

"}, {"section_index": "7", "section_name": "3.1 CTC AS A FEED-FORWARD NETWORK", "section_text": "In formulating a loss function which approximates the exponential sum in equation 3.1, the first challenge is aligning the elementary steps, λ^i, in the training data, to the timesteps, t, of the model.
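To make the intractability of equation (3.1) tangible, here is a brute-force rendering of the marginal likelihood as a hedged sketch. The helpers `flatten` (playing the role of the map Φ) and `trace_prob` (playing the role of p(τ)) are hypothetical stand-ins, not functions from the paper.

```python
from itertools import product

def marginal_likelihood(lam_ops, actions, T, trace_prob, flatten):
    """Sum p(trace) over every action sequence of length T whose flattened
    elementary operations equal lam_ops, i.e. over Phi^{-1}(lambda)."""
    total = 0.0
    for trace in product(actions, repeat=T):      # |actions|**T candidates
        if flatten(trace) == tuple(lam_ops):      # trace in Phi^{-1}(lambda)
            total += trace_prob(trace)
    return total
# The loop body runs |actions|**T times, which is exactly the exponential
# blow-up that the dynamic programs in sections 3.1-3.3 approximate away.
```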
Specifically, when the model calls into a program or returns from a program in a given timestep, it does not perform any elementary operation in that timestep. As a result, the alignment between elementary steps in the data and the timesteps of the model depends crucially on the choice of high-level abstraction. To overcome this challenge, we draw inspiration from CTC (Graves et al., 2006).

In this paper, we proposed the Neural Program Lattice, a neural network framework that learns a hierarchical program structure based mostly on elementary operation sequences. On the NANOCRAFT and ADDITION tasks, we show that when training with mostly flat operation sequences, NPL is able to extract the latent programmatic structure in the sequences, and achieve state-of-the-art performance with much less supervision than existing models.

CTC is an RNN-based neural network architecture used in speech recognition to handle the analogous problem of aligning audio sequence inputs to word sequence outputs. It can be seen as a combination of an RNN and a graphical model. The RNN computes a distribution over possible outputs for each timestep, while the graphical model consumes those distributions and uses a dynamic program to compute the marginal distribution over possible label sequences. A crucial assumption is that the RNN outputs at each timestep are conditionally independent, i.e. no feedback connections exist from the output layer back into the rest of the network. This assumption is incompatible with the NPI model because action decisions from timestep t determine the world state, hidden state, and program input for the next timestep. In section 3.2 we will adapt the CTC idea to work in the NPI setting. In this section we prepare by reformulating CTC into a feed-forward neural network that can be trained with standard back propagation."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Making neural programming architectures generalize via recursion. 2016. Under submission to ICLR 2017.

The main challenge solved by CTC is finding the alignment between the elementary steps, i, observed in the training data and the timesteps, t, of the model. To facilitate alignment discovery, the output layer in a CTC network is a softmax layer with a unit for each elementary operation in O, the set of elementary operations, as well as one additional unit for a BLANK output where no elementary operation is performed because (in our case) the model calls into a new program or returns from the current program. Define β ∈ O'^T as an output sequence over the alphabet O' = O ∪ {BLANK}. Additionally, define the many-to-one map B from an output sequence β to λ_o, the sequence of elementary operations created by removing all of the BLANK outputs from β. As discussed above, the CTC model assumes that the RNN inputs at time t are independent of the decisions made by the network, i.e. w = (w^1, ..., w^T) and g_in = (g_in^1, ..., g_in^T) are provided as inputs and are thus independent of the output decisions. We can then formally define

Bradley Efron and Robert J Tibshirani. An introduction to the bootstrap. CRC press, 1994.

Jeffrey L Elman. Finding structure in time. Cognitive science, 14(2):179-211, 1990.

John K Feser, Marc Brockschmidt, Alexander L Gaunt, and Daniel Tarlow. Neural functional programming. arXiv preprint arXiv:1611.01988, 2016.

Alexander L Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, and Daniel Tarlow.
TerpreT: A probabilistic programming language for program induction. arXiv preprint arXiv:1608.04428, 2016.

p^t(β^t | w, g_in) = p_a^t(POP | w, g_in) + p_a^t(PUSH | w, g_in),   if β^t = BLANK
p^t(β^t | w, g_in) = p_a^t(OP | w, g_in) p_o^t(β^t | w, g_in),        otherwise

p(β | w, g_in) = ∏_{t=1}^{|w|} p^t(β^t | w, g_in)

L(θ | λ_o, w, g_in) = log p(λ_o | w, g_in) = log Σ_{β ∈ B^{-1}(λ_o)} p(β | w, g_in)

Alex Graves, Santiago Fernandez, Faustino Gomez, and Jurgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pp. 369-376. ACM, 2006.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476, 2016.

The dynamic program used by CTC to compute this likelihood is based on y_i^t, the total probability that as of timestep t in the model we have generated λ_o^{1:i}, the first i elementary actions in λ_o. y_i^t is calculated from w^{1:t} and g_in^{1:t}, the first t elements in w and g_in respectively. Formally,

y_i^t = Σ_{β^{1:t} ∈ B^{-1}(λ_o^{1:i})} p(β^{1:t} | w^{1:t}, g_in^{1:t})

Michael I Jordan. Serial order: A parallel distributed processing approach. Advances in psychology, 121:471-495, 1997.

y_i^t = p^t(λ_o^i | w^{1:t}, g_in^{1:t}) y_{i-1}^{t-1} + p^t(BLANK | w^{1:t}, g_in^{1:t}) y_i^{t-1}

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pp. 190-198, 2015.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.

In the last section we assumed that the RNN inputs w^t and g_in^t were defined independently of the output decisions. In this section we relax these assumptions to handle the full Stack-based NPI model described in section 2. The key idea is that rather than propagating forward all possible stack states, which leads to a combinatorial explosion, we will propagate forward a single stack state which is a weighted average of all possible stack states, where the weights are computed based on local probabilities of actions at each timestep. This operation is analogous to that used in StackRNNs (Joulin & Mikolov, 2015). The result is a tractable and differentiable forward execution process that no longer exactly computes the desired marginal likelihood. However, we will show experimentally that learning with this model for weakly supervised examples leads to the behavior that we would hope for if we were learning from the true marginal log likelihood. That is, we can share model parameters while training on strongly and weakly labeled examples, and adding the weakly labeled data improves generalization performance.

Marlos C Machado and Michael Bowling. Learning purposeful behaviour in the absence of rewards. arXiv preprint arXiv:1605.07700, 2016.

Microsoft Corp. Redmond WA. Kinect for Xbox 360.

In more detail, we estimate all quantities specified in τ but not in λ using a soft-argmax function that computes deterministic functions of the previously observed or estimated quantities. These estimated quantities are â, ĝ, and implicitly ŵ. Both w and g can be directly replaced with a soft-argmax as follows:

ŵ^t = Σ_{i ∈ I} ȳ_i^t w^i,    ĝ_out^t = Σ_{g ∈ G} p_g^t(g) g          (3.2)

where ȳ_i^t is y_i^t normalized over i.

Scott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR, 2016.

Sainbayar Sukhbaatar, Arthur Szlam, Gabriel Synnaeve, Soumith Chintala, and Rob Fergus. Mazebase: A sandbox for learning from games. arXiv preprint arXiv:1511.07401, 2015a.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in neural information processing systems, pp. 2440-2448, 2015b.
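The y_i^t recursion above can be written as a small dynamic program. In this sketch, `p_blank[t]` and `p_op[t][i]` are assumed to be precomputed arrays of the model's BLANK and operation-emission probabilities, which is valid here precisely because the CTC formulation assumes the RNN inputs do not depend on its output decisions.

```python
import numpy as np

def ctc_forward(p_blank, p_op, n_ops):
    """y[t, i]: probability of having emitted the first i elementary operations
    after t timesteps. p_op[t][i] is the probability of emitting operation i+1
    at timestep t+1; p_blank[t] is the probability of a non-emitting step."""
    T = len(p_blank)
    y = np.zeros((T + 1, n_ops + 1))
    y[0, 0] = 1.0
    for t in range(1, T + 1):
        for i in range(n_ops + 1):
            y[t, i] = p_blank[t - 1] * y[t - 1, i]            # PUSH or POP: no emission
            if i > 0:
                y[t, i] += p_op[t - 1][i - 1] * y[t - 1, i - 1]  # emit next operation
    return y   # np.log(y[T, n_ops]) is the log-likelihood of the full sequence
```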
Replacing decision τ_a^t with a soft-argmax changes the stack updates from equation 2.1 into differentiable stack updates similar to those used in Joulin & Mikolov (2015). Formally,

α^t(a) = Σ_{i ∈ I} (y_i^t / y^{t+1}) p_a^t(a) p_o^t(λ_o^{i+1}),   if a = OP
α^t(a) = (y^t / y^{t+1}) p_a^t(a),                                 if a ≠ OP

M_0^{t+1} = α^t(POP) M_1^t + α^t(OP) h_out^t + α^t(PUSH) 0
M_1^{t+1} = α^t(POP) M_2^t + α^t(OP) M_1^t + α^t(PUSH) h_out^t
M_d^{t+1} = α^t(POP) M_{d+1}^t + α^t(OP) M_d^t + α^t(PUSH) M_{d-1}^t,   d > 1          (3.3)
S_0^{t+1} = α^t(POP) S_1^t + α^t(OP) S_0^t + α^t(PUSH) g_out^t
S_d^{t+1} = α^t(POP) S_{d+1}^t + α^t(OP) S_d^t + α^t(PUSH) S_{d-1}^t,   d > 0

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992.

Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.

This formulation allows the likelihood to be computed in a feed-forward manner and the gradients of θ to be computed using standard back propagation through time. Note that if there were feedback connections in the model, then it would not be sufficient to only use y_i^t as the dynamic programming state; we would need to keep track of all the different possible stack states after having produced the sequence prefix, which is what leads to the intractability.

Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1):181-211, 1999.

with α introduced for notational simplicity. This change enables h_in^t and g_in^t to now depend on the distribution over output decisions at time t-1 via the stack, as g_in^t = S_0^t and h_in^t = M_0^t, where S_0^t and M_0^t are now the averaged tops of the two stacks."}, {"section_index": "9", "section_name": "A APPENDIX", "section_text": "Figure 3: NPL lattice: Each slice corresponds to one timestep, and each node in a timestep corresponds to a given call depth, l, and elementary operation index, i. A subset of the lattice transitions are shown with blue arrows for PUSH transitions, green for OP and orange for POP.

Method | Blurred Stack | Blurred World | All Paths Return | Computational Cost | Accuracy
Execute All Paths | False | False | True | Highest | Exact
NPL | True | False | True | Medium | Medium
CTC+StackRNN | True | True | False | Lowest | Lowest"}, {"section_index": "10", "section_name": "A.1 DATASET DETAILS", "section_text": "Table 2 lists the set of programs and elementary operations we used to generate the data for ADDITION and NANOCRAFT. The programs and elementary operations for ADDITION are identical to those in Reed & de Freitas (2016). Note that when training with weak supervision the training data contains only the elementary operations and does not contain the programs or arguments.

Table 1: Outlines the tradeoff between representational accuracy and computational cost for two extreme solutions and NPL.

Table 2: Programs, arguments and elementary operations used for generating training data of ADDITION and NANOCRAFT tasks.

"}, {"section_index": "11", "section_name": "A.2 IMPLEMENTATION DETAILS", "section_text": "This gives a fully differentiable model for approximately maximizing the marginal probability of λ.

Although the model we have defined so far is fully differentiable, the difficulty in training smoothed models of this form has been highlighted in the original Neural Turing Machine work (Graves et al., 2014) as well as much of the follow-on work (Gaunt et al., 2016; Kurach et al., 2016; Graves et al., 2016; Neelakantan et al., 2016; Joulin & Mikolov, 2015).
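For reference, the averaged stack update of equation (3.3) can be sketched as follows. Every depth is mixed according to α, and this fully smoothed state is exactly what makes such models hard to train, motivating the lattice introduced next. The array shapes are illustrative assumptions (stack depth at least 3), not the paper's implementation.

```python
import numpy as np

def soft_stack_update(M, alpha, h_out):
    """Differentiable update of the hidden-state stack M, shape (depth, hidden),
    mixing the POP/OP/PUSH outcomes with weights alpha (cf. equation 3.3)."""
    pop, op, push = alpha["POP"], alpha["OP"], alpha["PUSH"]
    up = np.vstack([M[1:], np.zeros_like(M[:1])])     # stack shifted up (POP)
    down = np.vstack([np.zeros_like(M[:1]), M[:-1]])  # stack shifted down (PUSH)
    new = pop * up + op * M + push * down             # generic case, d > 1
    new[0] = pop * M[1] + op * h_out + push * 0.0     # d = 0: fresh frame on PUSH
    new[1] = pop * M[2] + op * M[1] + push * h_out    # d = 1: caller state saved
    return new
```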
To help alleviate this difficulty, we introduce in this section the neural lattice structure after which Neural Program Lattices are named.

To motivate the need for this lattice, consider the set of possible program execution paths as a tree with a branch point for each timestep in the execution and a probability assigned to each path. Exact gradients could be computed by executing every path in the tree, calculating the gradient for each path, and then taking an average of the gradients weighted by the path probabilities. This solution is impractical however since it requires computation and memory that scales exponentially with the number of timesteps. To avoid this problem, the NTM and related techniques perform a single forward execution which is meant to approximately represent the simultaneous execution of all of the paths in the tree. To avoid the exponential explosion, the state at each timestep, i.e. tree depth, is approximated using a fixed-sized representation. The approximate representation chosen by both NTM and Joulin & Mikolov (2015) is a soft-argmax of the states generated by performing each of the possible actions on the previous approximate state.

We observe that these two choices are really extreme points on what is a continuous spectrum of options. Instead of choosing to maintain a separate state representation for every path, or to group together all paths into a single representation, we can group together subsets of the paths and maintain an approximate state representation for each subset. This allows us to move along this spectrum by trading higher memory and computational requirements for a hopefully closer approximation of the marginal probability.

Figure 3: NPL lattice: Each slice corresponds to one timestep, and each node in a timestep corresponds to a given call depth, l, and elementary operation index, i. A subset of the lattice transitions are shown with blue arrows for PUSH transitions, green for OP and orange for POP.

Programs | Description | Calls
ADD | Multi-digit addition | ADD1
ADD1 | Single-digit addition | ACT_WRITE/CARRY/LSHIFT
CARRY | Write carry digit | ACT_PTR_MOVE/ACT_WRITE
LSHIFT | Shift four pointers left | ACT_PTR_MOVE
ACT_WRITE | Write result to environment | Elementary Operation
ACT_PTR_MOVE | Move pointer to left/right | Elementary Operation
NANOCRAFT | Build a rectangular fence | MOVE_MANY/BUILD_WALL
MOVE_MANY | Move multiple steps in one direction | ACT_MOVE
BUILD_WALL | Build a wall along one direction | PLACE_AND_MOVE
PLACE_AND_MOVE | Move one step and build a block | ACT_MOVE/ACT_PLACE_BLOCK
ACT_MOVE | Move one step in a direction | Elementary Operation
ACT_PLACE_BLOCK | Build a block at current location | Elementary Operation

The last remaining complexity is that λ does not indicate the necessary number of model timesteps. Thus the likelihood function must sum over all possible execution lengths up to some maximum T and ensure that the final action is a return, i.e. POP. If we define I = |λ_o| then formally,

L(θ) = log Σ_{t ≤ T} p_a^t(POP) y_I^t          (3.4)

Here we describe the implementation details of the various component neural networks inside our implementation of the NPL. Note that the mappings are all the same for both ADDITION and NANOCRAFT except for f_enc, which is task dependent.

f_enc for ADDITION: We represent the environment observation, (latent) programs and arguments as one-hot vectors of discrete states. We feed the concatenation of one-hot vectors for environment observation and argument through a linear decoder (with bias) to get a unified arg-env representation.
We then embed the programs (via f_embed) into an embedding space. Finally we feed the concatenation of the arg-env vector and program vector through a 2-layer MLP with rectified linear (ReLU) hidden activation and linear decoder.

f_enc for NANOCRAFT: We represent the environment observation as a grid of discrete states. Here we first embed each entry into an embedding space, and then feed this embedding through two convolutional layers and two MLP layers with ReLU hidden activation and linear decoder. We represent arguments again as one-hot vectors and embed programs into an embedding space. Finally we feed the concatenation of argument vectors, convolutional vectors of the environment observation and program vector through a 2-layer MLP with ReLU hidden activation and linear decoder.

f_lstm: We employ a two-layer LSTM cell for the mapping. The size of the hidden states is set to 128 for both ADDITION and NANOCRAFT.

f_prog: This mapping will map the LSTM hidden state to a probability distribution over programs. The hidden state output of f_lstm is mapped through a linear projection to an 8-dimensional space, and then another linear projection (with bias) with softmax generates p_g^t.

f_action and f_op: Each of these encoders will output a probability distribution. We feed the top hidden states of f_lstm first through a linear projection (with bias) and then a softmax function to generate p_a^t and p_o^t respectively.

In our implementation we group together execution paths at each timestep by call depth, l ∈ L, and number of elementary operations performed so far, i ∈ I, and maintain at each timestep a separate embedded state representation for each group of execution paths. Thus the unrolled linear architecture shown in Figure 1 becomes instead a lattice, as shown in Figure 3, with a grid of approximate program states at each timestep. Each node in this lattice represents the state of all paths that are at depth l and elementary operation i when they reach timestep t. Each node contains a soft-argmax of the stack states in M and S and an RNN cell identical to that in Figure 2. For each node we must also compute y_i^{t,l}, the probability that at timestep t the execution is at depth l and at elementary operation i and has output the elementary operation sequence λ^{1:i}. As before we can compute this recursively as:

y_i^{t+1,l} = p_a^{t,l+1}(POP) y_i^{t,l+1} + p_a^{t,l}(OP) p_o^{t,l}(λ_o^i) y_{i-1}^{t,l} + p_a^{t,l-1}(PUSH) y_i^{t,l-1}

When the operation sequence is too long, y_i^{t,l} will become vanishingly small as t grows. To prevent numerical underflow we renormalize the values at each timestep, storing the normalized values and normalization constant separately. The new update rule becomes:

ȳ_i^{t+1,l} = (1 / Y^{t+1}) [ p_a^{t,l+1}(POP) ȳ_i^{t,l+1} + p_a^{t,l}(OP) p_o^{t,l}(λ_o^i) ȳ_{i-1}^{t,l} + p_a^{t,l-1}(PUSH) ȳ_i^{t,l-1} ]

and we normalize the values and maintain a log-summation of the normalization constants, Υ^{t+1} = Υ^t + log Y^{t+1}, with Y^{t+1} chosen so that the ȳ_i^{t+1,l} sum to one (a code sketch of this renormalized recursion appears below). Similarly, the averaged call stack values are computed recursively as follows, and the running likelihood is accumulated in log space as

log(ŷ^{t+1}) = log_sum_exp( log(ŷ^t), log(p_a^{t,0}(POP)) + log(ȳ_I^{t,0}) + Υ^t )

In Section 3.3 we did not include the boundary conditions in our discussion to improve the readability. Our implementation, however, must account for the bounds on l and i, as shown in the Iverson brackets in the full update equations below.

We have left out the boundary conditions from the above updates for readability; the details of these are discussed in Appendix A.4.
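The following is a hedged Python sketch of one renormalized lattice step. Here `ybar[l, i]` holds the normalized node probabilities at one timestep, `log_Z` accumulates the log normalization constants, and `p_pop`, `p_op`, `p_push` are assumed per-node probability arrays (with `p_op[l, i]` standing for p_a^{t,l}(OP) p_o^{t,l}(λ_o^{i+1}) at node (l, i)). The slicing implicitly handles the boundaries that the Iverson brackets make explicit in the full equations.

```python
import numpy as np

def lattice_step(ybar, log_Z, p_pop, p_op, p_push):
    """One timestep of the normalized NPL lattice recursion; all arrays have
    shape (L, I) for L call depths and I elementary-operation counts."""
    L, I = ybar.shape
    new = np.zeros_like(ybar)
    new[:L - 1] += p_pop[1:] * ybar[1:]        # POP arrives from depth l+1
    new[:, 1:] += p_op[:, :-1] * ybar[:, :-1]  # OP consumes one more operation
    new[1:] += p_push[:L - 1] * ybar[:L - 1]   # PUSH arrives from depth l-1
    Z = new.sum()                              # renormalize to avoid underflow
    return new / Z, log_Z + np.log(Z)
```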
Formally,

L(θ) = log Σ_{t ≤ T} p_a^{t,0}(POP) ȳ_I^{t,0}

Remark: The specific choice to group by elementary operation index and call depth was motivated by the representational advantages each provides. Specifically:

Table 1 summarizes these advantages and the computational trade-offs discussed earlier.

As mentioned before, NPL can be trained jointly with full program abstractions (referred to as FULL) as well as elementary operation sequences (referred to as OP). When training with FULL samples, the training procedure is similar to that for NPI and we use this setting as one of our baselines. For each dataset on which we test NPL, we include mostly OP samples with only a small number of FULL samples. We pre-train the model solely on FULL samples for a few iterations to get a good initialization. After that, in each step we train with a batch of data purely from FULL or OP based on their proportions in the dataset and generate the parameter update in that step using the corresponding objective. For all tasks, we train the NPL using ADAM (Kingma & Ba, 2015) with base learning rate of 10^-4 and batch size of 1. We decay the learning rate by a factor of 0.95 every 10,000 iterations. These settings were chosen using a manual search based on performance on the validation data (a sketch of this schedule appears at the end of this section)."}, {"section_index": "12", "section_name": "4 EXPERIMENTS", "section_text": "ŷ_i^{t,l} = y_i^{t,l} / Σ_{i',l'} y_{i'}^{t,l'}

α_i^{t,l}(a) = (y_i^{t,l} / y^{t+1}) p_{a,i}^{t,l}(a) p_{o,i}^{t,l}(λ_o^{i+1}),   if a = OP
α_i^{t,l}(a) = (y_i^{t,l} / y^{t+1}) p_{a,i}^{t,l}(a),                            if a ≠ OP

M_{0,i}^{t+1,l} = [l < L] α_i^{t,l+1}(POP) M_{1,i}^{t,l+1} + [0 < i] α_{i-1}^{t,l}(OP) p_{o,i-1}^{t,l}(λ_o^i) h_{out,i-1}^{t,l} + [0 < l] α_i^{t,l-1}(PUSH) 0
M_{1,i}^{t+1,l} = [l < L] α_i^{t,l+1}(POP) M_{2,i}^{t,l+1} + [0 < i] α_{i-1}^{t,l}(OP) M_{1,i-1}^{t,l} + [0 < l] α_i^{t,l-1}(PUSH) h_{out,i}^{t,l-1}
M_{d,i}^{t+1,l} = [l < L] α_i^{t,l+1}(POP) M_{d+1,i}^{t,l+1} + [0 < i] α_{i-1}^{t,l}(OP) M_{d,i-1}^{t,l} + [0 < l] α_i^{t,l-1}(PUSH) M_{d-1,i}^{t,l-1},   d > 1

S_{0,i}^{t+1,l} = [l < L] α_i^{t,l+1}(POP) S_{1,i}^{t,l+1} + [0 < i] α_{i-1}^{t,l}(OP) S_{0,i-1}^{t,l} + [0 < l] α_i^{t,l-1}(PUSH) g_{out,i}^{t,l-1}
S_{d,i}^{t+1,l} = [l < L] α_i^{t,l+1}(POP) S_{d+1,i}^{t,l+1} + [0 < i] α_{i-1}^{t,l}(OP) S_{d,i-1}^{t,l} + [0 < l] α_i^{t,l-1}(PUSH) S_{d-1,i}^{t,l-1},   d > 0

Grouping by elementary operation index: allows the model to represent the input world state exactly instead of resorting to the fuzzy world state representation from equation 3.2.

Grouping by call depth: allows the representation to place probability only on execution paths that return from all subprograms they execute, and return only once from the top level program as specified in equation 3.4.

Finally, in practice we find that values of the y's quickly underflow, and so we renormalize them at each timestep, as discussed in Appendix A.3.

In this section, we demonstrate the capability of NPL to learn on both the long-hand addition task (ADDITION) from Reed & de Freitas (2016) and a newly introduced task involving arranging blocks in a grid-world (NANOCRAFT). We show that using the NPL to train with mostly the weak supervision of elementary operation traces, and very few full program traces, our technique significantly outperforms traditional sequence-to-sequence models, and performs comparably to NPI models trained entirely with the strong supervision provided by full program traces. Details of the experimental settings are discussed in Appendix A.5."}]
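The joint FULL/OP training schedule described above can be sketched as follows. `step_full` and `step_weak` are hypothetical closures that apply one ADAM update under the strong-supervision objective or the lattice-marginal objective respectively; only the scheduling logic reflects the paper's stated settings.

```python
import numpy as np

def train(full_data, op_data, step_full, step_weak, pretrain=100, iters=50000):
    rng = np.random.default_rng(0)
    lr = 1e-4                                         # base ADAM learning rate
    p_full = len(full_data) / (len(full_data) + len(op_data))
    for _ in range(pretrain):                         # initialize on FULL only
        step_full(rng.integers(len(full_data)), lr)
    for it in range(1, iters + 1):
        if it % 10000 == 0:
            lr *= 0.95                                # decay every 10,000 iterations
        if rng.random() < p_full:                     # pick objective by proportion
            step_full(rng.integers(len(full_data)), lr)
        else:
            step_weak(rng.integers(len(op_data)), lr)
```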
[{"section_index": "0", "section_name": "LEARNING TO DISCOVER SPARSE GRAPHICAL MODELS", "section_text": "Eugene Belilovsky\nUniversity of Paris-Saclay, France\nFigure 4: Average test likelihood for COAD and BRCA subject groups in gene data and neuroimaging data using different number of selected edges. Each experiment is repeated 50 times for genetics data. It is repeated approximately 1500 times in the fMRI to obtain significant results due high variance in the data. DeepGraph with averaged permutation dominates in all cases for genetics data, while DeepGraph+Permutation is superior o1 equal to competing methods in the fMRI data.\nSpearman correlations between pairs of solutions, as it is a measure of a monotone link between twc variables. DeepGraph has far better stability in the genome experiments and is competitive in the fMRI data."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Table 4: Avg. execution time over 10 trials for. Table 3: Average Spearman correlation results for real data 50 and 500 node problem on a CPU for Graph. showing stability of solution amongst 50 trials. Lasso, BDMCMC, and DeepGraph We use the network DeepGraph-39, the same network and parameters from synthetic experiments, using the same evaluation protocol as used in the genomic data. For both control and autism patients we use time series from 35 random subjects to estimate edges and corresponding precision matrices We find that for both the Autism and Control group we can obtain edge selection comparable to graph. lasso for very few selected edges. When the number of selected edges is in the range above 25 we begin to perform significantly better in edge selection as seen in Fig.4 We evaluated stability of the. results as shown in Tab.[3] DeepGraph outperformed the other methods across the board..\nProbabilistic graphical models provide a powerful framework for describing the dependencies betweer a set of variables. Many applications infer the structure of a probabilistic graphical model from data to elucidate the relationships between variables. These relationships are often represented by ar undirected graphical model also known as a Markov Random Field (MRF). We focus on a commor MRF model, Gaussian graphical models (GGMs). GGMs are used in structure-discovery settings fo rich data such as neuroimaging, genetics, or finance (Friedman et al.]2008] Ryali et al]2012)Mohar et al.[2012f|Belilovsky et al.[2016). Although multivariate Gaussian distributions are well-behaved determining likely structures from few examples is a complex task when the data is high dimensiona It requires strong priors, typically a sparsity assumption, or other restrictions on the structure of the graph, which now make the distribution difficult to express analytically and use.\nABiDE has high variability across sites and subjects. As a result, to resolve differences between approaches, we needed to perform 1O0o folds to obtain well-separated error bars. We found that the birth-death MCMC method took very long to converge on this data, moreover the need for many folds to obtain significant results amongst the methods made this approach prohibitively slow to evaluate\nA standard approach to estimating structure with GGMs in high dimensions is based on the classic. result that the zeros of a precision matrix correspond to zero partial correlation, a necessary and sufficient condition for conditional independence (Lauritzen]1996). Assuming only a few conditional. 
dependencies corresponds to a sparsity constraint on the entries of the precision matrix, leading to a combinatorial problem. Many popular approaches to learning GGMs can be seen as leveraging the ℓ1-penalized log-likelihood objective given below.

http://preprocessed-connectomes-project.github.io/abide

University of Montreal, Canada

matthew.blaschko@esat.kuleuven.be"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task often requiring the formulation of priors and sophisticated inference procedures. In the setting of Gaussian Graphical Models (GGMs) a popular estimator is a maximum likelihood objective with a penalization on the precision matrix. Adapting this estimator to capture domain-specific knowledge as priors or a new data likelihood requires great effort. In addition, structure recovery is an indirect consequence of the data-fit term. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structure properties. We propose here to leverage this latter source of information as training data to learn a function mapping from empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery, rather than maximizing data likelihood. We apply this framework to several real-world problems in structure discovery and show that it can be competitive to standard approaches such as graphical lasso, at a fraction of the execution speed. We use convolutional neural networks to parametrize our estimators due to the compositional structure of the problem. Experimentally our learnable graph-discovery method trained on synthetic data generalizes well: identifying relevant edges in real data, completely unknown at training time. We find that on genetics, brain imaging, and simulation data we obtain competitive (and generally superior) performance, compared with analytical methods.

Resting State Functional Connectivity: We evaluate our graph discovery method to study brain functional connectivity in resting-state fMRI data. Correlations in brain activity measured via fMRI reveal functional interactions between remote brain regions. These are an important measure to study psychiatric diseases that have no known anatomical support. Typical connectome analysis describes each subject or group by a GGM measuring functional connectivity between a set of regions (Varoquaux & Craddock, 2013). We use the ABIDE dataset (Di Martino et al., 2014), a large scale resting state fMRI dataset. It gathers brain scans from 539 individuals suffering from autism spectrum disorder and 573 controls over 16 sites. For our experiments we use an atlas with 39 regions of interest derived in Varoquaux et al. (2011).

Table 4 (execution time):
Method | 50 nodes (s) | 500 nodes (s)
sklearn GraphLassoCV | 4.81 | 554.7
BDgraph | 42.13 | N/A
DeepGraph | 0.27 | 5.6

Table 3 (Spearman correlation):
Method | Gene BRCA | Gene COAD | ABIDE Control | ABIDE Autistic
Graph Lasso | 0.25 ± .003 | 0.34 ± .004 | 0.21 ± .003 | 0.21 ± .003
Ledoit-Wolfe | 0.12 ± .002 | 0.15 ± .003 | 0.13 ± .003 | 0.13 ± .003
Bdgraph | 0.07 ± .002 | 0.08 ± .002 | N/A | N/A

(Figure 5 graphic omitted: estimated graphs from GraphLasso (35 samples), DeepGraph (35 samples), and GraphLasso (368 samples); see the Figure 5 caption.)

Θ̂ = argmin_{Θ ≻ 0} tr(Σ̂ Θ) - log det Θ + λ ||Θ||_1          (1)

which can be seen as a penalized maximum-likelihood estimator.
Here Θ and Σ̂ are the precision and sample covariance matrices, respectively. A large variety of alternative regularization penalties extend the priors of the graphical lasso (Danaher et al., 2014; Ryali et al., 2012; Varoquaux et al., 2010). However, several problems arise in this approach. Constructing novel surrogates for structured-sparsity assumptions on MRF structures is challenging, as a prior needs to be formulated and incorporated into a penalized maximum likelihood objective which then needs an efficient optimization algorithm to be developed, often within a separate research effort. Furthermore, model selection in a penalized maximum likelihood setting is difficult as regularization parameters are often unintuitive.

We propose to learn the estimator. Rather than manually designing a specific graph-estimation procedure, we frame this estimator-engineering problem as a learning problem, selecting a function from a large flexible function class by risk minimization. This allows us to construct a loss function that explicitly aims to recover the edge structure. Indeed, sampling from a distribution of graphs and empirical covariances with desired properties is often possible, even when this distribution is not analytically tractable. As such we can perform empirical risk minimization to select an appropriate function for edge estimation. Such a framework gives easier control of the assumed level of sparsity (as opposed to graph lasso) and can impose structure on the sampling to shape the expected distribution, while optimizing a desired performance metric."}, {"section_index": "3", "section_name": "DISCUSSION AND CONCLUSIONS", "section_text": "Our method was competitive with strong baselines. Even in cases that deviate from standard GGM sparsity assumptions (e.g. Laplacians, small-world) it performed substantially better. When fine-tuning on the target distribution, performance further improves. Most importantly the learned estimator generalizes well to real data, finding relevant stable edges. We also observed that the learned estimators generalize to variations not seen at training time (e.g. different n or sparsity), which points to this potentially learning generic computations. This also shows potential to more easily scale the method to different graph sizes. One could consider transfer learning, where a network for one size of data is used as a starting point to learn a network working on larger dimension data.

For particular cases we show that the problem of interest can be solved with a polynomial function, which is learnable with a neural network (Andoni et al., 2014). Motivated by this fact, as well as theoretical and empirical results on learning smooth functions approximating solutions to combinatorial problems (Cohen et al., 2016; Vinyals et al., 2015), we propose to use a particular convolutional neural network as the function class. We train it by sampling small datasets, generated from graphs with the prescribed properties, with a primary focus on sparse graphical models. We estimate from this data small-sample covariance matrices (n < p), where n is the number of samples and p is the dimensionality of the data. Then we use them as training data for the neural network (Figure 2), where target labels are indicators of present and absent edges in the underlying GGM. The learned network can then be employed in various real-world structure discovery problems.

Penalized maximum likelihood can provide performance guarantees under restrictive assumptions on
the form of the distribution and not considering the regularization path. In the proposed method one could obtain empirical bounds under the prescribed data distribution. Additionally, at execution time the speed of the approach can allow for re-sampling based uncertainty estimates and efficient model selection (e.g. cross-validation) amongst several trained estimators.

In Section 1.1 we review the related work. In Section 2 we formulate the risk minimization view of graph-structure inference and describe how it applies to sparse GGMs. Section 2.3 describes and motivates the deep-learning architecture we chose to use for the sparse GGM problem in this work. In Section 3 we describe the details of how we train an edge estimator for sparse GGMs. We then evaluate its properties extensively on simulation data. Finally, we show that this edge estimator trained only on synthetic data can obtain state of the art performance at inference time on real neuroimaging and genetics problems, while being much faster to execute than other methods.

We have introduced the concept of learning an estimator for determining the structure of an undirected graphical model. A network architecture and sampling procedure for learning such an estimator for the case of sparse GGMs was proposed. We obtained competitive results on synthetic data with various underlying distributions, as well as on challenging real-world data. Empirical results show that our method works particularly well compared to other approaches for small-world networks, an important class of graphs common in real-world domains. We have shown that neural networks can obtain improved results over various statistical methods on real datasets, despite being trained with samples from parametric distributions. Our approach enables straightforward specifications of new priors and opens new directions in efficient graphical structure discovery from few examples."}, {"section_index": "4", "section_name": "ACKNOWLEDGEMENTS", "section_text": "Lopez-Paz et al. (2015) analyze learning functions to identify the structure of directed graphical models in causal inference using estimates of kernel-mean embeddings. As in our work, they demonstrate the use of simulations for training while testing on real data. Unlike our work, they primarily focus on finding the causal direction in two node graphs with many observations.

Our learning architecture is motivated by the recent literature on deep networks. Vinyals et al. (2015) have shown that neural networks can learn approximate solutions to NP-hard combinatorial problems, and the problem of optimal edge recovery in MRFs can be seen as a combinatorial optimization problem. Several recent works have been proposed which show neural architectures for graph input data (Henaff et al., 2015; Duvenaud et al., 2015; Li et al., 2016). These are based on multi-layer convolutional networks, as in our work, or multi-step recurrent neural networks. The input in our approach can be viewed as a complete graph, while the output a sparse graph, thus none of these are directly applicable. A related use of deep networks to approximate a posterior distribution can be found in Balan et al. (2015). Finally, Gregor & LeCun (2010); Xin et al. (2016) use deep networks to approximate steps of a known sparse recovery algorithm.

Figure 5: Example solution from DeepGraph and Graph Lasso in the small sample regime on the same 35 samples, along with a larger sample solution of Graph Lasso for reference.
DeepGraph is able to extract similar key edges as graphical lasso.

We show the edges returned by Graph Lasso and DeepGraph for a sample from 35 subjects (Fig. 5) in the control group. We also show the result of a large-sample estimate based on 368 subjects from graphical lasso. In visual evaluation of the edges returned by DeepGraph we find that they closely align with results from a large-sample estimation procedure. Furthermore we can see several edges in the subsample which were particularly strongly activated in both methods.

This work is partially funded by Internal Funds KU Leuven, FP7-MC-CIG 334380, DIGITEO 2013-0788D - SOPRANO, and ANR-11-BINF-0004 NiConnect. We thank Jean Honorio for providing pre-processed Cancer Genome Data.

Alexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. Learning polynomials with neural networks.

The design of the estimator in Equation (1) is not explicitly minimizing this risk functional. Thus modifying the estimator to fit a different class of graphs (e.g. small-world networks) while minimizing R(f) is not obvious. Furthermore, in practical settings the optimal λ is unknown and precision matrix entries can be very small. We would prefer to directly minimize the risk functional. Desired structural assumptions on samples from P on the underlying graph, such as sparsity, may imply that the distribution is not tractable for analytic solutions. Meanwhile, we can often devise a sampling procedure for P allowing us to select an appropriate function via empirical risk minimization. Thus it is sufficient to define a rich enough F over which we can minimize the empirical risk over the parameters w. In practice we use a cross-entropy surrogate, l : R^{N_e} x L^{N_e} -> R, given by:

l(f_w(Σ̂), Y) = Σ_{i ≠ j} ( Y_ij log(f_w(Σ̂)_ij) + (1 - Y_ij) log(1 - f_w(Σ̂)_ij) )

Bayesian approaches to structure learning rely on priors on the graph combined with sampling techniques to estimate the posterior of the graph structure. Some approaches make assumptions on the decomposability of the graph (Moghaddam et al., 2009). The G-Wishart distribution is a popular distribution which forms part of a framework for structure inference, and advances have been recently made in efficient sampling (Mohammadi & Wit, 2015). These methods can still be rather slow compared to competing methods, and in the setting of p > n we find they are less powerful.

Θ_ij = 0  if and only if  x_i ⊥ x_j | X_{V \ {i,j}}

R(f) = E_{(Σ̂, Y) ~ P} [ l(f(Σ̂), Y) ]

Here l : L^{N_e} x L^{N_e} -> R^+ is the loss function. For graphical model selection the 0/1 loss function is the natural error metric to consider (Wang et al., 2010). The estimator with minimum risk is generally not possible to compute as a closed form expression for most interesting choices of P, such as those arising from sparse graphs. In this setting, Eq. (1) achieves the information theoretic optimal recovery rate up to a constant for certain P corresponding to uniformly sparse graphs with a maximum degree, but only when the optimal λ is used and the non-zero precision matrix values are bounded away from zero (Wang et al., 2010; Ravikumar et al., 2011).

Table 5 (covariance prediction):
Method | mean ||Σ - Σ̂||_2 | mean ||Σ - Σ̂||_∞
Empirical | 0.0267 | 0.543
Graph Lasso | 0.0223 | 0.680
DeepGraph | 0.0232 | 0.673

We discuss how the described approach can be applied to recover sparse Gaussian graphical models. A typical assumption in many modalities is that the number of edges is sparse. A convenient property
We discuss how the described approach can be applied to recover sparse Gaussian graphical models. A typical assumption in many modalities is that the number of edges is sparse. A convenient property of these GGMs is that the precision matrix has a zero value in the (i, j)th entry precisely when variables i and j are independent conditioned on all others. Additionally, the precision matrix and partial correlation matrix have the same sparsity pattern, while the partial correlation matrix has normalized entries.
Table 5: Covariance prediction of ABIDE data. Averaged over 50 trials of 35 samples from the ABIDE Control group.
              mean ‖Σ − Σ̂‖₂    mean ‖Σ − Σ̂‖∞
Empirical        0.0267           0.543
Graph Lasso      0.0223           0.680
DeepGraph        0.0232           0.673
Table 6: For each scenario we generate 100 graphs with 39 nodes, and corresponding data matrices sampled from distributions with those underlying graphs. The number of samples is indicated by n.
Experimental Setup                          Method               Prec@5%          AUC              CE
Gaussian Random Graphs                      Glasso               0.464 ± 0.038    0.726 ± 0.021    0.02
(n=35, p=39, sparsity=2%)                   Glasso (optimal)     0.519 ± 0.035    0.754 ± 0.019    0.02
                                            BDGraph              0.587 ± 0.033    0.811 ± 0.017    0.15
                                            DeepGraph-39         0.590 ± 0.026    0.810 ± 0.019    0.03
                                            DeepGraph-39+Perm    0.598 ± 0.026    0.831 ± 0.017    0.03
Gaussian Random Graphs                      Glasso               0.732 ± 0.046    0.562 ± 0.013    0.32
(n=35, p=39, sparsity=15%)                  Glasso (optimal)     0.847 ± 0.029    0.595 ± 0.011    0.33
                                            BDGraph              0.861 ± 0.015    0.654 ± 0.013    0.33
                                            DeepGraph-39         0.678 ± 0.032    0.643 ± 0.012    0.33
                                            DeepGraph-39+Perm    0.792 ± 0.023    0.660 ± 0.011    0.33
We propose to simulate our a priori assumptions of sparsity and Gaussianity to learn f_w(Σ̂), which can then produce predictions of edges from the input data. We model P(x|G) as arising from a sparse prior on the graph G and correspondingly the entries of the precision matrix Θ. To obtain a single sample of Σ̂ corresponds to n i.i.d. samples from N(0, Θ⁻¹). We can now train f_w(Σ̂) by generating sample pairs (Σ̂, Y). At execution time we standardize the input data and compute the covariance matrix before evaluating f_w(Σ̂). The process of learning f_w for the sparse GGM is given in Algorithm 1. A weakly-informative sparsity prior is one where each edge is equally likely with small probability, versus structured sparsity where edges have specific configurations. For obtaining the training samples (Σ̂, Y) in this case we would like to create a sparse precision matrix, Θ, with the desired number of zero entries distributed uniformly. One strategy to do this and assure the precision matrices lie in the positive definite cone is to first construct an upper triangular sparse matrix and then multiply it by its transpose. This process is described in detail in the experimental section. Alternatively, an MCMC based G-Wishart distribution sampler can be employed if specific structures of the graph are desired (Lenkoski, 2013).
Algorithm 1 Training a GGM edge estimator
  for i ∈ {1, ..., N} do
    Sample G_i ~ P(G)
    Sample Θ_i ~ P(Θ | G = G_i)
    X_i ← {x_j ~ N(0, Θ_i⁻¹)} for j = 1, ..., n
    Construct the pair (Σ̂_i, Y_i) from (X_i, G_i)
  end for
  Select function class F (e.g. CNN)
  Optimize: min_{f ∈ F} (1/N) Σ_{k=1}^{N} l̂(f(Σ̂_k), Y_k)
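A minimal sketch of the sampling step of Algorithm 1 (our own; the function name and the specific constants zero_prob and c are assumptions, and the unit diagonal used to keep the factor invertible is our choice, not stated in the text):
import numpy as np

def sample_training_pair(p=39, n=35, zero_prob=0.95, c=0.3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Sparse triangular factor: off-diagonal entries zero with probability
    # zero_prob, otherwise uniform in [-c, c]; L @ L.T is positive definite.
    L = np.tril(rng.uniform(-c, c, (p, p)), k=-1)
    L[rng.random((p, p)) < zero_prob] = 0.0
    np.fill_diagonal(L, 1.0)
    theta = L @ L.T                      # sparse precision matrix
    sigma = np.linalg.inv(theta)         # covariance of the GGM
    X = rng.multivariate_normal(np.zeros(p), sigma, size=n)
    X = (X - X.mean(0)) / X.std(0)       # standardize, as at execution time
    emp_cov = X.T @ X / n                # input to the estimator f_w
    Y = (np.abs(theta) > 1e-10).astype(float)   # edge labels
    np.fill_diagonal(Y, 0.0)
    return emp_cov, Y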
Using our framework it is possible to attempt to directly predict an accurate covariance matrix given a noisy one constructed from few observations. This is a more challenging task than predicting the edges. In this section we show preliminary experiments which, given an empirical covariance matrix from few observations, attempt to predict a more accurate covariance matrix that takes into account the underlying sparse data dependency structure.
One challenge is that outputs of our covariance predictor must be on the positive semidefinite cone; thus we choose to instead predict on the Cholesky decompositions, which allows us to always produce positive definite covariances. We train a structure similar to DeepGraph-39, modifying the last layer to be a fully connected linear layer that predicts the Cholesky decomposition of the true covariance matrices generated by our model, with a squared loss.
We evaluate this network using the ABIDE dataset described in Section 3. The ABIDE data has a large number of samples, allowing us to obtain a large-sample estimate of the covariance and compare it to our estimator as well as graphical lasso and empirical covariance estimators. Using the large-sample ABIDE empirical covariance matrix, we find that we can obtain competitive ℓ₂ and ℓ∞ norms using few samples. We use 403 subjects from the ABIDE Control group, each with a recording of 150-200 samples, to construct the covariance matrix, totaling 77,330 samples (some correlated). This acts as our very approximate estimate of the population covariance Σ. We then evaluate covariance estimation on 35 samples using the empirical covariance estimator, graphical lasso, and DeepGraph trained to output covariance matrices. We repeat the experiment for 50 different subsamples of the data. We see in Table 5 that the prediction approach can obtain competitive results. In terms of ℓ₂, graphical lasso performs better; however, our estimate is better than empirical covariance estimation and much faster than graphical lasso. In some applications such as robust estimation, a fast estimate of the covariance matrix (automatically embedding sparsity assumptions) can be of great use. For ℓ∞ error we see the empirical covariance estimation outperforms graphical lasso and DeepGraph for this dataset, while DeepGraph performs better than graphical lasso on this metric.
We note these results are preliminary, as the covariance predicting networks were not heavily optimized; moreover the ABIDE dataset is very noisy even when pre-processed, and thus even the large-sample covariance estimate may not be accurate. We believe this is an interesting alternate application of our paper.
The sparsity patterns in real data are often not uniformly distributed. Many real world networks have a small-world structure: graphs that are sparse and yet have a comparatively short average distance between nodes. These transport properties often hinge on a small number of high-degree nodes called hubs. Normally, such structural patterns require sophisticated adaptation when applying estimators like Eq. (1). Indeed, high-degree nodes break the small-sample, sparse-recovery properties of ℓ₁-penalized estimators (Ravikumar et al., 2011). In our framework such structural assumptions appear as a prior that can be learned offline during training of the prediction function. Similarly, priors on other distributions such as general exponential families can be more easily integrated. As the structure discovery model can be trained offline, even a slow sampling procedure may suffice.
"}, {"section_index": "5", "section_name": "A.2 ADDITIONAL SYNTHETIC RESULTS ON SPARSITY", "section_text": "We investigate the effect of sparsity on DeepGraph-39, which has been trained with input that is 92-96% sparse. We find that DeepGraph performs well at the 2% sparsity level despite not seeing this at training time. At the same time, performance begins to degrade at 15% but is still competitive in several categories. The results are shown in Table 6. Future investigation can consider how alternate variation of sparsity at training time will affect these results.
In this work we propose to use a neural network as our function f_w. To motivate this, let us consider the extreme case when n ≫ p. In this case Σ̂ ≈ Σ, and thus entries of Σ̂⁻¹, or the partial correlations that are almost equal to zero, can give the edge structure. The partial correlations can be computed recursively:
ρ_{i,j|Z} = (ρ_{i,j|Z\z₀} − ρ_{i,z₀|Z\z₀} ρ_{j,z₀|Z\z₀}) / D,  with D = sqrt(1 − ρ²_{i,z₀|Z\z₀}) sqrt(1 − ρ²_{j,z₀|Z\z₀})   (5)
We may ignore the denominator, D, as we are interested in I(ρ_{i,j|Z} = 0). Thus we are left with a recursive formula that yields a high degree polynomial.
From Andoni et al. (2014, Theorem 3.1), using gradient descent, a neural network with only two layers can learn a polynomial function of degree d to arbitrary precision given sufficient hidden units.
Remark 1. Naively, the polynomial from the recursive definition of partial correlation is of degree bounded by 2^{p−2}. In the worst case, this would seem to imply that we would need an exponentially growing number of hidden nodes to approximate it. However, this problem has a great deal of structure that can allow efficient approximation. Firstly, higher order monomials will go to zero quickly with a uniform prior on ρ_{i,j}, which takes values between 0 and 1, suggesting that in many cases a concentration bound exists that guarantees non-exponential growth. Furthermore, the existence result is shown already for a shallow network, and we expect a logarithmic decrease in the number of parameters to perform function estimation with a deep network (Cohen et al., 2016).
"}, {"section_index": "6", "section_name": "A.3 APPLICATION OF LARGER NETWORK ON SMALLER INPUT", "section_text": "We perform a preliminary investigation of the application of a network trained for a larger number of nodes to a smaller set of nodes. Specifically, we consider the breast invasive carcinoma group's gene data. We now take all 175 valid genes from Appendix C.2 of Honorio et al. (2012). We take the network trained on 500 nodes in the synthetic experiments section. We use the same experimental setup as in the gene experiments. The 175 × 175 covariance matrix is estimated from 40 samples and padded to the appropriate size. We observe that DeepGraph has similar performance to graph lasso, while permuting the input and ensembling the result gives substantial improvement.
Figure 6: Average test likelihood over 50 trials of applying a network trained for 500 nodes, used on a 175 node problem.
Moreover, there is a great deal of redundant computation in Eq. (5), and an efficient dynamic programming implementation can yield polynomial computation time, requiring only low order polynomial computations with appropriate storage of previous computation. Similarly, we would like to design a network that has the capacity to re-use computations across edges and approximate low order polynomials. We also observe that the conditional independence of nodes i, j given Z can be computed equivalently in many ways by considering many paths through the nodes Z. Thus we can choose any valid ordering for traversing the nodes starting from a given edge.
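To make the shared-computation point concrete, here is our own small sketch (for small p only; names and the choice of a fixed traversal order are assumptions) of the recursion in Eq. (5) with memoization, which keeps the computation polynomial by re-using sub-results across edges:
from functools import lru_cache
import numpy as np

def partial_correlations(corr):
    # corr: (p, p) full correlation matrix
    p = corr.shape[0]

    @lru_cache(maxsize=None)
    def rho(i, j, Z):                  # Z: frozenset of conditioning nodes
        if not Z:
            return corr[i, j]
        z0 = min(Z)                    # any valid ordering works
        Zr = Z - {z0}
        num = rho(i, j, Zr) - rho(i, z0, Zr) * rho(j, z0, Zr)
        den = np.sqrt((1 - rho(i, z0, Zr) ** 2) * (1 - rho(j, z0, Zr) ** 2))
        return num / den

    out = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            Z = frozenset(range(p)) - {i, j}
            out[i, j] = out[j, i] = rho(i, j, Z)
    return out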
"}, {"section_index": "7", "section_name": "A.4 PERMUTATION AS ENSEMBLE METHOD", "section_text": "As discussed in Section 2.3, permuting the input and averaging several permutations can produce an improved result empirically. We interpret this as a typical ensembling method. This can be an advantage of the proposed architecture, as we are able to easily use standard ensemble techniques. We perform an experiment to further verify that the permutation of the input (and subsequent inverse permutation) allows us to produce separate classifiers that have uncorrelated errors.
We use the setup from the synthetic experiments with DeepGraph-39 in Section 3, with n = 35 and p = 39. We construct 20 permutation matrices as in the experimental section. Treating each as a separate classifier, we compute the correlation coefficient of the errors on 50 synthetic input examples. We find that the average correlation coefficient of the errors of two classifiers is 0.028 ± 0.002, suggesting they are uncorrelated. Finally, we note the individual errors are relatively small, as can already be inferred from our extensive experimental results in Section 3. We however compute the average absolute error of all the outputs across each permutation for this set of inputs as 0.03; notably, the range of outputs is 0 to 1. Thus, since prediction errors differ at each permutation but are accurate, we can average and yield a lower total prediction error.
Finally, we note that our method is extremely efficient computationally, thus averaging the results of several permutations is practical even as the graph becomes large.
We propose a series of shared operations at each edge. We consider a feedforward network where each edge i, j is associated with a fixed sized vector, o^k_{i,j}, of dimensionality d at each layer k > 0. o^0_{i,j} is initialized to the covariance entries at k = 0. For each edge we start with a neighborhood of the 6 adjacent nodes, i, j, i−1, i+1, j−1, j+1, for which we take all corresponding edge values from the covariance matrix, illustrated in Figure 1. We proceed at each layer to increase the nodes considered for each edge, the output at each layer progressively increasing the receptive field, making sure all values associated with the considered nodes are present. The receptive field here refers to the original covariance entries which are accessible by a given o^k_{i,j} (Luo et al., 2010). The equations defining the process are shown in Figure 1. Here a neural network f_{w_k} is applied at each edge at each layer and a dilation sequence d_k is used. We call a network of this topology a D-Net of depth l. We use dilation here to allow the receptive field to grow fast, so the network does not need a great deal of layers.
Figure 1: (a) Illustration of nodes and edges "seen" at edge (4,13) in layer 1 and (b) receptive field at layer 1. All entries in grey show the covariance entries used to compute o_{4,13}. (c) shows the dilation process and receptive field (red) at higher layers.
[Figure 1 panel annotations (recovered): o^0_{i,j} = ρ_{i,j};  o^1_{i,j} = f_{w_1}(o^0 values in the 6-node neighborhood of edge (i,j));  o^2_{i,j} = f_{w_2}(o^1_{i,j}, o^1_{i−d_2,j}, o^1_{i,j−d_2}, o^1_{i+d_2,j−d_2}, ...);  ŷ_{i,j} = σ(w^T o^l_{i,j})]
We make the following observations:
Proposition 2. For general P it is a necessary condition for P-consistency that the receptive field of the D-Net covers all entries of the covariance matrix at any edge it is applied.
Proof. Consider nodes i and j and a chain graph such that i and j are adjacent to each other in the matrix but are at the terminal nodes of the chain graph. One would need to consider all other variables to be able to explain away the correlation. Alternatively, we can see this directly from expanding Eq. (5).
Intuitively, adjacent edges have a high overlap in their receptive fields and can easily share information about the non-overlapping components. This is analogous to a parametrized message passing. For example, if edge (i, j) is explained by node k, as k enters the receptive field of edge (i, j − 1), the path through (i, j) can already be discounted. In terms of Eq. (5) this can correspond to storing computations that can be used by neighbor edges from lower levels in the recursion.
Here f_{w_k} is shared amongst all nodes, and thus we can implement this as a special kind of convolutional network. We make sure to have considered all edges relevant to the current set of nodes in the receptive field, which requires us to add values from filters applied at the diagonal to all edges. In Figure 1 we illustrate the nodes and receptive field considered with respect to the covariance matrix. This also motivates a straightforward implementation using 2D convolutions (adding separate convolutions at (i, i) and (j, j) to each (i, j) at each layer to achieve the specific input pattern described), shown in Figure 2.
Considering the general n ≫ p case is illustrative. However, the main advantage of making the computations differentiable and learned from data is that we can take advantage of the sparsity and structure assumptions on the target function to obtain more efficient results than naive computation of partial correlation or matrix inversion. As n decreases, our estimate of ρ_{i,j} becomes inexact, and here a data-driven model which can take advantage of the assumptions on the underlying distribution can more accurately recover the graph structure.
The convolution structure is dependent on the order of the variables used to build the covariance matrix, which is arbitrary. Permuting the input data, we can obtain another estimate of the output. In the experiments, we leverage these various estimates in an ensembling approach, averaging the results of several permutations of the input. We observe that this generally yields a modest increase in accuracy, but that even a single node ordering can show substantially improved performance over competing methods in the literature.
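A minimal sketch of this permutation-and-average scheme (our own; `net` stands for the trained edge predictor and is an assumption), showing the inverse permutation that maps predictions back to the original node order:
import numpy as np

def ensemble_predict(net, emp_cov, n_perms=20, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    p = emp_cov.shape[0]
    acc = np.zeros((p, p))
    for _ in range(n_perms):
        perm = rng.permutation(p)
        pred = net(emp_cov[np.ix_(perm, perm)])   # predict on permuted input
        inv = np.argsort(perm)
        acc += pred[np.ix_(inv, inv)]             # undo the permutation
    return acc / n_perms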
"}, {"section_index": "8", "section_name": "3 EXPERIMENTS", "section_text": "Our experimental evaluations focus on the challenging high dimensional settings in which p > n, and consider both synthetic data and real data from genetics and neuroimaging. In our experiments we explore how well networks trained on parametric samples generalize, both to unseen synthetic data and to several real world problems. In order to highlight the generality of the learned networks, we apply the same network to multiple domains. We train networks taking in 39, 50, and 500 node graphs. The former sizes are chosen based on the real data we consider in subsequent sections. We refer to these networks as DeepGraph-39, 50, and 500. In all cases we have 50 feature maps of 3 × 3 kernels. The 39 and 50 node networks use 6 convolutional layers and d_k = k + 1. The 500 node network uses 8 convolutional layers and d_k = 2k + 1. We use ReLU activations. The last layer has a 1 × 1 convolution and a sigmoid outputting a value of 0 to 1 for each edge.
We sample P(X|G) with a sparse prior on P(G) as follows. We first construct a lower diagonal matrix, L, where each entry has probability a of being zero. Non-zero entries are set uniformly between −c and c. Multiplying LLᵀ gives a sparse positive definite precision matrix, Θ. This gives us our P(Θ|G) with a sparse prior on P(G). We sample from the Gaussian N(0, Θ⁻¹) to obtain samples of X. Here a corresponds approximately to a specific sparsity level in the final precision matrix, which we set to produce matrices 92-96% sparse, and c is chosen so that partial correlations range from 0 to 1.
Figure 2: Diagram of the DeepGraph structure discovery architecture used in this work. The input is first standardized and then the sample covariance matrix is estimated. A neural network consisting of multiple dilated convolutions and a final 1 × 1 convolution layer is used to predict edges corresponding to non-zero entries in the precision matrix.
[Figure 2 pipeline (recovered): input data → standardize → estimate covariance → dilated convolution layers → 1 × 1 convolution layer → edge estimate.]
Ultimately, our choice of architecture that has shared computations and multiple layers is highly scalable as compared with a naive fully connected approach, and allows leveraging existing optimized 2-D convolutions. In preliminary work we have also considered fully connected layers, but this proved to be much less efficient in terms of storage and scalability than using deep convolutional networks.
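Under the layer sizes stated above, a loose PyTorch sketch of such a dilated-convolution edge network might look as follows (our own reconstruction, not the authors' code; the class name and anything beyond the stated sizes are assumptions):
import torch
import torch.nn as nn

class DNetSketch(nn.Module):
    def __init__(self, layers=6, feats=50):
        super().__init__()
        blocks, in_ch = [], 1
        for k in range(layers):
            d = k + 1                                   # dilation d_k = k + 1
            blocks += [nn.Conv2d(in_ch, feats, 3, padding=d, dilation=d),
                       nn.ReLU()]
            in_ch = feats
        self.features = nn.Sequential(*blocks)
        self.edge_prob = nn.Conv2d(feats, 1, 1)         # final 1x1 convolution

    def forward(self, emp_cov):                         # (batch, p, p)
        h = self.features(emp_cov.unsqueeze(1))
        return torch.sigmoid(self.edge_prob(h)).squeeze(1)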
Here a corresponds approximately to a specific sparsity level in the final precisior matrix, which we set to produce matrices 92 - 96% sparse and c chosen so that partial correlation range 0 to 1.\nSynthetic Data Evaluation To understand the properties of our learned networks, we evaluated. them on different synthetic data than the ones they were trained on. More specifically, we used a completely different third party sampler so as to avoid any contamination. We use DeepGraph-39 on a variety of settings. The same trained network is utilized in the subsequent neuroimaging evaluations. as well. DeepGraph-500 is also used to evaluate larger graphs..\nFor each scenario we repeat the experiment for 100 different graphs and small sample observations showing the average area under the ROC curve (AUC), precision@k corresponding to 5% of possible edges, and calibration error (CE) (Mohammadi & Wit2015).\nFor graphical lasso we use the partial correlations to indicate confidence in edges; BDGraph automatically returns posterior probabilities as does our method. Finally to understand the effect of the regularization parameter we additionally report the result of graphical lasso under optimal regularizer setting on the testing data.\nOur method dominates all other approaches in all cases with p > n (which also corresponds to the training regime). For the case of random Gaussian graphs with n=35 (as in our training data), and graph sparsity of 95%, we have superior performance and can further improve on this by averaging permutations. Next we apply the method to a less straightforward synthetic data, with distributions typical of many applications. We found that, compared to baseline methods, our network performs particularly well with high-degree nodes and when the distribution becomes non-normal. In particular our method performs well on the relevant metrics with small-world networks, a very common family of graphs in real-world data, obtaining superior precision at the primary levels of interest. Figure|3 shows examples of random and Watts-Strogatz small-world graphs used in these experiments.\nTraining a new network for each number of samples can pose difficulties with our proposed method Thus we evaluted how robust the network DeepGraph-39 is to input covariances obtained from fewer or more samples. We find that overall the performance is quite good even when lowering the number of samples to n = 15, we obtain superior performance to the other approaches (Table|1). We also applied DeepGraph-39 on data from a multivariate generalization of the Laplace distribution (Gomez et al.] 1998). As in other experiments precision matrices were sampled from the G-Wishart at a sparsity of 95%.Gomez et al.(1998| Proposition 3.1) was applied to produce samples. We find that DeepGraph-39 performs competitively, despite the discrepancy between train and test distributions Experiments with variable sparsity are considered in the supplementary material, which find that for very sparse graphs, the networks remain robust in performance, while for increased density performance degrades but remains competitive.\nUsing the small-world network data generator (Peeters et al.]2015), we demonstrate that we can. update the generic sparse prior to a structured one. We re-train DeepGraph-39 using only 1000. examples of small-world graphs mixed with 1000 examples from the original uniform sparsity model We perform just one epoch of training and observe markedly improved performance on this test case. 
Each network is trained continuously with new samples generated until the validation error saturates. For a given precision matrix we generate 5 possible X samples to be used as training data, with a total of approximately 100K training samples used for each network. The networks are optimized using ADAM (Kingma & Ba, 2015) coupled with cross-entropy loss as the objective function (cf. Sec. 2.1). We use batch normalization at each layer. Additionally, we found that using the absolute value of the true partial correlations as labels, instead of hard binary labels, improves results.
We used the BDGraph R-package to produce sparse precision matrices based on the G-Wishart distribution (Mohammadi & Wit, 2015), as well as the R-package rags2ridges (Peeters et al., 2015) to generate data from small-world networks corresponding to the Watts-Strogatz model (Watts & Strogatz, 1998). We compared our learned estimator against the scikit-learn (Pedregosa et al., 2011) implementation of Graphical Lasso with regularizer chosen by cross-validation, as well as the Birth-Death Rate MCMC (BDMCMC) method from Mohammadi & Wit (2015).
For our final scenario we consider the very challenging setting with 500 nodes and only n = 50 samples. We note that the MCMC based method fails to converge at this scale, while graphical lasso is very slow, as seen in the timing performance, and barely performs better than chance. Our method convincingly outperforms graphical lasso in this scenario. Here we additionally report precision at just the first 0.05% of edges, since competitors perform nearly at chance at the 5% level.
Experimental Setup            Method                      Prec@5%          AUC              CE
Gaussian Random Graphs        Glasso                      0.361 ± 0.011    0.624 ± 0.006    0.07
(n=35, p=39)                  Glasso (optimal)            0.384 ± 0.011    0.639 ± 0.007    0.07
                              BDGraph                     0.441 ± 0.011    0.715 ± 0.007    0.28
                              DeepGraph-39                0.463 ± 0.009    0.738 ± 0.006    0.07
                              DeepGraph-39+Perm           0.487 ± 0.010    0.740 ± 0.007    0.07
Gaussian Random Graphs        Glasso                      0.539 ± 0.014    0.696 ± 0.006    0.07
(n=100, p=39)                 Glasso (optimal)            0.571 ± 0.011    0.704 ± 0.006    0.07
                              BDGraph                     0.648 ± 0.012    0.776 ± 0.007    0.16
                              DeepGraph-39                0.567 ± 0.009    0.759 ± 0.006    0.07
                              DeepGraph-39+Perm           0.581 ± 0.008    0.771 ± 0.006    0.07
Gaussian Random Graphs        Glasso                      0.233 ± 0.010    0.566 ± 0.004    0.07
(n=15, p=39)                  Glasso (optimal)            0.263 ± 0.010    0.578 ± 0.004    0.07
                              BDGraph                     0.261 ± 0.009    0.630 ± 0.007    0.41
                              DeepGraph-39                0.326 ± 0.009    0.664 ± 0.008    0.08
                              DeepGraph-39+Perm           0.360 ± 0.010    0.672 ± 0.008    0.08
Laplacian Random Graphs       Glasso                      0.312 ± 0.012    0.605 ± 0.006    0.07
(n=35, p=39)                  Glasso (optimal)            0.337 ± 0.011    0.622 ± 0.006    0.07
                              BDGraph                     0.298 ± 0.009    0.687 ± 0.007    0.36
                              DeepGraph-39                0.415 ± 0.010    0.711 ± 0.007    0.07
                              DeepGraph-39+Perm           0.445 ± 0.011    0.717 ± 0.007    0.07
Gaussian Small-World Graphs   Glasso                      0.387 ± 0.012    0.588 ± 0.004    0.11
(n=35, p=39)                  Glasso (optimal)            0.453 ± 0.008    0.640 ± 0.004    0.11
                              BDGraph                     0.428 ± 0.007    0.691 ± 0.003    0.17
                              DeepGraph-39                0.479 ± 0.007    0.709 ± 0.003    0.11
                              DeepGraph-39+Perm           0.453 ± 0.007    0.712 ± 0.003    0.11
                              DeepGraph-39+update         0.560 ± 0.008    0.821 ± 0.002    0.11
                              DeepGraph-39+update+Perm    0.555 ± 0.007    0.805 ± 0.003    0.11
Table 1: For each case we generate 100 sparse graphs with 39 nodes and data matrices sampled (with n samples) from distributions with those underlying graphs. DeepGraph outperforms other methods in terms of AP, AUC and precision at 5% (the approximate true sparsity). In terms of precision and AUC, DeepGraph has better performance in all cases except n > p.
We compute the average execution time of our method compared to Graph Lasso and BDGraph on a CPU in Table 4. We note that we use a production quality version of graph lasso (Pedregosa et al., 2011), whereas we have not optimized the network execution, for which known strategies may be applied (Denton et al., 2014).
Experimental Setup        Method               Prec@0.05%       Prec@5%          AUC              CE
Gaussian Random Graphs    random               0.052 ± 0.002    0.053 ± 0.000    0.500 ± 0.000    0.05
(n=50, p=500)             Glasso               0.156 ± 0.010    0.055 ± 0.001    0.501 ± 0.000    0.05
                          Glasso (optimal)     0.162 ± 0.010    0.055 ± 0.001    0.501 ± 0.000    0.05
                          DeepGraph-500        0.449 ± 0.018    0.109 ± 0.002    0.543 ± 0.002    0.06
                          DeepGraph-500+Perm   0.583 ± 0.018    0.116 ± 0.002    0.547 ± 0.002    0.06
Table 2: Experiment on 500 node graphs with only 50 samples, repeated 100 times. Improved performance in all metrics.
Figure 3: Examples of (a) random and (b) small-world graphs used in these experiments.
Cancer Genome Data. We perform experiments on a gene expression dataset described in Honorio et al. (2012). The data come from a cancer genome atlas from 2360 subjects for various types of cancer. We used the first 50 genes from Honorio et al. (2012, Appendix C.2) of commonly regulated genes in cancer. We evaluated on two groups of subjects, one with breast invasive carcinoma (BRCA) consisting of 590 subjects and the other colon adenocarcinoma (CODA) consisting of 174 subjects.
Evaluating edge selection in real-world data is challenging. We use the following methodology: for each method we select the top-k ranked edges, recomputing the maximum likelihood precision matrix with support given by the corresponding edge selection method. We then evaluate the likelihood on a held-out set of data. We repeat this procedure for a range of k. We rely on Algorithm O in Hara & Takemura (2010) to compute the maximum likelihood precision given a support. The experiment is repeated for each of the CODA and BRCA subject groups 150 times. Results are shown in Figure 4. In all cases we use 40 samples for edge selection and precision estimation. We compare with graphical lasso as well as the Ledoit-Wolf shrinkage estimator (Ledoit & Wolf, 2004). We additionally consider the MCMC based approach described in the previous section. For graphical lasso and Ledoit-Wolf, edge selection is based on thresholding partial correlation (Balmand & Dalalyan, 2016).
Additionally, we evaluate the stability of the solutions provided by the various methods. In several applications a low variance on the estimate of the edge set is important. On Table 3 we report"}]
BJVEEF9lx
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Recent progress in artificial intelligence is driven by the ability to learn representations from data Yet not all kinds of representations are equal, and many of the fundamental properties of representa tions (both as theoretical constructs and as observed experimentally in humans) are missing. Perhaps the most critical property of a system of representations is compositionality, which as described suc cinctly in (Fodor & Lepore|2002), is when (i) it contains both primitive symbols and symbols tha are complex; and (ii) the latter inherit their syntactic/semantic properties from the former. Compo sitionality is powerful because it enables a system of representation to support an infinite number oi semantically distinct representations by means of combination. This argument has been supportec experimentally; a growing body of evidence (Spelke & Kinzler| 2007) has shown that humans pos- sess a small number of primitive systems of mental representation - of objects, agents, number anc geometry - and new representations are built upon these core foundations.\nRepresentations learned with modern machine learning methods possess few or none of these prop- erties, which is a severe impediment. For illustration consider that navigation depends upon some representation of geometry, and yet recent advances such as end-to-end autonomous driving (Bo- jarski et al.|2016) side-step building explicit geometric representations of the world by learning to map directly from image inputs to motor commands. Any representation of geometry is implicit and has the advantage that it is economical in only possessing information necessary for the task However, this form of representation lacks (i) the ability to reuse these representations for other related tasks such as predicting object stability or performing mental rotation, (ii) the ability to com- pose these representations with others, for instance to represent a set or count of geometric objects and (iii) the ability to perform explicit inference using representations, for instance to infer why a particular route would be faster or slower.\nThis contribution provides a computational model of mental representation which inherits the com. positional and productivity advantages of symbolic representations, and the data-driven and eco-. nomical advantages of representations learned using deep learning methods. To this end, we model. mental representations as a form of data-structure, which by design possess various forms of com-. positionality. In addition, in step with deep learning methods we refrain from imposing a particular. representations on a system and allow it instead be learned. That is, rather than specify a concrete. data type (for example polygons or voxels for geometry), we instead define a class of representations. 
as abstract data types, and impose invariants, or axioms, that any representation must adhere to.
Mathematicians have sought an axiomatic account of our mental representations since the end of the nineteenth century, but both as an account of human mental representations, and as a means of specifying representations for intelligent systems, the axiomatic specifications suffer from a number
"}, {"section_index": "1", "section_name": "LEARNING APPROXIMATE DISTRIBUTION-SENSITIVE DATA STRUCTURES", "section_text": "Armando Solar Lezama
asolar@csail.mit.edu
"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We model representations as data structures which are distribution-sensitive, i.e., which exploit regularities in their usage patterns to reduce time or space complexity. We introduce probabilistic axiomatic specifications to extend abstract data types - which specify a class of representations with equivalent logical behavior - to distribution-sensitive data structures. We reformulate the synthesis of distribution-sensitive data structures as a continuous function approximation problem, such that the functions of a data structure are realized by deep neural networks, and use this to learn data structures such as a stack, queue, natural number, set, and binary tree.
of problems. Axioms are universally quantified - for all numbers, sets, points, etc. - while humans, in contrast, are not uniformly good at manipulating numbers of different magnitude (Hyde, 2011; Nuerk & Willmes, 2005; Dehaene, 1997), rotating geometry of different shapes (Izard et al., 2011), or sets of different cardinality. Second, axioms have no algorithmic content: they are declarative rules which do not suggest how to construct concrete representations that satisfy them. Third, only simple systems have reasonable axioms, whereas many representations are complex and cannot in practice be fully axiomatized; conventional axiomatic specifications do not readily accommodate partial specification. A fourth, potentially fatal threat is offered by Dehaene (1997), where he shows that there are infinitely many systems, most easily dismissed by even a child as clearly not number-like, which satisfy Peano's axioms of arithmetic. Moreover, these "nonstandard models of arithmetic" can never be eliminated by adding more axioms, leading Dehaene to conclude "Hence, our brain does not rely on axioms."
We extend, rather than abandon, the axiomatic approach to specifying mental representations, and employ it purely as a mechanism to embed domain-specific knowledge. We model a mental representation as an implementation of an abstract data type which adheres approximately to a probabilistic axiomatic specification. We refer to this implementation as a distribution-sensitive data structure.
In summary, in this paper:
- We introduce probabilistic axiomatic specifications as a quantifier-free relaxation of a conventional specification, which replaces universally quantified variables with random variables.
- Synthesis of a representation is formulated as synthesis of functions which collectively satisfy the axioms. When the axioms are probabilistic, this amounts to maximizing the probability that the axioms are true.
- We present a number of methods to approximate a probabilistic specification, reducing it to a continuous loss function.
- We employ neural networks as function approximators, and through gradient-based optimization learn representations for a number of fundamental data structures.
Abstract data types model representations as a set of types and functions which act on values of those types. They can also be regarded as a generalized approach to algebraic structures, such as lattices, groups, and rings. The prototypical example of an abstract data type is the Stack, which models an ordered, first-in, last-out container of items. We can abstractly define a Stack of Items, in part, by defining the interface:
empty : Stack
push : Stack × Item -> Stack
pop : Stack -> Stack × Item
isempty : Stack -> {0, 1}
The interface lists the function names and types (domains and range). Note that this is a functional (rather than imperative) abstract data type, and each function in the interface has no internal state. For example, push is a function that takes an instance of a Stack and an Item and returns a Stack. empty : Stack denotes a constant of type Stack, the empty stack of no items.
The meaning of the constants and functions is not specified in the interface. To give meaning to these names, we supplement the abstract data type with a specification as a set of axioms. The specification as a whole is the logical conjunction of this set of axioms. Continuing our example, for all s ∈ Stack, i ∈ Item:
pop(push(s, i)) = (s, i)
isempty(empty) = 1
isempty(push(s, i)) = 0
pop(empty) = ⊥
An important property of an abstract data type which supports algorithmic compositionality is encapsulation. Encapsulation means that the particular details of how the functions are implemented should not matter to the user of the data type, only that it behaves as specified. Many languages enforce that the internals are unobservable, and that the data type can only be interacted with through its interface. Encapsulation means that data structures can be composed without reasoning about their internal behavior.
In this paper however, we focus on parametric compositionality. Some data structures, in particular containers such as a stack, or set, or tree, are parametric with respect to some other type, e.g. the type of item. Parametric compositionality means for example that if we have a representation of a set, and a representation of a number, we get a set of numbers for free. Or, given representations for a tree and representations for Boolean logic, we acquire the ability to form logical expressions for free.
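As a small illustration of the interface-as-values view (our own rendering, not the paper's code), the interface can be written as a bundle of functions, with an exact list-backed implementation as a reference point:
from dataclasses import dataclass
from typing import Any, Callable, Tuple

@dataclass
class StackInterface:
    # A concrete data type assigns a value to each name in the interface.
    empty: Any
    push: Callable[[Any, Any], Any]
    pop: Callable[[Any], Tuple[Any, Any]]
    isempty: Callable[[Any], int]

list_stack = StackInterface(
    empty=(),
    push=lambda s, i: s + (i,),
    pop=lambda s: (s[:-1], s[-1]),
    isempty=lambda s: int(len(s) == 0),
)

# The axioms can then be checked directly:
s, i = list_stack.pop(list_stack.push(list_stack.empty, "x"))
assert (s, i) == ((), "x") and list_stack.isempty(s) == 1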
"}, {"section_index": "3", "section_name": "2.2 DISTRIBUTION SENSITIVE DATA STRUCTURES", "section_text": "Axiomatic specifications almost always contain universal quantifiers. The stack axioms are quantified over all possible stacks and all possible items. Real-world use of a data structure is however never exhaustive, and rarely uniform. Continuing our stack example, we will never store an infinite number of items, and the distribution over how many items are stored, and in which order relative to each other, will be highly non-uniform in typical use cases. Conventional data structures are agnostic to these distributional properties.
Data structures that exploit non-uniform query distributions are typically termed distribution-sensitive (Bose et al., 2013), and are often motivated by practical concerns, since queries observed in real-world applications are not uniformly random. An example is the optimum binary search tree on n keys, introduced by Knuth (Bose et al., 2013), which given a probability for each key has an average search cost no larger than any other key. More generally, distribution-sensitive data structures exploit underlying patterns in a sequence of operations in order to reduce time and space complexity.
To make the concept of a distribution-sensitive data structure precise, we first develop the concept of a probabilistically axiomatized abstract data type (T, O, F), which replaces universally quantified variables in its specification with random variables. T and O are respectively sets of type and interface names. F is a set of type specifications, each taking the form m : τ for a constant of type τ, or o : T₁ -> T₂ denoting a function from T₁ to T₂. Here τ ∈ T or a Cartesian product T₁ × ... × Tₙ.
A concrete data type σ implements an abstract data type by assigning a value (function or constant) to each name in O. A concrete data type is deemed a valid implementation only with respect to an algebraic specification A. A is a set of equational axioms of the form p = q, where p and q are constants, random variables, or transformations of random variables by functions in O.
A concrete representation of a stack is a data structure which assigns constants and functions to the names empty, push, pop and isempty. The data structure is a stack if and only if it satisfies the specification.
Since a transformation of a random variable yields a random variable, and an axiom is simply a predicate of its left and right hand side arguments, random variables present in an axiom imply that the axiom itself is a Boolean-valued random variable. For example, if we have a distribution over items i of the stack, axiom (1) itself is a random variable which is true or false depending on i, push, and pop, and can only be satisfied with some probability. We let P[A(σ)] denote the probability that the axioms are satisfied:
P[A(σ)] := P[⋀ᵢ pᵢ = qᵢ]
When P[A(σ)] = 1, σ can be said to fully satisfy the axioms. More generally, with respect to a space of concrete data types, we denote the maximum likelihood σ* as one which maximizes the probability that the axioms hold:
σ* = argmax_σ P[A(σ)]
There are a number of distinct forms of compositionality with respect to data structures. One example is algorithmic compositionality, by which we can compose algorithms which use as primitive operations the interfaces to these representations. These algorithms can in turn form the interfaces to other representations, and so on.
Probabilistic axioms do not imply that the concrete data structure itself is probabilistic. On the contrary, we are concerned with specifying and synthesizing deterministic concrete data structures which exploit uncertainty stemming only from the patterns in which the data structure is used.
Each type τ ∈ T will correspond to a finite-dimensional real-valued multidimensional array Rⁿ. Interface functions are continuous mappings between these arrays.
"}, {"section_index": "4", "section_name": "UNROLL AXIOMS", "section_text": "Axiom (1) of the stack is intensional in the sense that it refers to the underlying stack s. This provides an inductive property allowing us to fully describe the behavior of an unbounded number of push and pop operations with a single equational axiom. However, from an extensional perspective, we do not care about the internal properties of the stack; only that it behaves in the desired way. Put plainly, we only care that if we push an item i to the stack, then pop, that we get back i.
We do not care that the stack is returned to its initial state, only that it is returned to some state that will continue to obey this desired behavior.
An extensional view leads more readily to approximation, since we cannot expect to implement a stack which satisfies the inductive property of axiom (1) if it is internally a finite-dimensional vector. Instead we can unroll the axiom to be able to stack some finite number of n items:
"}, {"section_index": "5", "section_name": "APPROXIMATE DISTRIBUTIONS WITH DATA", "section_text": "We approximate random variables by a finite data distribution assumed to be a representative set of samples from that distribution. Given an axiom p = q, we denote p̂ and q̂ as values (arrays) computed by evaluating p and q respectively with concrete data from the data distributions of random variables and the interface functions.
We relax equality constraints in axioms to a distance function, in particular the L2 norm. This transforms the equational axioms into a loss function. Given i axioms, the approximate maximum likelihood concrete data type σ* is then:
σ* = argmin_σ Σᵢ ‖p̂ᵢ − q̂ᵢ‖₂
Constants and parameterized functions (e.g. neural networks) which minimize this loss function then compose a distribution-sensitive concrete data type.
A probabilistic specification is not easier to satisfy than a universally quantified one, but it can lend itself more naturally to a number of approximations. In this section we outlined a number of relaxations we apply to a probabilistic abstract data type to make synthesis tractable.
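As a minimal sketch of how the unrolled, extensional stack axioms become such an L2 loss (our own illustration, assuming push and pop are differentiable networks, empty is a learned tensor constant, and items is a list of item tensors):
import torch

def stack_axiom_loss(push, pop, empty, items):
    # Push the items in sequence, then require that successive pops return
    # them in reverse order: the relaxed form of pop(push(s, i)) = (s', i).
    s = empty
    for x in items:
        s = push(s, x)
    loss = torch.zeros(())
    for x in reversed(items):
        s, x_hat = pop(s)              # pop: Stack -> (Stack, Item)
        loss = loss + ((x_hat - x) ** 2).sum()
    return loss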
"}, {"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "We synthesized approximate distribution-sensitive data structures for the following abstract data types:
- Natural number (from Peano's axioms)
- Stack
- Queue
- Set
- Binary tree
With the exception of natural number (for which we used Peano's axioms), we use axiomatizations from (Dale & Walker, 1996). As described in Section 4, since we use finite-dimensional representations, we unroll the axioms some finite number of times (e.g., to learn a stack of three items rather than an unbounded one) and "extensionalize" them.
In each example we used single-layer convolutional neural networks with 24 filters of size 3 by 3 and rectifier non-linearities. In container examples such as Stack and Queue, the Item type was sampled from the MNIST dataset, and the internal stack representation was chosen (for visualization) to also be a 28 by 28 matrix. We minimized the equational distance loss function described in Section 3 using the Adam optimization algorithm, with a learning rate of 0.0001. In Figures 1 and 2 we visualize the properties of the learned stack.
To explore compositionality, we also learned a Stack, Queue and Set of Number, where Number was itself a data type learned from Peano's axioms.
Figure 1: Validation of stack trained on MNIST digits, and introspection of internal representation. Row push shows images pushed onto the stack from data in sequence. Row pop shows images taken from the stack using the pop function. Their equivalence demonstrates that the stack is operating correctly. Row stack shows the internal representation after push and pop operations. The stack is represented as an image of the same dimension as MNIST (28 by 28) arbitrarily. The stack learns to compress three images into the space of one, while maintaining the order. It deploys an interesting interlacing strategy, which appears to exploit some derivative information.
The learned internal representations depend on three things: (i) the axioms themselves, (ii) the architecture of the networks for each function in the interface, and (iii) the optimization procedure. In the stack example, we observed that if we decreased the size of the internal representation of a stack, we would need to increase the size and complexity of the neural network to compensate. This implies that statistical information about images must be stored somewhere, but there is some flexibility over where.
Figure 2: Generalization of the stack. Top left to top right, 10 images stacked in sequence using push. Bottom right to bottom left: result from calling pop on the stack 10 times. This stack was trained to stack three digits. It appears to generalize partially to four digits but quickly degrades after that. Since the stack is finite dimensional, it is not possible for it to generalize to arbitrarily long sequences of push operations.
Figure 3: Left: Stack versus queue encoding. Three MNIST images (top row) were enqueued onto the empty queue (middle row left), and pushed onto the empty stack (bottom row left). The middle row shows the internal queue representation after each enqueue operation, while the bottom is the internal stack representation after each push. In this case, the learned stack representation compresses pixel intensities into different striated sections of the real line, putting data about the first stacked items at lower values and then shifting these to higher values as more items are stacked. This strategy appears different from that in Figure 1, which notably was trained to a lower error value. The internal queue representation is less clear; the hexagonal dot pattern may be an artifact of optimization or critical to its encoding. Both enqueue and push had the same convolutional architecture. Right: Internal representations of natural numbers from 0 (top) to 19 (bottom). Natural numbers are internally represented as a vector of 10 elements. Number representations on the left are found by repeatedly applying the successor function, e.g. (succ(zero), succ(succ(zero)), ...). Numbers on the right are found by encoding machine integers into this internal representation.
Given the same architecture, the system learned different representations depending on the axioms and optimization. The stack representation learned in Figure 1 differs from that in Figure 3, indicating that there is not a unique solution to the problem, and different initialization strategies will yield different results. The queue internal representation is also different to them both, and the encoding is less clear. The queue and stack representations could have been the same (with only the interface functions push, pop, enqueue and dequeue taking different form).
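For the natural-number case, a relaxation in the same spirit can be sketched as follows. This is our own illustrative construction, not the paper's axiomatization; the margin-based distinctness term is one plausible way to relax the injectivity of succ, and only the 10-element representation size is taken from the text:
import torch
import torch.nn as nn

dim, n_max, margin = 10, 20, 1.0
zero = nn.Parameter(torch.randn(dim))              # learned constant
succ = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())

def peano_distinctness_loss():
    reps = [zero]
    for _ in range(n_max - 1):
        reps.append(succ(reps[-1]))                # succ^k(zero)
    loss = torch.zeros(())
    for a in range(n_max):
        for b in range(a + 1, n_max):
            d = ((reps[a] - reps[b]) ** 2).sum()
            loss = loss + torch.relu(margin - d)   # keep numbers distinct
    return loss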
As shown in Figure 2, data structures exhibit some generalization beyond the data distributions on which they are trained. In this case, a stack trained to store three items is able to store four with some error, but degrades rapidly beyond that. Of course we cannot expect a finite-capacity representation to store an unbounded number of items; lack of generalization is the cost of having optimized performance on the distribution of interest.
Our contribution builds upon the foundations of distribution-sensitive data structures (Bose et al., 2013), but departs from conventional work on distribution-sensitive data structures in that: (i) we synthesize data structures automatically from specification, and (ii) the distributions of interest are complex data distributions, which prevents closed-form solutions as in the optimum binary tree.
Our approach to learning representation can be viewed as a form of data-type synthesis from specification. From the very introduction of abstract data types, verification that a given implementation satisfies its specification was a motivating concern (Guttag et al., 1978; Guttag, 1978; Spitzen & Wegbreit, 1975). Modern forms of function synthesis (Solar-Lezama, 2009; Polikarpova & Solar-Lezama, 2016) use verification as an oracle to assist with synthesis. Our approach is similar in a broad sense, in that derivatives of a loss function derived from relaxing the specification guide the optimization through the parameterized function spaces.
Probabilistic assertions appear in first-order lifting (Poole, 2003), and Sampson et al. (2014) introduce probabilistic assertions over programs; in their setting the implementation of the data type is a program. The main difference is that we synthesize the data type from the probabilistic assertion. Sankaranarayanan (2014) seeks upper and lower bounds for the probability of the assertion for programs which operate on uncertain data.
Recent work in deep learning has sought to embed discrete data structures into continuous form. Examples are the push-down automaton (Sun et al., 1993), networks containing stacks (Grefenstette et al., 2015), and memory networks (Sukhbaatar et al., 2015). Our approach can be used to synthesize an arbitrary data structure purely from its specification, but is parameterized by the neural network structure. This permits it more generality, with a loss of efficiency.
"}, {"section_index": "7", "section_name": "8 DISCUSSION", "section_text": "In this contribution we presented a model of mental representations as distribution-sensitive data structures, and a method which employs neural networks (or any parameterized function) to synthesize concrete data types from a relaxed specification. We demonstrated this on a number of examples, and visualized the results from the stack and queue.
One of the important properties of conventional data structures is that they compose; they can be combined to form more complex data structures. In this paper we explored a simple form of parametric composition by synthesizing containers of numbers. This extends naturally to containers of containers, e.g. sets of sets, or sets of sets of numbers. Future work is to extend this to richer forms of composition. In conventional programming languages, trees and sets are often made by composing arrays, which are indexed with numbers. This kind of composition is fundamental to building complex software from simple parts.
In this work we learned representations from axioms. Humans, in contrast, learn representations mostly from experience in the world.
One rich area of future work is to extend data-structure learning to the unsupervised setting, such that for example an agent operating in the real world would learn geometric data structures purely from observation.
Various forms of machine learning and inference learn representations of data. Our approach bears resemblance to the auto-encoder (Bengio, 2009), which exploits statistics of a data distribution to learn a compressed representation as a hidden layer of a neural network. As in our approach, an auto-encoder is distribution-sensitive by the constraints of the architecture and the training procedure (the hidden layer is of smaller capacity than the data, which forces the exploitation of regularities). However, an auto-encoder permits just two operations: encode and decode, and has no explicit notion of compositionality.
A step closer to our approach than the auto-encoder are distributed representations of words as developed in (Mikolov et al., 2013). These representations have a form of compositionality such that vector arithmetic on the representations results in plausible combinations (Air + Canada = AirCanada).
"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Yoshua Bengio. Learning Deep Architectures for AI, volume 2. 2009. doi: 10.1561/2200000006.
Stanislas Dehaene. The Number Sense, volume 53. 1997. ISBN 9780199753871.
Jerry A. Fodor and Ernest Lepore. The Compositionality Papers. Oxford University Press, 2002.
John Guttag. Algebraic specification of abstract data types. Software Pioneers, 52:442-452, 1978. doi: 10.1007/BF00260922.
Daniel C. Hyde. Two systems of non-symbolic numerical cognition. Frontiers in Human Neuroscience, 5(November):1-8, 2011. doi: 10.3389/fnhum.2011.00150.
David Poole. First-order probabilistic inference. In IJCAI International Joint Conference on Artificial Intelligence, pp. 985-991, 2003. URL http://www.cs.ubc.ca/spider/poole/
Sriram Sankaranarayanan. Static analysis for probabilistic programs: Inferring whole program properties from finitely many paths. pp. 447-458, 2014. doi: 10.1145/2462156.2462179.
Prosenjit Bose, John Howat, and Pat Morin. Space-Efficient Data Structures, Streams, and Algorithms: Papers in Honor of J. Ian Munro on the Occasion of His 66th Birthday, chapter A History of Distribution-Sensitive Data Structures, pp. 133-149. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013. ISBN 978-3-642-40273-9. doi: 10.1007/978-3-642-40273-9_10. URL http://dx.doi.org/10.1007/978-3-642-40273-9_10
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. arXiv, 2013.
H.-C. Nuerk and K. Willmes. On the magnitude representations of two-digit numbers. Psychology Science, 47(1):52-72, 2005. URL http://www.pabst-publishers.de/psychology-science/1-2005/ps_1_2005_52-72.pdf
Nadia Polikarpova and Armando Solar-Lezama. Program synthesis from polymorphic refinement types. PLDI: Programming Languages Design and Implementation, 2016. URL http://arxiv.org/abs/1510.08419"}]
S1Jhfftgx
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "David M. Blei, Andrew Bagnell, and Andrew K. McCallum. Learning with scope, with applicatior to information extraction and classification. In Uncertainty in Artificial Intelligence (UAI). 2002\nJay Yoon Lee\nCarnegie Mellon University Pittsburgh, PA.\nIncreasingly, practitioners apply neural networks to complex problems in natu ral language processing (NLP), such as syntactic parsing, that have rich output structures. Many such applications require deterministic constraints on the output values; for example, requiring that the sequential outputs encode a valid tree. While hidden units might capture such properties, the network is not always able to learn them from the training data alone, and practitioners must then resort to post processing. In this paper, we present an inference method for neural networks that enforces deterministic constraints on outputs without performing post-processing or expensive discrete search over the feasible space. Instead, for each input, we nudge the continuous weights until the network's unconstrained inference proce dure generates an output that satisfies the constraints. We find that our method reduces the number of violating outputs by up to 81%, while improving accuracy\nZhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric P. Xing. Harnessing deep neura. networks with logical rules. In Association for Computational Linguistics (ACL), 2016.\nAnkit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victo Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks fo. natural language processing. Machine Learning, pp. 1378-1387, 2016."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Many neural networks have discrete-valued output units that correspond to an inference or predictior about an input. Often, a problem might involve multiple discrete outputs. Unlike multiclass classi fication, which associates a single discrete output with each input, so called structured predictior problems associate multiple outputs with each input. For example, in multi-label classification instead of predicting a single relevant class pertaining to the image or sentence, we must predict all relevant classes: the image contains a dog, a tree, and a sky. In sequence prediction problems, the discrete outputs might be a sequence of words or symbols that must form a coherent translation of a source language sentence (Cho et al.|2014) Sutskever et al.2014), description of an image (Vinyals et al. 2015b), answer to a question (Kumar et al.|2016), or a parse-tree for an input sentence (Vinyals et al. 2015a). Crucially, in structured prediction, the output values are interdependent. Even though neural networks usually predict outputs independently or sequentially (one output at a time), the hidden units allow them to successfully capture many dependencies.\nAndrew McCallum and Ben Wellner. Conditional models of identity uncertainty with applications tc noun coreference. In Neural Information Processing Systems (NIPS), 2005.\nRyan McDonald and Fernando Pereira. Learning of approximate dependency parsing algorithms. In EACL, 2006.\nLev Ratinov and Dan Roth. Design challenges and misconceptions in named entity recognition. I Computational Natural Language Learning (CoNNL). 2009\nIlya Sutskever, Oriol Vinyals, and Quoc V. Le. 
Sequence to sequence learning with neural networks. In Neural Information Processing Systems (NIPS), 2014.

As a motivating example, consider a sequence-to-sequence network that inputs a sentence and outputs a sequence of "shift-reduce" commands that describe the sentence's parse tree. Briefly, the shift-

Michael Wick, Jean-Baptiste Tristan
{michael.wick, jean.baptiste.tristan}@oracle.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Neural Information Processing Systems (NIPS), 2015.

Terry Koo, Alexander M Rush, Michael Collins, Tommi Jaakkola, and David Sontag. Dual decomposition for parsing with non-projective head automata. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 1288-1298. Association for Computational Linguistics, 2010.

Gordon Lyon. Syntax-directed least-errors analysis for context-free languages: A practical approach. Programming Languages, 17(1), January 1974.

Sometimes, the outputs must obey hard constraints. For example, in sequence labeling with BILOU encoding, a 'begin' marker B cannot immediately follow an 'inside' marker I (Ratinov & Roth, 2009). In clustering, pairwise binary decisions must obey transitivity so that they yield a valid equivalence class relation over the data points (McCallum & Wellner, 2005; Wick et al., 2006; 2008). In syntactic/dependency parsing, the output sequence must encode a valid parse tree (McDonald & Pereira, 2006; Vinyals et al., 2015a; Dyer et al., 2016). In formal language generation or neural compilers the output must belong to a context-free language or compile (Reed & de Freitas, 2016). In dual decomposition approaches to joint inference, copies of variables must satisfy equality constraints (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). Finally, in some ensemble methods the outputs of multiple conditionally independent classifiers must reach a consensus on the output class. Indeed, there are a tremendous number of problems that require hard constraints on the outputs. Unlike softer dependencies, violating a hard constraint is often unacceptable because the output of the network would not "type-check", causing problems for downstream components. Unfortunately, in practice, networks are not always able to exactly learn constraints from the training data alone.

Michael Wick, Aron Culotta, and Andrew McCallum. Learning field compatibilities to extract database records from unstructured text. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP '06, pp. 603-611, Stroudsburg, PA, USA, 2006. Association for Computational Linguistics. ISBN 1-932432-73-6.

reduce commands control a parsing algorithm by indicating how and when to use its stack. Each command controls whether to shift (s) a token onto the stack, reduce (r) the top of the stack into a parent tree node, or push (!) the current reduction back onto the stack.

To be successful, the network must generate commands that imply a valid tree over the entire input sentence. However, the decoder outputs just a single command at a time, producing some outputs that are not globally-consistent, valid shift-reduce programs. Indeed, the output may not have enough shifts to include every input token in the tree or may attempt to reduce when the stack is empty. For example, the following input sentence " So it 's a very mixed bag .
\" comprises ten space-delimited. tokens (the quotations are part of the input), but our unconstrained sequence-to-sequence network. outputs an invalid sequence with only nine shifts ssr!sr!ssssrrr!rr!ssrrrrrr!. We must introduce another shi ft so the last token is pushed onto the stack and issue another reduce so it is inserted into the tree.\nVe could attempt to fix the output with post-processing, but where is the right place to inse hese commands in the sequence? There are 406 = choose(29, 2) candidate locations. Furthe omplicating our post-processing dilemma is the fact that the output contains several other error hat are seemingly unrelated to the constraint. Instead, we could attempt to fix the problem with nore sophisticated decoder, but this is difficult because the decoder outputs a single character at eac ime-step and our constraints are global, limiting corrections to the end of the sequence when it is to ate to rectify an earlier decision. A beam search is less myopic, but in practice most of the network utput mass is peaked on the best output token, resulting in little improvement.\nIn this paper, we propose an inference method for neural networks that enforces output constraint.. without employing combinatorial discrete search. The idea is to modify some (or all) of the weight. for each instance at test-time, iteratively nudging them, until the network's efficient unconstrainec. inference procedure produces a valid output. We achieve this by expressing the hard constraints a an optimization problem over the continuous weights and employ back-propagation to change ther. Prima facie, back-propagation is doomed because the constraint loss is necessarily a function of th. argmax that produced the discrete values. However, we circumvent this problem by optimizing ove. the energy of the violating outputs instead. Since the weights directly determine the output throug. the energy, we are able to manipulate the unconstrained inference procedure to produce the desire. result. Much like scoped-learning, the algorithm customizes the weights for each example at test-tim. (Blei et al.|2002), but does so in a way to satisfy the constraints.."}, {"section_index": "3", "section_name": "2 BACKGROUND", "section_text": "Consider a neural network that generates a variable length output vector y = {yi}1 from a variable length input vector x = {x}mx. For example, in image classification, the input vector encodes fixed multi-dimensional tensor of pixel intensities and the output vector comprises just a single element corresponding to the discrete class label. In sequence-to-sequence, the input might be a variable length vector of French tokens, and the output would be a variable length vector of its English translation. It is sometimes convenient to think of the network as a function from input to output\nMichael Wick, Khashayar Rohanimanesh, Andrew McCallum, and AnHai Doan. A discriminative approach to ontology alignment. In In proceedings of the 14th NTII WS at the conference for Very Large Databases (VLDB), 2008.\nWhen applied to the above example, our method removes enough energy mass from the invalid outpu space in only twelve steps, allowing unconstrained decoding to produce a valid output sequence:.\nInterestingly, the network generates an additional s command at the beginning of the sequence while also producing a cascade of error correction in later time steps: the new output now satisfies the constraints and is a perfectly correct parse. 
Of course, enforcing constraints does not always lead to an improvement in accuracy, but we find that often it does in practice, especially for a well-trained network. We find that our method is able to completely satisfy constraints in up to 81% of the outputs.

$$f(x; W) \mapsto y$$

However, for the purpose of exposition, we separate the neural network into a real-valued model (negative energy function) that scores the compatibility of the outputs (given the weights and input) and an inference procedure that searches for high scoring outputs.

Then, inference is the problem of finding the values of the outputs y that maximize the negative energy given fixed inputs x and weights W. Thus, we can rewrite the neural network as the function:

The purpose of separating the model from the inference procedure is so we can later formalize our optimization problem. We emphasize that this formulation is consistent with existing neural networks. Indeed, inference in feed-forward networks is a single feed-forward pass from inputs to outputs. When the outputs only depend on each other through hidden states that only depend on earlier layers of the network, feed-forward inference is exact in the sense that it finds the optimum of Equation 3. For recurrent neural networks (RNNs), each output depends on hidden states that are functions of previous output values. However, we can still think of the usual procedure that produces the highest scoring output at each time step as a local greedy approximation to global inference; of course, the procedure can optionally be improved with a beam.

A major advantage of neural networks is that once trained, inference is extremely efficient. However, constraints can render inference intractable due to discrete search. Our goal is to take advantage of the fact that unconstrained inference is inexpensive and design a constrained inference algorithm that exploits such a procedure as a black box. Our method iteratively adjusts the weights for each test-time input, concentrating the probability mass on the feasible region so that unconstrained inference becomes increasingly likely to generate an output that satisfies the constraints.

Consider the following constrained inference problem for neural networks:

$$\max_y \; \Psi(x, y, W) \quad \text{s.t.} \quad y \in L_x$$

With this in mind, let $g(y, L) \mapsto r$ for $r \in \mathbb{R}^+$ be a function that measures a loss between a sentence y and a grammar L such that $g(y, L) = 0$ if and only if there are no grammatical errors in y. That is, $g(y, L) = 0$ for the feasible region and is strictly positive everywhere else. For a large class of CFLs, g could be the least errors count function (Lyon, 1974) or a weighted version thereof. We could then express CFL membership as an equality constraint and minimize the Lagrangian.

For the model, let $y_i$ be a discrete output from an output unit and let $\psi(y_i; x, W)$ be its corresponding real-valued log-space activation score (e.g., the log of the softmax for locally normalized models or simply a linear activation value for globally normalized models). Define the negative energy $\Psi$ over a collection of output values y as an exponentiated sum of log-space activation scores:

$$\Psi(y; x, W) = \exp\Big(\sum_i \psi(y_i; x, W)\Big)$$

$$f(x; W) \mapsto \operatorname*{argmax}_y \; \Psi(y; x, W)$$

In this work, we focus on constraints that require the outputs to belong to an input-dependent context-free language $L_x$ (CFL). The idea is to treat the output space of the neural network as the terminal symbols, and devise the appropriate production rules and non-terminals to express constraints on them.
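As a concrete illustration of this model/inference separation, the sketch below (our notation, not the paper's code) assumes the per-position log-space scores have already been computed as arrays; for a recurrent decoder the scores at step i would themselves depend on the previously decoded symbols, so scoring and argmax would interleave:

    import numpy as np

    def negative_energy(log_scores, y):
        """Psi(y; x, W): exponentiated sum of log-space activation scores
        psi(y_i; x, W). log_scores[i] holds the scores of all candidate
        values for output position i (given x and W)."""
        return np.exp(sum(log_scores[i][y_i] for i, y_i in enumerate(y)))

    def greedy_decode(log_scores):
        """Unconstrained inference: pick the highest-scoring value at each
        position. This is exact for feed-forward outputs and the usual
        local greedy approximation to argmax_y Psi for recurrent decoders."""
        return [int(np.argmax(row)) for row in log_scores]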
An advantage of employing CFLs over other formalisms such as first-order logic (FOL) is that CFLs are intuitive for expressing constraints on the outputs, especially for language models and sequence-to-sequence networks. For example, when modeling Python or Java code, it is easy to express many of the desired programming language's constraints using a CFL, but cumbersome in FOL. Indeed, CFLs are an expressive class of languages.

To motivate our algorithm, we begin with the ideal optimization problem and argue that unlike for linear models with local constraints, the resulting Lagrangian is not well suited for globally constrained inference in neural networks. We ultimately settle on an alternative objective function that reasonably models our constrained inference problem. Although our algorithm lacks the theoretical guarantees enjoyed by classic relaxation algorithms, we nevertheless find it works well in practice.

Naively enforcing the constraint requires combinatorial discrete search, which is intractable in general. Instead, we prefer a smooth optimization problem with meaningful gradients to guide the search:

$$\min_\lambda \max_y \; \Psi(x, y, W) + \lambda \, g(y, L_x)$$

However, this dual optimization problem has a major flaw. Our constraints are global and do not necessarily factorize over the individual outputs. Consequently, there is just a single dual variable $\lambda$. Optimizing $\lambda$ does little more than eliminate a single contour of output configurations at a time, resulting in a brute-force trial and error search.

Instead, observe that the network's weights control the negative energy of the output configurations. By properly adjusting the weights, we can affect the outcome of inference by removing mass from invalid outputs. The weights are likely to generalize much better than the single dual variable because in most neural networks, the weights are tied across space (e.g., CNNs) or time (e.g., RNNs). As a result, lowering the negative energy for a single invalid output has the effect of lowering the negative energy for an entire family of invalid outputs, enabling faster search. With this in mind, we introduce an independent copy $W_\lambda$ of the network's weights W and minimize with respect to these "dual weights" instead of the dual variable. This is powerful because we have effectively introduced an exponential number of "dual variables" (via the energy, which scores each output) that we can easily control via the weights; although similar, the new optimization is no longer equivalent to the original:

$$\min_{W_\lambda} \max_y \; \Psi(x, y, W) + \Psi(x, y, W_\lambda) \, g(y, L_x)$$

While a step in the right direction, the objective still requires combinatorial search because (1) the maximization involves two non-linear neural networks and (2) a greedy decoding algorithm is unable to cope with the global loss g() because the constraints do not factorize over the individual outputs. In contrast, the functions involved in classic Lagrangian relaxation methods for NLP have multipliers for each output variable that can be combined with linear models to form a single unified decoding problem for which efficient inference exists (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). Since our non-linear functions and global constraints do not afford us the same ability, we must modify the optimization problem a final time so that we can employ the network's efficient inference procedure as a black-box. In particular, we (1) remove the negative-energy term that involves the original weights W and compensate with a regularizer that attempts to keep the dual weights $W_\lambda$ as close to these weights as possible and (2) maximize exclusively over the network parameterized by $W_\lambda$. The result is a different optimization problem on which our algorithm is based:

$$\min_{W_\lambda} \; \Psi(x, y, W_\lambda) \, g(y, L_x) + \alpha \lVert W - W_\lambda \rVert^2 \quad \text{where} \quad y = \operatorname*{argmax}_y \Psi(x, y, W_\lambda)$$

Informally, our algorithm alternates the maximization (by running efficient unconstrained inference) and minimization (by performing SGD) until it produces a feasible output or it exceeds a maximum number of iterations. For each test example, we re-initialize the dual weights to the trained weights to ensure the network does not deviate too far from the trained network. More precisely, see Algorithm 1.

Algorithm 1 Constrained inference for neural nets
  Inputs: test instance x, input-specific CFL $L_x$, pretrained weights W
  $W_\lambda \leftarrow W$    # reset instance-specific weights
  while not converged do
    $y \leftarrow f(x; W_\lambda)$    # perform inference using weights $W_\lambda$
    $W_\lambda \leftarrow W_\lambda - \eta \nabla_{W_\lambda}\big(\Psi(x, y, W_\lambda)\, g(y, L_x) + \alpha \lVert W - W_\lambda \rVert^2\big)$    # update instance-specific weights with SGD or a variant thereof
  end while

Consider the structured prediction problem of syntactic parsing in which the goal is to input a sentence comprising a sequence of tokens and output a tree describing the grammatical parse of the sentence. One way to model the problem with neural networks is to linearize the representation of the parse tree and then employ the familiar sequence-to-sequence model (Vinyals et al., 2015a).

Let us suppose we linearize the tree using a sequence of shift (s) and reduce (r, r!) commands that control an implicit shift-reduce parser. Intuitively, these commands describe the exact instructions for converting the input sentence into a complete parse tree: the interpretation of the symbol s is that we shift an input token onto the stack, and the interpretation of the symbol r is that we start (or continue) reducing (popping) the top elements of the stack; the interpretation of a third symbol ! is that we stop reducing and push the reduced result back onto the stack. Thus, given an input sentence and an output sequence of shift-reduce commands, we can deterministically recover the tree by simulating a shift-reduce parser. For example, the sequence ssrr!ssr!rr!rr! encodes a type-free version of the parse tree (S (NP the ball) (VP is (NP red))) for the input sentence "the ball is red". It is easy to recover the tree structure from the input sentence and the output commands by simulating a shift-reduce parser, performing one command at a time as prescribed by the classic algorithm.

Note that for output sequences to form a valid tree over the input, the sequence must satisfy a number of constraints. First, the number of shifts must equal the number of input tokens $m_x$, otherwise either the tree would not cover the entire input sentence or the tree would contain spurious terminal symbols. Second, the parser cannot issue a reduce command if there are no items left on the stack. Third, the number of reduces must be sufficient to leave just a single item, the root node, on the stack.

We can express most of these constraints with a CFL:

$$L = \begin{cases} G \to sRr! & \text{(Rule 1)} \\ R \to sRr & \text{(Rule 2)} \\ R \to Rr! & \text{(Rule 3)} \\ R \to RR & \text{(Rule 4)} \\ R \to \epsilon & \text{(Rule 5)} \end{cases}$$

Intuitively, Rule 1 states that a valid shift-reduce command set must begin with a shift (since the stack is initially empty, there is nothing to reduce) and end with a reduce that places the final result on the stack.
Rule 2 states that if we do a shift, then we need to reduce the shifted token at some point in the future. Rule 3 states that if we do not shift then we are allowed to reduce only if we also push the result on the stack. Rule 4 allows for multiple subtrees. Rule 5 is the base case.

Note, however, that this grammar is for a general-purpose shift-reduce language, but we need to constrain the number of shifts to equal the number of input tokens $m_x$. Since the constraint is a bit verbose to express with production rules, we can instead write the regular language $(s(r!)^*)^{m_x}(r!)^*$, where $m_x$ is the number of elements in x, and intersect it with our CFL:

$$L_x = L \cap (s(r!)^*)^{m_x}(r!)^*$$

Rather than relying on a general-purpose algorithm to compute $g(y, L)$ that measures the number of grammatical errors, we instead implement it specifically for our language. Let $\mathrm{ct}_1^{i}(b(j))$ be the function that counts the number of times proposition $b(j)$ is true over positions $j \le i$. Now, define the following loss:

$$g(y, L_x) = \big(m_x - \mathrm{ct}(y_i{=}s)\big)^2 + \sum_i \big(\mathrm{ct}_1^{i}(y_j{=}r) - \mathrm{ct}_1^{i}(y_j{\in}\{s,!\})\big) + \big(\mathrm{ct}(y_i{=}r) - \mathrm{ct}(y_i{\in}\{s,!\})\big)$$

The first term measures the amount of violation due to the regular language and the second and third terms measure the amount of violation according to the CFL."}, {"section_index": "4", "section_name": "5 RELATED WORK", "section_text": "There has been recent work in applying neural networks to structured prediction problems. For example, the recent structured prediction energy networks (SPENs) combine graphical models and neural networks via an energy function defined over the output variables (Belanger & McCallum, 2016). SPENs focus on soft constraints (via the energy function) and perform inference by relaxing the binary output variables to be continuous and then backpropagating into them. In contrast, our method focuses on hard constraints and we backpropagate into the weights rather than into the outputs directly. We could combine our method with SPENs to handle soft constraints; for example, by back-propagating the output energy into the weights instead of the relaxed outputs themselves.

There has been recent work on applying neural networks to parsing problems that require the ability to handle hard constraints. For example, by employing a sequence-to-sequence network (Vinyals et al., 2015a) or a custom network designed for shift-reduce parsing (Dyer et al., 2016). The former requires the output to form a valid parse tree and hence they employ post-processing to ensure this property. The latter satisfies constraints as part of the decoding process by sampling over a combinatorial space. Our approach does not rely on post-processing or discrete search.

Another intriguing approach is to distill the hard constraints into the weights at training time using a teacher network (Hu et al., 2016). The method is appealing because it does not require constrained inference or combinatorial search. However, the method must achieve a difficult balance between the loss due to the training data and the loss due to the constraint violations. Further, it would crucially rely on the network's ability to generalize the constraints learned on the training data to the testing data.

Finally, our method highly resembles dual decomposition and more generally Lagrangian relaxation for structured prediction (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). In such techniques, it is assumed that a computationally efficient inference algorithm can maximize over a superset of the feasible region (indeed this assumption parallels our exploitation of the fact that unconstrained inference in the neural network is efficient). Then, the method employs gradient descent to gradually concentrate this superset onto the feasible region until the constraints are satisfied. However, for computational reasons, these techniques assume that the constraints factorize over the output and that the functions are linear so that they can be combined into a single model. In contrast, we have a single dual variable so we instead minimize with respect to the weights, which generalize better over the output. Further, we are unable to combine the dual into a single model over which we can do inference because the network is highly non-linear.

task            inference       weights changed ($W_\lambda$)   conversion rate   accuracy
azbz            unconstrained   none                            0.0%              75.6%
                constrained     all                             65.2%             82.4%
                constrained     output only                     20.9%             77.8%
                constrained     encoder only                    58.2%             82.5%
                constrained     decoder only                    57.4%             82.3%
sr no types     unconstrained   none                            0.0%              84.0%
                constrained     all                             81.8%             84.4%
sr with types   unconstrained   none                            0.0%              87.8%
                constrained     all                             79.2%             88.3%
                constrained     output only                     5.0%              88.1%
                constrained     decoder (top layer)             36.2%             88.2%
                constrained     decoder (all layers)            54.7%             88.3%
                constrained     decoder (top) + attention       38.0%             88.1%
                constrained     decoder (all) + attention       56.5%             88.2%

Table 1: Conversion rates on all three tasks with 100 steps of SGD. Note that satisfying the constraints has no negative effect on accuracy and often has a positive effect.

bzazbzazbzazazbzbzbzbzbz → zbaaazbaaazbaaaaaazbzbzbzbzb

iteration   output                            loss    accuracy
0           zbaaazbaaazbaaaaaazbzbzbaaazbzb   0.260   75.0
39          zbaaazbaaazbaaaaaazbzbzbaaazbzb   0.259   75.0
40          zbaaazbaaazbaaaaaazbzbzbaaazb     0.250   80.0
72          zbaaazbaaazbaaaaaazbzbzbaaazb     0.249   80.0
73          zbaaazbaaazbaaaaaazbzbzbzbzb      0.0     100.0

Table 2: An example for which enforcing the constraints improves accuracy. Red indicates errors. The output changes more than once before the constraints are finally enforced. Greedy decoding with constraints might correct this example because the spurious a's are at the end of the sequence.

In this section we empirically evaluate our constrained inference procedure on two sequence-to-sequence tasks. The first is a transduction task between two simple languages, which we describe next. The second is the sequence-to-sequence shift-reduce parsing task described in Section 4.

azazbzazbzbzazbzbzbzbzbz → aaaaaazbaaazbzbaaazbzbzbzbzb

iteration   output                           loss     accuracy
0           aaaaaazbaaazbaaazbzbzbzbaaazb    0.2472   66.7
1           aaaaaazbaaazbaaazbzbzbzbaaazb    0.2467   66.7
2           aaaaaazbaaazbaaazbzbzbzbaaazb    0.2462   66.7
3           aaaaaazbaaazbzbaaazbzbzbzbzb     0.0      100.0

Table 3: An example for which enforcing the constraints improves accuracy. Red indicates errors. Note that greedy decoding with constraints would not fix the errors in the middle since errors are made before constraints are violated. In contrast, the proposed method takes the constraints into account in a global manner, allowing earlier errors to be corrected by future constraint violations.

bzbzbzbzazbzbzazazazazbz → zbzbzbzbaaazbzbaaaaaaaaaaaazb

iteration   output                           loss     accuracy
0           zbzbzbzbaaazbaaaaaaaaaaaazbaaa   0.2954   74.2
4           zbzbzbzbzbaaaaaaaaazbzbaaaaaa    0.0      60.0

Table 4: An example for which enforcing the constraints degrades accuracy. Errors in red.

A transducer $T : L_0 \to L_1$ is a function from a source language to a target language. For the purpose of the experiments T is known and our goal is to learn it from data. We choose a transducer similar to those studied in recent work (Grefenstette et al., 2015).
The source language $L_0$ is $(az|bz)^*$ and the target language $L_1$ is $(aaa|zb)^*$. The transducer is defined to map az to aaa and bz to zb. For example, $T(bzazbz) \mapsto zbaaazb$. The training set comprises 1934 sequences of length 2-20 and the test set contains sentences of lengths 21-24. As is common practice, we employ shorter sentences for training to require generalization to longer sentences at test time.

We employ a thirty-two hidden unit, single-layered, attentionless, sequence-to-sequence long short-term memory (LSTM) in which the decoder LSTM inputs the final encoder state at each time-step. The encoder and decoder LSTMs each have their own set of weights. We train the network for 1000 epochs using RMSProp to maximize the likelihood of the output (decoder) sequences in the training set. The network achieves perfect train accuracy while learning the rules of the output grammar nearly perfectly, even on the test-set. However, despite learning the train-set perfectly, the network fails to learn the input-specific constraint that the number of a's in the output should be three times the number in the input. We implement a loss for this constraint and evaluate how well our method enforces it at test-time:

$$g(y, L_x) = \Big(\frac{\sum_i \mathbf{1}(y_i = a) - 3 \sum_j \mathbf{1}(x_j = a)}{n + m}\Big)^2$$

Here n + m, the combined input/output length, normalizes the loss between 0 and 1. For constrained inference we run Algorithm 1 and employ vanilla stochastic gradient descent with a learning rate of 0.05 and no weight decay. We cap the number of iterations at a maximum of 100.

The top section of Table 1 contains the results for this azbz task. We use the term converted to refer to a sentence that initially had a constraint-violation, but was later fixed by the constrained-inference procedure. The conversion rate is the percentage of such sentences that we convert: on this task, up to two-thirds. We experiment with which subset of the weights is best for satisfying the constraints, finding that it is best to modify them all. We also report accuracy to study an initial concern. Specifically, we had to omit the negative energy of the original weights W from our optimization problem, Equation 7, potentially allowing the network to find a set of dual weights $W_\lambda$ that happen to satisfy the constraints, but that have poor performance. However, we found this not to be the case. In fact, we report the token-wise accuracy over the examples for which the unconstrained neural network violated constraints and find that on the contrary, accuracy improves. Further, we find the regularizer is unnecessary since the initialization $W_\lambda = W$ ensures the network never drifts too far.

In order to gain a better understanding of the algorithm's behavior, we provide data-cases that highlight both success and failure (Tables 2, 3, 4). The title of these tables is the input and the desired ground-truth output. The rows of the table show the network's output at each iteration (as indicated). The loss column is the constraint loss weighted by the output's energy, $\Psi(x, y, W_\lambda)\, g(y, L_x)$, and the final column is the token-wise accuracy between the output and the ground truth.

iteration   output                          loss     accuracy
0           ssr!sr!ssssrrr!rr!ssrrrrrr!     0.0857   33.3%
11          ssr!sr!ssssrrr!rr!ssrrrrrr!     0.0855   33.3%
12          sssr!ssssrr!srrr!rr!ssrrrrrr!   0.0000   100.0%

Table 5: A shift-reduce example for which the method successfully enforces constraints. The initial output has only nine shifts, but there are ten tokens in the input.
Enforcing the constraint not only corrects the number of shifts to ten, but changes the implied tree structure to the correct tree.

Table 2 contains an example for which our method successfully satisfies the constraints, resulting in perfect accuracy. However, because the constraint violation appears at the end of the string, a greedy decoder that opportunistically enforces constraints on the fly could potentially correct this error. In Table 3 we show a more interesting example for which such a greedy decoder would not be as successful. In particular, the unconstrained network outputs the final aaa too early in the sequence, but the constraint that controls the number of a's in the output is not violated until the end of the sequence. In contrast, our method takes the constraint into account globally, allowing the network to not only rectify the constraint, but to achieve perfect accuracy on the sentence (in just four gradient updates). Finally, in Table 4, we show an example for which enforcing the constraints hurts the accuracy. The updates cause the network to erroneously change outputs that were actually correct. This can happen if (a) the underlying network is sometimes inaccurate in its output or confidence/probabilities thereon or (b) the gradient steps are too large, causing the network to completely leapfrog over the correct solution in a single step. The latter can be avoided by normalizing the constraint loss so it does not grow unbounded with the number of outputs and by erring on the side of a smaller learning rate."}, {"section_index": "5", "section_name": "7 CONCLUSION", "section_text": "We presented an algorithm for satisfying constraints in neural networks that avoids combinatorial search, but employs the network's efficient unconstrained procedure as a black box. We evaluated the algorithm on two sequence-to-sequence tasks, a toy transducer problem and a real-world shift-reduce parsing problem. We found that the method was able to completely rectify up to 80% of violated outputs when capping the number of iterations at 100. Often, enforcing constraints caused the accuracy to improve, dispelling initial concerns that adjusting the weights at test-time would be treacherous. Our method currently lacks the same theoretical guarantees as classic Lagrangian relaxation methods, so in future work we want to focus on supplemental theory and additional objective functions. We also hope to extend the work to handle soft constraints, for example, as imposed by an external language model.

We repeat the same experiment (middle section of Table 1), but on the shift-reduce parsing task described in Section 4. We convert the Wall Street Journal portion of the Penn Tree Bank (PTB) into shift-reduce commands and randomly split into 30k train and 9.2k test examples. We increase the number of hidden units to sixty-four to accommodate the larger input space (50k words) and employ Equation 10 (normalized by sequence length) for the constraint loss. We measure the sequence-aligned token accuracy. Otherwise, we employ the exact same experimental parameters as the azbz task, both for training the LSTM and for our algorithm. We find that our algorithm performs even better on the real-world task, converting over 80% of the violated outputs. We again find that our procedure has no negative impact on accuracy, which in fact improves, but not as substantially as for the azbz task.
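Putting the pieces together, the constrained inference loop of Algorithm 1 can be rendered as the following minimal PyTorch sketch under stated assumptions: greedy_decode and log_score are hypothetical interfaces (our names) for unconstrained inference and for the summed log-activation score log Ψ of a decoded output, and g_azbz is one plausible form of the azbz constraint loss described above:

    import copy
    import torch

    def constrained_inference(model, x, g, lr=0.05, max_iters=100):
        """Sketch of Algorithm 1: nudge a fresh copy of the weights until
        unconstrained decoding satisfies the constraints."""
        dual = copy.deepcopy(model)                     # W_lambda <- W
        opt = torch.optim.SGD(dual.parameters(), lr=lr)
        y = dual.greedy_decode(x)                       # unconstrained inference
        for _ in range(max_iters):
            violation = g(y)
            if violation == 0:                          # constraints satisfied
                break
            # Loss = Psi(x, y, W_lambda) * g(y, L_x); the decoded y and its
            # violation are constants here, so gradients flow only through Psi.
            loss = torch.exp(dual.log_score(x, y)) * violation
            opt.zero_grad()
            loss.backward()
            opt.step()                                  # nudge W_lambda
            y = dual.greedy_decode(x)                   # re-decode
        return y

    def g_azbz(x_str, y_str):
        """One plausible reading of the azbz constraint loss: the output
        should contain three times as many a's as the input, normalized
        by the combined input/output length."""
        diff = y_str.count('a') - 3 * x_str.count('a')
        return (diff / (len(x_str) + len(y_str))) ** 2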
Table 5 contains a successful example that we had previously highlighted in Section 1. The algorithm satisfies the constraints, and also corrects the remaining output errors.

Finally, we conduct a version of the shift-reduce experiment that includes the phrase types (e.g., noun-phrase (NP)). To accommodate the larger output space (the output alphabet size increases to 479), we employ a larger network with 128 hidden units, attention and three layers. Note that even this more sophisticated network fails to learn the constraints from data, and adding layers does not help. The larger network affords us the opportunity to experiment with modifying different subsets of weights for enforcing constraints. As seen in the last section of Table 1, modifying all the weights works best, converting 79.2% of the violating sentences; again without negatively affecting accuracy."}]
BJ46w6Ule
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Marc Goessling\nEran Borenstein and Shimon Ullman. Combined top-down/bottom-up segmentation. IEEE Trans actions on Pattern Analysis and Machine Intelligence. 30(12):2109-2125. 2008\nArthur P Dempster, Nan M Laird, and Donald B Rubin. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society. Series B (methodological), pp. 1-38, 1977.\nMichael Elad. Sparse and redundant representations. Springer, 2010."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "We consider the task of learning a compact binary representation (e.g. Goessling & Amit, 2015). That means we are seeking a parsimonious set of experts, which can explain a given collection o. multivariate data points. In contrast to most existing approaches the emphasis here is on finding. experts that are individually meaningful and that have disjoint responsibilities. Ideally, each exper. explains only one factor of variation in the data and for each factor of variation there is exactly on expert that focuses on it.\nJohn A Hartigan. Partition models. Communications in statistics-Theory and methods, 19(8):2745 2756, 1990.\nGeoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neura computation, 14(8):1771-1800, 2002.\nWe start by describing a simple model family, which forms the basis of our work. A partition model (Hartigan, 1990) makes use of a manually specified partitioning of the D variables into subsets\nL {1,...,D}= Se l=1\nFor each subset of variables x(Se) = (x(d))des, there exists a separate model Pe. It is then typicall assumed that variables in different subsets are conditionally independent, i.e.,\nGeoffrey McLachlan and David Peel. Finite mixture models. John Wiley & Sons, 2004\nL P(x|h)=II Pe(x(Se)[h(l)) l=1\nAndrew Ng. Sparse autoencoder. CS294A Lecture Notes, 72:1-19, 2011.\nThe model is completed by specifying a prior distribution P(h) for the latent state h. One advantag. of partition models is that estimating Pe from observations is straightforward, while learning exper. models in general requires computationally involved procedures (Bengio et al., 2013). However, i. order to be able to define a satisfactory partitioning of the variables some prior knowledge aboi. the dependence structure is needed. For image data a common choice is to use a regular grid tha. divides the image into patches (e.g. Pal et al, 2002). In general, a good partitioning is characterize. by providing weakly dependent subsets of variables so that the conditional independence assumptio. () is reasonable and the distribution of the latent variables is easy to model. Unfortunately, ofte. there simply is no single fixed partitioning that works well for the whole dataset because the st\nPascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In International Conference on Machine Learning, pp. 1096-1103, 2008.\nYoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798-1828 2013.\namit@galton.uchicago.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We present a new approach for learning compact and intuitive distributed rep resentations with binary encoding. 
Rather than summing up expert votes as in products of experts, we employ for each variable the opinion of the most reliable expert. Data points are hence explained through a partitioning of the variables into expert supports. The partitions are dynamically adapted based on which experts are active. During the learning phase we adopt a smoothed version of this model that uses separate mixtures for each data dimension. In our experiments we achieve accurate reconstructions of high-dimensional data points with at most a dozen experts.

Formally, the experts $P_k$, k = 1, ..., K, are probability distributions that depend on binary latent variables h(k). The latent state h specifies which experts are active and has to be inferred for each D-dimensional data point x. The active experts then define a probability distribution P. The goal of representation learning is to train experts such that the conditional likelihood P(x | h) of the data given the latent activations is maximized.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998."}, {"section_index": "3", "section_name": "6 DERIVATIVES", "section_text": "of variables, which are affected by different factors of variation, might overlap. This restricts the scenarios in which partition models are useful.

In this paper we extend partition models to allow for dynamically adapting partitionings. In Section 2 we introduce the model and present an appropriate learning procedure. Related work is discussed in Section 3. Special emphasis is given to the comparison with products of experts (Hinton, 2002). Experiments on binary and real-valued data are performed in Section 4. While it is important to explain high-dimensional data points through multiple experts, our work shows that it is possible to assign the responsibility for individual variables to a single expert (rather than having all active experts speak for every variable).

Our main proposal is to define for each expert $P_k$ its level of expertise $e_k \in \mathbb{R}_+^D$ for all variables. We can then dynamically partition the variables based on the active experts. Specifically, for each variable we employ the most reliable (active) expert:

$$P(x \mid h) = \prod_{d=1}^{D} P_{k^*(d)}\big(x(d)\big), \qquad k^*(d) = \operatorname*{argmax}_{k: h(k)=1} e_k(d)$$

Here, k*(d) denotes the expert with the highest level of expertise $e_k(d)$ among all experts k with h(k) = 1.

For binary data the log-likelihood of an observation x(d) under a composed probability $\mu$ is

$$f(\mu) = x \log \mu + (1 - x) \log(1 - \mu)$$

where, in the smoothed model, the composed probability is

$$\mu = \sum_k r_k \mu_k, \qquad r_k = \frac{e_k}{E}, \qquad E = \sum_{k'} e_{k'}$$

The first and second derivative of the log-likelihood with respect to the composed probability are

$$\frac{df}{d\mu} = \frac{x}{\mu} - \frac{1 - x}{1 - \mu} = \frac{x - \mu}{\mu(1 - \mu)}, \qquad \frac{d^2 f}{d\mu^2} = -\frac{x}{\mu^2} - \frac{1 - x}{(1 - \mu)^2}$$

We see that $d^2 f / d\mu^2 < 0$ for $\mu \in (0, 1)$, i.e., the log-likelihood is a strictly concave function of $\mu$. Since

$$\frac{d\mu}{d\mu_k} = r_k$$

the derivatives of the log-likelihood with respect to the expert probabilities are

$$\frac{df}{d\mu_k} = \frac{df}{d\mu} \frac{d\mu}{d\mu_k} = r_k \, \frac{x - \mu}{\mu(1 - \mu)}$$

The derivative of the composed probability with respect to the levels of expertise is

$$\frac{d\mu}{de_k} = \frac{\mu_k E - \sum_{k'} e_{k'} \mu_{k'}}{E^2} = \frac{\mu_k - \mu}{E}$$

and hence $\frac{df}{de_k} = \frac{df}{d\mu} \frac{d\mu}{de_k}$. For continuous data the Gaussian log-likelihood is

$$f(\mu, v) = -\frac{(x - \mu)^2}{2v} - \frac{1}{2} \log v$$

with composed mean and variance

$$\mu = \sum_k r_k \mu_k, \qquad v = \sum_k r_k (v_k + \mu_k^2) - \mu^2$$

The derivatives of the log-likelihood with respect to the composed mean and variance are

$$\frac{df}{d\mu} = \frac{x - \mu}{v}, \qquad \frac{df}{dv} = \frac{(x - \mu)^2}{2v^2} - \frac{1}{2v}$$

The derivatives of the composed mean and variance with respect to the levels of expertise are

$$\frac{d\mu}{de_k} = \frac{\mu_k - \mu}{E}, \qquad \frac{dv}{de_k} = \frac{v_k - v + (\mu_k - \mu)^2}{E}$$

and the chain rule yields

$$\frac{df}{de_k} = \frac{df}{d\mu} \frac{d\mu}{de_k} + \frac{df}{dv} \frac{dv}{de_k}$$"}, {"section_index": "4", "section_name": "2.1 INFERENCE", "section_text": "In the inference step we try to find for each data point $x_n$ the subset of experts $\{k : h_n(k) = 1\}$ that maximizes $P(x_n \mid h_n)$. To do this, we suggest to sequentially activate the expert that most improves the likelihood, until the likelihood cannot be improved anymore. This approach is called likelihood matching pursuit (Goessling & Amit, 2015). The greedy search works well for our model because we are working with a small set of experts and each expert focuses on a rather different structure in the data. Consequently, the posterior distribution on the latent variables given $x_n$ is often highly peaked at a state $h_n$ (note that for high-dimensional data the effect of the prior P(h) is typically negligible).

In contrast to traditional approaches, which combine multiple experts for individual variables, training the experts in a dynamic partition model is trivial. Indeed, the maximum-likelihood estimates are simply the empirical averages over all observations for which the expert was responsible. For example, the expert means can be estimated from training data $x_n$, n = 1, ..., N, as

$$\mu_k(d) = \frac{\sum_{n=1}^{N} \mathbf{1}\{k_n^*(d) = k\}\, x_n(d)}{\sum_{n=1}^{N} \mathbf{1}\{k_n^*(d) = k\}}$$"}, {"section_index": "5", "section_name": "2.2.1 EXPERTISE-WEIGHTED COMPOSITION", "section_text": "In order to compute the estimator above, the levels of expertise e have to be known. Since in this paper we are trying to train the experts as well as the associated levels of expertise, we consider a smoothing of the maximum-expertise composition to motivate our learning procedure. Rather than using the expert with the highest level of expertise, we form a mixture of the active experts, where the mixture weight is proportional to the level of expertise. Thus, the smoothed composition is

$$P(x \mid h) = \prod_{d=1}^{D} \sum_{k=1}^{K} r_k(d)\, P_k\big(x(d)\big), \qquad r_k(d) = \begin{cases} \dfrac{e_k(d)}{\sum_{k': h(k')=1} e_{k'}(d)} & \text{if } h(k) = 1 \\ 0 & \text{if } h(k) = 0 \end{cases}$$

In contrast to classical mixture models (e.g. McLachlan & Peel, 2004) we use different mixture weights for each dimension $d \in \{1, \dots, D\}$. The mixture weight $r_k(d)$ is the degree of responsibility of the k-th expert for the d-th dimension and depends on the latent state h. An expert with a medium level of expertise assumes full responsibility if no other reliable expert is present and takes on a low degree of responsibility if experts with a higher level of expertise are present.

By the law of total variance,

$$V[P] = E_{r}\big[V[P_k]\big] + V_{r}\big[E[P_k]\big]$$

the variance of a mixture is always larger than the smallest variance of its components. In other words, the precision of the smoothed model is maximized when all the mixture weight (individually for each dimension) is concentrated on the most precise expert. We can thus learn a dynamic partition model in an EM manner (Dempster et al., 1977) by interleaving inference steps with updates of the experts and levels of expertise in the smoothed model."}, {"section_index": "6", "section_name": "2.2.2 EXPERT UPDATE", "section_text": "The sequential inference procedure (from Section 2.1) provides for each data point $x_n$ the latent representation $h_n$. We denote the corresponding expert responsibilities (using the current estimates for the level of expertise) by $r_{nk}$. The smooth analog to the hard update equation is a responsibility-weighted average of the training samples:

$$\mu_k(d) = \frac{\sum_{n=1}^{N} r_{nk}(d)\, x_n(d) + \varepsilon \mu_0}{\sum_{n=1}^{N} r_{nk}(d) + \varepsilon}$$

For stability we added a term that shrinks the updated templates towards some target $\mu_0$ if the total responsibility of the expert is small. In our experiments we set $\mu_0$ to the average of all training examples. The update rule implies that the experts have local supports, in the sense that they are uninformative about variables for which they are not responsible.

For binary data the mean templates $\mu_k$ are all we need. Continuous data $x \in \mathbb{R}^D$ is modeled through Gaussians and hence we also have to specify the variance $v_k$ of the experts. We again use a responsibility-weighted average:

$$v_k(d) = \frac{\sum_{n=1}^{N} r_{nk}(d)\, \big(x_n(d) - \mu_k(d)\big)^2 + \varepsilon v_0}{\sum_{n=1}^{N} r_{nk}(d) + \varepsilon}$$

where $v_0$ is the empirical variance of all training samples.

For binary data, the log-likelihood of the smoothed model is a concave function of the expert opinion $\mu_k(d)$, see Section 6. We could therefore in principle perform an optimization for the experts' opinions using Newton's method. There are a few complications though. One problem is that the second derivative is proportional to the squared responsibility and hence close to 0 if the level of expertise is small. Consequently, template updates in regions with low expertise would be unstable. To deal with that we could add a penalty on the squared log-odds for example. Another problem is that the Newton steps may lead to probability estimates outside of [0, 1]. This can be dealt with by pulling the estimates back into the unit interval. Note that working on the log-odds scale is not possible because the log-likelihood of our model is not concave in the expert log-odds. Because of these complications we use the simple, fast and robust averaging heuristic above instead of Newton's method."}, {"section_index": "7", "section_name": "2.2.3 EXPERTISE UPDATE", "section_text": "We now turn to the updates of the levels of expertise. The log-likelihood of the smoothed model as a function of $e_k$ is rather complex. Using gradient descent is thus problematic because the derivatives with respect to $e_k$ can have very different scales, which makes it difficult to choose an appropriate learning rate and hence the convergence could be slow. However, exact optimization is not necessary because in the end only the order of the levels of expertise matters. Consequently, we propose to adjust $e_k(d)$ only based on the sign of the gradient. We simply multiply or divide the current value by a constant C. If the gradient is very close to 0 we leave $e_k(d)$ unchanged. For all our experiments we used C = 2. Larger values can speed up the convergence but sometimes lead to a worse solution. Using an exponential decay is common practice when learning levels of expertise (e.g. Herbster & Warmuth, 1998).

In the learning procedure we perform the expertise update first. We then recompute the responsibilities using these new levels of expertise and update the experts. Our algorithm typically converges after about 10 iterations.

Herbster & Warmuth (1998) proposed an algorithm for tracking the best expert in a sequential prediction task. In their work it is assumed that a linear ordering of the variables is known such that the expert with the highest level of expertise is constant on certain segments. In contrast to that, our approach can be applied to an arbitrary permutation of the variables. Moreover, they consider a single sequence of variables with a fixed partitioning into expert supports. In our setup the partitioning changes dynamically depending on the observed sample. However, the greatest difference to our work is that Herbster & Warmuth (1998) do not learn the individual experts but only focus on training the levels of expertise.

Lücke & Sahani (2008) studied a composition rule that also partitions the variables into expert supports. In their model the composed template is simply the maximum of the experts' templates $\mu_k$. This rule is only useful in special cases. A generalization, in which the composition depends on the maximum and the minimum of the expert templates $\mu_k(d)$, was considered by Goessling & Amit (2015). While the motivation for that rule was similar, the maximum-expertise rule in this paper is more principled and can be applied to continuous data.

In the work by Amit & Trouvé (2007) a simple average (i.e., an equal mixture) of the individual templates was used. With such a composition rule, all experts are equally responsible for each of the variables and hence specialization on local structures is not possible. To circumvent this problem, in their work $e_k(d)$ was manually set to 1 for some subset of the dimensions (depending on a latent shift variable) and to 0 elsewhere.

A popular model family with latent binary representation are products of experts (Hinton, 2002). In such a model the individual distributions $P_k$ are multiplied together and renormalized. Computation of the normalizing constant is in general intractable though. A special case, in which an explicit normalization is possible, are restricted Boltzmann machines (Hinton, 2002). In these models the experts are product Bernoulli distributions with templates $\mu_k \in [0, 1]^D$. The composed distribution is then also a product Bernoulli distribution with composed template

$$\mu_{\mathrm{RBM}}(d) = \sigma\Big(\sum_{k: h(k)=1} w_k(d)\Big)$$

where $w_k(d)$ are the expert log-odds and $\sigma$ is the logistic function.

Another common model for representation learning are autoencoders (Vincent et al., 2008), which can be considered as mean-field approximations of restricted Boltzmann machines that use latent variables h(k) with values in [0, 1]. To obtain a sparse representation a penalty on the number of active experts can be added (Ng, 2011). Such approaches are also known as sparse dictionaries (e.g., Elad, 2010) and are based on opinion pools of the form $\sum_k h(k)\, w_k(d)$. The strength of the sparsity penalty is an additional tuning parameter which has to be tuned. In dynamic partition models sparse activations are inherent. In the next section, we experimentally compare products of experts, autoencoders and sparse dictionaries to our proposed model.

Figure 1: Expert training for the synthetic dataset. Each panel shows the probabilities (white/black corresponds to $\mu_k(d)$ = 0/1) of the 10 experts (rows) for the 10 dimensions (columns). 1st panel: Random initialization. 2nd-4th panel: Our learning procedure after 3/5/15 iterations.

Figure 2: Trained experts for the synthetic data after 1,000 iterations using an autoencoder (1st panel), a sparse dictionary (2nd panel) and a restricted Boltzmann machine (3rd panel)."}, {"section_index": "8", "section_name": "4.1 SYNTHETIC DATA", "section_text": "We consider a synthetic example and try to learn the underlying factors of variation. The dataset consists of the 32-element subset $\{(0, 1), (1, 0)\}^5 \subset \{0, 1\}^{10}$. Note that there are 5 factors of variation corresponding to the state of the pairs $(x(2l{-}1), x(2l))$ for l = 1, ..., 5 with the two factor levels (0, 1) and (1, 0). Indeed, the distribution can be easily expressed through a partition model with partitioning
model with partitioning\n1,2}U{3,4}U{5,6}U{7,8}U{9,10"}, {"section_index": "9", "section_name": "and corresponding models", "section_text": "Pe(x(2l-1),x(2l)) =21{x(2l-1)=0, x(2l)=1} + 21{x(2l-1)=1, x(2l)=0}.\nWe show that our dynamic partition model is able to learn these factors of variation without requiring a manual specification of the partitioning. Here, the total number of experts we need to accuratel reconstruct all data points happens to be equal to the number of dimensions. However, in other cases the number of required experts could be smaller or larger than D. We ran our learning algorithn for 15 iterations starting from a random initialization of the experts. The resulting templates afte 3, 5 and 15 iterations are shown in Figure . We see that each of the final experts specializes ir exactly two dimensions d and d + 1. Its opinion for these variables are close to O and 1, respectively while the opinions for the remaining variables are about 1/2. Every data point can now be (almost) perfectly reconstructed by using exactly 5 of these experts.\nFor comparison we trained various other models with 10 experts, which use a sum-of-log-odds composition. We first tried an autoencoder (Vincent et al., 2oo8), which in principle could adop the identity map because it uses (in contrast to our model) a bias term for the observable and laten variables. However, the gradient descent learning algorithm with tuned step size yielded a differen representation (Figure , 1st panel). While the reconstruction errors are rather low, they are clearly nonzero and the factors of variations have not been disentangled. Next, we considered a dictionary with a sparse representation (e.g., Elad, 2010). The sparsity penalty was adjusted so that the average number of active dictionary elements was around 5. The learning algorithm again yielded highly dependent experts (Figure , 2nd panel). Finally, we trained a restricted Boltzmann machine through batch persistent contrastive divergence (Tieleman, 2oo8) using a tuned learning rate. Note that a\n930 7 8\nFigure 3: Trained experts for MNIST digits. Left: Expert probabilities (white/black corresponds to k(d) = 0/1). Right: Levels of expertise (blue/red corresponds to small/large values).\nFigure 4: Reconstruction of MNIST test examples using likelihood matching pursuit. Each column. visualizes the composed Bernoulli templates during the sequential inference procedure (top down for one sample. The bottom row are the original data points..\nrestricted Boltzmann machine in principle only requires 5 experts to model the data appropriately because it uses bias terms. However, we again learned 10 experts (Figure 3rd panel). While the results look better than for the previous two models they are still far from optimal. In earlier work Goessling & Amit (2015) we performed a quantitative comparison for a similar dataset, which showed that the reconstruction performance of models with sum-of-log-odds composition is indeed suboptimal."}, {"section_index": "10", "section_name": "4.2 MNIST DIGITS", "section_text": "We now consider the MNIST digits dataset (LeCun et al.l, 1998), which consists of 60,000 training samples and 10,000 test samples of dimension 28 28 = 784. We ran our learning algorithm for 10\n4 4 4 9 X 7141z1954563207 4\n8288822888 9997774 7 G 4 003092239 6694444280 33553535333\nFigure 5: Dynamic supports for 5 MNIST experts. Left column: Expert probabilities. Remaining columns: Composed Bernoulli templates for 10 latent configurations. 
The cast opinion of the expert is shown in shades of red (white/red corresponds to $\mu_k(d)$ = 0/1).

Figure 6: Trained experts for Weizmann horses. Left: Expert probabilities (white/black corresponds to $\mu_k(d)$ = 0/1). Right: Levels of expertise (blue/red corresponds to small/large values).

iterations and trained 100 experts (Figure 3). We see that some experts specialize on local structures while others focus on more global ones. In Figure 4 we visualize the inference procedure for some test samples using these 100 learned experts. On average 12 experts were activated for each data point. For easier visualization we show at most 10 iterations of the likelihood matching pursuit algorithm. The reconstructions are overall accurate and peculiarities of the samples are smoothed out. In Figure 5 we illustrate how the expert supports change based on the latent representation. Depending on which other experts are present the supports can vary quite a bit."}, {"section_index": "11", "section_name": "4.3 WEIZMANN HORSES", "section_text": "The following experiment shows that our model is able to cope with very high-dimensional data. The Weizmann horse dataset (Borenstein & Ullman, 2008) consists of 328 binary images of size 200 × 240. We used the first 300 images to train 20 experts (Figure 6) and used the remaining 28 images for testing. Some of the experts are responsible for the background and the central region of the horse while other experts focus on local structures like head posture, legs and tail. In Figure 7 we illustrate the partitioning of the test examples into expert opinions. For simplicity we use exactly 4 experts to reconstruct each sample. Not all characteristics of the samples are perfectly reconstructed but the general pose is correctly recovered. The same dataset was used to evaluate the shape Boltzmann machine (Eslami et al., 2014), where 2,000 experts were learned. For those experiments the images were downsampled to 32 × 32 pixels. This is a factor 50 smaller than the full resolution of 48,000 dimensions that we use.

Figure 7: Decomposition of the test examples from the Weizmann horse dataset. 1st column: Original data points. 2nd column: Reconstructions (shown are the composed Bernoulli templates). 3rd-6th column: Partitioning into expert opinions (white/black corresponds to $\mu_k(d)$ = 0/1, gray indicates regions for which the expert is not responsible).

Figure 8: Reconstructions of the test examples from the Caltech motorcycle dataset. Odd rows: Original data. Even rows: Reconstructions (shown are the composed Gaussian means)."}, {"section_index": "12", "section_name": "4.4 CALTECH MOTORCYCLES", "section_text": "We also experimented with real-valued data using the Caltech-101 motorcycle dataset (Fei-Fei et al., 2007), which consists of 798 images of size 100 × 180. The first 750 images were used for training and the remaining 48 images for testing. We trained 50 experts by running our learning procedure for 10 iterations. In Figure 8 we visualize the reconstructed test examples. The reconstructions are a bit blurry since we use a fairly sparse binary representation. Indeed, for each data point on average only 7 experts were employed. Note that the shapes of the motorcycles are reconstructed quite accurately."}, {"section_index": "13", "section_name": "5 DISCUSSION", "section_text": "In order to improve the reconstructions for continuous image data we could use real-valued latent variables in addition to binary ones (as in Hinton et al., 1998); a compact sketch of the core composition and inference procedure from Section 2 appears below.
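The following minimal NumPy sketch (ours, for binary data; array shapes and function names are assumptions) summarizes the maximum-expertise composition of Section 2 and the likelihood matching pursuit inference of Section 2.1:

    import numpy as np

    def compose(mu, e, h):
        """Maximum-expertise composition: for every dimension d, use the
        opinion of the active expert with the highest expertise.
        mu, e: (K, D) arrays of opinions/expertise; h: (K,) binary vector."""
        active = np.flatnonzero(h)
        winner = active[np.argmax(e[active], axis=0)]   # k*(d) for each d
        return mu[winner, np.arange(mu.shape[1])]

    def log_lik(x, m, eps=1e-6):
        """Bernoulli log-likelihood of x under composed opinions m."""
        m = np.clip(m, eps, 1 - eps)
        return float(np.sum(x * np.log(m) + (1 - x) * np.log(1 - m)))

    def matching_pursuit(x, mu, e):
        """Likelihood matching pursuit: greedily activate the expert that
        most improves the likelihood, until no expert helps anymore."""
        K = mu.shape[0]
        h, best = np.zeros(K, dtype=int), -np.inf
        while True:
            gains = []
            for k in np.flatnonzero(h == 0):
                h[k] = 1
                gains.append((log_lik(x, compose(mu, e, h)), k))
                h[k] = 0
            if not gains:
                return h
            score, k = max(gains)
            if score <= best:
                return h
            best, h[k] = score, 1

The learning loop would alternate this inference with the responsibility-weighted expert updates and the multiplicative expertise updates (multiply or divide $e_k(d)$ by C = 2 according to the gradient sign) described in Section 2.2.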
This would allow us to model intensities and contrasts more accurately. The inference procedure would have to be adapted accordingly, such that continuous activations can be returned.

Our work focused on product distributions. In order to apply the proposed approach to models with dependence structure one can make use of an autoregressive decomposition (e.g., Goessling & Amit, 2016). If the joint distribution is written as a product of conditional distributions then we can employ the same composition rule as before. Indeed, we can compose the conditionals as

$$P\big(x(d) \mid x(1{:}d{-}1), h\big) = P_{k^*(d)}\big(x(d) \mid x(1{:}d{-}1)\big)$$

where $P_k$ are autoregressive expert models and k*(d) is the active expert with the highest level of expertise for dimension d."}]
HJ9rLLcxg
[{"section_index": "0", "section_name": "DATASET AUGMENTATION IN FEATURE E SPACE", "section_text": "thatKrizhevsky et al.(2012) used for input space data augmentation when training AlexNet (we crop to 2424). To simulate sequence input the images are fed into the network one row of pixels per. time step similar to the SA setup in (Dai & Le2015).\nTerrance DeVries and Graham W. Taylor\nFor each dataset we train a 2-layer MLP on the context vectors produced by the sequence encoder. Both MLP and SA use the same number of hidden units in each layer: 256 per layer for MNIST. and 1024 per layer for CIFAR-10. We conduct four different test scenarios on the MNIST dataset. To control for the representation, as a baseline we trained the classifier only on context vectors from. the original images (i.e. SA with no augmentation). We then compare this to training with various. kinds of dataset augmentation: traditional affine image transformations in input space (shifting, ro-. tation, scaling), extrapolation between nearest neighbours in input space, and extrapolation betweer nearest neighbours in representational space. For both extrapolation experiments we use three near-. est neighbours per sample and = 0.5 when generating new data. For CIFAR-10, our baseline is. trained using context vectors extracted from cropped and flipped images. Against this baseline we. test the addition of extrapolation between nearest neighbours in representational space, using the. same setup as the MNIST test. Due to the size of the datasets we apply an approximate nearest. neighbour algorithm (Wan et al.]2016).\nDataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in su- pervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a sim pler, domain-agnostic approach to dataset augmentation. We start with existing data points and apply simple transformations such as adding noise, interpolating. or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A re-kindling of interest in unsu- pervised representation learning makes this technique timely and more effective. It is a simple proposal, but to-date one that has not been tested empirically. Work- ing in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data.\nResults are reported in Table 4 For MNIST, we find that extrapolating in feature space not only performs better than the baseline, but it also achieves a lower error rate compared to domain-specific data augmentation in input space. A similar outcome is observed in CIFAR-10, where feature space. extrapolation reduces error rate by O.3%. Interestingly, we note that the baseline test for this dataset already leveraged image transformations to improve performance, so the additional reduction in error rate could indicate that both kinds of augmentation, extrapolation in feature space and manual transformation in pixel space. could complement each other..\nTable 4: Test error (%) on MNIST and CIFAR-10. 
Averages over 10 and 5 runs, respectively"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "One of the major catalysts for the resurgence of neural networks as \"deep learning\"' was the influx. of the availability of data. Labeled data is crucial for any supervised machine learning algorithm to. work, even moreso for deep architectures which are easily susceptible to overfitting. Deep learning. has flourished in a few domains (e.g. images, speech, text) where labeled data has been relatively. simple to acquire. Unfortunately most of the data that is readily available is unstructured and un- labeled and this has prevented recent successes from propagating to other domains. In order to. leverage the power of supervised learning, data must be manually labeled, a process which requires. investment of human effort. An alternative to labeling unlabeled data is to generate new data with. known labels. One variant of this approach is to create synthetic data from a simulation such as a. computer graphics engine (Shotton et al.]2013] Richter et al.2016), however, this may not work if. the simulation is not a good representation of the real world domain. Another option is dataset aug-. mentation, wherein the existing data is transformed in some way to create new data that appears to. come from the same (conditional) data generating distribution (Bengio et al.2011). The main chal-. lenge with such an approach is that domain expertise is required to ensure that the newly generated. data respects valid transformations (i.e. those that would occur naturally in that domain)..\nIn this paper, we demonstrate a new domain-independent data augmentation technique that car. be used to improve performance when training supervised learning models. We train a sequenc. autoencoder to construct a learned feature space in which we extrapolate between samples. Thi technique allows us to increase the amount of variability within the dataset, ultimately resulting ir. a more robust model. We demonstrate our technique quantitatively on five datasets from differen. domains (speech, sensor processing, motion capture, and images) using the same simple architectur. and achieve near state-of-the-art results on two of them. Moreover, we show that data augmentatior. in feature space may complement domain-specific augmentation.\nIn this work, we consider augmentation not by a domain-specific transformation, but by perturb. ing, interpolating, or extrapolating between existing examples. However, we choose to operate nc in input space, but in a learned feature space.Bengio et al.(2013) and Ozair & Bengio (2014 claimed that higher level representations expand the relative volume of plausible data points withi the feature space, conversely shrinking the space allocated for unlikely data points. As such, whe traversing along the manifold it is more likely to encounter realistic samples in feature space tha compared to input space. Unsupervised representation learning models offer a convenient way o learning useful feature spaces for exploring such transformations. 
Recently, there has been a retur to interest in such techniques, leading to, e.g., variational autoencoders (Kingma & Welling]2014 generative adversarial networks (Goodfellow et al.|2014), and generative stochastic networks (Alai et al.][2016), each of which could be used to generate useful feature spaces for augmentation..\nAn important finding is that the extrapolation operator, when used in feature space, generated usefu ynthetic examples while noise and interpolation did not. Additional synthetic data experiment where we could control the complexity of the decision boundary revealed that extrapolation onl mproved model performance in cases where there were complex class boundaries. In cases witl simple class boundaries, such as linear separability or one class encircling another, extrapolatio nindered model performance, while interpolation helped. Our current hypothesis is that interpola tion tends to tighten class boundaries and unnecessarily increase confidence, leading to overfitting This behaviour may cause the model to ignore informative extremities that can describe a comple decision boundary and as a result produce an unnecessarily smooth decision boundary. As mos igh-dimensional, real datasets will typically have complex decision boundaries, we find extrapola tion to be well suited for feature space dataset augmentation.\nBy manipulating the vector representation of data within a learned feature space a dataset can be augmented in a number of ways. One of the most basic transformations that can be applied to the"}, {"section_index": "2", "section_name": "REFERENCES", "section_text": "data is to simply add random noise to the context vector. In the context of class-imbalanced data Chawla et al.(2002) proposed interpolating between samples in feature space. Similarly extrapola- tion between samples could also be applied. We investigate some of these methods to see which is. most effective for improving the performance of supervised learning models when augmented data is added to the dataset.\nIn this work, we demonstrate that extrapolating between samples in feature space can be used t augment datasets and improve the performance of supervised learning algorithms. The main benef of our approach is that it is domain-independent, requiring no specialized knowledge, and can there fore be applied to many different types of problems. We show that models trained on datasets tha have been augmented using our technique outperform models trained only on data from the origi nal dataset. Just as dataset augmentation in input space has become standard for visual recognitio tasks, we recommend dataset augmentation in feature space as a domain-agnostic, general-purpos framework to improve generalization when limited labeled data is available."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "or many years, dataset augmentation has been a standard regularization technique used to reduc. verfitting while training supervised learning models. Data augmentation is particularly popular fc. isual recognition tasks as new data can be generated very easily by applying image manipulatior. uch as shifting, scaling, rotation, and other affine transformations. When training LeNet5, on. f the most early and well-known convolutional neural network architectures, LeCun et al.(1998 pplied a series of transformations to the input images in order to improve the robustness of th. nodel. Krizhevsky et al.[(2012) also used image transformations to generate new data when trainin. 
he renowned AlexNet model for the 2012 Large Scale Visual Recognition Challenge (ILSVRC. hey claimed that dataset augmentation reduced the error rate of the model by over 1%. Creatin. ew data has since been a crucial component of all recent large-scale image recognition models.\nYoshua Bengio, Gregoire Mesnil, Yann Dauphin, and Salah Rifai. Better mixing via deep represen tations. In ICML (1), pp. 552-560, 2013.\nNitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. Smote: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research. 16:321-357. 2002\nKyunghyun Cho, Bart van Merrienboer, Calar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for. statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in. Natural Language Processing (EMNLP), pp. 1724-1734, 2014.\nChris Ellis, Syed Zain Masood, Marshall F Tappen, Joseph J Laviola Jr, and Rahul Sukthankar Exploring the trade-off between accuracy and observational latency in action recognition. Inter national Journal of Computer Vision, 101(3):420-436, 2013.\nIan Goodfellow. Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor mation Processing Svstems. pp. 2672-2680. 2014.\nNacereddine Hammami, Mouldi Bedda, and Nadir Farah. Spoken Arabic digits recognition using MFCC based on GMM. In Sustainable Utilization and Development in Engineering and Tech. nology (STUDENT), 2012 IEEE Conference on, pp. 160-163. IEEE, 2012.\nSebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large. target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting. of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pp. 1-10, 2015."}, {"section_index": "4", "section_name": "3 MODEL", "section_text": "Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In The Internationa Conference on Learning Representations (ICLR), 2015..\nOur dataset augmentation technique works by first learning a data representation and then applying transformations to samples mapped to that representation. Our hypothesis is that, due to manifold unfolding in feature space, simple transformations applied to encoded rather than raw inputs will result in more plausible synthetic data. While any number of representation learning models could\nUnfortunately, dataset augmentation is not as straightforward to apply in all domains as it is for im ages. For example, Schluter & Grill (2015) investigated a variety of data augmentation techniques for application to singing voice detection. These include adding Gaussian noise to the input, shifting the pitch of the audio signal, time stretching, varying the loudness of the audio signal, applying ran- dom frequency filters, and interpolating between samples in input space. They found that only pitch shifting and random frequency filtering appeared to improve model performance. While performing well on audio data, these augmentation techniques cannot be applied to other domains. As such, the process of designing, implementing, and evaluating new data augmentation techniques would need to be repeated for each new problem.\nImportant to our work are sequence-to-sequence learning (seq2seq) models which were first de-. 
veloped independently by Cho et al.(2014) and Sutskever et al.(2014). Generally these models convert a sequence of inputs from one domain into a fixed-length context vector which is then used. to generate an output sequence, usually from a different domain. For example, the first application. of seq2seq learning by Cho and Sutskever was to translate between English and French. Sequence-. to-sequence learning has recently been used to achieve state-of-the-art results on a large variety of. sequence learning tasks including image captioning (Vinyals et al.2015b), video captioning (Venu- gopalan et al.2015), speech recognition ((Chan et al.2016), (Bahdanau et al.2016)), machine translation ((Jean et al.]2015), (Luong et al.]2015)), text parsing (Vinyals et al.]2015a), and con- versational modeling (Vinyals & Le2015). The seq2seq architecture can also be used to create. sequence autoencoders (SA) by creating a model that learns to reconstruct input sequences in its. output (Srivastava et al.J2015}Dai & Le 2015). We use a variant of sequence autoencoders in our. work to create a feature space within which we can manipulate data to augment a training set.\nYT Y2 / Decoder eeeoeen Encoder Data Augmentation Encoder eepooe Ck Sequence Classifier Static Classifier X1 X2 XT (a) Sequence autoencoder (b) Encode and apply data transform (c) Decode and/or classify\nFigure 1: System architecture composed of three steps. (a) A sequence autoencoder learns a feature space from unlabeled data, representing each sequence by a context vector (C). (b) Data is encodec to context vectors and augmented by adding noise, interpolating, or extrapolating (here we depic interpolation). (c) The resulting context vectors can either be used directly as features for supervised learning with a static classifier, or they can be decoded to reconstruct full sequences for training a sequence classifier.\nbe explored, we use a sequence autoencoder to construct a feature space. The main reason we adop. SA is that we favour a generic method that can be used for either time series or static data"}, {"section_index": "5", "section_name": "3.1 SEOUENCE AUTOENCODER", "section_text": "Juan Jose Rodriguez, Carlos J Alonso, and Jose A Maestro. Support vector machines of interval based features for time series classification. Knowledge-Based Systems. 18(4):171-178. 2005\nAn autoencoder consists of two parts: an encoder and a decoder. The encoder receives data as in put and, by applying one or more parametrized nonlinear transformations, converts it into a new. representation, classically lower-dimensional than the original input. The decoder takes this repre. sentation and tries to reconstruct the original input, also by applying one or more nonlinear trans-. formations. Various regularized forms of autoencoders have been proposed to learn overcomplete. representations\nJamie Shotton, Toby Sharp, Alex Kipman, Andrew Fitzgibbon, Mark Finocchio, Andrew Blake Mat Cook, and Richard Moore. Real-time human pose recognition in parts from single depth images. Communications of the ACM, 56(1):116-124, 2013.\nA sequence autoencoder works in a similar fashion as the standard autoencoder except that the encoder and decoder use one or more recurrent layers so that they can encode and decode variable- length sequences. In all of our experiments, we use a stacked LSTM (Li & Wu]2015) with two layers for both the encoder and decoder (Figure[1a). During the forward pass, the hidden states of the recurrent layers are propagated through the layer stack. 
The encoder's hidden state at the final time step, called the context vector, is used to seed the hidden state of the decoder at its first time step.\nNitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 843-852, 2015.\nThe main difference between our implementation of the SA and that of Dai & Le (2015) is how the context vector is used in the decoder. Dai and Le follow the original seq2seq approach of Sutskever et al. (2014) and use the context vector as input to the decoder only on the first time step, then use the output of the previous time step as input for all subsequent time steps as follows:\ny_0 = f(s_0, c), y_t = f(s_{t-1}, y_{t-1})\nIlya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.\nwhere f is the LSTM function, s is the state of the LSTM (both hidden and cell state), c is the context vector, and y is the output of the decoder. We instead modify the above equation so that the decoder is conditioned on the context vector at each time step, as was done in (Cho et al., 2014):\ny_0 = f(s_0, c), y_t = f(s_{t-1}, y_{t-1}, c)\nWe found that conditioning the decoder on the context vector at each time step resulted in improved reconstructions, which we found to be critical to the success of the data augmentation process.\nYann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.\nXiangang Li and Xihong Wu. Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4520-4524. IEEE, 2015.\nMoshe Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml\nDavid Llorens, Federico Prat, Andres Marzal, Juan Miguel Vilar, Maria Jose Castro, Juan-Carlos Amengual, Sergio Barrachina, Antonio Castellanos, Salvador Espana Boquera, JA Gomez, et al. The UJIpenchars database: a pen-based database of isolated handwritten characters. In LREC, 2008.\nOriol Vinyals and Quoc Le. A neural conversational model. In International Conference on Machine Learning: Deep Learning Workshop, 2015.\nIn order to augment a dataset, each example is projected into feature space by feeding it through the sequence encoder, extracting the resulting context vector, and then applying a transformation in feature space (Figure 1b). The simplest transform is to simply add noise to the context vectors; however, there is a possibility with this method that the resulting vector may not resemble the same class as the original, or even any of the known classes. In our experiments, we generate noise by drawing from a Gaussian distribution with zero mean and per-element standard deviation calculated across all context vectors in the dataset. We include a γ parameter to globally scale the noise:\nc'_i = c_i + γX, X ~ N(0, σ_i²)\nwhere i indexes the elements of a context vector which corresponds to data points from the training set. A more directed approach for data augmentation follows the techniques introduced by Chawla et al. (2002). For each sample in the dataset, we find its K nearest neighbours in feature space which share its class label.
For each pair of neighbouring context vectors, a new context vector can then be generated using interpolation:\nc' = (c_k - c_j) λ + c_j\nwhere c' is the synthetic context vector, c_j and c_k are neighbouring context vectors, and λ is a variable in the range [0, 1] that controls the degree of interpolation. In our experiments, we use λ = 0.5 so that the new sample balances properties of both original samples. In a similar fashion, extrapolation can also be applied to the context vectors:\nc' = (c_j - c_k) λ + c_j\nOnce new context vectors have been created, they can either be used directly as input for a learning task, or they can be decoded to generate new sequences (Figure 1c). When interpolating between two samples, the resulting decoded sequence is set to be the average length of the two inputs. When extrapolating between two samples the length of the new sequence is set to be the same as that of c_j.\nIn the case of extrapolation, λ is a value in the range [0, ∞) which controls the degree of extrapolation. While λ could be drawn from a random distribution for each new sample, we found that setting λ = 0.5 worked well as a default value in most cases, so we use this setting in all of our tests."}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "In all experiments, we trained an LSTM-based sequence autoencoder in order to learn a feature space from the available training examples. Each hidden layer, including the context vector, had the same number of hidden units and a dropout probability of p = 0.2. The autoencoders were trained using Adam (Kingma & Ba, 2015) with an initial learning rate of 0.001, which was reduced by half whenever no improvement was observed in the validation set for 10 epochs. Finally, we reversed the order of the input sequences as suggested by Sutskever et al. (2014). We found that reversing the order of input sequences caused the model to train faster and achieve better final solutions.\nFor all classification experiments where interpolation or extrapolation was applied to generate new samples, we applied the following procedure unless otherwise stated. For each sample in the dataset we found the 10 nearest in-class neighbours by searching in feature space. We then interpolated or extrapolated between each neighbour and the original sample to produce a synthetic example, which was added to the augmented dataset. For all tests, the baseline model and the augmented dataset model(s) were trained for the same number of weight updates regardless of dataset size."}, {"section_index": "7", "section_name": "4.1 VISUALIZATION - SINUSOIDS", "section_text": "To gain an intuition of the method we start by working with a synthetic dataset of sinusoids. Sinusoids work well as a test case for this technique as they have a known behaviour and only two dimensions (amplitude and time), so we can easily observe the effects of the dataset augmentation process. To create a training set, sinusoids were generated with amplitude, frequency, and phase drawn from a uniform distribution.\nFor this toy problem, we trained a sequence autoencoder with 32 hidden units in each layer. We then applied different data augmentation strategies to observe the effects on the "synthetic" sinusoids.
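Concretely, the three feature-space transforms above (Equations 1-3) reduce to a few vector operations on the context vectors. The following numpy sketch is ours and purely illustrative; function and variable names are not from the paper's implementation.

# A hedged numpy sketch of the three feature-space transforms (Equations 1-3).
# `contexts` stands for the encoder's context vectors; all names are ours.
import numpy as np

def add_noise(c, sigma, gamma=0.5, rng=np.random):
    # Equation 1: c'_i = c_i + gamma * X, X ~ N(0, sigma_i^2), where sigma_i is
    # the per-element standard deviation computed across all context vectors.
    return c + gamma * rng.normal(0.0, sigma)

def interpolate(c_j, c_k, lam=0.5):
    # Equation 2: c' = (c_k - c_j) * lambda + c_j, pulling c_j towards its neighbour c_k.
    return (c_k - c_j) * lam + c_j

def extrapolate(c_j, c_k, lam=0.5):
    # Equation 3: c' = (c_j - c_k) * lambda + c_j, pushing c_j away from its neighbour c_k.
    return (c_j - c_k) * lam + c_j

contexts = np.random.randn(1000, 256)   # e.g. 256-dim context vectors (illustrative)
sigma = contexts.std(axis=0)            # per-element std across the dataset
c_noisy = add_noise(contexts[0], sigma)
c_mid   = interpolate(contexts[0], contexts[1])
c_far   = extrapolate(contexts[0], contexts[1])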
magnitude parameter from Equation|1|was set to O.5. In Figure2a|the blue and green \"parent samples are shown in bold while the augmented \"child' samples are thinner, lighter lines. Impor. tantly, we observe that all new samples are valid sinusoids with stable, regular repeating patterns Although mimicking the major properties of their parents the generated samples have small change in amplitude, frequency, and phase, as would be the expected effect for the addition of random noise\nFor a more directed form of data augmentation we experimented with interpolating between sinu-. soids within the space of the context vectors. Figure [2b|demonstrates interpolation between two sinusoids using Equation|2|while varying the parameter from O to 1. Unlike the results obtained by Bengio et al.(2013) where the transition between classes occurs very suddenly we find that the. samples generated by our model smoothly transition between the two parent sinusoids. This is an. exciting observation as it suggests that we can control characteristics of the generated samples by combining two samples which contain the desired properties..\nIn a similar fashion to interpolation we can also extrapolate between two samples using Equation. 3] For this experiment we again vary the parameter from O to 1 to generate a range of samples.. As seen in Figure [2cl this appears to have the effect of exaggerating properties of each sinusoid. with respect to the properties of the other sinusoid. For example, we see that new samples generated from the blue parent sinusoid increase in amplitude and decrease in phase shift. Conversely, samples generated from the green parent sinusoid decrease in amplitude and increase in phase shift. The behaviour of the extrapolation operation could prove very beneficial for data augmentation as it could be used to generate extra samples of rare or underrepresented cases within the dataset, which. is a common failure case.\nThe UJ1 Pen Characters dataset (v2) contains 11,640 instances of 97 different characters hand written by 60 participants (Llorens et al.]2008). All samples were collected using a tablet PC and a stylus. Characters are defined by a sequence of X and Y coordinates, and include upper and lower case ASCII letters, Spanish non-ASCII letters, the 10 digits, and other common punctuation and symbols. As with the sinusoids in Section4.1] handwritten characters are suitable for evaluating dataset augmentation methods as they have an expected shape and can be easily visualized.\nAs a preprocessing step for this dataset we first applied local normalization to each sample to get a fixed size, followed by a global normalization across the dataset as a whole. A sequence autoencode. with 128 hidden units per layer was trained to construct the feature space within which data aug mentation could take place. Figure 3a|demonstrates the effects of interpolating between characters in feature space. In this example we use the \"@\" symbol. We see that the resulting characters share\n(a) Random noise (b) Interpolation (c) Extrapolation\nFigure 2: Sinusoids with various transformations applied in feature space. (a) Random noise added with y = 0.5. (b) Interpolation between two sinusoids for values of X between O and 1. (c) Extrap- olation between two sinusoids for values of between 0 and 1. Best viewed in colour\nFigure 3: Interpolation (a) and extrapolation (b) between handwritten characters. 
Character (0,i) is interpolated/extrapolated with character (j,O) to form character (i,j), where iis the row number and is the column number. Original characters are shown in bold.\ncharacteristics of the two parent inputs, such as the length of the symbol's tail or the shape of the central \"a\"'. Visually the majority of generated samples appear very similar to their parents, which is expected from interpolation, but is not necessarily useful from the perspective of data augmentation.\nWhen augmenting data for the purpose of improving performance of machine learning algorithms i1 is desirable to create samples that are different from the data that is already common in the dataset To this end, extrapolating between samples is preferable, as shown in Figure 3b] Extrapolated data. displays a wider variety compared to samples created by interpolation. We hypothesize that it is this added variability that is necessary in order for data augmentation to be useful.."}, {"section_index": "8", "section_name": "4.3 SPOKEN ARABIC DIGITS", "section_text": "For our first quantitative test we use the Arabic Digits dataset (Lichman2013) which contains 8,800 samples of time series mel-frequency cepstrum coefficients (MFCCs) extracted from audio clips o1 spoken Arabic digits. Thirteen MFCCs are available for each time step in this dataset. To preprocess the data we apply global normalization. To evaluate our data augmentation techniques we used the official train/test split and trained ten models with different random weight initializations\nAs a baseline model we trained a simple two layer MLP on the context vectors produced by a SA. Both models used 256 hidden units in each hidden layer. The MLP applied dropout with p = 0.5. after each dense layer. To evaluate the usefulness of different data augmentation techniques we trained a new baseline model on datasets that had been augmented with newly created samples The techniques we evaluated were: adding random noise to context vectors, interpolating between two random context vectors from the same class, interpolating between context vectors and their nearest neighbours from the same class, and extrapolating between context vectors and their nearest neighbours from the same class. The results of our tests are summarized in Table[1\nTable 1: Test set error on Arabic Digits dataset averaged over 10 runs\nWe find that our simple baseline model achieves competitive performance after training on the extracted context vectors, demonstrating the feature extracting capability of the sequence autoen- coder. The naive data augmentation approach of adding random noise to the context vectors further improves performance. Of interest, we find that adding new samples generated using interpolation techniques diminishes the performance of the model, which confirms our hypothesis that good data augmentation techniques should add variability to the dataset. Of the two interpolation techniques.\n(a) Interpolation (b) Extrapolation\nwe see that interpolating between neighbouring samples performs better than simply interpolating with randomly chosen samples of the same class. Finally we observe that extrapolating between samples improves model performance significantly, reducing the baseline error rate by almost half Our results rival those of Hammami et al.(2012), which to our knowledge are state-of-the-art on this dataset.\nOur second quantitative test was conducted on the Australian Sign Language Signs dataset (AUS. LAN). 
AUSLAN was produced byKadous(2002) and contains 2,565 samples of a native signer. signing 95 different words or phrases while wearing high quality position tracking gloves. Each. time series sample is, on average, 57 frames in length and includes 22 features: roll, pitch, yaw finger bend, and the 3D coordinates of each hand. To preprocess the raw data we first locally centre each sample and then apply global normalization. For evaluation, we perform cross validation with. 5 folds, as is common practice for the AUSLAN dataset..\nThe baseline model for these tests was a two layer MLP with 512 hidden units in each layer, witl. dropout (p = 0.5) applied on each. Similar to Arabic Digits, dataset we find that the simple MLI. can achieve competitive results when trained on the context vectors extracted from the sequence au toencoder (see Table2). In this case, however, we observe that adding random noise to the contex. vectors did not improve performance. One possible explanation for this outcome is that the AUS. LAN dataset has much more classes than the Arabic Digits dataset (95 versus 10) so there is highe. probability of a randomly augmented context vector jumping from one class manifold to another. Traversing instead along the representational manifold in a directed manner by extrapolating be. tween neighbouring samples results in improved performance over that of the baseline model. Ou. results also match the performance of Rodriguez et al.(2005), which to our knowledge is the bes. 5-fold cross validation result for the AUSLAN dataset..\nThe final time series dataset we considered was the UCF Kinect action recognition dataset (Ellis. et al.]2013). It contains motion capture data of participants performing 16 different actions such. as run, kick, punch, and hop. The motion capture data consists of 3-dimensional coordinates fo. 15 skeleton joints for a total of 45 attributes per frame. In total there are 1,280 samples within the. dataset. To preprocess the dataset we first shift the coordinates of each sample so that the central. shoulder joint of the first frame is located at the origin. Global normalization is also applied.\nWith the UCFKinect dataset our main goal was to determine the effectiveness of interpolation ii. feature space for generating new sequences that combine the characteristics and actions of the tw. \"seed\"' examples. We found that in order to produce natural looking results, the two actions to b combined must already share some properties. For example, Figure 4a|and |4b show motion captur sequences of a person stepping forward and a person stepping to the left, respectively. Both of these. actions take approximately the same amount of time to perform, and each skeleton moves thei left leg first, then their right leg. Due to these preexisting similarities the action sequences can b interpolated in feature space to produce a natural looking sequence of a skeleton stepping diagonall. forward and to the left (Figure4c). These results emulate what was previously observed in Sectio .3l which indicated that similar properties are necessary for successful blending of examples..\nOur secondary goal with the UCFKinect dataset was to quantitatively evaluate the performance. of extrapolation-based data augmentation. To compare to previous results, we used 4-fold cross. validation (see Table|3[for a summary of results). 
We found that extrapolating between samples in representational space improved the performance of our untuned model by more than 1%, which is quite significant. Our results are 2.5 percentage points below the current state-of-the-art result produced by Beh et al. (2014), but further tuning of the model could improve results.\nTable 2: CV error on AUSLAN dataset averaged over 5 folds\nFigure 4: A new motion capture sequence can be generated by interpolating between samples. By combining the "step front" action (a, from the validation set) with the "step left" action (b, from the validation set) we can generate a new sequence of a character stepping diagonally forward and to the left (c).\nHaving successfully applied dataset augmentation in feature space to improve the accuracy of sequence classification tasks, we now experiment with applying our technique to static data. For these experiments we concentrate on the image domain where manual data augmentation is already prevalent. We find that augmenting datasets by extrapolating within a learned feature space improves classification accuracy compared to no data augmentation, and in some cases surpasses traditional (manual) augmentation in input space.\nIn our experiments we consider two commonly used small-scale image datasets: MNIST and CIFAR-10. MNIST consists of 28 × 28 greyscale images containing handwritten digits from 0 to 9. There are 60,000 training images and 10,000 test images in the official split. CIFAR-10 consists of 32 × 32 colour images containing objects in ten generic object categories. This dataset is typically split into 50,000 training and 10,000 test images.\nIn all of our image experiments, we apply the same sequence autoencoder (SA) architecture as shown in Figure 1a to learn a representation. No pre-processing beyond a global scaling is applied to the MNIST dataset. For CIFAR-10 we apply global normalization and the same crop and flip operations\nTable 3: CV error on UCFKinect dataset averaged over 4 folds"}]
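The experimental protocol described earlier (10 nearest in-class neighbours per sample, extrapolation with λ = 0.5) can be sketched end to end as follows. This is our own illustrative pseudo-implementation, not the authors' code; it uses exact nearest-neighbour search from scikit-learn for brevity, whereas the paper notes an approximate search (Wan et al., 2016) for the larger image datasets.

# A sketch of the augmentation protocol: for every encoded sample, find its 10
# nearest in-class neighbours in feature space and extrapolate towards each.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def augment_by_extrapolation(contexts, labels, k=10, lam=0.5):
    new_contexts, new_labels = [], []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        C = contexts[idx]
        n_neighbors = min(k + 1, len(C))            # +1: a point is its own neighbour
        nn = NearestNeighbors(n_neighbors=n_neighbors).fit(C)
        _, neigh = nn.kneighbors(C)
        for j, row in enumerate(neigh):
            for n in row[1:]:                        # skip the sample itself
                c_new = (C[j] - C[n]) * lam + C[j]   # Equation 3
                new_contexts.append(c_new)
                new_labels.append(cls)
    return np.asarray(new_contexts), np.asarray(new_labels)

contexts = np.random.randn(500, 256)                 # stand-in for encoded training set
labels = np.random.randint(0, 10, size=500)
aug_c, aug_y = augment_by_extrapolation(contexts, labels)
# the downstream classifier is then trained on np.vstack([contexts, aug_c])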
BJC_jUqxe
[{"section_index": "0", "section_name": "A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING", "section_text": "From the figure we can tell that the model trained without the penalization term have lots of redun dancies between different hops of attention (Figure|3a), resulting in putting lot of focus on the wor 'it\" (Figure|3c), which is not so relevant to the age of the author. However in the right column, the model shows more variations between different hops, and as a result, the overall embedding focuse. on '\"mail-replies spam\"' instead. (Figure3d)\n*Montreal Institute for Learning Algorithms (MILA), Universite de Montreal t CIFAR Senior Fellow\nFor the Yelp dataset, we also observe a similar phenomenon. To make the experiments more ex- plorative, we choose to plot heat maps of overall attention heat maps for more samples, instead of plotting detailed heat maps for a single sample again. Figure 4 shows overall focus of the sentence embedding on three different reviews. We observe that with the penalization term, the model tends to be more focused on important parts of the review. We think it is because that we are encouraging it to be focused, in the diagonals of matrix AAT' (Equation|8).\nThis paper proposes a new model for extracting an interpretable sentence embed- ding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, senti- ment classification and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks.\nTo validate if these differences result in performance difference, we evaluate four models trained. on Yelp and Age datasets, both with and without the penalization term. Results are shown in Table 3] Consistent with what expected, models trained with the penalization term outperforms their. counterpart trained without.\nIn SNLI dataset, although we observe that introducing the penalization term still contributes to en couraging the diversity of different rows in the matrix sentence embedding, and forcing the network to be more focused on the sentences, the quantitative effect of this penalization term is not so obvious on SNLI dataset. Both models yield similar test set accuracies."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Having multiple rows in the sentence embedding is expected to provide more abundant information. about the encoded content. It makes sence to evaluate how significant the improvement can be brought by r. Taking the models we used for Age and SNLI dataset as an example, we vary r from. 1 to 30 for each task, and train the resulting 10 models independently (Figure5). Note that when. r = 1, the sentence embedding reduces to a normal vector form.\nMuch progress has been made in learning semantically meaningful distributed representations of. individual words, also known as word embeddings (Bengio et al.. 2001 Mikolov et al.]2013) On the other hand, much remains to be done to obtain satisfying representations of phrases and. sentences. Those methods generally fall into two categories. 
The first consists of universal sentence embeddings usually trained by unsupervised learning (Hill et al.|2016). This includes SkipThought. vectors (Kiros et al.|2015), ParagraphVector (Le & Mikolov|[2014), recursive auto-encoders (Socher et al.|2011f|2013), Sequential Denoising Autoencoders (SDAE), FastSent (Hill et al.]2016), etc.\nFrom this figure we can find that, without having multiple rows, the model performs on-par with. its competitiors which use other forms of vector sentence embeddings. But there is significant\nThe other category consists of models trained specifically for a certain task. They are usually. combined with downstream applications and trained by supervised learning. One generally finds. that specifically trained sentence embeddings perform better than generic ones, although generic. ones can be used in a semi-supervised setting, exploiting large unlabeled corpora. Several models. have been proposed along this line, by using recurrent networks (Hochreiter & Schmidhuber|1997 Chung et al.[[2014), recursive networks (Socher et al.[|2013) and convolutional networks (Kalchbren- ner et al.2014f dos Santos & Gatti]2014Kim2014) as an intermediate step in creating sentence representations to solve a wide variety of tasks including classification and ranking (Yin & Schutze 2015} Palangi et al.]2016] Tan et al.]2016] Feng et al.2015).A common approach in previous methods consists in creating a simple vector representation by using the final hidden state of the. RNN or the max (or average) pooling from either RNNs hidden states or convolved n-grams. Ad-. ditional works have also been done in exploiting linguistic structures such as parse and dependence. trees to improve sentence representations (Ma et al.]2015] Mou et al.2015b] Tai et al.]2015).\n0.85 0.85 0.80 0.75 0.80 ACernncy 0.70 Acennecy 0.65 0.75 1 1 0.60 5 5 10 10 0.55 0.70 20 20 0.50 30 30 0.45 0.65 0 5 10 15 20 25 30 0 2 4 6 8 10 12 14 Epoches Epoches (a) (b)\nFor some tasks people propose to use attention mechanism on top of the CNN or LSTM model to. introduce extra source of information to guide the extraction of sentence embedding (dos Santos et al.]2016). However, for some other tasks like sentiment classification, this is not directly appli-. cable since there is no such extra information: the model is only given one single sentence as input. In those cases, the most common way is to add a max pooling or averaging step across all time steps\nFigure 5: Effect of the number of rows (r) in matrix sentence embedding. The vertical axes indicates test set accuracy and the horizontal axes indicates training epoches. Numbers in the legends stand for the corresponding values of r. (a) is conducted in Age dataset and (b) is conducted in SNLI. dataset.\n*This work has been done during the 1st author's internship with IBM Watson\nTable 3: Performance comparision regarding the penalization term"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "0000c n m 2 h1h2e nn Ws1 tanh n1 <>h2<>h3>h4 Ws2 softmax A C n en (b) (a)\ndifference between having only one vector for the sentence embedding and multiple vectors. The. models are also quite invariant with respect to r, since in the two figures a wide range of values between 10 to 30 are all generating comparable curves..\nIntroducing attention mechanism allows the final sentence embedding to directly access previou LSTM hidden states via the attention summation. Thus the LSTM doesn't need to carry every piec of information towards its last hidden state. 
Instead, each LSTM hidden state is only expected t provide shorter term context information around each word, while the higher level semantics, whicl requires longer term dependency, can be picked up directly by the attention mechanism. This settin reliefs the burden of LSTM to carry on long term dependencies. Our experiments also support that as we observed that our model has a bigger advantage when the contents are longer. Further more the notion of summing up elements in the attention mechanism is very primitive, it can be something more complex than that, which will allow more operations on the hidden states of LSTM.\nThe model is able to encode any sequence with variable length into a fixed size representation without suffering from long-term dependency problems. This brings a lot of scalability to the model: without any modification, it can be applied directly to longer contents like paragraphs, articles, etc Though this is beyond the focus of this paper, it remains an interesting direction to explore as a future work.\nFigure 1: A sample model structure showing the sentence embedding model combined with a fully connected and softmax layer for sentiment analysis (a). The sentence embedding M is computed as multiple weighted sums of hidden states from a bidirectional LSTM (h1, ..., hn), where the summa tion weights (A1, ..., Ain) are computed in a way illustrated in (b). Blue colored shapes stand fo hidden representations, and red colored shapes stand for weights, annotations, or input/output.\nAs a downside of our proposed model, the current training method heavily relies on downstream. applications, thus we are not able to train it in an unsupervised way. The major obstacle towards enabling unsupervised learning in this model is that during decoding, we don't know as prior how the different rows in the embedding should be divided and reorganized. Exploring all those possible divisions by using a neural network could easily end up with overfitting. Although we can still dc unsupervised learning on the proposed model by using a sequential decoder on top of the sentence. embedding, it merits more to find some other structures as a decoder.."}, {"section_index": "3", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to acknowledge the developers of Theano (Theano Development Team 2016) and Lasagne. The first author would also like to thank IBM Watson for providing resources fundings and valuable discussions to make this project possible, and Caglar Gulcehre for helpfu discussions."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Rejean Ducharme, and Pascal Vincent. A neural probabilistic language model. Ir Advances in Neural Information Processing Systems, pp. 932-938, 2001.\nSection 2 details on our proposed self-attentive sentence embedding model, as well as a regular ization term we proposed for this model, which is described in Section2.2[ We also provide a visualization method for this sentence embedding in section 2.3] We then evaluate our model ir author profiling, sentiment classification and textual entailment tasks in Section|4\nSamuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large anno tated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015\nSamuel R Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D Manning, and Christopher Potts. A fast unified model for parsing and sentence understanding. arXiv preprini arXiv:1603.06021. 
2016"}, {"section_index": "5", "section_name": "2.1 MODEL", "section_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Asso ciation for Computational Linguistics, 2016.\nThe proposed sentence embedding model consists of two parts. The first part is a bidirectiona. LSTM. and the second part is the self-attention mechanism, which provides a set of summatior weight vectors for the LSTM hidden states. These set of summation weight vectors are dottec. with the LSTM hidden states, and the resulting weighted LSTM hidden states are considered as. an embedding for the sentence. It can be combined with, for example, a multilayer perceptron tc\nJunyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. 2014\nIn this paper, we introduced a fixed size, matrix sentence embedding with a self-attention mecha nism. Because of this attention mechanism, there is a way to interpret the sentence embedding in depth in our model. Experimental results over 3 different tasks show that the model outperforms other sentence embedding models by a significant margin.\nA common approach in many of the aforementioned methods consists of creating a simple vector representation by using the final hidden state of the RNN or the max (or average) pooling from either RNNs hidden states or convolved n-grams. We hypothesize that carrying the semantics along all time steps of a recurrent model is relatively hard and not necessary. We propose a self-attention mechanism for these sequential models to replace the max pooling or averaging step. Different from previous approaches, the proposed self-attention mechanism allows extracting different aspects of the sentence into multiple vector representations. It is performed on top of an LSTM in our sentence embedding model. This enables attention to be used in those cases when there are no extra inputs. In addition, due to its direct access to hidden representations from previous time steps, it relieves some long-term memorization burden from LSTM. As a side effect coming together with our proposed self-attentive sentence embedding, interpreting the extracted embedding becomes very easy and explicit.\nbe applied on a downstream application. Figure[1 shows an example when the proposed sentence. embedding model is applied to sentiment analysis, combined with a fully connected layer and a softmax layer. Besides using a fully connected layer, we also proposes an approach that prunes. weight connections by utilizing the 2-D structure of matrix sentence embedding, which is detailed. in Appendix|A] For this section, we will use Figure|1[to describe our model..\nSuppose we have a sentence. which has n tokens. resented in a se lence of word embeddings\nMinwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. Applying deep learn. ing to answer selection: a study and an open task. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2015, Scottsdale, AZ, USA, December 13-17, 2015, pp 813-820, 2015.\nNow each entry in the sequence S are independent with each other. To gain some dependency be tween adjacent words within a single sentence, we use a bidirectional LSTM to process the sentence:.\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. 
Neural computation, 9(8) 1735-1780, 1997.\nAnd we concatenate each ht with ht to obtain a hidden state ht. Let the hidden unit number for each unidirectional LSTM be u. For simplicity, we note all the n hs as H, who have the size n-by-2u\nNal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188. 2014\na = softmax (ws2tanh (Ws1HT)\nQuoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICMI volume 14, pp. 1188-1196, 2014\nHere Ws1 is a weight matrix with a shape of da-by-2u. and ws2 is a vector of parameters with size da, where da is a hyperparameter we can set arbitrarily. Since H is sized n-by-2u, the anno- tation vector a will have a size n. the softmax() ensures all the computed weights sum up to 1 Then we sum up the LSTM hidden states H according to the weight provided by a to get a vector representation m of the input sentence.\nJi Young Lee and Franck Dernoncourt. Sequential short-text classification with recurrent and con volutional neural networks. arXiv preprint arXiv:1603.03827. 2016\nThis vector representation usually focuses on a specific component of the sentence, like a special set of related words or phrases. So it is expected to reflect an aspect, or component of the semantics in a sentence. However, there can be multiple components in a sentence that together forms the overall semantics of the whole sentence, especially for long sentences. (For example, two clauses linked together by an\"'and.'') Thus, to represent the overall semantics of the sentence, we need multiple m's that focus on different parts of the sentence. Thus we need to perform multiple hops of attention Say we want r different parts to be extracted from the sentence, with regard to this, we extend the ws2 into a r-by-da matrix, note it as Ws2, and the resulting annotation vector a becomes annotation matrix A. Formally,\nHere the softmax() is performed along the second dimension of its input. We can deem Equation 6Jas a 2-layer MLP without bias, whose hidden unit numbers is da, and parameters are {Ws2, Ws1}.\nYang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. Learning natural language inference using. bidirectional 1stm model and inner-attention. arXiy preprint arXiv:1605.09090. 2016b\nThe embedding vector m then becomes an r-by-2u embedding matrix M. We compute the r weighted sums by multiplying the annotation matrix A and LSTM hidden states H, the resulting matrix is the sentence embedding:"}, {"section_index": "6", "section_name": "2.2 PENALIZATION TERM", "section_text": "Horia Margarit and Raghav Subramaniam. A batch-normalized recurrent network for sentimen classification. In Adyances in Neural Information Processing Systems. 2016\nThe embedding matrix M can suffer from redundancy problems if the attention mechanism always. provides similar summation weights for all the r hops. Thus we need a penalization term to encour- age the diversity of summation weight vectors across different hops of attention..\nS = (w1, W2,... Wn\nHere w; is a vector standing for a d dimentional word embedding for the i-th word in the sentence S is thus a sequence represented as a 2-D matrix, which concatenates all the word embeddings together. S should have the shape n-by-d.\nht = LSTM(wt,\nht = LSTM(wt,ht\nH = (h1, h2,... hn)\nOur aim is to encode a variable length sentence into a fixed size embedding. We achieve that by choosing a linear combination of the n LSTM hidden vectors in H. 
Computing the linear combination requires the self-attention mechanism. The attention mechanism takes the whole LSTM hidden states H as input, and outputs a vector of weights a:\nYang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. Learning natural language inference using bidirectional LSTM model and inner-attention. CoRR, abs/1605.09090, 2016a.\nA = softmax(W_s2 tanh(W_s1 H^T))\nMingbo Ma, Liang Huang, Bing Xiang, and Bowen Zhou. Dependency-based convolutional neural networks for sentence embedding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2, pp. 174-179, 2015.\nM = AH\nThe best way to evaluate the diversity would be the Kullback-Leibler divergence between any two of the summation weight vectors. However, we found that not to be very stable in our case. We conjecture it is because we are maximizing a set of KL divergences (instead of minimizing only one, which is the usual case): we are optimizing the annotation matrix A to have many sufficiently small or even zero values at different softmax output units, and this vast amount of zeros makes training unstable. There is another property that KL does not provide but we want, which is that each individual row should focus on a single aspect of semantics, i.e., we want the probability mass in the annotation softmax output to be more focused, but the KL penalty cannot encourage that.\nLili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. Discriminative neural sentence modeling by tree-based convolution. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 2315-2325, Lisbon, Portugal, September 2015b. Association for Computational Linguistics. URL http://aclweb.org/anthology/D15-1279\nWe hereby introduce a new penalization term which overcomes the aforementioned shortcomings. Compared to the KL divergence penalization, this term consumes only one third of the computation. We use the dot product of A and its transpose, subtracted by an identity matrix, as a measure of redundancy.\nTsendsuren Munkhdalai and Hong Yu. Neural tree indexers for text understanding. arXiv preprint arXiv:1607.04492, 2016a.\nHamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, and Rabab Ward. Deep sentence embedding using long short-term memory networks: Analysis and application to information retrieval. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(4):694-707, 2016.\nLet's consider two different summation vectors a^i and a^j in A. Because of the softmax, all entries within any summation vector in A sum up to 1. Thus they can be deemed as probability masses in a discrete probability distribution. Any non-diagonal element a_ij (i ≠ j) of the matrix AA^T corresponds to a summation over the elementwise product of two distributions:\nAnkur P. Parikh, Oscar Tackstrom, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In Proceedings of EMNLP, 2016.\n0 ≤ a_ij = Σ_{k=1}^{n} a_k^i a_k^j ≤ 1\nJeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-43, 2014.\nwhere a_k^i and a_k^j are the k-th elements of the a^i and a^j vectors, respectively. In the most extreme case, where there is no overlap between the two probability distributions a^i and a^j, the corresponding a_ij will be 0. Otherwise, it will have a positive value.
On the other extreme end, if the two distributions are identical and both concentrate on one single word, it will have a maximum value of 1. We subtract an identity matrix from A A^T, which forces the elements on the diagonal of A A^T to approximate 1. This encourages each summation vector a^i to focus on as few words as possible, forcing each vector to be focused on a single aspect, and forces all other elements to 0, which punishes redundancy between different summation vectors.

"}, {"section_index": "7", "section_name": "2.3 VISUALIZATION", "section_text": "

The interpretation of the sentence embedding is quite straightforward because of the existence of the annotation matrix A. For each row in the sentence embedding matrix M, we have its corresponding annotation vector a^i. Each element in this vector corresponds to how much the LSTM hidden state of the token at that position contributes to the embedding. We can thus draw a heat map for each row of the embedding matrix M. This way of visualization gives hints on what is encoded in each part of the embedding, adding an extra layer of interpretation. (See Figures 3a and 3b.)

Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. arXiv preprint arXiv:1511.06361, 2015.

Wenpeng Yin and Hinrich Schutze. Convolutional neural network for paraphrase identification. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 901-911, 2015.

Various supervised and unsupervised sentence embedding models have been mentioned in Section 1. Different from those models, our proposed method uses a new self-attention mechanism that allows it to extract different aspects of the sentence into multiple vector representations. The matrix structure together with the penalization term gives our model a greater capacity to disentangle the latent information from the input sentence. We also do not use linguistic structures to guide our sentence representation model. Additionally, using our method we can easily create visualizations that can help in the interpretation of the learned representations.

P = ||A A^T - I||_F^2

Here || · ||_F stands for the Frobenius norm of a matrix. Similar to adding an L2 regularization term, this penalization term P will be multiplied by a coefficient, and we minimize it together with the original loss, which is dependent on the downstream application.

The second way of visualization can be achieved by summing up over all the annotation vectors, and then normalizing the resulting weight vector to sum up to 1. Since it sums up all aspects of semantics of a sentence, it yields a general view of what the embedding mostly focuses on. We can figure out which words the embedding takes into account a lot, and which ones are skipped by the embedding. See Figures 3c and 3d.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, 2016. URL http://arxiv.org/abs/1605.02688.

"}, {"section_index": "8", "section_name": "APPENDIX", "section_text": "

Some recent works have also proposed supervised methods that use intra/self-sentence attention. Ling et al. (2015) proposed an attention based model for word embedding, which calculates an attention weight for each word at each possible position in the context window. However, this method cannot be extended to sentence level embeddings since one cannot exhaustively enumerate all possible sentences.
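To make the attention and penalization computations above concrete, the following is a minimal numpy sketch, not the authors' code: the random inputs and weight scales are illustrative placeholders, while the shapes follow the paper's notation (n tokens, 2u-dimensional biLSTM states, da attention units, r hops).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Shapes follow the paper: n tokens, 2u-dim biLSTM states, da attention units, r hops.
n, two_u, da, r = 40, 600, 350, 30
rng = np.random.default_rng(0)
H = rng.normal(size=(n, two_u))                 # stand-in for the biLSTM hidden states
W_s1 = rng.normal(scale=0.1, size=(da, two_u))
W_s2 = rng.normal(scale=0.1, size=(r, da))

# A = softmax(Ws2 tanh(Ws1 H^T)), softmax taken along the token dimension
A = softmax(W_s2 @ np.tanh(W_s1 @ H.T), axis=1)  # (r, n)
M = A @ H                                        # sentence embedding, (r, 2u)

# Penalization P = ||A A^T - I||_F^2, which discourages redundant hops
P = np.sum((A @ A.T - np.eye(r)) ** 2)
```

During training, P would be multiplied by a coefficient and added to the downstream loss, analogously to an L2 term.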
Liu et al. (2016a) propose a sentence level attention which has a similar motivation but is done differently. They utilize the mean pooling over LSTM states as the attention source, and use that to re-weight the pooled vector representation of the sentence.

"}, {"section_index": "9", "section_name": "4 PRUNED MLP FOR STRUCTURED MATRIX SENTENCE EMBEDDING", "section_text": "

As a side effect of having multiple vectors to represent a sentence, the matrix sentence embedding is usually several times larger than vector sentence embeddings. This results in needing more parameters in the subsequent fully connected layer, which connects every hidden unit to every unit in the matrix sentence embedding. Actually, in the example shown in Figure 1, this fully connected layer takes around 90% of the parameters. See Table 4. In this appendix we are going to introduce a weight pruning method which, by utilizing the 2-D structure of the matrix embedding, is able to drastically reduce the number of parameters in the fully connected hidden layer.

Apart from the previous 2 variants, we want to note that Li et al. (2016) proposed the same self-attention mechanism for question encoding in their factoid QA model, which is concurrent to our work. The difference lies in that their encoding is still presented as a vector, but our attention produces a matrix representation instead, with a specially designed penalty term. We applied the model to sentiment analysis and entailment, and their model is for factoid QA.

Inheriting the notation used in the main paper, let the matrix embedding M have a shape of r-by-u, and let the fully connected hidden layer have b units. The normal fully connected hidden layer will require each hidden unit to be connected to every unit in the matrix embedding, as shown in Figure 1. This ends up with r · u · b parameters in total.

The LSTMN model (Cheng et al., 2016) also proposed a very successful intra-sentence level attention mechanism, which is later used by Parikh et al. (2016). We see our attention and theirs as having different granularities. LSTMN produces an attention vector for each of its hidden states during the recurrent iteration, which is sort of an "online updating" attention. It is more fine-grained, targeting at discovering lexical correlations between a certain word and its previous words. On the contrary, our attention mechanism is only performed once, and focuses directly on the semantics that makes sense for discriminating the targets. It is less focused on relations between words, but more on the semantics of the whole sentence that each word contributes to. Computationally, our method also scales up better with the sentence length, since it doesn't require the LSTM to compute an annotation vector over all of its previous words each time the LSTMN computes its next step.

However, there are 2-D structures in the matrix embedding which we should make use of. Each row (mi in Figure 1) in the matrix is computed from a weighted sum of LSTM hidden states, which means the rows share some similarities.

To reflect this similarity in the fully connected layer, we split the hidden states into r equally sized groups, with each group having p units. The i-th group is only fully connected to the i-th row in the matrix representation. All connections that connect the i-th group of hidden units to other rows of the matrix are pruned away. In this way, similarity between different rows of the matrix embedding is reflected as symmetry of the connection pattern in the hidden layer.
As a result, the hidden layer can be interpreted as also having a 2-D structure, with the number (r) and size (p) of groups as its two dimensions (the Mv in Figure 6). When the total number of hidden units is the same (i.e.,

"}, {"section_index": "10", "section_name": "EXPERIMENTAL RESULTS", "section_text": "

We first evaluate our sentence embedding model by applying it to 3 different datasets: the Age dataset, the Yelp dataset, and the Stanford Natural Language Inference (SNLI) Corpus. These 3 datasets fall into 3 different tasks, corresponding to author profiling, sentiment analysis, and textual entailment, respectively. Then we also perform a set of exploratory experiments to validate properties of various aspects of our sentence embedding model.

"}, {"section_index": "11", "section_name": "4.1 AUTHOR PROFILING", "section_text": "

We compare our model with two baseline models: biLSTM and CNN. For the two baseline models: the biLSTM model uses a bidirectional LSTM with 300 dimensions in each direction, uses max pooling across all LSTM hidden states to get the sentence embedding vector, and then uses a 2-layer ReLU output MLP with 3000 hidden units to output the classification result. The CNN model uses the same scheme, but substitutes the biLSTM with 1 layer of 1-D convolutional network. During training we use 0.5 dropout on the MLP and 0.0001 L2 regularization. We use stochastic gradient descent as the optimizer, with a learning rate of 0.06 and batch size 16. For the biLSTM, we also clip the gradients to be between -0.5 and 0.5. We searched hyperparameters in a wide range and found that the aforementioned set of hyperparameters yields the highest accuracy.

Figure 6: Hidden layer with pruned weight connections. M is the matrix sentence embedding, Mv and Mh are the structured hidden representations computed by pruned weights.

Table 4: Model Size Comparison Before and After Pruning

For our model, we use the same settings as what we did in biLSTM. We also use a 2-layer ReLU output MLP, but with 2000 hidden units. In addition, our self-attention MLP has a hidden layer with 350 units (the da in Section 2), we choose the matrix embedding to have 30 rows (the r), and a coefficient of 1 for the penalization term.

http://pan.webis.de/clef16/pan16-web/author-profiling.html

The Author Profiling dataset consists of Twitter tweets in English, Spanish, and Dutch. For some of the tweets, it also provides the age and gender of the user when writing the tweet. The age ranges are split into 5 classes: 18-24, 25-34, 35-49, 50-64, 65+. We use English tweets as input, and use those tweets to predict the age range of the user. Since we are predicting the age of users, we refer to it as the Age dataset in the rest of our paper. We randomly selected 68485 tweets as the training set, 4000 for the development set, and 4000 for the test set. Performance is measured by classification accuracy.

Model | Hidden layer | Softmax | Other Parts | Total | Accuracy
Yelp, Original, b=3000 | 54M | 15K | 1.3M | 55.3M | 64.21%
Yelp, Pruned, p=150, q=10 | 2.7M | 52.5K | 1.3M | 4.1M | 63.86%
Age, Original, b=4000 | 72M | 20K | 1.3M | 73.2M | 80.45%
Age, Pruned, p=25, q=20 | 822K | 63.75K | 1.3M | 2.1M | 77.32%
SNLI, Original, b=4000 | 72M | 12K | 22.9M | 95.0M | 84.43%
SNLI, Pruned, p=300, q=10 | 5.6M | 45K | 22.9M | 28.6M | 83.16%

Table 1: Performance Comparison of Different Models on the Yelp and Age Datasets

We train all three models until convergence and select the corresponding test set performance according to the best development set performance.
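As a rough sketch of the optimization settings just listed, reading the clipping as elementwise value clipping into [-0.5, 0.5] (an assumption; the text's phrasing is ambiguous), a single SGD step might look like the following. The function and variable names are hypothetical, not the authors' code.

```python
import numpy as np

LEARNING_RATE = 0.06   # stated SGD learning rate
CLIP = 0.5             # biLSTM gradients clipped into [-0.5, 0.5]

def sgd_step(params, grads):
    """One plain SGD update with elementwise gradient clipping, applied in place."""
    for p, g in zip(params, grads):
        np.clip(g, -CLIP, CLIP, out=g)
        p -= LEARNING_RATE * g
```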
Our results show that the model outperforms both the biLSTM and CNN baselines by a significant margin.

Table 4 takes the model we use for the Yelp dataset as a concrete example, and compares the number of parameters in each part of the model, both before and after pruning. We can see that the above pruning method drastically reduces the model size. Note that the p and q in this structure can be adjusted freely as hyperparameters. Also, we can continue the corresponding pruning process on top of Mv and Mh over and over again, and end up with a stack of structured hidden layers, just like stacking fully connected layers.

The subsequent softmax layer will be fully connected to both Mv and Mh, i.e., each unit in the softmax layer is connected to all units in Mv and Mh. This is not a problem since the speed of the softmax is largely dependent on the number of softmax units, which is not changed. In addition, for applications like sentiment analysis and textual entailment, the softmax layer is so small that it contains only a few units.

Experimental results on the three datasets show that this pruning mechanism lowers performance a bit, but still allows all three models to perform comparably to or better than the other models compared in the paper.

"}, {"section_index": "12", "section_name": "B DETAILED STRUCTURE OF THE MODEL FOR SNLI DATASET", "section_text": "

In Section 2 we tested our matrix sentence embedding model on the textual entailment task on the SNLI dataset. Different from the former two tasks, the textual entailment task takes a pair of sentences as input. We propose to use a set of multiplicative interactions to combine the two.

Figure 2: Heatmap of Yelp reviews with the two extreme scores.

"}, {"section_index": "13", "section_name": "4.2 SENTIMENT ANALYSIS", "section_text": "

We choose the Yelp dataset for the sentiment analysis task. It consists of 2.7M Yelp reviews; we take the review as input and predict the number of stars the user who wrote that review assigned to the corresponding business store. We randomly select 500K review-star pairs as the training set, and 2000

Figure 7: Model structure used for the textual entailment task.

https://www.yelp.com/dataset_challenge

On the other dimension, another form of similarity exists too. For each vector representation mi in M, its j-th element is a weighted sum of a single LSTM hidden unit over different time steps. And the j-th elements of all vector representations are summed up from the same LSTM hidden unit. We can also reflect this similarity in the symmetry of weight connections by using the same pruning method as above. Thus we have another 2-D structured hidden state sized u-by-q, denoted Mh in Figure 6; a sketch of both pruned projections appears below.

for the development set, 2000 for the test set. We tokenize the review texts by Stanford tokenizer. We use 100 dimensional word2vec as initialization for word embeddings, and tune the embeddings during training across all of our experiments. The target number of stars is an integer number in the range of [1, 5], inclusive. We are treating the task as a classification task, i.e., classify a review text into one of the 5 classes. We use classification accuracy as a measurement.

For the two baseline models, we use the same setting as what we used for the Author Profiling dataset, except that we are using a batch size of 32 instead. For our model, we are also using the same setting, except that we choose the hidden unit numbers in the output MLP to be 3000 instead.
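The block-structured pruning described above can be sketched in numpy as follows; the group sizes and weight scales are taken as illustrative values, the ReLU nonlinearity is an assumption, and the shapes (r rows, u columns, groups of p and q units) follow the appendix notation.

```python
import numpy as np

r, u, p, q = 30, 600, 150, 10              # embedding rows/cols and group sizes (Yelp setting)
rng = np.random.default_rng(0)
M = rng.normal(size=(r, u))                # stand-in matrix sentence embedding

# Row-wise pruning: the i-th group of p hidden units sees only row i of M.
W_row = rng.normal(scale=0.1, size=(r, u, p))
Mv = np.maximum(0.0, np.einsum('rc,rcp->rp', M, W_row))   # (r, p)

# Column-wise pruning: the j-th group of q hidden units sees only column j of M.
W_col = rng.normal(scale=0.1, size=(u, r, q))
Mh = np.maximum(0.0, np.einsum('rc,crq->cq', M, W_col))   # (u, q)

# The row part uses r*u*p weights; a dense hidden layer with b = r*p units
# would need r*u*(r*p) weights, i.e. a factor of r more.
```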
We also observe a significant performance gain compared to the two baselines (Table 1).

Comparing the two matrix embeddings corresponds to the green dashed rectangle part in the figure, which computes a single matrix embedding (Fr) as the factor of semantic relation between the two sentences. To represent the relation between Mh and Mp, Fr can be connected to Mp and Mh through a three-way multiplicative interaction. In a three-way multiplicative interaction, the value of any one of Fr, Mh and Mp is a function of the product of the others. This type of connection was originally introduced to extract relations between images (Memisevic, 2013). Since here we are just computing the factor of relation (Fr) from Mh and Mp, it corresponds to the encoder part of the Factored Gated Autoencoder in Memisevic (2013). We call it the Gated Encoder in Figure 7.

As an interpretation of the learned sentence embedding, we use the second way of visualization described in Section 2.3 to plot heat maps for some of the reviews in the dataset. We randomly select 5 examples of negative (1 star) and positive (5 stars) reviews from the test set, where the model has a high confidence (> 0.8) in predicting the label. As shown in Figure 2, we find that the model mostly learns to capture some key factors in the review that strongly indicate the sentiment behind the sentence. For most of the short reviews, the model manages to capture all the key factors that contribute to an extreme score, but for longer reviews, the model is still not able to capture all related factors. For example, in the 3rd review in Figure 2b, it seems that a lot of focus is spent on one single factor, i.e., the "so much fun", and the model puts only a small amount of attention on other key points like "highly recommend", "amazing food", etc.

First we multiply each row in the matrix embedding by a different weight matrix. Repeating this over all rows corresponds to a batched dot product between a 2-D matrix and a 3-D weight tensor. Inheriting the name in (Memisevic, 2013), we call the resulting matrix the factor. Doing the batched dot for both the hypothesis embedding and the premise embedding, we have Fh and Fp, respectively:

Fh = batcheddot(Mh, Wfh)
Fp = batcheddot(Mp, Wfp)

"}, {"section_index": "14", "section_name": "4.3 TEXTUAL ENTAILMENT", "section_text": "

We use the biggest dataset in textual entailment, the SNLI corpus (Bowman et al., 2015), for our evaluation on this task. SNLI is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral. The model will be given a pair of sentences, called hypothesis and premise respectively, and asked to tell if the semantics in the two sentences are contradicting each other or not. It is also a classification task, so we measure the performance by accuracy.

Here Wfh and Wfp are the two weight tensors for the hypothesis embedding and the premise embedding.

The factor of the relation (Fr) is just an element-wise product of Fh and Fp (the triangle in the middle of Figure 7):

Fr = Fh ∘ Fp

We process the hypothesis and premise independently, and then extract the relation between the two sentence embeddings by using the multiplicative interactions proposed in Memisevic (2013) (see Appendix B for details), and use a 2-layer ReLU output MLP with 4000 hidden units to map the hidden representation into classification results. Parameters of the biLSTM and attention MLP are shared across hypothesis and premise; a sketch of the gated encoder follows below.
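Here is a minimal numpy sketch of the gated encoder just described: a batched dot of each embedding row with its own weight matrix, followed by an element-wise product. The factor dimensionality k and the random weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

r, two_u, k = 30, 600, 400                 # rows, 2u state size; k (factor size) is assumed
rng = np.random.default_rng(0)
Mh = rng.normal(size=(r, two_u))           # hypothesis matrix embedding (stand-in)
Mp = rng.normal(size=(r, two_u))           # premise matrix embedding (stand-in)
W_fh = rng.normal(scale=0.1, size=(r, two_u, k))
W_fp = rng.normal(scale=0.1, size=(r, two_u, k))

# Batched dot: row i of each embedding is multiplied by its own weight matrix W[i].
Fh = np.einsum('rd,rdk->rk', Mh, W_fh)
Fp = np.einsum('rd,rdk->rk', Mp, W_fp)

Fr = Fh * Fp                               # element-wise product: the factor of relation
features = Fr.reshape(-1)                  # flattened input for the softmax output MLP
```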
The biLSTM is 300-dimensional in each direction, the attention MLP has 150 hidden units instead, and both sentence embeddings for hypothesis and premise have 30 rows (the r). The penalization term coefficient is set to 0.3. We use 300-dimensional GloVe (Pennington et al., 2014) word embeddings to initialize word embeddings. We use AdaGrad as the optimizer, with a learning rate of 0.01. We don't use any extra regularization methods, like dropout or L2 regularization. Training converges after 4 epochs, which is relatively fast.

Here ∘ stands for element-wise product. After the Fr layer, we then use an MLP with softmax output to classify the relation into different categories.

Table 2: Test Set Performance Compared to Other Sentence Encoding Based Methods on the SNLI Dataset

The overall structure of our model for SNLI is depicted in Figure 7. For both hypothesis and premise, we extract their embeddings (Mh and Mp in the figure) independently, with the same LSTM and attention mechanism. The parameters of this part of the model are shared (rectangles with dashed orange lines in the figure):

Fh = batcheddot(Mh, Wfh)
Fp = batcheddot(Mp, Wfp)

This task is a bit different from the previous two tasks, in that it has 2 sentences as input. There are a bunch of ways to add inter-sentence level attention, and those attentions bring a lot of benefits. To make the comparison focused and fair, we only compare with methods that fall into the sentence encoding-based models, i.e., where there is no information exchanged between the hypothesis and premise before they are encoded into some distributed encoding.

We find that compared to other published approaches, our method shows a significant gain (around 1%) over them, except for the 300D NSE encoders, which is the state-of-the-art in this category. However, the 0.2% difference is relatively small compared to the differences between other methods.

In this subsection we are going to do a set of exploratory experiments to study the relative effect of each component in our model.

"}, {"section_index": "15", "section_name": "4.4.1 EFFECT OF PENALIZATION TERM", "section_text": "

[Figure 3 panels: per-hop attention heatmaps over the example tweet "it's an interesting phenomena. Not sure what the spammers get from it. If you comment on Fastco you will get a lot of mail-replies spam."]

[Figure 4 panels: attention heatmaps over example Yelp review texts.]
Figure 4: Attention of sentence embedding on 3 different Yelp reviews. The left one is trained without penalization, and the right one is trained with 1.0 penalization.

Since the purpose of introducing the penalization term P is mainly to discourage redundancy in the embedding, we first directly visualize the heat maps of each row when the model is presented with a sentence. We compare two identical models with the same size as detailed in Section 4.1, trained separately on the Age dataset, one with the penalization term (where the penalization coefficient is set to 1.0) and the other with no penalty. We randomly select one tweet from the test set and compare the two models by plotting a heat map for each hop of attention on that single tweet. Since there are 30 hops of attention for each model, which makes plotting all of them quite redundant, we only plot 6 of them. These 6 hops already reflect the situation in all of the 30 hops.

Figure 3: Heat maps for 2 models trained on the Age dataset. The left column is trained without the penalization term, and the right column is trained with 1.0 penalization. (a) and (b) show detailed attentions taken by 6 out of 30 rows of the matrix embedding, while (c) and (d) show the overall attention by summing up all 30 attention weight vectors.
B1ckMDqlg
[{"section_index": "0", "section_name": "OUTRAGEOUSLY LARGE NEURAL NETWORKS: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER", "section_text": "model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing (Kneser & Ney, 1995).4\nResults: Tables 2, 3, and 4 show the results of our largest models, compared with published. results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En->Fr and En->De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in (Wu et al., 2016). The perplexity. scores are also better.? On the Google Production dataset, our model achieved 1.01 higher test BLEU. score even after training for only one sixth of the time..\nNoam Shazeer', Azalia Mirhoseini*l, Krzysztof Maziarz*2, Andy Davis', Quoc Le', Geoffre Hinton' and Jeff Dean'"}, {"section_index": "1", "section_name": "E MACHINE TRANSLATION - EXPERIMENTAL DETAILS", "section_text": "Dataset: (Johnson et al., 2016) train a single GNMT (Wu et al., 2016) model on a very large com bined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this ex- periment with a single MoE-augmented model. See Appendix E for details on model architecture We train our model on the same dataset as (Johnson et al., 2016) and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.\nModel Architecture for Single Language Pair MoE Models: Our model is a modified versior of the GNMT model described in (Wu et al., 2016). To reduce computation, we decrease the numbe of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoI ayers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We us. an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention . All of the layers in our model have input anc output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensiona output projection. We add residual connections around all LSTM and MoE layers to encourag gradient flow (He et al., 2015). Similar to GNMT, to effectively deal with rare words, we used sub word units (also known as \"wordpieces\") (Schuster & Nakajima, 2012) for inputs and outputs in ou System.\nThe capacity of a neural network to absorb information is limited by its number ol parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increas ing model capacity without a proportional increase in computation. In practice however, there are significant algorithmic and performance challenges. 
In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.

Results: Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table 5. The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English → Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.

We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in (Wu et al., 2016).

We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use k = 4 and the hierarchical MoE models use k = 2 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed-forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains [512 * 2048] + [2048 * 512] = 2M parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix F.

Table 5: Multilingual Machine Translation (bold values represent best results).

"}, {"section_index": "3", "section_name": "1.1 CONDITIONAL COMPUTATION", "section_text": "

Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text (Sutskever et al., 2014; Bahdanau et al., 2014; Jozefowicz et al., 2016; Wu et al., 2016), images (Krizhevsky et al., 2012; Le et al., 2012), and audio (Hinton et al., 2012; Amodei et al., 2015). For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.

Training: We trained our networks using the Adam optimizer (Kingma & Ba, 2015).
The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to (Wu et al., 2016), we applied dropout (Zaremba et al., 2014) to the output of all embedding, LSTM and MoE layers, using DropProb = 0.4. Training was done synchronously on a cluster of up to 64 GPUs as described in Section 3. Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.

To ensure balanced expert utilization we set wimportance = 0.01 and wload = 0.01, as described in Section 4 and Appendix A.

Metrics: We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in (Luong et al., 2015a).

"}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "

We would like to thank all of the members of the Google Brain and Google Translate teams who helped us with this project, in particular Zhifeng Chen, Yonghui Wu, and Melvin Johnson. Thanks also to our anonymous ICLR reviewers for the helpful suggestions on making this paper better.

5For performance reasons, we use a slightly different attention function from the one described in (Wu et al., 2016) - see Appendix G.

2Reported perplexities relative to the tokenization used by both our models and GNMT.

1Google Brain, {noam,azalia,andydavis,qvl,geoffhinton,jeff}@google.com 2Jagiellonian University, Cracow, krzysztof.maziarz@student.uj.edu.pl

 | GNMT-Mono | GNMT-Multi | MoE-Multi | MoE-Multi vs. GNMT-Multi
Parameters | 278M / model | 278M | 8.7B |
ops/timestep | 212M | 212M | 102M |
training time, hardware | various | 21 days, 96 k20s | 12 days, 64 k40s |
Perplexity (dev) | | 4.14 | 3.35 | -19%
French -> English Test BLEU | 36.47 | 34.40 | 37.46 | +3.06
German -> English Test BLEU | 31.77 | 31.17 | 34.80 | +3.63
Japanese -> English Test BLEU | 23.41 | 21.62 | 25.91 | +4.29
Korean -> English Test BLEU | 25.42 | 22.87 | 28.71 | +5.84
Portuguese -> English Test BLEU | 44.40 | 42.53 | 46.13 | +3.60
Spanish -> English Test BLEU | 38.00 | 36.04 | 39.39 | +3.35
English -> French Test BLEU | 35.37 | 34.00 | 36.59 | +2.59
English -> German Test BLEU | 26.43 | 23.15 | 24.53 | +1.38
English -> Japanese Test BLEU | 23.66 | 21.10 | 22.78 | +1.68
English -> Korean Test BLEU | 19.75 | 18.41 | 16.62 | -1.79
English -> Portuguese Test BLEU | 38.40 | 37.35 | 37.90 | +0.55
English -> Spanish Test BLEU | 34.50 | 34.25 | 36.21 | +1.96

Model Architecture for Multilingual MoE Model: We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section 2.1, not the scheme from Appendix F. The MoE layers in the encoder and decoder are non-hierarchical MoEs with n = 512 experts, and k = 2. Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.

"}, {"section_index": "5", "section_name": "6 CONCLUSION", "section_text": "

This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions.
While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.

Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs (Davis & Arel, 2013; Bengio et al., 2013; Eigen et al., 2013; Ludovic Denoyer, 2014; Cho & Bengio, 2014; Bengio et al., 2015; Almahairi et al., 2015). In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for training the gating decisions.

4While the original size of the corpus was 130 billion words, the neural models were trained for a maximum of 100 billion words. The reported Kneser-Ney 5-gram models were trained over 13 billion and 130 billion words respectively, giving them a slight advantage over the other reported results.

"}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "

Results: Tables 2, 3 and 4 in Section 5.3 show comparisons of our results to other published methods. Figure 4 shows test perplexity as a function of the number of words in the training data's source sentences processed for models with different numbers of experts. As can be seen from the figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.

Figure 1: A Mixture of Experts (MoE) layer embedded within a recurrent language model. In this case, the sparse gating function selects two experts to perform computations. Their outputs are modulated by the outputs of the gating network.

Figure 4: Perplexity on WMT'14 En→Fr (left) and Google Production En→Fr (right) datasets as a function of number of words processed. The large differences between models at the beginning of training are due to different batch sizes. All models incur the same computational budget (85M ops/timestep) except the one with no experts.

While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:

We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table 9. For example, one expert is used when the indefinite article "a" introduces the direct object in a verb phrase indicating importance or leadership.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models.
arXiv preprint arXiv:1511.06297, 2015.

Yoshua Bengio, Nicholas Leonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Expert 381 | Expert 752 | Expert 2004
... with researchers , ... | ... plays a core ... | ... with rapidly growing ...
... to innovation . | ... plays a critical ... | ... under static conditions ...
... tics researchers .. | ... provides a legislative ... | ... to swift ly ...
... the generation of ... | ... play a leading ... | ... to dras tically ...
... technology innovations is ... | ... assume a leadership ... | ... the rapid and ...
... technological innovations , ... | ... plays a central ... | ... the fast est ...
... support innovation throughout .. | ... taken a leading ... | ... the Quick Method ...
... role innovation will ... | ... established a reconciliation ... | ... rec urrent ) ...
.. research scienti st ... | ... played a vital ... | ... provides quick access ...
... promoting innovation where ... | ... have a central ... | ... of volatile organic ...

Ronan Collobert, Samy Bengio, and Yoshua Bengio. A parallel mixture of SVMs for very large scale problems. Neural Computing, 2002.

Marc Peter Deisenroth and Jun Wei Ng. Distributed Gaussian processes. In ICML, 2015.

In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets.

Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization, 2010.

Recall that we define the softmax gating function to be

"}, {"section_index": "7", "section_name": "1.2 OUR APPROACH: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER", "section_text": "

Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure 1). All parts of the network are trained jointly by back-propagation.

Sparse Gating (alternate formulation): To obtain a sparse gating vector, we multiply Gσ(x) component-wise with a sparse mask M(Gσ(x)) and normalize the output. The mask itself is a function of Gσ(x) and specifies which experts are assigned to each input example:

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B.
Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467.

- Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.
- Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.
- Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.
- Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. Bengio et al. (2015) use three such terms. These issues can affect both model quality and load-balancing.
- Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions, of parameters.

Table 9: Contexts corresponding to a few of the 2048 experts in the MoE layer in the encoder portion of the WMT'14 En→Fr translation model. For each expert i, we sort the inputs in a training batch in decreasing order of G(x)i, and show the words surrounding the corresponding positions in the input sentences.

Andrew Davis and Itamar Arel. Low-rank approximations for conditional feedforward computation in deep neural networks. arXiv preprint arXiv:1312.4461, 2013.

Gσ(x) = Softmax(x · Wg)

While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers (Hochreiter & Schmidhuber, 1997), as in Figure 1. The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix E Table 9). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost.

G(x)i = Gσ(x)i M(Gσ(x))i / Σ_{j=1}^{n} Gσ(x)j M(Gσ(x))j

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, 2015.

TopK(v, k)i = 1 if vi is in the top k elements of v; 0 otherwise.

Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups.
IEEE Signal Processing Magazine, 2012.

Since its introduction more than two decades ago (Jacobs et al., 1991; Jordan & Jacobs, 1994), the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs (Collobert et al., 2002), Gaussian Processes (Tresp, 2001; Theis & Bethge, 2015; Deisenroth & Ng, 2015), Dirichlet Processes (Shahbaba & Neal, 2009), and deep networks. Other work has focused on different expert configurations such as a hierarchical structure (Yao et al., 2009), infinite numbers of experts (Rasmussen & Ghahramani, 2002), and adding experts sequentially (Aljundi et al., 2016). Garmash & Monz (2016) suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.

Batchwise Mask: To force each expert to receive the exact same number of examples, we introduce an alternative mask function, Mbatchwise(X, m), which operates over batches of input vectors. Instead of keeping the top k values per example, we keep the top m values per expert across the training batch, where m = k|X|/n, so that each example is sent to an average of k experts.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Mbatchwise(X, m)j,i = 1 if Xj,i is in the top m values for expert i; 0 otherwise.

The works above concern top-level mixtures of experts: the mixture of experts is the whole model. Eigen et al. (2013) introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems, each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Viegas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558, 2016. URL http://arxiv.org/abs/1611.04558.

Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computing, 1994.

Mthreshold(x, T)i = 1 if xi > Ti; 0 otherwise.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling, 1995.

Lbatchwise(X, T, m) = Σ_{j=1}^{|X|} Σ_{i=1}^{n} (Mthreshold(x, T)i - Mbatchwise(X, m)j,i) (Xj,i - Ti)

The Mixture-of-Experts (MoE) layer consists of a set of n "expert networks" E1, ..., En, and a "gating network" G whose output is a sparse n-dimensional vector. Figure 1 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS,
2012.

Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeffrey Dean, and Andrew Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.

Let us denote by G(x) and Ei(x) the output of the gating network and the output of the i-th expert network for a given input x. The output y of the MoE module can be written as follows:

y = Σ_{i=1}^{n} G(x)i Ei(x)

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. EMNLP, 2015a.

We save computation based on the sparsity of the output of G(x). Wherever G(x)i = 0, we need not compute Ei(x). In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix B.

Where U and W are trainable weight matrices and V is a trainable weight vector.

For performance reasons, in our models, we used a slightly different attention function:

Hasim Sak, Andrew W. Senior, and Francoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338-342, 2014.

Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in (Cho & Bengio, 2014). A MoE whose experts have one hidden layer is similar to the block-wise dropout described in (Bengio et al., 2015), where the dropped-out layer is sandwiched between fully-activated layers.

Mike Schuster and Kaisuke Nakajima. Japanese and Korean voice search. ICASSP, 2012.

With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions.

Babak Shahbaba and Radford Neal. Nonlinear models using dirichlet process mixtures. JMLR, 2009.

Top-K Mask: To implement top-k gating in this formulation, we would let M(v) = TopK(v, k), where:

As our experiments suggest and as also observed in (Ioffe & Szegedy, 2015), using a batchwise function during training (such as Mbatchwise) requires modifications to inference when we may not have a large batch of examples. Our solution to this is to train a vector T of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time:

Our work builds on this use of MoEs as a general purpose neural network component. While Eigen et al. (2013) uses two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling.
arXiv preprint arXiv:1602.02410, 2016.

To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical.

The attention mechanism described in GNMT (Wu et al., 2016) involves a learned "Attention Function" A(xi, yj) which takes a "source vector" xi and a "target vector" yj, and must be computed for every source time step i and target time step j. In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size n. It can be expressed as:

A_GNMT(xi, yj) = Σ_{d=1}^{n} Vd tanh((xi U)d + (yj W)d)

Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. ACL, 2015b.

A(xi, yj) = Σ_{d=1}^{n} Vd tanh((xi U)d) tanh((yj W)d)

Softmax Gating: A simple choice of non-sparse gating function (Jordan & Jacobs, 1994) is to multiply the input by a trainable weight matrix Wg and then apply the Softmax function:

Volker Tresp. Mixtures of Gaussian Processes. In NIPS, 2001.

Gσ(x) = Softmax(x · Wg)

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Noisy Top-K Gating: We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to -∞ (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of the gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix A. The amount of noise per component is controlled by a second trainable weight matrix Wnoise.

Bangpeng Yao, Dirk Walther, Diane Beck, and Li Fei-fei. Hierarchical mixture of classification experts uncovers interactions between brain regions. In NIPS, 2009.

H(x)i = (x · Wg)i + StandardNormal() · Softplus((x · Wnoise)i)

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. arXiv preprint arXiv:1606.04199, 2016.

KeepTopK(v, k)i = vi if vi is in the top k elements of v; -∞ otherwise.

Training the Gating Network: We train the gating network by simple back-propagation, along with the rest of the model. If we choose k > 1, the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in (Bengio et al., 2013) with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from (Bengio et al., 2015) who use boolean gates and a REINFORCE-style approach to train the gating network.

On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates.
If the gating network chooses k out of n experts for each example, then for a batch of b examples, each expert receives a much smaller batch of approximately kb/n ≪ b examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:

Mixing Data Parallelism and Model Parallelism: In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over d devices, and each device processes a batch of size b, each expert receives a batch of approximately kbd/n examples. Thus, we achieve a factor of d improvement in expert batch size.

In the case of a hierarchical MoE (Section B), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.

Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In NIPS, 2015.

G(x) = Softmax(KeepTopK(H(x), k))

This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.

"}, {"section_index": "8", "section_name": "A LOAD-BALANCING LOSS", "section_text": "

As discussed in section 4, for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it cannot be used in back-propagation. Instead, we define a smooth estimator Load(X) of the number of examples assigned to each expert for a batch X of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define P(x, i) as the probability that G(x)i is nonzero, given a new random choice of noise on element i, but keeping the already-sampled choices of noise on the other elements. To compute P(x, i), we note that the
"}, {"section_index": "9", "section_name": "3.2 NETWORK BANDWIDTH", "section_text": "
Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes input_size × hidden_size and hidden_size × output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.

"}, {"section_index": "8", "section_name": "A LOAD-BALANCING LOSS", "section_text": "
As discussed in section 4, for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator Load(X) of the number of examples assigned to each expert for a batch X of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define P(x, i) as the probability that G(x)_i is nonzero, given a new random choice of noise on element i, but keeping the already-sampled choices of noise on the other elements. To compute P(x, i), we note that G(x)_i is nonzero if and only if H(x)_i is greater than the kth-greatest element of H(x) excluding itself. The probability works out to be:

P(x, i) = Pr((x · W_g)_i + StandardNormal() · Softplus((x · W_noise)_i) > kth_excluding(H(x), k, i))

where kth_excluding(v, k, i) means the kth highest component of v, excluding component i. Simplifying, we get:

P(x, i) = Φ( ((x · W_g)_i − kth_excluding(H(x), k, i)) / Softplus((x · W_noise)_i) )

where Φ is the CDF of the standard normal distribution.

Load(X)_i = Σ_{x∈X} P(x, i)

We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor w_load:

L_load(X) = w_load · CV(Load(X))²
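The derivation above translates directly into code. Below is a hedged NumPy/SciPy sketch of the smooth load estimator; the variable names are assumptions, and the O(n²) kth_excluding loop is written for clarity rather than speed.

```python
# Sketch of the smooth load estimator Load(X), following the formulas above.
import numpy as np
from scipy.stats import norm          # norm.cdf is Phi, the standard normal CDF

def softplus(x):
    return np.log1p(np.exp(x))

def kth_excluding(h, k):
    """For each row and each component i: k-th largest entry of h excluding i."""
    b, n = h.shape
    out = np.empty_like(h)
    for i in range(n):
        rest = np.delete(h, i, axis=1)
        out[:, i] = np.sort(rest, axis=1)[:, -k]
    return out

def smooth_load(x, h, w_gate, w_noise, k):
    """P(x, i) summed over the batch; h is the already-sampled noisy logits H(x)."""
    clean = x @ w_gate
    noise_std = softplus(x @ w_noise)
    p = norm.cdf((clean - kth_excluding(h, k)) / noise_std)   # P(x, i)
    return p.sum(axis=0)               # Load(X)_i = sum over x in X of P(x, i)
```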
"}, {"section_index": "10", "section_name": "BALANCING EXPERT UTILIZATION", "section_text": "
We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. Eigen et al. (2013) describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. Bengio et al. (2015) include a soft constraint on the batch-wise average of each gate.¹

We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss L_importance, which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor w_importance. This additional loss encourages all experts to have equal importance:

Importance(X) = Σ_{x∈X} G(x)

L_importance(X) = w_importance · CV(Importance(X))²

¹ Bengio et al. (2015) also include two additional losses. One controls per-example sparsity, which we do not need since it is enforced by the fixed value of k. A third loss encourages diversity of gate values. In our experiments, we find that the gate values naturally diversify as the experts specialize (in a virtuous cycle), and we do not need to enforce diversity of gate values.

While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function L_load, which ensures balanced loads. Appendix A contains the definition of this function, along with experimental results.

Initial Load Imbalance: To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices W_g and W_noise to all zeros, which yields no signal and some noise.

Experiments: We trained a set of models with identical architecture (the MoE-256 model described in Appendix C), using different values of w_importance and w_load. We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in Importance and Load, as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.

Results: Results are reported in Table 6. All the combinations containing at least one of the two losses led to very similar model quality, where having no loss was much worse. Models with higher values of w_load had lower loads on the most overloaded expert.

Table 6: Experiments with different combinations of losses.

w_importance  w_load  Test Perplexity  CV(Importance(X))  CV(Load(X))  max(Load(X))/mean(Load(X))
0.0           0.0     39.8             3.04               3.01         17.80
0.2           0.0     35.6             0.06               0.17         1.47
0.0           0.2     35.7             0.22               0.04         1.15
0.1           0.1     35.6             0.06               0.05         1.14
0.01          0.01    35.7             0.48               0.11         1.37
1.0           1.0     35.7             0.03               0.02         1.07
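For completeness, here is a small sketch of the two CV-based penalties that Table 6 varies, assuming the gate matrix and the smooth load vector have already been computed; the default scaling factors mirror the w_importance = w_load = 0.1 setting quoted elsewhere in the paper, and the epsilon is an assumption for numerical safety.

```python
# Sketch of the CV^2 balancing penalties, following the definitions above.
import numpy as np

def cv_squared(v, eps=1e-10):
    """Squared coefficient of variation of a vector: var / mean^2."""
    return v.var() / (v.mean() ** 2 + eps)

def balancing_losses(gates, load, w_importance=0.1, w_load=0.1):
    importance = gates.sum(axis=0)      # batchwise sum of gate values per expert
    l_importance = w_importance * cv_squared(importance)
    l_load = w_load * cv_squared(load)  # load from the smooth estimator Load(X)
    return l_importance + l_load
```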
"}, {"section_index": "11", "section_name": "5.1 1 BILLION WORD LANGUAGE MODELING BENCHMARK", "section_text": "
Dataset: This dataset, introduced by (Chelba et al., 2013), consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.

Previous State-of-the-Art: The best previously published results (Jozefowicz et al., 2016) use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers (Hochreiter & Schmidhuber, 1997; Gers et al., 2000). The number of parameters in the LSTM layers of these models vary from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure 2-right.

MoE Models: Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure 1). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix C.

Low Computation, Varied Capacity: To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.

The results of these models are shown in Figure 2-left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.

Model Architecture: Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer (Hochreiter & Schmidhuber, 1997; Gers et al., 2000), a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout (Zaremba et al., 2014) to the layer output, dropping each activation with probability DropProb, otherwise dividing by (1 − DropProb). After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow (He et al., 2015).

MoE Layer Architecture: Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains [512 * 1024] + [1024 * 512] = 1M parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section 2.1) with k = 4 for the ordinary MoE layers and k = 2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.

Figure 2: Model comparison on 1-Billion-Word Language-Modeling Benchmark. On the left, we plot test perplexity as a function of model capacity for models with similar computational budgets of approximately 8-million-ops-per-timestep. On the right, we plot test perplexity as a function of computational budget. The top line represents the LSTM models from (Jozefowicz et al., 2016). The bottom line represents 4-billion parameter MoE models with different computational budgets. (The plots themselves are not reproduced here; the legend distinguishes LSTM models, flat MoE models, and hierarchical MoE models, with model parameters excluding embedding and softmax on the x-axis.)

Varied Computation, High Capacity: In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix C.2. Results of these three models form the bottom line of Figure 2-right. Table 1 compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.

Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C.

Model                    Test Perplexity (10 epochs)  Test Perplexity (100 epochs)  #Params excl. embed. & softmax  ops/timestep   Training Time (10 epochs)  TFLOPS/GPU
Best Published Results   34.7                         30.6                          151 million                     151 million    59 hours, 32 k40s           1.09
Low-Budget MoE Model     34.1                         -                             4303 million                    8.9 million    15 hours, 16 k40s           0.74
Medium-Budget MoE Model  31.3                         -                             4313 million                    33.8 million   17 hours, 32 k40s           1.22
High-Budget MoE Model    28.0                         -                             4371 million                    142.7 million  47 hours, 32 k40s           1.56
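The parameter and op counts quoted in the MoE Layer Architecture paragraph above can be verified with a few lines of arithmetic; this is pure bookkeeping on the sizes stated in the text, not training code.

```python
# Sanity-checking the per-expert parameter count and the ~8M ops/timestep budget.
d_model, d_hidden = 512, 1024

params_per_expert = d_model * d_hidden + d_hidden * d_model
print(params_per_expert)            # 1_048_576, i.e. ~1M parameters per expert

k_active = 4                        # experts touched per example (k=4, or 2x2 hierarchical)
moe_ops = k_active * params_per_expert
lstm_ops = 2 * 2_000_000            # two LSTM layers at ~2M multiply-adds each
print(moe_ops + lstm_ops)           # ~8.2M, matching the quoted ~8M ops/timestep
```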
³ We have not found the need for deeper hierarchies.

If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network.³ If the hierarchical MoE consists of a groups of b experts each, we denote the primary gating network by G_primary, the secondary gating networks by (G_1, G_2, ..., G_a), and the expert networks by (E_{0,0}, E_{0,1}, ..., E_{a,b}). The output of the MoE is given by:

y_H = Σ_{i=1}^{a} Σ_{j=1}^{b} G_primary(x)_i · G_i(x)_j · E_{i,j}(x)

Importance_H(X)_{i,j} = Σ_{x∈X} G_primary(x)_i · G_i(x)_j

Load_H(X)_{i,j} = (Load_primary(X)_i · Load_i(X^{(i)})_j) / |X^{(i)}|

Load_primary and Load_i denote the Load functions for the primary gating network and i-th secondary gating network respectively. X^{(i)} denotes the subset of X for which G_primary(x)_i > 0.

It would seem simpler to let Load_H(X)_{i,j} = Load_i(X^{(i)})_j, but this would not have a gradient with respect to the primary gating network, so we use the formulation above.

Computationally-Matched Baselines: The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:

MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.

MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.

4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.

LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions (Sak et al., 2014). The next timestep of the LSTM receives the projected output. This is identical to one of the models published in (Jozefowicz et al., 2016). We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.

Computational Efficiency: We trained our models using TensorFlow (Abadi et al., 2016) on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.

Training: The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section 3. Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in (Jozefowicz et al., 2016). For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1.
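The warmup-then-inverse-square-root schedule described in the Training paragraph above can be written as a one-liner; the base rate is an illustrative assumption, since the text does not state it.

```python
# Sketch of the learning-rate schedule: linear warmup for 1000 steps, then
# decay proportional to the inverse square root of the step number.
def learning_rate(step, base_lr=1e-3, warmup_steps=1000):
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (warmup_steps ** 0.5) / (step ** 0.5)
```

The two branches agree at step 1000, so the schedule is continuous.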
For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computational efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix C, Table 7.

To ensure balanced expert utilization we set w_importance = 0.1 and w_load = 0.1, as described in Section 4 and Appendix A.

Results: We evaluate our model using perplexity on the holdout dataset, used by (Chelba et al., 2013; Jozefowicz et al., 2016). We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table 7. For each model, we report the test perplexity, the computational budget, the parameter counts, the value of DropProb, and the computational efficiency.

Table 7: Model comparison on 1 Billion Word Language Modeling Benchmark. Models marked with * are from (Jozefowicz et al., 2016).

Model               Test PPL (10 epochs)  Test PPL (final)  ops/timestep (millions)  #Params excl. embed. & softmax (millions)  Total #Params (billions)  DropProb  TFLOPS/GPU (observed)
Kneser-Ney 5-gram*  67.6                  -                 0.00001                  -                                          1.8                       -         -
LSTM-512-512*       54.1                  -                 2.4                      2.4                                        0.8                       0.1       -
LSTM-1024-512*      48.2                  -                 4.7                      4.7                                        0.8                       0.1       -
LSTM-2048-512*      45.0                  43.7              9.4                      9.4                                        0.8                       0.1       0.61
LSTM-2048-512       44.7                  -                 9.4                      9.4                                        0.8                       0.1       1.21
4xLSTM-512          46.0                  -                 8.4                      8.4                                        0.8                       0.1       1.07
MoE-1-Wide          46.1                  -                 8.4                      8.4                                        0.8                       0.1       1.29
MoE-1-Deep          45.7                  -                 8.4                      8.4                                        0.8                       0.1       1.29
MoE-4               45.0                  -                 8.4                      8.4                                        0.8                       0.1       0.52
MoE-32              39.7                  -                 8.4                      37.8                                       0.9                       0.1       0.87
MoE-256             35.7                  -                 8.6                      272.9                                      1.1                       0.1       0.81
MoE-256-h           36.0                  -                 8.4                      272.9                                      1.1                       0.1       0.89
MoE-1024-h          34.6                  -                 8.5                      1079.0                                     1.9                       0.2       0.90
MoE-4096-h          34.1                  -                 8.9                      4303.4                                     5.1                       0.2       0.74
2xLSTM-8192-1024*   34.7                  30.6              151.0                    151.0                                      1.8                       0.25      1.09
MoE-34M             31.3                  -                 33.8                     4313.9                                     6.0                       0.3       1.22
MoE-143M            28.0                  -                 142.7                    4371.1                                     6.0                       0.4       1.56

Figure 3: Language modeling on a 100 billion word corpus. Models have similar computational budgets (8 million ops/timestep). (The plot itself is not reproduced here; it shows test perplexity against model parameters excluding embedding and softmax, with one curve after training on 10B words and one after training on 100B words.)
On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure 2-left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.

We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix D.

"}, {"section_index": "12", "section_name": "C.2 MORE EXPENSIVE MODELS", "section_text": "
We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer, are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 (Sak et al., 2014). MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best DropProb for each model, and trained each model for 10 epochs.

Results: Figure 3 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets.

Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU.

The two models achieved test perplexity of 31.3 and 28.0 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table 7. The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar.
Comparing after 10 epochs, our model has a lower test perplexity by 18%.

"}, {"section_index": "13", "section_name": "5.3 MACHINE TRANSLATION (SINGLE LANGUAGE PAIR)", "section_text": "
Model Architecture: Our model was a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix E.

Datasets: We benchmarked our method on the WMT'14 En->Fr and En->De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in (Wu et al., 2016): newstest2014 was used as the test set to compare against previous work (Luong et al., 2015a; Zhou et al., 2016; Wu et al., 2016), while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on Google's Production English to French data.

"}, {"section_index": "14", "section_name": "D 100 BILLION WORD GOOGLE NEWS CORPUS - EXPERIMENTAL DETAILS", "section_text": "
Model Architecture: The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.

Training: Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words.

We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:

The Adam optimizer (Kingma & Ba, 2015) keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set β₁ = 0. To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad (Duchi et al., 2010).
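The factored second-moment idea above can be sketched in a few lines. The text specifies only the row/column averaging and the outer-product reconstruction; the decay constant and the exact update bookkeeping below are assumptions for illustration.

```python
# Sketch of a factored second-moment estimator for a matrix of parameters.
import numpy as np

class FactoredSecondMoment:
    def __init__(self, shape, beta2=0.999, eps=1e-30):
        self.row = np.zeros(shape[0])   # row-wise averages of squared gradients
        self.col = np.zeros(shape[1])   # column-wise averages of squared gradients
        self.beta2, self.eps = beta2, eps

    def update(self, grad):
        sq = grad ** 2
        self.row = self.beta2 * self.row + (1 - self.beta2) * sq.mean(axis=1)
        self.col = self.beta2 * self.col + (1 - self.beta2) * sq.mean(axis=0)
        # outer product of the two vectors, divided by the mean of either one
        return np.outer(self.row, self.col) / (self.row.mean() + self.eps)
```

This stores O(rows + cols) statistics instead of O(rows × cols), which is the point of the optimization.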
Results: We evaluate our model using perplexity on a holdout dataset. Results are reported in Table 8. Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the computationally matched baseline.

Table 8: Model comparison on 100 Billion Word Google News Dataset.

Model              Test PPL (.1 epochs)  Test PPL (1 epoch)  ops/timestep (millions)  #Params excl. embed. & softmax (millions)  Total #Params (billions)  TFLOPS/GPU (observed)
Kneser-Ney 5-gram  67.1                  45.3                0.00001                  -                                          76.0                      -
4xLSTM-512         54.5                  47.0                8.4                      8.4                                        0.1                       1.23
MoE-32             48.5                  40.4                8.4                      37.8                                       0.1                       0.83
MoE-256-h          42.8                  35.3                8.4                      272.9                                      0.4                       1.11
MoE-1024-h         40.3                  32.7                8.5                      1079.0                                     1.2                       1.14
MoE-4096-h         38.9                  30.9                8.6                      4303.4                                     4.4                       1.07
MoE-16384-h        38.2                  29.7                8.8                      17201.0                                    17.3                      0.96
MoE-65536-h        38.2                  28.9                9.2                      68791.0                                    68.9                      0.72
MoE-131072-h       39.8                  29.2                9.7                      137577.6                                   137.7                     0.30

Table 2: Results on WMT'14 En->Fr newstest2014 (bold values represent best results).

Model                                        Test Perplexity  Test BLEU  ops/timestep  Total #Parameters  Training Time
MoE with 2048 Experts                        2.69             40.35      85M           8.7B               3 days/64 k40s
MoE with 2048 Experts (longer training)      2.63             40.56      85M           8.7B               6 days/64 k40s
GNMT (Wu et al., 2016)                       2.79             39.22      214M          278M               6 days/96 k80s
GNMT+RL (Wu et al., 2016)                    2.96             39.92      214M          278M               6 days/96 k80s
PBMT (Durrani et al., 2014)                  -                37.0       -             -                  -
LSTM (6-layer) (Luong et al., 2015b)         -                31.5       -             -                  -
LSTM (6-layer+PosUnk) (Luong et al., 2015b)  -                33.1       -             -                  -
DeepAtt (Zhou et al., 2016)                  -                37.7       -             -                  -
DeepAtt+PosUnk (Zhou et al., 2016)           -                39.2       -             -                  -

Table 3: Results on WMT'14 En->De newstest2014 (bold values represent best results).

Model                        Test Perplexity  Test BLEU  ops/timestep  Total #Parameters  Training Time
MoE with 2048 Experts        4.64             26.03      85M           8.7B               1 day/64 k40s
GNMT (Wu et al., 2016)       5.25             24.91      214M          278M               1 day/96 k80s
GNMT+RL (Wu et al., 2016)    8.08             24.66      214M          278M               1 day/96 k80s
PBMT (Durrani et al., 2014)  -                20.7       -             -                  -
DeepAtt (Zhou et al., 2016)  -                20.6       -             -                  -

Table 4: Results on the Google Production En->Fr dataset (bold values represent best results).

Model                   Eval Perplexity  Eval BLEU  Test Perplexity  Test BLEU  ops/timestep  Total #Parameters  Training Time
MoE with 2048 Experts   2.60             37.27      2.69             36.57      85M           8.7B               1 day/64 k40s
GNMT (Wu et al., 2016)  2.78             35.80      2.87             35.56      214M          278M               6 days/96 k80s
"}]
[{"section_index": "0", "section_name": "WORDS OR CHARACTERS? FINE-GRAINED GATING FOR READING COMPREHENSION", "section_text": "Table 5: Word tokens with highest and lowest gate values. High gate values favor character-level representa tions, and low gate values favor word-level representations..\nGate values Word tokens\nzhiliny,wcohen, rsalakhu}@cs.cmu.edu\nPrevious work combines word-level and character-level representations using con-. catenation or scalar weighting, which is suboptimal for high-level tasks like read ing comprehension. We present a fine-grained gating mechanism to dynamically. combine word-level and character-level representations based on properties of the. words. We also extend the idea of fine-grained gating to modeling the interaction. between questions and paragraphs for reading comprehension. Experiments show. that our approach can improve the performance on reading comprehension tasks. achieving new state-of-the-art results on the Children's Book Test and Who Did. What datasets. To demonstrate the generality of our gating mechanism, we also. show improved results on a social media tag prediction task|1.\nWe present a fine-grained gating mechanism that dynamically combines word-level and character- level representations based on word properties. Experiments on the Twitter tag prediction dataset show that fine-grained gating substantially outperforms scalar gating and concatenation. Our method also improves the performance on reading comprehension and achieves new state-of-the-art results on CBT and WDW. In our future work, we plan to to apply the fine-grained gating mechanism for combining other levels of representations, such as phrases and sentences. It will also be intriguing to integrate NER and POS networks and learn the token representation in an end-to-end manner."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Finding semantically meaningful representations of the words (also called tokens) in a document i. necessary for strong performance in Natural Language Processing tasks. In neural networks, token. are mainly represented in two ways, either using word-level representations or character-level repre. sentations. Word-level representations are obtained from a lookup table, where each unique token i represented as a vector. Character-level representations are usually obtained by applying recurren. neural networks (RNNs) or convolutional neural networks (CNNs) on the character sequence of the. token, and their hidden states are combined to form the representation. Word-level representation . are good at memorizing the semantics of the tokens while character-level representations are mor. suitable for modeling sub-word morphologies (Ling et al.2015}[Yang et al.]2016a). For example considering \"cat\"' and \"cats\", word-level representations can only learn the similarities between th. two tokens by training on a large amount of training data, while character-level representations, b. design, can easily capture the similarities. Character-level representations are also used to alleviat. the difficulties of modeling out-of-vocabulary (OOV) tokens (Luong & Manning2016)."}, {"section_index": "2", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was funded by NVIDIA, the Office of Naval Research Scene Understanding grant N000141310721, the NSF grant IIS1250956, and Google Research"}, {"section_index": "3", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 
Neural machine translation by jointly learning to align and translate. In ICLR, 2015.\nDanqi Chen, Jason Bolton, and Christopher D Manning. A thorough examination of the cnn/daily mail reading comprehension task. In ACL, 2016.\nHybrid word-character models have been proposed to leverage the advantages of both word-level. and character-level representations. The most commonly used method is to concatenate these twc representations (Yang et al.]2016a). However, concatenating word-level and character-level repre-. sentations is technically problematic. For frequent tokens, the word-level representations are usually. accurately estimated during the training process, and thus introducing character-level representa. tions can potentially bias the entire representations. For infrequent tokens, the estimation of word-. level representations have high variance, which will have negative effects when combined with the. character-level representations. To address this issue, recentlyMiyamoto & Cho(2016) introduced a scalar gate conditioned on the word-level representations to control the ratio of the two repre-. sentations. However, for the task of reading comprehension, preliminary experiments showed that. this method was not able to improve the performance over concatenation. There are two possible. reasons. First, word-level representations might not contain sufficient information to support the. decisions of selecting between the two representations. Second, using a scalar gate means applying. the same ratio for each of the dimensions, which can be suboptimal..\nYiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016.\nBhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, and William W Cohen. Tweet2ve Character-based distributed representations for social media. In ACL, 2016b.\nKarl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman and Phi1 Blunsom. Teaching machines to read and comprehend. In NIPS, pp. 1693-1701, 2015..\nIn this work, we present a fine-grained gating mechanism to combine the word-level and character-. level representations. We compute a vector gate as a linear projection of the token features followed\nFelix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. In ICLR, 2016.\neSt or but But These these However however among Among that when When although Although because Because until many Many than though Though this This Since since date where Where have That and And Such such number so which by By. how before Before with With between Between even Even if. est Sweetgum Untersee Jianlong Floresta Chlorella Obersee PhT Doctorin Jumonville WFTS WTSP Boven Pharm Nederrijn Otrar Rhin Magicicada WBKB Tanzler. KMBC WPLG Mainau Merwede RMJM Kleitman Scheur Bodensee Kromme Horenbout Vorderrhein Chlamydomonas Scantlebury Qingshui Funchess."}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016a.\nby a sigmoid activation. We then multiplicatively apply the gate to the character-level and word level representations. 
Each dimension of the gate controls how much information is flowed from the word-level and character-level representations respectively. We use named entity tags, part-of- speech tags, document frequencies, and word-level representations as the features for token proper ties which determine the gate. More generally, our fine-grained gating mechanism can be used tc model multiple levels of structure in language, including words, characters, phrases, sentences and paragraphs. In this work we focus on studying the effects on word-character gating.\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780 1997.\nRudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention su. reader network. In ACL, 2016.\nYoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural language models. In AAAI, 2016.\nTo better tackle the problem of reading comprehension, we also extend the idea of fine-grained. gating for modeling the interaction between documents and queries. Previous work has shown the importance of modeling interactions between document and query tokens by introducing various. attention architectures for the task (Hermann et al.. 2015] Kadlec et al.2016).Most of these use an inner product between the two representations to compute the relative importance of documen tokens. The Gated-Attention Reader (Dhingra et al.|2016a) showed improved performance by re placing the inner-product with an element-wise product to allow for better matching at the semantic level. However, they use aggregated representations of the query which may lead to loss of infor-. mation. In this work we use a fine-grained gating mechanism for each token in the paragraph anc. each token in the query. The fine-grained gating mechanism applies an element-wise multiplicatior. of the two representations.\nMinh-Thang Luong and Christopher D Manning. Achieving open vocabulary neural machine translation wit hybrid word-character models. In ACL, 2016.\nYasumasa Miyamoto and Kyunghyun Cho. Gated word-character recurrent language model. In EMNLP, 2016\nTsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. arXiv preprint arXiv:1607.04315, 2016\nTakeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457, 2016.\nWe show improved performance on reading comprehension datasets, including Children's Book Test (CBT), Who Did What, and SQuAD. On CBT, our approach achieves new state-of-the-art results without using an ensemble. Our model also improves over state-of-the-art results on the Who Did What dataset. To demonstrate the generality of our method, we apply our word-character fine grained gating mechanism to a social media tag prediction task and show improved performance Over previous methods.\nPranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,o00+ questions for machine comprehension of text. In EMNLP, 2016.\nMarek Rei, Gamal KO Crichton, and Sampo Pyysalo. Attending to characters in neural sequence labelin. models. arXiv preprint arXiv:1611.04361, 2016.\nAlessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machin reading. arXiv preprint arXiv:1606.02245, 2016.\nOur contributions are two-fold. First, we present a fine-grained word-character gating mechanism. and show improved performance on a variety of tasks including reading comprehension. 
Second. to better tackle the reading comprehension tasks, we extend our fine-grained gating approach to modeling the interaction between documents and queries..\nAdam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. In EMNLP, 2016.\nOriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, pp. 2692-2700, 2015\nShuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXiv preprir arXiv:1608.07905, 2016."}, {"section_index": "5", "section_name": "2 RELATED WORK", "section_text": "Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative inte gration with recurrent neural networks. In NIPS, 2016.\nZhilin Yang, Ruslan Salakhutdinov, and William Cohen. Multi-task cross-lingual sequence tagging fron scratch. arXiv preprint arXiv:1603.06270, 2016a\nZhilin Yang, Ye Yuan, Yuexin Wu, Ruslan Salakhutdinov, and William W Cohen. Review networks for captior generation. In NIPS, 2016b.\nYang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end answer chunk extraction and ranking for reading comprehension. arXiv preprint arXiv:1610.09996, 2016.\nHybrid word-character models have been proposed to take advantages of both word-level and character-level representations.Ling et al.(2015) introduce a compositional character to word (C2w) model based on bidirectional LSTMs.Kim et al.(2016) describe a model that employs a convolutional neural network (CNN) and a highway network over characters for language model- ing.Miyamoto & Cho[(2016) use a gate to adaptively find the optimal mixture of the character-level and word-level inputs.Yang et al.(2016a) employ deep gated recurrent units on both character and word levels to encode morphology and context information. Concurrent to our work, Rei et al. (2016) employed a similar gating idea to combine word-level and character-level representations, but their focus is on low-level sequence tagging tasks and the gate is not conditioned on linguistic features.\nishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Learning multi-relational semantic using neural-embedding models. In NIPS 2014 workshop on Learning Semantics, 2014..\nThe gating mechanism is widely used in sequence modeling. Long short-term memory (LSTM) networks (Hochreiter & Schmidhuber1997) are designed to deal with vanishing gradients through the gating mechanism. Similar to LSTM, Gated Recurrent Unit (GRU) was proposed byCho et al. (2014), which also uses gating units to modulate the flow of information. The gating mechanism can also be viewed as a form of attention mechanism (Bahdanau et al.]2015] Yang et al. 2016b) Over two inputs.\nSimilar to the idea of gating, multiplicative integration has also been shown to provide a benefit. in various settings.Yang et al.(2014) find that multiplicative operations are superior to additive. operations in modeling relations. Wu et al.(2016) propose to use Hadamard product to replace sum operation in recurrent networks, which gives a significant performance boost over existing RNN. models.Dhingra et al.(2016a) use a multiplicative gating mechanism to achieve state-of-the-art. results on question answering benchmarks.\nReading comprehension is a challenging task for machines. A variety of models have been proposed to extract answers from given text (Hill et al.[2016] Kadlec et al.]2016]Trischler et al.[2016f Chen\net al.[2016] Sordoni et al.]2016] Cui et al.[2016). 
Yu et al.(2016) propose a dynamic chunk reader to extract and rank a set of answer candidates from a given document to answer questions. Wang & Jiang(2016) introduce an end-to-end neural architecture which incorporates match-LSTM and pointer networks (Vinyals et al.2015)\nIn this section, we will describe our fine-grained gating approach in the context of reading com prehension. We first introduce the settings of reading comprehension tasks and a general neura network architecture. We will then describe our word-character gating and document-query gating approaches respectively."}, {"section_index": "6", "section_name": "3.1 READING COMPREHENSION SETTING", "section_text": "The reading comprehension task involves a document P = (p1, P2, .::,Pm) and a query Q (q1, q2, ::. , qn), where M and N are the lengths of the document and the query respectively. Eacl token pi is denoted as (w, C), where w; is a one-hot encoding of the token in the vocabulary an C; is a matrix with each row representing a one-hot encoding of a character. Each token in the quer qj is similarly defined. We use i as a subscript for documents and j for queries. The output of th problem is an answer a, which can either be an index or a span of indices in the document.\nNow we describe a general architecture used in this work, which is a generalization of the gatec attention reader (Dhingra et al.|[2016a). For each token in the document and the query, we compute a vector representation using a function f. More specifically, for each token pi in the document we have h = f(w, Ct). The same function f is also applied to the tokens in the query. Let H? and Hg denote the vector representations computed by f for tokens in documents and queries respectively. In Section|3.2 we will discuss the \"word-character\"' fine-grained gating used to define the function f.\nSuppose that we have a network of K layers. At the k-th layer, we apply RNNs on Hk-1 and Hy to. obtain hidden states Pk and Qk, where Pk is a M d matrix and Qk is a N d matrix with d being. the number of hidden units in the RNNs. Then we use a function r to compute a new representation for the document Hk = r(Pk, Qk). In Section3.3. 3 we will introduce the \"document-query\" fine-. grained gating used to define the function r..\nand Hg tc\nAfter going through K layers, we predict the answer index a using a softmax layer over hidder states Hk. For datasets where the answer is a span of text, we use two softmax layers for the start. and end indices respectively.\nGiven a one-hot encoding w; and a character sequence C;, we now describe how to compute the vector representation h, = f(w;, C) for the token. In the rest of the section, we will drop the subscript i for notation simplicity\nWe first apply an RNN on C and take the hidden state in the last time step c as the character-level representation (Yang et al.]2016a). Let E denote the token embedding lookup table. We perform a matrix-vector multiplication Ew to obtain a word-level representation. We assume c and Ew have the same length de in this work\nPrevious methods defined f using the word-level representation Ew (Collobert et al.]2011), the character-level representation c (Ling et al.[[2015), or the concatenation [Ew; c](Yang et al.||2016a). Unlike these methods, we propose to use a gate to dynamically choose between the word-level and. character-level representations based on the properties of the token. Let v denote a feature vector that encodes these properties. 
In this work, we use the concatenation of named entity tags, part-of-speech tags, binned document frequency vectors, and the word-level representations to form the feature vector v. Let d_v denote the length of v.

The gate is computed as follows:

g = σ(W_g v + b_g)

where W_g and b_g are the model parameters with shapes d_e × d_v and d_e, and σ denotes an element-wise sigmoid function.

The final representation is computed using a fine-grained gating mechanism:

h = f(c, w) = g ⊙ c + (1 − g) ⊙ (Ew)

Figure 1: Word-character fine-grained gating. The two lookup tables are shared. "NER", "POS", "frequency" refer to named entity tags, part-of-speech tags, document frequency features. (The diagram itself is not reproduced here; it shows the character RNN and the word lookup combined through a sigmoid gate, with the NER, POS, and frequency features feeding the gate.)

An illustration of our fine-grained gating mechanism is shown in Figure 1. Intuitively speaking, when the gate g has high values, more information flows from the character-level representation to the final representation; when the gate g has low values, the final representation is dominated by the word-level representation.

Though Miyamoto & Cho (2016) also use a gate to choose between word-level and character-level representations, our method is different in two ways. First, we use a more fine-grained gating mechanism, i.e., vector gates rather than scalar gates. Second, we condition the gate on features that better reflect the properties of the token. For example, for noun phrases and entities, we would expect the gate to bias towards character-level representations because noun phrases and entities are usually less common and display richer morphological structure. Experiments show that these changes are key to the performance improvements for reading comprehension tasks.

Our approach can be further generalized to a setting of multi-level networks so that we can combine multiple levels of representations using fine-grained gating mechanisms, which we leave for future work.
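As a minimal NumPy sketch of the two equations above (shapes and names are illustrative assumptions):

```python
# Sketch of word-character fine-grained gating: h = g * c + (1 - g) * Ew.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fine_grained_gate(c, w_onehot, E, v, W_g, b_g):
    """c: char-level vector (d,); w_onehot: (V,); E: (V, d) word lookup table;
    v: token feature vector (d_v,) built from NER/POS/frequency/word features."""
    word = E.T @ w_onehot            # word-level representation Ew
    g = sigmoid(W_g @ v + b_g)       # vector gate, one value per dimension
    return g * c + (1.0 - g) * word  # fine-grained mixture of the two
```

The key design choice visible here is that the gate is a vector, so each dimension can mix the two representations differently, rather than a single scalar ratio for the whole token.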
"}, {"section_index": "7", "section_name": "3.3 DOCUMENT-QUERY FINE-GRAINED GATING", "section_text": "
Given the hidden states P^k and Q^k, we now describe how to compute a representation H that encodes the interactions between the document and the query. In this section, we drop the superscript k (the layer number) for notation simplicity. Let p_i denote the i-th row of P and q_j denote the j-th row of Q. Let d_h denote the lengths of p_i and q_j.

Attention-over-attention (AoA) (Cui et al., 2016) defines a dot product between each pair of tokens in the document and the query, i.e., p_i^T q_j, followed by row-wise and column-wise softmax nonlinearities. AoA imposes pair-wise interactions between the document and the query, but using a dot product is potentially not expressive enough and hard to generalize to multi-layer networks. The gated attention (GA) reader (Dhingra et al., 2016a) defines an element-wise product as p_i ⊙ g_i where g_i is a gate computed by attention mechanism on the token p_i and the entire query. The intuition for the gate g_i is to attend to important information in the document. However, there is no direct pair-wise interaction between each token pair.

We present a fine-grained gating method that combines the advantages of the above methods (i.e., both pairwise and element-wise). We compute the pairwise element-wise product between the hidden states in the document and the query, as shown in Figure 2. More specifically, for p_i and q_j, we have:

I_ij = tanh(p_i ⊙ q_j)

where q_j can be viewed as a gate to filter the information in p_i. We then use an attention mechanism over I_ij to output hidden states h_i as follows:

h_i = Σ_j softmax(u_h^T I_ij + w_i^T w_j b_h1 + b_h2) I_ij

where u_h is a d_h-dimensional model parameter, b_h1 and b_h2 are scalar model parameters, and w_i and w_j are one-hot encodings for p_i and q_j respectively. We additionally use one-hot encodings in the attention mechanism to reinforce the matching between the same tokens since such information is not fully preserved in I_ij when k is large. The softmax nonlinearity is applied over all j's. The final hidden states H are formed by concatenating the h_i's for each token p_i.

Figure 2: Paragraph-question fine-grained gating. (The diagram itself is not reproduced here; it shows the M × d document hidden states and the N × d query hidden states combined through a pairwise element-wise product and tanh into an (M*N) × d tensor, followed by attention.)
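A hedged NumPy sketch of the document-query gating above; the shapes follow the equations, while the variable names and the boolean token-match matrix are illustrative assumptions.

```python
# Sketch of document-query fine-grained gating with attention over j.
import numpy as np

def doc_query_gating(P, Q, u_h, b_h1, b_h2, same_token):
    """P: (M, d) document states; Q: (N, d) query states;
    u_h: (d,); b_h1, b_h2: scalars;
    same_token: (M, N) boolean matrix encoding w_i^T w_j for one-hot encodings."""
    I = np.tanh(P[:, None, :] * Q[None, :, :])           # I_ij, shape (M, N, d)
    scores = I @ u_h + same_token * b_h1 + b_h2          # attention logits, (M, N)
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)                    # softmax over all j's
    return (a[:, :, None] * I).sum(axis=1)               # h_i, shape (M, d)
```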
\"dev\" means development set, and \"test' means test set.\nModel CN dev CN test NE dev NE test GA word char concat 0.731 0.696 0.768 0.725 GA word char feat concat 0.7250 0.6928 0.7815 0.7256 GA scalar gate 0.7240 0.6908 0.7810 0.7260 GA fine-grained gate 0.7425 0.7084 0.7890 0.7464 FG fine-grained gate 0.7530 0.7204 0.7910 0.7496 Sordoni et al.(2016 0.721 0.692 0.752 0.686 Trischler et al.72016 0.715 0.674 0.753 0.697 Cu1 et al.(2016) 0.722 0.694 0.778 0.720 Munkhdalai & Yu|(2016 0.743 0.719 0.782 0.732 Kadlec et al.. ((2016) ensemble 0.711 0.689 0.762 0.710 Sordoni et al.(2016) ensemble 0.741 0.710 0.769 0.720 Trischler et al.. .(2016) ensemble 0.736 0.706 0.766 0.718\nSection 3.2; scalar gate uses a scalar gate similar to|Miyamoto & Cho(2016) but is conditioned on the features; fine-grained gate is our method described in Section 3.2. We include word char feat concat for a fair comparison because our fine-grained gating approach also uses the token features.\nThe results are shown in Table[1 We report three evaluation metrics including precision@1, re call@10, and mean rank. Our method outperforms character-level models used in Dhingra et al (2016b) by 2.29%, 2.69%, and 2.5 points in terms of precision, recall and mean rank respectively We can observe that scalar gating approach (Miyamoto & Cho] 2016) can only marginally improve over the baseline methods, while fine-grained gating methods can substantially improve model per formance. Note that directly concatenating the token features with the character-level and word-leve representations does not boost the performance, but using the token features to compute a gate (as done in fine-grained gating) leads to better results. This indicates that the benefit of fine-grained gating mainly comes from better modeling rather than using additional features."}, {"section_index": "10", "section_name": "4.2 PERFORMANCE ON READING COMPREHENSION", "section_text": "We evaluate our model on cloze-style question answering benchmarks\nAfter investigating the effectiveness of the word-character fine-grained gating mechanism on the Twitter dataset, we now move on to a more challenging task, reading comprehension. In this section, we experiment with two datasets, the Children's Book Test dataset (Hill et al.2016) and the SQuAD dataset (Rajpurkar et al.]2016)\nTable 3: Performance on the Who Did What dataset. \"dev\" means development set, and \"test\"' means test set 'WDW-R' is the relaxed version of WDW.\nModel WDW dev WDW test WDW-R dev WDW-R test Kadlec et al.(2016 0.570 0.590 Chen et al.(2016) 0.640 0.650 Munkhdalai & Yu (2016 0.665 0.662 0.670 0.667 Dhingra et al.(2016a 0.716 0.712 0.726 0.726 this paper 0.723 0.717 0.731 0.726\nTable 4: Performance on the SQuAD dev set. Test set results are included in the brackets\nThe Children's Book Test (CBT) dataset is built from children's books. The whole dataset has 669,343 questions for training, 8,000 for validation and 10,o00 for testing. We closely follow the setting inDhingra et al.[(2016a) and incrementally add different components to see the changes in performance. For the fine-grained gating approach, we use the same hyper-parameters as inDhingra et al.(2016a) except that we use a character-level GRU with 100 units to be of the same size as the word lookup table. 
The word embeddings are updated during training..\nIn addition to different ways of combining word-level and character-level representations, we also compare two different ways of integrating documents and queries: GA refers to the gated attention reader (Dhingra et al.[2016a) and FG refers to our fine-grained gating described in Section[3.3\nThe results are reported in Table2. We report the results on common noun (CN) questions anc. named entity (NE) questions, which are two widely used question categories in CBT. Our fine. grained gating approach achieves new state-of-the-art performance on both settings and outperforms. he current state-of-the-art results by up to 1.76% without using ensembles. Our method outperform the baseline GA reader by up to 2.4%, which indicates the effectiveness of the fine-grained gating. mechanism. Consistent with the results on the Twitter dataset, using word-character fine-grainec. gating can substantially improve the performance over concatenation or scalar gating. Furthermore. we can see that document-query fine-grained gating also contributes significantly to the final results.\nWe also apply our fine-grained gating model to the Who Did What (WDw) dataset (Onishi et al. 2016). As shown in Table[3] our model achieves state-of-the-art results compared to strong baselines We fix the word embeddings during training.\nThe Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset collected recently (Rajpurkar et al.2016). It contains 23,215 paragraphs come from 536 wikipedia articles. Unlike other reading comprehension datasets such as CBT, the answers are a span of text rather than a single word. The dataset is partitioned into a training set (80%, 87,636 question-answer pairs), a development set (10%, 10,600 question-answer pairs) and a test set which is not released.\nModel F1 Exact Match GA word 0.6695 0.5492 GA word char concat. 0.6857 0.5639 GA word char feat concat 0.6904 0.5711 GA scalar gate 0.6850 0.5620 GA fine-grained gate 0.6983 0.5804 FG fine-grained gate 0.7125 0.5995 FG fine-grained gate + ensemble 0.7341 (0.733) 0.6238 (0.625) Yu et al.(2016 0.712 (0.710) 0.625 (0.625) Wang & Jiang (2016 0.700 (0.703) 0.591 (0.595)\nFigure 3: Visualization of the weight matrix Wg. Weights for each features are averaged. Red means high and. yellow means low. High weight values favor character-level representations, and low weight values favor word level representations. \"Organization\", \"\"Person', \"Location\", and \"O\" are named entity tags; \"DOCLEN-n'. are document frequency features (larger n means higher frequency, n from O to 4); others are POS tags..\nFigure 4: Visualization of gate values in the text. Red means high and yellow means low. High gate values favor character-level representations, and low gate values favor word-level representations..\nWe report our results in Table4 Exact match' computes the ratio of questions that are answered correctly by strict string comparison, and the F1 score is computed on the token level. We car observe that both word-character fine-grained gating and document-query fine-grained gating car substantially improve the performance, leading to state-of-the-art results among published papers. Note that at the time of submission, the best score on the leaderboard is 0.716 in exact match and 0.804 in F1 without published papers. A gap exists because our architecture described in Section 3.1|does not specifically model the answer span structure that is unique to SQuAD. 
In this work, we focus on this general architecture to study the effectiveness of fine-grained gating mechanisms.

We visualize the model parameter W_g as described in Section 3.2. For each feature, we average the corresponding weight vector in W_g. The results are shown in Figure 3. We can see that named entities like "Organization" and noun phrases (with tags "NNP" or "NNPS") tend to use character-level representations, which is consistent with human intuition because those tokens are usually infrequent or display rich morphologies. Also, DOCLEN-4, WH-adverb ("WRB"), and conjunction ("IN" and "CC") tokens tend to use word-level representations because they appear frequently.

We also sample random spans of text from the SQuAD dataset, and visualize the average gate values in Figure 4. The results are consistent with our observations in Figure 3. Rare tokens, noun phrases, and named entities tend to use character-level representations, while others tend to use word-level representations. To further justify this argument, we also list the tokens with highest and lowest gate values in Table 5."}]
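To make the gating computation concrete, the following is a minimal sketch of the word-character fine-grained gate evaluated above. It is an illustration rather than the authors' implementation: the function names, shapes, and the sigmoid-of-linear form of the gate computed from token features are assumptions based on the description of Section 3.2.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fine_grained_gate(word_rep, char_rep, token_feats, W_g, b_g):
    """Word-character fine-grained gating (sketch).

    word_rep, char_rep : (d,) word- and character-level representations
    token_feats        : (f,) token features (e.g. POS/NER tags, doc frequency)
    W_g, b_g           : (d, f) and (d,) learned gate parameters (names assumed)
    """
    g = sigmoid(W_g @ token_feats + b_g)        # (d,) fine-grained gate
    # High gate values favor the character-level representation,
    # matching the color convention described in Figures 3 and 4.
    return g * char_rep + (1.0 - g) * word_rep

# toy usage with assumed dimensions
d, f = 4, 6
rng = np.random.default_rng(0)
h = fine_grained_gate(rng.normal(size=d), rng.normal(size=d),
                      rng.normal(size=f), rng.normal(size=(d, f)), np.zeros(d))
print(h.shape)  # (4,)
```

Because the gate is a vector rather than a scalar, each dimension of the representation can independently favor the character- or word-level signal, which is what distinguishes fine-grained gating from the scalar-gate baseline.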
ryUPiRvge
[{"section_index": "0", "section_name": "EXTRAPOLATION AND LEARNING EOUATIONS", "section_text": "Georg Martius & Christoph H. Lampert\nIn classical machine learning, regression is treated as a black box process oj identifying a suitable function from a hypothesis set without attempting to gair insight into the mechanism connecting inputs and outputs. In the natural sciences however, finding an interpretable function for a phenomenon is the prime goal as i allows to understand and generalize results. This paper proposes a novel type o function learning network, called equation learner (EQL), that can learn analytica expressions and is able to extrapolate to unseen domains. It is implemented as ar end-to-end differentiable feed-forward network and allows for efficient gradien based training. Due to sparsity regularization concise interpretable expressions car be obtained. Often the true underlying source expression is identified.\n(c) F-3 2.5 2.0 MLP 1.5 0.81 -2.36 14 EQL 1.0 0.0 0.00.0 0.90.0 0.0 -0.0 0.5 EQL (no cos) mult mult mult 2 0.0 0.82 0.86 0.92 -0.83 0.98 0.68 0.33 0.58 System 0.5 0.01 0.0 0.00.59 0.020.0 1.0 mul mult 1.5 0.77 0.82 -1.02 /0.57 3 -2 -1 0 1 2 3 EQL EQL (no cos X1 = X2 = X3 = X x4 = -0.2x"}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "The quality of a model is typically measured by its ability to generalize from a training set to previously unseen data from the same distribution. In regression tasks generalization essentially boils down to interpolation if the training data is sufficiently dense. As long as models are selected correctly, i. e. in a way to not overfit the data, the regression problem is well understood and can - at least conceptually - be considered solved. However, when working with data from real-world devices, e. g. controlling a robotic arm, interpolation might not be sufficient. It could happen that future data lies outside of the training domain, e. g. when the arm is temporarily operated outside of its specifications. For the sake of robustness and safety it is desirable in such a case to have a regression model that continues to make good predictions, or at least does not fail catastrophically This setting, which we call extrapolation generalization, is the topic of the present paper.\nlearned formula (EQL): 0.61(x1 + x1x2)(cos(-2.36x1) + 0.71) + 0.33x2x3x4 learned formula (EQL (no c0s)): 0.33(1 + x2) sin(3.14x1) + 0.33x2x3x4\nFigure 4: Formula learning analysis. (a) for F-1, (b) for F-2, and (c) for F-3. (left) y for a single cut. through the input space for the true system equation (1012), and for an instance of EQL, and MLP (right) shows the learned networks correspondingly, see Fig.2|for details. The formula representations. where extracted from the networks. For F-3 the algorithm fails with the overcomplete base and typically (9/10 times) ends up in a local minima. With less base function (no cosine) the right formula. is found. Both results are presented. See text for a discussion..\nWe are particularly interested in regression tasks for systems that can be described by real-valued analytic expression, e. g. mechanical systems such as a pendulum or a robotic arm. These are typically governed by a highly nonlinear function but it is nevertheless possible, in principle, to infer theit behavior on an extrapolation domain from their behavior elsewhere. 
We make two main contributions: 1) a new type of network that can learn analytical expressions and is able to extrapolate to unseen domains, and 2) a model selection strategy tailored to the extrapolation setting.

The following section describes the setting of regression and extrapolation. Afterwards we introduce our method and discuss the architecture, its training, and its relation to prior art. We present our results in the Section Experimental evaluation and close with conclusions.

Table 3: Interpolation and extrapolation performance for formula learning. See Tab. 1 for details.

dataset | method | interpolation | extrapol. (near) | extrapol. (far)
F-1 | EQL | 0.010 ± 0.000 | 0.015 ± 0.005 | 0.026 ± 0.015
F-1 | MLP | 0.011 ± 0.000 | 0.32 ± 0.12 | 0.920 ± 0.420
F-1 | SVR | 0.011 | 0.28 | 1.2
F-2 | EQL | 0.01 ± 0.00 | 0.013 ± 0.004 | 0.026 ± 0.019
F-2 | MLP | 0.01 ± 0.00 | 0.2 ± 0.014 | 0.49 ± 0.043
F-2 | SVR | 0.011 | 0.3 | 0.94
F-3 | EQL | 0.01 ± 0.000 | 0.047 ± 0.012 | 0.35 ± 0.11
F-3 | EQL (no cos) | 0.01 ± 0.000 | 0.01 ± 0.000 | 0.011 ± 0.001
F-3 | MLP | 0.01 ± 0.000 | 0.084 ± 0.007 | 0.4 ± 0.021
F-3 | SVR | 0.01 | 0.071 | 0.39"}, {"section_index": "2", "section_name": "REGRESSION AND EXTRAPOLATION", "section_text": "We consider a multivariate regression problem with a training set {(x1, y1), ..., (xN, yN)} with x ∈ R^n, y ∈ R^m. Because our main interest lies on extrapolation in the context of learning the dynamics of physical systems, we assume the data originates from an unknown analytical function (or system of functions) φ : R^n → R^m with additive zero-mean noise ξ, i. e. y = φ(x) + ξ and E ξ = 0. The function φ may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like. The general task is to learn a function ψ : R^n → R^m that approximates the true functional relation as well as possible in the squared loss sense, i. e. achieves minimal expected error E‖ψ(x) − φ(x)‖². In practice, we only have particular examples of the function values available and measure the quality of prediction in terms of the empirical error on training or test data D, see Eq. (1)."}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "If training and test data are sampled from the same distribution then we speak about an interpolation problem. In the extrapolation setting the training data is assumed to cover only a limited range of the data domain. In the example of the robot arm, for instance, the training may be restricted to a certain joint angle range or maximal velocity. For testing we want to make predictions about the unseen domains, e. g. for higher velocities. To succeed in this task, it is essential to identify the underlying functional relationship instead of just minimizing the empirical error, as detailed below. As usual, we split the data that is available at training time into a part for model training and a part for validation or model selection."}, {"section_index": "4", "section_name": "LEARNING A NETWORK FOR FUNCTION EXTRAPOLATION", "section_text": "The main model we propose is a multi-layered feed-forward network with computational units specifically designed for the extrapolation regression tasks. For an L-layer network, there are L − 1 hidden layers, each consisting of a linear mapping followed by non-linear transformations. For simplicity of notation, we explain the network as if each hidden layer had the same structure (k′ inputs, k outputs). In practice, each layer can be designed independently of the others, of course, as
long as input/output dimensions match.

The linear mapping at level l maps the k′-dimensional input y^(l−1) to the d-dimensional intermediate representation z^(l) given by

z^(l) = W^(l) y^(l−1) + b^(l),

where y^(l−1) is the output of the previous layer, with the convention y^(0) = x. The weight matrix W^(l) ∈ R^(d×k′) and the bias vector b^(l) ∈ R^d are free parameters that are learned during training. The non-linear transformation contains u unary units, f_i : R → R, for i = 1, ..., u, and v binary units, g_j : R × R → R, for j = 1, ..., v. Their outputs are concatenated to form the layer output.

Let us now go beyond our assumptions and consider cases where the true target function is not an element of the hypothesis set.

y3 = (−x1 − 0.01 x3 + x4² sin(x2) + 0.1 x4 cos(x2) + 9.81 sin(x2) cos(x2)) / (sin²(x2) + 1)
y4 = (−0.2 x4 − 19.62 sin(x2) + x1 cos(x2) + 0.01 x3 cos(x2) − x4² sin(x2) cos(x2)) / (sin²(x2) + 1)

The formulas contain divisions, which are not included in our architecture due to their singularities. To incorporate them in a principled manner is left for future work. Thus, the cart-pendulum dynamics are outside the hypothesis class. In this case we cannot expect great extrapolation performance and this is confirmed by the experiments. In Fig. 6(b,c) the extrapolation performance is illustrated by slicing through the input space. The near extrapolation performance is still acceptable for both EQL and MLP, but as soon as the training region is left further, even the best instances differ considerably from the true values, see also the numeric results in Tab. 4. The SVR is performing poorly also for the near extrapolation range. Inspecting the learned expressions we find that the sigmoid functions are rarely used.

g_j(z_(u+2j−1), z_(u+2j)) := z_(u+2j−1) · z_(u+2j) for j = 1, ..., v

y^(L) = W^(L) y^(L−1) + b^(L)

The architecture is depicted in Fig. 1. We call the new architecture Equation Learner (EQL) and denote the function it defines by ψ.

[Figure 5 panels (a)-(c): measured data with EQL predictions, prediction errors of all methods, and EQL solutions in validation-error/sparsity space]
(d) numeric results (interpolation, extrapolation): EQL 0.00042, 0.0061 ± 0.0038; MLP 0.002, 0.0180 ± 0.0024; SVR 0.00067, 0.0057 ± 0.0014
(e) learned formulas for increasing sparsity s:
y = 1.98x² − 1.42x + 0.618 − 1.45 sigm(3.65x − 0.3)
y = −0.38z + 2.47 sigm(−2.25z − 2.77) + 0.38 with z = cos(2.32x − 0.08)
y = 0.221z + 0.42 sigm(0.75z − 3.73)

E(D) = 1/N Σ_(i=1)^N ‖ψ(x_i) − y_i‖²    (1)

Figure 5: X-Ray transition energies. (a) Measured data and predicted values by EQL and (b) visualized prediction error for all methods for one train/validation splitting. (c) EQL solutions during model selection in validation error/sparsity space, see Appendix A1 for details. (d) numeric results. Reported are RMS errors with standard deviation for 10 independent train/validation splits. In real units the error is in 100 keV and is well below the difference between neighboring high-Z elements. (e) learned formulas for different sparsities s (lowest dot for each s in (c)).

The data is scaled to lie in [0, 1], i. e. x = Z/100 and y = Kα2/100000. Model selection is here based on validation error only. The selection for sparsity and validation error only yields the Z² relationship. Mini-batch size is 2 here and T = 50000 was used. Figure 5 presents the data, the predictions, the learned formulas and the numerical results. EQL and SVR achieve similar performance and MLP is significantly worse.
However, EQL also yields interpretable formulas, see Fig. 5(e), that can be used to gain insights into the potential relationship.

Consider a pendulum attached to a cart that can move horizontally along a rail but that is attached to a spring damper system, see Fig. 6(a). The system is parametrized by 4 unknowns: the position of the cart, the velocity of the cart, the angle of the pendulum and the angular velocity of the pendulum. We combine these into a four-dimensional vector x = (x1, ..., x4).

In total, the nonlinear stage has k = u + v outputs and d = u + 2v inputs. The unary units, f1, ..., fu, receive the respective components, z1, ..., zu, as inputs, and each unit may be one of the following base functions as specified in a fixed type parameter I_i ∈ {0, 1, 2, 3}:

f_i(z_i) := z_i if I_i = 0, sin(z_i) if I_i = 1, cos(z_i) if I_i = 2, sigm(z_i) if I_i = 3, for i = 1, ..., u.

We set up a regression problem with four outputs from the corresponding system of ordinary differential equations, where y1 = ẋ1 = x3, y2 = ẋ2 = x4, and y3, y4 are given by the equations above.

The binary units receive the remaining components, z_(u+1), ..., z_(u+2v), as input in pairs of two. They are multiplication units that compute the product of their two input values, as defined by the g_j expression above.

[Figure 1 network diagram: input x, all-to-all linear maps W^(1), ..., W^(L), interleaved with identity/sin/cos/sigm and multiplication units]

Figure 6: Cart-pendulum system. (a) sketch of the system. The lengths and masses are set to 1, the gravitation constant is 9.81 and the friction constant is 0.01. (b,c) slices of outputs y3 and y4 for inputs x1 = x2 = x3 = x4 = x for the true system equation (Eq. 13), and best EQL, MLP instances.

Figure 1: Network architecture of the proposed Equation Learner (EQL) for 3 layers (L = 3) and one neuron per type (u = 4, v = 1).

Table 4: Interpolation and extrapolation performance for cart-pendulum dynamics. See Tab. 1 for details. Note that predicting 0 would yield an error of 0.96 on the far test set.

method | interpolation | extrapol. (near) | extrapol. (far)
EQL | 0.0103 ± 0.0000 | 0.0621 ± 0.0208 | 0.180 ± 0.056
MLP | 0.0101 ± 0.0000 | 0.0184 ± 0.0008 | 0.195 ± 0.006
SVR | 0.0118 | 0.227 | 0.639

The proposed network architecture differs in two main aspects from typical feed-forward networks: the existence of multiplication units and the possibility of sine and cosine as nonlinearities for the unary units. Both design choices are motivated by our objective of learning a system of equations that govern a physical system and can extrapolate to new parts of the input space.

Sigmoid nonlinearities are the canonical choice of activation function for artificial neural networks (ANN) and proved to be successful. In fact, we include sigmoids in our architecture, making it a superclass of ANNs. However, they were typically disabled by the training procedure, corresponding to their absence in the considered physical equations. Other, predominantly local nonlinearities, in particular radial basis functions (Broomhead & Lowe, 1988), we do not include, since one cannot expect them to extrapolate at all. Further nonlinearities, such as (square) roots and logarithms, could in principle be useful for learning physical equations, but they pose problems because their domains of definition are restricted to positive inputs.
We leave the task of incorporating them in a principled way to future work."}, {"section_index": "5", "section_name": "CONCLUSIONS", "section_text": "We presented a new network architecture called EQL that can learn analytic expressions that typically occur in equations governing physical, in particular mechanical, systems. The network is fully differentiable, which allows end-to-end training using backpropagation. By sequencing L1 regularization and fixing the L0 norm we achieve sparse representations with unbiased estimation of factors within the learned equations. We also introduce a model selection procedure specifically designed to select for good extrapolation quality by a multiobjective criterion based on validation error and sparsity. The proposed method is able to learn functional relations and extrapolate them to unseen parts of the data space, as we demonstrate by experiments on synthetic as well as real data. The approach learns concise functional forms that may provide insights into the relationships within the data, as we show on physical measurements of x-ray transition energies.

The ability to multiply two values is a second crucial component of our network architecture. Again, it is inspired by the typical form of physical equations, where multiplication of components is arguably the second most common basic operation after addition (which the linear layers can perform). Multiplication was introduced into neural networks long ago as product units (Durbin & Rumelhart, 1989) and Pi-Sigma units (Shin & Ghosh, 1991). The product units have large fan-in and compute products over all their inputs, potentiated by the respective weights. The result is typically the behavior of a high order polynomial; these are powerful function approximators, but rarely occur in physical equations. Polynomials are also known to require careful fine-tuning in order not to overfit, which makes them a risky choice for the purpose of extrapolation. The Pi-Sigma units are multiplication units with a fixed number of factors, and our multiplication units are a special case with 2 factors. We find that multiplying just two values at a time is well adjusted to the task we aim at, as it allows controlling the maximal degree of the learned polynomial by the depth of the network.

The optimization problem is nontrivial and has many local minima. We have shown cases where the algorithm is not reliably finding the right equation but instead finds an approximation only, in which case extrapolation may be poor.

If the origin of the data is not in the hypothesis class, i.e. the underlying expression cannot be represented by the network, then good extrapolation performance cannot be achieved. Thus it is important to increase the model class by incorporating more base functions, which we will address in future work alongside the application to even larger examples. We expect good scaling capabilities to larger systems due to the gradient based optimization. Apart from the extrapolation we also expect improved interpolation results in high-dimensional spaces, where data is less dense.

Finally, each layer of the network contains unary units that act as identity maps, which in particular gives the network the option to learn functions with a smaller number of nonlinearities than the total network depth."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "The EQL is fully differentiable in its free parameters θ = {W^(1), b^(1), ..., W^(L), b^(L)}, which allows us to train it in an end-to-end fashion using back-propagation.
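To make the forward computation concrete, the following is a minimal sketch of one EQL hidden layer (linear map, then u unary and v multiplication units) followed by the linear read-out. The round-robin assignment of unit types and all names are illustrative assumptions; in the architecture above the type parameters I_i are fixed per network.

```python
import numpy as np

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

def eql_layer(y_prev, W, b, u, v):
    """One EQL hidden layer: linear map to d = u + 2v values, then
    u unary units (identity/sin/cos/sigm, chosen round-robin here as a
    stand-in for the fixed type parameters I_i) and v multiplication
    units acting on the remaining values in pairs."""
    z = W @ y_prev + b                       # z has d = u + 2v entries
    unary = [(lambda a: a, np.sin, np.cos, sigm)[i % 4](z[i]) for i in range(u)]
    binary = [z[u + 2 * j] * z[u + 2 * j + 1] for j in range(v)]
    return np.array(unary + binary)          # k = u + v outputs

def eql_forward(x, params, u, v):
    """params: [(W1, b1), ..., (WL, bL)]; the last layer is the linear
    read-out y^(L) = W^(L) y^(L-1) + b^(L)."""
    y = x
    for W, b in params[:-1]:
        y = eql_layer(y, W, b, u, v)
    W_L, b_L = params[-1]
    return W_L @ y + b_L
```

Since every operation is differentiable, gradients with respect to all W^(l), b^(l) follow by ordinary backpropagation, which is what the training procedure below relies on.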
We adopt a Lasso-like objective (Tibshirani, 1996):"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Debasish Basak, Srimanta Pal, and Dipak Chandra Patranabis. Support vector regression. Neural Information Processing - Letters and Reviews, 11(10):203-224, 2007.

L(D) = 1/N Σ_(i=1)^(|D|) ‖ψ(x_i) − y_i‖² + λ Σ_(l=1)^L |W^(l)|_1,

Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1-2):151-175, 2010.

This work was in part funded by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. 308036: "Life-long learning of visual scene understanding" (L3ViSU). GM received funding from the People Programme (Marie Curie Actions) in FP7/2007-2013 under REA grant agreement no. 291734.

where D^(t) denotes the current mini-batch and α is the step size parameter. The choice of Adam is not critical and standard stochastic gradient descent also works. In all numerical experiments we use α = 0.001 and a mini-batch size of 20.

Christopher M Bishop. Neural networks for pattern recognition. Oxford University Press, 1995.

David S Broomhead and David Lowe. Radial basis functions, multi-variable functional interpolation and adaptive networks. Technical report, DTIC Document, 1988.

The role of the L1 regularization is to encourage networks with sparse connections, matching the intuition that a typical formula describing a physical system contains only a small number of terms, each operating only on a few variables. However, in a non-convex setting where local minima are likely to occur, this type of regularization can have an undesirable side-effect: during the course of the optimization the weights hardly ever change their sign. The reason is that the regularization leads to a constant rate of weight decay whereas the counteracting derivative with respect to the square loss is proportional to the backpropagated error signal and the input to the unit. The latter contributions are often smaller along paths with small weights, such that many weights go to zero and stay there. Additionally, any non-zero regularization term causes the learned weights to reflect a trade-off between minimizing the loss and the regularizer. Although this can lead to improved generalization, it also results in a systematic underestimation of the function values.

Lazlo Gyorfi, Wolfgang Hardle, Pascal Sarda, and Philippe Vieu. Nonparametric curve estimation from time series, volume 60. Springer, 2013.

Therefore, we follow a hybrid regularization strategy: at the beginning of the training procedure (t < t1) we use no regularization (λ = 0), such that parameters can vary freely and reach reasonable starting points. Afterwards, we switch on the regularization by setting λ to a nonzero value, which has the effect that a sparse network structure emerges. Finally, for the last steps of the training (t > t2) we disable L1 regularization (λ = 0) but enforce the same L0 norm of the weights. This is achieved by keeping all weights w ∈ W^(1..L) that are close to 0 at 0, i. e. if |w| ≤ 0.001 then w = 0 during the remaining epochs. This ensures that the learned model finds not only a function of the right parametric form, but also fits the observed values as closely as possible. We observed that the exact values of t1 and t2 are not critical. Here T denotes the total number of update steps;
T was selected large enough to ensure convergence. Note that convergence to a sparse structure is important here, so early stopping will be disadvantageous.

Judea Pearl. Causality. Cambridge University Press, 2000.

Hoifung Poon and Pedro M. Domingos. Sum-product networks: A new deep architecture, 2012."}, {"section_index": "8", "section_name": "MODEL SELECTION FOR EXTRAPOLATION", "section_text": "Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset shift in machine learning. The MIT Press, 2009.

EQL networks have a number of hyper-parameters, e. g. the number of layers, the number of units and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate, the network has to find the "right" formula. But how can we tell? Using the Occam's razor principle: the simplest formula is most likely the right one. Intuitively, if we have the choice between cos(x) and its truncated power series approximation 1 − x²/2 + x⁴/24, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, i. e. it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances w. r. t. validation error and sparsity and selecting the one with the smallest L2 norm (in rank-space), see Eq. (15).

Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. Science, 324(5923):81-85, 2009. ISSN 0036-8075. doi: 10.1126/science.1165893. URL http://science.sciencemag.org/content/324/5923/81.

Alex J Smola and Bernhard Scholkopf. A tutorial on support vector regression. Statistics and Computing, 14(3):199-222, 2004.

Donald F. Specht. A general regression neural network. IEEE Transactions on Neural Networks (TNN), 2(6):568-576, 1991."}, {"section_index": "9", "section_name": "RELATED WORK", "section_text": "In the field of machine learning, regression is often treated as a black box process of identifying a suitable real-valued function from a hypothesis set, e. g. a reproducing kernel Hilbert space for Gaussian Processes Regression (GPR) (Williams & Rasmussen, 2006) or Support Vector Regression (SVR) (Smola & Scholkopf, 2004), or a multi-layer network of suitable expressive power (Specht, 1991). The goal is to find a prediction function that leads to a small expected error on future data, not

Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pp. 267-288, 1996.

that is, a linear combination of L2 loss and L1 regularization, and apply a stochastic gradient descent algorithm with mini-batches and Adam (Kingma & Ba, 2015) for calculating the updates:

θ_(t+1) = θ_t + Adam(∂L(D^(t))/∂θ, α),

K-R Müller, Alexander J Smola, Gunnar Ratsch, Bernhard Scholkopf, Jens Kohlmorgen, and Vladimir Vapnik. Predicting time series with support vector machines. In Artificial Neural Networks (ICANN), pp. 999-1004. Springer, 1997.

Furthermore, the optimization process may only find a local optimum of the training objective, which depends on the initialization of the parameters.
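The hybrid regularization strategy described above can be sketched as a schedule on λ plus a final clamping step. The phase boundaries t1 and t2 below are illustrative choices, not values from the text:

```python
def lambda_schedule(t, T, lam):
    """Hybrid regularization (sketch): no L1 penalty in the first phase,
    L1 in the middle, and none in the final phase where near-zero weights
    are instead frozen at zero. t1 and t2 are assumed split points."""
    t1, t2 = T // 4, (19 * T) // 20   # illustrative phase boundaries
    return 0.0 if (t < t1 or t > t2) else lam

def clamp_small_weights(weight_matrices, threshold=0.001):
    """After t2: set weights with |w| <= threshold to exactly zero,
    fixing the L0 norm while the remaining weights are fine-tuned
    without the systematic shrinkage bias of L1."""
    for W in weight_matrices:
        W[abs(W) <= threshold] = 0.0
```

The point of the final phase is that the surviving weights are re-estimated without any penalty, so the factors in the extracted formula are not systematically underestimated.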
We use independent runs to quantify expected performance deviations.

necessarily to gain insight into the mechanism of how the output values derive from the inputs. The goal of finding an interpretable function is rather common in the natural sciences, such as biology, where high noise levels and strong inter-system variability often make it important to rely on external prior knowledge, and finding a "biologically plausible" model is often preferable over finding one that makes the highest prediction accuracy. As a consequence, model classes are often highly constrained, e. g. allowing only for sparse linear models.

The task of learning a true, nonlinear, functional dependence from observing a physical system has received little attention in the machine learning literature so far, but forms the basis of the field of system identification. There, typically the functional form of the system is known and only the parameters have to be identified. Another approach is to model the time evolution with autoregressive models or higher order convolution integrals (Volterra series), but learning analytic formulas is not common."}, {"section_index": "10", "section_name": "A1: MODEL SELECTION DETAILS", "section_text": "Causal learning is an area of recent research that aims at identifying a causal relation between multiple observables, which are typically the result of a physical process. Classically, this task reduces to finding a minimal graphical model based only on tests of conditional independence (Pearl, 2000). Although very successful in some fields, this classical approach only provides a factorization of the problem, separating causes and effects, but it leaves the exact functional dependency unexplained. Recent extensions of causal learning can take a functional view, but typically do not constrain the regression functions to physically plausible ones, but rather constrain the noise distributions (Peters et al., 2014). The topic of learning a regression function with emphasis on extrapolation performance has not been studied much in the literature so far. Existing work on time series prediction deals with extrapolation in the temporal domain, i. e. predict the next value(s) (Wiener, 1949). By our nomenclature, this is typically rather an interpolation task, when the prediction is based on the behaviour of the series at earlier time steps but with similar value distribution (Müller et al., 1997; Gyorfi et al., 2013). Extrapolating in the data domain implies that the data distribution at prediction time will differ from the data distribution at training time. This is traditionally called the domain adaptation setting. In particular, since we assume a common labeling function, our setting would fall under the covariate shift setting (Quionero-Candela et al., 2009). Unfortunately, this connection is not particularly useful for our problem. As domain adaptation typically does not make additional assumptions about how the data distribution may change, existing methods need access to some unlabeled data from the test distribution already at training time (Ben-David et al., 2010). In our setting this is not possible to obtain.

s(ψ) = Σ_(l=1)^L Σ_(i=1)^k Θ(‖W_i^(l)‖ − 0.01)

where Θ is the Heaviside function and 0.01 is an arbitrary threshold.
For the multiplication units the norms of the incoming weights for both inputs are added (omitted to avoid clutter in the formula)."}, {"section_index": "11", "section_name": "SELECTION CRITERIA", "section_text": "As stated in the main text, we strive to choose the model that is both simple and has good performance in terms of the validation set. Since both quantities have different scales, we proposed to choose them based on their ranking. Let r^v(ψ) and r^s(ψ) be the ranks of the network ψ w. r. t. the validation error and sparsity s(ψ), respectively; then the network with minimal squared rank norm is selected:

arg min_ψ [ r^v(ψ)² + r^s(ψ)² ]    (15)

On the technical level, EQL networks are an instance of general feed-forward networks for function approximation (Bishop, 1995). In contrast to recent trends towards deep learning (Bengio, 2009; Bengio et al., 2013), our goal is not to learn any data representation, but to learn a function which compactly represents the input-output relation and generalizes between different regions of the data space, like a physical formula. Structurally, EQL networks resemble sum-product networks (SPNs) (Poon & Domingos, 2012) and Pi-Sigma networks (PSNs) (Shin & Ghosh, 1991), in the sense that both are based on directed acyclic graphs with computational units that allow for summation and multiplication. Otherwise, SPNs are different as they act as an efficient alternative to probabilistic graphical models for representing probability distributions, whereas EQL networks are meant for the classical task of function approximation. In PSNs each output needs to be passed through multiplicative units, whereas in EQL multiplication is optional.

[Figure 7 panels: extrapolation error over validation error and sparsity, (a) raw values and (b) in rank space]

Finding equations for observations is also known as symbolic regression, where a search is performed in a certain function space, typically done with evolutionary computation. With these techniques it is possible to discover physical laws such as invariants and conserved quantities (Schmidt & Lipson, 2009). Unfortunately, the computational complexity/search time explodes for larger expressions and high-dimensional problems. We attempt to circumvent this by modeling it as a gradient based optimization problem. Related to symbolic regression is finding mathematical identities, for instance to find computationally more efficient expressions. In Zaremba et al. (2014) this was done using machine learning to overcome the potentially exponential search space.

Figure 7: Model selection criteria. (a) extrapolation performance depending on validation error and sparsity (s) for the kin-4-end dataset as an illustration. (b) the same as in (a) but in rank-space. Circle arcs indicate the L2-norm iso-lines."}, {"section_index": "12", "section_name": "EXPERIMENTAL EVALUATION", "section_text": "In order to understand how the method depends on the amount of noise and the number of datapoints, we scan through the two parameters and present the empirical results in Fig. 8. In general the method is robust to noise and, as expected, more noise can be compensated by more data.

We demonstrate the ability of EQL to learn physically inspired models with good extrapolation quality by experiments on synthetic and real data.
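Before turning to those experiments, here is a minimal sketch of the selection criterion just defined (sparsity count plus squared rank norm, Eq. (15)). The per-unit incoming-weight-norm test and all names are assumptions consistent with the definitions above:

```python
import numpy as np

def sparsity(weight_matrices, thresh=0.01):
    """Count active hidden units: a unit counts as active if the norm of
    its incoming weights exceeds the threshold (for multiplication units
    the norms of both inputs would be added; omitted here)."""
    return sum(int(np.linalg.norm(W[i]) > thresh)
               for W in weight_matrices for i in range(W.shape[0]))

def select_model(val_errors, sparsities):
    """Rank candidate instances by validation error and by sparsity,
    then pick the one minimizing r_v^2 + r_s^2 (Eq. (15))."""
    r_v = np.argsort(np.argsort(val_errors))   # ranks w.r.t. validation error
    r_s = np.argsort(np.argsort(sparsities))   # ranks w.r.t. sparsity
    return int(np.argmin(r_v**2 + r_s**2))

# toy usage: four candidate instances
best = select_model([0.02, 0.01, 0.05, 0.011], [12, 30, 5, 9])
```

Working in rank space sidesteps the problem that validation error and sparsity live on incomparable scales.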
For this, we implemented the network training and evaluation procedure.

We actually want a measure of complexity of the formula; however, since it is not clear what is the right choice of a measure, we use the sparsity instead, by counting the number of active/used hidden units, denoted by s. For a given network ψ we get the expression s(ψ) stated above.

In Fig. 7 the extrapolation performance of all considered networks for the kin2D-4-end dataset is visualized in dependence of validation error and the sparsity. It becomes evident that the best performing networks are both sparse and have a low validation error.

Table 1: Numeric results on pendulum dataset. Reported are the mean and standard deviation of the root mean squared error (RMS) (√E, Eq. (1)) on different test sets for 10 random initializations.

Pendulum. We first present the results of learning the equations of motion for a very simple physical system: a pendulum. The state space of a pendulum is X = R × R where the first value is the angle of the pole in radians and the second value is the angular velocity. In the physics literature these are usually denoted as (θ, ω), but for our purposes, we call them (x1, x2) in order to keep the notation consistent between experiments. The pendulum's dynamic behavior is governed by the following two ordinary differential equations:

ẋ1 = x2 and ẋ2 = −g sin(x1)    (9)

As training data, we sample 1000 points uniformly in the hypercube [−h, h] × [−h, h] for h = 2. Note that this domain contains more than half of a sine period, so it should be sufficient to identify the analytic expression. The target values are disturbed by Gaussian noise with standard deviation σ = 0.01. We also define three test sets, each with 1000 points. The interpolation test set is sampled from the same data distribution as the training set. The extrapolation (near) test set contains data sampled uniformly from the data domain [−3/2 h, 3/2 h] × [−3/2 h, 3/2 h] \ [−h, h] × [−h, h], which is relatively near the training region, and the extrapolation (far) test set extends the region to further outside: [−2h, 2h] × [−2h, 2h] \ [−h, h] × [−h, h]. We train a 2-layer EQL and perform model selection among the hyper-parameters: the regularization strength λ ∈ 10^{−7,−6.3,−6,−5.3,−5,−4.3,−4,−3.3,−3} and the number of nodes u = v ∈ {1, 3, 5}. All weights are randomly initialized from a normal distribution with σ = √(1/(k′ + d)). The unit selection I is set such that all unit types occur equally often. To ensure convergence we chose T = 10000 epochs. We compare our algorithm to a standard multilayer perceptron (MLP) with tanh activation functions and possible hyperparameters: λ as for EQL, number of layers L ∈ {2, 3}, and number of neurons k ∈ {5, 10, 20}. A second baseline is given by epsilon support vector regression (SVR) (Basak et al., 2007) with two hyperparameters C ∈ 10^{−3,−2,−1,0,1,2,3,3.5} and ε ∈ 10^{−3,−2,−1,0}, using a radial basis function kernel with width γ ∈ {0.05, 0.1, 0.2, 0.5, 1.0}.

Figure 8: Interpolation performance (a) and extrapolation performance (b) (on the noise-free test set) depending on the number of data points and the size of the additive noise for the kin-4-end dataset as an illustration. The white line represents an arbitrary threshold below which we consider a successful solution of the interpolation and extrapolation task.
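The pendulum training-data generation just described can be sketched as follows; the gravitation constant g = 9.81 is an assumed value, the text only fixes h = 2 and σ = 0.01:

```python
import numpy as np

def make_pendulum_data(n=1000, h=2.0, sigma=0.01, g=9.81, seed=0):
    """Sample the pendulum task: inputs uniform in [-h, h]^2,
    targets (x2, -g*sin(x1)) from Eq. (9), plus Gaussian noise."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-h, h, size=(n, 2))
    Y = np.stack([X[:, 1], -g * np.sin(X[:, 0])], axis=1)
    return X, Y + rng.normal(0.0, sigma, size=Y.shape)

X_train, Y_train = make_pendulum_data()
```

The extrapolation test sets would be drawn the same way, restricted to the ring-shaped domains outside the training hypercube described above.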
[Figure 8 heatmaps: interpolation and extrapolation error as a function of the number of data points and the noise level (SNR)]

method | interpolation | extrapol. (near) | extrapol. (far)
EQL | 0.0102 ± 0.0000 | 0.012 ± 0.002 | 0.016 ± 0.007
MLP | 0.0138 ± 0.0002 | 0.150 ± 0.012 | 0.364 ± 0.036
SVR | 0.0105 | 0.041 | 0.18

Numeric results are reported in Tab. 1. As expected, all models are able to interpolate well with a test error on the order of the noise level (σ = 0.01). For extrapolation, however, the performances differ between the approaches. For MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation but also fails catastrophically on the far extrapolation data. EQL, on the other hand, extrapolates well, both near and far away from the training domain. The reasons can be seen in Figure 2: while the MLP and SVR simply learn a function that interpolates the training values, EQL finds the correct functional expression and therefore predicts the correct values for any input data.

Double pendulum kinematics. The second system we consider is a real double pendulum where the forward kinematics should be learned. For that we use recorded trajectories of a real double pendulum (Schmidt & Lipson, 2009). The task here is to learn the position of the tips of the double pendulum segments from the given joint angles (x1, x2). These positions were not measured, so we supply them by the following formulas: y1 = cos(x1), y2 = cos(x1) + cos(x1 + x2), y3 = sin(x1), y4 = sin(x1) + sin(x1 + x2), where (y1, y3) and (y2, y4) correspond to x-y-coordinates of the first and second end-point respectively. The dataset contains two short trajectories. The first

Figure 3: Double pendulum kinematics. (a) training trajectory (in y-space). (b) extrapolation test trajectory (in y-space) with output of a learned EQL instance. (c) slices of output y4 for inputs x1 = x2 = x for the true system, one of the EQL, MLP, and SVR instances. (d) numeric results, see Tab. 1 for details. Note that predicting 0 would yield a mean error of 0.84.

covers only part of the domain (input as well as output) and consists of 819 samples where 10% was used as validation set (randomly sampled), see Fig. 3(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments such that a much larger domain is covered. Nevertheless the angle values are confined to [−π, π]. We use this trajectory as extrapolation test set. The trajectory and the outputs of our method are shown in Fig. 3(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see Fig. 3(c).
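The forward kinematics used to supply the tip positions translate directly into code; this sketch just restates the four formulas above:

```python
import numpy as np

def double_pendulum_tips(x1, x2):
    """Tip coordinates of both segments from the joint angles:
    (y1, y3) is the first tip, (y2, y4) the second."""
    y1 = np.cos(x1)
    y2 = np.cos(x1) + np.cos(x1 + x2)
    y3 = np.sin(x1)
    y4 = np.sin(x1) + np.sin(x1 + x2)
    return y1, y2, y3, y4
```

Note that the targets are built purely from sin and cos of sums of the inputs, so the true function lies inside the EQL hypothesis class, which is why perfect extrapolation is attainable here.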
The performance of MLP is off already near the training domain. SVR is a bit better, but still does not give usable predictions for the test data, see also the root mean squared error in Fig. 3(d).

Model selection is performed to determine λ as above, u = v ∈ {3, 5} (MLP: k ∈ {5, 10, 20}), and layer number L ∈ {2, 3}.

Robotic arms. A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long. For training, the arm is controlled by sinusoidal joint target angles with amplitude in [−π/2, π/2], each joint with a different frequency. The number of data points are: 3000, 6000, and 18000 for the 3, 4, and 5 segment arms respectively, with added noise as above. For testing extrapolation performance, the amplitude [−π, π] was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (kin-3-end, kin-4-end) and the coordinates of all segment positions (kin-5-all). The numerical results, see Tab. 2, show that our method is able to extrapolate in these cases. Model selection as above with u = v ∈ {10, 20} (MLP: k ∈ {10, 50}) and layer number L ∈ {2, 3, 4}. To illustrate the dependence on the amount of

Figure 2: Learning pendulum dynamics. (a) slices of outputs y1 (left) and y2 (right) for inputs x1 = x2 = x for the true system equation (Eq. 9) and one of the EQL, MLP, SVR instances. The shaded area marks the training region and the vertical bars show the size of the near and far extrapolation domain. (b) one of the learned networks. Numbers on the edges correspond to the entries of W and numbers inside the nodes show the bias values b. All weights with |w| < 0.01 and orphan nodes are omitted. Learned formulas: y1 = 0.103x2, y2 = sin(−x1), which are correct up to symmetry (1/g = 1.01).

[Figure 3 panels (a)-(c): training data, test data, and method outputs; (d) extrapolation error: EQL 0.0003 ± 0.00003, MLP 0.58 ± 0.03, SVR 0.25]

Table 2: Extrapolation performance for kinematics of robotic arms. See Tab. 1 for details. Standard deviations for 5 random initializations. Interpolation error for all methods is around 0.012-0.02.

noise and the number of available training points, we provide a quantification in Appendix A2. In short, increasing noise can be compensated by an increasing amount of data to keep the performance.

Learning complex formulas. In order to find out whether EQL can also learn more complicated formulas, we consider three examples with four-dimensional input and one-dimensional output:

y = 1/3 (sin(π x1) + sin(2π x2 + π/8) + x2 − x3 x4)    (F-1)
y = 1/3 (sin(π x1) + x2 cos(2π x1 + π/4) + x3 − x4²)    (F-2)
y = 1/3 ((1 + x2) sin(π x1) + x2 x3 x4)    (F-3)

The first equation requires only one hidden layer to be represented. The second and third equations should require two hidden layers. In particular, F-2 contains a product of x2 and cos, and F-3 contains a product of three terms, and we use it to test if our restriction to only pairwise product units causes problems for more complex target functions. We follow the same procedure as in the pendulum case for building training and test sets, though with h = 1 as input data range. We use 10000 points for training set and validation set (90%-10% split) and 5000 points for each of the test sets. Model selection for EQL is performed as above using the number of layers L ∈ {2, 3, 4}.
The number of units is set to u = v = 10. For the MLP, we select L and λ from the same set as above, as well as k ∈ {10, 30}.

X-Ray transition energies. As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms one can excite them and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different as they correspond to the potential difference between the electron shells, such that one can identify elements in a probe this way. The data is taken from Deslattes et al. (2003), where we consider one specific transition, called the Kα2 line, because it was measured for all elements. The true relationship between atomic number Z and transition energies is complicated, as it involves many body interactions and no closed-form solution exists. Nevertheless we can find out which relationships our system proposes. It is known that the main relationship is Kα2 ∝ Z² according to Moseley's law. Further correction terms for elements with larger Z are potentially of higher order. We have data for elements with 10 ≤ Z ≤ 100, which is split into training/validation sets in the range [10, 91] (70/10 data points) and extrapolation test set in the interval [92, 100] (14 data points because of isotopes). Since we have so little data we evaluate the performance for 10 independent training/validation splits.

method | kin-3-end | kin-4-end | kin-5-all
EQL | 0.017 ± 0.000 | 0.012 ± 0.000 | 0.011 ± 0.000
MLP | 0.389 ± 0.014 | 0.415 ± 0.020 | 0.346 ± 0.013
SVR | 0.235 | 0.590 | 0.260"}]
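For reference, the planar forward kinematics underlying the robotic-arm tasks (kin-3-end, kin-4-end, kin-5-all) can be sketched as below. The cumulative-angle convention is the usual one for planar chains and is an assumption, as the text does not spell it out:

```python
import numpy as np

def arm_end_effector(angles, seg_len=0.5):
    """Planar forward kinematics: joint angles accumulate along the chain,
    and each 0.5-unit segment adds its offset to the end-effector position."""
    phi = np.cumsum(angles)              # absolute orientation of each segment
    x = seg_len * np.sum(np.cos(phi))
    y = seg_len * np.sum(np.sin(phi))
    return x, y

# toy usage: a 4-joint arm
x, y = arm_end_effector([0.3, -0.1, 0.5, 0.2])
```

For kin-5-all one would return the partial sums along the chain (all segment positions) rather than only the final point.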
SJCscQcge
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Nina Narodytska Shiya Kasiviswanatha Samsung Research America Mountain View. CA 94043. USA\nNina Narodytska\nMartin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, an Li Zhang. Deep learning with differential privacy. In ACM CCS, 2016..\n[n.narodytska,kasivisw}@gmail.com\nFigure 3: Experiments in Figures 1|and|2[for the high-resolution ImageNet1o00 dataset. The results are again for good images from a set of 1000 randomly selected images. We use a slightly modified version of Algorithm RANDADv that perturbs a set of 50 pixels.\nDeep neural networks are powerful and popular learning models that achieve state. of-the-art pattern recognition performance on many computer vision, speech, and. language processing tasks. However, these networks have also been shown suscep. tible to carefully crafted adversarial perturbations which force misclassification of. the inputs. Adversarial examples enable adversaries to subvert the expected system behavior leading to undesired consequences and could pose a security risk when. these systems are deployed in the real world..\nKen Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Return of the devil in th details: Delving deep into convolutional nets. In BMVC, 2014b\nWe consider the general k-misclassification problem (Definition 1) where an adversarial attacl ensures that the true label does not appear in the top-k predictions of the network. We utilize : local-search procedure, which is an incomplete search procedure that is widely used for solving. combinatorial problems appearing in diverse domains such as graph clustering, scheduling, logisticss. and verification (Lenstra1997). For a general optimization problem it works as follows. Consider ar. objective function f(z) : Rn _> R where the goal is to minimize f(z). The local-search procedur. works in rounds, where each round consists of two steps. Let zi-1 be the solution iterate after rounc. i - 1. Consider round i. The first step is to select a small subset of points Z = {z1, ..., Zn}, a sc. called local neighborhood, and evaluate f(z) for every z; E Z. Usually, the set Z consist of points. that are close to current z,-1 for some measure of distance which is domain specific. The second step. selects a new solution z, taking into account the previous solution zi-1 and the points in Z. Hence. Z; = g(f(zi-1), f(z1), ..., f(zn)), where g is some pre-defined transformation function.\nAlhussein Fawzi, Omar Fawzi, and Pascal Frossard. Analysis of classifiers' robustness to adversarial perturbations. CoRR, abs/1502.02590, 2015.\nIn this work, we focus on deep convolutional neural networks and demonstrate. that adversaries can easily craft adversarial examples even without any interna. knowledge of the target network. Our attacks treat the network as an oracle (black box) and only assume that the output of the network can be observed on the probed inputs. Our first attack is based on a simple idea of adding perturbation. to a randomly selected single pixel or a small set of them. We then improve the. effectiveness of this attack by carefully constructing a small set of pixels to perturb by using the idea of greedy local-search. Our proposed attacks also naturally. extend to a stronger notion of misclassification. Our extensive experimental results. illustrate that even these elementary attacks can reveal a deep neural network's. vulnerabilities. 
The simplicity and effectiveness of our proposed schemes mean that they could serve as a litmus test for designing robust networks.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Shixiang Gu and Luca Rigazio. Towards deep neural network architectures robust to adversarial examples. In ICLR Workshop, 2015.

(a) First, we need to define the cost function f. Let I be the image (with true label c(I)) whose adversarial image we want to generate for a target neural network NN. For some input image Î we use the objective function f_{c(I)}(Î), which equals the probability assigned by the network NN that the input image Î belongs to class c(I). More formally, f_{c(I)}(Î) = o_{c(I)} where NN(Î) = (o_1, ..., o_C), with o_j denoting the probability as determined by NN that image Î belongs to class j. Our local-search procedure aims to minimize this function.
(b) Second, we consider how to form a neighborhood set of images. As mentioned above, the local-search procedure operates in rounds. Let I_{i−1} be the image after round i − 1. Our neighborhood will consist of images that are different in one pixel from the image I_{i−1}. In other words, if we measure the distance between I_{i−1} and any image in the neighborhood as the number of perturbed pixels, then this distance is the same (equal to one) for all of them. Therefore, we can define the neighborhood in terms of a set of pixel locations. Let (P_X, P_Y)_i be a set of pixel locations. For the first round (P_X, P_Y)_0 is randomly generated. At each subsequent round, it is formed based on a set of pixel locations which were perturbed in the previous round. Let (P*_X, P*_Y)_{i−1} denote the pixel locations that were perturbed in round i − 1 (formally defined below). Then

(P_X, P_Y)_i = ∪_{(a,b) ∈ (P*_X, P*_Y)_{i−1}} { (x, y) : x ∈ [a−d, a+d], y ∈ [b−d, b+d] }

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448-456, 2015.

Jan Karel Lenstra. Local search in combinatorial optimization. Princeton University Press, 1997.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2014.

In this paper, we investigate the problem of robustness of state-of-the-art convolutional neural networks (CNNs) to simple black-box adversarial attacks. The rough goal of adversarial attacks is as follows: Given an image I that is correctly classified by a machine learning system (say, a CNN), is it possible to construct a transformation of I (say, by adding a small perturbation to some or all the pixels) that now leads to misclassification by the system? Since large perturbations can trivially lead to misclassification, the attacks seek to limit the amount of perturbation applied under some chosen metric. More often than not, in these attacks, the modification done to the image is so subtle that the changes are imperceptible to a human eye. Our proposed attacks also share this property, in addition to being practical and simplistic, thus highlighting a worrying aspect about the lack of robustness prevalent in these modern vision techniques.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In CVPR, 2016."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Eli Biham and Adi Shamir. Differential cryptanalysis of DES-like cryptosystems.
Journal of Cryptology, 4(1):3-72, 1991.

Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise. arXiv preprint arXiv:1608.08967, 2016.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MIT Press, 2016. URL http://www.deeplearningbook.org.

We adapt this general procedure to search critical sets efficiently as explained below. Our optimization problem will try to minimize the probability that the network determines a perturbed image has the class label of the original image, and by using a local-search procedure we generate perturbed images which differ from the original image in only a few pixels. Intuitively, in each round, our local-search procedure computes an implicit approximation to the gradient of the current image by understanding the influence of a few pixels on the output, which is then used to update the current image.

Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.

f_{c(I)}(Î) = o_{c(I)}, where NN(Î) = (o_1, ..., o_C)

Patrick McDaniel, Nicolas Papernot, and Z. Berkay Celik. Machine learning in adversarial settings. IEEE Security & Privacy, 14(3):68-72, 2016.

(P_X, P_Y)_i = ∪_{(a,b) ∈ (P*_X, P*_Y)_{i−1}} { (x, y) : x ∈ [a−d, a+d], y ∈ [b−d, b+d] }

where d is a parameter. In other words, we consider pixels that were perturbed in the previous round, and for each such pixel we consider all pixels in a small square with the side length 2d centered at that pixel. This defines the neighborhood considered in round i.
(c) Third, we describe the transformation function g of a set of pixel locations. The function g takes as input an image I, a set of pixel locations (P_X, P_Y), a parameter t that defines how many pixels will be perturbed by g, and two perturbation parameters p and r. In round i of the local-search procedure, the function g(I_{i−1}, (P_X, P_Y)_{i−1}, t, p, r) outputs a new image, such that exactly t pixels of I_{i−1} are perturbed, and an auxiliary set of pixel locations (P*_X, P*_Y)_i to record which t pixels were perturbed at this round, so we have (I_i, (P*_X, P*_Y)_i) = g(I_{i−1}, (P_X, P_Y)_{i−1}, t, p, r). Next we describe the transformations that g performs in round i. As the first step, g constructs a set of perturbed images based on (P_X, P_Y)_{i−1}: {PERT(I_{i−1}, p, (x, y))} for (x, y) ∈ (P_X, P_Y)_{i−1}, where PERT is the perturbation function defined through (1). Then it computes the score of each image Î in this set as score(Î) = f_{c(I)}(Î), and it sorts (in decreasing order) the images based on the above score function to construct

There are two different lines of attacks, based on different assumptions about the adversarial knowledge of the target network. The first line of work assumes that the adversary has detailed knowledge of the network architecture and the parameters resulting from training (or access to the labeled training set) (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016c). Using this information, an adversary constructs a perturbation for a given image. The most effective methods are gradient-based: a small perturbation is constructed based on the gradients of the loss function w.r.t. the input image and a target label. Often, adding this small perturbation to the original image leads to a misclassification. In the second line of work an adversary has restricted knowledge about the network, from being able to only observe the network's output on some probed inputs (Papernot et al., 2016b).
Our work falls into this category. While this black-box model is a much more realistic and applicable threat model, it is also more challenging because it considers weak adversaries without knowledge of the network architecture, parameters, or training data. Interestingly, our results suggest that this level of access and a small number of queries provide sufficient information to construct an adversarial image.

Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. Faster R-CNN: towards real-time object detection with region proposal networks. In NIPS, pp. 91-99, 2015.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014.

Table 1: The top row shows the original images and the bottom row shows the perturbed images. The misclassification is as follows: (a) a stingray misclassified as a sea lion, (b) an ostrich misclassified as a goose, (c) a jay misclassified as a junco, and (d) a water ouzel misclassified as a redshank.

As we operate in a black-box setting, we use a gradient-free approach to adversarial image generation. Papernot et al. (2016b) were the first to discuss a black-box attack against deep learning systems. Their attack crucially relies on the observation that there is a transferability (generalization) property in adversarial examples, i.e., adversarial examples from one model transfer to another. Our proposed attacks, on the other hand, are much more simple and direct, do not require this transferability property, and hence are more effective in constructing adversarial images, in addition to having some other computational advantages. We demonstrate that our method is capable of constructing adversarial images for several network architectures trained on different datasets. In particular, in this paper we consider the CIFAR10, MNIST, SVHN, STL10, and ImageNet1000 datasets, and two popular network architectures, Network-in-Network (Lin et al., 2014) and VGG (Simonyan & Zisserman, 2014). In Table 1 we show four images from the ImageNet1000 dataset. The original images are in the upper row. The bottom row shows the corresponding perturbed images produced by our algorithm, which are misclassified by a VGG CNN-S network (Chatfield et al., 2014a).

Algorithm LOCSEARCHADV shows the complete pseudocode of our local-search procedure. At the high level, the algorithm takes an image as input and, in each round, finds some pixel locations to perturb using the above defined objective function and then applies the above defined transformation function to these selected pixels to construct a new (perturbed) image. It terminates if it succeeds to push the true label below the kth place in the confidence score vector at any round. Otherwise, it proceeds to the next round (for a maximum of R rounds). Note that the number of pixels in an image perturbed by Algorithm LOCSEARCHADV is at most t · R and in practice (see Tables 4, 5 and 6 in Section 6) it is much less.
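A minimal sketch of one round of this greedy local search, in the spirit of Algorithms LOCSEARCHADV and CYCLIC, is shown below. Since the perturbation function PERT (Eq. (1)) is not reproduced in this excerpt, the additive form used here, the [0, 1] pixel bounds, and all names are assumptions:

```python
import numpy as np

def cyclic(v, r, lb=0.0, ub=1.0):
    """Cyclic perturbation (sketch): scale a pixel value by r and wrap it
    back into [lb, ub] so the result stays a valid image value."""
    v = r * v
    if v > ub:
        v = lb + (v - ub)
    elif v < lb:
        v = ub - (lb - v)
    return v

def locsearch_round(img, pxy, true_class, predict, t, p, r):
    """One round: score a one-pixel perturbation at every candidate
    location, keep the t most damaging pixels, and perturb them cyclically.
    predict(img) -> class-probability vector (assumed oracle interface)."""
    scores = []
    for (x, y) in pxy:
        cand = img.copy()
        cand[:, x, y] = np.clip(cand[:, x, y] + p, 0.0, 1.0)  # stand-in for PERT
        scores.append((predict(cand)[true_class], (x, y)))
    scores.sort(key=lambda s: s[0])           # lowest true-class score first
    chosen = [loc for _, loc in scores[:t]]   # the t "best" pixels this round
    for (x, y) in chosen:
        for c in range(img.shape[0]):
            img[c, x, y] = cyclic(img[c, x, y], r)
    return img, chosen
```

The next round's candidate set would then be the union of small squares of half side length d around the pixels in `chosen`, as defined by the neighborhood equation above.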
In round i, we query the network at most the number of times as the number of pixels in (Px, Py); which after the first round is at most 2d 2d t (again in practice this is much less because of the overlaps in the neighborhood squares).\nIn Section 6] we demonstrate the efficacy of Algorithm LOcSEARcHADV in constructing adversaria. images. We first highlight an interesting connection between the pixels perturbed and their influence measured by a notion of called saliency map..\nOur Contributions. In this work, we present simple and effective black-box adversarial attacks or deep convolutional neural networks. We make the following main contributions in this paper\nA Relation to Saliency Maps. Simonyan et al.(2014) introduced the notion of saliency map as a way to rank pixels of the original images w.r.t. their influence on the output of the network. The. intuition is that influential pixels in the saliency map are more likely to be important pixels that represent objects and, for example, can be used for weakly supervised object localization. Formally, let NNc(1)(I) denote the probability assigned to true class c(I) by the network NN on input I E IRlw h\n(1) The first question we investigate is the influence of perturbing a single pixel on the prediction To do so, we devise a simple scheme, based on randomly selecting a single pixel and applying a strong perturbation to it. Somewhat surprisingly, we noticed that a few trails of this random experiment is already quite enough in generating adversarial images for low resolution image sets In fact, in many cases, for misclassification, the amount of perturbation needed to be applied to the\nI = {PERT(I-1,P,(x,y))} (x,y)E(Px,Py)i-1\nVI eI : score(I) = fcD(I)\nand it sorts (in decreasing order) images in I based on the above score function to construct sorted(). Pixels whose perturbation lead to a larger decrease of f are more likely useful in constructing an adversarial candidate. From sorted(), it records a set of pixel locations (Px, P+)i based on the first t elements of sorted(T), where the parameter t regulates the number of pixels perturbed in each round. Formally,\nP*,P*) ={(x,y) : PERr(Ii-1,P, (x,y)) E sorted()[1 : t]}\nwhere sorted()[1 : t] represents the first t sorted images in sorted(T). Finally, I, is constructed. from I,-1 by perturbing each pixel in location (x, y) E (Px, P*) with a perturbation value r. The. perturbation is performed in a cyclic way (as explained in Algorithm CyCLiC) so that we make. sure that all coordinate values in I, are within the valid bounds of LB and UB. Note that at the end of every round i, I, is a valid image from the image space I..\nWe want to point out that the function g uses two perturbation parameters, p and r. The value of. r is kept small in the range [0, 2]. On the other hand, we do not put any explicit restrictions on the. value of p. The best choice of p will be one that facilitates the identification of the \"best' pixels to perturb in each round. In our experiments, we adjust the value of p automatically during the search We defer this discussion to the experimental section..\nselected pixel is also quite small. For high-resolution images, a similar phenomena holds, excep our scheme now picks a random set of around 50 pixels. These simple experiments show the eas of generating adversarial images for modern deep CNNs without knowledge of either the networl. architecture or its parameters. There is however one shortcoming in these approaches in that the. 
perturbed image might have pixel values that are outside some expected range. (2) We overcome this above shortcoming by showing that lower perturbation suffices if we carefull. select the pixels for perturbation. The approach is based the idea of greedy local search, an iterative. search procedure, where in each round a local neighborhood is used to refine the current image an in process minimizing the probability of the network assigning high confidence scores to the tru class label. Again while the algorithm is quite simple, it is rather effective in generating adversaria. images with quite small perturbations. We also show an interesting connection between the pixel chosen for perturbation by our approach and the saliency map of an image, as defined by Simonyar. et al.(2014), that ranks pixels based on their influence on the output score. In effect our approacl. identifies pixels with high saliency scores but without explicitly using any gradient informatior (as needed in the definition of saliency map (Simonyan et al.f2014)). Intuitively, in each round. our local-search based approach computes an implicit approximation to the gradient of the curren image by understanding the influence of a few pixels on the output, which is then used to update. the current image. (3) We perform extensive experimental evaluations, and show that our local-search based approacl reliably generates adversarial examples with little perturbation (even when compared to a recen. elegant adversarial attack proposed by Goodfellow et al.[(2015) which needs perfect knowledg of the network). Another feature of our attack is that, by design, our approach only perturbs a very small fraction of the pixels during the adversarial image generation process (e.g., on the ImageNet1000 dataset we on average perturb only about 0.5% of the pixels per image). Mos previous attacks require the ability to perturb all the pixels in the image. (4) Our approaches naturally extend to a stronger notion of misclassification (that we refer to as. k-misclassification), where the goal is to ensure that the true label of the image does not ever. appear in the top-k predictions of the network (obtained by sorting the confidence score vector This notion especially captures the fact that many modern systems (e.g., ImageNet competitior entrants) are evaluated based on top-k predictions. To the best of our knowledge, these are the firs adversarial attacks on deep neural networks achieving k-misclassification.\nOutput: Success/Failure depending on whether the algorithm finds an adversarial image or not\nInput: Image I with true label c(I) E {1, ..., C}, two perturbation parameters p E R and r E 0, 2], and four other parameters: the half side length of the neighborhood square d E N, the number of pixels perturbed at each round t E N, the threshold k E N for k-misclassification, and an upper bound on the number of rounds R E N.\nSzegedy et al.[(2014) used a box-constrained L-BFGS technique to generate adversarial examples They also showed a transferability (or generalization) property for adversarial examples, in that. adversarial examples generated for one network might also be misclassified by a related network with. possibly different hyper-parameters (number of layers, initial weights, etc.). However, the need for. a solving a series of costly penalized optimization problems makes this technique computationally. expensive for generating adversarial examples. This issue was fixed byGoodfellow et al.(2015]. 
who motivated by the underlying linearity of the components used to build a network proposed an. elegant scheme based on adding perturbation proportional to sign of the network's cost function. gradient. Recently, Moosavi-Dezfooli et al.(2016) used an iterative linearization procedure to. generate adversarial examples with lesser perturbation. Another recent attack proposed byPapernot. et al.(2016c) uses a notion of adversarial saliency maps (based on the saliency maps introduced. by (Simonyan et al.|2014)) to select the most sensitive input components for perturbation. This attack. has been adapted by[Grosse et al.[(2016) for generating adversarial samples for neural networks used. as malware classifiers. However, all these above described attacks require perfect knowledge of the target network's architecture and parameters which limits their applicability to strong adversaries. with the capability of gaining insider knowledge of the target system..\nThe saliency map of I is the matrix M E Rwxh such that Mi,j = maxbe[e] |Wc(1)(b, x, y)|, where Wc(1)(b, x, y) is the element of Wc(1) corresponding to channel b and location (x, y). Pixels with higher scores are considered more influential. In subsequent works, this notion has been extended to adversarial saliency maps that can be useful in generating adversarial perturbations (Papernot et al. 2016c).\nOur focus in this paper is the setting of black-box attacks, where we assume that an adversary ha. only the ability to use the network as an oracle. The adversary can obtain output from supplied input. and use the observed input-output relationship to craft adversarial images' In the context of deej. neural networks, a black-box attack was first proposed byPapernot et al.(2016b) with the motivatior. of constructing an attack on a remotely hosted system?|Their general idea is to first approximate th. target network by querying it for output labels, which is used to train a substitute network, which i. then used to craft adversarial examples for the original network. The success of the attack cruciall depends on the transferability property to hold between the original and the substitute network. Ou. black-box attack is more direct, and completely avoids the transferability assumption, making it fa. more applicable. We also avoid the overhead of gathering data and training a substitute networl. Additionally, our techniques can be adapted to a stronger notion of misclassification..\nComputing the exact saliency scores for an image requires complete access to the network NN, which. we do not assume. However, a natural hypothesis is that the pixels selected by Algorithm Loc. SeARchADv for perturbation are related to pixels with large saliency scores. We use the Ima. geNet1000 dataset to test this hypothesis. In Figure[3] we present some qualitative results. As can be seen from the pictures, the pixels perturbed by Algorithm LocSEARcHADv appear correlated with. pixels with high saliency scores. Quantitatively, we observed that the pixels that occupy top-10%. of the saliency map, on average contain more than 23% of the pixels chosen by Algorithm Loc. SEARcHADv for perturbation (and this overlap only grows when we consider a bigger chunk of. pixels picked by their saliency scores). Note that this is correlation is not though a random occurrence. For an image I, let S1 denote the set of pixels in I that rank among the top-10% in the saliency. map. 
If we pick a random set of around 200 pixels (this is on average number of pixels perturbed per image by Algorithm LocSEARcHADV perturbs, see Table|5), we expect only about 10% to them to intersect with S1 and standard tail bounds show that the probability that at least 23% of. the pixels of this random set intersects with S1 is extremely small?|Therefore, it appears that Al. gorithm LocSEARcHADv rediscovers part of the high salient score pixels but without explicitly. computing the gradients.\nA complementary line of work has focused on building defenses against adversarial attacks. Although. designing defenses is beyond scope of this paper, it is possible that adapting the previous suggested defense solutions such as Jacobian-based regularization (Gu & Rigazio|2015) and distillation (Pa pernot et al.]2016d) can reduce the efficacy of our proposed attacks. Moreover, the recently proposed technique of differentially private training (Abadi et al.]2016) can also prove beneficial here..\nThe study of adversarial instability have led to development of solutions that seeks to improv training to in return increase the robustness and classification performance of the network. I some case, adding adversarial examples to the training (adversarial training) set can act like regularizer (Szegedy et al.]2014] Goodfellow et al.]2015] Moosavi-Dezfooli et al.2016). Th phenomenon of adversarial instability has also been theoretically investigated for certain familie of classifiers under various models of (semi) random noise (Fawzi et al.20152016). However, a we discuss later, due to peculiar nature of adversarial images generated by our approaches, a simpl adversarial training is only mildly effective in preventing future similar adversarial attacks.\nThe security of machine learning in settings distinct from deep neural networks is also an area of active research with various known attacks under different threat models. We refer the reader to a recent survey byMcDaniel et al.(2016) and references therein.\nDatasets. We use 5 popular datasets: MNIST (handwritten digits recognition dataset), CIFAR10. (objects recognition dataset), SVHN (digits recognition dataset), STL1O (objects recognition dataset) and ImageNet1000 (objects recognition dataset).\nNotation and Normalization. We denote by [n] the set {1,..., n}. The dataset of images is partitioned into train and test (or validation) subsets. An element of a dataset is a pair (I, c(I)) foi an image I and a ground truth label c(I) of this image. We assume that the class labels are drawn from the set {1, ..., C}, i.e., we have a set of C E N possible labels. We assume that images have l channels (in experiments we use the RGB format) and are of width w E N and height h E N. We say that (b, x, y) is a coordinate of an image for channel b and location (x, y), and (+, x, y) is a pixel of an image where (+, x, y) represents all the l coordinates corresponding to different channels at location (x, y). I(b, x, y) E R is the value of I at the (b, x, y) coordinate, and similarly I(*, x, y) e R represents the vector of values of I at the (*, x, y) pixel.\nModels.We trained Network-in-Network (Lin et al.]2014) and VGG (Simonyan & Zisserman. 2014) for MNIST, CIFAR, SVHN, STL10, with minor adjustments for the corresponding image sizes Network-in-Network is a building block of the commonly used GoogLeNet architecture that has demonstrated very good performance on medium size datasets, e.g. 
CIFAR10 (Zagoruyko] 2015) VGG is another powerful network that proved to be useful in many applications beyond image. classification, like object localization (Ren et al. 2015). We trained each model in two variants: with and without batch normalization (Ioffe & Szegedy2015). Batch normalization was placed before a ReLU layer in all networks. For the ImageNet1o00 dataset, we used pre-trained VGG models from (Chatfield et al.|2014b) (we did not train them from scratch due to limited resources). All Caffe VGG models were converted to Torch models using the loadcaffe package (Zagoruyko]2016a] These models use different normalization procedures which we reproduced for each model based on provided descriptions. Tables4|and 5](the second column ERRTop-1) show the top-1 (base. error for all datasets and models that we considered. The results are comparable with the known state-of-the-art results on these datasets (Benenson2016).\nIt is a common practice to normalize the image before passing it to the network. A normalized. image has the same dimension as the original image, but differs in the coordinate values. In this work we treat the normalization procedure as an external procedure and assume that all images are normalized. As we always work with normalized images, in the following, a reference to image means a normalized input image. We denote by LB and UB two constants such that all the coordinates of. all the normalized images fall in the range [LB, UB]. Generally, LB < 0 and UB > 0. We denote by I C Rl wh the space of all (valid) images which satisfy the following property: for every I E I, for. all coordinates (b, x, y) E [l] [w] [h], I(b, x, y) E [LB, UB].\nRelated Techniques. There are quite a few approaches for generating adversarial images (as. discussed in Section2). Most of these approaches require access to the network architecture and its parameter values [Szegedy et al.]2014 [Goodfellow et al.]2015] Moosavi-Dezfooli et al.]2016 Papernot et al.2016cj1 The general idea behind these attacks is based on the evaluating the. network's sensitivity to the input components in order to determine a perturbation that achieves the adversarial misclassification goal. Among these approaches, the attack approach (known as.\nWe start by describing our experimental setup. We used Caffe and Torch machine learning frameworks. to train the networks. All algorithms to generate adversarial images were implemented in Lua within Torch 7. All experiments were performed on a cluster of GPUs using a single GPU for each run..\nWe denote by NN a trained neural network (trained on some set of training images). NN takes an. image I as an input and outputs a vector NN(I) = (o1, ..., 0c), where o; denotes the probability. as determined by NN that image I belongs to class j. We denote (NN(D),k) a function that returns a set of indices that are the top-k predictions (ranked by decreasing probability scores with.\n9we can also use here a standard hypothesis testing for a proportion. The null-hypothesis is that the probability of intersection equals 0.1 as with random Bernoulli trails, and test statistic Z = (0.23 - 0.1)//(0.1)(1 - 0.1)/200 = 6.12 indicates that the null-hypothesis can be rejected at significance level 0.01. 1Or\nties broken arbitrarily) of the network NN. For example, if NN(I) = (0.25, 0.1, 0.2, 0.45), then (NN(I),1) = {4} (corresponding to the location of the entry 0.45). Similarly, (NN(I),2) = {4,1}, (NN(I),3) ={4, 1, 3}, etc.\nAdversarial Goal. 
Before we define the goal of black-box adversarial attacks, we define misclassi fication for a NN. In this paper, we use a stronger notion of misclassification, which we refer to as k-misclassification for k E N.\nIn other words, k-misclassification means that the network ranks the true label below at least k other labels. Traditionally the literature on adversarial attacks have only considered the case where k = 1 Note that an adversary that achieves a k-misclassification for k > 1 is a stronger adversary than one achieving an 1-misclassification (k-misclassification implies k'-misclassification for all 1 < k' < k) If k = 1, we simply say that NN misclassifies the image.\nIn our setting, an adversary Adv is a function that takes in image I as input and whose output is another image ADv(I) (with same number of coordinates as I). We define an adversarial image as. one that fools a network into k-misclassification..\nDefinition 2 (Adversarial Image) Given access to an image I, we say that an ADv(I) is a k- adversarial image (resp. adversarial image) if c(I) E (NN(I), k) and c(I) (NN(ADv(I)), k (resp. c(I) E (NN(I),1) and c(I) (NN(ADv(I)),1))\nThe goal of adversarial attacks is to design this function ADv that succeeds in fooling the network fo. a large set of images. Ideally, we would like to achieve this misclassificatior3 by adding only some. small perturbation (under some metric) to the image. The presence of adversarial images shows tha there exist small perturbations in input that produce large perturbations at the output of the last laye\nAdversarial threat models can be divided into two broad classes|The first class of models roughl. assumes that the adversary has a total knowledge of the network architecture and the parameter resulting from training (or access to the labeled training set). The second class of threat models, a considered in this paper, make no assumptions about the adversary having access to the networl. architecture, network parameters, or the training set. In this case, the adversary has only a black-bo. (oracle) access to the network, in that it can query the network NN on an image I and observe the. output NN(I). In our experimental section (Section|6), we also consider a slight weakening of thi. black-box model where the adversary has only the ability to use a proxy of the network NN as al. Oracle.\nTable 3: Results on ImageNet1000 using VGG CNN-S (Caffe) network (Chatfield et al.J2014a Columns from left to right: the original image, top 150 pixels chosen according to their saliency. scores (in white), the absolute difference between the perturbed image and the true image (the pixels that are perturbed appear in white), and the perturbed image. Adversarial misclassification (rows from top to bottom): a ruffed grouse misclassified as a frilled lizard, an artichoke misclassified as a sleeping bag, a bubble misclassified as a fountain, and a hare misclassified as a cheetah..\nA black-box threat model in the context of deep neural networks was first considered byPaperno1. et al.(2016b). There is however one subtle difference between the threat model considered here. and that considered by Papernot et al.[(2016b) in what the adversary can access as an output. While the adversary presented in (Papernot et al.||2016b) requires access to the class label assigned by the. network which is the same level of access needed by our simple randomized adversary (presented in. 
Section4), our local-search adversary (presented in Section5) requires access to oc(1) (the probability assigned to the true label c(I) by the network on input I) and the vector (for checking whether. k-misclassification has been achieved). Our adversarial approaches does not require access to the. complete probability vector (NN(I)). Also as pointed out earlier, compared to (Papernot et al.. 2016b), our approach is more direct (needs no transferability assumption), requires no retraining, and. can be adapted to achieve k-misclassification rather than just 1-misclassification..\nthe \"fast-gradient sign method') suggested by[Goodfellow et al.(2015) stands out for being able to efficiently generate adversarial images. Here we compare the performance of our local-search basec attack against this fast-gradient sign method1\nFor completeness, we now briefly explain the fast-gradient sign method of Goodfellow et al.[(2015) Given an image Io, a label a E {1,...,C}, and a network NN, the fast-gradient sign method sign(V 1=1, Loss(NN(I), a)) is the sign of the network's cost function gradient (here Loss(NN(I), a). denotes the loss function of the network NN given input I and class a). We vary a over all possible labels in the dataset and choose the best result where this procedure is successful in generating an adversarial image. Without general guidelines for setting e, we experimented with several values of e starting from 0.07 and increasing this number. We found that the value e = 0.212|was the smallest. value where the fast-gradient sign method started to yield competitive performance compared to our algorithm. Smaller values of e leads to generation of fewer adversarial images, e.g., at e = 0.1, the percentage of generated adversarial images is reduced by around 10% as compared to the value at. e = 0.2 for the CIFAR10 dataset on the Network-in-Network model. Larger values of e tends to"}, {"section_index": "3", "section_name": "BLACK-BOX GENERATION: A FIRST ATTEMPT", "section_text": "2For the ImageNet1000 dataset, we set e differently as discussed later.\nIn this section, we present a simple black-box adversary that operates by perturbing a single pixel (or. a small set of pixels) selected at random. In the next section, we build upon this idea to construct an adversary that achieves better success by making adaptive choices..\n11 Another reason for picking this approach for comparison is that it is also heavily utilized in the recent black-box attack suggested byPapernot et al.(2016b), where they require additional transferability assumptions which is not required by our attack.\n3Note that the misclassification is at test time, once the trained network has been deployed. 4More fine-grained classification has also been considered in (Papernot et al.|2016c) where adversaries are categorized by the information and capabilities at their disposal.\ngenerate more adversarial images, but this comes at the cost of an increase in the perturbation. As we. discuss later, our local-search based approach yields better results than the fast-gradient sign methoc in both the volume of adversarial images generated and the amount of perturbation applied. Anothe. important point to remember is that unlike the fast-gradient sign method, our approach is based or. a weaker and more realistic assumption on the adversarial power, making our attacks more widely. applicable.\nPower of One Pixel. Starting point of our investigation is to understand the influence of a singl pixel in an adversarial setting. 
Most existing adversarial attacks operate by applying the sam perturbation on each individual pixel while minimizing the overall perturbation (Szegedy et al.|2014 Goodfellow et al.|2015f [Moosavi-Dezfooli et al.|2016), while recent research have yielded attack that perturb only a fraction of the pixels (Papernot et al.|2016c bf|Grosse et al.||2016). However, in al these cases, no explicit restriction is placed on the number of pixels that can be perturbed. Therefore it is natural to ask: whether it is possible to force the network to misclassify an image by modifying a single pixel? If so, how strong should this perturbation be? We run several experiments to she light on these questions. For simplicity, in this section, we focus the case of 1-misclassification, eve though all discussions easily extend to the case of k-misclassification for k > 1. We begin with useful definition.\nImplementing Algorithm LocSeARcHADv. For each image I, we ran Algorithm Loc SEARcHADv for at most 150 rounds, perturbing 5 pixels at each round, and use squares of side length 10 to form the neighborhood (i.e., R = 150, t = 5, d = 5). With this setting of parameters we perturb a maximum of t R = 750 pixels in an image. The perturbation parameter p was adaptively adjusted during the search. This helps in faster determination of the most helpful pixels in generating the adversarial image. Let I be the original image. For some round i of the algorithm define Oc(1) = avg(x,y){0c(1) : (x, y) E (P*, P+)i-1}, where Oc(1) is the probability assigned to class label c(I) in NN(PeRr(I,-1,P, x, y)) (here Oc(1) provides an approximation of the average confidence of the network NN in predicting the true label over perturbed images). At each round we increase the value of p if oc(1) is close to one and decrease p if oc(1) is low, e.g., below O.3. For Algorithm CyCLIC, we set r = 3/2. To avoid perturbing the most sensitive pixels frequently, we make sure that if a pixel is perturbed in a round then we exclude it from consideration for the next 30 rounds.\nDefinition 3 (Critical Pixel) |Given a trained neural network NN and an image I, a pixel (+, x, y. in I is a critical pixel if a perturbation of this pixel generates an image that is misclassified by the network NN. In other words, (+, x, y) is a critical pixel in I if there exists another neighboring image Ip which differs from I only in values at the pixel location (x, y) such that c(I) (NN(Ip), 1).\nExperimental Observations. For ease of comparison with the fast-gradient sign method (Good fellow et al.]2015), we set k = 1 and focus on achieving 1-misclassification. Tables|4|and[5 show the results of our experiments on the test sets. The first column shows the dataset name. The sec ond column (ERRTop-1) presents the top-1 misclassification rate on the corresponding test dataset without any perturbation (base error). ERRTop-1(ADv) is the top-1 misclassification rate where each original image in the test set was replaced with an generated perturbed image (using either ou. approach or the fast-gradient sign method (Goodfellow et al.2015) which is denoted as FGsM)13\nf(I(b,u,v) if x # u or y F r defn (x,y) p sign(I(b, u, v) otherwise\nIn the following, we sa pixel (*.x.u) in image I is critical iff c(D) d (NN(I.~ 9\nCritical Pixels are Common. Our first experiment is to investigate existence of critical pixels ii. he considered dataset of images. 
To do so, we perform a simple procedure that picks a location (x, y he perturbed image is run through the trained network, and we check whether it was misclassified o an exhaustively repeat this procedure for all pixels in an image, for computational efficiency w nstead perform it only on a fraction of randomly chosen pixels, and our results somewhat surprisingl. suggest that in many cases this is sufficient to generate an adversarial image. Algorithm RANDAD. resents the pseudo-code for this experiment. Algorithm RANDADv, selects U random pixels (wit eplacement) and performs checks whether the pixel is critical or not. The algorithm output is ai inbiased estimate for the fraction of critical pixels in the input image I. Note that the algorithm ca. ail in generating an adversarial image (i.e., in finding any critical pixel for an image). The following. lefinition will be useful for our ensuing discussion..\n1 1 |I(b,x,y) - ADv(I)(b,x,y) PTB TADV l xwxh IETADV b,x,y\nwhere I E Rlwxh is the original image and ADv(I) E Rlx wxh is the corresponding adversaria image. Note that the inner summation is measuring the L1-distance between I and ADv(I). The #pTBPixELS column shows the average percentage of perturbed pixels in the successful adversaria images. Similarly, TImE column shows the average time (in seconds) to generate a successfu adversarial image. Finally, the last column indicates the type of network architecture.\nAs is quite evident from these results, Algorithm LocSEARcHADv is more effective than the fasi gradient sign method in generating adversarial images, even without having access to the network architecture and its parameter values. The difference is quite prominent for networks trained with batch normalization as here we noticed that the fast-gradient sign method has difficulties producing adversarial images14Another advantage with our approach is that it modifies a very tiny fractior of pixels as compared to all the pixels perturbed by the fast-gradient sign method, and also in many cases with far less average perturbation. Putting these points together demonstrates that\nOur first observation is that sometimes even small perturbation to a pixel can be sufficient to obtain an adversarial image. Table2 shows two images and their adversarial counterparts, with p = 1. Often original and adversarial images are indistinguishable to the human eye, but sometimes the critical pixel is visible (Table2).\n13Note that by explicitly constraining the number of pixels that can be perturbed, as we do in our approach. it might be impossible to get to a 1o0% misclassification rate on some datasets. Similarly, the fast-gradient sign method fails to achieve a 100% misclassification rate even with larger values of e (Moosavi-Dezfooli et al. 2016).\n5In the definition of critical pixel we have not considered how well the original image I is classified by NN, i.e., whether c(I) E (NN(I), 1). In particular, if c(I) (NN(I), 1) then by definition all pixels in the image are critical even without any perturbation. 
In our experiments, we ignore these images and only focus on images I where c(I) E (NN(I), 1), which we refer to as good images (Definition|4)\n*In general, we observed that models trained with batch normalization are somewhat more resilient tc adversarial perturbations probably because of the regularization properties of batch normalization (Ioffe & Szegedy2015)\nThe image I, can be generated in multiple ways, here we consider a class of sign-preserving perturbation functions defined as follows. Let PeRT(I, p, x, y) be a function that takes as input an\nIn the following, we say an adversarial generation technique ADv, given an input image I, succeeds in generating an adversarial image ADv(I) for a network NN iff c(I) E (NN(I),1) and c(I) (NN(ADv(I)), 1). The CONF column shows the average confidence over all successful adversarial images for the corresponding technique. The PTB column shows the average (absolute) perturbation added per coordinate in cases of successful adversarial generation. More formally, let T denote the test set and TApy C T denote the set of images in T on which ADv is successful. Then,.\n(a) original (b) perturbed (c) original (d) perturbed\nTable|5|shows the results for several variants of VGG network trained on the ImageNet1000 dataset These networks do not have batch normalization layers (Chatfield et al.] 2014b] Zagoruyko] 2016a] We set e = 1 for the fast-gradient sign method as a different pre-processing technique was used fo this network (we converted these networks from pre-trained Caffe models). Results are similar tc that observed on the smaller datasets. In most cases, our proposed local-search based approach is more successful in generating adversarial images while on average perturbing less than 0.55% of the pixels.\nTable 2: The row contains original images followed by misclassified images where only one pixel (pointed using a black arrow) was perturbed with perturbation parameter p = 1. After perturbation in the first case (images (a) and (b)) an automobile gets misclassified as a truck, and in the second case (images (c) and (d)) a cat gets misclassified as a dog.\nCase of Larger k's. We now consider achieving k-misclassification for k 1 using Algo rithm LocSeARcHADv. In Table [6] we present the results as we change the goal from 1 misclassification to 4-misclassification on the CIFAR10 dataset. We use the same parameters as before for Algorithm LocSeARchADv. As one would expect, as we increase the value of k the effectiveness of the attack decreases, perturbation and time needed increases. But overall ou local-search procedure is still able to generate a large fraction of adversarial images at even k = with a small perturbation and computation time, meaning that these images will fool even a systen that is evaluated on a top-4 classification criteria. We are not aware of a straightforward extension o the fast-gradient sign method (Goodfellow et al.2015) to achieve k-misclassification.\nWe also tried to understand the effect of larger perturbation parameter values. We set U to half the. number of pixels in each image. After usual training of the neural network using the training set (se. Section[6|for more details about training), we ran Algorithm RANDADv on 1000 randomly drawr. images from the test set of the corresponding dataset. In our experiments, we varied perturbatior parameter in the range {1, 5, 10, 100}. Before we consider our results, we note some of the perturba. 
tion values that we use to construct the adversarial image might construct images that are not in the original image space|However, these results are still somewhat surprising, because even though we. allow large (even out-of-range) perturbation, it is applied to exactly one pixel in the image, and i. appears that it suffices to even pick the pixel at random..\nFigures1and2|show results for 4 datasets (more details about the datasets and the networks are presented in Section[6). On the x-axis we show the perturbation parameter p. In Figure[1] the y-axis represents the output of Algorithm RANDADv averaged over good images for the network7|The. first observation that we can make is that the critical pixels are common, and in fact, as p grows. the fraction of critical pixels increases. For example, in CIFAR10, with p = 100, almost 80% (or average) of the pixels randomly selected are critical. In Figure[2l the y-axis represents the fraction. of successful adversarial images generated by Algorithm RANDADv, i.e., fraction of inputs where Algorithm RANDADv is successful in finding at least one critical pixel. Again we notice that as p. grows it gets easier for Algorithm RANDADv to construct an adversarial image..\nWe trained several modifications of Network-in-Network model for the CIFAR10 dataset, varying. the initial value of the learning rate, the size of filters, and the number of layers in the network. We. observed that between 25% to 43% of adversarial images generated by Algorithm LocSEARcHADV using the original network were also adversarial for these modified networks (at k = 1). The. transferability of adversarial images that we observe here has also been observed with other attacks too (Szegedy et al.[2014f [Goodfellow et al.[2015f Papernot et al.[2016b a) and demonstrates the wider applicability of all these attacks."}, {"section_index": "4", "section_name": "7 CONCLUSION", "section_text": "We investigate the inherent vulnerabilities in modern CNNs to practical black-box adversarial attacks. We present approaches that can efficiently locate a small set of pixels, without using any gradien. information, which when perturbed lead to misclassification by a deep neural network. Our extensive experimental results, somewhat surprisingly, demonstrates the effectiveness of our simple approaches. in generating adversarial examples.\nAnother observation is that for the MNIST and STL1O datasets, Algorithm RANDADV succeeds in finding fewer critical pixels as compared to SVHN and CIFAR10 datasets. We give the following explanation for this observation. The majority of pixels in an MNIST image belong to the background\n6We fix this shortcoming using a local-search based strategy in the next section.. 'Note by focusing on good images, we make sure that we are only accounting for those cases whe. perturbation is needed for creating an adversarial image..\nFinally, we believe that our local-search approach can also be used for attacks against other machin learning systems and can serve as an useful tool in measuring the robustness of these systems..\nAlgorithm LocSeARcHADv is successful in generating more adversarial images than the fast gradient sign method, while modifying far fewer pixels and adding less noise per image. On the other side, the fast-gradient sign method takes lesser time in the generation process and generally seems to produce higher confidence scores for the adversarial (misclassified) images.\nEven Weaker Adversarial Models. We also consider a weaker model where the adversary does. 
not even have a black-box (oracle) access to the network (NN) of interest, and has to rely on a black-box access to somewhat of a \"similar' (proxy) network as NN. For example, the adversary. might want to evade a spam filter A, but might have to develop adversarial images by utilizing the. output of a spam filter B, which might share properties similar to A..\nDefenses against these attacks is an interesting research direction. However, we note that here that by limiting the perturbation to some pixels (being localized) the adversarial images generated by our local-search based approach do not represent the distribution of the original data. This means for these adversarial images, the use of adversarial training (or fine-tuning), a technique of training (or fine-tuning) networks on adversarial images to build more robust classifiers, is not very effective. In fact, even with adversarial training we noticed that the networks ability to resist new local-search based adversarial attack improves only marginally (on average between 1-2%). On the other hand we suspect that one possible counter-measure to these localized adversarial attacks could be based on performing a careful analysis of the oracle queries to thwart the attempts to generate an adversarial image.\np086 b VinN ViaN ViGN Perturbation parameter 100 Perturbation parameter 100 Perturbation parameter 100 Perturbation parameter 100 (a) MNIST (b) SVHN (c) CIFAR10 (d) STL10\nFigure 1: Output of Algorithm RANDADv (averaged over good images). The results are for two networks: a Network-in-Network and b) VGG. The perturbation parameter p is varied from {1, 5, 10, 100}\n0.8 0.8 0.8 0.8 0.6 0.6 .6 0.6 . 0.2 0.2 ViGN VinN ViGN 100 Perturbation parameter 100 100 100 Perturbation parameter Perturbation parameter Perturbation parameter (a) MNIST (b) SVHN (c) CIFAR10 (d) STL10\nFigure 2: Fraction of images where Algorithm RANDADv succeeds in finding at least one critical pixel. Again we only start with only good images.\nhence, these pixels are less likely to be critical. On the other hand, STL10 contains high resolutior images, 96 96, where perhaps a single pixel has less of an impact on the output prediction. The latter observation motivated us to generalize the notion of a critical pixel to a critical set.\nDefinition 5 (Critical Set) Given a trained neural network NN and an image I, a critical set of I i. a set of pixels U(x,y){(+, x, y)} in I such that a perturbation of these pixels generates an image thai. is misclassified by the network NN..\nTable 4: Results for four datasets: CIFAR10. STL10, SVHN, and MNIST. The entries denote by denoted by \"_\" are the cases where the fast-gradient sign method fails to produce any adversarial image in our experimental setup\nThe general goal will be to find critical sets of small size in an image. With this notion of critica. set, we considered constructing adversarial images on the high-resolution ImageNet1o00 datase. pixels in a set. Similarly, we can devise a simple extension to Algorithm RANDADv to operate witl. a set of pixels and to output an unbiased estimate for the fraction of critical sets of some fixed size. (50 in our case) in the input image|Note that a set size of 50 pixels is still a tiny fraction of all the. pixels in a standard (center) crop of size 224 224, namely just 0.09%. We use a larger perturbatior. parameter p than before, and set (U) the budget on the number of trials on an image as 5000. Figure. shows our results. 
Overall, we note that we can draw similar conclusions as before, i.e., increasing. the perturbation parameter creates more critical sets making them easier to find and relatively smal perturbations are sufficient to construct adversarial images.\n#PTBPIXELS TIME Dataset ERRTOP-1 ERRTOP-1(ADV) CONF PTB Technique Network (%) (in sec) ImageNet1000 93.59 0.29 0.29 0.43 12.72 LOCSEARCHADV (Ours) VGG CNN-S (Caffe) 58.27 ImageNet1000 85.51 0.49 1.00 100.00 4.74 FGSM (Goodfellow et al.2015) VGG CNN-S (Caffe) ImageNet1000 91.36 0.28 0.29 0.40 10.01 LOCSEARCHADV (Ours) VGG CNN-M (Caffe) 58.96 ImageNet1000 87.85 0.48 1.00 100.00 4.36 FGSM (Goodfellow et al.2015) VGG CNN-M (Caffe) ImageNet1000 92.82 0.29 0.30 0.41 11.09 LOCSEARCHADV (Ours) VGG CNN-M 2048 (Caffe) 58.80 ImageNet1000 88.43 0.52 1.00 100.00 4.42 FGSM [Goodfellow et al.]2015] VGG CNN-M 2048 (Caffe) ImageNet1000 72.07 0.30 0.54 0.55 73.64 LOCSEARCHADV (Ours) VGG ILSVRC 19 (Caffe) 46.40 ImageNet1000 85.05 0.52 1.00 100.00 23.94 FGSM [Goodfellow et al.2015 VGG ILSVRC 19 (Caffe)"}, {"section_index": "5", "section_name": "BLACK-BOX GENERATION: A GREEDY APPROACH", "section_text": "Table 5: Results for the ImageNet1000 dataset using a center crop of size 224 224 for each image\nThe results from Section4|show that most images have critical pixels such that modifying these pixels significantly leads to a failure of NN to classify the image correctly. However, one shortcoming of Algorithm RANDADv was that to build adversarial images, we sometimes had to apply a large perturbation to a single pixel (or a small set of pixels). Hence, there might exist a pixel (or a set of pixels) in the adversarial image whose coordinate value could lie outside the valid range [LB, UB] To overcome this issue, we need to redesign the search procedure to generate adversarial images that still belong to the original image space I (defined in Section3). Here a brute-force approach is generally not feasible because of computational reasons, especially in high-resolution images. Hence we need to develop an efficient heuristic procedure to find the right small set of pixels to be perturbed Our solution presented in this section is based on performing a greedy local search over the image Space.\nTable 6: Effect of increasing k on the performance of Algorithm LocSEARcHADv (without batch normalization).\n8Searching over all pixel sets of size 50 pixels is computationally prohibitive, which again motivates the nee for a randomized strategy as proposed in Algorithm RANDADv.\nDataset ERRTOP-1 ERRTOP-1(ADV) CONF PTB #PTBPTXELS TTME Technique Network (%) (in sec) NNs trained with batch normalization. 
CIFAR10 97.63 0.47 0.04 3.75 0.68 LOCSEARCHADV (Ours) NinN 11.65 CIFAR10 70.69 0.55 0.20 100.00 0.01 FGSM Goodfellow et al.2015 NinN CIFAR10 97.51 0.74 0.04 3.16 0.78 LOCSEARCHADV (Ours) VGG 11.62 CIFAR10 11.62 FGSM [Goodfellow et al.2015) VGG STL10 58.17 0.42 0.02 1.20 7.15 LOCSEARCHADV (Ours) NinN 29.81 STL10 54.85 0.53 0.20 100.00 0.03 FGSM [Goodfellow et al.2015 NinN STL10 65.76 0.47 0.02 1.11 13.90 LOCSEARCHADV (Ours) VGG 26.50 STL10 26.50 FGSM Goodfellow et al.2015] VGG - SVHN 97.06 0.47 0.05 4.51 1.02 LOCSEARCHADV (Ours) NinN 9.71 SVHN 48.62 0.49 0.20 100.00 0.02 FGSM [Goodfellow et al.2015] NinN SVHN 81.10 0.66 0.07 5.43 2.15 LOCSEARCHADV (Ours) VGG 4.77 SVHN 4.77 FGSM [Goodfellow et al.2015] VGG MNIST 91.42 0.54 0.20 2.24 0.64 LOCSEARCHADV (Ours) NinN 0.33 MNIST 1.65 0.58 0.20 100.00 0.02 FGSM [Goodfellow et al.2015] NinN MNIST 93.48 0.63 0.21 2.20 0.64 LOCSEARCHADV (Ours) VGG 0.44 MNIST 0.44 FGSM [Goodfellow et al.2015] VGG NNs trained without batch normalization CIFAR10 97.89 0.72 0.04 3.24 0.58 LOCSEARCHADV (Ours) NinN 16.54 CIFAR10 93.67 0.93 0.20 100.00 0.02 FGSM [Goodfellow et al.2015 NinN CIFAR10 97.98 0.77 0.04 2.99 0.72 LOCSEARCHADV (Ours) VGG 19.79 CIFAR10 90.93 0.90 0.20 100.00 0.04 FGSM [Goodfellow et al.2015] VGG STL10 52.65 0.56 0.02 1.17 6.42 LOCSEARCHADV (Ours) NinN 35.47 STL10 87.16 0.94 0.20 100.00 0.04 FGSM (Goodfellow et al.2015] NinN STL10 59.38 0.52 0.01 1.09 19.65 LOCSEARCHADV (Ours) VGG 43.91 STL10 91.36 0.93 0.20 100.00 0.10 FGSM [Goodfellow et al.2015] VGG SVHN 92.31 0.68 0.05 4.34 1.06 LOCSEARCHADV (Ours) NinN 6.15 SVHN 73.97 0.84 0.20 100.00 0.01 FGSM [Goodfellow et al.2015 NinN SVHN 88.34 0.68 0.05 4.09 1.00 LOCSEARCHADV (Ours) NinN 7.31 SVHN 76.78 0.89 0.20 100.00 0.04 FGSM Goodfellow et al.2015 VGG\n[T] Technique c) C LOCSEARCHADV (Ours)\nNetwork\nVGG CNN-M (Caffe) VGG CNN-M (Caffe)\n#PTBPIXELS TIME Dataset k ERRTOP-k ERRTOP-k(ADV) CONF PTB Network (%) (in sec) CIFAR10 1 16.54 97.89 0.72 0.04 3.24 0.58 NinN CIFAR10 2 6.88 76.65 0.88 0.07 5.50 1.02 NinN CIFAR10 3 3.58 59.02 0.90 0.08 7.09 1.85 NinN CIFAR10 4 1.84 48.89 0.90 0.09 7.63 2.12 NinN"}]
SyJNmVqgg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "With large amount of training data as its fuel, deep neural networks (DNN) have achieved state of-art performances in multiple tasks. Examples include deep convolutional neural network (CNN for image understanding(Krizhevsky et al.]2012) Ioffe & Szegedy2015] He et al.]2015]Ren et al.]2015) and recurrent neural networks (RNN) for natural language processing (Cho et al. 2014] Kiros et al.] 2015] Dai & Le]2015}Shang et al.]2015). To effectively train DNN with large scale of data, typically mini-batch based Stochastic Gradient Descent (SGD) (and its variants such as Adagrad(Duchi et al.]2011), Adadelta (Zeiler2012) and Adam (Kingma & Ba]2014)) is used. The mini-batch based SGD training is a sequential process, in which mini-batches of data D = {Di,... Dt,..., DT} arrive sequentially in a random order. Here Dt = (d1,..., dm) is the mini-batch of data arriving at the t-th time step and consisting of M training instances. After Lt. , based on which the neural network model gets updated: and gt =\nHere l() is the loss function specified by the neural network and nt is the learning rate at t-th step\nWith the sequential execution of SGD training, the neural network evolves constantly from a raw state to a fairly mature state, rendering different views even for the same training data. For example as imposed by the spirit of Curriculum Learning (CL) (Bengio et al.2009) and Self-Paced Learning (SPL) (Kumar et al.]2010), at the baby stage of the neural network, easy examples play important roles whereas hard examples are comparatively negligible. In contrast, at the adult age, the neural\nFigure 5: Test accuracy curves of different data filtration strategies on C-MNIST dataset. The x-axis records the number of effective training instances.\nWorks done when Yang Fan is an intern at Microsoft Research Asia"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Wt+1 = Wt - ntgt.\nS1:D1, W S2: D2, W2 St:Dt, Wt a1 f(s1) r1 a2 f(s2) r2 at f(st) rt\nS1: D1, W S2:D2,W2 St:Dt,Wt a f(sD) r1 a f(s2) r L f(s T\nThe measure of hardness in CL is typically determined by heuristic understandings of data (Bengio et al.[[2009] Spitkovsky et al.]2010]Tsvetkov et al.[|2016). As a comparison, Self-Paced Learning (SPL)(Kumar et al.[2010f Jiang et al.]2014a bf Supancic & Ramanan 2013) quantifies the hard- ness by the loss on data. In SPL, those training instances with loss values larger than a threshold n will be neglected and n gradually increases in the training process such that finally all training instances will play effects. Apparently SPL can be viewed as a data filtration strategy considered in this paper.\nRecently researchers have noticed the importance of data scheduling for training Deep Neural Net. work models. For example, in (Loshchilov & Hutter2015), a simple batch selection strategy basec on the loss values of training data is proposed for speed up neural networks training.. (Tsvetkov et al.2016) leverages Bayesian Optimization to optimize a curriculum function for training dis-. tributed word representations. The authors of (Sachan & Xing2016) investigated several hand. crafted criteria for data ordering in solving Question Answering tasks based on DNN. Our works. differs significantly with these works in that 1) We aim to filter data in randomly arrived mini-batches. in training process to save computational efforts, rather than actively select mini-batch; 2) We lever. 
age reinforcement learning to automatically derive the optimal policy according to the feedback of. training process, rather than use naive and heuristic rules..\nFigure 1: Basic structure of SGD accompanied with NDF. Blue part refers to SGD training process and yellow part is NDF.\nnetwork tends to favor harder training examples, since easy ones bring minor changes. It remains ar important question that, how to optimally and dynamically allocate training data at different stages. of SGD training?\nThe proposed Neural Data Filter (NDL) for data filtration is based on deep reinforcement learning. (DRL)(Mnih et al.]2013] 2016] Lillicrap et al.][2015a] Silver et al.][2016), which applies deep neu- ral networks to reinforcement learning (Sutton & Barto|1998). In particular, NDL belongs to policy. based reinforcement learning, seeking to search directly for optimal control policy. REINFORCE (Williams1992) and actor-critic (Konda & Tsitsiklis|1999) are two representative policy gradient algorithms, with the difference that actor-critic adopts value function approximation to reduce the. high variance of policy gradient estimator in REINFORCE..\nA possible approach is to solve this problem in an active manner: at each time step t, the mini. batch data Dt is chosen from all the left untrained data (Tsvetkov et al.]2016) Sachan & Xing. 2016). However, this typically requires a feed-forward pass over the whole remaining dataset at each training step, making it computationally expensive. We therefore consider a passive way in this paper, in which the random ordering of all the mini-batches is pre-given and maintained during the. training process. What actually do is, after receiving the mini-batch Dt of M training instances, we. dynamically determine which instances in Dt are used for training and which are filtered, based on. the features extracted from the feedforward pass only on Dt. Acting in this way avoids unnecessary computational steps on those filtered data and thus speeds-up the training process..\nIn this paper we introduce Neural Data Filter (NDF), a reinforcement learning framework to selec-. t/filter data for training deep neural network. Experiments on several deep neural networks training demonstrate that NDF boosts the convergence of Stochastic Gradient Descent. Going beyond data. filtration, the proposed framework is able to supervise any sequential training process, thus opens a. new view for self-adaptively tuning/controlling machine learning process..\nPrevious works such as curriculum learning (CL) and self-paced learning (SPL) can be leveraged to fulfill such a data filtration task. However, they are typically based on simple heuristic rules, such as. shuffling the sequence length to train language model (Bengio et al.]2009), or abandoning training. instances whose loss values are larger than a human-defined threshold (Kumar et al.]2010] Jiang. et al.2014a).\nAs to future work, on one aspect, we aim to test NDF to more tasks and models, such as Con volutional Neural Network (CNN) for image classification. We would also plan to give cleare explanation on the behavior of NDF, such as what data is dropped at different phrases of training. and whether the proposed critic function is good enough. On the other aspect, we aim to apply such a reinforcement learning based teacher-student framework to other strategy design problems for machine learning, such as hyper-parameter tuning, structure learning and distributed scheduling. 
with the hope of providing better guidance for controlled training process.\nIn this work, we propose a Neural Data Filter (NDF) framework from a more principled and self. adaptive view. In this framework, as illustrated in Figure 1] the SGD training for DNN is naturall. casted into a Markov Decision Process (MDP) (Sutton & Barto1998) and data filtration strategy is. fully controlled through deep reinforcement learning (Mnih et al.l 2013fLillicrap et al.|2015b}[Mnil et al.2016). In such an MDP, a state (namely S1, :. : , St, ::.) is composed of two parts: the mini. batch of data arrived and the parameters of the current neural network model, i.e, s = Dt, W, ? In each time step t, NDF receives a representation f(st) for current state from SGD, outputs the. action at specifying which instances in Dt will be filtered according to its policy At. Afterwards. the remaining data determined by at will be used by SGD to update the neural network state anc. generate a reward rt (such as validation accuracy), which will be leveraged by NDF as the feedbacl. for updating its own policy.\nFrom another view, while SGD acts as the trainer for base model, i.e., DNN, it meanwhile is the trainee of reinforcement learning module. In other words, reinforcement learning acts at the teach- er module while SGD for DNN is the student. Speaking more ambitiously, such a teacher-student. framework based on reinforcement learning goes far beyond data filtration for neural network train-. ing: On one hand, the base model the can be benefitted is not limited to neural networks; on the other. the action space in reinforcement learning teacher module covers any strategies in machine learning process, such as hyper-parameter tuning and distributed scheduling. Through carefully designed interaction between the two modules, the training process of general machine learning models can. be more elaborately controlled.\nThe rest of the paper is organized as follows: in the next section [2] we will introduce the details. of Neural Data Filter (NDF), including the MDP language to model Stochastic Gradient Descent. training, and the policy gradient algorithms to learn NDF. Then in section 3l the empirical results\nof training LSTM RNN will be shown to verify the effectiveness of NDF. We discuss related worl in subsequent section4and conclude the paper in the last section 5"}, {"section_index": "2", "section_name": "REFERENCES", "section_text": "We introduce the mathematical details of Neural Data Filter (NDF) for SGD training in this section As a summary, NDF aims to filter certain amount of training data within a mini-batch, in order to achieve better convergence speed for SGD training. To achieve that, as introduced in last section and Figure[1 we cast Stochastic Gradient Descent training for DNN as a Markov Decision Process (MDP), termed as SGD-MDP.\nSGD-MDP: As traditional MDP, SGD-MDP is composed of the tuple < s, a, P, r, y >, illustrate as:\nCorinna Cortes, Mehryar Mohri, and Afshin Rostamizadeh. Multi-class classification with maxi mum margin multiple kernel. In ICML (3), pp. 46-54, 2013.\nAndrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in Neural Infor mation Processing Systems, pp. 3079-3087, 2015\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog nition. arXiv preprint arXiv:1512.03385, 2015.\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. 
Neural computation, 9(8) 1735-1780, 1997.\nA(s,a;O) = Pe(as) =ao(0f(s)+b) +(1-a)(1-o(0f(s)+b)\nVijay R Konda and John N Tsitsiklis. Actor-critic algorithms. In NIPS, volume 13, pp. 1008-1014 1999.\nAlex Krizhevsky. Learning multiple layers of features from tiny images. 2009\nM Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, pp. 1189-1197, 2010.\nTimothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa. David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXi preprint arXiv:1509.02971, 2015a\ns is the state, corresponding to the mini-batch data arrived and current neural network state: St = (Dt, Wt). {0, 1}M, where M is the batch size and am E {0, 1} denotes whether to filter the mth. data instance in Dt or not' Those filtered instances will have no effects to neural network. training. uniform distribution of sequentially arrived training batch data; 2) The optimization process specified by Gradient Descent principle (c.f. equation |1). The randomness comes from. stochastic factors in training, such as dropout (Srivastava et al.||2014). r = r(s, a) is the reward, set to be any signal indicating how well the training goes, such as. validation accuracy, or the lost gap for current mini-batch data before/after model update.. Furthermore future reward r is discounted by a discounting factor y E [0, 1] into the cumu- lative reward.\nNDF samples the action a by its policy function A = Pe(a[s) with parameters O to be learnt. For example, NDF policy A can be set as logistic regression:\nState Features: The aim of designing state feature vector f(s) is to effectively and efficiently represent SGD-MDP state. Since state s includes both arrived training data and current neural. network state, we adopt three categories features to compose f(s):\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprini arXiv:1412.6980, 2014.\nData features, contains information for data instance, such as its label category (we use 1 of [Y| representations), the length of sentence, or linguistic features for text seg ments (Tsvetkov et al.]2016). Data features are commonly used in Curriculum Learning (Bengio et al.2009fTsvetkov et al.|2016). Neural network features, include the signals reflecting how well current neural network is trained. We collect several simple features, such as passed mini-batch number (i.e. iteration), the average historical training loss and current validation accuracy. They are proven to be effective enough to represent current neural network status. Features to represent the combination of both data and model. By using these features, we target to represent how important the arrived training data is for current neural network We mainly use three parts of such signals in our classification tasks: 1) the predicted prob- abilities of each class; 2)the cross-entropy loss, which appears frequently in Self-Paced\n1we consider data instances within the same mini-batch are independent with each other, therefore for statement simplicity, when the context is clear, a will be used to denote the remain/filter decision for single data instance, i.e., a E {0, 1}. Similarly, the notation s will sometimes represent the state for only one training instance.\nTimothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa David Silver, and Daan Wierstra. 
Continuous control with deep reinforcement learning. arXi preprint arXiv:1509.02971, 2015b.\nThe state features f(s) are computed once each mini-batch training data arrives\nThe whole process for training neural networks is listed in Algorithm[1 In particular, we take th. similar generalization framework proposed in (Andrychowicz et al.|. 2016), in which we use pa of training data to train the policy of NDF (Step 1 and 2), and apply the data filtration model to th training process on the whole dataset (Step 3). The detailed algorithm to train NDF policy will b. introduced in the next subsection..\nAndrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142-150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http: //www.aclweb.org/anthology/P11-1015\nAlgorithm 1 Training Neural Networks with Neural Data Filter.\nInput: Training Data D. 1. Sample part of NDF training data D' from D.. 2. Optimize NDF policy network A(s; O) (c.f. equation[2) based on D' by policy gradient 3. Apply A(s; O) to full dataset D to train neural network model by SGD.. Output: The Neural Network Model..\nNDF-REINFORCE. NDF-REINFORCE is based on REINFORCE algorithm (Williams||1992), a elegant Monto-Carlo based policy gradient method which favors action with high sampled reward The algorithm details are listed in Algorithm 2] Particularly, as indicated in equation 3] NDF REINFORCE will support data filtration policy leading to higher cumulative reward vt.\nAlgorithm 2 NDF-REINFORCE algorithm to train NDF policy\nDavid Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche. Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering. the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.\nNitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958. 2014.\nJames S Supancic and Deva Ramanan. Self-paced learning for long-term tracking. In Proceeding. of the IEEE conference on computer vision and pattern recognition. pp. 2379-2386. 2013.\nIlya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of ini tialization and momentum in deep learning. In Sanjoy Dasgupta and David Mcallester (ed s.), Proceedings of the 3Oth International Conference on Machine Learning (ICML-13), vol ume 28, pp. 1139-1147. JMLR Workshop and Conference Proceedings, May 2013.URL http://jmlr.org/proceedings/papers/v28/sutskever13.pdf"}, {"section_index": "3", "section_name": "NDF-ActorCritic", "section_text": "The gradient estimator in REINFORCE poses high variance given its Monto-Carlo nature. Further-. more, it is quite inefficient to update policy network only once in each episode. We therefore design.. NDF-ActorCritic algorithm based on value function estimation. In NDF-ActorCritic, a parametric. value function estimator Q(s, a; W) (i.e., a critic) with parameters W for estimating state-action"}, {"section_index": "4", "section_name": "Learning algorithms ( (Kumar et al.]2010] Jiang et al.2014a} Sachan & Xing][2016); 3) th margin value", "section_text": "Policy gradient methods are adopted to learn NDF policy A. 
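Before the specific algorithms are introduced, a small sketch may make the policy of equation (2) concrete. The following is a hypothetical numpy illustration (the function and variable names are our own, not taken from the paper): it scores every instance in the arrived mini-batch with the logistic policy and samples the binary keep/filter actions.

import numpy as np

def ndf_policy_sample(features, theta, b, rng):
    # features: (M, d) matrix whose m-th row is the state feature vector f(s_m)
    # keep probability sigma(theta . f(s_m) + b), cf. equation (2)
    keep_prob = 1.0 / (1.0 + np.exp(-(features @ theta + b)))
    # a_m ~ Bernoulli(keep_prob_m); a_m = 1 means "keep", a_m = 0 means "filter"
    actions = (rng.random(features.shape[0]) < keep_prob).astype(int)
    # log P_Theta(a|s) = sum_m log [a_m * sigma(.) + (1 - a_m) * (1 - sigma(.))]
    log_prob = np.where(actions == 1, np.log(keep_prob), np.log(1.0 - keep_prob)).sum()
    return actions, log_prob

The instances with a_m = 1 would then be passed to the SGD update, and log_prob is the quantity whose gradient the policy-gradient methods below make use of.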
In particular, according to different policy gradient methods, we designed two algorithms: NDF-REINFORCE and NDF-ActorCritic.\nlgorithm 2 NDF-REINFORCE algorithm to train NDF policy Input: Training data D'. Episode number L. Mini-batch size M. Discount factor y E [0, 1]. for each episode l = 1, 2, . . . , L do Initialize the base neural network model. Shuffle D' to get the mini-batches sequence D' = {D1, D2, ... , DT}. for t = 1, ..., T do Sample data filtration action for each data instance in D = {di,. ,dm}: a {am} m=1, am A(sm, a; O), sm is the state corresponding to the dm 1M Update neural network model by Gradient Descent based on the selected data in Dt. Receive reward rt. end for for t = 1,..., T do Compute cumulative reward vt = rt + yrt+1 + ... + T-trT Update policy parameter O: d log A(s, am; O) OO+ avt (3) do end for end for Output: The NDF policy network A(s, a; O)\nA log A(s, am; O) eO+avt ao m\nQ(s,a; W) = o(w relu(f(s)W1a) + b)\nMatthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 2012.\nAlgorithm 3 NDF-ActorCritic algorithm to train NDF policy"}, {"section_index": "5", "section_name": "3.1 EXPERIMENTS SETUP", "section_text": "We conduct experiments on two different tasks/models: IMDB movie review sentiment classifi cation (with Recurrent Neural Network) and MNIST digital image classification (with Multilayei Perceptron Network). Different data filtration strategies we applied to SGD training include:\nvalue function is leveraged to avoid the high variance of vt from Monto-Carlo sampling in NDF REINFORCE. It remains an open and challenging question that how to define optimal value function estimator Q(s, a; W) for SGD-MDP. Particularly in this work, as a preliminary attempt, the follow- ing function is used as the critic:\nwhere f(s) = (f(s1); f(s2);..., f(sm)) is a matrix with M rows and each row f(sm) represents state features for the corresponding training instance dm. W = {wo, Wi, b} is the parameter set to be learnt by Temporal-Difference algorithm. Base on such a formulation, the details of NDF- ActorCritic is listed in Algorithm|3\nInput: Training data D'. Episode number L. Mini-batch size M. Discount factor y E [0, 1]. for each episode l = 1, 2, . . . , L do. Initialize the base neural network model. Shuffle D' to get the mini-batches sequence D' = {D1, D2, ... , DT} for t = 1, ..., T do Sample data filtration action for each data instance in Dt = {d1,...,dm}: a : 1M I M Update neural network model by Gradient Descent based on the selected data. Receive reward rt.. Update policy(actor) parameter O: O O + Q(s, a; W) m d log A(s,am;O) Update critic parameter W: dQ(s',a'; W) q =rt-1+yQ(s,a;W)-Q(s',a';W),W =W-q (5) aw a'ta,s'ts end for end for Output: The NDF policy network A(s, a; O)\ndQ(s',a'; W) q=rt-1+yQ(s,a;W)-Q(s',a;W), W=W-q aw\nUnfiltered SGD. The SGD training algorithm without any data filtration. Here rather than vanilla sgd (c.f. equation|1), we use its advanced variants such as Adadelta (Zeiler2012 or Adam (Kingma & Ba2014) to each of the task. Self-Paced Learning (SPL) (Kumar et al.]2010). It refers to filtering training data by its 'hardness', as reflected by loss value. Mathematically speaking, those training data d satisfying l(d) > n will be filtered out, where the threshold n grows from smaller to larger during training process. 
In our implementation, to improve the robustness of SPL, following the widely used trick (Jiang et al.f2014b), we filter data using its loss rank in one mini-batch, rather than the absolute loss value. That is to say, we filter data instances with top K largest training losses within a M-sized mini-batch, where K linearly drops from M - 1 to 0 during training. NDF-REINFORCE. The policy trained with NDF-REINFORCE, as shown in Algorithm 2 We use a signal to indicate training speed as reward. To be concrete, we set an accuracy threshold t E [0, 1] and record the first mini-batch index i, in which validation accuracy\nFor all strategies other than Plain SGD, we make sure that the base neural network model will not be updated until M un-trained, yet selected data instances are accumulated. In that way we make sure that the batch size are the same for every strategies (i.e., M), thus convergence speed is only determined by the effectiveness of data filtration strategies, not by different batch size led by different number of filtered data. For NDF strategies, we initialize b = 2 (c.f. equation|2), with the goal of maintaining training data at the early age, and use Adam (Kingma & Ba]2014) to optimize the policy. The model is implemented with Theano (Theano Development Team2016) and run on one Telsa K40 GPU.\nIMDB movie review datasef|is a binary sentiment classification dataset consisting of 50k movie review comments with positive/negative sentiment labels (Maas et al.]2011). We apply LSTM (Hochreiter & Schmidhuber1997) RNN to each sentence, and the last hidden state of LSTM is fed into a logistic regression classifier to predict the sentiment label (Dai & Le]2015). The model size (i.e., word embedding size hidden state size) is 256 512 and mini-batch size is set as M = 16. Adadelta (Zeiler2012) is used to perform LSTM model training.\nThe detailed results are shown in Figure 2 whose x-axis represents the number of effective training instances and y-axis denotes the accuracy on test dataset. All the curves are results of 5 repeated runs. From the figure we have the following observations:\nhttp://ai.stanford.edu/~amaas/data/sentiment/\nexceeds , then the reward is set as rT = log(-/T). Note here only terminal rewar. exists (i.e., rt = 0, Vt < T). NDF-ActorCritic. The policy trained with NDF-ActorCritic, as shown in Algorithm. Discount factor is set as y = 0.95. Since actor-critic algorithm makes it possible to update policy per time step, rather than pe. episode, different with the terminal reward set in NDF-REINFORCE, validation accurac. is used as the immediate reward for each time step. To save time cost, only part of validatio set is extracted to compute validation accuracy. Randomly Drop. To conduct more comprehensive comparison, for NDF-REINFORCI and NDF-ActorCritic, we record the ratio of filtered data instances per epoch, and the. randomly filter data in each mini-batch according to the logged ratio. In this way we for. m two more baselines, referred to as RandDropREINFORCE and RandDropActorCriti respectively.\nThe IMDB dataset contains 25k training sentences and 25k test sentences. For NDF-REINFORCE and NDF-ActorCritic, from all the training data we randomly sample 10k and 5k as the train- ing/validation set to learn data filtration policy. For NDF-REINFORCE, the validation accuracy. threshold is set as t = 0.8. For NDF-ActorCritic, the size of sub validation set to compute imme- diate reward is 1k. The episode number is set as L = 30. 
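To connect the pieces described above, here is a hedged sketch of one NDF-REINFORCE episode update (equation (3)) with the terminal reward r_T = −log(i_τ/T) read from the reward description above. It assumes the logistic policy sketched earlier; all names are our own illustrative choices rather than the authors' implementation.

import numpy as np

def reinforce_episode_update(theta, b, trajectory, rewards, lr=0.01, gamma=1.0):
    # trajectory: list of (features, actions) for t = 1..T
    # rewards: list r_1..r_T; here only the terminal reward is non-zero,
    # e.g. rewards[-1] = -log(i_tau / T) once validation accuracy exceeds tau
    T = len(trajectory)
    for t, (feats, acts) in enumerate(trajectory):
        # cumulative reward v_t = r_t + gamma * r_{t+1} + ... , as in equation (3)
        v_t = sum(gamma ** (k - t) * rewards[k] for k in range(t, T))
        p = 1.0 / (1.0 + np.exp(-(feats @ theta + b)))
        # d log A(s, a_m; Theta) / d theta for the Bernoulli-logistic policy: (a_m - p_m) f(s_m)
        theta = theta + lr * v_t * ((acts - p)[:, None] * feats).sum(axis=0)
        b = b + lr * v_t * (acts - p).sum()
    return theta, b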
Early stopping on the validation set is used to control the training process in each episode.
• NDF (shown by the two solid lines) significantly boosts the convergence of SGD training for the LSTM. With much less data, NDF achieves satisfactory classification accuracy. For example, NDF-REINFORCE achieves 80% test accuracy with only roughly half of the training data (about 40k) that plain SGD consumes (about 80k). Furthermore, NDF significantly outperforms the two Randomly Drop baselines, demonstrating the effectiveness of the learnt policies.
• Self-Paced Learning (shown by the red dashed line) helps with the initialization of the LSTM; however, it delays training after the middle phase.
• For the two variants of NDF, NDF-REINFORCE performs better than NDF-ActorCritic. Our conjecture for the reason is: 1) for NDF-REINFORCE, we use a terminal reward fully devoted to indicating training convergence; 2) the critic function (c.f. equation 4) may not be expressive enough to approximate true state-action value functions. A deeper critic function should be the next step.
Figure 2: Test accuracy curves of different data filtration strategies (Unfiltered SGD, SPL, NDF-REINFORCE, NDF-ActorCritic, RandomDropREINFORCE, and RandomDropActorCritic) on the IMDB sentiment classification dataset. The x-axis records the number of effective training instances.
To better understand the learnt policies of NDF, in Figure 3 we plot the ratio of filtered data instances per every fixed number of iterations. It can be observed that more and more training data are kept during the training process, which is consistent with the intuition of Curriculum Learning and Self-Paced Learning. Furthermore, the learnt feature weights for the NDF policies (i.e., θ in equation 2) are listed in Table 1. From the table, we can observe:
Figure 3: Data filtration ratio during training the LSTM with the NDF-REINFORCE and NDF-ActorCritic policies.
• Longer movie reviews with positive sentiment are likely to be kept.
• Margin plays a critical role in determining the importance of data. As reflected by its fairly large positive weight, training data with large margin are likely to be kept.
• Note that the feature −log p_y is the training loss; its negative weight means that training instances with larger loss values tend to be filtered, thus more and more data will be kept since loss values get smaller and smaller during training, which is consistent with the curve
Table 1: Feature weights learnt for the NDF policies in IMDB sentiment classification. The first row lists all the features (i.e., f(s)), categorized into the three classes described in Section 2; normalized means the feature value is scaled between [0, 1]; (y0, y1) is the 1-of-2 representation of the sentiment label."}, {"section_index": "6", "section_name": "3.3 IMAGE CLASSIFICATION ON CORRUPTED-MNIST", "section_text": "We further test different data filtration strategies for multilayer perceptron network training on an image recognition task. The dataset we used is MNIST, which consists of 60k training and 10k testing images of handwritten digits from 10 categories (i.e., 0, ..., 9). To further demonstrate the effectiveness of the proposed neural data filter in automatically choosing important instances for training,
we manually corrupt the original MNIST dataset by injecting some noises to the original pictures as follows: We randomly split 60k training images into ten folds, and flip (i - 1) 10% randomly cho-. sen pixels of each image in the i-th fold, i = 1, 2, . . . , 10. The 10k test set are remained unchanged Flipping a pixel means setting its value r as r = 1.0 - r. Such a corrupted dataset is named as. C-MNIST. Some sampled images from C-MNIST are shown in Figure4\nA three-layer feedforward neural network with size 784 300 10 is used to classify the C-MNIS7 dataset. For data filtration policy, different from the single-layer logistic regression in equation2 in this task, NDF-REINFORCE and NDF-ActorCritic leverage a three-layer neural network witl model size 24 12 1 as policy network, where the first layer node number 24 is the dimensior of state features fs] and sigmoid function is used as the activation function for the middle layer 10k randomly selected images out of 60k training set acts as validation set to provide reward sig nals to NDF-REINFORCE and NDF-ActorCritic. For NDF-REINFORCE, the validation accuracy threshold is set as t = 0.90. For NDF-ActorCritic, the immediate reward is computed on the whol validation set. The episode number for policy training is set as L = 50 and we control training in each episode by early stopping based on validation set accuracy. We use Adam (Kingma & Ba 2014) to optimize policy network.\nThe test set accuracy curves (averaged over five repeated runs) of different data filtration strategies are demonstrated in Figure[5] From Figure5 we can observe:."}, {"section_index": "7", "section_name": "4 RELATED WORK", "section_text": "Plenty of previous works talk about data scheduling (e.g., filtration and ordering) strategies for ma. chine learning. A remarkable example is Curriculum Learning (CL) (Bengio et al.[2009) showing. that a data order from easy instances to hard ones, a.k.a., a curriculum, benefits learning process\nfs is similar to the features in Table except that (yo, y1) and (logpo, logp1) are switched int yo, : . . , y9) and (log po, : . . , log p9) respectively, given there are ten target classes in mnist classification.\nin Figure[3] However, such a trend is diminished by the negative weight values for neura network features, i.e., historical training accuracy and normalized iteration.\nSimilar to the result in IMDB sentiment classification. NDF-REINFORCE achieves the best convergence speed; The performance of NDF-ActorCritic is inferior to NDF-REINFORCE. In fact, NDF- ActorCritic acts similar to sgd training without any data filtration. This further shows although Actor-Critic reduces variance compared with REINFORCE, the difficulty in de- signing/training better critic functions hurts its performance"}]
ByOK0rwlx
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Mitsuru Ambai & Takuva Matsumoto\nDenso IT Laboratory. Inc\nmanbai,tmatsumoto}@d-itlab.co.jp\nSong Han, Huizi Mao, and William J. Dally. Deep Compression - Compressing Deep Neura Networks with Pruning, Trained Quantization and Huffman Coding. ICLR, 2016\nSam Hare, Amir Saffari, and Philip H. S. Torr. Efficient Online Structured Output Learning for Keypoint-Based Object Tracking. CVPR, pp. 1894-1901, 2012.\nGary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: a Database for Studying Face Recognition in Unconstrained Environments. University oJ Massachusetts Amherst Technical Report, (07-49), 2007.\nSergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, pp. 81-87, 2015.\nMax Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up Convolutional Neural Networks with Low Rank Expansions. BMVC, 2014.\nOmkar M. Parkhi, Andrea Vedaldi, and Andrew Zisserman. Deep Face Recognition. BMVC, 2015"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. ECCV, pp. 525-542, 2016.\nt is widely believed that deeper networks tend to achieve better performance than shallow ones in vari- ous computer vision tasks. As a trade-off of such im- oressive improvements, deeper networks impose heavy computational load both in terms of processing time and memory consumption due to an enormous amount f network parameters. For example, VGG-16 model Simonyan & Zisserman 2015) requires about 528 MBytes to store the network weights where fully con nected layers account for 89% of them. A large number f multiplications and additions must also be processed at each layer which prevent real-time processing, con- sume vast amounts of electricity, and require a large number of logic gates when implementing a deep net- vork on a FPGA or ASIC\nKaren Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. ICLR, 2015.\nZichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep Fried Convnets. ICCV. pp. 1476-1483. 2015.\nThis article addresses the above issues. Specifically, we aimed to reduce the test-time computational load of a pre-trained network. Since our approach does not depend on a network configuration. (e.g. a choice of an activation function, layer structures, and a number of neurons) and acts as a. post-processing of network training, pre-trained networks shared in a download site of MatConvNet. (Vedaldi & Lencl 2015) and Model Zoo (BVLC) can be compressed and accelerated. Our method is outlined in Figure[1] The main idea is to factorize both weights and activations into integer and non-integer components. Our method is composed of two building blocks, as shown below..\nYamauchi Yuji, Ambai Mitsuru, Sato Ikuro, Yoshida Yuichi, Fujiyoshi Hironobu, and Yamashita. Takayoshi. Asymmetric Feature Representation for Object Recognition in Client Server System ACCV, pp. 598-612, 2014.\nXiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating Very Deep Convolutiona Networks for Classification and Detection. 
PAMI, 2015"}, {"section_index": "2", "section_name": "TERNARY WEIGHT DECOMPOSITION AND BINARY AC- TIVATION ENCODING FOR FAST AND COMPACT NEU- RAL NETWORK", "section_text": "Misha Denil, Babak Shakibi, Laurent Dinh, Marc' Aurelio Ranzato, and Nando de Freitas. Predicting Parameters in Deep Learning. NIPS, pp. 2148-2156, 2013.\nTakayoshi Yamashita & Hironobu Fujiyoshi\n{yamashita,hf}@cs.chubu.ac.jp"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "This paper aims to reduce test-time computational load of a deep neural network. Unlike previous methods which factorize a weight matrix into multiple real-valued. natrices, our method factorizes both weights and activations into integer and non integer components. In our method, the real-valued weight matrix is approximated. y a multiplication of a ternary matrix and a real-valued co-efficient matrix. Since. the ternary matrix consists of three integer values, {-1, 0, +1}, it only consumes. 2 bits per element. At test-time, an activation vector that passed from a previous. layer is also transformed into a weighted sum of binary vectors, -1, +1}, which. enables fast feed-forward propagation based on simple logical operations: AND. XOR, and bit count. This makes it easier to deploy a deep network on low-power. CPUs or to design specialized hardware.\nIn our experiments, we tested our method on three different networks: a CNN for handwritten digits, VGG-16 model for ImageNet classification, and VGG-Face for large-scale face recognition. In particular, when we applied our method to three fully connected layers in the VGG-16, 15 acceleration and memory compression. up to 5.2% were achieved with only a 1.43% increase in the top-5 error. Our. experiments also revealed that compressing convolutional layers can accelerate inference of the entire network in exchange of slight increase in error..\nYann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278-2323, 1998\nMichael Mathieu, Mikael Henaff, and Yann LeCun. Fast Training of Convolutional Networks through FFTs. ICLR. 2014\nreal-valued activation vector M. real-valued real-valued weight matrix. MW Computable by. WT CT XOR,AND,BitCount 2~ real-valued\nAndrea Vedaldi and Karel Lenc. MatConvNet: Convolutional Neural Networks for MATLAB\nFigure 1: Our network compression model\nTernary weight decomposition for memory compression: We introduce a factored representation where the real-valued weight matrix is approximated by a multiplication of a ternary basis matrix and a real-valued co-efficient matrix. While the ternary basis matrix is sufficiently informative to reconstruct the original weights, it only consumes 2 bits per element. The number of rows of the co efficient matrix is also smaller than that of the original weight matrix. These compact representations result in efficient memory compression."}, {"section_index": "4", "section_name": "A BINARY VS. TERNARY", "section_text": "Figure5Jillustrates the reconstruction errors of a 4096 1000 weight matrix of the last fully connected layer in VGG-16 model (Simonyan & Zisserman]2015). We tested both the binary and ternary constraints on M.., for comparison. The reconstruction error J. monotonically decreased along with an increase in kw. It was clear that the ternary basis provided better reconstruction than the binary basis.\nBinary activation encoding for fast feed-forward propagation: It has been reported that an inne. 
product between a ternary and binary vector can be computed extremely fast by using three logical operations: AND, XOR, and bit count (Ambai & Sato, 2014). To use this technique, we approximate the activation vector by a weighted sum of binary vectors. This binary encoding must be processed as fast as possible at test-time. To overcome this issue, we use a fast binary encoding method based on a small lookup table."}, {"section_index": "5", "section_name": "1.1 RELATED WORK", "section_text": "There have been extensive studies on accelerating and compressing deep neural networks, e.g., an FFT-based method (Mathieu et al., 2014), re-parameterization of a weight matrix (Yang et al., 2015), pruning network connections (Han et al., 2015; 2016), and hardware-specific optimization (Vanhoucke et al., 2011). In the following paragraphs, we only review previous studies that are intimately connected to ours.
Figure 5: A 4096×1000 weight matrix of the last fully connected layer in the VGG-16 model (Simonyan & Zisserman, 2015) is decomposed under two different constraints: (blue) {−1, +1} and (red) {−1, 0, +1}; the reconstruction error J_w is plotted against the number of basis vectors k_w.
There is another series of studies, on integer decomposition (Hare et al., 2012; Yuji et al., 2014; Ambai & Sato, 2014), which involved accelerating the test-time speed of a classifier by using fast logical operations. Although their contributions are limited to shallow architectures such as a linear SVM, they achieved a noticeable acceleration. In these approaches, a real-valued weight vector is approximated by a weighted sum of a few binary or ternary basis vectors. To use fast logical operations, they extracted binary features from an image. Hare et al. (2012) and Yuji et al. (2014) exploited binary basis vectors, and Ambai & Sato (2014) investigated a case of ternary basis to improve approximation quality.
In a manner of speaking, our method is a unified framework of the matrix/tensor factorization and integer decomposition reviewed above, and it inherits both their advantages. While the weight matrix is factorized to exploit low-rank characteristics, the basis matrix is restricted to take only three integer values, {−1, 0, +1}. In contrast to recent binary weighted networks such as XNOR-Net (Rastegari et al., 2016), which quantizes both activations and weights during backpropagation, it is not necessary for our method to change training algorithms at all. We can benefit from recent sophisticated training techniques, e.g., batch normalization (Ioffe & Szegedy, 2015), in combination with our method. Furthermore, our method does not need (iterative) end-to-end retraining, which is needed for several previous studies such as network pruning (Han et al., 2015; 2016) and distillation (Hinton et al., 2014).
In this section, we introduce our compression model and discuss its time and space complexity. We consider a convolutional layer with a filter size of w_x × w_y × c, where w_x and w_y are the spatial size and c is the number of input channels. If w_x = w_y = 1, we can regard this layer as a fully connected layer. This three-dimensional volume is reshaped to form a D1-dimensional vector, where D1 = w_x·w_y·c. The filter weights and biases can be formulated by W ∈ R^{D1×D0} and b ∈ R^{D0}, where D0 is the number of output channels. Let x ∈ R^{D1} denote an activation vector obtained by vectorizing the corresponding three-dimensional volume.
It was pointed out by Denil et al. (2013) that network weights have a significant redundancy.
Motivated by this fact, researchers have been involved in a series of studies on matrix/tensor factorization (Jaderberg et al., 2014; Zhang et al., 2015). In these studies, a weight matrix (or tensor) was factorized by minimizing the approximation error of the original weights or activations. Jaderberg et al. (2014) exploited 1-D separable filter decomposition to accelerate feed-forward propagation. Zhang et al. (2015) proposed a low-rank approximation based on generalized SVD to compress an entire deep network. Taking into account the lessons learned from these best practices, we also exploit the redundancy of the weights.

Table 1: Number of operations
                               floating-point        logical
operation                      multiply-adds         AND               XOR               bit count
original (W⊤x)                 D1·D0                 0                 0                 0
proposed (C_w⊤M_w⊤M_x c_x)     k_x·k_w + k_w·D0      (D1·k_x·k_w)/B    (D1·k_x·k_w)/B    (D1·k_x·k_w)/B

Table 2: Memory consumption. Real values are represented in single precision (32 bits/element).
               original       proposed
variables      W              M_w            C_w            c_x, b_x
size (bits)    32·D1·D0       2·D1·k_w       32·k_w·D0      32·(k_x + 1)

At test-time, we need to compute W⊤x + b followed by a non-linear activation function.
In our compressed network, W is decomposed into two matrices before test-time as follows:
W ≈ M_w C_w,    (1)
where M_w ∈ {−1, 0, +1}^{D1×k_w} is a ternary basis matrix, C_w ∈ R^{k_w×D0} is a co-efficient matrix, and k_w is the number of basis vectors, respectively. Since M_w only takes three values, it consumes only 2 bits per element. Setting a sufficiently small value for k_w further reduces the total memory consumption. From the viewpoint of approximation quality, it should be noted that a large number of elements in W take values close to zero. To fit them well enough, a zero value must be included in the basis. The ternary basis satisfies this characteristic. In practice, the ternary basis gives a better approximation than the binary basis, as we discuss in Section 3.
The activation vector x is also factored into the following form:
x ≈ M_x c_x + b_x 1,    (2)
where M_x ∈ {−1, +1}^{D1×k_x} is a binary basis matrix, c_x ∈ R^{k_x} is a real-valued co-efficient vector, b_x ∈ R is a bias, and k_x is the number of basis vectors, respectively. Since elements of x are often biased, e.g., activations from ReLU take non-negative values and have a non-zero mean, b_x is added to this decomposition model. While c_x and b_x reflect the range of activation values, M_x determines the approximated activation values within the defined range. This factorization must be computed at test-time because the intermediate activations depend on the input to the first layer. However, in practice, factorizing x into M_x, c_x, and b_x requires an iterative optimization, which is very slow. Since the scale of activation values within a layer is almost the same regardless of x, we pre-compute canonical c_x and b_x in advance and only optimize M_x at test-time. As we discuss in Section 4, an optimal M_x under fixed c_x and b_x can be selected using a lookup table, resulting in fast factorization.
Substituting Eqs. (1) and (2), the response of the layer is approximated as
W⊤x + b ≈ (M_w C_w)⊤(M_x c_x + b_x 1) + b = C_w⊤M_w⊤M_x c_x + b_x C_w⊤M_w⊤1 + b.    (3)
The time and space complexity are summarized in Tables 1 and 2. As can be seen from Table 1, most of the floating-point operations are replaced with logical operations. In this table, B means the bit width of a variable used in the logical operations, e.g., B = 64 if a type of unsigned long long is used in the C/C++ language. Table 2 suggests that if k_w is sufficiently smaller than D1 and D0, the total size of M_w and C_w is reduced compared to the original parameterization.
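To make the logical-operation counts of Table 1 concrete, the sketch below shows how a single ternary–binary inner product m_w⊤m_x reduces to AND, XOR, and bit count. It is a minimal Python illustration of ours that uses integers as bit-sets; a real implementation would pack bits into B = 64-bit words and use the hardware popcount, as the table's B factor suggests.

def popcount(x):
    # bit count; maps to a single POPCNT instruction in a C/C++ implementation
    return bin(x).count("1")

def pack_ternary(t):
    # mask bit j = 1 iff t[j] != 0; sign bit j = 1 iff t[j] == +1
    mask = signs = 0
    for j, v in enumerate(t):
        if v != 0:
            mask |= 1 << j
            if v > 0:
                signs |= 1 << j
    return mask, signs

def pack_binary(b):
    # bit j = 1 iff b[j] == +1 (b is a vector over {-1, +1})
    bits = 0
    for j, v in enumerate(b):
        if v > 0:
            bits |= 1 << j
    return bits

def ternary_binary_dot(mask, signs, b_bits):
    # where t[j] != 0: sign agreement contributes +1, disagreement -1
    disagree = mask & (signs ^ b_bits)     # one XOR, one AND
    return popcount(mask) - 2 * popcount(disagree)

# example: t = [+1, 0, -1], b = [+1, +1, -1]  ->  <t, b> = 1*1 + 0 + (-1)*(-1) = 2
mask, signs = pack_ternary([1, 0, -1])
assert ternary_binary_dot(mask, signs, pack_binary([1, 1, -1])) == 2

Computing M_w⊤M_x this way costs k_w·k_x such products, each touching D1/B machine words, which matches the (D1·k_x·k_w)/B entries of Table 1.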
To factorize W, we need to solve the following optimization problem:
J_w = min_{M_w, C_w} ||W − M_w C_w||_F.    (4)
However, the ternary constraint makes this optimization very difficult. Therefore, we take an iterative approach that repeats a rank-one approximation, one basis vector at a time, as shown in Algorithm 1. Let m_w^(i) denote the i-th column vector of M_w and c_w^(i) the i-th row vector of C_w. Instead of directly minimizing Eq. (4), we iteratively solve the following rank-one approximation:
min_{m_w^(i), c_w^(i)} ||R − m_w^(i) c_w^(i)||².    (5)

Algorithm 1 Decompose W into M_w and C_w
Require: W, k_w
Ensure: factorized components M_w and C_w
1: R ← W
2: for i = 1 to k_w do
3:    Initialize m_w^(i) by three random values {−1, 0, +1}.
4:    Minimize ||R − m_w^(i) c_w^(i)|| by repeating the following two steps until convergence:
5:       [Step 1] c_w^(i) ← m_w^(i)⊤R / (m_w^(i)⊤m_w^(i))
6:       [Step 2] m_w,j^(i) ← argmin_{α ∈ {−1, 0, +1}} ||r_j − α c_w^(i)||², for j = 1, ..., D1
7:    R ← R − m_w^(i) c_w^(i)
8: end for

Binary decomposition for a given activation vector x can be performed by minimizing
J_x(M_x, c_x, b_x; x) = ||x − (M_x c_x + b_x 1)||²₂.    (6)
Given c_x and b_x, the optimal j-th row of M_x is obtained as
m_x^(j) = argmin_{m ∈ {−1, +1}^{1×k_x}} (x_j − (m c_x + b_x))²,  j = 1, ..., D1,    (7)
where x_j is the j-th element of x. Since k_x is sufficiently small, the 2^{k_x} possible solutions can be exhaustively verified (in line 5 of Algorithm 2).
Our method makes this decomposition faster by pre-computing canonical c_x and b_x from training data and only optimizing M_x at test-time using a lookup table. This compromise is reasonable for the following two reasons: (1) the scale of activation values is similar regardless of the vector elements within a layer, and (2) c_x and b_x reflect the scale of the approximated activation values. Knowing these properties, c_x and b_x are obtained by minimizing J_x(M_x, c_x, b_x; x̂), where x̂ is constructed from training data: n elements are randomly sampled from each of N sampled activation vectors, and the sampled nN elements are concatenated to form a vector x̂ ∈ R^{nN}. We use the resulting c_x and b_x as constants at test-time, and discard M_x̂."}, {"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "We tested our method on three different convolutional neural networks: a CNN for handwritten digits (LeCun et al., 1998), VGG-16 for ImageNet classification (Simonyan & Zisserman, 2015), and VGG-Face for large-scale face recognition (Parkhi et al., 2015). To compute the memory compression rate, the size of W and the total size of M_w and C_w were compared. To obtain a fair evaluation of computation time, test-time code for forward propagation was implemented without using any parallelization scheme, e.g., multi-threading or SIMD, and was used for both the compressed and uncompressed networks. The computation time includes both binary activation encoding and the calculation of Eq. (3). We used an Intel Core i7-5500U 2.40-GHz processor."}, {"section_index": "7", "section_name": "5.1 CNN FOR HANDWRITTEN DIGITS", "section_text": "MNIST is a database of handwritten digits which consists of 60000 training and 10000 test sets of 28×28 gray-scale images with ground-truth labels from 0 to 9. We trained our CNN by using an example code in MatConvNet 1.0-beta18 (Vedaldi & Lenc, 2015). Our architecture is similar to LeNet-5 (LeCun et al., 1998) but has a different number of input and output channels. Each layer's configuration is shown below:
At test-time, we only need to solve the optimization of Eq. (7) for each x. This can be regarded as a nearest-neighbour search in one-dimensional space. We call βc_x + b_x a prototype, where β ∈ {−1, +1}^{1×k_x}.
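The exhaustive search of Eq. (7) over these prototypes can be sketched in a few lines of numpy (an illustrative toy of ours, not the authors' code); the lookup table discussed next replaces this per-element enumeration with a single quantization step.

import numpy as np
from itertools import product

def encode_activations(x, c_x, b_x):
    # enumerate all 2^{k_x} binary codes beta and their prototypes beta @ c_x + b_x
    k_x = len(c_x)
    betas = np.array(list(product([-1.0, 1.0], repeat=k_x)))
    prototypes = betas @ c_x + b_x
    # Eq. (7): pick, for every element x_j, the code with the nearest prototype
    nearest = np.abs(x[:, None] - prototypes[None, :]).argmin(axis=1)
    return betas[nearest]      # the rows m_x^(j) of M_x, shape (D1, k_x)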
There are 2^{k_x} possible prototypes because β takes 2^{k_x} possible combinations. The nearest prototype to x_j, and the corresponding optimal m_x^(j), can be efficiently found using a lookup table as follows.
Preparing the lookup table: We define L bins that evenly divide the one-dimensional space in a range from the smallest to the largest prototype. Let x̂_l denote a representative value of the l-th bin, located at the center of the bin. For each x̂_l, we solve Eq. (7) and assign the solution to the bin.
Activation encoding: At test-time, x_j is quantized into L levels; in other words, x_j is transformed into an index of the lookup table. Let p_max and p_min denote the largest and smallest prototypes, respectively. We transform x_j as follows:
q = (L − 1)(x_j − p_min)/(p_max − p_min),    (8)
l = min(max(⌊q + 1/2⌋, 1), L).    (9)
The range from p_min to p_max is linearly mapped to the range from 1 to L by Eq. (8). The term q is rounded and truncated to [1, L] by the max and min functions in Eq. (9). If L is sufficiently large, the solution assigned to the l-th bin can be regarded as a nearly optimal solution, because the difference between x_j and the bin center x̂_l becomes very small. We found that L = 4096 is sufficient. The time complexity of this encoding is O(D1).
Figure 2: Results of MNIST: (a) error vs. memory compression, (b) error vs. acceleration. The first fully connected layer was decomposed.
where the parameters of a convolutional layer are denoted as (conv<receptive field size>-<number of output channels>), and the parameters of a fully connected layer are denoted as (fc<number of input channels>-<number of output channels>). The (maxpool) is 2×2 subsampling without overlapping. The error rate of this network is 0.86%.
We applied our method to the first fully connected layer (fc1024-640) and set n = 10 and N = 1000 to learn c_x and b_x from randomly chosen nN activations. The cases of k_x = 1, 2, 3, 4 and k_w = D0, D0/2, D0/5 were tested; that is, k_w was set to 640, 320, and 128.
Figures 2(a) and (b) show the relationships among the increases in error rates, memory compression rates, and acceleration rates. It was observed that error rates basically improved along with increasing k_x and saturated at k_x = 4. It is interesting that k_x = 2, only 2 bits per element for encoding an activation x, still achieved good performance. While smaller k_w achieved better compression and acceleration rates, error rates rapidly increased when k_w = D0/5. One of the well-balanced parameter settings was (k_x, k_w) = (4, D0/2), which resulted in 1.95× faster processing and a 34.4% memory compression rate in exchange for a 0.19% increase in the error rate."}, {"section_index": "8", "section_name": "5.2 VGG-16 FOR IMAGENET CLASSIFICATION TASK", "section_text": "The dataset of ILSVRC2012 (Russakovsky et al., 2015) consists of 1.2 million training, 50,000 validation, and 100,000 test images. Each image represents one of 1000 object categories. In this experiment, we used the VGG-16 network (model D in (Simonyan & Zisserman, 2015)), which consists of 13 convolutional layers and 3 fully connected layers followed by a softmax layer. The architecture is shown below, where the layers before the first fully connected layer are omitted:
First, all three fully connected layers were compressed with our algorithm. We set n = 10 and N = 1000 to learn c_x and b_x from randomly chosen nN activations. The cases of k_x = 2, 3, 4 and k_w = D0/2, D0/4, D0/8, D0/16 were tested. The case of k_x = 1 was omitted because this setting resulted in a very high error rate. Note that each of the fully connected layers has a different D0; k_w was set independently for each layer according to its D0. The top-5 error rates were evaluated on the validation dataset. The top-5 error rate of the original network is 13.4%. The three lines with circles in Figure 3 show these results.
It should be noted that much higher acceleration rates and smaller compression rates, with a small loss of accuracy, were achieved than in the case of the network for MNIST. Interestingly, the case of k_w = D0/4 still performed well, due to the low-rank characteristics of the weights in the VGG-16 network.
Although the error rates rapidly increased when k_w took much smaller values, we found that this could be improved by tuning k_w of the third layer. More specifically, we additionally tested the following cases. While k_w was set to D0/2, D0/4, D0/8, and D0/16 for the first and second layers, k_w was fixed to D0 for the third layer. The k_x was set to 4. This is plotted with the red line in Figure 3. In this way, the memory compression rate and acceleration rate noticeably improved. Setting appropriate parameters for each layer is important to improve the total performance. Table 3 shows the details of the best balanced case, in which 15× faster processing and a 5.2% compression rate were achieved in exchange for a 1.43% increase in error rate.
Figure 3: Results of VGG-16: (a) error vs. memory compression, (b) error vs. acceleration. The last three fully connected layers were decomposed.

Table 3: Best balanced parameters for decomposing the three fully connected layers of VGG-16 (top-5 error: original 13.4%, proposed 14.8%).
layer          original              proposed (k_x = 4)
               MBytes    msec        k_w     MBytes (ratio)    msec (ratio)
fc25088-4096   392.0     142.4       D0/8    11.1  (2.8%)      6.1  (23.5×)
fc4096-4096    64.0      22.8        D0/8    8.5   (13.3%)     3.0  (7.5×)
fc4096-1000    15.6      5.7         D0      4.8   (30.7%)     2.3  (2.5×)
total          471.6     170.9               24.4  (5.2%)      11.4 (15.0×)

Table 4: Results of decomposing the convolutional layers of VGG-16.
Next, we also tested compressing the convolutional layers. In this experiment, k_w and k_x were set to D0 and 4. This setting accelerates each of the layers by 2.5× on average. Table 4 shows the positions of the compressed layers, the top-5 errors, and the acceleration rates of the entire network. Although k_w and k_x must be larger than those of the fully connected layers to avoid error propagation, compression is still beneficial for accelerating the entire network. In summary, while compressing fully connected layers is beneficial for reducing memory, compressing convolutional layers is beneficial for reducing the entire computation time.
This network outputs a 4096-dimensional descriptor. We can verify whether two face images are identical by evaluating the Euclidean distance of two l2-normalized descriptors extracted from them.
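As a minimal illustration of this verification protocol (our own sketch, not the authors' code; the threshold would be tuned on the ROC curve, e.g., at the equal-error-rate operating point discussed below):

import numpy as np

def same_identity(desc_a, desc_b, threshold):
    # compare two 4096-dimensional descriptors after l2 normalization
    a = desc_a / np.linalg.norm(desc_a)
    b = desc_b / np.linalg.norm(desc_b)
    return np.linalg.norm(a - b) < threshold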
Figure 4: Results of VGG-Face: (a) error vs. memory compression, (b) error vs. acceleration. The last two fully connected layers were decomposed.
Table 5: Results of decomposing the convolutional layers of VGG-Face.
In our experiment, we did not apply a descriptor embedding technique based on triplet loss minimization (Parkhi et al., 2015). Following the evaluation protocol introduced in a previous paper (Parkhi et al., 2015), we used the Labeled Faces in the Wild dataset (LFW) (Huang et al., 2007), which includes 13,233 face images with 5,749 identities. The LFW defines 1200 positive and 1200 negative pairs for testing. We used the 2400 test pairs to compute the ROC curve and the equal error rate (EER). The EER is defined as the error rate at the ROC operating point where the false positive and false negative rates are equal. The EER of the original network is 3.8%.
First, the two fully connected layers were compressed using our algorithm. We set n = 10 and N = 1000 to learn c_x and b_x from randomly chosen nN activations. We tested the cases of k_x = 1, 2, 3, 4 and k_w = D0/2, D0/4, D0/8, D0/16. Figure 4 reveals an interesting fact: even the fastest and smallest network configuration, k_x = 1 and k_w = D0/16, had little impact on the EER, in contrast to the previous ImageNet classification task, in which the recognition results were corrupted when k_x = 1. This indicates that the 4096-dimensional feature space is well preserved regardless of such coarse discretization of both weights and activations.
We proposed a network compression model that consists of two components: ternary matrix decomposition and binary activation encoding. Our experiments revealed that the proposed compression model is applicable not only to multi-class recognition but also to feature embedding. Since our approach is a post-processing step for a pre-trained model, it is promising that recent networks designed for semantic segmentation, describing images, stereo matching, depth estimation, and much more can also be compressed with our method. For future work, we plan to further improve the approximation error by investigating the discrete optimization algorithm.
Next, we also tested compressing the convolutional layers. In this experiment, k_w and k_x were set to D0 and 4, the same setting used in Table 4. Table 5 shows the positions of the compressed layers and the EERs. The acceleration rates were almost the same as the results shown in Table 4. This is because the architecture of VGG-Face is the same as VGG-16 and we used the same parameters for k_w and k_x. Interestingly, compressing multiple layers, from the 2nd to the 10th, still preserves the original EER. As can be seen from this table, our method works very well depending on a certain kind of machine learning task."}]
HJ7O61Yxe
[{"section_index": "0", "section_name": "MODELING RELATIONAL TIME SERIES USING GAUS- SIAN EMBEDDINGS", "section_text": "Ludovic Dos Santos*Ludovic Denoyer, Benjamin Piwowarski & Patrick Gallinari\nJan G De Gooijer and Rob J Hyndman. 25 years of time series forecasting. International journal o forecasting, 2006."}, {"section_index": "1", "section_name": "Ali Ziat*", "section_text": "Ludovic Dos Santos, Benjamin Piwowarski, and Patrick Gallinari. Multilabel classification on het erogeneous graphs with gaussian embeddings. In ECML-KDD. 2016\nAlan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur rent neural networks. In IIIE ICASSP, 2013.."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Relational time series, i.e. multiple time series where the observations are correlated both inside. each series and between series occur in many domains such as ecology, medicine, biology, earth. observation by satellite imagery or local measurements, multimedia or even social data analysis The correlations between the different observed series can come from a proximity (e.g. earth obser-. vation or epidemic diffusion) or from a similarity of behavior (e.g. user traces in social data). In the. statistical literature, the modeling of relational time series has been the topic of a dedicated field. spatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). Different method- ologies have been developed for handling a large variety of spatio-temporal phenomena, with an. emphasis on the analysis of natural observations like weather prediction, ecology or remote sensing. In the machine learning domain, there exists a vast literature dedicated to sequence or time series prediction. Recently, deep recurrent neural networks have witnessed notable successes in different. sequence and time series modeling tasks leading to an increasing number of publications, e.g. (Bar-. bounis et al. (2006); Hsieh et al. (2011); Cao et al. (2012); Hermans & Schrauwen (2013)). Despite a large number of recent developments, the modeling and analysis of relational time series has only. attracted a few attention in the field of representation learning. In addition, most of the models are. deterministic in the sense that they are trained to learn a fixed mapping for modeling the dynamics. of the series.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015\nDP Kingma and M Welling. Auto-encoding variational bayes. In ICLR, 2014.\nRahul G Krishnan, Uri Shalit, and David Sontag. Deep kalman filters. NIPs 2015 Workshop, 2015\nKR Muller, A J Smola, G Ratsch, B Scholkopf, J Kohlmorgen, and V Vapnik. Using support vector machines for time series prediction. Kernel methods--support vector learning, 99.\nDanilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and variational inference in deep latent gaussian models. In International Conference on Machine Learning, 2014.\nIlya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural ne works. In Proceedings of ICML, 2011.\nLuke Vilnis and Andrew McCallum. Word representations via gaussian embedding. ICLR, 2015\nWe propose a new state space model for relational time series able to model the uncertainty at the observation and at the modeling levels. 
The principle of this approach is to associate each point o a time series to a Gaussian distribution in a latent space, the distribution over the observed values being directly computed from these latent distributions. The model has two main components. One is responsible for the dynamics in the latent space. This component is thus modeling the evolutior of the Gaussian distribution considering both the temporal intra-series and the relational inter-serie.\nChristopher K. Wikle. Modern perspectives on statistics for spatio-temporal data. Wiley Interdisci plinary Reviews: Computational Statistics, 7(1):86-98, 2015.\nAli Ziat, Gabriella Contardo, Nicolas Baskiotis, and Ludovic Denoyer. Learning embeddings fo. completion and prediction of relational multivariate time-series. In ESANN, 2016\n*Both authors contributed equally to this work\nJerome T Connor, R Douglas Martin, and Les E Atlas. Recurrent neural networks and robust time series prediction. Neural Networks. IEEE Transactions on. 1994\nNoel A. C. Cressie and Christopher K. Wikle. Statistics for spatio-temporal data. Wiley series in probability and statistics. Hoboken, N.J. Wiley, 2011. ISBN 978-0-471-69274-4.\nMarco Fraccaro, Soren Kaae Sonderby, Ulrich Paquet, and Ole Winther. Sequential neural models with stochastic lavers. Advances in neural information processing svstems 2016. 2016"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "We address the problem of modeling multiple simultaneous time series where the observations are correlated not only inside each series, but among the different series. This problem happens in many domains such as ecology, meteorology, etc We propose a new dynamical state space model, based on representation learn- ng, for modeling the evolution of such series. The joint relational and temporal dynamics of the series are modeled as Gaussian distributions in a latent space. A decoder maps the latent representations to the observations. The two components dynamic model and decoder) are jointly trained. Using stochastic representations allows us to model the uncertainty inherent to observations and to predict unob served values together with a confidence in the prediction"}, {"section_index": "4", "section_name": "IMPACT OF MINIMIZING THE KL-DIVERGENCE ON PREDICTED VALUES", "section_text": "dependencies. A second component acts as a decoder and maps the latent representations associate with each series to the corresponding observations in the output space..\nIn this section, we show that the structural regularization term between two time series bounds the difference predicted observations\nThe contributions of the paper are thus: (i) a new dynamical model for relational time series in- spired by representation learning; (ii) a stochastic component for modeling the uncertainties at the observation and dynamic levels\nSince we use diagonal covariance matrices and that the KL-divergence is invariant by multiplying both random variables by the same scalar, we can show that:\nThe paper is organized as follows. In Section 2 we introduce some related work on forecasting in time series, representation learning for time series, and recent deep learning works focusing on modeling uncertainty. 
The model is presented in Section 3 together with four different variants Section 4 presents experimental results on four datasets, and section 5 concludes this work and gives some perspectives.\nThen, using Pinsker's inequality one can see that minimizing the KL-divergence also minimize the total variation norm (which can be more intuitive in some cases), leading to:\nThe classical topic of time series modeling and forecasting has given rise to an extensive literature In statistics, classical linear models include many variations around auto-regressive and moving average models (De Gooijer & Hyndman (2006)). In machine learning, non linear extensions of these models based on neural networks have been proposed as early as the 90s, opening the way to many other non linear models including kernel methods (Muller et al. (99)).\nd d DKL(0k2 2 k=1 k=1\nRelational time series have mainly been studied in the field of spatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). The traditional method first relied on a descriptive approach using the first and second-order moments of the process for modeling the spatio-temporal dependen cies. More recently, dynamical state models, where the current state is conditioned on the past have been explored (Wikle (2015)). These models have been considered both for continuous/discrete space and time components. However, the most common way is to consider discrete time, leading to the modeling of time series of spatial processes as we do here. When space is discrete, the mode comes down to a general vectorial autoregressive formulation. These models face a curse of dimen sionality in the case of a large number of sources. Different strategies have been adopted to solve thi. problem such as embedding the spatio-temporal process in a low-dimensional manifold or param eter reduction (Wikle (2015)), leading to model families quite similar to the ones used in machine learning for modeling dynamical phenomena. Also, for complex underlying processes, observations only provide an incomplete description of the process dynamics so that modeling uncertainty at the data and model levels is an important topic.\nFinally, each component of the random vectors Z(t) being pairwise independent, we have\nCombining the the inequalities above. ve can straightforwardly show the following inequality\nn the last 10 years, there has been a growing interest in learning latent representations for exampl through neural networks and deep learning. Dynamical state space models such as recurrent neura. networks (RNN), which have been used for time series forecasting in different contexts since the early nineties (Connor et al. (1994)), have recently witnessed important successes in different areas. or general sequence modeling problems, leading to breakthroughs in domains like speech (Graves et al. (2013)), language generation (Sutskever et al. (2011)), translation (Cho et al. (2014)), anc. many others. Among this family, the model closest to ours is the dynamic factor graph model o. Mirowski & LeCun (2009)) designed for multiple series modeling for the tasks of forecasting anc. mputation. However this model does not consider relational dependencies which is the focus of ou approach.\nMost of the above models make use of pointwise representations and do not model explicitly the uncertainties present in the process and/or in the observations. 
Recently, in the learning representation community, there has been a growing interest in using distributions as latent representations instead of points. (Vilnis & McCallum (2015); He et al. (2015); Dos Santos et al. (2016)) all make use of Gaussian distributions for representing different items like words (Vilnis & McCallum (2015)), nodes in knowledge graphs (He et al. (2015)) or nodes in graphs for transductive classification (Dos Santos et al. (2016)). Note that Gaussian processes have also been used for time series prediction, but they have mainly been considered for univariate time series prediction (Hachino & Kadirkamanathan (2011); Brahim-Belhouari & Bermak (2004)) and they do not use a state space formulation.
Written out, the chain of inequalities from the KL-divergence argument above reads:
D_KL(Z_i^(t) || Z_j^(t)) = ∑_{k=1}^d D_KL(Z_i^(t,k) || Z_j^(t,k)) = ∑_{k=1}^d D_KL(θ^(k) Z_i^(t,k) || θ^(k) Z_j^(t,k)),
∑_{k=1}^d d_TV(θ^(k) Z_i^(t,k), θ^(k) Z_j^(t,k)) ≤ ∑_{k=1}^d √(D_KL(θ^(k) Z_i^(t,k) || θ^(k) Z_j^(t,k)) / 2),
d_TV(f(Z_i^(t)), f(Z_j^(t))) ≤ ∑_{k=1}^d d_TV(θ^(k) Z_i^(t,k), θ^(k) Z_j^(t,k)),
d_TV(f(Z_i^(t)), f(Z_j^(t))) ≤ √((d/2) · D_KL(Z_i^(t) || Z_j^(t))).
Recent techniques in variational inference (Kingma & Welling (2014); Rezende et al. (2014)) deal with uncertainty by modeling distributions in the observation space, mapping random variables within a latent space to observations with a deep neural network. Extension of the variational in
The distributions themselves are estimated using observations like for any other representation learning model. Besides being more adapted to handling the noise inherent to the process and to the observations, the model can be used to predict the posterior distribution of the variables associated to the series and in particular the confidence or variance associated to the predictions.

The model is an extension of the deterministic model of (Ziat et al. (2016)) and has two main components: (i) Decoding component: we consider that each series corresponds to a particular trajectory in an unknown latent space. Each series x_i is thus associated to a sequence of random variables in ℝ^d denoted Z_i^{(1)}, …, Z_i^{(T)}, Z_i^{(t)} being the latent factor explaining the observed value of series i at time t, modeled as a multivariate Gaussian. The observations can be predicted from the latent random variables using a decoding function mapping Z_i^{(t)} to X̃_i^{(t)} = f(Z_i^{(t)}). (ii) Dynamic component: the second component models the series dynamics in the latent space. We suppose that the dynamics can be captured for all series through a function h that maps the latent random variable Z_i^{(t)} to the next latent variable Z_i^{(t+1)}. In addition, constraints are introduced to reflect prior knowledge about the relational dependency structure of the series. For any couple of series i and j with a known dependency, i.e. such that e_{i,j} > 0, we add a corresponding constraint on Z_i^{(t)} and Z_j^{(t)}, as explained in Section 3.3.3.

¹For simplicity, we consider univariate time series, but the model can be trivially extended to multivariate time series.

Our model is built on top of the model in (Ziat et al. (2016)) which proposes a deterministic dynamical process model but does not consider any explicit modeling of uncertainty. In this paper, we propose a model that uses Gaussian embeddings, and extend the dynamics and loss functions of the model in (Ziat et al. (2016)).

Both the observations and the dynamics are subject to uncertainties. Usually, the observations correspond to a partial view of the underlying generating process, and the dynamics being hidden is not directly accessible and should be modeled as a stochastic process.

In the following, we explain how the distributions corresponding to the random variables Z are learned, jointly to the functions f (decoder component) and h (dynamic component).

"}, {"section_index": "6", "section_name": "3.3 MODEL DEFINITION", "section_text": "

$$\mathcal{L}(\mu, \Sigma, f, h) = \sum_{i=1}^{n}\sum_{t=1}^{T} \Delta_{dec}\left(f(Z_i^{(t)}),\, x_i^{(t)}\right) + \lambda_{Dy} \sum_{i=1}^{n}\sum_{t=1}^{T-1} \Delta_{Dy}\left(h(Z_i^{(t)}),\, Z_i^{(t+1)}\right) + \lambda_{R} \sum_{i,j=1}^{n}\sum_{t=1}^{T} e_{i,j}\, \Delta_{R}\left(Z_i^{(t)},\, Z_j^{(t)}\right)$$

where λ_{Dy} and λ_{R} are hyperparameters weighting the importance of the different elements in the loss function. The first term corresponds to the decoding component, and forces both f and the learned distributions of variables Z to 'explain' the observations; the second term, the dynamic component, encourages h to model the time dynamics in the latent space, while the third term captures the relations between the pairs of series. In the following, we use for f a linear function and h will be either a linear or non-linear function (see Section 3.3.2).

Learning: Learning the model is performed through the minimization of the loss function L(μ, Σ, f, h) with respect to μ, Σ, f and h. To simplify the notations, the parameters of f and h are not made explicit in the notations; f and h are supposed to be differentiable. At the end of the learning process, all the latent distributions for each of the time steps are known for the training data, as well as the decoding function f and the dynamical one h. We used ADAM (Kingma & Ba (2015)) as a stochastic gradient descent technique.
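As an illustration of how the three terms combine, the following sketch evaluates the loss value for a linear decoder and diagonal covariances. It is a toy evaluation only (in practice gradients would come from an autodiff framework); all names are ours, and the dynamics term is simplified by keeping the predicted variance diagonal.

import numpy as np

def kl_diag(mu_p, var_p, mu_q, var_q):
    # Closed-form KL( N(mu_p, diag(var_p)) || N(mu_q, diag(var_q)) )
    return 0.5 * np.sum(var_p / var_q + (mu_q - mu_p) ** 2 / var_q
                        - 1.0 + np.log(var_q / var_p))

def rdg_loss(mu, var, x, theta, gamma, edges, lam_dy, lam_r):
    # mu, var: (n, T, d) latent means/variances; x: (n, T) observations
    n, T, d = mu.shape
    dec = np.sum((mu @ theta - x) ** 2)          # decoding term, linear f
    dyn = sum(kl_diag(mu[i, t + 1], var[i, t + 1],
                      gamma @ mu[i, t], var[i, t])  # simplified linear dynamics
              for i in range(n) for t in range(T - 1))
    rel = sum(e * kl_diag(mu[i, t], var[i, t], mu[j, t], var[j, t])
              for (i, j, e) in edges for t in range(T))
    return dec + lam_dy * dyn + lam_r * rel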
This optimization can easily be performed on a large-scale dataset, and/or by using GPUs.

"}, {"section_index": "7", "section_name": "3.3.1 FROM LATENT SPACE TO OBSERVATIONS", "section_text": "We define a global loss function L(μ, Σ, f, h), where μ and Σ are the means and covariance matrices for all the series and for all the time steps between 1 and T. The loss is a sum of three terms: (i) a decoding loss Δ_dec, (ii) a dynamical loss Δ_Dy and (iii) a structural loss Δ_R.

The mapping onto the latent space is learned so that the values x̃_i^{(t)} of each series can be predicted from their respective Gaussian embedding Z_i^{(t)} through the f function. We define below two alternative decoding loss functions Δ_dec, used in the experiments for measuring the error between the predictions of the model and the observations.

The first loss measures the difference between the expected value of f and the observation using a mean-square error:

$$\Delta_{dec_1}\left(f(Z_i^{(t)}),\, x_i^{(t)}\right) = \left(\mathbb{E}\left[f(Z_i^{(t)})\right] - x_i^{(t)}\right)^2$$

When considering a linear decoding function such as f(·) = ⟨θ, ·⟩, θ being the set of parameters of f, Δ_{dec_1} can be rewritten as:

$$\Delta_{dec_1}\left(f(Z_i^{(t)}),\, x_i^{(t)}\right) = \left(\langle\theta, \mu_i^{(t)}\rangle - x_i^{(t)}\right)^2$$

The second loss aims at measuring the distance between the random variable modeling the predicted observations and the observations. This is the expectation of the mean squared error between the predictions and the observations:

$$\Delta_{dec_2}\left(f(Z_i^{(t)}),\, x_i^{(t)}\right) = \mathbb{E}\left[\left(f(Z_i^{(t)}) - x_i^{(t)}\right)^2\right]$$

When f is a linear function, this loss can be written as:

$$\Delta_{dec_2}\left(f(Z_i^{(t)}),\, x_i^{(t)}\right) = \left(\langle\theta, \mu_i^{(t)}\rangle - x_i^{(t)}\right)^2 + \sum_{k=1}^{d} \theta_k^2\, \Sigma_i^{(t)}(k,k)$$

Minimizing Δ_{dec_1} only updates the mean of the distributions, whereas minimizing Δ_{dec_2} updates both the mean and the variance. More specifically, an observed value with Δ_{dec_2} will pull down the variances of the corresponding latent representation. Moreover, this effect will be higher for the dimensions of the latent space where the value of θ is higher. This is sensible since variance is reduced for the dimensions that are important for the prediction.

"}, {"section_index": "8", "section_name": "3.3.2 MODELING DYNAMICS", "section_text": "The loss function Δ_Dy aims at finding values Z_i^{(t)} and a dynamic model h that will be used to predict the representation of the next state of time series i, Z_i^{(t+1)}. The function h maps a distribution N(μ_i^{(t)}, Σ_i^{(t)}) to N(μ_i^{(t+1)}, Σ_i^{(t+1)}). Following (Ziat et al. (2016)), we use a Kullback-Leibler divergence (noted D_KL(·‖·)) to compare the distribution at (t+1) to the distribution predicted by h.

We propose in the following two alternative functions for h. For the first one, we consider that the latent representation at time (t+1) is a linear transformation of the latent distribution at time t. The transformed variable is also a Gaussian and its parameters can be easily computed. In this case, h is a linear function from ℝ^d to ℝ^d which is represented by a matrix γ ∈ M_{d,d}(ℝ):

$$Z_i^{(t+1)} = h\left(Z_i^{(t)}\right) = \gamma\, Z_i^{(t)}$$

Linear transformations of random vectors might be too restrictive to model complex processes. As an alternative transformation, we used two non-linear multilayer perceptrons (MLP), one h_m for predicting the means and one h_c for predicting the variances: the next mean is given by μ_i^{(t+1)} = h_m(μ_i^{(t)}) and the next variance by Σ_i^{(t+1)} = h_c(Σ_i^{(t)}). Note that in the second case, we also make the hypothesis that the resulting distribution (for Z_i^{(t+1)}) is Gaussian.

At last, Δ_R corresponds to a structural regularization over the graph structure that encourages the model to learn similar representations for time series that are interdependent. This forces the model to learn representations that reflect the structure dependencies between the series. Recall that these dependencies are supposed to be provided as priors for this model.

²$D_{KL}\left(Z_i^{(t)}\,\middle\|\,Z_j^{(t)}\right) = \tfrac{1}{2}\left(\mathrm{tr}\left((\Sigma_j^{(t)})^{-1}\Sigma_i^{(t)}\right) + (\mu_j^{(t)} - \mu_i^{(t)})^{T}(\Sigma_j^{(t)})^{-1}(\mu_j^{(t)} - \mu_i^{(t)}) - d + \ln\tfrac{\det \Sigma_j^{(t)}}{\det \Sigma_i^{(t)}}\right)$
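For the linear case, the pushforward of a Gaussian is available in closed form, which is what makes this dynamic model cheap to train. A minimal sketch (function names are ours), also implementing the full-covariance KL divergence of footnote 2:

import numpy as np

def propagate_linear(mu, Sigma, gamma):
    """Image of N(mu, Sigma) under h(z) = gamma z: again Gaussian,
    with mean gamma mu and covariance gamma Sigma gamma^T."""
    return gamma @ mu, gamma @ Sigma @ gamma.T

def kl_gauss(mu1, S1, mu2, S2):
    # D_KL( N(mu1, S1) || N(mu2, S2) ) for full covariance matrices
    d = mu1.shape[0]
    S2inv = np.linalg.inv(S2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(S2inv @ S1) + diff @ S2inv @ diff - d
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)))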
In the two cases, the KL divergence between the two Gaussian distributions has a simple analytic form from which the gradient can be easily computed².

We define this regularization loss as:

$$\Delta_{R}\left(Z_i^{(t)},\, Z_j^{(t)}\right) = D_{KL}\left(Z_i^{(t)}\,\middle\|\,Z_j^{(t)}\right)$$

Minimizing the regularization term Δ_R has a direct impact on the distributions of the predicted observations for connected time series. More precisely, we have the following inequality:

$$d_{TV}\left(\tilde{X}_i^{(t)},\, \tilde{X}_j^{(t)}\right) \le d\,\sqrt{\tfrac{1}{2}\, D_{KL}\left(Z_i^{(t)}\,\middle\|\,Z_j^{(t)}\right)}$$

with d_TV the total variation distance:

$$d_{TV}(X, Y) = \sup_{A \in \mathcal{B}(\mathbb{R}^n)} \left| D_X(A) - D_Y(A) \right|$$

with X and Y being two random variables of density distribution respectively D_X and D_Y, and B(ℝⁿ) being the Borel sets of ℝⁿ (roughly, cuboids in ℝⁿ). This means that having relatively similar representations (regarding the KL-divergence) constrains the predicted values to be similar. For more details see Appendix A.

During inference, when forecasting values, the latent distributions at (T+1) are deduced from the ones at time T and follow N(h(μ^{(T)}), Σ^{(T)}); the distributions at (T+2) follow N(h∘h(μ^{(T)}), Σ^{(T)}), and so on.

"}, {"section_index": "9", "section_name": "4.1 DATASETS AND BASELINES", "section_text": "Experiments have been performed on four datasets respectively extracted from Google Flu Trends³, WHO⁴ and from two datasets from Grand Lyon⁵ (GL) (respectively data from traffic conditions and from car parks occupancy). All the series are normalized. For all datasets, we used binary dependency relations indicating whether two series are related or not. The Google Flu Trend (GFT) dataset is composed of an aggregation of weekly Google search queries related to the flu in 29 countries. This dataset spans about ten years of time. The binary relations between series are defined a priori so that the series of two countries i and j are linked, i.e. e_{i,j} = 1 in Equation (1), only if the countries have a common frontier. There are 96 relations in all. The GL Traffic (GL-T) dataset corresponds to the traffic conditions of the 50 busiest roads of the city of Lyon (France). Data is aggregated on 20 minutes windows spanning 15 days. The binary relations between series are based on the geographical proximity of roads. There are 130 relations in total. The GL Park (GL-P) dataset represents the occupancy of public car parks in Lyon. The series correspond to the occupancy of the 30 busiest car parks. It has the same window and period of time as the previous dataset, and the binary relations between series are based on the geographical proximity of car parks. There are 74 relations in total. The WHO dataset provides the number of deaths caused by diphtheria over 91 different countries, giving rise to 91 time series. The binary relations between series are defined so that two series are linked if the corresponding countries share a common frontier. There are 228 links in total.

³http://www.google.org/flutrends
⁴http://www.who.int
⁵http://data.grandlyon.com

Figure 1: Quantitative comparison between baselines and our proposed model (RDG) on the prediction task. RDG_{k,l} corresponds to the variant with losses (Δ_{dec_k}, Δ_{Dy_l}).

We compare our approach with five baselines: Auto-Regressive (AR), a monovariate linear auto-regressive model. It computes its predictions based on a learned linear function of a fixed number p of past values of the series. The order p of the model is a hyperparameter of the model selected by a grid search. Feed Forward Neural Network (FFNN), representative of non-linear auto-regressive models of order p where the non-linear function is modeled as a feed-forward neural
network with one hidden layer of size s. In this case, p and s are hyperparameters selected by grid search. RNN, a recurrent neural network with one hidden layer of size s of recurrent units and tanh non-linearities. The RNN model is a state space non-linear auto-regressive model with exogenous inputs (the past values of the series). Note that this model should in principle be able to learn the inter-series dependencies, but the dependencies are not modeled explicitly as they are in our model. Also the RNN does not introduce explicit modeling of uncertainties. KF (Kalman (1960)) is a classic Kalman Filter with linear transformations from one state to another. DFG (Mirowski & LeCun (2009)), a state of the art model that learns continuous deterministic latent variables by modeling the dynamics and the joint probabilities between series. All the hyperparameters of the baselines have been set using a validation set by grid search, including the best architectures for the dynamic model h when it is a multi-layer perceptron with one hidden layer or a linear model.

For the evaluation we have considered a root-mean-square error (RMSE) criterion. Regarding the experimental protocol, models are evaluated using cross-validation with rolling origin.

"}, {"section_index": "10", "section_name": "4.2 RESULTS", "section_text": "Let us first present the performance of our model w.r.t. the baselines for prediction at horizon 1 in Figure 1b. We have tested the four variants of our approach, i.e. combinations of Δ_{dec_1} or Δ_{dec_2} with Δ_{Dy_1} or Δ_{Dy_2}. The proposed model obtains the best results on all the datasets except GFT where KF performs better. Otherwise it outperforms the baselines on two datasets (GL-P, Grand Lyon Parks, and GFT, Google Flu Trends, in the table) and gets results similar to the RNN on the two others (GL-T, Grand Lyon Traffic, and WHO). The non-linear dynamical model used for Δ_{Dy_2} usually gets better results than the other models, the best combination being the use of the MSE expectation error for the decoder and the non-linear model for the dynamics (denoted RDG_{2,2} on the figure).

Figure 1a shows the prediction quality (RMSE) at (T+1), (T+2), (T+3), (T+4) and (T+5) and illustrates the ability of RDG to predict correctly at different horizons. Here again, the performance of RDG is very close to the performance of the Recurrent Neural Network. One can remark that KF does not go the distance: it performs well at (T+1) but quite badly at (T+5) in comparison to the other baselines.

RDG has the additional property of modeling the uncertainty associated to its predictions, which is not the case for a RNN. Let us consider the curves presented in Figure 2. They illustrate the predictions made by our model together with their associated variance computed through the Gaussian embeddings. First, one can see that the ground truth values are always within the confidence interval provided by our model, which means that RDG computes relevant minimum and maximum possible values.
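Concretely, the intervals plotted in Figure 2 can be produced as in the following sketch. We assume a linear decoder f(z) = ⟨θ, z⟩ and reuse the inference rule stated earlier (the latent mean is advanced by h while the variance is kept at Σ^{(T)}); all identifiers are ours, not the paper's.

import numpy as np

def forecast_with_confidence(mu_T, Sigma_T, theta, gamma, horizon):
    """Return (E[f(Z^(T+n))], sqrt(var(f(Z^(T+n))))) for n = 1..horizon."""
    out, mu = [], mu_T
    for _ in range(horizon):
        mu = gamma @ mu                          # latent mean at T+n: h^n(mu_T)
        mean = theta @ mu                        # E[f(Z)] for a linear decoder
        std = np.sqrt(theta @ Sigma_T @ theta)   # var(f(Z)) = theta^T Sigma theta
        out.append((mean, std))
    return out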
Another aspect is that the size of the interval increases with the prediction horizon, which is what is expected from such a model. The latter is then able to predict relevant confidence values for its predictions.

Model      GL-T     GL-P     GFT      WHO
AR         0.0752   0.0892   0.0626   0.0832
FFNN       0.0751   0.0894   0.045    0.0838
RNN        0.0709   0.0890   0.0431   0.0795
KF         0.0711   0.0833   0.0388   0.0799
DFG        0.0712   0.0911   0.0592   0.0795
RDG_{1,1}  0.0742   0.0902   0.0607   0.0848
RDG_{1,2}  0.0707   0.0834   0.0434   0.0796
RDG_{2,1}  0.0765   0.0896   0.0589   0.0831
RDG_{2,2}  0.0718   0.0828   0.0429   0.0795

Figure 2: Forecasts on GFT (two different time series of the dataset) with the RDG_{2,2} model showing its range of confidence: E(f(Z^{(t)})) ± var(f(Z^{(t)})). Prediction at 25+n corresponds to f(h^n(Z^{(25)})).

Comparison between RDG with/without structural regularization or uncertainty. We compare in Table 1 the results between our model when taking into account the neighborhood graph (λ_R ≠ 0) or not (λ_R = 0): forecasts are uniformly worse for all datasets when we do not take into account the neighborhood graph, which suggests that the regularizer improves the model when the input graph is relevant. Furthermore, we give the results obtained without uncertainty, which corresponds to the model described in (Ziat et al. (2016)) (denoted Rainstorm): here again, our model outperforms the previous one for all the datasets.

Model          GL-T     GL-P     GFT      WHO
Rainstorm      0.0710   0.0886   0.0440   0.0804
RDG (λ_R = 0)  0.0719   0.0900   0.0441   0.0807
RDG            0.0707   0.0828   0.0388   0.0795

Table 1: RMSE at T+1 on the four datasets.

"}, {"section_index": "11", "section_name": "CONCLUSION AND FUTURE WORK", "section_text": "We have proposed a model for relational time series forecasting. Our model (RDG) is based on latent Gaussian embeddings, and has shown competitive performance on four different datasets compared to state-of-the-art models. Moreover, RDG allows us to model the uncertainty of predictions, providing for example confidence intervals for each prediction. Future work will investigate more complex dynamic and prediction functions, as well as observing the behavior of the model for imputation tasks."}]
BJ6oOfqge
[{"section_index": "0", "section_name": "TEMPORAL ENSEMBLING FOR SEMI-SUPERVISED LEARNING", "section_text": "Table 5: The network architecture used in all of our tests\nSamuli Laine\nslaine@nvidia.com\nIn this paper, we present a simple and efficient method for training deep neural. networks in a semi-supervised setting where only a small portion of training data. is labeled. We introduce self-ensembling, where we form a consensus prediction. of the unknown labels using the outputs of the network-in-training on different. epochs, and most importantly, under different regularization and input augmenta-. tion conditions. This ensemble prediction can be expected to be a better predictor. for the unknown labels than the output of the network at the most recent training. epoch, and can thus be used as a target for training. Using our method, we set. new records for two standard semi-supervised learning benchmarks, reducing the. (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with. 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels. and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using ran. dom images from the Tiny Images dataset as unlabeled extra inputs during train-. Ing. Finally, we demonstrate good tolerance to incorrect labels..\ncould be an interesting avenue for future work to incorporate a generative component to our solution We also envision that our methods could be applied to regression-type learning tasks.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "We thank the anonymous reviewers, Tero Karras, Pekka Janis, Tim Salimans, Ian Goodfellow, as well as Harri Valpola and his colleagues at Curious AI for valuable suggestions that helped to im prove this article.\nIt has long been known that an ensemble of multiple neural networks generally yields better pre-. dictions than a single network in the ensemble. This effect has also been indirectly exploited when. training a single network through dropout (Srivastava et al., 2014), dropconnect (Wan et al., 2013),. or stochastic depth (Huang et al., 2016) regularization methods, and in swapout networks (Singh. et al., 2016), where training always focuses on a particular subset of the network, and thus the com-. plete network can be seen as an implicit ensemble of such trained sub-networks. We extend this idea by forming ensemble predictions during training, using the outputs of a single network on different. training epochs and under different regularization and input augmentation conditions. Our train-. ing still operates on a single network, but the predictions made on different epochs correspond to an. ensemble prediction of a large number of individual sub-networks because of dropout regularization.."}, {"section_index": "2", "section_name": "REFERENCES", "section_text": "Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems 27 (NIPS). 2014.\nLeo Breiman. Bagging predictors. Machine Learning. 24(2). 1996\nThis ensemble prediction can be exploited for semi-supervised learning where only a small portion. of training data is labeled. If we compare the ensemble prediction to the current output of the net- work being trained, the ensemble prediction is likely to be closer to the correct, unknown labels of the unlabeled inputs. 
Therefore the labels inferred this way can be used as training targets for the unlabeled inputs. Our method relies heavily on dropout regularization and versatile input augmen- tation. Indeed, without neither, there would be much less reason to place confidence in whatever labels are inferred for the unlabeled training data..\nBenjamin Graham. Fractional max-pooling. CoRR, abs/1412.6071, 2014\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassin human-level performance on imagenet classification. CoRR, abs/1502.01852, 2015.\nWe describe two ways to implement self-ensembling, I-model and temporal ensembling. Both ap proaches surpass prior state-of-the-art results in semi-supervised learning by a considerable margin We furthermore observe that self-ensembling improves the classification accuracy in fully labeled cases as well, and provides tolerance against incorrect labels.\nGao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks witl stochastic depth. CoRR, abs/1603.09382, 2016.\nThe recently introduced transform/stability loss of Sajjadi et al. (2016b) is based on the same prin ciple as our work, and the I-model can be seen as a special case of it. The I-model can also be seen as a simplification of the T-model of the ladder network by Rasmus et al. (2015), a previously presented network architecture for semi-supervised learning. Our temporal ensembling method has connections to the bootstrapping method of Reed et al. (2014) targeted for training with noisy labels\nNAME DESCRIPTION input 32 x 32 RGB image noise Additive Gaussian noise = 0.15 conv1a 128 filters, 3 3, pad = 'same', LReLU (a = 0.1) conv1b 128 filters, 3 3, pad = 'same', LReLU (a = 0.1) conv1c 128 filters, 3 3, pad = 'same', LReLU (a = 0.1) pool1 Maxpool 2 2 pixels drop1 Dropout, p = 0.5 conv2a 256 filters, 3 3, pad = 'same', LReLU (a = 0.1) conv2b 256 filters, 3 3, pad = 'same', LReLU ( = 0.1) conv2c 256 filters, 3 3, pad = 'same', LReLU (a = 0.1) pool2 Maxpool 2 2 pixels drop2 Dropout, p = 0.5 conv3a 512 filters, 3 3, pad = 'valid', LReLU (a = 0.1) conv3b 256 filters, 1 1, LReLU (a = 0.1) conv3c 128 filters, 1 1, LReLU (a = 0.1) pool3 Global average pool (6 6 -> 1 1 pixels) dense Fully connected 128 -> 10 output Softmax\ntaila@nvidia.com"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Sander Dieleman, Jan Schluter, Colin Raffel, Eben Olson, Soren Kaae Sonderby, et al. Lasagne. First release., 2015.\nYarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. CoRR, abs/1506.02142, 2016.\n-model w(t) Yi cross- Zi entropy stochastic network weighted >loss augmentation with dropout sum squared Zi difference Temporal ensembling w(t) Yi cross- Zi entropy stochastic network weighted Xi >loss augmentation with dropout sum squared difference >Zi\nLars Maalge, Casper Kaae Sonderby, Soren Kaae Sonderby, and Ole Winther. Auxiliary deep gen erative models. CoRR, abs/1602.05473, 2016.\nFigure 1: Structure of the training pass in our methods. Top: II-model. Bottom: temporal en sembling. Labels yi are available only for the labeled inputs, and the associated cross-entropy loss component is evaluated only for those\nTakeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributiona smoothing with virtual adversarial training. In Proc. International Conference on Learning Rep. resentations (1CLR), 2016.\nAlgorithm 1 11-model pseudocode\nAugustus Odena. 
Semi-supervised learning with generative adversarial networks. Data Efficie Machine Learning workshop at ICML 2016. 2016.\nScott E. Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and An drew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. CoRR. abs/1412.6596, 2014.\nWe present two implementations of self-ensembling during training. The first one, I-model, en courages consistent network output between two realizations of the same input stimulus, under twc different dropout conditions. The second method, temporal ensembling, simplifies and extends this by taking into account the network predictions over multiple previous training epochs.\nTim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization tc accelerate training of deep neura1 networks. CoRR, abs/1602.07868, 2016\nTim Salimans. Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen Improved techniques for training GANs. CoRR, abs/1606.03498, 2016.\nWe shall describe our methods in the context of traditional image classification networks. Let the. training data consist of total of N inputs, out of which M are labeled. The input stimuli, available for all training data, are denoted x, where i E {1... N}. Let set L contain the indices of the labeled. inputs, [L] = M. For every i E L, we have a known correct label yi E {1...C}, where C is the number of different classes.\nSaurabh Singh, Derek Hoiem, and David A. Forsyth. Swapout: Learning an ensemble of deep architectures. CoRR, abs/1605.06465, 2016."}, {"section_index": "4", "section_name": "2.1 I-MODEL", "section_text": "The structure of I-model is shown in Figure 1 (top), and the pseudocode in Algorithm 1. During training, we evaluate the network for each training input x; twice, resulting in prediction vectors z and zi. Our loss function consists of two components. The first component is the standard cross entropy loss, evaluated for labeled inputs only. The second component, evaluated for all inputs penalizes different predictions for the same training input x; by taking the mean square difference\nJost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. Striving for simplicity: The all convolutional net. CoRR. abs/1412.6806. 2014\nGiorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, and Lizhen Qu. Making neural networks robust to label noise: a loss correction approach. CoRR, abs/1609.03683, 2016\nAntti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi supervised learning with ladder networks. In Advances in Neural Information Processing Systems 28 (NIPS). 2015.\nSainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. CoRR, abs/1406.2080, 2014.\nIt is important to notice that, because of dropout regularization, the network output during training is a stochastic variable. Thus two evaluations of the same input x; under same network weights yield different results. In addition, Gaussian noise and augmentations such as random translation are evaluated twice, resulting in additional variation. The combination of these effects explains the difference between the prediction vectors z; and zy. This difference can be seen as an error in classification, given that the original input x; was the same, and thus minimizing it is a reasonable goal.\nIn our implementation, the unsupervised loss weighting function w(t) ramps up, starting from zero. 
along a Gaussian curve during the first 80 training epochs. See Appendix A for further details about. this and other training parameters. In the beginning the total loss and the learning gradients are thus. dominated by the supervised loss component, i.e., the labeled data only. We have found it to be very important that the ramp-up of the unsupervised loss component is slow enough-otherwise the network gets easily stuck in a degenerate solution where no meaningful classification of the data. is obtained.\nOur approach is somewhat similar to the T-model of the ladder network by Rasmus et al. (2015), but conceptually simpler. In the II-model, the comparison is done directly on network outputs, i.e., afte. softmax activation, and there is no auxiliary mapping between the two branches such as the learned denoising functions in the ladder network architecture. Furthermore, instead of having one \"clean' and one \"corrupted\"' branch as in T-model, we apply equal augmentation and noise to the inputs for both branches.\nXiaojin Zhu. Semi-supervised learning literature survey. Technical Report 1530, Computer Sci ences, University of Wisconsin-Madison, 2005.\nAs shown in Section 3. the I-model combined with a good convolutional network architecture provides a significant improvement over prior art in classification accuracy..\nXiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propa gation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002\nAnalyzing how the II-model works, we could equally well split the evaluation of the two branches i two separate phases: first classifying the training set once without updating the weights 0, and the training the network on the same inputs under different augmentations and dropout, using the jus. obtained predictions as targets for the unsupervised loss component. As the training targets obtaine this way are based on a single evaluation of the network, they can be expected to be noisy. Tempora. ensembling alleviates this by aggregating the predictions of multiple previous network evaluation. into an ensemble prediction. It also lets us evaluate the network only once during training, gaining. an approximate 2x speedup over the I-model..\nTable 5 details the network architecture used in all of our tests. It is heavily inspired by ConvPool CNN-C (Springenberg et al., 2014) and the improvements made by Salimans & Kingma (2016). Al data layers were initialized following He et al. (2015), and we applied weight normalization anc mean-only batch normalization (Salimans & Kingma, 2016) with momentum 0.999 to all of them We used leaky ReLU (Maas et al., 2013) with a = 0.1 as the non-linearity, and chose to use max pooling instead of strided convolutions because it gave consistently better results in our experiments\nAll networks were trained using Adam (Kingma & Ba, 2014) with a maximum learning rate of Amax = 0.003, except for temporal ensembling in the SVHN case where a maximum learning rate of Amax = 0.001 worked better. Adam momentum parameters were set to 1 = 0.9 and 2 = 0.999 as suggested in the paper. The maximum value for the unsupervised loss component was set to Wmax . M/N, where M is the number of labeled inputs and N is the total number of training inputs. For II-model runs, we used wmax = 100 in all runs except for CIFAR-100 with Tiny Images where we set wmax = 300. For temporal ensembling we used wmax = 30 in most runs. 
For the corrupted label test in Section 3.5 we used wmax = 300 for 0% and 20% corruption, and wmax = 3000 for corruption of 50% and higher. For basic CIFAR-100 runs we used wmax = 100, and for CIFAR-100 with Tiny Images we used wmax = 1000. The accumulation decay constant of temporal ensembling was set to α = 0.6 in all runs.

The structure of our temporal ensembling method is shown in Figure 1 (bottom), and the pseudocode in Algorithm 2. The main difference to the Π-model is that the network and augmentations are evaluated only once per input per epoch, and the target vectors for the unsupervised loss component are based on prior network evaluations instead of a second evaluation of the network.

After every training epoch, the network outputs z_i are accumulated into ensemble outputs Z_i by updating Z_i ← αZ_i + (1 − α)z_i, where α is a momentum term that controls how far the ensemble reaches into training history. Because of dropout regularization and stochastic augmentation, Z thus contains a weighted average of the outputs of an ensemble of networks f from previous training epochs, with recent epochs having larger weight than distant epochs. For generating the training targets z̃, we need to correct for the startup bias in Z by dividing by factor (1 − α^t). A similar bias correction has been used in, e.g., Adam (Kingma & Ba, 2014) and mean-only batch normalization (Salimans & Kingma, 2016). On the first training epoch, Z and z̃ are zero as no data from previous epochs is available. For this reason, we specify the unsupervised weight ramp-up function w(t) to also be zero on the first training epoch.

In all runs we ramped up both the learning rate λ and unsupervised loss component weight w during the first 80 epochs using a Gaussian ramp-up curve exp[−5(1 − T)²], where T advances linearly from zero to one during the ramp-up period. In addition to ramp-up, we annealed the learning rate λ to zero and Adam β₁ to 0.5 during the last 50 epochs, but otherwise we did not decay them during training. The ramp-down curve was similar to the ramp-up curve but time-reversed and with a scaling constant of 12.5 instead of 5. All networks were trained for 300 epochs with minibatch size of 100.

¹Squared difference gave slightly but consistently better results than cross-entropy loss in our tests.

between the prediction vectors z_i and z̃_i.¹ To combine the supervised and unsupervised loss terms, we scale the latter by the time-dependent weighting function w(t). By comparing the entire output vectors z_i and z̃_i, we effectively ask the "dark knowledge" (Hinton et al., 2015) between the two evaluations to be close, which is a much stronger requirement compared to asking that only the final classification remains the same, which is what happens in traditional training.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. CoRR, abs/1605.02688, May 2016.

CIFAR-10 Following previous work in fully supervised learning, we pre-processed the images using ZCA and augmented the dataset using horizontal flips and random translations. The translations were drawn from [−2, 2] pixels, and were independently applied to both branches in the Π-model.

Algorithm 2 Temporal ensembling pseudocode. Note that the updates of Z and z̃ could equally well be done inside the minibatch loop; in this pseudocode they occur between epochs for clarity.

Require: x_i = training stimuli
Require: L = set of training input indices with known labels
Require: y_i = labels for labeled inputs i ∈ L
Require: α = ensembling momentum, 0 ≤ α < 1
Require: w(t) = unsupervised weight ramp-up function
Require: f_θ(x) = stochastic neural network with trainable parameters θ
Require: g(x) = stochastic input augmentation function
  Z ← 0_[N×C]  ▷ initialize ensemble predictions
  z̃ ← 0_[N×C]  ▷ initialize target vectors
  for t in [1, num_epochs] do
    for each minibatch B do
      z_{i∈B} ← f_θ(g(x_{i∈B}, t))  ▷ evaluate network outputs for augmented inputs
      loss ← −(1/|B|) Σ_{i∈(B∩L)} log z_i[y_i]  ▷ supervised loss component
             + w(t) (1/(C|B|)) Σ_{i∈B} ‖z_i − z̃_i‖²  ▷ unsupervised loss component
      update θ using, e.g., ADAM  ▷ update network parameters
    end for
    Z ← αZ + (1 − α)z  ▷ accumulate ensemble predictions
    z̃ ← Z/(1 − α^t)  ▷ construct target vectors by bias correction
  end for
  return θ
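The per-epoch ensemble bookkeeping at the end of Algorithm 2 is compact enough to state directly in code; a minimal sketch with hypothetical names (the authors' actual implementation is the Theano/Lasagne code referenced in the appendix):

import numpy as np

def update_ensemble_targets(Z, z_epoch, t, alpha=0.6):
    """One epoch of temporal ensembling: accumulate this epoch's network
    outputs z_epoch (N x C) into the ensemble Z, then return the
    bias-corrected training targets z_tilde (epoch index t starts at 1)."""
    Z = alpha * Z + (1.0 - alpha) * z_epoch   # exponential moving average
    z_tilde = Z / (1.0 - alpha ** t)          # startup bias correction
    return Z, z_tilde

Note that on the first epoch, with Z initialized to zero, the bias correction makes z_tilde equal to that epoch's raw predictions, consistent with w(t) being zero at t = 1.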
SVHN We pre-processed the input images by biasing and scaling each input image to zero mean and unit variance. We used only the 73257 items in the official training set, i.e., we did not use the provided 531131 extra items. The training setups were otherwise similar to CIFAR-10 except that horizontal flips were not used.

Model convergence As discussed in Section 2.1, a slow ramp-up of the unsupervised cost is very important for getting the models to converge. Furthermore, in our very preliminary tests with 250 labels in SVHN we noticed that optimization tended to explode during the ramp-up period, and we eventually found that using a lower value for the Adam β₂ parameter (e.g., 0.99 instead of 0.999) seems to help in this regard.

We do not attempt to guarantee that the occurrence of labeled inputs during training would be somehow stratified; with bad luck there might be several consecutive minibatches without any labeled inputs when the label density is very low. Some previous work has identified this as a weakness, and has solved the issue by shuffling the input sequences in such a way that stratification is guaranteed, e.g., Rasmus et al. (2015) (confirmed from the authors). This kind of stratification might further improve the convergence of our methods as well.

Tiny Images, extra data from restricted categories The restricted extra data in Section 3.3 was extracted from Tiny Images by picking all images with labels corresponding to the 100 categories used in CIFAR-100. As the Tiny Images dataset does not contain the CIFAR-100 categories aquarium_fish and maple_tree, we used images with labels fish and maple instead. The result was a total of 237203 images that were used as unlabeled extra data. Table 6 shows the composition of this extra data set.

It is worth noting that the CIFAR-100 dataset itself is a subset of Tiny Images, and we did not explicitly prevent overlap between this extra set and CIFAR-100. This led to approximately a third of the CIFAR-100 training and test images being present as unlabeled inputs in the extra set.
The other test with 500k extra entries picked randomly out of all 79 million images had a negligibl overlap with CIFAR-100.\nAn intriguing additional possibility of temporal ensembling is collecting other statistics from the network predictions z besides the mean. For example, by tracking the second raw moment of the network outputs, we can estimate the variance of each output component zi,j. This makes it possible to reason about the uncertainty of network outputs in a principled way (Gal & Ghahramani, 2016). Based on this information, we could, e.g., place more weight on more certain predictions vs. uncertain ones in the unsupervised loss term. However, we leave the exploration of these avenues as future work."}, {"section_index": "5", "section_name": "3 RESULTS", "section_text": "Our network structure is given in Table 5, and the test setup and all training parameters are detailed. in Appendix A. We test the I-model and temporal ensembling in two image classification tasks. CIFAR-10 and SVHN, and report the mean and standard deviation of 10 runs using different random seeds.\nAlthough it is rarely stated explicitly, we believe that our comparison methods do not use input aug mentation. i.e.. are limited to dropout and other forms of permutation-invariant noise. Therefore w report the error rates without augmentation, unless explicitly stated otherwise. Given that the abilit of an algorithm to extract benefit from augmentation is also an important property, we report th classification accuracy using a standard set of augmentations as well. In purely supervised trainin the de facto standard way of augmenting the CIFAR-10 dataset includes horizontal flips and randor translations, while SVHN is limited to random translations. By using these same augmentations w can compare against the best fully supervised results as well. After all, the fully supervised result should indicate the upper bound of obtainable accuracy.\nImplementation Our implementation is written in Python using Theano (Theano. Development Team, 20i6) and Lasagne (Dieleman et al., 2015), and is available at nttps://qithub.com/smlaine2/tempens.\nTable 1: CIFAR-10 results with 4000 labels, averages of 10 runs (4 runs for all labels)\nError rate (%) with # labels. 4000 All (50000) Supervised-only 35.56 1.59 7.33 0.04 with augmentation 34.85 1.65 6.05 0.15 Conv-Large, T-model (Rasmus et al., 2015) 20.40 0.47 CatGAN (Springenberg, 2016) 19.58 0.58 GAN of Salimans et al. (2016) 18.63 2.32 I-models 16.55 0.29 6.90 0.07 H-model with augmentation. 12.36 0.31 5.56 0.10 Temporal ensembling with augmentation 12.16 0.24 5.60 0.10\nTable 2: SVHN results for 500 and 1000 labels, aver. ages of 10 runs (4 runs for all labels)..\nTable 6: The Tiny Images (Torralba et al., 2008) labels and image counts used in the CIFAR-10 plus restricted extra data tests (rightmost column of Table 4). Note that the extra input images wer. supplied as unlabeled data for our networks, and the labels were used only for narrowing down th. full set of 79 million images.\nError rate (%) with # labels Model 500 1000 All (73257) Supervised-only 35.18 5.61 20.47 2.64 3.05 0.07 with augmentation 31.59 3.60 19.30 3.89 2.88 0.03 DGN (Kingma et al., 2014) 36.02 0.10 Virtual Adversarial (Miyato et al., 2016) 24.63 ADGM (Maalge et al., 2016) 22.86 SDGM (Maalge et al., 2016) 16.61 0.24 GAN of Salimans et al. 
(2016) 18.44 4.8 8.11 1.3 I-model 7.05 0.30 5.43 0.25 2.78 0.03 H-model with augmentation 6.65 0.53 4.82 0.17 2.54 0.04 Temporal ensembling with augmentation 5.12 0.13 4.42 0.16 2.74 0.06"}, {"section_index": "6", "section_name": "3.1 CIFAR-10", "section_text": "CIFAR-10 is a dataset consisting of 32 32 pixel RGB images from ten classes. Table 1 shows a 2.1 percentage point reduction in classification error rate with 4000 labels (400 per class) compared to earlier methods for the non-augmented II-model..\nEnabling the standard set of augmentations further reduces the error rate by 4.2 percentage points to 12.36%. Temporal ensembling is slightly better still at 12.16%, while being twice as fast tc train. This small improvement conceals the subtle fact that random horizontal flips need to be done independently for each epoch in temporal ensembling, while H-model can randomize once per a pair of evaluations, which according to our measurements is ~0.5 percentage points better thar independent flips.\nA principled comparison with Sajjadi et al. (2016b) is difficult due to several reasons. They provide results only for a fairly extreme set of augmentations (translations, flipping, rotations, stretching and shearing) on top of fractional max pooling (Graham, 2014), which introduces random, loca stretching inside the network, and is known to improve classification results substantially. They quote an error rate of only 13.60% for supervised-only training with 4000 labels, while our cor responding baseline is 34.85%. This gap indicates a huge benefit from versatile augmentations and fractional max pooling-in fact, their baseline result is already better than any previous semi- supervised results. By enabling semi-supervised learning they achieve a 17% drop in classification error rate (from 13.60% to 11.29%), while we see a much larger relative drop of 65% (from 34.85% to 12.16%).\nThe street view house numbers (SVHN) dataset consists of 32 32 pixel RGB images of real-world house numbers, and the task is to classify the centermost digit. 
In SVHN we chose to use only the\n6.90 0.07 5.56 0.10 5.60 0.10\nLabel # Label # Label # Label # apple 2242 baby 2771 bear 2242 beaver 2116 bed 2767 bee 2193 beetle 2173 bicycle 2599 bottle 2212 bowl 2707 boy 2234 bridge 2274 bus 3068 butterfly 3036 camel 2121 can 2461 castle 3094 caterpillar 2382 cattle 2089 chair 2552 chimpanzee 1706 clock 2375 cloud 2390 cockroach 2318 couch 2171 crab 2735 crocodile 2712 cup 2287 dinosaur 2045 dolphin 2504 elephant 2794 fish* 3082 flatfish 1504 forest 2244 fox 2684 girl 2204 hamster 2294 house 2320 kangaroo 2563 keyboard 1948 lamp 2242 lawn_mower 1929 leopard 2139 lion 3045 lizard 2130 lobster 2136 man 2248 maple* 2149 motorcycle 2168 mountain 2249 2128 mushroom 2390 mouse oak_tree 1995 orange 2650 orchid 1902 otter 2073 palm_tree 2107 pear 2120 pickup_truck 2478 pine_tree 2341 plain 2198 plate 3109 poppy 2730 porcupine 1900 possum 2008 rabbit 2408 raccoon 2587 ray 2564 road 2862 rocket 2180 rose 2237 sea 2122 seal 2159 shark 2157 shrew 1826 skunk 2450 skyscraper 2298 snail 2369 snake 2989 spider 3024 squirrel 2374 streetcar 1905 sunflower 2761 sweet_pepper 1983 table 3137 tank 1897 telephone 1889 television 2973 tiger 2603 tractor 1848 train 3020 trout 2726 tulip 2160 turtle 2438 wardrobe 2029 whale 2597 willow_tree 2040 wolf 2423 woman 2446 worm 2945\n18.44 4.8\n18.44 4.8\n7.05 0.30 6.65 0.53 5.12 + 0.13\nTable 3: CIFAR-100 results with 10000 labels, aver ges of 10 runs (4 runs for all labels).\nError rate (%) with # labels 10000 All (50000) Supervised-only 51.21 0.33 29.14 0.25 with augmentation 44.56 0.30 26.42 0.17 II-model 43.43 0.54 29.06 0.21 H-model with augmentation 39.19 0.36 26.32 0.04 Temporal ensembling with augmentation 38.65 0.51 26.30 0.15\nTable 4: CIFAR-100 + Tiny Images results. averages of 10 runs\nError rate (%) with # unlabeled auxiliary inputs from Tiny Images Random 500k Restricted 237k 1-model with augmentation. 25.79 0.17 25.43 0.32 Temporal ensembling with augmentation 23.62 0.23 23.79 0.24\nofficial 73257 training examples following Salimans et al. (2016). Even with this choice our erro rate with all labels is only 3.05% without augmentation..\nWe also investigated the behavior with 500 labels, where we obtained an error rate less than hali. of Salimans et al. (2016) without augmentations, with a significantly lower standard deviation as well. When augmentations were enabled, temporal ensembling further reduced the error rate to. 5.12%. In this test the difference between H-model and temporal ensembling was quite significan. at 1.5 percentage points.\nIn SVHN Sajjadi et al. (2016b) provide results without augmentation, with the caveat that they use fractional max pooling, which is a very augmentation-like technique due to the random, local stretching it introduces inside the network. It leads to a superb error rate of 2.28% in supervised- only training, while our corresponding baseline is 3.05% (or 2.88% with translations). Given that in a separate experiment our network matched the best published result for non-augmented SVHN when extra data is used (1.69% from Lee et al. (2015)), this gap is quite surprising, and leads us to conclude that fractional max pooling leads to a powerful augmentation of the dataset, well beyond what simple translations can achieve. Our temporal ensembling technique obtains better error rates for both 500 and 1000 labels (5.12% and 4.42%, respectively) compared to the 6.03% reported by Sajjadi et al. 
for 732 labels."}, {"section_index": "7", "section_name": "3.3 CIFAR-100 AND TINY IMAGES", "section_text": "The CIFAR-100 dataset consists of 32×32 pixel RGB images from a hundred classes. We are not aware of previous semi-supervised results in this dataset, and chose 10000 labels for our experiments. Table 3 shows error rates of 43.43% and 38.65% without and with augmentation, respectively. These correspond to 7.8 and 5.9 percentage point improvements compared to supervised learning with labeled inputs only.

We ran two additional tests using unlabeled extra data from the Tiny Images dataset (Torralba et al., 2008): one with randomly selected 500k extra images, most not corresponding to any of the CIFAR-100 categories, and another with a restricted set of 237k images from the categories that correspond to those found in the CIFAR-100 dataset (see Appendix A for details). The results are shown in Table 4. The addition of randomly selected, unlabeled extra images improved the error rate by 2.7 percentage points (from 26.30% to 23.63%), indicating a desirable ability to learn from random natural images. Temporal ensembling benefited much more from the extra data than the Π-model. Interestingly, restricting the extra data to categories that are present in CIFAR-100 did not improve the classification accuracy further. This indicates that in order to train a better classifier by adding extra data as unlabeled inputs, it is enough to have the extra data roughly in the same space as the actual inputs; in our case, natural images. We hypothesize that it may even be possible to use properly crafted synthetic data as unlabeled inputs to obtain improved classifiers.

Table 2 compares our method to the previous state-of-the-art. With the most commonly used 1000 labels we observe an improvement of 2.7 percentage points, from 8.11% to 5.43% without augmentation, and further to 4.42% with standard augmentations.

In order to keep the training times tolerable, we limited the number of unlabeled inputs to 50k per epoch in these tests, i.e., on every epoch we trained using all 50k labeled inputs from CIFAR-100 and 50k additional unlabeled inputs from Tiny Images. The 50k unlabeled inputs were chosen randomly on each epoch from the 500k or 237k extra inputs. In temporal ensembling, after each epoch we updated only the rows of Z that corresponded to inputs used on that epoch.

When all labels are used for traditional supervised training, our network approximately matches the state-of-the-art error rate for a single model in CIFAR-10 with augmentation (Lee et al., 2015; Mishkin & Matas, 2016) at 6.05%, and without augmentation (Salimans & Kingma, 2016) at 7.33%. The same is probably true for SVHN as well, but there the best published results rely on extra data that we chose not to use.

[Figure 2 appears here: classification accuracy (%) as a function of training epoch (1 to 300) under 0%, 20%, 50%, 80% and 90% label corruption; left panel: standard supervised training, right panel: temporal ensembling.]

"}, {"section_index": "8", "section_name": "3.5 TOLERANCE TO INCORRECT LABELS", "section_text": "In a further test we studied the hypothesis that our methods add tolerance to incorrect labels by assigning a random label to a certain percentage of the training set before starting to train. Figure 2
shows the classification error graphs for standard supervised training and temporal ensembling.\nClearly our methods provide considerable resistance to wrong labels, and we believe this is because. the unsupervised loss term encourages the mapping function implemented by the network to be. flat in the vicinity of all input data points, whereas the supervised loss term enforces the mapping. function to have a specific value in the vicinity of the labeled input data points. This means that. even the wrongly labeled inputs play a role in shaping the mapping function-the unsupervised. loss term smooths the mapping function and thus also the decision boundaries, effectively fusing. the inputs into coherent clusters, whereas the excess of correct labels in each class is sufficient fol locking the clusters to the right output vectors through the supervised loss term. The difference to. classical regularizers is that we induce smoothness only on the manifold of likely inputs instead.\nFigure 2: Percentage of correct SVHN classifications as a function of training epoch when a part of the labels is randomized. With standard supervised training (left) the classification accuracy suffers when even a small portion of the labels give disinformation, and the situation worsens quickly as the portion of randomized labels increases to 50% or more. On the other hand, temporal ensembling (right) shows almost perfect resistance to disinformation when half of the labels are random, and retains over ninety percent classification accuracy even when 80% of the labels are random.\nGiven this premise, it is perhaps somewhat surprising that our methods reduce the error rate also when all labels are used (Tables 1 and 2). We believe that this is an indication that the consis- tency requirement adds a degree of resistance to ambiguous labels that are fairly common in many classification tasks, and that it encourages features to be more invariant to stochastic sampling.\nof over the entire input domain. For further analysis about the importance of the gradient of th mapping function, see Simard et al. (1998)."}, {"section_index": "9", "section_name": "4 RELATED WORK", "section_text": "There is a large body of previous work on semi-supervised learning (Zhu, 2005). In here we wil concentrate on the ones that are most directly connected to our work\n-model is a subset of a ladder network (Rasmus et al., 2015) that introduces lateral connections int an encoder-decoder type network architecture, targeted at semi-supervised learning. In T-model, al but the highest lateral connections in the ladder network are removed, and after pruning the un necessary stages, the remaining network consists of two parallel, identical branches. One of th branches takes the original training inputs, whereas the other branch is given the same input cor rupted with noise. The unsupervised loss term is computed as the squared difference between th (pre-activation) output of the clean branch and a denoised (pre-activation) output of the corrupte branch. The denoised estimate is computed from the output of the corrupted branch using a para metric nonlinearity that has 10 auxiliary trainable parameters per unit. Our I-model differs fron the I-model in removing the parametric nonlinearity and denoising, having two corrupted paths and comparing the outputs of the network instead of pre-activation data of the final layer.\nSajjadi et al. (2016b) recently introduced a new loss function for semi-supervised learning, so callec. 
transform/stability loss, which is founded on the same principle as our work. During training, they. run augmentation and network evaluation n times for each minibatch, and then compute an unsu. pervised loss term as the sum of all pairwise squared distances between the obtained n networl. outputs. As such, their technique follows the general pseudo-ensemble agreement (PEA) regular. ization framework of Bachman et al. (2014). In addition, they employ a mutual exclusivity loss. term (Sajjadi et al., 2016a) that we do not use. Our I-model can be seen as a special case of the. transform/stability loss obtained by setting n = 2. The computational cost of training with trans. form/stability loss increases linearly as a function of n, whereas the efficiency of our tempora ensembling technique remains constant regardless of how large effective ensemble we obtain via the. averaging of previous epochs' predictions.\nIn bootstrap aggregating, or bagging, multiple networks are trained independently based on subsets. of training data (Breiman, 1996). This results in an ensemble that is more stable and accurate than the individual networks. Our approach can be seen as pulling the predictions from an implicit ensemble that is based on a single network, and the variability is a result of evaluating it under different dropout and augmentation conditions instead of training on different subsets of data. In. work parallel to ours, Huang et al. (2017) store multiple snapshots of the network during training.. hopefully corresponding to different local minima, and use them as an explicit ensemble..\nThe general technique of inferring new labels from partially labeled data is often referred to as boot. strapping or self-training, and it was first proposed by Yarowsky (1995) in the context of linguistic. analysis. Whitney & Sarkar (2012) analyze Yarowsky's algorithm and propose a novel graph-based. label propagation approach. Similarly, label propagation methods (Zhu & Ghahramani, 2002) infer. labels for unlabeled training data by comparing the associated inputs to labeled training inputs using. a suitable distance metric. Our approach differs from this in two important ways. Firstly, we never. compare training inputs against each other, but instead only rely on the unknown labels remaining. constant, and secondly, we let the network produce the likely classifications for the unlabeled inputs instead of providing them through an outside process..\nIn addition to partially labeled data, considerable amount of effort has been put into dealing with densely but inaccurately labeled data. This can be seen as a semi-supervised learning task where part of the training process is to identify the labels that are not to be trusted. For recent work in this area. see, e.g., Sukhbaatar et al. (2014) and Patrini et al. (2016). In this context of noisy labels, Reed et al (2014) presented a simple bootstrapping method that trains a classifier with the target composed of a convex combination of the previous epoch output and the known but potentially noisy labels. Our temporal ensembling differs from this by taking into account the evaluations over multiple epochs.\nGenerative Adversarial Networks (GAN) have been recently used for semi-supervised learning with promising results (Maalge et al., 2016; Springenberg, 2016; Odena, 2016; Salimans et al., 2016). I"}]
BJuysoFeg
[{"section_index": "0", "section_name": "REVISITING BATCH NORMALIZATION FOI PRACTICAL DOMAIN ADAPTATION", "section_text": "(a) GF1 image (b) GF2 image (c) Tianhui image\nLiu', Xiaodi Hou.\nyttonhao@pku.edu.cn winsty@gmail.com shijianping5000@gmail.con liujiaying@pku.edu.cn xiaodi.hou@gmail.com\nDeep neural networks (DNN) have shown unprecedented success in various com- puter vision applications such as image classification and object detection. How- ever, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent study (Tommasi et al., 2015) shows that a DNN has strong depen dency towards the training dataset, and the learned features cannot be easily trans. ferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrary to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It archives state- of-the-art performance despite its surprising simplicity. Furthermore, we demon- strate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance.\nFigure 5: Remote sensing images in different domains\n(a) Original image. (b) Without AdaBN (c) AdaBN (a) Original image. (b) Without AdaBN (c) AdaBN"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Training a DNN for a new image recognition task is expensive. It requires a large amount of labeled. training images that are not easy to obtain. One common practice is to use labeled data from other. related source such as a different public dataset, or harvesting images by keywords from a search. engine. Because 1) the distributions of the source domains (third party datasets or Internet images) are often different from the target domain (testing images); and 2) DNN is particularly good at. capturing dataset bias in its internal representation (Torralba & Efros, 2011), which eventually leads to overfitting. imperfectly paired training and testing sets usually leads to inferior performance..\nIn this paper, we propose a simple yet effective approach called AdaBN for batch normalized DNI domain adaptation. We hypothesize that the label related knowledge is stored in the weight matri of each layer, whereas domain related knowledge is represented by the statistics of the Batch Nor malization (BN) (Ioffe & Szegedy, 2015) layer. Therefore, we can easily transfer the trained mode to a new domain by modulating the statistics in the BN layer. This approach is straightforward tc implement, has zero parameter to tune, and requires minimal computational resources. Moreover our AdaBN is ready to be extended to more sophisticated scenarios such as multi-source domai adaptation and semi-supervised settings. Fig. 1 illustrates the flowchart of AdaBN. To summarize our contributions are as follows:\nFigure 6: Visual cloud detection results on GF1 dataset. 
White pixels in (b) and (c) represent the detected cloud regions."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Known as domain adaptation, the effort to bridge the gap between training and testing data distributions has been discussed several times in the context of deep learning (Tzeng et al., 2014; Long et al., 2015; Tzeng et al., 2015; Ganin & Lempitsky, 2015). To make the connection between the domain of training and the domain of testing, most of these methods require additional optimization steps and extra parameters. Such an additional computational burden could greatly complicate the training of a DNN, which is already intimidating enough for most people."}, {"section_index": "3", "section_name": "CONCLUSION AND FUTURE WORKS", "section_text": "[Figure 1 schematic: Input, Conv/FC, BatchNorm, Activation and Output blocks, with separate normalization statistics for the training and testing domains]

In this paper, we have introduced a simple yet effective approach for domain adaptation on batch-normalized neural networks. Besides its original uses, we have exploited another functionality of the Batch Normalization (BN) layer: domain adaptation. The main idea is to replace the statistics of each BN layer in the source domain with those in the target domain. The proposed method is easy to implement and parameter-free, and it takes almost no effort to extend to multiple source domains and semi-supervised settings. Our method established new state-of-the-art results on both single and multiple source(s) domain adaptation settings on standard benchmarks. Finally, the experiments on cloud detection for large-size remote sensing images further demonstrate the effectiveness of our method in practical use. We believe our method opens up a new direction for domain adaptation.

In contrast to other methods that use Maximum Mean Discrepancy (MMD) or domain confusion loss to update the weights in a CNN for domain adaptation, our method only modifies the statistics of the BN layer. Therefore, our method is fully complementary to other existing deep learning based methods. It is interesting to see how these different methods can be unified under one framework.

Figure 1: Illustration of the proposed method. For each convolutional or fully connected layer, we use different bias/variance terms to perform batch normalization for the training domain and the test domain. The domain specific normalization mitigates the domain shift issue."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Rahaf Aljundi, Remi Emonet, Damien Muselet, and Marc Sebban. Landmarks-based kernelized subspace alignment for unsupervised domain adaptation. In CVPR, 2015.

Mahsa Baktashmotlagh, Mehrtash Harandi, Brian Lovell, and Mathieu Salzmann. Unsupervised domain adaptation by domain invariant projection. In ICCV, pp. 769-776, 2013.

Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. arXiv preprint arXiv:1606.00915, 2016a."}, {"section_index": "5", "section_name": "2 RELATED WORK", "section_text": "Domain transfer in visual recognition tasks has gained increasing attention in recent literature (Beijbom, 2012; Patel et al., 2015). Often referred to as covariate shift (Shimodaira, 2000) or dataset bias (Torralba & Efros, 2011), this problem poses a great challenge to the generalization ability of a learned model. 
One key component of domain transfer is to model the difference between source and target distributions. In Khosla et al. (2012), the authors assign each dataset an explicit bias vector, and train one discriminative model to handle multiple classification problems with different bias terms. A more explicit way to compute dataset difference is based on Maximum Mean Discrepancy (MMD) (Gretton et al., 2012). This approach projects each data sample into a Reproducing Kernel Hilbert Space, and then computes the difference of sample means. To reduce dataset discrepancies, many methods have been proposed, including sample selection (Huang et al., 2006; Gong et al., 2013), explicit projection learning (Pan et al., 2011; Gopalan et al., 2011; Baktashmotlagh et al., 2013) and principal axes alignment (Fernando et al., 2013; Gong et al., 2012; Aljundi et al., 2015).

E Knuth Donald. The art of computer programming. Sorting and searching, 3:426-458, 1999.

All of these methods face the same challenge of constructing the domain transfer function, a high-dimensional non-linear function. Due to computational constraints, most of the proposed transfer functions fall into the category of simple shallow projections, which are typically composed of kernel transformations and linear mapping functions.

Basura Fernando, Amaury Habrard, Marc Sebban, and Tinne Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In ICCV, pp. 2960-2967, 2013.

Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, pp. 1180-1189, 2015.

In the field of deep learning, feature transferability across different domains is a tantalizing yet generally unsolved topic (Yosinski et al., 2014; Tommasi et al., 2015). To transfer the learned representations to a new dataset, pre-training plus fine-tuning (Donahue et al., 2014) has become the de facto procedure. However, adaptation by fine-tuning is far from perfect. It requires a considerable amount of labeled data from the target domain, and non-negligible computational resources to re-train the whole network.

Muhammad Ghifary, W Bastiaan Kleijn, and Mengjie Zhang. Domain adaptive neural networks for object recognition. In PRICAI: Trends in Artificial Intelligence, pp. 898-904, 2014.

1. We propose a novel domain adaptation technique called Adaptive Batch Normalization (AdaBN). We show that AdaBN can naturally dissociate the bias and variance of a dataset, which is ideal for domain adaptation tasks.
2. We validate the effectiveness of our approach on standard benchmarks for both single-source and multi-source domain adaptation. Our method outperforms the state-of-the-art methods.
3. We conduct experiments on cloud detection for remote sensing images to further demonstrate the effectiveness of our approach in practical use.

Alessandro Bergamo and Lorenzo Torresani. Exploiting weakly-labeled web images to improve object classification: a domain adaptation approach. In NIPS, pp. 181-189, 2010.

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. NIPS Workshop on Machine Learning Systems, 2016b.

A series of progress has been made in DNNs to facilitate domain transfer. Early works of domain adaptation either focus on reordering fine-tuning samples (Chopra et al., 2013), or on regularizing MMD (Gretton et al., 2012) in a shallow network (Ghifary et al., 2014). 
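Since several of the methods above reduce to comparing mean embeddings in an RKHS, a small illustration may help. The following NumPy sketch computes the (biased) squared MMD estimate with an RBF kernel; it is a generic illustration of the MMD quantity, not tied to the specific implementation of any method cited here.

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X (n,d) and Y (m,d):
    the RKHS distance between the two sample mean embeddings."""
    def k(A, B):
        # Pairwise squared distances, then a Gaussian (RBF) kernel.
        d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

A small MMD value indicates the two samples are hard to distinguish through the chosen kernel, which is exactly the criterion these adaptation methods try to minimize.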
It is only recently that the problem has been directly attacked under the setting of classification of an unlabeled target domain using modern convolutional neural network (CNN) architectures. DDC (Tzeng et al., 2014) used the classical MMD loss to regularize the representation in the last layer of a CNN. DAN (Long et al., 2015) further extended the method to multiple-kernel MMD and multiple-layer adaptation. Besides adapting features using MMD, RTN (Long et al., 2016) also added a gated residual layer for classifier adaptation. RevGrad (Ganin & Lempitsky, 2015) devised a gradient reversal layer to compensate for the back-propagated gradients that are domain specific. Recently, by explicitly modeling both private and shared components of the domain representations in the network, Bousmalis et al. (2016) proposed a Domain Separation Network to extract better domain-invariant features.

Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, pp. 999-1006, 2011.

Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. 2007.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, 2016.

Another related work is CORAL (Sun et al., 2016). This model focuses on the last layer of a CNN. CORAL whitens the data in the source domain, and then re-correlates the source domain features to the target domain. This operation aligns the second-order statistics of the source domain and target domain distributions. Surprisingly, such a simple approach yields state-of-the-art results in various text classification and visual recognition tasks. Recently, Deep CORAL (Sun & Saenko, 2016) also extended the method into DNNs by incorporating a CORAL loss.

Jiayuan Huang, Arthur Gretton, Karsten M Borgwardt, Bernhard Scholkopf, and Alex J Smola. Correcting sample selection bias by unlabeled data. In NIPS, pp. 601-608, 2006.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448-456, 2015.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM MM, pp. 675-678, 2014."}, {"section_index": "6", "section_name": "2.1 BATCH NORMALIZATION", "section_text": "In this section, we briefly review Batch Normalization (BN) (Ioffe & Szegedy, 2015), which is closely related to our AdaBN. The BN layer is originally designed to alleviate the issue of internal covariate shift, a common problem while training a very deep neural network. It first standardizes each feature in a mini-batch, and then learns a common slope and bias for each mini-batch. Formally, given the input to a BN layer $X \in \mathbb{R}^{n \times p}$, where $n$ denotes the batch size and $p$ is the feature dimension, the BN layer transforms a feature $j \in \{1 \ldots p\}$ into:

Aditya Khosla, Tinghui Zhou, Tomasz Malisiewicz, Alexei A Efros, and Antonio Torralba. Undoing the damage of dataset bias. In ECCV, pp. 158-171, 2012.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.

Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In ICML, pp. 97-105, 2015.

$y_j = \gamma_j \hat{x}_j + \beta_j,$

where $x_j$ and $y_j$ are the input/output scalars of one neuron response in one data sample; $X_{\cdot j}$ denotes the $j$th column of the input data; and $\gamma_j$ and $\beta_j$ are parameters to be learned. This transformation guarantees that the input distribution of each layer remains unchanged across different mini-batches. For Stochastic Gradient Descent (SGD) optimization, a stable input distribution can greatly facilitate model convergence, leading to much faster training speed for a CNN. Moreover, if the training data are shuffled at each epoch, the same training sample will be applied with different transformations, or in other words, more comprehensively augmented throughout the training. During the testing phase, the global statistics of all training samples are used to normalize every mini-batch of test data.

Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199-210, 2011.

Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53-69, 2015.

Extensive experiments have shown that Batch Normalization significantly reduces the number of iterations to converge, and improves the final performance at the same time. The BN layer has become a standard component in recent top-performing CNN architectures, such as deep residual networks (He et al., 2016) and Inception V3 (Szegedy et al., 2015).

Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227-244, 2000.

Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. arXiv preprint arXiv:1607.01719, 2016.

Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. AAAI, 2016."}, {"section_index": "7", "section_name": "3.1 A PILOT EXPERIMENT", "section_text": "

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.

The Batch Normalization (BN) technique was originally proposed to help SGD optimization by aligning the distribution of the training data. From this perspective, it is interesting to examine the BN parameters (batch-wise mean and variance) over different datasets at different layers of the network.

Tatiana Tommasi, Novi Patricia, Barbara Caputo, and Tinne Tuytelaars. A deeper look at dataset bias. German Conference on Pattern Recognition, 2015.

$\hat{x}_j = \frac{x_j - E[X_{\cdot j}]}{\sqrt{\mathrm{Var}[X_{\cdot j}]}}$

In Sec. 3.1, we first analyze the domain shift in a deep neural network and reveal two key observations. Then in Sec. 3.2, we introduce our Adaptive Batch Normalization (AdaBN) method based on these observations.

In this pilot experiment, we use the MXNet implementation (Chen et al., 2016b) of the Inception-BN model (Ioffe & Szegedy, 2015) pre-trained on the ImageNet classification task (Russakovsky et al., 2015) as our baseline DNN model. Our image data are drawn from (Bergamo & Torresani, 2010), which contains the same classes of images from both the Caltech-256 dataset (Griffin et al., 2007) and Bing image search results. For each mini-batch sampled from one dataset, we concatenate the mean and variance of all neurons from one layer to form a feature vector. Using a linear SVM, we can almost perfectly classify whether the mini-batch feature vector is from the Caltech-256 or the Bing dataset. Fig. 2 visualizes the distributions of mini-batch feature vectors from the two datasets in 2D. It is clear that BN statistics from different domains are separated into clusters.
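As an illustration of this pilot setup, the sketch below builds the mini-batch BN feature vectors and checks how separable the two domains are with a linear SVM. The helpers `layer_activations`, `caltech_batches` and `bing_batches` are hypothetical names for this example; the code mirrors the described experiment rather than reproducing the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

def bn_feature(acts):
    # acts: (batch, neurons) responses at one layer for one mini-batch.
    # The batch-wise BN statistics: per-neuron mean and variance.
    return np.concatenate([acts.mean(axis=0), acts.var(axis=0)])

# One feature vector per mini-batch, labeled by its source dataset.
feats = [bn_feature(layer_activations(b)) for b in caltech_batches] + \
        [bn_feature(layer_activations(b)) for b in bing_batches]
labels = [0] * len(caltech_batches) + [1] * len(bing_batches)

clf = LinearSVC().fit(np.stack(feats), labels)
print("domain separability:", clf.score(np.stack(feats), labels))
```

A near-perfect score, as reported in the text, indicates that the BN statistics alone carry enough information to identify the domain.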
Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In CVPR, pp. 1521-1528, 2011.

Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014.

Eric Tzeng, Judy Hoffman, Trevor Darrell, and Kate Saenko. Simultaneous deep transfer across domains and tasks. In ICCV, pp. 4068-4076, 2015.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(2579-2605):85, 2008.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NIPS, pp. 3320-3328, 2014.

Figure 2: t-SNE (Van der Maaten & Hinton, 2008) visualization of the mini-batch BN feature vector distributions in both shallow and deep layers, across different datasets. Each point represents the BN statistics in one mini-batch. Red dots come from the Bing domain, while the blue ones are from the Caltech-256 domain. The size of each mini-batch is 64.

Both observations motivate us to adapt the representation across different domains by the BN layer.

Given the pre-trained DNN model and a target domain, our Adaptive Batch Normalization algorithm is as follows¹:

Algorithm 1 Adaptive Batch Normalization (AdaBN)

The intuition behind our method is straightforward: the standardization of each layer by domain ensures that each layer receives data from a similar distribution, no matter whether it comes from the source

¹In practice we adopt an online algorithm (Donald, 1999) to efficiently estimate the mean and variance.

(a) Shallow layer distributions (b) Deep layer distributions

1. Both shallow layers and deep layers of the DNN are influenced by domain shift. Domain adaptation by manipulating the output layer alone is not enough.
2. The statistics of the BN layer contain the traits of the data domain.

Compute BN output $y_j(m) := \gamma_j \frac{x_j(m) - \mu_j^t}{\sigma_j^t} + \beta_j$, where $\mu_j^t$ and $\sigma_j^t$ denote the mean and standard deviation of neuron $j$'s responses computed over the samples of the target domain $t$.

For K-domain adaptation where K > 2, we standardize each sample by the statistics of its own domain. During training, the statistics are calculated for every mini-batch; the only thing we need to make sure of is that the samples in every mini-batch come from the same domain. For (semi-)supervised domain adaptation, we may use the labeled data to fine-tune the weights as well. As a result, our method can fit all the different settings of domain adaptation with minimal effort.

Compared with CORAL (Sun et al., 2016), one natural question is why we transform the neuron responses independently, rather than decorrelating and then re-correlating the responses together as suggested in Sun et al. (2016). Under certain conditions, decorrelation could improve the performance. However, in a CNN the mini-batch size is usually smaller than the feature dimension, so the covariance matrix is always singular and hard to invert. In addition, decorrelation requires computing the inverse of the covariance matrix, which is computationally intensive, especially if we plan to apply AdaBN to all layers of the network.
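The following is a minimal PyTorch-style sketch of this adaptation step for a pre-trained model whose normalization layers are `nn.BatchNorm2d` modules, with `target_loader` a placeholder for a loader over unlabeled target-domain images. It re-estimates the BN running statistics on the target domain in the spirit of Algorithm 1, leaving all weights and the learned gamma/beta untouched; it is an illustration under these assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adabn(model, target_loader, device="cuda"):
    # Reset BN running statistics, then re-estimate them on the target
    # domain; weights and the learned gamma/beta stay unchanged.
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()
            m.momentum = None  # None = cumulative moving average
    model.train()  # BN only updates running stats in train mode
    for x in target_loader:  # assumes the loader yields image batches
        model(x.to(device))
    model.eval()   # inference now uses target-domain statistics
    return model
```

Because only the per-channel means and variances change, the adaptation is a translation-and-scaling of each neuron's input, yet composed through the whole network it realizes a highly non-linear domain transfer, as argued above.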
"}, {"section_index": "8", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we demonstrate the effectiveness of AdaBN on standard domain adaptation datasets, and empirically analyze our AdaBN model. We also evaluate our method on a practical application with remote sensing images."}, {"section_index": "9", "section_name": "4.1 EXPERIMENTAL SETTINGS", "section_text": "We first introduce our experiments on two standard datasets: Office (Saenko et al., 2010) and Caltech-Bing (Bergamo & Torresani, 2010).

Office (Saenko et al., 2010) is a standard benchmark for domain adaptation, which is a collection of 4652 images in 31 classes from three different domains: Amazon (A), DSLR (D) and Webcam (W). Similar to (Tzeng et al., 2014; Sun et al., 2016; Long et al., 2015), we evaluate the pairwise domain adaptation performance of AdaBN on all six pairs of domains. For the multi-source setting, we evaluate our method on three transfer tasks {A, W} -> D, {A, D} -> W, {D, W} -> A.

Caltech-Bing (Bergamo & Torresani, 2010) is a much larger domain adaptation dataset, which contains 30,607 and 121,730 images in 256 categories from two domains, Caltech-256 (C) and Bing (B). The images in the Bing set are collected from the Bing image search engine by keyword search. Apparently, the Bing data contain noise, and their distribution is dramatically different from that of Caltech-256.

We compare our approach with a variety of methods, including four shallow methods: SA (Fernando et al., 2013), LSSA (Aljundi et al., 2015), GFK (Gong et al., 2012), CORAL (Sun et al., 2016), and four deep methods: DDC (Tzeng et al., 2014), DAN (Long et al., 2015), RevGrad (Ganin & Lempitsky, 2015), Deep CORAL (Sun & Saenko, 2016). Specifically, GFK models domain shift by integrating an infinite number of subspaces that characterize changes in statistical properties from the source to the target domain. SA, LSSA and CORAL align the source and target subspaces by explicit feature space transformations that map the source distribution into the target one. DDC and DAN are deep learning based methods which maximize domain invariance by adding to AlexNet one or several adaptation layers using MMD. RevGrad incorporates a gradient reversal layer in the deep model to encourage learning domain-invariant features. Deep CORAL extends CORAL to perform end-to-end adaptation in a DNN. It should be noted that these deep learning methods place the adaptation layers on top of the output layers of DNNs, which is a sharp contrast to our method, which delves into early convolution layers as well with the help of BN layers.

We follow the full protocol (Donahue et al., 2014) for the single source setting, while for the multiple sources setting, we use all the samples in the source domains as training data, and all the samples in the target domain as testing data. We fine-tune the Inception-BN (Ioffe & Szegedy, 2015) model on the source domain in each task for 100 epochs. The learning rate is set to 0.01 initially, and is then dropped by a factor of 0.1 every 40 epochs. Since the Office dataset is quite small, following the

domain or the target domain. Although modulating the statistics in one BN layer by AdaBN is a simple translation and scaling operation, such a linear transformation in one layer can achieve a highly non-linear transformation through the whole deep CNN architecture. Thus, we believe this AdaBN process can approximate the intrinsically non-linear domain transfer function.

Table 1: Single source domain adaptation results on the Office-31 (Saenko et al., 2010) dataset with the standard unsupervised adaptation protocol.

best practice in Long et al. 
(2015), we freeze the first three groups of Inception modules, and set the learning rate of the fourth and fifth groups to one tenth of the base learning rate to avoid overfitting. For the Caltech-Bing dataset, we fine-tune the whole model with the same base learning rate."}, {"section_index": "10", "section_name": "4.2.1 OFFICE DATASET", "section_text": "Our results on the Office dataset are reported in Table 1 and Table 2 for single/multi source(s), respectively. Note that the first 5 models of Table 1 are pre-trained on AlexNet (Krizhevsky et al., 2012) instead of the Inception-BN (Ioffe & Szegedy, 2015) model, due to the lack of a publicly available pre-trained Inception-BN model in Caffe (Jia et al., 2014). Thus, the relative improvements over the baseline (AlexNet/Inception-BN) make more sense than the absolute numbers of each algorithm.

From Table 1, we first notice that Inception-BN indeed improves over AlexNet on average, which means that the CNN pre-trained on ImageNet has learned general features, and the improvements on ImageNet can be transferred to new tasks. Among the methods based on Inception-BN features, our method improves the most over the baseline. Moreover, since our method is complementary to other methods, we can simply apply CORAL on top of AdaBN. Not surprisingly, this simple combination exhibits a 0.5% increase in performance. This preliminary test reveals further potential of AdaBN if combined with other advanced domain adaptation methods. Finally, we improve 1.7% over the baseline, and advance the state-of-the-art results for this dataset.

None of the compared methods has reported performance on multi-source domain adaptation. To demonstrate the capacity of AdaBN under multi-domain settings, we compare it against CORAL, which is the best performing algorithm in the single source setting. The result is reported in Table 2. We find that simply combining two domains does not lead to better performance. The result is generally worse compared to the best performing single domain of the two. This phenomenon suggests that if we cannot properly cope with domain bias, the increase of training samples may adversely affect the testing performance. This result confirms the necessity of domain adaptation. In this more challenging setting, AdaBN still outperforms the baseline and CORAL on average. Again, when combined with CORAL, our method demonstrates further improvements. 
At last, our method achieves a 2.3% gain over the baseline.

Table 2: Multi-source domain adaptation results on the Office-31 (Saenko et al., 2010) dataset with the standard unsupervised adaptation protocol.

Method | A->W | D->W | W->D | A->D | D->A | W->A | Avg
AlexNet (Krizhevsky et al., 2012) | 61.6 | 95.4 | 99.0 | 63.8 | 51.1 | 49.8 | 70.1
DDC (Tzeng et al., 2014) | 61.8 | 95.0 | 98.5 | 64.4 | 52.1 | 52.2 | 70.6
DAN (Long et al., 2015) | 68.5 | 96.0 | 99.0 | 67.0 | 54.0 | 53.1 | 72.9
Deep CORAL (Sun & Saenko, 2016) | 66.4 | 95.7 | 99.2 | 66.8 | 52.8 | 51.5 | 72.1
RevGrad (Ganin & Lempitsky, 2015) | 73.0 | 96.4 | 99.2 | - | - | - | -
Inception BN (Ioffe & Szegedy, 2015) | 70.3 | 94.3 | 100 | 70.5 | 60.1 | 57.9 | 75.5
SA (Fernando et al., 2013) | 69.8 | 95.5 | 99.0 | 71.3 | 59.4 | 56.9 | 75.3
GFK (Gong et al., 2012) | 66.7 | 97.0 | 99.4 | 70.1 | 58.0 | 56.9 | 74.7
LSSA (Aljundi et al., 2015) | 67.7 | 96.1 | 98.4 | 71.3 | 57.8 | 57.8 | 74.9
CORAL (Sun et al., 2016) | 70.9 | 95.7 | 99.8 | 71.9 | 59.0 | 60.2 | 76.3
AdaBN | 74.2 | 95.7 | 99.8 | 73.1 | 59.8 | 57.4 | 76.7
AdaBN + CORAL | 75.4 | 96.2 | 99.6 | 72.7 | 59.0 | 60.5 | 77.2"}, {"section_index": "11", "section_name": "4.2.2 CALTECH-BING DATASET", "section_text": "To further evaluate our method on a large-scale dataset, we show our results on the Caltech-Bing dataset in Table 3. Compared with CORAL, AdaBN achieves better performance, improving 1.8% over the baseline. Note that all the domain adaptation methods show minor improvements over the baseline in the task C -> B. One hypothesis for this relatively small improvement is that the images in the Bing dataset are collected from the Internet, and are more diverse and noisier (Bergamo & Torresani, 2010). Thus, it is not easy to adapt to the Bing dataset from the relatively clean dataset Caltech-256. Combining CORAL with our method does not offer further improvements. This might be explained by the noise of the Bing dataset and the imbalance of the number of images in the two domains.

Table 3: Single source domain adaptation results on the Caltech-Bing (Bergamo & Torresani, 2010) dataset.

In this section, we investigate the influence of the number of samples in the target domain on the performance, and empirically analyze the adaptation effect of different BN layers."}, {"section_index": "12", "section_name": "4.3.1 SENSITIVITY TO TARGET DOMAIN SIZE", "section_text": "Since the key of our method is to calculate the mean and variance of the target domain on different BN layers, it is very natural to ask how many target images are necessary to obtain stable statistics. In this experiment, we randomly select a subset of images in the target domain to calculate the statistics, and then evaluate the performance on the whole target set. Fig. 3 illustrates the effect of using different numbers of batches. The results demonstrate that our method can obtain good results when using only a small part of the target examples. It should also be noted that in the extreme case of one batch of target images, our method still achieves better results than the baseline. This is valuable in practical use since a large number of target images are often not available.

[Figure 3 plots: accuracy vs. number of mini-batches used for the statistics, comparing Adapt BN against Inception BN; panels (a) A->W and (b) B->C]

Figure 3: Accuracy when varying the number of mini-batches used for calculating the statistics of BN layers in A -> W and B -> C, respectively. For B -> C, we only show the results of using fewer than 100 batches, since the results are very stable when adding more examples. The batch size is 64 in this experiment. 
For even smaller numbers of examples, the performance may not be consistent and may drop behind the baseline (e.g., 0.652 with 16 samples, 0.661 with 32 samples).

Method | C->B | B->C | Avg
Inception BN (Ioffe & Szegedy, 2015) | 35.1 | 64.6 | 49.9
CORAL (Sun et al., 2016) | 35.3 | 67.2 | 51.3
AdaBN | 35.2 | 68.1 | 51.7
AdaBN + CORAL | 35.0 | 67.5 | 51.2

[Figure 4 plot: accuracy vs. adapted BN blocks (x-axis: 0, 1, 2, 3a, 3b, 4a, 4b, 4c, 5a, 5b), comparing Adapted BN against Inception BN]

Figure 4: Accuracy when adapting with different BN blocks in B -> C. x = 0 corresponds to the result with the non-adapt method, and 1, 2, 3a, 3b, 4a, 4b, 4c, 5a, 5b correspond to the nine different blocks in the Inception-BN network.

In this experiment, we analyze the effect of adapting different BN layers with our AdaBN method. According to the structure of the Inception-BN network (Ioffe & Szegedy, 2015), we categorize the BN layers into 9 blocks: 1, 2, 3a, 3b, 4a, 4b, 4c, 5a, 5b. Since the later BN layers are influenced by the outputs of previous BN layers, when adapting a specific block we adapt all the blocks before it. Fig. 4 illustrates the adaptation effect for different BN layers. It shows that adapting BN layers consistently improves the results over the baseline method in most cases. Specifically, when incorporating more BN layers into the adaptation, we achieve better transfer results."}, {"section_index": "13", "section_name": "4.4 PRACTICAL APPLICATION FOR CLOUD DETECTION IN REMOTE SENSING IMAGES", "section_text": "In this section, we further demonstrate the effectiveness of AdaBN on a practical problem: cloud detection in remote sensing images. Since remote sensing images are taken by different satellites with different sensors and resolutions, the captured images are visually different in texture, color, and value range distributions, as shown in Fig. 5. How to adapt a model trained on one satellite to another satellite's images is naturally a domain adaptation problem.

Our task here is to identify cloud in remote sensing images, which can be regarded as a semantic segmentation task. The experiment is conducted on a self-collected dataset, which includes three image sets, from the GF2, GF1 and Tianhui satellites. The three image sets contain 635, 324 and 113 images with resolution over 6000x6000 pixels, respectively. We name the three different datasets after the satellite names. The GF2 dataset is used as the training dataset, while the GF1 and Tianhui datasets are for testing. We use a state-of-the-art semantic segmentation method (Chen et al., 2016a) as our baseline model.

Table 4: Domain adaptation results (mIOU) on the GF1 and Tianhui datasets, training on the GF2 dataset.

The results on the GF1 and Tianhui datasets are shown in Table 4. The relatively low results of the baseline method indicate that there exists a large distribution disparity among images from different satellites. Thus, the significant improvement after applying AdaBN reveals the effectiveness of our method. Some visual results are shown in Fig. 6. Since other domain adaptation methods require either additional optimization steps and extra components (e.g., MMD) or post-processing distribution alignment (like CORAL), it is very hard to apply these methods from image classification to this large-size (6000x6000) segmentation problem. Comparatively, besides its effective performance, our method needs no extra parameters and very few computations over the whole adaptation process."}]
SJJN38cge
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Arash Shahriari\nResearch School of Engineering, Australian National University. Commonwealth Scientific and Industrial Research Organisation\nAlex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009\nTransfer learning is a popular practice in deep neural networks, but fine-tuning. of a large number of parameters is a hard challenge due to the complex wiring. of neurons between splitting layers and imbalance class distributions of original. and transferred domains. Recent advances in evidence theory show that in an imbalance multiclass learning problem, optimizing of proper objective functions. based on contingency tables prevents biases towards high-prior classes. Transfer learning usually deals with highly non-convex objectives and local minima in deep neural architectures. We propose a novel distributed transfer learning to tackle both optimization complexity and class-imbalance problem jointly. Our solution imposes separated greedy regularization to each individual convolutional filter to. make single-filter neural networks such that the minority classes perform as the majority ones. Then, basic probability assignment from evidence theory boosts these distributed networks to improve the recognition performance on the target. domains. Our experiments on several standard datasets confirm the consistent improvement as a result of our distributed transfer learning strategy..\nMingsheng Long, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. arXiv preprint arXiv:1605.06636, 2016\nYuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.\nMaxime Oquab, Leon Bottou, Ivan Laptev, and Josef Sivic. Learning and transferring mid-level im age representations using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1717-1724. 2014\nKari Sentz and Scott Ferson. Combination of evidence in Dempster-Shafer theory, volume 4015 Citeseer, 2002."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In supervised learning, many classification algorithms assume the same distribution for training and testing data. Consequently, change of distribution requires rebuilding of the statistical models which is not always practical because of the hardship of recollecting of training data or heavy learning process. One of the solutions is transfer learning that transfers the classification knowledge into a new domain Pan & Yang(2010). This aims at learning of highly-generalized models with differ- ent probability distributions across domains to learn novel domains without labeled data |Wang & Schneider(2014)Zhang et al.[(2013). Here, the main challenge is to reduce the shifts in data dis- tribution between domains by algorithms that minimize the discriminant of the domains. It is worth mentioning that this could not get rid of domain-specific variations Long et al.(2016).\nXuezhi Wang and Jeff Schneider. Flexible transfer learning under support and model shift. Ir Advances in Neural Information Processing Systems. pp. 1898-1906. 2014.\nTransfer learning for deep neural networks has been proved highly beneficial to boost their overall. performance. Deep learning practices usually require huge amount of labeled data to learn powerful. models. 
The transfer learning enables adaptation to a different source with small training samples On the other hand, deep neural networks practically learn intermediate features. They could provide. better transfer among domains because some of them generalize well among various domains of knowledge [Glorot et al.(2011). These transferable features generally underlies several probability. distributions Oquab et al.(2014) which reduce the cross-domain discrepancyYosinski et al.(2014)\nJason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in dee neural networks? In Advances in neural information processing systems, pp. 3320-3328, 2014.\nKun Zhang, Bernhard Scholkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In 1CML (3), pp. 819-827, 2013\nThe common observation among several deep architectures is that features learned in bottom layers are not that specific, but transiting towards top layers makes them tailored to a dataset or task. A recent study Yosinski et al.(2014) of the generality or specificity of deep layers for the sake of transfer learning reveals two difficulties which may affect the transfer of deep features. First, top layers get quite specialized to their original tasks and second, some optimization difficulties rise due to the splitting of the network between co-adapted layers. In spite of these negative effects, it"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied tc document recognition. Proceedings of the IEEE. 86(11):2278-2324. 1998\nis shown that transferred features not only perform better than random ones but also provide bette initialization. This gives a boost to the generalization of deep neural networks as well.\nIn this paper, we propose a framework for distributed transfer learning in deep convolutional net-. works. This tries to alleviate the burden of splitting networks in the middle of fragile co-adaptec layers. The intuition is that above difficulty relates to the complexity of deep architectures and also. class-imbalance in the transferred domain.\nOn the matter of network complexity, we argue that the splitting of layers leads to a hard optimization problem because of high complexity in the interconnections between neurons of co-adapted layers It seems that transfer learning is not able to thoroughly reconstruct the original powerful wiring for the transferred domain. This is due to the size of network and large number of interconnections across neurons. To address this issue, we fine-tune the convolutional filters separately and hence, reduce the complexity of the non-convex optimization.\nOn the other hand, it seems that the class-imbalance problem rises form different distribution of. data in original and transferred domains. This issue can be handled by cost-sensitive imbalanced classifications methods. By class-imbalance in transferred domain, we mean variable coverage of common classes in this domain and the ones from the original domain. It is probable that both original and transferred datasets have uniform distributions of data among their classes, but some classes in one domain may be fully or partly covered by the other domain. This results in imbalance. class distribution in the transfer learning..\nThe determination of a probabilistic distribution from the confusion matrix is highly effective to. 
produce a probability assignment which contributes to class-imbalance problems. This basic probability assignment can be either constructed from recognition, substitution and rejection rates Xu et al. (1992) or from both precision and recall rates of each class Deng et al. (2016). The key point is harvesting the maximum possible prior knowledge provided by the confusion matrix to overcome the imbalanced classification challenge.

Since the power of deep convolutional models comes from the mutual optimization of all parameters, we join the above distributed fine-tuned filters by a boosting scheme based on basic probability assignment. Our experiments confirm the functionality of our distributed strategy for deep transfer learning. The rest of the paper is organized as follows. We present the formulation of our method in Section 2, report our experiments in Section 3 and conclude in Section 4."}, {"section_index": "3", "section_name": "2 FORMULATION", "section_text": "In general, a confusion matrix represents the class-based predictions against actual labels in the form of a square matrix. Inspired by Dempster-Shafer theory, the construction of a basic probability assignment (BPA) Sentz & Ferson (2002) gives a vector which is independent of the number of class samples and sums up to one over the individual labels. This basic probability assignment provides the ability to reflect the different contributions of a classifier to each individual class, or to combine the outcomes of multiple weak classifiers."}, {"section_index": "4", "section_name": "2.1 BASIC PROBABILITY ASSIGNMENT", "section_text": "A raw two-dimensional confusion matrix indexed by predicted classes and actual labels provides some common measures of classification performance. They are accuracy (the proportion of the total number of predictions that were correct), precision (a measure of the accuracy provided that a specific class has been predicted), recall (a measure of the ability of a prediction model to select instances of a certain class from a dataset) and F-score (the harmonic mean of precision and recall) Sammut & Webb (2011).

Suppose a set of train/validation samples $\mathcal{X} = \{X_1, \ldots, X_{|\mathcal{X}|}\}$ from $\mathcal{C} = \{C_1, \ldots, C_{|\mathcal{C}|}\}$ different classes is assigned to a label set $\mathcal{L} = \{L_1, \ldots, L_{|\mathcal{L}|}\}$ by a classifier $\phi$ such that $|\mathcal{C}| = |\mathcal{L}|$. If each element $n_{ij}$ of the confusion matrix $C(\phi)$ is considered as the number of samples belonging to class $C_i$ which are assigned to label $L_j$, then we can define the recall ($r_{ij}$) and precision ($p_{ij}$) ratios as follows Deng et al. (2016):

$r_{ij} = \frac{n_{ij}}{\sum_{j=1}^{|\mathcal{C}|} n_{ij}}, \qquad p_{ij} = \frac{n_{ij}}{\sum_{i=1}^{|\mathcal{C}|} n_{ij}}$

It can be seen that the recall ratio is summed over the actual labels (rows) whilst the precision ratio is accumulated over the predicted classes (columns) of the confusion matrix $C(\phi)$. Now, we are able to define recall and precision matrices as

$R(\phi) = \{r_{ij}\}, \quad P(\phi) = \{p_{ij}\} \quad \text{for } i \in [1 \ldots |\mathcal{C}|],\ j \in [1 \ldots |\mathcal{C}|].$

The basic probability assignments of these matrices contain recall and precision probability elements for each individual class $C_i$ such that

$m_{r_i} = \frac{r_{ii}}{\sum_{j=1}^{|\mathcal{C}|} r_{ij}}, \qquad m_{p_i} = \frac{p_{ii}}{\sum_{j=1}^{|\mathcal{C}|} p_{ij}}.$

These elements are synthesized to form the final probability assignments representing the recognition ability of classifier $\phi$ for each of the classes of set $\mathcal{C}$:

$m_i = m_{r_i} \oplus m_{p_i} = \frac{m_{r_i} \times m_{p_i}}{\sum_{i=1}^{|\mathcal{C}|} m_{r_i} \times m_{p_i}}.$

Here, the operator $\oplus$ is an orthogonal sum, which is applied by Dempster's rule of combination Sentz & Ferson (2002). The overall contribution of the classifier can be presented as a probability assignment vector

$BPA(\phi) = \{m_i\} \quad \text{for } i \in [1 \ldots |\mathcal{C}|].$
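To make the construction concrete, the NumPy sketch below computes $BPA(\phi)$ from a raw confusion matrix, following the equations as reconstructed above (the originals were garbled in extraction, so the exact denominators are an informed reconstruction). It assumes every row and column of the matrix is non-zero.

```python
import numpy as np

def bpa(confusion):
    # confusion[i, j]: number of samples of class i assigned to label j.
    C = np.asarray(confusion, dtype=float)
    r = C / C.sum(axis=1, keepdims=True)  # recall ratios (row-normalized)
    p = C / C.sum(axis=0, keepdims=True)  # precision ratios (column-normalized)
    # Per-class recall/precision probability elements m_r and m_p.
    m_r = np.diag(r) / r.sum(axis=1)
    m_p = np.diag(p) / p.sum(axis=1)
    # Dempster's rule of combination (orthogonal sum), normalized so that
    # the assignments sum to one over the classes.
    m = m_r * m_p
    return m / m.sum()
```

The resulting vector is independent of per-class sample counts, which is what lets it counteract biases towards high-prior classes.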
It is worth mentioning that $BPA(\phi)$ should be computed on the train/validation set, because we assume that the test set does not include actual labels. Besides, the combination of different classes under vertical or horizontal categories is a common practice in visual classification. The benefit lies in the fact that the bottom layers of deep convolutional architectures contribute more to detecting first and second order features, which are usually of specific directions (vertical vs horizontal), rather than detailed, distinguishing patterns of the objects. This leads to powerful hierarchical feature learning in the case that $|\mathcal{C}| < |\mathcal{L}|$. In contrast, some classes can be divided into various sub-categories although they all get the same initial labels, and hence $|\mathcal{C}| > |\mathcal{L}|$ holds, to take advantage of the top layers. In the above formulation, we do not merge or divide the original setup of the datasets under study ($|\mathcal{C}| = |\mathcal{L}|$), although it seems that our BPA-based approach is also able to boost the trained classifiers in each of the merge/divide scenarios.

[Figure 1 diagram: a conventional pipeline (Conv layers followed by a fine-tuned Softmax) contrasted with the distributed pipeline, in which each convolutional filter feeds its own Softmax classifier and the outputs are combined through BPA]

Figure 1: Conventional and Distributed Transfer Learning. The blue blocks (Conv) represent convolutional layers in the original domain, the red blocks (Softmax) show fine-tuned layers for the target domain, and the green block corresponds to the basic probability assignment (BPA), respectively."}, {"section_index": "5", "section_name": "2.2 DISTRIBUTED TRANSFER LEARNING", "section_text": "A general practice in transfer learning includes training an original deep neural network on a dataset and then fine-tuning the learned features for another dataset on a new target network Bengio et al. (2012). The generality of the selected features for both original and target domains is critical to the success of the transfer learning. For implementation, we train the original network and copy its bottom layers to form the target network. The top layers of the target network are initialized randomly and trained on the target dataset. We are able to employ backpropagation from top to bottom layers and fine-tune their parameters for the target task, or to freeze the copied originals and only update the top target layers. This can be decided by the size of the target dataset and the number of parameters in the original layers. Fine-tuning large networks on a small dataset leads to overfitting, but for a small network or a large dataset, performance will be improved Sermanet et al. (2013).

Suppose that $C_i$ is the predicted class for a test sample $T$ provided by classifier $\phi$. To revise the classification outcome by the BPA calculation, we multiply the test sample's unary potentials $U(T) = \{u_1, \ldots, u_{|\mathcal{C}|}\}$ (probabilities of belonging to each class) by an assignment vector $M(\phi) = \{1-m_1, \ldots, 1-m_{|\mathcal{C}|}\}$ (contributions of the classifier $\phi$ to each class) and pick the maximum index as the revised predicted label:

$C(T) = I\big(\arg\max\{u_1 \times (1-m_1),\ \ldots,\ u_{|\mathcal{C}|} \times (1-m_{|\mathcal{C}|})\}\big).$

Based on our formulation of the basic probability assignment (BPA) in Section 2.1, we are able to follow the above transfer learning procedure by learning a classifier (SVM or Softmax) and computing $BPA(\phi)$ using Algorithm 1. Here, the learning means fine-tuning on the target domain using the trained weights and biases of the original network. To implement this, we train the original fully connected layers on the features calculated by presenting the target's train set to the convolutional layers of the same original network. 
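The revision rule above is simple enough to state in a few lines; the following sketch is a direct reading of the reconstructed equation, with `u` and `m` standing for one sample's unary potentials and the classifier's BPA vector.

```python
import numpy as np

def revised_prediction(u, m):
    # u: unary potentials (class probabilities) for one test sample.
    # m: BPA vector of the classifier; both arrays have length |C|.
    return int(np.argmax(np.asarray(u) * (1.0 - np.asarray(m))))
```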
We deploy this procedure for each of the available convolutional filters separately and compute the BPA of each individual single-filter network on the train/validation sets. Then, we combine the unary potentials of all the fine-tuned classifiers by employing BPA weights to come up with a unified set of class probabilities. Figure 1 provides an overview of the conventional and distributed transfer learning processes.

This implies that if classifier $\phi$ performs well on class $C_i$ (high $m_i$), it is highly probable that $C(T)$ leans towards $C_i$. At the same time, other minority classes like $C_j$ (low $m_j$) have a chance to win if their unary potentials are high enough ($u_j > u_i$). In contrast, if $\phi$ does poor classification on class $C_i$ (low $m_i$), the possibility of updating $C(T)$ to another class ($C_j$) with an even worse unary potential ($u_j < u_i$) would be higher. Therefore, BPA proves quite successful in handling imbalanced data distributions among classes.

As described in Section 1, employing the probability assignment addresses the class-imbalance problem but does not reduce the complexity of optimization, because both forward learning and error backpropagation are applied to all the model parameters. To break this non-convex optimization, we introduce our distributed transfer learning strategy. For implementation, we replace the mutual learning of all the parameters with the learning of each individual convolutional filter in a separate classifier fed by the bottom original layer. It means that we train a set of weak single-filter classifiers $\mathcal{F} = \{\phi_1, \ldots, \phi_{|\mathcal{F}|}\}$, where $|\mathcal{F}|$ equals the number of convolutional filters in the deep neural architecture. We follow the recipe of the single classifier in Equation 5 but extend it to redefine

$BPA(\mathcal{F}) = \{m_{ij}\} \quad \text{for } i \in [1 \ldots |\mathcal{C}|],\ j \in [1 \ldots |\mathcal{F}|]$

such that $m_{ij}$ is the probability assignment of class $C_i$ to the weak single-filter classifier $\phi_j$. To come up with the class of the test sample $T$, we update Equation 6 as follows:

$C_{\mathcal{F}}(T) = I\Big(\arg\max\Big\{\sum_{j=1}^{|\mathcal{F}|} u_{1j} \times (1-m_{1j}),\ \ldots,\ \sum_{j=1}^{|\mathcal{F}|} u_{|\mathcal{C}|j} \times (1-m_{|\mathcal{C}|j})\Big\}\Big).$

Here, $u_{ij}$ is the unary potential of class $C_i$ determined by the weak single-filter classifier $\phi_j$. Building on the above formulations, we are able to distribute the transfer learning among convolutional filters and join them later to implement a better fine-tuning of the target deep convolutional network, according to Algorithm 2.
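As a concrete reading of this distributed decision rule (whose garbled equation was reconstructed from the surrounding definitions), the sketch below combines the per-filter unary potentials with the per-filter BPA weights; `U` and `M` are assumed to be available from the single-filter classifiers.

```python
import numpy as np

def distributed_prediction(U, M):
    # U[i, j]: unary potential of class i from single-filter classifier j.
    # M[i, j]: BPA of class i for classifier j (from its confusion matrix).
    scores = (np.asarray(U) * (1.0 - np.asarray(M))).sum(axis=1)
    return int(np.argmax(scores))  # BPA-weighted vote over all filters
```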
"}, {"section_index": "6", "section_name": "3 EXPERIMENTS", "section_text": "We conduct our experiments on the MNIST, CIFAR and Street View House Numbers (SVHN) datasets. The MNIST dataset LeCun et al. (1998) contains 60,000 training examples and 10,000 test samples, normalized to 20x20, centered by center of mass in 28x28, and sheared by horizontal shifting such that the principal axis is vertical. The foreground pixels were set to one and the background to zero. The CIFAR dataset Krizhevsky & Hinton (2009) includes two subsets. CIFAR-10 consists of 10 classes of objects with 6,000 images per class. The classes are airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. It was divided into 5,000 randomly selected images per class as the training set and the rest as testing samples. The second subset is called CIFAR-100, having 600 images in each of 100 classes. These classes also come in 20 super-classes of five classes each. The SVHN dataset Netzer et al. (2011) was extracted from a large number of Google Street View images by automated algorithms and the Amazon Mechanical Turk (AMT) framework. It consists of over 600,000 labeled characters in full numbers and MNIST-like cropped digits in 32x32. Three subsets are available, containing 73,257 digits for training, 26,032 for testing and 531,131 extra samples.

We consider two different scenarios to evaluate the performance of our distributed transfer learning algorithm. In the first experiment, we try to observe the performance of fine-tuning for pairs of datasets with close data distributions or numbers of classes. We select MNIST & SVHN and CIFAR-10 & CIFAR-100 as original-target domains and report the transfer learning results in the form of train-test errors. In the second experiment, we apply transfer learning to pairs of datasets with distant data/class setups, namely MNIST & CIFAR-10 and SVHN & CIFAR-100. In this experiment, we arrange the datasets to examine the effect of dissimilar distributions rather than overfitting.

Figure 2: Examples of MNIST, CIFAR and SVHN Datasets

Before moving forward to discuss the experiments, we report the baseline train-test errors for the datasets in Table 1. These results are produced by the deep learning library provided by the Oxford Visual Geometry Group Vedaldi & Fulkerson (2008).

Table 1: Baseline Performances of Deep Learning

Dataset | Train Error (%) | Test Error (%)
MNIST | 0.04 | 0.55
SVHN | 0.13 | 3.81
CIFAR-10 | 0.01 | 19.40
CIFAR-100 | 0.17 | 50.90

Table 2 shows the performance of conventional and distributed transfer learning for the first scenario. The first value before the dash corresponds to the training error (left) and the second one presents the testing error (right).

In this experiment, we target two pairs of datasets (original-target domains) which contain similar data and perform number/object recognition tasks. We report the results for both the conventional and our distributed transfer learning methods. By conventional Bengio et al. (2012), we mean training on the original dataset and fine-tuning on the target one. With distributed, we aim at training on the original dataset but employing the basic probability assignment for the transfer learning.

It can be seen that the results for the conventional transfer learning follow our argument on the size of the network and the number of model parameters Sermanet et al. (2013). Compared to Table 1, MNIST does a poor job of transferring to SVHN due to the overfitting of SVHN over the MNIST network. In contrast, SVHN performs quite well at transferring MNIST.

Table 2: Performance of Conventional and Distributed Transfer Learning for Experiment 1 (train error - test error, in %)

Conventional: MNIST -> SVHN: 0.01 - 29.57 | SVHN -> MNIST: 0.35 - 1.04
Distributed: MNIST -> SVHN: 0.24 - 5.18 | SVHN -> MNIST: 0.16 - 0.46
Conventional: CIFAR-10 -> CIFAR-100: 0.53 - 68.44 | CIFAR-100 -> CIFAR-10: 0.11 - 24.08
Distributed: CIFAR-10 -> CIFAR-100: 0.29 - 54.32 | CIFAR-100 -> CIFAR-10: 0.05 - 18.24

On the other hand, transferring SVHN from MNIST does not overfit when our distributed transfer learning is employed. In both settings of original-target domains, our distributed strategy outperforms the conventional transfer learning approach.

All in all, the performance of our distributed transfer learning (bold values) is better than the conventional scheme, and also outperforms the baseline deep learning practices.

The experiment on the CIFAR pair exposes more interesting results due to the fact that both datasets have the same number of samples but completely different distributions among the classes. In practice, CIFAR-100 includes all the classes of CIFAR-10, but CIFAR-10 does not have any clue about several classes of CIFAR-100. The conventional experiments show that CIFAR-10 transfers well to CIFAR-100, but the reverse transfer does not perform well, although the target network does not overfit."}, {"section_index": "7", "section_name": "3.2 EXPERIMENT 2", "section_text": "In Table 3, we report the results for both conventional and distributed transfer learning on the second scenario. Here, we pair datasets such that the similarity of their data distributions and numbers of classes is minimized, and they are originally trained for different tasks. It is obvious that our distributed transfer learning outperforms all the conventional results.

For the first setup, CIFAR-10 does better transfer learning than MNIST although the numbers of classes are the same. It seems that CIFAR-10 provides better generalization due to the higher diversity among its classes. 
Here, our distributed algorithm performs better than the conventional process, and

Table 3: Performance of Conventional and Distributed Transfer Learning for Experiment 2 (train error - test error, in %)

targeting MNIST on the CIFAR-10 network gives performance close to the deep learning outcomes. The second setup leads to the overfitting of SVHN over the CIFAR-100 network, due to the huge number of samples. The other outcome is the poor performance of transferring CIFAR-100 over the SVHN network, as a result of the huge conceptual gap between the original-target domains.

Our observations show that fine-tuning on the training set and calculating BPA on validation result in better generalization of the transferred model on the testing set. On the other hand, computing BPA on the training plus validation sets gives higher performance in the case of hugely different numbers of classes in the original-target datasets. Since we employ BPA to address the class-imbalance problem, we reckon that it better captures the distribution of data by adjoining both train/validation sets, especially when we intend to transfer a few classes of the original dataset to the larger number of classes in the target."}, {"section_index": "8", "section_name": "4 CONCLUSION", "section_text": "We introduce a novel transfer learning method for deep convolutional networks that tackles the optimization complexity of a highly non-convex objective by breaking it into several distributed fine-tuning operations. This also resolves the imbalanced class coverage between original-target domains by using basic probability assignment across several weak single-filter classifiers. With the above boosting, the overall performance shows considerable improvement over the conventional transfer learning scheme. We conduct several experiments on publicly available datasets and report the performance as train-test errors. The results confirm the advantage of our distributed strategy for transfer learning.

Conventional: MNIST -> CIFAR-10: 0.43 - 28.92 | CIFAR-10 -> MNIST: 0.44 - 2.37
Distributed: MNIST -> CIFAR-10: 0.25 - 20.85 | CIFAR-10 -> MNIST: 0.23 - 0.95
Conventional: SVHN -> CIFAR-100: 0.71 - 89.31 | CIFAR-100 -> SVHN: 0.01 - 12.18
Distributed: SVHN -> CIFAR-100: 0.46 - 61.10 | CIFAR-100 -> SVHN: 0.28 - 7.25"}]
Hkg8bDqee
[{"section_index": "0", "section_name": "INTROSPECTION:ACCELERATING NEURAL NETWORK TRAINING BY LEARNING WEIGHT EVOLUTION", "section_text": "Abhishek Sinha\nDepartment of Electronics and Electrical Comm. Engg IIT Kharagpur West Bengal. India\n1. Set1 : Weight updates were carried out at training steps 12000 and 17( 2. Set2 : Weight updates at steps 15000 and 18000 . 3. Set3 : Weight updates at steps 12000 , 15000 and 19000 . 4. Sets : Weight updates at steps 14000 . 17000 and 20000\nWe observed that for the CIF ARj network that in order to reach a validation accuracy of 85.7% we need 40,000 iterations with normal SGD without any intervention with the introspection network I. In all the four sets where the introspection network was used, the target accuracy of 85.7% was reached in approximately 28,O00 steps. This shows that the introspection network is able to successfully generalize to a new dataset and new architecture and show significant gains in training time.\nOn CIFAR1. the time taken by I for prediction is negligible compared to the time required fo SGD. So the training times in the above cases on CIFAR1 can be assumed to be proportional tc. the number of SGD steps required."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "A comparison of the validation accuracy with and without updates by I at the four different sets of jump points are shown in figures[16]17][18|and 19] The results show that the while choice of jump points have some effect on the final result, the effects are not very huge. In general, we notice that. better accuracy is reached when the jumps take place in later training steps.\nNeural Networks are function approximators that have achieved state-of-the-ar accuracy in numerous machine learning tasks. In spite of their great success in terms of accuracy, their large training time makes it difficult to use them for various tasks. In this paper, we explore the idea of learning weight evolutior pattern from a simple network for accelerating training of novel neural networks.\n0.86 Plot of accuracy vs training steps for cifar-10 0.85 0.84 0.83 0.82 Cenre 0.81 0.8 0.79 0.78 Introspection network applied normal training 0.77 0.5 1.5 2 2.5 3 3.5 Training steps 4 104\nWe use a neural network to learn the training pattern from MNIST classifi cation and utilize it to accelerate training of neural networks used for CIFAR-10 and ImageNet classification. Our method has a low memory footprint and is computationally efficient. This method can also be used with other optimizers to give faster convergence. The results indicate a general trend in the weight evolution during training of neural networks.\nFigure 16: Validation accuracy plot for CIFAR1 with jumps at Set"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks have been very successful in modeling high-level abstractions in data. How.. ever, training a deep neural network for any AI task is a time-consuming process. This is because a. large number of parameters need to be learnt using training examples. Most of the deeper network. can take days to get trained even on GPU thus making it a major bottleneck in the large-scale appli. cation of deep networks. Reduction of training time through an efficient optimizer is essential for. 
fast design and testing of deep neural nets..\nIn the context of neural networks, an optimization algorithm iteratively updates the parameters (weights) of a network based on a batch of training examples, to minimize an objective function The most widely used optimization algorithm is Stochastic Gradient Descent. Even with the adven of newer and faster optimization algorithms like Adagrad, Adadelta, RMSProp and Adam there is still a need for achieving faster convergence.\nFigure 18: Validation accuracy plot for CIFAR1 with jumps at Set3\nIn this work we apply neural network to predict weights of other in-training neural networks to accelerate their convergence. Our method has a very low memory footprint and is computationally efficient. Another aspect of this method is that we can update the weights of all the layers in parallel"}, {"section_index": "3", "section_name": "4.2.3 IMAGENET", "section_text": "*This work was done as part of an internship at Adobe Systems, Noida\nCIFARj were done to investigate two issues. The first was to investigate if the introspection net work trained on MNIST weight evolutions is able to generalize to a different network and different dataset. The second was to investigate the effect of varying the timing of the initial jump, the inter- val between successive jumps and the number of jumps. To investigate these issues, four separate training instances were performed with 4 different set of jump points:\nMausoom Sarkar\nAdobe Systems Inc. Noida Uttar Pradesh.India.\nkbalaji at adobe dot com\n0.86 Plot of accuracy vs trainii. Plot of accuracy vs training steps for cifar-10 0.86 0.85 0.85 0.84 .84 0.83 0.79 0.78 network applie 0.77 1.5 2.5 3.5 .5 3.5 Training steps 10 raining steps 104\nFigure 17: Validation accuracy plot for CIFAR1 with jumps at Set\nMWVW ).E 0.85 0.83 ntrospection network app 3.5 104\nFigure 19: Validation accuracy plot for CIFAR with jumps at Set4\nTo investigate the practical feasibility and generalization ability of our introspection network, we applied it in training AlexNet(Krizhevsky et al.||2012) (AlexNet1) on the ImageNet (Russakovsky"}, {"section_index": "4", "section_name": "2 RELATED WORK", "section_text": "et al.2015) dataset. It has 5 conv layers and 3 fully connected layers . Max pooling and loca response normalization have been used after the two starting conv layers and the pooling layer i there after the fifth conv layer as well. We use SGD with momentum of O.9 to train this network starting from a learning rate of 0.01. The learning rate was decreased by one tenth every 100, 00 iterations. The mini-batch size was 128. It takes approximately 300,000 steps for convergence. Th weight updates were carried out at training steps 120, 000 , 130, 000 , 144, 000 and 160, 000 .\nSeveral extensions of Stochastic Gradient Descent have been proposed for faster training of neura networks. Some of them are Momentum (Rumelhart et al.]1986), AdaGrad (Duchy et al.2011) AdaDelta (Zeiler2012), RMSProp (Hinton et al.]2012) and Adam (Kingma & Ba]2014). All o1 them reduce the convergence time by suitably altering the learning rate during training. Our methoc can be used along with any of the above-mentioned methods to further improve convergence time.\nWe find that in order to achieve a top-5 accuracy of 72%, the number of iterations required in the normal case was 196,O00. When the introspection network was used, number of iterations required to reach the same accuracy was 179,000. 
Again the time taken by I for prediction is negligible compared to the time required for SGD. A comparison of the validation accuracy with and without updates by I is shown in figure 20 The green lines indicate the steps at which the introspection network I is used. The corresponding plot of loss function against training steps has been shown in figure21\nIn the above approaches, the weight update is always a product of the gradient and the modi fied/unmodified learning rate. More recent approaches (Andrychowicz et al.] 2016) have tried tc learn the function that takes as input the gradient and outputs the appropriate weight update. This exhibited a faster convergence compared to a simpler multiplication operation between the learning rate and gradient. Our approach is different from this, because our forecasting Network does no use the current gradient for weight update, but rather uses the weight history to predict its futur value many time steps ahead where network would exhibit better convergence. Our approacl generalizes better between different architectures and datasets without additional retraining. Furthe our approach has far lesser memory footprint as compared to (Andrychowicz et al.] 2016). Also ou approach need not be involved at every weight update and hence can be invoked asynchronously which makes it computationally efficient.\nPlot of accuracy vs training steps for imageNet 0.74 normal training Introspection network applied 0.72 0.7 0.68 0.66 0.64 0.62 0.6 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2 Training steps X105\nAnother recent approach, called Q-gradient descent (Fu et al. 2016), uses a reinforcement learning framework to tune the hyperparameters of the optimization algorithm as the training progresses. The Deep-Q Network used for tuning the hyperparameters itself needs to be trained with data from any specific network N to be able to optimize the training of N. Our approach is different because we use a pre-trained forecasting Network that can optimize any network N without training itself by data from N.\nFinally the recent approach by (Jaderberg et al.|2016) to predict synthetic gradients is similar to ou. work, in the sense that the weights are updates independently, but it still relies on an estimation o the gradient, while our update method does not..\nFigure 20: Validation accuracy plot for AlexNet1 on ImageNet\nOur method is distinct from all the above approaches because it uses information obtained from t training process of existing neural nets to accelerate the training of novel neural nets..\nThe results on Alexnetj show that our approach has a small memory footprint and computationally efficient to be able to scale to training practical large scale networks.\nThe evolution of weights of neural networks being trained on different classification tasks such as on MNIST and CIFAR-10 datasets and over different network architectures (weights from different layers of fully connected as well as convolutional architectures) as well as different optimization rules were analyzed. It was observed that the evolution followed a general trend independent of the task the model was performing or the layer to which the parameters belonged to. A major proportion of the weights did not undergo any significant change. Two metrics were used to quantify weight. changes:\nIn this section we provide a comparison with other optimizers and simple heuristics which can be. 
used to update the weights at different training steps instead of updations by introspection network"}, {"section_index": "5", "section_name": "4.4 COMPARISON WITH ADAM OPTIMIZER", "section_text": "We applied the introspection network on MNIST and MNIST; networks being trained with Adam optimizer with learning rates of 1e - 4 and 1e 3. The results in figure 22| and figure 23|show that while Adam outperforms normal SGD and SGD with introspection, we were able to successfully apply the introspection network on Adam optimizer and accelerate it.\nnormal training Introspection network applied 2.6 SSO 2.2 Training steps .6 .0 X 10\nFigure 21: Plot of loss function vs training steps for AlexNet1 on ImageNet\nDifference between the final and initial values of a weight scalar: This is a measure of how much a weight scalar has deviated from its initial value after training.In figure 4|we show the frequency histogram plot of the weight changes in a convolutional network trained for. MNIST image classification task, which indicates that most of the weight values do not. undergo a significant change in magnitude. Similar plots for a fully connected network. trained on MNIST dataset ( figure [6) and a convolutional network trained on CIFAR-10 dataset (figure[8) present similar observations. Square root of 2nd moment of the values a weight scalar takes during training: Through this measure we wish to quantify the oscillation of weight values. This moment has been. taken about the initial value of the weight. In figure[5] we show the frequency histogram plot of the second moment of weight changes in a convolutional network trained for the MNIST digit classification task, which indicates that most of the weight values do not. undergo a significant oscillations in value during the training.. Similar plots for a fully\nFor MNIST the max accuracy achieved by Adam with introspection was 99.34%, by normal. Adam was 99.3%, by SGD with introspection was 99.21% and by normal SGD was 99.08% . With. introspection applied on Adam the model reaches the max accuracy as achieved by normal Adam after only 7200 steps whereas the normal training required 10000 steps.\nFor M NI ST the max accuracy achieved by Adam with introspection was 96.9%, by normal Adam. was 95.7%, by SGD with introspection was 94.47% and by normal SGD was 93.39% . With intro-. spection applied on Adam the model reaches the max accuracy as achieved by normal Adam after only 8800 steps whereas the normal training required 15000 steps..\n0.9 0.985 foeanooe 0.9 Introspection on Sgd Introspection on adar Normal Adam Normal Sgd 0.97 3000 4000 5000 6000 7000 8000 9000 10000 training steps\nA very small subset of the all the weights undergo massive changes compared to the rest\nFigure 22: : Test accuracy comparison for MNIST for SGD and Adam optimiser in the presence and absence of introspection.\n0.015 0.010 0.005 0.000 -0.005 -0.010 0 10000 20000 30000 40000 50000 Training steps\n0.015 0.010 0.005 0.000 -0.005 -0.010 0 10000 20000 30000 40000 50000 Training steps\nA separate quadratic curve was fit to each of the weight values of the model on the basis of the 4 past weight values chosen from history.The weight values chosen from history were at the same steps as they were for updations by I. 
The new updated weight would be the value of the quadratic curve at some future time step.For M N I ST1 , experiments were performed by updating the weights to the value predicted by the quadratic function at a future timestep which was one of 1.25,1.3 ol 1.4 times the current time step. For other higher jump ratios the updates would cause the model tc diverge, and lower jump ratios did not show much improvement in performance. The plot showing the comparison in validation accuracy have been shown below in figure24\nFigure 1: Deviation of weight values from initialized values as a convolutional network gets trainec on MNIST dataset using SGD optimizer..\n0.9i 0.985 0.98 oe 0.975 0.97 0.965 Normal SGD With introspection network QuadraticFit(*1.4) Quadratic fit(*1.3) Quadratic Fit(*1.25) 0.96 2000 4000 6000 8000 training steps 10000 12000 14000 16000\nDeviation of weight value from initialization with training fully connected network on MNiST 0.8 yaue 0.6 0.4 0.2 0.0 0.2 0.4 0.6 -0.8 0 20000 40000 60000 80000 100000 Training steps\nfully connected network on MNIS Deviation of weight values from initialized values 0.8 when training a convolutional network on ClFAR-10 0.10 0.6 . 0.05 0.4 0.2 0.00 0.0 0.05 -0.2 0.10 biereee -0.4 oo lieeeeee 0.6 0.15 0.8 0 20000 40000 60000 80000 100000 0.20 Training steps 10000 20000 30000 40000 50000 Training steps\nFigure 24: Comparison of test accuracy for MNIST with weight updations by Intro-. spection and quadratic fit..\nThe max accuracy achieved with introspection applied was 99.21% whereas with quadratic fit it was 99.19%. We note that even though the best performing quadratic fit eventually almost reaches the same max accuracy than that achieved with introspection network, it required considerable exper imentation to find the right jump ratio.A unique observation for the quadratic fit baseline was that it would take the accuracy down dramatically, upto 9.8%, from which the training often never re- covers. Sometimes,the optimizers (SGD or Adam) would recover the accuracy, as seen in figure|24 Moreover, the quadratic fit baseline was not able to generalize to other datasets and tasks. The best performing jump ratio of 1.25 was not able to outperform Introspection on the CIFAR-10 dataset, as seen in figure25\nFigure 2: Deviation of weight values from initialized values as a fully-connected net- work gets trained on MNIST dataset using Adam optimizer."}, {"section_index": "6", "section_name": "3.1 WEIGHT PREDICTION", "section_text": "In the CIFAR-10 case, The maximum accuracy achieved via updations by introspection was 85.6. which was achieved after 25500 steps, whereas with updations by quadratic fit, the max accuracy of 85.45 was achieved after 27200 steps.\nWe collect the weight evolution trends of a network that is being trained and use the collected data to train a neural network I to forecast the future values of each weight based on its values in the previous time steps. The trained network I is then used to predict the weight values of an unseen. network N during its training which move N to a state that enables a faster convergence. The. time taken for the forecast is significantly smaller compared to the time a standard optimizer (e.g. SGD) would have taken to achieve the same accuracy. This leads to a reduction in the total training\nFor the normal training via SGD without any updations after 30oo0 steps of training, the max ac. 
curacy of 85.29 was achieved after 26500 steps, whereas the same accuracy was achieved by intro spection after only 21200 steps and after 27000 steps via updation by quadratic.."}, {"section_index": "7", "section_name": "connected network trained on MNIST (figure7) and a convolutional network trained or CIFAR-10 ( figure[9) dataset present similar observations", "section_text": "0.97 0.96 0.95 0.9 0.92 0.91 Introspection on adam Introspection on Sgd Normal Adam Normal Sgd 000 4000 6000 8000 10000 12000 14000 training steps\nThe few that did change significantly were observed to be following a predictable trend, where. they would keep on increasing or decreasing with the progress of training in a predictable fashion. In figuresand 3we show the evolution history of a few weights randomly sampled from the weight change histogram bins of figures4 6|and|8|respectively, which illustrates our observation.\nFigure 23: : Test accuracy comparison for MNIST3 for SGD and Adam optimiser in the presence and absence of introspection.\n0.85 0.84 0.83 0.82 0.81 With introspection network Normal SGD QuadraticFit(*1.25) 0.8 1.2 1.4 1.6 1.8 training steps 2.2 2.4 2.6 2.8 X10\nning Deviation of weight values from initialized values when training a convolutional network on CIFAR-10 0.10 0.05 0.00 0.05 0.10 0.15 100000 0.20 0 10000 20000 30000 40000 500 Training steps\nFigure 25: Comparison of test accuracy for CIFAR-10 with weight updations by Intro- spection and quadratic fit.\nFigure 3: Deviation of weight values from. initialized values as CNN gets trained on CIFAR-10 dataset using SGD optimizer.\nlog-Frequency Distribution of deviation of weight value from initialization 106 105 104 Frenneney 103 102 101 100 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35\nInstead of fitting a quadratic curve to each of the weights we tried fitting a linear curve. Experiments were performed on M N IST for jump ratios of 1.1 and 1.075 as the higher ratios would cause the model to diverge after 2 or 3 jumps.The result has been shown below in figure|26.\nWith in Normal SGD Linear Fit(*1.1) Linear fit(*1.075) 4000 training steps 000 training steps\n0.99 0.985 0.97 0.9 0.965 With introspection network Normal SGD Linear Fit(*1.1) Linear fit(*1.075) 0.96 2000 4000 6000 8000 training steps 10000 12000 14000 16000\nFigure 4: log-Frequency distribution of dif- ference between weight values before and after training for a network No trained on MNIST dataset using SGD optimizer.\nFigure 26: Comparison of test accuracy for MNIST with weight updations by Intro- spection and linear fit.\nAs no significant improvement in performance was observed the experiment was not repeated over cifar."}, {"section_index": "8", "section_name": "4.5 LINEAR INTROSPECTION NETWORK", "section_text": "We removed the ReLU nonlinearity from the introspection network and used the same training procedure of the normal introspection network to predict the future values at 2t. We then used this linear network on the M NI ST network. We found that it gave some advantage over normal SGD but was not as good as the introspection network as shown in figure27 Hence we did not explore this baseline for other datasets and networks.\nThe forecasting network I is a simple 1-layered feedforward neuralnet. The input layer consists oj four neurons that take four samples from the training history of a weight. The hidden layer consist. of 40 neurons, fully connected to the input layer, with ReLU activation. 
The output layer is a single neuron that outputs the predicted future value of the weight. In our experiments four was minimun numbers of samples for which the training of Introspection Network I converged."}, {"section_index": "9", "section_name": "4.5.1 ADDING NOISE", "section_text": "The weight values were updated by adding small gaussian random zero mean noise values . The experiment was performed over MNIST for two different std. value, the results of which have been shown below in figure|28\n0.99 0.985 0.98 0.975 0.97 0.965 With introspection network Noise(std =0.001) Normal SGD Noise(std =0.005) 0.96 2000 4000 6000 8000 training steps 10000 12000 14000 16000"}, {"section_index": "10", "section_name": "4.1 TRAINING OF INTROSPECTION NETWORK", "section_text": "The introspection network I is trained on the training history of the weights of a network No whicl was trained on MNIST dataset.The network No consisted of 3 convolutional layers and two full. connected layers, with ReLU activation and deploying Adam optimiser. Max pooling(2X2 poo. size and a 2X2 stride) was applied after the conv layers along with dropout applied after the first fc. layer. The shapes of the conv layer filters were [5, 5, 1, 8, 5, 5, 8, 16 and [5, 5, 16, 32[ respectivel whereas of the fc layer weight were [512, 1024] and 1024, 10] respectively.The network No wa. trained with a learning rate of 1e - 4 and batch size of 50. The training set of I is prepared as. follows. A random training step t is selected for each weight of No selected as a training sample. and the following 4 values are given as inputs for training I:.\nFigure 28: Test accuracy for M N IST with weight updations via gaussian noise\nSince no significant improvement was observed for the weight updations via noise for MNIST, the experiment was not performed over cifar-10.\n1. value of the weight at step t . 2. value of the weight at step 7t/10 3. value of the weight at step 4t/10 4. at step 0 (i.e. the initialized value)\nlog-Frequency Distribution of square root of 2nd moment about initialized value of weights 105 104 Frennnney 103 102 101 100 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 Square root of 2nd moment about initialized value\n0.99 0.985 \\/ 0.98 0.975 eesl 0.97 0.965 0.96 With introspection network Normal SGD 0.955 LinearIntrospection 2000 4000 6000 training steps 8000 10000 12000 14000\nFigure 5: log-Frequency distribution of square root of 2nd moment of a weight. value(about initial value) along its training history. The weight values are taken from a network No trained on MNIST dataset using. SGD optimizer.\nFigure 27: Validation accuracy plot for MNIST using an introspection network without nonlinearity\ntime. The predictor I that is used for forecasting weights is a comparatively smaller neural network whose inference time is negligible compared to the training time of the network that needs to be trained(N). We call this predictor I Introspection network because it looks at the weight evolution during training.\nThe figure 1o|below shows a comparison of the weight evolution for a single scalar weight value. with and without using the introspection network I. The vertical green bars indicate the points at. which the introspection network was used to predict the future values. Post prediction, the network continues to get trained normally by SGD, until the introspection network I is used once again to. 
jump to a new weight value.\n0.99 0.985 0.98 0.975 0.97 0.965 With introspection network Normal SGD Noise(std =0.001) Noise(std =0.005) 0.96 2000 4000 6000 8000 10000 12000 14000 16000 training steps\nSome of the open questions to be investigated relate to determination of the optimal jump points and investigations regarding the generalization capacity of the introspection network to speed up training\nin RNNs and non-image tasks. Also, we noticed that applying the jumps in very early training steps. while training AlexNet1 tended to degrade the final outcomes. This may be due to the fact that ou. introspection network is extremely simple and has been trained only on weight evolution data fron MNIST. A combination of a more powerful network and training data derived from a diverse set. may ameliorate this problem.\n105 104 Frenneeey 103 102 101 100 0.0 0.2 0.4 0.6 0.8 1.0 1.2 Deviation of Weiabt Valu.\nWe introduced a method to accelerate neural network training. For this purpose, we used a neura. network I that learns a general trend in weight evolution of all neural networks. After learning th trend from one neural network training, I is used to update weights of many deep neural nets on different tasks - MNIST, CIFAR-10, and ImageNet, with varying network architectures, activations. optimizers, and normalizing strategies(batch norm,lrn). Using the introspection network I led t faster convergence compared to existing methods in all the cases. Our method has a small memor footprint, is computationally efficient and is usable in practical settings. Our method is differen from other existing methods in the aspect that it utilizes the knowledge obtained from weights o one neural network training to accelerate the training of several unseen networks on new tasks. Th results reported here indicates the existence of a general underlying pattern in the weight evolutior of any neural network.\nFigure 6: log-Frequency distribution of dif- ference between weight values before and after training for a fully-connected network trained on MNIST dataset using Adam opti mizer.\nlog-Frequency Distribution of deviation of weight value from initialization 106 105 104 Frenneney 103 102 101 100 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 Deviation ofWeight Value"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Figure 8: log-Frequency distribution of dif- ference between weight values before and af- ter training for a CNN trained on CIFAR-10 dataset using SGD optimizer.\nAlex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. 2009\n105 104 Frenneeey 103 102 101 100 0 1 2 3 4 5 Square root of 2nd moment about initialized value\nFigure 7: log-Frequency distribution of. square root of 2nd moment of a weight. value(about initial value) along its training. history. The weight values are taken from a fully-connected network trained on MNIST dataset using Adam Optimizer..\nlog-Frequency Distribution of square root of 2nd moment about initialized value of weights 106 105 104 Freenbeey 103 102 101 100 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 Square root of 2nd moment about initialized value\nFigure 9: log-Frequency distribution of square root of 2nd moment of a weight. value(about initial value) along its training history. The weight values are taken from a CNN trained on CIFAR-10 dataset using. SGD Optimizer.\nEvolution of weights with and without Introspection network -70 SGD SGD + update using Introspection network -75 80 -85 90 0 5000 10000 15000 20000 Training Steps\nMatthew D. Zeiler. 
Adadelta:An adaptive learning method. 2012. URL https : / /arxiv. org. pdf/1212.5701v1.pdf"}, {"section_index": "12", "section_name": "A APPENDIX", "section_text": "In this section, we report some initial results of applying the introspection network I (trained on the. weight evolution of MNIST network N0) to accelerate the training of inception v1 network (Szegedy et al.|2014). We trained the inception v1 network on imagenet dataset with a mini-batchsize of 128 and a RMS optimizer(decay 0.9, momentum 0.9, epsilon 1.0) starting from a learning rate of 0.01. with a decay of 0.94 after every 2 epochs. The network training is still in progress, and we will. eventually report on the final outcome. However we thought it would be valuable to share the. preliminary results all the same\nWe found that applying introspection network seems to be reducing the training time quite signif. icantly. In Figures [29|and 30] we see that applying the introspection network leads to a gain of at least 730,o00 steps.After training for around 1.5 million steps, the maximum accuracy achieved. by normal training was 68.40%, whereas with introspection applied after every 300k steps the max. accuracy achieved was 69.06%.The network achieved the max accuracy of 68.40% after only 852k steps. With introspection applied at steps 200k, 400k and 600k the max accuracy achieved was. 68.69% and it reached the max accuracy achieved by the normal training of model after only 944k. Steps.\nFigure 10: Example of weight update using Introspection Network\nAdam optimizer was used for the training of the introspection network with a mini-batch size of 20.The training was carried out for 30k steps. The learning rate used was 5e-4 which decreased gradually after every 8k training steps. L1- error was used as the loss function for training . We experimented with both L2 error and percentage error but found that L1 error gave the best result over the validation set. The final training loss obtained was 3.1 and the validation loss of the final trained model was 3.4. These correspond to average L1 weight prediction error of 0.0031 and 0.0034 in the training and validation set respectively as the weight values are multiplied by 1o00 before they are input to I.\nHowever, we also observed that choosing the jump points early in the training does not lead tc eventual gains, even though a significant jump in accuracy is observed initially. Figure|31|shows the flattening of the test accuracy after a set of early jumps. It remains to be seen if further interventions later in the training can help maintain the initial accelerated convergence.\n0.65 0.6 0.5 0.45 With introspection network(jump step =300k) With introspection network(jump step =200k) 0.4 Without introspection network. training steps 10 12 14 16 x 10\nThe introspection network once trained can be then used to guide the training of other networks. We illustrate our method by using it to accelerate the training of several deep neural nets with varying architectures on 3 different datasets, namely MNIST, CIFAR-10 and ImageNet. We note that the same introspection network I, trained on the weight evolutions of the MNIST network No was used in all these different cases.\nFigure 29: Test accuracy plot for Inception V1 network with weight updates via intro- spection network at steps 2 105, 4 105 and 6 10(pink curve) and at steps 3 105 6 105 and 9 105(blue curve)\nAll the networks trained using I required comparatively less time to reach the same accuracy as normal SGD training. 
Also, when the same network was trained for the same time with and without updates by I, the former is observed to have better accuracy. These results show that there is a remarkable similarity in the weight evolution trajectories across network architectures,tasks and datasets.\nFour different neural networks were trained using I on MNIST dataset\nWith introspection network(jump step =300k) With introspection network(jump step =200k) Without introspection network With introspection network(jump step =300k x 10 raining ste\n0.68 0.66 nooe se] 0.64 0.62 0.6 With introspection network(jump step =300k) 0.58 Without introspection network. training steps 10 12 14 16 X 10\nAll the networks have been trained using either Stochastic Gradient Descent, or ADAM and the network I is used at a few intermediate steps to propel the network to a state with higher accuracy. We refer to the time step at which the introspection network I is applied to update all the weights as a \"jump point\".\nFigure 30: Test accuracy plot for Inception V1 network with weight updates via intro- spection network at steps 3 105, 6 105 9 x 105\nThe selection of the steps at which I is to be used is dependent on the distribution of the training step t used for training I. We show the effect of varying the timing of the initial jump and the time interval between jump points in section4.2.2] It has been observed that I gives a better increase in accuracy when it is used in later training steps rather than in the earlier ones.\nA convolutional neural network MNIST with 2 convolutional layer and 2 fully con nected layers(dropout layer after 1st fc layer is also present)with ReLU acitvations fo.\n1. A convolutional neural network MNIST with 2 convolutional layer and 2 fully con nected layers(dropout layer after 1st fc layer is also present)with ReLU acitvations for\n0.7 0.6 0.5 0.4 0.3 0.2 With introspection network(jump step =300k) 0.1 With introspection network(jump step =200k) With introspection network(early jumps Without introspection network 2 3 4 5 6 7 8 9 10 training steps X 10\nA comparison of the validation accuracy with and without updates by I is shown in figures[11]12 13and14 The green lines indicate the steps at which the introspection network I is used. For the M N I ST network with the application of the introspection network I at three points, we found that it took 251 seconds and 20000 SGD steps to reach a validation accuracy of 98.22%. In the same number of SGD steps, normal training was able to reach a validation accuracy of only 97.22%. In the same amount of time (251 seconds), normal training only reached 97.92%. Hence the gain in accuracy with the application of introspection network translates to real gains in training times.\nFigure 31: Test accuracy plots for Inception V1 network with weight updates via introspection network in early training. steps.\n0.992 Plot of a 0.99 ntrospection netw vork applied 0.988 0.985 0.986 0.984 0.982 0.98 0.978 0.976 0.965 With introspection network 0.974 Without introspection network 0000 12000 14000 I6000 0.972 training steps 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 Training steps\nFigure 11: Validation accuracy plot for MNIST\nThe initial drop in accuracy seen after a jump in M N I ST2 figure[12|can be attributed to the fact that each weight scalar is predicted independently, and the interrelationship between the weight scalars in a layer or across different layers is not taken into consideration. This interrelationship is soon reestablished after few SGD steps. 
This phenomenon is noticed in the CIFAR and ImageNet cases tOO.\nclassification task on MNIST image dataset.Max pooling(2X2 pool size and a 2X2 stride). was applied after every conv layer. The CNN layer weights were of shape [5, 5,1, 8] and [5, 5, 32, 64] respectively and the fc layer were of sizes [3136, 1024] and [1024, 10].The weights were initialised from a truncated normal distribution with a mean of 0 and std of 0.01. The network was trained using SGD with a learning rate of 1e-2 and batch size of 50. It takes approximately 20,000 steps for convergence via SGD optimiser. For M NIST1, I was used to update all weights at training step 3000, 4000, and 5000.. 2. A convolutional network M N I ST, with 2 convolutional layer and 2 fully connected layers with ReLU acitvations. Max pooling(2X2 pool size and a 2X2 stride) was applied after ev-. ery conv layer. The two fc layer were of sizes [800, 500] and [500, 10] whereas the two conv layers were of shape [5, 5, 1, 20] and [5, 5, 20, 50] respectively. The weight initialisations. were done via xavier intialisation. The initial learning rate was O.01 which was decayed. via the inv policy with gamma and power being 1e - 4 and 0.75 respectively. Batch size of 64 was used for the training.It takes approximately 10,000 steps for convergence . The. network I was used to update weights at training step 2500 and 3000.. 3. A fully connected network M NIST; with 2 hidden layers each consisting of 256 hidden units and having ReLU acitvations. The network was trained using SGD with a learning. rate of 5e - 3 and a batch size of 100. The initial weights were drawn out from a normal distribution having mean O and std as 1.0. For this network the weight updations were. carried out at steps 6000, 8000 and 10000. 4. A RNN MNIST4 used to classify MNIST having a LSTM cell of hidden size of 128 followed by a fc layer of shape 128, 10] for classification. The RNN was trained on Adam optimizer with a learning rate of 5e - 4 and a batch size of 128. The weight updations for. this network were done at steps 2000,3000 and 4000. Since the LSTM cell uses sigmoid. and tanh activations, the RNN M N I ST4 allows us to explore if the introspection network, trained on ReLU can generalize to networks using different activation functions..\nFor the MNIST2 network, the figure [12|shows that to reach an accuracy of 99.11%, the number of iterations required by normal SGD was 6000, whereas with the application of the introspection network I, the number of iterations needed was only 3500, which represents a significant savings in time and computational effort.\n14000 000 6500\nFigure 13: Validation accuracy plot for M N I ST3\nFor MNIST; after 15oo0 steps of training,the max accuracy achieved by normal training of net work via Adam optimizer was 95.71% whereas with introspection network applied the max accuracy was 96.89%. To reach the max accuracy reached by normal training , the modified network(weights updated by I) took only 8300 steps.\nFor M N I ST4 after 7o00 steps of training, the max accuracy achieved by normal training of network was 98.65% achieved after 6500 steps whereas after modification by I it was 98.85% achieved aftel 5300 steps. The modified network(weights updated by I) reached the max accuracy achieved by normal network after only 4200 steps. 
It is notable that the introspection network I trained on weight evolutions with ReLU activations was able to help accelerate the convergence of an RNN network which uses sigmoid and tanh activations.\n0.994 0.992 0.99 0.988 aeenneey 0.986 rest 0.984 0.982 0.98 Jump*2.2 Jump(2) 0.978 Jump(*1.5) Jump(*1.3) 0.4 0.6 0.8 1.2 1.4 1.6 1.8 2 training steps X10\n0.994 0.992 0.99 0.988 aeernner 0.986 eet 0.984 0.982 0.98 Jump(*2.2) Jump(*2) 0.978 Jump(*1.5) Jump(*1.3) 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 training steps x10\nFigure 15: Comparison of introspection networks trained with different jump ratios on MNIST network with Adam optimizer.Jump of 2.0 has a more consistent out performance compared to jump value of 2.2 even though it reaches a slightly higher accuracy\nFigure 14: Validation accuracy plot for MNIST Which is an RNN\nWe applied our introspection network I on a CNN CI F AR1 for classifying images in the CIFAR10. (Krizhevsky]2009) dataset. It has 2 convolutional layers, 2 fully connected layer and a final soft-. max layer with ReLU activation function. Max pooling (3X3 pool size and a 2X2 stride) and batch. normalization has been applied after each convolutional layer. The two conv layer filter weights. were of shape [5, 5, 3, 64] and [5, 5, 64, 64] respectively whereas the two fc layers and final softmax. ayer were of shape 2304, 384[,[384, 192 and 192, 10 respectively. The weights were initialized from a zero mean normal distribution with std of 1e - 4 for conv layers,0.04 for the two fc layers. and 1/192.0 for the final layer. The initial learning rate used is 0.1 which is decayed by a factor of. 0.1 after every 350 epochs. Batch size of 128 was used for training of the model which was trained. via the SGD optimizer. It takes approximately 40,O00 steps for convergence. The experiments on"}]
HyWDCXjgx
[{"section_index": "0", "section_name": "MULTI-LABEL LEARNING WITH THE RNNs FOR FASHION SEARCH", "section_text": "taey.16@navercorp.com\nWe build a large-scale visual search system which finds similar product images. given a fashion item. Defining similarity among arbitrary fashion-products is. still remains a challenging problem, even there is no exact ground-truth. To re. solve this problem, we define more than 90 fashion-related attributes, and com-. bination of these attributes can represent thousands of unique fashion-styles. We. then introduce to use the recurrent neural networks (RNNs) recognising multiple. fashion-attributes with the end-to-end manner. To build our system at scale, these. fashion-attributes are again used to build an inverted indexing scheme. In addition. to these fashion-attributes for semantic similarity, we extract colour and appear ance features in a region-of-interest (ROI) of a fashion item for visual similarity. By sharing our approach, we expect active discussion on that how to apply current. deep learning researches into the e-commerce industry..\nFigure 9: Examples of retrieved results on Holidays and UKB. The violet rectangles denote the ground-truth nearest-neighbors corresponding queries.\naligning zero-centering of the output feature space weakly. Therefore, we believe that a code from a well-trained neural model, itself, can be a good feature even to be binarized. In our experiment, such simple thresholding degrades mAP by O.02 on the Holidays dataset, but this method makes it possible to scaling up in the retrieval. In addition to the appearance feature, we extract colour feature using the simple (bins) colour histogram in HSV space, and distance between a query and a reference image is computed by using the weighted combination of the two distances from the colour and the appearance feature."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep learning technology has given great success in computer vision tasks such as efficient feature. representation (Razavian et al.[2014]|Babenko et al.[2014), classification (He et al.[|2016a] Szegedy et al.[2016b), detection (Ren et al.]2015fZhang et al.[2016), and segmentation (Long et al.[2015). Furthermore, image to caption generation (Vinyals et al.2015} Xu et al.2015) and visual ques- tion answering (VQA) (Antol et al.2015) are emerging research fields combining vision, language. (Mikolov et al.]2010), sequence to sequence (Sutskever et al.]2014), long-term memory (Xiong et al.[2016) based modelling technologies.\nTo evaluate empirical results of the proposed fashion-product search system, we select 3 million fashion-product images in our e-commerce platform at random. These images are mutually ex- clusive to the fashion-attribute dataset. We have again selected images from the web used for the queries. All of the reference images pass through the offline process as described in Sec. 3] and resulting inverted indexing database is loaded into main-memory (RAM) by our daemon system We send the pre-selected queries to the daemon system with the RESTful API. The daemon system then performs the online process and returns nearest-neighbor images correspond to the queries In this scenario, there are three options to get similar fashion-product images. Option 1 is that the fashion-attribute recognition model automatically selects fashion-category, the most likely to be queried in the given image. Option 2 is that a user manually selects a fashion-category given a query image. (see Fig. 
10) Option 3 is that a user draw a rectangle to be queried by hand like Jing et al. (2015). (see Fig.11) By the recognized fashion-attributes, the retrieved results reflect the user's main needs, e.g. gender, season, utility as well as the fashion-style, that could be lacking when using visual feature representation only.\nThese computer vision researches mainly concern about general object recognition. However, ir our fashion-product search domain, we need to build a very specialised model which can mimic human's perception of fashion-product similarity. To this end, we start by brainstorming about what makes two fashion items are similar or dissimilar. Fashion-specialist and merchandisers are also involved. We then compose fashion-attribute dataset for our fashion-product images. Table 1|explains a part of our fashion-attributes. Conventionally, each of the columns in Table[1can be modelled as a multi-class classification. Therefore, our fashion-attributes naturally is modelled as a multi-label classification."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Online commerce has been a great impact on our life over the past decade. We focus on an online market for fashion related items' Finding similar fashion-product images for a given image query is a classical problem in an application to computer vision, however, still challenging due to the absence of an absolute definition of the similarity between arbitrary fashion items.\n)ption Option 2 (a) For the Option2, the guided information is \"pants' Option 1 Option 2 (b) For the option 2, the guided information is \"blouse\"\nTable 1: An example of fashion-attributes\nMulti-label classification has a long history in the machine learning field. To address this problem, a straightforward idea is to split such multi-labels into a set of multi-class classification problems. In our fashion-attributes. there are more than 90 attributes. Consequently. we need to build more than 90 classifiers for each attribute. It is worth noting that, for example, collar attribute can represent the upper-garments, but it is absent to represent bottom-garments such as skirts or pants, which means some attributes are conditioned on other attributes. This is the reason that the learning tree structure of the attributes dependency can be more efficient (Zhang & Zhang2010]Fu et al.]2012fGibaja & Ventura2015).\nRecently, recurrent neural networks (RNN) are very commonly used in automatic speech recognitior. (ASR)(Graves et al. 2013 Graves & Jaitly2014), language modelling (Mikolov et al.]2010) word dependency parsing (Mirowski & Vlachos2015), machine translation (Cho et al.[2014), anc. dialog modelling (Henderson et al.2014Serban et al.] 2016). To preserve long-term dependency in hidden context, Long-Short Term Memory (LSTM) (Hochreiter & Schmidhuber1997) and its. variants (Zaremba et al.2014f Cooijmans et al.[2016) are breakthroughs in such fields. We use this. LSTM to learn fashion-attribute dependency structure implicitly. By using the LSTM, our attribute. recognition problem is regarded to as a sequence classification. There is a similar work in Wang. et al.(2016), however, we do not use the VGG16 network (Simonyan & Zisserman. 2014) as ar image encoder but use our own encoder. To the best of our knowledge, it is the first work applying. LSTM into a multi-label classification task in the commercial fashion-product search domain.\nWe start by building large-scale fashion-attribute dataset in the last year. 
We employ maximum 100 man-months and take almost one year for completion. There are 19 fashion-categories and more than 90 attributes for representing a specific fashion-style. For example, top garments have the T- shirts, blouse, bag etc. The T-shirts category has the collar, sleeve-length, gender, etc. The gende. attribute has binary classes (i.e. female and male). Sleeve-length attribute has multiple classes (i.e long, a half, sleeveless etc.). Theoretically, the combination of our attributes can represent thousands of unique fashion-styles. A part of our attributes are in Table[1 ROIs for each fashion item in an image are also included in this dataset. Finally, we collect 1 million images in total. This internal dataset is to be used for training our fashion-attribute recognition model and fashion-product ROI detector respectively.\n(b) For the option 2, the guided information is \"blouse\"\nIn this section, we describe the details of our system. The whole pipeline is illustrated in Fig. 3 As a conventional information retrieval system, our system has offline and online phase. In offline process, we take both an image and its textual meta-information as the inputs. The reason we take additional textual meta-information is that, for example, in Fig. 1a dominant fashion item in the image is a white dress however, our merchandiser enrolled it to sell the brown cardigan as described\nFigure 10: Similar fashion-product search for the Option 1 and the Option 2\nGreat-category Fashion-category Gender Silhouette Collar sleeve-length (3 classes) (19 classes) (2 classes) (14 classes) (18 classes) (6 classes) bottom T-shirts male shirt long ... normal top female A-line turtle a half ... pants ... bags ... sleeveless ... round :..\nThe remaining of this paper is organized as follows. In Sec. 2] We describe details about our fashion-attribute dataset. Sec. 3 describes the proposed fashion-product search system in detail. Sec.4lexplains empirical results given image queries. Finally, we draw our conclusion in Sec.5\nCropped region Cropped region Option 3 Option 3 Figure 11: Similar fashion-product search for the Option 3.\nb) Textual meta-information: Textual meta-information women's clothes/ brend-new/ cardigan and knit/. women's shirts, blouse/. round-neck cardigan see-through blouse\nFigure 1: Examples of image and its textual meta-information\nin its meta-information. In Fig.1b, there is no way of finding which fashion item is to be sold with out referring the textual meta-information seller typed manually. Therefore, knowing intension (i.e. what to sell) for our merchandisers is very important in practice. To catch up with these intension, we extract fashion-category information from the textual meta. The extracted fashion-category in- formation is fed to the fashion-attribute recognition model. The fashion-attribute recognition model predicts a set of fashion-attributes for the given image. (see Fig.2) These fashion-attributes are. used as keys in the inverted indexing scheme. On the next stage, our fashion-product ROI detector. finds where the fashion-category item is in the image. (see Fig. 8) We extract colour and appear-. ance features for the detected ROI. These visual features are stored in a postings list. In these processes, it is worth noting that, as shown in Fig. 8f our system can generate different results in the fashion-attribute recognition and the ROI detection for the same image by guiding the fashion-. category information. 
In online process, there is two options for processing a user-query. We can\nFigure 11: Similar fashion-product search for the Option 3"}, {"section_index": "3", "section_name": "5 CONCLUSIONS", "section_text": "#shoes, #male, #leather, #top, #coat, #female, #bottom,#pants, #female, #bag,#female,#midimum-size, #under-ankle, #low-heel. #long-sleeved, #monochrom, #long, #skiny-shilloutte,. #handbag, #zipper-lock, #leather #monochrom,#shoelace #tailored-collar, #car-coat, #normal-waist, #belt-type, #normal-fit #double-button-type #botton-lock, #in-pocket, #fading NEO #top-bottom, #dress, #female. #bottom, #pants, #male, #top, #suit-jacket, #male, #shoes, #female, #leather, #slim,#mini,#pencil, #long,#sweetpants,#Elastic-waist, #tailored-collar, #long-sleeved, #ankle-boot, #high-heel, #round-neck, #long-sleeved #in-pocket, #sibori, #brend-logo #modern-fit,#two-button #monochrom,#buckle\nArtem Babenko, Anton Slesarev, Alexander Chigorin, and Victor S. Lempitsky. Neural codes f image retrieval. CoRR, abs/1404.1777, 2014.\nFigure 2: Examples of recognized fashion-attributes for given images\nKyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. CoRR, abs/1409.1259, 2014.\ntake a guided information, what the user wants to find, or the fashion-attribute recognition mode. automatically finds what fashion-category item is the most likely to be queried. This is up to the. user' s choice. For the given image by the user, the fashion-attribute recognition model generate.. fashion-attributes, and the results are fed into the fashion-product ROI detector. We extract coloui. and appearance features in the ROI resulting from the detector. We access to the inverted index. addressed by the generated a set of fashion-attributes, and then get a postings list for each fashion. attribute. We perform nearest-neighbor retrieval in the postings lists so that the search complexity i reduced drastically while preserving the semantic similarity. To reduce memory capacity and speec. up this nearest-neighbor retrieval process once more, our features are binarized and CPU depen.\nTim Cooijmans, Nicolas Ballas, Cesar Laurent, and Aaron C. Courville. Recurrent batch normal ization. CoRR, abs/1603.09025, 2016\nBin Fu, Zhihai Wang, Rong Pan, Guandong Xu, and Peter Dolog. Learning tree structure of label dependency for multi-label learning. Advances in Knowledge Discovery and Data Mining, 2012.\nEva Gibaja and Sebastian Ventura. A tutorial on multilabel learning. The ACM Computing Surveys 2015.\nCropped region Cropped region Option 3 Option 3\nToday ' s deep learning technology has given great impact on various research fields. Such a success story is about to be applied to many industries. Following this trend, we traced the start-of-the art computer vision and language modelling research and then, used these technologies to create value for our customers especially in the e-commerce platform. We expect active discussion on that how to apply many existing research works into the e-commerce industry.\nXinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and. C. Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. CoRR abs/1504.00325, 2015.\nOffline Colour and appearence textual meta-info.: feature extraction Information ROI detection brand-new/ Extraction woman's wear/ recognition Attribute skirts Reference Online Inverted index postings postings ... 
Figure 3: The whole pipeline of the proposed fashion-product search system. (Dashed lines denote the flows of the guided information.)

A CPU-dependent intrinsic instruction (i.e., the assembly popcnt instruction) is used to compute the Hamming distance.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition, 2016a.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition, 2015.

Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. Classifier chains for multi-label classification. In The European Conference on Machine Learning and Knowledge Discovery in Databases, 2009.

Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, 2015.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. The International Journal of Computer Vision, 2015.

Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In The AAAI Conference on Artificial Intelligence, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016b.

Piotr Mirowski and Andreas Vlachos. Dependency recurrent neural language models for sentence completion. CoRR, abs/1507.01193, 2015.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In The IEEE Conference on Computer Vision and Pattern Recognition, 2015.

We build our own vision encoder network (ResCeption), which is based on the inception-v3 architecture (Szegedy et al., 2016b). To improve both speed of convergence and generalization, we introduce a shortcut path (He et al., 2016a;b) for each data-flow stream (except streams containing at most one convolutional layer) in all inception-v3 modules. Denote by $x^l$ the input of the $l$-th layer and by $x^{l+1}$ its output; the $l$-th layer is a function $H : x^l \mapsto x^{l+1}$, and $\mathcal{L}(\theta; x)$ is the loss function. The forward and backward propagation are then derived as

$$x^{l+1} = H(x^l) + x^l, \quad (1)$$

$$\frac{\partial x^{l+1}}{\partial x^l} = \frac{\partial H(x^l)}{\partial x^l} + 1. \quad (2)$$

Imposing the gradients from the loss function onto the $l$-th layer via Eq. (2),

$$\frac{\partial \mathcal{L}}{\partial x^l} = \frac{\partial \mathcal{L}}{\partial x^L}\,\frac{\partial x^L}{\partial x^{L-1}} \cdots \frac{\partial x^{l+1}}{\partial x^l} = \frac{\partial \mathcal{L}}{\partial x^L}\Big(1 + \sum_{i=l}^{L-1} \frac{\partial H(x^i)}{\partial x^l}\Big). \quad (3)$$

As in Eq. (3), the error signal $\frac{\partial \mathcal{L}}{\partial x^L}$ goes down to the $l$-th layer directly through the shortcut path, and the gradient signals from the $(L-1)$-th layer down to the $l$-th layer are added consecutively (i.e., an additive rather than a multiplicative composition with the initial error from the loss), so the network is largely free from the vanishing or exploding gradient problem. Fig. 4 depicts the network architecture for shortcut paths in an inception-v3 module.

Figure 4: Network architecture for shortcut paths (depicted in two red lines) in an inception-v3 module.
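To make Eq. (1) concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of wrapping one factorized-convolution stream of an inception-style module with a residual shortcut. The channel sizes, paddings, and single-stream layout are illustrative assumptions; the 1x1 projection stands in for the projection shortcut discussed next.

```python
import torch
import torch.nn as nn

class ShortcutBranch(nn.Module):
    """Wrap one data-flow stream H of an inception-style module with a
    residual shortcut, x_{l+1} = H(x_l) + x_l (Eq. 1). A 1x1 projection
    replaces the identity when input/output channel counts differ."""
    def __init__(self, branch, in_ch, out_ch):
        super().__init__()
        self.branch = branch
        self.proj = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, kernel_size=1))

    def forward(self, x):
        return self.branch(x) + self.proj(x)

# Example: a 1x7 -> 7x1 factorized-convolution stream, as in inception-v3,
# with made-up channel sizes; padding keeps spatial dims so the sum is valid.
stream = nn.Sequential(
    nn.Conv2d(192, 128, kernel_size=(1, 7), padding=(0, 3)),
    nn.ReLU(inplace=True),
    nn.Conv2d(128, 128, kernel_size=(7, 1), padding=(3, 0)),
)
block = ShortcutBranch(stream, in_ch=192, out_ch=128)
out = block(torch.randn(1, 192, 17, 17))  # -> shape (1, 128, 17, 17)
```

The addition requires matching tensor shapes, which is exactly the dimension constraint that motivates the projection shortcuts described below.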
We use projection shortcuts throughout the original inception-v3 modules due to the dimension constraint (if the input and output dimensions of the main branch are not the same, a projection shortcut should be used instead of an identity shortcut). To demonstrate the effectiveness of the shortcut paths in the inception modules, we reproduce the ILSVRC2012 classification benchmark (Russakovsky et al., 2015) for inception-v3 and our ResCeption network. As in Fig. 5a, we verify that residual shortcut paths are beneficial for fast training and slightly better generalization. The whole training curve is shown in Fig. 5b. The best validation error reaches 23.37% and 6.17% at top-1 and top-5, respectively, which is a competitive result. To demonstrate the representation power of our ResCeption, we employ a transfer learning strategy, applying the pre-trained ResCeption as an image encoder to generate captions. In this experiment, we verify that our ResCeption encoder outperforms the existing VGG16 network on the MS-COCO challenge benchmark (Chen et al., 2015). The best validation CIDEr-D score (Vedantam et al., 2015) for c5 is 0.923 (see Fig. 5c) and the test CIDEr-D score for c40 is 0.937. (We submitted our final result with beam search to the MS-COCO evaluation server and found that beam search improves the final CIDEr-D c40 score by 0.02.)

Figure 5: Training curves on the ILSVRC2012 and MS-COCO datasets with our ResCeption model. (a) Early validation curve on the ILSVRC2012 dataset. (b) The whole training curve on the ILSVRC2012 dataset. (c) Validation curve on the MS-COCO dataset.

Liliang Zhang, Liang Lin, Xiaodan Liang, and Kaiming He. Is faster R-CNN doing well for pedestrian detection? CoRR, abs/1607.07032, 2016.

Min-Ling Zhang and Kun Zhang. Multi-label learning by exploiting label dependency. In The ACM International Conference on Knowledge Discovery and Data Mining, 2010.

Figure 6: An example of the fashion-attribute dependence tree for a given image (e.g., top/bottom, dress, sleeve-length, neck-type attributes) and the objective function of our fashion-attribute recognition model,

$$\max_{\{\theta_I, \theta_{seq}\} \in \Theta} \mathbb{E}\big[\, p_{\theta_{seq}}(a_0 \mid g_{\theta_I}(I))\; p_{\theta_{seq}}(a_1 \mid a_0, g_{\theta_I}(I))\; p_{\theta_{seq}}(a_2 \mid a_0, a_1, g_{\theta_I}(I)) \cdots \big].$$

The traditional multi-class classification associates an instance $x$ with a single label $a$ from a previously defined finite set of labels $A$. The multi-label classification task associates an instance with several finite sets of labels $A_n \subset A$. The most well-known methods in the multi-label literature are the binary relevance method (BM) and the label combination method (CM). There are drawbacks in both BM and CM. The BM ignores label correlations that exist in the training data. The CM directly takes label correlations into account; however, a disadvantage is its worst-case time complexity (Read et al., 2009). To tackle these drawbacks, we introduce the RNN. Suppose we have random variables $a \in A_n$, $A_n \subset A$. The objective of the RNN is to maximise the joint probability
where t is a sequence (time) index. This joint probability is factorized as a product of conditional probabilities recursively,

$$p(a_t, a_{t-1}, \ldots, a_0) = p(a_0)\,\frac{p(a_0, a_1)}{p(a_0)}\,\frac{p(a_0, a_1, a_2)}{p(a_0, a_1)}\cdots = p(a_0)\prod_{t} p(a_t \mid a_{t-1}, \ldots, a_0).$$

Following this factorization, we can handle multi-label classification as sequence classification, which is illustrated in Fig. 6. There are many label dependencies among our fashion-attributes. Directly modelling such label dependencies in the training data using the RNN is our key idea. We use the ResCeption as a vision encoder θ₁, an LSTM with softmax regression as our sequence classifier θ_seq, and the negative log-likelihood (NLL) as the loss function. We backpropagate the gradient signal from the sequence classifier into the vision encoder.⁸ Empirical results of our ResCeption-LSTM based attribute recognition are in Fig. 2. Many fashion-category dependent attributes such as sweatpants, fading, zipper-lock, mini, and tailored-collar are recognized quite well. Fashion-category independent attributes (e.g., male, female) are also recognizable. It is worth noting that we do not model the fashion-attribute dependence tree at all; we demonstrate that the RNN learns the attribute dependency structure implicitly. We evaluate our attribute recognition model on the fashion-attribute dataset, which we split into 721,544, 40,000, and 40,000 images for training, validation, and testing. We employ an early-stopping strategy on the validation set to prevent over-fitting. We measure precision and recall between the set of ground-truth attributes and the set of predicted attributes for each image. The quantitative results are in Table 2.

⁸Our attribute recognition model is parameterized as θ = [θ₁; θ_seq]. In our case, updating θ₁ as well as θ_seq in the gradient descent step helps achieve much better performance.

Table 2: A quantitative evaluation of the ResCeption-LSTM based attribute recognition model.

Measurement | Train | Validation | Test
Precision | 0.866 | 0.842 | 0.841
Recall | 0.867 | 0.841 | 0.842
NLL | 0.298 | 0.363 | 0.363

Our prediction model for fashion-attribute recognition is based on the sequence generation process of the RNN (Graves, 2013). The attribute-sequence generation process is illustrated in Fig. 7. First, we predict the probability of the first attribute for a given internal representation of the image, i.e. p_θseq(a₀ | g_θ1(I)), and then sample from the estimated probability of the attribute, a₀ ~ p_θseq(a₀ | g_θ1(I)). The sampled symbol is fed in as the next input to compute p_θseq(a₁ | a₀, g_θ1(I)). This sequential process is repeated recursively until the sampled result reaches the special end-of-sequence (EOS) symbol. In the case that we generate a set of attributes for a guided fashion-category, we do not sample from the previously estimated probability; instead we select the guided fashion-category and feed it in as the next input deterministically. This is the key to accounting for each seller's intention. Results for the guided attribute-sequence generation are shown in Fig. 8.

[Figure 7 schematic: the ResCeption encoder g_θ1(I) feeds a stack of LSTM layers; at each step the model outputs p_θseq(a_t | a_{<t}, g_θ1(I)), samples the next symbol, and optionally substitutes guided information for the sampled symbol.]

Figure 7: Guided sequence generation process.
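As a sketch of this generation loop (illustrative, not the authors' code; `step_fn` stands in for one LSTM step plus a softmax over the attribute vocabulary, and the token ids are hypothetical):

```python
import numpy as np

def guided_generate(step_fn, image_feat, guided_category, eos_id,
                    max_len=16, rng=np.random.default_rng(0)):
    """Guided attribute-sequence generation: sample a_t ~ p(a_t | a_<t, g(I)),
    but force the seller-provided fashion-category at the guided step."""
    tokens, state, prev = [], None, None
    for t in range(max_len):
        probs, state = step_fn(prev, state, image_feat)
        if t == 0 and guided_category is not None:
            tok = guided_category            # deterministic guided choice
        else:
            tok = int(rng.choice(len(probs), p=probs))
        if tok == eos_id:                    # stop at the EOS symbol
            break
        tokens.append(tok)
        prev = tok
    return tokens

# Dummy step function over a 5-symbol vocabulary (id 4 is EOS), just to run:
def dummy_step(prev, state, feat):
    return np.full(5, 0.2), state

print(guided_generate(dummy_step, image_feat=None, guided_category=2, eos_id=4))
```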
3.4 GUIDED ROI DETECTION

Our fashion-product ROI detection is based on the Faster R-CNN (Ren et al., 2015). The conventional multi-class Faster R-CNN detection pipeline takes an image and outputs tuples of (ROI coordinate, object-class, class-score). In our ROI detection pipeline, we take additional information: the guided fashion-category from the ResCeption-LSTM based attribute-sequence generator. Our fashion-product ROI detector finds where the guided fashion-category item is in a given image. Jing et al. (2015) also use a similar idea, but they train several detectors for each category independently, so their approach does not scale well. We train a single detector for all fashion-categories jointly, and it produces ROIs for all of the fashion-categories at once. In post-processing, we reject ROIs whose object-classes do not match the guided fashion-category (a minimal sketch of this filtering step follows below). We demonstrate that the guided fashion-category information contributes to higher performance in terms of mean average precision (mAP) on the fashion-attribute dataset; we measure the mAP at several intersection-over-union (IoU) thresholds between ground-truth ROIs and predicted ROIs (see Table 3). The gain is due to the fact that the guided fashion-category information reduces the false positive rate. In our fashion-product search pipeline, the colour and appearance features are extracted from the detected ROIs.

[Figure 8 panels: eight example images, each pairing a guided fashion-category (skirt, blouse, T-shirt, pants, leggings, shirt, dress, dress) with the attributes recognized under that guidance, e.g. guided category skirt → #bottoms, #skirts, #woman, #maxi, #pleated-skirts, #no-slit.]

Figure 8: Examples of the consecutive process of guided sequence generation and guided ROI detection. Although we take the same input image, the results can be totally different depending on the guided fashion-category information.

Table 3: Fashion-product ROI detector evaluation (mAP).

IoU | 0.5 | 0.6 | 0.7 | 0.8 | 0.9
Guided | 0.877 | 0.872 | 0.855 | 0.716 | 0.225
Non-guided | 0.849 | 0.842 | 0.818 | 0.684 | 0.223

To extract an appearance feature for a given ROI, we use the pre-trained GoogleNet (Szegedy et al., 2015); both the inception4 and inception5 layers' activation maps are used. We evaluate this feature on two similar-image retrieval benchmarks, i.e. Holidays (Jegou et al., 2008) and UK-benchmark (UKB) (Nister & Stewenius, 2006). In this experiment, we do not use any post-processing method or fine-tuning at all. The mAP on Holidays is 0.783, and the precision@4 and recall@4 on UKB are 0.907 and 0.908, respectively. These scores are competitive against several deep feature representation methods (Razavian et al., 2014; Babenko et al., 2014). Examples of queries and resulting nearest-neighbors are in Fig. 9.
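A minimal sketch of the post-processing filter mentioned above, which keeps only ROIs whose predicted class matches the guided fashion-category (the detection tuple format mirrors the text; the score threshold is an added assumption):

```python
from typing import List, Tuple

# A detection is (box, category, score); box is (x1, y1, x2, y2).
Detection = Tuple[Tuple[float, float, float, float], str, float]

def filter_by_guided_category(detections: List[Detection],
                              guided_category: str,
                              min_score: float = 0.5) -> List[Detection]:
    """Keep only ROIs whose object-class matches the guided fashion-category."""
    return [d for d in detections
            if d[1] == guided_category and d[2] >= min_score]

# The jointly-trained detector proposed ROIs for several categories, but the
# guided category is "skirt", so only skirt ROIs survive post-processing.
dets = [((10, 20, 120, 300), "skirt", 0.91),
        ((15, 10, 130, 140), "blouse", 0.88)]
print(filter_by_guided_category(dets, "skirt"))
```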
As the next step, we binarize this appearance feature by simply thresholding at 0. The reason we take this simple thresholding to generate the hash code is twofold. The neural activation feature map at a higher layer is a sparse and distributed code in nature. Furthermore, the bias term in a linear layer (e.g., a convolutional layer) compensates for

To compare hash codes, a machine-dependent intrinsic instruction (i.e., the assembly popcnt instruction) is used to compute the Hamming distance.
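A minimal sketch of this thresholding and Hamming-distance comparison (assuming a numpy activation vector; the byte-wise lookup table stands in for the hardware popcnt instruction):

```python
import numpy as np

def binarize(feature: np.ndarray) -> np.ndarray:
    """Binary hash code from a deep activation vector by thresholding at 0,
    packed into bytes so distance computation can use popcount-style ops."""
    bits = (feature > 0).astype(np.uint8)
    return np.packbits(bits)

# Popcount lookup table for one byte; XOR plus table lookup plays the role
# of the assembly popcnt instruction mentioned above.
_POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

def hamming(code_a: np.ndarray, code_b: np.ndarray) -> int:
    return int(_POPCOUNT[np.bitwise_xor(code_a, code_b)].sum())

# Example with two random 2048-d activation vectors (dimension assumed).
rng = np.random.default_rng(0)
a, b = rng.standard_normal(2048), rng.standard_normal(2048)
print(hamming(binarize(a), binarize(b)))
```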
rJEgeXFex
[{"section_index": "0", "section_name": "PREDICTING MEDICATIONS FROM DIAGNOSTIC CODES WITH RECURRENT NEURAL NETWORKS", "section_text": "S01C S02B S03B D07X 1.0 D07A H02A S01B C05A D10A A01A J05A R01A N02A B05C A12C B05X 0.8 L04A N02B S01A 0.6 J01D C030 0.4 J01M C07A N03A 0.2 J01X M03B 0.0\nJacek M. Baior. Thomas A. Lasko\nDepartment of Biomedical Informatics Vanderbilt University School of Medicine Nashville TN 37203 USA\n{jacek.m.bajor,tom.lasko}@vanderbilt.edu\nPredicted vs. actual medication classes for the patient in Case 1. The four-character sequence in the first and fourth columns is the ATC code for the medication therapeutic class, and an asterisk in the first column indicates that the predicted medication is in the actual medication list. Probabilities listed are the model predictions for the listed therapeutic class. In the predicted medications column, all predictions with probability at least 0.2 are listed\nIt is a surprising fact that electronic medical records are failing at one of their pri mary purposes, that of tracking the set of medications that the patient is actively taking. Studies estimate that up to 50% of such lists omit active drugs, and thai up to 25% of all active medications do not appear on the appropriate patient list Manual efforts to maintain these lists involve a great deal of tedious human labor which could be reduced by computational tools to suggest likely missing or in correct medications on a patient's list. We report here an application of recurrent neural networks to predict the likely therapeutic classes of medications that a pa tient is taking, given a sequence of the last 100 billing codes in their record. Our best model was a GRU that achieved high prediction accuracy (micro-averaged AUC 0.93, Label Ranking Loss 0.076), limited by hardware constraints on model size. Additionally, examining individual cases revealed that many of the predic tions marked incorrect were likely to be examples of either omitted medications or omitted billing codes, supporting our assertion of a substantial number of er rors and omissions in the data, and the likelihood of models such as these to help correct them.\nFigure 4: Medication predictions for a complicated patient. Each vertical bar represents the pre diction for a single medication class, with the height of the bar representing the confidence of the prediction. Black labels with arrows indicate ATC therapeutic classes for medications the patient was actually taking. Colors and letters below the axis indicate organ system groups. More detail ir Appendix C\nprocessing step to improve performance, but clearly the semantic understanding it provides to ar algorithm can be usefu1 beyond the immediate learning problem (Mikolov et al.||2013). Investigating the embedding learned in this experiment shows some generalizable potential, but it also reveals the need for further refinement before it can be truly useful. Specifically, while it's easy to find tigh groups of ICD-9 codes that are strongly clinically related in our embedding, we also find groups fo. which we cannot see a meaningful clinical relationship.\nFor example, we see two groups of codes relating to kidney failure and diabetes mellitus, two classes. of very prevalent disease (Figure 5] insets). In other iterations with different parameter settings, the kidney failure codes were even embedded in a sequence reflecting the natural progression of the disease, with the code for dialysis (an intensive treatment for end-stage kidney failure) embedded. 
at the appropriate place. Interestingly, these were not the parameter settings that optimized overall prediction performance. In other settings, such as our performance-optimal setting, the sequence is close to the natural progression of the disease, but not quite identical. Nevertheless, this is an exciting result that suggests great potential.

1 INTRODUCTION

The idea of exploiting the large amounts of data captured in electronic medical records for both clinical care and secondary research holds great promise, but its potential is weakened by errors and omissions in those records (Safran et al., 2007; de Lusignan & van Weel, 2006). Among many other problems, accurately capturing the list of medications currently taken by a given patient is extremely challenging (Velo & Minuz, 2009). In one study, over 50% of electronic medication lists contained omissions (Caglar et al., 2011), and in another, 25% of all medications taken by patients were not recorded (Kaboli et al., 2004). Even medication lists provided by the patients themselves contain multiple errors and omissions (Green et al., 2010).

Many efforts have been made to ensure the correctness of medication lists, most of them involving improved communication between patients and providers (Keogh et al., 2016), but these efforts have not yet been successful, and incorrect or incomplete medication documentation continues to be a source of error in computational medical research. In this work we attempt to identify likely errors and omissions in the record, predicting the set of active medications from the sequence of most recent disease-based billing codes in the record. Predictions from such a model could be used either in manual medication reconciliation (a common process undertaken to correct the medication record) or to provide a prior to other models, such as an NLP model attempting to extract medication use from the narrative clinical text.

For this prediction problem, we settled on predicting the medications that occurred in the record during the same time span as the billing codes used. Originally, we intended to predict only the medications listed on the day of the reference point, but that turned out to greatly exacerbate the missing medication problem. After trying medications that fell on the reference day only, the week prior to the reference day, and the six months prior, our best performance both subjectively and objectively was achieved using the full time range of the input data.

Given the sequential nature of clinical data, we suspected that recurrent neural networks would be a good architecture for making these predictions. In this work we investigate this potential, comparing the performance of recurrent networks to that of similarly-configured feed-forward networks.

The input for each case is a sequence of ICD-9 billing codes (Section 2.1), for which the model produces a single, multi-label prediction of the therapeutic classes (Section 3.1) of medications taken by the patient during the period of time covered by the billing code sequence.

While the performance of the recurrent networks was quite good, we believe it could be improved by including additional input data, such as laboratory test results, demographics, and perhaps vital signs.
Top predictions (Prob.); an asterisk marks predictions that appear in the actual medication list:

S03B* Corticosteroids 97.01%
S01C* Antiinflammatory agents and antiinfectives in combination 95.54%
S02B* Corticosteroids 95.54%
L01A Alkylating agents 94.00%
D07X* Corticosteroids, other combinations 93.37%
H02A* Corticosteroids for systemic use, plain 91.06%
D07A* Corticosteroids, plain 90.83%
S01B* Antiinflammatory agents 90.79%
D10A* Anti-acne preparations for topical use 88.56%
C05A* Agents for treatment of hemorrhoids and anal fissures for topical use 88.52%
A04A Antiemetics and antinauseants 87.95%
R01A* Decongestants and other nasal preparations for topical use 87.02%
J05A* Direct acting antivirals 86.83%
A01A* Stomatological preparations 86.11%
N02A* Opioids 84.86%
B05C* Irrigating solutions 82.56%
A12C* Other mineral supplements 79.50%
B05X* I.V. solution additives 74.84%
L04A* Immunosuppressants 68.76%
N05A Antipsychotics 58.64%
N02B* Other analgesics and antipyretics 57.24%
S01A* Antiinfectives 54.59%
L03A Immunostimulants 45.96%
A02B Drugs for peptic ulcer and gastro-oesophageal reflux disease 44.56%
J01D* Other beta-lactam antibacterials 43.40%
C03C* High-ceiling diuretics 39.88%
B01A Antithrombotic agents 37.80%
V03A All other therapeutic products 34.18%
R06A Antihistamines for systemic use 31.78%
A06A Drugs for constipation 31.57%
J01M* Quinolone antibacterials 29.78%
N05B Anxiolytics 29.42%
D04A Antipruritics, incl. antihistamines, anesthetics, etc. 27.62%
C07A* Beta blocking agents 27.08%
L01X Other antineoplastic agents 24.72%
R05C Expectorants, excl. combinations with cough suppressants 20.43%
N03A* Antiepileptics 20.00%

The true-label column contains the starred classes above plus J01X Other antibacterials (5.88%) and M03B Muscle relaxants, centrally acting agents (5.09%), whose predicted probabilities fell below the 0.2 listing threshold.

Further evaluation of the embedding found that 49% of codes were strongly related semantically to their nearest neighbor, 10% were loosely related, and 41% unrelated. This fraction of strongly related nearest neighbors was lower than we had hoped, but much higher than expected by chance (Figure 6), and it definitely improved classification performance.
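The nearest-neighbor analysis here is mechanical to reproduce; a small sketch, under the assumption that the learned embedding is available as a (codes × dimensions) matrix, using the Manhattan distance named in the paper's embedding evaluation:

```python
import numpy as np

def nearest_neighbors_l1(embedding: np.ndarray):
    """For each embedded ICD-9 code, find its nearest neighbor under the
    Manhattan (L1) distance, excluding the code itself."""
    n = embedding.shape[0]
    idx = np.empty(n, dtype=int)
    dist = np.empty(n)
    for i in range(n):
        d = np.abs(embedding - embedding[i]).sum(axis=1)
        d[i] = np.inf                       # a code is not its own neighbor
        idx[i] = int(np.argmin(d))
        dist[i] = d[idx[i]]
    return idx, dist

# Toy example: 2000 codes embedded in 32 dimensions (the paper's b = 32).
emb = np.random.default_rng(0).standard_normal((2000, 32))
neighbor, d = nearest_neighbors_l1(emb)
print(neighbor[:5], d[:5])
```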
Furthermore, it was obvious by inspection that in general, codes closer in the embedding were more semantically related than distant codes, but interestingly, the distance to the nearest such neighbor showed the opposite relationship: nearest neighbors that were very close were less likely to be semantically related than nearest neighbors that were far, and this trend is roughly linear across the full range of d (Figure 6). So the sparser the points are in the embedded space, the more semantically related they are to their nearest neighbor, but the causal direction of that effect and the technical reason for it are beyond the scope of this initial work.

[Figure 5 insets: a kidney-failure group (585.9 Chronic kidney disease, unspecified; 585.3 Stage III (moderate); 585.6 End stage renal disease; 585.4 Stage IV (severe); V45.11 Renal dialysis status; 585.5 Stage V; V45.1 Postsurgical renal dialysis status; 285.21 Anemia in chronic kidney disease), a clinically unrelated group (787.1 Heartburn; 727.00 Synovitis and tenosynovitis; 309.24 Adjustment disorder with anxiety; 831.00 Closed dislocation of shoulder; 724.3 Sciatica; 701.4 Keloid scar), and a diabetes mellitus group (250.81, 362.01, 250.40, 250.80, 250.50, 250.42, 250.01, 250.62, 250.60, 357.2).]

This work is designed to test how well the complete set of medications a patient is actively taking at a given moment can be predicted by the sequence of diagnostic billing codes leading up to that moment, in the context of non-trivial label noise. It also explores whether sequence-oriented recurrent neural nets can do a better job of that prediction than standard feed-forward networks.

ICD-9 code | Code description | Time estimate (ago)
735.4 | Other hammer toe (acquired) | 2.4 years ago
729.5 | Pain in limb | 2.4 years ago
244.1 | Other postablative hypothyroidism | 1.5 years ago
285.9 | Anemia, unspecified | 1.5 years ago
244.1 | Other postablative hypothyroidism | 1.2 years ago
244.1 | Other postablative hypothyroidism | 11.5 months ago
733.00 | Osteoporosis, unspecified | 11.5 months ago
733.01 | Senile osteoporosis | 7.7 months ago
268.9 | Unspecified vitamin D deficiency | 7.7 months ago
729.5 | Pain in limb | 7.7 months ago
174.9 | Malignant neoplasm of breast (female), unspecified | 7.7 months ago
722.52 | Degeneration of lumbar or lumbosacral intervertebral disc | 7.7 months ago
279.3 | Unspecified immunity deficiency | 7.7 months ago
733.01 | Senile osteoporosis | 6.4 months ago
733.01 | Senile osteoporosis | 6.2 months ago
244.1 | Other postablative hypothyroidism | 6.0 months ago
401.1 | Benign essential hypertension | 6.0 months ago
V58.69 | Long-term (current) use of other medications | 1.9 weeks ago
733.01 | Senile osteoporosis | now
244.1 | Other postablative hypothyroidism | now
V58.69 | Long-term (current) use of other medications | now

2.1 MEDICAL BILLING CODES

Each time a patient has billable contact with the healthcare system, one or more date-stamped billing codes are attached to the patient record, indicating the medical conditions that are associated (or suspected to be associated) with the reason for the visit. While these codes are notoriously unreliable because they are only used for billing and not actual clinical practice (O'Malley et al., 2005), they are nevertheless useful in a research context (Bastarache & Denny, 2011; Denny et al., 2010), especially if they are used probabilistically (Lasko, 2014). In our institution, codes from the International Classification of Diseases, Ninth Revision (ICD-9) have historically been used, although we have recently transitioned to the tenth revision (ICD-10). For this project, we used ICD-9 codes.

The ICD-9 hierarchy consists of 21 chapters roughly corresponding to a single organ system or pathologic class (Appendix B). Leaf-level codes in that tree represent single diseases or disease subtypes. For this project, we used a subset of the two thousand most common leaf-level codes as our input data.

Predicted vs. actual medication classes for Case 2. Table structure as in Case 1.

Top predictions (Prob.); an asterisk marks predictions that appear in the actual medication list:
M05B Drugs affecting bone structure and mineralization 88.18%
H03A Thyroid preparations 84.82%
H05A Parathyroid hormones and analogues 66.33%
A11C* Vitamin A and D, incl. combinations of the two 39.42%
N02B Other analgesics and antipyretics 37.58%
A01A Stomatological preparations 23.05%
A12A Calcium 21.59%
N06A* Antidepressants 20.88%
C07A Beta blocking agents 20.81%

The true-label column contains the starred classes above plus C10A Lipid modifying agents, plain (17.05%), N03A Antiepileptics (15.61%), C09C Angiotensin II antagonists, plain (10.38%), and L02B Hormone antagonists and related agents (4.22%), whose predicted probabilities fell below the 0.2 listing threshold.

Figure 5: A t-SNE representation of our final embedding. The insets highlight two groups of codes (diabetes mellitus and kidney failure) that are strongly related clinically, and a third group that is not. Codes are colored by whether their nearest neighbor in the embedding space (which may be different from the nearest neighbor in this t-SNE space) is strongly related (blue), loosely related (orange), or unrelated (gray) from a clinical perspective.

[Figure 6 plot: conditional probabilities P(m|d) for m in {strongly related, loosely related, unrelated} and the marginal P(d), plotted against nearest-neighbor distance d from 10 to 50.]

A recurrent neural network is a variation in which the output of one node on input x_t loops around to become an input to another node on input x_{t+1}, allowing information to be preserved as it iterates over an input data sequence (Figure 1).
They were introduced in the 1980s (Rumelhart et al., 1986) but achieved explosive popularity only recently, after the development of methods to more reliably capture long-term dependencies, which significantly improved their performance on sequence-to-sequence mapping (Hochreiter & Schmidhuber, 1997; Sutskever et al., 2014).

The basic RNN unit has a simple internal structure (Figure 2a). Output from the previous iteration h_{t-1} and the next input in a sequence x_t are both fed to the network on the next iteration. The Long Short-Term Memory configuration (LSTM) introduces new, more complex internal structure (Figure 2b) consisting of four neural network layers and a cell state (c_t), which is carried from one iteration to another. The additional layers form forget, input and output gates, which allow information to be forgotten (reset) or passed on to varying degrees.

The LSTM model and its variations are commonly used in applications where sequence and temporal data are involved, such as in image captioning (Vinyals et al., 2014), language translation (Sutskever et al., 2014), and speech recognition (Graves et al., 2013). In many cases LSTM models define the state of the art, such as with a recent conversational speech recognizer that (slightly) outperforms professional transcriptionists (Xiong et al., 2016).

Medication predictions for a simpler patient. Note that the high-prediction medications are clinically reasonable given the billing codes in the sequence. Figure representation as in Case 1.

A recent variation on the LSTM architecture is the Gated Recurrent Unit (GRU) (Cho et al., 2014), which introduces a single update gate in place of the input and forget gates (Figure 2c). GRUs perform as well as or better than LSTMs in many cases (Chung et al., 2014; Jozefowicz et al., 2015), and have the additional advantage of a simpler structure.

We also suspect that if we can devise a way to convert our medication data into reliably ordered sequences, we can more fully exploit the strengths of recurrent networks for medication prediction. We look forward to trying these and other variations in future work.

In this work we try both an LSTM and a GRU on our learning problem.

ACKNOWLEDGMENTS

This work was funded by grants from the Edward Mallinckrodt, Jr. Foundation and the National Institutes of Health R21LM011664 and R01EB020666. Clinical data was provided by the Vanderbilt Synthetic Derivative, which is supported by institutional funding and by the Vanderbilt CTSA grant ULTR000445.

Little research in the computational medical domain has used recurrent neural networks. The earliest example we are aware of is the use of an LSTM model that produced reasonable accuracy
Most of the ICLR community are very familiar with recurrent neural networks and their variations, but we include a conceptual description of them here for readers coming from other fields. More thorough descriptions are available elsewhere (Graves, 2012; Olah, 2015).

Figure 6: Semantic relatedness of nearest neighbors vs. the distance between them. Solid lines are the conditional probabilities P(m|d) for the three values of m; the dashed line is the marginal probability P(d) of nearest neighbor distances d. Surprisingly, nearest neighbors that are farther away (but still the nearest neighbor) are more strongly related than nearest neighbors that are closer in the embedding space. Shaded regions, colored to correspond to the three values of m, are the 95% CI for empirically estimated P(m) under random pairings, and represent the expected null result.

Figure 1: Simplified representation of a recurrent neural network (left) and an unrolled recurrent neural network (right). x_i is a single element in an input sequence x, h_i is an output after a single pass through the recurrent unit. Adapted from Olah (2015).

Figure 2: Architectures of (a) Simple RNN, (b) LSTM, and (c) GRU units. x_t: a single element in an input sequence being considered in the current iteration; h_{t-1}, h_t: the output from the previous and current iterations; c_{t-1}, c_t: the cell states of the previous and current iterations. Adapted from Olah (2015).

Predicted vs. actual medication classes for Case 3. Table structure as in Case 1.

REFERENCES

Lisa Bastarache and Joshua C. Denny. The use of ICD-9 codes in genetic association studies. In AMIA Annu Symp Proc, volume 2011, pp. 1738, 2011.

Selin Caglar, Philip L. Henneman, Fidela S. Blank, Howard A. Smithline, and Elizabeth A. Henneman. Emergency department medication lists are not accurate. The Journal of Emergency Medicine, 40:613-616, Jun 2011.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014.

Edward Choi, Andy Schuetz, Walter F. Stewart, and Jimeng Sun. Using recurrent neural network models for early detection of heart failure onset. J Am Med Inform Assoc, Aug 2016b.

Francois Chollet. Keras. https://github.com/fchollet/keras, 2015.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014.

Simon de Lusignan and Chris van Weel. The use of routinely collected computer data for research in primary care: opportunities and challenges. Family Practice, 23:253-263, Apr 2006.

Joshua C. Denny, Marylyn D. Ritchie, Melissa A. Basford, Jill M. Pulley, Lisa Bastarache, Kristin Brown-Gentry, Deede Wang, Dan R. Masys, Dan M. Roden, and Dana C. Crawford.
PheWAS: demonstrating the feasibility of a phenome-wide scan to discover gene-disease associations. Bioinformatics, 26(9):1205-1210, 2010.

Manuel Fernandez-Delgado, Eva Cernadas, Senen Barro, and Dinani Amorim. Do we need hundreds of classifiers to solve real world classification problems? Journal of Machine Learning Research, 15:3133-3181, 2014.

Alex Graves. Supervised Sequence Labelling with Recurrent Neural Networks. Springer, 2012.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. arXiv preprint, 1303.5778, 2013.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735-1780, November 1997.

Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. Journal of Machine Learning Research, 2015.

Very recent work, contemporary with ours, used a GRU model with a semantic embedding in 32,787 patient records to predict the development of heart failure 3-6 months in the future, from medication orders and billing codes in an 18-month window. The model achieved respectable accuracy (0.88 AUC), and demonstrated a meaningful 0.05 AUC improvement over a deep feedforward network (Choi et al., 2016b).

Other recent work from the same group used a GRU model in a multi-label context to predict the medications, billing codes, and time of the next patient visit from a sequence of that same information for previous visits, using 263,706 patient records. It achieved a recall@30 of 72.4 for the task, an improvement of 20 over a single-hidden-layer MLP with 2000 units (Choi et al., 2016a). This is an example of using one of the strengths of a recurrent network: predicting the next element in a sequence. It contrasts with our work, which exploits a different strength of recurrent networks: predicting a sequence or class that is semantically distinct from but parallel to the elements of the input sequence.

[Prediction chart for Case 3: bars for ATC classes including C10A, C09A, C01E, C02C, G03B, and A14A; vertical axis 0.0 to 1.0.]

Medication predictions for a patient with only one ICD-9 code, repeated many times over five years. The medications listed under true labels are not indicated for paralysis agitans (Parkinson's disease), but the patient was surely taking them for reasons not documented in the ICD-9 sequence. The model predicted mostly reasonable medications for a patient with Parkinson's disease, especially Dopaminergic agents, which is the primary treatment for the disease. Figure representation as in Case 1, above.

The closest work to ours from a medical domain perspective is a series of collaborative filter models (including co-occurrence counting, k-nearest neighbors, and logistic regression) that predict missing medications using a leave-one-drug-out evaluation design, with predictions based on the rest of the medications, ICD-9 billing codes, and demographic data. The models were trained and tested on data from 419 patients in three different clinics, with accuracy varying by clinic, as expected, but not appreciably by model. Most models ranked the missing drug in the top 10 results between 40 and 50% of the time, and ranked the therapeutic class of the drug in the top 10 results between 50 and 65% of the time.

Many aspects of our work can be found in these prior efforts, but none addresses our particular problem in the same way. Our work is unique in its learning problem of identifying all drugs a patient is likely to be taking, based only on the billing codes in the record. Like most others cited, we
use recurrent neural networks in a multi-label predictive context, but in contrast to them we compare

[Figure 1 schematic: a recurrent unit looping over inputs x_1...x_t, producing outputs h_1...h_t; shown rolled (left) and unrolled (right).]

[Figure 2 schematic: internal structure of (a) a simple RNN unit (one tanh layer), (b) an LSTM unit (forget, input, and output gates plus a tanh layer), and (c) a GRU unit (a single update gate plus output gating).]

Top predictions (Prob.):
N04B Dopaminergic agents 97.66%
N03A Antiepileptics 34.01%
N02B Other analgesics and antipyretics 32.81%
N06A Antidepressants 26.10%
N02A Opioids 20.33%

True labels (Prob.): C10A Lipid modifying agents, plain (13.90%); C09A ACE inhibitors, plain (9.21%); C01E Other cardiac preparations (5.56%); C02C Antiadrenergic agents, peripherally acting (0.72%); G03B Androgens (0.32%); A14A Anabolic steroids (0.08%).

(micro-AUC 0.86) in a 128-dimensional multi-label prediction of diagnoses from regularly sampled, continuously-monitored, real-valued physiologic variables in an Intensive Care Unit setting. This was an interesting initial application, but it turned out to be only 0.001 better than the baseline classifier, which was a multi-layer perceptron with expert-designed features (Lipton et al., 2016). Given the dataset size (10,401 patient records), the lack of improvement may have been due to insufficient data to power accurate feature learning in the recurrent network.

P. Kaboli et al. Assessing the accuracy of computerized medication histories. The American Journal of Managed Care, 10:872-877, Nov 2004.

Caroline Keogh, Allen Kachalia, Karen Fiumara, Dorothy Goulart, Jonathan Coblyn, and Sonali P. Desai. Ambulatory medication reconciliation: Using a collaborative approach to process improvement at an academic medical center. Joint Commission Journal on Quality and Patient Safety, 42:186-194, Apr 2016.

Thomas A. Lasko. Efficient inference of Gaussian process modulated renewal processes with application to medical event data. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI), July 2014.

Zachary C. Lipton, David C. Kale, Charles Elkan, and Randall Wetzell. Learning to diagnose with LSTM recurrent neural networks. In Proceedings of the International Conference on Learning Representations (ICLR 2016), 2016.

Kimberly J. O'Malley, Karon F. Cook, Matt D. Price, Kimberly Raiford Wildes, John F. Hurdle, and Carol M. Ashton. Measuring diagnoses: ICD code accuracy. Health Serv Res, 40(5 Pt 2):1620-1639, Oct 2005.

3.1 DATA

Our source database was the deidentified mirror of Vanderbilt's Electronic Medical Record, which contains billing codes, medication histories, laboratory test results, narrative text and medical imaging data for over 2 million patients, reaching back nearly 30 years (Roden et al., 2008). We obtained IRB approval to use this data in this research.

For this experiment we filtered all records in our database to include only the top 1,000 most common medications and the top m = 2000 most common billing codes, which cover 99.5% of all medication occurrences and 85.1% of all billing code occurrences. We then included all records from the filtered data that had at least one medication occurrence and at least ten billing code occurrences. This resulted in 610,076 complete patient records, which we divided 80/5/15 into training, validation, and final test sets.
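A compact sketch of the cohort filtering and patient-level split described above (the record format and field names here are hypothetical):

```python
import random

def build_cohort(records, top_meds, top_codes):
    """Keep records with >= 1 medication and >= 10 billing codes after
    restricting to the most common medications and codes."""
    kept = []
    for rec in records:
        meds = [m for m in rec["medications"] if m in top_meds]
        codes = [c for c in rec["billing_codes"] if c in top_codes]
        if len(meds) >= 1 and len(codes) >= 10:
            kept.append({"medications": meds, "billing_codes": codes})
    return kept

def split_80_5_15(patient_ids, seed=0):
    """80/5/15 train/validation/test split at the patient level, so each
    patient lands in at most one set."""
    ids = sorted(patient_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    return ids[:int(0.8 * n)], ids[int(0.8 * n):int(0.85 * n)], ids[int(0.85 * n):]

train, val, test = split_80_5_15(range(20))
print(len(train), len(val), len(test))
```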
Hua Xu, Shane P. Stenner, Son Doan, Kevin B. Johnson, Lemuel R. Waitman, and Joshua C. Denny. MedEx: a medication information extraction system for clinical narratives. J Am Med Inform Assoc, 17(1):19-24, 2010.

A data instance d = {E, T, y} consisted of a sequence E = {e_1, ..., e_n} of one-hot billing code vectors e_i ∈ {0, 1}^m and their associated times T = {t_1, ..., t_n}, t_i ∈ R, as input, and a multi-label vector y ∈ {0, 1}^k of medication classes as the output target. The most recent n = 100 billing codes before a selected reference time point in a given patient record were collected into the input sequence E, and their occurrence times into T, zero-padding if necessary. All medications that occurred during the time span of T were then collected into the output vector y. Practice patterns change over time, so simply taking the most recent 100 codes for each patient could produce a biased result. To avoid this, we chose random reference points, stratified by medication. In other words, the reference points were randomly chosen from the occurrences of each medication in the entire dataset, up to 10,000 points per medication. This resulted in 3.3 million data instances, an average of 5.4 instances per patient record. Each patient's data was included in at most one of the training, validation, or test sets.

Because there are often many approximately equivalent medication choices for a given therapeutic purpose, we converted medication names to their therapeutic class (beta blocker, immunosuppressant, corticosteroid, etc.) as a synonym reduction step. This step also aggregated generic with brand names, as well as different formulations of the same active ingredient. For this task we used the Anatomical Therapeutic Chemical Classification System (ATC)¹, which is a multi-level ontology of medications, organized by both anatomic and therapeutic class. The top level is a broad categorization of medications (Appendix B), the bottom (fifth) level is individual medications, and we used the third level, which contains 287 therapeutic classes of the approximately appropriate abstraction level for our purpose. We used a publicly available mapping² to translate between our medication names and ATC codes, with manual mapping for the minority of medications that had no mapping entry. Our set of medications used k = 182 third-level ATC codes, rendering our output label a 182-element-long multi-label vector, in which an element is set y_i = 1 if a medication in that class appeared in the set of medications identified for that instance, and y_i = 0 otherwise. Some medications mapped to more than one class, and we set y_i = 1 for all of them.

Our medication data was collected from structured order entry records and extracted using NLP (Xu et al., 2010) from mentions in the narrative text of a patient record that included the medication name, dose, route and frequency. As discussed above, we assumed (and our results demonstrate) that the medication data is incomplete, and our hope was that a model learned from a sufficiently large dataset would be robust to the missing data.

This configuration represents the input billing codes in a sequence, but the output medications as a multi-label vector. This is because ICD-9 codes are represented sequentially in our source data, but medications are not: they are represented as a list that changes over time in the record.
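To make the instance format concrete, a minimal sketch of building one d = {E, T, y} (the index maps and event format are hypothetical):

```python
import numpy as np

def make_instance(code_events, med_classes, code_index, atc_index,
                  n=100, m=2000, k=182):
    """Build one instance d = {E, T, y}: E is a zero-padded sequence of n
    one-hot billing-code vectors, T the matching times, and y a k-hot vector
    of third-level ATC classes. code_events is a list of (code, time) pairs,
    most recent last (assumed format)."""
    E = np.zeros((n, m), dtype=np.int8)
    T = np.zeros(n, dtype=np.float32)
    recent = code_events[-n:]                  # most recent n codes
    offset = n - len(recent)                   # zero-pad at the front
    for i, (code, t) in enumerate(recent):
        E[offset + i, code_index[code]] = 1
        T[offset + i] = t
    y = np.zeros(k, dtype=np.int8)
    for med in med_classes:
        for atc in atc_index[med]:             # a drug can map to >1 class
            y[atc] = 1
    return E, T, y

# Toy usage with hypothetical index maps:
E, T, y = make_instance([("250.01", 0.0), ("585.3", 1.5)], ["metformin"],
                        {"250.01": 0, "585.3": 1}, {"metformin": [10]})
print(E.shape, T.shape, int(y.sum()))
```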
¹http://www.whocc.no/atc/structure_and_principles
²https://www.nlm.nih.gov/research/umls/rxnorm/

The usual goal of clinicians is to verify the list of medications at each visit, and if omissions or additions are indicated by the patient, to change the list to reflect that. But in the time-constrained reality of clinical practice, this reconciliation happens sporadically, and many clinicians are hesitant to change an entry on the medication list for which they were not the original prescriber, so the timing of the changes in the documentation does not reflect the timing of changes in reality. Therefore we are reduced to predicting a single multi-label vector, representing the medications that the patient probably took during the span of time represented by the input codes. (We actually did attempt some full sequence-to-sequence mappings, with various orderings of the medication sequences, but we did not achieve any promising results in that direction.)

to the most similar non-recurrent model we can construct, in order to evaluate the contribution of the temporal sequence information to the solution. Finally, we use one to four orders of magnitude more data (3.3 million instances, see Section 3.1) than these prior efforts, which we hope will give us a more realistic assessment of the various deep architectures we use on our problem.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 26, pp. 3111-3119. Curran Associates, Inc., 2013.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.

D. M. Roden, J. M. Pulley, M. A. Basford, G. R. Bernard, E. W. Clayton, J. R. Balser, and D. R. Masys. Development of a large-scale de-identified DNA biobank to enable personalized medicine. Clin Pharmacol Ther, 84(3):362-369, Sep 2008.

D. G. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1: Foundations, pp. 318-362. MIT Press, 1986.

Charles Safran, Meryl Bloomrosen, W. Edward Hammond, Steven Labkoff, Suzanne Markel-Fox, Paul C. Tang, Don E. Detmer, and Expert Panel. Toward a national framework for the secondary use of health data: an American Medical Informatics Association white paper. J Am Med Inform Assoc, 14(1):1-9, 2007.

Konstantinos Sechidis, Grigorios Tsoumakas, and Ioannis Vlahavas. On the stratification of multi-label data. In The European Conference on Machine Learning and Knowledge Discovery in Databases, 2011.

Matthew D. Zeiler. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701, 2012.

M. L. Zhang and Z. H. Zhou. A review on multi-label learning algorithms. IEEE Transactions on Knowledge and Data Engineering, 26(8):1819-1837, Aug 2014.

3.2.1 RECURRENT NEURAL NETWORKS

The optimal hyperparameters for the model were selected by randomized parameter optimization (Bergstra & Bengio, 2012), with the embedding dimension b = 32, the number of layers, and the number of nodes optimized by a few trials of human-guided search.
Other optimized parameters included the fraction of dropout (between layers, input gates, and recurrent connections) and the L1 and L2 regularization coefficients (final values are presented in Appendix A).

Both models were implemented using Keras (Chollet, 2015) and trained for 300 iterations using cross-entropy under the Adadelta optimizer (Zeiler, 2012).

[Figure 3 schematic: billing codes e_1...e_100 pass through an embedding layer to produce x_1...x_100, timestamps t_1...t_100 are appended, and the result passes through either three recurrent layers or feed-forward layers to a single 182-element output.]

Figure 3: Recurrent (left) and feed-forward (right) neural network architectures. Arrows indicate the flow of information. Input for both models is a sequence of billing code observations e and a sequence of corresponding timestamps t. A code observation e_i passes through an embedding layer, producing an embedding vector x_i, which is then appended with time t_i. The processed matrix then passes through either recurrent layers or feed-forward layers. The output in both cases is a single vector of label probabilities.

Our main technical goal was to test the performance of recurrent neural networks on this sequence-centric prediction problem. To evaluate the specific gains provided by the recurrent architectures, we compare performance against a fully connected feed-forward network configured as similarly as possible to the recurrent networks, and (as baselines) a random forest and a constant-prevalence model. We discuss the specific configurations of these classifiers in this section.

We tested both LSTMs and GRUs in this experiment. We configured both architectures to first compute a semantic embedding x_i ∈ R^b of each input e_i vector, before appending the times t_i (Figure 3) and feeding the result to three layers of recurrent units. The final output from the last pass of the recurrent unit is a multi-label prediction for each candidate medication.

3.2.2 FULLY CONNECTED NEURAL NETWORK

The fully connected network used as similar an architecture as possible to the recurrent networks, in an attempt to isolate the gain achieved from the recurrence property. Specifically, we used the same architecture for embedding and timestamp appending (Figure 3). Hyperparameters were optimized using random search over the number of layers, number of nodes, dropout, activation function between layers, and L1 and L2 regularization coefficients (Appendix A). (Surprisingly, the optimizer chose tanh over ReLU as the optimal activation function.)
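A minimal sketch of the recurrent side of Figure 3, written against the present-day tf.keras API rather than the 2016 Keras the authors used; the sizes follow the text (n = 100, b = 32, k = 182, three 400-node GRU layers) and the dropout values echo Appendix A, but this is illustrative, not the authors' code:

```python
import tensorflow as tf
from tensorflow.keras import layers

n, m, b, k = 100, 2000, 32, 182   # sequence length, codes, embedding, labels

codes = layers.Input(shape=(n,), dtype="int32")   # billing-code ids e_1..e_n
times = layers.Input(shape=(n, 1))                # timestamps t_1..t_n
x = layers.Embedding(m, b)(codes)                 # semantic embedding x_i
x = layers.Concatenate()([x, times])              # append time to each step
for last in (False, False, True):
    x = layers.GRU(400, return_sequences=not last,
                   dropout=0.1, recurrent_dropout=0.75)(x)
out = layers.Dense(k, activation="sigmoid")(x)    # multi-label prediction

model = tf.keras.Model([codes, times], out)
model.compile(optimizer="adadelta", loss="binary_crossentropy")
model.summary()
```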
The models were also implemented using Keras, and were trained using cross-entropy for 500 iterations under the Adadelta optimizer.

This appendix lists the optimized parameters for the different models. Except where noted, parameters were optimized under random search.

Recurrent Neural Network Models (parameters marked with an asterisk were optimized with human-guided search):

Parameter | GRU | LSTM
Dropout for input gates | 0.1 | 0.25
Dropout for recurrent connections | 0.75 | 0.75
L1 applied to the input weights matrices | 0 | 0
L1 applied to the recurrent weights matrices | 0 | 0
L2 applied to the input weights matrices | 0.0001 | 0.0001
L2 applied to the recurrent weights matrices | 0.0001 | 0.001
L2 applied to the output layer's weights matrices | 0.0001 | 0.001
Dropout before the output layer | 0.5 | 0.5
*Number of recurrent layers | 3 | 3
*Number of nodes in recurrent units | 400 | 400

Feed Forward Neural Network Model:

Random Forest Model (binary input):

3.2.3 RANDOM FOREST

Because the random forest model is not easily structured to operate on sequences, we represented the input data as either binary occurrence vectors v ∈ {0, 1}^m or bag-of-codes vectors w ∈ N^m (counts of each code value in the sequence), rather than as sequences of codes with associated times. No embedding was used, because the random forest code was not able to cope with the large size of the data in the (dense) embedded space.

Even in the (sparse) original space, the full dataset was too large for the random forest code, so we implemented it as an ensemble of ten independent forests, each trained on one tenth of the training data, with their average score used for test predictions.

Models were implemented using scikit-learn (Pedregosa et al., 2011) with parameters optimized under random search (Appendix A).

While other models could reasonably serve as a baseline for this work, we chose a random forest because they tend to perform well on widely varying datasets (Fernandez-Delgado et al., 2014), they are efficient to train and test, and they don't require a huge effort to optimize (in order to produce a fair comparison).
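A sketch of the ten-forest workaround with scikit-learn (illustrative; the shard count matches the text, but the forest parameters are toy values, and it assumes both classes of every label appear in each shard):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class ShardedForest:
    """Ten independent forests, each fit on one tenth of the training data,
    with their per-label scores averaged at test time."""
    def __init__(self, n_shards=10, **rf_kwargs):
        self.forests = [RandomForestClassifier(**rf_kwargs) for _ in range(n_shards)]

    def fit(self, X, y):
        shards = np.array_split(np.arange(len(X)), len(self.forests))
        for forest, idx in zip(self.forests, shards):
            forest.fit(X[idx], y[idx])
        return self

    def predict_proba(self, X):
        # Average the per-label positive-class probabilities across forests.
        probs = [np.stack([p[:, -1] for p in f.predict_proba(X)], axis=1)
                 for f in self.forests]
        return np.mean(probs, axis=0)

X = np.random.default_rng(0).integers(0, 2, size=(200, 50))   # binary inputs
Y = np.random.default_rng(1).integers(0, 2, size=(200, 5))    # 5 labels
print(ShardedForest(n_shards=10, n_estimators=20).fit(X, Y).predict_proba(X).shape)
```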
3.3 CONSTANT-PREVALENCE MODEL

This minimum baseline model simply predicts the prevalence of each label for all instances. For example, if there were three possible medications, with prevalences of 0.3, 0.9, and 0.2, then the prediction of this model would be a constant [0.3, 0.9, 0.2] for each instance. We include this model in order to mitigate the fact that while all of our evaluation measures are suitable for comparing models on the same data, some are not well suited for external comparison because they depend, for example, on the prevalence of positive labels (Section 3.4). By including this model we can at least establish a true minimum baseline for reference.

3.4 EVALUATION

Our main evaluation focused on the models, although we also performed a separate evaluation of the embedding.

There are several possibilities for evaluation in a multi-label classification context (Sechidis et al., 2011; Zhang & Zhou, 2014). We chose micro-averaged area under the ROC curve (AUC) and label ranking loss as the primary methods of evaluation, because they treat each instance with equal weight, regardless of the nature of the positive labels for that instance. In other words, we wanted primary measures that did not give a scoring advantage to instances with either very many or very few positive labels, or that included very rare or very prevalent labels. Additionally, both of these measures appeal to us as intuitive extensions of the usual binary AUC, when seen from the perspective of a single instance. However, because these two measures don't reflect all aspects of multi-label prediction performance, we also include macro-averaged AUC, label ranking average precision, and coverage error measures.

Micro-averaged AUC considers each of the multiple label predictions in each instance as either true or false, and then computes the binary AUC as if they all belonged to the same 2-class problem (Zhang & Zhou, 2014). In other words, micro-averaged AUC A is

$$A = \frac{\left|\left\{(x, x', l, l') : f(x, l) \ge f(x', l'),\ (x, l) \in S^{+},\ (x', l') \in S^{-}\right\}\right|}{|S^{+}|\,|S^{-}|}, \tag{1}$$

where S⁺ and S⁻ are the sets of (instance, label) pairs with positive and negative labels, respectively, and f(x, l) is the predicted score of label l for instance x.

Macro-averaged AUC can be thought of as averaging the AUC performance of several one-vs-all classifiers, one model for each label. It treats each model equally, regardless of the prevalence of positive labels for that model. This gives a score of 0.5 to the constant-prevalence model, at the cost of weighting instances differently in order to achieve that. This is in contrast to micro-averaged AUC, which can be thought of as averaging across instances rather than labels. It weighs each instance equally, at the cost of a 0.5 score no longer being the random-guessing baseline.

Label ranking loss LR gives the average fraction of all possible (positive, negative) label pairs for each instance in which the negative label has a higher score than the positive label (Tsoumakas et al., 2010):

$$LR = \frac{1}{N}\sum_{j=1}^{N} \frac{1}{|Y^{(j)}|\,|\bar{Y}^{(j)}|} \left|\left\{(l, l') : r^{(j)}(l) > r^{(j)}(l'),\ (l, l') \in Y^{(j)} \times \bar{Y}^{(j)}\right\}\right|, \tag{2}$$

where Y^{(j)} is the set of positive labels for instance j, \bar{Y}^{(j)} is its complement, and r^{(j)}(l) is the rank of label l in the score-ordered list for instance j.

Label ranking average precision gives the mean fraction of correct positive labels among all positive labels with lower scores for each label. The coverage error function calculates the mean number of labels on the ranked list that are needed to cover all the positive labels of the sample. Both of these depend on the prevalence of positive labels in a test instance.

APPENDIX B

This appendix lists the top-level classes of the International Statistical Classification of Diseases and Related Health Problems, Ninth Revision (ICD-9) and of the Anatomical Therapeutic Chemical Classification System (ATC).

001-139 Infectious and parasitic diseases
140-239 Neoplasms
240-279 Endocrine, nutritional and metabolic diseases, and immunity disorders
280-289 Diseases of the blood and blood-forming organs
290-319 Mental disorders
320-359 Diseases of the nervous system
360-389 Diseases of the sense organs
390-459 Diseases of the circulatory system
460-519 Diseases of the respiratory system
520-579 Diseases of the digestive system
580-629 Diseases of the genitourinary system
630-679 Complications of pregnancy, childbirth, and the puerperium
680-709 Diseases of the skin and subcutaneous tissue
710-739 Diseases of the musculoskeletal system and connective tissue
740-759 Congenital anomalies
760-779 Certain conditions originating in the perinatal period
780-799 Symptoms, signs, and ill-defined conditions
800-999 Injury and poisoning
V01-V91 Supplementary - factors influencing health status and contact with health services
E000-E999 Supplementary - external causes of injury and poisoning

Top-level groups of ATC codes and their corresponding colors are used in Figure 4 and Appendix C.
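Returning to the evaluation measures of Section 3.4, all five have direct scikit-learn implementations (the library the paper already uses); a toy demonstration:

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, label_ranking_loss,
                             label_ranking_average_precision_score,
                             coverage_error)

# Toy multi-label ground truth and scores (5 instances, 4 labels); every
# label column and every row contains both classes, as the metrics require.
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 1, 0],
                   [1, 1, 0, 1],
                   [0, 0, 0, 1],
                   [1, 0, 1, 0]])
y_score = np.random.default_rng(0).random((5, 4))

print("micro-AUC:", roc_auc_score(y_true, y_score, average="micro"))
print("macro-AUC:", roc_auc_score(y_true, y_score, average="macro"))
print("label ranking loss:", label_ranking_loss(y_true, y_score))
print("LRAP:", label_ranking_average_precision_score(y_true, y_score))
print("coverage error:", coverage_error(y_true, y_score))
```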
4 RESULTS AND DISCUSSION

The GRU model had the top performance by all measures, although the LSTM was a close second (Table 1), a performance pattern consistent with previous reports (Chung et al., 2014). The deep neural net performance was about 0.01 worse in both measures, suggesting that the recurrent models were able to use the sequence information, but only to a small advantage over the most similar non-temporal architecture. However, we note that both RNNs' performance peaked at the top end of our tractable range for model size, while the feed-forward network peaked using a model about one third that size (Appendix A). Experimenting with the architecture, we found that increasing the number of nodes or layers for the feed-forward network increased training time but not performance. This suggests that the RNN performance was limited by the hardware available, that increasing the size of the model may further increase performance, and that the feed-forward network was limited by something else.

Both random forest models were weaker than the deep neural net, as might be expected from the need to resort to binary and bag-of-codes representations of the input data.

We evaluated the embedding based on how strongly related, in a clinical semantic sense, the nearest neighbor to each code is (in the embedding space). A licensed physician manually annotated the list of all 2000 codes with its match category m ∈ {strongly related, loosely related, unrelated}, and we computed the empirical marginal probability P(m) of each category, the empirical conditional probability P(m|d) of the match category given the nearest neighbor (Manhattan) distance d, and the empirical marginal probability P(d). For comparison, we computed P(m) under 100 random code pairings.

Table 1: Results of multi-label classification for each model. Baseline is the constant-prevalence model. Perfect is the best possible performance for our data under the given measure.

Model | Micro-AUC | Label Ranking Loss | Macro-AUC | Label Ranking Avg. Precision | Coverage Error
GRU | 0.927 | 0.076 | 0.861 | 0.603 | 62.6
LSTM | 0.926 | 0.077 | 0.859 | 0.600 | 63.0
NN | 0.916 | 0.086 | 0.835 | 0.570 | 67.3
RF (binary) | 0.903 | 0.102 | 0.804 | 0.523 | 73.7
RF (counts) | 0.894 | 0.111 | 0.787 | 0.497 | 77.3
Baseline | 0.828 | 0.172 | 0.500 | 0.355 | 97.2
Perfect | 1.0 | 0.0 | 1.0 | 1.0 | 15.0

APPENDIX C

This appendix presents results from three illustrative cases from the dozen cases randomly selected for individual evaluation.

ICD-9 code | Code description | Time estimate (ago)
203.00 | Multiple myeloma, without mention of having achieved remission | 4.8 months ago
273.1 | Monoclonal paraproteinemia | 4.8 months ago
285.9 | Anemia, unspecified | 4.8 months ago
276.50 | Volume depletion, unspecified | 4.8 months ago
733.00 | Osteoporosis, unspecified | 4.8 months ago
203.00 | Multiple myeloma, without mention of having achieved remission | 4.8 months ago
203.00 | Multiple myeloma, without mention of having achieved remission | 2.9 months ago
203.01 | Multiple myeloma, in remission | 2.9 months ago
273.1 | Monoclonal paraproteinemia | 2.9 months ago
273.1 | Monoclonal paraproteinemia | 1.6 months ago
279.3 | Unspecified immunity deficiency | 1.6 months ago
203.00 | Multiple myeloma, without mention of having achieved remission | 1.6 months ago
781.2 | Abnormality of gait | 3.7 weeks ago
203.00 | Multiple myeloma, without mention of having achieved remission | 3.7 weeks ago
401.9 | Unspecified essential hypertension | 3.7 weeks ago
V12.54 | Personal history of transient ischemic attack (TIA), and cerebral infarction without residual deficits | 3.7 weeks ago
794.31 | Nonspecific abnormal electrocardiogram [ECG] [EKG] | 3.7 weeks ago
786.09 | Other respiratory abnormalities | 3.7 weeks ago
273.1 | Monoclonal paraproteinemia | 3.7 weeks ago
203.00 | Multiple myeloma, without mention of having achieved remission | 3.6 weeks ago
V58.69 | Long-term (current) use of other medications | 3.6 weeks ago
V58.69  Long-term (current) use of other medications (3.6 weeks ago)
794.31  Nonspecific abnormal electrocardiogram [ECG] [EKG] (3.4 weeks ago)
203.00  Multiple myeloma, without mention of having achieved remission (4 days ago)
V42.82  Peripheral stem cells replaced by transplant (4 days ago)
203.01  Multiple myeloma, in remission (3 days ago)
38.97   Central venous catheter placement with guidance (3 days ago)
V42.82  Peripheral stem cells replaced by transplant (3 days ago)
V58.81  Fitting and adjustment of vascular catheter (3 days ago)
203.00  Multiple myeloma, without mention of having achieved remission (3 days ago)
V42.82  Peripheral stem cells replaced by transplant (2 days ago)
203.01  Multiple myeloma, in remission (2 days ago)
203.00  Multiple myeloma, without mention of having achieved remission (1 day ago)
V42.82  Peripheral stem cells replaced by transplant (1 day ago)
203.00  Multiple myeloma, without mention of having achieved remission (now)
V42.82  Peripheral stem cells replaced by transplant (now)

[Figure: predicted medication classes for this case, with bar heights giving prediction confidence and ATC therapeutic class labels on the axis; plot residue removed.]

A natural question is what performance is good enough for clinical use. While there is little clinical experience with multi-label classifiers, we would generally expect clinicians using a binary classifier in an advisory role to find an AUC $\ge$ 0.9 to be useful, and an AUC $\ge$ 0.95 to be very useful. An AUC difference of 0.01, and perhaps 0.005, is potentially noticeable in clinical use.

This 0.9/0.01 rule of thumb may loosely translate to our AUC variants, but it can directly translate to Label Ranking Loss $L_R$ (2). If we think of a single output prediction $y \in [0, 1]^k$ as a set of predictions for $k$ binary labels, then $1 - \mathrm{AUC}$ for that set of predictions is equivalent to $L_R$ for the original instance $y$. Therefore, values of $L_R \le 0.1$ may be clinically useful, and $L_R \le 0.05$ may be very useful.

[Figure: predicted medication classes for a second case, with bar heights giving prediction confidence and ATC therapeutic class labels on the axis; plot residue removed.]

A good example of missing medications is a case in which the record has multiple billing codes for both osteoporosis (which is very commonly treated with medication) and postablative hypothyroidism (a deliberately induced condition that is always treated with medication), but no medications of the appropriate classes were in the record. The GRU model predicted both of these classes, which the patient was almost surely taking.

A good example of either missing billing codes or discontinued medications that remain documented as active is a case in which the record has at least five years of data consisting only of codes for Parkinson's disease, but which lists medications for high cholesterol, hypertension, and other heart disease. The GRU model predicted a reasonable set of medications for Parkinson's disease and its complications, but did not predict the other medications that are not suggested by the record.

Given how easy it was to find cases with apparently missing codes and medications, we conclude that there is indeed a substantial amount of label noise in our data, and we therefore interpret our models' performance as lower bounds on the actual performance. We are encouraged that this kind of a model may actually be useful for identifying missing medications in the record, but of course a more thorough validation, and possibly a more accurate model, would be necessary before use in a clinical scenario. A definitive experiment would use off-line research, including reconciling
information from various electronic and human sources to establish the ground truth of which medications were being taken on a particular day, but such efforts are labor intensive and expensive, and can only be conducted on a very small scale.

An interesting byproduct of these models is the semantic embedding of ICD-9 codes used in the recurrent networks (Figure 5). Transforming input to a semantic embedding is a common pre-

Table 1:
Model        Micro-AUC   Label Ranking Loss   Macro-AUC   Label Ranking Avg. Precision   Coverage Error
GRU          0.927       0.076                0.861       0.603                          62.6
LSTM         0.926       0.077                0.859       0.600                          63.0
NN           0.916       0.086                0.835       0.570                          67.3
RF (binary)  0.903       0.102                0.804       0.523                          73.7
RF (counts)  0.894       0.111                0.787       0.497                          77.3
Baseline     0.828       0.172                0.500       0.355                          97.2
Perfect      1.0         0.0                  1.0         1.0                            15.0

Subjectively examining performance on 20 randomly selected cases, we find very good detailed predictions, but also evidence of both missing medications and missing billing codes. An example of a good set of detailed predictions is from a complex patient suffering from multiple myeloma (a type of cancer) with various complications. This patient was taking 26 medications, 24 of which had moderate to high probability predictions (Figure 4). (We have found by eyeball that a prediction cutoff of 0.2 gives a reasonable balance between sensitivity and specificity for our model.) In the other direction, only two of the high-prediction classes were not actually being taken, but those classes, along with several of the other moderately-predicted classes, are commonly used for cancer and are clinically reasonable for the case. (Details of this and the two cases below are in Appendix C.)

Figure 4: Medication predictions for a complicated patient. Each vertical bar represents the prediction for a single medication class, with the height of the bar representing the confidence of the prediction. Black labels above arrows indicate ATC therapeutic classes for medications the patient was actually taking. Colors and letters below the axis indicate high-level therapeutic class groups."}]
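The nearest-neighbor evaluation of the code embedding described earlier is straightforward to reproduce in outline. The following is a hedged sketch: the 2,000 codes and the Manhattan-distance choice follow the text, while the embedding dimension and all variable names are our own illustrative assumptions:

```python
import numpy as np

def nearest_neighbors(embeddings, codes):
    """For each code, find its nearest neighbor (and distance) in
    embedding space under Manhattan (L1) distance."""
    results = {}
    for i, code in enumerate(codes):
        dists = np.abs(embeddings - embeddings[i]).sum(axis=1)  # L1 distances
        dists[i] = np.inf  # exclude the code itself
        j = int(np.argmin(dists))
        results[code] = (codes[j], float(dists[j]))
    return results

# Toy usage: 2,000 codes embedded in an arbitrary 300 dimensions.
codes = [f"code_{k}" for k in range(2000)]
embeddings = np.random.randn(2000, 300)
nn = nearest_neighbors(embeddings, codes)
# Each (code -> nearest neighbor) pair would then be manually annotated as
# strongly related / loosely related / unrelated, and the empirical P(m),
# P(m|d), and P(d) estimated from those annotations.
```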
rJ8uNptgl
[{"section_index": "0", "section_name": "TOWARDS THE LIMIT OF NETWORK OUANTIZATION", "section_text": "Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee\nyoojin.c,mostafa.e, jungwon2.lee}@samsung.com"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Network quantization is one of network compression techniques to reduce the re dundancy of deep neural networks. It reduces the number of distinct network pa. rameter values by quantization in order to save the storage for them. In this paper we design network quantization schemes that minimize the performance loss due to quantization given a compression ratio constraint. We analyze the quantitative. relation of quantization errors to the neural network loss function and identify tha the Hessian-weighted distortion measure is locally the right objective function fo the optimization of network quantization. As a result, Hessian-weighted k-means clustering is proposed for clustering network parameters to quantize. When opti. mal variable-length binary codes, e.g., Huffman codes, are employed for furthe compression, we derive that the network quantization problem can be related tc the entropy-constrained scalar quantization (ECSQ) problem in information the ory and consequently propose two solutions of ECsQ for network quantization. i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm Finally, using the simple uniform quantization followed by Huffman coding, we show from our experiments that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet, 32-layer ResNet and AlexNet, respectively..\nFigure 2: Accuracy versus average codeword length per network parameter after network quanti zation, Huffman coding and fine-tuning for LeNet and 32-layer ResNet when Hessian is computed with 50,000 or 1,000 samples and when the square roots of the second moment estimates of gradients are used instead of Hessian as an alternative..\nFigure[2shows the performance of Hessian-weighted k-means clustering when Hessian is computed with a small number of samples (1,000 samples). Observe that even using the Hessian computed. with a small number of samples yields almost the same performance. We also show the performance. of Hessian-weighted k-means clustering when an alternative of Hessian is used instead of Hessian as. explained in Section[3.5] In particular, the square roots of the second moment estimates of gradients are used instead of Hessian, and using this alternative provides similar performance to using Hessian\nIn Table1 we summarize the compression ratios that we can achieve with different network quanti zation methods for pruned models. The original network parameters are 32-bit float numbers. Using the simple uniform quantization followed by Huffman coding, we achieve the compression ratios of 51.25, 22.17 and 40.65 (i.e., the compressed model sizes are 1.95%, 4.51% and 2.46% of the original model sizes) for LeNet, 32-layer ResNet and AlexNet, respectively, at no or marginal per formance loss. Observe that the loss in the compressed AlexNet is mainly due to pruning. Here, we also compare our network quantization results to the ones in|Han et al. (2015a). Note that layer-by. layer quantization with k-means clustering is evaluated in Han et al. 
(2015a), while our quantization schemes, including k-means clustering, are employed to quantize the network parameters of all layers together at once (see Section 3.6)."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks have emerged to be the state-of-the-art in the field of machine learning for image classification, object detection, speech recognition, natural language processing, and machine translation (LeCun et al., 2015). The substantial progress of neural networks however comes with a high cost of computations and hardware resources resulting from a large number of parameters. For example, Krizhevsky et al. (2012) came up with a deep convolutional neural network consisting of 61 million parameters and won the ImageNet competition in 2012. It was followed by deeper neural networks with even larger numbers of parameters, e.g., Simonyan & Zisserman (2014).

Besides network quantization, network pruning has been studied for network compression to remove redundant parameters permanently from neural networks (Mozer & Smolensky, 1989; LeCun et al., 1989; Hassibi & Stork, 1993; Han et al., 2015b; Lebedev & Lempitsky, 2016; Wen et al., 2016). Matrix/tensor factorization and low-rank approximation have been investigated as well to find more efficient representations of neural networks with a smaller number of parameters and consequently to save computations (Sainath et al., 2013; Xue et al., 2013; Jaderberg et al., 2014; Lebedev et al., 2014; Yang et al., 2015; Liu et al., 2015; Kim et al., 2015; Tai et al., 2015; Novikov et al., 2015). Moreover, similar to network quantization, low-precision network implementation has been examined in Vanhoucke et al. (2011); Courbariaux et al. (2014); Anwar et al. (2015); Gupta et al. (2015); Lin et al. (2015a). Some extremes of low-precision neural networks, consisting of binary or ternary parameters, can be found in Courbariaux et al. (2015); Lin et al. (2015b); Rastegari et al. (2016). We note that these are different types of network compression techniques, which can be employed on top of each other."}, {"section_index": "3", "section_name": "REFERENCES", "section_text": "Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Fixed point optimization of deep convolutional neural networks for object recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015.

[Figure 2 appears here: accuracy (%) versus average codeword length (bits) for (a) LeNet and (b) ResNet, with curves for k-means, Hessian-weighted k-means (50,000 and 1,000 samples), and Alt-Hessian-weighted k-means; plot axes removed.]

The large sizes of deep neural networks make it difficult to deploy them on resource-limited devices, e.g., mobile or portable devices, and network compression is of great interest in recent years to reduce the computational cost and memory requirements of deep neural networks. Our interest in this paper is mainly in curtailing the size of the storage (memory) for network parameters (weights and biases).
In particular, we focus on network size compression by reducing the number of distinct network parameters by quantization.

This paper investigates the quantization problem of network parameters in deep neural networks. We identify the suboptimality of the conventional quantization method using k-means clustering and newly design network quantization schemes so that they can minimize the performance loss due to quantization given a compression ratio constraint. In particular, we analytically show that Hessian can be used as a measure of the importance of network parameters and propose to minimize Hessian-weighted quantization errors on average for clustering network parameters to quantize. Hessian-weighting is beneficial in quantizing all of the network parameters together at once since it can handle the different impact of quantization errors properly not only within layers but also across layers. Furthermore, we make a connection from the network quantization problem to the entropy-constrained data compression problem in information theory and push the compression ratio to the limit that information theory provides. Two efficient heuristic solutions are presented to this end, i.e., uniform quantization and an iterative solution for ECSQ. Our experiment results show that the proposed network quantization schemes provide considerable gain over the conventional method using k-means clustering, in particular for large and deep neural networks.

The most related work to our investigation in this paper can be found in Gong et al. (2014); Han et al. (2015a), where a conventional quantization method using k-means clustering is employed for network quantization. This conventional approach however is proposed with little consideration for the impact of quantization errors on the neural network performance loss and no effort to optimize the quantization procedure for a given compression ratio constraint. In this paper, we reveal the suboptimality of this conventional method and newly design quantization schemes for neural networks. In particular, we formulate an optimization problem to minimize the network performance loss due to quantization given a compression ratio constraint and find efficient quantization methods for neural networks.

The main contribution of the paper can be summarized as follows.

Philip A Chou, Tom Lookabaugh, and Robert M Gray. Entropy-constrained vector quantization. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(1):31-42, 1989.

Matthieu Courbariaux, Jean-Pierre David, and Yoshua Bengio. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123-3131, 2015.

We consider a neural network that is already trained, pruned if employed, and fine-tuned before quantization. If no network pruning is employed, all parameters in a network are subject to quantization. For pruned networks, our focus is on quantization of the unpruned parameters.

Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization.
Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

The goal of network quantization is to quantize (unpruned) network parameters in order to reduce the size of the storage for them while minimizing the performance degradation due to quantization. For network quantization, network parameters are grouped into clusters. Parameters in the same cluster share their quantized value, which is the representative value (i.e., cluster center) of the cluster they belong to. After quantization, lossless binary coding follows to encode the quantized parameters into binary codewords to store instead of the actual parameter values. Either fixed-length binary coding or variable-length binary coding, e.g., Huffman coding, can be employed to this end.

Herbert Gish and John Pierce. Asymptotically efficient quantizing. IEEE Transactions on Information Theory, 14(5):676-683, 1968.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.

Suppose that we have a total of N parameters in a neural network. Before quantization, each parameter is assumed to be of b bits. For quantization, we partition the network parameters into k clusters. Let $\mathcal{C}_i$ be the set of network parameters in cluster $i$ and let $b_i$ be the number of bits of the codeword assigned to the network parameters in cluster $i$, for $1 \le i \le k$.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015a.

Table 1: Summary of network quantization results with Huffman coding for pruned models (pruning + quantization of all layers + Huffman coding).

LeNet                                   Accuracy %   Compression ratio
  Original model                        99.25
  Pruned model                          99.27        10.13
  k-means                               99.27        44.58
  Hessian-weighted k-means              99.27        47.16
  Uniform quantization                  99.28        51.25
  Iterative ECSQ                        99.27        49.01
  Deep compression (Han et al., 2015a)  99.26        39.00

ResNet                                  Accuracy %   Compression ratio
  Original model                        92.58
  Pruned model                          92.58        4.52
  k-means                               92.64        18.25
  Hessian-weighted k-means              92.67        20.51
  Uniform quantization                  92.68        22.17
  Iterative ECSQ                        92.73        21.01
  Deep compression (Han et al., 2015a)  N/A          N/A

AlexNet                                 Accuracy %   Compression ratio
  Original model                        57.16
  Pruned model                          56.00        7.91
  k-means                               56.12        30.53
  Alt-Hessian-weighted k-means          56.04        33.71
  Uniform quantization                  56.20        40.65
  Deep compression (Han et al., 2015a)  57.22        35.00

- It is derived that the performance loss due to quantization in neural networks can be quantified approximately by the Hessian-weighted distortion measure. Then, Hessian-weighted k-means clustering is proposed for network quantization to minimize the performance loss.
- It is identified that the optimization problem for network quantization, given a compression ratio constraint, can be reduced to an entropy-constrained scalar quantization (ECSQ) problem when optimal variable-length binary coding is employed after quantization. Two efficient heuristic solutions for ECSQ are proposed for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm.
- As an alternative to Hessian, it is proposed to utilize some function (e.g., square root) of the second moment estimates of gradients when the Adam (Kingma & Ba, 2014) stochastic gradient descent (SGD) optimizer is used in training.
  The advantage of using this alternative is that it is computed while training and can be obtained at the end of training at no additional cost.
- It is shown how the proposed network quantization schemes can be applied to quantize the network parameters of all layers together at once, rather than the layer-by-layer network quantization of Gong et al. (2014); Han et al. (2015a). This follows from our investigation that Hessian-weighting can handle the different impact of quantization errors properly not only within layers but also across layers. Moreover, by quantizing the network parameters of all layers together, one can even avoid layer-by-layer compression rate optimization.

The rest of the paper is organized as follows. In Section 2, we define the network quantization problem and review the conventional quantization method using k-means clustering. Section 3 discusses Hessian-weighted network quantization. Our entropy-constrained network quantization schemes follow in Section 4. Finally, experiment results and conclusion can be found in Section 5 and Section 6, respectively.

For a lookup table to decode quantized values from their binary encoded codewords, we store k binary codewords ($b_i$ bits for $1 \le i \le k$) and the corresponding quantized values (b bits for each). The compression ratio is then given by

$$\text{Compression ratio} = \frac{Nb}{\sum_{i=1}^{k}(|\mathcal{C}_i| + 1)\,b_i + kb}. \tag{1}$$

Observe in (1) that the compression ratio depends not only on the number of clusters but also on the sizes of the clusters and the lengths of the binary codewords assigned to them, in particular, when a variable-length code is used for encoding the quantized values. For fixed-length codes, however, all codewords are of the same length, i.e., $b_i = \lceil \log_2 k \rceil$ for all $1 \le i \le k$, and thus the compression ratio is reduced to only a function of the number of clusters, i.e., k, assuming that N and b are given.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

We observe two issues with employing k-means clustering for network quantization:

Yann Le Cun. Modeles connexionnistes de l'apprentissage. PhD thesis, Paris 6, 1987.

Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554-2564, 2016.

Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. arXiv preprint arXiv:1412.6553, 2014.

Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In Advances in Neural Information Processing Systems, pp. 598-605, 1989.

Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems, pp. 442-450, 2015."}, {"section_index": "4", "section_name": "3.1 NETWORK MODEL", "section_text": "We consider a general non-linear neural network that yields output $y = f(x; \mathbf{w})$ from input $x$, where $\mathbf{w} = [w_1 \cdots w_N]$ is the vector consisting of all trainable network parameters in the network; N is the total number of trainable parameters in the network. A loss function $\text{loss}(y, \hat{y})$ is defined as the objective function that we aim to minimize on average, where $\hat{y} = \hat{y}(x)$ is the expected (ground-truth) output for input $x$. Cross entropy or mean square error are typical examples of a loss function. Given a training data set $\mathcal{X}_{\text{train}}$, we optimize network parameters by solving the following problem, e.g., approximately by using a stochastic gradient descent (SGD) method with mini-batches:
$$\hat{\mathbf{w}} = \underset{\mathbf{w}}{\arg\min}\; L(\mathcal{X}_{\text{train}}; \mathbf{w}), \quad \text{where} \quad L(\mathcal{X}; \mathbf{w}) = \frac{1}{|\mathcal{X}|}\sum_{x \in \mathcal{X}} \text{loss}(f(x; \mathbf{w}), \hat{y}(x)).$$

Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 164-171, 1993.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference, 2014.

Provided network parameters $\{w_i\}_{i=1}^{N}$ to quantize, k-means clustering partitions them into k disjoint sets (clusters), denoted by $\mathcal{C}_1, \mathcal{C}_2, \ldots, \mathcal{C}_k$, while minimizing the mean square quantization error (MSQE) as follows:

$$\underset{\mathcal{C}_1, \mathcal{C}_2, \ldots, \mathcal{C}_k}{\arg\min} \sum_{i=1}^{k} \sum_{w \in \mathcal{C}_i} |w - c_i|^2, \quad \text{where} \quad c_i = \frac{1}{|\mathcal{C}_i|}\sum_{w \in \mathcal{C}_i} w. \tag{2}$$

- First, although k-means clustering minimizes the MSQE, it does not imply that k-means clustering minimizes the performance loss due to quantization as well in neural networks. K-means clustering treats quantization errors from all network parameters with equal importance. However, quantization errors from some network parameters may degrade the performance more significantly than the others. Thus, for minimizing the loss due to quantization in neural networks, one needs to take this dissimilarity into account.
- Second, k-means clustering does not consider any compression ratio constraint. It simply minimizes its distortion measure for a given number of clusters, i.e., for k clusters. This is however suboptimal when variable-length coding follows, since the compression ratio depends not only on the number of clusters but also on the sizes of the clusters and the codeword lengths assigned to them, which are determined by the binary coding scheme employed after clustering. Therefore, for the optimization of network quantization given a compression ratio constraint, one needs to take the impact of binary coding into account, i.e., we need to solve the quantization problem under the actual compression ratio constraint imposed by the specific binary coding scheme employed after clustering.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Darryl D Lin, Sachin S Talathi, and V Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. arXiv preprint arXiv:1511.06393, 2015a.

In this section, we analyze the impact of quantization errors on the neural network loss function and derive that the Hessian-weighted distortion measure is a relevant objective function for network quantization in order to minimize the quantization loss locally. Moreover, from this analysis, we propose Hessian-weighted k-means clustering for network quantization to minimize the performance loss due to quantization in neural networks.

The average loss function $L(\mathcal{X}; \mathbf{w})$ can be expanded by Taylor series with respect to $\mathbf{w}$ as follows:

$$\delta L(\mathcal{X}; \mathbf{w}) = \mathbf{g}(\mathbf{w})^T \delta\mathbf{w} + \frac{1}{2}\,\delta\mathbf{w}^T \mathbf{H}(\mathbf{w})\,\delta\mathbf{w} + O(\|\delta\mathbf{w}\|^3), \tag{3}$$

where

$$\mathbf{g}(\mathbf{w}) = \frac{\partial L(\mathcal{X}; \mathbf{w})}{\partial \mathbf{w}}, \qquad \mathbf{H}(\mathbf{w}) = \frac{\partial^2 L(\mathcal{X}; \mathbf{w})}{\partial \mathbf{w}^2};$$

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

the square matrix $\mathbf{H}(\mathbf{w})$ consisting of second-order partial derivatives is called the Hessian matrix, or Hessian. Assume that the loss function has reached one of its local minima, at $\mathbf{w} = \hat{\mathbf{w}}$, after training. At local minima, gradients are all zero, i.e., we have $\mathbf{g}(\hat{\mathbf{w}}) = \mathbf{0}$, and thus the first term on the right-hand side of (3) can be neglected at $\mathbf{w} = \hat{\mathbf{w}}$. The third term on the right-hand side of (3)
is also ignored under the assumption that the average loss function is approximately quadratic at the local minimum $\mathbf{w} = \hat{\mathbf{w}}$. Finally, for simplicity, we approximate the Hessian matrix as a diagonal matrix by setting its off-diagonal terms to zero. Then, it follows from (3) that

$$\delta L(\mathcal{X}; \hat{\mathbf{w}}) \approx \frac{1}{2}\sum_{i=1}^{N} h_{ii}(\hat{\mathbf{w}})\,|\delta w_i|^2, \tag{4}$$

where $h_{ii}(\hat{\mathbf{w}})$ is the second-order partial derivative of the average loss function with respect to $w_i$ evaluated at $\mathbf{w} = \hat{\mathbf{w}}$, which is the i-th diagonal element of the Hessian matrix $\mathbf{H}(\hat{\mathbf{w}})$. The quantization error of each parameter is

$$\delta w_i = \bar{w}_i - w_i, \tag{5}$$

Jian Xue, Jinyu Li, and Yifan Gong. Restructuring of deep neural network acoustic models with singular value decomposition. In INTERSPEECH, pp. 2365-2369, 2013.

where $\bar{w}_i$ is a quantized value of $w_i$. Finally, combining (4) and (5), we derive that the local impact of quantization on the average loss function at $\mathbf{w} = \hat{\mathbf{w}}$ can be quantified approximately as follows:

$$\delta L(\mathcal{X}; \hat{\mathbf{w}}) \approx \frac{1}{2}\sum_{i=1}^{N} h_{ii}(\hat{\mathbf{w}})\,|\bar{w}_i - w_i|^2. \tag{6}$$

At a local minimum, the diagonal elements of Hessian, i.e., the $h_{ii}(\hat{\mathbf{w}})$'s, are all non-negative, and thus the summation in (6) is always additive, implying that the average loss function either increases or stays the same. Therefore, the performance degradation due to quantization of a neural network can be measured approximately by the Hessian-weighted distortion as shown in (6). Further discussion on the Hessian-weighted distortion measure can be found in Appendix A.1.

Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.

For notational simplicity, we use $w_i \equiv \hat{w}_i$ and $h_{ii} \equiv h_{ii}(\hat{\mathbf{w}})$ from now on. The optimal clustering that minimizes the Hessian-weighted distortion measure is given by

$$\underset{\mathcal{C}_1, \mathcal{C}_2, \ldots, \mathcal{C}_k}{\arg\min} \sum_{j=1}^{k} \sum_{w_i \in \mathcal{C}_j} h_{ii}\,|w_i - c_j|^2, \quad \text{where} \quad c_j = \frac{\sum_{w_i \in \mathcal{C}_j} h_{ii}\,w_i}{\sum_{w_i \in \mathcal{C}_j} h_{ii}}. \tag{7}$$

We call this Hessian-weighted k-means clustering. Observe in (7) that we give a larger penalty to a network parameter in defining the distortion measure for clustering when its second-order partial derivative is larger, in order to avoid a large deviation from its original value, since the impact on the loss function due to quantization is expected to be larger for that parameter.

Hessian-weighted k-means clustering is locally optimal in minimizing the quantization loss when fixed-length binary coding follows, where the compression ratio solely depends on the number of clusters, as shown in Section 2.1. Similar to conventional k-means clustering, solving this optimization is not easy, but Lloyd's algorithm is still applicable as an efficient heuristic solution for this problem if Hessian-weighted means are used as cluster centers instead of non-weighted regular means."}, {"section_index": "5", "section_name": "A APPENDIX", "section_text": "For obtaining Hessian, one needs to evaluate the second-order partial derivative of the average loss function with respect to each of the network parameters, i.e., we need to calculate"}, {"section_index": "6", "section_name": "A.1 FURTHER DISCUSSION ON THE HESSIAN-WEIGHTED QUANTIZATION ERROR", "section_text": "The diagonal approximation for Hessian simplifies the optimization problem as well as its solution for network quantization. This simplification comes with some performance loss. We conjecture that the loss due to this approximation is small. The reason is that the contributions from off-diagonal terms are not always additive and their summation may end up with a small value. However, diagonal terms are all non-negative and therefore their contributions are always additive. We do not verify this
conjecture in this paper since solving the problem without the diagonal approximation is too complex; we would even need to compute the whole Hessian matrix, which is also too costly.

$$h_{ii} = \frac{\partial^2 L(\mathcal{X}; \mathbf{w})}{\partial w_i^2}\bigg|_{\mathbf{w}=\hat{\mathbf{w}}} = \frac{1}{|\mathcal{X}|}\sum_{x \in \mathcal{X}} \frac{\partial^2}{\partial w_i^2}\,\text{loss}(f(x; \mathbf{w}), \hat{y}(x))\bigg|_{\mathbf{w}=\hat{\mathbf{w}}}. \tag{8}$$

Observe that the relation of the Hessian-weighted distortion measure to the quantization loss holds for any model for which the objective function can be approximated as a quadratic function with respect to the parameters to quantize in the model. Hence, the quantization methods proposed in this paper to minimize the Hessian-weighted distortion measure are not specific to neural networks but are generally applicable to quantization of parameters of any model whose objective function is approximately locally quadratic with respect to its parameters.

Hessian computation and our network quantization are performed after completing network training. For the data set $\mathcal{X}$ used to compute Hessian in (8), we can either reuse a training data set or use some other data set, e.g., a validation data set. We observed from our experiments that even using a small subset of the training or validation data set is sufficient to yield a good approximation of Hessian for network quantization.

Finally, we do not consider the interactions between quantization and retraining in our formulation in Section 3.2. We analyze the expected loss due to quantization assuming no further retraining, and focus on finding optimal network quantization schemes that minimize the performance loss. In our experiments, however, we further fine-tune the quantized values (cluster centers) so that we can recover the loss due to quantization and improve the performance."}, {"section_index": "7", "section_name": "3.5 ALTERNATIVE OF HESSIAN", "section_text": "Although there is an efficient way to obtain the diagonal of Hessian, as discussed in the previous subsection, Hessian computation is not free. In order to avoid this additional Hessian computation, we propose to use an alternative metric instead of Hessian. In particular, we consider neural networks trained with the Adam SGD optimizer (Kingma & Ba, 2014) and propose to use some function (e.g., square root) of the second moment estimates of gradients as an alternative of Hessian.

We compare uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean in Figure 3, which shows that uniform quantization with Hessian-weighted mean slightly outperforms uniform quantization with non-weighted mean.

The Adam algorithm computes adaptive learning rates for individual network parameters from the first and second moment estimates of gradients. We compare the Adam method to Newton's optimization method using Hessian and notice that the second moment estimates of gradients in the Adam method act like the Hessian in Newton's method. This observation leads us to use some function (e.g., square root) of the second moment estimates of gradients as an alternative of Hessian.

[Figure 3 appears here: panels (a) Huffman coding and (b) Huffman coding + fine-tuning; see the caption in the text below. Plot axes removed.]

The advantage of using the second moment estimates from the Adam method is that they are computed while training and we can obtain them at the end of training at no additional cost.
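To make the clustering concrete, here is a minimal sketch of Hessian-weighted k-means in the sense of (7); the per-parameter weights can be the diagonal Hessian values of (8) or, per Section 3.5, the square roots of Adam's second moment estimates. This is our own illustrative implementation, not the authors' code:

```python
import numpy as np

def weighted_kmeans_1d(w, h, k, iters=100):
    """Lloyd-style clustering of scalar parameters w with per-parameter
    weights h (e.g., diagonal Hessian or sqrt of Adam's 2nd moments).
    Cluster centers are weight-weighted means, per Eq. (7)."""
    centers = np.linspace(w.min(), w.max(), k)
    for _ in range(iters):
        # Assignment: since h_i > 0 scales the penalty identically across
        # clusters, assignment reduces to plain nearest-center.
        assign = np.argmin((w[:, None] - centers[None, :]) ** 2, axis=1)
        # Update: weighted mean of each cluster minimizes sum h_i|w_i - c|^2.
        for j in range(k):
            mask = assign == j
            if mask.any():
                centers[j] = np.sum(h[mask] * w[mask]) / np.sum(h[mask])
    return centers, assign

# Toy usage: 10,000 parameters, positive weights standing in for h_ii.
w = np.random.randn(10_000)
h = np.abs(np.random.randn(10_000)) + 1e-8
centers, assign = weighted_kmeans_1d(w, h, k=8)
w_quantized = centers[assign]
```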
It makes Hessian-weighting more feasible for deep neural networks, which have millions of parameters. We note that similar quantities can be found and used for other SGD optimization methods using adaptive learning rates, e.g., AdaGrad (Duchi et al., 2011), Adadelta (Zeiler, 2012) and RMSProp (Tieleman & Hinton, 2012)."}, {"section_index": "8", "section_name": "3.6 QUANTIZATION OF ALL LAYERS", "section_text": "We propose quantizing the network parameters of all layers in a neural network together at once by taking the Hessian-weight into account. Layer-by-layer quantization was examined in the previous work (Gong et al., 2014; Han et al., 2015a). However, e.g., in Han et al. (2015a), a larger number of bits (a larger number of clusters) is assigned to convolutional layers than to fully-connected layers, which implies that they heuristically treat convolutional layers as more important. This follows from the fact that the impact of quantization errors on the performance varies significantly across layers; some layers, e.g., convolutional layers, may be more important than the others. This concern is exactly what we can address by Hessian-weighting.

Hessian-weighting properly handles the different impact of quantization errors not only within layers but also across layers, and thus it can be employed for quantizing all layers of a network together. The impact of quantization errors may vary more substantially across layers than within layers. Thus, Hessian-weighting may show more benefit in deeper neural networks. We note that Hessian-weighting can still provide gain even for layer-by-layer quantization since it can address the different impact of the quantization errors of network parameters within each layer as well.

In order to solve the ECSQ problem for network quantization, we define a Lagrangian cost function

$$J(\mathcal{C}_1, \mathcal{C}_2, \ldots, \mathcal{C}_k) = D + \lambda H = \frac{1}{N}\sum_{j=1}^{k}\sum_{w_i \in \mathcal{C}_j} \underbrace{\left(h_{ii}\,|w_i - c_j|^2 - \lambda \log_2 p_j\right)}_{=\,d_\lambda(i,\,j)}, \tag{12}$$

where

$$D = \frac{1}{N}\sum_{j=1}^{k}\sum_{w_i \in \mathcal{C}_j} h_{ii}\,|w_i - c_j|^2, \qquad H = -\sum_{j=1}^{k} p_j \log_2 p_j.$$

Recent neural networks are getting deeper, e.g., see Szegedy et al. (2015a;b); He et al. (2015). For such deep neural networks, quantizing the network parameters of all layers together is even more advantageous since we can avoid layer-by-layer compression rate optimization. Optimizing compression ratios jointly across all individual layers (to maximize the overall compression ratio for a network) requires exponential time complexity with respect to the number of layers. This is because the total number of possible combinations of compression ratios for individual layers increases exponentially as the number of layers increases.

Recall that we are interested in only the diagonal elements of Hessian. An efficient way of computing the diagonal of Hessian is presented in Le Cun (1987); Becker & Le Cun (1988), and it is based on a back propagation method that is similar to the back propagation algorithm used for computing first-order partial derivatives (gradients). That is, computing the diagonal of Hessian is of the same order of complexity as computing gradients.

Figure 3: Accuracy versus average codeword length per network parameter after network quantization, Huffman coding and fine-tuning for 32-layer ResNet, when uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean are used.

Algorithm 1 Iterative solution for entropy-constrained network quantization
"}, {"section_index": "9", "section_name": "4.1 ENTROPY CODING", "section_text": "After quantizing network parameters by clustering, lossless data compression by variable-length binary coding can follow to compress the quantized values. There is a set of optimal codes that achieve the minimum average codeword length for a given source. Entropy is the theoretical limit of
the average codeword length per symbol that we can achieve by lossless data compression, as proved by Shannon (see, e.g., Cover & Thomas (2012, Section 5.3)). It is known that optimal codes achieve this limit with an overhead of less than 1 bit when only integer-length codewords are allowed, so optimal coding is also called entropy coding. Huffman coding is one of the entropy coding schemes commonly used when the source distribution is provided (see, e.g., Cover & Thomas (2012, Section 5.6)), or can be estimated.

In this section, we investigate how to solve the network quantization problem under a constraint on the compression ratio. In designing network quantization schemes, we not only want to minimize the performance loss but also want to maximize the compression ratio. In Section 3, we explored how to quantify and minimize the loss due to quantization. In this section, we investigate how to take the compression ratio into account properly in the optimization of network quantization.

Considering a compression ratio constraint in network quantization, we need to solve the clustering problem in (2) or (7) under the compression ratio constraint given by

$$\text{Compression ratio} = \frac{b}{\bar{b} + \frac{1}{N}\left(\sum_{i=1}^{k} b_i + kb\right)} > C, \quad \text{where} \quad \bar{b} = \frac{1}{N}\sum_{i=1}^{k} |\mathcal{C}_i|\,b_i, \tag{9}$$

which follows from (1). This optimization problem is too complex to solve for any arbitrary variable-length binary code since the average codeword length $\bar{b}$ can be arbitrary. However, we identify that it can be simplified if optimal codes, e.g., Huffman codes, are assumed to be used. In particular, optimal coding closely achieves the lower limit of the average source code length, i.e., entropy, and then we approximately have

$$\bar{b} \approx H = -\sum_{i=1}^{k} p_i \log_2 p_i, \tag{10}$$

where $H$ is the entropy of the quantized network parameters after clustering (i.e., the source), given that $p_i = |\mathcal{C}_i|/N$ is the ratio of the number of network parameters in cluster $\mathcal{C}_i$ to the number of all network parameters (i.e., the source distribution). Moreover, assuming that $N \gg k$, we have

$$\frac{1}{N}\left(\sum_{i=1}^{k} b_i + kb\right) \approx 0,$$

and hence the constraint in (9) approximately reduces to an entropy constraint:

$$H = -\sum_{i=1}^{k} p_i \log_2 p_i \le R, \tag{11}$$

where $R \approx b/C$. In summary, assuming that optimal coding is employed after clustering, one can approximately replace a compression ratio constraint with an entropy constraint for the clustering output. The network quantization problem is then translated into a quantization problem with an entropy constraint, which is called entropy-constrained scalar quantization (ECSQ) in information theory. Two efficient heuristic solutions for ECSQ are proposed for network quantization in the following subsections, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm for k-means clustering. The ECSQ problem is to solve

$$\underset{\mathcal{C}_1, \mathcal{C}_2, \ldots, \mathcal{C}_k}{\arg\min}\; J(\mathcal{C}_1, \mathcal{C}_2, \ldots, \mathcal{C}_k).$$

A heuristic iterative algorithm to solve this method of Lagrange multipliers for network quantization is presented in Algorithm 1. It is similar to Lloyd's algorithm for k-means clustering. The key difference is how to partition network parameters at the assignment step. In Lloyd's algorithm, the Euclidean distance (quantization error) is minimized. For ECSQ, the individual Lagrangian cost function, i.e., $d_\lambda(i, j)$ in (12), is minimized instead, which includes both the quantization error and the expected codeword length after entropy coding: in iteration $n+1$, each parameter $w_i$ is assigned via $\mathcal{C}_l^{(n+1)} \leftarrow \mathcal{C}_l^{(n+1)} \cup \{w_i\}$ for $l = \arg\min_j d_\lambda(i, j)$, after which each cluster center $c_j^{(n+1)}$ is recomputed as the Hessian-weighted mean of $\mathcal{C}_j^{(n+1)}$ and $p_j^{(n+1)} = |\mathcal{C}_j^{(n+1)}|/N$.
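A minimal sketch of one such iteration follows; it is our own illustration built from the definitions around (12), with λ, the cluster probabilities, and the Hessian-weighted update as described above, and all variable names ours:

```python
import numpy as np

def ecsq_iteration(w, h, centers, probs, lam):
    """One iteration of the iterative ECSQ solution (Algorithm 1).
    Assignment minimizes d_lambda(i, j) = h_i|w_i - c_j|^2 - lam*log2(p_j);
    the update recomputes Hessian-weighted means and cluster probabilities."""
    n, k = len(w), len(centers)
    # Assignment step: individual Lagrangian cost instead of plain distance.
    cost = (h[:, None] * (w[:, None] - centers[None, :]) ** 2
            - lam * np.log2(np.maximum(probs[None, :], 1e-12)))
    assign = np.argmin(cost, axis=1)
    # Update step: Hessian-weighted means and empirical probabilities.
    new_centers, new_probs = centers.copy(), np.zeros(k)
    for j in range(k):
        mask = assign == j
        new_probs[j] = mask.sum() / n
        if mask.any():
            new_centers[j] = np.sum(h[mask] * w[mask]) / np.sum(h[mask])
    return new_centers, new_probs, assign

# One toy iteration on random parameters:
w = np.random.randn(10_000)
h = np.abs(np.random.randn(10_000)) + 1e-8
centers = np.linspace(w.min(), w.max(), 16)
probs = np.full(16, 1 / 16)
centers, probs, assign = ecsq_iteration(w, h, centers, probs, lam=0.1)
```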
It is shown in Gish & Pierce (1968) that the uniform quantizer is asymptotically optimal in minimizing the mean square quantization error for any random source with a reasonably smooth density function as the resolution becomes infinite, i.e., as the number of clusters $k \to \infty$. This asymptotic result leads us to come up with a very simple but efficient network quantization scheme, as follows:

1. We first set uniformly spaced thresholds and divide the network parameters into clusters.
2. After determining the clusters, their quantized values (cluster centers) are obtained by taking the mean of the network parameters in each cluster.

Note that one can use the Hessian-weighted mean instead of the non-weighted mean in computing cluster centers in the second step above, in order to take the benefit of Hessian-weighting. A performance comparison of uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean can be found in Appendix A.2. (A minimal code sketch of this scheme is given at the end of Section 5.)

Although uniform quantization is a straightforward method, it has never been shown before in the literature that it is actually one of the most efficient quantization schemes for neural networks when optimal variable-length coding, e.g., Huffman coding, follows. We note that uniform quantization is not always good; it is inefficient for fixed-length coding, which is also first shown in this paper."}, {"section_index": "10", "section_name": "4.4 ITERATIVE ALGORITHM TO SOLVE ECSQ", "section_text": "Another scheme proposed to solve the ECSQ problem for network quantization is an iterative algorithm, which is similar to Lloyd's algorithm for k-means clustering. Although this iterative solution is more complicated than the uniform quantization in Section 4.3, it finds a local optimum for a given discrete source. An iterative algorithm to solve the general ECSQ problem is provided in Chou et al. (1989). We derive a similar iterative algorithm to solve the ECSQ problem for network quantization. The main difference from the method in Chou et al. (1989) is that we minimize the Hessian-weighted distortion measure instead of the non-weighted regular distortion measure for optimal quantization. The detailed algorithm and further discussion can be found in Appendix A.3."}, {"section_index": "11", "section_name": "5.1 EXPERIMENT MODELS", "section_text": "Our experiments are set up as follows:

- We employ the proposed network quantization methods to quantize all of the network parameters in a network together at once, as discussed in Section 3.6.
- We evaluate the performance of the proposed network quantization methods with and without network pruning. For a pruned model, we need to store not only the values of the unpruned parameters but also their respective indexes (locations) in the original model. For the index information, we compute index differences between unpruned network parameters in the original model and further compress them by Huffman coding, as in Han et al. (2015a).
- For Hessian computation, 50,000 samples of the training set are reused. We also evaluate the performance when Hessian is computed with 1,000 samples only.
- Finally, we evaluate the performance of our network quantization schemes using Hessian when its alternative is used instead, as discussed in Section 3.5. To this end, we retrain the considered neural networks with the Adam SGD optimizer and obtain the second moment estimates of gradients at the end of training. Then, we use the square roots of the second moment estimates instead of Hessian and evaluate the performance.

First, we evaluate our network quantization schemes on the MNIST data set with a simplified version of LeNet5 (LeCun et al., 1998), consisting of two convolutional layers and two fully-connected layers followed by a soft-max layer. It has a total of 431,080 parameters and achieves 99.25% accuracy. For a pruned model, we prune 91% of the original network parameters and fine-tune the rest.

Second, we experiment with our network quantization schemes on the CIFAR-10 data set (Krizhevsky, 2009) with a pre-trained 32-layer ResNet (He et al., 2015). The 32-layer ResNet consists of 464,154 parameters in total and achieves 92.58% accuracy. For a pruned model, we prune 80% of the original network parameters and fine-tune the rest.

Third, we evaluate our network quantization schemes with AlexNet (Krizhevsky et al., 2012) for the ImageNet ILSVRC-2012 data set (Russakovsky et al., 2015). We obtain a pre-trained AlexNet Caffe model, which achieves 57.16% top-1 accuracy. For a pruned model, we prune 89% of the parameters and fine-tune the rest. In fine-tuning, the Adam SGD optimizer is used in order to avoid the computation of Hessian by utilizing its alternative (see Section 3.5). However, the pruned model does not recover the original accuracy after fine-tuning with the Adam method; the top-1 accuracy recovered after pruning and fine-tuning is 56.00%. We are able to find a better pruned model achieving the original accuracy by pruning and retraining iteratively (Han et al., 2015b), which is however not used here."}, {"section_index": "12", "section_name": "5.2 EXPERIMENT RESULTS", "section_text": "Figure 1: Accuracy versus average codeword length per network parameter after network quantization for 32-layer ResNet.

We first present the quantization results without pruning for 32-layer ResNet in Figure 1, where the accuracy of the 32-layer ResNet is plotted against the average codeword length per network parameter after quantization. When fixed-length coding is employed, the proposed Hessian-weighted k-means clustering method performs the best, as expected. Observe that Hessian-weighted k-means clustering yields better accuracy than the others even after fine-tuning. On the other hand, when Huffman coding is employed, uniform quantization and the iterative algorithm for ECSQ outperform Hessian-weighted k-means clustering and k-means clustering. However, these two ECSQ solutions underperform Hessian-weighted k-means clustering and even k-means clustering when fixed-length coding is employed, since they are optimized for optimal variable-length coding.

[Figure 1 appears here: accuracy (%) versus codeword length (bits) for k-means, Hessian-weighted k-means, uniform quantization, and iterative ECSQ, under (a) fixed-length coding, (b) fixed-length coding + fine-tuning, (c) Huffman coding, and (d) Huffman coding + fine-tuning; plot axes removed.]
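As promised in Section 4.3, a minimal sketch of the uniform quantization scheme, with an entropy estimate standing in for the Huffman-coded average codeword length; this is our own illustration under those assumptions, not the authors' code:

```python
import numpy as np

def uniform_quantize(w, k):
    """Uniform quantization (Section 4.3): uniformly spaced thresholds,
    then the (non-weighted) mean of each cluster as its quantized value."""
    edges = np.linspace(w.min(), w.max(), k + 1)
    assign = np.clip(np.digitize(w, edges[1:-1]), 0, k - 1)
    centers = np.array([w[assign == j].mean() if (assign == j).any() else 0.0
                        for j in range(k)])
    return centers[assign], assign

def entropy_bits(assign, k):
    """Entropy of the cluster distribution: the approximate average
    codeword length per parameter under optimal (e.g., Huffman) coding."""
    p = np.bincount(assign, minlength=k) / len(assign)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

w = np.random.randn(100_000).astype(np.float32)
w_q, assign = uniform_quantize(w, k=64)
print("avg. codeword length approx.", entropy_bits(assign, 64), "bits (vs. 32)")
```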
BJh6Ztuxl
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointl learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.\nThere is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and represen- tations based on the hidden states of recurrent neural networks such as LSTMs The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture.\n1anguagolnlo lllonlllcyCaplu We propose a framework that facilitates better understanding of the encoded rep-. resentations. We define prediction tasks around isolated aspects of sentence struc-. ture (namely sentence length, word content, and word order), and score repre-. sentations by the ability to train a classifier to solve each prediction task when. using the representation as input. We demonstrate the potential contribution of the. approach by analyzing different sentence representation mechanisms. The analy-. sis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded. vector's dimensionality on the resulting representations.\nAndrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in Neural Infor mation Processing Systems, pp. 3061-3069, 2015.\nJeffrey L Elman. Distributed representations, simple recurrent networks, and grammatical structure Machine learning, 7(2-3):195-225, 1991."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "While sentence embeddings or sentence representations play a central role in recent deep learning approaches to NLP, little is known about the information that is captured by different sentence em bedding learning mechanisms. We propose a methodology facilitating fine-grained measuremen of some of the information encoded in sentence embeddings, as well as performing fine-grainec comparison of different sentence embedding methods\nAlex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur rent neural networks. In Proceedings of ICASSP, 2013..\nIn sentence embeddings, sentences, which are variable-length sequences of discrete symbols, are encoded into fixed length continuous vectors that are then used for further prediction tasks. A simple and common approach is producing word-level vectors using, e.g., word2vec (Mikolov et al.,. 2013a;b), and summing or averaging the vectors of the words participating in the sentence. This. continuous-bag-of-words (CBOw) approach disregards the word order in the sentence.1.\nAnother approach is the encoder-decoder architecture, producing models also known as sequence- to-sequence models (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014, inter alia). In this architecture, an encoder network (e.g. an LSTM) is used to produce a vector representation of the sentence, which is then fed as input into a decoder network that uses it to perform some prediction task (e.g. recreate the sentence, or produce a translation of it). 
The encoder and decoder networks are trained jointly in order to perform the final task.

¹We use the term CBOW to refer to a sentence representation that is composed of an average of the vectors of the words in the sentence, not to be confused with the training method by the same name which is used in the word2vec algorithm.

The trained LSTM encoder (when trained with an auto-encoder objective) does not rely on ordering patterns in the training sentences when encoding novel sequences. In contrast, the skip-thought encoder does rely on such patterns. Its performance on the other tasks is similar to the higher-dimensional LSTM encoder, which is impressive considering it was trained on a different corpus. Finally, the encoder-decoder's ability to recreate sentences (BLEU) is not entirely indicative of the quality of the encoder at representing aspects such as word identity and order. This suggests that BLEU is sub-optimal for model selection.

Marco Baroni, Georgiana Dinu, and German Kruszewski. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 238-247, Baltimore, Maryland, June 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/p14-1023."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.

The resulting sentence representations are opaque, and there is currently no good way of comparing different representations short of using them as input for different high-level semantic tasks (e.g. sentiment classification, entailment recognition, document retrieval, question answering, sentence similarity, etc.) and measuring how well they perform on these tasks. This is the approach taken by Li et al. (2015), Hill et al. (2016) and Kiros et al. (2015). This method of comparing sentence embeddings leaves a lot to be desired: the comparison is at a very coarse-grained level, does not tell us much about the kind of information that is encoded in the representation, and does not help us form generalizable conclusions.

Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. A hierarchical neural autoencoder for paragraphs and documents. arXiv preprint arXiv:1506.01057, 2015.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111-3119, 2013b.

Our Contribution We take a first step towards opening the black box of vector embeddings for sentences. We propose a methodology that facilitates comparing sentence embeddings on a much finer-grained level, and demonstrate its use by analyzing and comparing different sentence representations. We analyze sentence representation methods that are based on LSTM auto-encoders and the simple CBOW representation produced by averaging word2vec word embeddings. For each of CBOW and LSTM auto-encoder, we compare different numbers of dimensions, exploring the effect of the dimensionality on the resulting representation. We also provide some comparison to the
We also provide some comparison to the. skip-thought embeddings of Kiros et al. (2015).\nIn this work, we focus on what are arguably the three most basic characteristics of a sequence its length, the items within it, and their order. We investigate different sentence representations based on the capacity to which they encode these aspects. Our analysis of these low-level propertie. leads to interesting, actionable insights, exposing relative strengths and weaknesses of the differen representations.\nDonald B Rubin. Matching to remove bias in observational studies. Biometrics, pp. 159-183, 1973\nAllen Schmaltz, Alexander M Rush, and Stuart M Shieber. Word ordering without syntax. arXi preprint arXiv:1604.08633. 2016\nLimitations Focusing on low-level sentence properties also has limitations: The tasks focus or. measuring the preservation of surface aspects of the sentence and do not measure syntactic an. semantic generalization abilities; the tasks are not directly related to any specific downstream appli. cation (although the properties we test are important factors in many tasks - knowing that a mode is good at predicting length and word order is likely advantageous for syntactic parsing, while mod. els that excel at word content are good for text classification tasks). Dealing with these limitations. requires a complementary set of auxiliary tasks, which is outside the scope of this study and is lef. for future work.\nIlya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural net works. In Advances in neural information processing systems, pp. 3104-3112, 2014.\nThe study also suffers from the general limitations of empirical work: we do not prove general theorems but rather measure behaviors on several data points and attempt to draw conclusions from these measurements. There is always the risk that our conclusions only hold for the datasets on which we measured, and will not generalize. However, we do consider our large sample of sentences from Wikipedia to be representative of the English language, at least in terms of the three basic sentence properties that we study.\nSentence representations based on averaged word vectors are surprisingly effective, and encode a non-trivial amount of information regarding sentence length. The information they contain\nSentence representations based on averaged word vectors are surprisingly effective, and encode a non-trivial amount of information regarding sentence length. The information they contair\nOmer Levy and Yoav Goldberg. Linguistic regularities in sparse and explicit word representations In Proc. of CONLL, pp. 171-180, Baltimore, Maryland, 2014.\nKishore Papineni. Salim Roukos. Todd Ward. and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 4Oth annual meeting on association for computational linguistics, pp. 311-318. Association for Computational Linguistics, 2002\nMatthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 2012.\nmanner (due to regularities in the natural language data).. LSTM auto-encoders are very effective at encoding word order and word content.. Increasing the number of dimensions benefits some tasks more than others.. Adding more hidden units sometimes degrades the encoders' ability to encode word content. Thi. degradation is not correlated with the BLEU scores of the decoder, suggesting that BLEU ove. the decoder output is sub-optimal for evaluating the encoders' quality.. 
LSTM encoders trained as auto-encoders do not rely on ordering patterns in the training sentence. when encoding novel sentences, while the skip-thought encoders do rely on such patterns..\nSentence Encoders The bag-of-words (CBOw) and encoder-decoder models are trained on 1 million sentences from a 2012 Wikipedia dump with vocabulary size of 50,000 tokens. We use NLTK (Bird, 2006) for tokenization, and constrain sentence lengths to be between 5 and 70 words\nFor the encoder-decoder models, we use an in-house implementation using the Torch7 toolkit (Col lobert et al., 2011). The decoder is trained as a language model, attempting to predict the correct word at each time step using a negative-log-likelihood objective (cross-entropy loss over the softmax layer). We use one layer of LSTM cells for the encoder and decoder using the implementation in Leonard et al. (2015)."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Word-level distributed representations have been analyzed rather extensively, both empirically anc theoretically, for example by Baroni et al. (2014), Levy & Goldberg (2014) and Levy et al. (2015) In contrast, the analysis of sentence-level representations has been much more limited. Commonl used approaches is to either compare the performance of the sentence embeddings on down-stream tasks (Hill et al., 2016), or to analyze models, specifically trained for predefined task (Schmaltz et al., 2016; Sutskever et al., 2011).\nWe use the same size for word and sentence representations (i.e. d = k), and train models o sizes k E {100, 300, 500, 750, 1000}. We follow previous work on sequence-to-sequence learn ing (Sutskever et al., 2014; Li et al., 2015) in reversing the input sentences and clipping gradients Word vectors are initialized to random values\nWe evaluate the encoder-decoder models using BLEU scores (Papineni et al., 2002), a popular ma chine translation evaluation metric that is also used to evaluate auto-encoder models (Li et al.. 2015) BLEU score measures how well the original sentence is recreated, and can be thought of as a proxy for the quality of the encoded representation. We compare it with the performance of the models on the three prediction tasks. The results of the higher-dimensional models are comparable to those found in the literature. which serves as a sanity check for the quality of the learned models.\nWhile the resulting analysis reveals differences in performance of different models, it does not ade-. quately explain what kind of linguistic properties of the sentence they capture. Other studies analyze. the hidden units learned by neural networks when training a sentence representation model (Elman. 1991; Karpathy et al., 2015; Kadar et al., 2016). This approach often associates certain linguistic. aspects with certain hidden units. Kadar et al. (2016) propose a methodology for quantifying the contribution of each input word to a resulting GRU-based encoding. These methods depend on the specific learning model and cannot be applied to arbitrary representations. Moreover, it is still not. clear what is captured by the final sentence embeddings..\nAuxiliary Task Classifier For the auxiliary task predictors, we use multi-layer perceptrons with a single hidden layer and ReLU activation, which were carefully tuned for each of the tasks. 
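To make the predictor shape concrete, here is a minimal numpy sketch (ours, not the authors' released code) of the classifier described above: one ReLU hidden layer the size of the input, with dropout applied before a task-specific softmax. All names here are illustrative.

```python
import numpy as np

def mlp_predict(x, W1, b1, W2, b2, dropout_rate=0.8, train=False, rng=None):
    """One-hidden-layer MLP: ReLU hidden layer, dropout before the softmax.
    Shapes: x (d,), W1 (d, d), W2 (d, n_classes); hidden size equals input size."""
    h = np.maximum(0.0, x @ W1 + b1)            # ReLU hidden layer
    if train:                                   # dropout only at training time
        mask = rng.random(h.shape) >= dropout_rate
        h = h * mask / (1.0 - dropout_rate)     # inverted-dropout scaling
    logits = h @ W2 + b2
    z = np.exp(logits - logits.max())           # numerically stable softmax
    return z / z.sum()

# Hypothetical usage: a length classifier over 8 binned-length classes.
rng = np.random.default_rng(0)
d, n_classes = 100, 8
x = rng.normal(size=d)                          # a sentence embedding
W1, b1 = rng.normal(scale=0.1, size=(d, d)), np.zeros(d)
W2, b2 = rng.normal(scale=0.1, size=(d, n_classes)), np.zeros(n_classes)
print(mlp_predict(x, W1, b1, W2, b2).shape)     # (8,)
```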
We experimented with several network architectures prior to arriving at this configuration.\nOur work is orthogonal and complementary to the previous efforts: we analyze the resulting sentence embeddings by devising auxiliary prediction tasks for core sentence properties. The methodology. we purpose is general and can be applied to any sentence representation model..\nFurther details regarding the training and architectures of both the sentence encoders and auxiliary task classifiers are available in the Appendix"}, {"section_index": "4", "section_name": "ENCODER DECODER", "section_text": "We aim to inspect and compare encoded sentence vectors in a task-independent manner. The main idea of our method is to focus on isolated aspects of sentence structure, and design experiments to measure to what extent each aspect is captured in a given representation\nParameters of the encoder-decoder were tuned on a dedicated validation set. We experienced with different learning rates (0.1, 0.01, 0.001), dropout-rates (0.1, 0.2, 0.3, 0.5) (Hinton et al., 2012) and optimization techniques (AdaGrad (Duchi et al., 2011), AdaDelta (Zeiler, 2012), Adam (Kingma & Ba, 2014) and RMSprop (Tieleman & Hinton, 2012)). We also experimented with different batch sizes (8, 16, 32), and found improvement in runtime but no significant improvement in performance\nIn each experiment, we formulate a prediction task. Given a sentence representation method, w. create training data and train a classifier to predict a specific sentence property (e.g. their length based on their vector representations. We then measure how well we can train a model to perform the task. The basic premise is that if we cannot train a classifier to predict some property of a sentenc based on its vector representation, then this property is not encoded in the representation (or rather not encoded in a useful way, considering how the representation is likely to be used).\nBased on the tuned parameters, we trained the encoder-decoder models on a single GPU (NVIDIA Tesla K40), with mini-batches of 32 sentences, learning rate of O.01, dropout rate of 0.1, and the AdaGrad optimizer; training takes approximately 10 days and is stopped after 5 epochs with no loss improvement on a validation set."}, {"section_index": "5", "section_name": "PREDICTION TASKS", "section_text": "Parameters for the predictions tasks as well as classifier architecture were tuned on a dedicated vali dation set. We experimented with one, two and three layer feed-forward networks using ReLU (Nai & Hinton, 2010; Glorot et al., 2011), tanh and sigmoid activation functions. We tried different hid den layer sizes: the same as the input size, twice the input size and one and a half times the inpu size. We tried different learning rates (0.1, 0.01, 0.001), dropout rates (0.1, 0.3, 0.5, 0.8) and differ ent optimization techniques (AdaGrad, AdaDelta and Adam)."}, {"section_index": "6", "section_name": "3.1 THE PREDICTION TASKS", "section_text": "We now turn to describe the specific prediction tasks. We use lower case italics (s, w) to refer to sentences and words, and boldface to refer to their corresponding vector representations (s, w) When more than one element is considered, they are distinguished by indices (w1, w2, w1, w?).\nOur underlying corpus for generating the classification instances consists of 20o,o0o Wikipedia. 
sentences, where 150,000 sentences are used to generate training examples, and 25,000 sentences are used for each of the test and development examples. These sentences are a subset of the training set that was used to train the original sentence encoders. The idea behind this setup is to test the models on what are presumably their best embeddings.

4 https://radimrehurek.com/gensim

can also be used to reconstruct a non-trivial amount of the original word order in a probabilistic manner (due to regularities in the natural language data).

For the CBOW model, we train Skip-gram word vectors (Mikolov et al., 2013a), with hierarchical softmax and a window size of 5 words, using the Gensim implementation.4 We control for the embedding size k and train word vectors of sizes k ∈ {100, 300, 500, 750, 1000}.

The experiments in this work focus on low-level properties of sentences - the sentence length, the identities of words in a sentence, and the order of the words. We consider these to be the core elements of sentence structure. Generalizing the approach to higher-level semantic and syntactic properties holds great potential, which we hope will be explored in future work, by us or by others.

Our best tuned classifier, which we use for all experiments, is a feed-forward network with one hidden layer and a ReLU activation function. We set the size of the hidden layer to be the same size as the input vector. We place a softmax layer on top whose size varies according to the specific task, and apply dropout before the softmax layer. We optimize the log-likelihood using AdaGrad. We use a dropout rate of 0.8 and a learning rate of 0.01. Training is stopped after 5 epochs with no loss improvement on the development set. Training was done on a single GPU (NVIDIA Tesla K40).

Length Task This task measures to what extent the sentence representation encodes its length. Given a sentence representation s ∈ R^k, the goal of the classifier is to predict the length (number of words) of the original sentence s. The task is formulated as multiclass classification, with eight output classes corresponding to binned lengths.2 The resulting dataset is reasonably balanced, with a majority class (lengths 5-8 words) of 5,182 test instances and a minority class (34-70 words) of 1,084 test instances. Predicting the majority class results in a classification accuracy of 20.1%.

Word-content Task This task measures to what extent the sentence representation encodes the identities of words within it. Given a sentence representation s ∈ R^k and a word representation w ∈ R^d, the goal of the classifier is to determine whether w appears in s, with access to neither the original word nor the original sentence. This is formulated as a binary classification task, where the input is the concatenation of s and w."}, {"section_index": "7", "section_name": "10 ADDITIONAL EXPERIMENTS - CONTENT TASK", "section_text": "How well do the models preserve content when we increase the sentence length? In Fig. 5 we plot content prediction accuracy vs. sentence length for different models.

(Figure 5 here: content prediction accuracy vs. sentence length, 5-30 words, for CBOW 100/300 and ED 500/750/1000; see the caption below.)

To create a dataset for this task, we need to provide positive and negative examples. Obtaining positive examples is straightforward: we simply pick a random word from each sentence. For negative examples, we could pick a random word from the entire corpus. However, we found that such a dataset tends to push models to memorize words as either positive or negative words, instead of finding their relation to the sentence representation. Therefore, for each sentence we pick as a negative example a word that appears as a positive example somewhere in our dataset, but does not appear in the given sentence. This forces the models to learn a relationship between word and sentence representations. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%.
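A minimal sketch of this example-generation scheme (ours, not the authors' code): positives are words sampled from the sentence, negatives are words that occur as positives elsewhere in the data but are absent from the given sentence.

```python
import random

def content_task_examples(sentences, seed=0):
    """Build balanced word-content examples: (sentence, word, label)."""
    rng = random.Random(seed)
    positives = [(s, rng.choice(s)) for s in sentences]   # one positive word per sentence
    positive_vocab = sorted({w for _, w in positives})
    data = []
    for s, w in positives:
        data.append((s, w, 1))
        # negative: a word that is a positive somewhere, but not in this sentence
        neg = rng.choice(positive_vocab)
        while neg in s:
            neg = rng.choice(positive_vocab)
        data.append((s, neg, 0))
    return data

sents = [["the", "cat", "sat"], ["dogs", "bark", "loudly"], ["rain", "fell", "today"]]
print(content_task_examples(sents)[:2])
```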
Word-order Task This task measures to what extent the sentence representation encodes word order. Given a sentence representation s ∈ R^k and the representations of two words that appear in the sentence, w1, w2 ∈ R^d, the goal of the classifier is to predict whether w1 appears before or after w2 in the original sentence s. Again, the model has no access to the original sentence and the two words. This is formulated as a binary classification task, where the input is a concatenation of the three vectors s, w1 and w2.

Figure 5: Content accuracy vs. sentence length for selected models.

For each sentence in the corpus, we simply pick two random words from the sentence as a positive example. For negative examples, we flip the order of the words. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%."}, {"section_index": "8", "section_name": "APPENDIX III: SIGNIFICANCE TESTS", "section_text": "In this section we report the significance tests we conducted in order to evaluate our findings. To do so, we use the paired t-test (Rubin, 1973)."}, {"section_index": "9", "section_name": "4 SENTENCE REPRESENTATION MODELS", "section_text": "

Dim.   Length      Word content   Word order
100    1.77e-147   0.0            1.83e-296
300    0.0         0.0            0.0
500    0.0         0.0            0.0
750    0.0         0.0            0.0
1000   0.0         0.0            0.0

Table 2: P-values for ED vs. CBOW over the different dimensions and tasks. For example, in the row where dim equals 100, we compute the p-value of ED compared to CBOW with embedding size 100 on all three tasks.

Continuous Bag-of-words (CBOW) This simple yet effective text representation consists of performing element-wise averaging of word vectors that are obtained using a word-embedding method such as word2vec. Despite its obliviousness to word order, CBOW has proven useful in different tasks (Hill et al., 2016) and is easy to compute, making it an important model class to consider.

Encoder-Decoder (ED) The encoder-decoder framework has been successfully used in a number of sequence-to-sequence learning tasks (Sutskever et al., 2014; Bahdanau et al., 2014; Dai & Le, 2015; Li et al., 2015). After the encoding phase, a decoder maps the sentence representation back to the sequence of words:

DEC : s ∈ R^k ↦ ŝ = {w1, w2, ..., wN}

Dim.           Length      Word content   Word order
100 vs. 300    0.0         8.56e-190      0.0
300 vs. 500    7.3e-71     4.20e-05       5.48e-56
500 vs. 750    3.64e-175   4.46e-65       0.11
750 vs. 1000   1.37e-111   2.35e-243      4.32e-61

Table 3: P-values for ED models over the different dimensions and tasks.

As expected, all models suffer a drop in content accuracy on longer sentences. The degradation is roughly linear in the sentence length. For the encoder-decoder, models with fewer dimensions seem to degrade more slowly.
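The paired t-test mentioned above can be run directly with SciPy; a sketch, assuming `acc_a` and `acc_b` are hypothetical per-example correctness indicators (1 = correct) for two models evaluated on the same test instances:

```python
import numpy as np
from scipy import stats

# Per-instance correctness for two models on the same test set (toy values).
acc_a = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])
acc_b = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0])

t_stat, p_value = stats.ttest_rel(acc_a, acc_b)  # paired t-test
print(t_stat, p_value)
```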
All the results reported in the summary of findings are highly significant (p-value < 0.0001). The ones we found to be not significant (p-value > 0.03) are those whose accuracies barely differ, i.e., ED with size 500 vs. ED with size 750 tested on the word order task (p-value = 0.11), or CBOW with dimensions 750 and 1000 (p-value = 0.3).

Given a sentence s = {w1, w2, ..., wN}, we aim to find a sentence representation s using an encoder:

ENC : s = {w1, w2, ..., wN} ↦ s ∈ R^k

The encoding process usually assumes a vector representation wi ∈ R^d for each word in the vocabulary. In general, the word and sentence embedding dimensions, d and k, need not be the same. The word vectors can be learned together with other encoder parameters or pre-trained. Below we describe different instantiations of ENC.

Dim.           Length      Word content   Word order
100 vs. 300    0.0         0.0            1.5e-33
300 vs. 500    1.47e-215   0.0            3.06e-64
500 vs. 750    0.68        0.032          0.05
750 vs. 1000   4.44e-32    0.3            0.08

Table 4: P-values for CBOW models over the different dimensions and tasks.

(Figure 1 here: task accuracy vs. representation dimensions, 100-1000, for ED and CBOW, with ED BLEU overlaid; panels: (a) Length test, (b) Content test, (c) Order test.)

Figure 1: Task accuracy vs. embedding size for different models; ED BLEU scores given for reference.

Here we investigate the specific case of an auto-encoder, where the entire encoding-decoding process can be trained end-to-end from a corpus of raw texts. The sentence representation is the final output vector of the encoder. We use a long short-term memory (LSTM) recurrent neural network (Hochreiter & Schmidhuber, 1997; Graves et al., 2013) for both encoder and decoder. The LSTM decoder is similar to the LSTM encoder but with different weights."}, {"section_index": "10", "section_name": "EXPERIMENTAL SETUP", "section_text": "The bag-of-words (CBOW) and encoder-decoder models are trained on 1 million sentences from a 2012 Wikipedia dump with a vocabulary size of 50,000 tokens. We use NLTK (Bird, 2006) for tokenization, and constrain sentence lengths to be between 5 and 70 words. For both models we control the embedding size k and train word and sentence vectors of sizes k ∈ {100, 300, 500, 750, 1000}. More details about the experimental setup are available in the Appendix.

In this section we provide a detailed description of our experimental results along with their analysis. For each of the three main tests - length, content and order - we investigate the performance of different sentence representation models across embedding size."}, {"section_index": "11", "section_name": "6.1 LENGTH EXPERIMENTS", "section_text": "We begin by investigating how well the different representations encode sentence length. Figure 1a shows the performance of the different models on the length task, as well as the BLEU obtained by the LSTM encoder-decoder (ED).

With enough dimensions, the LSTM embeddings are very good at capturing sentence length, obtaining accuracies between 82% and 87%. Length prediction ability is not perfectly correlated with BLEU scores: from 300 dimensions onward the length prediction accuracies of the LSTM remain relatively stable, while the BLEU score of the encoder-decoder model increases as more dimensions are added.

Somewhat surprisingly, the CBOW model also encodes a fair amount of length information, with length prediction accuracies of 45% to 65%, way above the 20% baseline. This is remarkable, as the CBOW representation consists of averaged word vectors, and we did not expect it to encode length at all. We return to CBOW's exceptional performance in Section 7.
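Since CBOW keeps coming up, it is worth recalling just how simple this encoder is; a minimal numpy sketch (ours, not the paper's code), assuming `word_vecs` maps tokens to d-dimensional vectors:

```python
import numpy as np

def cbow_encode(sentence, word_vecs):
    """CBOW sentence representation: element-wise average of word vectors."""
    return np.mean([word_vecs[w] for w in sentence], axis=0)

word_vecs = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0])}
print(cbow_encode(["a", "b"], word_vecs))  # [0.5, 0.5]
```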
To what extent do the different sentence representations encode the identities of the words in the sentence? Figure 1b visualizes the performance of our models on the word content test.

All the representations encode some amount of word information, and clearly outperform the random baseline of 50%. Some trends are worth noting. While the capacity of the LSTM encoder to preserve word identities generally increases when adding dimensions, the performance peaks at 750 dimensions and drops afterwards. This stands in contrast to the BLEU score of the respective encoder-decoder models. We hypothesize that this occurs because a sizable part of the auto-encoder performance comes from the decoder, which also improves as we add more dimensions. At 1000 dimensions, the decoder's language model may be strong enough to allow the representation produced by the encoder to be less informative with regard to word content.

CBOW representations with low-dimensional vectors (100 and 300 dimensions) perform exceptionally well, outperforming the more complex, sequence-aware models by a wide margin. If your task requires access to word identities, it is worth considering this simple representation. Interestingly, CBOW scores drop at higher dimensions."}, {"section_index": "12", "section_name": "6.3 WORD ORDER EXPERIMENTS", "section_text": "Figure 1c shows the performance of the different models on the order test. The LSTM encoders are very capable of encoding word order, with LSTM-1000 allowing the recovery of word order in 91% of the cases. Similar to the length test, LSTM order prediction accuracy is only loosely correlated with BLEU scores. It is worth noting that increasing the representation size helps the LSTM encoder to better encode order information.

Surprisingly, the CBOW encodings manage to reach an accuracy of 70% on the word order task, 20% above the baseline. This is remarkable as, by definition, the CBOW encoder does not attempt to preserve word order information. One way to explain this is by considering distribution patterns of words in natural language sentences: some words tend to appear before others. In the next section we analyze the effect of natural language on the different models.

Natural language imposes many constraints on sentence structure. To what extent do the different encoders rely on specific properties of word distributions in natural language sentences when encoding sentences? To account for this, we perform additional experiments in which we attempt to control for the effect of natural language.

How can CBOW encode sentence length? Is the ability of CBOW embeddings to encode length related to specific words being indicative of longer or shorter sentences? To control for this, we created a synthetic dataset where each word in each sentence is replaced by a random word from the dictionary, and re-ran the length test for the CBOW embeddings using this dataset. As Figure 2a shows, this only leads to a slight decrease in accuracy, indicating that the identity of the words is not the main component in CBOW's success at predicting length.

(Figure 2 here: (a) length prediction accuracy vs. representation dimensions for CBOW on natural and synthetic random-word sentences; (b) embedding norm vs. sentence length.)

An alternative explanation for CBOW's ability to encode sentence length is given by considering the norms of the sentence embeddings. Indeed, Figure 2b shows that the embedding norm decreases as sentences grow longer. We believe this is one of the main reasons for the strong CBOW results.
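The shrinking-norm effect is easy to reproduce; a small simulation with random stand-in "word vectors" (a sketch, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = rng.normal(size=(50000, 300))          # zero-centered "word vectors"
for n in [5, 10, 20, 40]:
    sent = vocab[rng.integers(0, 50000, size=n)]
    avg = sent.mean(axis=0)                    # CBOW-style sentence vector
    print(n, np.linalg.norm(avg))              # norm shrinks as n grows
```

In expectation the norm decays roughly like 1/sqrt(n), matching the central-limit argument below.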
While the correlation between the number of averaged vectors and the resulting norm surprised us, in retrospect it is an expected behavior that has sound mathematical foundations. To understand the behavior, consider the different word vectors to be random variables, with the values in each dimension centered roughly around zero. Both the central limit theorem and Hoeffding's inequality tell us that as we add more samples, the expected average of the values will better approximate the true mean, causing the norm of the average vector to decrease. We expect the correlation between the sentence length and its norm to be more pronounced with shorter sentences (above some number of samples we will already be very close to the true mean, and the norm will not decrease further), a behavior which we indeed observe in practice.

How does CBOW encode word order? The surprisingly strong performance of the CBOW model on the order task made us hypothesize that much of the word order information is captured in general natural language word order statistics.

To investigate this, we re-run the word order tests, but this time drop the sentence embedding at training and testing time, learning from the word-pairs alone. In other words, we feed the network as input two word embeddings and ask which word comes first in the sentence. This test isolates general word order statistics of language from information that is contained in the sentence embedding (Fig. 3).

(Figure 3 here: order accuracy vs. representation dimensions for ED, ED without sentence, CBOW, and CBOW without sentence.)

Figure 3: Order accuracy with and without sentence representation for ED and CBOW models.

The difference between including and removing the sentence embeddings when using the CBOW model is minor, while the LSTM-ED suffers a significant drop. Clearly, the LSTM-ED model encodes word order, while the prediction ability of CBOW is mostly explained by general language statistics. However, CBOW does benefit from the sentence to some extent: we observe a gain of ~3% accuracy points when the CBOW tests are allowed access to the sentence representation. This may be explained by higher-order statistics of correlation between word order patterns and the occurrences of specific words."}, {"section_index": "13", "section_name": "How important is English word order for encoding sentences?", "section_text": "To what extent are the models trained to rely on natural language word order when encoding sentences? To control for this, we create a synthetic dataset, PERMUTED, in which the word order in each sentence is randomly permuted. Then, we repeat the length, content and order experiments using the PERMUTED dataset (we still use the original sentence encoders that are trained on non-permuted sentences). While the permuted sentence representation is the same for CBOW, it is completely different when generated by the encoder-decoder.
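Constructing PERMUTED is straightforward; a sketch of the kind of preprocessing this implies (ours, not the authors' code):

```python
import random

def make_permuted(sentences, seed=0):
    """Return a copy of the corpus with the words of each sentence shuffled."""
    rng = random.Random(seed)
    permuted = []
    for sent in sentences:
        tokens = list(sent)
        rng.shuffle(tokens)
        permuted.append(tokens)
    return permuted

print(make_permuted([["the", "ball", "fell", "fast"]]))
```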
(Figure 4 here: length, content and order accuracy vs. representation dimensions for CBOW, permuted CBOW, encoder-decoder, and permuted ED; panels: (a) Length test, (b) Content test, (c) Order test.)

Figure 4: Results for length, content and order tests on natural and permuted sentences.

Results are presented in Fig. 4. When considering CBOW embeddings, word order accuracy drops to chance level, as expected, while results on the other tests remain the same. Moving to the LSTM encoder-decoder, the results on all three tests are comparable to the ones using non-permuted sentences. These results are somewhat surprising since the models were originally trained on "real", non-permuted sentences. This indicates that the LSTM encoder-decoder is a general-purpose sequence encoder that for the most part does not rely on word ordering properties of natural language when encoding sentences. The small and consistent drop in word order accuracy on the permuted sentences can be attributed to the encoder relying on natural language word order to some extent, but can also be explained by the word order prediction task becoming harder due to the inability to use general word order statistics. The results suggest that a trained encoder will transfer well across different natural language domains, as long as the vocabularies remain stable. When considering the decoder's BLEU score on the permuted dataset (not shown), we do see a dramatic decrease in accuracy. For example, the LSTM encoder-decoder with 1000 dimensions drops from 32.5 to 8 BLEU score. These results suggest that the decoder, which is thrown away, contains most of the language-specific information.

In addition to the experiments on CBOW and LSTM encoders, we also experiment with the skip-thought vectors model (Kiros et al., 2015). This model extends the idea of the auto-encoder to neighboring sentences.

Given a sentence si, it first encodes it using an RNN, similar to the auto-encoder model. However, instead of predicting the original sentence, skip-thought predicts the preceding and following sentences, si-1 and si+1. The encoder and decoder are implemented with gated recurrent units (Cho et al., 2014).

Here, we deviate from the controlled environment and use the authors' provided model3 with the recommended embedding size of 4800. This makes the direct comparison of the models "unfair". However, our aim is not to decide which is the "best" model but rather to show how our method can be used to measure the kinds of information captured by different representations.

Table 1 summarizes the performance of the skip-thought embeddings in each of the prediction tasks, on both the PERMUTED and the original dataset.

Table 1: Classification accuracy for the prediction tasks using skip-thought embeddings."}, {"section_index": "14", "section_name": "9 CONCLUSION", "section_text": "We presented a methodology for performing fine-grained analysis of sentence embeddings using auxiliary prediction tasks. Our analysis reveals some properties of sentence embedding methods.

CBOW is surprisingly effective - in addition to being very strong at content, it is also predictive of length, and can be used to reconstruct a non-trivial amount of the original word order. 300 dimensions perform best, with greatly degraded word-content prediction performance on higher dimensions.
With enough dimensions, LSTM auto-encoders are very effective at encoding word order and word content information. Increasing the dimensionality of the LSTM encoder does not significantly improve its ability to encode length, but does increase its ability to encode content and order information. 500-dimensional embeddings are already quite effective for encoding word order, with little gains beyond that. Word content accuracy peaks at 750 dimensions and drops at 1000, suggesting that larger is not always better.

3 https://github.com/ryankiros/skip-thoughts

            Length   Word content   Word order
Original    82.1%    79.7%          81.1%
Permuted    68.2%    76.4%          76.5%

The performance of the skip-thought embeddings is well above the baselines and roughly similar for all tasks. Its performance is similar to the higher-dimensional encoder-decoder models, except in the order task, where it lags somewhat behind. However, we note that the results are not directly comparable, as skip-thought was trained on a different corpus.

The more interesting finding is its performance on the PERMUTED sentences. In this setting we see a large drop. In contrast to the LSTM encoder-decoder, skip-thought's ability to predict length and word content does degrade significantly on the permuted sentences, suggesting that the encoding process of the skip-thought model is indeed specialized towards natural language texts."}]
[{"section_index": "0", "section_name": "MACHINE SOLVER FOR PHYSICS WORD PROBLEMS", "section_text": "the classifier can work with partial questions, in the end all but 2 questions are classified correctly Therefore, the combined accuracy of the two neural networks, for the purpose of solving the physics problems, is 99.8%.\nMegan Leszczynski & Jose Moreira\nThere are several opportunities for future work. First, we would like to investigate more deeply hou. our neural networks work. In particular, what features of the word problem they are identifying anc. how specific units are responsible for that identification. Second, we could extend our solver by con sidering more complex physical situations, including additional forces, three-dimensional motion multiple objects, and so on. We would have to extend our canonical dynamical system to represen. those situations and/or use a collection of dynamical systems. We expect that the complexity of the. neural networks and the training/validation/test sets will grow accordingly. Finally, the more am. bitious goal would be to remove the canonical dynamical system(s) and train the networks to build. their own. We believe this would be closer to the way humans solve these physics problems..\nIBM T.J. Watson Research Center Yorktown Heights. NY 10598 USA\nmel255@cornell.edu, imoreira@us.ibm.com\nWe build a machine solver for word problems on the physics of a free falling object under constant acceleration of gravity. Each problem consists of a formulation part, describing the setting, and a question part asking for the value of an unknown. Our solver consists of two long short-term memory recurrent neural networks and a numerical integrator. The first neural network (the labeler) labels each word of the problem, identifying the physical parameters and the question part of the problem. The second neural network (the classifier) identifies what is being asked in the question. Using the information extracted by both networks, the numerical integrator computes the solution. We observe that the classifier is resilient to errors made by the labeler, which does a better job of identifying the physics parameters than the question. Training, validation and test sets of problems are generated from a grammar, with validation and test problems structurally different from the training problems. The overall accuracy of the solver on the test cases is 99.8%."}, {"section_index": "1", "section_name": "REFERENCES", "section_text": "Martin Abadi et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems 2015. Software available from http:/tensorflow.org."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "We present a complete system architecture for a machine solver that automatically solves a class of physics word problems, namely classical mechanics of a point particle in free fall. This domair allows us to formulate one dynamical system to which all the physics problems in this domain car be mapped. The dynamical system describes how the state of the particle, defined by its locatior and velocity, changes over time. Correspondingly, the initial conditions for the dynamical systen include the location and velocity of the particle at the time origin..\nGiven the word problem as input, the solver must first learn to extract the parameters needed t produce the dynamical system and also learn to identify the type of question. Two independentl trained recurrent neural networks are used to complete these tasks. 
The first neural network, referred to as the labeler, learns to find the dynamical system parameters and locate the question within the problem statement. The second neural network, referred to as the classifier, identifies the type of question. Finally, the solver uses a numerical integrator to solve the dynamical system and produce the solution. We use a problem generator in order to produce disjoint datasets as input to the system for training and testing. The generator produces short-answer high school-level physics word problems with mixed units.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, November 1997. ISSN 0899-7667.

Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to solve arithmetic word problems with verb categorization. In EMNLP, pp. 523-533, 2014.

Automatically solving word problems has been a research interest of the natural language processing community for some time, particularly with math word problems. The main challenge is to develop a semantic representation of the word problem. Kushman et al. (2014) learned to represent a mathematical word problem with a system of equations, by aligning words in the word problem to templates. While their technique learns to induce multiple templates and assumes knowledge of numbers and nouns, we assume no knowledge of the words in the text but only map to one template.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2014.

Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. Learning to automatically solve algebra word problems. 2014."}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), 2014b.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MIT Press. Book available from http://www.deeplearningbook.org, 2016.

After a brief related work section, we provide a more detailed description of the class of physics problems we address. We proceed to describe how the machine solver works and present experimental results. We conclude with a summary of our work and proposals for future work. The appendices contain additional details that did not fit in the body of the paper.

Another study to solve math word problems was done by Hosseini et al. (2014). This study also assumes the ability to identify numbers and nouns in the text, and uses a dependency parser to determine relationships between words in the text. Like the other study, this approach generalizes to math word problems that require different equations. Shi et al. (2015) similarly used a parser to solve math word problems. However, their parser maps the word problems to a carefully defined language they created, called DOL, from which equations can be derived. Rather than use a parser to break down the word problems, we use neural networks to learn to identify key pieces of information. Our study is, to the best of our knowledge, the first to apply recurrent neural networks to the task of solving word problems.
We chose to use recurrent neural networks (RNNs) for the labeler and the classifier, as both of their inputs consist of sequences of words. Recurrent neural networks are commonly used to process sequences, and as a result have found application in natural language processing tasks such as machine translation (Cho et al., 2014b) and speech recognition (Graves et al., 2013). After experimenting with different models, we obtained the most success with Long Short-Term Memory (LSTM) variants of RNNs. For additional discussion on RNNs in general, and LSTMs in particular, we refer the reader to Appendix A.

Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. Automatically solving number word problems by semantic parsing and reasoning. In EMNLP, 2015.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. The Journal of Machine Learning Research, 9:2579-2605, 2008."}, {"section_index": "4", "section_name": "RECURRENT NEURAL NETWORKS", "section_text": "We consider the following class of physical systems (see Figure 1(a)): In a two-dimensional space with gravity producing a downward constant acceleration g, there is one particle in free fall. That is, no forces other than gravity are acting on the particle. Movement of the particle starts at time t = 0 with an initial position defined by displacements d1 and d2 and initial velocity with components v1 and v2.

The labeler and classifier are both recurrent neural networks (RNNs). We provide background information on RNNs in this section, followed by an overview of Long Short-Term Memory (LSTM) networks, which are an advanced type of RNN and were used to build our networks. A recurrent neural network receives the previous values of the hidden layer as input in addition to the current input values into the network. Thus each hidden unit retains information about the history of the sequence. As explained in Goodfellow et al. (2016), the fundamental behavior of recurrent neural networks can be captured in the following equation:

$$h^{(t)} = f\left(h^{(t-1)}, x^{(t)}; \theta\right)$$

where h(t) represents the state of the RNN unit at time t, x(t) represents the current input, and θ represents the weights and biases. The function f is usually the hyperbolic tangent (Karpathy et al., 2015). It is important to note that the weights and biases are reused across time. Thus, while an RNN with one hidden layer can be unfolded in time into many layers, the weights and biases between each of the unfolded layers are shared.

The time behavior of the particle can be represented by the dynamical system shown in Figure 1(b). The state vector x(t) = [x1(t), x2(t), ẋ1(t), ẋ2(t)]ᵀ consists of two positions and two velocities, and its derivative depends only on itself and the acceleration of gravity, as shown in the figure. Combined with the initial condition x(0) = [d1, d2, v1, v2]ᵀ, the differential equation produces a unique solution:

$$\dot{x}(t) = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 0 \\ 0 \\ -g \end{bmatrix}, \qquad x(0) = \begin{bmatrix} d_1 \\ d_2 \\ v_1 \\ v_2 \end{bmatrix}$$

Figure 1: Physics domain (a): We consider a two-dimensional space with a free-falling particle. Displacements d1 and d2 define the initial position of the particle, while v1 and v2 define its initial velocity. Gravity produces a constant acceleration g pointing straight down. The behavior of the particle is defined by the dynamical system shown in (b).
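The dynamical system above can be integrated directly; a minimal sketch with SciPy, consistent with the SciPy-based integrator described later in the paper (the parameter values here are illustrative, not from the paper):

```python
import numpy as np
from scipy.integrate import odeint

def free_fall(x, t, g):
    """State x = [x1, x2, v1, v2]; only gravity acts, so dv2/dt = -g."""
    x1, x2, v1, v2 = x
    return [v1, v2, 0.0, -g]

g = 9.8                                 # m/s^2
x0 = [0.0, 20.0, 3.0, 0.0]              # d1, d2, v1, v2 (illustrative)
t = np.linspace(0.0, 3.0, 301)
traj = odeint(free_fall, x0, t, args=(g,))
hit = np.argmax(traj[:, 1] <= 0.0)      # first sample at or below the ground
print(t[hit], traj[hit])
```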
"}, {"section_index": "5", "section_name": "4 MACHINE SOLVER", "section_text": "In this section we describe the machine solver, which is composed of two recurrent neural networks and the numerical integrator. The top-level system block diagram is shown in Figure 2.

A limitation of the basic recurrent neural network described above is that it cannot retain information over long sequences. If a key piece of information for predicting an output at the end of a long sequence occurs at the very beginning of the sequence, the basic recurrent neural network will likely fail as a result of training difficulties. A popular solution for this limitation is the Long Short-Term Memory (LSTM) - essentially a highly capable, more complex type of recurrent neural network (Hochreiter & Schmidhuber, 1997). An LSTM is composed of a memory cell, and input, output, and forget gates that determine how to modify and reveal the contents of the memory cell. Each of these gates has its own set of weights and biases that are connected to the inputs. Therefore the number of weights within a layer of an LSTM is quadrupled from that of a basic recurrent neural network to 2n × 4n, where n is the number of hidden units in the layer, assuming each layer has the same number of units. The 2n is from the input being a concatenation of the output from the previous hidden layer (in time) with the current input, as occurs for all RNNs, and the 4n is for the connections to each of the three gates as well as to the memory cell input. More specifically, the equations for the LSTM are as follows (Graves, 2013; Zaremba et al., 2014):

$$\begin{bmatrix} i \\ f \\ o \\ j \end{bmatrix} = \begin{bmatrix} \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \tanh \end{bmatrix} T_{2n,4n} \begin{bmatrix} h_t^{l-1} \\ h_{t-1}^{l} \end{bmatrix}, \qquad c_t = f \odot c_{t-1} + i \odot j, \qquad h_t = o \odot \tanh(c_t)$$

Our machine solver computes answers to word problems in the domain just described. The word problem must specify, sometimes indirectly, the five parameters of the dynamical system (d1, d2, v1, v2, and g). It must also include a question that can be answered by computing the time behavior of the system. We discuss how our machine solver works in the next section.

(Figure 2 here: block diagram - word problem → TRANSLATE → dynamical system ẋ = Ax + Bu → SOLVE → solution.)

As both of our neural network models have only one hidden layer, h_t^{l-1} merely refers to the current input. T_{2n,4n} refers to the weight and bias transformation Wx + b applied to the concatenated hidden layer inputs. The hyperbolic tangent and sigmoid functions are applied element-wise. The variables i, f, o, and j refer to the input gate, forget gate, output gate, and cell input, respectively.

Another potential solution to the inability of the basic recurrent neural network to capture long-term dependencies is the Gated Recurrent Unit (GRU) (Cho et al., 2014a); however, we had the most success with the LSTM for our specific labeler and classifier tasks."}, {"section_index": "6", "section_name": "4.1 NEURAL NETWORK ARCHITECTURES", "section_text": "The data flow through the labeler and classifier neural networks is shown in Figure 3. We used TensorFlow™ to develop the neural network models for both the labeler and the classifier. TensorFlow is an open source library from Google that allowed us to easily explore different models and training settings with already implemented RNN cells and optimizers (Abadi et al., 2015).
We quickly experiment with the provided optimizers to find the optimal optimizer for each network."}, {"section_index": "7", "section_name": "B CHOOSING THE RIGHT RNN CONFIGURATION", "section_text": "We selected the models for our RNNs by performing a grid search over the learning rate, the number of units, and the number of layers. The results of the grid search for the the labeler recurrent network are shown in Table 4|and the results for the classifier network are shown in Table 5] For each RNN we chose the most efficient model, in that it requires the least space and obtains the greatest accuracy with the lowest training time.\nClassifier Question Question (RNN) type Word Labeler Word Dynamical problem (RNN) Label System Parameters of Dynamical System\nTable 4: The chosen RNN network for the labeler has one layer of ten units with a learning rate of O.1. The notation x/y/z means x for overall accuracy, y for label accuracy, and z for questior accuracy, where accuracy is given as a proportion of correct predictions over total predictions. Al results shown use TensorFlow's Adam Optimizer and LSTM cel1..\nWord Labeler Word problem (RNN) Label problem Let the acceleration of gravity be 32 ft/s2 How far ? label 0 0 0 0 0 0 G A_UNIT QUEST QUEST QUEST\nWord Labeler Word problem (RNN) Label problem Let the acceleration of gravity be 32 ft/s2 How far ? label 0 0 0 0 0 0 G A_UNIT QUEST QUEST QUEST\nWord Labeler Word problem (RNN) Label\nFigure 4: Example of input to labeler with expected output. A label is associated with each word where O indicates other, or a word not needed for the dynamical system translation. Input text is. shortened for the example\nTable 5: The chosen network for the labeler has one layer of 1,Oo0 units. The values shown are accuracies given as a proportion of the number of correctly predicted classifications over total clas sifications. All results use TensorFlow's Gradient Descent Optimizer and LSTM cell.\nThe chosen RNN model is one that produces an output at each time step and has recurrent connection between hidden units, as described byGoodfellow et al.(2016) in Chapter 10, Figure 10.3. At each step of the input sequence, the RNN receives a word embedding and outputs a label for the word. The label that is outputted at each time step can fall into one of the ten categories shown in Table[1 In addition to tagging words for their relevancy to the dynamical system formulation, we tag the question part of the word problem to pass to the classifier.\nWe use three measures to assess the performance of the labeler: label accuracy, question accuracy,. and overall accuracy. Label accuracy is measured as having matching labels in the predicted and expected (generated) labels, not including the question part of the word problem. Question accuracy. is measured as having both the first word of the question and the last word of the question labeled. correctly, as label-based post processing to extract the question relies only on these indices. Overall. accuracy is measured as meeting both of the label and question accuracy criteria..\nTensorFlow is a trademark of Google Inc\nFigure 2: The first step from word problem to dynamical system is accomplished via neural net works. 
The second step from dynamical system to solution is achieved with a numerical integrator..\nClassifier Question Question (RNN) type Word Labeler Word Dynamical problem (RNN) Label System Parameters of Dynamical System\nInterestingly, for the classifier, we see that models with two or three layers and lower learning rates achieve an equivalent accuracy as the one-layer model. However, they are inferior to the one layer. model in that the multi-layer models require more space and usually require longer to train..\nThe labeler is an LSTM network with one hidden layer of ten units. Figure4 shows an example of. the data flow through the labeler. The input to the labeler is the full problem statement and the output. is a label for each word. The words are input into the labeler via an embedding that is randomly. initialized and trained simultaneously with the weights and biases. The weights are also randomly initialized and the biases are initialized to zero. To limit the exploration of the parameter space, we. set the dimension of the embedding to equal the number of hidden units..\nLearning Rate Layers Units 0.01 0.1 0.5 1 10 0.197/1.000/0.197 0.911/1.000/0.911 0.001/0.110/0.032 100 0.850/1.000/0.850 0.763/0.932/0.814 0.196/0.207/0.587 1000 0.048/0.281/0.525 0.882/0.907/0.955 0.225/0.230/0.975 2 10 0.000/0.000/0.000 0.037/0.099/0.048 0.005/0.009/0.354 100 0.096/0.337/0.096 0.000/0.000/0.000 0.000/0.000/0.000 1000 0.000/0.000/0.000 0.000/0.000/0.000 0.000/0.000/0.000 3 10 0.000/0.000/0.015 0.021/0.132/0.059 0.000/0.000/0.000 100 0.076/0.442/0.091 0.000/0.000/0.000 0.000/0.000/0.000 1000 0.000/0.000/0.000 0.000/0.000/0.000 0.000/0.000/0.000\nLearning Rate Layers Units 0.01 0.1 0.5 1 10 0.193 0.486 0.830 100 0.774 0.801 0.889 1000 0.980 0.997 1.000 2 10 0.163 0.424 0.637 100 0.833 0.875 0.819 1000 1.000 1.000 0.724 3 10 0.297 0.656 0.482 100 0.867 0.907 0.539 1000 1.000 1.000 0.695\nTable 1: Possible output word labels and corresponding dynamical system parameters\nLABEL DESCRIPTION QUEST Question G Value for gravity g A UNIT Unit for acceleration (gravity) g DUNIT Unit for initial height. d2 HEIGHT Initial height value or height of each story. d2 VUNIT Unit for velocity. V1, V2 V Initial velocity magnitude. V1, V2 THETA Angle of initial movement. V1, V2 STORY Value for number of stories (if applicable) d2 0 Other\nThis section is included to illustrate examples of the the labeler network incorrectly extracting the question. In each of these cases, the classifier receives as input the labeler's incorrect output. The classifier's handling of these errors is shown in Figure|12\n(1) Labeler input: Let the acceleration due to gravity on Planet Watson be 65 ft/s^2. A ping pong ball is released from the top of a 3 story building, where each story is 79 m. What is the maximum speed the ping pong ball obtains?\nWe train the labeler with TensorFlow's Adam Optimizer, an initial learning rate of O.1, and a mini batch size of 100 word problems. The Adam Optimizer uses adaptive learning rates and is par ticularly effective with sparse gradients (Kingma & Ba]2014). We use early stopping based on a validation accuracy or when the training accuracy stops improving. We chose the network architec- ture and training settings after performing a limited grid search across the number of layers, number of units per a layer, and learning rate. (See AppendixB])\n(2) Labeler input:Assume the acceleration due to gravity is 49 m/s^2. A ping pong ball is launched at a speed of 35 m/s and an elevation of 88 degrees. 
What is the magnitude of the veloci of the ping pong ball just before it touches the ground?\nLabeler output / classifier input: What is the magnitude of the velocity of the Classifier output: (speed : max) Expected output: (speed : x2=0)\nAfter the labeler assigns a label to each word, a post processing step maps the labels to the dynamical system parameters, converting the initial conditions and value of gravity to SI units if necessary..\n(3) Labeler input:Let the acceleration due to gravity on Planet Watson be 71 ft/s^2. A ping pon. ball is thrown at a speed of 53 mph and an elevation of 52 degrees. What is the magnitude of th velocity of the ping pong ball just before it touches the ground?.\nThe classifier is an LSTM network with one hidden layer of 1,O00 units. An example of the data flow through the classifier is shown in Figure [5] For the problems in our dataset, the formulation part of the word problem does not provide information necessary to classify the type of question Moreover, as sequences become longer, the performance of RNNs tend to decrease (Pascanu et al.. 2013). Armed with these two observations, we chose to only have the question part of the word problem as the sequence to input into the classifier..\nLabeler output / classifier input: What is the magnitude of the velocity of the Classifier output: (speed : max) Expected output: (speed : x2=0)\nFigure 12: Examples of incorrectly extracted questions from the labeler and the classifier's respons. to them. In all three cases, the question is cut short. The classifier still makes the correct the classification for the first case. but fails for the second and third cases"}, {"section_index": "8", "section_name": "D WORD EMBEDDINGS", "section_text": "Figure 5: Example of input to classifier with expected output. Symbol x1 refers to horizontal dis placement and symbol x2 refers to vertical displacement..\nTo input the words into both RNNs, the words were first encoded as word embeddings. Word embed- dings map words to a multi-dimensional space, providing the words with numerical representations which expose relationships between words. The final embeddings for the labeler network are 10 dimensional, and the embeddings for the classifier network are 1,Oo0-dimensional. Rather than use Word2Vec, we chose to train the embeddings simultaneously with the weights and biases. We were interested in seeing if embeddings trained for a particular task could capture intuitive word features, as can often be seen with embeddings trained with Word2Vec (Mikolov et al.[|2013).\nAs with the labeler, we encode the words of the sequence into word embeddings, matching the dimension of the word embedding to the number of hidden units, and training them with the weights and biases. In this case, a sequence would be one question. Unlike the labeler, there is only one output for each sequence, occurring on the last step of the sequence. For more information see Chapter 10, figure 10.5 of Goodfellow et al.(2016) for an illustration. The singular output is the. type of question, which can fall into one of the nine types shown in Table2.\nIn order to explore the results of the trained embeddings, we used scikit-learn's implementation of t SNE to map the high-dimensional embeddings down to two dimensions (van der Maaten & Hinton 2008). 
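That projection step looks roughly like the following (a sketch; `embeddings` is a hypothetical (vocab_size, dim) matrix standing in for the trained word embeddings):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for trained embeddings; the classifier's are 1,000-dimensional.
embeddings = np.random.default_rng(0).normal(size=(100, 1000))

points_2d = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
print(points_2d.shape)  # (100, 2), one 2-D point per word
```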
The results from t-SNE are shown in Figure[13] Words appear exactly as they appear in th word problem, and no stemmers are used.\nThe embeddings from the labeler network seem more intuitive, as numbers and similar units, such as. \"m/s\", \"mph', and \"ft/s\", are mapped to similar regions. We had hypothesized that the embedding. may capture some word function related to the task the embeddings were being trained to perform However, the objects seem to be distributed throughout the space and have no easily distinguishable. pattern, despite playing a similar functional role in each word problem. It is even more difficult to. discern any patterns from the embeddings from the classifier network. We do see that words such. as \"traveling\", \"traveled\", and \"travels\"' map near each other, as well as question words \"What\"' and. \"How'. We predict that the limited vocabulary in the question space of only forty words may con."}, {"section_index": "9", "section_name": "4.2 NUMERICAL INTEGRATOR", "section_text": "Classifier Question Question (RNN) type How far has the (x1 : x2 = 0) rock traveled when it strikes the around?\nClassifier Question Question (RNN) type How far has the. (x1 : x2 = 0) rock traveled when it th\nThe classifier is trained with TensorFlow's Gradient Descent Optimizer, an initial learning rate of 0.5, and a mini-batch size of 100 questions. As with the labeler, we performed a grid search to. choose these hyperparameters. (See AppendixB\nThe numerical integrator computes the evolution over time of the dynamical system shown in Fig ure[1(b). As input it receives the initial conditions, the value of g, and the type of question extracted from the labeler and the classifier. Using SciPy's ordinary differential equation integrator, a table of values representing the system's state to the point that the object hits the ground is iteratively constructed. The numerical solution is refined to a precision of O.001 (one part in a thousand), based on the type of the question. For example, if the question is about the maximum height, we produce\nTable 2: Possible Output Question Types\na first instance of the table, find the maximum height in that table, and then search for the maximum. around that value with increased precision, repeating until we reach the desired precision. Finally the question type is used to determine which value from the table to output from the solver. This. data flow is shown in Figure|6\nDynamical Numerical Solution System Integrator Type of question CHOOSE ONE\nFigure 6: Outputs from the labeler and the classifier feed into the numerical integrator, where the labeler outputs form the dynamical system to integrate and the classifier outputs control the focus. and output of the integrator."}, {"section_index": "10", "section_name": "4.3 TRAINING. VALIDATION. AND TEST SETS", "section_text": "The grammar also ensures that the training set is disjoint from the validation and test sets, partic. ularly in structure. Examples of generated problems are shown below in Figure7 This is vital in assessing the ability of the trained networks to generalize.\nWe implement the grammar in Python. When a new problem is instantiated, the grammar rules are. descended to build up the problem, making random choices when choices are available. Labels for each problem are also automatically generated. The complete generative model is shown in Figure|8 By using a problem generator to build our datasets, we are also free to choose the size of the dataset. 
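The grammar descent described above can be sketched in a few lines of Python; this toy fragment of the grammar (ours, for illustration only - the full grammar is given in the appendix) makes a random choice at each rule:

```python
import random

GRAMMAR = {  # toy fragment of the word-problem grammar
    "object": ["golf ball", "stone", "chair"],
    "speed_unit": ["m/s", "ft/s", "mph"],
}

def generate(rng):
    obj = rng.choice(GRAMMAR["object"])
    speed, unit = rng.randint(0, 99), rng.choice(GRAMMAR["speed_unit"])
    angle = rng.randint(1, 89)
    return (f"A {obj} is launched at a speed of {speed} {unit} "
            f"and an elevation of {angle} degrees.")

print(generate(random.Random(0)))
```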
Our problem generator is capable of generating ~26,000 different training problems and ~22,000 different test and validation problems.\ntribute to these more perplexing results by reducing the effectiveness of which t-SNE can determine the similarity between words.\negno aprizo ontal 150 of magnitude travels oal athe annonbal fow 200 40\nWe define the word problems with a grammar that is provided in the AppeNDix. The word problems. in the training, validation, and test sets are exclusively made up of problems that follow the specifi. cations laid out by the grammar. The grammar allows for mixed units, meaning that within the same. problem, the height may have a metric unit, while the velocity may have a U.S. customary unit. The. grammar also permits the initial conditions to be exposed in multiple ways. For instance, a theta value and speed will be provided in some problems, from which the solver would need to calculate. the initial vertical velocity using the theta, whereas in other problems no theta value may be pro. vided. Using mixed units and varying numbers of values to provide information about each initial. condition allows us to increase the complexity of the problems within the scope of the dynamical. system.\nFigure 13: Top: The embeddings from the labeler network for the top 100 most frequent words in the word problems. Bottom: The embeddings from the classifier network for all words in the questions."}, {"section_index": "11", "section_name": "WORD PROBLEM GRAMMAR", "section_text": "Assume the acceleration due to gravity is 85 ft/s2. A ping pong ball is dropped from the top of a story building, where each story is 89 m. What is the maximum speed the ping pong ball obtains\nA chair is launched at a speed of 51 mph and an angle from the horizontal of 28 degrees. Let the acceleration due to gravity on Planet Watson be 98 m/s2. How much time has passed when it. reaches its maximum height?\nNotation: \"object' is used as a parameter in order to enforce consistency between parts of the prob lem. Within a word problem, the same object must appear wherever an object symbol occurs. As used in the question part of the grammar, \"x1\" indicates horizontal displacement and \"x2\" indicates vertical displacement. When used with numbers, \"...' indicates the sequence of numbers continues in between the bars.\nFigure 7: Examples of generated problems that adhere to the grammar\nFigure 8: The generative model allows us to generate the input and output for the neural networks without requiring any manual annotation..\n(val test formulation(object)) ::= (Assumption). A object is (action)\nLabeler Accuracy (%) on Training Data. Classifier Accuracy (%) on Training Data. 100 100 9 0 0.10.20.30.40.50.60.70.80.911.11.21.31.41.51.61.71.81.9 2 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2 Epoch Epoch Overall Tag Question\nFigure 9: Training accuracy of labeler (left) and classifier (right\nThe accuracy on the test set after the labeler and classifier have been independently trained are shown in Table|3] The accuracy of the combined RNN system amounts to an overall accuracy of 99.8% The labeler achieves 100% accuracy on predicting the non-question labels and incurs a small error on predicting the beginning and end of the question. As a result, the question that is extracted based on the labeler's predictions does not always match the true question. 
The datasets consisted of 7,000 word problems for training, 2,000 word problems for validation, and 1,000 word problems for test. The progress of training over time is shown in Figure 9. As can be seen in the left graph, the labeler learns to identify the beginning and end of the question faster than it learns to correctly predict the labels. The overall accuracy of the labeler is both limited by and equivalent to that of the label accuracy. With this particular model of the labeler, there is no problem for which the labeler correctly predicts the non-question labels but incorrectly locates the question.

Figure 9: Training accuracy of labeler (left) and classifier (right).

The training accuracy for the label, question, and overall reach 100% by the end of the first epoch. The classifier also reaches 100% accuracy on the training set by the end of the first epoch. The epoch is broken down into fractions, as the training accuracy is evaluated every seven mini-batches of 100 problems.

The accuracy on the test set after the labeler and classifier have been independently trained is shown in Table 3. The accuracy of the combined RNN system amounts to an overall accuracy of 99.8%. The labeler achieves 100% accuracy on predicting the non-question labels and incurs a small error on predicting the beginning and end of the question. As a result, the question that is extracted based on the labeler's predictions does not always match the true question. However, based on the classifier's accuracy of 99.8%, the classifier is often resilient to the errors that the labeler makes in extracting the question. While the labeler incorrectly extracts ninety-one questions, the classifier only incorrectly classifies two questions from a test set of 1,000 word problems. Figure 12 in Appendix C shows examples of the labeler's errors and how the classifier handles them.

We note that for the two wrongly classified cases, both shown in Figure 12, the classification error is the same. That is, a question that should be about the speed of the object when it hits the ground is classified as a question about the maximum speed the object reaches. The numerical answer to the problem is the same for both classes of question. Therefore, even in the case of wrongly classified questions, the system produces the right answer.

The high accuracy of the labeler and classifier is not a total surprise.
LSTMs have been shown to be. very effective in learning context-free and even context-sensitive languages (Gers & Schmidhuber. 2001, Cleeremans et al.]1989, Rodriguez2001), including the ability to generalize and recognize structures not seen before. Our training, validation and test sets are from a regular language, as. described in Appendix E so an LSTM should do well in learning them. In fact, we have seer. situations (with the test. yalidation and test sets all with distinct structures) where the labeler an classifier both achieve perfect accuracy on all test problems. We decided to include the data on the. \"not so perfect\" case because it illustrates some important points (Figure|12)..\n(training max x2(object)) ::= What is the maximum height the object reaches?\nThe trained variables for both models consist of word embeddings for input to the RNN, and weigh and biases within the RNN and from the RNN to the final output. We focus our evaluation on th RNN weights, as we believe these are more specific to the our physics problem solver. For a evaluation of the word embeddings, please see Appendix|D\nThe distributions of weights for the labeler and classifier are shown in figures|10] As the labeler was an LSTM network, there are weights from the input and the previous hidden values to input, forget and an output gates, as well as to the memory cells. While there appears to be a high concentration of negative weights to the output gate and positive weights to the input gate, this is likely a result of random initialization of the weights as this pattern was not consistently found with other randon initializations. The output weights, which go from the output of the LSTM cell's hidden units tc the target labels, have a slightly wider range. The few number of zero weights indicates that the majority outputs from the hidden units of the LSTM cell contribute to making the final prediction o1 the label.\nThe LSTM weight distribution for the classifier is more uniform and compressed than that of the labeler. We believe this is due to the great increase in parameters since the classifier has 1,000-. dimensional embeddings and 1,00o hidden units, leading to 8 million weights (Karpathy et al. 2015). We predict that each piece of information captured by the trained embeddings and hidden. units makes a less significant contribution to the final prediction than with the labeler, as indicated by the classifier's smaller weight values. The range of the output values for the output weights similarly contributes to this prediction, with a very small range of weights which are mostly concentrated. around zero.\nAfter examining the general distribution of weights, we also wanted to explore potential patterns of specific weights. We chose to explore the heat map of the weights for labeler since there are a magnitude fewer connections, allowing the patterns to be more readily examined. We include the heat map of the weight matrices for the connections between the hidden units of the labeler to the output predictions in Figure[11 Looking at the heat map, hidden units 3 and 8 seem to have a similar weight distribution across the output categories. 
We also see seemingly logical pairs forming, such as the strong positive weights associated with D_UNIT and HEIGHT for hidden unit 6, and for V and THETA for hidden unit 0. However, there are also features that are challenging to explain, such as the strong positive contribution hidden unit 4 makes to predicting THETA while making an equally strong negative contribution to predicting STORY.

Figure 11: Heat map for labeler weights from the LSTM hidden layer to the output layer. The output categories are QUEST, A_UNIT, D_UNIT, HEIGHT, V_UNIT, V, THETA, and STORY.

Whenever the grammar dictates a choice of construct (for example, when selecting the object of a word problem), a uniform random number generator is used to select one of the valid constructs. Therefore, the frequency of a particular form in the training, validation and test sets ultimately depends on how many random choices are necessary to produce that form and how many variations there are in each choice.

Table 6 illustrates the simple case of occurrence counts of the different objects in our word problems. The training set uses seven different objects, while the validation and test sets use six objects. Not surprisingly, each object in the training set appears in approximately 1/7 of the total number of problems in that set. Meanwhile, each object in the validation and test sets appears in approximately 1/6 of the total number of problems in those sets.

A more interesting situation is illustrated in Table 7 for the occurrence counts of question types. As shown in Table 2, there are nine different question types. However, the grammar works by first choosing one of two groups of questions: either max-type questions (the first three in Table 2) or conditional-type questions (the last six in Table 2). Within each group, there is equal probability for each question type. Consequently, as Table 7 shows, each of the max-type questions is approximately twice as common as each of the conditional-type questions.

Table 3: Accuracies shown are on the test set of word problems for the system. The classifier is fed the extracted questions as identified by the labeler. The combined RNN system accuracy is based on the final output of the system having the same dynamical system parameters and question type as the generated output for a word problem.

We have developed a machine solver for word problems on the physics of a free-falling object in two-dimensional space with constant acceleration of gravity. The solver has three main components. The labeler labels each word of the problem to identify the parameters of a canonical dynamical system that describes the time evolution of the object, and the part of the problem that corresponds to the question being asked. The classifier classifies the question part. Finally, an integrator is used to solve the dynamical system, producing a numerical answer to the problem.

A grammar-based generator is used to produce the training, validation and test set of problems for the neural networks. The grammar is specified so that the validation and test problems are structurally different from the training problems. We use a total of 10,000 generated problems, partitioned into 7,000 for training, 2,000 for validation and 1,000 for testing.

When measured against the test set of 1,000 problems, the dynamical system parameters are correctly identified in all of them.
The question part is precisely identified in 909 cases, but because the classifier is resilient to these extraction errors, the overall accuracy of the system remains 99.8%.

Table 6: Occurrence counts for different objects in word problems.

(a) training set           (b) validation set           (c) test set
object        count        object           count       object           count
golf ball     1052         pebble           336         pebble           156
stone         1007         ping pong ball   342         ping pong ball   159
chair          987         vacuum           316         vacuum           165
feather       1020         tennis ball      355         tennis ball      163
soccer ball    965         basketball       325         basketball       178
rock           989         hat              326         hat              179
cannonball     980

Table 7: Occurrence counts for different question types.

class                   (a) training set   (b) validation set   (c) test set
(x1 : max)              1163               326                  168
(speed : max)           1157               349                  180
(x2 : max)              1120               325                  166
(speed : max height)     610               160                   64
(time : max height)      602               158                   92
(x1 : x2=0)              598               160                   88
(time : x2=0)            596               194                   75
(speed : x2=0)           585               180                   77
(x1 : max height)        569               148                   90

Figure 10: Top left: labeler LSTM weight distributions. Top right: classifier LSTM weight distributions. Bottom left: labeler output weight distributions. Bottom right: classifier output weight distributions.
HJTzHtqee
[{"section_index": "0", "section_name": "A COMPARE-AGGREGATE MODEL FOR MATCHING TEXT SEQUENCES", "section_text": "Question: Where does Sam marry Rosie? 0162013016 Arrnrrn Gonnor pue Areen se siy uueen berore at s!y Coononnoon buomoq berore opouy pue the other Hoddtts oF The Hoddits rennnn the shiee whhee maares Roose Cotton Plot Question: Can I have. auto insurance without a car. - ves ae Pooissse have aute nnnnnnnee umo vehheee you what be name monmmnme Pooey be that pnrre you corrree when Vou op nou uMn e vehheee conneet Answer\nQuestion: Where does Sam marry Rosie? 50 Arrnrrn cnnmned bu Gonnor pue Areen se s!y uueen berore le at s!y Coononnoon buomoq berore Fpoy pue the other The rennnn the shhee whhee Roose Cotton\nShuohang Wang\nSchool of Information Systems Singapore Management University\nshwang.2014@phdis.smu.edu.sq\nMany NLP tasks including machine comprehension, answer selection and text en- tailment require the comparison between sequences. Matching the important units between sequences is a key to solve these problems. In this paper, we present a general \"compare-aggregate\"' framework that performs word-level matching fol lowed by aggregation using Convolutional Neural Networks. We particularly fo cus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple com parison functions based on element-wise operations can work better than standard neural network and neural tensor network.\nLIon: dn Indye ULO udncE WILDOUL car ves be Pooissie have aute lnnnnnree vehheee Vou Wil what be e name moomnnme Pooee this be thet you corrrrge when nou op not umn O vehheee connet Answer"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Many natural language processing problems involve matching two or more sequences to make a decision. For example, in textual entailment, one needs to determine whether a hypothesis sentence can be inferred from a premise sentence (Bowman et al.J2015). In machine comprehension, given a passage, a question needs to be matched against it in order to find the correct answer (Richardson et al.]2013] Tapaswi et al.]2016). Table[1gives two example sequence matching problems. In the first example, a passage, a question and four candidate answers are given. We can see that to get the correct answer, we need to match the question against the passage and identify the last sentence to be the answer-bearing sentence. In the second example, given a question and a set of candidate answers, we need to find the answer that best matches the question. Because of the fundamental importance of comparing two sequences of text to judge their semantic similarity or relatedness, sequence matching has been well studied in natural language processing.\nFigure 2: An visualization of the largest value of each dimension in the convolutional layer of CNN. The top figure is an example from the dataset MovieQA with CNN window size 5. The botton. figure is an example from the dataset InsuranceQA with CNN window size 3. Due to the sparsity. of the representation, we show only the dimensions with larger values. The dimensionality of th raw representations is 150.\n2016). Our work is under this framework. But our structure is different from previous models and our model can be applied on different tasks. 
5 CONCLUSIONS

In this paper, we systematically analyzed the effectiveness of a "compare-aggregate" model on four different datasets representing different tasks. Moreover, we compared and tested different kinds of word-level comparison functions and found that some element-wise comparison functions can outperform the others. According to our experiment results, many different tasks can share the same "compare-aggregate" structure. In future work, we would like to test its effectiveness on multi-task learning.

This research is supported by the National Research Foundation, Prime Minister's Office, Singapore under its International Research Centres in Singapore Funding Initiative.

REFERENCES

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations, 2014.

Michael Bendersky, Donald Metzler, and W. Bruce Croft. Learning concept importance using a weighted dependence model. In Proceedings of the Third ACM International Conference on Web Search and Data Mining. ACM, 2010.

With recent advances of neural network models in natural language processing, a standard practice for sequence modeling now is to encode a sequence of text as an embedding vector using models such as RNN and CNN. To match two sequences, a straightforward approach is to encode each sequence as a vector and then to combine the two vectors to make a decision (Bowman et al., 2015; Feng et al., 2015). However, it has been found that using a single vector to encode an entire sequence is not sufficient to capture all the important information from the sequence, and therefore advanced techniques such as attention mechanisms and memory networks have been applied to sequence matching problems (Hermann et al., 2015; Hill et al., 2016; Rocktaschel et al., 2015).

A common trait of a number of these recent studies on sequence matching problems is the use of a "compare-aggregate" framework (Wang & Jiang, 2016b; He & Lin, 2016; Parikh et al., 2016). In such a framework, comparison of two sequences is not done by comparing two vectors each representing an entire sequence. Instead, these models first compare vector representations of smaller units such as words from these sequences and then aggregate these comparison results to make the final decision. For example, the match-LSTM model proposed by Wang & Jiang (2016b) for textual entailment first compares each word in the hypothesis with an attention-weighted version of the premise. The comparison results are then aggregated through an LSTM. He & Lin (2016) proposed a pairwise word interaction model that first takes each pair of words from two sequences and applies a comparison unit on the two words. It then combines the results of these word interactions using a similarity focus layer followed by a multi-layer CNN. Parikh et al. (2016) proposed a decomposable attention model for textual entailment, in which words from each sequence are compared with an
attention-weighted version of the other sequence to produce a series of comparison vectors. The comparison vectors are then aggregated and fed into a feed forward network for final classification.

Although these studies have shown the effectiveness of such a "compare-aggregate" framework for sequence matching, there are at least two limitations with these previous studies: (1) Each of the models proposed in these studies is tested on one or two tasks only, but we hypothesize that this general framework is effective on many sequence matching problems. There has not been any study that empirically verifies this. (2) More importantly, these studies did not pay much attention to the comparison function that is used to compare two small textual units. Usually a standard feedforward network is used (Hu et al., 2014; Wang & Jiang, 2016b) to combine two vectors representing two units that need to be compared, e.g., two words. However, based on the nature of these sequence matching problems, we essentially need to measure how semantically similar the two sequences are. Presumably, this property of these sequence matching problems should guide us in choosing more appropriate comparison functions. Indeed He & Lin (2016) used cosine similarity, Euclidean distance and dot product to define the comparison function, which seem to be better justifiable. But they did not systematically evaluate these similarity or distance functions or compare them with a standard feedforward network.

In this paper, we argue that the general "compare-aggregate" framework is effective for a wide range of sequence matching problems. We present a model that follows this general framework and test it on four different datasets, namely, MovieQA, InsuranceQA, WikiQA and SNLI. The first three datasets are for Question Answering, but the setups of the tasks are quite different. The last dataset is for textual entailment. More importantly, we systematically present and test six different comparison functions. We find that overall a comparison function based on element-wise subtraction and multiplication works the best on the four datasets.

Plot: ... Aragorn is crowned King of Gondor and taking Arwen as his queen before all present at his coronation bowing before Frodo and the other Hobbits. The Hobbits return to the Shire where Sam marries Rosie Cotton. ...
Question: Where does Sam marry Rosie?

Question: can i have auto insurance without a car
Ground-truth answer: yes, it be possible have auto insurance without own a vehicle. you will purchase what be call a name ...
Another candidate answer: insurance not be a tax or merely a legal obligation because auto insurance follow a car ...

Table 1: The example on the left is a machine comprehension problem from MovieQA, where the correct answer here is The Shire. The example on the right is an answer selection problem from InsuranceQA.

Yoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2014.

The contributions of this work are twofold: (1) Using four different datasets, we show that our model following the "compare-aggregate" framework is very effective when compared with the state-of-the-art performance on these datasets. (2) We conduct systematic evaluation of different comparison functions and show that a comparison function based on element-wise operations, which is not widely used for word-level matching, works the best across the different datasets. We believe that these findings will be useful for future research on sequence matching problems. We have also made our code available online.¹

2 METHOD

In this section, we propose a general model following the "compare-aggregate" framework for matching two sequences. This general model can be applied to different tasks. We focus our discussion on six different comparison functions that can be plugged into this general "compare-aggregate" model.
In particular, we hypothesize that two comparison functions based on element-wise operations, SUB and MULT, are a good middle ground between highly flexible functions using standard neural network models and highly restrictive functions based on cosine similarity and/or Euclidean distance. As we will show in the experiment section, these comparison functions based on element-wise operations can indeed perform very well on a number of sequence matching problems.

Figure 1: The left hand side is an overview of the model. The right hand side shows the details about the different comparison functions. The rectangles in dark represent parameters to be learned. × represents matrix multiplication.

Hua He and Jimmy Lin. Pairwise word interaction modeling with deep neural networks for semantic similarity measurement. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of the Conference on Advances in Neural Information Processing Systems, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, 2015.

Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, and Xueqi Cheng. Match-SRNN: Modeling the recursive matching structure with spatial RNN. International Joint Conference on Artificial Intelligence, 2016.

Bingning Wang, Kang Liu, and Jun Zhao. Inner attention based recurrent neural networks for answer selection. In Proceedings of the Conference on Association for Computational Linguistics, 2016.

Shuohang Wang and Jing Jiang. Machine comprehension using match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905, 2016a.

2.1 PROBLEM DEFINITION AND MODEL OVERVIEW

The general setup of the sequence matching problem we consider is the following. We assume there are two sequences to be matched. We use two matrices Q ∈ R^{d×Q} and A ∈ R^{d×A} to represent the word embeddings of the two sequences, where Q and A are the lengths of the two sequences, respectively, and d is the dimensionality of the word embeddings. In other words, each column vector of Q or A is an embedding vector representing a single word. Given a pair of Q and A, the goal is to predict a label y. For example, in textual entailment, Q may represent a premise and A a hypothesis, and y indicates whether Q entails A or contradicts A. In question answering, Q may
be a question and A a candidate answer, and y indicates whether A is the correct answer to Q.

We treat the problem as a supervised learning task. We assume that a set of training examples in the form of (Q, A, y) is given and we aim to learn a model that maps any pair of (Q, A) to a y.

Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, 2016b.

Yi Yang, Wen-tau Yih, and Christopher Meek. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2015.

Wenpeng Yin, Hinrich Schutze, Bing Xiang, and Bowen Zhou. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193, 2015.

Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. MovieQA: Understanding stories in movies through question-answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, and Kaheer Suleman. A parallel-hierarchical model for machine comprehension on sparse data. In Proceedings of the Conference on Association for Computational Linguistics, 2016.

An overview of our model is shown in Figure 1. The model can be divided into the following four layers:

1. Preprocessing: We use a preprocessing layer (not shown in the figure) to process Q and A to obtain two new matrices Q̄ ∈ R^{l×Q} and Ā ∈ R^{l×A}. The purpose here is to use some gate values to control the importance of different words in making the predictions on the sequence pair. For example, q̄_i ∈ R^l, which is the ith column vector of Q̄, encodes the ith word in Q.
2. Attention: We apply a standard attention mechanism on Q̄ and Ā to obtain attention weights over the column vectors in Q̄ for each column vector in Ā. With these attention weights, for each column vector ā_j in Ā, we obtain a corresponding vector h_j, which is an attention-weighted sum of the column vectors of Q̄.
3. Comparison: We use a comparison function f to combine each pair of ā_j and h_j into a vector t_j.
4. Aggregation: We use a CNN layer to aggregate the sequence of vectors t_j for the final classification.

In the rest of this section we will present the model in detail. We will focus mostly on the comparison functions we consider.

Although this model follows more or less the same framework as the model proposed by Parikh et al. (2016), our work has some notable differences. First, we will pay much attention to the comparison function f and compare a number of options, including some uncommon ones based on element-wise operations. Second, we apply our model to four different datasets representing four different tasks to evaluate its general effectiveness for sequence matching problems. There are also some other differences from the work by Parikh et al. (2016). For example, we use a CNN layer instead of summation and concatenation for aggregation. Our attention mechanism is one-directional instead of two-directional.

Inspired by the use of gates in LSTM and GRU, we process Q and A with the following formulas:

    Q̄ = σ(W^i Q + b^i ⊗ e_Q) ⊙ tanh(W^u Q + b^u ⊗ e_Q),
    Ā = σ(W^i A + b^i ⊗ e_A) ⊙ tanh(W^u A + b^u ⊗ e_A),    (1)

where ⊙ is element-wise multiplication, and W^i, W^u ∈ R^{l×d} and b^i, b^u ∈ R^l are parameters to be learned. The outer product (· ⊗ e_X) produces a matrix or row vector by repeating the vector or scalar on the left X times. Here σ(W^i Q + b^i ⊗ e_Q) and σ(W^i A + b^i ⊗ e_A) act as gate values to control the degree to which the original values of Q and A are preserved in Q̄ and Ā. For example, for stop words, their gate values would likely be low for tasks where stop words make little difference to the final predictions.

In this preprocessing step, the word order does not matter. Although a better way would be to use RNN such as LSTM and GRU to chain up the words such that we can capture some contextual information, this could be computationally expensive for long sequences. In our experiments, we only incorporated LSTM into the formulas above for the SNLI task.
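The gating computation in Eqn. 1 can be illustrated with a short numpy sketch; the dimensions and the random weights below are stand-ins for learned parameters, not the authors' code.

    import numpy as np

    # A numpy sketch of the preprocessing layer in Eqn. 1.
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    d, l, Q_len = 300, 150, 12
    Q = np.random.randn(d, Q_len)        # word embeddings, one column per word
    Wi, Wu = np.random.randn(l, d), np.random.randn(l, d)
    bi, bu = np.random.randn(l, 1), np.random.randn(l, 1)

    # b ⊗ e_Q corresponds to broadcasting the bias across the Q_len columns.
    gate = sigmoid(Wi @ Q + bi)
    Q_bar = gate * np.tanh(Wu @ Q + bu)  # element-wise product, shape (l, Q_len)
    print(Q_bar.shape)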
The general attention (Luong et al., 2015) layer is built on top of the resulting Q̄ and Ā as follows:

    G = softmax((W^s Q̄ + b^s ⊗ e_Q)^T Ā),    (2)
    H = Q̄ G,    (3)

where W^s ∈ R^{l×l} and b^s ∈ R^l are parameters to be learned, G ∈ R^{Q×A} is the attention weight matrix, and H ∈ R^{l×A} are the attention-weighted vectors. Specifically, h_j, which is the jth column vector of H, is a weighted sum of the column vectors of Q̄ and represents the part of Q̄ that best matches the jth word in Ā. Next we will combine h_j and ā_j using a comparison function.

2.3 COMPARISON

The goal of the comparison layer is to match each ā_j, which represents the jth word and its context in A, with h_j, which represents a weighted version of Q that best matches ā_j. Let f denote a comparison function that transforms ā_j and h_j into a vector t_j to represent the comparison result.

A natural choice of f is a standard neural network layer that consists of a linear transformation followed by a non-linear activation function. For example, we can consider the following choice:

    NEURALNET (NN): t_j = f(ā_j, h_j) = ReLU(W [ā_j; h_j] + b).

However, we note that for many sequence matching problems, we intend to measure the semantic similarity or relatedness of the two sequences. So at the word level, we also intend to check how similar or related ā_j is to h_j. For this reason, a more natural choice used in some previous work is Euclidean distance or cosine similarity between ā_j and h_j. We therefore consider the following definition of f:

    EUCLIDEAN+COSINE (EUCCOS): t_j = f(ā_j, h_j) = [ ‖ā_j − h_j‖₂ ; cos(ā_j, h_j) ].

Note that with EucCos, the resulting vector t_j is only a 2-dimensional vector. Although EucCos is a well-justified comparison function, we suspect that it may lose some useful information from the original vectors ā_j and h_j. On the other hand, NN and NTN are too general and thus do not capture the intuition that we care mostly about the similarity between ā_j and h_j.

To use something that is a good compromise between the two extreme cases, we consider the following two new comparison functions, which operate on the two vectors in an element-wise manner. These functions have been used previously by Mou et al. (2016):

    SUBTRACTION (SUB): t_j = f(ā_j, h_j) = (ā_j − h_j) ⊙ (ā_j − h_j),
    MULTIPLICATION (MULT): t_j = f(ā_j, h_j) = ā_j ⊙ h_j.

Note that the operator ⊙ is element-wise multiplication. For both comparison functions, the resulting vector t_j has the same dimensionality as ā_j and h_j.

We can see that SUB is closely related to Euclidean distance in that Euclidean distance is the sum of all the entries of the vector t_j produced by SUB. But by not summing up these entries, SUB preserves some information about the different dimensions of the original two vectors. Similarly, MULT is closely related to cosine similarity but preserves some information about the original two vectors.

Finally, we consider combining SUB and MULT followed by an NN layer as follows:

    SUBMULT+NN: t_j = f(ā_j, h_j) = ReLU(W [(ā_j − h_j) ⊙ (ā_j − h_j); ā_j ⊙ h_j] + b).

In summary, we consider six different comparison functions: NN, NTN, EUCCOS, SUB, MULT and SUBMULT+NN. Among these functions, the last three (SUB, MULT and SUBMULT+NN) have not been widely used in previous work for word-level matching.
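The element-wise comparison functions are straightforward to state in code. The following numpy sketch (with illustrative shapes and random stand-in weights) shows SUB, MULT, and their combination SUBMULT+NN.

    import numpy as np

    # a_bar and h are the l-dimensional vectors being compared at position j.
    def relu(z):
        return np.maximum(z, 0.0)

    def sub(a_bar, h):
        d = a_bar - h
        return d * d                                   # SUB

    def mult(a_bar, h):
        return a_bar * h                               # MULT

    def submult_nn(a_bar, h, W, b):
        # NN layer over the concatenation of the SUB and MULT vectors.
        return relu(W @ np.concatenate([sub(a_bar, h), mult(a_bar, h)]) + b)

    l = 150
    a_bar, h = np.random.randn(l), np.random.randn(l)
    W, b = np.random.randn(l, 2 * l), np.random.randn(l)
    t_j = submult_nn(a_bar, h, W, b)
    print(t_j.shape)                                   # (150,)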
2.4 AGGREGATION

After we apply the comparison function to each pair of ā_j and h_j to obtain a series of vectors t_j, finally we aggregate these vectors using a one-layer CNN (Kim, 2014):

    r = CNN([t_1, . . . , t_A]).    (9)

r ∈ R^{nl} is then used for the final classification, where n is the number of windows in the CNN.

In this section, we evaluate our model on four different datasets representing different tasks. The first three datasets are question answering tasks while the last one is on textual entailment. The statistics of the four datasets are shown in Table 2. We will first introduce the task settings and the way we customize the "compare-aggregate" structure to each task. Then we will show the baselines for the different datasets. Finally, we discuss the experiment results shown in Table 3 and the ablation study shown in Table 4.

              MovieQA            InsuranceQA           WikiQA             SNLI
              train dev   test   train dev   test      train dev   test   train dev   test
    #Q        9848  1958  3138   13K   1K    1.8K*2    873   126   243    549K  9842  9824
    #C        5     5     5      50    500   500       10    9     10     -     -     -
    #w in P   873   866   914    -     -     -         -     -     -      -     -     -
    #w in Q   10.6  10.6  10.8   7.2   7.2   7.2       6.5   6.5   6.4    14    15.2  15.2
    #w in A   5.9   5.6   5.5    92.1  92.1  92.1      25.5  24.7  25.1   8.3   8.4   8.3

Table 2: The statistics of different datasets. Q: question/hypothesis, C: candidate answers for each question, A: answer/hypothesis, P: plot, w: word (average).

                          MovieQA       InsuranceQA            WikiQA             SNLI
    Models                dev    test   dev    test1   test2   MAP      MRR       train   test
    Cosine Word2Vec       46.4   45.63  -      -       -       -        -         -       -
    Cosine TFIDF          47.6   47.36  -      -       -       -        -         -       -
    SSCB TFIDF            48.5   -      -      -       -       -        -         -       -
    IR model              -      -      52.7   55.1    50.8    -        -         -       -
    CNN with GESD         -      -      65.4   65.3    61.0    -        -         -       -
    Attentive LSTM        -      -      68.9   69.0    64.8    -        -         -       -
    IARNN-Occam           -      -      69.1   68.9    65.1    0.7341   0.7418    -       -
    IARNN-Gate            -      -      70.0   70.1    62.8    0.7258   0.7394    -       -
    CNN-Cnt               -      -      -      -       -       0.6520   0.6652    -       -
    ABCNN                 -      -      -      -       -       0.6921   0.7108    -       -
    CubeCNN               -      -      -      -       -       0.7090   0.7234    -       -
    W-by-W Attention      -      -      -      -       -       -        -         85.3    83.5
    match-LSTM            -      -      -      -       -       -        -         92.0    86.1
    LSTMN                 -      -      -      -       -       -        -         88.5    86.3
    Decomp Attention      -      -      -      -       -       -        -         90.5    86.8
    EBIM+TreeLSTM         -      -      -      -       -       -        -         93.0    88.3
    NN                    31.6   -      76.8   74.9    72.4    0.7102   0.7224    89.3    86.3
    NTN                   31.6   -      75.6   75.0    72.5    0.7349   0.7456    91.6    86.3
    EucCos                71.9   -      70.6   70.2    67.9    0.6740   0.6882    87.1    84.0
    SUB                   64.9   -      70.0   71.3    68.2    0.7019   0.7151    89.8    86.8
    MULT                  66.4   -      76.0   75.2    73.4    0.7433   0.7545    89.7    85.8
    SUBMULT+NN            72.1   72.9   77.0   75.6    72.3    0.7332   0.7477    89.4    86.8

Table 3: Experiment Results

                                    MovieQA       InsuranceQA            WikiQA             SNLI
    Models                          dev    test   dev    test1   test2   MAP      MRR       train   test
    SUBMULT+NN (no preprocess)      72.0   -      72.8   73.8    70.7    0.6996   0.7156    89.6    82.8
    SUBMULT+NN (no attention)       60.4   -      69.4   70.4    67.8    0.7164   0.7238    89.0    84.4

Table 4: Ablation Experiment Results. "no preprocess": remove the preprocessing layer by directly using word embeddings Q and A to replace Q̄ and Ā in Eqn. 1; "no attention": remove the attention layer by using mean pooling of Q̄ to replace all the vectors of H in Eqn. 2.

In all these tasks, we use matrix Q ∈ R^{d×Q} to represent the question or premise and matrix A_k ∈ R^{d×A_k} (k ∈ [1, K]) to represent the kth answer or the hypothesis. For the machine comprehension task MovieQA (Tapaswi et al., 2016), there is also a matrix P ∈ R^{d×P} that represents the plot of a movie. Here Q is the length of the question or premise, A_k the length of the kth answer, and P the length of the plot.
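The one-layer CNN aggregation in Eqn. 9 can be sketched in numpy for a single window size; the weights are random stand-ins, and with n window sizes the pooled outputs would be concatenated into an n*l-dimensional vector.

    import numpy as np

    # Sketch of Eqn. 9: filters of window size w slide over the comparison
    # vectors t_1..t_A, followed by max-pooling over positions.
    def relu(z):
        return np.maximum(z, 0.0)

    l, A, w = 150, 20, 3
    T = np.random.randn(l, A)              # comparison vectors, one per column
    W = np.random.randn(l, w * l)          # l filters over a window of w vectors
    b = np.random.randn(l)

    feats = []
    for j in range(A - w + 1):             # slide the window over positions
        window = T[:, j:j + w].reshape(-1, order="F")  # stack w consecutive t_j
        feats.append(relu(W @ window + b))
    r = np.max(np.stack(feats), axis=0)    # max-pool over window positions
    print(r.shape)                         # (150,) for this single window size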
For the InsuranceQA (Feng et al., 2015) dataset, the task is an answer selection task which needs to select the correct answer for a question from a candidate pool. For the WikiQA (Yang et al., 2015) dataset, we need to rank the candidate answers according to a question. For both tasks, there are K candidate answers for each question. Let us use r_k to represent the resulting vector produced by Eqn. 9 for the kth answer. In order to select one of the K answers, we first define R = [r_1, r_2, . . . , r_K]. We then compute the probability of the kth answer to be the correct one as follows:

    p(k|R) = softmax(w^T tanh(W^s R + b^s ⊗ e_K) + b ⊗ e_K),    (10)

where W^s ∈ R^{l×nl}, w ∈ R^l, b^s ∈ R^l, b ∈ R are parameters to be learned.

For the SNLI (Bowman et al., 2015) dataset, the task is text entailment, which identifies the relationship (entailment, contradiction or neutral) between a premise sentence and a hypothesis sentence. Here K = 1, and there are exactly two sequences to match. The actual model structure is what we have described before.

For the machine comprehension task MovieQA, each question is related to Plot Synopses written by fans after watching the movie, and each question has five candidate answers. So for each candidate answer there are three sequences to be matched: the plot P, the question Q and the answer A_k. For each k, we first match Q and P and refer to the matching result at position j as t^q_j, as generated by one of the comparison functions f. Similarly, we also match A_k with P and refer to the matching result at position j as t^{a_k}_j. We then define

    r_k = CNN([ [t^q_1; t^{a_k}_1], . . . , [t^q_P; t^{a_k}_P] ]).    (11)

To select an answer from the K candidate answers, again we use Eqn. 10 to compute the probabilities.
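The candidate-selection step of Eqn. 10 amounts to a small feedforward scoring layer followed by a softmax over the K candidates. A numpy sketch with random stand-in weights:

    import numpy as np

    # Sketch of Eqn. 10: r_k for each of the K candidates is stacked into R,
    # and a softmax yields p(k | R).
    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    nl, l, K = 450, 150, 5
    R = np.random.randn(nl, K)             # one aggregated vector per candidate
    Ws, bs = np.random.randn(l, nl), np.random.randn(l, 1)
    w, b = np.random.randn(l), 0.1

    scores = w @ np.tanh(Ws @ R + bs) + b  # b ⊗ e_K adds the same scalar to each score
    p = softmax(scores)
    print(p, p.argmax())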
The implementation details of the models are as follows. The word embeddings are initialized from GloVe (Pennington et al., 2014). During training, they are not updated. The word embeddings not found in GloVe are initialized with zero.

The dimensionality l of the hidden layers is set to be 150. We use ADAMAX (Kingma & Ba, 2015) with the coefficients β1 = 0.9 and β2 = 0.999 to optimize the model. We do not use L2 regularization. The main parameter we tuned is the dropout on the embedding layer. For WikiQA, which is a relatively small dataset, we also tune the learning rate and the batch size. For the others, we set the batch size to be 30 and the learning rate 0.002.

3.2 BASELINES

Here, we will introduce the baselines for each dataset. We did not re-implement these models but simply took the reported performance for the purpose of comparison.

SNLI: • W-by-W Attention: The model by Rocktaschel et al. (2015), who first introduced the attention mechanism into text entailment. • match-LSTM: The model by Wang & Jiang (2016b), which concatenates the matched words as the inputs of an LSTM. • LSTMN: Long short-term memory-networks proposed by Cheng et al. (2016). • Decomp Attention: Another "compare-aggregate" model proposed by Parikh et al. (2016). • EBIM+TreeLSTM: The state-of-the-art model proposed by Chen et al. (2016) on the SNLI dataset.

InsuranceQA: • IR model: This model by Bendersky et al. (2010) learns the concept information to help rank the candidates. • CNN with GESD: This model by Feng et al. (2015) uses Euclidean distance and dot product between sequence representations built through convolutional neural networks to select the answer. • Attentive LSTM: Tan et al. (2016) used a soft-attention mechanism to select the most important information from the candidates according to the representation of the questions. • IARNN-Occam: This model by Wang et al. (2016) adds regularization on the attention weights. • IARNN-Gate: This model by Wang et al. (2016) uses the representation of the question to build the GRU gates for each candidate answer.

WikiQA: • IARNN-Occam and IARNN-Gate as introduced before. • CNN-Cnt: This model by Yang et al. (2015) combines sentence representations built by a convolutional neural network with logistic regression. • ABCNN: This model is the Attention-Based Convolutional Neural Network proposed by Yin et al. (2015). • CubeCNN proposed by He & Lin (2016) builds a CNN on all pairs of word similarity.

MovieQA: All the baselines we consider come from Tapaswi et al. (2016)'s work: • Cosine Word2Vec: A sliding window is used to select the answer according to the similarities computed through Word2Vec between the sentences in the plot and the question/answer. • Cosine TFIDF: This model is similar to the previous method but uses bag-of-words with tf-idf scores to compute similarity. • SSCB TFIDF: Instead of using the sliding window method, a convolutional neural network is built on the sentence-level similarities.

We use accuracy as the evaluation metric for the datasets MovieQA, InsuranceQA and SNLI, as there is only one correct answer or one label for each instance. For WikiQA, there may be multiple correct answers, so the evaluation metrics we use are Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR).

3.4 FURTHER ANALYSES

To further explain how our model works, we visualize the max values in each dimension of the convolutional layer. We use two examples shown in Table 1 from the MovieQA and InsuranceQA datasets respectively. In the top of Figure 2, we can see that the plot words that also appear in either the question or the answer will draw more attention by the CNN. We hypothesize that if the nearby words in the plot can match both the words in the question and the words in one answer, then this answer is more likely to be the correct one. Similarly, the bottom of Figure 2 also shows that the CNN will focus more on the matched word representations. If the words in one answer continuously match the words in the question, this answer is more likely to be the correct one.

4 RELATED WORK

We review related work in three types of general structures for matching sequences.

Siamese network: These kinds of models use the same structure, such as RNN or CNN, to build the representations for the sequences separately and then use them for classification. Then cosine similarity (Feng et al., 2015; Yang et al., 2015), element-wise operation (Tai et al., 2015; Mou et al., 2016) or neural network-based combination (Bowman et al., 2015) are used for sequence matching.

Attentive network: Soft-attention mechanisms (Bahdanau et al., 2014; Luong et al., 2015) have been widely used for sequence matching in machine comprehension (Hermann et al., 2015), text entailment (Rocktaschel et al., 2015) and question answering (Tan et al., 2016). Instead of using the final state of an RNN to represent a sequence, these studies use a weighted sum of all the states for the sequence representation.

Compare-Aggregate network: This kind of framework performs word-level matching (Wang & Jiang, 2016a; Parikh et al., 2016; He & Lin, 2016; Trischler et al., 2016; Wan et al., 2016). Our work is under this framework, but our structure is different from previous models and our model can be applied to different tasks. Besides, we analyze different word-level comparison functions separately.
ry18Ww5ee
[{"section_index": "0", "section_name": "REFERENCES", "section_text": ". Bergstra and Y. Bengio. Random search for hyper-parameter optimization. In JMLR, 2012\nJ. Bergstra et al. Algorithms for hyper oarameter optimization. In NIPS, 2011.\nPerformance of machine learning algorithms depends critically on identifying a. good set of hyperparameters. While recent approaches use Bayesian Optimiza tion to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation. We present HyPERBAND, a novel algorithm for hyperparameter optimization that is simple, flexible, and theoretically sound. HyPERBAND is a principled early-stoppping method that adaptively allocates a pre. defined resource, e.g., iterations, data samples or number of features, to randomly sampled configurations. We compare HyPERBAND with popular Bayesian Opti mization methods on several hyperparameter optimization problems. We observe that HyPERBAND can provide more than an order of magnitude speedups over. competitors on a variety of neural network and kernel-based learning problems."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In an effort to develop more efficient search methods, the problem of hyperparameter optimization ha. recently been dominated by Bayesian optimization methods (Snoek et al.]2012)Hutter et al.]2011 Bergstra et al.2011) that focus on optimizing hyperparameter configuration selection. These method. aim to identify good configurations more quickly than standard baselines like random search b selecting configurations in an adaptive manner; see Figure[1(a)] Existing empirical evidence suggest. that these methods outperform random search (Thornton et al.|2013]Eggensperger et al.]2013][Snoel et al.|[2015). However, these methods tackle a fundamentally challenging problem of simultaneously. fitting and optimizing a high-dimensional, non-convex function with unknown smoothness, anc. possibly noisy evaluations. To overcome these difficulties, some Bayesian optimization methods. resort to heuristics, at the expense of consistency guarantees, to model the objective function or speec up resource intensive subroutines,'[Moreover, these adaptive configuration selection methods are. intrinsically sequential and thus difficult to parallelize..\nA. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPs, 2007.\nAn orthogonal approach to hyperparameter optimization focuses on speeding up configuratioi evaluation; see Figure|1(b)] These methods are adaptive in computation, allocating more resource. to promising hyperparameter configurations while quickly eliminating poor ones. Resources car take various forms, including size of training set, number of features, or number of iterations fo. iterative algorithms. By adaptively allocating these resources, these methods aim to examine orders o. magnitude more hyperparameter configurations than methods that uniformly train all configurations tc completion, thereby quickly identifying good hyperparameters. While there are methods that combine. Bayesian optimization with adaptive resource allocation (Swersky et al.2013f 2014) Domhan et al 2015), we focus on speeding up random search as it offers a simple, parallelizable, and theoretically. principled launching point and is shown to outperform grid search (Bergstra & Bengiol 2012).\nR. Rifkin and A. Klautau. In defense of one-vs-all classification. 
A. Gyorgy and L. Kocsis. Efficient multi-start strategies for local search algorithms. JAIR, 41, 2011.

F. Hutter, H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Proc. of LION-5, 2011.

K. Jamieson and R. Nowak. Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting. In Information Sciences and Systems (CISS), 2014 48th Annual Conference on, pp. 1-6. IEEE, 2014.

K. Jamieson and A. Talwalkar. Non-stochastic best arm identification and hyperparameter optimization. In AISTATS, 2015.

A. Klein, S. Falkner, S. Bartels, P. Hennig, and F. Hutter. Fast bayesian optimization of machine learning hyperparameters on large datasets. arXiv preprint arXiv:1605.07079, 2016.

A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Department of Computer Science, University of Toronto, 2009.

T. Krueger, D. Panknin, and M. Braun. Fast cross-validation via sequential testing. Journal of Machine Learning Research, 16:1103-1155, 2015.

H. Larochelle et al. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML, 2007.

L. Li, K. Jamieson, G. DeSalvo, A. Rostamizadeh, and A. Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. arXiv:1603.06560, 2016.

O. Maron and A. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In NIPS, 1993.

Y. Netzer et al. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.

A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.

G. Ratsch, T. Onoda, and K.R. Muller. Soft margins for adaboost. Machine Learning, 42:287-320, 2001.

A. Sabharwal, H. Samulowitz, and G. Tesauro. Selecting near-optimal learners via incremental data allocation. In AAAI, 2016.

P. Sermanet, S. Chintala, and Y. LeCun. Convolutional neural networks applied to house numbers digit classification. In ICPR, 2012.

J. Snoek, H. Larochelle, and R. Adams. Practical bayesian optimization of machine learning algorithms. In NIPS, 2012.
K. Swersky, J. Snoek, and R. Adams. Multi-task bayesian optimization. In NIPS, 2013.

Figure 1: (a) The heatmap shows the validation error over a two dimensional search space, with red corresponding to areas with lower validation error, and putative configurations selected in a sequential manner as indicated by the numbers. (b) The plot shows the validation error as a function of the resources allocated to each configuration (i.e., each line in the plot). Configuration evaluation methods allocate more resources to promising configurations. (c) The validation loss as a function of total resources allocated for two configurations. The shaded areas bound the maximum distance from the terminal validation loss and monotonically decrease with the resource.

Our novel configuration evaluation method, HYPERBAND, relies on a principled early-stopping strategy to allocate resources, allowing it to evaluate orders of magnitude more configurations than uniform allocation strategies. HYPERBAND is a general-purpose technique that makes minimal assumptions, unlike prior configuration evaluation approaches (Swersky et al., 2013; Domhan et al., 2015; Swersky et al., 2014; Gyorgy & Kocsis, 2011; Agarwal et al., 2011). In this work, we describe HYPERBAND, provide intuition for the algorithm through a detailed example, and present a wide range of empirical results comparing HYPERBAND with well established competitors. We also briefly describe the theoretical underpinnings of HYPERBAND; however, a thorough theoretical treatment is beyond the scope of this paper and is deferred to Li et al. (2016).

2 RELATED WORK

Bayesian optimization techniques model the conditional probability p(f|λ) of a configuration's performance on a metric f given a set of hyperparameters λ. For instance, SMAC uses random forests to model p(f|λ) as a Gaussian distribution (Hutter et al., 2011). TPE is a non-standard Bayesian optimization algorithm based on tree-structured Parzen density estimators (Bergstra et al., 2011). A third popular method, Spearmint, uses Gaussian processes (GP) to model p(f|λ) and performs slice sampling over the GP's hyperparameters (Snoek et al., 2012).

Adaptive configuration evaluation is not a new idea. Maron & Moore (1993) considered a setting where training time is negligible (e.g., k-nearest-neighbor classification) and evaluation on a large validation set is accelerated by evaluating on an increasing subset of the validation set, stopping early configurations that are performing poorly. Since subsets of the validation set provide unbiased estimates of its expected performance, this is an instance of the stochastic best-arm identification problem for multi-armed bandits (see Jamieson & Nowak (2014) for a brief survey).

K. Swersky, J. Snoek, and R. P. Adams. Freeze-thaw bayesian optimization. arXiv:1406.3896, 2014.

In contrast, this paper assumes that evaluation time is negligible and the goal is to early-stop long-running training procedures by evaluating partially trained models on the validation set. Previous approaches either require strong assumptions or use heuristics to perform adaptive resource allocation. Several works propose methods that make strong assumptions on the convergence behavior of training
algorithms, providing theoretical performance guarantees under these assumptions (Gyorgy & Kocsis, 2011; Agarwal et al., 2011; Swersky et al., 2013; 2014; Domhan et al., 2015; Sabharwal et al., 2016). Unfortunately, these assumptions are often hard to verify, and empirical performance can drastically suffer when they are violated. One recent work of particular interest proposes a heuristic based on sequential analysis to determine stopping times for training configurations on increasing subsets of the data (Krueger et al., 2015). However, it has a few shortcomings: (1) it is designed to speed up multi-fold cross-validation and is not significantly faster than standard holdout, (2) it is not an anytime algorithm and requires the set of configurations to be evaluated as an input, and (3) the theoretical correctness and empirical performance of this method are highly dependent on a user-defined "safety zone."² Lastly, in an effort to avoid heuristics and strong assumptions, Sparks et al. (2015) proposed a halving style algorithm that did not require explicit convergence behavior, and Jamieson & Talwalkar (2015) analyzed a similar algorithm, providing theoretical guarantees and encouraging empirical results. Unfortunately, these halving style algorithms suffer from the n vs B/n issue which we will discuss in Section 3.

Finally, Klein et al. (2016) recently introduced Fabolas, a Bayesian optimization method that combines adaptive selection and evaluation. Similar to Swersky et al. (2013; 2014), it models the conditional validation error as a Gaussian process using a kernel that captures the covariance with downsampling rate to allow for adaptive evaluation. While we intended to compare HYPERBAND with Fabolas, we encountered some technical difficulties when using the package³ and are working with the authors of Klein et al. (2016) to resolve the issues.

ADDITIONAL EXPERIMENTAL RESULTS

A.1 COMPARISON WITH CVST

We replicated the classification experiments in Krueger et al. (2015) that train a support vector machine on the datasets from the IDA benchmark (Ratsch et al., 2001). All experiments were performed on Google Cloud Compute's n1-standard-1 instances. Following Krueger et al. (2015), we evaluated HYPERBAND and CVST on the same 2d grid of 610 hyperparameters and recorded the best test error and duration for 50 trials. The only modification we made to their original experimental setup was the data splits; instead of half for test and half for training, we used 1/11th for test and 10/11th for training. HYPERBAND performed holdout evaluation using 1/10th of the training data as the validation set. We set η = 3, and R was set for each dataset so that a minimum resource of 50 datapoints is allocated to each configuration. Table 2 shows that CVST and HYPERBAND achieve comparable test errors (the differences are well within the error bars), while HYPERBAND is significantly faster than CVST on all datasets. More granularly, while CVST on average has slightly lower mean error, HYPERBAND is within 0.2% of CVST on 5 of the 7 datasets. Additionally, for each of the 7 datasets, HYPERBAND does as well as or better than CVST in over half of the trials.

HYPERBAND ALGORITHM

HYPERBAND extends the SUCCESSIVEHALVING algorithm proposed for hyperparameter optimization in Jamieson & Talwalkar (2015) and calls it as a subroutine.
The idea behind SUCCESSIVEHALVING follows directly from its name: uniformly allocate a budget to a set of hyperparameter configurations, evaluate the performance of all configurations, throw out the worst half, and repeat until one configuration remains. The algorithm allocates exponentially more resources to more promising configurations. Unfortunately, SUCCESSIVEHALVING requires the number of configurations n as an input to the algorithm. Given some finite time budget B (e.g., an hour of training time to choose a hyperparameter configuration), B/n resources are allocated on average across the configurations. However, for a fixed B, it is not clear a priori whether we should (a) consider many configurations (large n) with a small average training time; or (b) consider a small number of configurations (small n) with longer average training times.

We use a simple example to better understand this tradeoff. Figure 1(c) shows the validation loss as a function of total resources allocated for two configurations with terminal validation losses ν1 and ν2. The shaded areas bound the maximum deviation from the terminal validation loss and will be referred to as "envelope" functions. It is possible to differentiate between the two configurations when the envelopes diverge. Simple arithmetic shows that this happens when the width of the envelopes is less than ν2 − ν1, i.e., when the intermediate losses are guaranteed to be less than (ν2 − ν1)/2 away from the terminal losses. There are two takeaways from this observation: more resources are needed to differentiate between the two configurations when either (1) the envelope functions are wider or (2) the terminal losses are closer together.

However, in practice, the optimal allocation strategy is unknown because we do not have knowledge of the envelope functions nor the distribution of terminal losses. Hence, if more resources are required before configurations can differentiate themselves in terms of quality (e.g., if an iterative training method converges very slowly for a given dataset or if randomly selected hyperparameter configurations perform similarly well) then it would be reasonable to work with a small number of configurations. In contrast, if the quality of a configuration is typically revealed using minimal resources (e.g., if iterative training methods converge very quickly for a given dataset or if randomly selected hyperparameter configurations are of low quality with high probability) then n is the bottleneck and we should choose n to be large.

"}, {"section_index": "7", "section_name": "A.2 LENET EXPERIMENT", "section_text": "We trained the LeNet convolutional neural network on MNIST using mini-batch SGD. Code is available for the network at http://deeplearning.net/tutorial/lenet.html. The search space for the LeNet example discussed in Section 3.2 is shown in Table 3.

"}, {"section_index": "8", "section_name": "3.1 HYPERBAND", "section_text": "HYPERBAND, shown in Algorithm 1, addresses this "n versus B/n" problem by considering several possible values of n for a fixed B, in essence performing a grid search over feasible values of n. Associated with each value of n is a minimum resource r that is allocated to all configurations before some are discarded; a larger value of n corresponds to a smaller r and hence more aggressive early stopping.
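To make the SUCCESSIVEHALVING subroutine described above concrete, the following is a minimal Python sketch. It is an illustration under our own naming, not the authors' reference implementation; get_config draws one random configuration, and train_and_eval(config, r) is assumed to return the validation loss after training with r units of resource.

def successive_halving(get_config, train_and_eval, n, r, eta=3):
    """Uniformly allocate resource, evaluate, keep the best 1/eta of the
    configurations, and repeat until one configuration remains."""
    configs = [get_config() for _ in range(n)]
    r_i = r
    while len(configs) > 1:
        losses = [train_and_eval(c, r_i) for c in configs]
        ranked = sorted(range(len(configs)), key=lambda j: losses[j])
        keep = max(1, len(configs) // eta)   # discard the worst 1 - 1/eta
        configs = [configs[j] for j in ranked[:keep]]
        r_i *= eta                           # exponentially more resource
    return configs[0]

For example, with n = 9, r = 1 and η = 3 the successive rounds evaluate 9 and then 3 configurations with 1 and 3 units of resource respectively, before a single survivor remains.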
There are two components to HYPERBAND: (1) the inner loop invokes SUCCESSIVEHALVING for fixed values of n and r (lines 3-9) and (2) the outer loop iterates over different values of n and r (lines 1-2). We will refer to each such run of SUCCESSIVEHALVING within HYPERBAND as a "bracket." Each bracket is designed to use about B total resources and corresponds to a different tradeoff between n and B/n. A single execution of HYPERBAND takes a finite number of iterations and in practice can be repeated indefinitely.

Hyperparameter              Scale    Min    Max
Learning Rate               log      1e-3   1e-1
Batch size                  log      1e1    1e3
Layer-2 Num Kernels (k2)    linear   10     60
Layer-1 Num Kernels (k1)    linear   5      k2

Table 3: Hyperparameter space for the LeNet application of Section 3.2. Note that the number of kernels in Layer-1 is upper bounded by the number of kernels in Layer-2.

2 The first two drawbacks prevent a full comparison to HYPERBAND on our selected empirical tasks; however, for completeness, we provide a comparison in Appendix A to Krueger et al. (2015) on some experimental tasks replicated from their paper.

The CVST algorithm from Krueger et al. (2015) focuses on speeding up standard k-fold cross-validation. We did not include it as one of the competitors in Section 4 because the experiments we selected were too computationally expensive for multi-fold cross-validation and CVST is not an anytime algorithm. Nonetheless, the CVST algorithm is an interesting approach and was shown to have promising empirical performance in Krueger et al. (2015). Hence, we performed a small scale comparison modeled after their empirical studies between CVST and HYPERBAND.

Dataset     CVST Test Error   CVST Duration   HYPERBAND Test Error   HYPERBAND Duration   Duration Ratio
banana      9.8%±1.6%         12.3±5.0        9.9%±1.5%              1.8±0.1              6.7±2.8
german      26.0%±4.5%        2.7±1.1         27.6%±4.8%             0.7±0.0              4.1±1.7
image       2.9%±1.1%         3.5±1.0         3.3%±1.4%              1.0±0.0              3.4±0.9
splice      8.6%±1.8%         10.6±3.1        8.7%±1.8%              3.9±0.1              2.7±0.8
ringnorm    1.4%±0.4%         21.3±2.3        1.5%±0.4%              6.5±0.3              3.3±0.4
twonorm     2.4%±0.5%         27.9±10.0       2.4%±0.5%              6.5±0.2              4.3±1.5
waveform    9.3%±1.3%         13.7±2.7        9.5%±1.3%              2.9±0.2              4.8±1.0

Table 2: The test error and duration columns show the average value plus/minus the standard deviation across 50 trials. Duration is measured in minutes and indicates how long it took each method to evaluate the grid of 610 hyperparameters used in Krueger et al. (2015). The ratio column shows the ratio of the duration for CVST over that for HYPERBAND with the associated standard deviation.

For the experiments discussed in Section 4.1, the exact architecture used is the 18% model provided on cuda-convnet for CIFAR-10.5

HYPERBAND requires two inputs: (1) R, the maximum amount of resource that can be allocated to a single configuration, and (2) η, an input that controls the proportion of configurations discarded in each round of SUCCESSIVEHALVING. The two inputs dictate how many different brackets are considered; specifically, smax + 1 different values for n are considered with smax = ⌊logη(R)⌋. HYPERBAND begins with the most aggressive bracket s = smax, which sets n to maximize exploration, subject to the constraint that at least one configuration is allocated R resources. Each subsequent bracket reduces n by a factor of approximately η until the final bracket, s = 0, in which every configuration is allocated R resources (this bracket simply performs classical random search).

Hence, HYPERBAND performs a geometric search in the average budget per configuration to address the "n versus B/n" problem, at the cost of approximately smax + 1 times more work than running SUCCESSIVEHALVING for a fixed n. By doing so, HYPERBAND is able to exploit situations in which adaptive allocation works well, while protecting itself in situations where more conservative allocations are required.

Hyperparameter                        Scale     Min       Max
Learning Parameters:
  Initial Learning Rate               log       5×10^-5   5
  Conv1 l2 Penalty                    log       5×10^-5   5
  Conv2 l2 Penalty                    log       5×10^-5   5
  Conv3 l2 Penalty                    log       5×10^-5   5
  FC4 l2 Penalty                      log       5×10^-3   500
  Learning Rate Reductions            integer   0         3
Local Response Normalization:
  Scale                               log       5×10^-6   5
  Power                               linear    0.01      3

Table 4: Hyperparameters and associated ranges for the three-layer convolutional network.

Algorithm 1: HYPERBAND algorithm for hyperparameter optimization.
input: R, η (default η = 3)
initialization: smax = ⌊logη(R)⌋, B = (smax + 1)R
1  for s ∈ {smax, smax − 1, ..., 0} do
2      n = ⌈(B/R) · η^s/(s + 1)⌉, r = Rη^(−s)
       // begin SUCCESSIVEHALVING with (n, r) inner loop
3      T = get_hyperparameter_configuration(n)
4      for i ∈ {0, ..., s} do
5          n_i = ⌊nη^(−i)⌋
6          r_i = rη^i
7          L = {run_then_return_val_loss(t, r_i) : t ∈ T}
8          T = top_k(T, L, ⌊n_i/η⌋)
9      end
10 end
11 return configuration with the smallest intermediate loss seen so far

Search Space: The search space used for the experiments is shown in Table 4. The learning rate reductions hyperparameter indicates how many times the learning rate was reduced by a factor of 10 over the maximum iteration window. For example, on CIFAR-10, which has a maximum iteration of 30,000, a learning rate reduction of 2 corresponds to reducing the learning rate every 10,000 iterations, for a total of 2 reductions over the 30,000 iteration window. All hyperparameters with the exception of the learning rate decay reduction overlap with those in Snoek et al. (2012). Two hyperparameters in Snoek et al. (2012) were excluded from our experiments: (1) the width of the response normalization layer was excluded due to limitations of the Caffe framework and (2) the number of epochs was excluded because it is incompatible with dynamic resource allocation.

Datasets: CIFAR-10 and SVHN contain 32×32 RGB images while MRBI contains 28×28 grayscale images. For all datasets, the only preprocessing performed on the raw images was demeaning. For CIFAR-10, the training (40,000 instances) and validation (10,000 instances) sets were sampled from data batches 1-5 with balanced classes. The original test set (10,000 instances) is used for testing. For MRBI, the training (10,000 instances) and validation (2,000 instances) sets were sampled from the original training set with balanced classes. The original test set (50,000 instances) is used for testing. Lastly, for SVHN, the train, validation, and test splits were created using the same procedure as that in Sermanet et al. (2012).

R represents the maximum amount of resources that can be allocated to any given configuration. In most cases, there is a natural upper bound on the maximum budget per configuration that is often dictated by the resource type (e.g., training set size for dataset downsampling; limitations based on memory constraints for feature downsampling; rule of thumb regarding number of epochs when iteratively training neural networks). R is also the number of configurations evaluated in the bracket that performs the most exploration, i.e., s = smax. In practice one may want n ≤ nmax to limit overhead associated with training many configurations on a small budget, i.e., costs associated with initialization, loading a model, and validation. In this case, set smax = ⌊logη(nmax)⌋.
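The following is a minimal Python sketch of Algorithm 1 above. It is our own rendering, not the authors' released code, and it assumes the three user-supplied methods behave as described in the text (configurations sampled i.i.d., a training routine that returns validation loss, and a top-k filter).

import math

def hyperband(get_hyperparameter_configuration,
              run_then_return_val_loss, top_k, R, eta=3):
    s_max = int(math.log(R, eta))            # assumes no floating point quirks
    B = (s_max + 1) * R                      # budget per bracket
    best, best_loss = None, float("inf")
    for s in range(s_max, -1, -1):           # lines 1-2: one bracket per s
        n = int(math.ceil((B / R) * eta ** s / (s + 1)))
        r = R / eta ** s                     # minimum resource per config
        T = get_hyperparameter_configuration(n)
        for i in range(s + 1):               # lines 4-9: SuccessiveHalving
            n_i = n // eta ** i
            r_i = r * eta ** i
            L = [run_then_return_val_loss(t, r_i) for t in T]
            for t, loss in zip(T, L):
                if loss < best_loss:
                    best, best_loss = t, loss
            # max(1, .) guards the final round, where floor(n_i/eta) can be 0
            T = top_k(T, L, max(1, n_i // eta))
    return best                              # line 11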
Computational Considerations: The experiments took the equivalent of over 1 year of GPU hours on NVIDIA GRID K520 cards available on Amazon EC2 g2.8xlarge instances. We set a total budget constraint in terms of iterations instead of compute time to make comparisons hardware independent.6 Comparing progress by iterations instead of time ignores overhead costs not associated with training, like the cost of configuration selection for Bayesian methods and model initialization and validation costs for HYPERBAND. While overhead is hardware dependent, the overhead for HYPERBAND is below 5% on EC2 g2.8xlarge machines, so comparing progress by time passed would not impact results significantly.

5 The model specification is available at http://code.google.com/p/cuda-convnet/
6 Most trials were run on Amazon EC2 g2.8xlarge instances but a few trials were run on different machines due to the large computational demand of these experiments.

The value of η can be viewed as a knob that can be tuned based on practical user constraints. Larger values of η correspond to a more aggressive elimination schedule and thus fewer rounds of elimination; specifically, each round retains 1/η of the configurations for a total of ⌊logη(n)⌋ + 1 rounds of elimination with n configurations. If one wishes to receive a result faster at the cost of a sub-optimal asymptotic constant, one can increase η to reduce the budget per bracket B = (⌊logη(R)⌋ + 1)R. We stress that results are not very sensitive to the choice of η. In practice we suggest taking η to be equal to 3 or 4.

Due to the high computational cost of these experiments, we were not able to run all searchers out to convergence. However, we did double the budget for each trial of CIFAR-10 to allow for a comparison of the searchers as they near convergence. Figure 6 shows that while Bayesian optimization methods achieve similar performance as HYPERBAND and SUCCESSIVEHALVING, it takes them much longer to achieve a comparable error rate.

Figure 6: Average test error across 10 trials is shown in all plots ((a) CIFAR-10, (b) MRBI, (c) SVHN). Error bars indicate the maximum and minimum ranges of the test error corresponding to the model with the best validation error.

Comparison with Early Stopping: Adaptive allocation for hyperparameter optimization can be thought of as a form of early stopping where less promising configurations are halted before completion. Domhan et al. (2015) propose an early stopping method for neural networks and combine it with SMAC to speed up hyperparameter optimization. Their method stops training a configuration if the probability of the configuration beating the current best is below a specified threshold. This probability is estimated by extrapolating learning curves fit to the intermediate validation error losses of a configuration. If a configuration is terminated early, the predicted terminal value from the estimated learning curves is used as the validation error passed to the hyperparameter optimization algorithm. Hence, if the learning curve fit is poor, it could impact the performance of the configuration selection algorithm. While this approach is heuristic in nature, it does demonstrate promising empirical performance, so we included SMAC with early termination as a competitor. We used the conservative termination criterion with default parameters and recorded the validation loss every 400 iterations and evaluated the termination criterion 3 times within the training period (every 8k iterations for CIFAR-10 and MRBI and every 16k iterations for SVHN).7 Comparing performance by the total multiple of R used is conservative because it does not account for the time spent fitting the learning curve in order to check the termination criterion.

7 We used the code provided at https://github.com/automl/pylearningcurvepredictor

HYPERBAND requires the following methods to be defined for any given learning problem: get_hyperparameter_configuration(n) returns a set of n i.i.d. samples from some distribution defined over the hyperparameter configuration space; run_then_return_val_loss(t, r) takes a hyperparameter configuration (t) and resource allocation (r) and returns the validation loss after training for the allocated resources; and top_k(configs, losses, k) takes a set of configurations as well as their associated losses and returns the top k performing configurations.
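As a concrete, purely hypothetical instantiation of the three methods above, the Python stubs below sample a log-uniform learning rate, score it with a stand-in objective, and keep the top performers; any real problem would substitute its own search space and training loop.

import random

def get_hyperparameter_configuration(n):
    # n i.i.d. samples from a distribution over the search space
    # (a log-uniform learning rate; an assumption for illustration)
    return [{"lr": 10 ** random.uniform(-4, 0)} for _ in range(n)]

def run_then_return_val_loss(t, r):
    # train configuration t for r units of resource and return validation
    # loss; replaced here by a toy objective so the example is runnable
    return (t["lr"] - 0.01) ** 2 + 1.0 / (r + 1)

def top_k(configs, losses, k):
    # keep the k configurations with the smallest validation loss
    order = sorted(range(len(configs)), key=lambda i: losses[i])
    return [configs[i] for i in order[:k]]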
We next present a simple example to provide intuition. We work with the MNIST dataset and optimize hyperparameters for the LeNet convolutional neural network trained using mini-batch SGD. Our search space includes learning rate, batch size, and number of kernels for the two layers of the network as hyperparameters (details are shown in Table 3 in Appendix A).

We further define the number of iterations as the resource to allocate, with one unit of resource corresponding to one epoch, or a full pass over the dataset. We set R to 81 and use the default value of η = 3, resulting in smax = 4 and thus 5 brackets of SUCCESSIVEHALVING with different tradeoffs between n and B/n. The resources allocated within each bracket are displayed in Table 1.

       s = 4        s = 3        s = 2        s = 1        s = 0
i      n_i   r_i    n_i   r_i    n_i   r_i    n_i   r_i    n_i   r_i
0      81    1      27    3      9     9      6     27     5     81
1      27    3      9     9      3     27     2     81
2      9     9      3     27     1     81
3      3     27     1     81
4      1     81

Table 1: Values of n_i and r_i for the brackets of HYPERBAND when R = 81 and η = 3.

Figure 2 compares the empirical performance of the different brackets of HYPERBAND if they were used separately, as well as standard HYPERBAND (all results are averaged over 70 trials). In practice we do not know a priori which bracket s ∈ {0, ..., 4} will be most effective, and in this case neither the most (s = 4) nor least aggressive (s = 0) setting is optimal. However, note that HYPERBAND does nearly as well as the optimal bracket (s = 3) and vastly outperforms the baseline uniform allocation (i.e., random search), which is equivalent to bracket s = 0.

Figure 2: Performance of individual brackets s and HYPERBAND.
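As a quick sanity check of Table 1, expanding each bracket's starting (n, r) pair with the inner-loop rules of Algorithm 1, n_i = ⌊nη^(−i)⌋ and r_i = rη^i, reproduces the schedule. The starting pairs below are taken directly from the table; the outer loop of Algorithm 1 produces values of the same order.

eta = 3
brackets = {4: (81, 1), 3: (27, 3), 2: (9, 9), 1: (6, 27), 0: (5, 81)}
for s in range(4, -1, -1):
    n, r = brackets[s]
    # integer arithmetic implements the floor in n_i = floor(n * eta**-i)
    print("s=%d:" % s, [(n // eta ** i, r * eta ** i) for i in range(s + 1)])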
"}, {"section_index": "9", "section_name": "3.3 OVERVIEW OF THEORETICAL RESULTS", "section_text": "Although a detailed theoretical analysis is beyond the scope of this paper, we provide an intuitive, high-level description of theoretical properties of HYPERBAND. Suppose there are n configurations, each with a given terminal validation error ν_i for i = 1, ..., n. Without loss of generality, index the configurations by performance so that ν1 corresponds to the best performing configuration, ν2 to the second best, and so on. Now consider the task of identifying the best configuration. The optimal strategy would allocate to each configuration i the minimum resource required to distinguish it from ν1, i.e., enough so that the envelope functions depicted in Figure 1(c) bound the intermediate loss to be less than (ν_i − ν1)/2 away from the terminal value. As shown in Jamieson & Talwalkar (2015) and Li et al. (2016), the budget required by SUCCESSIVEHALVING is in fact only a small factor away from this optimal approach because it capitalizes on configurations that are easy to distinguish from ν1. In contrast, the naive uniform allocation strategy, which allocates B/n to each configuration, has to allocate to every configuration the resource required to distinguish ν2 from ν1.

The relative size of the budget required for uniform allocation and SUCCESSIVEHALVING depends on the envelope functions bounding deviation from terminal losses as well as the distribution from which the ν_i's are drawn. The budget required for SUCCESSIVEHALVING is smaller when the optimal n versus B/n tradeoff requires fewer resources per configuration. Hence, if the envelope functions tighten quickly as a function of resource allocated, or the average distances between terminal losses are large, then SUCCESSIVEHALVING can be substantially faster than uniform allocation. Of course, we do not have knowledge of either function in practice, so we will hedge our aggressiveness with HYPERBAND. Remarkably, despite having no knowledge of the envelope functions or the distribution of the ν_i's, HYPERBAND requires a budget that is only log factors larger than the optimal for SUCCESSIVEHALVING. See Li et al. (2016) for details.

"}, {"section_index": "10", "section_name": "A.4 KERNEL CLASSIFICATION EXPERIMENTS", "section_text": "We trained the regularized least-squares classification model using a block coordinate descent solver. Our models take less than 10 minutes to train on CIFAR-10 using an 8 core machine, while the default SVM method in Scikit-learn is single core and takes hours. Table 5 shows the hyperparameters and associated ranges considered in the kernel least squares classification experiment discussed in Section 4.2. The cost term C is divided by the number of samples so that the tradeoff between the squared error and the l2 penalty would remain constant as the resource increased (squared error is summed across observations and not averaged). The regularization term λ is equal to the inverse of the scaled cost term C. Additionally, the average test error with associated minimum and maximum ranges across 10 trials is shown in Figure 7.

Hyperparameter   Type                       Values
preprocessor     Categorical                min/max, standardize, normalize
kernel           Categorical                rbf, polynomial, sigmoid
C                Continuous                 log [10^-3, 10^5]
gamma            Continuous                 log [10^-5, 10]
degree           if kernel=poly             integer [2, 5]
coef0            if kernel=poly,sigmoid     uniform [-1.0, 1.0]

Table 5: Hyperparameter space for the kernel regularized least squares classification problem discussed in Section 4.2.

"}, {"section_index": "11", "section_name": "4 HYPERPARAMETER OPTIMIZATION EXPERIMENTS", "section_text": "In this section, we evaluate the empirical behavior of HYPERBAND with iterations, data subsamples, and features as resources. For all experiments, we compare HYPERBAND with three well known Bayesian optimization algorithms: SMAC, TPE, and Spearmint. Additionally, we show results for SUCCESSIVEHALVING corresponding to repeating the most exploratory bracket of HYPERBAND. Finally, for all experiments, we benchmark against standard random search and random_2x, which is a variant of random search with twice the budget of other methods.

We study a convolutional neural network with the same architecture as that used in Snoek et al. (2012) and Domhan et al. (2015) from cuda-convnet. The search spaces used in the two previous works differ, and we used a search space similar to that of Snoek et al. (2012) with 6 hyperparameters for stochastic gradient descent and 2 hyperparameters for the response normalization layers. In line with the two previous works, we used a batch size of 100 for all experiments. For these experiments, we also compare against a variant of SMAC named SMAC_early that uses the early termination criterion proposed in Domhan et al. (2015) for neural networks. We view SMAC with early stopping to be a combination of adaptive configuration selection and configuration evaluation. See Appendix A for more details about the experimental setup.

Datasets: We considered three image classification datasets: CIFAR-10 (Krizhevsky, 2009), rotated MNIST with background images (MRBI) (Larochelle et al., 2007), and Street View House Numbers (SVHN) (Netzer et al., 2011). CIFAR-10 and SVHN contain 32×32 RGB images while MRBI contains 28×28 grayscale images. The splits used for each dataset are as follows: (1) CIFAR-10 has 40k, 10k, and 10k instances; (2) MRBI has 10k, 2k, and 50k instances; and (3) SVHN has close to 600k, 6k, and 26k instances for training, validation, and test respectively. For all datasets, the only preprocessing performed on the raw images was demeaning.

HYPERBAND Configuration: For these experiments, one unit of resource corresponds to 100 mini-batch iterations. For CIFAR-10 and MRBI, R is set to 300 (or 30k total iterations). For SVHN, R is set to 600 (or 60k total iterations) to accommodate the larger training set. η was set to 4 for all experiments, resulting in 5 SUCCESSIVEHALVING brackets for HYPERBAND.

Table 6: Hyperparameter space for the random feature kernel approximation classification problem discussed in Section 4.3.

Results: Ten independent trials were performed for each searcher. For CIFAR-10, the results in Figure 3(a) show that HYPERBAND is more than an order of magnitude faster than its competitors. In Figure 6 of Appendix A, we extend the x-axis for CIFAR-10 out to 100R. The results show that Bayesian optimization methods ultimately converge to similar errors as HYPERBAND. For MRBI, HYPERBAND is more than an order of magnitude faster than standard configuration selection approaches and 5x faster than SMAC with early stopping. For SVHN, while HYPERBAND finds a good configuration faster, Bayesian optimization methods are competitive and SMAC with early stopping outperforms HYPERBAND. This result demonstrates that there is merit to incorporating early stopping with configuration selection approaches.

Table 6 shows the hyperparameters and associated ranges considered in the random features kernel approximation classification experiment discussed in Section 4.3. The regularization term λ is divided by the number of features so that the tradeoff between the squared error and the l2 penalty would remain constant as the resource increased. Additionally, the average test error with associated minimum and maximum ranges across 10 trials is shown in Figure 8.

Figure 3: Average test error across 10 trials is shown in all plots. Label "SMAC_early" corresponds to SMAC with the early stopping criterion proposed in Domhan et al. (2015) and label "bracket s = 4" corresponds to repeating the most exploratory bracket of HYPERBAND.

Figure 7: Average test error of the best kernel regularized least squares classification model found by each searcher on CIFAR-10. The color coded dashed lines indicate when the last trial of a given searcher finished. Error bars correspond to the observed minimum and maximum test error across 10 trials.

Across the three datasets, HYPERBAND and SMAC_early are the only two methods that consistently outperform random_2x. On these datasets, HYPERBAND is over 20x faster than random search while SMAC_early is 7x faster than random search within the evaluation window. In fact, the first result returned by HYPERBAND after using a budget of 5R is often competitive with results returned by other searchers after using 50R. Additionally, HYPERBAND is less variable than other searchers across trials, which is highly desirable in practice (see Appendix A for plots with error bars).

For computationally expensive problems in high dimensional search spaces, it may make sense to just repeat the most exploratory brackets. Similarly, if meta-data is available about a problem or it is known that the quality of a configuration is evident after allocating a small amount of resource, then one should just repeat the most exploratory bracket. Indeed, for these experiments, repeating the most exploratory bracket of HYPERBAND outperforms cycling through all the brackets. In fact, bracket s = 4 vastly outperforms all other methods on CIFAR-10 and MRBI and is nearly tied with SMAC_early for first on SVHN.

Finally, CIFAR-10 is a very popular dataset and state-of-the-art models achieve much better accuracy than what is shown in Figure 3. The difference in performance is mainly attributable to higher model complexities and data manipulation (i.e., using reflection or random cropping to artificially increase the dataset size). If we limit the comparison to published results that use the same architecture and exclude data manipulation, the best human expert result for the dataset is 18% error, and hyperparameter optimized results are 15.0% for Snoek et al. (2012)4 and 17.2% for Domhan et al. (2015). These results are better than our results on CIFAR-10 because they use 25% more data by including the validation set and also train for more epochs. The best model found by HYPERBAND achieved a test error of 17.0% when trained on the combined training and validation data for 300 epochs.

In this experiment, we use HYPERBAND with data samples as the resource to optimize the hyperparameters of a kernel-based classification task on CIFAR-10. We use the multi-class regularized least squares classification model, which is known to have comparable performance to SVMs (Rifkin & Klautau, 2004; Agarwal et al., 2014) but can be trained significantly faster. The hyperparameters considered in the search space include preprocessing method, regularization, kernel type, kernel length scale, and other kernel specific hyperparameters (see Appendix A for more details). HYPERBAND is run with η = 4 and R = 400, with each unit of resource representing 100 datapoints. Similar to previous experiments, these inputs result in a total of 5 brackets. Each hyperparameter optimization algorithm is run for ten trials on Amazon EC2 m4.2xlarge instances; for a given trial, HYPERBAND is allowed to run for two outer loops, bracket s = 4 is repeated 10 times, and all other searchers are run for 12 hours.

Figure 4 shows that HYPERBAND returns a good configuration after just the first SUCCESSIVEHALVING bracket in approximately 20 minutes; other searchers fail to reach this error rate on average even after the entire 12 hours. Notably, HYPERBAND was able to evaluate over 250 configurations in this first bracket of SUCCESSIVEHALVING, while competitors were able to evaluate only three configurations in the same amount of time. Consequently, HYPERBAND is over 30x faster than Bayesian optimization methods and 70x faster than random search. Bracket s = 4 slightly outperforms HYPERBAND but the terminal performance for the two algorithms is the same. Random_2x is competitive with SMAC and TPE.

We next demonstrate the performance of HYPERBAND when using features as a resource, focusing on random feature approximations for kernel methods. Features are randomly generated using the method described in Rahimi & Recht (2007) to approximate the RBF kernel, and these random features are then used as inputs to a ridge regression classifier.
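A minimal sketch of this featurization, following Rahimi & Recht (2007) for the RBF kernel: here gamma (the kernel bandwidth) and lam (the ridge penalty) stand in for the searched hyperparameters, and the resource is simply the number of random features D. This is an illustration, not the exact pipeline used in the experiments.

import numpy as np

def random_fourier_features(X, D, gamma, seed=0):
    """Map X (n x d) to D random features approximating the RBF kernel
    exp(-gamma * ||x - y||^2), as in Rahimi & Recht (2007)."""
    rng = np.random.RandomState(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def ridge_fit(Z, Y, lam):
    # closed-form ridge regression on the random features
    A = Z.T @ Z + lam * np.eye(Z.shape[1])
    return np.linalg.solve(A, Z.T @ Y)

Allocating more resource to a configuration then simply means recomputing the feature map with a larger D before refitting the classifier.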
We consider hyperparameters of a random feature kernel approximation classifier trained on CIFAR-10, including preprocessing method, kernel length scale, and l2 penalty. We impose an upper bound of 100k random features for the kernel approximation so that the data will comfortably fit into a machine with 60GB of memory. Additionally, we set one unit of resource to be 100 features for an R = 1000, which gives 5 different brackets with η = 4. Each searcher is run for 10 trials, with each trial lasting 12 hours on an n1-standard-16 machine from Google Cloud Compute. The results in Figure 5 show that HYPERBAND is around 6x faster than Bayesian methods and random search. HYPERBAND performs similarly to bracket s = 4. Random_2x outperforms Bayesian optimization algorithms.

4 We were unable to reproduce this result even after receiving the optimal hyperparameters from the authors through a personal communication.

Figure 8: Average test error of the best random features model found by each searcher on CIFAR-10. The test error for HYPERBAND and bracket s = 4 is calculated at every evaluation instead of at the end of a bracket. Error bars correspond to the observed minimum and maximum test error across 10 trials.

Figure 4: Average test error of the best kernel regularized least squares classification model found by each searcher on CIFAR-10. The color coded dashed lines indicate when the last trial of a given searcher finished.

Figure 5: Average test error of the best random features model found by each searcher on CIFAR-10. The test error for HYPERBAND and bracket s = 4 is calculated at every evaluation instead of at the end of a bracket.

"}, {"section_index": "12", "section_name": "4.4 EXPERIMENTAL DISCUSSION", "section_text": "For a given R, the most exploratory SUCCESSIVEHALVING round performed by HYPERBAND evaluates η^⌊log_η(R)⌋ configurations using a budget of (⌊log_η(R)⌋ + 1)R, which gives an upper bound on the potential speedup over random search. If training time scales linearly with the resource, the maximum speedup offered by HYPERBAND compared to random search is η^⌊log_η(R)⌋ / (⌊log_η(R)⌋ + 1). For the values of η and R used in our experiments, the maximum speedup over random search is approximately 50x given linear training time. However, we observe a range of speedups from 6x to 70x faster than random search. The differences in realized speedup can be explained by two factors: (1) the scaling properties of total evaluation time as a function of the allocated resource and (2) the difficulty of finding a good configuration.

If training time is superlinear as a function of the resource, then HYPERBAND can offer higher speedups. More generally, if training scales like a polynomial of degree p > 1, the maximum speedup of HYPERBAND over random search is approximately ((η^(p−1) − 1)/η^(p−1)) · η^⌊log_η(R)⌋. Hence, higher speedups were observed for the kernel least squares classifier experiment discussed in Section 4.2 because the training time scaled quadratically as a function of the resource.
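Plugging in the values used in these experiments gives a quick sanity check of the two expressions above; note that the polynomial-case expression is our reconstruction of the garbled formula in the source, so treat the second number as indicative rather than exact.

import math

def max_speedup(R, eta, p=1):
    s_max = math.floor(math.log(R, eta))
    if p == 1:
        return eta ** s_max / (s_max + 1)
    return (eta ** (p - 1) - 1) / eta ** (p - 1) * eta ** s_max

print(max_speedup(300, 4))        # ~51x: linear training time, R=300, eta=4
print(max_speedup(400, 4, p=2))   # larger for quadratic training time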
If 10 randomly sampled configurations are sufficient to find a good hyperparameter setting, then the benefit of evaluating orders of magnitude more configurations is muted. Generally the difficulty of the problem scales with the dimension of the search space, since coverage diminishes with dimensionality. For low dimensional problems, the number of configurations evaluated by random search and Bayesian methods is exponential in the number of dimensions, so good coverage can be achieved; i.e., if d = 3 as in the feature subsampling experiment, then n = O(2^d) = 8. Hence, HYPERBAND is only 6x faster than random search on the feature subsampling experiment. For the neural network experiments, however, we hypothesize that faster speedups are observed for HYPERBAND because the dimension of the search space is higher.

"}, {"section_index": "13", "section_name": "5 FUTURE WORK", "section_text": "We have introduced a novel bandit-based method for adaptive configuration evaluation with demonstrated competitive empirical performance. Future work involves exploring (i) embedding HYPERBAND into parallel and distributed computing environments; (ii) adjusting for training methods with different convergence rates; and (iii) combining HYPERBAND with non-random sampling methods.
SkXIrV9le
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The current computer graphics pipelines are the result of efficient implementations required by lim. ited hardware and high frequency output requirements. These requirements were also achieved witl the use of explicit physics and optic constraints and modeling with constantly improving data struc tures (Shirley et al.2015)."}, {"section_index": "1", "section_name": "5 CONCLUSIONS", "section_text": "In machine learning on the other hand, for a long time image (Olshausen et al.]1996) and videc (Hurri & Hyvarinen2003) generative models had been investigated with statistical approaches tha model images down to the pixel level (Simoncelli & Olshausen2001), sometimes assuming neigh borhood statistical dependencies (Osindero & Hinton2008). In video prediction, the current state of the art uses variations of deep convolutional recurrent neural networks (Kalchbrenner et al.]2016 (Lotter et al.2016) (Finn et al.2 2016).\nThis paper introduced a statistical framework for modeling video of 2D scenes inspired by graphic. ipelines and variational auto-encoding Bayes. From this statistical framework we derived a vari. tional lower bound that decouples sprites and their dynamics in a video. To optimize this lowe. ound, we suggested a family of architectures called Perception Updating Networks that can tak. dvantage of this decoupled representation by memorizing sprites or their percepts and updating i. ocation in a scene independently. We showed that this architecture could generate videos that ar nterpretable and are better suited than baseline RNNs for long video generation..\nAs a parallel to the classic machine learning approach to image and video interpretation and pre diction is a growing trend in the deep learning literature for modeling vision as inverse graphics (Kulkarni et al.]2015)(Rezende et al.] 2016)(Eslami et al.]2016). These approaches can be inter preted into two groups: supervised and unsupervised vision as inverse graphics. The supervised approach assumes that during training an image is provided with extra information about its rota tion, translation, illumination, etc. The goal of the supervised model is to learn an auto-encoder tha explicitly factors out the content of the image and its physical properties. The supervised approach is illustrated byKulkarni et al.(2015)."}, {"section_index": "2", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Ryan Burt for several suggestions to the first draft. This work was partially funded by the University of Florida Graduate Student Fellowship and ONR N00014-14-1-0542"}, {"section_index": "3", "section_name": "REFERENCES", "section_text": "The unsupervised approach requires extra architectural constraints, similar to those assumed in com-. puter graphics. For example,Reed et al.(2016) modeled the content of a scene with a Generative. Adversarial Network (Goodfellow et al.2014) and its location with Spatial Transformer Networks. (Jaderberg et al.|2015). The full model is adapted end-to-end to generate images whose appear- ance can be changed by independently modifying the 'what\"' and/or 'where\"' variables. A similar approach was applied to video generation with volumetric convolutional neural networks (Vondrick et al.[2016).In two papers by Google DeepMind (Rezende et al.[2016) (Eslami et al.[2016) they improved the 'where\"' representations of the unsupervised approach and modeled the 3D geometry of the scene. 
This way they explicitly represented object rotation, translation, camera pose, etc. Their approaches were also trained end-to-end with REINFORCE-like stochastic gradients to back-. propagate through non-differentiable parts of the graphics pipeline (Rezende et al.]2016) or to count\nDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointl. learning to align and translate. arXiv preprint arXiv:1409.0473. 2014\nChelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interactior through video prediction. arXiv preprint arXiv:1605.07157, 2016."}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "In Fig. 5|we show sample rollout videos. The network was fed with 10 frames and asked to generate. 10 more getting its own outputs back as inputs and the companion code repository for an animated version of this figure\nThis experiment also suggests several improvements in the proposed architecture. For example, we assumed that the internal RNN has to calculate a sprite at every time step, which is inefficient when the sprites don't change in the video. We should improve the architecture with an extra memory unity that snapshots the sprites and avoid the burden of recalculating the sprites at every step. We believe this would a possible way to free representation power that the internal RNN could use to. model the movement dynamics for even more time steps..\ndXY convolution result 0 10 10 20 20 30 30 40 40 50 50 convolution 60 60 0 10 20 30 40 50 60 0 10 20 30 40 50 60 0 10 20 30 8 0 4 spatial transformer A 40 8 4 50 60 0 10 20 30 40 50 60 spatial transformer result\nJarmo Hurri and Aapo Hyvarinen. Simple-cell-like receptive fields maximize temporal coherenc in natural video. Neural Computation, 15(3):663-691, 2003.\nMax Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Ad vances in Neural Information Processing Svstems. p. 2017-2025. 2015\nNal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex. Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016\nBruno A Olshausen et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996.\nthe number of objects in the scene (Eslami et al.||2016). Those papers also used Spatial Transformer Networks to model the position of the objects in the scene, but they extended it to 3D geometry so it could also model rotation and translation in a volumetric space.\nOther approaches inspired by the graphics pipeline and computer vision geometry in machine learn ing uses the physics constraints to estimate the depth of each pixel in the scene and camera pose movements to predict frames in video (Mahjourian et al.| 2016) (Godard et al.2016).\nScott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learn ing what and where to draw. arXiv preprint arXiv:1610.02454, 2016..\nThe present paper is closer to the unsupervised approach of vision as inverse graphics. More pre- cisely, here we investigate frame prediction in video. Contrary to the work by[Reed et al. (2016) here we first limit ourselves to simple synthetic 2D datasets and learning models whose representations can be visually interpreted. This way we can investigate exactly what the neural network is learning and validate our statistical assumptions. 
Also, we investigate the behavior of Spatial Transformer Networks and question it as the default choice when limited compute resources are available and no scale invariance is required.\nDanilo Jimenez Rezende, SM Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nico las Heess. Unsupervised learning of 3d structure from images. arXiv preprint arXiv:1607.00662 2016.\nPeter Shirley, Michael Ashikhmin, and Steve Marschner. Fundamentals of computer graphics. CRC Press, 2015.\nEero P Simoncelli and Bruno A Olshausen. Natural image statistics and neural representation Annual review of neuroscience, 24(1):1193-1216, 2001\nFirst in the next Section we will pose a statistical model that is appropriate for machine learning bu inspired by the graphics pipeline\nNitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of videc representations using lstms. CoRR, abs/1502.04681, 2, 2015.\nCarl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics arXiv preprint arXiv:1609.02612, 2016.\nThis section starts with a high level description of the 2D graphics pipeline, followed by a discussior of how to implement it with neural network modules, and finally we define a formal statistical model\nThe 2D graphics pipeline starts from geometric primitives and follows with modeling transforma. tions, clipping, viewing transformations and finally scan conversion for generating an image. Here. we will deal with previously rasterized bitmaps, i.e. sprites, and will model the translation transfor mations, rotation and clipping with differential operations. This way, the steps in the pipeline cai. be defined as layers of a neural network and the free parameters can be optimized with backpropa. gation.\nan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair. Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor mation Processing Systems, pp. 2672-2680, 2014\nFigure 1: How to get similar results using convolutions with delta-functions and spatial transformers. Input sprite is 8 8 pixels and the outputs are 64 64 pixels. Note that in the convolution the result. shape is rotated 180 degrees and its center is where the delta equals to one at pixel (x = 16, y = 16).. Note also that the edges of the spatial transformer results are blurred due to bilinear interpolation. A. matrix can be read as \"zoom-out\"' 8 times and translate up and left in a quarter of the resulting size\nCjSj, where j Cj=1. j\nFor interpretable results it would be optimal to do one-hot memory addressing where c; = 1 foj S; = S and c; = 0 otherwise. Note that (1) is differentiable w.r.t to both c; and S; so we can learn the individual sprites from data. We can for c, to sum to 1 using the softmax nonlinearity. This. approach was inspired by the recent deep learning literature on attention modules (Bahdanau et al. 2014) (Graves et al.2014).\nWhen the number of possible sprites is too large it is more efficient to do a compressed represen-. tation. Instead of using an address value c we use a content addressable memory where the image generator estimates a code z that is then decoded to the desired sprite with a (possibly nonlinear). function d(z). If we interpret the addressing value z as a latent representation and the content addressable memory d(z) as a decoder, we can use the recent advances in neural networks for gen erative models to setup our statistical model. 
We will revisit this later in this section..\nThe translation transformation can be modeled with a convolution with a Delta function or using spatial transformers. Note that the translation of an image I(x, y) can be defined as.\nI(x-Tx,y-Ty) =I(x,y)+O(x-Tx,y-Ty\nWe assume the coordinates in the original image are integers 0 < x < M and 0 < y < N, where M N is the size of the image I. Once the new coordinates are defined, we can calculate the values of the pixels in the new image I using bilinear interpolation:\nwhere (x1, X2, Y1, Y2 are integers, x1 x < x2, y1 < y < y2 and\nWx1,Y1 =x-xy-x) Wx1,Y2 =[x]-x)([y]+1-y) Wx2,Y1 =([x]+1-x)([y]-y) Wx2,y2 =([x]-x)([y]+1-y)\nFor our neural network implementation, we assume a finite set of sprites (later we generalize it tc infinite sprites) that will be part of the frames in the video. The image generation network selects a. sprite, s, from a memorized sprite database S,cs1. K} using an addressing signal c..\nwhere + denotes the image convolution operation. Clipping is naturally handled in such a case. If the output images have finite dimensions and (x -- T, y -- Ty) is non-zero near its border, the translated image I(x - Tx, y Ty) will be clipped. Another way of implementing the translation operation is using Spatial Transformer Networks (STN) (Jaderberg et al.J2015). An implementation of STN can be defined in two steps: resampling and bilinear interpolation. Resampling is defined by moving the position of the pixels (x, y) in the original image using a linear transform to new positions (x, y) as\n[x] A y where 1] [A11 A12 A13 A A21 A22 A23\nI(x,y) = Wx1,y1I(x1, Y1) + Wx1,y2I(x1,Y2)+ Wx2,y1I(X2,Y1) + Wx2,y2I(x2,Y2)\nTo avoid sampling from outside the image we clip the values |x | and |x|+ 1 between 0 and M and the values [y] and [y] + 1 between 0 and N. We omitted that in (5) for conciseness. Note that (4 is piecewise differentiable w.r.t I.\nWe can define translation through operations with\n0 cos p sin p A = - sin p cosp 0\nConsidering the tools defined above, we can define a statistical model of 2D images the explicitly represents sprites and their positions in the scene. We can use the free energy of this statistical model. to optimize a neural network. Let us start with a static single frame model and later generalize it to Video.\nLet an image I ~ pe(I) be composed of sprite s ~ pe(s) centered in the (x, y) coordinates in. the larger image I. Denote these coordinates as a random variable dxy ~ pe, where 0 are the. model parameters. pe(xy) can be factored in two marginal categorical distributions Cat(x) and. Cat(oy) that models the probability of each coordinate of the sprite independently. For the finite sprite dataset, pe(s) is also a categorical distribution conditioned on the true sprites. For this finite case the generative model can be factored as.\npe(I, s, ) = pe(s)pe(Oxu)p(Is, Oxy)\n1 0 T x A 0 1 Ty\nImage rescaling is achieved on that framework by rescaling in the right square submatrix A1:2,1:2. We illustrate in Fig.1 how to get similar results using convolutions with a delta-function and spatial transformers.\npe(s,0[I) = pe(s[D)p(SxyI\nis tractable. One could use for instance Expectation-Maximization or greedy approaches like Match- ing Pursuit to alternate between the search for the position and fitting the best matching shape. For the infinite number of sprites case, we assume that there is a hidden variable z from which the sprites are generated as p(s, z) = pe(z )p0(s[z). 
In such case our full posterior becomes\npe(z, s,6I) = pe(z, s|I)p(8xy[I) = pe(zI)pe(sI,z)p(0xy[I)\nWe can simplify q10) assuming pe(z[s) = pe(z|I) for simple images without ambiguity and no. sprite occlusion. For a scalable inference in the case of unknown 0 and z and intractable pe(z[s) we can use the auto-encoding variational Bayes (VAE) approach proposed byKingma & Welling. (2013). Using VAE we define an approximate recognition model qs(z[s). In such case, the log-.\nogpe(I) = DkL(qo(z[si)[[pe(zsi))+ DkL(pe(z|si)|pe(z[Ii))+ L(0,$,0xy,Ii)\n(0,$,8,I) = -DkL(qo(zs)[pe(z))+ Eqp(z|s,s)pe(s|I)[l0gPe(I|z,0)];\nc sprites translate RNN Add 't+1 dxY rotate Background p\nFigure 2: A schematic block diagram for a Perception Updating Network. This configuration uses both convolutions with delta functions for translation and spatial transformers for rotation. It also shows the optional background underlay..\nz =mo(I)+vo(I)\nwhere ~ N(0, oI), I is the identity matrix, the functions m(I) and v(I) are deep neural network learned from data.\nOne can argue that given z and a good approximation to the posterior qo, estimating is stil tractable. Nevertheless, we preemptively avoid Expectation-Maximization or other search ap proaches and use instead neural network layers lx and ly:.\nWe extend the model above to videos, i.e. sequences of images I(t) = {I(0), I(1), ...}, assuming that the conditional log-likelihood logpe(It|H1t) = logpe(It|Hst, Hzt) follows (11), where H1 is the history of video frames prior to time point t. Also Hs, and Hz. are the history of position coordinates and the history of latent variables of the sprites respectively. We should observe that one can make the assumption that the sprites don't change for a given video 1(t) and only estimate one sprite st=o or hidden variable zt=o. This assumption can be useful for long term predictions, but requires that the main object moving in the scene doesn't change.\nIn the next section, we propose a neural network architecture for maximizing our approximate vari ational lower bound 2D videos"}, {"section_index": "5", "section_name": "PERCEPTION UPDATING NETWORKS", "section_text": "This Section proposes a family of neural architectures for optimizing the lower bound (12). A. schematic diagram is represented in Fig. (2). The core of our method is a Recurrent Neural Network (RNN) augmented with task specific modules, namely a sprite addressable memory and modeling transformations layers. RNNs augmented with task specific units were popularized by|Graves et al.. (2014) in the context of learning simple differentiable algorithms and served as inspiration for us as. well. Here since we explicitly model the perceived sprites as s or z and update it and its location. and/or rotation though time we decided to call our method simply Perception Updating Networks.\nHere an input frame at time t, It, is fed to the RNN that emits 2 signals: a memory address that selects a relevant sprite and transformation parameters. If we are doing the translation transformation using convolutions and delta functions this output is equal to (14). If using STN, the translation operation returns the matrix A used in (3). Note that we could use both, letting convolutions with 8 to the translation is constraining A as in (7) to do rotation transformations only. We describe the. general case where both oxy and STNs are used in Algorithm 1..\nwhere we dropped the subindices xy and i to avoid clutter. 
Here we would like to train our model by maximizing the lower bound (12), again inspired by VAE. We can do so using the reparametrization trick assuming qo(z[s) and the prior pe(z) to be Gaussian and sampling.\ndxy = softmax(lx(I)) softmax(ly(I))\nwith denoting the outer product of marginals. We also experiment using STNs. Such amortized inference is also faster in training and test time than EM and will also cover the case where I is itself a learned low dimensional or latent representation instead of an observable image. Bear this in mind while we use this approach even in simple experiments such as those with moving shapes in the Experiments Section. This will help us to understand what can be learned from this model.\nBeyond deciding between STNs vs xu, a few other free parameters of our method are the type of. RNN (e.g. vanilla RNN, LSTM, GRU, ConvRNN, etc), the number of neurons in the hidden state of. the RNN and neural network architectures that infer the correct sprite and modeling transformation parameters. Our hyperparameter choices are investigated separately in each experiment in the next. Section.\nData: input videos It, t E {0, 1, 2, ...}, initial RNN state ho, neural network layers m, Vo, d, l, f Result: video predictions It.t E L1. 2. 3...\nOxy = softmax(lx(ht)) softmax(ly(ht) p = f(ht) cos p sin p 0 A = sin p cos p 0 ~ pe(z) Zt =mo(ht)+vs(ht) St = d(zt) at = STN(st,A) It+1 = at* Oxy It+1 = It+1 +(1- )B nd\nIn the next section we present experiments with the proposed architecture on synthetic datasets"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "In this section we experiment with several implementations of the proposed Perception Updating Networks. We start with a simple synthetic dataset made of videos where one of 3 moving shapes moves with constant speed bouncing in the edges of an image. This illustrates the working of the finite memory and the addressing scheme in (1). Afterwards we show results on the moving MNIST dataset (Srivastava et al.2015) commonly used in the literature of generative neural network models of videos."}, {"section_index": "7", "section_name": "4.1 BOUNCING SHAPES", "section_text": "In this first experiment we generate videos of one of three shapes moving on a non-zero background The shapes are a square, triangle and cross. The image size is 20 20 pixels and the shapes are 8 8 pixels. The pixel values are between O and 1. The shapes are picked with equal probability and they move at constant speed of 1 pixel per frame. The shapes start from random initial positions with anc start moving in random directions as well.\nAlgorithm 1: Perception Updating Networks. STN denotes spatial transformer operator (3)-(4) and * denotes convolution. We experimented with several variations of this algorithm, mainly changing if and how the \"where\"' modules dxy and STN are used. Also changing how the sprite st is calculated and not using a background B when not necessary..\nFigure 3: Results on the Bouncing Shapes dataset. Three 8x8 sprites (a square, a cross and a triangle were used to generate videos. The shapes move in a 20x20 pixels canvas with a Toeplitz backgrounc and bounce on the corners. a) We show one step ahead predictions with the compared methods. b We also show the learned sprites for the convolutional implementation of the proposed Perceptior Updating Networks when we over- and under-estimate the size of the desired sprites.\nIn the following experiments, the training videos were 10 frames long. 
At test time the network is fed the first 10 frames of a video and asked to predict the next 10. Results for the compared methods are shown in Fig. ??. For the baseline method, we did a hyperparameter search on conventional LSTMs with a single linear output layer until we found one that had comparable results at test time. That network had 256 hidden cells. Also, note that although the scale of the mean square error is the same, the results from our proposed architecture look smoother than those learned by the LSTM as shown in Fig.3\nGiven such a simple experiment, it is elucidating to visualize values learned by each piece of th network. As expected the sprite memory learned the 3 investigated shapes in transposed order sinc they are reverted by the convolution operation to compose the frame. We also experimented witl choosing the size of the learned sprites s smaller and larger than the true shapes. We observed tha for larger shapes such as 10 10 the sprites converge to the correct shapes but just using part o the pixels. For smaller shapes such as 6 6 pixels, instead of learning a part of the correct shape the convolutional Perception Updating Network learned to compensate for the lack of enough pixel with more than one non-zero value in the location operation dxy (see Fig. 3. This allow us t suggest to the interested practitioner that in order to get interpretable results it is better to use sprite larger than the expected size than smaller.\nFor the spatial transformer PUN the image is calculated as (see Algorithm 1 for context)\nA=f(ht) It+1 = STN(St, A)\nWe noticed that the spatial transformer PUN was not able to learn the training videos using an equivalent architecture to the convolutional PUN one. We had to use multiple layers to define the function f(ht). In other words, in the convolution based method dxy can be estimated by a single affine transformation of the state ht but A cannot. We also had to use smaller learning rates to\na) one step ahead prediction ground truth convolutional PUN LSTM spatial transformer PuN b) convolutional PUN learned sprites 10x10 sprites 6x6 sprites sample 0xy when sprites 10x10 sample xy when sprites are 6x6\n0.25 cony.PUN LSTM 0.20 STNPUN 0.15 E ISW 0.10 0.05 0.00 0 50 100 150 200 epochs\n0.25 cOnVPUN LSTM 0.20 STN PUN 0.15 E ISW 0.10 0.05 0.00 0 50 100 150 200 epochs\nFigure 5: Sample rollouts of a 2 layer LSTM convolutional Perception Updating Network with\nguarantee convergence: 0.0001 for STN while the dxy-based model worked with a value 10 time larger.\nIf we don't use the softmax nonlinearity to construct dxy the representations learned by the con. volutional PUN are no longer visually interpretable. It is interesting to conclude that under this framework the \"what' and \"where\"' can only be distinguished if we impose architectural constraints.. The reason is the commutative property of the convolution operation..\nAs a note on rotation, we ran experiments where the sprite are rotated by a random angle before. being placed in the image. This new type of videos cannot be learned using only convolutiona based Perception Updating Networks unless we increase the number of sprites proportionally to the. number of possible angles. Spatial transformer based Perception Updating Networks can handle this. new type of video naturally. Nevertheless, if the number of rotation angles is finite or can be dis. cretized we found that we could learn to generate the videos faster if we combined the convolutiona. 
As a note on rotation, we ran experiments where the sprites were rotated by a random angle before being placed in the image. This new type of video cannot be learned using only convolution based Perception Updating Networks unless we increase the number of sprites proportionally to the number of possible angles. Spatial transformer based Perception Updating Networks can handle this new type of video naturally. Nevertheless, if the number of rotation angles is finite or can be discretized, we found that we could learn to generate the videos faster if we combined the convolutional approach with a mechanism to select the appropriate angle from a set of possibilities. Results on this experiment are not shown in this paper due to space constraints but they can be reproduced with the companion code.

Figure 4: Learning curves in the test task of two implementations of the proposed architecture (conv PUN and STN PUN) and an equivalent LSTM baseline. Note that the spatial transformer based PUN was not able to generalize to the test set, i.e. it did not work well for generating videos when getting its own previous outputs as next step inputs.

"}, {"section_index": "8", "section_name": "4.2 MOVING MNIST", "section_text": "The Moving MNIST benchmark uses videos generated by moving 28 × 28 pixel images of handwritten digits in a 64 × 64 pixels canvas. Just like in the Bouncing Shapes dataset, the digits move at different speeds in different directions and can bounce on the walls. Unlike the Bouncing Shapes dataset, there are 60000 different sprites for training and 10000 for test, making it impractical to use a discrete memory module. Instead, we use the memory representation denoted by (13), followed by s_t = d(z_t) as written in Algorithm 1.

We trained a convolutional Perception Updating Network using 2 layer LSTMs, each one with 1024 cells, for 200 epochs, with 10000 gradient updates per epoch. The latent variable z had 100 dimensions and the decoder d(.) was a single hidden layer MLP with 1000 hidden neurons and softplus activations.

Figure 5: Sample rollouts of a 2 layer LSTM convolutional Perception Updating Network.

"}]
B184E5qee
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Language models, which are probability distributions over sequences of words, have many appli cations such as machine translation (Brown et al.]1993), speech recognition (Bahl et al.] 1983) 01 dialogue agents (Stolcke et al.|20oo). While traditional neural networks language models have ob tained state-of-the-art performance in this domain (Jozefowicz et al.[2016, Mikolov et al.[2010) they lack the capacity to adapt to their recent history, limiting their application to dynamic environ ments (Dodge et al.]2015). A recent approach to solve this problem is to augment these networks with an external memory (Graves et al.2014) Grefenstette et al.f2015] Joulin & Mikolov2015 Sukhbaatar et al.]2015). These models can potentially use their external memory to store new information and adapt to a changing environment.\nTomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neura network based language model. In INTERSPEECH, 2010.\nTomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc'Aurelio Ranzato. Learning longe memory in recurrent neural networks. arXiv preprint arXiv:1412.7753. 2014.\nWhile these networks have obtained promising results on language modeling datasets (Sukhbaatar. et al.|[2015), they are quite computationally expensive. Typically, they have to learn a parametrizable mechanism to read or write to memory cells (Graves et al.][2014)Joulin & Mikolov2015). This may limit both the size of their usable memory as well as the quantity of data they can be trained on. In. this work, we propose a very light-weight alternative that shares some of the properties of memory. augmented networks, notably the capability to dynamically adapt over time. By minimizing the computation burden of the memory, we are able to use larger memory and scale to bigger datasets.. We observe in practice that this allows us to surpass the perfomance of memory augmented networks. on different language modeling tasks.\nDenis Paperno, German Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031. 2016.\nOur model share some similarities with a model proposed byKuhn (1988), called the cache model. A cache model stores a simple representation of the recent past, often in the form of unigrams, and uses them for prediction (Kuhn & De Mori1990). This contextual information is quite cheap to store and can be accessed efficiently. It also does not need any training and can be appplied on top of any model. This makes this model particularly interesting for domain adaptation (Kneser & Steinbiss1993).\nOriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPs, 2015\nPaul J Werbos. Backpropagation through time: what it does and how to do it. 1990.\nOur main contribution is to propose a continuous version of the cache model, called Neural Cache Model, that can be adapted to any neural network language model. We store recent hidden activations and use them as representation for the context. Using simply a dot-product with the current hidden activations, they turn out to be extremely informative for prediction. Our model requires no training and can be used on any pre-trained neural networks. It also scales effortlessly to thousands of memory cells. 
We demonstrate the quality of the Neural Cache models on several language model tasks and the LAMBADA dataset (Paperno et al., 2016).

Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutnik, and Jurgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.

"}, {"section_index": "1", "section_name": "IMPROVING NEURAL LANGUAGE MODELS WITH A CONTINUOUS CACHE", "section_text": "Slava M Katz. Estimation of probabilities from sparse data for the language model component of a speech recognizer. ICASSP, 1987.

"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Reinhard Kneser and Volker Steinbiss. On the dynamic adaptation of stochastic language models. In ICASSP, 1993.

We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural networks and cache models used with count based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks.

Tomas Mikolov, Anoop Deoras, Stefan Kombrink, Lukas Burget, and Jan Cernocky. Empirical evaluation and combination of advanced language modeling techniques. In INTERSPEECH, 2011.

Andreas Stolcke, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Elizabeth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 2000.

Sainbayar Sukhbaatar, Szlam Arthur, Jason Weston, and Rob Fergus. End-to-end memory networks. In NIPS, 2015.

Ronald J Williams and Jing Peng. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 1990.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.

A language model is a probability distribution over sequences of words. Let V be the size of the vocabulary; each word is represented by a one-hot encoding vector x in R^V, corresponding to its index in the vocabulary. Using the chain rule, the probability assigned to a sequence of words x_1, ..., x_T can be factorized as

P(x_1, ..., x_T) = ∏_{t=1}^{T} P(x_t | x_{t-1}, ..., x_1).

Language modeling is often framed as learning the conditional probability over words, given the history (Bahl et al., 1983).

This conditional probability is traditionally approximated with non-parametric models based on counting statistics (Goodman, 2001). In particular, smoothed N-gram models (Katz, 1987; Kneser & Ney, 1995) achieve good performance in practice (Mikolov et al., 2011). Parametrized alternatives are either maximum entropy language models (Rosenfeld, 1996), feedforward networks (Bengio et al., 2003) or recurrent networks (Mikolov et al., 2010). In particular, recurrent networks are currently the best solution to approximate this conditional probability, achieving state-of-the-art performance on standard language modeling benchmarks (Jozefowicz et al., 2016; Zilly et al., 2016).
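As a worked illustration of the chain-rule factorization above, the log-probability of a sequence is just the sum of the per-step conditional log-probabilities. A hedged one-liner (not from the paper):

```python
import numpy as np

def sequence_log_prob(step_probs):
    # chain rule: log P(x_1..x_T) = sum_t log P(x_t | x_{t-1}, ..., x_1)
    return float(np.sum(np.log(np.asarray(step_probs))))
```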
Recurrent networks. Assuming that we have a vector h_t ∈ R^d encoding the history x_t, ..., x_1, the conditional probability of a word w can be parametrized as

p_vocab(w | x_t, ..., x_1) ∝ exp(h_t^T o_w).

The history vector h_t is computed by a recurrent network by recursively applying an equation of the form

h_t = Φ(x_t, h_{t-1}),

where Φ is a function depending on the architecture of the network. Several architectures for recurrent networks have been proposed, such as the Elman network (Elman, 1990), the long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) or the gated recurrent unit (GRU) (Chung et al., 2014). One of the simplest recurrent networks is the Elman network (Elman, 1990), where

h_t = σ(L x_t + R h_{t-1}),

where σ is a non-linearity such as the logistic or tanh functions, L ∈ R^{d×V} is a word embedding matrix and R ∈ R^{d×d} is the recurrent matrix. The LSTM architecture is particularly interesting in the context of language modelling (Jozefowicz et al., 2016) and we refer the reader to Graves et al. (2013) for details on this architecture.

The parameters of recurrent neural network language models are learned by minimizing the negative log-likelihood of the training data. This objective function is usually minimized by using the stochastic gradient descent algorithm, or variants such as Adagrad (Duchi et al., 2011). The gradient is computed using the truncated backpropagation through time algorithm (Werbos, 1990; Williams & Peng, 1990).

Cache model. After a word appears once in a document, it is much more likely to appear again. As an example, the frequency of the word tiger on the Wikipedia page of the same name is 2.8%, compared to 0.0037% over the whole of Wikipedia. Cache models exploit this simple observation to improve n-gram language models by capturing long-range dependencies in documents. More precisely, these models have a cache component, which contains the words that appeared in the recent history (either the document or a fixed number of words). A simple language model, such as a unigram or smoothed bigram model, is fitted on the words of the cache and interpolated with the static language model (trained over a larger dataset). This technique has many advantages. First, this is a very efficient way to adapt a language model to a new domain. Second, such models can predict out-of-vocabulary words (OOV words) after seeing them once. Finally, this helps capture long-range dependencies in documents, in order to generate more coherent text.

Figure 1: The neural cache stores the previous hidden states in memory cells. They are then used as keys to retrieve their corresponding word, that is the next word. There is no transformation applied to the storage during writing and reading.

The Neural Cache Model adds a cache-like memory to neural network language models. It exploits the hidden representations h_t to define a probability distribution over the words in the cache. As illustrated in Figure 1, the cache stores pairs (h_i, x_{i+1}) of a hidden representation and the word which was generated based on this representation (we remind the reader that the vector h_i encodes the history x_i, ..., x_1). At time t, we then define a probability distribution over the words stored in the cache, based on the stored hidden representations and the current one h_t, as

p_cache(w | h_{1..t}, x_{1..t}) ∝ Σ_{i=1}^{t-1} 1{w = x_{i+1}} exp(θ h_t^T h_i),

where the scalar θ is a parameter which controls the flatness of the distribution. When θ is equal to zero, the probability distribution over the history is uniform, and our model is equivalent to a unigram cache model (Kuhn & De Mori, 1990).
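A minimal NumPy sketch of the cache distribution above, assuming the stored hidden states are stacked in a matrix H and next_words holds the corresponding word ids (the function name is ours):

```python
import numpy as np

def cache_probs(h_t, H, next_words, theta, vocab_size):
    # scores proportional to exp(theta * h_t . h_i), one per cache slot
    s = theta * (H @ h_t)
    s = np.exp(s - s.max())
    s /= s.sum()
    # accumulate slot weights into word probabilities: the indicator
    # 1{w = x_{i+1}} in the equation above
    p = np.zeros(vocab_size)
    np.add.at(p, next_words, s)
    return p
```

Note that the whole lookup is a single matrix-vector product followed by a softmax, which is why large caches stay cheap.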
From the point of view of memory-augmented neural networks, the probability p_cache(w | h_{1..t}, x_{1..t}) given by the neural cache model can be interpreted as the probability to retrieve the word w from the memory given the query h_t, where the desired answer is the next word x_{t+1}. Using previous hidden states as keys for the words in the memory, the memory lookup operator can be implemented with simple dot products between the keys and the query. In contrast to existing memory-augmented neural networks, the neural cache model avoids the need to learn the memory lookup operator. Such a cache can thus be added to a pre-trained recurrent neural language model without fine tuning of the parameters, and a large cache size can be used with negligible impact on the computational cost of a prediction.

Neural cache language model. Following the standard practice in n-gram cache-based language models, the final probability of a word is given by the linear interpolation of the cache language model with the regular language model, obtaining:

p(w | h_{1..t}, x_{1..t}) = (1 − λ) p_vocab(w | h_t) + λ p_cache(w | h_{1..t}, x_{1..t}).

Instead of taking a linear interpolation between the two distributions with a fixed λ, we also consider a global normalization over the two distributions:

p(w | h_{1..t}, x_{1..t}) ∝ exp(h_t^T o_w) + Σ_{i=1}^{t-1} 1{w = x_{i+1}} exp(θ h_t^T h_i + α).

This corresponds to taking a softmax over the vocabulary and the words in the cache. The parameter α controls the weight of the cache component, and is the counterpart of the parameter λ for linear interpolation.

The addition of the neural cache to a recurrent neural language model inherits the advantages of n-gram caches in usual cache-based models: the probability distribution over words is updated online depending on the context, and out-of-vocabulary words can be predicted as soon as they have been seen at least once in the recent history. The neural cache also inherits the ability of the hidden states of recurrent neural networks to model longer-term contexts than small n-grams, and thus allows for a finer modeling of the current context than, e.g., unigram caches.

Figure 2: Perplexity on the validation set of Penn Tree Bank for linear interpolation (left) and global normalization (right), for various values of hyperparameters θ, λ and α. We use a cache model of size 500. The base model has a validation perplexity of 86.9. The best linear interpolation has a perplexity of 74.6, while the best global normalization has a perplexity of 74.9.

Model                                             Test PPL
RNN+LSA+KN5+cache (Mikolov & Zweig, 2012)         90.3
LSTM (Zaremba et al., 2014)                       78.4
Variational LSTM (Gal & Ghahramani, 2015)         73.4
Recurrent Highway Network (Zilly et al., 2016)    66.0
Pointer Sentinel LSTM (Merity et al., 2016)       70.9
LSTM (our implem.)                                82.3
Neural cache model                                72.1

Table 1: Test perplexity on the Penn Tree Bank.

Training procedure. For now, we first train the (recurrent) neural network language model, without the cache component. We only apply the cache model at test time, and choose the hyperparameters θ and λ (or α) on the validation set. A big advantage of our method is that it is very easy and cheap to apply with already trained neural models. There is no need to perform backpropagation over large contexts, and we can thus apply our method with large cache sizes (larger than one thousand).
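Since only θ and λ (or α) are tuned, applying the cache at test time reduces to the interpolation below plus a grid search on validation perplexity. A hedged sketch; both helper names are ours:

```python
import numpy as np

def interpolate(p_vocab, p_cache, lam):
    # final prediction: (1 - lam) * p_vocab(w | h_t) + lam * p_cache(w | ...)
    return (1.0 - lam) * p_vocab + lam * p_cache

def perplexity(target_probs):
    # exp of the average negative log-likelihood of the correct words
    return float(np.exp(-np.mean(np.log(np.asarray(target_probs)))))
```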
"}, {"section_index": "3", "section_name": "4 RELATED WORK", "section_text": "Cache model. Adding a cache to a language model was introduced in the context of speech recognition (Kuhn, 1988; Kupiec, 1989; Kuhn & De Mori, 1990). These models were further extended by Jelinek et al. (1991) into a smoothed trigram language model, reporting reductions in both perplexity and word error rates. Della Pietra et al. (1992) adapt the cache to a general n-gram model such that it satisfies marginal constraints obtained from the current document.

Adaptive language models. Other adaptive language models have been proposed in the past: Kneser & Steinbiss (1993) and Iyer & Ostendorf (1999) dynamically adapt the parameters of their model to the recent history using different weight interpolation schemes. Bellegarda (2000) and Coccaro & Jurafsky (1998) use latent semantic analysis to adapt their models to the current context. Similarly, topic features have been used with either maximum entropy models (Khudanpur & Wu, 2000) or recurrent networks (Mikolov & Zweig, 2012; Wang & Cho, 2015). Finally, Lau et al. (1993) propose to use pairs of distant words to capture long-range dependencies.

Memory augmented neural networks. In the context of sequence prediction, several memory-augmented neural networks have obtained promising results (Sukhbaatar et al., 2015; Graves et al., 2014; Grefenstette et al., 2015; Joulin & Mikolov, 2015). In particular, Sukhbaatar et al. (2015) store a representation of the recent past and access it using an attention mechanism (Bahdanau et al., 2014). Sukhbaatar et al. (2015) show that this reduces the perplexity for language modeling.

This approach has been successfully applied to question answering, when the answer is contained in a given paragraph (Chen et al., 2016; Hermann et al., 2015; Kadlec et al., 2016; Sukhbaatar et al., 2015). Similarly, Vinyals et al. (2015) explore the use of this mechanism to reorder sequences of tokens. Their network uses an attention (or "pointer") over the input sequence to predict which element should be selected as the next output. Gulcehre et al. (2016) have shown that a similar mechanism, called pointer softmax, could be used in the context of machine translation, to decide which word to copy from the source to the target.

Independently of our work, Merity et al. (2016) apply the same mechanism to recurrent networks. Unlike our work, they use the current hidden activation as a representation of the current input (while we use it to represent the output). This requires additional learning of a transformation between the current representation and those in the past. The advantage of our approach is that we can scale to very large caches effortlessly.

"}, {"section_index": "4", "section_name": "5 EXPERIMENTS", "section_text": "Datasets. In this section, we describe experiments performed on two small datasets: the Penn Tree Bank (Marcus et al., 1993) and the wikitext2 (Merity et al., 2016) datasets. The Penn Tree Bank dataset is made of articles from the Wall Street Journal, contains 929k training tokens and has a vocabulary size of 10k. The wikitext2 dataset is derived from Wikipedia articles, contains 2M training tokens and has a vocabulary size of 33k.
These datasets contain non-shuffled documents, therefore requiring models to capture inter-sentence dependencies to perform well.

Figure 3: Perplexity on the validation set of wikitext2 for linear interpolation (left) and global normalization (right), for various values of hyperparameters θ, λ and α. We use a cache model of size 2000. The base model has a validation perplexity of 104.2. The best linear interpolation has a perplexity of 72.1, while the best global normalization has a perplexity of 73.5.

Model                                              wikitext2   wikitext103
Zoneout + Variational LSTM (Merity et al., 2016)   100.9       -
Pointer Sentinel LSTM (Merity et al., 2016)        80.8        -
LSTM (our implementation)                          99.3        48.7
Neural cache model (size = 100)                    81.6        44.8
Neural cache model (size = 2,000)                  68.9        40.8

Table 2: Test perplexity on the wikitext datasets. The two datasets share the same validation and test sets, making all the results comparable.

In this section, we evaluate our method on various language modeling datasets, which have different sizes and characteristics. On all datasets, we train a static recurrent neural network language model with LSTM units. We then use the hidden representations from this model to obtain our cache, which is interpolated with the static LSTM model. We also evaluate a unigram cache model interpolated with the static model as another baseline.

Figure 4: Test perplexity as a function of the number of words in the cache, for our method and a unigram cache baseline, on text8 (left) and wikitext103 (right). We observe that our approach can use larger caches than the baseline.

Implementation details. We train recurrent neural network language models with 1024 LSTM units, regularized with dropout (probability of dropping out units equal to 0.65). We use the Adagrad algorithm, with a learning rate of 0.2, a batch size of 20 and initial weights uniformly sampled in the range [-0.05, 0.05]. We clip the norm of the gradient to 0.1 and unroll the network for 30 steps. We consider cache sizes on a logarithmic scale, from 50 to 10,000, and fit the cache hyperparameters on the validation set.

Results. We report the perplexity on the validation sets in Figures 2 and 3, for various values of hyperparameters, for linear interpolation and global normalization. First, we observe that on both datasets, the linear interpolation method performs slightly better than the global normalization approach. It is also easier to apply in practice, and we thus use this method in the remainder of the paper. In Tables 1 and 2, we report the test perplexity of our approach and state-of-the-art models. Our approach is competitive with previous models, in particular with the pointer sentinel LSTM model of Merity et al. (2016). On Penn Tree Bank, we note that the improvement over the base model is similar for both methods. On the wikitext2 dataset, both methods obtain similar results when using the same cache size (100 words).
Since our method is computationally cheap, it is easy to increase the cache to larger values (2,000 words), leading to dramatic improvements (30% over the baseline, 12% over a small cache of 100 words).

"}, {"section_index": "5", "section_name": "5.2 MEDIUM SCALE EXPERIMENTS", "section_text": "Datasets and implementation details. In this section, we describe experiments performed over two medium scale datasets: text8 and wikitext103. Both datasets are derived from Wikipedia, but different pre-processing was applied. The text8 dataset contains 17M training tokens and has a vocabulary size of 44k words, while the wikitext103 dataset has a training set of size 103M, and a vocabulary size of 267k words. We use the same setting as in the previous section, except for the batch size (we use 128) and dropout parameters (we use 0.45 for text8 and 0.25 for wikitext103). Since both datasets have large vocabularies, we use the adaptive softmax (Grave et al., 2016) for faster training.

Results. We report the test perplexity as a function of the cache size in Figure 4, for the neural cache model and a unigram cache baseline. We observe that our approach can exploit larger cache sizes, compared to the baseline. In Table 2, we observe that the improvement in perplexity of our method over the LSTM baseline on wikitext103 is smaller than for wikitext2 (approx. 16% vs. 30%). The fact that improvements obtained with more advanced techniques decrease when the size of the training data increases has already been observed by Goodman (2001). Since both wikitext datasets share the same test set, we also observe that the LSTM baseline, trained on 103M tokens (wikitext103), strongly outperforms more sophisticated methods trained on 2M tokens (wikitext2). For these two reasons, we believe that it is important to evaluate and compare methods on relatively large datasets.

Table 3: Perplexity on the text8 and lambada datasets. WB5 stands for a 5-gram language model with Witten-Bell smoothing.

Figure 5: Perplexity on the development and control sets of lambada, as a function of the interpolation parameter λ.

"}, {"section_index": "6", "section_name": "5.3 EXPERIMENTS ON THE LAMBADA DATASET", "section_text": "Finally, we report experiments carried out on the lambada dataset, introduced by Paperno et al. (2016). This is a dataset of short passages extracted from novels. The goal is to predict the last word of the excerpt. This dataset was built so that human subjects solve the task perfectly when given the full context (approx. 4.6 sentences), but fail to do so when only given the sentence with the target word. Thus, most state-of-the-art language models fail on this dataset. The lambada training set contains approximately 200M tokens and has a vocabulary size of 93,215. We report results for our method in Table 3, as well as the performance of baselines from Paperno et al. (2016). Adding a neural cache model to the LSTM baseline strongly improves the performance on the lambada dataset. We also observe in Figure 5 that the best interpolation parameter between the static model and the cache is not the same for the development and control sets. This is due to the fact that more than 83% of passages of the development set include the target word, while this is true for only 14% of the control set. Ideally, a model should have strong results on both sets.
One possible generalization of our model would be to adapt the interpolation parameter based on the current vector representation of the history h_t.

"}, {"section_index": "7", "section_name": "6 CONCLUSION", "section_text": "We presented the neural cache model to augment neural language models with a longer-term memory that dynamically updates the word probabilities based on the long-term context. A neural cache can be added on top of a pre-trained language model at negligible cost. Our experiments on both language modeling tasks and the challenging LAMBADA dataset show that significant performance gains can be expected by adding this external memory component.

Technically, the neural cache model is similar to some recent memory-augmented neural networks such as pointer networks. However, its specific design makes it possible to avoid learning the memory lookup component. This makes the neural cache appealing since it can use larger cache sizes than memory-augmented networks and can be applied as easily as traditional count-based caches.

"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. JMLR, 2003.

Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 1993.

Danqi Chen, Jason Bolton, and Christopher D Manning. A thorough examination of the cnn/daily mail reading comprehension task. arXiv preprint arXiv:1606.02858, 2016.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Noah Coccaro and Daniel Jurafsky. Towards better integration of semantic predictors in statistical language modeling. In ICSLP. Citeseer, 1998.

Jeffrey L Elman. Finding structure in time. Cognitive Science, 1990.

Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprint arXiv:1512.05287, 2015.

Joshua T Goodman. A bit of progress in language modeling. Computer Speech & Language, 2001.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Rukmini M Iyer and Mari Ostendorf. Modeling long distance dependence in language: Topic mixtures versus dynamic cache models. IEEE Transactions on Speech and Audio Processing, 1999.

Frederick Jelinek, Bernard Merialdo, Salim Roukos, and Martin Strauss. A dynamic language model for speech recognition. In HLT, 1991.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pp. 190-198, 2015.

Lalit R Bahl, Frederick Jelinek, and Robert L Mercer. A maximum likelihood approach to continuous speech recognition. PAMI, 1983.

Jerome R Bellegarda. Exploiting latent semantic information in statistical language modeling. Proceedings of the IEEE, 2000.

Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, and Herve Jegou. Efficient softmax approximation for gpus.
arXiv preprint arXiv:1609.04309, 2016.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828-1836, 2015.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015."}]
Byk-VI9eg
[{"section_index": "0", "section_name": "BIBLIOGRAPHY", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016\nIshan Durugkar*, Ian Gemp*, Sridhar Mahadevan\nCollege of Information and Computer Sciences University of Massachusetts, Amherst 15\nHana Ajakan, Pascal Germain, Hugo Larochelle, Francois Laviolette, and Mario Marchand Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446, 2014\n{idurugkar, imgemp, mahadeva}@cs.umass.edu\nGenerative adversarial networks (GANs) are a framework for producing a gen erative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network (GMAN). a framework that extend. GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on In contrast, GMAN can be reliably trained with the original, untampered objec tive. We explore a number of design perspectives with the discriminator role rang ing from formidable adversary to forgiving teacher. Image generation tasks com paring the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.\nJeff Donahue, Philipp Krahenbuhl, and Trevor Darrell. Adversarial feature learning. arXiv preprin arXiv:1605.09782, 2016."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Ian Goodfellow. Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor mation Processing Systems, pp. 2672-2680, 2014.\nThe GAN framework is one of the more recent successes in a line of research on adversarial train ing in machine learning (Schmidhuber (1992); Bagnell (2005); Ajakan et al. (2014)) where games. between learners are carefully crafted so that Nash equilibria coincide with some set of desired op-. timality criteria. Preliminary work on GANs focused on generating images (e.g., MNIST (LeCun. et al. (1998)), CIFAR (Krizhevsky (2009))), however, GANs have proven useful in a variety of appli-. cation domains including learning censored representations (Edwards & Storkey (2015)), imitating. expert policies (Ho & Ermon (2016)), and domain transfer (Yoo et al. (2016)). Work extending. GANs to semi-supervised learning (Chen et al. (2016); Mirza & Osindero (2014); Gauthier (2014) Springenberg (2015)), inference (Makhzani et al. (2015); Dumoulin et al. (2016)), feature learning. (Donahue et al. (2016)), and improved image generation (Im et al. (2016); Denton et al. (2015); Radford et al. (2015)) have shown promise as well.\nJonathan Ho and Stefano Ermon. Generative adversarial imitation learning. . arXiv preprint arXiv:1606.03476, 2016\nDaniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating image. with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015\nDespite these successes, GANs are reputably difficult to train. While research is still underway to. improve training techniques and heuristics (Salimans et al. 
(2016)), most approaches have focused on understanding and generalizing GANs theoretically with the aim of exploring more tractable formulations (Zhao et al. (2016); Li et al. (2015); Uehara et al. (2016); Nowozin et al. (2016)).

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Master's Thesis, 2009.

Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits, 1998.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.

"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Generative adversarial networks (Goodfellow et al. (2014)) (GANs) are a framework for producing a generative model by way of a two-player minimax game. One player, the generator, attempts to generate realistic data samples by transforming noisy samples, z, drawn from a simple distribution (e.g., z ~ N(0, 1)) using a transformation function G_θ(z) with learned weights, θ. The generator receives feedback as to how realistic its synthetic sample is from another player, the discriminator, which attempts to discern between synthetic data samples produced by the generator and samples drawn from an actual dataset using a function D_ω(x) with learned weights, ω.

In this paper, we theoretically and empirically justify generalizing the GAN framework to multiple discriminators. We review GANs and summarize our extension in Section 2. In Sections 3 and 4, we present our N-discriminator extension to the GAN framework (Generative Multi-Adversarial Networks) with several variants which range the role of the discriminator from formidable adversary to forgiving teacher. Section 4.2 explains how this extension makes training with the untampered minimax objective tractable. In Section 5, we define an intuitive metric (GMAM) to quantify GMAN performance and evaluate our framework on a variety of image generation tasks. Section 6 concludes with a summary of our contributions and directions for future research.

Contributions. To summarize, our main contributions are: i) a multi-discriminator GAN framework, GMAN, that allows training with the original, untampered minimax objective; ii) a generative multi-adversarial metric (GMAM) to perform pairwise evaluation of separately trained frameworks; iii) a particular instance of GMAN, GMAN*, that allows the generator to automatically regulate training and reach higher performance (as measured by GMAM) in a fraction of the training time required for the standard GAN model.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Siamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, and Barnabas Poczos. Enabling dark energy science with deep generative models of galaxy images. arXiv preprint arXiv:1609.05796, 2016.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.

Jurgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992.

The original formulation of a GAN is a minimax game between a generator, G_θ(z): z → x, and a discriminator, D_ω(x): x → [0, 1],

min_G max_{D∈𝒟} V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 − D(G(z)))],     (1)

where p_data(x) is the true data distribution and p_z(z) is a simple (usually fixed) distribution that is easy to draw samples from (e.g., N(0, 1)). We differentiate between the function space of discriminators, 𝒟, and elements of this space, D. Let p_G(x) be the distribution induced by the generator, G_θ(z). We assume D, G to be deep neural networks as is typically the case.
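For reference, a Monte Carlo estimate of V(D, G) in Eq. (1) can be computed from discriminator outputs on a real batch and a generated batch. A hedged NumPy sketch, not the authors' code:

```python
import numpy as np

def gan_value(d_real, d_fake):
    # V(D, G) ~= mean log D(x) + mean log(1 - D(G(z))), outputs in (0, 1)
    d_real, d_fake = np.asarray(d_real), np.asarray(d_fake)
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))
```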
In their original work, Goodfellow et al. (2014) proved that given sufficient network capacities and an oracle providing the optimal discriminator, D* = arg max_D V(D, G), gradient descent on p_G(x) will recover the desired globally optimal solution, p_G(x) = p_data(x), so that the generator distribution exactly matches the data distribution. In practice, they replaced the second term, log(1 − D(G(z))), with −log(D(G(z))) to enhance gradient signals at the start of the game; note this is no longer a zero-sum game. Part of their convergence and optimality proof involves using the oracle, D*, to reduce the minimax game to a minimization over G only:

min_G V(D*, G) = min_G { C(G) = −log(4) + 2 · JSD(p_data ∥ p_G) },     (2)

Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844v3, 2016.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.

where JSD denotes the Jensen-Shannon divergence. Minimizing C(G) necessarily minimizes JSD; however, we rarely know D* and so we instead minimize V(D, G), which is only a lower bound.

This perspective of minimizing the distance between the distributions, p_data and p_G, motivated Li et al. (2015) to develop a generative model that matches all moments of p_G(x) with p_data(x) (at optimality) by minimizing maximum mean discrepancy (MMD). Another approach, EBGAN (Zhao et al. (2016)), explores a larger class of games (non-zero-sum games) which generalize the generator and discriminator objectives to take real-valued "energies" as input instead of probabilities. Nowozin et al. (2016) and then Uehara et al. (2016) extended the JSD perspective on GANs to more general divergences, specifically f-divergences and then Bregman-divergences respectively.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.

Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.

In general, these approaches focus on exploring fundamental reformulations of V(D, G).
Similarly, our work focuses on a fundamental reformulation; however, our aim is to provide a framework that accelerates training of the generator to a more robust state irrespective of the choice of V.

We propose introducing multiple discriminators, which brings with it a number of design possibilities. We explore approaches ranging between two extremes: 1) a more discriminating D (better approximating max_D V(D, G)) and 2) a D better matched to the generator's capabilities. Mathematically, we reformulate G's objective as min_G max F(V(D_1, G), ..., V(D_N, G)) for different choices of F (see Figure 1). Each D_i is still expected to independently maximize its own V(D_i, G) (i.e. no cooperation). We sometimes abbreviate V(D_i, G) with V_i and F(V_1, ..., V_N) with F_G(V_i).

Figure 1: (GMAN) The generator trains using feedback aggregated over multiple discriminators. If F := max, G trains against the best discriminator. If F := mean, G trains against an ensemble. We explore other alternatives to F in Sections 4.1 & 4.4 that improve on both these options.

"}, {"section_index": "4", "section_name": "3.1 MAXIMIZING V(D,G)", "section_text": "Here, we consider multi-discriminator variants that attempt to better approximate max_D V(D, G), providing a harsher critic to the generator.

For a fixed G, maximizing F_G(V_i) with F := max and N randomly instantiated copies of our discriminator is functionally equivalent to optimizing V (e.g., stochastic gradient ascent) with random restarts in parallel and then presenting max_{i∈{1,...,N}} V(D_i, G) as the loss to the generator, a very pragmatic approach to the difficulties presented by the non-convexity of V caused by the deep net. Requiring the generator to minimize the max forces G to generate high fidelity samples that must hold up under the scrutiny of all N discriminators, each potentially representing a distinct local maximum.

In practice, max_{D_i∈𝒟} V(D_i, G) is not performed to convergence (or global optimality), so the above problem is oversimplified. Furthermore, introducing N discriminators affects the dynamics of the game, which affects the trajectories of the discriminators. This prevents us from claiming max{V_1(t), ..., V_N(t)} > max{V_1'(t)} for all t even if we initialize D_1(0) = D_1'(0), as it is unlikely that D_1(t) = D_1'(t) at some time t after the start of the game.

"}, {"section_index": "5", "section_name": "3.2 BOOSTING", "section_text": "We can also consider taking the max over N discriminators as a form of boosting for the discriminator's online classification problem (online because G can produce an infinite data stream). The boosted discriminator is given a sample x_t and must predict whether it came from the generator or the dataset. The booster then makes its prediction using the predictions of the N weaker D_i.

There are a few differences between taking the max (case 1) and online boosting (case 2). In case 1, our booster is limited to selecting a single weak discriminator (i.e. a pure strategy), while in case 2, many boosting algorithms more generally use linear combinations of the discriminators. Moreover, in case 2, a booster must make a prediction before receiving a loss function. In case 1, we assume access to the loss function at prediction time, which allows us to compute the max.

It is possible to train the weak discriminators using boosting and then ignore the booster's prediction by instead presenting max{V_i}. We explore both variants in our experiments, using the adaptive algorithm proposed in Beygelzimer et al. (2015). Unfortunately, boosting failed to produce promising results on the image generation tasks. It is possible that boosting produces too strong an adversary for learning, which motivates the next section. Boosting results appear in Appendix A.7.
The previous perspectives focus on improving the discriminator with the goal of presenting a better approximation of max_D V(D, G) to the generator. Our next perspective asks the question, "Is max_D V(D, G) too harsh a critic?"

"}, {"section_index": "7", "section_name": "4.1 Soft-DISCRIMINATOR", "section_text": "In practice, training against a far superior discriminator can impede the generator's learning. This is because the generator is unlikely to generate any samples considered "realistic" by the discriminator's standards, and so the generator will receive uniformly negative feedback. This is problematic because the information contained in the gradient derived from negative feedback only dictates where to drive down p_G(x), not specifically where to increase p_G(x). Furthermore, driving down p_G(x) necessarily increases p_G(x) in other regions of X (to maintain ∫_X p_G(x) = 1), which may or may not contain samples from the true dataset (whack-a-mole dilemma). In contrast, a generator is more likely to see positive feedback against a more lenient discriminator, which may better guide the generator towards amassing p_G(x) in approximately correct regions of X.

For this reason, we explore a variety of functions that allow us to soften the max operator. We choose to focus on soft versions of the three classical Pythagorean means parameterized by λ, where λ = 0 corresponds to the mean and the max is recovered as λ → ∞:

AM_soft(V, λ) = Σ_{i=1}^N w_i V_i     (3)
GM_soft(V, λ) = −exp( Σ_{i=1}^N w_i log(−V_i) )     (4)
HM_soft(V, λ) = −( Σ_{i=1}^N w_i (−V_i)^{−1} )^{−1}     (5)

where w_i = e^{λV_i} / Σ_j e^{λV_j} with λ ≥ 0, V_i < 0. Using a softmax also has the well known advantage of being differentiable (as opposed to subdifferentiable for max). Note that we only require continuity to guarantee that computing the softmax is actually equivalent to computing V(D̂, G) where D̂ is some convex combination of the D_i (see Appendix A.5).

"}, {"section_index": "8", "section_name": "4.2 USING THE ORIGINAL MINIMAX OBJECTIVE", "section_text": "To illustrate the effect the softmax has on training, observe that the component of AM_soft(V, 0) relevant to generator training can be rewritten as

(1/N) Σ_{i=1}^N log(1 − D_i(G(z))) = (1/N) log ẑ,

where ẑ = Π_{i=1}^N (1 − D_i(G(z))). Note that the magnitude of the generator gradient with respect to ẑ, 1/(Nẑ), is minimized at ẑ = 1 over ẑ ∈ (0, 1]. From this form, it is clear that ẑ = 1 if and only if D_i = 0 for all i, so G only receives a vanishing gradient if all D_i agree that the sample is fake; this is especially unlikely for large N. In other words, G only needs to fool a single D_i to receive constructive feedback. This result allows the generator to successfully minimize the original generator objective, log(1 − D). This is in contrast to the more popular −log(D) introduced to artificially enhance gradients at the start of training.

At the beginning of training, when max_{D_i} V(D_i, G) is likely too harsh a critic for the generator, we can set λ closer to zero to use the mean, increasing the odds of providing constructive feedback to the generator. In addition, the discriminators have the added benefit of functioning as an ensemble, reducing the variance of the feedback presented to the generator, which is especially important when the discriminators are far from optimal and are still learning a reasonable decision boundary. As training progresses and the discriminators improve, we can increase λ to become more critical of the generator for more refined training.
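To make Eqs. (3)-(5) concrete, here is a small NumPy sketch of the three softened means (λ = 0 gives the plain means; large λ approaches the max). The function name is ours, and this is an illustrative sketch rather than the authors' implementation:

```python
import numpy as np

def soft_means(V, lam):
    # V: array of discriminator values V_i < 0;
    # w_i = exp(lam * V_i) / sum_j exp(lam * V_j)
    V = np.asarray(V, dtype=float)
    w = np.exp(lam * V - (lam * V).max())
    w /= w.sum()
    am = float(np.sum(w * V))                    # Eq. (3)
    gm = float(-np.exp(np.sum(w * np.log(-V))))  # Eq. (4)
    hm = float(-1.0 / np.sum(w / (-V)))          # Eq. (5)
    return am, gm, hm
```

Raising lam during training then implements the annealing schedule described above, shifting the generator's critic from a lenient ensemble average toward the harshest discriminator.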
"}, {"section_index": "3", "section_name": "A APPENDIX", "section_text": "See Figures 10, 11, 12, and 13.

Figure 10: Generator objective, F, averaged over 5 training runs on CelebA. Increasing N (# of D) accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 11 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 11: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady state. GMAN-0 with N = 5 achieves steady state at ~2x the speed of GAN (N = 1). Note Figure 10's filled shadows reveal the stdev of F over runs, while this plot shows the stdev over time.

Figure 12: Generator objective, F, averaged over 5 training runs on CIFAR-10. Increasing N (# of D) accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 13 provides alternative evidence of GMAN-0's accelerated convergence.

Figure 13: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady state. GMAN-0 with N = 5 achieves steady state at ~2x the speed of GAN (N = 1). Note Figure 12's filled shadows reveal the stdev of F over runs, while this plot shows the stdev over time.

See Figures 14 and 15.

"}, {"section_index": "6", "section_name": "A.2 ADDITIONAL GMAM TABLES", "section_text": "See Tables 2, 3, 4, 5, 6. Increasing the number of discriminators from 2 to 5 on CIFAR-10 significantly improves scores over the standard GAN both in terms of the GMAM metric and Inception Scores.

Score    Variant   GMAN-0  GMAN-1  GMAN*   mod-GAN
 0.172   GMAN-0    -       0.022   0.062   0.088
 0.050   GMAN-1    0.022   -       0.006   0.078
 0.055   GMAN*     0.062   0.006   -       0.001
-0.167   mod-GAN   0.088   0.078   0.001   -

Table 2: Pairwise GMAM metric means for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column.

Score    Variant   GMAN-0  GMAN-1  GMAN*   mod-GAN
 0.172   GMAN-0    -       0.022   0.062   0.088
 0.050   GMAN-1    0.022   -       0.006   0.078
 0.055   GMAN*     0.062   0.006   -       0.001
 0.167   mod-GAN   0.088   0.078   0.001   -

Table 3: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with two discriminators.

         GMAN-0         GMAN-1         mod-GAN        GMAN*
Score    5.878 ± 0.193  5.765 ± 0.168  5.738 ± 0.176  5.539 ± 0.099

Table 4: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with two discriminators.

Score    Variant   GMAN-0  GMAN*   GMAN-1  mod-GAN
 0.180   GMAN-0    -       0.008   0.041   0.132
 0.122   GMAN*     0.008   -       0.038   0.092
 0.010   GMAN-1    0.041   0.038   -       0.089
-0.313   mod-GAN   0.132   0.092   0.089   -

Table 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with five discriminators.

         GMAN-1         GMAN-0         GMAN*          mod-GAN
Score    6.001 ± 0.194  5.957 ± 0.135  5.955 ± 0.153  5.738 ± 0.176

Table 6: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with five discriminators.
Here, we draw on the density ratio estimation perspective of GANs (Uehara et al. (2016)). The original GAN proof assumes we have access to pdata(x), if only implicitly. In most cases of interest, the discriminator only has access to a finite dataset sampled from pdata(x); therefore, when computing expectations of V(D, G), we only draw samples from our finite dataset. This is equivalent to training a GAN with pdata(x) = Pdata which is a distribution consisting of point masses on all the data points in the dataset. For the sake of argument, let's assume we are training a discriminator and generator, each\nFigure 14: Sample of pictures generated on CelebA cropped dataset\nN AMsoft(V,X)=wiVi N GMsoft(V, X) = - exp W; I N HMsoft(V,A) = Wi\nTable 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with five discriminators.\nwith infinite capacity. In this case, the global optimum (pG(x) = Pdata(x)) fails to capture any of the interesting structure from pdata(x), the true distribution we are trying to learn. Therefore, it is actually critical that we avoid this global optimum\nGenerated Images Real Images\nO . p(x)\nFigure 2: Consider a dataset consisting of the nine 1-dimensional samples in black. Their corre- sponding probability mass function is given in light gray. After training GMAN, three discrimina- tors converge to distinct local optima which implicitly define distributions over the data (red, blue yellow). Each discriminator may specialize in discriminating a region of the data space (placing more diffuse mass in other regions). Averaging over the three discriminators results in the distribu- tion in black, which we expect has higher likelihood under reasonable assumptions on the structure of the true distribution.\nIn practice, this degenerate result is avoided by employing learners with limited capacity and corrupt ing data samples with noise (i.e., dropout), but we might better accomplish this by simultaneously training a variety of limited capacity discriminators. With this approach, we might obtain a diverse set of seemingly tenable hypotheses for the true pdata(x). Averaging over these multiple locally. optimal discriminators increases the entropy of pdata(x) by diffusing the probability mass over the. data space (see Figure 2 for an example)..\nFigure 15: Sample of pictures. enerated by GMAN-O on CIFAR dataset."}, {"section_index": "10", "section_name": "4.4 AUTOMATING REGULATION", "section_text": "The problem of keeping the discriminator and generator in balance has been widely recognized in. previous work with GANs. Issues with unstable dynamics, oscillatory behavior, and generator col-. lapse are not uncommon. In addition, the discriminator is often times able to achieve a high degree of classification accuracy (producing a single scalar) before the generator has made sufficient progress on the arguably more difficult generative task (producing a high dimensional sample). Salimans. et al. (2016) suggested label smoothing to reduce the vulnerability of the generator to a relatively. superior discriminator. Here, we explore an approach that enables the generator to automatically. temper the performance of the discriminator when necessary, but still encourages the generator to. challenge itself against more accurate adversaries. Specifically, we augment the generator objective:. 
A GAN framework with two discriminators appeared in Yoo et al. (2016); however, it is applicable only in a semi-supervised case where a label can be assigned to subsets of the dataset (e.g., X = {X_1 = Domain 1, X_2 = Domain 2, ...}). In contrast, our framework applies to an unsupervised scenario where an obvious partition of the dataset is unknown. Furthermore, extending GMAN to the semi-supervised domain-adaptation scenario would suggest multiple discriminators per domain, therefore our line of research is strictly orthogonal to that of their multi-domain discriminator approach. Also, note that assigning a discriminator to each domain is akin to prescribing a new discriminator to each value of a conditional variable in conditional GANs (Mirza & Osindero (2014)). In this case, we interpret GMAN as introducing multiple conditional discriminators and not a discriminator for each of the possibly exponentially many conditional labels.

In Section 4.4, we describe an approach to customize adversarial training to better suit the development of the generator. An approach with similar conceptual underpinnings was described in Ravanbakhsh et al. (2016); however, similar to the above, it is only admissible in a semi-supervised scenario whereas ours applies to the unsupervised case.

"}, {"section_index": "11", "section_name": "A.5 Softmax REPRESENTABILITY", "section_text": "Let softmax(V_i) = V̂ ∈ [min_i V_i, max_i V_i]. Also let a = arg min_i V_i, b = arg max_i V_i, and V(t) = V((1 − t)D_a + tD_b) so that V(0) = V_a and V(1) = V_b. The softmax and minimax objective V(D, G) are both continuous in their inputs, so by the intermediate value theorem, there exists t̂ ∈ [0, 1] s.t. V(t̂) = V̂, which implies there exists D̂ ∈ 𝒟 s.t. V(D̂, G) = V̂. This result implies that the softmax (and any other continuous substitute) can be interpreted as returning V(D̂, G) for some D̂ selected by computing another, unknown function over the space of the discriminators. This result holds even if D̂ is not representable by the architecture chosen for D's neural network.

"}, {"section_index": "12", "section_name": "A.6 UNCONSTRAINED OPTIMIZATION", "section_text": "To convert GMAN*'s minimax formulation to an unconstrained minimax formulation, we introduce an auxiliary variable, Λ, define λ(Λ) = log(1 + e^Λ), and let the generator minimize over Λ ∈ ℝ.

Evaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) report log likelihood estimates from Gaussian Parzen windows, which, they admit, have high variance and are known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzen windows and argue that generative models should be evaluated with respect to their intended application. Salimans et al. (2016) suggest an Inception score; however, it assumes labels exist for the dataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for making pairwise comparisons between independently trained GAN models. The core idea behind their approach is that given two generator, discriminator pairs (G_1, D_1) and (G_2, D_2), we should be able to learn their relative performance by judging each generator under the opponent's discriminator.
Evaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) report log-likelihood estimates from Gaussian Parzen windows, which, they admit, have high variance and are known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzen windows and argue that generative models should be evaluated with respect to their intended application. Salimans et al. (2016) suggest an Inception score; however, it assumes labels exist for the dataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for making pairwise comparisons between independently trained GAN models. The core idea behind their approach is that given two generator-discriminator pairs (G_1, D_1) and (G_2, D_2), we should be able to learn their relative performance by judging each generator under the opponent's discriminator.

In GMAN, the opponent may have multiple discriminators, which makes it unclear how to perform the swaps needed for GAM. We introduce a variant of GAM, the generative multi-adversarial metric (GMAM), that is amenable to training with multiple discriminators:

ξ_GMAM = log( F^b_{G_a}(V^b) / F^b_{G_b}(V^b) ) − log( F^a_{G_b}(V^a) / F^a_{G_a}(V^a) )

where a and b refer to the two GMAN variants (see Section 3 for the notation F_G(V)), and F^x_{G_y}(V^x) denotes variant x's aggregate objective over its own discriminators, evaluated with generator G_y swapped in. The idea here is similar: if variant b's generator performs better than variant a's with respect to both sets of discriminators, then GMAM > 0 (remember V < 0 always). If variant a's generator performs better in both cases, GMAM < 0; otherwise, the result is indeterminate.
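As a sanity check on the sign conventions, a toy computation of the pairwise metric; the argument names are ours, and since the equation above is itself reconstructed, treat this as illustrative only:

```python
import math

def gmam(F_a_of_Ga, F_a_of_Gb, F_b_of_Ga, F_b_of_Gb):
    # F_x_of_Gy: variant x's aggregate objective F(V) with generator
    # G_y swapped in; every value is negative by construction.
    return (math.log(F_b_of_Ga / F_b_of_Gb)
            - math.log(F_a_of_Gb / F_a_of_Ga))

# G_b fools both variants' discriminators better than G_a does
# (its objectives are closer to zero), so the metric is positive.
print(gmam(F_a_of_Ga=-1.0, F_a_of_Gb=-0.5, F_b_of_Ga=-1.5, F_b_of_Gb=-0.8))
```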
We evaluate the aforementioned variations of GMAN on a variety of image generation tasks: MNIST (LeCun et al. (1998)), CIFAR-10 (Krizhevsky (2009)) and CelebA (Liu et al. (2015)). We focus on rates of convergence to steady state along with the quality of the steady-state generator according to the GMAM metric. To summarize, loosely in order of increasing discriminator leniency, we compare:

- F-boost: A single AdaBoost.OL-boosted discriminator (see Appendix A.7).
- P-boost: D_i is trained according to AdaBoost.OL. A max over the weak-learner losses is presented to the generator instead of the boosted prediction (see Appendix A.7).
- GMAN-max: max{V_i} is presented to the generator.
- GAN: Standard GAN with a single discriminator (see Appendix A.2).
- mod-GAN: GAN with the modified objective (the generator minimizes −log(D(G(z)))).
- GMAN-λ: GMAN with F := arithmetic softmax with parameter λ.
- GMAN*: The arithmetic softmax is controlled by the generator through λ.

AdaBoost.OL (Beygelzimer et al. (2015)) does not require knowledge of the weak learner's slight edge over random guessing (P(correct label) = 0.5 + γ, γ ∈ (0, 0.5]), and in fact allows γ < 0. This is crucial because our weak learners are deep nets with unknown, possibly negative, γ's.

All generator and discriminator models are deep (de)convolutional networks (Radford et al. (2015)), and aside from the boosted variants, all are trained with Adam (Kingma & Ba (2014)) and batch normalization (Ioffe & Szegedy (2015)). Discriminators convert the real-valued outputs of their networks to probabilities with squashed sigmoids to prevent saturating logarithms in the minimax objective. We train with N = {2, 5} discriminators and maintain discriminator diversity by varying dropout and network depth.

Figure 3 reveals that increasing the number of discriminators reduces the number of iterations to steady state by 2x on MNIST; increasing N (the size of the discriminator ensemble) also has the added benefit of reducing the variance of the minimax objective over runs. Figure 4 displays the variance of the same objective over a sliding time window, reaffirming GMAN's acceleration to steady state. Figure 5 corroborates this conclusion with recognizable digits appearing approximately an epoch before the single-discriminator run; digits at steady state appear slightly sharper as well.

Figure 16: Sample of pictures generated across 4 independent runs on MNIST with F-boost (similar results with P-boost).

"}, {"section_index": "13", "section_name": "A.8 EXPERIMENTAL SETUP", "section_text": "All experiments were conducted using an architecture similar to DCGAN (Radford et al. (2015)). We use convolution transpose layers (Zeiler et al. (2010)) for G and strided convolutions for D, except for the input of G and the last layer of D. We use the single-step gradient method as in Nowozin et al. (2016), and batch normalization (Ioffe & Szegedy (2015)) was used in each of the generator layers. The different discriminators were trained with varying dropout rates from [0.3, 0.7]. Variations in the discriminators were effected in two ways: we varied the architecture by varying the number of filters in the discriminator layers (reduced by factors of 2, 4 and so on), as well as varying dropout rates; secondly, we also decorrelated the samples that the discriminators were trained on by splitting the minibatch across the discriminators. The code was written in TensorFlow (Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs. Code to reproduce experiments and plots is at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are:

- Generator latent variables z ~ U(−1, 1)^100
- Generator convolution transpose layers: (4, 4, 128), (8, 8, 64), (16, 16, 32), (32, 32, 1)
- Base discriminator architecture: (32, 32, 1), (16, 16, 32), (8, 8, 64), (4, 4, 128). Variants have either the (4, 4, 128) convolution removed or all filter sizes divided by 2 or 4, i.e., (32, 32, 1), (16, 16, 16), (8, 8, 32), (4, 4, 64) or (32, 32, 1), (16, 16, 8), (8, 8, 16), (4, 4, 32).
- ReLU activations for all hidden units; tanh activation at the output units of the generator; sigmoid at the output of the discriminator.
- Training was performed with Adam (Kingma & Ba (2014)) (lr = 2 × 10⁻⁴, β₁ = 0.5).
- MNIST was trained for 20 epochs with a minibatch of size 100.
- CelebA and CIFAR were trained over 24000 iterations with a minibatch of size 100.

"}, {"section_index": "14", "section_name": "5.2.1 MNIST", "section_text": "Our GMAM metric (see Table 1) agrees with the relative quality of images in Figure 5, with GMAN achieving the best overall performance. Figure 6 reveals GMAN*'s attempt to regulate the difficulty of the game to accelerate learning. Figure 7 displays the GMAM scores comparing fixed λ's to the variable λ controlled by GMAN*.

Table 1: Pairwise GMAM metric means with stdev for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each variant's column.

Score  | Variant   | GMAN*          | GMAN-0         | GMAN-max       | mod-GAN
0.127  | GMAN*     | -              | −0.020 ± 0.009 | −0.028 ± 0.019 | −0.089 ± 0.036
0.007  | GMAN-0    | 0.020 ± 0.009  | -              | −0.013 ± 0.015 | −0.018 ± 0.027
−0.034 | GMAN-max  | 0.028 ± 0.019  | 0.013 ± 0.015  | -              | −0.011 ± 0.024
−0.122 | mod-GAN   | 0.089 ± 0.036  | 0.018 ± 0.027  | 0.011 ± 0.024  | -

Figure 3: Generator objective, F, averaged over 5 training runs on MNIST (curves for N = 1 original, N = 1 modified, N = 2, and N = 5). Increasing the number of discriminators accelerates convergence of F to steady state (solid line) and reduces its variance, σ² (filled shadow ±1σ). Figure 4 provides alternative evidence of GMAN*'s accelerated convergence.

Figure 5: Comparison of images generated across epochs for N = {1, 2, 5} using GMAN-0 on MNIST.

Figure 4: Stdev, σ, of the generator objective over a sliding window of 500 iterations. Lower values indicate a more steady state. GMAN* with N = 5 achieves steady state at ~2x the speed of GAN (N = 1). Note that Figure 3's filled shadows reveal the stdev of F over runs, while this plot shows the stdev over time.
Figure 6: GMAN* regulates the difficulty of the game by adjusting λ. Initially, G reduces λ to ease learning and then gradually increases λ for a more challenging learning environment.

Figure 7: Pairwise GMAM / stdev(GMAM) for GMAN-λ and GMAN* (λ*) over 5 runs on MNIST (N = 5).

Score  | Variant | λ*             | λ = 1          | λ = 0
0.028  | λ*      | -              | −0.008 ± 0.009 | −0.019 ± 0.010
0.001  | λ = 1   | 0.008 ± 0.009  | -              | −0.008 ± 0.010
−0.025 | λ = 0   | 0.019 ± 0.010  | 0.008 ± 0.010  | -

We see similar accelerated convergence behavior for the CelebA dataset in Figure 8.

Figure 8: Image quality improvement across the number of discriminators (1, 2, and 3) at the same number of iterations (100 to 9000) for GMAN-0 on CelebA.

Figure 9 displays images generated by GMAN-0 on CIFAR-10. See Appendix A.3 for more results.

Figure 9: Images generated by GMAN-0 on the CIFAR-10 dataset.

We also found that GMAN is robust to mode collapse. We believe this is because the generator must appease a diverse set of discriminators in each minibatch. Emitting a single sample will score well for one discriminator at the expense of the rest of the discriminators. Current solutions (e.g., minibatch discrimination) are quadratic in batch size; GMAN, however, is linear in batch size.

We introduced multiple discriminators into the GAN framework and explored discriminator roles ranging from a formidable adversary to a forgiving teacher. Allowing the generator to automatically tune its learning schedule (GMAN*) outperformed GANs with a single discriminator on MNIST. In general, GMAN variants achieved faster convergence to a higher-quality steady state on a variety of tasks as measured by a GAM-type metric (GMAM). In addition, GMAN makes using the original GAN objective possible by increasing the odds of the generator receiving constructive feedback.

In future work, we will look at more sophisticated mechanisms for letting the generator control the game as well as other ways to ensure diversity among the discriminators. Introducing multiple generators is conceptually an obvious next step; however, we expect difficulties to arise from more complex game dynamics. For this reason, game theory and game design will likely be important.

"}, {"section_index": "15", "section_name": "ACKNOWLEDGMENTS", "section_text": "We acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel, Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K40 GPU. This material is based upon work supported by the National Science Foundation under Grant Nos. IIS-1564032. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF."}]
B1IzH7cxl
[{"section_index": "0", "section_name": "A NEURAL STOCHASTIC VOLATILITY MODEI", "section_text": "3 data + X 0 -1 2 3 0 500 1000 1500 2000 timestep 0.5 0.4 y 0.3 ground truth variance 0.1 garch's prediction. nsvm's prediction 0.0 0 500 1000 1500 2000 timestep\nRui Luo', Xiaojun Xu+, Weinan Zhang', Jun Wang\ndata +0 u -3 0 500 1000 1500 2000 timestep 0.5 0.4 ground truth variance 0.1 garch's prediction. nsvm's prediction. 0.0 0 500 1000 1500 2000 timestep\nIn this paper, we show that the recent integration of statistical models with re. current neural networks provides a new way of formulating volatility models that. have been popular in time series analysis and prediction. The model comprises a. pair of complementary stochastic recurrent neural networks: the generative net-. work models the joint distribution of the stochastic volatility process; the inference. network approximates the conditional distribution of the latent variables given the. observable ones. Our focus in this paper is on the formulation of temporal dynam-. ics of volatility over time under a stochastic recurrent neural network framework Our derivations show that some popular volatility models are a special case of. our proposed neural stochastic volatility model. Experiments demonstrate that. the proposed model generates a smoother volatility estimation, and outperforms. standard econometric models GARCH, EGARCH, GJR-GARCH and some other GARCH variants as well as MCMC-based model stochvol and a recent Gaussian. processes based volatility model GPvOL on several metrics about the fitness of. the volatility modelling and the accuracy of the prediction.\n(a) Synthetic time series prediction. (up) The data and the predicted \" and bounds \" *. (down) Th groundtruth data variance and the corresponding prediction from GARCH(1,1) and NSVM.\n(a) Synthetic time series prediction II. (up) The data and the predicted \" and bounds * o*. (down) Th groundtruth data variance and the corresponding prediction from GARCH(1,1) and NSVM.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "2 1 0 X data +o 500 1000 1500 2000 timestep 0.5 0.4 g 0.3 ranne 0.2 ground truth variance 0.1 garch's prediction nsvm's prediction 0.0 500 1000 1500 2000 timestep\nThe volatility of the price movements reflects the ubiquitous uncertainty within financial markets. It is critical that the level of risk, indicated by volatility, is taken into consideration before investment decisions are made and portfolio are optimised (Hull, 2006); volatility is substantially a key variable in the pricing of derivative securities. Hence, estimating and forecasting volatility is of great im portance in branches of financial studies, including investment, risk management, security valuation and monetary policy making (Poon & Granger, 2003).\n(b) Real-world stock price prediction. (up) The data and the predicted ~ and bounds ~ ~. (down) The variance prediction from GARCH(1,1) and NSVM. The prediction of NSVM is more smooth and stable than that of GARCH(1,1), also yielding smaller NLL.\nVolatility is measured typically by using the standard deviation of price change in a fixed time in terval, such as a day, a month or a year. The higher the volatility, the riskier the asset. One o the primary challenges in designing volatility models is to identify the existence of latent (stochas tic) variables or processes and to characterise the underlying dependences or interactions betweer variables within a certain time span. 
A classic approach has been to handcraft the characteris tic features of volatility models by imposing assumptions and constraints, given prior knowledg and observations. Notable examples include autoregressive conditional heteroskedasticity (ARCH model (Engle, 1982) and its generalisation GARCH (Bollerslev, 1986), which makes use of autore gression to capture the properties of time-variant volatility within many time series. Heston (1993 assumed that the volatility follows a Cox-Ingersoll-Ross (CIR) process (Cox et al., 1985) and de rived a closed-form solution for options pricing. While theoretically sound, those approaches requir strong assumptions which might involve complex probability distributions and non-linear dynamic that drive the process, and in practice, one may have to impose less prior knowledge and rectify a solution under the worst-case volatility case (Avellaneda & Paras, 1996).\nFigure 2: A case study of time series prediction\ncovariance matrix; 30 sampling for the latent variable at each time step. Instead of single LSTM layers, here we adopt stacked LSTM layers composed of 2 10 LSTM cells..\n(b) Synthetic time series prediction IV. (up) The data and the predicted \" and bounds \" o\". (down) Th groundtruth data variance and the corresponding prediction from GARCH(1,1) and NSVM."}, {"section_index": "2", "section_name": "5.5 RESULT AND DISCUSSION", "section_text": "Figure 3: A case study of synthetic time series prediction\nThe overall performance of NSVM and baselines is listed in details in Table 1 and case studies o synthetic data and real-world financial data are illustrated in Fig. 2. The results show that NSVM has higher accuracies for modelling heteroskedastic time series on various metrics: NLL shows th fitness of the model under likelihood measure; the smoothness indicates that NSVM obtains mor robust representation of the latent volatility; -MSE and -MSE in synthetic data experiment impl the ability of recognising the underlying patterns of both trend and volatility, which in fact verifie our claim of NSVM's high flexibility and rich expressive power for volatility (as well as trenc modelling and forecasting compared with the baselines. Although the improvement comes at th cost of longer training time before convergence, it can be mitigated by applying parallel computin techniques as well as more advanced network architecture or training procedure.\nIn this paper, we take a fully data driven approach and determine the configurations with as few. exogenous input as possible, or even purely from the historical data. We propose a neural network. re-formulation of stochastic volatility by leveraging stochastic models and recurrent neural networks (RNNs). We are inspired by the recent development on variational approaches of stochastic (deep). neural networks (Kingma & Welling, 2013; Rezende et al., 2014) to a recurrent case (Chung et al.,. 2015: Fabius & van Amersfoort, 2014: Bayer & Osendorfer, 2014), and our formulation shows thal existing volatility models such as the GARCH (Bollerslev, 1986) and the Heston model (Heston,. 1993) are the special cases of our neural stochastic volatility formulation. 
With the hidden latent."}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "3.0 data 2.5 2.0 x 1.5 1.0 0.5 0.0 0 500 1000 1500 2000 2500 timestep 0.16 0.14 garch's prediction 0.12 nsvm's prediction 0.08 0.06 0.04 0.02 0.00 0 500 1000 1500 2000 2500\n3.5 data 3.0 2.5 +o 2.0 X 1.5 1.0 0.5 0.0 C.: 500 1000 1500 2000 2500 timestep 0.18 0.16 garch's prediction 0.14 nsvm's prediction 0.12 0.10 el 0.08 0.06 0.04 0.02 0.00 500 1000 1500 2000 2500\nExperiments with synthetic data and real-world financial data are performed, showing that the pro posed model outperforms the widely-used GARCH model on several metrics of the fitness and the. accuracy of time series modelling and prediction: it verifies our model's high flexibility and rich. expressive power.\n*the same results obtained from AR(20) mean models\nTable 1: Results of the experiments\n3.0 data 2.5 2.0 u+ x 1.5 1.0 0.5 0.0! 0 500 1000 1500 2000 2500 timestep 0.16 0.14 garch's prediction 0.12 nsvm's prediction 0.04 0.02 0.00 c. 500 1000 1500 2000 2500\nThe newly proposed NSVM outperforms standard econometric models GARCH(1,1), EGARCH(1,1), GJR-GARCH(1,1,1) and some other variants as well as the MCMC-based model \"stochvol\" and the recent GP-based model \"GPvOL'. Apart from the higher accuracy. NSVM obtained, it provides us with the ability to simply generalise univariate time series analysis. to multivariate cases by extending network dimensions and manipulating the covariance matrices Furthermore, it allows us to implement and deploy a similar framework on other applications, for example signal processing and denoising. The shortcoming of NSVM comparing to GPVOL is that the training procedure is offline: for short-term prediction, the experiments have shown the. accuracy, but for long-term forecasting, the parameters need retraining, which will be rather time. consuming. The online algorithm for inference will be one of the work in the future.\nOn the other hand, deep learning (LeCun et al., 2015; Schmidhuber, 2015) that utilises nonlinear structures known as deep neural networks, powers various applications. It has triumph over pattern recognition challenges, such as image recognition (Krizhevsky et al., 2012; He et al., 2015; van den Oord et al., 2016), speech recognition (Hinton et al., 2012; Graves et al., 2013; Chorowski et al., 2015), machine translation (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014; Luong et al., 2015) to name a few.\nSpecifically, our NSVM outperforms GARCH(1,1) on 142 out of 162 stocks on the metric of NLL In particular, NSVM obtains -2.111, -2.044, -2.609 and -1.939 on the stocks corresponding to Fig2(b), Fig 4(a), (b) and (c) respectively, each of which is better than the that of GARCH (0.3433 0.589, 0.109 and 0.207 1ower on NLL).\nTime-dependent neural networks models include RNNs with advanced neuron structure such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), gated recurrent unit (GRU) (Cho et al., 2014), and bidirectional RNN (BRNN) (Schuster & Paliwal, 1997). Recent results show that RNNs excel for sequence modelling and generation in various applications (Graves, 2013; Gregor et al., 2015). However, despite its capability as non-linear universal approximator, one of the draw- backs of neural networks is its deterministic nature. Adding latent variables and their processes into neural networks would easily make the posterori computationally intractable. 
Recent work shows that efficient inference is available via variational methods when hidden continuous variables are embedded into the neural network structure (Kingma & Welling, 2013; Rezende et al., 2014). Some early work has started to explore the use of variational inference to make RNNs stochastic (Chung et al., 2015; Bayer & Osendorfer, 2014; Fabius & van Amersfoort, 2014). Bayer & Osendorfer (2014) and Fabius & van Amersfoort (2014) considered hidden variables that are independent between time steps, whereas Fraccaro et al. (2016) utilised a backward-propagating inference network according to the model's Markovian properties. Our work in this paper extends that of Chung et al. (2015) with a focus on volatility modelling for time series. We assume that the hidden stochastic variables follow a Gaussian autoregressive process, which is then used to model both the variance and the mean. We show that the neural network formulation is a general one, which covers two major financial stochastic volatility models as special cases by defining the specific hidden variables and non-linear transforms.

(b) Real-world stock price prediction III. (up) The data and the predicted mean μ with bounds μ ± σ. (down) The variance predictions from GARCH(1,1) and NSVM. The prediction of NSVM is smoother and more stable than that of GARCH(1,1), also yielding smaller NLL.

Stochastic processes are often defined by stochastic differential equations (SDEs); e.g., a (univariate) generalised Wiener process is d x_t = μ d t + σ d w_t, where μ and σ denote the time-invariant rates of drift and standard deviation (square root of variance), while d w_t ~ N(0, d t) is the increment of a standard Wiener process.

As we have known, for models that evolve explicitly in terms of the squared residuals e_t² = (x_t − μ_t)², e.g. GARCH, the multi-step-ahead forecasts have closed-form solutions, which means the forecasting procedure is deterministic.

Figure 4: A case study of real-world stock time series prediction

                  | SYNTHETIC DATA                                  | STOCK DATA
Model             | NLL      | μ-MSE     | σ-MSE    | smoothness    | NLL    | smoothness
NSVM              | 3.932e-2 | 2.393e-3  | 6.178e-4 | 4.322e-3      | −2.184 | 3.505e-3
GARCH(1,1)        | 6.905e-2 | 7.594e-3* | 8.408e-4 | 4.616e-3      | −1.961 | 6.659e-3
GJR-GARCH(1,1,1)  | 6.491e-2 | 7.594e-3* | 7.172e-4 | 4.426e-3      | −2.016 | 4.967e-3
EGARCH(1,1)       | 5.913e-2 | 7.594e-3* | 8.332e-4 | 4.546e-3      | −2.001 | 5.451e-3
ARCH(5)           | 7.577e-2 | 7.594e-3* | 1.610e-3 | 5.880e-3      | −1.955 | 7.917e-3
TARCH(1,1,1)      | 6.365e-2 | 7.594e-3* | 7.284e-4 | 4.727e-3      | −2.012 | 3.399e-3
APARCH(1,1,1)     | 6.187e-2 | 7.594e-3* | 9.115e-4 | 4.531e-3      | −2.014 | 4.214e-3
AGARCH(1,1)       | 6.311e-2 | 7.594e-3* | 9.543e-4 | 4.999e-3      | −2.008 | 5.847e-3
NAGARCH(1,1,1)    | 1.134e-1 | 7.594e-3* | 9.516e-4 | 4.904e-3      | −2.020 | 5.224e-3
IGARCH(1,1)       | 6.751e-2 | 7.594e-3* | 9.322e-4 | 4.019e-3      | −1.999 | 4.284e-3
IAVGARCH(1,1)     | 6.901e-2 | 7.594e-3* | 7.174e-4 | 4.282e-3      | −1.984 | 4.062e-3
FIGARCH(1,d,1)    | 6.666e-2 | 7.594e-3* | 1.055e-3 | 5.045e-3      | −2.002 | 5.604e-3
MCMC-stochvol     | 0.368    | 7.594e-3* | 3.956e-2 | 6.421e-4      | −0.909 | 1.511e-3
GPVol             | 1.273    | 7.594e-3* | 6.457e-1 | 4.142e-2      | −2.052 | 5.739e-3
NAIVE             | 2.037e-1 | 8.423e-3  | 3.515e-3 | 2.708e-2      | −0.918 | 7.459e-3

(a) Real-world stock price prediction II. (up) The data and the predicted mean μ with bounds μ ± σ. (down) The variance predictions from GARCH(1,1) and NSVM. The prediction of NSVM is smoother and more stable than that of GARCH(1,1), also yielding smaller NLL.

On the other hand, for models that are not linear or do not explicitly evolve in terms of e_t², e.g.
EGARCH (linear but not evolving in terms of e_t²) and our NSVM (nonlinear and not evolving in terms of e_t²), the closed-form solutions are absent and thus an analytical forecast is not available. We will instead use simulation-based forecasts, which use a random number generator to simulate draws from the predicted distribution and build up a pre-specified number of paths of the variances at 1 step ahead. The draws are then averaged to produce the forecast of the next step. For an n-step-ahead forecast, it requires n iterations of the 1-step-ahead forecast to get there.

x_t = x_{t−1} + μ + σ ε_t    (1)

NSVM is designed as an end-to-end model for volatility estimation and forecasting. It takes the price of stocks as input and outputs the distribution of the price at the next step. It learns the dynamics using RNNs, leading to an implicit, highly nonlinear formulation, where only simulation-based forecasting is available. In order to obtain reasonably accurate forecasts, the number of draws should be relatively large, which will be very expensive to compute. Moreover, the number of draws will increase exponentially as the forecast horizon grows, so it will be infeasible to forecast several time steps ahead. We plan to investigate the characteristics of NSVM's long-horizon forecasts and to design a model-specific sampling method for efficient evaluation in the future.

Here x_{t−1} is the observation drawn from N(μ_{t−1}, σ²_{t−1}) at time t − 1. Note that the determinism is in a conditional sense, which means that it only holds under the condition that the complete history {x_{<t}} is presented, such as in the case of the 1-step-ahead forecast; otherwise the current volatility would still be stochastic as it is built on the stochastic process {x_t}. However, for multi-step-ahead forecasts, we usually exploit the relation E_{t−1}[(x_t − μ_t)²] = σ_t² to substitute the corresponding terms and calculate the forecasts with longer horizon in a recursive fashion, for example, σ²_{t+1} = α₀ + α₁ E_{t−1}[(x_t − μ_t)²] + β₁ σ²_t = α₀ + (α₁ + β₁) σ²_t. For an n-step-ahead forecast, there will be n iterations, and the procedure is hence also deterministic."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Torben G Andersen and Tim Bollerslev. Answering the skeptics: Yes, standard volatility models do provide accurate forecasts. International Economic Review, pp. 885-905, 1998.

Another extension applies to σ_t, from being conditionally deterministic (i.e. deterministic given the complete history {x_{<t}}) to fully stochastic: σ_t = σ(z_{<t}) is driven by another latent stochastic process {z_t} instead of the observable process {x_t}. Heston (1993) instantiates a continuous-time stochastic volatility model for univariate processes:

Christopher M Bishop. Pattern Recognition and Machine Learning. Springer, 2006.

d x_t = (μ − σ_t²/2) d t + σ_t d w_t^{(1)},    (3)
d σ_t = a σ_t d t + b d w_t^{(2)},    (4)

George EP Box, Gwilym M Jenkins, Gregory C Reinsel, and Greta M Ljung. Time Series Analysis: Forecasting and Control. John Wiley & Sons, 2015.

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

x_t = x_{t−1} + μ − σ_t²/2 + σ_t ε_t, where σ_t = (1 + a) σ_{t−1} + b z_t    (5)

As discussed above, the observable variable x_t follows a Gaussian distribution whose mean and variance depend on the history of the observable process {x_t} and the latent process {z_t}. We presume in addition that the latent process {z_t} is an autoregressive model such that z_t is (conditionally) Gaussian distributed.
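To make the contrast concrete, the closed-form GARCH(1,1) recursion sketched above can be written in a few lines; the parameter values in the demo are arbitrary choices of ours:

```python
def garch11_forecast(sigma2_t, eps2_t, alpha0, alpha1, beta1, horizon):
    # h = 1 uses the realised squared residual; for h > 1 we substitute
    # E[eps^2] = sigma^2, giving
    # sigma^2_{t+h} = alpha0 + (alpha1 + beta1) * sigma^2_{t+h-1}.
    s2 = alpha0 + alpha1 * eps2_t + beta1 * sigma2_t
    path = [s2]
    for _ in range(horizon - 1):
        s2 = alpha0 + (alpha1 + beta1) * s2
        path.append(s2)
    return path

print(garch11_forecast(0.04, 0.01, alpha0=0.01, alpha1=0.08, beta1=0.9, horizon=5))
```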
Therefore, we formulate the volatility model in general as:\nZt ~ N((z<t),Yz(z<t))z xt ~ N(*(x<t,Z<t),x(x<t,Z<t))\nTimothy Dozat. Incorporating nesterov momentum into adam. 2015.\nRobert F Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of united kingdom inflation. Econometrica: Journal of the Econometric Society, pp. 987-1007. 1982.\nThese two formulas (Eqs. (6) and (7)) abstract the generalised formulation of volatility models Together, they represents a broad family of volatility models with latent variables, where the Heston\nstandard Wiener process at time t. In a small time interval between t and t + t, the change in the variable is xt = t + wt. Let t = 1, we obtain the discrete-time version of basic volatility. model:\nThe time-invariant variance can be extended to be a function t = (x<t) relying on history of the (observable) underlying stochastic process {x<t}. The current variance t is therefore de termined given the history {x<t} up to time t. An example of such extensions is the univariate GARCH(1.1) model (Bollerslev, 1986):\n=Q0+Q1xt-1t_ O t t-1\nDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.\nJustin Bayer and Christian Osendorfer. Learning stochastic recurrent networks. arXiv preprint arXiv:1411.7610. 2014\n-\nTim Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of econometrics. 31(3):307-327, 1986\nwhere the correlation between d w. and d wt 2 applies: IE[d w d w = p d t. We apply Euler's scheme of quantisation (Stoer & Bulirsch, 2013) to obtain the discrete analogue to the continuous- time Heston model (Eqs. (3) and (4)):\nJohn C Cox, Jonathan E Ingersoll Jr, and Stephen A Ross. A theory of the term structure of interes. rates. Econometrica: Journal of the Econometric Society. pp. 385-407. 1985.\nwhere ~(x<t, Z<t) and (x<t, Z<t) denote the autoregressive time-varying mean and variance of the latent variable zt while (x<t, Z<t) and (x<t, Z<t) represent the mean and variance of observable variable xt, which depend on not only history of the observable process {x<t} but that of the latent process {z<t}\nmodel for stochastic volatility is merely a special case of the family. Furthermore, it will degenerate to deterministic volatility models such as the well-studied GARCH model if we disable the laten process.\nRobert F Engle and Kenneth F Kroner. Multivariate simultaneous generalized arch. Econometri theory, 11(01):122-150. 1995.\nMarco Fraccaro. Soren Kaae Sonderby, Ulrich Paquet, and Ole Winther. Sequential neural model with stochastic layers. arXiv preprint arXiv:1605.07571, 2016\nIn this section, we establish the neural stochastic volatility model (NSVM) for stochastic volatility estimation and forecast."}, {"section_index": "5", "section_name": "4.1 GENERATING OBSERVABLE SEOUENCE", "section_text": "Recall that the latent variable zt (Eq. (6)) and the observable xt (Eq. (7)) are described by autore gressive models (xt has the exogenous input {z<t}.) For the distributions of {zt} and {xt}, the following factorisation applies:\npo(Z) =ps(zt|z<t) =]N(zt;$(z<t),$(z<t)), t p(XZ) = Ips(xt|x<t,z<t) =I[N(xt;$(x<t,z<t),$(x<t,Z<t)) t t\nKarol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623. 2015.\nBarbara Hammer. On the approximation capability of recurrent neural networks. 
Neurocomputing 31(1):107-123. 2000\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog nition. arXiv preprint arXiv:1512.03385, 2015.\nps(X,Z) =]]Ps(xt|x<t,Z<t)Ps(zt|z<t) t =1IN(zt;$(z<t),$(z<t))N(xt;$(x<t,z<t),E$(x<t,z<t))\nSteven L Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options. Review of financial studies, 6(2):327-343, 1993.\nGeoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly. Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE. Signal Processing Magazine, 29(6):82-97, 2012.\nIt is observed that the means and variances are conditionally deterministic: given the historica information {z<t}, the current mean 7 = $(<t) and variance = $(z<t) of zt is obtainec and hence the distribution N(zt; z, ) of zt is specified; after sampling zt from the specifie distribution, we incorporate {x<t} and calculate the current mean = $(x<t, Z<t) and varianc = $(x<t, <t) of xt and determine its distribution N(xt, t, ) of xt. It is natural an convenient to present such a procedure in a recurrent fashion because of its autoregressive nature As is known that RNNs can essentially approximate arbitrary function of recurrent form (Hammer 2000), the means and variances, which may be driven by complex non-linear dynamics, can b efficiently computed using RNNs.\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8) 1735-1780, 1997.\nJohn C Hull. Options, futures, and other derivatives. Pearson Education India, 2006\nIt is always a good practice to reparameterise the random variables before we go into RNN ar-. chitecture. As the covariance matrix is symmetric and positive definite, it can be factorised as = UAUT, where A is a full-rank diagonal matrix with positive diagonal elements. Let. A = U A, we have = AAT. Hence we can reparameterise the latent variable zt (Eq. (6)) and observable xt (Eq. (7)):\nwhere A(Az)T = z,A(A)T = and e ~ N(0, Ix), e7 ~ N(0, I) are auxiliary vari- ables. Note that the randomness within the variables of interest (e.g. zt) is extracted by the auxiliary variables (e.g. e) which follow the standard distributions. Hence, the reparameterisation guarantees that gradient-based methods can be applied in learning phase (Kingma & Welling, 2013).\nMinh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.\nIn this paper, the joint generative model is comprised of two sets of RNN and multilayer perceptror (MLP): RNN?/MLP for the latent variable, while RNN*/MLP for the observables. We stack these two RNN/MLP together according to the causal dependency between those variables. The\npo(Z) =ps(zt|z<t) =TTN(zt;$(z<t),$(z<t)) t t po(X|Z) = 1ps(xt|x<t,Z<t) = 1[N(xt;$(x<t,Z<t),$(x<t,Z<t))\nwhere X = {xt} and Z = {zt} are the sequences of observable and latent variables, respectively, while represents the parameter set of the model. 
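A small numpy demonstration of this reparameterisation; the factor A below is an arbitrary example with Σ = AAᵀ, and the function name is ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterised_sample(mu, A, rng=rng):
    # Draw z = mu + A @ eps with eps ~ N(0, I), so that Cov(z) = A A^T.
    # The randomness lives entirely in eps, keeping mu and A
    # differentiable with respect to the model parameters.
    eps = rng.standard_normal(mu.shape[0])
    return mu + A @ eps

mu = np.zeros(2)
A = np.array([[1.0, 0.0], [0.5, 0.8]])  # Sigma = A @ A.T
zs = np.stack([reparameterised_sample(mu, A) for _ in range(10000)])
print(np.cov(zs.T))  # approaches A @ A.T
```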
The full generative model is defined as the joint distribution:

p_φ(X, Z) = ∏_t p_φ(x_t | x_{<t}, z_{<t}) p_φ(z_t | z_{<t}) = ∏_t N(z_t; μ^z_t(z_{<t}), Σ^z_t(z_{<t})) N(x_t; μ^x_t(x_{<t}, z_{<t}), Σ^x_t(x_{<t}, z_{<t}))    (10)

With the reparameterisation z_t = μ^z_t + A^z_t ε^z_t and x_t = μ^x_t + A^x_t ε^x_t, the joint generative model is implemented as the generative network:

h^z_t = RNN^z(h^z_{t−1}, z_{t−1}; φ),  {μ^z_t, A^z_t} = MLP^z(h^z_t; φ),  z_t = μ^z_t + A^z_t ε^z_t,
h^x_t = RNN^x(h^x_{t−1}, x_{t−1}, z_t; φ),  {μ^x_t, A^x_t} = MLP^x(h^x_t; φ),  x_t = μ^x_t + A^x_t ε^x_t,    (16)-(18)

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.

where h^z_t and h^x_t denote the hidden states of the corresponding RNNs. The MLPs map the hidden states of the RNNs into the means and deviations of the variables of interest. The parameter set φ is comprised of the weights of the RNNs and MLPs.

Josef Stoer and Roland Bulirsch. Introduction to Numerical Analysis, volume 12. Springer Science & Business Media, 2013.

One should notice that when the latent variable z_t is obtained, e.g. by inference (details in the next subsection), the conditional distribution p_φ(X|Z) (Eq. (9)) is involved in generating the observable x_t instead of the joint distribution p_φ(X, Z) (Eq. (10)). This is essentially the scenario of predicting future values of the observable variable given its history. We will use the term "generative model" and will not distinguish between the joint generative model and the conditional one, as the meaning can be inferred from context.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.

Yue Wu, Jose Miguel Hernandez-Lobato, and Zoubin Ghahramani. Gaussian process volatility model. In Advances in Neural Information Processing Systems, pp. 1044-1052, 2014."}, {"section_index": "6", "section_name": "4.2 INFERENCING THE LATENT PROCESS", "section_text": "Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.

As the generative model involves the latent variable z_t, whose true values are inaccessible even when x_t has been observed, the marginal likelihood p_φ(X) becomes the key that bridges the model and the data. The calculation of the marginal likelihood involves the posterior distribution p_φ(Z|X), which is often intractable as complex integrals are involved; we are thus unable to learn the parameters or to infer the latent variables directly. Therefore, we consider instead a restricted family of tractable distributions q_ψ(Z|X), referred to as the approximate posterior family, as approximations to the true posterior p_φ(Z|X), such that the family is sufficiently rich and flexible to provide good approximations (Bishop, 2006; Kingma & Welling, 2013; Rezende et al., 2014).

q_ψ(Z|X) = ∏_t q_ψ(z_t | z_{<t}, x_{<t}) = ∏_t N(z_t; μ̃^z_t(z_{<t}, x_{<t}), Σ̃^z_t(z_{<t}, x_{<t}))    (19)

where μ̃^z_t(z_{<t}, x_{<t}) and Σ̃^z_t(z_{<t}, x_{<t}) are functions of the historical information {z_{<t}}, {x_{<t}}, representing the approximated mean and variance of the latent variable z_t, respectively. Note that ψ represents the parameter set of the inference model.

h̃^z_t = RNN(h̃^z_{t−1}, z_{t−1}, x_{t−1}; ψ),  {μ̃^z_t, Ã^z_t} = MLP(h̃^z_t; ψ),  z_t = μ̃^z_t + Ã^z_t ε^z_t,"}, {"section_index": "7", "section_name": "4.3 FORECASTING OBSERVATIONS IN FUTURE", "section_text": "Mike Schuster and Kuldip K Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681,
1997\nWe define the inference model in accordance with the approximate posterior family we have pre sumed, in a similar fashion as (Chung et al., 2015), where the factorised distribution is formulatec as follows:\nThe inference model essentially describes an autoregressive model on zt with exogenous input xt Hence, in a similar fashion as the generative model, we implement the inference model as the infer- ence network using RNN/MLP:\nIn the realm of time series analysis, we usually pay more attention on forecasting over generating. (Box et al., 2015). It means that we are essentially more interested in the generation procedure\n*OOOO *+OOOO ui1O E1O H+1O Er+1O > Generate hiOOO ht ht+O C Zt+1O O ut1O Eg1O 4 uf+1O E4+1C Inference hOOOC ht OOO h4+1O OO C Xt+1O"}, {"section_index": "8", "section_name": "COMPLEMENTARY DISCUSSIONS OF NSYM", "section_text": "In this appendix section we present detailed derivations of NSVM, specifically, the parameters learn ing and calibration, and covariance reparameterisation.."}, {"section_index": "9", "section_name": "A.1 LEARNING PARAMETERS / CALIBRATION", "section_text": "Given the observations X, the objective of learning is to maximise the marginal log-likelihood of X given , where the posterior is involved. However, as we have discussed in the previous subsection the true posterior is usually intractable, which means exact inference is difficult. Hence, approximate inference is applied instead of rather than exact inference by following (Kingma & Welling, 2013. Rezende et al., 2014). We represent the marginal log-likelihood of X in the following form:.\nFigure 1: Forecasting the future using Neural Stochastic Volatility Model\nwhere the expectation term Eqw(z|x) [ln po(X, Z) - ln qw(Z|X)] is referred to as the variational lower bound [q; X, , ] of the approximate posterior qy(Z|X, ). The lower bound is essen- tially a functional with respect to distribution q and parameterised by observations X and parameter sets , of both generative and inference model. In theory, the marginal log-likelihood is max. imised by optimisation on the lower bound [q; X, , ] with respect to and .\nconditioning on the historical information rather than generation purely based on a priori belief since the observations in the past of x<t influences our belief of the latent variable zt. Therefore we apply the approximate posterior distribution of the latent variable zt (Eq. (19)) as discussed in previous subsection, in place of the prior distribution (Eq. (8)) to build our predictive model.\nWe apply the factorisations in Eas. (10) and (19) to the integrand within exnectation of Eq. (30)\nlnpo(X,Z)-lnqw(ZX) = > * lnN(xt;$(x<t,Z<t),$(x<t,Z<t)\nNSVM is learned using Stochastic Gradient Variational Bayes following (Kingma & Welling, 2013. Rezende et al., 2014). For readability, we provide the detailed derivation in Appendix A.\nAlthough we refer to GARCH and Heston as volatility models, the purposes of them are quit lifferent: GARCH is a predictive model used for volatility forecasting whereas Heston is mor f a generative model of the underlying dynamics which facilitate closed-form solutions to SDE n option pricing. The proposed NSVM has close relations to GARCH(1,1) and Heston model oth of them can be regarded as a special case of the neural network formulation. Recall Eq. (2 GARCH(1,1) is formulated as 0? = Q0 + Q1(xt-1 - t-1)2 + 10?-1, where t-1 is the tren estimate of {xt} at time step t calculated by some mean models. 
A common practice is to assum hat t follows the ARMA family (Box et al., 2015), or even simpler, as a constant that t = . W adopt the constant trend for simplicity as our focus is on volatility estimation.\nlndet +(+ Ae t)t)-1(t+Ae- t) 2N\n+ ln det t + (xt - t)()-1(xt - t) - lndet t + const\nWe define the hidden state as ht = [, ot]' , and disable the latent variable zt = O as the volatil. ity modelled by GARCH(1,1) is conditionally deterministic. Hence, we instantiate the generative network (Eqs. (16), (17) and (18)) as follows:\nwhere Az(Az)T = z and ez ~ N(0, I) is parameter-independent and considered as constant when calculating derivatives"}, {"section_index": "10", "section_name": "A.2 COVARIANCE PARAMETERISATION", "section_text": "As is known, it entails a computational complexity of O(M3) to maintain and update the full-size covariance with M dimensions (Rezende et al., 2014). In the case of very high dimensions, the full-size covariance matrix would be too computationally expensive to afford. Hence, we use insteac the covariance matrices with much fewer parameters for efficiency. The simplest setting is to use diagonal precision matrix (i.e. the inverse of covariance matrix) -1 = D. However, it draws very strong restrictions on representation of the random variable of interest as the diagonal precisior matrix (and thus diagonal covariance matrix) indicates independence among the dimensions. There fore, the tradeoff becomes low-rank perturbation on diagonal matrix: -1 = D + VVT, where V = {v1,..., vk} denotes the perturbation while each vk is a M-dimensional column vector.\nThe set of generative parameters is = { , Qo, Q1, B1}\nNext, we show the link between NSVM and (discrete-time) Heston model (Eq. (5)). Let hx xt-1, , ot]' be the hidden state and zt be i.i.d. standard Gaussian instead of autoregressive vari-\nGiven the historical observations x <t, the predictive model infers the current value of latent variable zt using inference network and then generates the prediction of the current observation xt using generative network. The procedure of forecasting is shown in Fig. 1.\ns there is usually no closed-form solution for the expecation (Eq. (30)), we have to estimate th <pectation by applying sampling methods to latent variable zt through time in accordance wit. e causal dependences. We utilise the reparameterisation of zt as shown in Eq. (22) such tha e sample the corresponding auxiliary standard variable e rather than zt itself and compute th alue of zt on the fly. This ensures that the gradient-based optimisation techniques are applicabl. s the reparameterisation isolates the model parameters of interest from the sampling procedure. B. mpling N sample paths, the estimator of the lower bound is defined as the average of paths:.\n{,t}=MLP(h;) ={[1,0]h, [0,1]h} (2 ht = RNN(ht-1,xt-1;) -1 - 1, 0ht-1 (2 Xt = l+OtEt where et ~ N(0, 1). (2\n,t} =MLP(h;) ={[1, 0]hx,[0,1]h} ht =RNNg(ht-1,xt-1;) xt-1 [1, 0]ht_1)2 Xt = l+ OtEt where et ~ N(0, 1)\nable, we represent the Heston model in the framework of NSVM as:\nThe corresponding covariance matrix and its determinant is obtained using Woodbury identity anc natrix determinant lemma\nEt {t,t} = MLP(h;) ={[1, 1, 0]hx- [0, 0, O.5](h)2, [0, 0, 1]h ht = RNN(ht-1,xt-1,Zt;) [0 0 0 [0 0 1 0 h 0 Xt-1 + 0 Zt 7 [0 0 1 + a 0] [6] Xt = t+tEt\n= D-1_ D-1V(I+vTD-1V)-1yD-1\nTo calculate the deviation A for the factorisation of covariance matrix = AA', we first con sider the rank-1 perturbation where K = 1. 
It follows that V = v is a column vector, and I + VᵀD⁻¹V = 1 + vᵀD⁻¹v is a real number. A particular solution for A is obtained:

A = D^{−1/2} − [γ^{−1}(1 − √η)] D^{−1} v vᵀ D^{−1/2}, where γ = vᵀD⁻¹v and η = (1 + γ)^{−1}.

The set of generative parameters is φ = {μ, a, b}.

One should notice that, in practice, the formulation may change in accordance with the specific architecture of the neural networks involved in building the model, and hence a closed-form representation may be absent."}, {"section_index": "11", "section_name": "5 EXPERIMENTS", "section_text": "In this section, we present our experiments on both synthetic and real-world datasets to validate the effectiveness of NSVM.

Algorithm 1 gives the detailed calculation scheme:

Algorithm 1 Calculation of rank-K perturbation of precision matrices
Input: The original diagonal matrix D; the rank-K perturbation V = {v₁, ..., v_K}
Output: A such that the factorisation AAᵀ = Σ = (D + VVᵀ)⁻¹ holds
1: A_(0) = D^{−1/2}
2: i = 0
3: while i < K do
4:   γ_(i) = vᵢᵀ A_(i) A_(i)ᵀ vᵢ
5:   η_(i) = (1 + γ_(i))⁻¹
6:   A_(i+1) = A_(i) − [γ_(i)⁻¹(1 − √η_(i))] A_(i) A_(i)ᵀ vᵢ vᵢᵀ A_(i)
7:   i = i + 1
8: A = A_(K)"}, {"section_index": "12", "section_name": "5.1 BASELINES AND EVALUATION METRICS", "section_text": "To evaluate the performance of volatility modelling, we adopt the standard econometric model GARCH(1,1) Bollerslev (1986) as well as its variants EGARCH(1,1) Nelson (1991), GJR-GARCH(1,1,1) Glosten et al. (1993), ARCH(5), TARCH(1,1,1), APARCH(1,1,1), AGARCH(1,1,1), NAGARCH(1,1,1), IGARCH(1,1), IAVGARCH(1,1), and FIGARCH(1,d,1) as baselines, each paired with the corresponding mean model AR(20). We also compare our NSVM against the MCMC-based model "stochvol" and the recent Gaussian-processes-based model "GPVol" Wu et al. (2014), a non-parametric model that jointly learns the dynamics and hidden states via an online inference algorithm. In addition, we set up a naive forecasting model as an alternative baseline, referred to as NAIVE, which maintains a sliding window of size 20 over the most recent historical observations and forecasts the current mean and volatility as the sample mean and variance of the window.

For the synthetic data experiments, we take four metrics into consideration for performance evaluation: 1) the negative log-likelihood (NLL) of observing the test sequence with respect to the generative model parameters; 2) the mean-squared error (MSE) between the predicted mean and the ground truth (μ-MSE); 3) the MSE of the predicted variance against the true variance (σ-MSE); 4) smoothness of fit, which is the standard deviation of the differences of successive variance estimates. As for the real-world scenarios, the trend and volatility are implicit and no ground truth is accessible to compare with, so we consider only NLL and smoothness as evaluation metrics for the real-world data experiment."}, {"section_index": "13", "section_name": "5.2 MODEL IMPLEMENTATION", "section_text": "The implementation of NSVM in experiments is in accordance with the architecture illustrated in Fig. 1: it consists of two neural networks, namely the inference network and the generative network.
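Algorithm 1 above amounts to a few lines of linear algebra; a minimal numpy sketch follows (the function name and demo values are ours, with D passed as its positive diagonal d):

```python
import numpy as np

def rank_k_precision_factor(d, V):
    # Algorithm 1: return A with A @ A.T == inv(diag(d) + V @ V.T),
    # built from K successive rank-1 updates.
    A = np.diag(1.0 / np.sqrt(d))   # A_(0) = D^{-1/2}
    for k in range(V.shape[1]):
        v = V[:, k:k + 1]
        Sv = A @ (A.T @ v)          # Sigma_(i) v
        gamma = float(v.T @ Sv)
        eta = 1.0 / (1.0 + gamma)
        A = A - (1.0 - np.sqrt(eta)) / gamma * Sv @ (v.T @ A)
    return A

d = np.array([2.0, 3.0, 1.5])
V = np.ones((3, 2))
A = rank_k_precision_factor(d, V)
print(np.allclose(A @ A.T, np.linalg.inv(np.diag(d) + V @ V.T)))  # True
```

This dense version costs O(M²) per update; the O(M) figure quoted in the text presumably assumes the diagonal-plus-low-rank structure is kept implicit rather than materialised.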
Each network comprises a set of RNN/MLP as we have discussed above: the RNN is instantiated by stacked LSTM layers whereas the MLP is essentially a 1-layer fully-connected feedforward network which splits into two equal-sized sublayers with different activation functions - one sublayer applies exponential function to impose the non-negativity and prevents overshooting of variance estimates while the other uses linear function to calculate mean estimates. During experiment, the model is. structured by cascading the inference network and generative network as depicted in Fig. 1. The input layer is of size 20, which is the same as the embedding dimension Dg; the layer on the.\nRepeatable experiment code: https: //github. com/xxj96/nsvm\n(26) {t,t}= MLP(h;) ={[1, 1, 0]h [0, 0, O.5](h)2, [0, 0,1]h} (27) hf =RNNg(ht-1,xt-1,Zt;$) [0 0 0 1] [0] 0 1 0 ht 0 Xt-1 + Zt7 (28) 1+ [o] [0 0 1+ a (29) Xt = t+OtEt\nObserve that VVT = k=1 UU, the perturbation of rank K is essentially the superposition of K perturbations of rank 1. Therefore, we can calculate the deviation A iteratively, an algorithm is provided to demonstrate the procedure of calculation. The computational complexity for rank-K. perturbation remains to be O(M) given K < M."}, {"section_index": "14", "section_name": "B MORE CASE STUDIES", "section_text": "interface of inference network and generative network - we call it latent variable layer -- represents. the latent variable z, where its dimension is 2. The output layer has the same structure as the input one, therefore the latent variable layer acts as a bottleneck of the entire architecture which helps to. extract the key factor. The stacked layers between input layer, latent variable layer and output layer. are the hidden layers of either inference network or generative network, it consists of 1 or 2 LSTM. layers with size 10, which contains recurrent connection for temporal dependencies modelling..\nThe reason of the drops in Fig 4(b) and (c) seems to be that NSVM has captured the jumps and. drops of the stock price using its nonlinear dynamics and modelled the sudden changes as part of. the trend: the estimated trend \"mu\"' goes very close to the real observed price even around the jumps and drops (see the upper figure of Fig 4(b) and (c) around step 1300 and 1600). The residual (i.e difference between the real value of observation and the trend of prediction) therefore becomes quite. small, which lead to a lower volatility estimation..\nFor econometric models. we utilise several widely-used packages for time series analysis: statsmod\n1a1 w1ae1y-uscu packa Ics ana1ys1s. slulsnO els (http://statsmodels.sourceforge.net/),. arch (https://pypi.python. org/pypi/arch/3.2), Oxford-MFE-toolbox (https://www.kevinsheppard. com/MFe_Toolbox), stochvol (https://cran.r-project.org/web/packages/ stochvol) and fGarch (https://cran.r-project.org/web/packages/fGarch). The implementation of GPvOL is retrived from http://jmh1.org and we adopt the same. hyperparameter setting as in Wu et al. (2014).\nOn the other hand, for the baselines, we adopt AR as the trend model, which is a relatively simple linear model compared with the nonlinear NSVM. 
AR would not capture the sudden changes anc leave those spikes in the residual; GARCH then took the residuals as input for volatility modelling resulting in the spikes in volatility estimation."}, {"section_index": "15", "section_name": "5.3 SYNTHETIC DATA EXPERIMENT", "section_text": "We build up the synthetic dataset by generating 256 heteroskedastic univariate time series, each with. 2000 data points i.e. 2000 time steps. At each time step, the observation is drawn from a Gaussian. distribution with pre-determined mean and variance, where the tendency of mean and variance is synthesised as linear combinations of sine functions. Specifically, for the trend and variance, we synthesis each using 3 sine functions with randomly chosen amplitudes and frequencies; then the. value of the synthesised signal at each timestep is drawn from a Gaussian distribution with the corresponding value of trend and variance at that timestep. A sampled sequence is shown in Fig. 2a. We expect that this limited dataset could well simulate the real-world scenarios: one usually has very. limited chances to observe and collect a large amount of data from time-invariant distributions. In. addition, it seems that every observable or latent quantity within time series varies from time to time and seldom repeats the old patterns. Hence, we presume that the tendency shows long-term patterns. and the period of tendency is longer than observation. In the experiment, we take the former 1500 time steps as the training set whereas the latter 500 as the test set..\nFor the synthetic data experiment, we simplify the recurrent layers in both inference net and generative net as single LSTM layer of size 10. The actual input {xt} fed to NSVM is Dg- dimensional time-delay embedding (Kennel et al., 1992) of raw univariate observation {xt} such that x = [t+1- D, .., xt]. 2-dimensional latent variable zt is adopted to capture the latent pro- cess, and enforces an orthogonal representation of the process by using diagonal covariance matrix. At each time step, 30 samples of latent variable zt are generated via reparameterisation (Eq. (22))."}, {"section_index": "16", "section_name": "5.4 REAL-WORLD DATA EXPERIMENT", "section_text": "We select 162 out of more than 1500 stocks from Chinese stock market and collect the time series. of their daily closing prices from 3 institutions in China. We favour those with earlier listing date. of trading (from 2006 or earlier) and fewer suspension days (at most 50 suspension days in tota. during the period of observation) so as to reduce the noise introduced by insufficient observatior. or missing values, which has significant influences on the performance but is essentially irrelevan. to the purpose of volatility forecasting. More specifically, the dataset obtained contains 162 time. series, each with 2552 data points (7 years). A sampled sequence is shown in Fig. 2b. We divide the whole dataset into two subsets: the training subset consists of the first 2000 data points while the. test subset contains the rest 552 data points.\nSimilar model configuration is applied to the real-world data experiment: time-delay embedding of dimension D on the raw univariate time series; 2-dimensional latent variable with diagona\nState-of-the-art learning techniques have been applied: we introduce Dropout (Zaremba et al., 2014). 
into each LSTM recurrent layer and impose L2-norm on the weights of each fully-connected feed- forward layer as regularistion; NADAM optimiser (Dozat, 2015) is exploited for fast convergence,. which is a variant of ADAM optimiser (Kingma & Ba, 2014) incorporated with Nesterov momen-. tum; stepwise exponential learning rate decay is adopted to anneal the variations of convergence as. time goes."}]
BycCx8qex
[{"section_index": "0", "section_name": "DRAGNN: A TRANSITION-BASED FRAMEWORK FOR DYNAMICALLY CONNECTED NEURAL NETWORKS", "section_text": "Lingpeng Kong\nCarnegie Mellon University Pittsburgh, PA.\nIn this work, we present a compact, modular framework for constructing new recurrent neural architectures. Our basic module is a new generic unit, the Transi- tion Based Recurrent Unit (TBRU). In addition to hidden layer activations, TBRUs have discrete state dynamics that allow network connections to be built dynami cally as a function of intermediate activations. By connecting multiple TBRUs, we can extend and combine commonly used architectures such as sequence-to- sequence, attention mechanisms, and recursive tree-structured models. A TBRU can also serve as both an encoder for downstream tasks and as a decoder for its own task simultaneously, resulting in more accurate multi-task learning. We call our approach Dynamic Recurrent Acyclic Graphical Neural Networks, or DRAGNN. We show that DRAGNN is significantly more accurate and efficient than seq2seq with attention for syntactic dependency parsing and yields more ac- curate multi-task learning for extractive summarization tasks.\nTable 3: Deep stacked parsing compared to state-of-the-art on PTB. * indicates that additional re sources beyond the Penn Treebank are used. Our model is roughly comparable to an ensemble of multiple Stack-LSTM models. and the most accurate without any additional resources.\nOur final model uses 5 TBRU units. Inspired byZhang & Weiss|(2016), a left-to-right POS tagging TBRU provides the first layer of representations. Next, we run two shift-only TBRUs, one in eacl. direction, to provide representations to the parsers. Finally, we connect the left-to-right parser to the right-to-left parser using links defined via the SubTREE function. The result (Table|3) is a state-of. the-art dependency parser, yielding the highest published accuracy for a model trained solely on th Penn Treebank with no additional resources.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "We presented a compact, modular framework for describing recurrent neural architectures. We eva uated our dynamically structured model and found it to be significantly more efficient and accurat than attention mechanisms for dependency parsing and extractive sentence summarization in bot single- and multi-task setups. While we focused primarily on syntactic parsing, the framework prc vides a general means of sharing representations between tasks. There remains low-hanging fru still to be explored: in particular, our approach can be globally normalized with multiple hypothese in the intermediate structure. We also plan to push the limits of multi-task learning by combir ing many different NLP tasks, such as translation, summarization, tagging problems, and reasonin tasks, into a single model.\nTo apply deep learning models to structured prediction, machine learning practitioners must address. two primary issues: (1) how to represent the input, and (2) how to represent the output. The seq2seq. encoder/decoder framework (Kalchbrenner & Blunsom2013) Cho et al.2014)Sutskever et al. 2014) proposes solving these generically. In its simplest form, the encoder network produces a. fixed-length vector representation of an input, while the decoder network produces a linearization of the target output structure as a sequence of output symbols. Encoder/decoder is state of the art. 
for several key tasks in natural language processing, such as machine translation (Wu et al., 2016).

However, fixed-size encodings become less competitive when the input structure can be explicitly mapped to the output. In the simple case of predicting tags for individual tokens in a sentence, state-of-the-art taggers learn vector representations for each input token and predict output tags from those (Ling et al., 2015; Huang et al., 2015; Andor et al., 2016). When the input or output is a syntactic parse tree, networks that explicitly operate over the compositional structure of the network typically outperform generic representations (Dyer et al., 2015; Li et al., 2015; Bowman et al., 2016). Implicitly learned mappings via attention mechanisms can significantly improve the performance of sequence-to-sequence models (Bahdanau et al., 2015; Vinyals et al., 2015), but require runtime that is quadratic in the input size."}, {"section_index": "2", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Kuzman Ganchev, Michael Collins, Dipanjan Das, Slav Petrov, Aliaksei Severyn, Chris Dyer, and Noah Smith for their useful feedback and discussion while preparing this draft.

Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 2442-2452, 2016.

Giuseppe Attardi and Felice Dell'Orletta. Reverse revision and linear tree combination for dependency parsing. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pp. 261-264. Association for Computational Linguistics, 2009.

In this work, we propose a modular neural architecture that generalizes the encoder/decoder concept to include explicit structure. Our framework can represent sequence-to-sequence learning as well as models with explicit structure like bi-directional tagging models and compositional, tree-structured models. Our core idea is to define any given architecture as a series of modular units, where connections between modules are unfolded dynamically as a function of the intermediate activations produced by the network. These dynamic connections represent the explicit input and output structure produced by the network for a given task.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.

Model                                                          Dev UAS  Dev LAS  Test UAS  Test LAS
POS -> Right-to-left Parse                                     93.08    90.89    92.8      90.8
POS -> Right-to-left -> Left-to-right Parse -> Rev. Parse      94.01    91.93    93.72     91.83
(Above, but with pretrained word2vec)*                         94.07    92.06    94.09     92.12
Bi-LSTM, graph-based (Kiperwasser & Goldberg, 2016)            -        -        93.10     91.00
Stack LSTM (Dyer et al., 2015)                                 -        -        93.10     90.90
20 Stack LSTMs (Kuncoro et al., 2016)*                         -        -        94.51     92.57
Globally normalized, transition-based (Andor et al., 2016)*    -        -        94.61     92.79

{chrisalberti, andor, bogatyy, djweiss}@google.com"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "We build on the idea of transition systems from the parsing literature (Nivre, 2006), which linearize structured outputs as a sequence of (state, decision) pairs.
Transition-based neural networks have recently been applied to a wide variety of NLP problems: Dyer et al. (2015); Lample et al. (2016); Kiperwasser & Goldberg (2016); Zhang et al. (2016); Andor et al. (2016), among others. We generalize these approaches with a new basic module, the Transition-Based Recurrent Unit (TBRU), which produces a vector representation for every transition state in the output linearization (Figure 1). These representations also serve as the encoding of the explicit structure defined by the states. For example, a TBRU that attaches two sub-trees while building a syntactic parse tree will also produce the hidden layer activations to serve as an encoding for the newly constructed phrase. Multiple TBRUs can be connected and learned jointly to add explicit structure to multi-task learning setups and share representations between tasks with different input or output spaces (Figure 2).

Samuel R Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D Manning, and Christopher Potts. A fast unified model for parsing and sentence understanding. ACL, 2016.

(Figure 1 appears here; see the caption below.)

Figure 1: High level schematic of a Transition-Based Recurrent Unit (TBRU), and common network architectures that can be implemented with multiple TBRUs: Bi-LSTM tagging (3 TBRUs), Stack-LSTM (2 TBRUs), and encoder/decoder (2 TBRUs). The discrete state is used to compute recurrences and fixed input embeddings, which are then fed through a network cell. The network predicts an action which is used to update the discrete state (dashed output) and provides activations that can be consumed through recurrences (solid output). Note that we present a slightly simplified version of Stack-LSTM (Dyer et al., 2015) for clarity.

Katja Filippova and Yasemin Altun. Overcoming the lack of parallel data in sentence compression. In EMNLP, pp. 1481-1491. Citeseer, 2013.

Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. EMNLP, 2013.

Eliyahu Kiperwasser and Yoav Goldberg. Simple and accurate dependency parsing using bidirectional lstm feature representations. ACL, 2016.

This inference procedure will construct an acyclic compute graph representing the network architecture, where recurrent connections are dynamically added as the network unfolds. We therefore call our approach Dynamic Recurrent Acyclic Graphical Neural Networks, or DRAGNN.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. Neural architectures for named entity recognition. NAACL-HLT, 2016.

DRAGNN has several distinct modeling advantages over traditional fixed neural architectures. Unlike generic seq2seq, DRAGNN supports variable sized input representations that may contain explicit structure. Unlike purely sequential RNNs, the dynamic connections in a DRAGNN can span arbitrary distances in the input space. Crucially, inference remains linear in the size of the input, in contrast to quadratic-time attention mechanisms. Dynamic connections thus establish a compromise between pure seq2seq and pure attention architectures by providing a finite set of long-range inputs that 'attend' to relevant portions of the input space. Unlike recursive neural networks (Socher et al., 2010; 2011), DRAGNN can both predict intermediate structures (such as parse trees) and utilize those structures in a single deep model, backpropagating downstream task errors through the
intermediate structures. Compared to models such as Stack-LSTM (Dyer et al., 2015) and SPINN (Bowman et al., 2016), TBRUs are a more general formulation that allows incorporating dynamically structured multi-task learning (Zhang & Weiss, 2016) and more varied network architectures.

Jiwei Li, Minh-Thang Luong, Dan Jurafsky, and Eduard Hovy. When are tree structures necessary for deep learning of representations? EMNLP, 2015.

Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. ICLR, 2016.

Joakim Nivre. Inductive dependency parsing. Springer, 2006.

In sum, DRAGNN is not a particular neural architecture, but rather a formulation for describing neural architectures compactly. The key to this compact description is a new recurrent unit, the TBRU, which allows connections between nodes in an unrolled compute graph to be specified dynamically in a generic fashion. We utilize transition systems to provide succinct, discrete representations via linearizations of both the input and the output for structured prediction. We provide a straightforward way of re-using representations across NLP tasks that operate on different structures.

Ankur P Parikh, Oscar Tackstrom, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. EMNLP, 2016.

Richard Socher, Christopher D. Manning, and Andrew Y. Ng. Learning continuous phrase representations and syntactic parsing with recursive neural networks. In NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop, 2010.

We demonstrate the effectiveness of DRAGNN on two NLP tasks that benefit from explicit structure: dependency parsing and extractive sentence summarization (Filippova & Altun, 2013). First, we show how to use TBRUs to incrementally add structure to the input and output of a "vanilla" seq2seq dependency parsing model, dramatically boosting accuracy over seq2seq with no additional computational cost. Second, we demonstrate how the same TBRUs can be used to provide structured intermediate syntactic representations for extractive sentence summarization. This yields better accuracy than is possible with the generic multi-task seq2seq (Dong et al., 2015; Luong et al., 2016) approach. Finally, we show how multiple TBRUs for the same dependency parsing task can be stacked together to produce a single state-of-the-art dependency parsing model.

Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. ACL, 2015.

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. EMNLP, 2014.

Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on Natural Language Processing, pp. 1723-1732, 2015.

Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. Transition-based dependency parsing with stack long short-term memory. pp. 334-343, 2015.
Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A Smith. Distilling an ensemble of greedy dependency parsers into one mst parser. EMNLP, 2016.

Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pp. 801-809, 2011.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

(Figure 2 appears here; see the caption below.)

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Yuan Zhang and David Weiss. Stack-propagation: Improved representation learning for syntax. In Proc. ACL, 2016.

Figure 2: Using TBRUs to share fine-grained, structured representations. Top left: A high level view of multi-task learning with DRAGNN in the style of multi-task seq2seq (Luong et al., 2016). Bottom left: Extending the "stack-propagation" (Zhang & Weiss, 2016) idea to include dependency parse trees as intermediate representations. Right: Unrolled TBRUs for each setup for an input fragment "Uniformed man laughed", utilizing the transition systems described in Section 4.

We use transition systems to map inputs x into a sequence of output symbols, d1 ... dn. For the purposes of implementing DRAGNN, transition systems make explicit two desirable properties. First, we stipulate that the output symbols represent modifications of a persistent, discrete state, which makes book-keeping to construct the dynamic recurrent connections easier to express. Second, transition systems make it easy to enforce arbitrary constraints on the output, e.g. the output should produce a valid tree.

Formally, we use the same setup as Andor et al. (2016), and define a transition system T = {S, A, t} as:

- A set of states S(x).
- A special start state s† ∈ S(x).
- A set of allowed decisions A(s, x) for all s ∈ S.
- A transition function t(s, d, x) returning a new state s' for any decision d ∈ A(s, x).

For brevity, we will drop the dependence on x in the functions given above.
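To make the interface concrete, the following is a minimal Python sketch (ours, not the authors' implementation) of the transition system T = {S, A, t} defined above, instantiated for the tagger transition system used later; all class and function names are illustrative.

class TaggerTransitionSystem:
    def __init__(self, tags):
        self.tags = tags                      # decision inventory

    def start_state(self, x):
        return []                             # s-dagger: no decisions taken yet

    def allowed(self, state, x):
        return list(range(len(self.tags)))    # A(s, x): any tag is legal

    def transition(self, state, decision, x):
        return state + [decision]             # t(s, d, x): append the decision

# Linearizing a gold structure yields the (state, decision) pairs used for training:
def linearize(system, x, gold_decisions):
    state, pairs = system.start_state(x), []
    for d in gold_decisions:
        assert d in system.allowed(state, x)
        pairs.append((state, d))
        state = system.transition(state, d, x)
    return pairs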
Throughout this work we will use transition systems in which all complete structures for the same input x have the same number of decisions n(x) (or n for brevity), although this is not necessary.

A complete structure is then a sequence of decision/state pairs (s1, d1) ... (sn, dn) such that s1 = s†, d_i ∈ A(s_i) for i = 1 ... n, and s_{i+1} = t(s_i, d_i). We will now define recurrent network architectures that operate over these linearizations of input and output structure.

We now formally define how to combine transition systems with recurrent networks into what we call a transition-based recurrent unit (TBRU). A TBRU consists of the following:

- A transition system T.
- An input function m(s) that maps states to fixed-size vector representations, for example an embedding lookup operation for features from the discrete state: m(s) : S → R^K.
- A recurrence function r(s) that maps states to a set of previous time steps: r(s) : S → P{1, ..., i − 1}, where P is the power set. Note that in general |r(s)| is not necessarily fixed and can vary with s. We use r to specify state-dependent recurrent links in the unrolled computation graph.
- An RNN cell that computes a new hidden representation from the fixed and recurrent inputs: h_s ← RNN(m(s), {h_i | i ∈ r(s)}).

Inference with TBRUs. Given the above, inference in the TBRU proceeds as follows:

1. Initialize s1 = s†.
2. For i = 1, ..., n:
   (a) Update the hidden state: h_i ← RNN(m(s_i), {h_j | j ∈ r(s_i)});
   (b) Update the transition state: d_i ← argmax_{d ∈ A(s_i)} w_d^T h_i, and s_{i+1} ← t(s_i, d_i).

A schematic overview of a single TBRU is presented in Figure 3. By adjusting RNN, r, and T, TBRUs can represent a wide variety of neural architectures.

(Figure 3 appears here.)

Figure 3: Left: TBRU schematic. Right: Dependency parsing example. For the given gold dependency parse tree, an arc-standard transition state with two sub-trees on the stack is shown. From this state, two possible actions are also shown (Shift and Right arc). To reproduce the tree, the Shift action should be taken.

Example 1. Sequential tagging RNN. Let the input x = {x1, ..., xn} be a sequence of word embeddings, and the output be a sequence of tags d1, ..., dn. Then we can model a simple LSTM tagger as follows:

- T sequentially tags each input token, where s_i = {d1, ..., d_{i−1}}, and A is the set of p tags. We call this the tagger transition system.
- m(s_i) = x_i, the word embedding for the next token to be tagged.
- r(s_i) = {i − 1} to connect the network to the previous state.
- RNN is a single instance of the LSTM cell.
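As an illustration of the "Inference with TBRUs" procedure above, here is a minimal Python sketch (not the released DRAGNN code); `system`, `m`, `r`, `rnn_cell`, and `decision_weights` are assumed components corresponding to T, m(s), r(s), the RNN cell, and the decision weights w_d, and hidden vectors are assumed to be NumPy-style arrays.

def tbru_inference(system, x, n, m, r, rnn_cell, decision_weights):
    state = system.start_state(x)
    hidden = []                                    # h_1 ... h_i computed so far
    decisions = []
    for i in range(n):
        recurrent = [hidden[j] for j in r(state)]  # state-dependent recurrent links
        h = rnn_cell(m(state), recurrent)          # h_i <- RNN(m(s_i), {h_j : j in r(s_i)})
        scores = decision_weights @ h              # w_d^T h_i for every decision d
        allowed = system.allowed(state, x)
        d = max(allowed, key=lambda a: scores[a])  # argmax over A(s_i)
        hidden.append(h)
        decisions.append(d)
        state = system.transition(state, d, x)     # s_{i+1} <- t(s_i, d_i)
    return decisions, hidden

Plugging in the tagger transition system from the previous sketch, with an LSTM cell for `rnn_cell`, recovers the sequential tagging RNN of Example 1.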
Example 2. Feed-forward dependency parser. A feed-forward arc-standard dependency parser can likewise be written as a TBRU:

- T is the arc-standard transition system (Figure 3), so the state contains all partially built trees on the stack as well as unseen words on the buffer.
- m(s_i) is the concatenation of 52 feature embeddings extracted from tokens based on their positions in the stack and the buffer.
- r(s_i) = {} is empty, as this is a feed-forward network.
- RNN is a feed-forward multi-layer perceptron (MLP).

While TBRUs are a useful abstraction for describing recurrent models, the primary motivation for this framework is to allow new architectures by combining representations across tasks and compositional structures. We do this by connecting multiple TBRUs with different transition systems via the recurrence function r(s). We formally augment the above definition as follows:

1. We execute a list of T TBRU components, one at a time, so that each TBRU advances a global step counter. Note that for simplicity, we assume an earlier TBRU finishes all of its steps before the next one starts execution.
2. Each transition state from the τ'th component has access to the terminal states from every prior transition system, and the recurrence function r(s) for any given component can pull hidden activations from every prior one as well.

Example 3. "Input" transducer TBRUs via no-op decisions. We find it useful to define TBRUs even when the transition system decisions don't correspond to any output. These TBRUs, which we call no-op TBRUs, transduce the input according to some linearization. The simplest is the shift-only transition system, in which the state is just an input pointer s_i = {i}, and there is only one transition which advances it: t(s_i, ·) = {i + 1}. Executing this transition system will produce a hidden representation h_i for every input token.

Example 4. Encoder/decoder networks with TBRUs. We can reproduce the encoder/decoder framework for sequence tagging by using two TBRUs: one using the shift-only transition system to encode the input, and the other using the tagger transition system. For input x = {x1, ..., xn}, we connect them as follows:

- For the shift-only TBRU: m(s_i) = x_i, r(s_i) = {i − 1}.
- For the tagger TBRU: m(s_{n+i}) = y_{d_{n+i−1}}, r(s_{n+i}) = {n, n + i − 1}.

We observe that the tagger TBRU starts at step n after the shift-only TBRU finishes, that y_j is a fixed embedding vector for the output tag j, and that the tagger TBRU has access to both the final encoding vector h_n as well as its own previous time step h_{n+i−1}.

Example 4. Bi-directional LSTM tagger. With three TBRUs, we can implement a simple bi-directional tagger. The first two run the shift-only transition system, but in opposite directions. The final TBRU runs the tagger transition system and concatenates the two representations:

- Left to right: T = shift-only, m(s_i) = x_i, r(s_i) = {i − 1}.
- Right to left: T = shift-only, m(s_{n+i}) = x_{n−i}, r(s_{n+i}) = {n + i − 1}.
- Tagger: T = tagger, m(s_{2n+i}) = {}, r(s_{2n+i}) = {i, 2n − i}.

We observe that the network cell in the tagger TBRU takes recurrences only from the bi-directional representations, and so is not recurrent in the traditional sense. See Figure 1 for an unrolled example.

Example 5. Multi-task bi-directional tagging. Here we observe that it's possible to add additional annotation tasks to the bi-directional TBRU stack from Example 4 simply by adding more instances of the tagger TBRUs that produce outputs from different tag sets, e.g. parts-of-speech vs. morphological tags. Most important, however, is that any additional TBRUs have access to all three earlier TBRUs. This means that we can support the "stack-propagation" (Zhang & Weiss, 2016) style of multi-task learning simply by changing r for the last TBRU:

- Traditional multi-task: r(s_{3n+i}) = {i, 2n − i}
- Stack-prop: r(s_{3n+i}) = {i, 2n − i, 2n + i}, drawing from the left-to-right, right-to-left, and tagger TBRUs respectively.

Remark: the raison d'être of DRAGNN. This example highlights the primary advantage of our formulation: a TBRU can serve as both an encoder for downstream tasks and as a decoder for its own task simultaneously. This idea will prove particularly powerful when we consider syntactic parsing, which involves compositional structure over the input. For example, consider a no-op TBRU that traverses an input sequence x1, ..., xn in the order determined by a binary parse tree:
this transducer can implement a recursive tree-structured network in the style of Tai et al. (2015), which computes representations for sub-phrases in the tree. In contrast, with DRAGNN, we can use the arc-standard parser directly to produce the parse tree as well as encode sub-phrases into representations.

(Figure 4 appears here.)

Figure 4: Detailed schematic for the compositional dependency parser used in our experiments. The first TBRU consumes each input word right-to-left; the second uses the arc-standard transition system. Note that each "Shift" action causes the TBRU1 -> TBRU2 link to advance. The dynamic recurrent inputs to the given state are highlighted; the stack representations are obtained from the last "Reduce" action to modify each sub-tree.

Example 7. Extractive summarization pipeline with parse representations. To model extractive summarization, we follow Andor et al. (2016) and use a tagger transition system with two tags: "Keep" and "Drop." However, whereas Andor et al. (2016) use discrete features of the parse tree, we can utilize the SUBTREE recurrence function to pull compositional, phrase-based representations of tokens as constructed by the dependency parser. This model is outlined in Figure 2. A full specification is given in the Appendix."}, {"section_index": "4", "section_name": "3.2 HOW TO TRAIN A DRAGNN", "section_text": "Given a list of TBRUs, we propose the following learning procedure. We assume training data consists of examples x along with gold decision sequences for one of the TBRUs in the DRAGNN.

1 This composition function is similar to that in the constituent parsing SPINN model (Bowman et al., 2016), but with several key differences. Since we use TBRUs, we compose new representations for "Shift" actions as well as reductions, we take inputs from other recurrent models, and we can utilize subtree representations in downstream tasks.

Example 6. Compositional representations from arc-standard dependency parsing.
We use the arc-standard transition system (Nivre, 2006) to model dependency trees. The system maintains two data structures as part of the state s: an input pointer and a stack (Figure 3). Trees are built bottom up via three possible attachment decisions. Assume that the stack consists of S = {A, B}, with the next token being C. We use S0 and S1 to refer to the top two tokens on the stack. Then the decisions are defined as:

- Shift: Push the next token on to the stack: S = {A, B, C}, and advance the input pointer.
- Left arc + label: Add an arc A ←(label) B, and remove A from the stack: S = {B}.
- Right arc + label: Add an arc A →(label) B, and remove B from the stack: S = {A}.

A code sketch of this transition system is given at the end of this section.

For a given parser state s_i, we compute two types of recurrences:

- r_INPUT(s_i) = {INPUT(s_i)}, where INPUT returns the index of the next input token.
- r_STACK(s_i) = {SUBTREE(s_i, S0), SUBTREE(s_i, S1)}, where SUBTREE(s, i) is a function returning the index of the last decision that modified the i'th token.

We show an example of the links constructed by these recurrences in Figure 4, and we investigate variants of this model in Section 4. This model is recursively compositional according to the decisions taken by the network: when the TBRU at step s_i decides to add an arc A → B, the activations h_i will be used to represent that new subtree in future decisions.(1)

Parsing TBRU recurrence, r(s_i) ⊆ {1, ..., n + i}           Parsing Accuracy (%)
Input links    Recurrent edges                               News   Questions   Runtime
{n}            {n + i − 1}                                   27.3   70.1        O(n)
{n}            {SUBTREE(s_i, S0), SUBTREE(s_i, S1)}          36.0   75.6        O(n)
Attention      {n + i − 1}                                   76.1   84.8        O(n^2)
Attention      {SUBTREE(s_i, S0), SUBTREE(s_i, S1)}          89.0   91.9        O(n^2)
INPUT(s_i)     {n + i − 1}                                   87.1   89.7        O(n)
INPUT(s_i)     {SUBTREE(s_i, S0), SUBTREE(s_i, S1)}          90.9   92.1        O(n)

Table 1: Dynamic links enable much more accurate, efficient linear-time parsing models on the Treebank Union dev set. We vary the recurrences r to explore utilizing explicit structure in the parsing TBRU. Utilizing the explicit INPUT(s_i) pointer is more effective and more efficient than a quadratic attention mechanism. Incorporating the explicit stack structure via recurrent links further improves performance.

Note that, at a minimum, we need such data for the final TBRU. Assuming given decisions d1 ... dN from prior components 1 ... T − 1, we define a log-likelihood objective to train the T'th TBRU along its gold decision sequence d_{N+1}, ..., d_{N+n}, conditioned on prior decisions:

L(x, d_{N+1:N+n}; θ) = Σ_{i=1}^{n} log P(d_{N+i} | d_{1:N}, d_{N+1:N+i−1}; θ)    (1)

where θ are the combined parameters across all TBRUs. We observe that this objective is locally normalized (Andor et al., 2016), since we optimize the probabilities of the individual decisions in the gold sequence.

The remaining question is where the decisions d1 ... dN come from. There are two options: they can either come as part of the gold annotation (e.g. if we have joint tagging and parsing data), or they will be predicted by unrolling the previous components (e.g. when training the stacked extractive summarization model, the parse trees will be predicted by the previously trained parser TBRU).

When training a given TBRU, we unroll an entire input sequence and then use backpropagation through structure (Goller & Kuchler, 1996) to optimize (1). To train the whole system on a set of C datasets, we use a similar strategy to Dong et al. (2015) and Luong et al. (2016); we sample a target task c, 1 ≤ c ≤ C, from a pre-defined ratio, and take a stochastic optimization step on the objective of that task's TBRU. In practice, task sampling is usually preceded by a deterministic number of pre-training steps, allowing, for example, to schedule a certain number of tagger training steps before running any parser training steps."}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we evaluate three aspects of our approach on two NLP tasks: English dependency parsing and extractive sentence summarization. For English dependency parsing, we primarily use the Union Treebank setup from Andor et al. (2016).
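The following is a minimal Python sketch (ours, not the authors' code) of the arc-standard transition system described above; tokens are integer indices, each decision manipulates the stack/buffer state, and arc labels are omitted for brevity.

class ArcStandardState:
    def __init__(self, n_tokens):
        self.stack = []
        self.buffer = list(range(n_tokens))   # input pointer over remaining tokens
        self.arcs = []                        # (head, modifier) pairs

def allowed(state):
    acts = []
    if state.buffer:
        acts.append("shift")
    if len(state.stack) >= 2:
        acts += ["left_arc", "right_arc"]
    return acts

def transition(state, action):
    if action == "shift":                     # push next token, advance the pointer
        state.stack.append(state.buffer.pop(0))
    elif action == "left_arc":                # arc S1 <- S0: S0 heads S1, pop S1
        s0, s1 = state.stack[-1], state.stack[-2]
        state.arcs.append((s0, s1))
        del state.stack[-2]
    elif action == "right_arc":               # arc S1 -> S0: S1 heads S0, pop S0
        s0, s1 = state.stack[-1], state.stack[-2]
        state.arcs.append((s1, s0))
        state.stack.pop()
    return state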
By evaluating on both news and questions domains, we can separately evaluate how the model handles naturally longer and shorter form text. On the Union Treebank setup there are 93 possible actions considering all arc-label combinations. For extractive sentence summarization, we use the dataset of Filippova & Altun (2013), where a large news collection is used to heuristically generate compression instances. The final corpus contains about 2.3M compression instances, but since we evaluated multiple tasks using this data, we sub-sampled the training set to be comparably sized to the parsing data (~60K training sentences). The test set contains 160K examples. We implement our method in TensorFlow, using mini-batches of size 4 and following the averaged momentum training and hyperparameter tuning procedure of Weiss et al. (2015).

We explore the impact of different types of recurrences on dependency parsing in Table 1. In this setup, we used relatively small models: single-layer LSTMs with 256 hidden units, taking 32-dimensional word or output symbol embeddings as input to each cell. In each case, the parsing TBRU takes input from a right-to-left shift-only TBRU. Under these settings, the pure encoder/decoder seq2seq model simply does not have the capacity to parse newswire text with any degree of accuracy, but the TBRU-based approach is nearly state-of-the-art at the same exact computational cost. As a point of comparison and an alternative to using input pointers, we also implemented an attention mechanism within DRAGNN. We used the dot-product formulation from Parikh et al. (2016), where r(s_i) in the parser takes in all of the shift-only TBRU's hidden states and RNN aggregates over them.

Model structure                                Multi-task?             A (%)   F1 (%)   LAS (%)
Right-to-left -> Summarize                     N                       28.93   79.75    -
Right-to-left -> Left-to-right -> Summarize    N                       29.51   80.03    -
Right-to-left -> Parse -> Summarize            Luong et al. (2016)     30.07   80.31    89.42
Right-to-left -> Parse -> Summarize            Zhang & Weiss (2016)    30.56   80.74    89.13

Table 2: Single- vs. multi-task learning with DRAGNN on extractive summarization. "A" is full-sentence accuracy of the extraction model, "F1" is per-token F1 score, and "LAS" is labeled parsing accuracy on the Treebank Union News dev set. Both multi-task models that utilize the parsing data outperform the single-task approach, but the model that uses parses as an intermediate representation in the vein of Zhang & Weiss (2016) (Figure 2) makes better use of the data. Note that the locally normalized model from Andor et al. (2016) obtains 30.50% accuracy and 78.72% F1 on the test set when trained on 100x more data.

We evaluate our approach on the summarization task in Table 2. We compare two single-task LSTM tagging baselines against two multi-task approaches: an adaptation of Luong et al. (2016) and the stack-propagation idea of Zhang & Weiss (2016).
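As a concrete illustration of the locally normalized objective in Equation (1), here is a minimal NumPy sketch (not the paper's TensorFlow implementation); `logits_fn` is an assumed callable that returns the unnormalized decision scores for step i given the unrolled history.

import numpy as np

def tbru_log_likelihood(logits_fn, gold_decisions):
    total = 0.0
    for i, gold in enumerate(gold_decisions):
        logits = logits_fn(i)                              # scores over allowed decisions at step i
        log_probs = logits - np.logaddexp.reduce(logits)   # log softmax over A(s_i)
        total += log_probs[gold]                           # log P(d_{N+i} | history)
    return total                                           # maximize this (or minimize its negation)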
In both multi-task setups, we use a right-to-left shift-only TBRU to encode the input, and connect it to both our compositional arc-standard dependency parser and the "Keep/Drop" summarization tagging model.

In both setups we do not follow seq2seq, but utilize the INPUT function to connect output decisions directly to input token representations. However, in the stack-prop case, we use the SUBTREE function to connect the tagging TBRU to the parser TBRU's phrase representations directly (Figure 2). We find that allowing the compressor to directly use the parser's phrase representations significantly improves the outcome of the multi-task learning setup. In both setups, we pretrained the parsing model for 400K steps and tuned the subsequent ratio of parser/tagger update steps using a development set."}, {"section_index": "6", "section_name": "4.3 DEEP STACKED BI-DIRECTIONAL PARSING", "section_text": "Here we propose a continuous version of the bi-directional parsing model of Attardi & Dell'Orletta (2009): first, the sentence is parsed in the left-to-right order as usual; then a right-to-left transition system analyzes the sentence in reverse order using additional features extracted from the left-to-right parser. In our version, we connect the right-to-left parsing TBRU directly to the phrase representations of the left-to-right parsing TBRU, again using the SUBTREE function. Our parser has the significant advantage that the two directions of parsing can affect each other during training. During each training step the right-to-left parser uses representations obtained using the predictions of the left-to-right parser. Thus, the right-to-left parser can backpropagate error signals through the left-to-right parser and reduce cascading errors caused by the pipeline.

Zhang & Weiss (2016)"}]
HJIY0E9ge
[{"section_index": "0", "section_name": "A SIMPLE YET EFFECTIVE METHOD TO PRUNE DENSE LAYERS OF NEURAL NETWORKS", "section_text": "{mb2,paris,rhc}@illinois.edu.edu\nFigure 6: Activation of neurons in a pruned Lenet-5 with only 3 neurons left in the dense layer. The x-axis has been populated by 100 random samples from each class of MNIST, sorted by class y-axis shows the neuron ID. Note that tanh is the activation function in the hid den layer.\nNeural networks are usually over-parameterized with significant redundancy in the number of required neurons which results in unnecessary computation and memory usage at inference time. One common approach to address this issue is to prune these big networks by removing extra neurons and parameters while maintaining the accuracy. In this paper, we propose NoiseOut, a fully automated pruning algorithm based on the correlation between activations of neurons in the hidden layers. We prove that adding additional output neurons with entirely random targets results into a higher correlation between neurons which makes pruning by NoiseOut even more efficient. Finally, we test our method on various networks and datasets. These experiments exhibit high pruning rates while maintaining the accuracy of the original network."}, {"section_index": "1", "section_name": "5 CONCLUSION", "section_text": "In this paper, we have presented NoiseOut, a simple but effective pruning method to reduce the number of parameters in the dense layers of neural networks by removing neurons with correlated activation during training. We showed how adding noise outputs to the network could increase the correlation between neurons in the hidden layer and hence result to more efficient pruning. The experimental results on different networks and various datasets validate this approach, achieving significant compression rates without loss of accuracy."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Neural networks and deep learning recently achieved state-of-the-art solutions to many problems ir computer vision (Krizhevsky et al.(2012); He et al.(2015)), speech recognition (Graves et al. (2013)) natural language processing (Mikolov et al.(2013)) and reinforcement learning (Silver et al.(2016)) Using large and oversized networks in these tasks is a common practice. Such oversized networks can easily overfit on the training dataset while having poor generalization on the testing data (Sabc & Yu (2008)). A rule of thumb for obtaining useful generalization is to use the smallest number of parameters that can fit the training data (Reed (1993). Unfortunately, this optimal size is noi usually obvious and therefore the size of the neural networks is determined by a few rules-of-thumb (Heaton (2008)) which do not guarantee an optimal size for a given problem. One common approach to overcome overfitting is to choose an over-sized network and then apply regularization (Ng(2004) and Dropout (Srivastava et al.[(2014). However, these techniques do not reduce the number o1 parameters and therefore do not resolve the high demand of resources at test time."}, {"section_index": "3", "section_name": "REFERENCES", "section_text": "M Gethsiyal Augasta and T Kathirvalavakumar. Pruning algorithms of neural networks-a comparative study Central European Journal of Computer Science, 3(3):105-115. 2013\nAnother method is to start with an oversized network and then use pruning algorithms to remove. 
redundant parameters while maintaining the network's accuracy (Augasta & Kathirvalavakumar (2013)). These methods need to estimate the upper-bound size of a network, a task for which there are adequate estimation methods (Xing & Hu (2009)). If the size of a neural network is bigger than what is necessary, in theory, it should be possible to remove some of the extra neurons without affecting its accuracy. To achieve this goal, the pruning algorithm should find neurons which, once removed, result in no additional prediction errors. However, this may not be as easy as it sounds, since all the neurons contribute to the final prediction and removing them usually leads to error.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135-1143, 2015.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2016.

Stephen Jose Hanson and Lorien Y Pratt. Comparing biases for minimal network construction with back-propagation. In Advances in neural information processing systems, pp. 177-185, 1989.

It is easy to demonstrate this problem by fitting an oversized network on a toy dataset. Figure 1 shows a two-dimensional toy dataset which contains two linearly separable classes. Hence, only one hidden neuron in a two-layer perceptron should be enough to classify this data, and any network with more than one neuron (such as the network in Figure 2(a)) is an oversized network and can be pruned. However, there is no guarantee that removing one of the hidden neurons will maintain the network's performance. As shown in the example in Figure 1, removing either of the hidden neurons results in a more compact, but under-performing network. Therefore, a more complicated process is required for pruning neural networks without accuracy loss.

Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. Morgan Kaufmann, 1993.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Jeff Heaton. Introduction to neural networks with Java. Heaton Research, Inc., 2008.

(Figure 7 appears here; see the caption below.)"}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "Figure 7: Effect of Dropout and L2-regularization on NoiseOut. The y-axis represents the number of remaining neurons in the dense layer. Note that more than one neuron can be removed in each epoch. In each curve, the bold line is the median of 10 runs and the colored background demonstrates the standard deviation.

Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. arXiv preprint arXiv:1504.04788, 2015.

Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 6645-6649. IEEE, 2013.

(Figure 1 appears here; see the caption below.)

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Andrew Y Ng. Feature selection, l1 vs. l2 regularization, and rotational invariance. In Proceedings of the twenty-first international conference on Machine learning, pp. 78. ACM, 2004.

Russell Reed. Pruning algorithms - a survey. Neural Networks, IEEE Transactions on, 4(5):740-747, 1993.

Devin Sabo and Xiao-Hua Yu. A new pruning algorithm for neural network dimension analysis. In Neural Networks, 2008. IJCNN 2008. (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on, pp. 3313-3318. IEEE, 2008.

Figure 1: Effect of pruning on accuracy. The bold line represents the discriminator learned by a 2-2-1 MLP (Figure 2(a)) on a toy dataset. The dashed line and dotted line show the results after pruning one of the hidden neurons. As can be seen, removing a hidden neuron results in an accuracy drop.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Our goal in this paper is two-fold. First, we introduce NoiseOut, a pruning method based on the correlation between activations of the neurons. Second, we propose an approach which enforces a higher correlation between activations of neurons. Since the effectiveness of NoiseOut hinges on high correlations between neuron activations, the combination of these two methods facilitates more aggressive pruning.

Hong-Jie Xing and Bao-Gang Hu. Two-phase construction of multilayer perceptrons using information theory. Neural Networks, IEEE Transactions on, 20(4):715-721, 2009.

Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1476-1483, 2015."}, {"section_index": "5", "section_name": "2 RELATED WORK", "section_text": "Optimal Brain Damage (LeCun et al. (1989)) and Optimal Brain Surgeon (Hassibi & Stork (1993)) prune networks based on the Hessian of the loss function. It is shown that such pruning is more effective and more accurate than earlier magnitude-based pruning such as weight decay (Hanson & Pratt (1989)). However, the necessary second-order derivatives require additional computational resources.

In this section, we describe the details of the proposed method called NoiseOut. First, we show how this method can prune a single neuron and then how it can prune a full network, one neuron at a time.

(Figure 2 appears here; see the caption below.)

Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPS, volume 89, 1989.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

Figure 2: (a) a simple 2-2-1 MLP; (b) the same network with one additional noise output. All the hidden units have a linear activation while the output neurons use a sigmoid activation function. The gray neuron is a noise output, which changes its target in each iteration.
Recently, replacing the fully connected layers with other types of layers has been utilized to reduce the number of parameters in a neural network. Deep fried convnets (Yang et al. (2015)) replace these layers with kernel methods, while the GoogLeNet (Szegedy et al. (2015)) and Network in Network architectures (Lin et al. (2013)) replace them with global average pooling. Alternatively, Han et al. (2015) proposed a pruning method which learns the important connections between neurons, pruning the unimportant connections, and then retraining the remaining sparse network.

Besides pruning, other approaches have been proposed to reduce the computation and memory requirements of neural networks. HashNets (Chen et al. (2015)) reduce the storage requirement of neural networks by randomly grouping weights into hash buckets. These techniques can be combined with pruning algorithms to achieve even better performance. As an example, Deep Compression (Han et al. (2016)) proposed a three-stage pipeline: pruning, trained quantization, and Huffman coding, to reduce the storage requirement of neural networks.

The key idea in NoiseOut is to remove one of two neurons with strongly correlated activations. The main rationale behind this pruning is to keep the signals inside the network as close to the original network as possible. To demonstrate this, assume there exist u, v, l such that |ρ(h_u^(l), h_v^(l))| = 1, which implies h_u^(l) = α h_v^(l) + β, where h_i^(l) is the activation of the ith neuron in the lth layer. By definition:

h_k^(l+1) = Σ_i w_{i,k}^(l) h_i^(l)
          = w_{u,k}^(l) h_u^(l) + w_{v,k}^(l) h_v^(l) + Σ_{i ≠ u,v} w_{i,k}^(l) h_i^(l)
          = (α w_{u,k}^(l) + w_{v,k}^(l)) h_v^(l) + β w_{u,k}^(l) + Σ_{i ≠ u,v} w_{i,k}^(l) h_i^(l)

This means that neuron u can be removed without affecting any of the neurons in the next layer, simply by adjusting the weights of v and the neurons' biases. Note that max_{u,v} |ρ(h_u^(l), h_v^(l))| = 1 is an ideal case. In this ideal scenario, removing one of the neurons results in no change in accuracy, since the final output of the network stays the same. In non-ideal cases, when the most correlated neurons are not strongly correlated, merging them into one neuron may alter the accuracy. However, continuing the training after the merge may compensate for this loss. If this does not happen, it means that the removed neuron was necessary to achieve the target accuracy, and the algorithm cannot compress the network any further without accuracy degradation.

NoiseOut follows the same logic to prune a single neuron, using the following steps (a runnable sketch of this merge step is given at the end of this subsection):

1. For each u, v, l, calculate the correlation ρ(h_u^(l), h_v^(l)).
2. Find u, v, l = argmax_{u,v,l, u≠v} |ρ(h_u^(l), h_v^(l))|.
3. Calculate α, β := argmin_{α,β} Σ (h_u^(l) − (α h_v^(l) + β))².
4. Remove neuron u in layer l.
5. For each neuron k in layer l + 1:
   - Update the weight: w_{v,k}^(l) ← w_{v,k}^(l) + α w_{u,k}^(l).
   - Update the bias: b_k^(l+1) ← b_k^(l+1) + β w_{u,k}^(l).

The key element for successful pruning of neural networks using NoiseOut is a strong correlation between the activations of the neurons. Essentially, a higher correlation between these activations means more efficient pruning. However, there is no guarantee that back-propagation results in correlated activations in a hidden layer. In this section, we propose a method to encourage higher correlation by adding additional output nodes, called noise outputs. The targets of the noise outputs change randomly in each iteration based on a predefined random distribution. We show that adding noise outputs to the output layer intensifies the correlation between activations in the hidden layers, which subsequently makes the pruning task more effective."}, {"section_index": "6", "section_name": "3.2.1 EFFECT OF ADDING NOISE OUTPUTS TO THE NETWORK", "section_text": "To demonstrate the effect of adding noise outputs to the network, let us reconsider the toy example described previously in Figure 1, this time with some additional noise outputs in the output layer, as shown in Figure 2(b). The result of training this network is shown in Figure 3. As seen in this figure, the activations of the two hidden neurons have become highly correlated, and each hidden unit converged to the optimal discriminant by itself.
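The following is a minimal NumPy sketch (ours, not the authors' code) of the single-neuron pruning step enumerated above: find the most correlated pair of hidden neurons, fit h_u ≈ α h_v + β by least squares, then fold neuron u into neuron v. Here `H` holds hidden activations (samples × neurons) for one layer, and `W_out` and `b_out` are the layer's outgoing weights and the next layer's biases; all names are illustrative.

import numpy as np

def prune_one_neuron(H, W_out, b_out):
    corr = np.corrcoef(H, rowvar=False)            # pairwise rho over neurons
    np.fill_diagonal(corr, 0.0)                    # ignore self-correlation
    u, v = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # least-squares fit h_u ~ alpha * h_v + beta
    A = np.stack([H[:, v], np.ones(len(H))], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(A, H[:, u], rcond=None)
    # fold u's outgoing weights into v and into the next layer's biases
    W_out[v, :] += alpha * W_out[u, :]
    b_out += beta * W_out[u, :]
    W_out = np.delete(W_out, u, axis=0)            # remove neuron u
    H = np.delete(H, u, axis=1)
    return H, W_out, b_out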
This means that either of the extra neurons in the hidden layer can be removed without loss of accuracy.

(Figure 3 appears here; see the caption below.)

This claim can be proved formally as well. The key to this proof is that neural networks are deterministic at inference time. In other words, the network generates a constant output for a constant input. For a noise output, since the target is independent of the input, the training algorithm finds an optimal constant value that minimizes the expected error for every input. This objective can be presented as:

min_W C(f(X; W), [Y, Ŷ]) = min_W ( C(f_m(X; W), Y) + Σ_{i=0}^{m} C(f̂_i(X; W), Ŷ_i) )    (2)

where W is the adjustable parameters (i.e. weights), X is the input, Y is the target, Ŷ_i is the target of the noisy outputs at iteration i, C is the cost function, and m is the number of iterations. f_i(X; W) and f̂_i(X; W) are the outputs of the neural network at the original and noise outputs respectively, at iteration i. Note that Ŷ_i changes in each iteration according to a random distribution P_N.

The first part of Equation 2 represents the common objective of training a neural network, while the second part has been added because of the noise outputs. It is possible to adjust the effect of the noise outputs based on P_N, for instance by adding more noise outputs (in the case of a Binomial distribution) or by adjusting the variance (for a Gaussian distribution). Another way of making this adjustment is to introduce a new multiplier for the second part of the cost. Although Ŷ_i changes in each iteration, the constant value that the network infers for any given input will be the same, e.g. θ, due to the independence of P_N from X. Therefore:

f̂_m(X; W) = J_{1×n} θ

where J_{1×n} is the matrix of ones of size 1 × n (n is the number of samples in the dataset). The actual value of θ can be estimated for any given network architecture.

Figure 3: Comparison between discriminators learned with and without noise neurons. As can be seen, with noise neurons the activations of the hidden neurons are more correlated: a) discriminators defined by each hidden neuron in Figure 2(a); b) discriminators defined by each hidden neuron in Figure 2(b); c) final discriminator after pruning one hidden neuron in Figure 2(b).

As an example, let W^(1), W^(2) and Ŵ^(2) be the weights of the first hidden layer, the output layer and the noisy outputs in a 2-2-1 MLP network, respectively (Figure 2(b)).
Assuming linear activation in the hidden neurons and mean squared error (MSE) as the cost function, Equation 2 can be rewritten as:

min_W C = min_W C(f(X; W), [Y, Ŷ]) = min_{W^(1), W^(2), Ŵ^(2)} ( (f(X; W) − Y)² + (f̂(X; W) − θ)² )    (3)

In this particular case, θ can be calculated using the derivative of the cost function in Equation 3:

∂C/∂θ = ∂/∂θ (f̂(X; W) − θ)² ∝ (θ − f̂(X; W)) = 0  ⟹  θ = E[f̂(X; W)] = E[P_N]

This means that in this particular network with MSE as the cost function, the final error will be minimized when the network outputs the expected value of the targets of the noise outputs (E[P_N]) for any given input.

To demonstrate how outputting a constant value affects the weights of a network, let us consider the network in Figure 2(b). In this case, the output of the noisy output will be:

f̂(X; W) = X W^(1) Ŵ^(2) = ŵ_1^(2) h_1^(1) + ŵ_2^(2) h_2^(1) = J_{1×n} θ    (6)

Equation 6 means that the activations of the hidden neurons have a correlation of 1 or −1. For more than two neurons, it can be shown that the output of one neuron will be a linear combination of the outputs of the other neurons, which means the claim still holds.

The same results can be achieved empirically. Since the output of the noise outputs will converge to E[P_N], it seems that there may not be any difference between different random distributions with the same expected value. Therefore, we tested different random distributions for P_N with the same E[P_N] on the network shown in Figure 2(b). These noise distributions are as follows:

- Gaussian: P_N(x) = N(0.1, 0.4), a normal distribution with mean 0.1 and standard deviation 0.4. This noise distribution is appropriate for regression tasks with MSE cost.
- Binomial: P_N(x) = B(1, 0.1), a binomial distribution with 1 trial and success probability 0.1. We chose the binomial distribution since it generates random classification labels and is appropriate for networks that have to produce binary labels.
- Constant: P_N(x) = δ(x − 0.1). In this case, the target of the noise outputs is the constant value 0.1. This is used as an expected-value "shortcut" so that we can examine a stochastic vs. a deterministic approach.
- No_Noise: no noise output, for comparison.

(Figure 4 appears here.)

Figure 4: Effect of noise outputs on the correlation of neuron activations in the hidden layers. The top row shows the correlation of the two hidden neurons in the network of Figure 2, while the bottom row is the correlation between the two neurons in the first hidden layer of a 6-layer MLP (2-2-2-2-2-2-1). The left column shows the mean correlation during training of the network over 100 runs. In the right column, yellow is the distribution of correlations at the end of training, and the small red line shows the median. As can be seen in these graphs, adding noise outputs improves the correlation between neurons in the hidden layer.

As can be seen in the top row of Figure 4, in a regular network with no noise output (shown as No_Noise), the correlation between the outputs of the hidden neurons h_1^(1) and h_2^(1) does not go higher than 0.8, while in the presence of a noise output this value approaches one rather quickly. This means that the two hidden neurons are outputting just differently scaled versions of the same value for any given input. In this case, NoiseOut easily prunes one of the two neurons.

Algorithm 1: NoiseOut for pruning hidden layers in neural networks

1: procedure TRAIN(X, Y)                      ▷ X is input, Y is expected output
2:   W ← initialize_weights()
3:   for each iteration do
4:     Y_N ← generate_random_noise()          ▷ generate random expected values
5:     Y' ← concatenate(Y, Y_N)
6:     W ← back_prop(X, Y')
7:     while cost(W) ≤ threshold do
8:       A, B ← find_most_correlated_neurons(W, X)
9:       α, β ← estimate_parameters(W, X, A, B)
10:      W' ← remove_neuron(W, A)
11:      W' ← adjust_weights(W', B, α, β)
12:      W ← W'
13:  return W

The same technique can be applied to correlate the activations of the hidden neurons in networks with more than two layers. A minimal runnable sketch of Algorithm 1 is given below.
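This is a hypothetical end-to-end sketch (not the authors' implementation) of the training loop in Algorithm 1: noise targets are resampled from P_N each iteration, appended to the labels, and the most correlated hidden pair is merged while accuracy holds. The `model.train_step`, `model.accuracy`, `model.hidden_size`, and `model.merge_most_correlated_pair` helpers are assumed interfaces (the last one would apply the α/β folding shown in the earlier sketch).

import numpy as np

def sample_noise_targets(kind, batch_size, n_noise):
    if kind == "gaussian":                # P_N = N(0.1, 0.4)
        return np.random.normal(0.1, 0.4, (batch_size, n_noise))
    if kind == "binomial":                # P_N = B(1, 0.1)
        return np.random.binomial(1, 0.1, (batch_size, n_noise)).astype(float)
    if kind == "constant":                # P_N = delta(x - 0.1)
        return np.full((batch_size, n_noise), 0.1)
    raise ValueError(kind)

def noiseout_train(model, X, Y, n_noise=512, kind="gaussian",
                   epochs=100, acc_threshold=0.99):
    for _ in range(epochs):
        Y_noise = sample_noise_targets(kind, len(X), n_noise)
        Y_aug = np.concatenate([Y, Y_noise], axis=1)     # Y' = [Y, Y_N]
        model.train_step(X, Y_aug)                       # assumed backprop step
        # prune while the accuracy threshold is still respected
        while model.accuracy(X, Y) >= acc_threshold and model.hidden_size() > 1:
            if not model.merge_most_correlated_pair(X):  # alpha/beta weight folding
                break
    return model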
The bottom row of Figure 4 shows the correlation between the activations of two hidden neurons in the first layer of a six-layer MLP (2-2-2-2-2-2-1). As can be seen in this figure, adding noise outputs helped the neurons achieve a higher correlation compared to a network with no noise output. Binomial noise acts chaotically at the beginning, due to the sudden changes of the expected values at the noise outputs, while Gaussian noise improved the correlation the best in these experiments."}, {"section_index": "7", "section_name": "4 EXPERIMENTS", "section_text": "To illustrate the generality of our method, we test it on a core set of common network architectures, including fully connected networks and convolutional neural networks with dense layers. In all of these experiments, the only stop criterion is the accuracy decay of the model. We set the threshold for this criterion to match the original accuracy; therefore all the compressed networks have the same accuracy as the original network. For each experiment, different random distributions have been used for P_N to demonstrate the difference in practice.

Algorithm 1 shows the final NoiseOut algorithm. For the sake of readability, this algorithm has been shown for networks with only one hidden layer, but the same algorithm can be applied to networks with more than one hidden layer by performing the same pruning on all the hidden layers independently. It can also be applied to convolutional neural networks that use dense layers, in which we often see over 90% of the network parameters (Cheng et al. (2015)).

This algorithm simply repeats the process of removing a single neuron, as described in the previous section. The pruning ends when the accuracy of the network drops below some given threshold. Note that the pruning process happens while training.

Table 1: Results of pruning Lenet-300-100 on MNIST. The accuracy of all the compressed networks is the same as that of the original network.

Method        Noise Outputs   Layer 1 Neurons   Layer 2 Neurons   Parameters   Removed Parameters   Compression Rate
Ground Truth  -               300               100               266610       -                    -
No_Noise      -               23                14                15989        94.00%               16.67
Gaussian      512             20                9                 15927        94.02%               16.73
Constant      512             20                7                 15105        94.33%               17.65
Binomial      512             19                6                 11225        95.78%               23.75
No_Noise      -               13                12                10503        96.06%               25.38
Gaussian      1024            16                7                 12759        95.21%               20.89
Constant      1024            18                7                 14343        94.62%               18.58
Binomial      1024            19                7                 15135        94.32%               17.61

Table 2: Pruning Lenet-5 on MNIST. The accuracy of all the compressed networks is the same as that of the original network.

Method        Noise Outputs   Dense Layer Neurons   Parameters   Removed Parameters   Compression Rate
Ground Truth  -               512                   605546       -                    -
No_Noise      -               313                   374109       38.21%               1.61
Gaussian      512             3                     13579        97.75%               44.59
Constant      512             33                    48469        91.99%               12.49
Binomial      512             26                    40328        93.34%               15.01

Table 1 and Table 2 show the results of pruning Lenet-300-100 and Lenet-5 (LeCun et al. (1998)) on the MNIST dataset. Lenet-300-100 is a fully connected network with two hidden layers, with 300 and 100 neurons each, while Lenet-5 is a convolutional network with two convolutional layers and one dense layer.
"}, {"section_index": "7", "section_name": "4 EXPERIMENTS", "section_text": "To illustrate the generality of our method, we test it on a core set of common network architectures, including fully connected networks and convolutional neural networks with dense layers. In all of these experiments, the only stopping criterion is the accuracy decay of the model. We set the threshold for this criterion to match the original accuracy; therefore all the compressed networks have the same accuracy as the original network. For each experiment, different random distributions have been used for P_N to demonstrate the difference in practice.

Table 1 and Table 2 show the results of pruning Lenet-300-100 and Lenet-5 (LeCun et al. (1998)) on the MNIST dataset. Lenet-300-100 is a fully connected network with two hidden layers, with 300 and 100 neurons each, while Lenet-5 is a convolutional network with two convolutional layers and one dense layer. These networks achieve 3.05% and 0.95% error rates on MNIST respectively (LeCun et al. (1998)). Note that in Lenet-5 over 98% of the parameters are in the dense layer, and pruning them can decrease the model size significantly.

Table 1: Results of pruning Lenet-300-100 on MNIST. The accuracy of all the compressed networks is the same as that of the original network.

Method        Noise outputs  Layer 1 neurons  Layer 2 neurons  Parameters  Removed parameters  Compression rate
Ground Truth  -              300              100              266610      -                   -
No_Noise      -              23               14               15989       94.00%              16.67
Gaussian      512            20               9                15927       94.02%              16.73
Constant      512            20               7                15105       94.33%              17.65
Binomial      512            19               6                11225       95.78%              23.75
No_Noise      -              13               12               10503       96.06%              20.89
Gaussian      1024           16               7                12759       95.21%              18.58
Constant      1024           18               7                14343       94.62%              17.61
Binomial      1024           19               7                15135       94.32%              25.38

Table 2: Pruning Lenet-5 on MNIST. The accuracy of all the compressed networks is the same as that of the original network.

Method        Noise outputs  Dense layer neurons  Parameters  Removed parameters  Compression rate
Ground Truth  -              512                  605546      -                   -
No_Noise      -              313                  374109      38.21%              1.61
Gaussian      512            3                    13579       97.75%              44.59
Constant      512            33                   48469       91.99%              12.49
Binomial      512            26                   40328       93.34%              15.01

As can be seen in these tables, NoiseOut removed over 95% of the parameters with no accuracy degradation. Astonishingly, the pruned Lenet-5 achieves a 0.95% error rate with only 3 neurons in the hidden layer, which reduces the total number of weights in Lenet-5 by a factor of 44. Figure 6 demonstrates the output of these 3 neurons. This graph has been generated by recording the activations of the hidden-layer neurons for 1000 examples randomly selected from the MNIST dataset; the data has then been sorted by target class. As can be seen in this figure, the three neurons in the hidden layer efficiently encode the output of the convolutional layers into the expected ten classes. These values can then be utilized by the softmax layer to perform the final classification.

To test the effect of pruning on deeper architectures, we prune the network described in Table 4 on the SVHN dataset. This model, which has over 1 million parameters, achieves 93.39% and 93.84% accuracy on the training set and test set respectively. As can be seen in Table 3, NoiseOut pruned more than 85% of the parameters from the base model while maintaining the accuracy.

Table 4: Base model architecture for SVHN with 1236250 parameters.

Layer    Size  Kernel  Parameters
conv1    32    3 x 3   900
conv2    32    3 x 3   9246
conv3    32    3 x 3   9246
pool1    -     2 x 2   -
conv4    48    3 x 3   13872
conv5    48    3 x 3   20784
conv6    48    3 x 3   20784
pool2    -     2 x 2   -
conv7    64    3 x 3   27712
conv8    64    3 x 3   36928
conv9    64    3 x 3   36928
pool3    -     2 x 2   -
dense    1024  -       1049600
softmax  10    -       10250

Table 3: Pruning the reference network of Table 4 on the SVHN dataset.

Method        Dense layer neurons  Parameters  Removed parameters
Ground Truth  1024                 1236250     -
No_Noise      132                  313030      74.67%
Gaussian      4                    180550      85.39%
Constant      25                   202285      83.63%
Binomial      17                   194005      84.30%

Figure 5: Pruning Lenet-300-100 and Lenet-5 on the MNIST dataset with various accuracy thresholds. The x axis represents the total number of parameters in the pruned network (including weights in the convolutional layers), while the y axis shows the accuracy of the model on the test and training datasets.
"}, {"section_index": "8", "section_name": "4.2 EFFECT OF NOISEOUT ON TEST ACCURACY", "section_text": "To explore the effect of NoiseOut on the test accuracy, we pruned Lenet-300-100 and Lenet-5 on MNIST with multiple accuracy thresholds, using a Gaussian distribution as the target for the noise outputs. In each of these experiments, we measured both the training and the test accuracy. As expected, the results, shown in Figure 5, indicate that lower accuracy thresholds result in more pruned parameters. However, the gap between training and test accuracy stays the same. This shows that pruning the network using NoiseOut does not lead to overfitting.
"}, {"section_index": "9", "section_name": "4.3 RELATION TO DROPOUT AND REGULARIZATION", "section_text": "The key point in successful pruning with NoiseOut is a higher correlation between neurons. This goal might seem to be in contradiction with techniques designed to avoid overfitting, such as Dropout and regularization. To investigate this, we pruned Lenet-5 in the presence and absence of these features and demonstrate the results in Figure 7. As can be seen in this figure, Dropout helps the pruning process significantly, while L2 regularization causes more variance. It seems that preventing the co-adaptation of neurons using Dropout also intensifies the correlation between them, which helps NoiseOut remove even more redundant neurons without accuracy loss."}]
r1S083cgx
[{"section_index": "0", "section_name": "SEOUENCE GENERATION WITH A PHYSIOLOGICALLY PLAUSIBLE MODEL OF HANDWRITING AND RECURRENT MIXTURE DENSITY NETWORKS", "section_text": "Output We express the predicted probability of v; as a bivariate GMM as described in Section C.1. and u, as a Bernoulli distribution. Thus for K Gaussians the network has output dimensions of (6K + 1) which, in addition to eqn. (11), contains e, which we use to calculate the pen state probability via (Graves! 2013\n1 ei E (0,1) ei 1+ exp(ei)\nDaniel Berio*1, Memo Akten*1\nArchitectureWe use Long Short-Term Memory (Hochreiter & Schmidhuber1997) networks with input, output and forget gates (Gers et al.] 20oo), and we use Dropout regularization as described by Pham et al.(2014). We employ both a grid search and a random search (Bergstra & Bengio|2012) on various hyperparameters in the ranges: sequence length {64, 128}, number of hidden recurrent layers {1, 2, 3}, dimensions per hidden layer {64, 128, 256, 400, 512, 900, 1024}, number of Gaussians {5 10, 20}, dropout keep probability {50%, 70%, 80%, 90%, 95%} and peepholes {with, without}.\nFrederic Fol Ley\nJames Bergstra and Yoshua Bengio. Random Search for Hyper-Parameter Optimization. Journal of Machine Learning Research, 13:281-305, 2012.\nFor comparison we also tried a deterministic architecture whereby instead of outputing a probability distribution, the network outputs a direct prediction for xi+1. As expected, the network was unable. to learn this function, and all sequence of virtual targets synthesized with this method simply travel in a repeating zig-zag line.\nDaniel Berio and Frederic Fol Leymarie. Computational Models for the Analysis and Synthesis of Graffiti Tag Strokes. In Paul Rosin (ed.), Computational Aesthetics. Eurographics Association, 2015.\nThe purpose of this study is to explore the feasibility and potential benefits of. using a physiological plausible model of handwriting as a feature representation. for sequence generation with recurrent mixture density networks. We build on. recent results in handwriting prediction developed by Graves (2013), and we focus on generating sequences that possess the statistical and dynamic qualities of handwriting and calligraphic art forms. Rather than model raw sequence data, we. first preprocess and reconstruct the input training data with a concise representation given by a motor plan (in the form of a coarse sequence of ballistic' targets) and. corresponding dynamic parameters (which define the velocity and curvature of. the pen-tip trajectory). This representation provides a number of advantages, such. as enabling the system to learn from very few examples by introducing artificial variability in the training data, and mixing of visual and dynamic qualities learned. from different datasets.\nTrainingWe use a form of Truncated Backpropagation Through Time (BPTT) (Sutskever|2013) whereby we segment long sequences into overlapping segments of maximum length n. In this case. long-term dependencies greater than length n are lost, however with enough overlap the network car. effectively learn a sliding window of length n timesteps. We shuffle our training data and reset the internal state after each sequence. We empirically found an overlap factor of 50% to perform well. though further studies are needed to confirm the sensitivity of this figure..\nChristopher M Bishop. Mixture density networks. 1994\nJ-J Brault and Rejean Plamondon. 
Segmenting handwritten signatures at their perceptually important points IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(9):953-957, 1993.\nWe use dynamic unrolling of the RNN, whereby the number of timesteps to unroll to is not set at compile time, in the architecture of the network, but unrolled dynamically while training, allowing. variable length sequences. We also experimented with repeating sequences which were shorter than. the maximum sequence length n, to complete them to length n. We found that for our case they. produced desirable results, with some side-effects which we discuss in later sections.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Joeri De Winter and Johan Wagemans. Perceptual saliency of points along the contour of everyday objects: large-scale study. Perception & Psychophysics, 70(1):50-64, 2008.\nWe split our dataset into training: 70%, validation: 20% and test: 10% and use the Adam optimizer. (Kingma & Ba][2014) with the recommended hyperparameters. To prevent exploding gradients we clip gradients by their global L2 norm as described in (Pascanu et al.|2013). We tried thresholds of both 5 and 10, and found 5 to provide more stability..\nRecent results (Graves]2013) have demonstrated that, given a sufficiently large training data-set. Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber! 1997) Recurrent Mixture Density Networks (RMDNs) (Schuster1999) are capable of learning and generating convincing synthetic. handwriting sequences. In this study we explore a similar network architecture combined with ar intermediate feature representation, given by the parameters of a physiologically plausible model of handwriting: the Sigma Lognormal model (Plamondon 1995 Plamondon et al. 2014).\nWe formulate the loss function J to minimise the Negative Log Likelihood as described in Sectior C.2|using the probability density functions described in eqn. (12) and eqn. (19).\nIn the work byGraves(2013) and subsequent derivations, the RMDN operates on raw sequences of points recorded with a digitizing device. In our approach we preprocess the training data using an intermediate representation that describes a form of \"motor program' coupled with a sequence of dynamic parameters that describe the evolution of the pen tip. By doing so, we use a representation that is more concise (i.e. lower in dimensionality), meaningful (i.e. every data point is a high level segment descriptor of the trajectory), and is resolution independent..\nInput The input to this network at each timestep i is identical to that of the V2V-model, x, E IR3. where the first two elements are v; (normalised relative position displacement for the i'th stroke) and u; E {0, 1} (the pen state during the same stroke). Given input x; and its current internal state (c, h), the network learns to predict the dynamic parameters (toi, 0t) for the current stroke i, by learning the parameters for Pr(toi, 0 x, Ci, ht). Again with an abuse of notation, this can be. expressed more intuitively as Pr(toi, 0 | x, x;-1, ..., xi-n) where n is the maximum sequence. length.\nShimon Edelman and Tamar Flash. A model of handwriting. Biological cybernetics, 57(1-2):25-36, 1987\nThis project stems from the observation that human handwriting results from the orchestration of a large number of motor and neural subsystems, and is ultimately produced with the execution of complex and skillful motions. 
As such we seek a representation that abstracts the complex task o trajectory formation from the neural network, which is then rather focused on a higher level task oi movement planning. Note that for the scope of this study, we do not implement text-to-handwriting synthesis (Graves2013), but rather focus on the task of generating sequences that possess the statistical and dynamic qualities of handwriting, which can be expanded to calligraphy, asemic handwriting, drawings and graffiti (Berio & Leymarie[2015] Berio et al.||2016). In particular, we focus on two distinct tasks: (1) learning and generating motor plans and (2) given a motor plan\nAnath Fischer, Rejean Plamondon, Colin O'Reilly, and Yvon Savaria. Neuromuscular representation and synthetic generation of handwritten whiteboard notes. In Frontiers in Handwriting Recognition (ICFHR) 2014 14th International Conference on, pp. 222-227. IEEE, 2014.\nTamar Flash and Amir A Handzel. Affine differential geometry analysis of human arm movements. Biologica cybernetics, 96(6):577-601, 2007.\nDavid Freedberg and Vittorio Gallese. Motion, emotion and empathy in esthetic experience. Trends in cognitive sciences, 11(5):197-203, 2007.\nFelix A Gers, Jurgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with LSTM Neural Computation, 12(10):2451-2471, 2000\nTraining We use the same procedure for training as the V2V-model\nMartin Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URI http://tensorflow.org/ Software available from tensorflow.org\nDaniel Bullock, Stephen Grossberg, and Christian Mannes. A neural network model for cursive script production Biological Cybernetics, 70(1):15-28, 1993.\nSylvain Calinon. A tutorial on task-parameterized movement learning and retrieval. Intelligent Service Robotics 9(1):1-29, 2016.\nJacob Feldman and Manish Singh. Information along contours and object boundaries. Psychological review. 112(1):243, 2005.\nArchitecture We explored very similar architecture and hyperparamereters as the V2V-model, but found that we achieved much better results with a shorter maximum sequence length. We trained a number of models with a variety of sequence lengths {3, ..., 8, 13, 16, 21, 32}.\nAlex Graves. Supervised Se sence Labelling with Recurrent Neural Networks. PhD thesis, 2008\nAlex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013\nInput The input to this network x; E IR at each timestep i is slightly different to the V2V and. V2D models. Similar to the V2V and V2D models, the first two elements are v; (normalised relative position displacement for the i'th stroke), and the third element is u; E {0, 1} (the pen state during the same stroke). However in this case the final two elements are the dynamic parameters for the previous stroke (to-1, 0-1), normalized to zero mean and unit standard deviation.\nThe remainder of this paper is organised as follows: in Section [2] after briefly summarising the background context, we then briefly describe the Sigma Lognormal model and RMDNs; in Section 3 we present the data preprocessing step and the RMDN models that build up our system; in Section we propose various applications of the system, including learning handwriting representations from small datasets and mixing styles.\nDavid Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. 
Neural computation, 9(8):1735-1780 1997.\nOutput The output of this network is identical to that of the V2D model.\nOur study is grounded on a number of notions and principles that have been observed in the general study of human movement as well as in the handwriting synthesis/analysis field (known as Grapho. nomics (Kao et al.l1986)). The speed profile of aiming movements is typically characterised by a \"bell shape\" that is variably skewed depending on the rapidity of the movement (Lestienne1979. Nagasaki][1989, Plamondon et al.]2013). Complex movements can be described by the superimpo- sition of a discrete number of \"ballistic' units of motion, which in turn can each be represented by the classic bell shaped velocity profile and are often referred to as strokes. A number of methods synthesise handwriting through the temporal superimposition of strokes, the velocity profile of which. is modelled with a variety of functions including sinusoidal functions (Morasso & Mussa Ivaldi 1982f [Maarse][1987] Rosenbaum et al.[[1995), Beta functions (Lee & Chof1998} Bezine et al.]2004) and lognormals (Plamondon et al.T|2009). In this study we rely on a family of models known as the Kinematic Theory of Rapid Human Movements, that has been developed by Plamondon et al. in an extensive body of work since the 1990's (Plamondon]1995) Plamondon et al.]2014). Plamondon et al.(2003) show that if we consider that a movement is the result of the parallel and hierarchical interaction of a large number of coupled linear systems, the impulse response of such a system to a centrally generated command asymptotically converges to a lognormal function. This assumption is attractive from a modelling perspective because it abstracts the high complexity of the neuromuscular system in charge of generating movements with a relatively simple mathematical model which further provides state of the art reconstruction of human velocity data (Rohrer & Hogan2006f Plamondon. [et al.]2013).\nTraining We use the same rocedure for training as the V2V-model.\nFrancesco Lacquaniti, Carlo Terzuolo, and Paolo Viviani. The law relating the kinematic and figural aspects of drawing movements. Acta psychologica, 54(1):115-130, 1983\nWe evaluated and batch rendered the outputs of many different architectures and models at differen training epochs, and settled on models which were amongst those with the lowest validation erroi out also produced visibily more desirable results. Once we picked the models, the results displaye are not cherry picked.\nThe preprocessed IAM dataset contains 12087 samples (8460 in the training set) with maximun. sequence length 305, minimum 6, median 103 and mean 103.9. For the V2V/V2D/A2V models trained on the IAM database we settle on an architecture of 3 recurrent layers, each with size 512, a maximum sequence length of 128, 20 Gaussians, dropout keep probability of 80% and no peepholes\nHenry W Lin and Max Tegmark. Why does deep and cheap learning work so well? arXiv preprint arXiv:1608.08225, 2016.\nFor V2V we used L2 normalisation on. input. and for A2D/V2D we used\nA number of methods have used neural inspired approaches for the task of handwriting trajectory. formation (Schomaker1992 Bullock et al.|[1993][Wada & Kawato]1993]. Similarly to our proposed method, Ltaief et al.(2012) train a neural on a preprocessed dataset where the raw input data is. reconstructed in the form of handwriting model parameters.Nair & Hinton (2005) use a sequence. 
of neural networks to learn the motion of two orthogonal mass spring systems from images of. handwritten digits for classification purposes. With a similar motivation to ours, Plamondon &. Privitera|(1996) use a Self Organising Map (SOM) to learn a sequence of ballistic targets, which describe a coarse motor plan of handwriting trajectories. Our method builds in particular on the work. of|Graves|(2013), who describes a system that uses a recurrent mixture density networks (RMDNs). (Bishop|1994) extended with a LSTM architecture (Hochreiter & Schmidhuber1997), to generate synthetic handwriting in a variety of styles..\nFrans J Maarse. The study of handwriting movement: Peripheral models and signal processing techniques. Lisse [etc.]: Swets & Zeitlinger, 1987.\nVinod Nair and Geoffrey E Hinton. Inferring motor programs from images of handwritten digits. In Advances in neural information processing systems, pp. 515-22, 2005.\nOn the basis of Plamondon's Kinematic Theory (Plamondon] 1995), the Sigma Lognormal (A) model (Plamondon & Djioua] 2006) describes complex handwriting trajectories via the vectorial superimposition of a discrete number of strokes. With the assumption that curved handwriting movements are done by rotating the wrist. the curvilinear evolution of strokes is described with a circular arc shape. Each stroke is charactersied by a variably assymmetric \"bell shape\" speed profile which is described with a (3 parameter) lognormal function. The planar evolution of a trajectory necessarily located along the generated trajectory) loci at which each consecutive stroke is aimed The virtual targets provide a low level description of the motor plan for the handwriting trajectory. A smooth trajectory is then generated by integrating the velocity of each stroke over time. The trajectory smoothness can be defined by adjusting the activation-time offset of a given stroke with respect to the\nRazvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. Ir International Conference on Machine Learning ICML, volume 28, pp. 1310-1318, 2013.\nVu Pham, Theodore Bluche, Christopher Kermorvant, and Jerome Louradour. Dropout Improves Recurrent Neural Networks for Handwriting Recognition. In Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on, pp. 285-290. IEEE, 2014.\npredicting the corresponding dynamic parameters that determine the visual and dynamic qualities of the pen trace. We then go on to show that this modular workflow can be exploited in ways such as: mixing of dynamic qualities between data-sets (a form of handwriting \"style transfer\"' ) as well as learning from small datasets (a form of \"one shot learning')..\nGiven input x, and its current internal state (c, h), the network learns to predict the dynamic param eters (toi, 0) for the current stroke i, by learning the parameters for Pr(toi, 0 | x, C, h). Again with an abuse of notation, this can be expressed more intuitively as Pr(toi, 0 x, x-1, ..., x-n, where n is the maximum sequence length.\nHenry SR Kao, Rumjahn Hoosain, and GP Van Galen. Graphonomics: Contemporary research in handwriting Elsevier, 1986.\nhitecture We explored very similar architecture and hyperparamereters as the V2D model\nDo-Hoon Lee and Hwan-Gue Cho. The beta-velocity model for simulating handwritten korean scripts. In Electronic Publishing, Artistic Imaging, and Digital Typography, pp. 252-264. Springer, 1998..\nF Lestienne. 
Effects of inertial load and velocity on the braking process of voluntary limb movements. Experi mental Brain Research. 35(3):407-418. 1979\nXiaolin Li, Marc Parizeau, and Rejean Plamondon. Segmentation and reconstruction of on-line handwritten scripts. Pattern recognition, 31(6):675-684, 1998\nFor the augmented one-shot learning models we used similar architectures, but found that 2 recurrent layers each with size 256 was able to generalise better and produce more interesting results that both. captured the prime inputs without overfitting.\nWe also tried a number of different methods for normalising and representing v; on the input to the. models. We first tried normalising the components individually to have zero mean and unit standard deviation. We also tried normalising uniformly on L2 norm again to have zero mean and unit standard. deviation. Finally, we tried normalised polar coordinates, both absolute and relative..\nU-V Marti and Horst Bunke. The iam-database: an english sentence database for offline handwriting recognition International Journal on Document Analysis and Recognition, 5(1):39 46, 2002\n(a) V4 c (d) V1 Time t02 V2\nR. Plamondon et al. Recent developments in the study of rapid human movements with the kinematic theory Pattern Recognition Letters, 35:225-35, 2014.\nInput (online handwriting data) A-model parameter extraction Artificial variability with parameter perturbations (optional) Preprocessed input Virtual targets Dynamic parameters (t, 0.) parameters | action plan training training laee snded V2V model V2D/A2D models synthesize virtual target from predict model parameters seed virtual targets for synthesized virtual targets Squash vack+s. Acam ToP >y^/N> +<nM MZL i`breLu V M* generate trajectories with predict model parameters for random model parameters user-drawn action plan A> /N2r). M\nRejean Plamondon and Moussa Djioua. A multi-level representation paradigm for handwriting stroke generatior Human Movement Science, 25(4):586-607, 2006.\nFigure 1: A sequence of virtual targets and the corresponding A trajectory. (a), the virtual targets and the corresponding stroke aiming directions. (b), the virtual targets and the corresponding circular arcs. (c), a possible trajectory generated over the given sequence of virtual targets. While the generated trajectory might appear. similar to a polynomial curve such as a B-Spline, it also describes a smooth and physiologically plausible. velocity profile (d).\nBrandon Rohrer and Neville Hogan. Avoiding spurious submovement decompositions II. Biological cybernetics 94(5):409-14, 2006.\nprevious stroke, which is denoted with toi; a smaller time offset (i.e. a greater overlap betweer. lognormal components) will result in a smoother trajectory (Fig.1] c). The curvature of the trajectory. can be varied by adjusting the central angle of each circular arc, which is denoted with 0;. Equation and further details for the A model can be found in Appendix|A.\nDavid A Rosenbaum, Loukia D Loukopoulos, Ruud GJ Meulenbroek, Jonathan Vaughan, and Sascha Engelbrecht. Planning reaches by evaluating stored postures. Psychological review, 102(1):28. 1995\nA sequence of virtual targets provides a very sparse spatial description or \"motor plan\" for the trajectory evolution. The remaining stroke parameters, to and O, define the temporal, dynamic and geometric features of the trajectory and we refer to those as dynamic parameters.\nLambert Schomaker. A neural oscillator-network model of temporal pattern generation. 
Human movement science, 11(1):181-192, 1992\nMike Schuster. Better Generative Models for Sequential Data Problems: Bidirectional Recurrent Mixtur Density Networks. In Advances in Neural Information Processing Systems (NIPS), pp. 589-595, 1999"}, {"section_index": "2", "section_name": "2.2 RECURRENT MIXTURE DENSITY NETWORKS", "section_text": "V2V model\nMixture Density Networks (MDN) were introduced byBishop[(1994) in order to model and predict. the parameters of a Gaussian Mixture Model (GMM), i.e. a set of means, covariances and mixture. weights.Schuster(1999) showed that MDNs could be to model temporal data using RNNs. The author used Recurrent Mixture Density Networks (RMDN) to model the statistical properties of speech, and they were found to be more successful than traditional GMMs.Graves (2013) used. LSTM RMDNs to model and synthesise online handwriting, providing the basis for extensions to the. method, also used in|Ha et al.(2016);Zhang et al. (2016). Note that in the case of a sequential model. the RMDN outputs a unique set of GMM parameters for each timestep t, allowing the probability. distribution to change with time as the input sequence develops. Further details can be found in. AppendixC.1\nIlya Sutskever. Training Recurrent neural Networks. PhD thesis, University of Toronto, 2013\nP Viviani and C Terzuolo. Trajectory determines movement dynamics. Neuroscience, 7(2):431-437, 1982\nYasuhiro Wada and Mitsuo Kawato. A neural network model for arm trajectory formation using forward an inverse dynamics models. Neural Networks. 6(7):919-932. 1993.\nWe operate on discrete and temporally ordered sequences of planar coordinates. Similarly to Graves (2013), most of our results come from experiments made on the IAM online handwriting database (Marti & Bunke2002). However, we have made preliminary experiments with other datasets, such as the Graffiti Analysis Database (Lab][2009) as well as limited samples collected in our laboratory from a user with a digitiser tablet.\nThe Sigma Lognormal model (Plamondon & Djioual[2006) describes complex handwriting trajectorie. via the vectorial superimposition of lognormal strokes. The corresponding speed profile A, (t) assumes. a variably asymmetric \"bell shape\" which is described with a 3 parameter lognormal function.\nFigure 17: Schematic overview of the system\nWe then exploit the modularity of this system to conduct various experiments, details of which ca found in Section4\nRejean Plamondon. A Kinematic Theory of Rapid Human Movements. Part I . Movement Representation and Generation. Biological cybernetics, 72(4):295-307, 1995.\nRejean Plamondon, Moussa Djioua, and Christian O'Reilly. Recent Developments in the Study of Rapid Human Movements with the Kinematic Theory. Traitement Du Signal, 26:377-394, 2009. ISSN 0765-0019\nRejean Plamondon, Christian O'Reilly, Celine Remi, and Theresa Duval. The Lognormal Handwriter: Learning Performing and Declining. Frontiers in Psychology, 4(945), 2013. 1SsN 1664-1078\nFreek Stulp and Olivier Sigaud. Many regression algorithms, one unified model: A review. Neural Networks, 69. 60-79, 2015.\ntraining indu! V2D/A2D models\nTamas Varga, Daniel Kilchhofer, and Horst Bunke. Template-based Synthetic Handwriting Generation for the Training of Recognition Systems. In Proc. of 12th Conf. of the International Graphonomics Society, pp. 206-211, 2005.\npredict model parameters. for synthesized virtual targets\nAs a first step, we preprocess the raw data and reconstruct it in the form of A model parameters. 
Section|3.1] We then train and evaluate a number of RMDN models for two distinct tasks:\n1. Virtual target prediction. We use the V2V-model for this task. Given a sequence of virtual targets, this model predicts the next virtual target.. 2. Dynamic parameter prediction. For this task we trained and compared two model ar chitectures. Given a sequence of virtual targets, the task of these models is to predict the. corresponding dynamic parameters. The V2D-model is condititioned only on the previous. virtual targets, whereas the A2D-model is conditioned on both the previous virtual targets. and dynamic parameters.\n1 (ln(t-toi) xp V2(t - to, 20i2\nwhere toi defines the activation time of a stroke and the parameters ; and o; determine the shape of the lognormal function. ; is referred to as log-time delay and is biologically interpreted as the rapidity of the neuromuscular system to react to an impulse generated by the central nervous system (Plamondon et al.]2003); ; is referred to as log-response time and determines the spread and asymmetry of the lognormal.\nThe curvilinear evolution of strokes is described with a circular arc shape, which results in\nln(t - to) $it)=0+0 1 + erf O;V2\nA number of methods have been developed by Plamondon et. al in order to reconstruct A-model parameters from digitised pen input data (O'Reilly & Plamondon]2008] Plamondon et al.] 2014 Fischer et al.]2014). These methods provide the ideal reconstruction of model parameters, given a high resolution digitised pen trace. While such methods are superior for handwriting analysis and biometric purposes, we opt for a less precise method (Berio & Leymarie|2015) that is less sensitive to sampling quality and is aimed at generating virtual target sequences that remain perceptually similar to the original trace. We purposely choose to ignore the original dynamics of the input, and base the method on a geometric input data only. This is done in order to work with training sequences that are independent of sampling rate, and in sight of future developments in which we intend to extract handwriting traces from bitmaps, inferring causal/dynamic information from a static input as humans are capable of (Edelman & Flash]1987|Freedberg & Gallese2007).\nwhere 0, is the central angle of the circular arc that defines the shape of the ith stroke\nm-1 $(t) =v1+ dtA;(T) ) i()(Vi+1-Vi) i=1 h(0i)cos$i(t) 20i -h(0i)sin$i(t) if |sin0 2sin0i and h(0i)sin$i(t) -h(0i)cos$i(t) otherwis\nOur method operates on a uniformly sampled input contour, which is then segmented in correspon dence with perceptually salient key points: loci of curvature extrema modulated by neighbouring. contour segments (Brault & Plamondon] 1993) Berio & Leymarie2015), which gives an initial. estimate of each virtual target v. We then (i) fit a circular arc to each contour segment in order to estimate the 0, parameters and (ii) estimate the to parameters by analysing the contour curvature in. the region of each key point. Finally, (iii) we iteratively adjust the virtual target positions to minimise the error between the original trajectory and the one generated by the corresponding A parameters.. For Further details on the A parameter reconstruction method, the reader is referred to Appendix|B\nwhich scales the extent of the stroke based on the ratio between the perimeter and the chord length o the circular arc.\noriginal reconstructed (a) (b) original reconstructed\ni= ln(1+Qi)\nFigure 2: A parameter reconstruction. (a) The original and reconstructed trajectories. 
(b) The reconstructed virtual targets. Note that the virtual targets define a shape that is perceptually similar to the input. (c) Aligned and scaled speed profiles of the original (gray) and reconstructed (black) trajectories. Although the dynamic information in the input is ignored (due to uniform sampling), the two speed profiles show similarities in number and relative-height of peaks.\nt0i =t1i-e-30 t1i=t1(i-1)+ti t1(0) = 0,"}, {"section_index": "3", "section_name": "3.2 DATA AUGMENTATION", "section_text": "We can exploit the A parameterisation to generate many variations over a single trajectory, which are visually consistent with the original, and with a variability that is similar to the one that would be seen in multiple instances of handwriting made by the same writer (Fig.3) (Djioua & Plamondon 2008af Fischer et al.2014] Berio & Leymarie2015). Given a dataset of n training samples, we randomly perturb the virtual target positions and dynamic parameters of each sample np times, which results in a new augmented dataset of size n + n np where legibility and trajectory smoothness is maintained across samples. This would not be possible on the raw online dataset, as perturbations for each data-point would eventually result in a noisy trajectory.\n(a) b\nFigure 12: Lognormals with varying \"skeweness\" parameter and corresponding values for , . As -> ( the lognormal approaches a Gaussian.\nThe V2V-model is conditioned on a history of virtual targets and given a new virtual target it predicts the next virtual target (hence the name V2V). Note that each virtual target includes the corresponding\nThe planar evolution of a trajectory is defined by a sequence of virtual targets {v,}=I, where a. trajectory with m virtual targets will be characterised by m - 1 circular arc strokes. A A trajectory, parameterised by the virtual target positions, is given by.\nIntermediate parameterisation. In order to facilitate the precise specification of timing and profile shape of each stroke, we recur to an intermediate parametrisation that takes advantage of a few known properties of the lognormal (Djioua & Plamondon]2008b) in order to define each stroke with (i) a time offset t; with respect to the previous stroke, (ii) a stroke duration T; and (iii) a shape parameter Q;, which defines the skewedness of the lognormal. The corresponding A parameters {toi, i, } can be then computed with:\noriginal reconstructed (a) (b) original reconstructed\nUi = -ln Ti\nwhere t1; is the onset time of the lognormal stroke profile. As a approaches 0, the shape of the lognormal converges to a Gaussian, with mean t1 + e-o2 (the mode of the lognormal) and standard deviation d. 6\n3.5 Gaussian 3 a=0.1 =0.95 o=0.10 a=0.2 =0.27 o=0.18 a=0.3 =-0.15 o=0.26 2.5 a=0.4 =-0.46 o=0.34 a=0.5 =-0.72 o=0.41 2 a=0.6 =-0.94 o=0.47 a=0.7 =-1.14 o=0.53 a=0.8 =-1.33 =0.59 1.5 a=0.9 =-1.50 =0.64 a=1 =-1.66 o=0.69 0.5 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6\na=0.2 =0.27 =0.18 a=0.3 =-0.15 =0.26 a=0.4 =-0.46 =0.34 a=0.5 =-0.72 =0.41 a=0.6 =-0.94 =0.47 a=0.7 =-1.14 =0.53 a=0.8=-1.33 =0.59 a=0.9 =-1.50 =0.64 a=1 =-1.66 =0.69\npen state - up (not touching the paper) or down (touching the paper). Repeatedly feeding the. predicted virtual target back into the model at every timestep allows the model to synthesise sequence. of arbitrary length. 
The implementation of this model is very similar to the handwriting predictior demonstrated by Graves (2013), although instead of operating directly on the digitised pen positions we operate on the much coarser virtual target sequences which are extracted during the preprocessing step. The details of this model can be found in AppendixC.3."}, {"section_index": "4", "section_name": "RECONSTRUCTING A PARAMETERS FROM AN ONLINE DATASET", "section_text": "The A parameter reconstruction method operates on a input contour uniformly sampled at a fixec distance which is defined depending on the extent of the input, where we denote the kth sampled poin along the input with p[k|. The input contour is then segmented in correspondence with perceptuall salient key points, which correspond with loci of curvature extrema modulated by neighbourin? contour segments (Brault & Plamondon,[1993) Berio & LeymarieJ 2015). The proposed approacl shares strong similarities with previous work done for (i) compressing online handwriting data with a circular-arc based segmentation (Li et al.||1998) and (ii) for generating synthetic data for handwriting recognisers (Varga et al.|2005). The parameter reconstruction algorithm can be summarised with the following steps:\nThe goal of these models is to predict the corresponding dynamic parameters (toi, 0) for a given. sequence of virtual targets. We train and compare two model architectures for this task. The V2D. model is conditioned on the history of virtual targets, and given a new virtual target, this model. predicts the corresponding dynamic parameters (to, 0) for the current stroke (hence the name V2D). Running this model incrementally for every stroke of a given virtual target sequence allows us. to predict dynamic parameters for each stroke. The implementation of this model is very similar to. the V2V-model, and details can be found in AppendixC.4.\nAt each timestep, the V2D model outputs and maintains internal memory of a probability distribution. for the predicted dynamic parameters. However, the network has no knowledge of the parameters that. are sampled and used. Hence, dynamic parameters might not be consistent across timesteps. This problem can be overcome by feeding the sampled dynamic parameters back into the model at the. next timestep. From a human motor planning perspective this makes sense as, for a given drawing. style, when we decide the curvature and smoothness of a stroke we will take into consideration the. choices made in previously executed strokes..\nThe A2D model predicts the corresponding dynamic parameters (to;, 0,) for the current stroke conditioned on a history of both virtual targets and dynamic parameters (i.e. all A parameters - hence the name A2D). We use this model in a similar way to the V2D model, whereby we run it incrementally for every stroke of a given virtual target sequence. However, internally, at every. timestep the predicted dynamic parameters are fed back into the model at the next timestep along with the virtual target from the given sequence. The details of this implementation can be found in. AppendixC.5\nThe details for each step are highlighted in the following paragraphs\nEstimating input key-points. Finding significant curvature extrema (which can be counted as. convex and concave features for a closed/solid shape) is an active area of research, as relying on. discrete curvature measurements remains challenging. We currently rely on a method described by. 
Feldman & Singh (2005), and supported experimentally byDe Winter & Wagemans(2008): first we measure the turning angle at each position of the input p|k| and then compute a smooth version of the signal by convolving it with a Hanning window. We assume that the turning angles have\nPredicting Virtual Targets. In a first experiment we use the V2V model, trained on the prepro cessed IAM dataset, to predict sequences of virtual targets. We prime the network by first feeding it a sequence from the test dataset. This conditions the network to predict sequences that are similar to the prime. We can see from the results (Fig. 4) that the network is indeed able to produce sequences that capture the statistical qualities of the priming sequence, such as overall incline, proportions, and oscillation frequency. On the other hand, we observe that amongst the generated sequences, there are often patterns which do not represent recognisable letters or words. This can be explained by the high variability of samples contained in the IAM dataset, and by the fact that our representation is very concise, with each data-point containing high significance. As a result, the slightest variation in a prediction is likely to cause a large error in the next. To overcome this problem, we train a new model with a dataset augmented with 10 variations as described in Section 3.2] Due to our limited computing resources[' we test this method on 1/10th of the dataset, which results in a new dataset with the same size as the original, but with a lower number of handwriting specimens with a number of subtle variations per specimen. With this approach, the network predictions maintain statistical similarity with the priming sequences, and patterns emerge that are more evocative of letters of the alphabet or whole words, with fewer unrecognizable patterns (Fig. 4). To validate this result, we also test the model's performance training it on 1/1Oth of the dataset, without data augmentation, and the results are clearly inferior to the previous two models. This suggests that the data augmentation step is highly beneficial to the performance of the network.\n0.0040 Turning angle surprisal 0.0035 0.0030 0.0025 0.0020 0.0015 0.0010 0.0005 0.0000 0 10 20 30 40 50 60 70\n0.0040 0.0035 0.0030 0.0025 0.0020 0.0015 0.0010 0.0005 0.0000 0 10 20 30 40 50 60 70\nFigure 13: Input key-point estimation. Left, the (smoothed) turning angle surprisal signal and the key-points estimated with peak detection. Right, the corresponding key-points along the input trajectory..\nbeen generated by a random process with a Von Mises distribution with mean at 0 degrees, which corresponds with giving maximum probability to a straight line. We then measure the surprisal (i.e. the negative logarithm of the probability) for each sample as defined by[Feldman & Singh[(2005] which normalised to the [0, 1] range simplifies to:.\nwhere 0[k] is the (smoothed) turning angle. The first and last sample indices of the surprisal signa] together with its local maxima results in m key-point indices { z }. The corresponding key-points along the input contour are then given by {p [i}.\n1We are thus not able to thoroughly test the large network architectures that would be necessary to train on the whole augmented dataset.\n'ind m key-points in the input contour.. . Fit a circular arc to each contour segment defined between two consecutive key-points. (defining individual strokes), and obtain an estimate of each curvature parameter 0,.. . 
For each stroke compute the corresponding t; parameter by analysing the curvature signal. in the region of the corresponding key-point.. . Define an initial sequence of virtual targets with m positions corresponding with each input. key-point. Repeat the following until convergence or until a maximum number of iterations is reached Berio & Leymarie(2015): -Integrate the A trajectory with the current parameter estimate.. -Identify m key-points in the generated trajectory. - Move the virtual target positions to minimise the distance between the key-points of. the generated trajectory and the key-points on the input contour..\n1 cos(0[k])\n(a) KHiA wisibsMlo hae ft is th^ nirmm Hc ws sbl h yeh m+ twUs1 (b) rkwq's+tTn 1^VK,2 7*K lxvzsA Vnnnus wns > Wn M 3w s9WM^ gmAy snmV Mvxzn g~yewy L<md +nnyy lk f AL n N w4myLMLy|A7 iWm m J LsM^Vs*xniMN< np Fo^+ 1mM3 (c) YV Tk wwm th nvyniNNM VM zr J Uc YMsVvz nuMA Dm7 74yFKsitym+xbFc<Ev xs AW L LqAsHYu\\sNZ tY vusv7M sSy\nEstimating stroke curvature parameters. For each section of the input contour defined between two consecutive key-points, we estimate the corresponding stroke curvature parameter 0, by first. computing a least square fit of a circle to the contour section. We then compute the internal angle of. the arc supported between the two key-points, which is equal 20t, i.e. two times the corresponding. curvature parameter 0i .\nFigure 4: Predicting virtual targets. (a) Virtual targets from the test set (not seen during V2V training) usec to prime the V2V models. (b) Sequences generated with the V2V model. (c) Sequences generated with the augmented V2V model. Note that the non-augmented V2V model produces more undesired 'errors'. This is more visibly noticable when rendered with dynamic parameters (Fig.[6)\nPredicting Dynamic Parameters. We first evaluate the performance of both the V2D and A2D models on virtual targets extracted from the test set. Remarkably, although the networks have not been trained on these sequences, both models predict dynamic parameters that result in trajectories that are readable, and are often similar to the target sample. We settle on the A2D model trained on a 3 augmented dataset, which we qualitatively assess to produce the best results (Fig. 5).\nFigure 14: Fitting circles (dotted red) and circular arcs (red) to the input\nEstimating stroke time-overlap parameters. This step is based on the observation that a smaller. values of toi, i.e. a greater time overlap between strokes, result in smoother trajectories. On the contrary, a sufficiently large value of toi will result in a sharp corner in proximity of the corresponding virtual target. We exploit this notion, and compute an estimate of the toi parameters by examining the sharpness of the input contour in the region of each key-point..\nIk is thm ^Mirmm Hc wns + yh nt (a) RHik miibsMo hV f is the Uairman HC was sbIe ho plch mt dowws 1 (b) guide missiles onto bach. f is the Cairmau HC Nns a5Ie h yCh towe s 1 () qride wissiles on'o hach. fc is th Carmnau 4C nnl quide wissiles onto bacG. cb'e h ch mt Howus! x ic th oiruur (r wr) Och rut d-wn5, (d) 5lAk qissiles Ju*o bh.\nTo do so we examine the previously computed turning angle surprisal signal, in which we can observe that sharp corners in the contour correspond with sharper peaks, while smoother corners correspond with smooth peaks with a larger spread. 
By treating the surprisal signal as a probability density function, we can then use statistical methods to measure the shape of each peak with a mixture of parametric distributions, and examine the shape of each mixture component in order to get an estimate of the corresponding sharpness along the input contour. To do so we employ a variant of Expectation Maximisation (EM) (Dempster et al.1977) in which we treat the distance along the contour as a random variable weighted by the corresponding signal amplitude normalised to the [0, 1] range. Once the EM algorithm has converged, we treat each mixture component as a radial basis function (RBF) centred at the corresponding mean, and use linear regression as in Radial Basis Function Networks (Stulp & Sigaud[2015) to fit the mixture parameters to the original signal (Calinon]2016). Finally we generate an estimate of sharpness , (bounded in the [0, 1] range) for each key point using as a logarithmic function of the mixture parameters and weights. The corresponding toi parameters are then given by\nFigure 5: Dynamic parameter prediction. (a) Virtual targets from samples in the test set (not seen during training). (b) The original trajectories provided for comparison. (c) Trajectories reconstructed using predicted dynamic parameters. (d) Trajectories reconstructed with random dynamic parameters provided for comparison.\nWe then proceed with applying the same A2D model on virtual targets generated by the V2V models. primed on the test set. We observe that the predictions on sequences generated with the augmented dataset are highly evocative of handwriting and are clearly different depending on the priming sequence (Fig.6] c), while the predictions made with the non-augmented dataset are more likely to. resemble random scribbles rather than human readable handwriting (Fig.6] b). This further confirms. the utility of the data augmentation step..\nwhere tmin. and tmar are user specified parameters that determine the range of the to; estimates\n(b) eserebmJ,'s fcy@ri 1 nMC,e hfLcQ~snoe Onwnus wG 3 W mMus'en gr~ytchy Seonc +^ntg oNun gmfu A ndb 1nee 1n nmyNlg$y;4u ou1 50 bom^yc sosnin %p.T mc /hr/nnt 1mns Ygie n\\y^to (c) eM Cl j neec 11(nu Loe y>uesn(\\ebus) C rMe wwu th yJyghd`QNA s MuM er eaW J 4c 1msMwven E|O;trem+c5rc Gux5 A e reCdsHn|omZ t nwmn Im vacv>a\nSharpness GMM 0.005 Before nudge 0.004 0.003 0.002 0.001 0.000 0 10 20 30 40 50 60 70\nSharpness GMM 0.005 Before nudge 0.004 0.003 0.002 0.001 0.000 0 10 20 30 40 50 60 70\nFigure 15: Sharpness estimation. Left, the GMM components estimated from the turning angle surprisal signal Right, the A trajectory generated before the final iterative adjustment step. Note that at this stage the virtual target positions correspond with the estimated input key-points.\nFigure 6: Trajectories reconstructed with dynamic parameters predicted for generated virtual targets from Fig. 
using (b) non-augmented V2V, (c) augmented V2V.

t_0i = t_min + (t_max − t_min) λ_i

User defined virtual targets. The dynamic parameter prediction models can also be used in combination with user-defined virtual target sequences (Fig. 7). Such a method can be used to quickly and interactively generate handwriting trajectories in a given style, by a simple point and click procedure. The style (in terms of curvature and dynamics) of the generated trajectory is determined by the data used to train the A2D model, and by priming the A2D model with different samples, we can apply different styles to the user-defined virtual targets.

Note that we currently utilise an empirically defined function for this task. But in future steps, we intend to learn the mapping between sharpness and mixture component parameters from synthetic samples generated with the Λ model (for which t_0i, and consequently λ_i, are known).

Iteratively estimating virtual target positions. The loci along the input contour corresponding with the estimated key-points provide an initial estimate for a sequence of virtual targets, where each virtual target position is given by v_i = p[ẑ_i]. Due to the trajectory-smoothing effect produced by the time overlaps, the initial estimate will result in a generated trajectory that is likely to have a reduced scale with respect to the input we wish to reconstruct (Varga et al., 2005). In order to produce a more accurate reconstruction, we use an iterative method that shifts each virtual target towards a position that will minimise the error between the generated trajectory and the reconstructed input. To do so, we compute an estimate of m output key-points {ξ(z_i)} in the generated trajectory, where z_2, ..., z_(m−1) are the time occurrences at which the influence of one stroke exceeds the previous. These correspond with salient points along the trajectory (extrema of curvature) and can be easily computed by finding the time occurrence at which two consecutive lognormals intersect. Similarly to the input key-point case, ξ(z_1) and ξ(z_m) respectively denote the first and last points of the generated trajectory. We then iteratively adjust the virtual target positions in order to move each generated key-point ξ(z_i) towards the corresponding input key-point p[ẑ_i] with

v_i ← v_i + p[ẑ_i] − ξ(z_i)

The iteration continues until the mean squared error (MSE) of the distances between every pair p[ẑ_i] and ξ(z_i) is less than an experimentally set threshold, or until a maximum number of iterations is reached (Fig. 16). This method usually converges to a good reconstruction of the input within few iterations (usually < 5). Interestingly, even though the dynamic information of the input is discarded, the reconstructed velocity profile is often similar to the original (in number and shape of peaks), which can be explained by the extensively studied relationships between the geometry and dynamics of movement trajectories (Viviani & Terzuolo, 1982; Lacquaniti et al., 1983; Viviani & Schneider, 1991; Flash & Handzel, 2007).

Figure 7: Dynamic parameters generated over user-specified virtual targets for the word 'Res', using the A2D model trained on the IAM database.

One shot learning. In a subsequent experiment, we apply the data augmentation method described in Section 3.2 to enable both the virtual target and the dynamic parameter prediction models to learn from a small dataset of calligraphic samples recorded by a user with a digitiser tablet. We observe that with a low number of augmentations (50×) the models generate quasi-random outputs, and seem to learn only the left-to-right trend of the input. With higher augmentation (700×), the system generates outputs that are consistent to the human eye with the input data (Fig. 8).
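Since the Section 3.2 augmentation step is central to these one-shot results, here is a minimal NumPy sketch of that kind of perturbation (ours, not the authors' implementation): each sample's virtual targets and per-stroke dynamic parameters are jittered n_p times; the noise scales are hypothetical placeholders.

import numpy as np

def augment_sample(V, t0, theta, n_p, s_v=0.02, s_t=0.05, s_theta=0.05, rng=np.random):
    # V: (m, 2) virtual target positions; t0, theta: (m-1,) per-stroke
    # dynamic parameters. Returns n_p perturbed copies of the sample.
    out = []
    for _ in range(n_p):
        V_j = V + rng.normal(0.0, s_v * (V.std() + 1e-8), size=V.shape)
        t0_j = t0 + rng.normal(0.0, s_t, size=t0.shape)
        th_j = theta + rng.normal(0.0, s_theta, size=theta.shape)
        out.append((V_j, t0_j, th_j))
    return out

Keeping the perturbations small preserves legibility and smoothness across copies, which is what allows the dataset to grow from n to n + n·n_p samples without turning individual trajectories into noise.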
We also train our models using only a single sample (augmented 7000×) and again observe that the model is able to reproduce novel sequences that are similar to the input sample (Fig. 9). Naturally, the output is a form of recombination of the input, but this is sufficient to synthesise novel outputs that are qualitatively similar to the input. It should be noted that we are judging the performance of the one-shot learned models qualitatively, and we may not be testing the full limits of how well the models are able to generalise. On the other hand, these results, as well as the "style transfer" capabilities exposed in the following section, suggest a certain degree of generalisation.

Figure 8: Training with small (n = 4) datasets. (a) Training set with 4 samples. (b) Output of the networks with 50× data augmentation. (c) Output of the networks with 700× data augmentation.

Figure 9: Training with single training samples. For each row: (a) Training sample (augmented 7000×). (b) Output of the combined V2V/A2D models primed on the training sample. (c) Output without priming.

Style Transfer. Here, with a slight abuse of terminology, we utilise the term "style" to refer to the dynamic and geometric features (such as pen-tip acceleration and curvature) that determine the visual qualities of a handwriting trajectory. Given a sequence of virtual targets generated with the V2V model trained on one dataset, we can also predict the corresponding dynamic parameters with the A2D model trained on another. The result is an output that is similar to one dataset in lettering structure, but possesses the fine dynamic and geometric features of the other. If we visually inspect Fig. 10, we can see that both the sequence of virtual targets reconstructed by the dataset preprocessing method, and the trajectory generated over the same sequence of virtual targets with dynamic parameters learned from a different dataset, are readable. This emphasises the importance of using perceptually salient points along the input for estimating key-points in the dataset preprocessing step (Section 3.1).
"}, {"section_index": "5", "section_name": "GMM via (Graves 2013)", "section_text": "In order to increase the expressive generative capabilities of our networks, we train them to model parametric probability distributions. Specifically, we use Recurrent Mixture Density Networks that output the parameters of a bivariate Gaussian Mixture Model. If a target variable z_t can be expressed as a bivariate GMM, then for K Gaussians we can use a network architecture with output dimensions of 6K. This output vector would then consist of the GMM parameters, transformed via (Graves, 2013):

μ_t^k = μ̂_t^k : means for the k'th Gaussian, μ_t^k ∈ R²
σ_t^k = exp(σ̂_t^k) : standard deviations for the k'th Gaussian, σ_t^k ∈ R²
ρ_t^k = tanh(ρ̂_t^k) : correlations for the k'th Gaussian, ρ_t^k ∈ (−1, 1)
π_t^k = softmax(π̂_t^k) : mixture weight for the k'th Gaussian, Σ_k π_t^k = 1
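A minimal NumPy sketch of this output transformation (unchanged means, exp for the standard deviations, tanh for the correlations, and a softmax over the mixture weights) is given below; the ordering of the 6K entries in the raw output vector is our own assumption, not something fixed by the text.

import numpy as np

def gmm_params(y_hat, K):
    # Map a raw network output y_hat of length 6K to bivariate-GMM parameters.
    mu    = y_hat[0:2 * K].reshape(K, 2)              # means, used as-is
    sigma = np.exp(y_hat[2 * K:4 * K]).reshape(K, 2)  # standard deviations > 0
    rho   = np.tanh(y_hat[4 * K:5 * K])               # correlations in (-1, 1)
    w     = np.exp(y_hat[5 * K:6 * K] - y_hat[5 * K:6 * K].max())
    pi    = w / w.sum()                               # softmax mixture weights
    return mu, sigma, rho, pi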
Figure 16: Final trajectory reconstruction step. Left, iterative adjustment of the virtual target positions. Right, the final trajectory generated with the reconstructed dynamic parameters.

We have presented a system that is able to learn the parameters for a physiologically plausible model of handwriting from an online dataset. We hypothesise that such a movement centric approach is advantageous as a feature representation for a number of reasons. Using such a representation provides a performance that is similar to the handwriting prediction demonstrated by Graves (2013) and Ha et al. (2016), with a number of additional benefits. These include the ability to: (i) capture both the geometry and dynamics of a hand drawn/written trace with a single representation, (ii) express the variability of different types of movement concisely at the feature level, (iii) demonstrate greater flexibility for procedural manipulations of the output, (iv) mix "styles" (applying curvature and dynamic properties from one example to the motor plan of another), (v) learn a generative model from a small number of samples (n < 5), (vi) generate resolution independent outputs.

The reported work provides a solid basis for a number of different future research avenues. As a first extension, we plan to implement the label/text input alignment method described in Graves' original work, which should allow us to synthesise readable handwritten text and also to provide a more thorough comparison of the two methods. Our method strongly relies on an accurate reconstruction of the input in the preprocessing step. Improvements should especially target the parts of the latter method that depend on user-tuned parameters, such as the identification of salient points along the input (which requires a final peak detection pass) and the measurement of the sharpness of the input in correspondence with the salient points.

Input At each timestep i, the input to the V2V model is x_i ∈ R³, where the first two elements are given by v_i (the relative position displacement for the i'th stroke, i.e. between the i'th virtual target and the next), and the last element is u_i ∈ {0, 1} (the pen-up state during the same stroke). Given input x_i and its current internal state (c_i, h_i), the network learns to predict x_(i+1) by learning the parameters for the Probability Density Function (PDF) Pr(x_(i+1) | x_i, c_i, h_i). With a slight abuse of notation, this can be expressed more intuitively as Pr(x_(i+1) | x_i, x_(i−1), ..., x_(i−n)), where n is the maximum sequence length.

We can then formulate the probability distribution function P_t at timestep t as

P_t = Σ_(k=1..K) π_t^k N(z_t | μ_t^k, σ_t^k, ρ_t^k),    (12)

where

N(x | μ, σ, ρ) = (1 / (2π σ_1 σ_2 √(1 − ρ²))) exp( −Z / (2(1 − ρ²)) )  and

Z = (x_1 − μ_1)² / σ_1² + (x_2 − μ_2)² / σ_2² − 2ρ(x_1 − μ_1)(x_2 − μ_2) / (σ_1 σ_2)

If we let θ denote the parameters of a network, then given a training set S of input-target pairs (x ∈ X, y ∈ Y), our training objective is to find the set of parameters θ_ML which has the maximum likelihood (ML). This is the θ that maximises the probability of the training set S, and is formulated as (Graves, 2008)

θ_ML = argmax_θ Pr(S | θ) = argmax_θ Π_((x,y)∈S) Pr(y | x, θ)

Since the logarithm is a monotonic function, a common method for maximizing this likelihood is minimizing its negative logarithm, also known as the Negative Log Likelihood (NLL), Hamiltonian or surprisal (Lin & Tegmark, 2016). We can then define our cost function J as

J = −ln Π_((x,y)∈S) Pr(y | x, θ) = −Σ_((x,y)∈S) ln Pr(y | x, θ)
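For reference, the following NumPy sketch (ours, not the authors' code) evaluates this NLL for a single bivariate target z, using the mixture density of eqn. (12) and parameters such as those produced by the gmm_params transformation above.

import numpy as np

def bivariate_gmm_nll(z, mu, sigma, rho, pi):
    # -log sum_k pi_k N(z | mu_k, sigma_k, rho_k) for one 2-D target z.
    d = (z - mu) / sigma                               # (K, 2) standardized residuals
    Zq = d[:, 0]**2 + d[:, 1]**2 - 2.0 * rho * d[:, 0] * d[:, 1]
    norm = 2.0 * np.pi * sigma[:, 0] * sigma[:, 1] * np.sqrt(1.0 - rho**2)
    dens = np.exp(-Zq / (2.0 * (1.0 - rho**2))) / norm
    return -np.log(np.dot(pi, dens) + 1e-12)           # epsilon for numerical stability

Summing this quantity over all input-target pairs in the training set gives the cost J defined above.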
the parameters for the Probability Density Function (PDF): $\Pr(x_{i+1} \mid x_i, c_i, h_i)$. With a slight abuse of notation, this can be expressed more intuitively as $\Pr(x_{i+1} \mid x_i, x_{i-1}, \ldots, x_{i-n})$, where $n$ is the maximum sequence length.

Furthermore, we can perform the same type of operation within a single dataset, by priming the A2D model with the dynamic parameters of a particular training example, while feeding it with the virtual targets of another. To test this we train both (V2V, A2D) models on a corpus containing 5 samples of the same sentence written in different styles and then augmented ×1400 (Fig. 11). We envision the utility of such a system in combination with virtual targets interactively specified by a user.

Figure 10: Style transfer mixing training sets. (a) The priming sequence from the V2V dataset (IAM). (b) A2D is trained on a different, single user-specified sample. (c) The virtual targets from (a) rendered with the dynamic parameters predicted from the A2D model from (b).

Figure 11: Style transfer using priming. The leftmost column shows the entire training set consisting of 5 user-drawn samples. The top row (slightly greyed out) shows the virtual targets for two of the training examples. Each cell in the table shows the corresponding virtual targets rendered using the dynamic parameters predicted with the A2D model primed with the sample in the corresponding row."}]
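To make the mixture-density output head above concrete, the following is a minimal NumPy sketch of mapping a raw $6K$-dimensional network output to valid bivariate GMM parameters and evaluating the NLL cost $J$ for a single 2D target. The layout of the raw vector (mixture logits first, then means, log standard deviations, and correlations) and the helper names `gmm_params`/`gmm_nll` are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def gmm_params(raw, K):
    """Map a raw network output of length 6K to bivariate GMM parameters,
    following the squashing functions above (exp, tanh, softmax)."""
    raw = np.asarray(raw, dtype=float)
    pi_hat = raw[:K]                              # mixture weight logits
    mu     = raw[K:3 * K].reshape(K, 2)           # means, unconstrained
    sigma  = np.exp(raw[3 * K:5 * K].reshape(K, 2))  # std devs, > 0
    rho    = np.tanh(raw[5 * K:6 * K])            # correlations in (-1, 1)
    pi     = np.exp(pi_hat - pi_hat.max())
    pi    /= pi.sum()                             # softmax
    return pi, mu, sigma, rho

def gmm_nll(raw, target, K):
    """Negative log likelihood of a 2D target under the bivariate GMM."""
    pi, mu, sigma, rho = gmm_params(raw, K)
    d = (target - mu) / sigma                     # (K, 2) standardized residuals
    Z = d[:, 0] ** 2 + d[:, 1] ** 2 - 2 * rho * d[:, 0] * d[:, 1]
    norm = 2 * np.pi * sigma[:, 0] * sigma[:, 1] * np.sqrt(1 - rho ** 2)
    density = np.exp(-Z / (2 * (1 - rho ** 2))) / norm
    return -np.log(np.sum(pi * density) + 1e-12)

# Example: K = 3 components -> 18 raw outputs.
raw = np.random.randn(18)
print(gmm_nll(raw, np.array([0.1, -0.2]), K=3))
```

In practice these operations would run inside the network's training graph so that gradients of $J$ flow back through the exp, tanh, and softmax squashings; the NumPy version only illustrates the arithmetic.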
ryMxXPFex
[{"section_index": "0", "section_name": "DISCRETE VARIATIONAL AUTOENCODERS", "section_text": "Figures 11, 12, and 13 repeat the analysis of Figure 5 for statically binarized MNIST, Omniglot. and Caltech-101 Silhouettes. Specifically, they show the generative output of a discrete VAE as. the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of. continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs. sampling passes through the large, low-probability space between the modes only infrequently. As a. result, consistency of the object class over many successive rows in Figures 11, 12, and 13 indicates that the RBM prior has well-separated modes..\nJason Tyler Rolfe\na d qlog p= q : log Zp = E do da\nThe hierarchical structure of Section 4 is very powerful, and overfits without strong regularizatior of the prior, as shown in Appendix H. In contrast, powerful approximating posteriors do not induce significant overfitting. To address this problem, we use conditional distributions over the inpu p(x[(, 0) without any deterministic hidden layers, except on Omniglot. Moreover, all other neura networks in the prior have only one hidden layer, the size of which is carefully controlled. Or statically binarized MNIST, Omniglot, and Caltech-101, we share parameters between the layers oi the hierarchy over 3. We present the details of the architecture in Appendix H.\nC AUGMENTING DISCRETE LATENT VARIABLES WITH CONTINUOUS LATENT VARIABLES\nOn statically binarized MNIST, the RBM still learns distinct, separated modes corresponding to most of the different digit types. However, these modes are not as well separated as in dynamically binarized MNIST, as is evident from the more rapid switching between digit types in Figure 11. There are not obvious modes for Omniglot in Figure 12; it is plausible that an RBM with 128 units could not represent enough well-separated modes to capture the large number of distinct character types in the Omniglot dataset. On Caltech-101 Silhouettes, there may be a mode corresponding to large, roughly convex blobs.\nqEp=-Ep[z.Wz+bz]\nIntuitively, variational autoencoders break the encoder13 distribution into \"packets\"' of probability of infinitessimal but equal mass, within which the value of the latent variables is approximately constant. These packets correspond to a region r; < Pi < r; + for all i in Equation 16, and the expectation is taken over these packets. There are more packets in regions of high probability, sc high-probability values are more likely to be selected. More rigorously, Fq(z|,) (S) maps intervals of high probability to larger spans of 0 p 1, so a randomly selected p ~ U [0, 1] is more likely to be mapped to a high-probability point by F-1 q(z|x.o) (p).\nProbabilistic models with discrete latent variables naturally capture datasets com- posed of discrete classes. However, they are difficult to train efficiently, since. backpropagation through discrete variables is generally not possible. We present. a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through. the discrete latent variables. The associated class of probabilistic models com- prises an undirected discrete component and a directed hierarchical continuous. component. 
The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.

We train the resulting discrete VAEs on the permutation-invariant MNIST (LeCun et al., 1998), Omniglot^8 (Lake et al., 2013), and Caltech-101 Silhouettes datasets (Marlin et al., 2010). For MNIST, we use both the static binarization of Salakhutdinov & Murray (2008) and dynamic binarization. Estimates of the log-likelihood^9 of these models, computed using the method of Burda et al. (2016) with $10^4$ importance-weighted samples, are listed in Table 1. The reported log-likelihoods for discrete VAEs are the average of 16 runs; the standard deviations of these log-likelihoods are 0.08, 0.04, 0.05, and 0.11 for dynamically and statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, respectively. Removing the RBM reduces the test set log-likelihood by 0.09, 0.37, 0.69, and 0.66.

$$\bar{z} = \mathbb{E}_q[z] = q(z=1)$$

The approximating posterior q is continuous, with nonzero derivative, so the reparameterization trick can be applied to backpropagate gradients:

$$\frac{\partial}{\partial\phi}\,\mathbb{E}_q\!\left[z^\top W z\right] = \mathbb{E}_\rho\!\left[\frac{\partial}{\partial\phi}\, z^\top W z\right], \qquad z^\top W z = \sum_{i,j} W_{ij}\, z_i z_j,$$

depends upon variables that are not usually in the same hierarchical level, so in general

MNIST (dynamic binarization)      LL        MNIST (static binarization)         ELBO      LL
DBN                            -84.55       HVI                               -88.30  -85.51
IWAE                           -82.90       DRAW                              -87.40
Ladder VAE                     -81.74       NAIS NADE                                 -83.67
Discrete VAE                   -80.15       Normalizing flows                 -85.10
                                            Variational Gaussian process      -81.32
                                            Discrete VAE                      -84.58  -81.01

Omniglot                          LL        Caltech-101 Silhouettes              LL
IWAE                          -103.38       IWAE                              -117.2
Ladder VAE                    -102.11       RWS SBN                           -113.3
RBM                           -100.46       RBM                               -107.8
DBN                           -100.45       NAIS NADE                         -100.0
Discrete VAE                   -97.43       Discrete VAE                       -97.6

In contrast, REINFORCE (Equation 18) breaks the latent representation into segments of infinitesimal but equal volume; e.g., $z_i \le z_i' < z_i + \delta$ for all $i$ (Williams, 1992; Mnih & Gregor, 2014; Bengio et al., 2013). The latent variables are also approximately constant within these segments, but the probability mass varies between them. Specifically, the probability mass of the segment $z \le z' < z + \delta$ is proportional to $q(z|x,\phi)$.
"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Unsupervised learning of probabilistic models is a powerful technique, facilitating tasks such as denoising and inpainting, and regularizing supervised tasks such as classification (Hinton et al., 2006; Salakhutdinov & Hinton, 2009; Rasmus et al., 2015). Many datasets of practical interest are projections of underlying distributions over real-world objects into an observation space; the pixels of an image, for example. When the real-world objects are of discrete types subject to continuous transformations, these datasets comprise multiple disconnected smooth manifolds. For instance, natural images change smoothly with respect to the position and pose of objects, as well as scene lighting.
At the same time, it is extremely difficult to directly transform the image of a person to on of a car while remaining on the manifold of natural images\nwhere without loss of generality z; is in an earlier hierarchical layer than z; however, it is not clear how to take the derivative of zi, since it is a discontinuous function of Pk<i..\nOnce a segment is selected in the latent space, its location is independent of the encoder and decoder In particular, the gradient of the loss function does not depend on the gradient of the decoder with respect to position in the latent space, since this position is fixed. Only the probability mass assigned to the segment is relevant.\nThe naive approach would be to take the gradient of the expectation using the gradient of log probabilities over all variables:\nAlthough variational autoencoders can make use of the additional gradient information from the decoder, the gradient estimate is only low-variance so long as the motion of most probability packets has a similar effect on the loss. This is likely to be the case if the packets are tightly clustered (e.g., the encoder produces a Gaussian with low variance, or the spike-and-exponential distribution of Section 2.1), or if the movements of far-separated packets have a similar effect on the total loss (e.g., the decoder is roughly linear).\na d E[WijZiZj] = E g9 Og qk|l<k 9211 k 1 1,q2|1 i Z qk|l<k ) k\nIt would be natural to represent the space within each disconnected component with continuous vari ables, and the selection amongst these components with discrete variables. In contrast, most state. of-the-art probabilistic models use exclusively discrete variables - as do DBMs (Salakhutdinov & Hinton, 2009), NADEs (Larochelle & Murray, 2011), sigmoid belief networks (Spiegelhalter & Lau. ritzen, 1990; Bornschein et al., 2016), and DARNs (Gregor et al., 2014) - or exclusively continuous variables as do VAEs (Kingma & Welling, 2014; Rezende et al., 2014) and GANs (Goodfellow et al., 2014).1 Moreover, it would be desirable to apply the efficient variational autoencoder frame. work to models with discrete values, but this has proven difficult, since backpropagation througl. discrete variables is generally not possible (Bengio et al., 2013; Raiko et al., 2015)..\nTable 1: Test set log-likelihood of various models on the permutation-invariant MNIST, Omniglot,. and Caltech-101 Silhouettes datasets. For the discrete VAE, the reported log-likelihood is estimated with 104 importance-weighted samples (Burda et al., 2016). For comparison, we also report perfor- mance of some recent state-of-the-art techniques. Full names and references are listed in Appendix I.\n2014) or standout (Ba & Frey, 2013) regularization. Like dropout and standout, element-wise stochastic nonlinearity applied to a hidden layer. Since F-1 q(z|x,g) (p) selects a point in the probability distribution, it rarely selects an improbable point. Like standout, the distribution of the hidden layer is learned. Indeed, we recover the encoder of standout if we use the spike-and. Gaussian distribution of Section E.1 and let the standard deviation o go to zero..\nFor since those terms can be pulled out of the expectation over qk, and we can apply Equation 27. However, for terms involving zi>k or zj>k that occur hierarchically after k, the expected value of zi or z; depends upon the chosen value of zk.\nWe further analyze the performance of discrete VAEs on dynamically binarized MNIST: the larges. of the datasets, requiring the least regularization. 
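The variance gap between the naive log-derivative (REINFORCE) gradient discussed above and a baseline-corrected estimator can be illustrated numerically. The sketch below estimates $\partial/\partial\phi\,\mathbb{E}_{q_\phi}[f(z)]$ for factorial Bernoulli latents with a toy quadratic objective; the objective and the input-independent scalar baseline are illustrative assumptions in the spirit of NVIL, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def f(z):
    # Toy objective over a vector of binary latents (stand-in for log p(x|z)).
    return ((z - 0.45) ** 2).sum()

def reinforce_grad(phi, n_samples=1000, use_baseline=True):
    """Score-function (REINFORCE) estimate of d/dphi E_q[f(z)],
    with an optional input-independent baseline to reduce variance."""
    q = sigmoid(phi)                           # q(z_i = 1)
    z = (rng.random((n_samples, phi.size)) < q).astype(float)
    fz = np.array([f(zi) for zi in z])         # (n_samples,)
    score = z - q                              # d/dphi log q(z|phi) for Bernoulli logits
    b = fz.mean() if use_baseline else 0.0     # baseline: E[score] = 0, so no bias
    return ((fz - b)[:, None] * score).mean(axis=0)

phi = np.zeros(8)                              # 8 latent units, q = 0.5
g_naive = np.stack([reinforce_grad(phi, use_baseline=False) for _ in range(50)])
g_base  = np.stack([reinforce_grad(phi, use_baseline=True)  for _ in range(50)])
print("variance, no baseline:", g_naive.var(axis=0).mean())
print("variance, baseline:   ", g_base.var(axis=0).mean())
```

Because the baseline is multiplied by a zero-mean score, it leaves the expectation unchanged while cancelling much of the shared fluctuation in $f(z)$, which is exactly the role it plays in the gradient estimates above.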
Figure 5 shows the generative output of a discrete. VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held con. stant across each sub-row of five samples, and variation amongst these samples is due to the layer.. of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibb. sampling passes through the large, low-probability space between the modes only infrequently. A. a result, consistency of the digit class over many successive rows in Figure 5 indicates that the RBM. prior has well-separated modes. The RBM learns distinct, separated modes corresponding to the. different digit types, except for 3/5 and 4/9, which are either nearby or overlapping; at least tens o.\nWe introduce a novel class of probabilistic models, comprising an undirected graphical model de. fined over binary latent variables, followed by multiple directed layers of continuous latent variables. This class of models captures both the discrete class of the object in an image, and its specific con. tinuously deformable realization. Moreover, we show how these models can be trained efficientl using the variational autoencoder framework, including backpropagation through the binary laten. variables. We ensure that the evidence lower bound remains tight by incorporating a hierarchical. approximation to the posterior distribution of the latent variables, which can model strong corre. lations. Since these models efficiently marry the variational autoencoder framework with discrete. latent variables, we call them discrete variational autoencoders (discrete VAEs)..\nThe gradient calculation in Equation 28 is an instance of the REINFORCE algorithm (Equation 18 Moreover, the variance of the estimate is proportional to the number of terms (to the extent that th terms are independent). The number of terms contributing to each gradient cally with number of units in the RBM. We can introduce a baseline, as in NVIL (Mnih & Gregor 2014):\n8we use the partitioned, preprocessed Omniglot dataset of Burda et al. (2016), available from https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT.\nW ijZiZj - og q\n13Since the approximating posterior q(z|x, ) maps each input to a distribution over the latent space, it is sometimes called the encoder. Correspondingly, since the conditional likelihood p(x[z, 0) maps each configu ration of the latent variables to a distribution over the input space, it is called the decoder..\n9The importance-weighted estimate of the log-likelihood is a lower bound, except for the log partitior. function of the RBM. We describe our unbiased estimation method for the partition function in Appendix H.1\nSpike-and-slab RBMs (Courville et al., 2011) use both discrete and continuous latent variables\nbatch normalization (Ioffe & Szegedy, 2015) (but see Appendix H.2), and a rectified-linear point-. wise nonlinearity (ReLU). We stochastically approximate the expectation with respect to the RBM. prior p(z[0) in Equation 11 using block Gibbs sampling on persistent Markov chains, analogous to persistent contrastive divergence (Tieleman, 2008). We minimize the ELBO using ADAM (Kingma. & Ba, 2015) with a decaying step size.\nrequired to capture the variation of p(x|z, 0) in all directions; fewer samples span a smaller subspace. Since the latent representation commonly consists of dozens of variables, the REINFORCE gradi. 
ent estimate can be much less efficient than one that makes direct use of the gradient of p(x[z, 0) Moreover, we will show in Section 5 that, when the gradient is calculated efficiently, hundreds of. latent variables can be used effectively.."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "As the parameters of the encoder are changed, the location of a packet can move, while its mass is held constant. That is, = F-! with a region of p-space is constant by definition. So long as F-1 small change in will correspond to a small change in the location of each packet. This allows us to use the gradient of the decoder to estimate the change in the loss function, since the gradient of the decoder captures the effect of small changes in the location of a selected packet in the latent space.\nwith a region of p-space is constant by definition. So long as I exists and is differentiable, a\nEp[WijZiZj] =Wij.Epk<i Zi:Eek>i[zj]]\na E|Wi2 )g 0 dd log qk|l<k (2 121 dd 1 921 do qk|l<k\nHowever, variational autoencoders cannot be used directly with discrete latent representations, since changing the parameters of a discrete encoder can only move probability mass between the allowed discrete values, which are far apart. If we follow a probability packet as we change the encoder parameters, it either remains in place, or jumps a large distance. As a result, the vast majority of probability packets are unaffected by small changes to the parameters of the encoder. Even if we are lucky enough to select a packet that jumps between the discrete values of the latent representation,\nJqk|l<k terms are independent). The number of terms contributing to each gradien grows quadrati\nLO X G 3\nthe gradient of the decoder cannot be used to accurately estimate the change in the loss functior since the gradient only captures the effect of very small movements of the probability packet\nConventionally, unsupervised learning algorithms maximize the log-likelihood of an observed dataset under a probabilistic model. Even stochastic approximations to the gradient of the log likelihood generally require samples from the posterior and prior of the model. However, sampling from undirected graphical models is generally intractable (Long & Servedio, 2010), as is sampling. from the posterior of a directed graphical model conditioned on its leaf variables (Dagum & Luby 1993).\nWhen using the spike-and-exponential, spike-and-slab, or spike-and-Gaussian distributions of sec- tions 2.1 D.2, and E.1, we can decompose the gradient of E [W;zz;] using the chain rule. Previ- ously, we have considered z to be a function of p and . We can instead formulate z as a function oi q(z = 1) and p, where q(z = 1) is itself a function of p and . Specifically,\nTo use discrete latent representations in the variational autoencoder framework, we must first trans form to a continuous latent space, within which probability packets move smoothly. That is, we. must compute Equation 17 over a different distribution than the original posterior distribution. Sur prisingly, we need not sacrifice the original discrete latent space, with its associated approximating. posterior. Rather, we extend the encoder q(z[x, ) and the prior p(z[) with a transformation to continuous, auxiliary latent representation S, and correspondingly make the decoder a function o this new continuous representation. By extending both the encoder and the prior in the same way. 
we avoid affecting the remaining KL divergence in Equation 2.14.\nif p<1-qiz=1=qiz=0 i(qi(Z=1),Pi Otherwise.\nIn contrast to the exact log-likelihood, it can be computationally efficient to optimize a lower bound on the log-likelihood (Jordan et al., 1999), such as the evidence lower bound (ELBO, (x, 0, ): Hinton & Zemel, 1994):\nUsing the chain rule, dzi dqj(zj=1) . where dzi holds all qkj fixed, even j dqj(zj=1) dqj (zj=1) though they all depend on the common variables p and parameters $. We use the chain rule to differentiate with respect to q(z = 1) since it allows us to pull part of the integral over p inside the derivative with respect to $. In the sequel, we sometimes write q in place of q(z = 1) to minimize notational clutter.\nThe gradient is defined everywhere if we require that each point in the original latent space map to. nonzero probability over the entire auxiliary continuous space. This ensures that, if the probability of some point in the original latent space increases from zero to a nonzero value, no probability. packet needs to jump a large distance to cover the resulting new region in the auxiliary continuous. space. Moreover, it ensures that the conditional-marginal CDFs are strictly increasing as a functior. of their main argument, and thus are invertible..\nL(x,0,) = logp(x[0) - KL[q(z[x,$)|[p(z[x,0)]\nExpanding the desired gradient using the reparameterization trick and the chain rule, we find\nIf we ignore the cases where some discrete latent variable has probability 0 or 1, we need only. require that, for every pair of points in the original latent space, the associated regions of nonzero probability in the auxiliary continuous space overlap. This ensures that probability packets can move continuously as the parameters of the encoder, q(z[x, ), change, redistributing weight amongst. the associated regions of the auxiliary continuous space.\nL(x,0,$) =KL[q(z[x,$)][p(z[0)]+Eq logp(x[z,0)] KL term autoencoding term\nIn many cases of practical interest, such as Gaussian q(z[x) and p(z), the KL term of Equation 2 can be computed analytically. Moreover, a low-variance stochastic approximation to the gradient of the autoencoding term can be obtained using backpropagation and the reparameterization trick, so long as samples from the approximating posterior q(z[x) can be drawn using a differentiable, deterministic function f (x, , p) of the combination of the inputs, the parameters, and a set of input- and parameter-independent random variables p ~ D. For instance, samples can be drawn from a Gaussian distribution with mean and variance determined by the input, N (m(x, ), v(x, $)), using\nWe can change the order of integration (via the expectation) and differentiation since\nWi;ZiZj| Wij< 00\nD ALTERNATIVE TRANSFORMATIONS FROM DISCRETE TO CONTINUOUS LATENT REPRESENTATIONS\nfor all p and bounded (Cheng, 2006). Although z(q, p) is a step function, and its derivative is. a delta function, the integral (corresponding to the expectation with respect to p) of its derivative is finite. Rather than dealing with generalized functions directly, we apply the definition of the. derivative, and push through the matching integral to recover a finite quantity..\n1 a q(z|x,s) [logp(x|z,0)] logp(xf(x,P,$),0) N O~\nFor simplicity, we pull the sum over k out of the expectation in Equation 30, and consider eacl summand independently. From Equation 29, we see that z; is only a function of qi, so all terms in the sum over k in Equation 30 vanish except k = i and k = j. 
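As an aside, the low-variance pathwise estimator of Equation 3 can be checked on a one-dimensional example where the inverse CDF is available in closed form. The exponential distribution below is an illustrative stand-in (not part of the model), and a central finite difference through the fixed noise $\rho$ replaces backpropagation purely to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reparameterize z ~ Exponential(rate=lam) via the inverse CDF:
# F(z) = 1 - exp(-lam * z)  =>  z = F^{-1}(rho) = -log(1 - rho) / lam
def z_of(rho, lam):
    return -np.log(1.0 - rho) / lam

lam = 2.0
rho = rng.random(100_000)        # rho ~ U[0, 1], held fixed across the perturbation
eps = 1e-4

# Pathwise estimate of d/dlam E[f(z)] with f(z) = z**2:
f = lambda z: z ** 2
pathwise = (f(z_of(rho, lam + eps)) - f(z_of(rho, lam - eps))).mean() / (2 * eps)

# Analytic check: E[z**2] = 2 / lam**2, so the derivative is -4 / lam**3.
print(pathwise, -4.0 / lam ** 3)
```

Holding $\rho$ fixed while differentiating corresponds to tracking each probability "packet" as the parameters move, which is why the estimate is low-variance compared to the score-function approach.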
Without loss of generality, we consider the term k = i; the term k = j is symmetric. Applying the definition of the gradient to on of the summands, and then analytically taking the expectation with respect to pi, we obtain:\nOWijZi(q,p)zj(q,p) dqi(Zi=1) E dqi(zi=1) do Wijzi(q+8qi,P)zj(q+8qi,P)-Wijzi(q,p)zj(q,p)dqi(zi=1) lim 8qi do 8qi(zi=1)->0 Wij1zj(qp)-Wi0zj(qp) dqi(zi=1) lim dqi 8qi(zi=1)->0 dqi do Pi=qi(zi=0) dqi(zi =1) d$ Pi=qi(zi=0)\nAs another concrete example, we consider a case where both r((i[zi = O) and r(Ci[zi = 1) are linear functions of Ci:\nwhere F is the conditional-marginal cumulative distribution function (CDF) defined by:\n2.(1-Si) if0S1 (CiZi 0) Frs|z=0)S)=2C-C 0 otherwise 2.Si,j if 0Si< 1 Fr(Si|zi=1)(C) 0, otherwise\nx F(x) = |x1,...,X OX\nFigure 6: Log likelihood versus the number of iterations of block Gibbs sampling per minibatch (a) the number of units in the RBM (b), and the number of layers in the approximating posterior over the RBM (c). Better sampling (a) and hierarchical approximating posteriors (c) support better per formance, but the network is robust to the size of the RBM (b).\nHowever, this generalization is only possible if the inverse of the conditional-marginal CDF exists and is differentiable.\nA formulation comparable to Equation 3 is not possible for discrete distributions, such as restricted Boltzmann machines (RBMs) (Smolensky. 1986):\n1\nThe third line follows from Equation 29, since z(q + 8qi, p) differs from z(q, p) only in the region of p of size Sqi around qi(zi = O) = 1 qi(Zi = 1) where zi(q + dqi,P) zi(q,p). Regardless of the choice of p, zj(q + dqi,P) = zj(q, P).\nFq(c|x,o)C)=(1-qz=1|x, 1[x,) =2qz=1|x,9\nThe large mixing time of block Gibbs sampling on the RBM suggests that training may be con strained by sample quality. Figure 6a shows that performance1o improves as we increase the num. ber of iterations of block Gibbs sampling performed per minibatch on the RBM prior: p(z|0) in. Equation 11. This suggests that a further improvement may be achieved by using a more effective sampling algorithm, such as parallel tempering (Swendsen & Wang, 1986)..\nThe third line fixes p to the transition between z = 0 and z; = 1 at q(zi = O). Since zi = 0 implies (; = 0,17 and ( is a continuous function of p, the third line implies that C, = 0. At the same n dqi is not affected by time, since qi is only a function of Pk<i from earlier in the hierarchy, the term the choice of p,.18 As noted above, due to the chain rule, the perturbation dq, has no effect on other\n17we chose the conditional distribution r(S|zi = 0) to be a delta spike at zero. 181n contrast, zi is a function of Pi..\n14Rather than extend the encoder and the prior, we cannot simply prepend the transformation to continuous. space to the decoder, since this does not change the space of the probabilty packets\n10 All models in Figure 6 use only 10 layers of continuous latent variables, for computational efficiency\nwhere q(z|x, ) is a computationally tractable approximation to the posterior distribution p(z|x, 0) We denote the observed random variables by x, the latent random variables by z, the parameters of the generative model by 0, and the parameters of the approximating posterior by $. The variational. autoencoder (VAE; Kingma & Welling, 2014; Rezende et al., 2014; Kingma et al., 2014) regroups. 
the evidence lower bound of Equation 1 as:.\nEq[WijZiZj] Eo[Wi;ZiZj OWijZiZj dqi dqk(Zk =1) k\nFigure 5: Evolution of samples from a discrete VAE trained on dynamically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM out independent continuous latent variables, and shows the variation induced by the continuous ayers as opposed to the RBM. The long vertical sequences in which the digit ID remains constant demonstrate that the RBM has well-separated modes, each of which corresponds to a single (o) occasionally two) digit IDs, despite being trained in a wholly unsupervised manner.\nThe spike-and-exponential transformation from discrete latent variables z to continuous latent vari ables ( presented in Section 2.1 is by no means the only one possible. Here, we develop a collection of alternative transformations\n80.3 80.4 80 80.5 1 10 100 8 16 32 64 128 1 2 4 8 (a) Block Gibbs iterations (b) Num RBM units (c) RBM approx post layers\nThe reparameterization trick can be generalized to a large set of distributions, including nonfactorial approximating posteriors. We address this issue carefully in Appendix A, where we find that an analog of Equation 3 holds. Specifically, D; is the uniform distribution between 0 and 1, and\nOWijzi(q,p)zj(q,p) dqi(Zi=1) E p dqi(zi =1) do Wijzi(q+8qi,P)z(q+8qi,P)-Wizi(qp)z(q,p) dqi(Zi=1) lim 8qi(zi=1)->0 dqi do Wi1z(qp)-Wi0zj(qp) dqi(Zi=1) lim dqi 8qi(zi=1)->0 8qi do Pi=qiz=0 dqi(zi=1) do Pi=qi(zi=0)\nf(x)=F-1(x)\nwhere z E {0, 1}n, Zp is the partition function of p(z), and the lateral connection matrix W is. triangular. Any approximating posterior that only assigns nonzero probability to a discrete domain. corresponds to a CDF that is piecewise-contant. That is, the range of the CDF is a proper subset of the interval [0, 1]. The domain of the inverse CDF is thus also a proper subset of [0, 1], and its. derivative is not defined, as required in Equations 3 and 4.2.\nWe can calculate F-1 -' -> ( in Equation 19 to simplify notation:\nCommensurate with the small number of intrinsic classes, a moderately sized RBM yields the bes performance on MNIST. As shown in Figure 6b, the log-likelihood plateaus once the number of units in the RBM reaches at least 64. Presumably, we would need a much larger RBM to model a dataset like Imagenet, which has many classes and complicated relationships between the elements of various classes.\nSince p; is fixed such that (; = 0, all units further down the hierarchy must be sampled consis tent with this restriction. A sample from p has (i = O if z = 0, which occurs with probability qi(zi = 0).19 We can compute the gradient with a stochastic approximation by multiplying each. sample by 1 - zi, so that terms with (i 0 are ignored,20 and scaling up the gradient when z = 0. by"}, {"section_index": "3", "section_name": "1.2 RELATED WORK", "section_text": "The benefit of the hierarchical approximating posterior over the RBM, introduced in Section 3, is apparent from Figure 6c. The reduction in performance when moving from 4 to 8 layers in the approximating posterior may be due to the fact that each additional hierarchical layer over the ap- proximating posterior adds three layers to the encoder neural network: there are two deterministic hidden layers for each stochastic latent layer. 
As a result, expanding the number of RBM approx imating posterior layers significantly increases the number of parameters that must be trained, and increases the risk of overfitting.\nRecently, there have been many efforts to develop effective unsupervised learning techniques by building upon variational autoencoders. Importance weighted autoencoders (Burda et al., 2016) Hamiltonian variational inference (Salimans et al., 2015), normalizing flows (Rezende & Mohamed. 2015), and variational Gaussian processes (Tran et al., 2016) improve the approximation to the pos. terior distribution. Ladder variational autoencoders (Sonderby et al., 2016) increase the power of the. architecture of both approximating posterior and prior. Neural adaptive importance sampling (Di. et al., 2015) and reweighted wake-sleep (Bornschein & Bengio, 2015) use sophisticated approxi. mations to the gradient of the log-likelihood that do not admit direct backpropagation. Structured. variational autoencoders use conjugate priors to construct powerful approximating posterior distri. butions (Johnson et al., 2016).\nq2+2p-1)q+(1-p 2q - 1 q-1)2+2q-1)p q - 2q - 1\nWe avoid this problem by symmetrically projecting the approximating posterior and the prior into a continuous space. We then evaluate the autoencoding term of the evidence lower bound exclusivel in the continous space, marginalizing out the original discrete latent representation. At the sam time, we evaluate the KL divergence between the approximating posterior and the true prior in the original discrete space; due to the symmetry of the projection into the continuous space, it does no contribute to the KL term. To increase representational power, we make the approximating posterio over the discrete latent variables hierarchical, and add a hierarchy of continuous latent variable below them. The resulting discrete variational autoencoder achieves state-of-the-art performance or the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.\nG MOTIVATION FOR BUILDING APPROXIMATING POSTERIOR AND PRIOR HIERARCHIES IN THE SAME ORDER\np = 0.8 0.8 0.6 p = 0.5 p = 0.2 0.4 0.2 0 0.2 0.4 0.6 0.8 1 q(z =1|x,$)\nPrior efforts by Makhzani et al. (2015) to use multimodal priors with implicit discrete variables governing the modes did not successfully align the modes of the prior with the intrinsic clusters of the dataset. Rectified Gaussian units allow spike-and-slab sparsity in a VAE, but the discrete variables are also implicit, and their prior factorial and thus unimodal (Salimans, 2016). Graves (2016) computes VAE-like gradient approximations for mixture models, but the component models are assumed to be simple factorial distributions. In contrast, discrete VAEs generalize to powerful multimodal priors on the discrete variables. and a wider set of mappings to the continuous units\nIntuition regarding the difficulty of approximating the posterior distribution over the latent variables. given the data can be developed by considering sparse coding, an approach that uses a basis set of. spatially locallized filters (Olshausen & Field, 1996). The basis set is overcomplete, and there are. generally many basis elements similar to any selected basis element. However, the sparsity prior. pushes the posterior distribution to use only one amongst each set of similar basis elements.\nAs a result, there is a large set of sparse representations of roughly equivalent quality for any single input. 
Each basis element individually can be replaced with a similar basis element. However having changed one basis element, the optimal choice for the adjacent elements also changes so the filters mesh properly, avoiding redundancy or gaps. The true posterior is thus highly correlated. since even after conditioning on the input, the probability of a given basis element depends strongly on the selection of the adjacent basis elements."}, {"section_index": "4", "section_name": "ACKNOWLEDGEMENTS", "section_text": "The generative model underlying the discrete variational autoencoder resembles a deep belief net- work (DBN; Hinton et al., 2006). A DBN comprises a sigmoid belief network, the top layer of which is conditioned on the visible units of an RBM. In contrast to a DBN, we use a bipartite Boltz- mann machine, with both sides of the bipartite split connected to the rest of the model. Moreover, all hidden layers below the bipartite Boltzmann machine are composed of continuous latent variables with a fully autoregressive layer-wise connection architecture. Each layer j receives connections from all previous layers i < j, with connections from the bipartite Boltzmann machine mediated by a set of smoothing variables. However, these architectural differences are secondary to those in the gradient estimation technique. Whereas DBNs are traditionally trained by unrolling a succession of RBMs, discrete variational autoencoders use the reparameterization trick to backpropagate through the evidence lower bound.\nFigure 7: Inverse CDF of the mixture of ramps transformation for p E {0.2, 0.5, 0.8\nIn Equation 20, F-1 is concave-up; if p > 0.5, F-1 is concave-down; if p ~ 0.5, F-1 is sigmoid. In no case is F-1 extremely flat, so it does not kill gradients. In contrast, the sigmoid probability of z inevitably flattens."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems, pp. 3084-3092, 2013."}, {"section_index": "6", "section_name": "D.2 SPIKE-AND-SLAB", "section_text": "Yoshua Bengio, Nicholas Leonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiy preprint arXiv:1308.3432. 2013."}, {"section_index": "7", "section_name": "H ARCHITECTURE", "section_text": "2 BACKPROPAGATING THROUGH DISCRETE LATENT VARIABLES BY ADDINC CONTINUOUS LATENT VARIABLES\nThe stochastic approximation to the ELBO is computed via one pass down the approximating pos terior (Figure 4a), sampling from each continuous latent layer ( and 3m>1 in turn; and another pass down the prior (Figure 4b), conditioned on the sample from the approximating posterior. In the pass down the prior, signals do not flow from layer to layer through the entire model. Rather, the input to each layer is determined by the approximating posterior of the previous layers, as follows from Equation 14. The gradient is computed by backpropagating the reconstruction log-likelihood, and the KL divergence between the approximating posterior and true prior at each layer, through this differentiable structure.\nif C = 0 Fr(ci|z;=0)(C')=1 otherwise if 0<S1 Fr(5i|z=1)(C')= Si| otherwise\nWhen working with an approximating posterior over discrete latent variables, we can effectively smooth the conditional-margina1 CDF (defined by Equation 5 and Appendix A) by augmenting the latent representation with a set of continous random variables. 
The conditional-marginal CDF over the new continuous variables is invertible and its inverse is differentiable, as required in Equations 3 and 4. We redefine the generative model so that the conditional distribution of the observed variables given the latent variables only depends on the new continuous latent space. This does not alter\n3Strictly speaking, the prior contains a bipartite Boltzmann machine, all the units of which are connected to. the rest of the model. In contrast to a traditional RBM, there is no distinction between the \"visible' units and the \"hidden' units. Nevertheless, we use the familiar term RBM in the sequel, rather than the more cumbersome. fully hidden bipartite Boltzmann machine.'.\nSamuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. In Proceedings of the 2Oth SIGNLL Conference on Computational Natural Language Learning, pp. 10-21, 2016.\n191t might also be the case that G = 0 when zi = 1, but with our choice of r(S|z), this has vanishingly small probability. 20This. S01\nFq(c|x,x)(C') =(1-q(z =1|x,$)) Fr(5i|z;=0)(C')+ q(z =1|x,$) Fr(Si|z;=1)S =qz=1x,)-1)+1.\nIn the following sections, we present the discrete variational autoencoder (discrete VAE), a hierar chical probabilistic model consising of an RBM.3 followed by multiple directed layers of continuous latent variables. This model is efficiently trainable using the variational autoencoder formalism, as in Equation 3, including backpropagation through its discrete latent variables.\n0 = 2: q -) +2-c2 0=(2q-1c2+21q-p 2(q-1)/4(1-2q+ q?)+4(2q-1) 2(2q - 1) q-1q2+2p-1q+1-p 2q - 1\nd 1- Zi dqi(zi=1) E[Wi;ziZj] =E 1-qiz=1)\nDatasets consisting of a discrete set of classes are naturally modeled using discrete latent variables However. it is difficult to train probabilistic models over discrete latent variables using efficient gradient approximations based upon backpropagation, such as variational autoencoders, since it is generally not possible to backpropagate through a discrete variable (Bengio et al., 2013).\nIt is easy to construct a stochastic approximation to the gradient of the ELBO that admits both discrete and continuous latent variables, and only requires computationally tractable samples. Un- fortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Mnih & Rezende, 2016), which we discuss in greater detail in Appendix B.\nZhengbing Bian, Fabian Chudak, Arash Vahdat helped run experiments. Jack Raymond provided. the library used to estimate the log partition function of RBMs. Mani Ranjbar wrote the cluster management system, and a custom GPU acceleration library used for an earlier version of the code.. We thank Evgeny Andriyash, William Macready, and Aaron Courville for helpful discussions; and. one of our anonymous reviewers for identifying the problem addressed in Appendix D.3..\nThese equivalent representations can easily be disambiguated by the successive layers of the rep. resentation. In the simplest case, the previous layer could directly specify which correlated set of basis elements to use amongst the applicable sets. We can therefore achieve greater efficiency by. inferring the approximating posterior over the top-most latent layer first. 
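A minimal sketch of sampling from such a hierarchy is given below, mirroring the factorization $q(z_j \mid \zeta_{i<j}, x, \phi)$ of the approximating posterior described in the text (Equation 10): the top-most group is inferred first, and each later group is conditioned on the continuous samples of the earlier ones. The per-group linear maps standing in for the networks $g_j$, the group size of 4, and the reuse of the spike-and-exponential smoothing are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def icdf(q, rho, beta=3.0):
    # Spike-and-exponential inverse CDF (see the sketch above).
    t = np.maximum((rho - (1.0 - q)) / np.maximum(q, 1e-12), 0.0)
    return np.where(rho <= 1.0 - q, 0.0, np.log1p(t * np.expm1(beta)) / beta)

def sample_hierarchical_posterior(x, weights, n_per_group=4):
    """Sample zeta group by group: the logits for group j are computed from
    the input and all previously sampled continuous variables, so each
    discrete layer depends on earlier layers only through zeta."""
    zetas = []
    for W, b in weights:
        context = np.concatenate([x] + zetas) if zetas else x
        q = 1.0 / (1.0 + np.exp(-(W @ context + b)))   # logistic of the logits
        rho = rng.random(n_per_group)                  # independent U[0,1] noise
        zetas.append(icdf(q, rho))
    return np.concatenate(zetas)

x = rng.normal(size=10)
dims = [10, 14, 18]   # context grows by n_per_group with each group
weights = [(0.1 * rng.normal(size=(4, d)), np.zeros(4)) for d in dims]
print(sample_hierarchical_posterior(x, weights))
```

Since every stochastic choice is driven by fixed uniform noise passed through differentiable inverse CDFs, the entire hierarchical sample remains a deterministic, differentiable function of the parameters, as required for the reparameterization trick.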
Only then do we compute. the conditional approximating posteriors of lower layers given a sample from the approximating. posterior of the higher layers, breaking the symmetry between representations of similar quality..\nWe can also use the spike-and-slab transformation, which is consistent with sparse coding and proven in other successful generative models (Courville et al., 2011):.\nOrg Bornschein, Samira Shabanian, Asja Fischer, and Yoshua Bengio. Bidirectional Helmholtz machines. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2511- 2519, 2016.\nx X q(z =1|x,) Z1 Z2 Z3 Z1 Z2 Z3 F q(S|x,s)(P) p(x|S,) (a) Approximating posterior q(S, z|x) (b) Prior p(x, (, z) (c) Autoencoding term\nAll hyperparameters were tuned via manual experimentation. Except in Figure 6, RBMs have 128. units (64 units per side, with full bipartite connections between the two sides), with 4 layers o. hierarchy in the approximating posterior. We use 100 iterations of block Gibbs sampling, with 20. persistent chains per element of the minibatch, to sample from the prior in the stochastic approxi. mation to Equation 11.\nifp>1-q H 1Sx otherwise\nYuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. Proceed- ings of the International Conference on Learning Representations, arXiv:1509.00519. 2016\nWhen using the hierarchy of continuous latent variables described in Section 4, discrete VAEs overfit. if any component of the prior is overparameterized, as shown in Figure 9a. In contrast, a larger. and more powerful approximating posterior generally did not reduce performance within the range. examined, as in Figure 9b. In response, we manually tuned the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the. use of parameter sharing in the prior. We list the selected values in Table 2. All neural networks. implementing components of the approximating posterior contain two hidden layers of 2000 units.\nWe plot F ) as a function of q for various values of p in Figure 8\nSteve Cheng. Differentiation under the integral sign with weak derivatives. Technical report, Work ing paper, 2006.\n0.8 (o)(9*a|1)b 0.6 p = 0.8 0.4 p = 0.5 0.2 -0.2 0 0 0.2 0.4 0.6 0.8 1 q(z =1|x,)\nKyungHyun Cho, Tapani Raiko, and Alexander Ilin. Enhanced gradient for training restricted Boltz mann machines. Neural Computation, 25(3):805-831, 2013.\nAaron C. Courville, James S. Bergstra, and Yoshua Bengio. Unsupervised models of images b spike-and-slab rbms. In Proceedings of the 28th International Conference on Machine Learning pp. 1145-1152, 2011.\nFigure 1: Graphical models of the smoothed approximating posterior (a) and prior (b), and the. network realizing the autoencoding term of the ELBO from Equation 2 (c). Continuous latent vari. ables ( are smoothed analogs of discrete latent variables z, and insulate z from the observed vari. ables x in the prior (b). This facilitates the marginalization of the discrete z in the autoencoding tern of the ELBO, resulting in a network (c) in which all operations are deterministic and differentiable. 
given independent stochastic input p ~ U [0, 1].\nFigure 8: Inverse CDF of the spike-and-slab transformation for p E {0.2, 0.5, 0.8\nthe fundamental form of the model, or the KL term of Equation 2; rather, it can be interpreted a adding a noisy nonlinearity, like dropout (Srivastava et al., 2014) or batch normalization with a smal. minibatch (Ioffe & Szegedy, 2015), to each latent variable in the approximating posterior and the. prior. The conceptual motivation for this approach is discussed in Appendix C..\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor mation Processing Systems. pp. 2672-2680. 2014.\nSpecifically, as shown in Figure 1a, we augment the latent representation in the approximating pos terior with continuous random variables c,4 conditioned on the discrete latent variables z of the RBM:\na OF aF 0 de de dz de OF de\nAlex Graves. Stochastic backpropagation through mixture density distributions. arXiv preprint arXiv:1607.05690, 2016\nFigure 9: Log likelihood on statically binarized MNIST versus the number of hidden units per neural. network layer, in the prior (a) and approximating posterior (b). The number of deterministic hidden. layers in the networks parameterizing the prior/approximating posterior is 1 (blue), 2 (red), 3 (green). in (a/b), respectively. The number of deterministic hidden layers in the final network parameterizing p(x[3) is 0 (solid) or 1 (dashed). All models use only 10 layers of continuous latent variables, with. no parameter sharing.\nq(S,z[x,$) = r(S[z):q(z[x,$) where r(S|z) =II r(Si|zi). 2\nwhere z = F-1(p). Consider the case where r(Si|z; = 0) and r(Ci|zi = 1) are unimodal, but have little overlap. For instance, both distributions might be Gaussian, with means that are many standard. deviations apart. For values of (i between the two modes, F(() ~ q(zi = O|x, ), assuming. without loss of generality that the mode corresponding to z; = 0 occurs at a smaller value of (; than dq r(G) even if r(S) ~ 0. In this case, the stochastic estimates of the gradient in equation 8, which depend upon. OF-\nThe support of r((|z) for all values of z must be connected, so the marginal distribution. q(C[x, ) = z r([z) : q(z[x, $) has a constant, connected support so long as 0 < q(z[x, ) < 1. We further require that r(C[z) is continuous and differentiable except at the endpoints of its support. so the inverse conditional-marginal CDF of q(C[x, ) is differentiable in Equations 3 and 4, as we discuss in Appendix A.\nAs shown in Figure 1b, we corresp pondingly augment the prior with (:\nGeoffrey E. Hinton and R. S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. In J. D. Cowan, G. Tesauro, and J. Alspector (eds.), Advances in Neural Information Processing Systems 6, pp. 3-10. Morgan Kaufmann Publishers, Inc., 1994..\np(C,z[0) =r(C[z):p(z[0)\np(x[S,z,0) =p(x[(,0)\nTable 2: Architectural hyperparameters used for each dataset. Successive columns list the numbe. of layers of continuous latent variables, the number of such continuous latent variables per laye. the number of deterministic hidden units per layer in the neural network defining each hierarchica. layer of the prior, and the use of parameter sharing in the prior. Smaller datasets require more. 
regularization, and achieve optimal performance with a smaller prior.

The smoothing distribution $r(\zeta|z)$ transforms the model into a continuous function of the distribution over $z$, and allows us to use Equations 2 and 3 directly to obtain low-variance stochastic approximations to the gradient.

Matthew Johnson, David K. Duvenaud, Alexander B. Wiltschko, Sandeep R. Datta, and Ryan P. Adams. Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems, pp. 2946-2954, 2016.

Given this expansion, we can simplify Equations 3 and 4 by dropping the dependence on $z$ and applying Equation 16 of Appendix A, which generalizes Equation 3:

$$\frac{\partial}{\partial\phi}\,\mathbb{E}_{q(\zeta,z|x,\phi)}\!\left[\log p(x|\zeta,z,\theta)\right] \;\approx\; \frac{1}{N}\sum_{\rho \sim U(0,1)^n} \frac{\partial}{\partial\phi}\,\log p\!\left(x \,\middle|\, F^{-1}_{q(\zeta|x,\phi)}(\rho),\, \theta\right)$$

On statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, we further regularize using recurrent parameter sharing. In the simplest case, each $p(\mathfrak{z}_m \mid \mathfrak{z}_{l<m}, \theta)$ and $p(x \mid \mathfrak{z}, \theta)$ is a function of $\sum_{l<m} \mathfrak{z}_l$, rather than a function of the concatenation $[\mathfrak{z}_0, \mathfrak{z}_1, \ldots, \mathfrak{z}_{m-1}]$. Moreover, all $p(\mathfrak{z}_{m \ge 1} \mid \mathfrak{z}_{l<m}, \theta)$ share parameters. The RBM layer $\mathfrak{z}_0$ is rendered compatible with this parameterization by using a trainable linear transformation of $\zeta$, $M \cdot \zeta$, where the number of rows in $M$ is

Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.

It is not necessary to define the transformation from discrete to continuous latent variables in the approximating posterior, $r(\zeta|z)$, to be independent of the input $x$. In the true posterior distribution,

4 We always use a variant of z for latent variables. This is zeta, or Greek z. The discrete latent variables z can conveniently be thought of as English z.

We can calculate $F^{-1}$ explicitly, using the substitution $q(z=1|x,\phi) \to q$ to simplify notation.

If the smoothing transformation is not chosen appropriately, the contribution of low-probability regions to the expected gradient of the inverse CDF may be large. Using a variant of the inverse function theorem, we find:

                     Num layers   Vars per layer   Hids per prior layer   Param sharing
MNIST (dyn bin)          18             64                 1000               none
MNIST (static bin)       20            256                 2000               2 groups
Omniglot                 16            256                  800               2 groups
Caltech-101 Sil          12             80                  100               complete

These high-variance gradient estimates arise because $r(\zeta_i|z_i=0)$ and $r(\zeta_i|z_i=1)$ are too well separated, and the resulting smoothing transformation is too sharp. Such disjoint smoothing transformations are analogous to a sigmoid transfer function $\sigma(c \cdot x)$, where $\sigma$ is the logistic function and $c \to \infty$. The smoothing provided by the continuous random variables $\zeta$ is only effective if there is a region of meaningful overlap between $r(\zeta|z=0)$ and $r(\zeta|z=1)$. In particular, $r(\zeta_i|z_i=0) + r(\zeta_i|z_i=1) > 0$ for all $\zeta_i$ between the modes of $r(\zeta_i|z_i=0)$ and $r(\zeta_i|z_i=1)$, so $r(\zeta)$ remains moderate in equation 21. In the spike-and-exponential distribution described in Section 2.1, this overlap can be ensured by fixing or bounding $\beta$.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, pp.
448-456, 2015.\nAs we shall demonstrate in Section 2.1, F-1 q(z = 1|x, ) is a deterministic probability value calculated by a parameterized function, such a a neural network. The autoencoder implicit in Equation 8 is shown in Figure 1c. Initially, input s is passed into a deterministic feedforward network q(z = 1|x, ), for which the final nonlinearity i the logistic function. Its output q, along with an independent random variable p ~ U[0, 1], is passe into the deterministic function F-1 input x, is finally passed to log p (x|$, 0). The expectation of this log probability with respect to p is the autoencoding term of the VAE formalism, as in Equation 2. Moreover, conditioned on the inpu and the independent p, this autoencoder is deterministic and differentiable, so backpropagation cai be used to produce a low-variance, computationally-efficient approximation to the gradient.\nOn datasets of intermediate size, a degree of recurrent parameter sharing somewhere between full independence and complete sharing is beneficial. We define the n group architecture by dividing the continuous latent layers 3m>1 into n equally sized groups of consecutive layers. Each such group is independently subject to recurrent parameter sharing analogous to the complete sharing architecture and the RBM layer 3o is independently parameterized.\npC p(S,x[z) = Z p(S[z,x): p(x[z)\nThis is implausible if the number of discrete latent variables is much smaller than the entropy of the input data distribution. To address this, we can define:.\nq(S,z[x,$) = q(z|x,$)q(S[z,x, p(S,z[0) =p(C[z):p(z0)\nWe use the spike-and-exponential transformation described in Section 2.1. The exponent is a train able parameter, but it is bounded above by a value that increases linearly with the number of training epochs. We use warm-up with strength 20 for 5 epochs, and additional warm-up of strength 2 on the RBM alone for 20 epochs (Raiko et al., 2007; Bowman et al., 2016; Sonderby et al., 2016)..\nThis leads to an evidence lower bound that resembles that of Equation 2, but adds an extra term:\nWhen p(x|3) is linear, all nonlinear transformations are part of the prior over the latent variables. In contrast. it is also possible to define the prior distribution over the continuous latent variables to be a simple factorial distribution, and push the nonlinearity into the final decoder p(x[3), as in. traditional VAEs. The former case can be reduced to something analogous to the latter case using. the reparameterization trick.\nAs a concrete example consistent with sparse coding, consider the spike-and-exponential transfor mation from binary z to continuous C:\nHowever, a VAE with a completely independent prior does not regularize the nonlinearity of th prior; whereas a hierarchical prior requires that the nonlinearity of the prior (via its effect on th true posterior) be well-represented by the approximating posterior. Viewed another way, a com pletely independent prior requires the model to consist of many independent sources of variance so the data manifold must be fully unfolded into an isotropic ball. A hierarchical prior allows th data manifold to remain curled within a higher-dimensional ambient space, with the approximatin posterior merely tracking its contortions. A higher-dimensional ambient space makes sense whe modeling multiple classes of objects. 
For instance, the parameters characterizing limb positions an orientations for people have no analog for houses.\nif G = 0 Fr(Si|zi=0)(C')=1 otherwise BeBs if OSi1 Fr(Si|zi=1)(S otherwise\nThe extension to hierarchical approximating posteriors proceeds as in sections 3 and 4\nIf both q(S|z, x, ) and p(C[z) are Gaussian, then their KL divergence has a simple closed form. which is computationally efficient if the covariance matrices are diagonal. However, while the gra. dients of this KL divergence are easy to calculate when conditioned on z, the gradients with respect. of q(z|x, ) in the new term seem to force us into a REINFORCE-like approach (c.f. Equation 18):\nBenjamin M Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles fo restricted Boltzmann machine learning. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pp. 509-516, 2010.\nd log q(z|x, $) dq(z|x, .KL[q(S|z,x,$)|p(S|z)] = IEq(z|x,s) KL[q(S|z,x,$)||p(S|z)] do Z (23) The reward signal is now KL [q(S[z, x, $)[[p(C[z)] rather than log p(x[z, 0), but the effect on the variance is the same, likely negating the advantages of the variational autoencoder in the rest of the loss function\ndq(z|x,$) d log q(z|x,$) KL[q(S|z,x,$)||p(S|z)] =Eq(z{x,s) KL[q(S|z,x,$)|p(S|z)] do do"}, {"section_index": "8", "section_name": "H.1 ESTIMATING THE LOG PARTITION FUNCTION", "section_text": "Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. Pro ceedings of the 31st International Conference on Machine Learning, pp. 1791-1799, 2014.\nWe estimate the log-likelihood by subtracting an estimate of the log partition function of the RBM. (log Zp from Equation 6) from an importance-weighted computation analogous to that of Burda et al.. (2016). For this purpose, we estimate the log partition function using bridge sampling, a variant. of Bennett's acceptance ratio method (Bennett, 1976; Shirts & Chodera, 2008), which produces. unbiased estimates of the partition function. Interpolating distributions were of the form p(x). and sampled with a parallel tempering routine (Swendsen & Wang, 1986). The set of smoothing. parameters in [0, 1] were chosen to approximately equalize replica exchange rates at 0.5. This standard criteria simultaneously keeps mixing times small, and allows for robust inference. We make a conservative estimate for burn-in (0.5 of total run time), and choose the total length of run and number of repeated experiments, to achieve sufficient statistical accuracy in the log partition. function. In Figure 10, we plot the distribution of independent estimations of the log-partition function for a single model of each dataset. These estimates differ by no more than about O.1, indicating that the estimate of the log-likelihood should be accurate to within about 0.05 nats..\nFq(S|x,s)(C')=(1-q(z=1|x,$))Fr(ci|z=0)(C)+ q(z=1|x,$)Fr(ci|zr=1)S = q(z = 1x, + 1 .\nAndriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. In Proceed ings of the 33rd International Conference on Machine Learning.. pp. 2188-2196. 2016\nHowever, whereas REINFORCE is high-variance because it samples over the expectation, we can perform the expectation in Equation 23 analytically, without injecting any additional variance Specifically, if q(z[x, ) and q(C[z, x, $) are factorial, with q((i[zi,x, ) only dependent on Zi, then KL [q(S|z, x, )|[p(S[z)] decomposes into a sum of the KL divergences over each variable, as\nIain Murray and Ruslan R. 
Salakhutdinov. Evaluating probabilities under high-dimensional latent. variable models. In Advances in Neural Information Processing Systems, pp. 1137-1144, 2009\nRadford M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71-113 1992.\nBruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996.\nOg if p>1-q otherwise\nJohn Paisley, David M. Blei, and Michael I. Jordan. Variational Baysian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning, 2012\nJudea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Mor gan Kaufmann, 1988.\nRather than traditional batch normalization (Ioffe & Szegedy, 2015), we base our batch normaliza tion on the L1 norm. Specifically, we use:"}, {"section_index": "9", "section_name": "E.1 SPIKE-AND-GAUSSIAN", "section_text": "We might wish q(C|z, x, $) to be a separate Gaussian for both values of the binary zi. However, it. is difficult to invert the CDF of the resulting mixture of Gaussians. It is much easier to use a mixture of a delta spike and a Gaussian, for which the CDF can inverted piecewise:.\ny=x-x Xbn. Os+0\nwhere x is a minibatch of scalar values. x denotes the mean of x. O indicates element-wise mul tiplication, e is a small positive constant, s is a learned scale, and o is a learned offset. For the approximating posterior over the RBM units, we bound 2 s 3, and -s o s. This helps ensure that all units are both active and inactive in each minibatch, and thus that all units are used.\n5In the limit -> oo, S = zi almost surely, and the continuous variables ( can effectively be removed from. the model. This trick can be used after training with finite to produce a model without smoothing variables C.\nYann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 86(11):2278-2324. 1998.\np(xS,0) :p(S[z,0):p(z|0) q(S|z,x,$) : q(z|x,$) : log q(S[z,x,$): q(z|x,$) =Eq(S|z,x,s).q(z|x,s) [logp(x|C,0)]- KL[q(z|x,$)||p(z|0)] >`q(z|x,$):KL[q(S|z,x,$)||p(S|z)]\nTo evaluate the autoencoder of Figure 1c, and through it the gradient approximation of Equation 8 we must invert the conditional-marginal CDF Fq(c|x,$):\nof the form IE KL[qi|pi] d log qi due to the identity explained in Equation 27. We then use the\nwhere we use the substitution q(z = 1|x, ) -> q to simplify notation. For all values of the inde-. pendent random variable p ~ U[0, 1], the function F~1 Fq((|x,g)(p) rectifies the input q(z = 1|x, $) if q 1 - p in a manner analogous to a rectified linear unit (ReLU), as shown in Figure 2a. It is. also quasi-sigmoidal, in that F-1 is increasing but concave-down if q > 1 - p. The effect of p on. F-1 is qualitatively similar to that of dropout (Srivastava et al., 2014), depicted in Figure 2b, or the. noise injected by batch normalization (Ioffe & Szegedy, 2015) using small minibatches, shown in Figure 2c.\nOther expansions to the continuous space are possible. In Appendix D.1, we consider the case where. both r(Ci|z; = 0) and r(S|z; = 1) are linear functions of (; in Appendix D.2, we develop a spike- and-slab transformation; and in Appendix E, we explore a spike-and-Gaussian transformation where the continuous ( is directly dependent on the input x in addition to the discrete z..\n0. if Gi < 0 q(Si|Zi=O,x,$)=0(Si) Fq(Si|zi=0,x,s)(Si) = H(Ci) otherwise Uai(x. 
q(Si|Zi=1,x,$) =N(q,i(x,$),3,i(x,$)) Fq(Si|zi=1,x,$) + erf x,\nAntti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko... Semi supervised learning with ladder networks. In Advances in Neural Information Processing Systems. pp. 3546-3554, 2015.\nwhere q(x, ) and q(x, $) are functions of x and $. We use the substitutions q(z; = 1|x, $) > q. q,i(x, $) -> q,i, and q,i(x, ) -> q,i in the sequel to simplify notation. The prior distribution p is similarly parameterized.\nWe can now find the CDF for q(C|x, ) as a function of q(z\nq(C|x (S) =(1-qiHC qi erf 2\nRuslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In Proceedings of th 12th International Conference on Artificial Intelligence and Statistics. r pp. 448-455. 2009\nSince z, = 0 makes no contribution to the CDF until C. = 0. the value of o at which (, = 0 is.\nFigure2: Inverse CDF of the spike-and-exponential smoothing transformation for. p E {0.2, 0.5, 0.8}; =1 (dotted), =3 (solid), and = 5 (dashed) (a). Rectifiedlinear unit with dropout rate 0.5 (b). Shift (red) and scale (green) noise from batch normalization; with magnitude 0.3 (dashed), -0.3 (dotted), or 0 (solid blue); before a rectified linear unit (c). In all. cases, the abcissa is the input and the ordinate is the output of the effective transfer function. The novel stochastic nonlinearity F-1 Fq(c|x,g)(p) from Figure 1c, of which (a) is an example, is qualitatively similar to the familiar stochastic nonlinearities induced by dropout (b) or batch normalization (c)\nif pi< Pi step Hq,i +20g.i erf- step+(1-qi 0 .i : erf-1 otherwise Uq,i+\nMichael R. Shirts and John D. Chodera. Statistically optimal analysis of samples from multiple equilibrium states. The Journal of Chemical Physics, 129(12), 2008.\nGradients are always evaluated for fixed choices of p, and gradients are never taken with respect to p. As a result, expectations with respect to p are invariant to permutations of p. Furthermore,\nPaul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In. D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing, volume 1, chapter 6 pp. 194-281. MIT Press, Cambridge, 1986.\nWhen a probabilistic model is defined in terms of a prior distribution p(z) and a conditional dis tribution p(x[z), the observation of x often induces strong correlations in the posterior p(z[x) due to phenomena such as explaining-away (Pearl, 1988). Moreover, we wish to use an RBM as the prior distribution (Equation 6), which itself may have strong correlations. In contrast, to maintain tractability, many variational approximations use a product of independent approximating posterior distributions (e.g., mean-field methods, but also Kingma & Welling (2014); Rezende et al. (2014)).\nFigure 10: Distribution of estimates of the log-partition function, using Bennett's acceptance ratio method with parallel tempering, for a single model trained on dynamically binarized MNIST (a) statically binarized MNIST (b), Omniglot (c), and Caltech-101 Silhouettes (d)\nif pi<1-q 2Pi .i : erf-1 otherwise qi\nDavid J. Spiegelhalter and Steffen L. Lauritzen. Sequential updating of conditional probabilities or directed graphical structures. Networks, 20(5):579-605, 1990\nAll parameters of the multivariate Gaussians should be trainable functions of x, and independent q. 
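Numerically, the piecewise inversion given above is straightforward to implement. The following is a minimal sketch, assuming scalar inputs, where q, mu, and sigma mirror q(z_i = 1|x, phi), mu_{q,i}(x, phi), and sigma_{q,i}(x, phi); this is an illustration of the formula, not the authors' code.

import numpy as np
from scipy.special import erf, erfinv

def sample_spike_and_gaussian(q, mu, sigma, rho):
    """Invert the CDF of the spike-and-Gaussian mixture
    F(zeta) = (1 - q) * H(zeta) + q * (1 + erf((zeta - mu) / (sqrt(2) * sigma))) / 2,
    where H is the Heaviside step at zero, for a uniform draw rho in [0, 1]."""
    # Probability mass contributed by the Gaussian component strictly below zero.
    rho_step = q * (1.0 + erf(-mu / (np.sqrt(2.0) * sigma))) / 2.0
    if rho < rho_step:
        # Left Gaussian tail: solve q * (1 + erf(.)) / 2 = rho for zeta.
        return mu + np.sqrt(2.0) * sigma * erfinv(2.0 * rho / q - 1.0)
    elif rho < rho_step + (1.0 - q):
        # The delta spike at zero absorbs probability mass (1 - q).
        return 0.0
    else:
        # Right Gaussian tail: remove the spike's mass before inverting.
        return mu + np.sqrt(2.0) * sigma * erfinv(2.0 * (rho - (1.0 - q)) / q - 1.0)

For example, sample_spike_and_gaussian(0.7, 0.5, 1.0, np.random.rand()) draws one smoothed value; as q approaches 1, the spike at zero is selected increasingly rarely.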
The new term in Equation 22 is:."}, {"section_index": "10", "section_name": "1 COMPARISON MODELS", "section_text": "q(Z1,S1,..,Zk,Sk|x,$) = r(Sj|zj) q(zj|Si<j,x,$) where 1<j<k e9j(Si<j,x,$)T:zj q(Zj|Si<j,x,$) (1+ e9zi(Si<j,x,$))\nIn Table 1, we compare the performance of the discrete variational autoencoder to a selection of recent, competitive models. For dynamically binarized MNIST, we compare to deep belief networks (DBN; Hinton et al., 2006), reporting the results of Murray & Salakhutdinov (2009); importance weighted autoencoders (IWAE; Burda et al., 2016); and ladder variational autoencoders (Ladde VAE; Sonderby et al., 2016).\nTijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, pp. 1064- 1071. ACM, 2008.\nFor the static MNIST binarization of (Salakhutdinov & Murray, 2008), we compare to Hamilto. nian variational inference (HVI; Salimans et al., 2015); the deep recurrent attentive writer (DRAW;. Gregor et al., 2015); the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015); deep latent Gaussian models with normalizing flows (Nor-. malizing flows; Rezende & Mohamed, 2015); and the variational Gaussian process (Tran et al.,. 2016).\nzj E {0, 1}^, and g;(Si<j, x, $) is a parameterized function of the inputs and preceding Si, such as. a neural network. The corresponding graphical model is depicted in Figure 3a, and the integratior of such hierarchical approximating posteriors into the reparameterization trick is discussed in Ap. pendix A. If each group z, contains a single variable, this dependence structure is analogous to tha. of a deep autoregressive network (DARN; Gregor et al., 2014), and can represent any distribution However, the dependence of z; on the preceding discrete variables zi<j is always mediated by the. continuous variables (i<j:\nRonald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992\nTo train q(z; = 1|x, ), we thus need to backpropagate KL [q(S|zj x, )[p((z; = 1)l into i\nA MULTIVARIATE VAES BASED ON THE CUMULATIVE DISTRIBUTION FUNCTION\nThis hierarchical approximating posterior does not affect the form of the autoencoding term in Equa tion 8. except to increase the depth of the autoencoder. as shown in Figure 3b. The deterministic probability value q(z; = 1|Si<, x, ) of Equation 10 is parameterized, generally by a neural net- work, in a manner analogous to Section 2. However, the final logistic function is made explicit in Equation 10 to simplify Equation 12. For each successive layer j of the autoencoder, input x and all previous (i< are passed into the network computing q(z = 1|Si<j, x, $). Its output qj, along with an\naKL[q][p] Pq,i - Pp,i 2 P,i aKL[q]|p] 1 Oq,i d0q,i qi P,i\nThe reparameterization trick is always possible if the cumulative distribution function (CDF) of q(z|x, $) is invertible, and the inverse CDF is differentiable, as noted in Kingma & Welling (2014) However, for multivariate distributions, the CDF is defined by:.\nFinally, for Caltech-101 Silhouettes, we compare to the importance-weighted autoencoder (IWAE. Burda et al., 2016), reporting the results of Li & Turner (2016); reweighted wake-sleep with a. deep sigmoid belief network (RwS SBN; Bornschein & Bengio, 2015); the restricted Boltzmann. machine (RBM; Smolensky, 1986), reporting the results of Cho et al. 
(2013); and the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015).\nX1 F(x) X\n6The continuous latent variables ( are divided into complementary disjoint groups (1, ...\n10-2 10-2 Frrrnee tr erntess 6 6 4 4 2 2 0 0 33.6 33.65 33.7 40.1 40.15 40.2 (a) MNIST (dyn bin). (b) MNIST (static bin). 10-2 .10-2 8 8 esnnnness 6 6 4 4 10 2 2 0 0 34.1 34.15 34.2 21.1 21.15 21.2 Log partition function estimate Log partition function estimate (c) Omniglot (d) Caltech-101 Silhouettes\n(d'x)f (0)(9'x|3)bj 1 0.8 x 0.3 p > 0.5 x:10.3 0.5 no noise .5 0.2 p < 0.5 F 0 0.2 0.4 0.6 0.8 1 -1 -0.5 0 0.5 1 -1 0.5 0 0.5 1 q(z =1|x,) x x (a) Spike-and-exp, E {1, 3, 5} (b) ReLU with dropout. (c) ReLU with batch norm\nRuslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pp. 872-879. ACM, 2008.\nstep qi Pq,i 1 + erf 2\n2Pi 2(p -1) +1 qi qi\nTo accommodate strong correlations in the posterior distribution while maintaining tractability, we introduce a hierarchy into the approximating posterior q(z[x) over the discrete latent variables. Specifically, we divide the latent variables z of the RBM into disjoint groups, z1,..., 2k,' and. define the approximating posterior via a directed acyclic graphical model over these groups:.\n>`q(z|x,$) :KL[q(S|z,x,$)|p(S|z)] = >q(Zi =1|x,)KL[q(Si|zi=1,x,$)|p(Si|zi=1)] Z, i +(1-q(zi =1|x,$)):KL[q(Si[zi =0,x,$)[[p(Si[zi = O)\nq(z|x,$) : KL[q(S|z,x,$)|Ip(S|z)] = >`q(zi=1|x,$)KL[q(Si|zi=1,x,$)||p(Si|zi=1) Z,i 1x 6D): KL.[a(C:z 0.x.0)lp(C(z; = 0)\nIf zi = O, then q(Si[zi = O,x,$) =p(Si[zi= 0,0), and KL[q(Si[zi= 0,x,$)|[p(Si[zi = O,0)]= 0 as in Section 2. The KL divergence between two multivariate Gaussians with diagonal covariance\n1 KL[ql|p]= log Op,i - log 0q,i 2\nOn Omniglot, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016);. ladder variational autoencoder (Ladder VAE; Spnderby et al., 2016); and the restricted Boltzmann machine (RBM; Smolensky, 1986) and deep belief network (DBN; Hinton et al., 2006), reporting. the results of Burda et al. (2015).\nx x q q q(z3 = 1|Si<3,x,) q1 92 q3 Z1 Z2 Z3 q3(S3|Si<3,x,q p(x|S,$) x (a) Hierarch approx post q((, z[x) (b) Hierarchical ELBO autoencoding term\n3 8 8 R 0 3 5\nThe multivariate CDF maps Rn [0, 1], and is generally not invertible.11\nIn place of the multivariate CDF, consider the set of conditional-marginal CDFs defined by:1\nKL[q]lp] = q(zi = 1|x, Ua Up. ) KL[q]lp] = q(zi = 1|x, a. 9 Z\nFor p, it is not useful to make the mean values of ( adjustable for each value of z, since this is redundant with the parameterization of the decoder. With fixed means, we could still parameterize the variance, but to maintain correspondence with the standard VAE, we choose the variance to be One.\nThat is, F,(x) is the CDF of x, conditioned on all x; such that i < h, and marginalized over. all xk such the j < k. The range of each F; is 0,1, so F maps the domain of the original distribution to p E [0, 1]n. To invert F, we need only invert each conditional-marginal CDF in turn,. conditioning x; = F-1(p) on x1 = F-1(p),..., j-1 = F1(p). These inverses exist so long as the conditional-marginal probabilities are everywhere nonzero. It is not problematic to effectively. define F-1(p) based upon x<j, rather than Pi<j, since by induction we can uniquely determine. 
Xi<j given Pi<j\nFigure 3: Graphical model of the hierarchical approximating posterior (a) and the network realizing the autoencoding term of the ELBO (b) from Equation 2. Discrete latent variables z; only depend on the previous zi<; through their smoothed analogs Ci<i. The autoregressive hierarchy allows the approximating posterior to capture correlations and multiple modes. Again, all operations in (b) are deterministic and differentiable given the stochastic input p.\nUsing integration-by-substition, we can compute the gradient of the ELBO by taking the expectation. of a uniform random variable p on [0, 1]n, and using F-1 of z on which p(x[z, 0) is conditioned. To perform integration-by-substitution, we will require the determinant of the Jacobian of F-1\nThe KL term of the ELBO (Equation 2) is not significantly affected by the introduction of additiona continuous latent variables (, so long as we use the same expansion r(C[z) for both the approximat ing posterior and the prior:\nFigure 11: Evolution of samples from a discrete VAE trained on statically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM. between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers. as opposed to the RBM. Vertical sequences in which the digit ID remains constant demonstrate that. the RBM has distinct modes, each of which corresponds to a single digit ID, despite being trained. in a wholly unsupervised manner.\nThe derivative of a CDF is the probability density function at the selected point, and F; is a simpl CDF when we hold fixed the variables x<; on which it is conditioned, so using the inverse functior. theorem we find:\nindependent random variable p ~ U[0, 1], is passed to the deterministic function F-1 q(Sj|Si<j,x,$) to produce a sample of Ss. Once all S; have been recursively computed, the full ( along with th original input x is finally passed to log p (x|, 0). The expectation of this log probability with respe. to p is again the autoencoding term of the VAE formalism, as in Equation 2..\nH1<j<kr(Sj|zj) :q(zj|Si<j,x) KL[q|p] = II r(Sj|zj) q(zj|Si<j,x) . log p(z):I1<j<kr(Sj|zj) 1<j<k I1<j<k 9(zj|Si<j,x) II r(Sj|zj) q(zj|Si<j,x) . log p(z) 1<j<k\nIn Appendix F, we show that the gradients of the remaining KL term of the ELBO (Equation 2) can be estimated stochastically using:"}, {"section_index": "11", "section_name": "SUPPLEMENTARY RESULTS", "section_text": "d dEp(z,0) 0Ep(z,0) KL[q][p] = Eq(z1|x,g de |Si<k,x,$) de (ze de\nTo highlight the contribution of the various components of our generative model, we investigate. performance on a selection of simplified models.21 First, we remove the continuous latent layers.. The resulting prior, depicted in Figure 1b, consists of the bipartite Boltzmann machine (RBM), the. smoothing variables (, and a factorial Bernoulli distribution over the observed variables x defined via. a deep neural network with a logistic final layer. This probabilistic model achieves a log-likelihood of 86.9 with 128 RBM units and 85.2 with 200 RBM units.\nThe gradient of Equation 24 with respect to the parameters 0 of the prior, p(z[0), can be es timated stochastically using samples from the approximating posterior, q(S, z[x, ), and the true prior, p(z|0). 
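Concretely, such a two-sample estimator can be sketched as follows. The helper energy_grad(z, theta), implementing the gradient of the energy E_p(z, theta) with respect to theta, is a hypothetical placeholder; the RBM-specific variant shown afterward assumes the bipartite energy -b.z - z_l^T W z_r.

import numpy as np

def kl_grad_wrt_prior(posterior_z, prior_z, energy_grad, theta):
    """Stochastic estimate of d/d(theta) KL[q || p] for a Boltzmann-family prior:
    the 'positive phase' averages dE/d(theta) over samples from the approximating
    posterior, and the 'negative phase' subtracts the same average over samples
    from the prior (e.g., a persistent Gibbs chain)."""
    positive = np.mean([energy_grad(z, theta) for z in posterior_z], axis=0)
    negative = np.mean([energy_grad(z, theta) for z in prior_z], axis=0)
    return positive - negative

def rbm_energy_grad_W(z, theta):
    """For the bipartite RBM energy -b.z - z_l^T W z_r, the gradient with respect
    to W is -z_l z_r^T, independent of the current value of W; z is assumed to
    split evenly into the two halves of the bipartite graph."""
    z_l, z_r = np.split(np.asarray(z), 2)
    return -np.outer(z_l, z_r)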
When the prior is an RBM, defined by Equation 6, we find:\na yg KL[q][p] = E\nmarginal CDFs F; are independent of the value of the later xk, j < k, over which they are marginal- ized. Moreover, the inverse conditional-marginal CDFs have the same dependence structure as F so the Jacobian of F-1 is also triangular. The determinant of a triangular matrix is the product of the diagonal elements\nIn particular, Equation 12 is substantially lower variance than the naive approach to calculate KL [q|[p], based upon REINFORCE.\ndEp(z,0) OEp(z,0) KL [q]p] = - )q(S,z|x,$ p(z|0) de de de S,z dEp(z,0) aEp(z,0) + Ep(z\\0) Z1x Si<k,x,$ de de\nNext, we further restrict the neural network defining the distribution over the observed variables x given the smoothing variables ( to consist of a linear transformation followed by a pointwise logistic. nonlinearity, analogous to a sigmoid belief network (SBN; Spiegelhalter & Lauritzen, 1990; Neal.. 1992). This decreases the negative log-likelihood to -92.7 with 128 RBM units and -88.8 with. 200 RBM units\n4 MODELLING CONTINUOUS DEFORMATIONS WITH A HIERARCHY OF CONTINUOUS LATENT VARIABLES\nWe can make both the generative model and the approximating posterior more powerful by adding additional layers of latent variables below the RBM. While these layers can be discrete, we focus or continuous variables, which have proven to be powerful in generative adversarial networks (Goodfel low et al., 2014) and traditional variational autoencoders (Kingma & Welling, 2014; Rezende et al. 2014). When positioned below and conditioned on a layer of discrete variables, continuous variables can build continuous manifolds, from which the discrete variables can choose. This complement the structure of the natural world, where a percept is determined first by a discrete selection of the types of objects present in the scene, and then by the position, pose, and other continuous attributes of these objects.\nWe then remove the lateral connections in the RBM, reducing it to a set of independent binary. random variables. The resulting network is a noisy sigmoid belief network. That is, samples are produced by drawing samples from the independent binary random variables, multiplying by an independent noise source, and then sampling from the observed variables as in a standard SBN With this SBN-like architecture, the discrete variational autoencoder achieves a log-likelihood of 97.0 with 200 binary latent variables..\nThe final expectation with respect to q(zk|S<k, x, $) can be performed analytically; all other expec- tations require samples from the approximating posterior. Similarly, for the prior, we must sample from the RBM, although Rao-Blackwellization can be used to marginalize half of the units\nFinally, we replace the hierarchical approximating posterior of Figure 3a with the factorial approxi. mating posterior of Figure 1a. This simplification of the approximating posterior, in addition to the prior, reduces the log-likelihood to -102.9 with 200 binary latent variables..\nSpecifically, we augment the latent representation with continuous random variables 3,' and define both the approximating posterior and the prior to be layer-wise fully autoregressive directed graphi cal models. We use the same autoregressive variable order for the approximating posterior as for the\nIn contrast, the gradient of the KL term with respect to the parameters of the approximating posterior is severely complicated by a nonfactorial approximating posterior. 
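Before breaking this divergence apart, it may help to see the nonfactorial structure concretely. The following is a minimal sketch of ancestral sampling through the hierarchical approximating posterior of Equation 10, assuming two hypothetical helpers: g(j, x, zetas), the network computing the logits for group j from the input and the preceding smoothed variables, and smooth_inverse_cdf, the inverse conditional-marginal CDF of the smoothing transformation (e.g., the spike-and-exponential transform).

import numpy as np

def sample_hierarchical_posterior(x, g, smooth_inverse_cdf, num_groups, group_size):
    """Draw zeta from q(z_1, zeta_1, ..., z_k, zeta_k | x, phi) group by group;
    later groups see only the smoothed zeta of earlier groups, never the raw z."""
    zetas = []
    for j in range(num_groups):
        logits = g(j, x, zetas)                  # depends on x and all zeta_{i<j}
        q_j = 1.0 / (1.0 + np.exp(-logits))      # q(z_j = 1 | zeta_{i<j}, x)
        rho = np.random.rand(group_size)         # independent uniform noise
        # Reparameterize: zeta_j = F^{-1}(rho) is deterministic and
        # differentiable in q_j for a fixed rho, as in Figure 1c.
        zetas.append(smooth_inverse_cdf(q_j, rho))
    return np.concatenate(zetas)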
We break KL [q[p] into two terms, the negative entropy z,c q log q, and the cross-entropy z,c q log p, and compute their gradients separately.\n21In all cases, we report the negative log-likelihood on statically binarized MNIST (Salakhutdinov & Mur ray, 2008), estimated with 104 importance weighted samples (Burda et al., 2016)..\nWe always use a variant of z for latent variables. This is Fraktur z, or German z\nx Fi X |X1,...,X X\nOF. 0 1 dpj F}(F-1(p)) 1 xj=F-(D)|xi<j\nWe can regroup the negative entropy term of the KL divergence so as to use the reparameterization trick to backpropagate through <a q(zjSi<i, x):\nUsing these facts to perform a multivariate integration-by-substitution, we obtain:\nEq(z[x,b) [logp(x[z,0)] q(z|x,) : logp(x[z, 0\n-H(q) = II r(Sj|zj) q(zj|Si<j,x) . log q(zj|Si<j,x) 1<j<k 1<j<k Ir(Sj|zj) q(zj|Si< Hr(Si|zi) q(zi|Sn<i,x) : log q(zj|Si<j,x i<j q(zj|Si<j,x):logq(zj|Si<j,x sq(Si<j,Zi<j|x,$ Epi<j )q(zj|Pi<j,x) logq(zj|Pi<j; Zj\nFigure 4: Graphical models of the approximating posterior (a) and prior (b) with a hierarchy o. continuous latent variables. The shaded regions in parts (a) and (b) expand to Figures 3a and 1 respectively. The continuous latent variables 3 build continuous manifolds, capturing properties like position and pose, conditioned on the discrete latent variables z, which can represent the discrete types of objects in the image.\nThe gradient with respect to $ is then easy to approximate stochastically:\n1 (z|x,s) [logp(x|z,0)] N p~U(0,1)n\nwhere indices i and j denote hierarchical groups of variables. The probability q(zj|Pi<j,x) i evaluated analytically, whereas all variables zi<j and (i< are implicitly sampled stochasticall. via Pi<j:\nFigure 12: Evolution of samples from a discrete VAE trained on Omniglot, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM.\norior, as in DRAw (Gregor et al., 2015), variational recurrent neural networks (Chung et al., 2015) he deep VAE of Salimans (2016), and ladder networks (Rasmus et al., 2015; Spnderby et al., 2016) We discuss the motivation for this ordering in Appendix G.\nWe wish to take the gradient of - H(q) in Equation 26. Using the identity:\nThe directed graphical model of the ap oroximating posterior and prior are defined by\nd ogq=c>` dc\nI1 q (3m|3l<m,x, $ . 0<m<n I1 30,...,3n(0) = p(3m|3l<m,0) 0<m<n"}, {"section_index": "12", "section_name": "B THE DIFFICULTY OF ESTIMATING GRADIENTS OF THE ELBO WITI REINFORCE", "section_text": "It is easy to construct a stochastic approximation to the gradient of the ELBO that only requires computationally tractable samples, and admits both discrete and continuous latent variables. Un fortunately, this naive estimate is impractically high-variance, leading to slow training and pooi performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Bengio et al., 2013; Mnih & Rezende, 2016):\nH(q) = Epi<j j|Pi<j,X\nThe full set of latent variables associated with the RBM is now denoted by 3o = { z1, S1, . . . , Zk, Sk} However, the conditional distributions in Equation 13 only depend on the continuous (. 
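Returning briefly to the REINFORCE estimator discussed in Appendix B above, a minimal sketch of the baseline-corrected estimate of Equation 18 is given below. The callables sample_q, log_p, grad_log_q, and baseline are hypothetical placeholders for the posterior sampler, the conditional log-likelihood log p(x|z, theta), the score function d log q(z|x, phi)/d(phi), and an input-dependent baseline B(x).

import numpy as np

def reinforce_gradient(x, sample_q, log_p, grad_log_q, baseline, n_samples=64):
    """Baseline-corrected REINFORCE estimate of
    d/d(phi) E_{q(z|x,phi)}[ log p(x|z,theta) ].
    Each sample contributes only a scalar reward times the score function; the
    gradient of log p is never used directly, which is why this estimator has
    much higher variance than a pathwise (reparameterized) gradient."""
    grads = []
    for _ in range(n_samples):
        z = sample_q(x)
        reward = log_p(x, z) - baseline(x)   # baseline reduces variance, not bias
        grads.append(reward * grad_log_q(x, z))
    return np.mean(grads, axis=0)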
Each 3m>1 denotes a layer of continuous latent variables, and Figure 4 shows the resulting graphical model.\nMoreover, we can eliminate any log-partition function in log q(z|Pi<j, x) by an argument analogou: to Equation 27.15 By repeating this argument one more time, we can break $q(zj|Pi<j, x) into its factorial components.16 If z; E {0,1}, then using Equation 10, gradient of the negative entropy reduces to:\nThe ELBO decomposes as:\nL(x,0,q) =Eq(3|x,s) [l0gp(x|3,0)]-Eq(31<m|x,p) [KL[q(3m|31<m,x,$)|P(3m|3l<m,0)]] m\nq(z|x,s) [logp(x|z,0)] =Eq(z|x,b) [logp(x|z,0) - B(x) 1 [logp(x|z,0) - B(x)] N z~q(z|x,$)\nq lEj Zl dg. =1- S(Zi dd\nIf both q(3m|31<m,x, $) and p(3m|31<m,0) are Gaussian, then their KL divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. Gradients can be passed through the q(3t<m[x, $) using the traditional reparameterization trick, described in Section 1.1.\nwhere B(x) is a (possibly input-dependent) baseline, which does not affect the gradient, but ca reduce the variance of a stochastic estimate of the expectation."}, {"section_index": "13", "section_name": "5 RESULTS", "section_text": "Discrete variational autoencoders comprise a smoothed RBM (Section 2) with a hierarchical approx imating posterior (Section 3), followed by a hierarchy of continuous latent variables (Section 4). We parameterize all distributions with neural networks, except the smoothing distribution r((|z) dis- cussed in Section 2. Like NVIL (Mnih & Gregor, 2014) and VAEs (Kingma & Welling, 2014: Rezende et al., 2014), we define all approximating posteriors q to be explicit functions of x, with parameters shared between all inputs x. For distributions over discrete variables, the neural net- works output the parameters of a factorial Bernoulli distribution using a logistic final layer, as in Equation 10; for the continuous 3, the neural networks output the mean and log-standard deviation of a diagonal-covariance Gaussian distribution using a linear final layer. Each layer of the neu- ral networks parameterizing the distributions over z. 3. and x consists of a linear transformation\ndqT(zj=1) a H(q) =Epi<j do\nFigure 13: Evolution of samples from a discrete VAE trained on Caltech-101 Silhouettes, using. persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM. between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the silhouette shape remains similar. demonstrate that the RBM has distinct modes, each of which corresponds to a single silhouette type. despite being trained in a wholly unsupervised manner..\nEquation 18 of REINFORCE captures much less information about p(x|z, 0) per sample than Equa- tion 3 of the variational autoencoder, which actively makes use of the gradient. In particular, the change of p(x|z, 0) in some direction d can only affect the REINFORCE gradient estimate if a sam- ple is taken with a component in direction d. In a D-dimensional latent space, at least D samples are\n31 32 33 31 32 33 (a) Approx post w/ cont latent vars q(3, (, z[x) (b) Prior w/ cont latent vars p(x, 3, (, z)\nB 8 b L b 4 ES D a] 149 1 5 i 5 3 5 9 P R LC R RE F 3 EAD F 3} 5 9 3 A FL 10 FG 3. 3 C D E F R A 11 L 8 + T A A 5 e 0 A t0 3 M G 3 dl B C D C L D C A a. 
15 de 5 C L E R G ~ G 3 R F3 C e FT 0 F t3\n(z|x, $) : logp(x[z, 0 xF det\nH(q)=> I r(Sj|zj) q(zj|Si<j,x) . log q(zj|Si<j,x 1<j<k 1<j<k Ir(Sj|zj):q(zj|Si<j,x log q(zj|Si<j,x L Ir(Si|zi):q(zi|Sn<i,x) : log q(zj|Si<j,x I Z i<j Eq(Si<j,zi<j|x,$) `q(zj|Si<j,x) l0gq(zj|Si<z Epi<j q(Zj|Pi<j,x) : logq(Zj|Pi<j,x (2\nThe variable p has dimensionality equal to that of z; O is the vector of all Os; 1 is the vector of all 1s\nNote that if q(z|x, ) is factorial (i.e., the product of independent distributions in each dimension zt).. then the conditional-marginal CDFs F; are just the marginal CDFs in each direction. However, even if q(z|x, ) is not factorial, Equation 17 still holds so long as F is nevertheless defined to be the set of conditional-marginal CDFs of Equation 15..\na lEj dg =1-qz=1)\ndifference approximation to the derivative. The autoencoding term is a function of the conditional. og-likelihood logp(x[z, 0), composed with the approximating posterior q(z[x, $), which deter mines the value of z at which p(x[z, 0) is evaluated. However, the conditional log-likelihood is never differentiated directly in REINFORCE, even in the context of the chain rule. Rather, the con ditional log-likelihood is evaluated at many different points z ~ q(z|x, ), and a weighted sum o1. these values is used to approximate the gradient, just like in the finite difference approximation..\nwhere t and z, correspond to single variables within the hierarchical groups denoted by j. In Ten- sorFlow, it might be simpler to write:."}]
ByG8A7cee
[{"section_index": "0", "section_name": "REFERENCE-AWARE LANGUAGE E MODELS", "section_text": "val test model all entity word all entity word 1m 33.08 44.52 32.04 33.08 43.86 32.10 pointer 32.57 32.07 32.62 32.62 32.07 32.69 pointer + init 30.43 28.56 30.63 30.42 28.56 30.66\nZichao Yang1 *, Phil Blunsom2,3, Chris Dyer1,2, and Wang Ling2 1Carnegie Mellon University, 2DeepMind, and 3University of Oxford\nTable 6: Coreference based LM. pointer + init means we initialize the model with the LM weights"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Referring expressions (REs) in natural language are noun phrases (proper nouns, common nouns, and pronouns) that identify objects, entities, and events in an environment. REs occur frequently and they play a key role in communicating information efficiently. While REs are common, previ- ous works neglect to model REs explicitly, either treating REs as ordinary words in the model or replacing them with special tokens. Here we propose a language modeling framework that explicitly incorporates reference decisions.\nn Figure we list examples of REs in the context of the three tasks that we consider in this work Firstly, reference to a database is crucial in many applications. One example is in task orientec. lialogue where access to a database is necessary to answer a user's query (Young et al., 2013; t al., 2016; Vinyals & Le, 2015; Wen et al., 2015; Sordoni et al., 2015; Serban et al., 2016; Borde. & Weston, 2016; Williams & Zweig, 2016; Shang et al., 2015; Wen et al., 2016). Here we conside. he domain of restaurant recommendation where a system refers to restaurants (name) and thei. attributes (address, phone number etc) in its responses. When the system says \"the nirala is a. nice restaurant\"', it refers to the restaurant name the nirala from the database. Secondly, many. nodels need to refer to a list of items (Kiddon et al.l, 2016; Wen et al., 2015). In the task of recip. generation from a list of ingredients (Kiddon et al, 2016), the generation of the recipe will frequentl. eference these items. As shown in Figure , in the recipe \"Blend soy mi1k and...\", soy mi1k. efers to the ingredient summaries. Finally, we address references within a document (Mikolov et al.. 2010; Ji et all, 2015; Wang & Cho, 2015), as the generation of words will ofter refer to previously. generated words. For instance the same entity will often be referred to throughout a document. Ir. Figure , the entity you refers to I in a previous utterance..\nRecently, there has been great progresses in modeling languages based on neural network, including language modeling (Mikolov et al, 2010; Jozefowicz et al, 2016), machine translation (Sutskever et al., 2014; Bahdanau et al., 2014), question answering (Hermann et al., 2015) etc. Based on the. success of seq2seq models, neural networks are applied in modeling chit-chat dialogue (Li et al. 2016; Vinyals & Le, 2015; Sordoni et al., 2015; Serban et al, 2016; Shang et al., 2015) and task oriented dialogue (Wen et al., 2015; Bordes & Weston, 2016; Williams & Zweig, 2016; Wen et al.J 2016). Most of the chit-chat neural dialogue models are simply applying the seq2seq models. For the task oriented dialogues, most of them embed the seq2seq model in traditional dialogue systems, in which the table query part is not differentiable. while our model queries the database directly. Recipe generation was proposed in (Kiddon et al., 2016). 
Their model extents previous work on attention models (Allamanis et al., 2016) to checklists, whereas our work models explicit references. to those checklists. Context dependent language models (Mikolov et all, 2010; Ji et al.l, 2015; Wang & Chd, 2015) are proposed to capture long term dependency of text. There are also lots of works on coreference resolution (Haghighi & Klein, 2010; Wiseman et al.J, 2016). We are the first to. combine coreference with language modeling, to the best of our knowledge. Much effort has been invested in embedding a copying mechanism for neural models (Gulcehre et al., 2016; Gu et al., 2016; Ling et al., 2016). In general, a gating mechanism is employed to combine the softmax over observed words and a pointer network (Vinyals et al., 2015). These gates can be trained either by marginalizing over both outcomes, or using heuristics (e.g. copy low frequency words). Our models. are similar to models proposed in (Ahn et al., 2016; Merity et al., 2016), where the generation of. each word can be conditioned on a particular entry in knowledge lists and previous words. In our. work, we describe a model with broader applications, allowing us to condition, on databases, lists and dvnamic lists.\nIn this work we develop a language model that has a specific module for generating REs. A series of latent decisions (should I generate a RE? If yes, which entity in the context should I refer to? How should the RE be rendered?) augment a traditional recurrent neural network language model and the two components are combined as a mixture model. Selecting an entity in context is similar to familiar models of attention (Bahdanau et al, 2014), but rather than being a deterministic function that reweights representations of elements in the context, it is treated as a distribution over contextual elements which are stochastically selected and then copied or, if the task warrants it, transformed (e.g., a pronoun rather than a proper name is produced as output). Two variants are possible for updating the RNN state: one that only looks at the generated output form; and a second that looks at values of the latent variables. The former admits trivial unsupervised learning, latent decisions are conditionally independent of each other given observed context, whereas the latter enables more"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We propose a general class of language models that treat reference as an explicit stochastic latent variable. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., di- alogue generation and recipe generation) and internal state (required by, e.g. lan guage models which are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or dis course context, even when the targets of the reference may be rare words. Ex periments on three tasks show our model variants outperform models based on deterministic attention.\nThe results for dialogue, recipe generation and coref language model are shown in Table , anc. respectively. We can see from Table that models that condition on table performs better ir. redicting table tokens in general. Table pointer has the lowest perplexity for token in the table Since the table token appears rarely in the dialogue, the overall perplexity does not differ much anc. he non-table tokens perplexity are similar. 
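For concreteness, the latent-switch mixture that underlies all three model families, and that these perplexity comparisons probe, can be sketched as follows; the names p_gen, p_ref, and p_switch are illustrative placeholders rather than the paper's code.

import numpy as np

def next_token_marginal(p_gen, p_ref, p_switch, token_id, ref_ids):
    """Marginal token probability with the reference decision z summed out:
    p(x_i | c_i) = sum_z p(x_i | z, c_i) p(z | c_i).
    p_gen is a softmax over the vocabulary, p_ref a distribution over candidate
    referents (table cells, ingredients, or prior entity mentions), and
    p_switch = p(z = 1 | c_i) the probability of producing a reference."""
    ref_mass = sum(p_ref[j] for j in ref_ids)   # referents that render as token_id
    return (1.0 - p_switch) * p_gen[token_id] + p_switch * ref_mass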
With attention mechanism over the table, the perplexity. of table token improves over basic seq2seq model, but not as good as directly pointing to cells in the. able. As expected, using sentence attention improves significantly over models without sentence ttention. Surprisingly, table latent performs much worse than table pointer. We also measure the. erplexity of table tokens that appear only in test set. For models other than table pointer, because. he tokens never appear in training set, the perplexity is quite high, while table pointer can predic. hese tokens much more accurately. The recipe results in Table in general follows that findings. rom the dialogue. But the latent model performs better than pointer model since that tokens ir ngredients that match with recipe does not necessarily come from the ingredients. Imposing a upervised signal will give wrong information to the model and hence make the result worse. Hence vith latent decision, the model learns to when to copy and when to generate it from the vocabulary The coref LM results are shown in Table 6. We find that coref based LM performs much better or he entities perplexities, but however is a little bit worse than for non-entity words. We found it is ar ptimization problem and perhaps the model is stuck in local optimum. So we initialize the pointe. nodel with the weights learned from LM, the pointer model performs better than LM both for entity erplexity and non-entity words perplexity.\nWe introduce reference-aware language models which explicitly model the decision of from where. to generate the token at each step. Our model can also learns the decision by treating it as a laten variable. We demonstrate on three tasks, table based dialogue modeling, recipe generation and corei based LM, that our model performs better than attention based model, which does not incorporate this decision explicitly. There are several directions to explore further based on our framework. The current evaluation method is based on perplexity and BLEU. In task oriented dialogues, we can alsc. try human evaluation to see if the model can reply users' query accurately. It is also interesting tc use reinforcement learning to learn the actions in each step..\nFigure 1: Reference-aware language models"}, {"section_index": "3", "section_name": "REFERENCES", "section_text": "Sungjin Ahn, Heeyoul Choi, Tanel Parnamaa, and Yoshua Bengio. A neural knowledge languag model. CoRR, abs/1608.00318, 2016.\nexpressive models that can extract information from the entity that is being referred to. In each of the three tasks, we demonstrate our reference aware model's efficacy in evaluations against models that do not explicitly include a reference operation.\nAntoine Bordes and Jason Weston. Learning end-to-end goal-oriented dialog. arXiv preprin arXiv:1605.07683, 2016\nWe denote each document as a series of tokens x1, . . . , x L, where L is the number of tokens in the. document. Our goal is to maximize the probabilities p(x, c, ), for each word in the document based on its previous context c; = x1,..., x-1. In contrast to traditional neural language models, we. introduce a variable at each position zi, which controls the decision on which source x; is generated. from. The token conditional probably is then obtained by:.\nKarl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in. Neural Information Processing Systems, pp. 
1693-1701, 2015.\nIn dialogue modeling and recipe generation, z; will simply taken on values in {0, 1}. Where z; = denotes that x; is generated as a reference, either to a database entry or an item in a list. However z, can also be defined as a distribution over previous entities, allowing the model to predict x conditioned on its a previous mention word. This will be the focus of the coreference language model. When z; is not observed (which it generally will not be), we will train our model to maximize the marginal probability in Eq. directly.\nYangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. Document context language models. arXiv preprint arXiv:1511.03962, 2015."}, {"section_index": "4", "section_name": "2.1 DIALOGUE MODEL WITH DATABASE SUPPORT", "section_text": "We can observe from this example, users get recommendations of restaurants based on queries that specify the area, price and food type of the restaurant. We can support the system's decisions by incorporating a mechanism that allows the model to query the database allowing the model to find restaurants that satisfy the users queries. Thus, we crawled TripAdvisor for restaurants in the\nreference example the dialogue moderate M: the nirala is a nice restuarant nirala table recipe 1 cpu plain soy milk. Blend soy milk and ... ingredients um and [] think ... [you] .. coreference [][Linda]2 [you]]... coref\nAanTpr dialogue the moderate M: the nirala is a nice restuarant nirala table 1 cpu plain soy milk Blend soy milk and .. recipe ingredionts [][Linda]2 [you]]... um and [] think ... [you]] coreference coref\nWe propose a general framework to model reference in language and instantiate it in the context of dialogue modeling, recipe generation and coreference based language models.. We build three data sets to test our models. There lack existing data sets that satisfy our need, so we build these data sets ourselves. These data sets are either built on top existing. data set (we constructed the table for DSTC2 data set for dialogue evaluation), crawled. from websites (we crawled all recipes in www.allrecipes.com) or annotated with NLP tools (we annotate the coreference with Gigaword corpus for our evaluation).. We perform comprehensive evaluation of our models on the three data sets and verify our. models perform better than strong baselines..\nJiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. Incorporating copying mechanism in sequence-to-sequence learning. CoRR, abs/1603.06393, 2016. URL http://arxiv.org/ abs/1603.06393.\nAria Haghighi and Dan Klein. Coreference resolution in a modular, entity-centered model. Ir Human Language Technologies: The 2010 Annual Conference of the North American Chapte of the Association for Computational Linguistics, pp. 385-393. Association for Computationa Linguistics, 2010.\no(xici)=p(xi Zj,Cj)p(ZiCj)\nRafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.\nWang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, Andrew Senior, Fumi Wang, and Phil Blunsom. Latent predictor networks for code generation. In Proc. ACL, 2016..\nStephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.\nTomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurren neural network based language model. In Interspeech, volume 2, pp. 3, 2010..\nTable 1: Example dialogue. 
M stands for Machine and U stands for Uset\nTable 2: Fragment of database for dialogue system\nAlessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generatior. of conversational responses. In Proc. NAACL, 2015..\nCambridge area, where the dialog dataset was collected. Then, we remove restaurants that do not appear in the data set and create a database with 109 entries with restaurants and their attributes (e.g food type). A sample of our database is shown in Table. . We can observe that each restaurant contains 6 attributes that are generally referred in the dialogue dataset. As such, if the user requests a restaurant that serves \"indian\"' food, we wish to train a model that can search for entries whose \"food\"' column contains \"indian'\". Now, we describe how we deploy a model that fulfills these requirements.\nOriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proc. NIPs, 2015.\nTsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes David Vandyke, and Steve Young. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562, 2016.\nFigure 2: Hierarchica1 RNN Seq2Seq model\nJason D Williams and Geoffrey Zweig. End-to-end lstm-based dialog control optimized with super vised and reinforcement learning. arXiv preprint arXiv:1606.01269, 2016\nSam Wiseman, Alexander M Rush, and Stuart M Shieber. Learning global features for coreference resolution. arXiv preprint arXiv:1604.03035, 2016\nSteve Young, Milica Gasic, Blaise Thomson, and Jason D Williams. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160-1179, 2013.\nConsider a dialogue with T turns, and the utterance from a user is denoted as X = {x}=1, where. i is the i-th utterance, whereas the utterance from a machine is denoted as Y = {y}T-1, where i. 1xi in the i-th utterance from the user, whereas yiv denotes the v-th token in the i-th utterance from. the machine. Finally, x and y denote the number of tokens in the user and machine utterances respectively. The dialogue sequence starts with machine utterance {y1, x1, y2, x2, ..., yT, xT}. We. would like to model the utterances from the machine.\nIp(yi|Y<i,x<i) = p(y1, Y2,..., YT[X1, X2,..., XT) 1]p(Yi,v|Yi,<v,Y<i,x<i] i i,U\nwhere y<; denotes all the utterances before i and y.<y denotes the first v - 1 tokens in the i-th utterance of the machine. A neural model is employed to predict p(yi,v|yi,<v, Y<i, x<i), which operates as follows:\nSentence Encoder: We first encode previous utterances y<; and x<; into continuous space by gen erating employing a LSTM encoder. Thus, for a given utterance x,, and start with the initial LSTM state ho and apply the recursion h, = LSTMe(Wexi,, hi-1), where Wex,; denotes a word\nI: Hello , welcome to the Cambridge restaurant system? You can ask for restaurants by area, price range or food type. How may Ihelp you? 
J: i want a moderately priced restaurant in the north town I: the ni rala is a nice restaurant in the north of town in the moderate price range J: what is the address I: Sure,the niralaison 7 Milton Road Chesterton\nNAME PRICE RANGE FOOD AREA ADDRESS POST CODE PHONE ali baba moderate lebanese centre 59 Hills Road City Cen- CB 2, 1 NT 01462 432565 tre the moderate indian north 7 Milton Road Chester- CB 4, 1 UY 01223 360966 nirala ton\ntre 7 Milton Road Chester ton\nWe build a model based on the hierarchical RNN model described in (Serban et al. 2016), as in dialogues, the generation of the response is not only dependent on the previous sentence, but on all sentences leading to the response. We assume that a dialogue is alternated between a machine and a user. An illustration of the model is shown in Figure"}, {"section_index": "5", "section_name": "A VISUALIZATION", "section_text": "embedding lookup for the token x,, and LSTM denotes the LSTM transition function describec in Hochreiter & Schmidhuber (1997). The representation of the user utterance is represented b. representation hy = hy.\nM: Hello , welcome to the Cambridge restaurant system ? You can ask for restaurants by area price range or food type . How may I help you ?. U: and expensive restaurant that serves tuscan food. M: Sorry there is no tuscan restaurant in the expensive price range U: french food M: cote is a great restaurant serving french food and it is in the expensive price range. U: may i have the phone number. M: The phone number of coteis 01223 311053. I I: thank. ood by\nTurn Encoder: Then, combine all the representations of all the utterances with a second LSTM. which encodes the sequence {h?, h+, ..., h?, hx} into a continuous vector. Once again, we start with an initial state uo and feed each of the utterance representation to obtain the following LSTM state, until the final state is obtained. For simplicity, we shall refer to this as u,, which can be seen as the hierarchical encoding of the previous i utterances.\nSeq2Seq Decoder: As for decoding, in order to generate each utterance y, we can feed u;-1 into the decoder LSTM as the initial state s;,o = ui1 and decode each token in yi. Thus, we can express the decoder as:\n- LSTMp(WEYi,v-1, Si,v- - softmax(Wsy .W\nwhere the desired probability p(yi. i, x<i) is expressed by py\nAttention based decoder: We can also incorporate the attention mechanism in our hierarchical model. An attention model builds a representation d by averaging over a set of vectors p. We define. the attention function as a = ATTN(p, q), where a is a probability distribution over the set of vectors. p, conditioned on any input representation q. A full description of this operation is described in (Bah. danau et al, 2014). Thus, for each generated token yi,v, we compute the attentions ai,v, conditioned. on the current decoder state st,, obtaining the attentions over input tokens from previous turn (i-- 1). K = |h-1| be the number of tokens in previous turn. Thus, we obtain the attention probabilities. over all previous tokens ai,v as ATTN(st,, h, 1). Then, the weighted sum is computed over these. from previous turn. 
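A minimal sketch of the ATTN(p, q) operation and this weighted sum is given below. A bilinear score is used for concreteness; the paper's exact scoring function follows Bahdanau et al. (2014), so the bilinear form here is an assumption.

import numpy as np

def attn(p, q, W):
    """Attention distribution a = ATTN(p, q): a softmax over scores between each
    vector p[k] and the query q. Shapes: p is (K, d_p), q is (d_q,), and W is
    (d_p, d_q), parameterizing a bilinear compatibility score."""
    scores = p @ W @ q            # shape: (K,)
    scores -= scores.max()        # stabilize the softmax
    a = np.exp(scores)
    return a / a.sum()

def context_vector(h_prev, s, W):
    """Weighted sum over the previous turn's token states h_prev, conditioned on
    the current decoder state s, yielding the context vector d."""
    a = attn(h_prev, s, W)
    return a @ h_prev             # d = sum_k a_k * h_prev[k]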
The resulting vector d,., is used to obtain the probability of the following word.\nU - LSTMp(|WEYi,v-1, di i.v = xy li-1,k kEK = softmax(W[s.,, d\nFigure 3: Table based decoder\n(a) Dialogue script NAME PRICE RANGE FOOD AREA ADDRESS POST CODE PHONE charlie chan cheap chinese east Regent Street City Cen- C.B 2.1 D.B 01223 361763 tre chiquito restau- expensive mexican south 2G Cambridge Leisure C.B 1, 7 D.Y 01223 400170 rant bar Park Cherry Hinton Road Cherry Hinton city stop expensive food north Cambridge City Foot- _EMPTY 01223 363270 ball Club Milton Road Chesterton clowns cafe expensive italian centre EMPTY C.B 1, 1 L.N 01223 355711 cocum expensive indian west 71 Castle Street City C.B 3, 0 A.H 01223 366668 Centre cote expensive french centre Bridge Street City Cen- C.B 2, 1 U.F 01223 311053 tre curry garden expensive indian centre 106 Regent Street City _EMPTY 01223 302330 Centre curry king expensive indian centre 5 Jordans Yard Bridge C.B 1, 2 B.D 01223 324351 Street City Centre curry prince moderate indian east 451 Newmarket Road C.B 5, 8 J.J 01223 566388 Fen Ditton (b) Attention heat map: cote is a great restaurant serving french food and it is in the expensive price range. NAME PRICE RANGE FOOD AREA ADDRESS POST CODE PHONE charlie chan cheap chinese east Regent Street City Cen- C.B 2, 1 D.B 01223 361763 tre chiquito restau- expensive mexican south 2G Cambridge Leisure C.B 1,7 D.Y 01223 400170 rant bar Park Cherry Hinton Road Cherry Hinton city stop expensive food north Cambridge City Foot- _EMPTY 01223 363270 ball Club Milton Road Chesterton clowns cafe expensive italian centre EMPTY C.B 1, 1L.N 01223 355711 expensive indian west 71 Castle Street City cocum C.B 3, 0 A.H 01223 366668 Centre cote expensive french centre Bridge Street City Cen- C.B 2, 1 U.F 01223 311053 tre curry garden expensive indian centre 106 Regent Street City _EMPTY 01223 302330 Centre curry king expensive indian centre 5 Jordans Yard Bridge C.B 1, 2 B.D 01223 324351 Street City Centre curry prince moderate indian east 451 Newmarket Road C.B 5, 8 J.J 01223 566388 Fen Ditton (c) Attention heap map: The phone number of cote is 01223 311053 ..\n106 Regent Street City Centre 5 Jordans Yard Bridge\n106 Regent Street City Centre 5 Jordans Yard Bridge\nWe now extend the attention model in order to allow the attention to be computed over a table allowing the model to condition the generation on a database..\nWe denote a table with R rows and C columns as {fr.c}, r E [1, R], c E [1, C], where fr.c is the cell in row r and column c. The attribute of each column is denoted as sc, where c is the c-th attribute fr.c and sc are one-hot vector.\nM: 1eno.welcomne can ask lor reslal price range or food type . How may I help you ?. U: cheap restaurant M: What kind of food would you like ? U: in the west part of town. M: 1a margherita is a nice restaurant in the west of town in the cheap price range U: address M: Sure,lamargheritais on15 Magdalene Street City Centre U: thank you M: la margherita is a nice restaurant in the west of town in the cheap price range. U: good bye\nTable Encoding: To encode the table, we build an attribute vector gc for each column. For each cell fr.c of the table, we concatenate it with the corresponding attribute gc and then feed it through a one-layer MLP as follows: gc = WEsc and then er,c = tanh(W[We fr,c, gc]).\nTable Attention: The diagram for table attention is shown in Figure 3a. The attention over cells. 
in the table is conditioned on a given vector q, similarly to the attention model for sequences ATTN(p, q). However, rather than a sequence p, we now operate over a table f. Our attentior. model computes a attribute attention followed by row attention of the table. We first use the atten tion mechanism on the attributes to find out which attribute the user asks about. Suppose a usei says cheap, then we should focus on the price attribute. After we get the attention probabil ity pa = ATTN({ gc}, q), over the attribute, we calculate the weighted representation for each rov er = c perc conditioned on pa. Then e, has the price information of each row. We further use. attention mechanism on er and get the probability p* = ATTN({er}, q) over the rows. Then restau rants with cheap price will be picked. Then, using the probabilities p, we compute the weightec average over the all rows ec = r per,c, which is used in the decoder. The detailed process is:.\nPr = ATTN({er},q) ec = Vc\nThis is embedded in the decoder by replacing the conditioned state q as the current decoder state at each step. The detailed diagram of table attention is shown in Figure 3a!."}, {"section_index": "6", "section_name": "2.1.3 INCORPORATING TABLE POINTER NETWORKS", "section_text": "We now describe the mechanism used to refer to specific database entries during decoding. At each timestep, the model needs to decide whether to generate the next token from an entry of the database or from the word softmax. This is performed as follows\nPointer Switch: We use zi.y E [0, 1] to denote the decision of whether to copy one cell from the table. We compute this probability as follows:.\n= sigmoid(W sv, d,y)\nc) Attention heap map: Sure , la margherita is on 15 Magdalene Street City Centre\nObjective: As we treat z; as a latent variable, we wish to maximize the marginal probability of the sequence y over all possible values of zy. 
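In code, this per-token marginalization, together with the row-times-column cell distribution introduced below, might look like the following sketch; it specializes the generic mixture given earlier to table cells, and all names are illustrative.

import numpy as np

def table_copy_distribution(p_row, p_col):
    """Probability of copying each cell, anticipating the outer-product
    construction p_copy = p_r (x) p_c described below: attend over rows and
    columns separately, then combine into a (num_rows, num_cols) matrix."""
    return np.outer(p_row, p_col)

def marginal_token_likelihood(p_copy_switch, p_vocab, p_cells, token_id, matches):
    """Likelihood of one output token with the binary switch z_{i,v} summed out.
    matches lists the (row, col) cells whose delexicalized content equals the
    token; treating matches as an explicit list is a simplification."""
    copy_mass = sum(p_cells[r, c] for r, c in matches)
    return (1.0 - p_copy_switch) * p_vocab[token_id] + p_copy_switch * copy_mass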
Thus, our objective function is defined as:.\n)) + pcopyp(1|Si,u) p(Yi,v[Si,v) Si a\n(a) Dialogue script NAME PRICE RANGE FOOD AREA ADDRESS POST CODE PHONE india house expensive indian west 31 Newnham Road _EMPTY 01223 461661 Newnham j restaurant cheap oriental centre 86 Regent Street City C.B 2, 1 D.P 01223 307581 Centre jinling noodle moderate chinese centre 11 Peas Hill City Cen- C.B 2, 3 P.P 01223 566188 bar tre kohinoor cheap indian centre 74 Mill Road City Cen- _EMPTY 01223 323639 tre kymmoy expensive oriental centre 52 Mill Road City Cen- C.B 1, 2 A.S 01223 311911 tre la margherita cheap italian west 15 Magdalene Street C.B 3, 0 A.F 01223 315232 City Centre la mimosa expensive mediterranean centre Thompsons Lane Fen C.B 5, 8 A.Q 01223 362525 Ditton la raza cheap spanish centre 4 - 6 Rose Crescent C.B 2, 3 L.L 01223 464550 la tasca moderate spanish centre 14 -16 Bridge Street C.B 2, 1 U.F 01223 464630 lan hong house moderate chinese centre 12 Norfolk Street City _EMPTY 01223 350420 Centre (b) Attention heat map: 1a margherita is a nice restaurant in the west of town in the cheap price range NAME PRICE RANGE FOOD AREA ADDRESS POST CODE PHONE india house expensive indian west 31 Newnham Road _EMPTY 01223 461661 Newnham j restaurant cheap oriental centre 86 Regent Street City C.B 2, 1D.P 01223 307581 Centre jinling noodle moderate chinese centre 11 Peas Hill City Cen- C.B 2, 3 P.P 01223 566188 bar tre kohinoor cheap indian centre 74 Mill Road City Cen- _EMPTY 01223 323639 tre kymmoy expensive oriental centre 52 Mill Road City Cen- C.B 1, 2 A.S 01223 311911 tre la margherita cheap italian west 15 Magdalene Street C.B 3, 0 A.F 01223 315232 City Centre la mimosa expensive mediterranean centre Thompsons Lane Fen C.B 5, 8 A.Q 01223 362525 Ditton la raza cheap spanish centre 4 - 6 Rose Crescent C.B 2, 3 L.L 01223 464550 la tasca moderate spanish centre 14 -16 Bridge Street C.B 2, 1 U.F 01223 464630 lan hong house moderate chinese centre 12 Norfolk Street City EMPTY 01223 350420 Centre\ncentre 11 Peas Hill City Cen- tre\nwest 15 Magdalene Street City Centre\nThus, if zi,v = 1, the next token yi,v will be generated from the database, whereas if zi,v = 0, then the following token is generated from a softmax. We shall now describe how we generate tokens from the database.\nTable Pointer: If z,.y = 1, the token is generated from the table. The detailed process of calculating the probability distribution over the table is shown in Figure Bbl. This is similar to the attention mechanism, except that we perform a column attention to compute the probabilities of copying from each column after Equation. 5. More formally:\n=ATTN{ec},q) copy\nwhere pc is a probability distribution over columns, whereas pr is a probability distribution over rows. In order to compute a matrix with the probability of copying each cell, we simply compute the outer product pcopy = pr pc.\nThe model can also be trained in a fully supervised fashion, if zy , is observed. In such cases we simply maximize the likelihood of p(zi,v[Si,v), based on the observations, rather than using the. marginal probability over zi,v.\nsoy pcopy pvocab Yes No ingredients decod soy Blend encoder\nsoy ncopy pvocab Yes No ingredients decoder soy Blend encoder Figure 4: Recipe pointer\nhi,j = LSTMe(WEXij,hi,j- Vi\nS, = LSTMp( -1,dv-1, WeYv-1 copy =ATTN{{hi Pr d, = ) [su) = sigmoid(W[su, d]) vocab = softmax(W[su, dy])\nFigure 6: Recipe heat map example 1. The ingredient tokens appear on the left while the recipe tokens appear on the top. 
The first row is the p(zu[su).\nNext, we consider the task of recipe generation conditioning on the ingredient lists. In this task, we must generate the recipe from a list of ingredients. Table. 3 illustrates the ingredient list and recipe for Spinach and Banana Power Smoothie. We can see that the ingredients soy milk, spinach leaves, and banana occur in the recipe\nLet the ingredients of a recipe be X = {x}T=1 and each ingredient contains L tokens x, = A1 gredient:\nThen, we sum the resulting state of each ingredient to obtain the starting LSTM state of the decoder Once again we use an attention based decoder:\nS, = LSTMp(Su -1,dy-1, WeYv-1 copy = ATTN{{h dy = Zy[sy) = sigmoid(W[sv, dv yocab = softmax(W[sv, d)\nSimilar to the previous task, the decision to copy from the ingredient list or generate a new word from the softmax is performed using a switch, denoted as p(zu[su). We can obtain a probability distribution of copying each of the words in the ingredients by computing peopy = likelihood function employed in the previous task..\nFinally, we build a language model that uses coreference links to point to previous words. Before. generating a word, we first make the decision on whether it is an entity mention. If so, we decide\nIn large skillet heat olive oil medium heat Stir shallots over in p(z) tablespoon olive oil shallot diced ( 10 ounce ) bag baby spinach leaves kosher salt and freshly ground pepper to taste (a) part 1 and cook until transparent about 5 minutes Add spinach sprinkle with salt p(z) 1 tablespoon olive oil 1 shallot diced 10 ounce bag baby spinach leaves kosher salt and freshly ground pepper to taste (b) part 2 and pepper cook and stir 3 to minutes until leaves are wilted and 5 p(z) 1 tablespoon olive oil 1 shallot diced 1 10 ounce bag baby spinach leaves kosher salt and freshly ground pepper to taste (c) part 3\nwhich entity this mention belongs to, then we generate the word based on that entity. Denote the document as X = {t}-1, and the entities are E = {e,}1, each entity has M, mentions, e; = the hidden state of each token is h, = LSTM(Wexi, hi-1). We use a set he = {ho, hi, ..., hM} to keep track of the entity states, where h, is the state of entity j.\num and [I] think that is whats - Go ahead [Linda]2. Well and thanks goes to [you] and tc [the media]3 to help [us]4...So [our]4 hat is off to all of [you]5....\nFigure 5: Coreference based lan. e model, example taken from Wiseman et a (2016)\nWord generation: At each time step before generating the next word, we predict whether the wor is an entity mention:\nwhere z; denotes whether the next word is an entity and if yes v, denotes which entity the next word corefers to. If the next word is an entity mention, then p(x,[vi,hi-1,he). softmax(W1 tanh(W2[he.,hi-1])) else p(x;|hi-1) = softmax(W1hi-1),\np(xi|hi-1)p(zi|hi-1,he) if Zi = 0. p(xi|Ui,hi-1,h)pcoref(v;|hi-1,h)p(zi|hi-1,he) if Zi = 1.\nEntity state update: We update the entity state he at each time step. In the beginning, he = {h} he denotes the state of an virtual empty entity and is a learnable variable. If z; = 1 and v; = 0, then it indicates the next word is a new entity mention, then in the next step, we append h, to he, i.e. he = {he, h}, if e; > 0, then we update the corresponding entity state with the new hidden state, he [v,] = h;. Another way to update the entity state is to use one LSTM to encode the mention states and get the new entity state. Here we use the latest entity mention state as the new entity state for simplicity. 
Dialogue: We use the DSTC2 data set. We only extracted the dialogue transcripts from the data set; there are about 3,200 dialogues in total. Since this is a small data set, we use 5-fold cross validation and report the average result over the 5 partitions. There may be multiple tokens in each table cell (for example, in the table above the name, address, post code and phone number have multiple tokens), so we replace them with one special token. For the name, address, post code and phone number of the j-th row, we replace the tokens in each cell with _NAME_j, _ADDR_j, _POSTCODE_j, _PHONE_j. If a table cell is empty, we replace it with an empty token _EMPTY. We do a string match in the transcript and replace the corresponding tokens in the transcripts from the table with these special tokens.

Figure 7: Recipe heat map example 2.

(Entity-state update diagram for Figure 5: starting from the empty entity, states for mentions such as [I]1 and [Linda]2 are pushed, and repeated mentions such as [you] update existing states via attention.)

Each dialogue on average has 8 turns (16 sentences). We use a vocabulary size of 900, including about 400 table tokens and 500 words.

Recipes: We crawl all recipes from www.allrecipes.com. There are about 31,000 recipes in total, and every recipe has an ingredient list and a corresponding recipe. We exclude the recipes that have fewer than 10 tokens or more than 500 tokens; those recipes make up about 0.1% of the data set. On average each recipe has 118 tokens and 9 ingredients. We randomly shuffle the whole data set and take 80% for training and 10% each for validation and test. We use a vocabulary size of 10,000 in the model.

Coref LM: We use the Xinhua News data set from Gigaword Fifth Edition and sample 100,000 documents from it with lengths in the range from 100 to 500. Each document has on average 234 tokens, so there are 23 million tokens in total. We use a tool to annotate all the entity mentions and use the annotation in training. We take 80% for training and 10% each for validation and test. We ignore the entities that have only one mention, and for mentions that have multiple tokens, we take the token that is most frequent across all the mentions of that entity. After preprocessing, tokens that are entity mentions make up about 10% of all tokens. We use a vocabulary size of 50,000 in the model."}, {"section_index": "7", "section_name": "4.1 MODEL TRAINING AND EVALUATION", "section_text": "We train all models with simple stochastic gradient descent with clipping. We use a one-layer LSTM for all RNN components. Hyper-parameters are selected using grid search based on the validation set. We use dropout after the input embedding and LSTM output. The learning rate is selected from [0.1, 0.2, 0.5, 1], the maximum gradient norm is selected from [1, 2, 5, 10], and the drop ratio is selected from [0.2, 0.3, 0.5]. The batch size and LSTM dimension size are slightly different for different tasks so as to make the model fit into memory. The number of epochs to train is different for each task, and we drop the learning rate after reaching a given number of epochs. We report the per-word perplexity for all tasks; specifically, we report the perplexity over all words, over words that can be generated from the reference, and over non-reference words. For recipe generation, we also generate the recipe using a beam size of 10 and evaluate the generated recipe with BLEU.
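As an illustration of the string-match delexicalization described in the Dialogue paragraph above, here is a rough Python sketch; the table layout and helper name are our own assumptions:

```python
def delexicalize(transcript, table):
    """Replace table-cell strings with special tokens such as _NAME_j.

    `table` is a list of row dicts, e.g. {"name": "la margherita", ...};
    empty cells are handled elsewhere via the shared token _EMPTY.
    """
    for j, row in enumerate(table):
        for field in ("name", "addr", "postcode", "phone"):
            value = row.get(field)
            if value:
                transcript = transcript.replace(value, f"_{field.upper()}_{j}")
    return transcript

table = [{"name": "la margherita",
          "addr": "15 Magdalene Street City Centre",
          "postcode": "C.B 3, 0 A.F",
          "phone": "01223 315232"}]
print(delexicalize("la margherita is at 15 Magdalene Street City Centre", table))
# -> "_NAME_0 is at _ADDR_0"
```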
Table 4: Dialogue perplexity results. ("All" means all tokens, "table" means tokens from the table, "table oov" denotes table tokens that do not appear in the training set, "word" means non-table tokens.) "Sentence attn" denotes that we use an attention mechanism over tokens from past turns. Table pointer and table latent differ in that for table pointer we provide a supervised signal on when to generate a table token, while for table latent it is a latent decision.

model           | all         | table       | table oov         | word
seq2seq         | 1.35 ± 0.01 | 4.98 ± 0.38 | 1.99E7 ± 7.75E6   | 1.23 ± 0.01
table attn      | 1.37 ± 0.01 | 5.09 ± 0.64 | 7.91E7 ± 1.39E8   | 1.24 ± 0.01
table pointer   | 1.33 ± 0.01 | 3.99 ± 0.36 | 1360 ± 2600       | 1.23 ± 0.01
table latent    | 1.36 ± 0.01 | 4.99 ± 0.20 | 3.78E7 ± 6.08E7   | 1.24 ± 0.01
+ sentence attn |             |             |                   |
seq2seq         | 1.28 ± 0.01 | 3.31 ± 0.21 | 2.83E9 ± 4.69E9   | 1.19 ± 0.01
table attn      | 1.28 ± 0.01 | 3.17 ± 0.21 | 1.67E7 ± 9.5E6    | 1.20 ± 0.01
table pointer   | 1.27 ± 0.01 | 2.99 ± 0.19 | 82.86 ± 110       | 1.20 ± 0.01
table latent    | 1.28 ± 0.01 | 3.26 ± 0.25 | 1.27E7 ± 1.41E7   | 1.20 ± 0.01

Table 5: Recipe results, evaluated in perplexity and BLEU score. ("ing" denotes tokens from the recipe that appear in the ingredients.)

        |              val               |              test
model   | ppl all | ppl ing | ppl word | BLEU  | ppl all | ppl ing | ppl word | BLEU
seq2seq | 5.60    | 11.26   | 5.00     | 14.07 | 5.52    | 11.26   | 4.91     | 14.39
attn    | 5.25    | 6.86    | 5.03     | 14.84 | 5.19    | 6.92    | 4.95     | 15.15
pointer | 5.15    | 5.86    | 5.04     | 15.06 | 5.11    | 6.04    | 4.98     | 15.29
latent  | 5.02    | 5.10    | 5.01     | 14.87 | 4.97    | 5.19    | 4.94     | 15.41"}]
HyxQzBceg
[{"section_index": "0", "section_name": "HYPERPARAMETERS AND ARCHITECTURE DETAILS FOR EXPERIMENTS", "section_text": "(i.e., it reduces the chance that the adversary \"gets lucky' in its perturbation due to an untypical. sample). We also ran the VIB models in \"mean mode', where the os are forced to be 0. This had nc. noticeable impact on the results, so all reported results are for stochastic evaluation with 12 samples\nAll of the networks for this paper were trained using TensorFlow (Abadi et al.] 2016). All weights. were initialized using the default TensorFlow Xavier initialization scheme (Glorot & Bengio|2010 using the averaging fan scaling factor on uniform noise. All biases were initialized to zero. The Adam optimizer (Kingma & Ba|2015) was used with initial learning rate of 10-4, (1 = 0.5, 2 = 0.999) and exponential decay, decaying the learning rate by a factor of 0.97 every 2 epochs. The. networks were all trained for 200 epochs total. For the MNIST experiments, a batch size of 100. was used, and the full 60,000 training and validation set was used for training, and the 10,000 test. images for test results. The input images were scaled to have values between -1 and 1 before fed to. the network.\nAlexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy"}, {"section_index": "1", "section_name": "4.2.4 MNIST RESULTS AND DISCUSSION", "section_text": "alemi,iansf,jvdillon,kpmurphy}@google.com\nWe selected the first 1O zeros in the MNIST test set, and use the L2 optimization adversary of|Carlini & Wagner(2016) to try to perturb those zeros into ones!|Some sample results are shown in Figure 3 We see that the deterministic models are easily fooled by making small perturbations, but for the VIB models with reasonably large , the adversary often fails to find an attack (indicated by the green borders) within the permitted number of iterations. Furthermore, when an attack is succesful. it needs to be much larger for the VIB models. To quantify this, Figure4|plots the magnitude of the perturbation (relative to that of the deterministic and dropout models) needed for a successful attack as a function of . As increases, the Lo norm of the perturbation decreases, but both L2 and Lo. norms increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation.\nWe present a variational approximation to the information bottleneck of Tishby et al.(1999). This variational approach allows us to parameterize the informa- tion bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method \"Deep Variational Information Bottleneck\"', or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.\nAll runs maintained an exponential weighted average of the parameters during the training run;. these averaged parameters were used at test time. This is in the style of Polyak averagingPolyak &. Juditsky(1992), with a decay constant of 0.999. Our estimate of mutual informations were measured in bits. For the VIB experiments in all sections, no other form of regularization was used.\nFigure5|plots the accuracy on FGS adversarial examples of the first 1000 images from the MNIST test set as a function of . 
"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "σ = log(1 + exp(x − 5.0))

Figure 6 plots the accuracy on L2 optimization adversarial examples of the first 1000 images from the MNIST test set as a function of β. The same sets of three models per β were tested three times, as with the FGS adversarial examples.

For the 1024-dimensional ImageNet embeddings of Section 4.2.5, a sigma bias of 0.57 was used to keep the initial standard deviations near 1, and a batch size of 200 was used.

We generated both untargeted and targeted adversarial examples for Figure 6. For targeting, we generate a random target label different from the source label in order to avoid biasing the results with unevenly explored source/target pairs. We see that for a reasonably broad range of β values the VIB models have significantly better accuracy on the adversarial examples than the deterministic models, which have an accuracy of 0% (the L2 optimization attack is very effective on traditional model architectures).

Given the data processing inequality, and the invariance of the mutual information to reparameterizations, if this were our only objective we could always ensure a maximally informative representation by taking the identity encoding of our data (Z = X), but this is not a useful representation of our data. Instead we would like to find the best representation we can obtain subject to a constraint on its complexity. A natural and useful constraint to apply is on the mutual information between our encoding and the original data, I(X, Z) ≤ Ic, where Ic is the information constraint. This suggests the objective:

max_θ I(Z, Y; θ)  s.t.  I(X, Z; θ) ≤ Ic.

Figure 6 also reveals a surprising level of adversarial robustness even when β → 0. This can be explained by the theoretical framework of Fawzi et al. (2016). Their work proves that quadratic classifiers (e.g., xᵀAx, symmetric A) have a greater capacity for adversarial robustness than linear classifiers. As we show in Appendix C, our Gaussian/softmax encoder/decoder is approximately quadratic for all β < ∞."}, {"section_index": "3", "section_name": "4.2.5 IMAGENET RESULTS AND DISCUSSION", "section_text": "VIB improved classification accuracy and adversarial robustness for toy datasets like MNIST. We now investigate if VIB offers similar advantages for ImageNet, a more challenging natural image classification task. Recall that ImageNet has approximately 1M images spanning 1K classes. We preprocess images such that they are 299×299 pixels.

R_IB(θ) = I(Z, Y; θ) − βI(Z, X; θ)"}, {"section_index": "4", "section_name": "Architecture", "section_text": "I(Z, X) = ∫dx dz p(x, z) log [p(x|z)/p(x)] = H(X) + ∫dz p(z) ∫dx p(x|z) log p(x|z) ≥ H(X) + ∫dz p(z) ∫dx p(x|z) log q(x|z)

The IB principle is appealing, since it defines what we mean by a good representation, in terms of the fundamental tradeoff between having a concise representation and one with good predictive power (Tishby & Zaslavsky, 2015a). The main drawback of the IB principle is that computing mutual information is, in general, computationally challenging.
There are two notable exceptions: the first is when X, Y and Z are all discrete, as in Tishby et al. (1999); this can be used to cluster discrete data, such as words. The second case is when X, Y and Z are all jointly Gaussian (Chechik et al., 2005). However, these assumptions both severely constrain the class of learnable models.

Here we have dropped the entropy in our data H(X) because it is out of our control and we have used the nonnegativity of the Kullback-Leibler divergence to replace our intractable p(x|z) with a variational decoder q(x|z)."}, {"section_index": "5", "section_name": "ABSTRACT", "section_text": "For the 256-dimensional Gaussian embeddings of Section 4.1.1, a linear layer of size 512 was used to create the 256 mean values and standard deviations for the embedding. The standard deviations were made to be positive by a softplus transformation with a bias of -5.0 to have them initially be small.

We adopt an information theoretic view of deep networks. We regard the internal representation of some intermediate layer as a stochastic encoding Z of the input source X, defined by a parametric encoder p(z|x; θ).1 Our goal is to learn an encoding that is maximally informative about our target Y, measured by the mutual information between our encoding and the target, I(Z, Y; θ), where

I(Z, Y; θ) = ∫dz dy p(z, y|θ) log [ p(z, y|θ) / (p(z|θ) p(y|θ)) ].

For the 2-dimensional Gaussian embeddings of Section 4.1.2, a linear layer was used with 2 + 4 = 6 outputs, the first two of which were used for the means. The other 4 were reshaped to a 2 × 2 matrix; the diagonal was transformed according to a softplus with a bias of -5.0, the off-diagonal component was multiplied by 10^-2, and the upper-triangular element was dropped, to form the Cholesky decomposition of the covariance matrix.

Here the aim is to take our data X and maximize the mutual information contained in some encoding Z, while restricting how much information we allow our representation to contain about the identity of each data element in our sample (i). We will form a bound much like we did in the main text. For the first term, we form a variational decoder q(x|z) and take a bound:

I(Z, X) = ∫dx dz p(x, z) log [p(x|z)/p(x)]          (21)
        = H(X) + ∫dz p(z) ∫dx p(x|z) log p(x|z)     (22)
        ≥ H(X) + ∫dz p(z) ∫dx p(x|z) log q(x|z)     (23)
        = H(X) + ∫dx dz p(x, z) log q(x|z).          (24)

Here our goal is to learn an encoding Z that is maximally expressive about Y while being maximally compressive about X, where β ≥ 0 controls the tradeoff.3 This approach is known as the information bottleneck (IB), and was first proposed in Tishby et al. (1999). Intuitively, the first term in R_IB encourages Z to be predictive of Y; the second term encourages Z to "forget" X. Essentially it forces Z to act like a minimal sufficient statistic of X for predicting Y.

We make use of publicly available, pretrained checkpoints10 of Inception Resnet V2 (Szegedy et al., 2016) on ImageNet (Deng et al., 2009). The checkpoint obtains 80.4% classification accuracy on the ImageNet validation set. Using the checkpoint, we transformed the original training set by applying the pretrained network to each image and extracting the representation at the penultimate layer. This new image representation has 1536 dimensions. The higher layers of the network continue to classify this representation with 80.4% accuracy; conditioned on this extraction, the classification model is simply logistic regression. To further speed training, we whitened the 1536-dimensional representation.

1 In this work, X, Y, Z are random variables; x, y, z are instances of random variables; and F(·; θ) and f(·; θ) are functionals or functions parameterized by θ. 2 Note that in the present discussion, Y is the ground truth label, which is independent of our parameters, so p(y|θ) = p(y). 3 Note that, in our notation, large β results in a highly compressed representation. In some works, the IB principle is formulated as the minimization of I(Z, X) − βI(Z, Y), in which case large β instead corresponds to high relevance (low compression).
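A small sketch of the diagonal Gaussian embedding head described earlier in this section, where a biased softplus produces the standard deviations; the layer shapes follow the 256-dimensional description, and everything else is our own illustrative scaffolding.

```python
import numpy as np

def softplus(x, bias=0.0):
    return np.log1p(np.exp(x + bias))

def gaussian_head(h, W, b, K=256):
    """Split a 2K-unit linear layer into K means and K standard deviations."""
    out = h @ W + b
    mu, pre = out[:K], out[K:]
    sigma = softplus(pre, bias=-5.0)   # small at init, so nearly deterministic
    return mu, sigma

# Numerical check of the two biases quoted in this appendix:
print(softplus(0.0, bias=-5.0))   # ~0.0067 (MNIST embeddings start nearly deterministic)
print(softplus(0.0, bias=0.57))   # ~1.02  (ImageNet embeddings start with sigma near 1)
```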
9 We chose this pair of labels since intuitively zeros and ones are the digits that are least similar in terms of human perception, so if the adversary can change a zero into a one without much human-noticeable perturbation, it is unlikely that the model has learned a representation similar to what humans learn.

Turning our attention to the second term, note that

p(z|i) = ∫dx p(z|x) p(x|i) = ∫dx p(z|x) δ(x − x_i) = p(z|x_i),

so that we can bound our second term from above, replacing the intractable marginal p(z) with a variational marginal r(z).

In this paper, we propose to use variational inference to construct a lower bound on the IB objective in Equation 3. We call the resulting method VIB (variational information bottleneck). By using the reparameterization trick (Kingma & Welling, 2014), we can use Monte Carlo sampling to get an unbiased estimate of the gradient, and hence we can optimize the objective using stochastic gradient descent. This allows us to use deep neural networks to parameterize our distributions, and thus to handle high-dimensional, continuous data, such as images, avoiding the previous restrictions to the discrete or Gaussian cases.

We also show, by a series of experiments, that stochastic neural networks, fit using our VIB method, are robust to overfitting, since VIB finds a representation Z which ignores as many details of the input X as possible. In addition, they are more robust to adversarial inputs than deterministic models, which are fit using (penalized) maximum likelihood estimation. Intuitively this is because each input image gets mapped to a distribution rather than a unique Z, so it is more difficult to pass small idiosyncratic perturbations through the latent bottleneck.

Figure 3: The adversary is trying to force each 0 to be classified as a 1. Successful attacks have a red background. Unsuccessful attacks have a green background. In the case that the label is changed to an incorrect label different from the target label (i.e., the classifier outputs something other than 0 or 1), the background is purple. The first column is the original image. The second column is adversarial examples targeting our deterministic baseline model. The third column is adversarial examples targeting our dropout model. The remaining columns are adversarial examples targeting our VIB models for different β.

Putting these two bounds together, we have that our unsupervised information bottleneck objective, max I(Z, X) − βI(Z, i), is lower bounded by

(1/N) Σ_i [ ∫dz p(z|x_i) log q(x_i|z) − β KL[p(Z|x_i), r(Z)] ].

The idea of using information theoretic objectives for deep neural networks was pointed out in Tishby & Zaslavsky (2015b). However, they did not include any experimental results, since their approach for optimizing the IB objective relied on the iterative Blahut-Arimoto algorithm, which is infeasible to apply to deep neural networks.

Variational inference is a natural way to approximate the problem. Variational bounds on mutual information have previously been explored in Agakov (2004), though not in conjunction with the information bottleneck objective.
Mohamed & Rezende (2015) also explore variational bounds on mutual information, and apply them to deep neural networks, but in the context of reinforcement learning. We recently discovered Chalk et al. (2016), who independently developed the same variational lower bound on the IB objective as us. However, they apply it to sparse coding problems, and use the kernel trick to achieve nonlinear mappings, whereas we apply it to deep neural networks, which are computationally more efficient. In addition, we are able to handle large datasets by using stochastic gradient descent, whereas they use batch variational EM.

It is interesting that while this objective takes the same mathematical form as that of a Variational Autoencoder, the interpretation of the objective is very different. In the VAE, the model starts life as a generative model with a defined prior p(z) and stochastic decoder p(x|z) as part of the model, and the encoder q(z|x) is created to serve as a variational approximation to the true posterior p(z|x) = p(x|z)p(z)/p(x). In the VIB approach, the model is originally just the stochastic encoder p(z|x), and the decoder q(x|z) is the variational approximation to the true p(x|z) = p(z|x)p(x)/p(z), while r(z) is the variational approximation to the marginal p(z) = ∫dx p(x)p(z|x). This difference in interpretation makes natural suggestions for novel directions for improvement.

This precise setup, albeit with a different motivation, was recently explored in Higgins et al. (2016), where they demonstrated that by changing the weight of the variational autoencoder's regularization term, they were able to achieve latent representations that were more capable when it came to zero-shot learning and understanding "objectness". In that work, they motivated their choice to change the relative weightings of the terms in the objective by appealing to notions in neuroscience. Here we demonstrate that appealing to the information bottleneck objective gives a principled motivation and could open the door to better understanding the optimal choice of β and more tools for assessing the importance and tradeoff of both terms.

J = (1/N) Σ_{n=1}^N [ H(p(y|y_n), p(y|x_n)) − βH(p(y|x_n)) ],

where H(p, q) = −Σ_y p(y) log q(y) is the cross entropy, H(p) = H(p, p) is the entropy, p(y|y_n) = δ_{y_n}(y) is a one-hot encoding of the label y_n, and N is the number of training examples. (Note that setting β = 0 corresponds to the usual maximum likelihood estimate.) In (Pereyra et al., 2016) they show that CP performs better than the simpler technique of label smoothing, in which we replace the zeros in the one-hot encoding of the labels by ε > 0, and then renormalize so that the distribution still sums to one. We will compare our VIB method to both the confidence penalty method and label smoothing in Section 4.1.

Figure 4: (a) Relative magnitude of the adversarial perturbation, measured using L0, L2, and L∞ norms, for the images in Figure 3 as a function of β. (We normalize all values by the corresponding norm of the perturbation against the base model.) As β increases, L0 decreases, but both L2 and L∞ increase, indicating that the adversary is being forced to put larger modifications into fewer pixels while searching for an adversarial perturbation. (b) Same as (a), but with the dropout model as the baseline. Dropout is more robust to the adversarial perturbations than the base deterministic model but still performs much worse than the VIB model as β increases.
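For concreteness, a sketch of the confidence-penalty objective given above, for a single example (NumPy; the helper name is ours):

```python
import numpy as np

def confidence_penalty_loss(log_probs, y, beta):
    """Cross entropy H(one_hot(y), p) minus beta times the entropy H(p).

    `log_probs` holds the model's log p(y|x) over all classes.
    """
    p = np.exp(log_probs)
    cross_entropy = -log_probs[y]      # H(p(y|y_n), p(y|x_n))
    entropy = -np.sum(p * log_probs)   # H(p(y|x_n))
    return cross_entropy - beta * entropy

log_p = np.log(np.array([0.7, 0.2, 0.1]))
print(confidence_penalty_loss(log_p, y=0, beta=0.1))
```

Subtracting the entropy term penalizes over-confident (low-entropy) predictive distributions, which is the regularization effect the method is after.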
Consider the special case when the bottleneck Z is a multivariate Normal, i.e., z|x ∼ N(μ_x, Σ_x), where Σ_x is a K × K positive definite matrix. The parameters μ_x, Σ_x can be constructed from a deep neural network, e.g.,

μ_x = γ_{1:K}(x),
chol(Σ_x) = diag(log(1 + exp(γ_{K+1:2K}))) + subtril(γ_{2K+1:K(K+3)/2}),

where γ(x) ∈ R^{K(K+3)/2} is the network output of input x.

In the unsupervised learning literature, our work is closely related to the work in Kingma & Welling (2014) on variational autoencoders. In fact, their method is a special case of an unsupervised version of the VIB, but with the β parameter fixed at 1.0, as we explain in Appendix B. The VAE objective, but with different values of β, was also explored in Higgins et al. (2016), but from a different perspective.

The method of Wang et al. (2016b) proposes a latent variable generative model of both x and y; their variational lower bound is closely related to ours, with the following differences. First, we do not have a likelihood term for x, since we are in the discriminative setting. Second, they fix β = 1, since they do not consider compression.

I(Z, i) = Σ_i ∫dz p(z|i) p(i) log [p(z|i)/p(z)] = (1/N) Σ_i ∫dz p(z|x_i) log [p(z|x_i)/p(z)] ≤ (1/N) Σ_i ∫dz p(z|x_i) log [p(z|x_i)/r(z)].

And this takes the form of a variational autoencoder (Kingma & Welling, 2014), except with the second KL divergence term having an arbitrary weight β.

In the supervised learning literature, our work is related to the recently proposed confidence penalty (entropy regularization) method of (Pereyra et al., 2016). In this work, they fit a deterministic network by optimizing an objective that combines the usual cross entropy loss with an extra term which penalizes models for having low-entropy predictive distributions. In more detail, their cost function takes the form given above.

Beyond the connection to existing variational autoencoder techniques, we note that the unsupervised information bottleneck objective suggests new directions to explore, including targeting the exact marginal p(z) in the regularization term, as well as the opportunity to explore tighter bounds on the first I(Z, X) term that may not require explicit variational reconstruction.

Finally, the variational fair autoencoder of Louizos et al. (2016) shares with our paper the idea of ignoring parts of the input.
However, in their approach, the user must specify which aspects of the input (the so-called "sensitive" parts) to ignore, whereas in our method, we can discover irrelevant parts of the input automatically.

This setup (which is identical to our experiments) induces a classifier which is bounded by a quadratic function, which is interesting because the theoretical framework of Fawzi et al. (2016) proves that quadratic classifiers have a greater capacity for adversarial robustness than linear functions.

We now derive an approximate bound using a second order Taylor series expansion (TSE). The bound can be made proper via Browne & McNicholas (2015). However, using the TSE is sufficient to sketch the derivation."}, {"section_index": "6", "section_name": "3 METHOD", "section_text": "Following standard practice in the IB literature, we assume that the joint distribution p(X, Y, Z) factors as follows:

p(X, Y, Z) = p(Z|X, Y) p(Y|X) p(X) = p(Z|X) p(Y|X) p(X),

Jensen's inequality implies that the negative log-likelihood softmax is upper bounded by:

−log E[S(W Z) | μ_x, Σ_x] ≤ −E[log S(W Z) | μ_x, Σ_x] = −Wμ_x + E[lse(W Z) | μ_x, Σ_x] = −Wμ_x + E[lse(Z) | Wμ_x, WΣ_xWᵀ].

The second order Taylor series expansion (TSE) of lse is given by

lse(x + δ) ≈ lse(x) + δᵀ S(x) + ½ δᵀ (diag(S(x)) − S(x)S(x)ᵀ) δ.

Figure 5: Classification accuracy of VIB classifiers, divided by accuracy of baseline classifiers, on FGS-generated adversarial examples, as a function of β. Higher is better, and the baseline is always at 1.0. For the FGS adversarial examples, when β = 0 (not shown), the VIB model's performance is almost identical to when β = 10^-8. (a) FGS accuracy normalized by the base deterministic model performance. The base deterministic model's accuracy on the adversarial examples ranges from about 1% when ε = 0.5 to about 5% when ε = 0.35. (b) Same as (a), but with the dropout model as the baseline. The dropout model is more robust than the base model, but less robust than VIB, particularly for stronger adversaries (i.e., larger values of ε). The dropout model's accuracy on the adversarial examples ranges from about 5% when ε = 0.5 to about 16% when ε = 0.35. As in the other results, relative performance is more dramatic as β increases, which seems to indicate that the VIB models are learning to ignore more of the perturbations caused by the FGS method, even though they were not trained on any adversarial examples.

Recall that the IB objective has the form I(Z, Y) − βI(Z, X). We will examine each of these expressions in turn. Let us start with I(Z, Y). Writing it out in full, this becomes

I(Z, Y) = ∫dy dz p(y, z) log [ p(y, z) / (p(y) p(z)) ] = ∫dy dz p(y, z) log [ p(y|z) / p(y) ].

Taking the expectation of the TSE at the mean yields

E_{N(0, WΣ_xWᵀ)}[lse(Wμ_x + δ)] ≈ lse(Wμ_x) + E[δ]ᵀ S(Wμ_x) + ½ E[δᵀ (diag(S(Wμ_x)) − S(Wμ_x)S(Wμ_x)ᵀ) δ]
= lse(Wμ_x) + ½ tr( WΣ_xWᵀ (diag(S(Wμ_x)) − S(Wμ_x)S(Wμ_x)ᵀ) )
= lse(Wμ_x) + ½ √S(Wμ_x)ᵀ WΣ_xWᵀ √S(Wμ_x) − ½ S(Wμ_x)ᵀ WΣ_xWᵀ S(Wμ_x),

where p(y|z) is fully defined by our encoder and the Markov chain:

p(y|z) = ∫dx p(x, y|z) = ∫dx p(y|x) p(x|z) = ∫dx p(y|x) p(z|x) p(x) / p(z).

Since KL[p(Y|Z), q(Y|Z)] ≥ 0, we have ∫dy p(y|z) log p(y|z) ≥ ∫dy p(y|z) log q(y|z), and hence

I(Z, Y) ≥ ∫dy dz p(y, z) log [ q(y|z) / p(y) ] = ∫dy dz p(y, z) log q(y|z) − ∫dy p(y) log p(y) = ∫dy dz p(y, z) log q(y|z) + H(Y).

The second moment was calculated by noting

E[Xᵀ B X] = E[tr(X Xᵀ B)] = tr(E[X Xᵀ] B) = tr(Σ B).

Notice that the entropy of our labels H(Y) is independent of our optimization procedure and so can be ignored.

Figure 6: Classification accuracy (from 0 to 1) on L2 adversarial examples (of all classes) as a function of β. The blue line is for targeted attacks, and the green line is for untargeted attacks (which are easier to resist). In this case, β = 10^-11 has performance indistinguishable from β = 0. The deterministic model and dropout model both have a classification accuracy of 0% in both the targeted and untargeted attack scenarios, indicated by the horizontal red dashed line at the bottom of the plot. This is the same accuracy on adversarial examples from this adversary reported in Carlini & Wagner (2016) on a convolutional network trained on MNIST.
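A quick numerical check of the second-order expansion of lse used above (NumPy; the test point and perturbation are arbitrary):

```python
import numpy as np

def lse(x):
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
x = rng.normal(size=5)
d = 1e-2 * rng.normal(size=5)
S = softmax(x)
tse = lse(x) + d @ S + 0.5 * d @ (np.diag(S) - np.outer(S, S)) @ d
print(lse(x + d) - tse)   # O(||d||^3): roughly 1e-7 at this scale
```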
Applying our decoder bound, we have

I(Z, Y) ≥ ∫dx dy dz p(x) p(y|x) p(z|x) log q(y|z).

This only requires samples from our joint data distribution as well as samples from our stochastic encoder, while it requires that we have access to a tractable variational approximation in q(y|z).

I(Z, X) = ∫dz dx p(x, z) log [ p(z|x) / p(z) ] = ∫dz dx p(x, z) log p(z|x) − ∫dz p(z) log p(z).

i.e., we assume p(Z|X, Y) = p(Z|X), corresponding to the Markov chain Y ↔ X ↔ Z. This restriction means that our representation Z cannot depend directly on the labels Y. (This opens the door to unsupervised representation learning, which we will discuss in Appendix B.) Besides the structure in the joint data distribution p(X, Y), the only content at this point is our model for the stochastic encoder p(Z|X); all other distributions are fully determined by these and the Markov chain constraint.

Putting this altogether, we conclude

E[S(W Z) | μ_x, Σ_x] ≳ S(Wμ_x) exp( −½ √S(Wμ_x)ᵀ WΣ_xWᵀ √S(Wμ_x) + ½ S(Wμ_x)ᵀ WΣ_xWᵀ S(Wμ_x) ).

As indicated, rather than approximate the lse via TSE, we can make a sharp, quadratic upper bound via Browne & McNicholas (2015). However this merely changes the S(Wμ_x) scaling in the exponential; the result is still log-quadratic.

Suppose we use an encoder of the form p(z|x) = N(z | fμ(x), fΣ(x)), where fe is an MLP which outputs both the K-dimensional mean μ of z as well as the K × K covariance matrix Σ. Then we can use the reparameterization trick (Kingma & Welling, 2014) to write p(z|x)dz = p(ε)dε, where z = f(x, ε) is a deterministic function of x and the Gaussian random variable ε. This formulation has the important advantage that the noise term is independent of the parameters of the model, so it is easy to take gradients.

As in Kingma & Welling (2014), this formulation allows us to directly backpropagate through a single sample of our stochastic code and ensure that our gradient is an unbiased estimate of the true expected gradient.4
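A compact sketch of the reparameterized full-covariance encoder just described, using the Cholesky parameterization from the equations earlier (softplus diagonal, strictly lower triangle from the remaining outputs). The ordering of the network outputs inside gamma is our own assumption.

```python
import numpy as np

def full_cov_head(gamma, K):
    """Map a K(K+3)/2-dim network output to (mu, L) with Sigma = L @ L.T."""
    mu = gamma[:K]
    L = np.diag(np.log1p(np.exp(gamma[K:2 * K])))   # softplus diagonal > 0
    L[np.tril_indices(K, k=-1)] = gamma[2 * K:]     # strictly lower triangle
    return mu, L

def sample_z(mu, L, rng):
    # Reparameterization trick: z = mu + L @ eps with eps ~ N(0, I), so z is
    # a deterministic function of (mu, L) and parameter-free Gaussian noise.
    return mu + L @ rng.standard_normal(mu.shape)

K = 3
rng = np.random.default_rng(0)
gamma = rng.normal(size=K * (K + 3) // 2)
mu, L = full_cov_head(gamma, K)
z = sample_z(mu, L, rng)
```

Since the noise eps carries no parameters, gradients flow through mu and L, which is what makes single-sample backpropagation unbiased.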
In this section, we present various experimental results, comparing the behavior of standard deterministic networks to stochastic neural networks trained by optimizing the VIB objective."}, {"section_index": "7", "section_name": "4.1 BEHAVIOR ON MNIST", "section_text": "We start with experiments on unmodified MNIST (i.e., no data augmentation). In order to pick a model with some "headroom" to improve, we decided to use the same architecture as in the (Pereyra et al., 2016) paper, namely an MLP with fully connected layers of the form 784 − 1024 − 1024 − 10, and ReLU activations. (Since we are not exploiting spatial information, this corresponds to the "permutation invariant" version of MNIST.) The performance of this baseline is 1.38% error. (Pereyra et al., 2016) were able to improve this to 1.17% using their regularization technique. We were able to improve this to 1.13% using our technique, as we explain below.

In our method, the stochastic encoder has the form p(z|x) = N(z | fμ(x), fΣ(x)), where fe is an MLP of the form 784 − 1024 − 1024 − 2K, where K is the size of the bottleneck. The first K outputs from fe encode μ, the remaining K outputs encode σ (after a softplus transform).

In general, while it is fully defined, computing the marginal distribution of Z, p(z) = ∫dx p(z|x)p(x), might be difficult. So let r(z) be a variational approximation to this marginal. Since KL[p(Z), r(Z)] ≥ 0 implies ∫dz p(z) log p(z) ≥ ∫dz p(z) log r(z), we have the following upper bound:

I(Z, X) ≤ ∫dx dz p(x) p(z|x) log [ p(z|x) / r(z) ].

Combining both of these bounds, we have

I(Z, Y) − βI(Z, X) ≥ ∫dx dy dz p(x) p(y|x) p(z|x) log q(y|z) − β ∫dx dz p(x) p(z|x) log [ p(z|x) / r(z) ] = L,

which we can approximate using the empirical data distribution:

L ≈ (1/N) Σ_{n=1}^N [ ∫dz p(z|x_n) log q(y_n|z) − β ∫dz p(z|x_n) log ( p(z|x_n) / r(z) ) ].

Assuming our choice of p(z|x) and r(z) allows computation of an analytic Kullback-Leibler divergence, we can put everything together to get the following objective function, which we try to minimize:

J_IB = (1/N) Σ_{n=1}^N E_{ε∼p(ε)}[ −log q(y_n | f(x_n, ε)) ] + β KL[ p(Z|x_n), r(Z) ].

4 Even if our choice of encoding distribution and variational prior do not admit an analytic KL, we could similarly reparameterize through a sample of the divergence (Kingma & Welling, 2014; Blundell et al., 2015).

The decoder is a simple logistic regression model of the form q(y|z) = S(y | f_d(z)), where S is the softmax and f_d(z) is a linear mapping from the latent code to the logits of the C = 10 classes. (In later sections, we consider more complex decoders, but here we wanted to show the benefits of VIB in a simple setting.)

Figure 8: Shown are the absolute differences between the original and final perturbed images for all three networks. The left block shows the perturbations created while targeting the VIB network. The middle block shows the perturbations needed for the deterministic baseline using precomputed whitened features. The right block shows the perturbations created for the unmodified Inception ResNet V2 network. The contrast has been increased by the same amount in all three columns to emphasize the difference in the magnitude of the perturbations. The VIB network required much larger perturbations to confuse the classifier, and even then did not achieve the targeted class in 13 of those cases.

Finally, we treat r(z) as a fixed K-dimensional spherical Gaussian, r(z) = N(z | 0, I).

We compare our method to the baseline MLP. We also consider the following deterministic limit of our model, when β = 0. In this case, we obtain the following objective function:

J_IB0 = (1/N) Σ_{n=1}^N E_{z∼N(fμ(x_n), fΣ(x_n))}[ −log S(y_n | f_d(z)) ].

Under this transformation, the experiment regime is identical to the permutation-invariant MNIST task. We therefore used a similar model architecture. Inputs are passed through two fully connected layers, each with 1024 units. Next, data is fed to a stochastic encoding layer; this layer is characterized by a spherical Gaussian with 1024 learned means and standard deviations. The output of the stochastic layer is fed to the variational classifier, itself a logistic regression, for simplicity. All other hyperparameters and training choices are identical to those used in MNIST; more details in Appendix A.
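A single-example, single-sample sketch of the J_IB objective above, with the analytic KL for r(z) = N(0, I) and a diagonal Gaussian encoder (NumPy; `decode` stands in for the logistic-regression decoder):

```python
import numpy as np

def log_softmax(a):
    a = a - a.max()
    return a - np.log(np.exp(a).sum())

def kl_to_std_normal(mu, sigma):
    # Analytic KL[N(mu, diag(sigma^2)) || N(0, I)], in nats.
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

def vib_loss(mu, sigma, decode, y, beta, rng):
    eps = rng.standard_normal(mu.shape)
    z = mu + sigma * eps                      # reparameterization trick
    nll = -log_softmax(decode(z))[y]          # -log q(y | f(x, eps))
    return nll + beta * kl_to_std_normal(mu, sigma)

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 8)) * 0.1            # toy linear decoder, K = 8
loss = vib_loss(mu=np.zeros(8), sigma=np.full(8, 0.1),
                decode=lambda z: W @ z, y=3, beta=1e-3, rng=rng)
print(loss)
```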
"}, {"section_index": "8", "section_name": "4.1.1 HIGHER DIMENSIONAL EMBEDDING", "section_text": "To demonstrate that our VIB method can achieve competitive classification results, we compared against a deterministic MLP trained with various forms of regularization. We use a K = 256 dimensional bottleneck and a diagonal Gaussian for p(z|x). The networks were trained using TensorFlow for 200 epochs using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.0001. Full hyperparameter details can be found in Appendix A."}, {"section_index": "9", "section_name": "Classification", "section_text": "The results are shown in Table 1: we see that we can slightly outperform other forms of regularization that have been proposed in the literature, while using the same network for each. Of course, the performance varies depending on β. These results are not state of the art, nor is the main focus of our work to suggest that VIB is the best regularization method by itself, which would require much more experimentation. However, using the same architecture for each experiment and comparing to VIB as the only source of regularization suggests VIB works as a decent regularizer in and of itself. Figure 1(a) plots the train and test error vs β, averaged over 5 trials (with error bars), for the case where we use a single Monte Carlo sample of z when predicting, and also for the case where we average over 12 posterior samples (i.e., we use p(y|x) = (1/S) Σ_{s=1}^S q(y|z_s) for z_s ∼ p(z|x), where S = 12). In our own investigations, a dozen samples seemed to be sufficient to capture any additional benefit the stochastic evaluations had to offer in this experiment.5

We see the same favorable VIB classification performance in ImageNet as in MNIST. By varying β, the estimated mutual information between encoding and image (I(Z, X)) varies as well. At large values of β accuracy suffers, but at intermediate values we obtain improved performance over both a deterministic baseline and a β = 0 regime. In all cases our accuracy is somewhat lower than the original 80.4% accuracy. This may be a consequence of inadequate training time or suboptimal hyperparameters.

Overall the best accuracy we achieved was using β = 0.01. Under this setting we saw an accuracy of 80.12%, nearly the same as the state-of-the-art unmodified network, but with a substantially smaller information footprint, only I(X, Z) ≈ 45 bits. This is a surprisingly small amount of information; β = 0 implies over 10,000 bits yet only reaches an accuracy of 78.87%. The deterministic baseline, which was the same network but without the VIB loss and with a 1024-unit fully connected linear layer instead of the stochastic embedding, similarly only achieved 78.75% accuracy. We stress that regressions from the achievable 80.4% are likely due to suboptimal hyperparameter settings or inadequate training.

We see several interesting properties in Figure 1(a). First, we notice that the error rate shoots up once β rises above the critical value of β ≈ 10^-2. This corresponds to a setting where the mutual information between X and Z is less than log2(10) bits, so the model can no longer represent the fact that there are 10 different classes. Second, we notice that, for small values of β, the test error is higher than the training error, which indicates that we are overfitting. This is because the network learns to be more deterministic, forcing σ ≈ 0, thus reducing the benefits of regularization. Third, we notice that for intermediate values of β, Monte Carlo averaging helps. Interestingly, the region with the best performance roughly corresponds to where the added benefit from stochastic averaging goes away, suggesting an avenue by which one could try to optimize β using purely statistics on the training set, without a validation set. We have not extensively studied this possibility yet.

Considering a continuum of β and a deterministic baseline, the best classification accuracy was achieved with β = 0.01 ∈ (0, 1). In other words, VIB offered an accuracy benefit yet used a mere ≈ 45 bits of information from each image.

5 A dozen samples wasn't chosen for any particular reason, except the old adage that a dozen samples are sufficient, as mirrored in David MacKay's book (MacKay, 2003). They proved sufficient in this case.

Model                                      | error
Baseline                                   | 1.38%
Dropout                                    | 1.34%
Dropout (Pereyra et al., 2016)             | 1.40%
Confidence Penalty                         | 1.36%
Confidence Penalty (Pereyra et al., 2016)  | 1.17%
Label Smoothing                            | 1.40%
Label Smoothing (Pereyra et al., 2016)     | 1.23%
VIB (β = 10^-3)                            | 1.13%

Table 1: Test set misclassification rate on permutation-invariant MNIST using K = 256. We compare our method (VIB) to an equivalent deterministic model using various forms of regularization. The discrepancy between our results for confidence penalty and label smoothing and the numbers reported in (Pereyra et al., 2016) is due to slightly different hyperparameters.

When β → 0, we observe the VIB optimization process tends to make fΣ(x) → 0, so the network becomes nearly deterministic. In our experiments we also train an explicitly deterministic model that has the same form as the stochastic model, except that we just use z = fμ(x) as the hidden encoding, and drop the Gaussian layer.
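A sketch of the 12-sample Monte Carlo prediction p(y|x) ≈ (1/S) Σ_s q(y|z_s) used in the classification results above (NumPy; `decode` returns class logits):

```python
import numpy as np

def predict_mc(mu, sigma, decode, S=12, rng=None):
    """Average the decoder softmax over S posterior samples of z."""
    rng = rng or np.random.default_rng(0)
    total = None
    for _ in range(S):
        z = mu + sigma * rng.standard_normal(mu.shape)
        a = decode(z)
        p = np.exp(a - a.max())
        p /= p.sum()
        total = p if total is None else total + p
    return total / S
```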
Second, we notice that, for small values of , the test error\nConsidering a continuum of and a deterministic baseline, the best classification accuracy was achieved with a = 0.01 E (0, 1). In other words, VIB offered accuracy benefit yet using a mere ~ 45 bits of information from each image\n5 A dozen samples wasn't chosen for any particular reason, except the old addage that a dozen samples are sufficient, as mirrored in David MacKay's book (MacKay. 2003). They proved sufficient in this case..\nModel error Baseline 1.38% Dropout 1.34% Dropout (Pereyra et al.]2016) 1.40% Confidence Penalty 1.36% Confidence Penalty (Pereyra et al. 2016 1.17% Label Smoothing 1.40% Label Smoothing (Pereyra et al. 2016 1.23% VIB (B = 10 1.13%\nTable 1: Test set misclassification rate on permutation-invariant MNIST using K = 256. We com- pare our method (VIB) to an equivalent deterministic model using various forms of regularization. The discrepancy between our results for confidence penalty and label smoothing and the numbers reported in (Pereyra et al.|2016) are due to slightly different hyperparameters.\nN 1 J1B0 = - Ez~N(ft(xn),f2(xn))[logS(yn|fd(z)] N n=1\nWhen -> 0, we observe the VIB optimization process tends to make f(x) -> 0, so the network. becomes nearly deterministic. In our experiments we also train an explicitly deterministic model that has the same form as the stochastic model, except that we just use z = f(x) as the hidden. encoding, and drop the Gaussian layer..\nis higher than the training error, which indicates that we are overfitting. This is because the network learns to be more deterministic, forcing ~ 0, thus reducing the benefits of regularization. Third. we notice that for intermediate values of , Monte Carlo averaging helps. Interestingly, the region with the best performance roughly corresponds to where the added benefit from stochastic averaging. goes away, suggesting an avenue by which one could try to optimize using purely statistics on the. training set without a validation set. We have not extensively studied this possibility yet.."}, {"section_index": "10", "section_name": "Adversarial Robustness", "section_text": "We next show that the VIB-trained network improves resistance to adversarial attack.\nIn Figure[1(c), we plot the IB curve, i.e., we plot I(Z, Y) vs I(Z, X) as we vary . As we allow. more information from the input through to the bottleneck (by lowering 3), we increase the mutua information between our embedding and the label on the training set, but not necessarily on the tes. set, as is evident from the plot.\nIn Figure[1(d) we plot the second term in our objective, the upper bound on the mutual information between the images X and our stochastic encoding Z, which in our case is simply the relative entropy between our encoding and the fixed isotropic unit Gaussian prior. Notice that the y-axis is a logarithmic one. 
This demonstrates that our best results (when β is between 10^-3 and 10^-2) occur where the mutual information between the stochastic encoding and the images is on the order of 10 to 100 bits.

Figure 8 shows the absolute pixel differences between the perturbed and unperturbed images for the examples in Figure 7. We see that the VIB network requires much larger perturbations in order to fool the classifier, as quantified in Table 2.

Metric            | Determ | IRv2  | VIB(0.01)
Successful target | 1.0    | 1.0   | 0.567
L2                | 6.45   | 14.43 | 43.27
L∞                | 0.18   | 0.44  | 0.92

Table 2: Quantitative results showing how the different Inception Resnet V2-based architectures (described in Section 4.2.5) respond to targeted L2 adversarial examples. Determ is the deterministic architecture, IRv2 is the unmodified Inception Resnet V2 architecture, and VIB(0.01) is the VIB architecture with β = 0.01. Successful target is the fraction of adversarial examples that caused the architecture to classify as the target class (soccer ball); lower is better. L2 and L∞ are the average distances, under the corresponding norm, between the original images and the adversarial examples. Larger values mean the adversary had to make a larger perturbation to change the class."}, {"section_index": "11", "section_name": "5 FUTURE DIRECTIONS", "section_text": "There are many possible directions for future work, including: putting the VIB objective at multiple or every layer of a network; testing on real images; using richer parametric marginal approximations, rather than assuming r(z) = N(0, I); exploring the connections to differential privacy (see, e.g., Wang et al. (2016a); Cuff & Yu (2016)); and investigating open universe classification problems (see, e.g., Bendale & Boult (2015)). In addition, we would like to explore applications to sequence prediction, where X denotes the past of the sequence and Y the future, while Z is the current representation of the network. This form of the information bottleneck is known as predictive information (Bialek et al., 2001; Palmer et al., 2015).

Figure 1: Results of VIB model on MNIST. (a) Error rate vs β for K = 256 on train and test sets. "1 shot eval" means a single posterior sample of z; "avg eval" means 12 Monte Carlo samples. The spike in the error rate at β ≈ 10^-2 corresponds to a model that is too highly regularized. Plotted values are the average over 5 independent training runs at each β. Error bars show the standard deviation in the results. (b) Same as (a), but for K = 2. Performance is much worse, since we pass through a very narrow bottleneck. (c) I(Z, Y) vs I(Z, X) as we vary β for K = 256. We see that increasing I(Z, X) helps training set performance, but can result in overfitting. (d) I(Z, X) vs β for K = 256. We see that for a good value of β, such as 10^-2, we only need to store about 10 bits of information about the input.

David Barber and Felix Agakov. The IM algorithm: a variational approach to information maximization. In NIPS, volume 16, 2004.

Shumeet Baluja, Michele Covell, and Rahul Sukthankar. The virtues of peer pressure: A simple method for discovering high-value mistakes. In Intl. Conf. Computer Analysis of Images and Patterns, 2015.
"}, {"section_index": "12", "section_name": "4.1.2 TWO DIMENSIONAL EMBEDDING", "section_text": "Abhijit Bendale and Terrance Boult. Towards open world recognition. In CVPR, 2015.

To better understand the behavior of our method, we refit our model to MNIST using a K = 2 dimensional bottleneck, but using a full covariance Gaussian. (The neural net predicts the mean and the Cholesky decomposition of the covariance matrix.) Figure 1(b) shows that, not surprisingly, the classification performance is worse (note the differently scaled axes), but the overall trends are the same as in the K = 256 dimensional case. The IB curve (not shown) also has a similar shape to before, except now the gap between training and testing is even larger.

11 The attacks still often cause the VIB model to misclassify the image, but not to the targeted label. This is a form of "partial" robustness, in that an attacker will have a harder time hitting the target class, but can still disrupt correct function of the network.

We focus on the Carlini targeted L2 attack (see Section 4.2.1). We show results for the VIB-trained network and a deterministic baseline (both on top of precomputed features), as well as for the original pretrained Inception ResNet V2 network itself. The VIB network is more robust to the targeted L2 optimization attack in both magnitude of perturbation and frequency of successful attack.

Figure 7 shows some example images which were all misclassified as "soccer balls" by the deterministic models; by contrast, with the VIB model, only 17 out of 30 of the attacks succeeded in being mislabeled as the target label.11 We find that the VIB model can resist about 43.3% of the attacks, but the deterministic models always fail (i.e., always misclassify into the targeted label).

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
The background greyscale image denotes the entropy of the vari ational classifier evaluated at each two dimensional location. As becomes larger, we forget more about the input and the embeddings start to overlap to such a degree that the classes become indis. tinguishable. We also report the test error using a single sample, err1, and using 12 Monte Carlo. samples, errmc. For \"good\" values of , a single sample suffices..\nAlhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers from adversarial to random noise. In NIPS, 2016.\nXavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AI/Statistics, volume 9, pp. 249-256, 2010.\nWe see several interesting properties. First, as increases (so we pass less information through). the embedding covariances increase in relation to the distance between samples, and the classe start to overlap. Second, once passes a critical value, the encoding \"collapses\"', and essentiall all the class information is lost. Third, there is a fair amount of uncertainty in the class predition (q(y[z)) in the areas between the class embeddings. Fourth, for intermediate values of (say 10-. in Figure[2(b)), predictive performance is still good, even though there is a lot of uncertainty abou. where any individual image will map to in comparison to other images in the same class. This mean it would be difficult for an outside agent to infer which particular instance the model is representing. a property which we will explore more in the following sections..\nRuitong Huang, Bing Xu, Dale Schuurmans, and Csaba Szepesvari. Learning with a strong adver sary. CoRR, abs/1511.03034, 2015.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015\nDiederik P Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.\nDavid JC MacKay. Information theory, inference and learning algorithms. Cambridge university press, 2003.\nSince the initial work by Szegedy et al.[(2013) and Goodfellow et al.[(2014), many different adver saries have been proposed. Most attacks fall into three broad categories: optimization-based attack. (Szegedy et al.|2013 Carlini & Wagner2016f|Moosavi-Dezfooli et al.2 2016 Papernot et al.]2015 Robinson & Graham2015, Sabour et al.2016), which directly run an optimizer such as L-BFGs or ADAM (Kingma & Ba2015) on image pixels to find a minimal perturbation that changes the model's classification; single-step gradient-based attacks (Goodfellow et al.]2014) Kurakin et al. 2016, Huang et al.|2015), which choose a gradient direction of the image pixels at some loss anc then take a single step in that direction; and iterative gradient-based attacks (Kurakin et al.2016\nShakir Mohamed and Danilo Jimenez Rezende. Variational information maximisation for intrinsi cally motivated reinforcement learning. In NIPS, pp. 2125-2133, 2015.\nSeyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. Arxiv, 2016.\nSeyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In CVPR, 2016.\nNicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. Arxiv. 2016.\nIrina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. 
Szegedy et al. (2013) was the first work to show that deep neural networks (and other kinds of classifiers) can be easily "fooled" into making mistakes by changing their inputs by imperceptibly small amounts. In this section, we will show how training with the VIB objective makes models significantly more robust to such adversarial examples.

Many adversaries can be formalized as either untargeted or targeted variants. An untargeted adversary can be defined as A(X, M) → X', where A(·) is the adversarial function, X is the input image, X' is the adversarial example, and M is the target model. A is considered successful if M(X) ≠ M(X'). Recently, Moosavi-Dezfooli et al. (2016) showed how to create a "universal" adversarial perturbation δ that can be added to any image X in order to make M(X + δ) ≠ M(X) for a particular target model.

Stephanie E. Palmer, Olivier Marre, Michael J. Berry, and William Bialek. Predictive information in a sensory population. PNAS, 112(22):6908-6913, 2015.

In this work, we focus on the Fast Gradient Sign (FGS) method proposed in Goodfellow et al. (2014) and the L2 optimization method proposed in Carlini & Wagner (2016). FGS is a standard baseline attack that takes a single step in the gradient direction to generate the adversarial example. As originally described, FGS generates untargeted adversarial examples. On MNIST, Goodfellow et al. (2014) reported that FGS could generate adversarial examples that fooled a maxout network approximately 90% of the time with ε = 0.25, where ε is the magnitude of the perturbation at each pixel. The L2 optimization method has been shown to generate adversarial examples with smaller perturbations than any other method published to date, which were capable of fooling the target network 100% of the time. We consider both targeted attacks and untargeted attacks for the L2 optimization method.8

Sara Sabour, Yanshuai Cao, Fartash Faghri, and David J. Fleet. Adversarial manipulation of deep representations. In ICLR, 2016.

Noam Slonim, Gurinder Singh Atwal, Gasper Tkacik, and William Bialek. Information-based clustering. PNAS, 102(51):18297-18302, 2005."}, {"section_index": "13", "section_name": "4.2.2 ADVERSARIAL ROBUSTNESS", "section_text": "There are multiple definitions of adversarial robustness in the literature. The most basic, which we shall use, is accuracy on adversarially perturbed versions of the test set, called adversarial examples.

It is also important to have a measure of the magnitude of the adversarial perturbation. Since adversaries are defined relative to human perception, the ideal measure would explicitly correspond to how easily a human observer would notice the perturbation. In lieu of such a measure, it is common to compute the size of the perturbation using L0, L1, L2, and L∞ norms (Szegedy et al., 2013; Goodfellow et al., 2014; Carlini & Wagner, 2016; Sabour et al., 2016). In particular, the L0 norm measures the number of perturbed pixels, the L2 norm measures the Euclidean distance between X and X', and the L∞ norm measures the largest change to any pixel.

We used the same model architectures as in Section 4.1, using a K = 256 bottleneck. The architectures included a deterministic (base) model trained by MLE; a deterministic model trained with dropout (the dropout rate was chosen on the validation set); and a stochastic model trained with VIB for various values of β.
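A minimal sketch of the FGS step described above; the gradient of the loss with respect to the input is assumed to be supplied by the training framework, and the clip to [-1, 1] matches the input scaling used in these experiments.

```python
import numpy as np

def fgs(x, grad_loss_wrt_x, eps=0.25):
    """Fast Gradient Sign: one signed-gradient step of size eps per pixel."""
    return np.clip(x + eps * np.sign(grad_loss_wrt_x), -1.0, 1.0)
```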
On the relation between identifiability, differentia privacy and Mutual-Information privacy. IEEE Trans. Inf. Theory, 62:5018-5029, 2016a.\nWeiran Wang, Honglak Lee, and Karen Livescu. Deep variational canonical correlation analysis arXiv [cs.LG].11 October 2016b. URL https://arxiv.org/abs/1610.03454\nFor the VIB models, we use 12 posterior samples of Z to compute the class label distribution p(y[x) This helps ensure that the adversaries can get a consistent gradient when constructing the perturba tion, and that they can get a consistent evaluation when checking if the perturbation was successful\n6 There are also other adversaries that don't fall as cleanly into those categories, such as \"fooling im-. ages\" from[Nguyen et al.(2014), which remove the human perceptual constraint, generating regular geometric patterns or noise patterns that networks confidently classify as natural images; and the idea of generating ad- versaries by stochastic search for images near the decision boundary of multiple networks from |Baluja et al (2015).\nA targeted adversary can be defined as A(X, M, l) -> X', where l is an additional target label, and A is only considered successful if M(X') = l|/ Targeted attacks usually require larger magnitude perturbations, since the adversary cannot just \"nudge'' the input across the nearest decision boundary but instead must force it into a desired decision region.\nBoris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging SIAM Journal on Control and Optimization, 30(4):838-855, 1992\nSabour et al.(2016) proposes a variant of the targeted attack, A(Xs, M, XT, k) -> X's, where Xs is the source image, XT is a target image, and k is a target layer in the model M. A produces X's by minimizing the difference in activations of M at layer k between XT and X's. The end result of this attack for a classification network is still that M(X's) yields a target label implicitly specified by XT in a successful attack.\nCarlini & Wagner (2016) shared their code with us, which allowed us to perform the attack with exactly the same parameters they used for their paper, including the maximum number of iterations and maximum C value (see their paper for details)."}]
HycUbvcge
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Adrian Benton, Huda Khayrallah, Biman Gujral Drew Reisinger, Sheng Zhang, Raman Arora\nadrian', huda*,bgujral1*, reisinger', zsheng2*, arora' *@jhu.edu, '@cogsci.jhu.edu, '@cs.jhu.edu.\nWe present Deep Generalized Canonical Correlation Analysis (DGCCA) - a. method for learning nonlinear transformations of arbitrarily many views of data. such that the resulting transformations are maximally informative of each other.. While methods for nonlinear two-view representation learning (Deep CCA, (An-. drew et al.]2013)) and linear many-view representation learning (Generalized. CCA (Horst|[1961)) exist, DGCCA is the first CCA-style multiview representation learning technique that combines the flexibility of nonlinear (deep) representation. learning with the statistical power of incorporating information from many inde-. pendent sources, or views. We present the DGCCA formulation as well as an. efficient stochastic optimization algorithm for solving it. We learn DGCCA repre-. sentations on two distinct datasets for three downstream tasks: phonetic transcrip. tion from acoustic and articulatory measurements, and recommending hashtags. and friends on a dataset of Twitter users. We find that DGCCA representations. soundly beat existing methods at phonetic transcription and hashtag recommenda. tion, and in general perform no worse than standard linear many-view techniques"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Multiview representation learning refers to settings where one has access to many \"views\" of data at train time. Views often correspond to different modalities or independent information about ex- amples: a scene represented as a series of audio and image frames, a social media user characterized by the messages they post and who they friend, or a speech utterance and the configuration of the speaker's tongue. Multiview techniques learn a representation of data that captures the sources of variation common to all views.\n1DLVSIS.AOLSC nalive lalenl space. In Computer Vision and Pattern Reco gnlllon 2012 IEEE Conference on, pp. 2160-2167. IEEE, 2012. Karthik Sridharan and Sham M Kakade. An information theoretic framework for multi-view learn ing. In Proceedings of COLT, 2008.. Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. Unsupervised learning of acoustic. features via deep canonical correlation analysis. In Proc. of the IEEE Int. Conf. Acoustics, Speech and Sig. Proc. (ICASSP'15), 2015a. Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representatior learning. In Proc. of the 32nd Int. Conf. Machine Learning (ICML 2015), 2015b.. Weiran Wang, Raman Arora, Karen Livescu, and Nathan Srebro. Stochastic optimization for deep. cca via nonlinear orthogonal iterations. In Proceedings of the 53rd Annual Allerton Conference on Communication, Control and Computing (ALLERTON), 2015c. John R. Westbury. X-ray microbeam speech production database users handbook. In Waisman Cen-. ter on Mental Retardation & Human Development University of Wisconsin Madison, WI 53705. 2280, 1994. Dong Xiaowen. Multi-View Signal Processing and Learning on Graphs. PhD thesis, Ecole Poly. technique Federale de Lausanne, 2014.\nMultiview representation techniques are attractive for intuitive reasons. A representation that is abl. to explain many views of the data is more likely to capture meaningful variation than a representatior. that is a good fit for only one of the views. 
They are also attractive for theoretical reasons. For example, Anandkumar et al. (2014) show that certain classes of latent variable models, such as Hidden Markov Models, Gaussian Mixture Models, and Latent Dirichlet Allocation models, can be optimally learned with multiview spectral techniques. Representations learned from many views will generalize better than those learned from one, since the learned representations are forced to accurately capture variation in all views at the same time (Sridharan & Kakade, 2008) - each view acts as a regularizer, constraining the possible representations that can be learned. These methods are often based on canonical correlation analysis (CCA), a classical statistical technique proposed by Hotelling (1936).

In spite of encouraging theoretical guarantees, multiview learning techniques cannot freely model nonlinear relationships between arbitrarily many views. Either they are able to model variation across many views, but can only learn linear mappings to the shared space (Horst, 1961), or they simply cannot be applied to data with more than two views using existing techniques based on kernel CCA (Hardoon et al., 2004) and deep CCA (Andrew et al., 2013)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Abhishek Kumar, Piyush Rai, and Hal Daume. Co-regularized multi-view spectral clustering. In Advances in Neural Information Processing Systems, 2011."}, {"section_index": "3", "section_name": "APPENDIX A DERIVING THE GCCA OBJECTIVE GRADIENT", "section_text": "Here we present Deep Generalized Canonical Correlation Analysis (DGCCA). Unlike previous correlation-based multiview techniques, DGCCA learns a shared representation from data with arbitrarily many views and simultaneously learns nonlinear mappings from each view to this shared space. The only (mild) constraint is that these nonlinear mappings from views to shared space must be differentiable. Our main methodological contribution is the derivation of the gradient update for the Generalized Canonical Correlation Analysis (GCCA) objective (Horst, 1961). As a practical contribution, we have also released an implementation of DGCCA.1

In order to train the neural networks in DGCCA, we need to compute the gradient of the GCCA objective with respect to any one of its input views. This gradient can then be backpropagated through the input networks to derive updates for the network weights.

Let N be the number of data points and J the number of views. Let Y_j \in R^{c_K \times N} be the data of neurons in the output layer of the jth network. Then, GCCA can be written as the following optimization problem, where r is the dimensionality of the learned auxiliary representation:

    minimize_{U_j \in R^{c_K \times r},\, G \in R^{r \times N}} \sum_{j=1}^{J} \| G - U_j^T Y_j \|_F^2   subject to G G^T = I_r

We also evaluate DGCCA-learned representations on two distinct datasets and three downstream tasks: phonetic transcription from aligned speech and articulatory data, and Twitter hashtag and friend recommendation from six text and network feature views. We find that downstream performance of DGCCA representations is ultimately task-dependent. However, we find clear gains in performance from DGCCA for tasks previously shown to benefit from representation learning on more than two views, with up to 4% improvement in heldout accuracy for phonetic transcription."}, {"section_index": "4", "section_name": "2 PRIOR WORK", "section_text": "
    (u_1^*, u_2^*) = argmax_{u_1 \in R^{d_1}, u_2 \in R^{d_2}} corr(u_1^T X_1, u_2^T X_2)
                   = argmax_{u_1 \in R^{d_1}, u_2 \in R^{d_2}} \frac{u_1^T \Sigma_{12} u_2}{\sqrt{u_1^T \Sigma_{11} u_1 \; u_2^T \Sigma_{22} u_2}}

The kth row of G is g_k^T, and since the rows of G are orthonormal eigenvectors of M, the matrix product G g_k = e_k for k <= r (and G g_k = 0 for k > r). Writing M = \sum_{k=1}^{N} \lambda_k g_k g_k^T, we have

    G M G^T = \sum_{k=1}^{N} \lambda_k (G g_k)(G g_k)^T = \sum_{k=1}^{r} \lambda_k e_k e_k^T

But this is just an r x r diagonal matrix containing the top r eigenvalues of M, so we can write the GCCA objective as

    Jr - \sum_{i=1}^{r} \lambda_i(M)

Deep CCA (DCCA) (Andrew et al., 2013) is an extension of CCA that addresses the first limitation by finding maximally linearly correlated non-linear transformations of two vectors. It does this by passing each of the input views through stacked non-linear representations and performing CCA on the outputs.

Thus, minimizing the GCCA objective (w.r.t. the weights of the neural nets) means maximizing the sum of eigenvalues \sum_{i=1}^{r} \lambda_i(M), which we will henceforth denote by L.

    (u_1^*, u_2^*, W_1^*, W_2^*) = argmax_{u_1, u_2, W_1, W_2} corr(u_1^T f_1(X_1), u_2^T f_2(X_2))

1 See https://bitbucket.org/adrianbenton/dgcca-py3 for an implementation of DGCCA along with data from the synthetic experiments.

The paper is organized as follows. We review prior work in Section 2. In Section 3 we describe DGCCA. Empirical results on a synthetic dataset and three downstream tasks are presented in Section 4. In Section 5, we describe the differences between DGCCA and other non-CCA-based multiview learning work, and conclude with future directions in Section 6.

It can be shown that the solution is found by solving a certain eigenvalue problem. In particular, define C_jj = Y_j Y_j^T \in R^{c_K \times c_K}, P_j = Y_j^T C_jj^{-1} Y_j (note that P_j is symmetric and idempotent), and M = \sum_{j=1}^{J} P_j (since each P_j is psd, so is M). Then the rows of G are the top r (orthonormal) eigenvectors of M, and U_j = C_jj^{-1} Y_j G^T. Thus, at the minima of the objective, we can rewrite the reconstruction error as follows:

    \sum_{j=1}^{J} \| G - U_j^T Y_j \|_F^2 = \sum_{j=1}^{J} \| G - G Y_j^T C_jj^{-1} Y_j \|_F^2
                                          = \sum_{j=1}^{J} \| G (I_N - P_j) \|_F^2
                                          = \sum_{j=1}^{J} Tr[ G (I_N - P_j) G^T ]
                                          = \sum_{j=1}^{J} [ Tr(I_r) - Tr(G P_j G^T) ]
                                          = Jr - Tr(G M G^T)

Some of the most successful techniques for multiview representation learning are based on canonical correlation analysis (Wang et al., 2015a;b) and its extensions to the nonlinear and many-view settings, which we describe in this section. For other related multiview learning techniques, see Section 5.

Canonical correlation analysis (CCA) (Hotelling, 1936) is a statistical method that finds maximally correlated linear projections of two random vectors and is a fundamental multiview learning technique. Given two input views, X_1 \in R^{d_1} and X_2 \in R^{d_2}, with covariance matrices \Sigma_{11} and \Sigma_{22}, respectively, and cross-covariance matrix \Sigma_{12}, CCA finds directions that maximize the correlation between them:

    (u_1^*, u_2^*) = argmax_{u_1^T \Sigma_{11} u_1 = u_2^T \Sigma_{22} u_2 = 1} u_1^T \Sigma_{12} u_2

This technique has two limitations that have led to significant extensions: First, it is limited to learning representations that are linear transformations of the data in each view, and second, it can only leverage two input views.

Let us use f_1(X_1) and f_2(X_2) to represent the network outputs. The weights, W_1 and W_2, of these networks are trained through standard backpropagation to maximize the CCA objective.

Another extension of CCA, which addresses the limitation on the number of views, is Generalized CCA (GCCA) (Horst, 1961). It corresponds to solving the optimization problem in Equation (2) of finding a shared representation G of J different views, where N is the number of data points.
d, is the dimensionality of the jth view, r is the dimensionality of the learned representation, and X, E Rd, x N is the data matrix for the jth view2\n(Pj)cd=>`(Yj)kc(C)ke(Yj)ed k,l=1\n||G-u, x|l minimize U;ERdjXr,GERrX N j=1 GGT =Ir subject to\nThus, by the product rule"}, {"section_index": "5", "section_name": "3 DEEP GENERALIZED CANONICAL CORRELATION ANALYSIS (DGCCA)", "section_text": "In this section, we present deep GCCA (DGCCA): a multiview representation learning technique that benefits from the expressive power of deep neural networks and can also leverage statistical strength from more than two views in data, unlike Deep CCA which is limited to only two views More fundamentally, deep CCA and deep GCCA have very different objectives and optimization problems, and it is not immediately clear how to extend deep CCA to more than two views.\nThe derivative in the last term can also be computed using the chain rule\nDGCCA learns a nonlinear map for each view in order to maximize the correlation between the learnt representations across views. In training, DGCCA passes the input vectors in each view through multiple layers of nonlinear transformations and backpropagates the gradient of the GCCA. objective with respect to network parameters to tune each view's network, as illustrated in Figure|1. The objective is to train networks that reduce the GCCA reconstruction error among their outputs. At test time, new data can be projected by feeding them through the learned network for each view..\nd(Cz)kl N 0(C)ke O(Cjj)mn d(Yj)ab d(Cjj)mn 0(Yj)ab m,n=1 N C)km( m,n=1 Sam(Yj)nb + 0an(Y) N -Cj)ka n=1 N LY m. m=1\n)kl O(Cjj)mn d(Yj)ab d(Cjj)mn 0(Yj)ab m,n=1 N (C)km( 7 n. m,n=1 [Sam(Yj)nb+ dan(Yj)ml N -C)kaC i)ne( nl n=1 N C I km i m m=1\nAW AW GCCA I 1 1 1 1 1 1 1 1 i4w\nAW 4W GCCA I 1 1 1 1 1 1 1 1 X 1 1 / AW 1\nFigure 1: A schematic of DGCCA with deep networks for J views\nSubstituting this into the expression for a(Y)\n(P; )cd Scb(CY))ad+ 0db(C) Yi -(C) i) (C) Yi)a 1(Y 'C (IN Pj)cb(CYj)ad + (IN P)d(C)\na(Pj)cd K l = dcb b(Yj)ea(Cj)ae+ d(Yj)ab l=1 ddb >(Yj)kc(Cz1) )ka+ k=1 a(C ) kl (Yj)kc(Yj)ed d k,l=1 cbC-Y Yad+ Odb(C) K (Yj)kc(Yj)ed-g L + a a k,l=1\nSolving GCCA requires finding an eigendecomposition of an N N matrix, which scales quadrat ically with sample size and leads to memory constraints. Unlike CCA and DCCA, which only learn projections or transformations on each of the views, GCCA also learns a view-independent repre- sentation G that best reconstructs all of the view-specific representations simultaneously. The key limitation of GCCA is that it can only learn linear transformations of each view.\nWe now formally define the DGCCA problem. We consider J views in our data, and let X; E Rd, N denote the jth input matrix3The network for the jth view consists of K, layers. Assume,. for simplicity, that each layer in the jth view network has c; units with a final (output) layer of size. Oj. The output of the kth layer for the jth view is h = s(Whk-1), where s : R > R is a. nonlinear activation function and W? E Rc ck-1 is the weight matrix for the kth layer of the jth. view network. 
We denote the output of the final layer as fj(Xj)..\na(Pj)cd Ocb(C Y)ad+ ddb(C a(Yj)ab (CYj)ac(Y}'Cz'Yj)b (CYj) C-: (In P)c(C- Ydad - (IN -Pj)db(CYj)ac\nOL Finally, substituting this into our expression for we find that\nthat J |G-UJ fj(Xj)lIF minimize U;ERjXr,GERrXN j=1 GGT =Ir subject to\nwhere G E Rr N is the shared representation we are interested in learning\nOptimization: We solve the DGCCA optimization problem using stochastic gradient descen. (SGD) with mini-batches. In particular, we estimate the gradient of the DGCCA objective in Prob. lem 3 on a mini-batch of samples that is mapped through the network and use back-propagatior. to update the weight matrices, Wj's. However, note that the DGCCA optimization problem is a. constrained optimization problem. It is not immediately clear how to perform projected gradient de. scent with back-propagation. Instead, we characterize the objective function of the GCCA problem. at an optimum, and compute its gradient with respect to the inputs to GCCA, i.e. with respect to the network outputs. These gradients are then back-propagated through the network to update Wj's..\nBut recall that U . Using this, the gradient simplifies as follows:\nAlthough the relationship between DGCCA and GCCA is analogous to the relationship betweei. DCCA and CCA, derivation of the GCCA objective gradient with respect to the network outpu layers is non-trivial. The main difficulty stems from the fact that there is no natural extension of the. correlation objective to more than two random variables. Instead, we consider correlations betweer every pair of views, stack them in a J J matrix and maximize a certain matrix norm for tha. matrix. For GCCA, this suggests an optimization problem that maximizes the sum of correlations. between a shared representation and each view. Since the objective as well as the constraints of th generalized CCA problem are very different from that of the CCA problem, it is not immediately. obvious how to extend Deep CCA to Deep GCCA.\n|G-Ufj(X;)|I=|G-Gfj(Xj)Cz'fj(X;)llF=rJ-Tr(GMG j=1 j=1\nMinimizing the GCCA objective (w.r.t. the weights of the neural networks) means maximizing Tr(GMGT), which is the sum of eigenvalues L = r=1 A,(M). Taking the derivative of L with. respect to each output layer f(X) we have:.\nThus, the gradient is the difference between the r-dimensional auxiliary representation G embedded into the subspace spanned by the columns of U, (the first term) and the projection of the actual data in f(X) onto the said subspace (the second term). Intuitively, if the auxiliary representation G is far away from the view-specific representation U, fj(Xj), then the network weights should. receive a large update. Computing the gradient descent update has time complexity O(JNrd),. where d = max(d1. d2. . d 1) is the largest dimensionality of the input views.."}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we apply DGCCA to a small synthetic data set to show how it preserves the generative structure of data sampled from a multiview mixture model. The data we use for this experiment are\nN dL (GG)ca(IN-Pj)cb(Cj'Yj)ad d(Yj)ab c,d=1 N (G~G)cd(IN - Pj)ds(Cj] c,d=1 2[Cz'Y;G'G(In Pj)]ab aL 2C-1Y:GG(n - Pa\naL =2U;G-2U;U}Y OYj\nThus, the gradient is the difference between the r-dimensional auxiliary representation G embedded. into the subspace spanned by the columns of U; (the first term) and the projection of the network outputs in Y, = f(X,) onto said subspace (the second term). 
Intuitively, if the auxiliary representation G is far away from the view-specific representation U_j^T f_j(X_j), then the network weights should receive a large update."}, {"section_index": "7", "section_name": "APPENDIX B DGCCA OPTIMIZATION PSEUDOCODE", "section_text": "plotted in Figure 2. Points that share the same color across different views are sampled from the same mixture component.

Algorithm 1: DGCCA optimization
    Input: multiview data X_1, X_2, ..., X_J, number of iterations T, learning rate \eta
    Output: O_1, O_2, ..., O_J
    Initialize weights W_1, W_2, ..., W_J
    for iteration t = 1, 2, ..., T do
        for each view j = 1, 2, ..., J do
            O_j <- forward pass of X_j with weights W_j
            mean-center O_j
        end for
        U_1, ..., U_J, G <- gcca(O_1, ..., O_J)
        for each view j = 1, 2, ..., J do
            dF/dO_j <- U_j U_j^T O_j - U_j G
            grad W_j <- backprop(dF/dO_j, W_j)
            W_j <- W_j - \eta * grad W_j
        end for
    end for
    for each view j = 1, 2, ..., J do
        O_j <- forward pass of X_j with weights W_j
        mean-center O_j
    end for
    U_1, ..., U_J, G <- gcca(O_1, ..., O_J)
    for each view j = 1, ..., J do
        O_j <- U_j^T O_j
    end for

Figure 3: The matrix G learned from applying (linear) GCCA or DGCCA to the data in Figure 2.

It is also illustrative to consider the view-specific representations learned by DGCCA, that is, to consider the outputs of the neural networks that were trained to maximize the GCCA objective. We plot the representations in Figure 4. For each view, we have learned a nonlinear mapping that does remarkably well at making the mixture components linearly separable. Recall that absolutely no direct supervision was given about which mixture component each point was generated from. The only training signals available to the networks were the reconstruction errors between the network outputs and the learned representation G.

Figure 4: Outputs of the trained input neural networks in Section 4.1 applied to the data in Figure 2.

In this section, we discuss experiments on the University of Wisconsin X-ray Microbeam Database (XRMB) (Westbury, 1994). XRMB contains acoustic and articulatory recordings as well as phonemic labels. We present phoneme classification results on the acoustic vectors projected using DCCA,

Algorithm 1 contains the pseudocode for the DGCCA optimization algorithm. In practice we use stochastic optimization with minibatches, following Wang et al. (2015c).

Figure 2: Synthetic data used in the Section 4.1 experiments. Importantly, in each view, there is no linear transformation of the data that separates the two mixture components, in the sense that the generative structure of the data could not be exploited by a linear model. This point is reinforced by Figure 3(a), which shows the two-dimensional representation G learned by applying (linear) GCCA to the data in Figure 2. The learned representation completely loses the structure of the data.

We can contrast the failure of GCCA to preserve structure with the result of applying DGCCA; in this
case, the input neural networks had three hidden layers with ten units each, with weights randomly initialized. We plot the representation G learned by DGCCA in Figure 3(b). In this representation, the mixture components are easily separated by a linear classifier; in fact, the structure is largely preserved even after projection onto the first coordinate of G."}, {"section_index": "8", "section_name": "APPENDIX C RECONSTRUCTION ERROR AND DOWNSTREAM PERFORMANCE", "section_text": "GCCA, and DGCCA. We set acoustic and articulatory data as the two views and phoneme labels as the third view for GCCA and DGCCA. For classification, we run K-nearest neighbor classification (Cover & Hart, 1967) on the projected result.

We use the same train/tune/test split of the data as Arora & Livescu (2014). To limit experiment runtime, we use a subset of speakers for our experiments. We run a set of cross-speaker experiments using the male speaker JW11 for training and two splits of JW24 for tuning and testing. We also perform parameter tuning for the third view with 5-fold cross validation using a single speaker, JW11. For both experiments, we use acoustic and articulatory measurements as the two views in DCCA. Following the pre-processing in Andrew et al. (2013), we get 273 and 112 dimensional feature vectors for the first and second view, respectively. Each speaker has ~50,000 frames. For the third view in GCCA and DGCCA, we use 39-dimensional one-hot vectors corresponding to the labels for each frame, following Arora & Livescu (2014)."}, {"section_index": "9", "section_name": "4.2.2 PARAMETERS", "section_text": "Figure 6: Tuning reconstruction error against Recall at 1000 for the hashtag prediction task. Each point corresponds to a different setting of hyperparameters.

CCA methods are typically evaluated intrinsically by the amount of correlation captured, or reconstruction error. These measures are dependent on the width of the shared embeddings and view-specific output layers, and do not necessarily predict downstream performance. Although reconstruction error cannot solely be relied on for model selection for a downstream task, we found that it was useful as a signal to weed out very poor models. Figure 6 shows the reconstruction error against hashtag prediction Recall at 1000 for an initial grid search of DGCCA hyperparameters. Models with tuning reconstruction error greater than 10^3 can safely be ignored, while there is some variability in the performance of models achieving lower error."}, {"section_index": "10", "section_name": "4.2.3 RESULTS", "section_text": "As we show in Table 1, DGCCA improves upon both the linear multiview GCCA and the non-linear 2-view DCCA for both the cross-speaker and speaker-dependent cross-validated tasks.

In addition to accuracy, we examine the reconstruction error, i.e. the objective in Equation 3, obtained from the objective in GCCA and DGCCA.4 This sharp improvement in reconstruction error shows that a non-linear algorithm can better model the data.

Since a DGCCA model with high reconstruction error suggests that the views do not agree with each other at all, it makes sense that the shared embedding will likely be noisy, whereas a relatively low reconstruction error suggests that the transformed views have converged to a stable solution.

In this experimental setup, DCCA under-performs the baseline of simply running KNN on the original acoustic view; a minimal sketch of this evaluation protocol follows.
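The KNN evaluation used throughout these experiments is straightforward; the following is a hedged scikit-learn sketch, not the authors' code, where train_Z/test_Z stand for the projected features of a view, train_y/test_y for the frame-level phoneme labels, and k is an assumed hyperparameter.

from sklearn.neighbors import KNeighborsClassifier

def knn_phoneme_accuracy(train_Z, train_y, test_Z, test_y, k=4):
    """KNN classification on projected features (Cover & Hart, 1967).

    train_Z, test_Z: arrays of shape (n_frames, n_projected_dims).
    train_y, test_y: frame-level phoneme labels.
    k: number of neighbors (an assumed value, tuned in practice).
    """
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(train_Z, train_y)
    # Returns test accuracy, the metric reported in Table 1.
    return clf.score(test_Z, test_y)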
Prior work considered the output of DCCA stacked on to the central frame of the original acoustic view (39 dimensions). This poor performance, in the absence of the original features, indicates that DCCA was not able to find a more informative projection than the original acoustic features based on correlation with the articulatory view within the first 30 dimensions.

Table 1: KNN phoneme classification performance

                     CROSS-SPEAKER                 SPEAKER-DEPENDENT
METHOD      DEV Acc   TEST Acc   REC ERROR   DEV Acc   TEST Acc   REC ERROR
MFCC         48.89     49.28        -         66.27     66.22        -
DCCA         45.40     46.06        -         65.88     65.81        -
GCCA         49.59     50.18      40.67       69.52     69.78      40.39
DGCCA        53.78     54.22      35.89       72.62     72.33      20.52

To highlight the improvements of DGCCA over GCCA, Figure 5 presents a subset of the confusion matrices on speaker-dependent test data. In particular, we observe large improvements in the classification of D, F, K, SH, V and Y. GCCA outperforms DGCCA for UH and DH. These matrices also highlight the common misclassifications that DGCCA improves upon. For instance,

4 For 2-view experiments, correlation is a common metric to compare performance. Since that metric is unavailable in a multiview setting, reconstruction error is the analogue.

We use a fixed network size and regularization for the first two views, each containing three hidden layers with sigmoid activation functions. Hidden layers for the acoustic view were all width 1024, and layers in the articulatory view all had width 512 units. L2 penalty constants of 0.0001 and 0.01 were used to train the acoustic and articulatory view networks, respectively. The output layer dimension of each network is set to 30 for DCCA and DGCCA. For the 5-fold speaker-dependent experiments, we performed a grid search for the network sizes in {128, 256, 512, 1024} and covariance matrix regularization in {10^-2, 10^-4, 10^-6, 10^-8} for the third view in each fold. We fix the hyperparameters for these experiments, optimizing the networks with minibatch stochastic gradient descent with a step size of 0.005, batch size of 2000, and no learning decay or momentum. The third view neural network had an L2 penalty of 0.0005.

(a) GCCA (b) DGCCA
Figure 5: Confusion matrices on speaker-dependent test data for GCCA and DGCCA.

DGCCA rectifies the frequent misclassification of V as P, R and B by GCCA. In addition, the commonly incorrect classification of phonemes such as S and T is corrected by DGCCA, which enables better performance on other voiceless consonants such as F, K and SH. Vowels are classified with almost equal accuracy by both methods.

Linear multiview techniques are effective at recommending hashtags and friends for Twitter users (Benton et al., 2016). In this experiment, six views of a Twitter user were constructed by applying principal component analysis (PCA) to the bag-of-words representations of (1) tweets posted by the ego user, (2) other mentioned users, (3) their friends, and (4) their followers, as well as one-hot encodings of the local (5) friend and (6) follower networks. We learn and evaluate DGCCA models on identical training, development, and test sets as Benton et al. (2016), and evaluate the DGCCA
representations on macro precision at 1000 (P@1000) and recall at 1000 (R@1000) for the hashtag and friend recommendation tasks described there.

We trained 40 different DGCCA model architectures, each with identical architectures across views, where the width of the hidden and output layers, c_1 and c_2, for each view are drawn uniformly from [10, 1000], and the auxiliary representation width r is drawn uniformly from [10, c_2].5 All networks used ReLUs as activation functions, and were optimized with Adam (Kingma & Ba, 2014) for 200 epochs.6 Networks were trained on 90% of 102,328 Twitter users, with 10% of users used as a tuning set to estimate heldout reconstruction error for model selection. We report development and test results for the best performing model on the downstream task development set. Learning rate was set to 10^-4 with L1 and L2 regularization constants of 0.01 and 0.001 for all weights.7

Table 2: Dev/test performance at Twitter friend and hashtag recommendation tasks

                         FRIEND                      HASHTAG
ALGORITHM           P@1000       R@1000        P@1000       R@1000
PCA[TEXT+NET]       0.445/0.439  0.149/0.147   0.011/0.008  0.312/0.290
GCCA[TEXT]          0.244/0.249  0.080/0.081   0.012/0.009  0.351/0.326
GCCA[TEXT+NET]      0.271/0.276  0.088/0.089   0.012/0.010  0.359/0.334
DGCCA[TEXT+NET]     0.297/0.268  0.099/0.090   0.013/0.010  0.385/0.373
WGCCA[TEXT]         0.269/0.279  0.089/0.091   0.012/0.009  0.357/0.325
WGCCA[TEXT+NET]     0.376/0.364  0.123/0.120   0.013/0.009  0.360/0.346

Table 2 displays the performance of DGCCA compared to PCA[text+net] (PCA applied to the concatenation of view feature vectors), linear GCCA applied to the four text views, [text], and all

5 We chose to restrict ourselves to a single hidden layer with non-linear activation and identical architectures for each view, so as to avoid a fishing expedition. If DGCCA is appropriate for learning Twitter user representations, then a good architecture should require little exploration.

views, [text+net], along with a weighted GCCA variant (WGCCA). We learned PCA, GCCA, and WGCCA representations of width r ∈ {10, 20, 50, 100, 200, 300, 400, 500, 750, 1000}, and report the best performing representations on the development set.

There are several points to note: The first is that DGCCA outperforms linear methods at hashtag recommendation by a wide margin in terms of recall. This is exciting because this task was shown to benefit from incorporating more than just two views from Twitter users. These results suggest that a nonlinear transformation of the input views can yield additional gains in performance. In addition, WGCCA models sweep over every possible weighting of views with weights in {0, 0.25, 1.0}. WGCCA has a distinct advantage in that the model is allowed to discriminatively weight views to maximize downstream performance. The fact that DGCCA is able to outperform WGCCA at hashtag recommendation is encouraging, since WGCCA has much more freedom to discard uninformative views, whereas the DGCCA objective forces networks to minimize reconstruction error equally across all views. As noted in Benton et al. (2016), only the friend network view was useful for learning representations for friend recommendation (corroborated by the performance of PCA applied to the friend network view), so it is unsurprising that DGCCA when applied to all views cannot
compete with WGCCA representations learned on the single useful friend network view"}, {"section_index": "11", "section_name": "S OTHER MULTIVIEW LEARNING WORK", "section_text": "There has been strong work outside of CCA-related methods to combine nonlinear representation and learning from multiple views.Kumar et al.[(2011) elegantly outlines two main approaches these methods take to learn a joint representation from many views: either by 1) explicitly maximizing. pairwise similarity/correlation between views or by 2) alternately optimizing a shared, \"consen- sus\"' representation and view-specific transformations to maximize similarity. Models such as the siamese network proposed byMasci et al.(2014), fall in the former camp, minimizing the squared error between embeddings learned from each view, leading to a quadratic increase in the terms of. the loss function size as the number of views increase.Rajendran et al.(2015) extend Correlational. Neural Networks (Chandar et al.]2015) to many views and avoid this quadratic explosion in the. loss function by only computing correlation between each view embedding and the embedding of a \"pivot' view. Although this model may be appropriate for tasks such as multilingual image caption. ing, there are many datasets where there is no clear method of choosing a pivot view. The DGCCA. objective does not suffer from this quadratic increase w.r.t. the number of views, nor does it require. a privileged pivot view, since the shared representation is learned from the per-view representations.\nApproaches that estimate a \"consensus\"' representation, such as the multiview spectral clustering ap. proach in|Kumar et al.[(2011), typically do so by an alternating optimization scheme which depends on a strong initialization to avoid bad local optima. The GCCA objective our work builds on is par. ticularly attractive, since it admits a globally optimal solution for both the view-specific projections U1 ...UJ, and the shared representation G by singular value decomposition of a single matrix: a. sum of the per-view projection matrices. Local optima arise in the DGCCA objective only because. we are also learning nonlinear transformations of the input views. Nonlinear multiview methods. often avoid learning these nonlinear transformations by assuming that a kernel or graph Laplaciar. (e.g. in multiview clustering) is given (Kumar et al.|. 2011 Xiaowen 2014 Sharma et al.|2012).\nWe present DGCCA, a method for non-linear multiview representation learning from an arbitrary number of views. We show that DGCCA clearly outperforms prior work when using labels as a third view (Andrew et al.2013] Arora & Livescu2014] Wang et al.2015c), and can successfully exploit multiple views to learn user representations useful for downstream tasks such as hashtag recommendation for Twitter users. To date, CCA-style multiview learning techniques were either restricted to learning representations from no more than two views, or strictly linear transformations of the input views. This work overcomes these limitations.\n8The performance of WGCCA suffers compared to PCA because whitening the friend network data ignores. the fact that the spectrum of the decays quickly with a long tail -- the first few principal components made up a. large portion of the variance in the data, but it was also important to compare users based on other components."}]
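The GCCA subproblem and its gradient, derived in Appendix A above, are compact enough to sketch directly. The following NumPy illustration is a sketch under assumptions (mean-centered view matrices, a small ridge term eps added to each C_jj for invertibility), not the released dgcca-py3 implementation.

import numpy as np

def gcca_solve_and_grads(Ys, r, eps=1e-8):
    """Solve GCCA for views Ys and return the per-view gradients dL/dY_j.

    Ys: list of J view matrices, each of shape (o_j, N), assumed
        mean-centered (rows are output neurons, columns are examples).
    r:  dimensionality of the shared representation G.
    """
    N = Ys[0].shape[1]
    M = np.zeros((N, N))
    Cinvs = []
    for Y in Ys:
        # C_jj = Y_j Y_j^T, with a ridge term for numerical stability.
        Cinv = np.linalg.inv(Y @ Y.T + eps * np.eye(Y.shape[0]))
        Cinvs.append(Cinv)
        M += Y.T @ Cinv @ Y  # projection matrix P_j, accumulated into M
    # Rows of G are the top-r orthonormal eigenvectors of M.
    _, V = np.linalg.eigh(M)   # eigenvalues in ascending order
    G = V[:, -r:].T            # shape (r, N)
    grads = []
    for Y, Cinv in zip(Ys, Cinvs):
        U = Cinv @ Y @ G.T     # U_j = C_jj^{-1} Y_j G^T, shape (o_j, r)
        grads.append(2 * U @ G - 2 * U @ (U.T @ Y))  # dL/dY_j
    return G, grads

Backpropagating grads[j] through the jth network then yields the weight updates of Algorithm 1.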
rkYmiD9lg
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http: //tensorf1ow. org/ Software available from tensorflow.org.\nAlexander Novikoy1,2\nnovikov@bayesqroup.ru\ni.oseledets@skoltech.ru\n1National Research University Higher School of Economics, Moscow, Russia 2Institute of Numerical Mathematics, Moscow, Russia 3Moscow Institute of Physics and Technology, Moscow, Russia 4Skolkovo Institute of Science and Technology, Moscow, Russia\nI. Bayer. Fastfm: a library for factorization machines. arXiv preprint arXiv:1505.00641, 2015\nM. Blondel, A. Fujino, N. Ueda, and M. Ishihata. Higher-order factorization machines. 2016a\nModeling interactions between features improves the performance of machine learning solutions in many domains (e.g. recommender systems or sentiment analysis). In this paper, we introduce Exponential Machines (ExM), a predictor that models all interactions of every order. The key idea is to represent an exponentially large tensor of parameters in a factorized format called Tensor Train (TT). The Tensor Train format regularizes the model and lets you control the number of underlying parameters. To train the model, we develop a stochastic Riemannian optimization procedure, which allows us to fit tensors with 2160 entries. We show that the model achieves state-of-the-art performance on synthetic data with high order interactions and that it works on par with high-order factorization machines on a recommender system dataset MovieLens 100K."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "If the dictionary has d words, modeling pairwise interactions requires O(d2) parameters and will probably overfit to the data. Taking into account all interactions (all pairs, triplets, etc. of words requires impractical 2d parameters.\nM. Lichman. UCI machine learning repository, 2013\n[n this paper, we show a scalable way to account for all interactions. Our contributions are\nF. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten-. hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12:2825-2830, 2011.\nmikhail.trofimov@phystech.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Machine learning problems with categorical data require modeling interactions between the features to solve them. As an example, consider a sentiment analysis problem - detecting whether a review is positive or negative - and the following dataset: 'I liked it', 'I did not like it', 'I'm not sure'. Judging by the presence of the word 'like' or the word 'not' alone, it is hard to understand the tone of the review. But the presence of the pair of words 'not' and 'like' strongly indicates a negative opinion.\nR. 
Livni, S. Shalev-Shwartz, and O. Shamir. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems 27 (N1PS), 2014 C. Lubich, I. V. Oseledets, and B. Vandereycken. Time integration of tensor trains. SIAM Journal on Numerical Analysis, pp. 917-941, 2015. G. Meyer, S. Bonnabel, and R. Sepulchre. Regression on fixed-rank positive semidefinite matrices: a Riemannian approach. The Journal of Machine Learning Research, pp. 593-625, 2011. A. Novikov, D. Podoprikhin, A. Osokin, and D. Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems 28 (NIPS). 2015.\nWe propose a predictor that models all 2d interactions of d-dimensional data by representing. the exponentially large tensor of parameters in a compact multilinear format - Tenso Train (TT-format) (Sec.3). Factorizing the parameters into the TT-format leads to a bette. generalization, a linear with respect to d number of underlying parameters and inference. time (Sec.5). The TT-format lets you control the number of underlying parameters througl. the TT-rank - a generalization of the matrix rank to tensors.. We develop a stochastic Riemannian optimization learning algorithm (Sec. 6.1j. In ou. experiments, it outperformed the stochastic gradient descent baseline (Sec.8.2) that is ofter. used for models parametrized by a tensor decomposition (see related works, Sec.[9). We show that the linear model (e.g. logistic regression) is a special case of our model witl. the TT-rank equal 2 (Sec.8.3) We extend the model to handle interactions between functions of the features, not jus. between the features themselves (Sec.7)\nI. V. Oseledets. Tensor-Train decomposition. SIAM J. Scientific Computing, 33(5):2295-2317, 2011"}, {"section_index": "3", "section_name": "2 LINEAR MODEL", "section_text": "In this section, we describe a generalization of a class of machine learning algorithms - the linear. feature vector of f-th object, and y(f) is the corresponding target variable. Also fix a loss function. e(y, y) : R2 -> R, which takes as input the predicted value y and the ground truth value y. We call. a model linear, if the prediction of the model depends on the features x only via the dot product between the features x and the d-dimensional vector of parameters w:.\nYlinear(x) =(x, w) + b\nM. Tan, I. W. Tsang, L. Wang, B. Vandereycken, and S. J. Pan. Riemannian pursuit for big matrix recovery. 2014.\nOne of the approaches to learn the parameters w and b of the model is to minimize the following los.\nN ) f=1\nwhere is the regularization parameter. For the linear model we can choose any regularization term instead of L2, but later the choice of the regularization term will become important (see Sec.[6.1)\nSeveral machine learning algorithms can be viewed as a special case of the linear model with an appropriate choice of the loss function l(y, y): least squares regression (squared loss), Support Vector Machine (hinge loss), and logistic regression (logistic loss)"}, {"section_index": "4", "section_name": "PROOF OF THEOREM 1", "section_text": "Theorem|1|states that the inference complexity of the proposed algorithm is O(r2 d), where r is th TT-rank of the weight tensor W. In this section, we propose an algorithm that achieve the statec. complexity and thus prove the theorem..\nNote that all permutations of features in a term (e.g. x1x2 and x2x1) correspond to a single term an have exactly one associated weight (e.g. Wi1o)..\nProof. 
Let us rewrite the definition of the model response (4) assuming that the weight tensor W is represented in the TT-format (6\nIn the general case, we enumerate the subsets of features with a binary vector (i1, ..., id), whe k = 1 if the k-th feature belongs to the subset. The model equation looks as follows.\nd d Wi...id ik Gi[i1]...Ga[id] y(x) = x x k k i1,...,id k=1 i1,...,id k=1\nHere we assume that 0o = 1. The model is parametrized by a d-dimensional tensor W, which consis of 2d elements.\nNote that there is no need in a separate bias term, since it is already included in the model as the weight tensor element Wo...o (see the model equation example (3)..\n1 Ap = xqGk[ik] = Gk[0] +xkGk[1] ik=0\nThe key idea of our method is to compactly represent the exponentially large tensor of parameters W in the Tensor Train format (Oseledets2011)\nA d-dimensional tensor A is said to be represented in the Tensor Train (TT) format (Oseledets|2011 if each of its elements can be computed as the following product of d - 2 matrices and 2 vectors\nH. Zhang, S. J. Reddi, and S. Sra. Fast stochastic optimization on riemannian manifolds. arXiv preprint arXiv:1605.07147, 2016.\nBefore introducing our model equation in the general case, consider a 3-dimensional example. The equation includes one term per each subset of features (each interaction).\ny(x) = Wo00 + W100 x1 + W010 x2 + W001x3 + W110 x1x2 + Wi01 x1x3 + W011 x2x3 + W111 x1x2x3.\ny(x) = Wo00 + W100 x1 + Wo10 x2 + Wo01x3 + W110 x1x2 + W101 x1x3 + W011 x2x3 + W111 x1x2X3.\n1 1 d y(x)=... W x i1=0 id=0 k=1\ny(x)=xG1[ii]...xGa[id]= G1|i1 id Gd[id] i1.....id. i1=0 IXr X\nThe model equation (4) is linear with respect to the weight tensor W. To emphasize this fact and simplify the notation we rewrite the model equation (4) as a tensor dot product y(x) = (X, W) where the tensor A is defined as follows.\nd 11 x k=1\nThe final value y(x) can be computed from the matrices A via d-- 1 matrix-by-vector multiplications and 1 vector-by-vector multiplication, which yields O(r2 d) complexity..\nNote that the proof is constructive and corresponds to an implementation of the inference algorithm\nAii...id = G1[i1]... Ga[id]\nCores GD Cores GD 101 Cores SgD 100 Cores SgD 100 10 Cores SGD 500 Cores SGD 500 100 0 Riemann GD 0 Riemann GD 0 Riemann 100 0 Riemann 100 0 Riemann 500 10 0-0 Riemann 500 tessssss - Riemann GD rand init 1 Riemann GD rand init 1 10-2 fest Riemann GD rand init 2. 10-1 10-3 10-4 101 100 101 102 101 100 101 102 103 time (s) time (s) oRin (bHI\nFigure 1: An illustration of the TT-format for a 3 4 4 3 tensor A with the TT-rank equal 3\nwhere for any k = 2,...,d - 1 and for any value of ik, Gg[ik] is an r r matrix, Gi[i1] is a. 1 r vector and Ga[id] is an r 1 vector (see Fig.[1). We refer to the collection of matrices Gk. corresponding to the same dimension k (technically, a 3-dimensional array) as the k-th TT-core, where k = 1,..., d. The size r of the slices Gk[ik] controls the trade-off between the representational. power of the TT-format and computational efficiency of working with the tensor. We call r the. TT-rank of the tensor A.\nFigure 5: A comparison between Riemannian optimization and SGD applied to the underlying parameters of the TT-format (the baseline) for the rank-4 Exponential Machines. Numbers in the legend stand for the batch size. 
The methods marked with 'rand init' in the legend (square and triangle markers) were initialized from a random TT-tensor from two different distributions, all other methods were initialized from the solution of ordinary linear logistic regression. See details in Sec.[8.2 and|8.3\nAn attractive property of the TT-format is the ability to perform algebraic operations on tensors without materializing them, i.e. by working with the TT-cores instead of the tensors themselves. The TT-format supports computing the norm of a tensor and the dot product between tensors; element-wise sum and element-wise product of two tensors (the result is a tensor in the TT-format with increased TT-rank), and some other operations (Oseledets2011)."}, {"section_index": "5", "section_name": "5 INFERENCE", "section_text": "Theorem2|states that it is possible to initialize the weight tensor W of the proposed model from the weights w of the linear model.\nIn this section, we return to the model proposed in Sec.3 and show how to compute the model. equation (4) in linear time. To avoid the exponential complexity, we represent the weight tensor W. and the data tensor ' (5) in the TT-format. The TT-ranks of these tensors determine the efficiency of the scheme. During the learning, we initialize and optimize the tensor W in the TT-format and. explicitly control its TT-rank. The TT-rank of the tensor X always equals 1. Indeed, the following. TT-cores give the exact representation of the tensor\nTheorem. For any d-dimensional vector w and a bias term b there exist a tensor W of TT-rank ' such that for any d-dimensional vector x and the corresponding object-tensor X the dot product (x, w) and(X, W) coincide.\nTo proof the theorem, in the rest of this section we show that the tensor W from Theorem |2 representable in the TT-format with the following TT-cores\nGk[ik] = xk C k = 1.....d\nG1[0]= 1 G1[1]=[ 0 W1 b Wd Ga[0] = Ga[1] = 1 0\nG1[0] = [ 1 G1[1] = 0 W1 b Wd Ga[0] = Ga[1] = 1 0 V2<k<d-1 1 0 0 Wk Gk[0] = Gk[1] = 0 1 0 0\nNow that we have a TT-representations of tensors W and X, we can compute the model response y(x) = (, W) in the linear time with respect to the number of features d..\nTheorem 1. The model response y(x) can be computed in O(r-d), where r is the TT-rank of the weight tensor W.\nWe refer the reader to Appendix[A|where we propose an inference algorithm with O(r2 d) complexity and thus prove Theorem1\nand thus the TT-rank of the tensor W equals 2\n0 1, if q=1i 1 lg = 0 0 0 1 if q=1iq2, G1[i1]... Gp[ip] =- 0 Wk if q=1iq=1, and ik = 1..\nProof. We prove the lemma by induction. Indeed, for p = 1 the statement of the lemma becomes\nif i1 = 0, G1[i1] W1 ifi1= 1,\nwhere the loss is defined as follows\nN L(W)=e(x(f),w),yf)+W l|WI=.. W2 ..id i1=0 id=0\nWe consider two approaches to solving problem (7). 
In a baseline approach, we optimize the objective L(W) with stochastic gradient descent applied to the underlying parameters of the TT-format of the tensor W.\n[f in = 1, then there are 3 options.\nG1 G2 G3 G4 A2423 = X i1 = 2 i2 = 4 i3 = 2 i4\nG1 G2 G3 G4 A2423 = X X X i1 = 2 i2 = 4 i3 = 2 i4 = 3\n101 Cores GD Cores GD Cores SgD 100 Cores SgD 100 10 Cores SGD 500 Cores SGD 500 100 (lot iol) ssos o Riemann GD Riemann GD 0 Riemann 100 o Riemann 100 10 0 0 Riemann 500 10 0 Riemann 500 tesr tsss - Riemann GD rand init 1 Riemann GD rand init 1 test 10-2 V Riemann GD rand init 2 10-3 10-1 10-4 10-1 100 101 102 10-1 100 101 102 103 time (s) time (s) (a) Binarized Car dataset (b) HIV dataset\nThe TT-rank of the weight tensor W is a hyper-parameter of our method and it controls the efficiency vs. flexibility trade-off. A small TT-rank regularizes the model and yields fast learning and inference but restricts the possible values of the tensor W. A large TT-rank allows any value of the tensor W and effectively leaves us with the full polynomial model without any advantages of the TT-format.\nLearning the parameters of the proposed model corresponds to minimizing the loss under the TT-rank constraint:\nA simple alternative to the baseline is to perform gradient descent with respect to the tensor W, that is subtract the gradient from the current estimate of W on each iteration. The TT-format indeed allows to subtract tensors, but this operation increases the TT-rank on each iteration, making this approach impractical.\nWe now describe how to implement each of the steps outlined above\nTT-rank(PTwM,(Z)) < 2TT-rank(W) = 2r\nb, if 0, if 1...ig= G1[i1]...Gd-1[id-1]Ga[id] = Wk, if and ik = 1.\nN aL al W aw du f=1\nThe elements of the obtained tensor W that correspond to interactions of order > 2 equal to zero; th weight that corresponds to xk equals to wk; and the bias term Wo...o = b..\nThe TT-rank of the obtained tensor e qual 2 since its TT-cores are of size 2 2\nSince the resulting expression is a weighted sum of projections of individual data tensors (f). .We can project them in parallel. Since the TT-rank of each of them equals 1 (see Sec.5, all N projections cost O(dr2(r + N)) in total. The TT-rank of the projected gradient is less or equal to 2r regardless of the dataset size N.\nNote that here we used the particular choice of the regularization term. For terms other than L2 (e.g L1), the gradient may have arbitrary large TT-rank..\nSince we aim for big datasets, we use a stochastic version of the Riemannian gradient descent: or each iteration we sample a random mini-batch of objects from the dataset, compute the stochastic gradient for this mini-batch, make a step along the projection of the stochastic gradient, and retract back to the manifold (Alg.1).\nAn iteration of the stochastic Riemannian gradient descent consists of inference O(dr2 M), projection O(dr2(r + M)), and retraction O(dr3), which yields O(dr2 (r + M)) total computational complexity\n0.70 Riemann SGD 2000, LR 0.05 0.90 Riemann SGD 2000, LR 0.05 0.68 Riemann SGD 1000, LR 0.05 0.85 Riemann SGD 1000, LR 0.05 0.66 Riemann SGD 500, LR 0.05 0.80 Riemann SGD 500, LR 0.05 Riemann SGD 1000, LR 0.02 0.75 Riemann SGD 1000, LR 0.02 Riemann SGD 1000, LR 0.1 0.70 Riemann SGD 1000, LR 0.1 0.62 SGD 128, LR 0.005 SGD 128, LR 0.005 Riemann SGD 1000, LR 0.05, rand init 2. 0.65 Riemann SGD 1000, LR 0.05, rand init 2 0.60 0.58 0.55 0.56 0.50 0.45 101 102 103 101 102 103 time (s) time (s) (a) Training set. 
(b) Test set\nTo improve upon the baseline and avoid the TT-rank growth, we exploit the geometry of the set of tensors that satisfy the TT-rank constraint (7) to build a Riemannian optimization procedure (Sec.6.1) We experimentally show the advantage of this approach over the baseline in Sec.8.2\nFigure 6: A comparison between Riemannian optimization and SGD applied to the underlying parameters of the TT-format (the baseline) for the rank-3 Exponential Machines on the synthetic dataset with high order interactions. The first number in each legend enrty stands for the batch size The method marked with 'rand init' in the legend (triangle markers) was initialized from a random linear model, all other methods were initialized from the solution of ordinary linear logistic regression See details in Sec.8.2and 8.3\nforms a Riemannian manifold (Holtz et al.[2012). This observation allows us to use Riemannian optimization to solve problem (7). Riemannian gradient descent consists of the following steps which are repeated until convergence (see Fig.2[for an illustration):\n2. Follow along 9 with some step a (this operation increases the TT-rank).. 3. Retract the new point W - aG back to the manifold Mr. that is decrease its TT-rank to r\nubich et al. (2015) proposed an algorithm to project a TT-tensor Z on the tangent space of M, at a point W which consists of two steps: preprocess W in O(dr3) and project Z in O(dr2 TT-rank(Z)2).Lubich et al.(2015) also showed that the TT-rank of the projection is bounded by a constant that is independent of the TT-rank of the tensor Z:\nG1[i1]...Gp[ip] = 0 Wk 1Gp[1] = 0 0 1\naL dl T aw dy f=1\n1. UCI (Lichman, 2013) Car dataset is a classification problem with 1728 objects and 21 binary features (after one-hot encoding). We randomly splitted the data into 1382 training and 346 test objects. For simplicity, we binarized the labels: we picked the first class (unacc') and made a one-versus-rest binary classification problem from the original Car dataset. 2. UCI (Lichman,2013) HIV dataset is a binary classification problem with 1625 objects and 160 features, which we randomly splitted into 1300 training and 325 test objects.. 3. Synthetic data. We generated 100 000 train and 100 000 test objects with 30 features.. Each entry of the data matrix X was independently sampled from {-1, +1} with equal probabilities 0.5. We also uniformly sampled 20 subsets of features (interactions) of order 6: ..., j20 ~ U{1, ..., 30}. We set the ground truth target variable to a. of the interactions from the uniform distribution: E1, ... , E2o ~ U(--1, 1).."}, {"section_index": "6", "section_name": "6.2 INITIALIZATION", "section_text": "We found that a random initialization for the TT-tensor W sometimes freezes the convergence of optimization method (Sec.8.3j. We propose to initialize the optimization from the solution of the. corresponding linear model (1)\nThe following theorem shows how to initialize the weight tensor W from a linear model\nTheorem 2. For any d-dimensional vector w and a bias term b there exist a tensor W of TT-rank 2 such that for any d-dimensional vector x and the corresponding object-tensor X the dot products x, w) and(X, W) coincide.\nIn the general case, to model interactions between ng functions g1, . . . , gng of the features we redefine the obiect-tensor as follows.\nd k=1\nThe weight tensor W and the object-tensor X' are now consist of (ng + 1)d elements. 
After this change to the object-tensor X, learning and inference algorithms will stay unchanged compared to the original model (4).\nCategorical features. Our basic model handles categorical features xk E {1, . . ., K} by converting them into one-hot vectors xk,1, ... , xk,K. The downside of this approach is that it wastes the model capacity on modeling non-existing interactions between the one-hot vector elements xk,1, . .. , xk,K which correspond to the same categorical feature. Instead, we propose to use one TT-core per categorical feature and use the model extension technique with the following function\nif xk = ik Or ik = 0 Otherwise.\n4. MovieLens 1o0K. MovieLens 100K is a recommender system dataset with 943 users and 1682 movies (Harper & Konstan2015). We followed Blondel et al.(2016a) in preparing. the features and in turning the problem into binary classification. For users, we converted. age (rounded to decades), living area (the first digit of the zipcode), gender and occupation. into a binary indicator vector using one-hot encoding. For movies, we used the release year. (rounded to decades) and genres, also encoded. This process yielded 49+29 = 78 additional. one-hot features for each user-movie pair (943 + 1682 + 78 features in total). Original. ratings were binarized using 5 as a threshold. This results in 21200 positive samples, half of. which were used for traininig (with equal amount of sampled negative examples) and the. rest were used for testing.\n4. MovieLens 1ooK. MovieLens 100K is a recommender system dataset with 943 users and 1682 movies (Harper & Konstan2015). We followed Blondel et al.(2016a) in preparing the features and in turning the problem into binary classification. For users, we converted age (rounded to decades), living area (the first digit of the zipcode), gender and occupation into a binary indicator vector using one-hot encoding. For movies, we used the release year (rounded to decades) and genres, also encoded. This process yielded 49+29 = 78 additional one-hot features for each user-movie pair (943 + 1682 + 78 features in total). Original ratings were binarized using 5 as a threshold. This results in 21200 positive samples, half of which were used for traininig (with equal amount of sampled negative examples) and the rest were used for testing.\nFigure 2: An illustration of one step of the Riemannian gradient descent The step-size a is assumed to be 1 for clarity of the figure\nIn this section, we extend the proposed model to handle polynomials of any functions of the features As an example, consider the logarithms of the features in the 2-dimensional case:\nk=1 if ik = 0, if ik = 1, if ik = Ng\nThis allows us to cut the number of parameters per categorical feature from 2Kr2 to (K + 1)r without losing any representational power."}, {"section_index": "7", "section_name": "8 EXPERIMENTS", "section_text": "We release a Python implementation of the proposed algorithm and the code to reproduce the experiments' For the operations related to the TT-format, we used the TT-Toolbox?"}, {"section_index": "8", "section_name": "8.1 DATASETS", "section_text": "The datasets used in the experiments (see details in Appendix|C"}, {"section_index": "9", "section_name": "8.2 RIEMANNIAN OPTIMIZATION", "section_text": "In this experiment, we compared two approaches to training the model: Riemannian optimiza tion (Sec.6.1) vs. the baseline (Sec.[6). 
In this and later experiments we tuned the learning rate of both the Riemannian and SGD optimizers with respect to the training loss after 100 iterations by grid search on a logarithmic grid.

On the Car and HIV datasets we turned off the regularization (\lambda = 0) and used rank r = 4. We report that on the Car dataset Riemannian optimization (learning rate \alpha = 40) converges faster and achieves a better final point than the baseline (learning rate \alpha = 0.03) both in terms of the training and test losses (Fig. 3a, 5a). On the HIV dataset Riemannian optimization (learning rate \alpha = 800) converges to the value 10^{-4} around 20 times faster than the baseline (learning rate \alpha = 0.001, see Fig. 3b), but the model overfits to the data (Fig. 5b).

The results on the synthetic dataset with high-order interactions confirm the superiority of the Riemannian approach over SGD: we failed to train the model at all with SGD (Fig. 6).

On the MovieLens 100K dataset, we have only used SGD-type algorithms, because using the one-hot feature encoding is much slower than using the categorical version (see Sec. 7), and we have yet to implement the support for categorical features for the Riemannian optimizer. On the bright side, prototyping the categorical version of ExM in TensorFlow allowed us to use a GPU accelerator.

"}, {"section_index": "10", "section_name": "8.3 INITIALIZATION", "section_text": "In this experiment, we compared random initialization with the initialization from the solution of the corresponding linear problem (Sec. 6.2). We explored two ways to randomly initialize a TT-tensor: 1) filling its TT-cores with independent Gaussian noise; 2) initializing W to represent a linear model with random coefficients (sampled from a standard Gaussian). We report that on the Car dataset type-1 random initialization slowed the convergence compared to initialization from the linear model solution (Fig. 3a), while on the HIV dataset the convergence was completely frozen (Fig. 3b).

Two possible reasons for this effect are: a) the vanishing and exploding gradients problem (Bengio et al., 1994) that arises when dealing with a product of a large number of factors (160 in the case of the HIV dataset); b) initializing the model in such a way that high-order terms dominate, which may force the gradient-based optimization to focus on high-order terms, while it may be more stable to start with low-order terms instead. Type-2 initialization (a random linear model) indeed worked on par with the best linear initialization on the Car, HIV, and synthetic datasets (Fig. 3b, 6).

¹ https://github.com/Bihaqo/exp-machines
² https://github.com/oseledets/ttpy

1. UCI (Lichman, 2013) Car dataset is a classification problem with 1728 objects and 21 binary features (after one-hot encoding). We randomly split the data into 1382 training and 346 test objects and binarized the labels for simplicity.
2. UCI HIV dataset is a binary classification problem with 1625 objects and 160 features, which we randomly split into 1300 training and 325 test objects.
3. Synthetic data. We generated 100 000 train and 100 000 test objects with 30 features and set the ground truth target variable to a 6-degree polynomial of the features.
4. MovieLens 100K is a recommender system dataset with 943 users and 1682 movies (Harper & Konstan, 2015). We followed Blondel et al. (2016a) in preparing the 2703 one-hot features and in turning the problem into binary classification.
(a) Binarized Car dataset (b) HIV dataset

Figure 3: A comparison between Riemannian optimization and SGD applied to the underlying parameters of the TT-format (the baseline) for the rank-4 Exponential Machines. Numbers in the legend stand for the batch size. The methods marked with 'rand init' in the legend (square and triangle markers) were initialized from a random TT-tensor from two different distributions (see Sec. 8.3); all other methods were initialized from the solution of ordinary linear logistic regression. Type-2 random initialization is omitted from the Car dataset for clarity of the figure.

Method           Test AUC   Training time (s)   Inference time (s)
Log. reg.        0.50       0.4                 0.0
RF               0.55       21.4                6.5
Neural Network   0.50       47.2                0.1
SVM RBF          0.50       2262.6              5380
SVM poly. 2      0.50       1152.6              4260
SVM poly. 6      0.56       4090.9              3774
2-nd order FM    0.50       638.2               0.5
6-th order FM    0.57       549                 3
6-th order FM    0.86       6039                3
6-th order FM    0.96       38918               3
ExM rank 3       0.79       65                  0.2
ExM rank 8       0.85       1831                1.3
ExM rank 16      0.96       48879               3.8

Table 1: A comparison between models on synthetic data with high-order interactions (Sec. 8.4). We report the inference time on 100 000 test objects in the last column.

"}, {"section_index": "11", "section_name": "8.4 COMPARISON TO OTHER APPROACHES", "section_text": "On the synthetic dataset with high-order interactions we compared Exponential Machines (the proposed method) with the scikit-learn implementation (Pedregosa et al., 2011) of logistic regression, random forest, and kernel SVM; the FastFM implementation (Bayer, 2015) of 2-nd order Factorization Machines; our implementation of high-order Factorization Machines³; and a feed-forward neural network implemented in TensorFlow (Abadi et al., 2015). We used 6-th order FM with the Adam optimizer (Kingma & Ba, 2014), for which we chose the best rank (20) and learning rate (0.003) based on the training loss after the first 50 iterations. We tried several feed-forward neural networks with ReLU activations and up to 4 fully-connected layers and 128 hidden units. We compared the models based on the Area Under the Curve (AUC) metric since it is applicable to all methods and is robust to unbalanced labels (Tbl. 1).

On the MovieLens 100K dataset we used the categorical features representation described in Sec. 7. Our model obtained 0.784 test AUC with TT-rank 10 in 273 seconds on a Tesla K40 GPU
(the inference time is 0.3 seconds per 78800 test objects); our implementation of 3-rd order FM obtained 0.782; logistic regression obtained 0.782; and Blondel et al. (2016a) reported 0.786 with 3-rd order FM on the same data.

Figure 4: The influence of the TT-rank on the test AUC for the MovieLens 100K dataset.

Kernel SVM is a flexible non-linear predictor and, in particular, it can model interactions when used with the polynomial kernel (Boser et al., 1992). As a downside, it scales at least quadratically with the dataset size (Bordes et al., 2005) and overfits on highly sparse data.

With this in mind, Rendle (2010) developed Factorization Machine (FM), a general predictor that models pairwise interactions. To overcome the problems of polynomial SVM, FM restricts the rank of the weight matrix, which leads to a linear number of parameters and generalizes better on sparse data. FM running time is linear with respect to the number of nonzero elements in the data, which allows scaling to billions of training entries on sparse problems.

A number of works used full-batch or stochastic Riemannian optimization for data processing tasks (Meyer et al., 2011; Tan et al., 2014; Xu & Ke, 2016; Zhang et al., 2016). The last work (Zhang et al., 2016) is especially interesting in the context of our method, since it improves the convergence rate of stochastic Riemannian gradient descent and is directly applicable to our learning procedure.

In a concurrent work, Stoudenmire & Schwab (2016) proposed a model that is similar to ours but relies on the trigonometric basis (cos(x), sin(x)) in contrast to the polynomials (1, x) used in Exponential Machines (see Sec. 7 for an explanation of how to change the basis). They also proposed a different learning procedure inspired by the DMRG algorithm (Schollwock, 2011), which allows the ranks of the model to be chosen automatically but is hard to adapt to the stochastic regime. One possible way to combine the strengths of the DMRG and Riemannian approaches is to do a full DMRG sweep once in a few epochs of the stochastic Riemannian gradient descent to adjust the ranks.

Other relevant works include the model that approximates the decision function with a multidimensional Fourier series whose coefficients lie in the TT-format (Wahls et al., 2014); and models that are similar to FM but include squares and other powers of the features: Tensor Machines (Yang & Gittens, 2015) and Polynomial Networks (Livni et al., 2014). Tensor Machines also enjoy a theoretical generalization bound. In another relevant work, Blondel et al. (2016b) boosted the efficiency of FM and Polynomial Networks by casting their training as a low-rank tensor estimation problem, thus making it multi-convex and allowing for efficient use of Alternating Least Squares types of algorithms. Note that Exponential Machines are inherently multi-convex.

"}, {"section_index": "12", "section_name": "10 DISCUSSION", "section_text": "We presented a predictor that models all interactions of every order. To regularize the model and to make the learning and inference feasible, we represented the exponentially large tensor of parameters in the Tensor Train format.
To train the model, we used Riemannian optimization in the stochastic regime and report that it outperforms a popular baseline based on stochastic gradient descent. However, the Riemannian learning algorithm does not support sparse data, so for datasets with hundreds of thousands of features we are forced to fall back on the baseline learning method. We found that the training process is sensitive to initialization and proposed an initialization strategy based on the solution of the corresponding linear problem. The solutions developed in this paper for the stochastic Riemannian optimization may suit other machine learning models parametrized by tensors in the TT-format.

The TT-rank is one of the main hyperparameters of the proposed model. Two possible strategies can be used to choose it: grid search or DMRG-like algorithms (see Sec. 9). In our experiments we opted for the former and observed that the model is fairly robust to the choice of the TT-rank (see Fig. 4), but a too small TT-rank can hurt the accuracy (see Tbl. 1).

For high-order interactions FM uses the CP-format (Carroll & Chang, 1970; Harshman, 1970) to represent the tensor of parameters. The choice of the tensor factorization is the main difference between the high-order FM and Exponential Machines. The TT-format comes with two advantages over the CP-format: first, the TT-format allows for Riemannian optimization; second, the problem of finding the best TT-rank r approximation to a given tensor always has a solution and can be solved in polynomial time. We found Riemannian optimization superior to the SGD baseline (Sec. 6) that was used in several other models parametrized by a tensor factorization (Rendle, 2010; Lebedev et al., 2014; Novikov et al., 2015). Note that the CP-format also allows for Riemannian optimization, but only for 2-order tensors (and therefore 2-order FM)."}]
S1LVSrcge
[{"section_index": "0", "section_name": "VARIABLE COMPUTATION IN RECURRENT NEURAL NETWORKS", "section_text": "0.7 0.7 0.7 0.6 0.6 0.6 0.5 0.5 0.5 0.4 longstanding ban. Der Ansatz der. e i n e umweltfreundliche a\nYacine Jernite\nDepartment of Computer Science New York University. New York. NY 10012. USA\nFigure 5: Per-character computation by VCRNN. The model appears to make use of morphology separating sub-word units.\nFigure 4|shows that both perform similarly on the Czech dataset, achieving better performanc more efficiently than the standard RNN. On German, the guided settings remains slightly mor efficient than the fully learned one, but both are more efficient than the RNN and achieve the sam performance when using more dimensions. Both learn to use more dimensions at word boundarie as shown in Figure|3] The German model also appears to be learning interesting morphology (Luft ver-kehrs, eben-falls in Figure[3] An-satz, Um-welt-freund-lich in Figure[5), and grammar (focusing on case markers at the end of articles, Figure[5).\negrave,ajoulin, tmikolov}@fb.com\nRecurrent neural networks (RNNs) have been used extensively and with increasing success to model various types of sequential data. Much of this. progress has been achieved through devising recurrent units and architectures. with the flexibility to capture complex statistics in the data, such as long range. dependency or localized attention phenomena. However, while many sequential data (such as video, speech or language) can have highly variable information. flow, most recurrent models still consume input features at a constant rate and perform a constant number of computations per time step, which can be detrimental to both speed and model capacity. In this paper, we explore a. modification to existing recurrent units which allows them to learn to vary the. amount of computation they perform at each step, without prior knowledge of the sequence's time structure. We show experimentally that not only do our models require fewer operations, they also lead to better performance overall on. evaluation tasks.\nIn this work, we have presented two kinds of Variable Computation recurrent units: the VCRNN and VCGRU, which modify the Elman and Gated Recurrent Unit respectively to allow the model to achieve better performance with fewer operations, and can be shown to find time patterns oi interest in sequential data. We hope that these encouraging results will open up paths for furthe exploration of adaptive computation paradigms in neural networks in general, which could lead tc more computation-efficient models, better able to deal with varying information flow or multi-scal processes. We also see a few immediate possibilities for extensions of this specific model. Fo example, the same idea of adaptive computation can similarly be applied to yet other commonly used recurrent units, such as LSTMs, or to work within the different layers of a stacked architecture and we are working on adapting our implementation to those settings. 
We also hope to investigate the benefits of using stronger supervision signals to train the scheduler, such as the entropy of the prediction, to hopefully push our current results even further."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The class of Recurrent Neural Network models (RNNs) is particularly well suited to dealing with sequential data, and has been successfully applied to a diverse array of tasks, such as language modeling and speech recognition (Mikolov, 2012), machine translation (Mikolov, 2012; Cho et al., 2014a), or acoustic modeling (Robinson et al., 1993; Graves & Jaitly, 2014) among others. Two factors have been instrumental in allowing this paradigm to be so widely adopted and give rise to the aforementioned successes. On the one hand, recent advances in both hardware and software have had a significant role in bringing the training of recurrent models to tractable time periods. On the other hand, novel units and architectures have allowed recurrent networks to model certain features of sequential data better than Elman's simple RNN architecture (Elman, 1990). These include such developments as the LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014a) units, which can more easily learn to model long range interactions (Chung et al., 2014), or attention mechanisms that allow the model to focus on a specific part of its history when making a prediction (Bahdanau et al., 2014). In this work, we focus on another feature of recurrent networks: the ability to efficiently model processes happening at different and possibly varying time scales.

Most existing recurrent models take one of two approaches regarding the amount of computation they require. Either the computational load is constant over time, or it follows a fixed (or deterministic) schedule (Koutnik et al., 2014), (Mikolov et al., 2014). The latter approach has proven especially useful when dealing with sequences which reflect processes taking place at different levels.

Work done at Facebook AI Research

"}, {"section_index": "2", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.

Piotr Bojanowski, Armand Joulin, and Tomas Mikolov. Alternative structures for character-level RNNs. CoRR, abs/1511.06303, 2015.

In this work, we show how to modify two commonly used recurrent unit architectures, namely the Elman and Gated Recurrent Unit, to obtain their variable computation counterparts. This gives rise to two new architectures, the Variable Computation RNN and Variable Computation GRU (VCRNN and VCGRU), which take advantage of these phenomena by deciding at each time step how much computation is required based on the current hidden state and input. We show that the models learn time patterns of interest, can perform fewer operations, and may even take advantage of these time structures to produce better predictions than the constant computation versions.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks.
CoRR, abs/1609.01704, 2016.

We start by giving an overview of related work in Section 2, provide background on the class of Recurrent Neural Networks in Section 3, describe our model and learning procedure in Section 4, and present experimental results on music as well as bit and character level language modeling in Section 5. Finally, Section 6 concludes and lays out possible directions for future work.

Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990. doi: 10.1207/s15516709cog1402_1.

Shai Fine, Yoram Singer, and Naftali Tishby. The hierarchical hidden markov model: Analysis and applications. Machine Learning, 32(1):41-62, 1998.

"}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Jurgen Van Gael, Yee Whye Teh, and Zoubin Ghahramani. The infinite factorial hidden markov model. In Advances in Neural Information Processing Systems 21, Vancouver, British Columbia, Canada, December 8-11, 2008, pp. 1697-1704, 2008.

How to properly handle sequences which reflect processes happening at different time scales has been a widely explored question. Among the proposed approaches, a variety of notable systems based on Hidden Markov Models (HMMs) have been put forward in the last two decades. The Factorial HMM model of (Ghahramani & Jordan, 1997) (and its infinite extension in (Gael et al., 2008)) uses parallel interacting hidden states to model concurrent processes. While there is no explicit handling of different time scales, the model achieves good held-out likelihood on Bach chorales, which exhibit multi-scale behaviors. The hierarchical HMM model of (Fine et al., 1998) and (Murphy & Paskin, 2001) takes a more direct approach to representing multiple scales of processes. In these works, the higher level HMM can recursively call sub-HMMs to generate short sequences without changing its state, and the authors show a successful application to modeling cursive writing. Finally, the Switching State-Space Model of (Ghahramani & Hinton, 2000) combines HMMs and Linear Dynamical Systems: in this model, the HMM is used to switch between LDS parameters, and the experiments show that the HMM learns higher-level, slower dynamics than the LDS.

On the side of Recurrent Neural Networks, the idea that the models should have mechanisms that allow them to handle processes happening at different time scales is not a new one either. On the one hand, such early works as (Schmidhuber, 1991) and (Schmidhuber, 1992) already presented a two-level architecture, with an "automatizer" acting on every time step and a "chunker" which should only be called when the automatizer fails to predict the next item, and which the author hypothesizes learns to model slower scale processes. On the other hand, the model proposed in (Mozer, 1993) has slow-moving units as well as regular ones, where the slowness is defined by a parameter τ ∈ [0, 1] deciding how fast the representation changes by taking a convex combination of the previous and predicted hidden state.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016.

Jan Koutnik, Klaus Greff, Faustino J. Gomez, and Jurgen Schmidhuber. A clockwork RNN. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pp. 1863-1871, 2014.

Tomas Mikolov.
Statistical language models based on neural networks. Ph.D. thesis, Brno University of Technology, 2012.

Both these notions, along with different approaches to multi-scale sequence modeling, have been developed in more recent work. (Mikolov et al., 2014) expand upon the idea of having slow moving units in an RNN by proposing an extension of the Elman unit which forces parts of the transition matrix to be close to the identity. The idea of having recurrent layers called at different time steps has also recently regained popularity. The Clockwork RNN of (Koutnik et al., 2014), for example, has RNN layers called every 1, 2, 4, 8, etc... time steps. The conditional RNN of (Bojanowski et al., 2015) takes another approach by using known temporal structure in the data: in the character level language modeling application, the first layer is called for every character, while the second is only called once per word. It should also be noted that state-of-the-art results for language models have been obtained using multi-layer RNNs (Jozefowicz et al., 2016), where the higher layers can in theory model slower processes. However, introspection in these models is more challenging, and it is difficult to determine whether they are actually exhibiting significant temporal behaviors.

Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc'Aurelio Ranzato. Learning longer memory in recurrent neural networks. CoRR, abs/1412.7753, 2014.

Consider sequential data such as video feeds, audio signal, or language. In video data, there are time periods where the frames differ very slightly, and where the underlying model should probably do much less computation than when the scene completely changes. When modeling speech from an audio signal, it is also reasonable to expect that the model should be able to do little to no computation during silences. Finally, in the case of character level language modeling, having more computational power at word boundaries can certainly help: after reading the left context The prime..., the model should be able to put a higher likelihood on the sequence of characters that make up the word minister. However, we can take this idea one step further: after reading The prime min..., the next few characters are almost deterministic, and the model should require little computation to predict the sequence i-s-t-e-r.

Zoubin Ghahramani and Geoffrey E. Hinton. Variational learning for switching state-space models. Neural Computation, 12(4):831-864, 2000.

Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pp. 1764-1772, 2014.

Tony Robinson, Luis B. Almeida, Jean-Marc Boite, Herve Bourlard, Frank Fallside, Mike Hochberg, Dan J. Kershaw, Phil Kohn, Yochai Konig, Nelson Morgan, Joao Paulo Neto, Steve Renals, Marco Saerens, and Chuck Wooters. A neural network based, speaker independent, large vocabulary continuous speech recognition system: the WERNICKE project. In Third European Conference on Speech Communication and Technology, EUROSPEECH 1993, Berlin, Germany, September 22-25, 1993, 1993.

Finally, even more recent efforts have considered using dynamic time schedules. (Chung et al., 2016) presents a multi-layer LSTM, where each layer decides whether or not to activate the next one at every time step.
They show that the model is able to learn sensible time behaviors and achieve good perplexity on their chosen tasks. Another implementation of the general concept of adaptive time-dependent computation is presented in (Graves, 2016). In that work, the amount of computation performed at each time step is varied not by calling units in several layers, but rather by having a unique RNN perform more than one update of the hidden state on a single time step. There too, the model can be shown to learn an intuitive time schedule.

Jurgen Schmidhuber. Neural sequence chunkers. Technical Report, 1991.

Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic, Ruslan Salakhutdinov, and Yoshua Bengio. Architectural complexity measures of recurrent neural networks. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems, 2016.

Let us start by formally defining the class of Recurrent Neural Networks (RNNs). For tasks such as language modeling, we are interested in defining a probability distribution over sequences w = (w_1, ..., w_T). Using the chain rule, the negative log likelihood of a sequence can be written:

\mathcal{L}(w) = -\sum_{t=1}^{T} \log p(w_t \mid \mathcal{F}(w_1, ..., w_{t-1}))     (1)

where \mathcal{F} is a filtration, a function which summarizes all the relevant information from the past. RNNs are a class of models that can read sequences of arbitrary length to provide such a summary in the form of a hidden state h_t ≈ \mathcal{F}(w_1, ..., w_t), by applying the same operation (recurrent unit) at each time step. More specifically, the recurrent unit is defined by a recurrence function g which takes as input the previous hidden state h_{t-1} at each time step t, as well as a representation of the input x_t (where h_{t-1} and x_t are D-dimensional vectors), and (with the convention h_0 = 0) outputs the new hidden state:

h_t = g(h_{t-1}, x_t)     (2)

Elman Unit. The unit described in (Elman, 1990) is often considered to be the standard unit. It is parametrized by U and V, which are square, D-dimensional transition matrices, and uses a tanh non-linearity to obtain the new hidden state:

h_t = \tanh(U h_{t-1} + V x_t)     (3)

In the Elman unit, the bulk of the computation comes from the matrix multiplications, and the cost per time step is O(D^2). In the following section, we show a simple modification of the unit which allows it to reduce this cost significantly.

Gated Recurrent Unit. The Gated Recurrent Unit (GRU) was introduced in (Cho et al., 2014b). The main difference between the GRU and Elman unit consists in the model's ability to interpolate between a proposed new hidden state and the current one, which makes it easier to model longer range dependencies. More specifically, at each time step t, the model computes a reset gate r_t, an update gate z_t, a proposed new hidden state \tilde{h}_t and a final new hidden state h_t as follows:

r_t = \sigma(U_r h_{t-1} + V_r x_t),   z_t = \sigma(U_z h_{t-1} + V_z x_t)     (4)
\tilde{h}_t = \tanh(U(r_t \odot h_{t-1}) + V x_t)     (5)
h_t = z_t \odot \tilde{h}_t + (1 - z_t) \odot h_{t-1}     (6)

In this paper, we present an alternative view of adaptive computation, where a single Variable Computation Unit (VCU) decides dynamically how much of its hidden state needs to change, leading to both savings in the number of operations per time step and the possibility for the higher dimensions of the hidden state to keep longer term memory.

"}, {"section_index": "4", "section_name": "A APPENDIX", "section_text": "\forall i \in \{1, ..., D\},   (e_t)_i = \text{Thres}_\epsilon(\sigma(\lambda(m_t D - i)))

\bar{h}_{t-1} = e_t \odot h_{t-1}   and   \bar{x}_t = e_t \odot x_t

h_t = e_t \odot g(\bar{h}_{t-1}, \bar{x}_t) + (1 - e_t) \odot h_{t-1}

Figure 1: Two time steps of a VCU.
At each step t, the scheduler takes in the current hidden vector h_{t-1} and input vector x_t and decides on a number of dimensions to use d. The unit then uses the first d dimensions of h_{t-1} and x_t to compute the first d elements of the new hidden state h_t, and carries the remaining D - d dimensions over from h_{t-1}.

"}, {"section_index": "5", "section_name": "4 VARIABLE COMPUTATION RNN", "section_text": "As noted in the previous section, the bulk of the computation in the aforementioned settings comes from the linear layers; a natural option to reduce the number of operations would then be to only apply the linear transformations to a sub-set of the hidden dimensions. These could in theory correspond to any sub-set of indices in {1, ..., D}; however, we want a setting where the computational cost of the choice is much less than the cost of computing the new hidden state. Thus, we only consider the sets of first d dimensions of R^D, so that there is a single parameter d to compute.

Our Variable Computation Units (VCUs) implement this idea using two modules: a scheduler decides how many dimensions need to be updated at the current time step, and the VCU performs a partial update of its hidden state accordingly, as illustrated in Figure 1. Section 4.1 formally describes the scheduler and partial update operations, and Section 4.2 outlines the procedure to jointly learn both modules.

"}, {"section_index": "6", "section_name": "4.1 MODEL DESCRIPTION", "section_text": "Scheduler. The model first needs to decide how much computation is required at the current time step. To make that decision, the recurrent unit has access to the current hidden state and input; this way, the model can learn to ignore an uninformative input, or to decide on more computation when it is unexpected given the current hidden state. The scheduler is then defined as a function m : R^{2D} -> [0, 1] which decides what portion of the hidden state to change based on the current hidden and input vectors. In this work, we decide to implement it as a simple log-linear function with parameter vectors u and v, and bias b, and at each time step t, we have:

m_t = \sigma(u \cdot h_{t-1} + v \cdot x_t + b)

Partial update. Once the scheduler has decided on a computation budget m_t, the VCU needs to perform a partial update of the first \lceil m_t D \rceil dimensions of its hidden state. Recall the hidden state h_{t-1} is a D-dimensional vector. Given a smaller dimension d ∈ {1, ..., D}, a partial update of the hidden state would take the following form. Let g_d be the d-dimensional version of the model's recurrence function g as defined in Equation 2, which uses the upper left d by d square sub-matrices of the linear transformations (U_d, V_d, ...), and h^d_{t-1} and x^d_t denote the first d elements of h_{t-1} and x_t. We apply g_d to h^d_{t-1} and x^d_t, and carry dimensions d + 1 to D from the previous hidden state, so the new hidden state h_t is defined by:

h_{t,i} = g_d(h^d_{t-1}, x^d_t)_i  if  i \le d,   and   \forall i > d,  h_{t,i} = h_{t-1,i}

Soft mask. In practice, the transition function we just defined would require making a hard choice at each time step of the number of dimensions to be updated, which makes the model non-differentiable and can significantly complicate optimization. Instead, we approximate the hard choice by using a gate function to apply a soft mask. Given m_t ∈ [0, 1] and a sharpness parameter \lambda, we use the gating vector e_t ∈ R^D defined by:

\forall i \in \{1, ..., D\},   (e_t)_i = \text{Thres}_\epsilon(\sigma(\lambda(m_t D - i)))

where Thres_\epsilon maps all values greater than 1 - \epsilon to 1 and all values smaller than \epsilon to 0. That way, the model performs an update using the first (\lceil m_t D \rceil + \eta) dimensions of the hidden state, where \eta goes to 0 as \lambda increases, and leaves its last ((1 - m_t) D - \eta) dimensions unchanged. Thus, if g is the recurrence function defined in Equation 2, we have:

\bar{h}_{t-1} = e_t \odot h_{t-1},   \bar{x}_t = e_t \odot x_t,   h_t = e_t \odot g(\bar{h}_{t-1}, \bar{x}_t) + (1 - e_t) \odot h_{t-1}

We apply the method outlined in the previous paragraph to two commonly used architectures. Recall that, given a proportion of dimensions to use m_t ∈ [0, 1] and a sharpness parameter \lambda, the gating vector e_t ∈ R^D is defined as above.

First, we derive a variable computation version of the Elman RNN to get the Variable Computation Recurrent Neural Network (VCRNN) by transforming Equation 3 as follows:

h_t = e_t \odot \tanh(U \bar{h}_{t-1} + V \bar{x}_t) + (1 - e_t) \odot h_{t-1}

Secondly, we obtain the Variable Computation Gated Recurrent Unit (VCGRU) by deriving the variable computation version of the GRU architecture. This is achieved by modifying Equations 4 to 6 as follows:

r_t = \sigma(U_r \bar{h}_{t-1} + V_r \bar{x}_t),   z_t = e_t \odot \sigma(U_z \bar{h}_{t-1} + V_z \bar{x}_t)
\tilde{h}_t = \tanh(U(r_t \odot \bar{h}_{t-1}) + V \bar{x}_t)
h_t = z_t \odot \tilde{h}_t + (1 - z_t) \odot h_{t-1}
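To make the mechanics concrete, a minimal NumPy sketch of a single VCRNN step combining the scheduler, the soft mask and the partial update defined above (our own illustration; the parameter scales and the threshold value eps = 0.01 are assumptions, not values from the paper):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def vcrnn_step(h_prev, x, U, V, u, v, b, lam=1.0, eps=0.01):
    D = h_prev.shape[0]
    # Scheduler: fraction of dimensions to update at this step.
    m = sigmoid(u @ h_prev + v @ x + b)
    # Soft mask: e_i = Thres_eps(sigmoid(lam * (m*D - i))), i = 1..D.
    i = np.arange(1, D + 1)
    e = sigmoid(lam * (m * D - i))
    e = np.where(e > 1 - eps, 1.0, np.where(e < eps, 0.0, e))
    # Partial update on the masked vectors; untouched dims carry over.
    h_bar, x_bar = e * h_prev, e * x
    h_new = np.tanh(U @ h_bar + V @ x_bar)
    return e * h_new + (1 - e) * h_prev, m

D = 8
rng = np.random.RandomState(0)
h, x = np.zeros(D), rng.randn(D)
U, V = rng.randn(D, D) * 0.1, rng.randn(D, D) * 0.1
u, v, b = rng.randn(D) * 0.1, rng.randn(D) * 0.1, 0.0
h, m = vcrnn_step(h, x, U, V, u, v, b, lam=5.0)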
"}, {"section_index": "7", "section_name": "4.2 LEARNING", "section_text": "Since the soft mask e_t is a continuous function of the model parameters, the scheduler can be learned through back-propagation. However, we have found that the naive approach of using a fixed sharpness parameter and simply minimizing the negative log-likelihood defined in Equation 1 led to the model being stuck in a local optimum which updates all dimensions at every step. We found that the following two modifications allowed the model to learn better parametrizations.

First, we can encourage m_t to be either close to or no greater than a target \bar{m} at all times by adding a penalty term to the objective. For example, we can apply an l1 or l2 penalty to values of m_t that are greater than the target, or that simply diverge from it (in which case we also discourage the model from using too few dimensions). The cost function defined in Equation 1 then becomes:

\mathcal{O}(w, U, V, \Theta, u, v, b) = \mathcal{L}(w, U, V, \Theta, u, v, b) + \Omega(m, \bar{m})

Secondly, for the model to be able to explore the effect of using fewer or more dimensions, we need to start training with a smooth mask (small \lambda), since for small values of \lambda, the model actually uses the whole hidden state. We can then gradually increase the sharpness parameter until the model truly does a partial update.

"}, {"section_index": "8", "section_name": "5 EXPERIMENTS", "section_text": "We ran experiments with the Variable Computation variants of the Elman and Gated Recurrent Units (VCRNN and VCGRU respectively) on several sequence modeling tasks. All experiments were run using a symmetrical l1 penalty on the scheduler m, that is, penalizing m_t when it is greater or smaller than the target \bar{m}, with \bar{m} taking various values in the range [0.2, 0.5]. In all experiments, we start with a sharpness parameter \lambda = 0.1, and increase it by 0.1 per epoch to a maximum value of 1.

In each of our experiments, we are interested in investigating two specific aspects of our model. On the one hand, do the time patterns that emerge agree with our intuition of the time dynamics expressed in the data? On the other hand, does the Variable Computation Unit (VCU) yield a good predictive model? More specifically, does it lead to lower perplexity than a constant computation counterpart which performs as many or more operations? In order to be able to properly assess the efficiency of the model, and since we do not know a priori how much computation the VCU uses, we always report the "equivalent RNN" dimension (noted as RNN-d in Table 3) along with the performance on test data, i.e. the dimension of an Elman RNN that would have performed the same amount of computation. Note that the computational complexity gains we refer to are exclusively in terms of lowering the number of operations, which does not necessarily correlate with a speed up of training when using general purpose GPU kernels; it is however a prerequisite to achieving such a speed up with the proper implementation, motivating our effort.

We answer both of these questions on the tasks of music modeling, bit and character level language modeling on the Penn Treebank text, and character level language modeling on the Text8 data set as well as two languages from the Europarl corpus.
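Before turning to the individual tasks, note that the penalty Ω and the sharpness schedule used in all experiments are simple to state in code; a sketch of one plausible reading (the penalty weight is an assumption, not a value from the paper):

import numpy as np

def budget_penalty(m_values, m_target, weight=1.0):
    # Symmetric l1 penalty: punish schedules m_t that diverge from the
    # target budget in either direction.
    return weight * np.abs(np.asarray(m_values) - m_target).sum()

def sharpness(epoch, lam0=0.1, step=0.1, lam_max=1.0):
    # lambda starts at 0.1 and grows by 0.1 per epoch, capped at 1.
    return min(lam0 + step * epoch, lam_max)

m_t = [0.45, 0.30, 0.60]                 # scheduler outputs over a sequence
loss = 1.234 + budget_penalty(m_t, m_target=0.35)
lam = sharpness(epoch=4)                 # -> 0.5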
That way, the model performs an update using the first (mt D + n) dimensions of the hidden state, where n goes to O as X increases, and leaves its last ((1 mt) D - n) dimensions unchanged. Thus, if g is the recurrence function defined in Equation[2] we have:\nFirst, we can encourage m to be either close or no greater than a target m at all time by adding a penalty term to the objective. For example, we can apply a l1 or l2 penalty to values of m that are greater than the target, or that simply diverge from it (in which case we also discourage the model from using too few dimensions). The cost function defined in Equation[1 then becomes:"}, {"section_index": "9", "section_name": "5.1 MUSIC MODELING", "section_text": "We downloaded a corpus of Irish traditional tunes from https://thesession.org and split them intc a training validation and test of 16,000 (2.4M tokens), 1,511 (227,000 tokens) and 2,000 (288,00( tokens) melodies respectively. Each sub-set includes variations of melodies, but no melody ha. variations across subsets. We consider each (pitch, length) pair to be a different symbol; with rests. and bar symbols, this comes to a total vocabulary of 730 symbols..\nTable[1compares the perplexity on the test set to Elman RNNs with equivalent computational costs an VCRNN with hidden dimension 500 achieves better perplexity with fewer operations than ar RNN with dimension 250.\nLooking at the output of the scheduler on the validation set also reveals some interesting patterns. First, bar symbols are mostly ignored: the average value of mt on bar symbols is 0.14, as opposec. to 0.46 on all others. This is not surprising: our pre-processing does not handle polyphony or time. signatures, so bars en up having different lengths. The best thing for the model to do is then just tc ignore them and focus on the melody. Similarly, the model spends lest computation on rests (0.34 average mt), and pays less attention to repeated notes (0.51 average for mt on the first note of a. repetition, 0.45 on the second).\nTable 1: Music modeling, test set perplexity on a corpus of traditional Irish tunes. Our mode manages to achieve better perplexity with less computation than the Elman RNN..\nWe also notice that the model needs to do more computation on fast passages, which often have. richer ornamentation, as illustrated in Table[2] While it is difficult to think a priori of all the sorts. of behaviors that could be of interest, these initial results certainly show a sensible behavior of the scheduler on the music modeling task..\nnote length 0.25 1/3 0.5 0.75 1 1.5 2 0.61 0.77 0.39 0.59 0.44 0.46 0.57 average m\nTable 2: Average amount of computation (mt) for various note lengths. More effort is required fc the faster passages with 16th notes and triplets.\nWe also chose to apply our model to the tasks of bit level and character level language modeling. Those appeared as good applications since we know a priori what kind of temporal structure to look. for: ASCII encoding means that we expect a significant change (change of character) every 8 bits in bit level modeling, and we believe the structure of word units to be useful when modeling text at the. character level."}, {"section_index": "10", "section_name": "5.2.1 PENN TREEBANK AND TEXT8", "section_text": "We first ran experiments on two English language modeling tasks, using the Penn TreeBank and Text8 data sets. 
We chose the former as it is a well studied corpus, and one of the few corpora for which people have reported bit-level language modeling results. It is however quite small for our purposes, with under 6M characters, which motivated us to apply our models to the larger Text8 data set (100M characters). Table 3 shows bit per bit and bit per character results for bit and character level language modeling. We compare our results with those obtained with standard Elman RNN, GRU, and LSTM networks, as well as with the Conditional RNN of (Bojanowski et al., 2015).

Quantitative Results. We first compare the VCRNN to the regular Elman RNN, as well as to the Conditional RNN of (Bojanowski et al., 2015), which combines two layers running at bit and character level for bit level modeling, or character and word level for character level modeling. For bit level language modeling, the VCRNN not only performs fewer operations than the standard unit, it also achieves better performance. For character level modeling, the Elman model using a hidden dimension of 1024 achieved 1.47 bits per character, while our best performing VCRNN does slightly better while only requiring as much computation as a dimension 760 Elman unit. While we do slightly more computation than the Conditional RNN, it should be noted that our model is not explicitly given word-level information: it learns how to summarize it from character-level input.

The comparison between the constant computation and Variable Computation GRU (VCGRU) follows the same pattern, both on the PTB and Text8 corpora. On PTB, the VCGRU with the best validation perplexity performs as well as a GRU (and LSTM) of the same dimension with less than half the number of operations. On Text8, the VCGRU models with various values of the target \bar{m} always achieve better perplexity than other models performing similar or greater numbers of operations. It should be noted that none of the models we ran on Text8 overfits significantly (the training and validation perplexities are the same), which would indicate that the gain is not solely a matter of regularization.

Figure 2: Top: Per-bit computation by VCRNN, higher dimensions (950 to 1000). Middle: adding 8 bits of buffer between every character. Bottom: adding 24 bits of buffer between each character.

Character level PTB:
unit type     RNN-d   bpc
GRU-1024      1450    1.42
LSTM-1024     2048    1.42
RNN-1024      1024    1.47
CRNN-500      700     1.46
VCRNN-1024    760     1.46
RNN-760       760     1.47
LSTM-380      760     1.44
GRU-538       760     1.43
VCGRU-1024    648     1.42
LSTM-324      648     1.46
GRU-458       648     1.47

Bit level PTB:
unit type     RNN-d   bpb
RNN-100       100     0.287
RNN-500       500     0.227
RNN-1000      1000    0.223
CRNN-100      140     0.222
VCRNN-1000    340     0.231
VCRNN-1000    460     0.215

Character level Text8:
unit type     \bar{m}   RNN-d   bpc
RNN-512*                512     1.80
RNN-1024*               1024    1.69
LSTM-512*               1024    1.65
LSTM-1024*              2048    1.52
RNN-512                 512     1.80
GRU-512                 725     1.69
GRU-1024                1450    1.58
VCGRU-1024    0.3       464     1.69
VCGRU-1024    0.4       648     1.64
VCGRU-1024    0.5       820     1.63
Table 3: Left: Bits per character for character level language modeling on Penn TreeBank. CRNN refers to the Conditional RNN from (Bojanowski et al., 2015). Middle: Bits per bit for bit level language modeling on Penn TreeBank. Right: Bits per character for character level language modeling on Text8. *From (Zhang et al., 2016).

Figure 3: Per-character computation by VCRNN. Top: English. Middle: Czech. Bottom: German. All languages learn to make use of word units.

Bit Level Scheduler. The scheduler in the bit level language model manages to learn the structure of ASCII encoding: Figure 2 shows that the higher dimensions are modified roughly every 8 bits. We also created some artificial data by taking the PTB text and adding 8 or 24 zero bits between each character. Figure 2 shows that the model learns to mostly ignore these "buffers", doing most of its computation on actual characters.

Character Level Scheduler. On character level language modeling, the scheduler learns to make use of word boundaries and some language structures. Figure 3 shows that the higher dimensions are used about once per word, and in some cases, we even observe a spike at the end of each morpheme (long-stand-ing, as shown in Figure 5). While we provide results for the VCRNN specifically in this Section, the VCGRU scheduler follows the same patterns.

"}, {"section_index": "11", "section_name": "5.2.2 EUROPARL CZECH AND GERMAN", "section_text": "We also ran our model on two languages from the Europarl corpus. We chose Czech, which has a larger alphabet than other languages in the corpus, and German, which is a language that features long composite words without white spaces to indicate a new unit. Both are made up of about 20M characters. We tried two settings. In the "guide" setting, we use the penalty on m_t to encourage the model to use more dimensions on white spaces. The "learn" setting is fully unsupervised, and encourages lower values of m_t across the board.

Figure 4: Bits per character for different computational loads on the Europarl Czech (left) and German (right) datasets. The VCRNN, whether guided to use boundaries or fully unsupervised, achieves better held-out log-likelihood more efficiently than the standard RNN."}]
r1aGWUqgg
[{"section_index": "0", "section_name": "UNSUPERVISED LEARNING OF STATE REPRESENTATIONS FOR MULTIPLE TASKS", "section_text": "Antonin Raffin\nEcole Nationale Superieure de Techniques Avancees (ENSTA-ParisTech), Paris, France antonin.raffin@ensta- -paristech.fr\nSebastian Hoferl. Rico Jonschkowski & Oliver Brock\nRobotics and Biology Laboratory, Technische Universitat Berlin, Germany. {sebastian.hoefer,rico.ionschkowski,oliver.brock}@tu-be\nRobotics and Biology Laboratory, Technische Universitat Berlin, Germany\nRobotics and Mechatronics Center, German Aerospace Center (DLR), Wessling, Germany freek.stulp@dlr.de\nRico Jonschkowski, Sebastian Hofer, and Oliver Brock. Patterns for Learning with Side Information arXiv:1511.06429 [cs, stat], November 2015."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "n many reinforcement learning problems, the agent has o solve a variety of different tasks to fulfill its overall goal. A common approach to this problem is to learn a ingle policy for the whole problem, and leave the de- composition of the problem into subtasks to the learner. In many cases, this approach is successful (Mnih et al. 2015, Zahavy et al.2016), but it comes at the expense Figl f requiring large amounts of training data. Alternatively, has nultiple policies dedicated to different subtasks can be as fa earned. This, however, requires prior knowledge about obse now the overal problem decomposes into subtasks. More- over, it can run into the same issue of requiring large amounts overlap and thus afford shared computation to solve them.\nBrenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Buildin Machines That Learn and Think Like People. arXiv preprint arXiv:1604.00289, 2016..\nS. Lange, M. Riedmiller, and A. Voigtlander. Autonomous reinforcement learning on raw visua input data in a real world application. In 2012 International Joint Conference on Neural Network. (IJCNN), pp. 1-8, June 2012. doi: 10.1109/IJCNN.2012.6252823.\nSergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-End Training of Deep Visuomotor Policies. arXiv:1504.00702 [cs], April 2015.\nA common approach to address overlapping problems is multi-task learning (Caruana 1997): b learning a single policy with different subgoals, knowledge between the different tasks can be trans ferred. This not only allows to learn a compact representation more efficiently, but also improve the agent's performance on all the individual subtasks (Rusu et al.]2016)..\nMulti-task learning, however, faces two problems: it requires the decomposition of the overall prob- lem into subtasks to be given. Moreover, it is not applicable if the subtasks are unrelated, and are better solved without sharing computation. In this case, the single-policy approach results in an agent that does not perform well on any of the individual tasks (Stulp et al.][2014) or that unlearns\nEmilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning. In ICLR, San Juan, Puerto Rico, 2016.\n1 The first two authors contributed equally to this work\nAlain Droniou, Serena Ivaldi, and Olivier Sigaud. Deep unsupervised network for multimodal per ception, representation and classification. Robotics and Autonomous Systems, 71:83-98, Septem ber 2015. 1SSN 0921-8890. doi: 10.1016/j.robot.2014.11.005.\nWe present an approach for learning state representations in multi-task reinforce- ment learning. 
Our method learns multiple low-dimensional state representations from raw observations in an unsupervised fashion, without any knowledge of which task is executed, nor of the number of tasks involved. The method is based on a gated neural network architecture, trained with an extension of the learning with robotic priors objective. In simulated experiments, we show that our method is able to learn better state representations for reinforcement learning, and we analyze why and when it manages to do so.

Andras Gabor Kupcsik, Marc Peter Deisenroth, Jan Peters, and Gerhard Neumann. Data-Efficient Generalization of Robot Skills with Contextual Policy Search. In AAAI, 2013.

Figure 1: Slot car racing: the agent has to learn how to drive any of the cars as far as possible (left), based on its raw observations (right).

In this work, we address the problem of identifying and isolating individual unrelated subtasks, and learning multiple separate policies in an unsupervised way. To that end, we present MT-LRP, an algorithm for learning state representations for multiple tasks by learning with robotic priors. MT-LRP is able to acquire different low-dimensional state representations for multiple tasks in an unsupervised fashion. Importantly, MT-LRP does not require knowledge about which task is executed at a given time or about the number of tasks involved. The representations learned with MT-LRP enable the use of standard reinforcement learning methods to compute effective policies from few data.

Richard S. Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, August 1999. ISSN 0004-3702. doi: 10.1016/S0004-3702(99)00052-1.

As explained before, our approach is orthogonal to the classical multi-task learning approach, and constitutes a problem of its own right due to the issues of underperformance and catastrophic forgetting. Therefore, we disregard the shared knowledge problem in this paper. However, any complete reinforcement learning system will need to combine both flavors of multi-task learning, for related and unrelated tasks, and future work will have to address the two problems together.

Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images. arXiv, 2015.

MT-LRP is implemented as two neural networks, coupled by a gating mechanism (Sigaud et al., 2015; Droniou et al., 2015) as illustrated in Figure 2. The first network, χ, detects which task is being executed and selects the corresponding state representation. The second network, φ, learns task-specific state representations. The networks are trained simultaneously using the robotic priors learning objective (Jonschkowski & Brock, 2015), exploiting physics-based prior knowledge about how states, actions, and rewards relate to each other. Both networks learn from raw sensor data, without supervision and solely based on the robot's experiences.

Tom Zahavy, Nir Ben Zrihem, and Shie Mannor. Graying the black box: Understanding DQNs. arXiv:1602.02658 [cs], February 2016.
Figure 2: Overview of the gated network for state representation learning for multiple tasks.

In a simulated experimental scenario, we show that MT-LRP is able to learn multiple state representations and task detectors from raw observations, and that these representations allow learning better policies from fewer data when compared with other methods. Moreover, we analyze the contribution of each of the method's individual components to this result.

MT-LRP combines three ideas into a novel approach for task discovery and state representation learning: 1) extracting state representations for each task with robotic priors (Jonschkowski & Brock, 2015); 2) discovering discrete tasks and corresponding actions/policies in a RL context (Stulp et al., 2014; Hofer & Brock, 2016); 3) using gated networks to implement a "mixture of experts" (Jacobs et al., 1991; Droniou et al., 2015).

State Representation Learning: Learning from raw observations is considered a holy grail in reinforcement learning (RL). Deep RL has had major success in this, using model-free RL (Mnih et al., 2015) but also by combining model-free and model-based RL (Levine et al., 2015). These approaches apply end-to-end learning to get from raw input to value functions and policies. A different approach is to explicitly learn state representations using unsupervised learning, e.g. using auto-encoders (Lange et al., 2012). Recently, Watter et al. (2015) extended this idea to learn state representations jointly with dynamic models and apply optimal control to compute a policy. We use learning with robotic priors (Jonschkowski & Brock, 2015), a state representation learning method that exploits information about temporal structure, actions, and rewards. We go beyond previous work by not only learning single state representations, but learning multiple state representations given raw data from multiple tasks.

Options and Parameterized Skills: A common approach to factorizing a RL problem into subtasks are macro-actions, often called options (Sutton et al., 1999; Hengst, 2002). The main difference with our approach is that options are used to hierarchically decompose one high-level task into subtasks (and learn sub-policies for these subtasks), whereas we learn task-specific state representations for different high-level tasks. However, options bear resemblance on a technical level, since they are often implemented by a high-level "selection" policy that parametrizes low-level policies (Daniel et al., 2012; Kupcsik et al., 2013; Stulp et al., 2014). Continuous versions of options, referred to as parametrized skills, have been proposed, too (Da Silva et al., 2012; Deisenroth et al., 2014; Doshi-Velez & Konidaris, 2016). However, in all the work above, the state representation is given. To the best of our knowledge, state representation learning has not yet been considered in the context of RL with options or parameterized skills.

Gated Networks for Mixtures of Experts and Submanifold Learning: Gated networks are networks that contain gating connections, in which the outputs of at least two neurons are multiplied (Sigaud et al., 2015). This allows a gating neuron g to prohibit (or limit) the flow of information from one neuron x to another neuron y, similar to how transistors function. An early
example of gated networks is the mixture of experts approach (Jacobs et al., 1991; Jacobs & Jordan, 1993; Haruno et al., 2001), where separate networks in a modular neural network specialize in predicting subsets of training examples from a database. Our contribution is to extend mixtures of experts by state representation learning (e.g. from raw images) and to the more difficult RL (rather than supervised learning) context. Our gated network architecture is similar to the one proposed by Droniou et al. (2015). Their network simultaneously learns discrete classes jointly with continuous class variations (called submanifolds) in an unsupervised way, e.g., discrete digit classes and shape variations within each class. We use a similar architecture, but in a different way: rather than learning discrete classes, we learn discrete tasks; class-specific submanifolds correspond to task-specific state representations; and finally, we consider a RL rather than an unsupervised learning context.

As mentioned in the introduction, our work is orthogonal to multi-task learning (Caruana, 1997), which has been extensively studied in recent reinforcement learning literature, too (Parisotto et al., 2016). Our approach can be trivially combined with multi-task learning by prepending the gate and state extraction modules with a subnetwork that shares knowledge across tasks. Another interesting multi-task approach is policy distillation (Rusu et al., 2016). This method combines different policies for multiple tasks into a single network, which enables to share information between tasks and to learn a compact network that can even outperform the individual policies.

We formulate MT-LRP in a reinforcement learning (RL) setting using a Markov decision process (MDP) (S, A, T, R, γ): Based on the current state s ∈ S, the agent chooses and executes an action a ∈ A, obtains a new state s' ∈ S (according to the transition function T) and collects a reward r ∈ R. The agent's goal is to learn a policy π : S → A that maximizes the expected return E(\sum_{t=0} γ^t r_t), with r_t being the reward collected at time t and 0 < γ ≤ 1 the discount factor. We consider an episodic setting with episodes of finite length, a continuous state space S and a discrete action space A.

In this work, we assume that the agent cannot directly observe the state s but only has access to observations o ∈ O, which are usually high-dimensional and contain task-irrelevant distractors. This requires us to extract the state from the observations by learning an observation-state-mapping φ : O → S, and use the resulting state representation S to solve the RL problem (assuming that a Markov state can be extracted from a single observation). To learn the state representation, we apply learning with robotic priors (Jonschkowski & Brock (2015), from now on referred to as LRP). This method learns φ from a set of temporally ordered experiences D = {(o_t, a_t, r_t)}_{t=1}^{|D|} by optimizing the following loss:

\mathcal{L}_{RP}(D, φ) = ω_t \mathcal{L}_{temp.}(D, φ) + ω_p \mathcal{L}_{prop.}(D, φ) + ω_c \mathcal{L}_{caus.}(D, φ) + ω_r \mathcal{L}_{rep.}(D, φ)     (1)

This loss consists of four terms, each expressing a different prior about suitable state representations for robot RL. We optimize it using gradient descent, assuming φ to be differentiable. We now explain the four robotic prior loss terms in Eq. (1).
Temporal Coherence enforces states to change gradually over time (Wiskott & Sejnowski, 2002):

$$\mathcal{L}_{\mathrm{temp.}}(D, \phi) = \mathbb{E}\left[ \|\Delta s_t\|^2 \right],$$

where Δs_t = s_{t+1} − s_t denotes the state change. (To increase readability we replace φ(o_t) by s_t.) Proportionality expresses the prior that the same action should change the state by the same magnitude, irrespective of time and the location in the state space:

$$\mathcal{L}_{\mathrm{prop.}}(D, \phi) = \mathbb{E}\left[ \left( \|\Delta s_{t_2}\| - \|\Delta s_{t_1}\| \right)^2 \;\middle|\; a_{t_1} = a_{t_2} \right].$$

Causality enforces two states s_{t_1}, s_{t_2} to be dissimilar if executing the same action in s_{t_1} generates a different reward than in s_{t_2}:

$$\mathcal{L}_{\mathrm{caus.}}(D, \phi) = \mathbb{E}\left[ e^{-\|s_{t_2} - s_{t_1}\|^2} \;\middle|\; a_{t_1} = a_{t_2},\; r_{t_1+1} \neq r_{t_2+1} \right].$$

Repeatability enforces the same action to change similar states in a similar way:

$$\mathcal{L}_{\mathrm{rep.}}(D, \phi) = \mathbb{E}\left[ e^{-\|s_{t_2} - s_{t_1}\|^2} \,\|\Delta s_{t_2} - \Delta s_{t_1}\|^2 \;\middle|\; a_{t_1} = a_{t_2} \right].$$

Additionally, the method enforces simplicity by requiring s to be low-dimensional. Note that learning with robotic priors only makes use of the actions a, rewards r, and temporal information t during optimization, but not at test time for computing φ(o) = s. Using a, r and t in this way is an instance of the learning with side information paradigm (Jonschkowski et al., 2015).
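To make these terms concrete, the following is a minimal PyTorch sketch of how the four priors of Eq. (1) could be computed on a single trajectory of encoded states. The pair-sampling strategy, tensor shapes and function names are our own illustrative assumptions, not the authors' implementation.

# Minimal sketch of the four robotic-prior terms of Eq. (1).
# All names and the pair-sampling scheme are illustrative assumptions.
import torch

def robotic_priors_loss(s, a, r, w=(1.0, 5.0, 1.0, 5.0)):
    """s: (T, M) states phi(o_t); a: (T,) discrete actions; r: (T,) rewards."""
    ds = s[1:] - s[:-1]                      # state changes Delta s_t
    a_t, r_next = a[:-1], r[1:]              # action at t, reward at t+1

    # Temporal coherence: states should change gradually.
    l_temp = (ds.norm(dim=1) ** 2).mean()

    # All index pairs (t1, t2) with identical actions.
    t1, t2 = torch.triu_indices(len(ds), len(ds), offset=1)
    same_a = a_t[t1] == a_t[t2]
    t1, t2 = t1[same_a], t2[same_a]

    # Proportionality: same action -> same magnitude of state change.
    l_prop = ((ds[t2].norm(dim=1) - ds[t1].norm(dim=1)) ** 2).mean()

    sim = torch.exp(-((s[t2] - s[t1]) ** 2).sum(dim=1))   # state similarity

    # Causality: same action but different reward -> dissimilar states.
    diff_r = r_next[t1] != r_next[t2]
    l_caus = sim[diff_r].mean()              # (empty-mask guard omitted)

    # Repeatability: similar states + same action -> similar state change.
    l_rep = (sim * ((ds[t2] - ds[t1]) ** 2).sum(dim=1)).mean()

    wt, wp, wc, wr = w
    return wt * l_temp + wp * l_prop + wc * l_caus + wr * l_rep

In practice the pairs (t1, t2) would be subsampled for long trajectories, since their number grows quadratically with trajectory length.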
\"multi-task\" in the sense of[Caruana|(1997) unless set to the identity), and a T M N gating tensor G that encodes the T (linear) observation state mappings (M = dim(s) and N is the output dimension of Ppre). The value of the state's i-th dimension s; computes as the expectation of the dot product of gating tensor and Spre(o) over the task probabilities x(o):\nT si=Qi(0)= Xk(0)(Gk,i,;, Ppre(0)) k=1\nwhere w, is a scalar weight balancing the influence of the additional loss term. Task coherence is the assumption that a task only changes between training episodes, not within the same episode. It. does not presuppose any knowledge about the number of tasks or the task presented in an episode,. but it exploits the fact that task switching weakly correlates with training episodes. Moreover, this. assumption only needs to hold during training: since x operates directly on the observation o, it can in principle switch the task at every point in time during execution. Task-coherence applies directly to the output of the task detector, x(o), and consists of two terms:.\nThe second term expr resses task senaration and encout to assign tasks to different episodes\nThis loss is complementary to task consistency, as it penalizes x if it assigns similar task distributions to ot, ot, from different episodes. Note that $ep will in general not become zero. The reason is that. the number of episodes usually exceeds the number of tasks, and therefore two observations from different episodes sometimes do belong to the same task. We will evaluate the contribution of each of the two terms to learning success in Section|5.2."}, {"section_index": "4", "section_name": "5 EXPERIMENTS", "section_text": "We evaluate MT-LRP in two scenarios. In the multi-task slot-car racing scenario (inspired byLange et al.(2012), we apply MT-LRP to a linearly solvable problem, allowing us to easily inspect wha and how MT-LRP learns. In slot-car racing, the agent controls one of multiple cars (Figure 1) with the goal of traversing the circuit as fast as possible without leaving the track due to speeding in curves. However, the agent does not know a priori which car it controls, and only receives the raw visual signal as input. Additionally, uncontrolled cars driving at random velocity, act as visual distractors. We turn this scenario into a multi-task problem in which the agent must learn to contro. each car, where controlling the different cars corresponds to separate tasks. We will now provide the technical details of our experimental set-up."}, {"section_index": "5", "section_name": "5.1 EXPERIMENTAL SET-UP: SLOT-CAR RACING", "section_text": "The agent controls the velocity of one car (see Fig.1), receives a reward proportional to the car's velocity, chosen from [0.01, 0.02, : , O.1], and a negative reward of -10 if the car goes too fast.\nL = LRp(D,)+@tLr(D,X)\nccon+sep osep\nccon =E|H(x(0t),x(0t2)) episode, = episode,\nwhere H denotes the cross-entropy H(p,q) = - Lx p(x) logg(x). It can be viewed as a measure of dissimilarity between probability distributions p and q. We use it to penalize x if it assigns different task distributions to inputs ot, Ot, that belong to the same episode. 
To train the gated network without task labels, we extend the robotic priors objective of Eq. (1) with an additional task-coherence prior:

$$\mathcal{L} = \mathcal{L}_{\mathrm{RP}}(D, \phi) + \omega_\tau \mathcal{L}_{\tau}(D, \chi),$$

where ω_τ is a scalar weight balancing the influence of the additional loss term. Task coherence is the assumption that a task only changes between training episodes, not within the same episode. It does not presuppose any knowledge about the number of tasks or the task presented in an episode, but it exploits the fact that task switching weakly correlates with training episodes. Moreover, this assumption only needs to hold during training: since χ operates directly on the observation o, it can in principle switch the task at every point in time during execution. Task coherence applies directly to the output of the task detector, χ(o), and consists of two terms, $\mathcal{L}_{\tau} = \mathcal{L}_{\tau\text{-con}} + \mathcal{L}_{\tau\text{-sep}}$. The first term expresses task consistency:

$$\mathcal{L}_{\tau\text{-con}} = \mathbb{E}\left[ H\!\left(\chi(o_{t_1}), \chi(o_{t_2})\right) \;\middle|\; \mathrm{episode}_{t_1} = \mathrm{episode}_{t_2} \right],$$

where H denotes the cross-entropy H(p, q) = −Σ_x p(x) log q(x). It can be viewed as a measure of dissimilarity between probability distributions p and q. We use it to penalize χ if it assigns different task distributions to inputs o_{t_1}, o_{t_2} that belong to the same episode. Note that task consistency can be viewed as a temporal coherence prior on the task level (Wiskott & Sejnowski, 2002). The second term expresses task separation and encourages χ to assign tasks to different episodes:

$$\mathcal{L}_{\tau\text{-sep}} = \mathbb{E}\left[ e^{-H\left(\chi(o_{t_1}),\, \chi(o_{t_2})\right)} \;\middle|\; \mathrm{episode}_{t_1} \neq \mathrm{episode}_{t_2} \right].$$

This loss is complementary to task consistency, as it penalizes χ if it assigns similar task distributions to o_{t_1}, o_{t_2} from different episodes. Note that L_{τ-sep} will in general not become zero. The reason is that the number of episodes usually exceeds the number of tasks, and therefore two observations from different episodes sometimes do belong to the same task. We will evaluate the contribution of each of the two terms to learning success in Section 5.2."}, {"section_index": "4", "section_name": "5 EXPERIMENTS", "section_text": "We evaluate MT-LRP in two scenarios. In the multi-task slot-car racing scenario (inspired by Lange et al. (2012)), we apply MT-LRP to a linearly solvable problem, allowing us to easily inspect what and how MT-LRP learns. In slot-car racing, the agent controls one of multiple cars (Figure 1) with the goal of traversing the circuit as fast as possible without leaving the track due to speeding in curves. However, the agent does not know a priori which car it controls, and only receives the raw visual signal as input. Additionally, uncontrolled cars driving at random velocity act as visual distractors. We turn this scenario into a multi-task problem in which the agent must learn to control each car, where controlling the different cars corresponds to separate tasks. We will now provide the technical details of our experimental set-up."}, {"section_index": "5", "section_name": "5.1 EXPERIMENTAL SET-UP: SLOT-CAR RACING", "section_text": "The agent controls the velocity of one car (see Fig. 1), receives a reward proportional to the car's velocity, chosen from [0.01, 0.02, ..., 0.1], and a negative reward of −10 if the car goes too fast in curves. The velocity is subject to Gaussian noise (zero mean, standard deviation 10% of the commanded velocity). All cars move on independent lanes and do not influence each other. The agent observes the scenario by getting a downscaled 16x16 RGB top-down view (dimension N = 16 · 16 · 3 = 768) of the car circuit (Fig. 1(b)).

Figure 3: Reinforcement learning curves (mean and standard error) for different state representations for the two-slot-car scenarios. Left: static visual cue. Right: dynamic visual cue.

In our experiments, there are two or three cars on the track, and the agent controls a different one in every episode. To recognize the task, the agent must be able to extract a visual cue from the observation which correlates with the task. We study two types of visual cues: Static Visual Cue: The arrangement of cars stays the same in all episodes and a static visual cue (a picture of the controlled car) in the top-left image corner indicates which car is currently controlled. Dynamic Visual Cue: The agent always controls the same car (with a certain color), but in each task the car is located on a different lane (as in Fig. 1(b)).

Data Collection and Learning Procedure: The agent collects 40 episodes per task, each episode consisting of 100 steps. To select an action in each step, the agent performs ε-greedy exploration by picking a random action with probability ε = 0.3 and the best action according to its current policy otherwise. The agent computes a policy after every τ episodes, by first learning the observation-state mapping (state representation) and then computing policies π_1, ..., π_T (based on the outcomes of the learned χ and φ). To monitor the agent's learning progress, we measure the average reward the agent attains on T test episodes, i.e. one test episode of length 100 per task (using the greedy policy), amounting to 8000 experiences in total. To collect sufficient statistics, the whole experiment is repeated 10 times.

Learning Strategies and Baselines: We compare five strategies: a) MT-LRP with task-coherence prior; b) robotic priors without gated network, LRP (M = 4); c) principal component analysis (PCA) on the observations (M = 20); d) raw observations (M = 768); and e) an upper baseline applying RL to the known 2D position of the slot car under control (M = 2). For each state representation method we evaluate different M and report only the best performing M. Additionally, we evaluate a lower baseline in the form of a randomly moving agent. We use the same RL algorithm for all methods. To learn the state representations with robotic priors, we base our implementation on Theano and Lasagne, using the Adam optimizer with learning rate 0.005, batch size 100, Glorot's weight initialization and ω_t = 1, ω_p = 5, ω_c = 1, ω_r = 5, ω_τ = 10. Moreover, we apply an L1 regularization of 0.001 on φ. Additionally, we analyze the contribution of the task-coherence priors by applying MT-LRP with the full task-coherence loss (consistency and separation), with each of the two terms alone, and without task coherence.
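Tying the pieces together, the following sketch computes the two task-coherence terms defined above from a batch of task distributions χ(o) and their episode indices. The pairing scheme and the numerical-stability constant are our own assumptions, not the authors' implementation.

# Minimal sketch of the task-coherence terms (consistency + separation).
import torch

def task_coherence_loss(chi, episode, eps=1e-8):
    """chi: (B, T) task probabilities; episode: (B,) episode index per sample."""
    i, j = torch.triu_indices(len(chi), len(chi), offset=1)
    # Pairwise cross-entropy H(chi_i, chi_j) = -sum_k chi_i[k] * log chi_j[k]
    H = -(chi[i] * torch.log(chi[j] + eps)).sum(dim=1)
    same = episode[i] == episode[j]
    l_con = H[same].mean()                  # same episode -> same task distribution
    l_sep = torch.exp(-H[~same]).mean()     # different episodes -> different tasks
    return l_con + l_sep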
Policy Learning: We consider the model-free setting with continuous states S and discrete actions A, and solve it using nearest-neighbor Q-learning, kNN-TD-RL (Martin H et al., 2009), with k = 10. More recent approaches to model-free RL would be equally applicable (Mnih et al., 2015)."}, {"section_index": "6", "section_name": "5.2 RESULTS", "section_text": "We will now present the three main results of our experiments: (i) we show that MT-LRP enables the agent to extract better representations for RL; (ii) we provide insight into how the learner detects the task and encodes the state representations; and finally, (iii) we show the contribution of each of the task-coherence loss terms.

MT-LRP Extracts Better State Representations for RL Figure 3 shows the learning curves for RL based on state representations learned by the different methods in the two-slot-car scenario (static visual cue on the left, dynamic on the right). No method reaches the performance of the upper baseline, mainly due to aliasing errors resulting from the low image resolution. The random baseline ranges around an average reward of −84.9 with standard error 0.72 and was omitted from the figure. The state representation learning baselines without robotic priors perform poorly because they are unable to identify the task-irrelevant distractions. MT-LRP gets very close to the performance of the upper baseline, especially for very low amounts of training data (d < 2500), whereas LRP does not even attain this level of performance for the full training set d = 8000 in the static task. The gap between MT-LRP and LRP increases even more if we add another car (Figure 5), because LRP can only learn one state representation for all three tasks. Including the three slot cars in this representation results in distractions for the RL method. However, in the dynamic-visual-cue scenario LRP-4 performs on par with MT-LRP. Surprisingly, running LRP with only two dimensions suffices to achieve the performance of MT-LRP. We will explain this phenomenon below. To conclude, MT-LRP learns better policies than the baselines in all slot-car scenarios.

MT-LRP Detects All Tasks and Learns Good State Representations To gain more insight into what is learned, we analyze the state representations extracted by MT-LRP and LRP. Figure 4 shows the state representation learned by MT-LRP for the static-visual-cue scenario. Each point in the figure corresponds to one observation; markers indicate the task and colors the most active gate unit. We see that the first gate unit (blue) is always active for task 1 (circle), and the second gate unit for task 2. This shows that the task is detected with high accuracy. The task detector χ is also highly certain, which is reflected in the fact that its entropy evaluates to nearly zero. The states reflect the circular structure of the slot-car race track, which shows that MT-LRP has learned to identify the tasks and to represent the position of the controlled car.

The RL experiments raised the question why LRP manages to solve the dynamic, but not the static-visual-cue scenario as well as MT-LRP. We hypothesize that, for the dynamic cue, LRP is able to extract the position of the car, regardless of which lane it is in, using a single linear mapping.
Figure 6 confirms this hypothesis: LRP filters for the car's color (blue) along the track and assigns increasing weights to these pixels, which results in the extraction of its position. It also assigns constant weights along the track in the red channel, using the lane change of the two cars as an offset. This results in a mapping to two circles similar to Fig. 4, where the state encodes both the position and the task. Such a mapping can be expressed by a linear function precisely because the features that are relevant for one task do not reappear in another task (e.g. a blue slot car in track 1 does not appear in the task where the blue car is in track 2). However, there exists no equivalent linear mapping for the static-visual-cue variant of the slot-car task.

Figure 4: State representation learned per task (different markers) and per gate unit (different colors).

Figure 5: Reinforcement learning performance in the three-slot-car scenario with static visual cue.

We can generalize from this insight as follows: A single linear observation-state mapping is sufficient for multiple tasks if the state representation for every task can be extracted by a linear function using only features that stay constant for all other tasks. If this is the case, then there is no need for decoupling the extraction of task and state.

To understand the influence of the different task-coherence prior variants, we compared their performance in Figure 7. We see that relying solely on the robotic priors gives poor results, mainly because the gate units are not used properly: more than one gate unit is activated per task (χ has high entropy). Adding the task-separation prior forces the network to use as many gates as possible (5 in our case), leading to bad state representations. Interestingly, using task consistency only gives roughly the same result as using task consistency and task separation.

Discussion The experiments showed that MT-LRP is able to solve the representation and reinforcement learning tasks better than the baselines. Important questions for future work concern: the necessity and influence of the task-separation loss, in particular for short episode lengths and if the number of expected tasks exceeds the number of actual tasks; and transferring knowledge by adding shared neural network layers before gating."}, {"section_index": "7", "section_name": "6 CONCLUSION", "section_text": "We have presented MT-LRP, a method for multi-task state representation learning with robotic priors. The method learns in an unsupervised fashion, solely based on the robot's own observations, actions, and rewards. Our experiments confirmed that MT-LRP is effective in simultaneously identifying tasks and learning task-specific state representations. This capability is beneficial for scaling reinforcement learning to realistic scenarios that require dedicated skills for different tasks."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "R. Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.

Figure 6: φ learned by LRP (M = 2) for the two-car dynamic-visual-cue tasks. Rows correspond to state dimensions, columns to RGB color channels.
Figure 7: Task coherence: average reward per episode (8000 samples).

We gratefully acknowledge the funding provided by the German Research Foundation (DFG, Exploration Challenge, BR 2248/3-1) and the Alexander von Humboldt foundation through an Alexander von Humboldt professorship (funded by the German Federal Ministry of Education and Research). Additionally, Antonin Raffin was supported by an Erasmus+ grant."}]
SJBr9Mcxl
[{"section_index": "0", "section_name": "UNDERSTANDING TRAINED CNNS BY INDEXING NEORON SELECTIVITY", "section_text": "Ivet Rafegas & Maria Vanrell\nComputer Vision Center. Universitat Autonoma de Barcelona Bellaterra, Barcelona (Spain).\nivet.rafegas, maria.vanrell}@uab.cat\nThe impressive performance and plasticity of convolutional neural networks to solve different vision problems are shadowed by their black-box nature and its consequent lack of full understanding. To reduce this gap we propose to describe the activity of individual neurons by quantifying their inherent selectivity to spe- cific properties. Our approach is based on the definition of feature selectivity in- dexes that allow the ranking of neurons according to specific properties. Here we report the results of exploring selectivity indexes for: (a) an image feature (color); and (b) an image label (class membership). Our contribution is a framework to seek or classify neurons by indexing on these selectivity properties. It helps to find color selective neurons, such as a red-mushroom neuron in layer conv4 or class selective neurons such as dog-face neurons in layer conv5, and establishes a methodology to derive other selectivity properties. Indexing on neuron selectivity can statistically draw how features and classes are represented through layers at a moment when the size of trained nets is growing and automatic tools to index can be helpful.\n100% 90% [0,0.1) 80% [0.1,0,2) 70% [0.2,0.1) 60% [0.3,0.4) 50% [0.4,0.5) 40% [0.5,0.6) 30% [0.6,0.7 20% 0.7,0,8 10% >=0.80 0% Conv1 Conv2 Conv3 Conv4 Conv5"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Figure 8: Number of neurons and degree of color selectivity through layers. Grayish bars are for low index values and reddish for high index values.\nSeveral works have proposed different methodologies to address the understanding problem. Re- cently, inLi et al.(2016) two main groups of works are mentioned. On one side those works that deal with the problem from a theoretical point of view. These are works such as Montavon et al.[(2011) where kernel sequences are used to conclude that deep networks create increasingly better representations as the number of layer increases,Paul & Venkatasubramanian (2014) which explains why a deep learning network learns simple features first and that the representation com- plexity increases as the layers get deeper, Goodfellow et al.(2014) where an explanation for why an adversarial example created for one network is still valid in many others and they usually assign it the same (wrong) class, orArora et al.(2014) that presents algorithms for training certain deep generative models with provable polynomial running time. On the other side, an empirical point of view, which comprises approaches that pursuit methodologies to visualize intermediate features in the image space, or approaches that analyze the effect of modifying a given feature map in a neuron activation. Our work is framed in the first subset of empirical approaches.\nIn a second experiment, we analyze how color selective neurons from all layers cover the color space Figure[7|displays the distribution of color selective neurons with a 0.40. Each NF is plotted or the hue angle that represents the projection of its first principal component on the OPP chromaticity plane (red-green and blue-yellow components). Dashed rings identify different convolutional layer. from conv1 (inner ring) to conv5 (outer ring) linking the NFs that belong to the same layer. 
We can appreciate the emergence of an axis (from orange to cyan) that connects a crowded area of color selective neurons. We can add a low population of NFs in the magenta area, which becomes more crowded on the opposite side, where green and yellow selectivity has several neurons. The interest of this explanation relies on the fact that a similar distribution appears in the ImageNet color distribution, which is plotted at the bottom of the same images, where a similar interpretation in terms of emergent axes can be made. A more in-depth study is required to prove this correlation, but we illustrate how neuron selectivity helps in the understanding of how a specific property is represented by the CNN.

Visualizing intermediate features seeks to describe the activity of individual neurons. This description is the basis of this work's hypothesis: a proper understanding of the activity of the individual neurons allows us to draw a map of the CNN behavior. This behavior can be understood either in terms of relevant image features or in terms of the discriminative power of the neurons across the full architecture.

The first and most obvious way to describe the activity of a single neuron is given by the inherent set of weights of the learned filters. These weights can be used to compare neurons between them, either within the same layer or versus neurons in similar CNNs which have been trained under different initialization conditions, as proposed by Li et al. (2016). A direct visualization of these weights is intuitive when they belong to neurons of a first convolutional layer. However, when layers are stacked, that intuition disappears and the capability to understand the neuron activity is lost.

Luis A. Alexandre

Department of Computer Science, Universidade da Beira Interior, Covilha, Portugal"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 7: Distribution of color selective neurons on a hue color space through layers. Maximum activation images for the 4 top color selective neurons of each layer. Dashed rings connect NFs of color selective neurons through layers, from inner ring (conv1) to outer ring (conv5).

Figure 9: Number of neurons and degree of class selectivity through layers. Grayish bars are for low index values and bluish for high index values.

Following with the analysis of ranking neurons by their response to a certain property, here we focus on the proposed selectivity index that relates to image labels instead of to an image property: the class selectivity index, which only applies to classification networks. We report the results of different experiments where we have fixed th = 1, which means we consider all the class labels for the N = 100 images that maximally activate the neuron. As we mentioned before, this index can enlighten how classes are encoded through the net layers, which again can be related to the scientific problem of how general object recognition is encoded in the human brain. Here we hypothesize that the difference between localist or distributed codes could correlate with the idea of neurons highly selective to a single class and neurons highly selective to several classes; we resume on this later in Section 5.

In a first experiment we analyze how many neurons present different degrees of class selectivity through layers. The bars in Fig. 9 plot the relative quantity of neurons that are class selective compared to those that are not. Grey represents the ratio of neurons that are not activated by a single class, and bluish represents neurons that are highly activated by a single class. Opposite to what we showed about color selectivity, we found most of the class selective neurons in deeper layers, and no class selectivity in shallow layers, as expected. We have moved from a very basic image property, color, to a very high level property, class label. This fact corroborates the idea that CNNs start by defining basic feature detectors that are shared by most of the classes, and the neurons become more specialized when they belong to deeper layers, representing larger areas in the image space and therefore more complex shapes. We start to have neurons with relevant class selectivity in layer conv3, where 5% of neurons are quite class selective and we found some neurons with a degree of selectivity close to 1. These ratios progressively increase up to layer conv5, where we have more than 50% of neurons with a class selectivity index greater than 0.6, which means that less than 40 different classes activate such a neuron, a very selective ratio considering the number of classes of the ImageNet dataset. In the same layer, 20% of neurons present a high class selectivity index, which means less than 20 different classes. Further experiments should explore how this graphic evolves by moving from the current class labels, which are on the leaves of the ImageNet ontology, towards higher nodes with more generic classes.

A second method to describe neuron activity is projecting the filter weights into the image space, trying to get the inherent feature that maximally activates the filter.
The projection can be computec by composing the inversion of the layer operators under a specific neuron towards the image space this was called a Decoded Filter (DF) in Rafegas & Vanrell(2016). The resulting image represents an estimation of the feature that should highly activate such neuron. The disentangling algorithn that inverts the filter would give a good estimation of the feature image if most of the layer operators were invertible. However, when the number of non-invertible operators increases, the estimatiol becomes unintelligible. The appearance of the DFs can be seen in Fig. 1 of Rafegas & Vanrel (2016). They have also been explored by[Springenberg et al.(2015) for architectures with no pooling layers since pooling is the less invertible operator. They point out the interest of obtaining such a representation, since it would allow the understanding of neuron activity independently of the inpu image. However, the majority of proficient CNNs contain pooling layers.\nSecondly, we have visualized the properties of a set of images presenting different degrees of class. selectivity in Fig.|10|for different levels of depth. We visualize each neuron with their NF visual-. ization and the corresponding cropped images. We also show two tag clouds of each neuron. They. visualize the importance of each class label. With an orange frame we plot the leave classes of the ImageNet ontology, while in the green frame we plot generic classes. This second analysis could. help finding neurons that are specialized to a general semantic concept that different final classes. share. Note that neurons with high class selectivity index have a set of cropped images that we can identify as belonging to the same class.\nFinally, other works focus on proposing approaches able to reconstruct the input image given a feature map, going further of analyzing the individual neuron activity.Mahendran & Vedaldi|(2015 make use of optimization algorithms to search for an image whose feature map best matches a given feature map by incorporating natural image priors. Contrary, in|Dosovitskiy & Brox(2015) the authors propose to reconstruct the input image from its feature maps of a given convolutional network by training a new deconvolutional network to learn filter weights that minimize the image reconstruction error when these filters are applied to the image feature maps. With this approach they are also able to get an image reconstruction with natural priors\nFinally we stress the utility of ranking images by selectivity indexes in Fig.11 where we shov. interesting neurons in different convolutional layers that present high values for both selectivit indexes, neurons which are both, color and class selective."}, {"section_index": "3", "section_name": "5 CONCLUSIONS", "section_text": "In this paper we propose a framework to analyze a trained CNN by dissecting individual neurons. using their indexes of selectivity to specific properties. We have proposed two properties of differ. ent nature: (a) color, that is a low-level image property that we have shown to be entangled in all. the representations levels of the net; (b) class label, that is a high-level image property that can be. analyzed at different levels of abstraction. We have shown that while the number of color selective neurons decreases with depth, the number of class selective neuron increases. In this line of describ. ing the activity of individual images, we have also proposed to visualize the activity with what we. 
have called the neuron feature (NF), that allows to arise interesting structures that are shared by the. images that highly activate a neuron..\nThe proposed work have made us to speculate about two different ways to address the coding proper ties of individual neurons (localist versus distributed). Firstly, we have mentioned the possibility tha a blurred NF, i.e. without a clear structure, belongs to a neuron that can be part of a distributed code\nLikewise, in Zeiler & Fergus(2014) in this work we pursuit visualizing the intrinsic feature of a neuron by analyzing the images that maximally activates a specific neuron. However, to avoid the\nwithin the same layer or versus neurons in similar CNNs which have been trained under different initialization conditions, as it is proposed by Li et al.[(2016). A direct visualization of these weights is intuitive when they belong to neurons of a first convolutional layer. However, when layers are stacked. that intuition disappears and the capability to understand the neuron activity is lost.\nn a first experiment we analyze how many neurons present different degrees of class selectivity hrough layers. The bars in Fig.9[plot the relative quantity of neurons that are class selective com-. ared to those that are not. Grey represents the ratio of neurons that are not activated by a single. lass and bluish represent neurons that are highly activated by a single class. Opposite to what we. howed about color selectivity, we found most of class selective neurons in deeper layers, and no class selectivity in shallow layers, as expected. We have moved from a very basic image property,. color, to a very high level property, class label. This fact corroborates the idea that CNNs start by lefining basic feature detectors that are share by most of the classes, and the neurons become more pecialized when they belong to deeper layers representing larger areas in the image space and there-. ore more complex shapes. We start to have neurons with relevant class selectivity in layer conv3. where a 5% of neurons is quite class selective and we found some neurons with a degree of selective lose to 1. These ratios progressively increase up to layer conv5 where we have more than a 50% of. neurons with a class selectivity index greater than 0.6, that means that we have less than 40 different. lasses activating this neuron, which is a very selective ratio considering the number of classes of he ImageNet dataset. In the same layer a 20% of neurons present a high class selectivity index, than. neans less than 20 different classes. Further experiments should explore how this graphic evolves by. noving from current class labels which are on the leaves of the ImageNet ontology towards higher. nodes with more generic classes.\nA third way to describe neuron activity is by exploring the images that maximally activate the neu ron. One of the most relevant works pursuing the visualization of intermediate features, is the one proposed byZeiler & Fergus|(2014), where they project intrinsic features of neurons from the image hat have provoked a maximum spike to a certain neuron, the network representation is projectec into the image space by isolating them in the deconvolution approach Zeiler et al. (2010). By ob- serving different projections that maximally activate a certain neuron they get the intuition about the main features learned on the network. 
Later on, in Springenberg et al.(2015) the guided back- oropagation improves the deconvolution approach by a new way of inverting rectified linear (ReLu nonlinearities, achieving better visualizations of the activations. These approaches present a main drawback, their feature visualization is image-specific, since the maximum activation of a neuron not always generalize the intrinsic feature of the neuron. To solve this problem, in some works in stead of using the image that provokes the maximum activation, they use optimization techniques to generate an image that maximizes the activation. The key point of these works is using an appropri ate regularization in the generation process, otherwise, the resulting image appearance is unrealistic and difficult to understand.Simonyan et al.[(2014) propose a method to generate an image which is representative of a certain class by maximizing the score of this image to be classified in a cer tain class (or highly activates the specified neuron) with an L2-regularization. A similar work was performed afterwards in |Yosinski et al.(2015) but taking advantage of combining three different regularizations to achieve more recognizable images. Although they have explored different reg ularizations to achieve more realistic intrinsic feature representations, their visualizations present important artifacts that complicate the understanding of the intrinsic property.\nIn the second subset of empirical approaches, Alexey Dosovitskiy(2015) train a generative decon volutional network to create images from neuron activations. With this methodology, the variation of the activations enables the visualization of the differences in the generated images. A similar analysis is done byAubry & Russell (2015), but instead of forward-propagate different activations to the image space and comparing them, they observe the changes on neuron activations when simi- lar computer-generated images with different scene factors are introduced into a CNN. These works contribute in giving a deeper understanding on the internal CNN behavior. Both works conclude that there are specific neurons which are sensitive to color changes, point of views, scale or lighting confi gurations.\nClass Selecitivity index Conv3, y=0.10 Conv2, y=0.21 Conv3, y=0.43 Conv2, y=0.63 Conv3, y=0.83 1 A OM .banniste ambulance policean benker bubble castle.. grannysmith mashedpotatc bookcase streetcar limousine. ragon pembroke .crib. domeelectricfan windowscreen blenheimspaniel gazelle cucumbcr greenhouse geanslot beachwagon audxrtle parkingmeter ringlet peacoek plowpolccat studiocouch cab.. plaseplow popbottle convertible westhighlandwhiteterier zuchini lawnmower animal.. misc artifact artifact --car. artifact animal toydog vehicle 'screen artifact.... blenheimspaniel organism conveyancedevi covering ..nstiru dog organism ..chordateplacenta motorvehicl instrumentality misc instrumentality windowscreen vertebrate. domesticanimal wheeledvehicle protectivecovering cnglishtoyspanicl toyspanicl.. beeth instrumentality ..mammal Conv5, y=0.20 Conv4, y=0.41 Conv5, y=0.63 Conv4, y=0.96 Conv5, y=0.99 conch. sodalscaactire articho ...bellcote... cbottle cabbagebuttertle .church mosque cocktailshaker palaceperfume acousticguitar pooltable cardoon orange chiton.cockroscl dome vault chamberednautilus teddyju monastery animal.instrumentality artifact. animal. m1sc acousticguitar instrumentality nvertebrate furnishing artifact chosdaic.. 
artifacl..s cardoon artifact musicalinstrument artifact dogdomesticanimal vegetable covering structure.. furniture pooltable device... stringedinstrument .organism... instumcstality organism .instrumentality guitar instrumentality table misc protectivecovering\nconv1 % AUC conv2 % AUC conv3 % AUC conv4 % AUC conv5 % AUC 100% 95.01% 90.04% 85.02% 85.53% 95.47% 90.05% -85.03% 80.01% -80.06% aet 85.01% act aet 75.18% 75.05% 90.43% aer 80.09% 85.02% 80.24% -75.24% 70.01% -70.03% 80.23% 78.93% 70.04% -66.32% 65.02% -76.32% -73.32% 66.58% 61.42% 62.38% 70.21% 57.28% 60.02% 54.01% Image Ranking Image Ranking Image Ranking Image Ranking Image Ranking\nFigure 1: Normalized activations of a subset of neurons for the first 400 ranked images through al convolutional layers. For each layer we plot the normalized activation for the neurons with highes and smallest AUC (Area Under Curve), and some other examples in between these extremes. Fo all neurons the highest normalized activations is 1, and the percentage of AUC is computed witl respect to the neuron AUC achieving the biggest area in the entire network.\nartifact conveyancedevice vehicle motorvehicle instrumentality wheeledvehicle\nlack of generality of this approach, we define the Neuron Feature which is not based on a single maximum activation. The Neuron Feature is a weighted average version of a set of maximum activation images that capture the essential properties shared by the most important activations and makes it not to be image-specific. Additionally, our Neuron Feature overcomes the problem of unrealistic representation we metnioned earlier, by directly averaging on the image space. In this way we achieve two main advantages: (a) keeping the properties of the natural images, and (b) providing a very straightforward approach to compute it.\nFigure 10: Neurons with different class selectivity indexes. For each neuron two images (top: NF bottom: cropped images) and two tag clouds (top: leave classes, bottom: all classes in the ontology)\nAs we mentioned in the previous section we propose to visualize the image feature that activates. a neuron, whenever is possible, by directly computing a weighted average of the N-th first images that maximally activate this neuron. We will refer to it as the Neuron Feature (NF).\nIn order to build the NF we need to calculate the activations associated to each individual neuron They need to be accordingly ranked with the rest of activations of the layer. For each neuron w select the set of images that achieve a minimum normalized activation value but constrained to a maximum number of images for practical reasons. By normalized activation we mean the value o the maximum activation of a neuron for a specific input image, which is normalized by the maximun of these values achieved by the same neuron over all the images in the dataset..\ndevice animal. artifact... vcetcbrali mise instrumentality organisn\nartifact lamplantcern sourceofillumination insiramen device instrumentality\nIn Fig.1we can see the behavior of the ranked normalized responses of a subset of neurons for every. convolution layer of the VGG-M CNN trained on ImageNet byChatfield et al.(2014). The y-axis. represents the normalized activation value of a single neuron to an image of the dataset. Images. are ranked on the x-axis according with their activation value, from highest to lowest activation (we just plot the first 400 images for each neuron). 
Therefore, the first relative activation value is always 1 for all neurons and then the normalized activation values decrease monotonically. This\nFigure 11: Examples of neurons with high color and class selectivity indexes\nconv1 conv2 conv3 % AUC conv4 % AUC conv5 % AUC % AUC % AUC 1 100% 95.01% 90.04% 85.02% 85.53% 95.47% 90.05% -85.03% 80.01% 80.06% 90.43% aet 85.01% act 80.09% aar 75.18% aer 75.05% 80.24% 85.02% -75.24% 70.01% -70.03% 80.23% -78.93% 70.04% 66.32% 65.02% 76.32% -73.32% -66.58% 61.42% 62.38% 70.21% 57.28% 60.02% 54.01% Image Ranking Image Ranking Image Ranking Image Ranking Image Ranking\nartifact screen covering ...instrumentality.-. windowscreen protectivecovering\nacousticguitar instrumentality artifact musicalinstrument device... stringedinstrument guitar\nDepth Conv2, y=0.40 Conv3, y=0.70 Conv4, y=0.76 Conv5, y=0.93 a=0.68 a=0.50 a=0.79 a=0.72 beerboule. europeangallinule digitalclock popbottle digitalclock ladybug oscilloscope indigobunting jellyfish... nematode theatercurtain-..,.volcano peacock vhistle tigerbeetle leafbeetlelifeboat levico animal animal organism artifact.... artifact lamplantern animal arthropod ladybug. sourceofillumination bird... misc devicem organism ..beetle... insect mise chordate veriebrate instrumentality instrumentality. misc. invertebrate\nanimal organism arthropod ladybug ... beetle... insect misc. invertebrate\nanimal bird.. misc chordate vertebrate organism\nwhere the neuron does not represent a selectivity to a single shape, maybe to diverse shapes thar. can be part of a code in deeper neurons. Secondly, we speculate about the possibility that neurons with high class selective index can represent a localist code, and part of a distributed when is low. In parallel, the analysis of the color selective neurons have made to arise some parallelism between. color representation in the 1st convolutional layer and known evidences about the representation in. the human visual system.\nconv1 conv2 conv3 conv4 conv5\nAs further work we need to fully exploit the potential of the indexes in different CNN architectures and defining new selectivity indexes like shape or texture, that could be a perfect complement tc current ones.\nFigure 2: Neuron Feature (NF) visualizations (top) for 5 neuronsof the different convolutional layers of VGG-M with their corresponding 100 cropped images (bottom). We scale all layers to the same. size due to space constraints."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Thomas Brox Alexey Dosovitskiy, Jost Tobias Springenberg. Learning to generate chairs with con volutional neural networks. In CVPR, 2015.\nSanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In ICML, pp. 584-592, 2014.\nconv1 conv2 conv3 conv4 conv5 (a) (b)\nMathieu Aubry and Bryan C. Russell. Understanding deep features with computer-generated im agery. In ICCV, 2015.\nRobert Benavente, Maria Vanrell, and Ramon Baldrich. Parametric fuzzy sets for automatic color naming. JOSA, 25(10):2582-2593, Oct 2008\nIan J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. CoRR,abs/1412.6572,2014. URL http://arxiv.org/abs/1412.6572\nFigure 3: Examples of NFs for each convolutional layer of the network VGG-M (see section 4.1 (a) 20 examples of structured NF, (b), blurred NF. Although sizes of NF increments through layers we scale them into the same size. 
Original sizes are: 7x7x3 , 27x27x3, 75x75x3, 107x107x3 anc 139x139x3 for conv1, conv2, conv3, conv4 and conv5, respectively.\nAravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. CVPR, 2015.\nNajib J Majaj, and James J DiCarlo. Deep neural networks rival the representation of primate it. cortex for core visual object recognition. PLoS computational biology, 10, 2014 Dec 2014.. K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving. deep into convolutional nets. In BMVC, 2014. Bevil R. Conway and Doris Y. Tsao. Color-tuned neurons are spatially clustered according to color preference within alert macaque posterior inferior temporal cortex. Proc Natl Acad Sci U S A., 42. (106):18034-18039, 2009. L. Delchambre. Weighted principal component analysis: a weighted covariance eigendecomposition. approach. MNRAS, 446:3545-3555, 2014. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical. Image Database. In CVPR09, 2009. Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks.. CoRR, abs/1506.02753,2015. URLhttp://arxiv.0rg/abs/1506.02753\nNicolaus Kriegeskorte and Gabriel Kreiman. Visual Population Codes - Toward a Common Multi ndFu M O01\nYixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John E. Hopcroft. Convergent learning: Do different neural networks learn the same representations? In ICLR, 2016.\nIvet Rafegas and Maria Vanrell. Color spaces emerging from deep convolutional networks. In CIC 2016.\nRobert Shapley and Michael J. Hawken. Color in the cortex: Single- and double-opponent cell. VR, 51(7):701-717, 4 2011.\nKaren Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks. Visualising image classification models and saliency maps. In In ICLR Workshop 2014, 2014.\nJost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. Striving for simplicity: The all convolutional net. ICLR, 2015..\nA. Vedaldi and K. Lenc. Matconvnet - convolutional neural networks for matlab. 2015.\nThus. the NF is computed as:\nJason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neura networks through deep visualization. In Deep Learning Workshop, (ICML), 2015.\nNmax 1 NF Wj,i,L1 lax j=1\nwhere wj,i,L is the relative activation of the j-th cropped image, denoted as I,, of the i-th neuron nL,i. at layer L. The relative activation is the activation aj,i of a neuron, given a input image, with respect to its maximum activation obtained for any image, wj,i,L =. a j, i where amax,i = max ak,i,Vk. am.a.\nMatthew D. Zeiler, Dilip Krishnan, Graham W. Taylor, and Rob Fergus. Deconvolutional networks In CVPR, 2010.\nIn Fig.2 we can see some NFs and their corresponding set of first 100 maximum activations, and in Fig.3[(a) we can see a selected subset of 20 NF per layer. In this image we can identify specific shapes that display the intrinsic property that fires a single neuron. At first glance, we can see how in this particular network the first two layers are devoted to basic properties. Oriented edges of different frequencies and in different colors in the first layer; textures, blobs, bars and more specific curves in the second layer. The rest of the layers seem to be devoted to more complex objects. 
We can see that dog and human faces, cars and flowers are detected at different scales in different layers, since the size of the NF and their corresponding cropped images increase with depth. This visualization of the neuron activity can be seen as a way to visualize a trained vocabulary of the CNN that opens multiple ways to analyze the global behavior of the network from its single units. However, not all neurons present such a clear tuning to an identifiable shape. Some neurons present a blurred version of NF, such as, those in Fig.3[b). The level of blurring is directly related to a high variability between the maximally activated images for a neuron.\nAt this point, we want to make a short parenthesis to relate the previous representational observa tions with the scientific problem about neural coding that is focus of attention in visual brain research (Kriegeskorte & Kreiman (2011)). We are referring to the hypothesis about distributed representa- tions that encode object information in neuron population codes, that co-exist with strong evidences of neurons which are only activated by a very specific object. In line with this idea, we invite to speculate about neurons presenting a highly structured NF could be closer to localist code neurons while neurons with a blurred NF as closer to a distributed code. We return on this discussion later on at sections4.3and5\nFinally, we want to add a further analysis about how neuron feature is related to the neuron activit is representing. In Fig.4|we plot the level of the neuron responses when the input image is its ow. NF. We can observe a high degree of activation (in green) between the NF and the response of the ne. to this feature. However we have some disagreements between the NF and the neuron activations. an important example is shown in layer 2, that is curiously bigger than in layer 3 and 4. This i. explained by the high number of dead neurons?|and also by a higher presence of texture selectiv. neurons, that is observed in Fig. 3] Another example, which is more understandable, is the clea. increase of disagreement that happens through layers 3, 4 and 5, that seems to be explained by a. increase in invariance that is obvious when the size of the image increases..\nThe results are shown for a maximum number of images equal to Nma. 100 and a minimum activation value over a 70% of the maximum activation. We plot these values on Fig.1 2Rv dead nel\nnormalization allows to compare different neuron behaviors, from neurons which are activated by most of the images (flatter behavior), to neurons that highly activates only for a subset of images and have very little activation for the rest (steeper behavior). In this figure we also provide the percentage of area for each plotted curve. This percentage is computed over the area of the neuron that presents the maximum AUC in the entire architecture. We can observe different behaviors in all layers. In general, we can state that in deeper layers the behavior of the neurons is steeper (lower AUC), i.e. neurons highly spike for a small number of images. However, in shallower layers the behavior is flatter, i.e. neurons highly spike for a lot of images. This is an expected behavior, since the image features spiking neurons in first layers (e.g. oriented edges) are shared by almost all the images, while the features spiking shallow neurons are more selective features (e.g. faces) that only spike for specific images. 
The observation of the responses confirms the adequacy of our assumption to fix a minimum value for the activation and a maximum number of images to capture the most important activations for all the neurons. Similar observations have been made for other networks like VGG-S and VGG-FChatfield et al.(2014)\n100% <0% 90% [0,10)% 80% [10,20]% 70% [20,30]% 60% 30,40% 50% [40,50)% 40% [50,60)% 30% [60,70)% 20% [70,80)% 10% [80,90]% 0% Conv1 Conv2 Conv3 Conv4 Conv5 [90,100] 9\nFigure 4: Number of neurons and degree of activation as a response to their own NF. Activations values are. normalized to a specific range within each layer..\nIn this section we propose to describe neurons by their inherent response to a specific property, using an index. The index has to allow to rank them in a proportional order between their response and the existence of the property in the input image. Therefore, we translate the problem of describing neuron activity to the problem of proposing methods which are able to quantify specific image facets that correlate with the degree of activation of the neuron holding such a property. A selectivity index of a single unit is a flexible an independent method for discriminating or clustering between neurons inside the same network. Selectivity indexes can be defined either for image features or for image labels. In what follows, we propose two selectivity indexes one on each group."}, {"section_index": "5", "section_name": "3.1 COLOR SELECTIVITY INDEX", "section_text": "Color selectivity is a property that can be proved in specific neurons of the human brain. The leve. of activation of the neuron when the observer is exposed to a stimulus with a strong color bias, an its corresponding low activation when the color is not present, is the object of attention in visio. research that pursuits the understanding of how color is coded in the human visual system (Shaple. & Hawken(2011)Conway & Tsao(2009)).\nHere we propose a method to compute a color selectivity index for neurons in artificial neural net- works. We propose to base it directly on the image properties of the NF we have defined above. We quantify the selectivity to a specific chromaticity directly from the color distribution of the NF We define this index as the angle between the first principal component (v) of the color distribution. of the NF and the intensity axis (b) of the Opponent Color Space (OPP). To compute (v) we use a weighted Principal Component Analysis Delchambre[(2014) that allows to strengthen the selec. tivity of small color areas. Weights are applied to each pixel in order to reinforce those pixels that. are shared by most cropped images and that highly contribute to the NF. Therefore, the weights are the inverse of the standard deviation. In this way, a NF defined by cropped images with different. colors will tend to be represented by a grayish image and its principal component will be close to. the intensity axis in the OPP color space and it will receive a low selectivity index. We formulate. this index (in degrees) as follows:.\nOther selectivity indexes that can be derived from this, are those related to color attributes. We ca. 
easily extract color name labels using a color naming approach such asBenavente et al.(2008) anc directly define color selectivity to basic names such as red, or green, among others.\nFigure 5: Conv1 NFs sorted by their color selectivity in- dex.\n1 b: v arccos 90 l|b[v|\nClass selectivity is a property of a neuron that can help to establish its discriminative power for one specific class or can allow to cluster neurons accordingly with the ontological properties of their class labels.\nWe propose a method to compute a class selectivity index for individual neurons by compiling the class labels of the images that maximally activates this neuron in a single descriptor. We define class selectivity from the set of class labels of the N images used to build the NF. To quantify this index we build the class label distribution of the full set of images. As in the color selectivity index we weight the significance of a class label by the relative activation of its image. Thus, the relative frequency of each class c for a certain neuron is defined as:\nwhere N. refers to the number of images, among the N cropped images activating this neuron, that belong to class c.\nGiven the densities for all the classes. Finally, our class selectivity index is defined as follows:\nwhere M is the minimum number of classes that covers a pre-fixed ratio, th, of the neuron activation this can be denoted as M fc th. This threshold allow to avoid considering class labels with very small activation weight. Jointly with the index value the selectivity provides the set of M classes that describe the neuron selectivity and their corresponding relative frequency values.\nTherefore, a low class selectivity index indicates a poor contribution of this neuron to a single class (minimum is O when M = N), while a high value (maximum is 1) indicates a strong contribution of this neuron to a single class. In between we can have different degrees of selectivity to different number of classes. Obviously, this index is irrelevant for the last fully connected layers in a CNN but it allows to group related neurons across different convolutional layers.\nHere we want to point out, that this index can also contribute to give some insights about the problem. of how information is coded through layers, in the debate of localist and distributed neural codes we mentioned before (Kriegeskorte & Kreiman|(2011)). Neurons with high class selectivity index. should be in line with a localist code, while neurons with low class selectivity index should be part. of a distributed code. This way the index is defined allow a large range of interpretations in between. these two kinds of coding as it has been outlined in the visual coding literature.."}, {"section_index": "6", "section_name": "4 RESULTS", "section_text": "In this section we report some empirical results to show how the proposed selectivity indexes per form and what representational conclusions we can extract from the subsets of neurons sharing indexed properties"}, {"section_index": "7", "section_name": "4.1 EXPERIMENTAL SETUP", "section_text": "In this paper we analyze the neurons of a CNN architecture trained on ImageNet ILSVRC datasel Deng et al.(2009) (using a subset of 1.2M images classified in 1.000 categories). We report the results for the VGG-M CNN that was trained by [Chatfield et al.(2014) for a generic visual task of object recognition. 
The details of the CNN architecture are given in table[1 We selected this net work since it has a similar structure to those which have been reported as having a representational performance that competes with human performance (as was proved in|Cadieu et al. (2014)). Never- theless, we have obtained similar results for VGG-F and VGG-S that are provided in Chatfield et al. (2014). We used the Matconvnet library provided by[Vedaldi & Lenc (2015) for all the experiments\n.i.\nN - M N -1\nColor Selecitivity index Conv2, a=0.03 Conv3, a=0.15 Conv2, a=0.29 Conv3, a=0.48 Conv2, a=0.97 ept Conv4, a=0.03 Conv5, =0.12 Conv4, a=0.39 Conv5, a=0.53 Conv4, a=0.76\nFigure 6: Neurons with different color selectivity indexes. Images in 4 rows (1st and 3rd row are NFs, 2nd and 4th rows are sets of cropped images that maximally activates the neuron)"}, {"section_index": "8", "section_name": "4.2 COLOR SELECTIVITY", "section_text": "General purpose CNN architectures are usually trained on RGB color images. However there is a. strong belief in the computer vision community that color is a dispensable property. The results we. obtain by indexing color selective neurons make us conclude that there is no basis for such a belief Results show that color is strongly entangled at all levels of the CNN representation. In a preliminary. experiment we have tested a subset of ImageNet images with VGG-M in their original color and the. same subset in a gray scale representation. Classification results show a considerable decrease. while original RGB images are classified with a 27.50% top-1 error and 10.14% top-5 error, gray. scale image versions present 51.12% and 26.37% errors, top-1 and top-5 errors respectively.\nIn a first experiment we extract how many NFs are related to color in each convolutional layer usin. the proposed color selectivity index. The bars in Fig.8|plot the relative quantity of neurons that ar. color selective compared to those that are not. Grey represents the ratio of neurons that do not spik. for the presence of a color and reddish represent neurons that are highly activated by the presenc. of a color. In the graphic we can observe that shallow layers are the main responsible for the colo representation on the images: 50% and 40% of neurons are color selective in layers conv1 and conv2. respectively. Nevertheless, we also still found around 25% of color selective neurons in in deepe. layers. Therefore, although neurons in deeper layers tend to be color invariant, an important par. of the representation is devoted to color, that reinforces the discriminative power of color in objec recognition. In Fig.6|we show some examples of NFs with different degrees of color selectivity a. different layers of the network and showing the corresponding cropped images..\nRegarding color representation in layer 1 we want to point out two more observations derived from the NFs (see Fig.5): (a) selectivity to different spatial-frequencies is only tackled by gray-level neurons; and (b) four main color axis emerge (black-white, blue-yellow, orange-cyan and cyan magenta). Curiously, these two observations correlate with evidences in the human visual system (Shapley & Hawken(2011).\nTable 1: VGG-M architecture designed byChatfield et al.(2014), where M N P corresponds. to number of filters, number of rows and columns of the filters respectively. St. and pad. refers to stride and padding respectively: LRN is a ReLU and the corresponding pooling (pool) if applied.."}]
ByG4hz5le
[{"section_index": "0", "section_name": "ADAPTIVE FEATURE ABSTRACTION FOR TRANSLATING VIDEO TO LANGUAGE", "section_text": "Yunchen Pu
Department of Electrical and Computer Engineering, Duke University

Martin Renqiang Min
Machine Learning Group, NEC Laboratories America"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Accurately understanding the fast-growing number of videos poses a significant challenge for computer vision and machine learning. An important component of video analysis involves generating natural-language video descriptions, i.e., video captioning. Inspired by the successful deployment of the encoder-decoder framework used in machine translation (Cho et al. 2014) and image caption generation (Vinyals et al. 2015; Pu et al. 2016; Gan et al. 2017), most recent work on video captioning (Venugopalan et al. 2015; Yu et al. 2016) employs a 2-dimensional (2D) or 3-dimensional (3D) Convolutional Neural Network (CNN) as an encoder, mapping an input video to a compact feature vector representation; a Recurrent Neural Network (RNN) is typically employed as a decoder, unrolling this feature vector to generate a sequence of words of arbitrary length.

*Most of this work was done when the author was an intern at NEC Labs America."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "A new model for video captioning is developed, using a deep three-dimensional Convolutional Neural Network (C3D) as an encoder for videos and a Recurrent Neural Network (RNN) as a decoder for captions. A novel attention mechanism with spatiotemporal alignment is employed to adaptively and sequentially focus on different layers of CNN features (levels of feature 'abstraction'), as well as local spatiotemporal regions of the feature maps at each layer. The proposed approach is evaluated on the YouTube2Text benchmark. Experimental results demonstrate quantitatively the effectiveness of our proposed adaptive spatiotemporal feature abstraction for translating videos to sentences with rich semantic structures.

Despite achieving encouraging successes in video captioning, previous models suffer from important limitations. First, the rich contents of an input video are often compressed to a single compact feature vector for caption generation; this approach is prone to miss detailed spatiotemporal information. Secondly, the video feature representations are typically extracted from the output of a CNN at a manually-selected fixed layer, which is incapable of modeling rich context-aware semantics that requires focusing on different abstraction levels of features. As investigated in Zeiler & Fergus (2014) and Simonyan et al. (2014), the features from layers at or near the top of a CNN tend to focus on global semantic discriminative visual percepts, while low-layer features provide more local, fine-grained information. It is desirable to select/weight features from different CNN layers adaptively when decoding a caption, selecting different levels of feature abstraction by sequentially emphasizing features from different CNN layers. In addition to focusing on features from different CNN layers, it is also desirable to emphasize local spatiotemporal regions in feature maps at
particular layers.

To realize these desiderata, our proposed decoding process for generating a sequence of words dynamically emphasizes different levels (CNN layers) of 3D convolutional features, to model important coarse or fine-grained spatiotemporal structure. Additionally, the model employs different contexts and adaptively attends to different spatiotemporal locations of an input video. While some previous models use 2D CNN features to generate video representations, our model adopts the features from a pre-trained deep 3D convolutional neural network (C3D); such features have been shown to be natural and effective for video representation, action recognition and scene understanding (Tran et al. 2015), by learning spatiotemporal features that can provide better appearance and motion information. In addition, the proposed model is inspired by the recent success of attention-based models that mimic human perception (Mnih et al. 2014; Xu et al. 2015).

The principal contributions of this paper are as follows: (i) A new video-caption-generation model is developed by dynamically modeling context-dependent feature abstractions; (ii) New attention mechanisms to adaptively and sequentially emphasize different levels of feature abstraction (CNN layers), while also imposing attention within local spatiotemporal regions of the feature maps at each layer, are employed; (iii) 3D convolutional transformations are introduced to achieve spatiotemporal and semantic feature consistency across different layers; (iv) The proposed model achieves state-of-the-art performance on the Youtube2Text benchmark. We call the proposed algorithm Adaptive SpatioTemporal representation with dynAmic abstRaction (ASTAR)."}, {"section_index": "3", "section_name": "2 METHOD", "section_text": "Consider N training videos, the nth of which is denoted X^(n), with associated caption Y^(n). The length-T_n caption is represented Y^(n) = (y_1^(n), ..., y_{T_n}^(n)), each y_t^(n) being a 1-of-V ('one-hot') encoded vector, with V the size of the vocabulary."}, {"section_index": "4", "section_name": "2.1 CAPTION MODEL", "section_text": "For notational simplicity, henceforth we omit superscript n. The t-th word in a caption, y_t, is mapped to an M-dimensional vector w_t = W_e y_t, where W_e in R^{M x V} is a learned word-embedding matrix, i.e., w_t is a column of W_e chosen by the one-hot y_t. The probability of caption Y = {y_t}_{t=1,...,T} is defined as

$$p(Y|A) = p(y_1|A)\prod_{t=2}^{T} p(y_t \mid y_{<t}, A). \qquad (1)$$

Specifically, the first word y_1 is drawn from p(y_1|A) = softmax(V h_1), where h_1 = tanh(C a_{L+1}). Bias terms are omitted for simplicity throughout the paper. All the other words in the caption are then sequentially generated using an RNN, until the end-of-sentence symbol is generated. The conditional distribution p(y_t | y_{<t}, A) is specified as softmax(V h_t), where h_t is recursively updated as h_t = H(w_{t-1}, h_{t-1}, z_t). V is a matrix connecting the RNN hidden state to a softmax, for computing a distribution over words. z_t = phi(h_{t-1}, a_1, ..., a_L) is the context vector used in the attention mechanism, capturing the relevant visual features associated with the spatiotemporal attention (also weighting the level of feature abstraction), as detailed in Sec. 2.2. The transition function H(.) is implemented with a Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997).
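To make the decoding recursion in (1) concrete, here is a toy NumPy sketch, with a plain tanh RNN standing in for the LSTM transition H(.) and randomly initialized weights; the array names and toy sizes are our assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
V, M, H = 50, 16, 32               # vocabulary, embedding, hidden sizes (toy values)

W_e = rng.normal(0, 0.1, (M, V))   # word embedding: w_t = W_e y_t
W_v = rng.normal(0, 0.1, (V, H))   # hidden-to-vocabulary matrix ("V" in the text)
W_h = rng.normal(0, 0.1, (H, H + M))
C   = rng.normal(0, 0.1, (H, H))   # maps the top-layer video feature to h_1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def generate(a_top, z, max_len=10, eos=0):
    """Greedy decoding of p(y_t | y_<t, A); z stands in for the context vector z_t."""
    h = np.tanh(C @ a_top)                    # h_1 = tanh(C a_{L+1})
    y = int(np.argmax(softmax(W_v @ h)))      # y_1 ~ p(y_1 | A)
    caption = [y]
    for _ in range(max_len - 1):
        w = W_e[:, y]                                    # embed previous word
        h = np.tanh(W_h @ np.concatenate([h, w]) + z)    # simplified H(w, h, z)
        y = int(np.argmax(softmax(W_v @ h)))
        caption.append(y)
        if y == eos:                                     # end-of-sentence symbol
            break
    return caption

print(generate(rng.normal(size=H), rng.normal(size=H)))
```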
Given the video X (with features A), the objective function is the sum of the log-likelihood of the caption conditioned on the video representation:

$$\log p(Y|A) = \log p(y_1|A) + \sum_{t=2}^{T} \log p(y_t \mid y_{<t}, A). \qquad (2)$$

Equation (2) is a function of all model parameters to be learned; they are not explicitly depicted in (2) for notational simplicity. Further, (2) corresponds to a single video-caption pair; when training we sum over all such training pairs."}, {"section_index": "5", "section_name": "2.2 ATTENTION MECHANISM", "section_text": "We introduce two attention mechanisms when predicting word y_t: (i) spatiotemporal-localization attention, and (ii) abstraction-level attention; these, respectively, measure the relative importance of a particular spatiotemporal location and a particular CNN layer (feature abstraction) for producing y_t, based on the word-history information y_{<t}.

To achieve this, we seek to map a_l to hat{a}_l, where the 4D tensors hat{a}_l all have the same dimensions, are embedded into the same semantic space, and are aligned spatiotemporally. Specifically, a_l, l = 1, ..., L-1, are aligned in the above ways with a_L. To achieve this, we filter each a_l, l = 1, ..., L-1, and then apply max-pooling; the filters seek semantic alignment of the features (including the feature dimension), and the pooling is used to spatiotemporally align the features with a_L. Specifically, consider

$$\hat{a}_l = f\Big(\sum_{k=1}^{n_F^l} a_l(k) * U_{k,l}\Big), \qquad (3)$$

for l = 1, ..., L-1, and with hat{a}_L = a_L. Here a_l(k) is the 3D feature map (tensor) for dictionary element k in {1, ..., n_F^l} at layer l, and U_{k,l} is a 4D tensor. The convolution * in (3) operates in the three shift dimensions, and a_l(k) * U_{k,l} manifests a 4D tensor. The function f(.) is an element-wise nonlinear activation function, followed by max pooling, with the pooling dimensions meant to realize final dimensions consistent with a_L. Consequently, hat{a}_{l,i} in R^{n_F} is a feature vector.

With {hat{a}_l}_{l=1,...,L} semantically and spatiotemporally aligned, we now seek to jointly quantify the value of a particular spatiotemporal region and a particular feature layer ('abstraction') for prediction of the next word. For each hat{a}_l, the attention mechanism generates two positive weights, alpha_{ti} and beta_{tl}, which measure the relative importance of location i and layer l for producing y_t based on y_{<t}. The attention weights alpha_{ti} and beta_{tl} and context vector z_t are computed as

$$e_{ti} = \mathbf{w}_a^\top \tanh(W_{aa}\, a_i + W_{ha}\, h_{t-1}), \quad \alpha_{ti} = \mathrm{softmax}(\{e_{ti}\}), \quad s_{tl} = \sum_{i} \alpha_{ti}\, \hat{a}_{l,i}, \qquad (4)$$

$$b_{tl} = \mathbf{w}_b^\top \tanh(W_{sb}\, s_{tl} + W_{hb}\, h_{t-1}), \quad \beta_{tl} = \mathrm{softmax}(\{b_{tl}\}), \quad z_t = \sum_{l=1}^{L} \beta_{tl}\, s_{tl}, \qquad (5)$$

where a_i is a vector composed by stacking {hat{a}_{l,i}}_{l=1,...,L} (all features at position i). e_{ti} and b_{tl} are scalars reflecting the importance of spatiotemporal region i and layer l to predicting y_t, while alpha_{ti} and beta_{tl} are the relative weights of this importance, reflected by the softmax output.
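A NumPy sketch of the two-level attention in (4)-(5), assuming the layer features have already been aligned into hat{a}_l as in (3); the weight names follow the equations, while the toy sizes and random initialization are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_I, n_F, H = 3, 8, 5, 7        # layers, spatiotemporal positions, feature, hidden dims

a_hat = rng.normal(size=(L, n_I, n_F))   # aligned features \hat{a}_{l,i}
h_prev = rng.normal(size=H)              # decoder state h_{t-1}

w_a  = rng.normal(size=n_F)
W_aa = rng.normal(size=(n_F, L * n_F))   # acts on a_i = stack of \hat{a}_{l,i} over l
W_ha = rng.normal(size=(n_F, H))
w_b  = rng.normal(size=n_F)
W_sb = rng.normal(size=(n_F, n_F))
W_hb = rng.normal(size=(n_F, H))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Eq. (4): spatiotemporal attention, shared across the aligned layers.
a_stack = a_hat.transpose(1, 0, 2).reshape(n_I, L * n_F)      # a_i
e = np.array([w_a @ np.tanh(W_aa @ a_stack[i] + W_ha @ h_prev) for i in range(n_I)])
alpha = softmax(e)                                            # alpha_{ti}
s = np.einsum('i,lif->lf', alpha, a_hat)                      # s_{tl}

# Eq. (5): abstraction-level attention over the L layers.
b = np.array([w_b @ np.tanh(W_sb @ s[l] + W_hb @ h_prev) for l in range(L)])
beta = softmax(b)                                             # beta_{tl}
z = beta @ s                                                  # context vector z_t
print(z.shape)   # (n_F,)
```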
In (4) we provide attention in the spatiotemporal dimensions, with that spatiotemporal attention shared across all L (now aligned) CNN layers. In (5) the attention is further refined, focusing attention in the layer dimension.

We present results on the Microsoft Research Video Description Corpus (YouTube2Text) (Chen & Dolan, 2011). YouTube2Text contains 1,970 YouTube clips, and each video is annotated with around 40 sentences. For fair comparison, we used the same splits as provided in Yu et al. (2016), with 1,200 videos for training, 100 videos for validation, and 670 videos for testing. We convert all captions to lower case and remove punctuation, yielding a vocabulary of size V = 12,594.

Table 1: Results on BLEU-4, METEOR and CIDEr metrics compared to state-of-the-art results (Yu et al., 2016) on Youtube2Text.

Methods | BLEU-4 | METEOR | CIDEr
h-RNN (Yu et al., 2016) | 49.9 | 32.6 | 65.8
ASTAR | 51.74 | 36.39 | 72.18

Results are summarized in Table 1; we outperform the previous state-of-the-art result on Youtube2Text. This demonstrates the importance of leveraging intermediate convolutional-layer features. In addition, we achieve these results using a single model, without averaging over an ensemble of such models."}, {"section_index": "6", "section_name": "4 CONCLUSION AND FUTURE WORK", "section_text": "We have proposed a novel video captioning model that adaptively selects/weights the feature abstraction (CNN layer), as well as the location within a layer-dependent feature map. Our model achieves state-of-the-art video caption generation performance on the Youtube2Text benchmark."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "D. Chen and W. B. Dolan. Collecting highly parallel data for paraphrase evaluation. In ACL, 2011.
K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
Z. Gan, C. Gan, X. He, Y. Pu, K. Tran, J. Gao, L. Carin, and L. Deng. Semantic compositional networks for visual captioning. In CVPR, 2017.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu. Recurrent models of visual attention. In NIPS, 2014.
K. Papineni, S. Roukos, T. Ward, and W. Zhu. BLEU: a method for automatic evaluation of machine translation. In ACL, 2002.
Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin. Variational autoencoder for deep learning of images, labels and captions. In NIPS, 2016.
K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.
D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015.
R. Vedantam, C. L. Zitnick, and D. Parikh. CIDEr: Consensus-based image description evaluation. In CVPR, 2015.
S. Venugopalan, M. Rohrbach, J. Donahue, R. Mooney, T. Darrell, and K. Saenko. Sequence to sequence - video to text. In ICCV, 2015.
O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, 2015.
K. Xu, J. L. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
H. Yu, J. Wang, Z. Huang, Y. Yang, and W. Xu. Video paragraph captioning using hierarchical recurrent neural networks. In CVPR, 2016.
M. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014."}]
BJjn-Yixl
[{"section_index": "0", "section_name": "ATTENTIVE RECURRENT COMPARATORS", "section_text": "Pranav Shyam* & Ambedkar Dukkipati
Department of Computer Science and Automation, Indian Institute of Science"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Attentive Recurrent Comparators (ARCs) are a novel class of neural networks built with attention and recurrence that learn to estimate the similarity of a set of objects by cycling through them and making observations. The observations made in one object are conditioned on the observations made in all the other objects. This allows ARCs to learn to focus on the salient aspects needed to ascertain similarity. Our simplistic model that does not use any convolutions performs comparably to Deep Convolutional Siamese Networks on various visual tasks. However, using ARCs and convolutional feature extractors in conjunction produces a model that is significantly better than any other method and has superior generalization capabilities. On the Omniglot dataset, ARC based models achieve an error rate of 1.5% in the One-Shot classification task, a 2-3x reduction compared to the previous best models. This is also the first Deep Learning model to outperform humans (4.5%) and surpass the state of the art accuracy set by the highly specialized Hierarchical Bayesian Program Learning (HBPL) system (3.3%).

Advancing Deep Learning systems to solve Artificial Intelligence tasks requires that models be capable of performing continual meta-learning (Lake et al. 2016; Schaul & Schmidhuber 2010). But top-down hierarchical designs of models (Santoro et al. 2016) to perform such tasks are not very successful on real world data, and there are many reasons for this. First, most datasets are generally not designed with such higher order tasks in mind, thus researchers either work with synthetic data or fabricate higher level tasks based on traditional datasets, both of which constrain their utility. Second, hierarchical or meta models suffer from reduced supervision during training due to their inherent design. Third, with our experiments we found that foundational architectures like Memory Augmented Neural Networks are still in their infancy and not ripe enough to be utilized in complex hierarchical systems. Therefore, in this paper, we present an alternative way of bridging this gap by building models in a bottom-up fashion. Comparing two or more inputs and estimating their similarity is a primal task using which more sophisticated models can be designed, an idea that has been well exploited in traditional Machine Learning for long (Bellet et al. 2013). Using the modern developments of attention mechanisms, and by combining them with recurrent neural networks, we first built better comparators called Attentive Recurrent Comparators (ARCs).[1] Using ARCs as a foundational element, we were then able to build more complex models and achieve qualitatively better results on tasks like one-shot learning. Thus, this work is a proof of concept for the bottom-up design approach that can be applied to almost any dataset.

When a person is asked to compare two objects and estimate their similarity, the person does so by repeatedly looking back and forth between the two objects. With each glimpse of an object, a specific observation is made. These observations made in both objects are then cumulatively used to come to a conclusion about their similarity. A crucial characteristic of this process is that new observations are made conditioned on the previous context that has been investigated so far by the observer. The observation and its contextual location are based on intermediate deductions, which are themselves based on the observations made so far in the two objects.

*Other Affiliation: Student at R V College of Engineering, Bengaluru
[1] Code available at https://github.com/pranv/ARC"}, {"section_index": "2", "section_name": "5.3.2 WITHIN ALPHABETS", "section_text": "The across alphabet task is much simpler, as it is easy to distinguish characters belonging to different languages, compared to distinguishing characters belonging to the same language.
Further, across-alphabet methods use a lot more data, which is particularly advantageous for Deep Learning methods.

There are large variations in the resolution of the images used as well. The Deep Siamese Network of Koch et al. uses 105x105 images and is thus not directly comparable to our model, but we include it as it is the current best result using deep neural nets. The performance of MANNs in this standard setup is interpreted from the graph in the paper, as the authors did not report it. It should also be noted that HBPL incorporates human stroke data into the model. Lake et al. estimate human performance to be at 95.5%.

Table 3: One Shot Classification accuracies of various methods and our ARC models.

Each embedding is mapped to a single score s_j = f(c_j), where f(.) is an affine transform followed by a non-linearity. The final output is the normalized similarity with respect to all similarity scores:

$$p_j = \mathrm{softmax}(s_j) \qquad \forall j \in [1, 20].$$

This whole process is to make sure that we adhere to the fundamental principle of deep learning, which is to optimise objectives that directly reflect the task. The normalisation allows for the expression of relative similarity rather than absolute similarity.

We compare the two models discussed above with other methods in the literature, starting from the simplest baseline of k-Nearest Neighbours to the latest meta-learning methods. The training and evaluation practises are not consistent across the literature.

Many recent papers, like Matching Networks (Vinyals et al. 2016) and MANNs (Santoro et al. 2016), have used 1200 characters for the background set (instead of the 964 specified by Lake et al. (2015)). The remaining 423 characters are used for testing. Most importantly, the characters sampled for both training and evaluation are drawn across all the alphabets in the training set.

This corresponds to the standard Omniglot setting, where characters are sampled within an alphabet and only the 30 background alphabets are used for training and validation."}, {"section_index": "3", "section_name": "5.4 RESULTS", "section_text": "Results are presented in Table 3. Our ARC models outperform all previous methods according to both of the testing protocols and establish the corresponding state of the art results.

Deep Neural Networks (Schmidhuber 2015; LeCun et al. 2015) are very complex parametrised functions which can be adapted to have the required behaviour by specifying a suitable objective function. Our overall model is a simple combination of the attention mechanism and recurrent neural networks (RNNs). We test our model by analysing its performance in similarity learning.
We also test its generalisation ability by using it in a model built for the challenging task of one-shot classification on hand-written character symbols.

It is known that attention brings selectivity to processing information while reducing the processing load (Desimone & Duncan 1995). Attention and (Recurrent) Neural Networks were combined in Schmidhuber & Huber (1991) to learn fovea trajectories. Later, attention was used in conjunction with RBMs to learn what and where to attend, in Larochelle & Hinton (2010) and in Denil et al. (2012). A hard attention mechanism based on Reinforcement Learning was used in Mnih et al. (2014) and further extended to multiple objects in Ba et al. (2014); both of these models showed that the computation required at inference is significantly less compared to highly parallel Convolutional Networks, while still achieving good performance. Soft or differentiable attention mechanisms have been used in Graves (2013). A specialised form of location-based soft attention mechanism, well suited for 2D images, was developed for the DRAW architecture (Gregor et al. 2015), and this forms the basis of our attention mechanism in ARC.

A series of such guided observations and the entailing inferences are accumulated, and finally the judgement on similarity is made.

A survey of the methods and importance of measuring similarity of samples in Machine Learning is presented in Bellet et al. (2013). With respect to deep learning methods, the most popular architecture family is that of Siamese Networks (Bromley et al. 1993). The energy based derivation is presented in Chopra et al. (2005), and since then they have been used across a wide range of modalities: in vision (Zagoruyko & Komodakis 2015; Bertinetto et al. 2016), for face recognition and verification (Taigman et al. 2014), and in Natural Language Processing (Lu & Li 2013; Hu et al. 2014). Recently, Triplet Losses (Hoffer & Ailon 2015) have been used to achieve higher performance; they are similar to our Ternary ARC model at an abstract level.

In stark contrast to this, current similarity estimating systems in Deep Learning are analogues of the Siamese similarity learning system (Bromley et al. 1993). In this system, a fixed set of features is detected in both the objects. Detection of features is independent of the features present in the other object. The two objects are compared based on the mutual agreement in the detected features. More concretely, comparison between two objects in this system consists of measuring the distance between their vector embeddings. A neural network defines the mapping from the object to the corresponding embedding vector in the target space. This neural network is trained to extract the most salient features from the object for the specific task in hand.

There is a major underlying difference between the human approach discussed above and the siamese approach to the problem. In the human way, the information from the two objects is fused from the very beginning, and this combined information primes the subsequent steps in comparison. There are multiple lookups on each of the objects, and each of these lookups is conditioned on the observations of both the objects so far. In the siamese way, when the embeddings in the target space are
compared, the information fuses mostly at an abstract level and only in the last stage.

A Bayesian framework for one-shot visual recognition was presented in Fe-Fei et al. (2003). Lake et al. (2015) extensively study One Shot Learning and present a novel probabilistic framework called Hierarchical Bayesian Program Learning (HBPL) for rapid learning. They have also released the Omniglot dataset, which has become a testing ground for One Shot Learning techniques. Recently, many Deep Learning methods have been developed to do one shot learning: Koch et al. use Deep Convolutional Siamese Networks for performing one shot classification. Matching Networks (Vinyals et al. 2016) and Memory Augmented Neural Networks (Santoro et al. 2016) are other approaches to perform continual or meta learning in the low data regime. All the models except HBPL have inferior one shot classification performance compared to humans on the Omniglot dataset.

We tested ARCs across many visual tasks and compared them against strong baselines of prevalent methods. ARCs which did not use any convolutions showed superior performance compared to Deep Convolutional Siamese Neural Networks on challenging tasks. Though Dense ARCs are as capable as ConvNets, a combination of both ARCs and convolutions produces superior models (hereafter referred to as ConvARCs), capable of better generalization and performance. In the task of estimating the similarity of two characters from the Omniglot dataset, for example, ARCs and Deep ConvNets both achieve about 93.4% accuracy, whereas ConvARCs achieve 96.10% accuracy.

Figure 1: The abstract computational graph of a Binary ARC comparing two images. The controller, which is an RNN, primes the whole process. The two images are alternately and repeatedly attended to, depicted by the carousel below. At each time-step the glimpse taken from the image is based on the attention parameters Omega_t, which are calculated using the previous state of the RNN, h_{t-1}, by projecting it with W_g. The glimpse obtained, G_t, and the previous state h_{t-1} are together used to update the state of the controller to h_t. The vertical dotted lines demarcate the time-steps.

We were interested to see the utility of the human way of comparing objects. For this, we used the modern tools of attention and recurrence to build an end-to-end differentiable model that can learn to compare objects, called Attentive Recurrent Comparators (ARCs). ARCs judge the similarity of objects in a way similar to how people do, as discussed above.

Further, as discussed above, similarity estimation is a generic and primal task in many other higher-level cognitive tasks. Evaluating our model on these higher-level tasks also lets us explore the generalisation capacity of ARCs. In this work, we study the performance of models designed to perform One Shot Learning with ARCs as building blocks. On the Omniglot one-shot classification task, our model achieved 98.5% accuracy, significantly surpassing the current state of the art set by
Deep Learning methods or other systems.

The model operates on the given two images over the span of an episode. The images are given at the beginning of the episode, and the ARC is expected to emit a token of similarity at the end of this episode. Given two images {x_a, x_b}, the model repeatedly cycles through them both, attending to only one image at one time step. Thus the sequence of presentations is x_a, x_b, x_a, x_b, ... and so on, for a finite number of presentations of each image. An episode is nothing more than a collection of time-steps, with an action being taken in each time-step.

For time step t the input image presented is given by

$$I_t = \begin{cases} x_a & \text{if } t \bmod 2 = 0 \\ x_b & \text{otherwise.} \end{cases}$$

The model functionally consists of a recurrent core and an attention mechanism. During the span of the episode, the model iteratively focuses its attention on the current input. At each time step of the episode, the model attends to only one input, but over the course of many time steps it would have observed many aspects of all the inputs. The observations are made by the model at each time step by directing its attention to a region of interest in each input. Since the core of the model is a recurrent neural network, this round-robin-like cyclic presentation of inputs allows for early fusion of information from all the inputs. This makes the model aware of the context in which it is operating. Consequently, this provides feedback to the attention mechanism to attend to the relevant and crucial parts of each sample, considering the context of all the inputs and observations made so far."}, {"section_index": "4", "section_name": "CONCLUSION AND FUTURE WORK", "section_text": "We presented a model that uses attention and recurrence to cycle through a set of images repeatedly and estimate their similarity. We showed that this model is not only viable but also much better than the siamese neural networks in wide use today, in terms of performance and generalization. Our main result is in the task of One Shot classification on the Omniglot dataset, where we achieved state of the art performance, surpassing HBPL's and human performance.

One potential downside of this model is that, due to the sequential execution of the recurrent core and by the very design of the model, it might be more computationally expensive than a distance metric method. But we believe that, with advancing hardware speeds, such costs will be outweighed by the benefits of ARCs.

Fundamentally, the performance of ARCs shows the value of early fusion of information across the entire context of the task. Further, it also strengthens the view that attention and recurrence together can be as good as convolutions in some cases.

More interesting extensions would involve developing more complex architectures using this bottom-up approach to solve even more challenging AI tasks."}, {"section_index": "5", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We would like to thank all the members of the Statistics and Machine Learning Lab at the Indian Institute of Science for their support and feedback. We would like to specifically thank Akshay Mehrotra for his extensive help with everything from the implementation to discussing results. We would also like to thank Siddharth Agrawal and Gaurav Pandey for their helpful feedback throughout the process. We would like to thank Soumith Chintala for his feedback on this manuscript and the idea."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014.
Aurelien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013.
Luca Bertinetto, Jack Valmadre, Joao F. Henriques, Andrea Vedaldi, and Philip H. S. Torr. Fully-convolutional siamese networks for object tracking. arXiv preprint arXiv:1606.09549, 2016.
Jane Bromley, James W. Bentz, Leon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Sackinger, and Roopak Shah. Signature verification using a 'siamese' time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.
Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 539-546. IEEE, 2005.
Misha Denil, Loris Bazzani, Hugo Larochelle, and Nando de Freitas. Learning where to attend with deep architectures for image tracking. Neural Computation, 24(8):2151-2184, 2012.
Robert Desimone and John Duncan. Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18(1):193-222, 1995.
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: a recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
Elad Hoffer and Nir Ailon. Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition, pp. 84-92. Springer, 2015.
Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
Hugo Larochelle and Geoffrey E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In Advances in Neural Information Processing Systems, pp. 1243-1251, 2010.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
Zhengdong Lu and Hang Li. A deep architecture for matching short texts. In Advances in Neural Information Processing Systems, pp. 1367-1375, 2013.
Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, 2014.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
Tom Schaul and Jurgen Schmidhuber. Metalearning. Scholarpedia, 5(6):4650, 2010.
Juergen Schmidhuber and Rudolf Huber. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 2(01n02):125-134, 1991.
The ARC model can be directly derived by distilling the vital aspects from the human way discussed in Section 1. In the following paragraphs we describe the ARC model for the binary image case, where there are two images whose similarity has to be judged. It is trivial to generalise it to more objects or other modalities. See Figure 1 for a visual depiction of the model.

The attention mechanism focuses on a specific region of the image I_t to get the glimpse G_t:

$$G_t = \mathrm{attend}(I_t, \Omega_t), \qquad \Omega_t = W_g\, h_{t-1}.$$

attend(.) is the attention mechanism described in the subsection below, which acts on image I_t; Omega_t are the attention glimpse parameters, which specify the location and size of the attention window. At each step, we use the previous hidden state of the RNN core, h_{t-1}, to compute Omega_t. W_g is the projection matrix that maps the hidden state to the required number of attention parameters.

Next, both the glimpse and the previous hidden state are combined to form the next hidden state:

$$h_t = \mathrm{RNN}(G_t, h_{t-1}).$$

If there are n inputs and we allow for g glimpses of each input, then the episode length L is ng. The hidden state of the RNN controller at the final time step, h_L, can then be used for subsequent processing.

The above 4 equations describe the Binary ARC. We arrived at the iterative-cycling-of-inputs paradigm after trying out many approaches to attend to multiple images at once. Iterative cycling turned out to be more computationally efficient, scalable and statistically more consistent than the other approaches we tested.

Though presented in the context of images, ARCs can be used in any modality. There are innumerable ways to extend ARCs. Better attention mechanisms, higher resolution images, different datasets, hyper-parameter tuning, more complicated controllers etc. are simple ways in which better performance could be achieved.
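The episode recursion above can be sketched in a few lines of NumPy. This is a minimal stand-in, not the authors' implementation: a vanilla tanh RNN replaces the LSTM controller, and a crude crop replaces the Cauchy-kernel attend() defined in the next subsection; all sizes and initializations are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
S, N, H = 32, 4, 64                    # image size, glimpse size, controller state size
g = 8                                  # glimpses per image -> episode length L = 2*g

W_g   = rng.normal(0, 0.1, (3, H))     # maps h_{t-1} to the 3 glimpse parameters
W_in  = rng.normal(0, 0.1, (H, N * N))
W_rec = rng.normal(0, 0.1, (H, H))

def attend(I, omega):
    """Crude stand-in for the Cauchy-kernel attend(): crop an N x N patch."""
    x_hat, y_hat, _ = np.tanh(omega)                 # squash to (-1, 1)
    x = int((x_hat + 1) * (S - N) / 2)               # top-left corner of the patch
    y = int((y_hat + 1) * (S - N) / 2)
    return I[y:y + N, x:x + N].ravel()

def episode(x_a, x_b):
    h = np.zeros(H)
    for t in range(2 * g):
        I_t = x_a if t % 2 == 0 else x_b             # alternate between the two images
        omega = W_g @ h                              # Omega_t = W_g h_{t-1}
        G = attend(I_t, omega)                       # G_t
        h = np.tanh(W_in @ G + W_rec @ h)            # h_t = RNN(G_t, h_{t-1})
    return h                                         # final state h_L, fed downstream

h_final = episode(rng.random((S, S)), rng.random((S, S)))
print(h_final.shape)
```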
We found that the heavy tail of the Cauchy curve to aids in alleviating some of the vanishing gradient issues and it sped up training.\nThe grid's location and size is defined based on the glimpse parameters. The N N grid of kernels is placed at (x, y) on the S S image, with the central Cauchy kernel being located at (x, y) The distance between two Cauchy kernals either in the vertical or horizontal direction is o. In other words, the elemental square of the 2D grid is & in size. The glimpse parameter set Nt is unpacked IandSor tedfroM nd S using the following transforms\nBrenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.\n(x+1) (y+1) (S-1) x = S y=( y = e1-2|] 2 2\nHugo Larochelle and Geoffrey E Hinton. Learning to combine foveal glimpses with a third-order boltzmann machine. In Advances in neural information processing systems, pp. 1243-1251, 2010.\nThe location of a ith row, jth column's Cauchy kernel in terms of the pixel coordinates of the image is given by:\nYann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444 2015.\ntx=x+i-(N+1)/2) and ly=y+(j-(N+1)/2)d\nZhengdong Lu and Hang Li. A deep architecture for matching short texts. In Advances in Neural Information Processing Systems, pp. 1367-1375, 2013.\nThe horizontal and vertical filterbank matrices are then calculated as:\n1'v and Fy[j,b\nTom Schaul and Jurgen Schmidhuber. Metalearning. Scholarpedia, 5(6):4650, 2010.\nJuergen Schmidhuber and Rudolf Huber. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 2(01n02):125-134, 1991.\nattend(It, Nt) = FyItFX\nattend thus gets an N N patch of the image, which is flattened and used in the model\nAs seen in the experimental section below, while simple attention over raw images performs as well as Deep ResNets, we found large improvements by using Convolutional feature extractors Applying several layers of Convolution produces a 3D solid of activations (or a stack of 2D feature maps). Attention over this corresponds to applying the same 2D attention over the entire depth of the 3D feature map and outputting the flattened glimpse.\nUnderstanding the empirical functioning of an ARC and identifying factors affecting its performance. requires both qualitative and quantitative studies. Qualitative analysis tells us what the model is do-. ing when it is comparing 2 images and how this relates to human ways of comparison. Quantitative analysis shows the variations in performance when certain aspects of the model are changed and thus provide an estimate of their importance. For the analysis presented below, we use the simple. ARC model (without convolutions) described in Section 2 above trained for the verification task on the Omniglot dataset. Data samples in the Omniglot dataset have an understandable structure with characters being composed of simple strokes drawn on a clean canvas. The dataset is also very. diverse, which allows us to study various characteristics of our model under a wide range of condi-. tions. Since our main result in the paper is also on the Omniglot dataset (Sections 4 and 5), we train.\nVolodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advance 22122014\nAdam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One. shot learning with memory-augmented neural networks. 
(a) It can be seen that the two characters look very similar in their stroke pattern and differ only in their looping structure. ARC has learnt to focus on these crucial aspects.

(b) ARC parses over the characters in a left to right, top to bottom fashion. Finally, it ends up focussing on the region where the first character has a prolonged downward stroke, whereas the second one does not.

Figure 2: Attention windows over time when comparing two Omniglot characters. The top row has the first image and the bottom row has the second. Each column represents a glimpse step. (a) Comparing two dissimilar characters and (b) comparing two similar characters.

The verification task is a binary classification problem wherein the model is trained to predict whether the two drawings of characters provided belong to the same character or not (see Section 4 for more details). The final hidden state of the RNN controller, h_L, is given to a single logistic neuron to estimate the probability of similarity. The whole setup is trained end to end with back-propagation and SGD. The particular model under consideration had an LSTM controller (Hochreiter & Schmidhuber 1997) with forget gates (Gers et al. 2000). The number of glimpses per image was fixed to 8, the total number of recurrent steps thus being 16. 32x32 greyscale images of characters were used, and the attention glimpse resolution is 4x4.

The following inferences were made after studying several cases of ARC's operation (see Figure 2 for an example):

1. The observations in one image are definitely being conditioned on the observations in the other image. This can be seen in figures 2a and 2b.

2. The ARC seems to have learnt a fairly regular left to right parsing strategy, during which the attention window gradually reduces in size. This is quite similar to strategies found in other sequential attentive models like DRAW (Gregor et al. 2015).

3. Deviation from such regular ordered parsing occurs if the model finds some interesting feature in either character. This results in attention being fixated on that particular region of the character for a few subsequent glimpses.

4. There is no strict chronological coordination or correspondence between the attended regions of the two images. While instances of ARC focussing on the same aspect/stroke of two characters were common, there were plenty more instances wherein the ARC attended to different aspects/strokes in each image during an interval. We hypothesise that the RNN controller could be utilizing turns of glimpsing at an image to observe some other aspects which are not of immediate consequence.

5. We also frequently encountered cases wherein the attention window, after parsing as described in point 2, would end up focusing on some blank, stroke-less region, as if it had stopped looking at the sample. We hypothesize that the model prefers to utilize its recurrent transitions and not be disturbed by any input stimuli."}, {"section_index": "8", "section_name": "3.2 QUANTITATIVE ANALYSIS", "section_text": "We performed a simple yet very insightful ablation study to understand ARC's dynamics. ARC accumulates information about both the input images through a series of attentive observations. We trained 8 separate binary classifiers to classify image pairs as being similar or not, based on the hidden state of the LSTM controller at each even time step. The performance of these binary classifiers is correlated with the information contained in the hidden states. The performance of these classifiers is reported in Table 1. Since the ARC has an attention window of only 4x4 pixels, it can barely see anything in the first time step, where its attention is spread throughout the whole image. With more glimpses, finer observations bring in more precise information into the ARC, and the recurrent transitions make use of this knowledge, leading to higher accuracies.
We also used the 8 binary classifiers to study how the model's confidence grows with more glimpses; one good example is provided in Figure 3.

Table 1: Glimpses per image vs classification accuracy of ARC.

GLIMPSES | ACCURACY
1 | 58.2%
2 | 65.0%
4 | 80.8%
6 | 89.25%
8 | 92.08%

(a) ARC is very unsure of similarity at the beginning. But at the 5th glimpse (4th column), the attention goes over the region where there are strokes in the first image and no strokes in the second one, resulting in the score dropping.

(b) Initially ARC is unsure, or thinks that the characters are similar. But towards the end, at the 6th glimpse (5th column), the model focusses on the region where the connecting strokes are different. The similarity score drops, and with more 'ponder' it falls significantly.

Figure 3: Attention windows over time and instantaneous predictions from independent binary classifiers. The first glimpse is omitted as it covers the whole image. In the graph: x-axis, glimpse number; y-axis, similarity score. The red line is the decision threshold, above which the images are considered to be similar. Both of the cases above are examples of a dissimilar pair."}, {"section_index": "9", "section_name": "4 SIMILARITY LEARNING", "section_text": "Verification is a generic and common task in Machine Learning. The verification task essentially requires models that can predict whether two inputs are the same or different, for some notion of 'same' (such as unique facial identity, objects of the same class, etc.). Specifically, here we restrict ourselves to the task of estimating the similarity of a given pair of images. When given two images, the models are required to output a single logistic value, which is expected to be 1 for very similar inputs and 0 for very dissimilar inputs. We compare our ARC model with several baselines and report performance on two challenging datasets."}, {"section_index": "10", "section_name": "4.1.1 OMNIGLOT", "section_text": "The dataset is thoroughly detailed in the next section, which is on one shot classification on this dataset; this task acts as a precursor to the more sophisticated next task. We use 32x32 images, and similar/dissimilar pairs of character drawings are randomly chosen only within an alphabet, to make the task more challenging. Out of the 50 alphabets provided in the dataset, 30 were used for training, 10 for validation and the last 10 for testing."}, {"section_index": "11", "section_name": "4.3 RESULTS", "section_text": "The results for Omniglot are in Table 2. Our simple ARC model, without using any convolutional layers, obtains a performance that matches an AlexNet-style 6-layer Deep ConvNet with millions of parameters. Using convolutional feature extractors, ARCs outperform the Wide ResNet based Siamese ConvNet baselines, even the ones containing an order of magnitude more parameters.
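The ablation of Sec. 3.2 above can be sketched as a set of independent probes on the controller state, one per even time step. Here `hidden_states` and `labels` are assumed to come from a trained ARC, and a plain logistic-regression probe trained by gradient descent stands in for whatever classifier the authors used:

```python
import numpy as np

def probe_accuracy(hidden_states, labels, steps=500, lr=0.1):
    """Accuracy of a logistic probe: similar/dissimilar from one hidden state.

    hidden_states -- (num_pairs, H) controller states at a fixed glimpse count
    labels        -- (num_pairs,) 1 for same character, 0 for different
    """
    X, y = np.asarray(hidden_states), np.asarray(labels)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):                       # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return np.mean((p > 0.5) == y)

# One probe per even time step, mirroring the 8 classifiers of Table 1 (dummy data).
rng = np.random.default_rng(0)
states = {t: rng.normal(size=(200, 64)) for t in range(2, 17, 2)}
y = rng.integers(0, 2, 200)
for t, h in states.items():
    print(f"glimpses per image = {t // 2}: accuracy = {probe_accuracy(h, y):.2f}")
```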
One shot learning requires Machine Learning models to be at the apotheosis of data efficiency. In the case of classification, only a single example of each individual class is given, and the model is expected to generalise to new samples. A classic example is that of a human kid learning about the animal giraffe (Vinyals et al. 2016). The kid does not need to see thousands of images of a giraffe to learn to detect it. Rather, just from a single example, the kid can not only recognize it at a future point, but, going further, she can also speculate on its other characteristics. While humans excel at this task, current Deep Learning systems are at the opposite end of the spectrum, where they are trained on millions of samples to achieve the kind of results that they are well known for. With ARCs we have developed a generic method for comparing objects, and we have also shown that our model generalizes extremely well. So we decided to test ARC on the challenging Omniglot dataset.

We consider strong convolutional baselines, which have been shown time and again to excel at such visual tasks. We particularly use Wide ResNets (WRNs) (Zagoruyko & Komodakis 2016), which are the current state of the art models in image classification. Independent nets were tuned for each dataset. Hyper-parameters were set to reasonable values for all our ARC models, and no hyper-parameter tuning of any kind was employed. For the Omniglot dataset, we also include the result from Koch et al. We used moderate data augmentation consisting of translation, flipping, rotation and shearing, which we found to be critical for training ARC models.

Table 2: Performance of ARC vs conventional methods on the verification task. All values are accuracies on the test set. For Wide ResNets, suffixes specify the depth and width; for example, (d=60, w=4) means a ResNet that is 60 layers deep, with each residual block having a width multiplier of 4.

Omniglot is a dataset by Lake et al. (2015) that is specially designed to compare and contrast the learning abilities of humans and machines. The dataset contains handwritten characters of 50 of the world's languages/alphabets. Though there are 1623 characters, there are only 20 samples for each, each drawn by a different individual. So this is diagonally opposite to MNIST or ImageNet. One Shot Classification on this dataset is very challenging, as most Deep Learning systems do not work well in such extreme conditions. Lake et al. (2015) developed a dedicated system for such rapid knowledge acquisition, called Hierarchical Bayesian Program Learning, which surpasses human performance and is the current state of the art among all methods.

The dataset is divided into a background set and an evaluation set. The background set contains 30 alphabets (964 characters), and only this set should be used to perform all learning (e.g. hyper-parameter inference or feature learning). The remaining 20 alphabets are for pure evaluation purposes only. Each character is a 105x105 image.

A one shot classification task episode is as follows: from a randomly chosen alphabet, 20 characters are chosen, which become the support set. One character among these 20 becomes the test character. Two drawers are chosen, one each for the support set and the test character. The task is to match the test drawing to the correct character's drawing in the support set. Assigning an image to one of the 20 characters given results in a 20-way, 1-shot classification task."}, {"section_index": "12", "section_name": "5.2.1 NAIVE ARC MODEL", "section_text": "This is a trivial extension of the ARC used for verification to this task. A test image from the first set is chosen and compared against all 20 images from the second set; it is matched to the character with maximum similarity. This is done 20 times, once for each character in the first set.
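A sketch of the Naive ARC decision rule described above: run the trained verification ARC against each of the 20 support drawings and pick the most similar. `arc_similarity` is a hypothetical stand-in for the trained model; a pixel-correlation toy is used here so the snippet runs:

```python
import numpy as np

def one_shot_predict(test_image, support_set, arc_similarity):
    """Match the test drawing to the most similar of the 20 support drawings.

    support_set    -- list of (image, character_id) pairs, one per class
    arc_similarity -- callable (image, image) -> similarity in [0, 1],
                      i.e. the trained verification ARC
    """
    scores = [arc_similarity(test_image, img) for img, _ in support_set]
    return support_set[int(np.argmax(scores))][1]

# Toy usage with a pixel-correlation stand-in for the trained ARC.
rng = np.random.default_rng(0)
support = [(rng.random((32, 32)), c) for c in range(20)]
test = support[7][0] + 0.05 * rng.random((32, 32))      # noisy copy of class 7
sim = lambda a, b: float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
print(one_shot_predict(test, support, sim))             # 7
```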
"}, {"section_index": "13", "section_name": "5.2.2 FULL CONTEXT ARC", "section_text": "Our whole hypothesis in this work has been about the value of providing the full context to the model, and we have shown that models which are aware of the context of operation are better than the ones that aren't. While the Naive ARC model is simple and efficient, it does not incorporate the whole context in which our model is expected to make the decision of similarity. When the character is being compared to the 20 other characters from the support set, the comparisons are all done independently. That is, the model is not aware of the available options for matching, so it assigns the similarity score to each pair independently.

It is highly desirable to have a 20-way ARC, where each observation is conditioned on all the images. Unfortunately, such a model is not practical: the recurrent controller has memory limitations in its state, and scaling up the memory incurs a huge parameter burden. So instead, we use a hierarchical setup, which decomposes the comparisons into two levels: first a local pairwise comparison, and second a global comparison. We found that this model reduces the information that has to be crammed into the controller state, while still providing sufficient context.

As with the Naive method, we compare one image from set A with one from set B in pairs. But instead of emitting a similarity score immediately, we collect the comparison embeddings of each comparison. The comparison embedding e_j is the final hidden state of the controller when the test image is compared with the jth support image. These embeddings are then processed by a Bi-Directional LSTM layer:

$$c_j = [\overrightarrow{\mathrm{LSTM}}(e_j);\; \overleftarrow{\mathrm{LSTM}}(e_j)] \qquad \forall j \in [1, 20].$$

This merges the information from all comparisons, thus providing the necessary context before score emission. This is also the method used in Matching Networks (Vinyals et al. 2016)."}]
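A sketch of the Full Context scoring: comparison embeddings are merged by a bidirectional recurrent layer to form c_j, mapped to scores s_j = f(c_j), and normalized with a softmax. For brevity a simple bidirectional tanh RNN stands in for the Bi-Directional LSTM, and all sizes and initializations are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
E, H = 64, 32                          # embedding and BiRNN state sizes (toy values)
Wf = rng.normal(0, 0.1, (H, H + E))    # forward recurrence
Wb = rng.normal(0, 0.1, (H, H + E))    # backward recurrence
w_s = rng.normal(0, 0.1, 2 * H)        # score map f(.), here a linear readout

def full_context_scores(embeddings):
    """c_j = [fwd; bwd] states over the 20 comparison embeddings, then
    s_j = f(c_j) and p = softmax(s): relative similarity over the support set."""
    e = np.asarray(embeddings)                            # (20, E)
    fwd, bwd = np.zeros(H), np.zeros(H)
    f_states, b_states = [], []
    for j in range(20):                                   # simple bidirectional RNN
        fwd = np.tanh(Wf @ np.concatenate([fwd, e[j]]))
        bwd = np.tanh(Wb @ np.concatenate([bwd, e[19 - j]]))
        f_states.append(fwd)
        b_states.append(bwd)
    c = [np.concatenate([f_states[j], b_states[19 - j]]) for j in range(20)]
    s = np.array([w_s @ c_j for c_j in c])                # s_j = f(c_j)
    p = np.exp(s - s.max())
    return p / p.sum()                                    # p_j = softmax(s_j)

print(full_context_scores(rng.normal(size=(20, E))).argmax())
```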
ryAe2WBee
[{"section_index": "0", "section_name": "MULTI-LABEL LEARNING WITH SEMANTIC EMBEDDINGS", "section_text": "Liping Jing, MiaoMiao Cheng & Liu Yang
Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The multi-label learning problem is to learn to predict potentially multiple relevant labels given an instance. Instances that have multiple labels naturally occur in many application domains, including multimedia information retrieval, tag recommendation, semantic scene classification, query categorization, gene function prediction, medical diagnosis, drug discovery, and marketing.

W. Bi and J. Kwok. Efficient multi-label classification with many labels. In Proc. of ICML, 2013.
C. Boutsidis, M. Mahoney, and P. Drineas. An improved approximation algorithm for the column subset selection problem. In Proc. of ACM SODA, 2009.
Y. Chen and H. Lin. Feature-aware label space dimension reduction for multi-label classification. In Proc. of NIPS, 2012.
M. Cisse, M. Al-Shedivat, and S. Bengio. ADIOS: Architectures Deep in Output Space. In Proc. of ICML, 2016.
J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.
D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In Proc. of NIPS, 2009.
L. Jing, L. Yang, J. Yu, and M. Ng. Semi-supervised low-rank mapping learning for multi-label classification. In Proc. of CVPR, 2015.
Z. Lin, G. Ding, M. Hu, and J. Wang. Multi-label classification via feature-aware implicit label space encoding. In Proc. of ICML, 2014.
P. Mineiro and N. Karampatziakis. Fast label embeddings via randomized linear algebra. In Proc. of ECML, 2015.
S. Mohamed, Z. Ghahramani, and K. A. Heller. Bayesian Exponential Family PCA. In Proc. of NIPS, 2009.
J. Nam, J. Kim, E. Mencia, I. Gurevich, and J. Furnkranz. Large-scale multi-label text classification - revisiting neural networks. In Proc. of ECML, 2014.
Y. Pawitan. In All Likelihood: Statistical Modeling and Inference Using Likelihood. Oxford University Press, 2001.
Y. Prabhu and M. Varma. FastXML: a fast, accurate and stable tree-classifier for extreme multi-label learning. In Proc. of ACM SIGKDD, 2014.
A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Proc. of NIPS, 2007."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Multi-label learning aims to automatically assign to an instance (e.g., an image or a document) the most relevant subset of labels from a large set of possible labels. The main challenge is to maintain accurate predictions while scaling efficiently on data sets with extremely large label sets and many training data points. We propose a simple but effective neural net approach, the Semantic Embedding Model (SEM), that models the labels for an instance as draws from a multinomial distribution parametrized by nonlinear functions of the instance features. A Gauss-Seidel mini-batch adaptive gradient descent algorithm is used to fit the model. To handle extremely large label sets, we propose and experimentally validate the efficacy of fitting randomly chosen marginal label distributions. Experimental results on eight real-world data sets show that SEM garners significant performance gains over existing methods.
In particular, we compare SEM to four recent state-of-the-art algorithms (NNML, BMLPL, REmbed, and SLEEC) and find that SEM uniformly outperforms these algorithms in several widely used evaluation metrics, while requiring significantly less training time.

A popular approach to the multi-label learning problem is to embed the labels in a low-dimensional latent space via linear or local non-linear embeddings. The approach of Hsu et al. (2009) projects the label vectors to a random low-dimensional space, fits a regression model in this space, then projects these predictions back to the original label space. Balasubramanian & Lebanon (2012) use a sparsity-regularized least squares reconstruction objective to select a small set of landmark labels that are used to predict the remaining labels. Bi & Kwok (2013) take a similar approach, with a greatly decreased computation cost, by posing the problem of selecting the landmark labels as one of column subset selection and adopting the leverage score sampling approach (Boutsidis et al. 2009). Recently, Yu et al. (2014) and Jing et al. (2015) proposed using trace norm regularization to identify a low-dimensional representation of the original large label space. Mineiro & Karampatziakis (2015) use randomized dimensionality reduction to learn a low-dimensional embedding that explicitly captures correlations between the instance features and their labels. These approaches, like other linear embedding methods, assume that the label matrix is low-rank. However, the label matrix in most applications of multi-label learning is a sparse binary matrix, and thus is extremely likely to violate this low-rank assumption (Bhatia et al. 2015).

Rather than working with the original label and feature matrices, some methods work instead with label or feature similarity matrices, and seek to preserve the local structure of the data in the learned low-dimensional latent space. Tai & Lin (2010) use PCA on the label covariance matrix to extract a low-dimensional latent space for labels, and Chen & Lin (2012) extend this method to integrate feature information. Lin et al. (2014) apply PCA to a similarity matrix constructed using both label and feature information; this approach is time-consuming, as it requires computing a large similarity matrix. Nam et al. (2014) introduce a neural network model to capture non-linear relationships between the input features and the labels. However, this approach is computationally infeasible when the number of possible labels is large. Similarly, Cisse et al. (2016) show that using a deep learning approach built on top of an informative partitioning of the label space gives good performance; the scalability of this method was not characterized. Prabhu & Varma (2014) propose a method to efficiently train a classification tree by minimizing the Normalized Discounted Cumulative Gain. Rai et al. (2015) assume that the label vectors are generated by sampling from a weighted combination of label topics, where the mixture coefficients are determined by the instance features."}, {"section_index": "3", "section_name": "A. Effect of Latent Space Dimensionality", "section_text": "It can be seen that the latent space dimensionality r plays an important role in learning the latent factors V and the feature mapping matrix W in our proposed methods, as it does in the three baselines BMLPL, REmbed and SLEEC.
In order to investigate this dependence, we conducted a series of experiment on the training data sets using 5-fold cross-validation, comparing BMLPL, REmbed, SLEEC and ou proposed SEM and SEM-K.\n0.4 0.65 0.38 0 d8 0.36 0.6 Miir 0.34 O-BMLPL O-BMLPL 0.55 -REmbed -REmbed 0.32 SLEEC -SLEEC SEM SEM O-SEM-K 0.3 O-SEM-K 0.5 5 100 200 300 400 450 5 100 200 300 400 450 Latent space dimensionality (r) Latent space dimensionality (r) (a) Delicious-P@1 (b) Delicious-MiF1\n0.4 0.65 0.38 8 0.36 0.6 Mir P 0.34 O-BMLPL O-BMLPL 0.55 -REmbed -REmbed x-SLEEC 0.32 -SLEEC SEM SEM OSEM-K 0.3 O-SEM-K 0.5 1 5 100 200 300 400 450 5 100 200 300 400 450 Latent space dimensionality (r Latent space dimensionality (r\nBhatia et al. (2015) proposes a multi-phase algorithm (SLEEC) that first clusters the instances intc a number of relatively small groups, learns label embeddings for each group via an SVD, and ther trains linear regressors from the input features to the latent label factors for each group. SLEEC empirically outperforms previous state-of-the-art multi-label classifiers, but the label embedding in each group is learned from a nearest neighbor graph that is constructed solely from labelling information, ignoring the available feature matrix; the feature matrix has been shown repeatedly to be a source of useful information for label embedding (Chen & Lin]2012] Lin et al.]2014} [Yu et al. 2014; Jing et al.2015).\nFigure 3: The effect of the latent space dimensionality r on BMLPL, REmbed, SLEEC, SEM anc SEM-K in terms of MiF1 and P@ 1 on the Delicious dataset.\nIn this experiment, we take Delicious dataset as an example. The training data is separated into five. folds where four folds are used as training and one fold as validating, and the averaged results ir. terms of P@1 and MiF1 are given by Figure[3] It can be seen that their performances usually improve with increasing r until they reach an optimum value. However, once r becomes too large, thei. performances degrade. This is reasonable: when r is too small, the learned parameters cannot full. characterize the hidden semantic structure in the classification problem, while when r is too large, the. benefits of dimensionality reduction are lost, as the model begins to over-fit to the idiosyncrasies of the training data rather than capturing the semantic structure common to both the training and validatior data. Usually, these methods could obtain good performance at small r, say 45 for Delicious datasel\nNotation: In the sequel, n is the number of training instances, c is the cardinality of the set of possible. labels, d is the dimensionality of the feature vectors, and r is the dimension of the learned latent space The matrix X E IRn d contains the instance features, and Y E 0, 1nc indicates the labels assigned. to each instance. We denote the number of observed labels for instance i with l, = k=1 Yik. The notations Ay. and A.; respectively refer to the ith row and jth column of the matrix A. 
Unless otherwise specified, the notation f(A) denotes the elementwise application of an arbitrary function f to the A, so for example exp(A)i; = exp(aij).\nOur Semantic Embedding Model (SEM) assumes that the underlying parameters determining th observed labels are low-rank rather than that the observed label matrix is itself low-rank, and it use a nonlinear model to fit the probability distributions over the labels, conditioned on the instanc features.\n0.9 0.7 0.88 0.65 0.86 0.84 MMir 0.6 P *-NNML *-NNML 0.82 O-BMLPL O-BMLPL 0.8 -REmbed -REmbed 0.55 xSLEEC SLEEC 0.78 +SEM +-SEM O-SEM-K O-SEM-K 0.76 0.5 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0 0.1 0.20.3 0.4 0.5 0.6 0.7 Training data size (ratio over whole data) Training data size (ratio over whole data)\n0.9 0.7 0.88 0.65 0.86 0.84 ? 0.6 0.82 - NNML -NNML f BMLPL O-BMLPL 0.8 -REmbed -REmbed 0.55 SLEEC SLEEC 0.78 -SEM -SEM O-SEM-K O-SEM-K 0.76 0.5 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Training data size (ratio over whole data) Training data size (ratio over whole data) (a) P@1 (b) MiF1\nSEM models the i-th row of Y as the result of l, draws from a multinomial distribution:\nYg. ~ Multinomial(l; P). where P = Nik i=1,...,n =1,...,C\nFigure 4: Effect of varying the training data size, as a fraction of the combined test and training data on five multi-label learning methods in terms of P@1 and MiF1 on the Mediamill dataset.\nMeanwhile, we studied the label prediction performance as a function of the amount of labeled. training data. In this experiment, we fixed the testing data size, and randomly selected training data from the training set so that the training data size varies from 1% to 70% of the combined training. and testing data. In order to avoid the presence of empty categories and instances with no labels, at. least one instance is kept for each label and at least one label is kept for each instance during this sampling process. For each fixed size of the training set, the desired amount of data is randomly\nThe contribution of this paper is a scalable, accurate, and simple neural network approach to multi label learning. Experiments establish that our method is faster and more accurate than SLEEC, the current state-of-the-art scalable algorithm\nThe parameter matrix H = UvT + 1nbT is the sum of label priors b E Rc and the product of explanatory latent factors associated with the instances (U E Rnxr) and the labels (V E Rcxr) Further, we allow the latent factors associated with each instance to be a nonlinear function of the features associated with that instance, U = f(X, W) for some W to be learned. We note that if f(X, W) = XW, SEM could be viewed as fitting a Bayesian Exponential Family PCA (Mohamed et al.]2009). However, throughout this paper we take f(Xw) = o(Xw), where o(X) = (1 + exp(--X))-1 denotes the elementwise application of the sigmoid function, as we find this gives good results; with this choice, SEM is more naturally viewed as a neural network model.\nWe fit the SEM parameters by maximizing the likelihood of the observed labels. This is equivalent. to minimizing the sum of the KL divergences between the empirical label distributions for each instance and the label distributions predicted by the model (Pawitan!2001). Accordingly, we define. the empirical label distribution matrix G, whose ith row satisfies G; = Y/li, then minimize the.\nsampled ten times, and the resulting average P@1 and MiF1 on the testing data are recorded. Durin. 
training, the latent dimensionality parameter r is selected via 5-fold cross-validation.\nn * Gij. n Jg|p= - Gij log Gij log Pij. Pij i=1 j=1 i=1 j=1\nexp(hii exp((o(XW)VT)ij+ bj Lc-1 exp((o(XW)VT)ik + bk)"}, {"section_index": "4", "section_name": "C. Convergence", "section_text": "exp(o(XW)i.(VT)j+bj) J(W, V,b) = JG|p = - Gij log k=1 exp(0(XW)i.(VT).k + bk) i=1 j=1 n C =-Gi(0(XW)i.( exp(0(XW)i.(V).k+bk i=1 j=1 = -Tr(G(o(XW)vT +1nbT)T) +1 log(exp(o(Xw)vT +1nbT)1c)\nIn order to demonstrated the convergence of the proposed method, we show the value of objective function (4) (at r = 45) via Figure 5(a) and the prediction result (P@1) via Figure 5(b) along with the number of passes to the dataset (i.e., t in Algorithm[1). It can be seen that SEM could be convergent and the prediction performance becomes stable in less than 50 epochs, which will leverage SEM dealing with large-scale data.\n9.510 0.7 9 0.65 8.5 ld 0.6 7.5 0.55 6.5 0.5+ 60 50 100 150 200 250 300 0 50 100 150 200 250 300 Iteration Iteration (a) Convergence curve (b) P@1 curve\nHere V E Rcr are the representations of the labels in a latent semantic space, W E Rdr controls the nonlinear mapping from the instance features to the same semantic space, and the offsets b E R allow for label-specific offsets in the mapping from the semantic space to the log probabilities."}, {"section_index": "5", "section_name": "3 MODEL FITTING", "section_text": "Figure 5: Performance of the proposed SEM method (with r = 45, p = 0.1) on the Delicious dataset a) objective function value to minimum and b) prediction result in terms of P@1, where x-axis represents the number of passes to the dataset..\nThe optimization problem (4) is non-convex. To solve it efficiently, we use a Gauss-Siedel approach combined with mini-batching.\nNamely, we cyclically update each of W, V, and b using AdaGrad (Duchi et al.. 2011) while keeping the other two variable fixed. We compute the gradients using mini-batches. To state the expressions for the gradients with respect to the model parameters, we introduce some helpful notation: A o B denotes the entry-wise product of two matrices, M = (Xw) o (1 (Xw))\n(w) = xT (M c (XW)yT o(XW)yT +1x\nw(r) = W(r-1) _ T Qw O g(W(t-1)\np (9(W(m)) O G(W(m) YY-\nv(r) and b(r) are computed according to similar updating rules obtained from (8) and (9) by substituting 9(W) with G(V) (or G(b)), W with V (or b), and aw with ay (or ab).\nA listing of the proposed algorithm is given in Algorithm 1 Its computational complexity is O(Tnr(d + c)), where T is the number of epochs. We note that the gradient calculations in lines 7-9 of Algorithm[1are amenable to parallelization.\nrow-wise Kullback-Leibler distance (Yang et al.]2011) between G and P:. Jc|P =Gy log Pj Gu -G logPij. (2) i=1 j=1 i=1 j=1 Recalling that exp((o(XW)VT)ij+bj Pij - k=1 exp(hik) k=1 exp((o(XW)VT)ik+bk) some algebraic manipulations give the final objective. exp(o(XW)i(VT)j + bj) i=1 j=1 -Gi((XW);.(V] exp(0(XW);.(V).k + bk -n 108 i=1 j=1 = -Tr(G(o(XW)vT +1nbT)T) +1 log(exp(o(XW)vT +1nbT)1c) (3) Thus the SEM parameters are learned by solving the optimization problem\nFigure4|shows these results for the Mediamill dataset which contains the largest number of instances As expected, the performance of all the methods is positive correlated with the size of the training data set, and we also see that the proposed SEM-K uniformly outperforms the other methods regardless of the training data size. 
As it is often expensive to obtain large labeled data sets in real applications this observation suggests that SEM-K is a better choice for these situations.\n9.5 X10 0.7 9 ***** +*+*++++***+*+ 0.65 8.5 8 ld 0.6 oobeeeree 7.5 0.55 6.5 6 50 100 150 200 250 300 50 100 150 200 250 300 Iteration Iteration\nmin J(W,V,b) W,V,b\nwhere e and the learning rate p determine how much an entry W ; is updated during the first timestep\nAlgorithm 1 Mini-Batched Gauss-Siedel Adaptive Gradient Descent for learning SEM parameters\nAlthough Algorithm[1runs in time linear in the dimensions of the model parameters and the inpu datasets, it can be computationally expensive when there are more than a few thousand labels. Tc further reduce the running time of our algorithm, we note that in practice, each instance is ofter associated with l; < c labels.\nn Maginal(W,V,b)(r) =- G;j log exp(o(XW);.(VT) L(r) exp(o(XW)i.(VT).k + bk) i=1 jEL(r)\nNote that 7() is a random function that changes at each timestep. Minimizing this stochasti objective effectively seeks SEM parameters which fit all the randomly sampled marginals encountere during training. Thus it is important to sample the sets NL; so that the selected marginals captur non-trivial information about the label distributions. One can imagine that uniformly sampling from AL, will not provide very informative marginals. As an improvement on this naive scheme we sample labels from AL; with probability proportional to their frequency of occurrence in th training data set. The number of negative labels is set to be times the number of positive label i.e., NL;= |PL,| = l;. Further, when m > 1, to faciliate efficient BLAS operations whil mini-batching, we use the same marginals for each instance in the same minibatch, i.e., we fi 1. L(t), where I, denotes the set of instances in the current minibatch. marginals over L(t) := ( J.c 1.\nIn the experiments presented in Section4] we found that around 10 suffices when c is relativel small, and around 100 suffices when c is on the order of tens of thousands.\nWe present two methods for predicting the labels for a new instance x E Rd given the fitted SEM parameters.\nThe first uses the generative model behind SEM: form h = (xT'W)VT +bT and note the probabilit that the jth label is assigned to that instance is given by\nP(yj =1) = exp(hj)/> exp( k=1\nnarginals over where I, denotes the set of instances in the current minibatch\nAccordingly we assign the most probable labels to x. We call this prediction scheme the direct SEA method; it simply requires choosing the labels corresponding to the largest entries of h.\nn l(Yi,Zy(x;W)) +A|Z|F min ZERcXs i=1\nwhere E Rsxr is a matrix of i.i.d. standard Gaussians and 0 e [0, 2)s is a vector of i.i.d uniform samples from0, 2)."}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "In the sequel we refer to the direct SEM scheme as simply SEM, and the kernelized SEM scheme as SEM-K. We compare SEM and SEM-K with several alternative multi-label learning algorithms NNML (Nam et al.]2014), REmbed (Mineiro & Karampatziakis]2015), SLEEC (Bhatia et al.]2015) and BMLPL (Rai et al.]2015). 
We do not compare to the models proposed in (Tai & Lin]2010f|Chen) & Lin][2012]|Bi & Kwok[2013fYu et al.[|2014] Prabhu & Varma]2014) because earlier works (Yu et al.|1 2014] Bhatia et al.]2015] have shown that they are inferior to SLEEC.\nTable 1: Multi-label dataset summary"}, {"section_index": "7", "section_name": "4.2 METHODOLOGY", "section_text": "The codes of the methods we compare to are provided by the authors, in particular, we note tha. the computationally intensive portions of REmbed, SLEEC and NNML are implemented in C; by. way of comparison, our algorithms are entirely implemented in Matlab. Due to there being severa parameters for each method, we hand-tuned the parameters for each dataset as suggested by the. authors. All methods were run in MATLAB on a Windows server with 4GB memory and four 2.3GHz. CPUs with eight cores.\nThe second method builds a kernel classifier in the semantic space obtained from the SEM fac. torization. FollowingMineiro & Karampatziakis (2015), a classifier is trained on these semantic representations by solving the optimization problem.\n(x) = cos(x+ 0)\nAt test time, the predicted label probabilities for an instance x are given by Zy(xW), so we assign. the most probable labels according to this model. We refer to this scheme as the kernelized SEM method.\nTable[1summarizes the eight datasets used in our experiments. Here ntrain and ntest are the numbers of training and testing instances, d is the number of features, c is the number of labels/classes, and the avg(l,) column reports the average number of labels per instance. In these datasets, the number of labels varies from 23 to 30938, the average label cardinality varies from 2.508 to 19.020, and the number of instances in different classes varies over a large range. Thus predicting the labels assignments correctly over this collection of datasets is a challenging task.\nDataset Domain Ntrain Ntest d c avg(li) MSRC image 296 295 512 23 2.508 Corel5K image 4500 500 499 374 3.522 SUN image 12906 1434 512 102 15.526 Delicious text 12920 3185 500 983 19.020 EurLex-sub text 17413 1935 5000 201 2.213 Mediamill video 30993 12914 210 101 4.736 Eurlex-des text 17413 1935 5000 3993 5.31 WikilOK text 14146 6616 101938 30938 18.64\nThe prediction performance for each algorithm is evaluated according to widely-used metrics in the field of multi-label classification, viz., label-based Macro-F1 (MaF1) and Micro-F1 (MiF1) and instance-based Precision-at-k (P@ k, esp. P@1 and P@3) (Zhang & Zhou2014). MaF1 and MiF1 require predefining a threshold to determine the number of labels to be assigned to the testing data In our experiments, the number of labels assigned to each testing instance was set according to its ground truth.\nTable 2: The classification performance of six multi-label classification algorithms (NNML, BMLPI REmbed, SLEEC and the proposed SEM and SEM-K). The best and second best results are respec tively bolded and underlined for each evaluation measure..\nTable 3: The running times, in seconds, of six multi-label classification algorithms (NNML, BMLPI REmbed, SLEEC and the proposed SEM and SEM-K) for differing training sizes on the Mediamil dataset.\nFirst we compare the performance on six multi-label learning problems with c < 1000. To fit botl. SEM models, we take the number of epochs be 30 and the mini-batch size be 200---i.e., T = 30 an m = 200 in Algorithm|1 -and because c is small, we fit the full label distributions. The classificatio. 
performances of our SEM algorithms and the baseline methods are shown in Table[2] SEM or SEM-I outperform the alternative algorithms in most cases.\nTable 3|compares the running times of the algorithms as the size of the dataset is increased, using. MediaMill. We see that SEM is the fastest model, followed by REMBED, then closely by SEM-K the remaining three models are significantly more costly. It is clear that NNML, the previous neural\nMaF1 MiF1 P@1 P@3 MaF1 MiF1 P@1 P@3 MSRC Corel5K NNML 0.4086 0.5944 0.7356 0.5073 0.0547 0.2967 0.4020 0.3047 BMLPL 0.4592 0.6199 0.7017 0.5288 0.0315 0.2779 0.3940 0.2820 REmbed 0.3537 0.5128 0.5322 0.4384 0.0450 0.2144 0.3060 0.2247 SLEEC 0.4973 0.6314 0.7353 0.5243 0.0534 0.3188 0.4360 0.3287 SEM 0.5064 0.6173 0.7220 0.5333 0.0623 0.3188 0.4320 0.3293 SEM-K 0.5770 0.6492 0.7458 0.5525 0.0589 0.2649 0.3600 0.2773 SUN Mediamill NNML 0.2807 0.5248 0.9421 0.8580 0.0819 0.5890 0.8260 0.6675 BMLPL 0.1897 0.4766 0.9024 0.8001 0.0855 0.6012 0.8478 0.6854 REmbed 0.3408 0.5125 0.9393 0.8591 0.2634 0.6371 0.8741 0.6988 SLEEC 0.2935 0.5256 0.9484 0.8656 0.2851 0.6546 0.8899 0.7158 SEM 0.3648 0.5486 0.9365 0.8642 0.1593 0.6296 0.8746 0.6996 SEM-K 0.3703 0.5466 0.9575 0.8787 0.2570 0.6717 0.8953 0.7278 Delicious Eurlex-sub NNML 0.1721 0.3963 0.6687 0.6169 0.5761 0.8487 0.9173 0.6267 BMLPL 0.1061 0.3739 0.6378 0.5772 0.1459 0.6011 0.6789 0.4697 REmbed 0.1549 0.3713 0.6353 0.572 0.5335 0.8031 0.8785 0.5977 SLEEC 0.1257 0.3859 0.6674 0.6112 0.5433 0.8461 0.9152 0.6191 SEM 0.1941 0.3980 0.6727 0.6162 0.5652 0.8339 0.8971 0.6188 SEM-K 0.1675 0.3886 0.6658 0.6112 0.5807 0.8494 0.9188 0.6269\nntrain NNML BMLPL REMBED SLEEC SEM SEM-K 439 327.57 10.29 2.07 16.11 0.60 1.50 1756 1333.91 20.35 3.02 57.16 2.41 4.29 3073 2363.02 48.2 4.14 145.88 4.36 6.99 4391 3264.79 41.72 5.45 227.76 6.65 10.10 8781 4428.09 84.09 10.83 815.66 12.29 21.73 13172 5170.00 119.09 17.04 1041.07 18.39 26.49 17563 5170.17 185.05 20.90 1692.7 24.22 42.21 21954 5297.75 225.96 44.20 1772.52 30.10 50.64 26344 5947.94 235.93 52.75 1985.82 35.95 59.42 30735 6604.93 275.06 58.74 2181.48 41.37 61.30\nnetwork approach to multi-label learning costs the most. In the other five algorithms, the latent spac dimensionality (r) is set to be 50. SLEEC is expensive because it constructs the nearest neighbc graph among training data and computes the top r eigenvectors of the corresponding similarity matri which costs O(n2r + d2r). REmbed is efficient because its main cost is to find the singular vector of a c (r + q) matrix (here c is the number of labels and q is a small integer), but its performanc is inferior to SEM-K. The BMLPL code provided by the author applies SVD to the training data t initialize than model parameters and then uses conjugate gradient to update the parameters, thus costs much more than REmbed and our proposed methods.\nWe proposed using SEM to fit marginals rather than the entire label distribution when c is large. for computational efficiency. To judge the effectiveness of this proposal, we compare the accuracy and running times of the SEM and SEM-K models with baselines on EurLex-des and Wiki10K, twc datasets with c > 1000. As baselines, we use REmbed and SLEEC in accordance with the above discussion which showed that these two methods are efficient and/or have good performance..\nThe hyperparameters in SLEEC were set according to the original authors' code: r for EurLex-des and Wiki10K is 100 and 75 respectively, and 3 clusters are used for Eurlex-des and 5 are used for Wiki10K. 
To fit the SEM models, we used the same value of r as SLEEC on these two datasets and used 10 training epochs. For REmbed, the latent space size r was tuned via cross-validation; r = 300. for Eurlex-des and r = 150 for Wiki10K. The number of Random Fourier Features is 2000 for both REmbed and SEM-K. The latent space size r in SEM is same with SLEEC. The mini-batch sizes and. number of epochs are set to be 200 and 10 respectively when fitting the SEM models. The number of threads is set to be 8 for all methods..\nTable 4: The classification performance of five methods (REmbed, SLEEC and the proposed SEM. and SEM-K with two values of 3) on the Eurlex-des and Wiki10K datasets. The best and second bes results are respectively bolded and underlined for each evaluation metric..\nREmbed SLEEC SEM SEM-K SEM-K ( = 500) ( = 10) ( = 60) P@1 0.7299 0.8017 0.7107 0.8024 0.8135 Eurlex-des P@3 0.6064 0.6539 0.5874 0.6621 0.6714 P@5 0.5060 0.5375 0.4916 0.5493 0.5563 REmbed SLEEC SEM SEM-K SEM-K ( = 1200) ( = 10) ( = 100) P@1 0.6963 0.8554 0.8517 0.8582 0.8671 Wiki10K P@3 0.5790 0.7359 0.7133 0.7278 0.7385 P@5 0.4929 0.6310 0.6171 0.6236 0.6353\nTable 5: The running times, in seconds, of five methods (REmbed, SLEEC and the proposed SEM and SEM-K for two values of 3) on the Eurlex-des and wiki10K datasets.\non lne Eurlex-aes and Wlkllo dalasels. REmbed SLEEC SEM SEM-K SEM-K ( = 500) ( = 10) ( = 60) Eurlex-des 358.63 1571.30 1210.30 167.10 250.77 Wiki10K 2858.96 2497.00 2003.43 646.48 769.18\nFigure|1|illustrates the impact of the choice of on the prediction performance (in terms of P@ 1 of SEM and SEM-K. The performances of SLEEC and REmbed are included for comparison. The hyperparameters of SLEEC, REmbed and SEM were set as in Section4.4\nTable4|compares the classification performances of the methods on these two datasets. It is clear that. SEM-K with a small set of negative labels obtains better performance than both REmbed and SLEEC Table|5[shows that, additionally, the SEM-K models are fit much faster than than the other models.\nIt is evident that the performance of SEM increases significantly in a monotonic fashion with . However, SEM-K is insensitive to once it passes a dataset-dependent threshold (e.g., = 60\n0.8 0.85 0.75 0.8 0.7 0.65 0.7 P 0.6 0.55 REmbed 0.6 -REmbed 0.5 SLEEC -SLEEC SEM *-SEM 0.45 I SEM-K +SEM-K 0.5 0 100 200 300 400 500 600 0 200 400 600 800 1000 1200 Negative sampling rate ( Negative sampling rate () (a) Eurlex-des (b) Wiki10K\nFigure 1: The P@1 performance of SEM and SEM-K as a function of , in comparison to the performances of SLEEC and REmbed on the (a) Eurlex-des and (b)Wiki10K datasets.\nfor Eurlex-des and = 100 for Wikil0K). Note that on Wikil0K, even the simpler direct SEN outperforms REmbed when there are sufficient negative labels.\nFigure|2|illustrates the effect of on the running times of SEM and SEM-K. 
Note that the additiona time to fit the classifier in the semantic space required by SEM-K is negligible compared to the time it takes to first fit the direct SEM model..\n1000 SEM SEM SEM-K 2000 SEM-K 800 1500 600 1000 400 200 500 10 20 40 6080100120200300400500600 10 20 406080100200400600800 1000 1200 Negative sampling rate ( Negative sampling rate () (a) Eurlex-des (b) Wiki10K\nFigure 2: Running time of SEM-K under varying"}, {"section_index": "8", "section_name": "5 CONCLUSION", "section_text": "There are other important ways in which the proposed SEM methods can be compared to the baseline multi-label learning methods, including their performance as a function of the latent space dimensionality and as a function of the amount of training. Due to space constraints, a discussion of these two concerns and the convergence behavior of Algorithm[1is provided in the Supplementary material.\nWe proposed a new semantic embedding model (SEM) for handling the multi-label learning task. A framework based on Gauss-Siedel mini-batched adaptive gradient descent was proposed for efficiently. solving the non-convex optimization problem required to learn the SEM parameters. For large label sets, we proposed fitting the SEM to marginal distributions rather than the full label distribution. A series of experiments on eight real-world datasets empirically demonstrated that the proposed method is superior to state-of-the-art methods in terms of prediction performance and running time.."}]
Skq89Scxx
[{"section_index": "0", "section_name": "SGDR: STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS", "section_text": "Ilya Loshchilov & Frank Hutter\nilya, fh}@cs.uni-freiburg.de"}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "Deep neural networks (DNNs) are currently the best-performing method for many classificatio problems, such as object recognition from images (Krizhevsky et al.| 2012aDonahue et al. 2014 or speech recognition from audio data (Deng et al. 2013). Their training on large datasets (wher DNNs perform particularly well) is the main computational bottleneck: it often requires severa days, even on high-performance GPUs, and any speedups would be of substantial value.\nFigure 4: (Top) Improvements obtained by the baseline learning rate schedule and SGDR w.r.t. the best known reference classification error on a dataset of electroencephalographic (EEG) recording. of brain activity for classification of actual right and left hand and foot movements of 14 subjects. with roughly 1o00 trials per subject. Both considered approaches were tested with the initial learn. ing rate lr = 0.025 (Top-Left) and lr = 0.05 (Top-Right). Note that the baseline approach i considered with different settings of the total number of epochs: 30, 60, . .., 480. (Bottom) SGDR with lr = 0.025 and lr = 0.05 without and with M model snapshots taken at the last M = nr/2 restarts, where nr is the total number of restarts..\nThe training of a DNN with n free parameters can be formulated as the problem of minimizing a function f : Rn -> IR. The commonly used procedure to optimize f is to iteratively adjust xt E IRn. (the parameter vector at time step t) using gradient information ft(xt) obtained on a relatively. small t-th batch of b datapoints. The Stochastic Gradient Descent (SGD) procedure then becomes an extension of the Gradient Descent (GD) to stochastic optimization of f as follows:\nXt+1=xt-ntVft(xt)\nWe benchmarked SGD with momentum with the default learning rate schedule, SGDR with To = 1, Tmult = 2 and SGDR with To = 10, Tmult = 2 on WRN-28-10, all trained with 4 settings of the initial learning rate nmax: 0.050, 0.025, 0.01 and 0.005. We used the same data augmentation procedure as for the CIFAR datasets. Similarly to the results on the CIFAR datasets, Figure|5|shows that SGDR demonstrates better anytime performance. SGDR with To = 10, Tmult = 2, nmax 0.01 achieves top-1 error of 39.24% and top-5 error of 17.17% matching the original results by AlexNets (40.7% and 18.2%, respectively) obtained on the original ImageNet with full-size images of ca. 50 times more pixels per image (Krizhevsky et al.|2012b). Interestingly, when the dataset is permuted only within 10 subgroups each formed from 100 classes, SGDR also demonstrates better results (see Figure|8lin the Supplementary Material). An interpretation of this might be that while the initial learning rate seems to be very important, SGDR reduces the problem of improper selectior of the latter by scanning / annealing from the initial learning rate to O.\nwhere nt is a learning rate. One would like to consider second-order information\nxt+1=xt-ntH--Vft(xt);\nbut this is often infeasible since the computation and storage of the inverse Hessian H- is in tractable for large n. 
The usual way to deal with this problem by using limited-memory quasi Newton methods such as L-BFGS (Liu & Nocedal|1989) is not currently in favor in deep learning, not the least due to (i) the stochasticity of ft(xt), (ii) ill-conditioning of f and (iii) the presence of saddle points as a result of the hierarchical geometric structure of the parameter space (Fukumizu & AmariJ 2ooo). Despite some recent progress in understanding and addressing the latter problems (Bordes et al.[2009[Dauphin et al.[2014] Choromanska et al.]2014}Dauphin et al.]2015), state-of- the-art optimization techniques attempt to approximate the inverse Hessian in a reduced way, e.g., by considering only its diagonal to achieve adaptive learning rates. AdaDelta (Zeiler2012) and Adam (Kingma & Bal2014) are notable examples of such methods.\nClearly, longer runs (more than 40 epochs considered in this preliminary experiment) and hyperpa rameter tuning of learning rates, regularization and other hyperparameters shall further improve the results."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Restart techniques are common in gradient-free optimization to deal with multi nodal functions. Partial warm restarts are also gaining popularity in gradient- oased optimization to improve the rate of convergence in accelerated gradieni schemes to deal with ill-conditioned functions. In this paper, we propose a sim- le warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its per- formance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate ts advantages on a dataset of EEG recordings and on a downsampled version of he ImageNet dataset. Our source code is available at\nLearning rate schedule 100 Default, Ir=0.1 - Default, Ir=0.05 To = 50, Tmult = 1 10 T = 100, Tmult A = 1 fane raannne To = 200, Tm mult =1 10 T=1, Tmul = 2 mult = 2 mult 10 10~4 20 40 60 80 100 120 140 160 180 200 Epochs\nWRN-28-10 on downsampled 32x32 ImageNet WRN-28-10 on downsampled 32x32 ImaqeNet 60 40 Default Default SGDR T = 1, Tmut = 2 mu 55 SGDR T. = 10, T. mult = 35 (%) eonee aee (%) donne aeeg g-do 50 30 -do] 45 25 40 20 35 5 10 15 20 25 30 35 5 10 15 20 25 30 35 Epochs Epochs\nFigure 5: Top-1 and Top-5 test errors obtained by SGD with momentum with the default learning rate schedule, SGDR with To = 1, Tmult = 2 and SGDR with To = 10, Tmult = 2 on WRN-28-1C trained on a version of ImageNet, with all images from all 1000 classes downsampled to 32 3 pixels. The same baseline data augmentation as for the CIFAR datasets is used. Four settings of th initial learning rate are considered: 0.050, 0.025, 0.01 and 0.005.\nFigure 1:Alternative schedule schemes of learning rate nt over batch index t: default schemes with no = 0.1 (blue line) and no = 0.05 (red line) as used byZagoruyko & Komodakis (2016) warm restarts simulated every To = 50 (green line), To = 100 (black line) and To = 200 (grey line epochs with nt decaying during i-th run from nmax = 0.05 to nmin = 0 according to eq. (5); warm restarts starting from epoch To = 1 (dark green line) and To = 10 (magenta line) with doubling (Tmult = 2) periods T, at every new warm restart."}, {"section_index": "3", "section_name": "5 DISCUSSION", "section_text": "Our results suggest that even without any restarts the proposed aggressive learning rate schedule given by eq. (5) is competitive w.r.t. 
the default schedule when training WRNs on the CIFAR- 10 (e.g., for To = 200, Tmult = 1) and CIFAR-100 datasets. In practice, the proposed schedule requires only two hyper-parameters to be defined: the initial learning rate and the total number of epochs.\nWe found that the anytime performance of SGDR remain similar when shorter epochs are considere. (see section|8.1|in the Supplemenary Material).\nOne should not suppose that the parameter values used in this study and many other works witl. (Residual) Neural Networks are selected to demonstrate the fastest decrease of the training erroi. Instead. the best validation or / and test errors are in focus. Notably. the validation error is rarel used when training Residual Neural Networks because the recommendation is defined by the fina. solution (in our approach, the final solution of each run). One could use the validation error t. determine the optimal initial learning rate and then run on the whole dataset; this could furthe. improve results.\nVt+1 = tVt-ntVft(xt) Xt+1=Xt+Vt+1;\nwhere v is a velocity vector initially set to 0, nt is a decreasing learning rate and t is a momentun rate which defines the trade-off between the current and past observations of Vft(x). The mair difficulty in training a DNN is then associated with the scheduling of the learning rate and the amount of L2 weight decay regularization employed. A common learning rate schedule is to use a constant learning rate and divide it by a fixed constant in (approximately) regular intervals. The blue line in Figure[1shows an example of such a schedule, as used byZagoruyko & Komodakis(2016 to obtain the state-of-the-art results on CIFAR-10, CIFAR-100 and SVHN datasets.\nThe main purpose of our proposed warm restart scheme for SGD is to improve its anytime perfor. mance. While we mentioned that restarts can be useful to deal with multi-modal functions, we do not claim that we observe any effect related to multi-modality.\nAs we noted earlier, one could decrease nmax and n? nmin at every new warm restart to control the amount of divergence. If new restarts are worse than the old ones w.r.t. validation error, then one might also consider going back to the last best solution and perform a new restart with adjusted hyperparameters.\nIn this paper, we propose to periodically simulate warm restarts of SGD, where in each restart the. learning rate is initialized to some value and is scheduled to decrease. Four different instantiations. of this new learning rate schedule are visualized in Figure[1 Our empirical results suggest that SGD with warm restarts requires 2 to 4 fewer epochs than the currently-used learning rate schedule schemes to achieve comparable or even better results. Furthermore, combining the networks ob tained right before restarts in an ensemble following the approach proposed by|Huang et al.[(2016a]. improves our results further to 3.14% for CIFAR-10 and 16.21% for CIFAR-100. We also demon strate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNe dataset.\nOur results reproduce the finding by Huang et al.(2016a) that intermediate models generated by SGDR can be used to build efficient ensembles at no cost. This finding makes SGDR especially attractive for scenarios when ensemble building is considered.\nIn this paper, we investigated a simple warm restart mechanism for SGD to accelerate the training of DNNs. 
Our SGDR simulates warm restarts by scheduling the learning rate to achieve competitive results on CIFAR-10 and CIFAR-100 roughly two to four times faster. We also achieved new state-. of-the-art results with SGDR, mainly by using even wider WRNs and ensembles of snapshots from\nIntriguingly enough, the current state-of-the-art results on CIFAR-10, CIFAR-100, SVHN, Ima. geNet, PASCAL VOC and MS COCO datasets were obtained by Residual Neural Networks. [He et al.[2015] Huang et al.2016c[ He et al.]2016f Zagoruyko & Komodakis]2016) trained with out the use of advanced methods such as AdaDelta and Adam. Instead, they simply use SGD with momentum\nSGDR's trajectory. Future empirical studies should also consider the SVHN, ImageNet and MS COCO datasets, for which Residual Neural Networks showed the best results so far. Our preliminary results on a dataset of EEG recordings suggest that SGDR delivers better and better results as we carry out more restarts and use more model snapshots. The results on our downsampled ImageNe dataset suggest that SGDR might also reduce the problem of learning rate selection because the annealing and restarts of SGDR scan / consider a range of learning rate values. Future work should consider warm restarts for other popular training algorithms such as AdaDelta (Zeiler2012) and Adam (Kingma & Ba2014).\nAlternative network structures should be also considered; e.g., soon after our initial arXiv report (Loshchilov & Hutter2016),Zhang et al.(2016);Huang et al.(2016b); Han et al.(2016) reported that WRNs models can be replaced by more memory-efficient models. Thus, it should be tested whether our results for individual models and ensembles can be further improved by using their networks instead of WRNs. Deep compression methods (Han et al.]2015) can be used to reduce the. time and memory costs of DNNs and their ensembles..\nThis work was supported by the German Research Foundation (DFG), under the BrainLinksBrain Tools Cluster of Excellence (grant number EXC 1086). We thank Gao Huang, Kilian Quirin Wein berger, Jost Tobias Springenberg, Mark Schmidt and three anonymous reviewers for their helpful comments and suggestions."}, {"section_index": "4", "section_name": "2.2 RESTARTS IN GRADIENT-BASED OPTIMIZATION", "section_text": "Gradient-based optimization algorithms such as BFGS can also perform restarts to deal with mul timodal functions (Ros! 20o9). In large-scale settings when the usual number of variables n is on the order of 103 _ 10, the availability of gradient information provides a speedup of a factor of n w.r.t. gradient-free approaches. Warm restarts are usually employed to improve the convergence rate rather than to deal with multimodality: often it is sufficient to approach any local optimum to a given precision and in many cases the problem at hand is unimodal.Fletcher & Reeves(1964 proposed to flesh the history of conjugate gradient method every n or (n + 1) iterations.Powell (1977) proposed to check whether enough orthogonality between f(xt-1) and V f(xt) has been lost to warrant another warm restart. Recently,O'Donoghue & Candes[(2012) noted that the iterates of accelerated gradient schemes proposed by Nesterov[(1983f 2013) exhibit a periodic behavior if momentum is overused. The period of the oscillations is proportional to the square root of the local condition number of the (smooth convex) objective function. 
The authors showed that fixed warn restarts of the algorithm with a period proportional to the conditional number achieves the optima linear convergence rate of the original accelerated gradient scheme. Since the condition number is an unknown parameter and its value may vary during the search, they proposed two adaptive warm restart techniques (O'Donoghue & Candes]2012):"}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Anna Choromanska. Mikael Henaff. Michael Mathieu. Gerard Ben Arous. and Yann LeCun. Th loss surface of multilayer networks. arXiv preprint arXiv:1412.0233, 2014..\nYann N Dauphin, Harm de Vries, Junyoung Chung, and Yoshua Bengio. Rmsprop and equilibratec. adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390. 2015.\nL. Deng, G. Hinton, and B. Kingsbury. New types of deep neural network learning for speech recognition and related applications: An overview. In Proc. of ICASsP'13, 2013.. J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In Proc. of ICML'14, 2014.\nReeves Fletcher and Colin M Reeves. Function minimization by conjugate gradients. The compute. journal, 7(2):149-154, 1964\nKenji Fukumizu and Shun-ichi Amari. Local minima and plateaus in hierarchical structures of multilayer perceptrons. Neural Networks, 13(3):317-327, 2000.\nSmith (2015] 2016) recently introduced cyclical learning rates for deep learning, his approach is closely-related to our approach in its spirit and formulation but does not focus on restarts.\nSong Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.\nYang & Lin (2015) showed that Stochastic subGradient Descent with restarts can achieve a linear convergence rate for a class of non-smooth and non-strongly convex optimization problems where the epigraph of the objective function is a polyhedron. In contrast to our work, they never increase the learning rate to perform restarts but decrease it geometrically at each epoch. To perform restarts, they periodically reset the current solution to the averaged solution from the previous epoch..\nNikolaus Hansen. Benchmarking a BI-population CMA-ES on the BBOB-2009 function testbed. In Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computa tion Conference: Late Breaking Papers, pp. 2389-2396. ACM, 2009.\nAntoine Bordes, Leon Bottou, and Patrick Gallinari. Sgd-qn: Careful quasi-newton stochastic gra dient descent. The Journal of Machine Learning Research, 10:1737-1754, 2009\nYann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex op timization. In Advances in Neural Information Processing Systems, pp. 2933-2941, 2014.\nThe function scheme restarts whenever the objective function increases.. The gradient scheme restarts whenever the angle between the momentum term and tl negative gradient is obtuse, i.e, when the momentum seems to be taking us in a bad dire tion, as measured by the negative gradient at that point. This scheme resembles the one Powell(1977) for the conjugate gradient method."}, {"section_index": "6", "section_name": "STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS (SGDR)", "section_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 
Deep residual learning for image recog nition. arXiv preprint arXiv:1512.03385, 2015.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.\nIn this work, we consider one of the simplest warm restart approaches. We simulate a new warm started run / restart of SGD once T; epochs are performed, where i is the index of the run. Impor tantly, the restarts are not performed from scratch but emulated by increasing the learning rate nt while the old value of xt is used as an initial solution. The amount of this increase controls to which extent the previously acquired information (e.g., momentum) is used.\nGao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E. Hopcroft, and Kilian Q. Weinberger Snapshot ensembles: Train 1, get m for free. ICLR 2017 submission, 2016a..\nWithin the i-th run, we decay the learning rate with a cosine annealing for each batch as follows\nnt = Tmin + cos mar Ti\nare ranges for the learning rate, and Tcur accounts for how many epochs. Imax have been performed since the last restart. Since Tcur is updated at each batch iteration t, it can take discredited values such as 0.1, 0.2, etc. Thus, nt = nmax when t = 0 and Tcur = 0. Once Tcur = T,, the cos function will output -1 and thus nt = nmin. The decrease of the learning rate is shown in Figure1 for fixed T, = 50, T; = 100 and T; = 200; note that the logarithmic axis obfuscates the typical shape of the cosine function.\nAlex Krizhevsky. Learning multiple layers of features from tiny images. 2009\nIn order to improve anytime performance, we suggest an option to start with an initially small T, and increase it by a factor of Tmult at every restart (see, e.g., Figure1for To = 1, Tmult = 2 and To = 10, Tmult = 2). It might be of great interest to decrease nmax and nmin at every new restart. However, for the sake of simplicity, here, we keep nmax and nmin the same for every i to reduce the. number of hyperparameters involved.\nDong C Liu and Jorge Nocedal. On the limited memory bfgs method for large scale optimization Mathematical programming, 45(1-3):503-528, 1989.\nSince our simulated warm restarts (the increase of the learning rate) often temporarily worsen per-. formance, we do not always use the last x, as our recommendation for the best solution (also called. the incumbent solution). While our recommendation during the first run (before the first restart) is indeed the last xt, our recommendation after this is a solution obtained at the end of the last per-. formed run at nt = min. We emphasize that with the help of this strategy, our method does not. require a separate validation data set to determine a recommendation..\nYurii Nesterov. A method of solving a convex programming problem with convergence rate o (1/k2) In Soviet Mathematics Doklady, volume 27, pp. 372-376, 1983."}, {"section_index": "7", "section_name": "4.1 EXPERIMENTAL SETTINGS", "section_text": "Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013.\nWe consider the problem of training Wide Residual Neural Networks (WRNs; seeZagoruyko &. Komodakis (2016) for details) on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky2009). 
We will use the abbreviation WRN-d-k to denote a WRN with depth d and width k.Zagoruyko & Komodakis (2016) obtained the best results with a WRN-28-10 architecture, i.e., a Residual Neural Network with d = 28 layers and k = 10 times more filters per layer than used in the original. Residual Neural Networks (He et al.]2015 2016).\nBrendan O'Donoghue and Emmanuel Candes. Adaptive restart for accelerated gradient schemes arXiv preprint arXiv:1204.3982. 2012\nHadi Pouransari and Saman Ghili. Tiny imagenet visual recognition challenge. CS231 course at STANFORD. 2015\nMike Preuss. Niching the CMA-ES via nearest-better clustering. In Proceedings of the 12th annua. conference companion on Genetic and evolutionary computation. pp. 1711-1718. ACM. 2010\nFor training,Zagoruyko & Komodakis(2016) used SGD with Nesterov's momentum with initial. learning rate set to no = 0.1, weight decay to 0.0005, dampening to 0, momentum to 0.9 and. minibatch size to 128. The learning rate is dropped by a factor of 0.2 at 60, 120 and 160 epochs. with a total budget of 200 epochs. We reproduce the results of Zagoruyko & Komodakis|(2016) with. the same settings except that i) we subtract per-pixel mean only and do not use ZCA whitening; ii). we use SGD with momentum as described by eq. (3|4) and not Nesterov's momentum.\nMike Preuss. Niching methods and multimodal optimization performance. In Multimodal Optimiza tion by Means of Evolutionary Algorithms, pp. 115-137. Springer, 2015..\nRaymond Ros. Benchmarking the bfgs algorithm on the bbob-2009 function testbed. In Proceed- ings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Con- ference: Late Breaking Papers, pp. 2409-2414. ACM. 2009\nThe existing restart techniques can also be used for stochastic gradient descent if the stochasticity is taken into account. Since gradients and loss values can vary widely from one batch of the data to another, one should denoise the incoming information: by considering averaged gradients and losses, e.g., once per epoch, the above-mentioned restart techniques can be used again.\nGao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks arXiv preprint arXiv:1608.06993, 2016b\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprin arXiv:1412.6980, 2014\nThe CIFAR-10 and CIFAR-100 datasets (Krizhevsky2009) consist of 3232 color images drawn from 10 and 100 classes, respectively, split into 50,000 train and 10,000 test images. For image preprocessingZagoruyko & Komodakis(2016) performed global contrast normalization and ZCA whitening. For data augmentation they performed horizontal flips and random crops from the image padded by 4 pixels on each side, filling missing pixels with reflections of the original image..\nWRN-28-10 on ClFAR-10 WRN-28-10 on CIFAR-100 25 50 Default, Ir=0.1 Default, Ir=0.05 T = 50, Tmult = 1 20 401 T = 100, Tmult 15 30 error = 10, T. error test eest 10 20 5 10 0 0 50 100 150 200 50 100 150 200 Epochs Epochs WRN-28-10 on ClFAR-10 WRN-28-10 on CIFAR-100 5 21 20.5 4.5 (%)errrrerrr 20 error 4 19.5 Test 19 3.5 18.5 3 18 50 100 150 200 50 100 150 200 Epochs Epochs WRN-28-20 on CIFAR-10 WRN-28-20 on ClFAR-100 5 21 20.5 4.5 (%) (%) 20 Tresreror error 19.5 estt 19 3.5 18.5 3 18 50 100 150 200 50 100 150 200 Epochs Epochs\nLeslie N Smith. No more pesky learning rate guessing games. arXiv preprint arXiv:1506.01186 2015.\nTianbao Yang and Qihang Lin. 
Stochastic subgradient methods with linear convergence for polyhe dral convex optimization. arXiv preprint arXiv:1510.01444, 2015\nFigure 2: Test errors on CIFAR-10 (left column) and CIFAR-100 (right column) datasets. Note that for SGDR we only plot the recommended solutions. The top and middle rows show the same results on WRN-28-10, with the middle row zooming into the good performance region of low test error. The bottom row shows performance with a wider network, WRN-28-20.. The results of the default learning rate schedules ofZagoruyko & Komodakis(2016) with no = 0.1 and no = 0.05 are depicted by the blue and red lines, respectively. The schedules of nt used in SGDR are shown with i) restarts every To = 50 epochs (green line); ii) restarts every To = 100 epochs (black line); iii) restarts every To = 200 epochs (gray line); iv) restarts with doubling (Tmult = 2) periods of restarts starting from the first epoch (To = 1, dark green line); and v) restarts with doubling (Tmult = 2) periods of restarts starting from the tenth epoch (To = 10, magenta line).\nThe schedule of nt used byZagoruyko & Komodakis|(2016) is depicted by the blue line in Figure[1 The same schedule but with no = 0.05 is depicted by the red line. The schedule of nt used in SGDR is also shown in Figure[1] with two initial learning rates To and two restart doubling periods\nCIFAR-10 30 Default SGDR 25 (%) errrr eror 20 15 10 5 0 20 40 60 80 100 Epochs\nFigure 6: The median results of 5 runs for the best learning rate settings considered for WRN-28-1"}, {"section_index": "8", "section_name": "8.1 50K VS 1OOK EXAMPLES PER EPOCH", "section_text": "Our data augmentation procedure code is inherited from the Lasagne Recipe code for ResNets where. flipped images are added to the training set. This doubles the number of training examples per epoch. and thus might impact the results because hyperparameter values defined as a function of epoch index have a different meaning. While our experimental results given in Table 1 reproduced the. results obtained byZagoruyko & Komodakis(2016), here we test whether SGDR still makes sense. for WRN-28-1 (i.e., ResNet with 28 layers) where one epoch corresponds to 50k training examples.. We investigate different learning rate values for the default learning rate schedule (4 values out of. [0.01, 0.025, 0.05, 0.1]) and SGDR (3 values out of [0.025, 0.05, 0.1]). In line with the results given. in the main paper, Figure|6|suggests that SGDR is competitive in terms of anytime performance..\nTable 1: Test errors of different methods on CIFAR-10 and CIFAR-100 with moderate data aug. mentation (flip/translation). In the second column k is a widening factor for WRNs. Note that the computational and memory resources used to train all WRN-28-10 are the same. In all other case. they are different, but WRNs are usually faster than original ResNets to achieve the same accuracy (e.g., up to a factor of 8 according toZagoruyko & Komodakis(2016)). Bold text is used only tc. highlight better results and is not based on statistical tests (too few runs).."}, {"section_index": "9", "section_name": "4.2 SINGLE-MODEL RESULTS", "section_text": "Table 1 shows that our experiments reproduce the results given byZagoruyko & Komodakis(2016 for WRN-28-10 both on CIFAR-10 and CIFAR-100. 
These \"default' experiments with no = 0.1 and no = 0.05 correspond to the blue and red lines in Figure2 The results for no = 0.05 show better performance, and therefore we use no = 0.05 in our later experiments.\nSGDR with To = 50, To = 100 and To = 200 for Tmult = 1 perform warm restarts every 50, 100 and 200 epochs, respectively. A single run of SGD with the schedule given by eq. (5) for To = 200 shows the best results suggesting that the original schedule of WRNs might be suboptimal w.r.t. the test error in these settings. However, the same setting with To = 200 leads to the worst anytime performance except for the very last epochs.\nSGDR with To = 1, Tmult = 2 and To = 10, Tmult = 2 performs its first restart after 1 and 10 epochs, respectively. Then, it doubles the maximum number of epochs for every new restart. The main purpose of this doubling is to reach good test error as soon as possible, i.e., achieve good anytime performance. Figure|2|shows that this is achieved and test errors around 4% on CIFAR-10 and around 20% on CIFAR-100 can be obtained about 2-4 times faster than with the default schedule used byZagoruyko & Komodakis(2016).\ndepth-k # params # runs CIFAR-10 CIFAR-100 original-ResNet (He et al.]2015] 110 1.7M mean of 5 6.43 25.16 1202 10.2M mean of 5 7.93 27.82 stoc-depth (Huang et al.). 2016c 110 1.7M 1 run 5.23 24.58 1202 10.2M 1 run 4.91 n/a pre-act-ResNet (He et al. 2016 110 1.7M med. of 5 6.37 n/a 164 1.7M med. of 5 5.46 24.33 1001 10.2M med. of 5 4.62 22.71 WRN (Zagoruyko & Komodakis. 2016 16-8 11.0M 1 run 4.81 22.07 28-10 36.5M 1 run 4.17 20.50 with dropout 28-10 36.5M 1 run n/a 20.04 WRN (ours) default with no = 0.1 28-10 36.5M med. of 5 4.24 20.33 default with no = 0.05 28-10 36.5M med. of 5 4.13 20.21 To = 50, Tmult = 1 28-10 36.5M med. of 5 4.17 19.99 To = 100, Tmult = 1 28-10 36.5M med. of 5 4.07 19.87 To = 200, Tmult = 1 28-10 36.5M med. of 5 3.86 19.98 To = 1, Tmult = 2 28-10 36.5M med. of 5 4.09 19.74 To = 10, Tmult = 2 28-10 36.5M med. of 5 4.03 19.58 default with no = 0.1 28-20 145.8M med. of 2 4.08 19.53 default with no = 0.05 28-20 145.8M med. of 2 3.96 19.67 To = 50, Tmult = 1 28-20 145.8M med. of 2 4.01 19.28 To = 100, Tmult = 1 28-20 145.8M med. of 2 3.77 19.24 To = 200, Tmult = 1 28-20 145.8M med. of 2 3.66 19.69 To = 1,Tmult = 2 28-20 145.8M med. of 2 3.91 18.90 To = 10,Tmult = 2 28-20 145.8M med. of 2 3.74 18.70\nWRN-28-10 on CIFAR-10 WRN-28-10 on ClFAR-100 Default, Ir=0.1 Default, Ir=0.05 0.8 0.8 = 200, T mult 0.6 1,T =2 mult 0.6 I + errrrenrrnr ^ T. =10,T mult 0.4 0.4 0.2 0.2 50 100 150 200 50 100 150 200 Epochs Epochs WRN-28-10 on ClFAR-10 WRN-28-10 on CIFAR-100 0.2 0.19 SsoL .1 Crrnnrerrnp y 0.18 0.17 0.9 0.16 Teese 0.8 0.15 0.7 50 100 150 200 50 100 150 200 Epochs Epochs WRN-28-10 on CIFAR-10 WRN-28-10 on CIFAR-100 5 21 20.5 4.5 (%)errrero (%) 20 error 4 19.5 eest 19 3.5 18.5 3 18 50 100 150 200 50 100 150 200 Epochs Epochs\nSince SGDR achieves good performance faster, it may allow us to train larger networks. We there fore investigated whether results on CIFAR-10 and CIFAR-100 can be further improved by makin WRNs two times wider, i.e., by training WRN-28-20 instead of WRN-28-10. Table 1 shows tha the results indeed improved, by about 0.25% on CIFAR-10 and by about 0.5-1.0% on CIFAR-100 While network architecture WRN-28-20 requires roughly three-four times more computation thai. WRN-28-10, the aggressive learning rate reduction of SGDR nevertheless allowed us to achieve . 
better error rate in the same time on WRN-28-20 as we spent on 200 epochs of training on WRN 28-10. Specifically, Figure[2|(right middle and right bottom) show that after only 50 epochs, SGDF (even without restarts, using To = 50, Tmult = 1) achieved an error rate below 19% (whereas none. of the other learning methods performed better than 19.5% on WRN-28-10). We therefore have hop that -- by enabling researchers to test new architectures faster - SGDR's good anytime performance may also lead to improvements of the state of the art..\nIn a final experiment for SGDR by itself, Figure7|in the appendix compares SGDR and the de- fault schedule with respect to training and test performance. As the figure shows, SGDR optimizes training loss faster than the standard default schedule until about epoch 120. After this, the default schedule overfits, as can be seen by an increase of the test error both on CIFAR-10 and CIFAR-100 (see, e.g., the right middle plot of Figure[7). In contrast, we only witnessed very mild overfitting for SGDR.\nFigure 7: Training cross-entropy + regularization loss (top row), test loss (middle row) and tes. error (bottom row) on CIFAR-10 (left column) and CIFAR-100 (right column)."}, {"section_index": "10", "section_name": "4.3 ENSEMBLE RESULTS", "section_text": "Our initial arXiv report on SGDR (Loshchilov & Hutter2016) inspired a follow-up study byHuang et al.[(2016a) in which the authors suggest to take M snapshots of the models obtained by SGDR (in their paper referred to as cyclical learning rate schedule and cosine annealing cycles) right before M last restarts and to use those to build an ensemble, thereby obtaining ensembles \"for free\"' (in contrast to having to perform multiple independent runs). The authors demonstrated new state-of-\nMedian test error (%) of ensembles on CIFAR-10 Median test error (%) of ensembles on ClFAR-100 19.5 W) (W) 3.9 3 3.51%3.29% 3.25% 3.23% 3.15% 3.14% ) unu nud sunysdeus Jo unqwnr unu Jnd suoysdeus no uuqwnn 317.75%16.84% 16.64% 16.48% 16.29% 16.21% 19 3.8 18.5 3.7 3.6 18 2 3.61%3.41%3.29% 3.29%3.21% 3.21% 218.27%17.31%16.97%16.78%16.45% 16.31% 3.5 17.5 3.4 17 4.03%3.63%3.51% 3.44%3.28% 3.27% 3.3 119.57%18.16%17.58%17.32%16.78% 16.50% 16.5 3.2 2 3 4 8 16 2 3 4 8 16 Number of runs (N) Number of runs (N)\nFigure 3:Test errors of ensemble models built from N runs of SGDR on WRN-28-10 with M. model snapshots per run made at epochs 150. 70 and 30 (right before warm restarts of SGDR as suggested byHuang et al.(2016a)). When M=1 (respectively, M=2), we aggregate probabilities of softmax layers of snapshot models at epoch index 150 (respectively, at epoch indexes 150 and 70)\nthe-art results on CIFAR datasets by making ensembles of DenseNet models (Huang et al.] 2016b) Here, we investigate whether their conclusions hold for WRNs used in our study. We used WRN 28-10 trained by SGDR with To = 10, Tmult = 2 as our baseline model.\nFigure3Jand Table 2 aggregate the results of our study. The original test error of 4.03% on CIFAR-10 and 19.57% on CIFAR-100 (median of 16 runs) can be improved to 3.51% on CIFAR-10 and 17.75% on CIFAR-100 when M = 3 snapshots are taken at epochs 30, 70 and 150: when the learning rate of SGDR with To = 10, Tmult = 2 is scheduled to achieve O (see Figure1) and the models are used with uniform weights to build an ensemble. To achieve the same result, one would have to aggregate N = 3 models obtained at epoch 150 of N = 3 independent runs (see N = 3, M = 1 in Figure[3). 
Thus, the aggregation from snapshots provides a 3-fold speedup in these settings because. additional (M > 1-th) snapshots from a single SGDR run are computationally free. Interestingly. aggregation of models from independent runs (when N > 1 and M = 1) does not scale up as well. as from M > 1 snapshots of independent runs when the same number of models is considered: the case of N = 3 and M = 3 provides better performance than the cases of M = 1 with N = 18 and N = 21. Not only the number of snapshots M per run but also their origin is crucial. Thus. naively building ensembles from models obtained at last epochs only (i.e., M = 3 snapshots at. epochs 148, 149, 150) did not improve the results (i.e., the baseline of M = 1 snapshot at 150). thereby confirming the conclusion of|Huang et al.[(2016a) that snapshots of SGDR provide a useful diversity of predictions for ensembles.\nWRN-28-10 on downsampled 32x32 ImageNet 100 95 (%)eoeee eeee geo 90 85 Default, Ir=0.050 80 Default. Ir=0.015 Default, Ir=0.005 SGDR, Ir=0.050 75 SGDR. Ir=0.015 SGDR. Ir=0.005 70 5 10 15 20 25 30 35 40 Epochs\nThree runs (NV = 3) of SGDR with M = 3 snapshots per run are sufficient to greatly improve the. results to 3.25% on CIFAR-10 and 16.64% on CIFAR-100 outperforming the results of Huang et al. (2016a). By increasing N to 16 one can achieve 3.14% on CIFAR-10 and 16.21% on CIFAR-100. We believe that these results could be further improved by considering better baseline models than WRN-28-10 (e.g., WRN-28-20)."}, {"section_index": "11", "section_name": "4.4 EXPERIMENTS ON A DATASET OF EEG RECORDINGS", "section_text": "To demonstrate the generality of SGDR, we also considered a very different domain: a dataset of electroencephalographic (EEG) recordings of brain activity for classification of actual right and left. hand and foot movements of 14 subjects with roughly 1000 trials per subject. The best classification results obtained with the original pipeline based on convolutional neural networks [R. Schirrmeister et al. Convolutional neural networks for EEG analysis: Design choices, training strategies, and feature visualization., under review at Neuroimage] were used as our reference. First, we compared the baseline learning rate schedule with different settings of the total number of epochs and initial. learning rates (see Figure 4). When 30 epochs were considered, we dropped the learning rate by. a factor of 10 at epoch indexes 10, 15 and 20. As expected, with more epochs used and a similar (budget proportional) schedule better results can be achieved. Alternatively, one can consider SGDR. and get a similar final performance while having a better anytime performance without defining the. total budget of epochs beforehand.\nFigure 8: Top-5 test errors obtained by SGD with momentum with the default learning rate schedule and SGDR with To = 1,Tmult = 2 on WRN-28-10 trained on a version of ImageNet, with all images from all 1000 classes downsampled to 32 32 pixels. The same baseline data augmentation as for the CIFAR datasets is used. Three settings of the initial learning rate are considered: O.050. 0.015 and O.005. In contrast to the experiments described in the main paper, here, the dataset is permuted only within 10 subgroups each formed from 100 classes which makes good generalization much harder to achieve for both algorithms. 
An interpretation of SGDR results given here might be that while the initial learning rate seems to be very important, SGDR reduces the problem of improper selection of the latter by scanning / annealing from the initial learning rate to O.\nSimilarly to our results on the CIFAR datasets, our experiments with the EEG data confirm that. snapshots are useful and the median reference error (about 9%) can be improved i) by 1-2% when model snapshots of a single run are considered, and ii) by 2-3% when model snapshots from both hyperparameter settings are considered. The latter would correspond to N = 2 in Section (4.3)..\nIn order to additionally validate our SGDR on a larger dataset, we constructed a downsampled version of the ImageNet dataset [P. Chrabaszcz, I. Loshchilov and F. Hutter. A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets., in preparation]. In contrast to earlier attempts (Pouransari & Ghili2015), our downsampled ImageNet contains exactly the same images from 1000 classes as the original ImageNet but resized with box downsampling to 32 32 pixels. Thus, this dataset is substantially harder than the original ImageNet dataset because the average number of pixels per image is now two orders of magnitude smaller. The new dataset is also more difficuli than the CIFAR datasets because more classes are used and the relevant objects to be classified often cover only a tiny subspace of the image and not most of the image as in the CIFAR datasets."}]
Sk2iistgg
[{"section_index": "0", "section_name": "NON-LINEAR DIMENSIONALITY REGULARIZER FOR SOLVING INVERSE PROBLEMS", "section_text": "Ravi Garg\nUniversity of Adelaide\nravi.garg@adelaide.edu.au"}, {"section_index": "1", "section_name": "REFERENCES", "section_text": "ian.reid@adelaide.edu.au\nConsider an ill-posed inverse problem of estimating causal factors from observa. tions, one of which is known to lie near some (unknown) low-dimensional, non linear manifold expressed by a predefined Mercer-kernel. Solving this problem re-. quires simultaneous estimation of these factors and learning the low-dimensional. representation for them. In this work, we introduce a novel non-linear dimension- ality regularization technique for solving such problems without pre-training.. We re-formulate Kernel-PCA as an energy minimization problem in which low. dimensionality constraints are introduced as regularization terms in the energy. To the best of our knowledge, ours is the first attempt to create a dimensionality. regularizer in the KPCA framework. Our approach relies on robustly penalizing. the rank of the recovered factors directly in the implicit feature space to create their low-dimensional approximations in closed form.. Our approach performs robust KPCA in the presence of missing data and noise. We demonstrate state-of-the-art results on predicting missing entries in the stan-. dard oil flow dataset. Additionally, we evaluate our method on the challenging. problem of Non-Rigid Structure from Motion and our approach delivers promis-. ing results on CMU mocap dataset despite the presence of significant occlusions\nChristoph Bregler, Aaron Hertzmann, and Henning Biermann. Recovering non-rigid 3d shape from image streams. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 690-696,. 2000. R. Cabral, F. De la Torre, J. P. Costeira, and A. Bernardino. Unifying nuclear norm and bilinear. factorization approaches for low-rank matrix decomposition. In International Conference on. Computer Vision (ICCV), 2013.\nOur approach performs robust KPCA in the presence of missing data and noise We demonstrate state-of-the-art results on predicting missing entries in the stan- dard oil flow dataset. Additionally, we evaluate our method on the challenging problem of Non-Rigid Structure from Motion and our approach delivers promis- ing results on CMU mocap dataset despite the presence of significant occlusions and noise.\nEmmanuel J Candes and Benjamin Recht. Exact matrix completion via convex optimization. Foun dations of Computational mathematics. 9(6):717-772. 2009"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Dimensionality reduction techniques are widely used in data modeling, visualization and unsuper vised learning. Principal component analysis (PCAJolliffe(2002)), Kernel PCA (KPCAScholkopf et al.(1998)) and Latent Variable Models (LVMsLawrence(2005)) are some of the well known techniques used to create low dimensional representations of the given data while preserving its significant information.\nYuchao Dai, Hongdong Li, and Mingyi He. A simple prior-free method for non-rigid structure. from-motion factorization. International Journal of Computer Vision. 107(2):101-122. 2014\nOne key deployment of low-dimensional modeling occurs in solving ill-posed inference problems Assuming the valid solutions to the problem lie near a low-dimensional manifold (i.e. 
can be parametrized with a reduced set of variables) allows for a tractable inference for otherwise under constrained problems. After the seminal work of |Candes & Recht](2009); Recht et al.(2010) or guaranteed rank minimization of the matrix via trace norm heuristics Fazel (2002), many ill-posec computer vision problems have been tackled by using the trace norm - a convex surrogate of the rank function - as a regularization term in an energy minimization frameworkCandes & Rech (2009);Zhou et al.(2014). The flexible and easy integration of low-rank priors is one of key factors for versatility and success of many algorithms. For example, pre-trained active appearance models Cootes et al.(2001) or 3D morphable models Blanz & Vetter (1999) are converted to robust featur trackingPoling et al.(2014), dense registration[Garg et al.(2013b) and vivid reconstructions of natu ral videos Garg et al.(2013a) with no a priori knowledge of the scene. Various bilinear factorizatior problems like background modeling, structure from motion or photometric stereo are also addressec with a variational formulation of the trace norm regularization Cabral et al. (2013).\nRavi Garg, Anastasios Roussos, and Lourdes Agapito. Dense variational reconstruction of non-rigi surfaces from monocular video. In Computer Vision and Pattern Recognition, pp. 1272-1279 2013a.\nGiven the success of non-linear dimensionality reduction in modeling real data and overwhelming use of the linear dimensionality regularizers in solving real world problems, we expect that pro. posed non-linear dimensionality regularizer will be applicable to a wide variety of unsupervised inference problems: recommender systems; 3D reconstruction; denoising; shape prior based object segmentation; and tracking are all possible applications.\nAnders Eriksson\nQueensland University of Technology. anders.eriksson@qut.edu.au\nQueensland University of Technology\nanders.eriksson@qut.edu.au\nIjaz Akhter, Yaser Sheikh, Sohaib Khan, and Takeo Kanade. Nonrigid structure from motion in trajectory space. In Advances in neural information processing systems, pp. 41-48, 2009"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision. 40(1):120-145. 2011\nTimothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. IEEE\nAmaury Dame, Victor Adrian Prisacariu, Carl Yuheng Ren, and Ian Reid. Dense reconstruction using 3d object shape priors. In Computer Vision and Pattern Recognition, pp. 1288-1295. IEEE 2013.\nMaryam Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002\nOn the other hand, although many non-linear dimensionality reduction techniques - in particulai. KPCA - have been shown to outperform their linear counterparts for many data modeling tasks. they are seldom used to solve inverse problems without using a training phase. A general (discrim. inative) framework for using non-linear dimensionality reduction is: (i) learn a low-dimensional representation for the data using training examples via the kernel trick (ii) project the test exam-. ples on the learned manifold and finally (iii) find a data point (pre-image) corresponding to each. projection in the input space.\nAndreas Geiger, Raquel Urtasun, and Trevor Darrell. Rank priors for continuous non-linear dimen sionality reduction. 
In Computer Vision and Pattern Recognition, pp. 880-887. IEEE, 2009..\nPaulo FU Gotardo and Aleix M Martinez. Kernel non-rigid structure from motion. In IEEE Inter national Conference on Computer Vision, pp. 802-809, 2011a\nThis setup has two major disadvantages. Firstly, many problems of interest come with corrupted. observations - noise, missing data and outliers - which violate the low-dimensional modeling assumption.Secondly, computing the pre-image of any point in the low dimensional feature subspace is non-trivial: the pre-image for many points in the low dimensional space might not even exist because the non linear feature mapping function used for mapping the data from input space to the. feature space is non-surjective\nIan Jolliffe. Principal component analysis. Wiley Online Library, 2002\nNeil D Lawrence. Probabilistic non-linear principal component analysis with gaussian process laten 6:1783-1816. 2005 Varable models The. 1\nGenerative models like LVMs Lawrence (2005) are often used for inference by searching the low. dimensional latent space for a location which maximizes the likelihood of the observations. Prob- lems like segmentation, tracking and semantic 3D reconstruction|Prisacariu & Reid(2011); Dame et al.(2013) greatly benefit from using LVM. However, the latent space is learned a priori with clean training data in all these approaches.\nMinh Hoai Nguyen and Fernando De la Torre. Robust kernel principal component analysis. I Advances in Neural Information Processing Systems. 2009..\nAlmost all non-linear dimensionality reduction techniques are non-trivial to generalize for solving ill-posed problems (See section4.2) without a pre-training stage. Badly under-constrained problems require the low-dimensional constraints even for finding an initial solution, eliminating applicability of the standard \"projection + pre-image estimation\"' paradigm. This hinders the utility of non- linear dimensionality reduction and a suitable regularization technique to penalize the non-linear dimensionality is desirable.\nJorge Nocedal and Stephen J. Wright. Numerical optimization. Springer, New York, 2006\nSum and Substance: A closer look at most non-linear dimensionality reduction techniques reveals that they rely upon a non-linear map- ping function which maps the data from in- put space to a (usually) higher dimensional fea- ture space. In this feature space the data is as- sumed to lie on a low-dimensional hyperplane\nnon-linear dimensionality reduction techniques reveals that they rely upon a non-linear map- ping function which maps the data from in- put space to a (usually) higher dimensional fea ture space. In this feature space the data is as- sumed to lie on a low-dimensional hyperplane thus, linear low-rank prior is apt in the fea ture space. Armed with this simple observa- tion, our aim is to focus on incorporating the advances made in linear dimensionality reduc tion techniques to their non-linear counterparts while addressing the problems described above Figure[1explains this central idea and proposed dimensionality regularizer in a nutshell with\nRalph Tyrrell Rockafellar. Conjugate duality and optimization, volume 14. SIAM, 1974\nGuido Sanguinetti and Neil D Lawrence. Missing data in kernel pca. In Machine Learning: ECML 2006, pp. 751-758. Springer, 2006\nJohn Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, and Yi Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. 
In Advances in Neural Information Processing Systems, pp. 2080-2088. 2009\nOur Contribution: In this work we propose a unified for simultaneous robust KPCA and pre image estimation while solving an ill-posed in ference problem without a pre-training stage\nXiaowei Zhou, Can Yang, Hongyu Zhao, and Weichuan Yu. Low-rank modeling and its application in image analysis. ACM Computing Surveys (CSUR), 47(2):36, 2014.\nIn particular we propose a novel robust en- ergy minimization algorithm which handles the implicitness of the feature space to directly penalize its rank by iteratively: (i) creating robust low-dimensional representation for the\nXu Zongben, Chang Xiangyu, Xu Fengmin, and Zhang Hai. L1/2 regularization: a thresholding representation theory and a fast solver. IEEE Transactions on neural networks and learning systems, 23(7):1013-1027, 2012\nPreviously, extensions to KPCA like Robust KPCA (RKPCANguyen & De la Torre (2009)) and probabilistic KPCA (PKPCASanguinetti & Lawrence(2006)) with missing data have been proposed to address the first concern, while various additional regularizers have been used to estimate the pre-image robustly Bakir et al.[(2004); Mika et al.(1998); Kwok & Tsang(2004); Abrahamsen & Hansen(2009).\nSebastian Mika, Bernhard Scholkopf, Alex J Smola, Klaus-Robert Muller, Matthias Scholz, and Gunnar Ratsch. Kernel pca and de-noising in feature spaces. In NIPS, volume 4, pp. 7, 1998.\nCausal Factors 3D shapes (S) and the projection matrices (R) Wj =RjSi R1 R2 Observations RN 2D projections (W) in the image W1 W2 S2 SN WN Si->$(Si) Input Space Feature Space (RKHS) (Span of aligned 3D shapes) (Span of non-linearly transformed shapes). Dimensionality Regularizer Penalizing rank in the feature space Dimensionality of S ~I|$(S)|l*\nBernhard Scholkopf, Alexander Smola, and Klaus-Robert Muller. Nonlinear component analysis as a kernel eigenvalue problem. Neural computation, 10(5):1299-1319, 1998.\nIichael E Tipping and Christopher M Bishop. Probabilistic principal component analysis. Journa f the Roval Statistic. 61(3):611-622.1999\nFigure 1: Non-linear dimensionality regularizer for NRSfM. The top part of the figure explains the ill-posed inverse problem of recovering the causal factors (1); projection matrices R; and 3D structures S, from 2D image observations (2) W's, by minimizing the image reprojection error f(W, R, S) = a W - RSi[2. Assuming that the recovered 3D structures (S's) lies near an unknown non-linear manifold (represented by the blue curve) in the input space, we propose to regu- larize the dimensionality of this manifold (3) - span of the non-linearly transformed shape vectors (S)'s - by minimizing (S)l*. 
The non-linear transformation is defined implicitly with a Mercer kernel and maps the non-linear manifold to a linear low rank subspace (shown in blue line) of RKHS"}, {"section_index": "4", "section_name": "PROOF OF THEOREM 3.1", "section_text": "data given the kernel matrix in closed form and (ii) reconstructing the noise-free version of the data (pre-image of the features space projections) using the estimated low-dimensional representations in a unified framework\nThe proposed algorithm: (i) provides a novel closed form solution to robust KPCA; (ii) yields state of-the-art results on missing data prediction for the well-known oil flow dataset; (iii) outperforms state-of-the-art linear dimensionality (rank) regularizers to solve NRSfM; and (iv) can be trivially generalized to incorporate other cost functions in an energy minimization framework to solve various ill-posed inference problems.\n- wr2w+l? +r|T|* min TeDn wTw=I\nmin tr -2 tr WI 2 r,w i=1 n n p min Wii 0 2 r,w i=1 i=1 j=1 n 21 0 111 2 V7 1 i=1 n 21 p min Oi+Y Yi 2 Yi0 0 i=1\nIf, f(W, S) = 0 is ill-conditioned (for example when ) < ), we want to recover matrix S under the assumption that the columns of it lie near a low-dimensional non-linear manifold. This can be done by solving a constrained optimization problem of the following form:.\nmin rank((S)) S s.t. f(W,S)<e\nHA-LL|=+r||L|*= 2u-r2lI? + rl|Tll 2 n -y+T i=1\nwhere (S) = [(s1), (s2), ... , $(sn)] e H N is the non-linear mapping of matrix S from the input space ' to the feature space H (also commonly referred as Reproducing Kernel Hilbert Space), via a non-linear mapping function $ : -> H associated with a Mercer kernel K such that K(S)i,j = $(si)T$(sj)\nIn this paper we present a novel energy minimization framework to solve problems of the genera form (1).\nAs our first contribution, we relax the problem (1) by using the trace norm of (S) - the convex. surrogate of rank function - as a penalization function. The trace norm ||M||* =: , ,(M) of a matrix M is the sum of its eigenvalues ,(M) and was proposed as a tight convex relaxatior'oi the rank(M) and is used in many vision problems as a rank regularizerFazel(2002). Although. the rank minimization via trace norm relaxation does not lead to a convex problem in presence ol a non-linear kernel function, we show in|3.2|that it leads to a closed-form solution to denoising a kernel matrix via penalizing the rank of recovered data (S) directly in the feature space..\np 21 2YOi+Yi +TYi 0 2\nGiven the relaxations proposed in Section[2] our assertion that the novel trace regularization based. non-linear dimensionality reduction is robust need to be substantiated. To that end, we evaluate our closed-form solution of Algorithm2|on the standard oil flow dataset introduced in|Bishop & James (1993).\nWith these changes we can rewrite (1) as\nwhere t is a regularization strength2\nIt is important to notice that although the rank of the kernel matrix K(S) is equal to the rank of (S), K(S)* is merely (S)|[?. Thus, directly penalizing the sum of the singular values of. K(S) will not encourage low-rank in the feature space|3.\nAlthough we have relaxed the non-convex rank function, (2) is in general difficult to minimize due to the implicitness of the feature space. Most widely used kernel functions like RBF do noi have a explicit definition of the function . Moreover, the feature space for many kernels is high (possibly infinite-) dimensional, leading to intractability. 
These issues are identified as the main\nIt is important to note that in this experiment, we only estimate the principal components (and their variances) that explain the estimated non-linear manifold, i.e. matrix C by Algorithm[2] without reconstructing the denoised version of the corrupted data samples\nBoth KPCA and our solution require model selection (choice of rank and t respectively) which is beyond the scope of this paper. Here we resort to evaluate the performance of both methods under different parameters settings. To quantify the accuracy of the recovered manifold (C) we use following criteria:\n21/t can also be viewed as Lagrange multiplier to the constraints in (1). 3 Although it is clear that relaxing the rank of kernel matrix to ||K(S)||+ is suboptimal, works likeHuang et al.(2012); Cabral et al.[(2013) with a variational definition of nuclear norm, allude to the possibility of kernelization. Further investigation is required to compare this counterpart to our tighter relaxation.\nProof. We will prove theorem1by first establishing a lower bound for (8) and subsequently showing that this lower bound is obtained at L* given by (10). The rotational invariance of the entering norms allows us to write 8) as:\nUE-wr2wT|?+r||T|* min (14) TeDn wTw=I obtain mintr( )-2tr(wr2) (15) 2 r,w 2T p min WijYjOi O i (16) 2 r,w i=1 i=1 j=1 n P 27 min A (17) Y 2 r 0 i=1 n min (18) 0 Yi0 0\nThis paper focuses on solving a generic inverse problem of recovering causal factor S = S1,S2, . SN] E X N from N observations W = [w1, w2, wvE V x N such that f(W, S) = 0. Here function f(observation,variable), is a generic loss function which aligns the. observations W with the variable S (possibly via other causal factors. e.g. R or Z in Section|4.1 and4.2).\nThe inequality in (17) follows directly by applying Holder's inequality to (16) and using the property that the column vectors wi are unitary.\nFinally, since the subproblems in (18) are separable in yi, its minimizer must be KKT-points of the individual subproblems. As the constraints are simple non-negativity constraints, these KKT points are either (positive) stationary points of the objective functions or O. It is simple to verify that the stationary points are given by the roots of the cubic function Po,r/2p. Hence it follows that there exists a * such that\nnin f(W,S) +t[(S)] S\nThis dataset comprises 1000 training and 1000 testing data samples, each of which is of 12 dimen- sions and categorized into one of three different classes. We add zero mean Gaussian noise with variance o to the training data|and recover the low-dimensional manifold for this noisy training data S, with KPCA and contrast this with the results from Algorithm2 An inverse width of the Gaussian kernel y = 0.075 is used for all the experiments on the oil flow dataset..\n'More precisely, ||M||* was shown to be the tight convex envelope of rank(M)/l|M|s, where ||M||. epresent spectral norm of M.\n5Note that our formulation assumes Gaussian noise in K(S) where as for this evaluation we add noise to S lirectly.\nTable 3: Robust dimensionality reduction accuracy by KPCA versus our closed-form solution on the full oi. flow dataset. Columns from left to right represent: (1) standard deviation of the noise in training samples (2-3). 
Error in the estimated low-dimensional kernel matrix by (2) KPCA and (3) our closed-form solution, (4-5 Nearest neighbor classification error of test data using (4) KPCA and (5) our closed-form solution respectively.\nbarriers to robust KPCA and pre-image estimationNguyen & De la Torre(2009). Thus, we have tc reformulate (2) by applying kernel trick where the cost function (2) can be expressed in terms of the kernel function alone.\nManifold Error Classification Error STD KPCA Our CFS KPCA Our CFS .2 0.1099 0.1068 9.60% 9.60% .3 0.2298 0.2184 19.90% 15.70% .4 0.3522 0.3339 40.10% 22.20% 0.4 0.35 T aann aannnor 0.3 0.25 KPCA,=.2 0.2 Ours,=.2 KPCA=.3 Ours,o=.3 0.15 KPCA,=.4 Ours,=.4 0.1 1 0 2 4 6 8 10 12 14 16 Rank of kernel matrix\nThe key insight here is that under the assumption that kernel matrix K(S) is positive semidefinite we can factorize it as: K(S) = CT C. Although, this factorization is non-unique, it is trivial to show. the following:\nwhere X,(.) is the function mapping the input matrix to its ith largest eigenvalue\nmin f(W,S) + [K(S) - CTClI? + t|[C|l S,C\nBefore moving on, we would like to discuss some alternative interpretations of (5) and its rela tionship to previous work - in particular LVMs. Intuitively, we can also interpret (5) from the probabilistic viewpoint as commonly used in latent variable model based approaches to define ker nel function Lawrence(2005). For example a RBF kernel with additive Gaussian noise and inverse width can be defined as: K(S), = e-ylls,-s,|l? + e, where e ~ N(0, o). In other words, with a finite p, our model allows the data points to lie near a non-linear low-rank manifold instead o on it. Its worth noting here that like LVMs, our energy formulation also attempts to maximize the likelihood of regenerating the training data W, (by choosing f(W, S) to be a simple least squares cost) while doing dimensionality reduction.\nNote that in closely related work Geiger et al.(2o09), continuous rank penalization (with a loga rithmic prior) has also been used for robust probabilistic non-linear dimensionality reduction anc model selection in LVM framework. However, unlike Geiger et al.(2009); Lawrence(2005) where the non-linearities are modeled in latent space (of predefined dimensionality), our approach directly penalizes the non-linear dimensionality of data in a KPCA framework and is applicable to solve inverse problems without pre-training."}, {"section_index": "5", "section_name": "B KERNEL NRSFM WITH CAMERA POSE ESTIMATION", "section_text": "Table4 shows the reconstruction performance on a more realistic experimental setup, with the mod- ification that the camera projection matrices are initialized with rigid factorization and were refined with the shapes by optimizing (2). To solve NRSfM problem with unknown projection matrices, we parameterize each R; with quaternions and alternate between refining the 3D shapes S and pro jection matrices R using LM. The regularization strength t was selected for the TNH method by golden section search and parabolic interpolation for every test case independently. This ensures the best possible performance for the baseline. For our proposed approach t was kept to 10-4 for all sequences for both missing data and full data NRSfM. 
This experimental protocol somewhat disad- vantages the non-linear method, since its performance can be further improved by a judicious choice of the regularization strength.\nWe approach the optimization of (5) by solving the following two sub-problems in alternation:\n6Errors from non-noisy kernel matrix can be replaced by cross validating the entries of the kernel matrix for model selection for more realistic experiment.\nX;(K(S)) = X(C) =X((S)\nC* = (S)ll* V C: CTC = K(S)\ns.t. K(S) = cTc\nThe above minimization can be solved with a soft relaxation of the manifold constraint by assuming that the columns of S lie near the non-linear manifold.\nAs p -> oo, the optimum of (5) approaches the optimum of (4) . A local optimum of (4) can be achieved using the penalty method of|Nocedal & Wright (2006) by optimizing (5) while iteratively increasing p as explained in Section|3\nFigure 3: Performance comparison between KPCA and our Robust closed-form solution with dimensionality. regularization on oil flow dataset with additive Gaussian noise of standard deviation o. Plots show the normal-. ized kernel matrix errors with different rank of the model. Kernel PCA results are shown in dotted line with diamond while ours are with solid line with a star. Bar-plot show the worst and the best errors obtained by our. method for a single rank of recovered kernel matrix..\nManifold Error : A good manifold should preserve maximum variance of the data - i.e. it should be able to generate a denoised version K(Sest) = CTC of the noisy kernel. matrix K(S). We define the manifold estimation error as K(Sest) K(SgT)|?, where. K(SgT) is the kernel matrix derived using noise free data. Figure3 shows the manifold. estimation error for KPCA and our method for different rank and parameter t respectively|. Classification error: The accuracy of a non-linear manifold is often also tested by the near- est neighbor classification accuracy. We select the estimated manifold which gives mini- mum Manifold Error for both the methods and report 1NN classification error (percentage. of misclassified example) of the 1000 test points by projecting them onto estimated mani-. folds.\nf(W,S) K(S) - cTc? min S 2 p min r[C[* K(s) - cTc|l? 2 C\nAlgorithm[1 outlines the approach and we give a detailed description and interpretations of both sub-problems (7) and (6) in next two sections of the paper.\nSubproblem (6) can be seen as a generalized pre-image estimation problem: we seek the factor si, which is the pre-image of the projection of $(s;) onto the principle subspace of the RKHS stored in\nTable 4: 3D reconstruction errors for linear and non-linear dimensionality regularization with noisy camer pose initialization from rigid factorization and refined in alternation with shape. The format is same as Table2\nAlgorithm 1: Inference with Proposed Regularizer.\nInput: Initial estimate So of S.. Output: Low-dimensional S and kernel representation C. Parameters: Initial p and maximum pmax penalty, with scale Ps.. - S = So, p = po ; while p Pmax do while not converged do. - Fix S and estimate C via closed-form solution of (7) using Algorithm - Fix C and minimize (6) to update S using LM algorithm;. 
- p = PPs ;\nNo Missing Data 50% Missing Data Linear Non-Linear Linear Non-Linear Dataset T = T* T = 10-4 T = T* T = 10-4 dmax dmed dmax dmed Drink 0.0947 0.0926 0.0906 0.0957 0.0942 0.0937 Pickup 0.1282 0.1071 0.1059 0.1598 0.1354 0.1339 Yoga 0.2912 0.2683 0.2639 0.2821 0.2455 0.2457 Stretch 0.1094 0.1043 0.1031 0.1398 0.1459 0.1484 Mean 0.1559 0.1430 0.1409 0.1694 0.1552 0.1554\nNotice that (6) only computes the pre-image for the feature space projections of the data points witl. which the non-linear manifold (matrix C) is learned. An extension to our formulation is desirabl if one wants to use the learned non-linear manifold for denoising test data in a classic pre-image. estimation framework. Although a valuable direction to pursue, it is out of scope of the presen. paper.\nAs suggested byDai et al.(2014), robust camera pose initialization is beneficial for the structure es-. timation. We have used rigid factorization for initializing camera poses here but this can be trivially changed. We hope that further improvements can be made by choosing better kernel functions, with cross validation based model selection (value of ) and with a more appropriate tuning of kernel. width. Selecting a suitable kernel and its parameters is crucial for success of kernelized algorithms. It becomes more challenging when no training data is available. We hope to explore other kernel. functions and parameter selection criteria in our future work..\nOne can interpret sub-problem (7) as a robust. form of KPCA where the kernel matrix has been corrupted with Gaussian noise and we. want to generate its low-rank approximation. Although (7) is non-convex we can solve it in. closed-form via singular value decomposition This closed-form solution is outlined in Algo. rithm2land is based on the following theorem:\nWe would also like to contrast our work with Gotardo & Martinez(2011a), which is the only wor we are aware of where non-linear dimensionality reduction is attempted for NRSfM. While esti mating the shapes lying on a two dimensional non-linear manifold, Gotardo & Martinez (2011a additionally assumes smooth 3D trajectories (parametrized with a low frequency DCT basis) and pre-defined hard linear rank constraint on 3D shapes. The method relies on sparse approximation o the kernel matrix as a proxy for dimensionality reduction. The reported results were hard to replicat under our experimental setup for a fair comparison due to non-smooth deformations. However, i contrast to|Gotardo & Martinez (2011a), our algorithm is applicable in a more general setup, ca be modified to incorporate smoothness priors and robust data terms but more importantly, is flexibl to integrate with a wide range of energy minimization formulations leading to a larger applicabilit beyond NRSfM.\nTheorem1. With Sn 3 A > 0 let A = UU7 denote its singular value decomposition. Then\nA - LTL?+rL] n1n L n i=1\nIn section4.2] we have compared the proposed non-linear dimensionality reduction prior against a variant of[Garg et al.(2013a) which handles missing data by optimizing:\nF N + maxmin T < S,Q> + Zi(xj)||wi(xj)- Risi(xj)| Q S,R i=1 j=1 s.t.||Q||s 1\nset of candidates for the corresponding eigenvalue . best one Irom this set is obtained b. choosing the value which minimizes (9) (see Algorithm[2) As elaborated in Section2] problem (7) can be seen as regularizing sum of square root (L1/2 norm) of the eigenvalues of the matrix K(S). In a closely related workZongben et al.(2012), authors. 
advocate L1/2 norm as a better approximation for the cardinality of a vector then the more commonly. used L1 norm. A closed form solution for L1/2 regularization similar to our work was outlined in. Zongben et al.(2012) and was shown to outperform the L1 vector norm regularization for sparse. coding. To that end, our Theorem [1and the proposed closed form solution (Algo 2) for (7) can\nwhere Q E RXN stores the dual variables to S and |s represent spectral norm (highest eigen- value) of a matrix.\nFor more details on primal dual formulation and dual norm of the trace norm see[Rockafellar[(1974] et al.(2010); ;Chambolle & Pock (2011)\nHowever our purpose is primarily to show that the non-linear method adds value even without time-. consuming per-sequence tuning. To that end, note that despite large errors in the camera pose esti-. mations by TNH and 50% missing measurements, the proposed method shows significant (~ 10%) improvements in terms of reconstruction errors, proving our broader claims that non-linear repre-. sentations are better suited for modeling real data, and that our robust dimensionality regularizer can Improve inference for ill-posed problems.\nCT C, which best explains the observation w;. Here (6) is generally a non-convex problem, unless the Mercer-kernel is linear, and must therefore be solved using non-linear optimization techniques In this work, we use the Levenberg-Marquardt algorithm for optimizing (6)\nAlgorithm 2: Robust Dimensionality Reduction\nInput: Current estimate of S. Output: Low-dimensional representation C Parameters: Current p and regularization strength T\nF N T||S|*+Zi(xj)|w(xj)-R;si(xj)l min S,R i=1 j=1\nTheorem [1shows that each eigenvalue of the minimizer C* of (7) can be obtained by solving a. depressed cubic whose coefficients are determined by the corresponding eigenvalue of the kernel. matrix and the regularization strength t. The roots of each cubic, together with zero, comprise a set of candidates for the corresponding eigenvalue of C*. The best one from this set is obtained by. choosing the value which minimizes (9) (see Algorithm2).\nTable 1: Performance comparison on missing data completion on Oil Flow Dataset: Row 1 shows the amount of missing data and subsequent rows show the mean and standard deviation of the error in recovered data matrix over 50 runs on 100 samples of oil flow dataset by: (1) The mean method (also the initialization of other methods) where the missing entries are replaced by the mean of the known values of the corresponding attributes, (2) 1-nearest neighbor method in which missing entries are filled by the values of the nearest point. (3) PPCA|Tipping & Bishop(1999), (4) PKPCA ofSanguinetti & Lawrence(2006), (5)RKPCA|Nguyen & De la Torre (2009) and our method.\nAlgorithm 3: Trace norm Heuristics\n// set iteration count n step size and duals ( - n = 0; - = 1/T : Q = 0:\nbe seen as generalization ofZongben et al. (2012) to include the L1/2 matrix norms for which simplified proof is included in the Appendix|A] It is important to note however, that the motivatio and implication of using L1/2 regularization in the context of non-linear dimensionality reductio are significantly different to that of Zongben et al.(2012) and related work Du et al.[(2013);Zha et al. (2014) which are designed for linear modeling of the causal factors. 
The core insight of usin L1 regularization in the feature space via the parametrization given in|3|facilitates a natural way fc non-linear modeling of causal factors with low dimensionality while solving an inverse problem b making feature space tractable.\nan+1 =(I2x2+o(ZijRRi))-*(S-0TQ+oR(Zij 0 Wij))"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "In this section we demonstrate the utility of the proposed algorithm. The aims of our experiments are twofold: (i) to compare our dimensionality reduction technique favorably with KPCA and its robust variants; and (ii) to demonstrate that the proposed non-linear dimensionality regularizer consistently outperforms its linear counterpart (a.k.a. nuclear norm) in solving inverse problems..\nTable 5: 3D reconstruction errors for different NRSfM approaches and our TNH Algorithm given ground truth camera projection matrices. Results for all the methods (except TNH) are taken from[Dai et al.(2014)"}, {"section_index": "7", "section_name": "4.1 MATRIX COMPLETION", "section_text": "The nuclear norm has been introduced as a low rank prior originally for solving the matrix comple tion problem. Thus, it is natural to evaluate its non-linear extensions on the same task. Assuming. W E Rmxn to be the input matrix and Z a binary matrix specifying the availability of the observa. tions in W, Algorithm1can be used for recovering a complete matrix S with the following choice. of f(W. Z. S):\nf(W, Z,S) =||Z o(W - S)l)"}, {"section_index": "8", "section_name": "where o represents Hadamard product", "section_text": "To demonstrate the robustness of our algorithm for matrix completion problem, we choose 100 training samples from the oil flow dataset described in section|3.2|and randomly remove the elements from the data with varying range of probabilities to test the performance of the proposed algorithm against various baselines. Following the experimental setup as specified in Sanguinetti & Lawrence (2006), we repeat the experiments with 50 different samples of Z. We report the mean and standard deviation of the root mean square reconstruction error for our method with the choice of t = 0.1, alongside five different methods in Table[1 Our method significantly improves the performance of missing data completion compared to other robust extensions of KPCA Tipping & Bishop1999); Sanguinetti & Lawrence(2006);Nguyen & De la Torre(2009), for every probability of missing data.\nAs the main manuscript uses NRSfM only as a practical application of our non-linear dimension ality reduction prior, we have restricted our NRSfM experiments to only compare the proposed method against its linear counterpart. For the timely evaluation, the reported experiments we con- ducted on sub-sampled CMU mocap dataset. Here, we supplement the arguments presented in the main manuscript by favorably comparing the linear dimensionality reduction based NRSfM algo rithm(TNH) to other NRSfM methods on full length CMU mocap sequences.\nAlthough we restrict our experiments to least-squares cost functions, it is vital to restate here that our framework could trivially incorporate robust functions like the L1 norm instead of the Frobenius norm - as a robust data term f(W, Z, S) - to generalize algorithms like Robust PCA|Wright et al. 
(2009) to their non-linear counterparts.\nDataset PTA[Akhter et al.[(2009) CSF2Gotardo & Martinez (2011b) BMMDai et al.[(2014) TNH Drink 0.0229 0.0215 0.0238 0.0237 Pick-up 0.0992 0.0814 0.0497 0.0482 Yoga 0.0580 0.0371 0.0334 0.0333 Stretch 0.0822 0.0442 0.0456 0.0431\nWe choose quaternions to perametrize the 2 3 camera matrices R; to satisfy orthonormality con- straints as done in[Garg et al.[(2013a) and optimize the saddle point problem (22) using alternation. In particular, for a single iteration: (i) we optimize the camera poses R's using LM, (ii) take a steepest descend step for updating S and (ii) a steepest ascend step for updating Q which is fol-. lowed by projecting its spectral norm to unit ball. Given ground truth camera matrices ( without. step (i)), alternation (ii-iii) can be shown to reach global minima of (22). Algorithm[3|outlines TNH algorithm.\nNon-rigid structure from motion under orthography is an ill-posed problem where the goal is to esti- mate the camera locations and 3D structure of a de. formable objects from a collection of 2D images which are labeled with landmark correspondences Bregler et al.(2000). Assuming s,(x;) e R3 to be the 3D location of point x; on the deformable object in the ith image, its orthographic projection wi(xj) E R2 can be written as w;(x) = R,s;(xj) where R; E R23 is a orthographic projection ma- trix Bregler et al.(200o). Notice that as the object deforms, even with given camera poses, reconstruct- ing the sequence by least-squares reprojection error minimization is an ill-posed problem. In their semi nal work, Bregler et al.(2000) proposed to solve this problem with an additional assumption that the re- constructed shapes lie on a low-dimensional linear subspace and can be parameterized as linear combi- nations of a relatively low number of basis shapes NRSfM was then cast as the low-rank factorization problem of estimating these basis shapes and corre- sponding coefficients.\nRecent work. like Dai et al. (2014);Garg et al. [2013a) have shown that the trace norm regularizer can be used as a convex envelope of the low-rank prior to robustly address ill-posed nature of the prob- lem. A good solution to NRSfM can be achieved by optimizing:\nF N min r||S|*+Zi(xj)||w;(xj)-R;si(xj)|l3 S,R i=1 j=1\nwhere S is the shape matrix whose columns are 3N dimensional vectors storing the 3D coordinates S,(x;) of the shapes and Z,(x) is a binary variable indicating if projection of point x; is available in the image i.\nAssuming the projection matrices to be fixed, this problem is convex and can be exactly solvec with standard convex optimization methods. Additionally, if the 2D projections w;(x) are noise free, optimizing (12) with very small t corresponds to selecting the the solution - out of the many solutions - with (almost) zero projection error, which has minimum trace norm Dai et al.[(2014) Thus henceforth, optimization of (12) is referred as the trace norm heuristics (TNH). We solve this problem with a first order primal-dual variant of the algorithm given in|Garg et al.(2013a), which can handle missing data. The algorithm is detailed and compared favorably with the state of the ar NRSfM approaches (based on linear dimensionality regularization) Appendix|C\nA simple kernel extension of the above optimization problem is\nwhere (S) is the non-linear ma ping of S to the feature.. 
oace using an RBF kernel.\nWith fixed projection matrices R, (13) is of the general form (2), for which the local optima can be found using Algorithm1\nGround truth Our regularizer X Linear regularizer\npny Ground truth # Our regularizer X Linear regularizer esti- de- ages nces 3 to able tion xj ma- ject uct- X rror emi- this\nFigure 2: Non-linear dimensionality regular. isation improves NRSfM performance com- pared to its linear counterpart. Figure shows. the ground truth 3D structures in red wire-frame. overlaid with the structures estimated using: (a). proposed non-linear dimensionality regularizer. shown in blue dots and (b) corresponding lin-. ear dimensionality regularizer (TNH) shown in. black crosses, for sample frames of CMU mo. cap sequence. Red circles represent the 3D points. for which the projections were known whereas. squares annotated missing 2D observations. See. text and Table2|for details.\nF N min t(S)l*+ Zi(x)|wi(xj)-Risi(x)| S,R i=1 j=1 f(W,Z,R,S)\nTable 2: 3D reconstruction errors for linear and non-linear dimensionality regularization with ground truth. camera poses. Column 1 and 4 gives gives error for TNH while column (2-3) and (5-6) gives the corresponding error for proposed method with different width of RBF kernel. Row 5 reports the mean error over 4 sequences.."}, {"section_index": "9", "section_name": "4.2.1 RESULTS ON THE CMU DATASET", "section_text": "In our experiments we use ground truth camera projection matrices to compare our algorithm agains1 TNH. The advantage of this setup is that with ground-truth rotation and no noise, we can avoid the model selection (finding optimal regularization strength ) by setting it low enough. We run the TNH with t = 10-7 and use this reconstruction as initialization for Algorithm 1 For the proposed method, we set T = 10-4 and use following RBF kernel width selection approach:\nFollowing the standard protocol in Dai et al. (2014); Akhter et al.(2009), we quantify the recon-. struction results with normalized mean 3D errors e3D = FN i ; eij, where ei; is the euclidean distance of a reconstructed point j in frame i from the ground truth, o is the mean of standard devi- ation for 3 coordinates for the ground truth 3D structures, and F, N are number of input images and number of points reconstructed.\nTable 2 shows the results of the TNH and non-linear dimensionality regularization based methods using the experimental setup explained above, both without missing data and after randomly remov. ing 50% of the image measurements. Our method consistently beats the TNH baseline and improves the mean reconstruction error by ~ 40% with full data and by ~ 25% when used with 50% miss. ing data. Figure2|shows qualitative comparison of the obtained 3D reconstruction using TNH and proposed non-lienar dimensionality regularization technique for some sample frames from various sequences. We refer readers to Appendix B|for results with simultaneous reconstruction pose opti- mization."}, {"section_index": "10", "section_name": "5 CONCLUSION", "section_text": "In this paper we have introduced a novel non-linear dimensionality regularizer which can be incor. porated into an energy minimization framework, while solving an inverse problem. The proposed algorithm for penalizing the rank of the data in the feature space has been shown to be robust to noise and missing observations. 
We have picked NRSfM as an application to substantiate our arguments and have shown that despite missing data and model noise (such as erroneous camera poses) our algorithm significantly outperforms state-of-the-art linear counterparts.\nAlthough our algorithm currently uses slow solvers such as the penalty method and is not directly. scalable to very large problems like dense non-rigid reconstruction, we are actively considering. alternatives to overcome these limitations. An extension to estimate pre-images with a problem\n4 Since our main goal is to validate the usefulness of the proposed non-linear dimensionality regularizer, w opt for a reduced size dataset for more rapid and flexible evaluation.\nNo Missing Data 50% Missing Data Linear Non-Linear Linear Non-Linear Dataset dmax dmed dmax dmed Drink 0.0227 0.0114 0.0083 0.0313 0.0248 0.0229 Pickup 0.0487 0.0312 0.0279 0.0936 0.0709 0.0658 Yoga 0.0344 0.0257 0.0276 0.0828 0.0611 0.0612 Stretch 0.0418 0.0286 0.0271 0.0911 0.0694 0.0705 Mean 0.0369 0.0242 0.0227 0.0747 0.0565 0.0551\nMaximum distance criterion (dmax): we set the maximum distance in the feature space to be 3o. Thus, the kernel matrix entry corresponding to the shape pairs obtained by TNH with maximum Euclidean distance becomes e-9/2. Median distance criterion (dmed): the kernel matrix entry corresponding to the median euclidean distance is set to 0.5."}]
HyET6tYex
[{"section_index": "0", "section_name": "ACKNOWLEDGMENTS", "section_text": "Levent Sagun Mathematics Department New York University saqun@cims u.edi\nThomas Trogdon Mathematics Department. University of California, Irvine. ttrogdon@math.uci.edi"}, {"section_index": "1", "section_name": "REFERENCES", "section_text": "Robert J Adler and Jonathan E Taylor. Random fields and geometry. Springer Science & Busines. Media, 2009.\nAntonio Auffinger, Gerard Ben Arous, and Jiri Cerny. Random matrices and complexity of spi glasses. Communications on Pure and Applied Mathematics, 66(2):165-201, 2013\nThe authors present empirical distributions for the halting time (measured by th number of iterations to reach a given accuracy) of optimization algorithms ap plied to two random systems: spin glasses and deep learning. Given an algorithm which we take to be both the optimization routine and the form of the randon andscape, the fluctuations of the halting time follow a distribution that, after cen tering and scaling, remains unchanged even when the distribution on the landscape is changed. We observe two main classes, a Gumbel-like distribution that appears n Google searches, human decision times, QR factorization and spin glasses, anc a Gaussian-like distribution that appears in conjugate gradient method, deep net work with MNIST input data and deep network with random input data. This empirical evidence suggests presence of a class of distributions for which the halt ing time is independent of the underlying distribution under some conditions.\nAnna Choromanska. Mikael Henaff. Michael Mathieu, Gerard Ben Arous. and Yann LeCun. The loss surface of multilayer networks. arXiv preprint arXiv:1412.0233, 2014.\nYann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex op- timization. In Advances in Neural Information Processing Svstems. pp. 2933-2941. 2014\nPercy Deift. Orthogonal polynomials and random matrices: a Riemann-Hilbert approach, volume 3 American Mathematical Soc., 2000."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "In this paper we discuss both the presence and application of universality in optimization algorithms More precisely, in order to optimize an energy functional when the functional itself and the initial guess are random, we consider the following iterative algorithms: conjugate gradient for solving a linear system, gradient descent for spin glasses, and stochastic gradient descent for deep learning.\nPercy Deift and Thomas Trogdon. Universality for eigenvalue algorithms on sample covariance matrices. arXiv Preprint arXiv:1701.01896, pp. 1-31, 2017.\nA bounded, piecewise differentiable random field (See [Adler & Taylor (20o9) for an account or the connection of random fields and geometry), where the randomness is non-degenerate, yields a landscape with many saddle points and local minima. Given such a landscape and a moving particle that takes steps to reach a low-energy level, an essential quantity is the time the particle takes until i1 stops which we call the halting time. Many useful bounds on the halting time are known for convex cases, where the stopping condition produces a halting time that is, essentially, the time to finc the minimum. In non-convex cases, however, the particle knows only the information that can be calculated locally. 
And a locally measurable stopping condition, such as the norm of the gradient at the present point, or the difference in altitude with respect to the previous step, can lead the algorithn to locate a local minimum. This feature allows the halting time to be calculated in a broad range of non-convex, high-dimensional problems. A prototypical example of such a random field is the class of polynomials with random coefficients. Spin glasses and deep learning cost functions are then special cases of such fields that yield different landscapes. Polynomials with random coefficients are not only a broad class of functions, but also they are hard to study mathematically in any generality Therefore, in order to capture essential features of such problems, we focus on their subclasses tha1 are well studied (spin glasses) and practically relevant (deep learning cost functions).\nPercy Deift, Govind Menon, Sheehan Olver, and Thomas Trogdon. Universality in numerical com putations with random data. Proceedings of the National Academy of Sciences, 111(42):14973- 14978, 2014.\nPercy Deift, Govind Menon, and Thomas Trogdon. On the condition number of the critically-scalec laguerre unitary ensemble. arXiv preprint arXiv:1507.00750, 2015.\nMagnus Rudolph Hestenes and Eduard Stiefel. Method of Conjugate Gradients for solving Linea. Systems. J. Res. Nat. Bur. Stand., 20:409-436, 1952\nThe halting time in such landscapes, when normalized to mean zero and variance one (subtracting the mean and dividing by the standard deviation), appears to follow a distribution that is independent of the input data, in other words it follows a universal distribution: the fuctuations are universal. In statistical mechanics, the term \"universality' is used to refer to a class of systems which, on a certain macroscopic scale, behave statistically the same while having different statistics on a micro- scopic scale. An example of such a law is the central limit theorem, which states that the sums of observations tend to follow the same distribution independent of the distribution of the individual observations, as long as contribution from individual observations is reasonably small. It may fail to hold, if the microscopic behavior is not independent, does not have a finite second-moment, or if we consider something different than the sum. This work's focus is an attempt to put forward the cases\nJason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent converge to minimizers. University of California, Berkeley, 1050:16, 2016\nWe thank Percy Deift for valuable discussions and Gerard Ben Arous for his mentorship throughout. the process of this research. The first author thanks very much to Ugur Guney for his availability for support and valuable contributions in countless implementation issues. This work was partially supported by the National Science Foundation under grant number DMS-1303018 (TT).."}, {"section_index": "3", "section_name": "Yann LeCun", "section_text": "YammDeoun Computer Science Departme New York University. 001"}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "Percy Deift and Thomas Trogdon. Universality for the Toda algorithm to compute the eigenvalues of a random matrix. arXiv Prepr. arXiv1604.07384, apr 2016. URLhttp://arxiv.org/ abs/1604.07384\nAnne Greenbaum. Behavior of slightly perturbed lanczos and conjugate-gradient recurrences. Lin ear Algebra and its Applications. 113:7-63. 1989\nMoritz Hardt, Benjamin Recht, and Yoram Singer. 
Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.\nEric Kostlan. Complexity theory of numerical linear algebra. Journal of Computational and Appliea Mathematics, 22(2):219-230, 1988.\nwhere we see universality. But in this spirit, we show a degenerate case in which halting time fail. to follow a universal law.\nA rather surprising example of halting time universality is in the cases of observed human decision. times and Goog1eTM query times. In Bakhtin & Correll (2012) the time it takes a person make a. decision in the presence of visual stimulus is shown to have universal fluctuations. The theoretically predicted curve in this experiment follows a Gumbel-like distribution. In addition, we randomly sampled words from two different dictionaries and submitted search queries. The time it takes Google to present the results are recorded. The normalized search times closely follow the same. Gumbel-like curve.\nLevent Sagun, V Ugur Guney, Gerard Ben Arous, and Yann LeCun. Explorations on high dime. sional landscapes. arXiv preprint arXiv:1412.6615, 2014"}, {"section_index": "5", "section_name": "APPENDIX", "section_text": "Asymptotic error bounds for loss functions have been useful in the study of convergence properties of various models under various algorithms, for instance, at the heart of[Hardt et al.[(2015) and Lee et al.[(2016) lies a bound that depends largely on the number of steps. Such bounds, even when they are tight, hold asymptotically. The finite time behaviour may be less pessimistic and it may prove to be useful in many practical concerns. For example, assuming the assumptions for a possible bound of an asymptotic nature could give results that are a lot more pessimistic."}, {"section_index": "6", "section_name": "EFFECTS OF VARYING ACCURACY IN OPTIMIZATION", "section_text": "In Figure 6] we plot ensemble averages of efficiency versus accuracy for different e's. A sharp. plateau in the accuracy is seen, indicating that the extra computation for small values of e is unnec essary. In MNIST and the spin glass example, the extra computation does not come with a gain in. performance.\nIn the spin glass setting, the floor value gives a natural bound on the value that the Hamiltonian car. practically reach. That value is above the ground state at an energy level where most local minima lie. This level presents a natural barrier for an algorithm like the gradient descent. Therefore a natural measure of performance at the point w* is H(w*)/(floor value). In MNIST, performance is the percentage of correct guesses in the test."}, {"section_index": "7", "section_name": "NORMALIZED-MOMENT ANALYSIS", "section_text": "In the cases we observe, we find two main universality classes: (1) A Gumbel-like distribution thai appears in Google searches, human decision times, QR factorization and spin glasses, and (2) a Gaussian-like distribution that appears in conjugate gradient algorithm and deep learning. To the best of our knowledge, our work along with the accompanying references in this introduction are the first ones to address the question of observing and classifying the distribution of the halting time\nWe use the normalized third and fourth moments of the data, also referred to as the skewness and kurtosis, to identify which class the distributions belong to. 
Note that the first and second moments are zero and one since the data is normalized. Intuitively, in gradient based methods, the halting time is affected by the curvature of the surface, and the curvature of the surface describes the landscape along the path of decay. The Gaussian-like behavior of the halting time in MNIST might allow us to speculate that it has a funnel-like non-convex landscape rather than a glassy landscape. This observation is consistent with Sagun et al. (2014) in its landscape exploration for spin glasses and deep learning.

1.1 DEFINITION OF UNIVERSALITY

Definition 1.1. An algorithm A consists of both a random cost function F(x, w), where x is a given random input, and an optimization routine that seeks to minimize F with respect to w.

To each algorithm we attach a precise ε-dependent halting criterion. The halting time, which is a random variable, is the time it takes to meet this criterion. Within each algorithm there must be an intrinsic notion of dimension, which we denote by N. The halting time T_{ε,N,A,E} depends on ε, N, the choice of algorithm A, and the ensemble E (or probability distribution). We use the empirical distribution of T_{ε,N,A,E} to provide heuristics for understanding the qualitative performance of the algorithms.

The presence of universality in an algorithm is the observation that for sufficiently large N and ε = ε(N), the halting time random variable satisfies

\tau_{\epsilon,N,A,E} := (T_{\epsilon,N,A,E} - \mathbb{E}[T_{\epsilon,N,A,E}]) / \sqrt{\mathrm{Var}(T_{\epsilon,N,A,E})} \approx \tau^*,    (1)

where τ* is a continuous random variable that depends only on the algorithm. The random variable τ_{ε,N,A,E} is referred to as the fluctuations, and when such an approximation appears to be valid we say that N and ε (and any other external parameters) are in the scaling region. For example, in Section 1.2, A is the QR eigenvalue algorithm, N is the size of the matrix, ε is a small tolerance, and E is given by a distribution on complex Hermitian (or real symmetric) matrices.

(Figure 1 plot omitted: histograms of normalized search times for English and Turkish words against the curve f_BC; x-axis: halting time (search time) fluctuations, y-axis: frequency.)

Figure 1: Search times of randomly selected words from two ensembles are compared with the curve f_BC in Bakhtin & Correll (2012) that is estimated from the decision times in an experiment conducted on humans. It is evident that more observations have yet to be made in identifying the underlying principles of the algorithms that are increasingly part of our life.

To give some context, we discuss the universality in the solution of the eigenvalue problem with the classical QR algorithm. Historically, this was first noticed in Pfrang et al. (2014). In this example the fundamental object is the QR factorization (Q, R) = QR(A), where A = QR, Q is orthogonal (or unitary) and R is upper-triangular with positive diagonal entries. The QR algorithm applied to a Hermitian N × N matrix A is given by the iteration

A_0 := A,  (Q_j, R_j) := QR(A_j),  A_{j+1} := R_j Q_j.

Generically, A_j → D as j → ∞, where D is a diagonal matrix whose diagonal entries are the eigenvalues of A. The halting time in Pfrang et al. (2014) was set to be the time of first deflation, T_{ε,N,QR,E}(A), defined
as:

T_{\epsilon,N,QR,E}(A) := \min\{ j : \sqrt{N(N-k)}\, \|A_j(k+1\!:\!N,\, 1\!:\!k)\|_\infty < \epsilon \text{ for some } 1 \le k \le N-1 \}.

Here ‖A‖_∞ refers to the maximum entry of the matrix A in absolute value, and the notation A(i:j, k:l) refers to the submatrix of A consisting of entries only in rows i, i+1, ..., j and in columns k, k+1, ..., l. Thus the halting time for the QR algorithm is the time at which at least one off-diagonal block is appropriately small. Next, we have to discuss choices for the randomness, or ensemble E, by choosing different distributions on the entries of A. Four such choices of ensembles are the Bernoulli ensemble (BE), the Gaussian orthogonal ensemble (GOE), the Gaussian unitary ensemble (GUE) and the quartic unitary ensemble (QUE):

BE: A is real symmetric with iid Bernoulli ±1 entries on and below the diagonal.
GOE: A is real symmetric with iid standard normal entries below the diagonal. The entries on the diagonal are iid normal with mean zero and variance two.
GUE: A is complex Hermitian with iid standard complex normal entries below the diagonal. The entries on the diagonal are iid complex normal with mean zero and variance two.
QUE: A is complex Hermitian with probability density proportional to e^{-tr A^4} dA. See Deift (2000) for details on such an ensemble and Olver et al. (2015) for a method to sample such a matrix. Importantly, the entries of the matrix below the diagonal are correlated.

Here we have continuous and discrete, real and complex, and independent and dependent ensembles, but nevertheless we see universality in Figure 2, where we take N = 150 and ε = 10^{-10}.

Some remarks must be made.

- A statement like (1) is known to hold rigorously for a few algorithms (see Deift & Trogdon (2016; 2017)), but in practice it is verified experimentally. This was first done in Pfrang et al. (2014) and expanded in Deift et al. (2014) for a total of eight different algorithms.
- The random variable τ* depends fundamentally on the functional form of F, and we only expect (1) to hold for a restricted class of ensembles E.
- T_{ε,N,A,E} is an integer-valued random variable. For it to become a continuous distribution, a limit must be taken. This is the only reason N must be large; in practice, the approximation in (1) is seen even for small to moderate N.

Universality in this sense is a measure of stability in an algorithm. For example, it is known from the work of Kostlan (1988) that the halting time for the power method to compute the largest eigenvalue (in modulus) of symmetric Gaussian matrices has infinite expectation, and hence this type of universality is not believed to be present. One could use this to conclude that the power method is a naive method for these matrices. Yet, it is known that the power method is much more efficient on positive-definite matrices, where universality can be shown (Deift & Trogdon, 2017). Therefore, we have evidence that the presence of universality is a desirable feature of a numerical method.

(Figure 6 plots omitted: (a) test performance of gradient descent on the spin glass model versus the average number of steps to reach ε, for N = 75 and N = 400 and several values of ε; (b) MNIST accuracy versus the average number of steps to reach the ε accuracy, for a fully connected network and a convnet.)

Figure 6: (a) The norm of the gradient varies from 5 to 0.01 for the spin glass. (b) Averages of consecutive costs on MNIST vary from 0.6 to 0.005.
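The QR iteration together with its deflation-based halting time can be sketched as follows (a minimal sketch of our own; the GOE sampling at the bottom and the iteration cap are illustrative choices, not the paper's code):

import numpy as np

def qr_halting_time(A, eps=1e-10, max_iter=10**6):
    # Run the QR iteration A <- RQ and return the step of first deflation,
    # i.e. the first j at which some off-diagonal block is small enough.
    A = A.copy()
    N = A.shape[0]
    for j in range(1, max_iter + 1):
        Q, R = np.linalg.qr(A)
        A = R @ Q
        for k in range(1, N):
            block = A[k:, :k]                # rows k+1..N, columns 1..k
            if np.sqrt(N * (N - k)) * np.abs(block).max() < eps:
                return j
    return max_iter

# One sample from the GOE ensemble: iid standard normals below the
# diagonal, diagonal entries with mean zero and variance two.
N = 150
G = np.random.randn(N, N)
L = np.tril(G, -1)
A = L + L.T + np.diag(np.sqrt(2) * np.random.randn(N))
print(qr_halting_time(A))

Repeating the last four lines many times and feeding the resulting halting times to the moment routine above reproduces, under these assumptions, the kind of histogram shown in Figure 2.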
Table 1: Skewness (3rd moment) and kurtosis (4th moment) for the experiments: (1) In the M = N + 2⌊√N⌋ scaling it is clear that these normalized moments nearly coincide, and they are quite distinct for M = N. (2) Gumbel-like distribution in spin glasses and QR. (3) Gaussian-like distribution, with a flat left tail, for deep learning.

MODEL              | ENSEMBLE  | MEAN | ST.DEV. | 3RD   | 4TH
CG: M = N          | LOE       | 970  | 164     | 5.1   | 35.2
CG: M = N          | LUE       | 921  | 46      | 15.7  | 288.5
CG: M = N + 2⌊√N⌋  | LOE       | 366  | 13      | 0.08  | 3.1
CG: M = N + 2⌊√N⌋  | LUE       | 367  | 9       | 0.07  | 3.0
CG: M = N + 2⌊√N⌋  | PBE       | 365  | 13      | 0.08  | 3.0
Spin glass         | Gaussian  | 192  | 79.7    | 1.10  | 4.58
Spin glass         | Bernoulli | 192  | 80.2    | 1.10  | 4.56
Spin glass         | Uniform   | 193  | 79.6    | 1.10  | 4.54
QR                 | BE        | 26   | 15      | 1.18  | 4.77
QR                 | GOE       | 24   | 14      | 1.17  | 4.78
QR                 | GUE       | 22   | 12      | 1.04  | 4.32
QR                 | QUE       | 22   | 12      | 1.02  | 4.16
Fully connected    | MNIST     | 2929 | 106     | -0.32 | 3.24
Fully connected    | Random    | 4223 | 53      | -0.08 | 2.98
Convnet            | MNIST     | 2096 | 166     | -0.11 | 3.18
Cond. on gradient  | MNIST     | 3371 | 118     | -0.34 | 3.31

Figure 2: Empirical histograms for the halting time fluctuations τ_{ε,N,QR,E} when N = 150, ε = 10^{-10} for various choices of ensembles E. This figure shows four normalized histograms, one each for E = BE, GOE, GUE and QUE. It is clear that the fluctuations follow a universal law.

Remark 1.1. The ensembles discussed above (GOE, GUE, BE and QUE) exhibit eigenvalue repulsion. That is, the probability that two eigenvalues are close¹ is much smaller than if the locations of the eigenvalues were just given by iid points on the line. It turns out that choosing a random matrix with iid eigenvalues breaks the universality that is observed in Figure 2. See Pfrang et al. (2014) for a more in-depth discussion of this.

Remark 1.2. To put the QR algorithm in the framework, let B = UAU* and define F(A, U) by

\min\{ j : \sqrt{N(N-k)}\, \|B(k+1\!:\!N,\, 1\!:\!k)\|_\infty < \epsilon \text{ for some } 1 \le k \le N-1 \}.

We then use the QR algorithm to minimize F with respect to unitary matrices U using the initial condition U = I. If A is random, then F(A, U) represents a random field on the unitary group.

A natural class of random fields is the class of Gaussian random functions on a high-dimensional sphere, known as p-spin spherical spin glass models in the physics literature (in the Gaussian process literature they are known as isotropic models). From the point of view of optimization, minimizing the spin glass model's Hamiltonian is fruitful because a lot is known about its critical points. This allows us to experiment with questions regarding whether the local minima and saddle points, due to the non-convex nature of landscapes, present an obstacle in the training of a system. Such observations on the Hamiltonian do not imply that it is a cost function or a simplified version of a cost function. Rather, the features that both systems have in common hint at a deeper underlying structure that needs to be discovered.

In recent years, Dauphin et al. (2014) attacked the saddle point problem of non-convex optimization within deep learning. In contrast, Sagun et al. (2014) and the experimental second section of Choromanska et al. (2014) jointly argue that if the system is large enough, the presence of saddle points is not an obstacle, and add that the local minimum practically gives a good enough solution within the limits of the model. However, Sagun et al. (2014) and Choromanska et al. (2014) hold different perspectives on what the qualitative similarities between optimization in spin glasses and deep learning might imply. The latter asserts a direct connection between the two systems based on these similarities.
On the contrary, the former argues that these similarities hint at universal behaviors that are generically observed in vastly different systems, rather than emphasizing a direct connection.

¹By close we mean that their distance is much less than O(1/N), where N is the size of the matrix.

(Figure 2 plot omitted: "Universal scaling, N = 150"; normalized histograms of halting time fluctuations for BE, GOE, GUE and QUE; see the Figure 2 caption above.)

The two functions are indeed different in two major ways. First, the domain of the Hamiltonian is a compact space and the couplings are independent Gaussian random variables, whereas the inputs for (2) are not independent and the cost function has a non-compact domain. Second, at a fixed point w, the variance of the function L_Train(w) is inversely proportional to the number of samples, but the variance of H_N(w) is N. As a result, a randomly initialized Hamiltonian can take vastly different values, but a randomly initialized cost tends to have very similar values. The Hamiltonian has macroscopic extensive quantities: its minimum scales with a negative constant multiple of N. In contrast, the minimum of the cost function is bounded from below by zero. All of this indicates that landscapes with different geometries (glass-like, funnel-like, or another geometry) might still lead to similar phenomena, such as the existence of the floor level and the universal behavior of the halting time.

1.4 SUMMARY OF RESULTS

We discuss the presence of universality in algorithms that are of a very different character. The conjugate gradient algorithm, discussed in Section 2.1, effectively solves a convex optimization problem. Gradient descent applied in the spin glass setting (discussed in Section 2.2) and stochastic gradient descent in the context of deep learning (MNIST, discussed in Section 2.3) are much more complicated non-convex optimization processes. Despite the fact that these algorithms share very little geometry in common, we demonstrate three things they share:

- A scaling region in which universality appears and performance is good.
- Regions where the computation is either ineffective or inefficient.
- A moment-based indicator for finding the universality class.

²The 2-spin spherical spin glass, a sum of x_{ij} w_i w_j terms, has exactly 2N critical points. When p ≥ 3, the p-spin model has exponentially many critical points with respect to N. For the latter case, complexity is a measure of the number of critical points on an exponential scale. Deep learning problems are suspected to be complex in this sense.

In line with the asymptotic proof in Auffinger et al. (2013), the local minima are observed to lie roughly at the same energy level in spherical spin glasses. Auffinger et al. (2013) also gives asymptotic bounds on the value of the ground state and the exponential behavior of the average number of critical points below a given energy level. It turns out that, when the dimension is large, the bulk of the local minima tend to have the same energy, which is slightly above the global minimum. This level is called the floor level of the function. Simulations of the floor in spin glasses can be found in Sagun et al. (2014). Sagun et al. (2014) also exhibits a floor in a specially designed MNIST experiment: a student network is trained on the outputs of a pre-trained teacher network. Zero cost is achievable by the student, but stochastic gradient descent cannot find zeros.
It also does not have to, because the floor level already gives a decent performance.

Given data (i.e., from MNIST) and a measure L(x^l, w) for determining the cost that is parametrized by w ∈ R^N, the training procedure aims to find a point w* that minimizes the empirical training cost while keeping the test cost low. Here x^l ∈ Z for l ∈ {1, ..., S}, where Z is a random (ordered) sample of size S from the training examples. The total training cost is given by

F(Z, w) = L_{\mathrm{Train}}(w) = \frac{1}{S} \sum_{l=1}^{S} L(x^l, w).    (2)

The p-spin spherical spin glass Hamiltonian (here with p = 3) is given by

F(x, w) = H_N(w) = \frac{1}{N} \sum_{i,j,k}^{N} x_{ijk}\, w_i w_j w_k.    (3)

For the conjugate gradient algorithm applied to solve Ax = b, the cost function is

F(A, y) = \tfrac{1}{2}\, y^* A y - y^* b,

where * denotes the conjugate-transpose operation³. Given an initial guess x_0 (we use x_0 = b), compute r_0 = b − A x_0 and set p_0 = r_0. For k = 1, ..., N:

1. Compute r_k = r_{k−1} − a_{k−1} A p_{k−1}, where a_{k−1} = (r_{k−1}, r_{k−1})/(p_{k−1}, A p_{k−1}).
2. Compute p_k = r_k + b_{k−1} p_{k−1}, where b_{k−1} = (r_k, r_k)/(r_{k−1}, r_{k−1}).
3. Compute x_k = x_{k−1} + a_{k−1} p_{k−1}.

If A is strictly positive definite, x_k → x = A^{−1} b as k → ∞. Geometrically, the iterates x_k are the best approximations of x over larger and larger affine Krylov subspaces K_k:

K_k = x_0 + \mathrm{span}\{r_0, A r_0, \ldots, A^{k-1} r_0\}, \qquad \|x_k - x\|_A = \min_{y \in K_k} \|y - x\|_A,

as k ≤ N, where ‖y‖_A := √(y* A y). The quantity one monitors over the course of the conjugate gradient algorithm is the norm ‖r_k‖:

T_{\epsilon,N,CG,E}(A, b) := \min\{ k : \|r_k\| < \epsilon \}.

In exact arithmetic, the method takes at most N steps. In calculations with finite-precision arithmetic the number of steps can be much larger than N, and the behavior of the algorithm in finite-precision arithmetic has been the focus of much research (Greenbaum, 1989; Greenbaum & Strakos, 1992). What is important for us here is that it may happen that ‖r_k‖ < ε but the true residual r̂_k := b − A x_k (which typically differs from r_k in finite-precision computations) satisfies ‖r̂_k‖ ≥ ε.

³We use the notation ‖y‖² = (y, y) for y = (y_1, ..., y_N) ∈ C^N.

(Figure 3 plots omitted: (a) "Universal scaling, N = 500": histograms of halting time fluctuations for LOE, LUE and PBE; (b) "Degenerate scaling, N = 500": histograms for LOE and LUE.)

Figure 3: Empirical histograms for the halting time fluctuations τ_{ε,N,CG,E} when N = 500, ε = 10^{-10} for various choices of ensembles E. (a) The scaling M = N + 2⌊√N⌋, demonstrating the presence of universality. This plot shows three histograms, one each for E = LUE, LOE and PBE. (b) The scaling M = N, showing two histograms for E = LUE and LOE and demonstrating the non-existence of universality.

Now, we discuss our choices for ensembles E of random data. In all computations, we take b = (b_j)_{1≤j≤N}, where each b_j is iid uniform on (−1, 1). We construct positive definite matrices A by A = XX*, where X = (X_ij)_{1≤i≤N, 1≤j≤M} and each X_ij ∼ D is iid for some distribution D. We make the following three choices for D, giving the positive definite Bernoulli ensemble (PBE), the Laguerre orthogonal ensemble (LOE) and the Laguerre unitary ensemble (LUE):

PBE: D is a Bernoulli ±1 random variable (each with equal probability).
LOE: D is a standard normal random variable.
LUE: D is a standard complex normal random variable.

The choice of the integer M, which is the inner dimension of the matrices in the product XX*, is critical for the existence of universality. In Deift et al. (2014) and Deift et al. (2015) it is demonstrated that universality is present when M = N + c√N and the ε-accuracy is small but fixed. Universality is not present when M = N, and this can be explained by examining the distribution of the condition number of the matrix A in the LUE setting (Deift et al., 2015).
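The conjugate gradient halting time can be sketched as follows (a minimal sketch of our own; the LOE sampling and the normalization of A by M are illustrative assumptions, not the paper's code):

import numpy as np

def cg_halting_time(A, b, eps=1e-10, max_iter=10**5):
    # Standard conjugate gradient; returns min{k : ||r_k|| < eps}.
    x = b.copy()                    # initial guess x0 = b, as in the text
    r = b - A @ x
    p = r.copy()
    for k in range(1, max_iter + 1):
        Ap = A @ p
        a = (r @ r) / (p @ Ap)
        x = x + a * p
        r_new = r - a * Ap
        if np.linalg.norm(r_new) < eps:
            return k
        b_coef = (r_new @ r_new) / (r @ r)
        p = r_new + b_coef * p
        r = r_new
    return max_iter

N = 500
M = N + 2 * int(np.sqrt(N))         # the scaling region M = N + 2*floor(sqrt(N))
X = np.random.randn(N, M)           # LOE: iid standard normal entries
A = (X @ X.T) / M                   # positive definite; normalization assumed
b = np.random.uniform(-1, 1, N)
print(cg_halting_time(A, b))

Changing M back to N in this sketch moves the computation out of the scaling region, which is exactly the degenerate case of Figure 3(b).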
We demonstrate this again in Figure 3(a). We also demonstrate that universality does indeed fail for M = N in Figure 3(b).

The gradient descent algorithm for the Hamiltonian of the p-spin spherical glass will find a local minimum of the non-convex function (3). Since the variance of H_N(w) is typically of order N, a local minimum has size N. More precisely, by Auffinger et al. (2013), the energy of the floor level where most of the local minima are located is asymptotically at −2√(2/3) N ≈ −1.633N, and the ground state is around −1.657N. The algorithm starts by picking a random element w of the sphere of radius √N, S^{N−1}(√N), as a starting point for each trial. We vary the environment for each trial and introduce ensembles by setting x ∼ D for a number of choices of distributions. For a fixed dimension N, an accuracy ε that bounds the norm of the gradient, and an ensemble E: (1) calculate the gradient step w^{t+1} = w^t − η_t ∇_w H_N(w^t); (2) normalize the resulting vector to the sphere, w^{t+1} ← √N · w^{t+1}/‖w^{t+1}‖. The halting time T_{ε,N,GD,E} is the first step at which the norm of the gradient falls below ε. This procedure is repeated 10,000 times for different ensembles (i.e., different choices for D). Figure 4 exhibits the universal halting time, presenting evidence that τ_{ε,N,GD,E} is independent of the ensemble.

(Figure 4 plot omitted: "Universal scaling, N = 400"; histograms of halting time fluctuations for the Gaussian, Bernoulli and uniform ensembles.)

Figure 4: Universality across different distributions: we choose D ∼ Gaussian(0, 1), D uniform on (−(3/2)^{1/3}, (3/2)^{1/3}) and D ∼ Bernoulli ±1/2 with equal probability.

A deep learning cost function is trained on two drastically different ensembles. The first is the MNIST dataset, which consists of 60,000 training examples and 10,000 test examples. The model is a fully connected network with two hidden layers that have 500 and 300 units, respectively. Each hidden unit has a rectified linear activation, and a cross-entropy cost is attached at the end. To randomize the input data we sample 30K examples from the training set each time we set up the model and initialize the weights randomly. Then we train the model by the stochastic gradient descent method with a minibatch size of 100. This model gets us about 97% accuracy without any further tuning. The second ensemble uses the same model and outputs, but the input data is changed from characters to independent Gaussian noise. This model, as expected, gets us only about 10% accuracy: it randomly picks a number! The stopping condition is reached when the average of successive differences in cost values goes below a prescribed value. As a comparison we have also added a deep convolutional network (convnet), and we used the fully connected model with a different stopping condition: one that is tied to the norm of the gradient. Figure 5 demonstrates universal fluctuations in the halting time in all of the four cases.

(Figure 5 plot omitted: "Universality in fully connected network with SGD"; histograms of halting time fluctuations for fully connected MNIST, fully connected random input, MNIST on a convnet, and MNIST with the gradient-norm stopping condition.)

Figure 5: Universality in the halting time for deep learning cost functions. MNIST digit inputs and independent Gaussian noise inputs give rise to the same halting time fluctuations, as well as a convnet with a different stopping condition.
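Returning to the spin glass experiment described above, the gradient descent with renormalization to the sphere can be sketched as follows (a minimal sketch of our own; the learning rate and tolerance values are assumptions, not the paper's settings):

import numpy as np

N, eta, eps = 100, 0.05, 1.0
x = np.random.randn(N, N, N)            # Gaussian ensemble couplings x_{ijk}

w = np.random.randn(N)
w *= np.sqrt(N) / np.linalg.norm(w)     # random start on S^{N-1}(sqrt(N))

def grad_H(w):
    # Gradient of H_N(w) = N^{-1} sum_{ijk} x_{ijk} w_i w_j w_k,
    # obtained by contracting the coupling tensor over two indices.
    g1 = np.einsum('ijk,j,k->i', x, w, w)
    g2 = np.einsum('ijk,i,k->j', x, w, w)
    g3 = np.einsum('ijk,i,j->k', x, w, w)
    return (g1 + g2 + g3) / N

steps = 0
g = grad_H(w)
while np.linalg.norm(g) >= eps:         # halting criterion on the gradient norm
    w = w - eta * g
    w *= np.sqrt(N) / np.linalg.norm(w) # project back to the sphere
    g = grad_H(w)
    steps += 1
print(steps)                            # one sample of the halting time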
3 CONCLUSIONS

What are the conditions on the ensembles and the model that lead to such universality? What constitutes a good set of hyperparameters for a given algorithm? How can we go beyond inspection when tuning a system? How can we infer whether an algorithm is a good match to the system at hand? What is the connection between the universal regime and the structure of the landscape? This research attempts to exhibit cases where one can extract answers to these questions in a robust and quantitative way. The examples we have presented clearly exhibit universality. The normalized-moment analysis, presented in the Appendix, gives a quantitative way to test for universality. And we further believe that an algorithm that exhibits universality is running in a scaling region of "high performance": universality is a measure of insensitivity to initial data, which is a beneficial property of a numerical method. Establishing this claim is a difficult task, beyond the scope of this primarily empirical work.

More specifically, the current work gives empirical evidence that within an appropriate scaling region, the halting time can often be approximated as

T_{\epsilon,N,A,E} \approx \mu + \sigma \tau^*,

where τ* is a mean-zero, variance-one universal distribution. If this holds, a simple estimate of the mean μ = μ_{ε,N,A,E} and the standard deviation σ = σ_{ε,N,A,E} using a few samples will give a good a priori estimate of the algorithm's run time. Moreover,

P(|T_{\epsilon,N,A,E} - \mu| \ge \sigma l) \approx P(|\tau^*| \ge l).

If τ* has (or is just conjectured to have) exponential tails, for example, this can be quite useful.

This work also validates the broad claims made in Deift et al. (2015) that universality is present in all, or nearly all, (sensible) computation. Future work will be along the lines of using these heuristics to identify when we have universality, to identify the different kinds of landscapes, and to guide both algorithm development and algorithm tuning. Furthermore, one would like theoretical estimates for the mean μ_{ε,N,A,E} and the standard deviation σ_{ε,N,A,E}.

Yann LeCun
Computer Science Department, New York University

ACKNOWLEDGMENTS

We thank Percy Deift for valuable discussions and Gerard Ben Arous for his mentorship throughout the process of this research. The first author is grateful to Ugur Guney for his availability for support and his valuable contributions on countless implementation issues. This work was partially supported by the National Science Foundation under grant number DMS-1303018 (TT).

REFERENCES

Percy Deift, Govind Menon, Sheehan Olver, and Thomas Trogdon. Universality in numerical computations with random data. Proceedings of the National Academy of Sciences, 111(42):14973-14978, 2014.

Percy Deift, Govind Menon, and Thomas Trogdon. On the condition number of the critically-scaled Laguerre unitary ensemble. arXiv preprint arXiv:1507.00750, 2015.

Percy Deift and Thomas Trogdon. Universality for the Toda algorithm to compute the eigenvalues of a random matrix. arXiv preprint arXiv:1604.07384, 2016. URL http://arxiv.org/abs/1604.07384.

Anne Greenbaum. Behavior of slightly perturbed Lanczos and conjugate-gradient recurrences. Linear Algebra and its Applications, 113:7-63, 1989.

Moritz Hardt, Benjamin Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240, 2015.

Magnus Rudolph Hestenes and Eduard Stiefel. Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand., 49:409-436, 1952.

Eric Kostlan. Complexity theory of numerical linear algebra. Journal of Computational and Applied Mathematics, 22(2):219-230, 1988.

Jason D. Lee, Max Simchowitz, Michael I. Jordan, and Benjamin Recht. Gradient descent converges to minimizers. University of California, Berkeley, 1050:16, 2016.

Levent Sagun, V. Ugur Guney, Gerard Ben Arous, and Yann LeCun. Explorations on high dimensional landscapes. arXiv preprint arXiv:1412.6615, 2014.
[{"section_index": "0", "section_name": "COARSE PRUNING OF CONVOLUTIONAL NEURAI NETWORKS WITH RANDOM MASKS", "section_text": "Sajid Anwar, Wonyong Sung\nDepartment of Electrical Engineering and Computer Science Seoul National University\nsaiid@dsp.snu.ac.kr, wysung@snu.ac.kr\nThe learning capability of a neural network improves with increasing depth at higher computational costs. Wider layers with dense kernel connectivity patterns further increase this cost and may hinder real-time inference. We propose feature map and kernel pruning for reducing the computational complexity of a deep con- volutional neural network. Due to coarse nature, these pruning granularities can be exploited by GPUs and VLSI based implementations. Further, we propose a simple strategy to choose the least adversarial pruning masks. The proposed ap- proach is generic and can select good pruning masks for feature map, kernel and intra-kernel pruning. The pruning masks are generated randomly, and the best performing one is selected using the validation set. The sufficient number of ran- dom pruning masks to try depends on the pruning ratio, and is less than 100 when 40% complexity reduction is needed. Once the least adversarial pruning mask is selected, we prune and retrain the network in one-shot. The proposed approach therefore consumes less time compared to iterative pruning. We have extensively evaluated the proposed approach with the CIFAR-100, CIFAR-10, SVHN, and MNIST datasets. Experiments show that 60-70% sparsity can be induced in the convolution layers with less than 1% increase in the misclassification rate of the baseline network.\nFigure 8: The pruning plots for 100 class classification problem is reported in (a). It can be observec that this network can be pruned by more than 60% with very small degradation in the network performance. Figure (b) shows the pruning results for the CNNsvhn. It can be observed that more than 70% sparsity can be induced in the network while the network accuracy still remains above 96%.\nWe further conducted experiments on the CNNc1FAR1o.large and the corresponding plots are shown in Fig. 7b]The CNNc1FAR10.large is much wider and deeper than the CNNsmall as reported in Table 1. Therefore there are more chances of redundancy and hence more room for pruning. Further we observe similar trends as C N Nc1FAR1o.small where the kernel pruning can be induced in higher ratios compared to the feature map pruning. When the kernel pruning is applied to the feature map pruned network, we can achieve more than 88% sparsity in the Conv2 - Conv7 of the CN Nc1FAR1o.large network. This way we show that our proposed technique has good scal- ability. These results are in conformity to the resiliency analysis of fixed point deep neural networks Sung et al."}, {"section_index": "1", "section_name": "4.2 CIFAR-100", "section_text": "Deep and wider neural networks have the capacity to learn a complex unknown function from the training data. The network reported inDean et al.(2012) has 1.7 billion parameters and is trained on tens of thousands of CPU cores. Similarly (Simonyan & Zisserman2014) has employed 11-19 layers and achieved excellent classification results on the ImageNet dataset. However, the increasing depth and width demands higher computational power. This high computational complexity is a major obstacle in porting the benefits of deep learning to resource limited devices. Further, the hot- spot for optimization are the convolution layers, as most of the computations are conducted there. 
Therefore, many researchers have proposed ideas to accelerate deep networks for real-time inference (Yu et al., 2012; Han et al., 2015b;a; Mathieu et al., 2013; Anwar et al., 2015b).

Network pruning is one promising technique that first learns a function with a sufficiently large sized network, followed by removing less important connections (Yu et al., 2012; Han et al., 2015b; Anwar et al., 2015b). This enables smaller networks to inherit knowledge from the large-sized predecessor networks and exhibit a comparable level of performance. The works of Han et al. (2015b;a) introduce fine-grained sparsity in a network by pruning scalar weights. Due to the unstructured sparsity, the authors employ compressed sparse row/column (CSR/CSC) for sparse representation. Thus the fine-grained irregular sparsity cannot be easily translated into computational speedups.

4.2 CIFAR-100

The CIFAR-100 dataset has 50,000 images classified into 100 fine and 20 coarse labels. The dataset has 50,000 training and 10,000 test set images. The hundred-class classification problem of CIFAR-100 has 500 images for each class. We construct a validation set for learning rate scheduling during training. The validation set is constructed with 100 samples for each class from the training set. This way we are left with 400 samples per class for training. We train the network with 40,000 images with data augmentation and batch normalization (Ioffe & Szegedy, 2015). We obtain a baseline misclassification rate (MCR) of 33.65% on the CIFAR-100 test set with a VGG-styled network. The network architecture is reported in Table 1 as CNN_CIFAR100.

The pruning plots for this dataset are provided in Fig. 8a. It can be observed that around 60% of the network parameters can be pruned with less than 1% (absolute) increase in the misclassification rate. Moreover, pruning with the two granularities in combination further improves the pruning ratios. Thus the lessons learnt generalize well to other datasets.

The SVHN dataset consists of 32 × 32 × 3 cropped images of house numbers (Netzer et al., 2011) and bears similarity to the MNIST handwritten digit recognition dataset (LeCun et al., 1998). The classification is challenging, as more than one digit may appear in a sample, and the goal is to identify the digit in the center of a patch. The dataset consists of 73,257 digits for training, 26,032 for testing and 531,131 extra for training. The extra set consists of easy samples and may augment the training set. We generate a validation set of 6,000 samples, which consists of 4,000 samples from the training set and 2,000 samples from the extra set (Sermanet et al., 2012). The network architecture is reported like this: (2×64C3)-MP2-(2×128C3)-MP2-(2×128C3)-512FC-512FC-10Softmax. This network is trained with batch normalization, and we achieve a baseline MCR of 3.5% on the test set. The corresponding pruning plots are reported in Fig. 8b. We can observe a similar trend, where kernels can be pruned at a bigger ratio compared to feature maps. More than 70% pruning ratio can be achieved in the reported network. Thus we show that the lessons learnt generalize well on various datasets.

(Figure 8 plots omitted; see the Figure 8 caption above. Panels: (a) CIFAR-100 CNN, baseline MCR 33.65%; (b) SVHN CNN, baseline MCR 3.5%, tolerance MCR 4.00%; axes: test-set MCR versus prune ratio, with feature map pruning, kernel pruning, and feature map followed by kernel pruning curves.)

Sparsity in a deep convolutional neural network (CNN) can be induced at various levels. Figure 1 shows four pruning granularities. At the coarsest level, a full hidden layer can be pruned. This is shown with a red colored rectangle in Fig. 1(a). Layer-wise pruning affects the depth of the network, and a deep network can be converted into a shallow network.
(Figure 1 sketch omitted: pruning granularities from coarse (left) to fine-grained (right): (a) layer-wise pruning (depth reduction), (b) feature map pruning (width reduction), (c) k × k kernel pruning, (d) intra-kernel pruning; sparse-representation complexity increases from left to right, and the pruning ratios inducible within an allowable budget (e.g., accuracy) also increase from left to right.)

Figure 1: (a-d) show four possible pruning granularities. The proposed work is focused on (b) feature map and (c) kernel pruning for simple sparse representation. It can be observed that for the depicted architecture in Fig. 1(b), four convolution kernels are pruned.

Increasing the depth improves the network performance, and layer-wise pruning therefore demands intelligent techniques to mitigate the performance degradation. The next pruning granularity is removing feature maps (Polyak & Wolf, 2015; Anwar et al., 2015b). Feature map pruning removes a large number of kernels and may degrade the network performance much. We therefore may not achieve high pruning ratios with this granularity. For the depicted architecture in Fig. 1(b), pruning a single feature map removes four kernels. Feature map pruning affects the layer width, and we directly obtain a thinner network; no sparse representation is needed. Kernel pruning is the next pruning granularity, and it prunes k × k kernels. It is neither too fine nor too coarse, as shown in Fig. 1(c). Kernel pruning is therefore a balanced choice, and it can change the dense kernel connectivity pattern to a sparse one. Each convolution connection involves W × H × k × k multiply and accumulate (MAC) operations, where W, H and k represent the feature map width, height and the kernel size, respectively. Further, the sparse representation for kernel pruning is also very simple.
A single flag is enough to represent one convolution connection. Generally, pruning techniques induce sparsity at the finest granularity by removing scalar weights. This sparsity can be induced at much higher rates, but high pruning ratios do not directly translate into computational speedups in VLSI or parallel computer based implementations (Han et al., 2015b). Figure 1(d) shows this with red colored zeroes in the kernel. Further, Fig. 1 summarizes the relationship between three related factors: the pruning granularities, the pruning ratios and the sparse representations. Coarse pruning granularities demand a very simple sparse representation, but higher pruning ratios are comparatively difficult to achieve. Similarly, fine-grained pruning granularities can achieve higher pruning ratios, but the sparse representation is more complicated. The proposed work therefore prunes feature maps and kernels in a network. Experimental evaluations show that better pruning results can be achieved when a network is pruned with both granularities successively.

In the literature, network pruning has been studied by several researchers (Han et al., 2015b;a; Yu et al., 2012; Castellano et al., 1997; Collins & Kohli, 2014; Stepniewski & Keane, 1997; Reed, 1993). Collins & Kohli (2014) have proposed a technique where irregular sparsity is used to reduce the computational complexity in convolutional and fully connected layers. However, they have not discussed how the sparse representation affects the computational benefits. The works of Han et al. (2015b;a) introduce fine-grained sparsity in a network by pruning scalar weights. If the absolute magnitude of any weight is less than a scalar threshold, the weight is pruned. This work therefore favors learning with small-valued weights and trains the network with the L1/L2-norm-augmented loss function. Due to pruning at very fine scales, they achieve excellent pruning ratios. However, this kind of pruning results in irregular connectivity patterns and demands a complex sparse representation for computational benefits. Convolutions are unrolled to matrix-matrix multiplication in Chellapilla et al. (2006) for efficient implementation. The work of Lebedev & Lempitsky (2015) also induces intra-kernel sparsity in a convolutional layer. Their target is efficient computation by unrolling convolutions as matrix-matrix multiplication. Their sparse representation is also not simple, because each kernel has an equally sized pruning mask. A recently published work proposes sparsity at a higher granularity and induces channel-level sparsity in a CNN network for a deep face application (Polyak & Wolf, 2015). The works of Castellano et al. (1997), Collins & Kohli (2014), Stepniewski & Keane (1997) and Reed (1993) utilize unstructured fine-grained sparsity in a neural network.

An important contribution of this work is proposing a simple and generic strategy for the selection of pruning masks. Finding pruning candidates is an important and difficult problem.

(Figure 9 plots omitted: per-class misclassification rates for classes 0-9 (digits) for the non-pruned, feature map pruned and kernel pruned networks, on the validation and test sets.)

Figure 9: This figure shows the per-class MCR for the original, feature map pruned, and kernel pruned networks. It can be observed that the per-class error does not vary much in the pruned networks. This shows that the pruning method is not biased towards a specific class. The feature map pruned network has 63.67% sparsity with MCR_Test = 3.84%, MCR_Val = 4.16%. The kernel pruned network has 65.01% sparsity with MCR_Test = 3.77%, MCR_Val = 4.45%. The sparsities are computed for Conv2-Conv6.
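To make the feature map and kernel granularities discussed above concrete, the two kinds of random masks can be sketched as follows for a convolution layer with weights of shape (F_out, F_in, k, k) (a minimal sketch of our own; shapes and names are illustrative, not the paper's code):

import numpy as np

def random_feature_map_mask(f_out, prune_ratio, rng):
    # One flag per output feature map; pruning a map zeroes all its kernels.
    keep = rng.random(f_out) >= prune_ratio
    return keep.astype(float)[:, None, None, None]

def random_kernel_mask(f_out, f_in, prune_ratio, rng):
    # One flag per (output map, input map) connection, i.e. per k x k kernel.
    keep = rng.random((f_out, f_in)) >= prune_ratio
    return keep.astype(float)[:, :, None, None]

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128, 3, 3))
W_fmap_pruned = W * random_feature_map_mask(128, 0.5, rng)
W_kernel_pruned = W * random_kernel_mask(128, 128, 0.7, rng)

Note that the kernel mask needs only one binary flag per convolution connection, which is the simple sparse representation argued for above.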
There can be a concern that pruning may decrease the accuracy of the original network when it is deployed in the field for run-time classification. For a specific problem domain, the test set is used as a proxy for the future unseen data. We argue that, to some extent, this question can be answered by comparing the per-class error of the original and pruned networks. This way we can see whether the pruned network is biased towards a specific class. To analyze this, we computed the per-class error with the CNN_SVHN network as reported in Table 1. The results are reported in Fig. 9. It can be observed that the per-class error for both the validation and test sets does not vary significantly. We therefore infer that the pruning and retraining process is a promising technique for complexity reduction.

Fixed-point optimization for deep neural networks is employed by Anwar et al. (2015a), Hwang & Sung (2014) and Sung et al. for VLSI based implementations. The reference work of Anwar et al. (2015b) analyzed feature map pruning with intra-kernel strided sparsity. To reduce the size of the feature map and kernel matrices, they further imposed a constraint that all the outgoing kernels from a feature map must have the same pruning mask. In this work, we do not impose any such constraint, and the pruning granularities are coarser. We argue that this kind of sparsity is useful for VLSI and FFT based implementations. Moreover, we show that the best pruning results are obtained when we combine feature map and kernel level pruning.

Generally, granularity-specific pruning strategies are reported in the literature (Han et al., 2015b; Li et al., 2016). Anwar et al. (2015b) have developed a particle filtering approach, where sequential importance resampling is employed. The proposed strategy randomly generates pruning masks, evaluates the importance of each mask with the validation set, selects the best mask having argmin_{m_i}(MCR_{m_i}), prunes and retrains the network (Yu et al., 2012). It is important to mention here that the pruning can be conducted in one shot or iteratively. This difference is shown in Fig. 2. For a target pruning ratio (tpr), the iterative process gradually increases sparsity and repeats the process M times. On the other hand, one-shot pruning induces the target pruning ratio in one step. We employ one-shot pruning, as the retraining after pruning consumes much time. Thus one-shot pruning is much more efficient in terms of optimization time. We show experimentally that the proposed algorithm can select better pruning candidates compared to other methods. Further, our approach is not computationally expensive, as it involves N random evaluations on the small-sized validation set.

(Figure 2 flowchart omitted. Both branches start from a network pre-trained to the baseline. Iterative: with step size δ = tpr/M, for the current pruning ratio cpr = jδ generate masks m_i, i = 1, 2, ..., N; evaluate the MCR of each masked network W · m_i; choose the best mask m = argmin_{m_i}(MCR_{m_i}); re-initialize from the baseline network, prune with m, retrain, and increment j. One-shot: the same procedure with δ = tpr, executed once.)

Figure 2: This figure compares iterative and one-shot pruning. tpr and cpr represent the target and current pruning ratio, respectively. Iterative pruning (Han et al., 2015b) gradually achieves the target pruning ratio in M steps of size δ each, while δ = tpr for one-shot pruning. This work adopts the one-shot pruning approach.
Pruning reduces the number of network parameters and inevitably degrades the classification performance. The pruning candidate selection is therefore of prime importance. For a specific pruning ratio, we search for the best pruning masks, which inflict the least adversity on the pruned network. Indeed, retraining can partially or fully recover the pruning losses, but the smaller the losses, the more plausible is the recovery (Mishkin & Matas, 2015). Further, a small performance degradation also means that the successor network has lost little or no knowledge of the predecessor network. If there are M potential pruning candidates, the total number of pruning masks is 2^M, and an exhaustive search is therefore infeasible even for a small-sized network. We therefore propose a simple and greedy strategy for selecting pruning candidates.

We initialize a network with pre-trained parameters. These parameters may be learnt on the same or a related problem. We randomly generate N pruning masks and compute the misclassification rate (MCR) for each one. We then choose the best pruning mask, i.e., the one with maximum accuracy on the validation set. Referring to the depicted architecture in Fig. 4a, suppose we need to select feature map pruning candidates in layers L2 and L3 with a 1/3 pruning ratio. If N = 4, the following N ordered pairs of feature maps may be randomly selected for (L2, L3): (1, 2), (2, 3), (3, 1), (1, 1). These combinations generate random paths in the network, and we evaluate the validation set MCR through these routes in the network.

However, this further raises the question of how to choose N. We analyze the relationship between the pruning ratio and N on three datasets, and the results are reported in Fig. 3. This analysis is conducted for feature map pruning but is also applicable to other pruning granularities. It can be observed from Fig. 3a and 3c that for higher pruning ratios, a bigger value of N is beneficial, as it results in better pruning candidate selection. Moreover, for a pruning ratio of no more than 40%, N = 50 random evaluations generate good selections. For lower pruning ratios, retraining is also more likely to compensate the losses, as the non-pruned parameters are still in good numbers.
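The best-of-N selection can be sketched as follows (a minimal sketch of our own; `model` and `evaluate_mcr` are hypothetical placeholders for the user's network and validation routine, not functions from the paper):

import numpy as np

def select_best_mask(model, evaluate_mcr, mask_shape, prune_ratio, n_masks, rng):
    # Draw N random masks, evaluate each on the validation set, and keep
    # the least adversarial one, m = argmin_{m_i}(MCR_{m_i}).
    best_mask, best_mcr = None, np.inf
    for _ in range(n_masks):
        mask = (rng.random(mask_shape) >= prune_ratio).astype(float)
        mcr = evaluate_mcr(model, mask)      # validation-set MCR under this mask
        if mcr < best_mcr:
            best_mask, best_mcr = mask, mcr
    return best_mask, best_mcr

The chosen mask is then applied once (one-shot pruning) and the network is retrained, as in Fig. 2.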
For lower pruning ratios, retraining is alsc. more likely to compensate the losses as the non-pruned parameters may still be in good numbers. The computational cost of this technique is not much as the evaluation is conducted on the smal sized validation set. By observing Fig. 3a|and 3c], we propose that the value of N can be estimated initially and later used in several pruning passes. The plots in Fig.3b|and|3d|show the pre-retraining. distribution of N random masks. Further, the plots in Fig. 3b|and 3d] shows that the distributions. are narrow for small pruning ratios..\nJeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior. Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pp. 1223-1231, 2012.\nWe further analyze the effect of retraining on the pruning mask selection. We prune a network with several masks and retrain each pruned network. As several networks needs to be pruned and retrained many times, we experiment with a small network where the architecture is reported like this: 32(C5) - MP2 - 64(C5) - MP2 - 64(C5) - 64FC - 10Softmax. The network is trained with the CIFAR-10 dataset (40,000 training samples) without any data augmentation and batch normalization. The network achieves the baseline performance of 26.7% on the test set. The results are reported in Fig.4d where the pre and post-retraining network performance is shown on\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training b reducing interna1 covariate shift. arXiv preprint arXiv:1502.03167, 2015..\nAlex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009\nThe rest of the paper is organized as follows. Section2lprovides detailed explanations on the pruning candidate selection. Section 3 discusses the two pruning granularities while Section4|presents the experimental results. In Section 5] recent related works are revisited. We finally conclude the discussion in Section|6|and add the future research dimensions for this work.\nSong Han, Huizi Mao, and William J Dally. A deep neural network compression pipeline: Pruning quantization, huffman encoding. arXiv preprint arXiv:1510.00149, 2015a.\nPrune Ratio 88.8672 Prune Ratio 81.4616 Prune Ratio 81.4616 Prune Ratio 61.8076 120 Prune Ratio 34.7005 Prune Ratio 34.7005 100 20 100 200 300 400 500 600 700 800 900 1000 Random Pruning Masks N 30 MCRyalidationSet (a) Best of N masks for CIFAR10 CN Nsmall (b) Distribution of N masks for CIFAR10 C N Nsm 160 Prune Ratio 90.5132 Prune Ratio 83.3969 Prune Ratio 83.3969 140 Prune Ratio 63.6679 70 Prune Ratio 63.6679 Prune Ratio 35.9769 Prune Ratio 35.9769 120 100 60 100 200 300 400 500 600 700 800 900 Random Pruning Masks Ne 1000 10 20 MCRValidationSet c) Best of N.. masks for C N Na.\nHao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.\nMichael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks through ffts. arXiv preprint arXiv:1312.5851, 2013.\nDmytro Mishkin and Jiri Matas. All you need is a good init. arXiv preprint arXiv:1511.06422 2015.\nAdam Polyak and Lior Wolf. Channel-level acceleration of deep face representations. Access, IEEE 3:2163-2175, 2015.\nKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. 
the x and y axes, respectively. Further, we superimpose a least-squares (LS) line fit on each scatter plot. It can be observed that the slope of the LS line decreases for higher pruning ratios. We infer that for high pruning ratios, the final network performance is dictated by the surviving number of effective parameters. It can be observed that the overall distribution is noisy. However, in general, the pre-retraining least adversarial pruning masks perform better after retraining. In the rest of this work, we therefore use the pre-retraining best mask for pruning the network.

We further compare this method with the weight-sum criterion proposed in Li et al. (2016), shown in Fig. 4a. The set of filters or kernels from the previous layer constitutes a group. This is shown with a similar color in Fig. 4a. According to Li et al. (2016), the absolute sum of weights determines the importance of a feature map. Suppose that in Fig. 4a, layer L2 undergoes feature map pruning. The weight-sum criterion computes the absolute weight sums S1, S2 and S3. If we further suppose that the pruning ratio is 1/3, then the feature map with min(S1, S2, S3) is pruned. All the incoming and outgoing kernels of the pruned feature map are also removed. We argue that the sign of a weight in a kernel plays an important role in well-known feature extractors, and therefore this is not a good criterion.

We compare the performance of the two algorithms, and Figs. 4b and 4c show the experimental results. These results present the network status before any retraining is conducted. We report the degradation in the network classification performance against the pruning ratio. From Figs. 4b and 4c, we can observe that our proposed method outperforms the weight-sum method, particularly for higher pruning ratios. The best-of-N pruning-mask strategy evaluates pruning candidates in combinations and provides a holistic view. The criterion in Li et al. (2016) evaluates the importance of a pruning unit in the context of a single layer, while our proposed approach evaluates several paths through the network and selects the best one. The combinations work together and matter more than individual units. Further, our proposed technique is generic and can be used for any pruning granularity: feature map, kernel and intra-kernel pruning.
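For reference, the weight-sum baseline can be sketched as follows (a minimal sketch of our own rendering of the criterion of Li et al. (2016); the tensor layout is an assumption):

import numpy as np

def weight_sum_prune_candidates(W, prune_ratio):
    # W has shape (F_out, F_in, k, k); one importance score per output
    # feature map: the absolute sum of all its incoming kernel weights.
    scores = np.abs(W).sum(axis=(1, 2, 3))
    n_prune = int(prune_ratio * W.shape[0])
    return np.argsort(scores)[:n_prune]    # indices of the maps to remove

W = np.random.randn(128, 128, 3, 3)
print(weight_sum_prune_candidates(W, 1 / 3))

Unlike the best-of-N strategy, this criterion scores each feature map in isolation within a single layer, which is exactly the limitation argued above.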
Figure 4: (a) This figure explains the idea presented in Li et al. (2016) and shows three layers: L1, L2 and L3. All the filters/kernels from the previous layer to a feature map constitute one group, shown with a similar color. S1, S2 and S3 are computed by summing the absolute values of all the weights in each group. (b) The comparison of the proposed method with the absolute weight-sum method is shown here for the CNN_SVHN. It can be observed that our proposed method inflicts less adversity on the network for different pruning ratios. (d) In this plot, we prune a CNN with various masks and compare their pre- and post-retraining performance. It can be observed that, on average, the best pre-retraining masks perform better after retraining.

In this section we discuss the feature map and kernel pruning granularities. For a similar-sized network, we analyze the achievable pruning ratios with feature map and kernel pruning. In terms of granularity, feature map pruning is coarser than kernel pruning. Feature map pruning does not need any sparse representation, and the pruned network can be implemented in a conventional way, by convolution lowering (Chellapilla et al., 2006) or by convolution with FFTs (Mathieu et al., 2013). The proposed work analyzes unconstrained kernel and feature map pruning. Pruning a feature map eliminates all the incoming and outgoing kernels, because the outgoing kernels are no longer meaningful.

Kernel pruning is comparatively finer. The dimensions and connectivity pattern of the 2D kernels determine the computing cost of a convolutional layer. Meshed, fully connected convolution layers increase this cost and can hinder real-time inference. In LeNet (LeCun et al., 1998), the second convolution layer has 6 × 16 feature maps and the kernel connectivity has a fixed sparse pattern. With kernel pruning, we learn this pattern and convert the dense connectivity to a sparse one. Kernel pruning zeroes k × k kernels and is neither too fine nor too coarse. Kernel-level pruning provides a balance between fine-grained and coarse-grained pruning. It is coarser than intra-kernel sparsity and finer than feature map pruning. Depending on the network architecture, kernel pruning may achieve good pruning ratios at very small sparse-representation and computational cost. Each convolution connection represents one convolution operation, which involves width × height × k × k MAC operations.

(Figure 4(a, b) panels omitted: (a) the absolute weight-sum voting scheme of Li et al. (2016), with feature maps FM1-FM3 and sums S1-S3 across layers L2 and L3; (b) validation MCR versus pruning ratio for weight-sum voting and for the best of N ∈ {10, 20, 50, 100, 200} random masks on the CNN_SVHN, baseline MCR 3.93%.)
(Figure 4(c, d) panels omitted: (c) test MCR versus feature map pruning ratio for weight-sum voting and for the best of 10, 20, 50, 100 and 200 random masks, baseline MCR 0.62%; (d) scatter of post-retraining versus pre-retraining MCR for pruning ratios of 31.12%, 56.73%, 66.7% and 77.13%.)

(Figure 5 plots omitted: test MCR versus pruning ratio with feature map pruning and kernel pruning curves, for (a) the CIFAR-10 CNN_small, baseline MCR 16.26%, tolerance MCR 17.26%, and (b) MNIST, baseline MCR 0.79%.)

Figure 5: Figures (a) and (b) show feature map and kernel pruning of two networks: CNN_CIFAR10.small and CNN_MNIST2. The corresponding network architectures are reported in Table 1. The networks can be pruned by more than 50% with very small degradation in performance. Further, due to its finer nature, kernel pruning may inflict less adversity on the network performance.

(Figure 6 panels omitted: (a) execution-time profile versus kernel prune ratio for Conv1 (3×128) through Conv5 (128×256); (b) the custom GPU kernel launch for masked convolutions: dim3 dimGrid(F_i × F_o × pr, BatchSize, 1); dim3 dimThread(H, W, 1).)

Figure 6: (a) This figure shows the profiling results for kernel pruning with a customized GPU implementation. It can be observed that kernel pruning reduces the execution time. The experiment is conducted with the CIFAR-10 CNN. In (b), F_i and F_o denote the input and output feature maps, while pr represents the pruning ratio. The GPU function scheduler makes the call only for non-masked kernels.

We first select pruning candidates with the criterion outlined in Section 2. The pruned network is then retrained to compensate for the losses incurred due to pruning. Figures 5a and 5b show that, depending on the network architecture, kernel pruning may achieve a higher pruning ratio than feature map pruning due to its finer granularity. As the sparse granularities are coarse, a generic set of computing platforms can benefit from them. One disadvantage of unconstrained kernel pruning is that the convolution unrolling technique cannot benefit from it (Chellapilla et al., 2006). However, customized VLSI implementations and FFT based convolutions do not employ convolution unrolling. Mathieu et al. (2013) have proposed FFT based convolutions for faster CNN training and evaluation, and the GPU based parallel implementation showed very good speedups. As it is commonly known that IFFT(FFT(kernel) · FFT(featuremap)) = kernel * featuremap, kernel-level pruning can relieve this task. Although the kernel size is small, massive reusability of the kernels across the mini-batch enables the use of the FFT. The FFT of each kernel is computed only once and reused for multiple input vectors in a mini-batch.
In a feed-forward and backward pass, the summations can be carried out in the FFT domain and, once the sum is available, the IFFT can be performed Mathieu et al. (2013). Similarly, a customized VLSI based implementation can also benefit from kernel level pruning. If the VLSI implementation imposes a constraint on the pruning criterion, such as a fixed number of convolution kernels from the previous to the next layer, the pruning criterion can be adapted accordingly. In the next section, we report and discuss the experimental results in detail. As the commonly available libraries do not support masked convolutions, we profile kernel pruning with customized GPU functions. We call the GPU function only for the non-pruned convolution kernels and pass the appropriate indices. Fewer convolutions reduce the required number of GFLOPs. However, we conjecture that the true benefit of kernel pruning can be obtained with FFT based masked convolution.

Table 1: Specifications of the networks

Network | Architecture | Baseline MCR (%) | Data Augmentation
CNN_MNIST1 | 16C5 - 32C5 - 64C5 - 120 - 10 | 0.62 | No
CNN_MNIST2 | 6C5 - 16C5 - 120C5 - 84 - 10 | 0.79 | No
CNN_CIFAR10.small | (2 x 128C3) - MP2 - (2 x 128C3) - MP2 - (2 x 256C3) - 256FC - 10Softmax | 16.6 | No
CNN_CIFAR10.large | (2 x 128C3) - MP2 - (2 x 256C3) - MP2 - (2 x 256C3) - (1 x 512C3) - 1024FC - 1024FC - 10Softmax | 9.41 | Yes
CNN_SVHN | (2 x 64C3) - MP2 - (2 x 128C3) - MP2 - (2 x 128C3) - 512FC - 512FC - 10Softmax | 3.5 | No
CNN_CIFAR100 | (2 x 128C3) - MP2 - (2 x 128C3) - MP2 - (2 x 256C3) - 256C3 - 512FC - 10Softmax | 33.65 | Yes

[Figure 7 panels: (a) CIFAR-10 CNN_small with feature map pruning, kernel pruning, and feature map followed by kernel pruning; (b) CIFAR-10 CNN_large with kernel pruning, feature map pruning, feature map followed by kernel pruning, and kernel pruning followed by feature map pruning. Plots omitted.]

Figure 7: Combinations of feature map and kernel pruning are reported here. Figures (a) and (b) provide pruning results for the CNN_CIFAR10.small and CNN_CIFAR10.large networks. It can be observed from both figures that more sparsity can be induced in the network by inducing sparsity at two granularities.

In this section, we present detailed experimental results with the CIFAR-10 and SVHN datasets Krizhevsky & Hinton (2009). We experiment on three image classification problems and induce sparsity feature map wise and kernel wise. We also prune one network with more than one pruning granularity in combination. During training and pruning, we use stochastic gradient descent (SGD) and batch normalization Ioffe & Szegedy (2015). As elaborated in Section 1, we do not prune the network in small steps; instead, we one-shot prune the network for a given pruning ratio, followed by retraining. The experimental results are reported in the corresponding subsections."}, {"section_index": "5", "section_name": "4.1 CIFAR-10", "section_text": "The CIFAR-10 dataset includes samples from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. The training set consists of 50,000 RGB samples and we allocate 20% of these samples as a validation set. The test set contains 10,000 samples and each sample has 32 x 32 RGB resolution. We evaluate the proposed pruning granularities with two networks,
CNN_CIFAR10.small and CNN_CIFAR10.large. CNN_CIFAR10.small has six convolution and two overlapped max pooling layers. We report the network architecture with an alphanumeric string, as reported in Courbariaux et al. (2015) and outlined in Table 1. The (2 x 128C3) represents two convolution layers, each having 128 feature maps and 3 x 3 convolution kernels. MP2 represents a 3 x 3 overlapped max-pooling layer with a stride of 2. We pre-process the original CIFAR-10 dataset with global contrast normalization followed by zero component analysis (ZCA) whitening.

The CNN_CIFAR10.large has seven convolution and two max-pooling layers. Further, online data augmentations are employed to improve the classification accuracy. We randomly crop 28 x 28 x 3 patches from the 32 x 32 x 3 input vectors. These cropped vectors are then geometrically transformed at random: a vector may be flipped horizontally or vertically, rotated, translated and scaled. At evaluation time, we crop patches from the four corners and the center of a 32 x 32 x 3 patch and flip each horizontally. We average the evaluations on these ten 28 x 28 x 3 patches to decide the final label. Due to its larger width and depth, CNN_CIFAR10.large achieves more than 90% accuracy on the CIFAR-10 dataset. CNN_CIFAR10.small is smaller than CNN_CIFAR10.large and trained without any data augmentation; CNN_CIFAR10.small therefore achieves 84% accuracy."}, {"section_index": "6", "section_name": "4.1.1 FEATURE MAP AND KERNEL LEVEL PRUNING", "section_text": "For the same network, we can see that kernel level pruning performs better: we can achieve 70% sparsity with kernel level pruning. This is attributed to the fact that kernel pruning is finer and hence achieves higher ratios. Further, kernel pruning may ultimately prune a feature map if all of its incoming kernels are pruned. At inference time, we need to define the kernel connectivity pattern, which can simply be done with a binary flag; so although a sparse representation is needed, it is quite simple and straightforward. Experimental results confirm that fine grained sparsity can be induced at higher rates. We achieved 70% kernel wise sparsity for Conv2 - Conv6, and the network is compressed with a very simple sparse representation."}, {"section_index": "7", "section_name": "4.1.2 COMBINATIONS OF KERNEL AND FEATURE MAP PRUNING", "section_text": "In this section we discuss the various pruning granularities applied in different combinations. We first apply feature map and kernel pruning to the CNN_CIFAR10.small network in different orders. With feature map pruning, we can achieve 60% sparsity under the budget of a 1% increase in MCR, but at this pruning stage the network's learning capability is considerably affected. So we take a 50% feature-map-pruned network, where CNN_CIFAR10.small is reduced to (128C3 - 89C3)-MP3-(89C3 - 89C3)-MP3-(179C3 - 179C3)-256FC-10Softmax. As pruning is only applied to Conv2 - Conv6, the pruning ratios in Fig. 5a are computed only for these layers. This network then undergoes kernel level pruning.
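A minimal sketch of how the two granularities compose on a single layer's connectivity, before looking at the Figure 7a numbers below. This is our illustration: the weight-sum ranking stands in for the selection criterion of Section 2, and the weight layout (F_in, F_out, k, k) is an assumption.

    import numpy as np

    def apply_feature_map_mask(kernel_mask, fmap_keep_in, fmap_keep_out):
        """Feature map pruning removes whole rows/columns of the kernel
        connectivity: a dropped output map kills its incoming kernels, and a
        dropped input map kills the kernels that would read from it."""
        kernel_mask[~fmap_keep_in, :] = False
        kernel_mask[:, ~fmap_keep_out] = False
        return kernel_mask

    def apply_kernel_mask(kernel_mask, weights, kernel_prune_ratio):
        """On top of the surviving connections, additionally mask the
        individual k x k kernels with the smallest absolute weight sums."""
        s = np.abs(weights).sum(axis=(2, 3))
        s[~kernel_mask] = np.inf  # never re-rank already-pruned entries
        n_prune = int(round(kernel_prune_ratio * kernel_mask.sum()))
        kernel_mask.flat[np.argsort(s, axis=None)[:n_prune]] = False
        return kernel_mask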
The blue rectangle line in Figure 7a shows the pruning results for this combination.

Table 2: Feature map and kernel level pruning (75%) in CNN_CIFAR10.small

Feature Maps | Pruned Feature Maps | Feature Map Prune Ratio (%) | Pruned Kernels | Conv Connections | Kernel Prune Ratio (%)
Conv2 (128 x 128) | 128 x 89 | 30.5 | 27306/9 = 3034 | 11392 | 3034/11392 = 26.6
Conv3 (128 x 128) | 89 x 89 | 51.5 | 18702/9 = 2078 | 7921 | 2078/7921 = 26.2
Conv4 (128 x 128) | 89 x 89 | 51.5 | 18702/9 = 2078 | 7921 | 2078/7921 = 26.2
Conv5 (128 x 256) | 89 x 179 | 51.4 | 37881/9 = 4209 | 15931 | 4209/15931 = 26.4
Conv6 (256 x 256) | 179 x 179 | 51.1 | 76851/9 = 8539 | 32041 | 8539/32041 = 26.6

After layer pruning, feature map pruning is the 2nd coarsest pruning granularity. Feature map pruning reduces the width of a convolutional layer and generates a thinner network. Pruning a single feature map zeroes all of its incoming and outgoing weights, and therefore higher pruning ratios degrade the network classification performance significantly. Feature map pruning for CNN_CIFAR10.small is shown in Fig. 5a with a red line marked with circles. The sparsity reported here is for Conv2 to Conv6. We do not prune the first convolution layer, as it has only 3 x 128 x (3 x 3) = 3456 weights. The horizontal solid line shows the baseline MCR of 16.26%, whereas the dashed line shows the 1% tolerance bound. Training the network with batch normalization Ioffe & Szegedy (2015) enables us to directly prune the network for a target ratio, instead of taking small sized steps. With a baseline performance of 16.26%, the network performance is very poor at 80% feature map pruning. We can observe that a 62% pruning ratio is possible with less than a 1% increase in MCR. CNN_CIFAR10.small is then reduced to (128C3 - 83C3)-MP3-(83C3 - 83C3)-MP3-(166C3 - 166C3)-256FC-10Softmax. As pruning is only applied in Conv2 to Conv6, the pruning ratios in Figure 5a are computed only for these layers."}]
rJe-Pr9le
[{"section_index": "0", "section_name": "MULTI-TASK LEARNING WITH DEEP MODEL BASED REINFORCEMENT LEARNING", "section_text": "350- 20 15 300 10 250 - 5 coree 0 S150 S -5 -10 100 15 50 -20 0 -25 910111213141516171819 2 A 5 89101112131415161718 19 Iteration Iteration (a) Breakout (b) Pong 7000 - 6000 5000 4000 S3000 2000 1000 0 123456 8910111213 1415 161718 19 Iteration (c) Demon Attack\nAsier Mujika\nZurich. Switzerlanc"}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "Figure 5: Comparison between an agent that learns the three games simultaneously (continuous blue), one that learns each game individually (dashed red) and the score of human testers (horizonta. green) as reported by Mnih et al. (2015).\nRecently, there has been a lot of success in applying neural networks to reinforcement learning achieving super-human performance in many ATARI games (Mnih et al. (2015); Mnih et al. (2016) Most of these algorithms are based on Q-learning, which is a model free approach to reinforcemen learning. This approaches learn which actions to perform in each situation, but do not learn al explicit model of the environment. Apart from that, learning to play multiple games simultaneousl remains an open problem as these approaches heavily degrade when increasing the number of task to learn."}, {"section_index": "2", "section_name": "5 DISCUSSION", "section_text": "We have presented a novel model based approach to deep reinforcement learning. Despite not achieving state of the art results, this papers opens new lines of research showing that a model based approach can work in environments as complex as ATARI. We have also shown that it can beat human performance in three different tasks simultaneously and that it can benefit from learning multiple tasks.\nIn contrast, we present a model based approach that can learn multiple tasks simultaneously. The idea of learning predictive models has been previously proposed (Schmidhuber (2015); Santana & Hotz (2016)), but all of them focus on learning the predictive models in an unsupervised way We propose using the reward as a means to learn a representation that captures only that which is important for the game. This also allows us to do the training in a fully supervised way. In the experiments, we show that our approach can surpass human performance simultaneously on three different games. In fact, we show that transfer learning occurs and it benefits from learning multiple tasks simultaneously.\nStill, the model has two areas that can be addressed in future work: long-term dependencies anc the instability during training. The first, can potentially be solved by combining our approach witl Q-learning based techniques. For the instability, balancing the training set or oversampling harc. training cases could alleviate the problem\nFinally, we have also presented a new kind of recurrent network which can be very useful for problems were little memory and a lot of computation is needed.."}, {"section_index": "3", "section_name": "ACKNOWLEDGMENTS", "section_text": "I thank Angelika Steger and Florian Meier for their hardware support in the final experiments anc comments on previous versions of the paper..\nIn recent years, approaches that use Deep Q-learning have achieved great success, making an important breakthrough when Mnih et al. 
(2015) presented a neural network architecture that was able to achieve human performance on many different ATARI games, using just the pixels in the screen as input."}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "In recent years, model-free methods that use deep learning have achieved great success in many different reinforcement learning environments. Most successful approaches focus on solving a single task, while multi-task reinforcement learning remains an open problem. In this paper, we present a model based approach to deep reinforcement learning which we use to solve different tasks simultaneously. We show that our approach not only does not degrade but actually benefits from learning multiple tasks. For our model, we also present a new kind of recurrent neural network, inspired by residual networks, that decouples memory from computation, allowing us to model complex environments that do not require lots of memory. The code will be released before ICLR 2017.

In this paper, we first discuss why Q-learning fails to learn multiple tasks and what its drawbacks are. Then, we present our approach, Predictive Reinforcement Learning, as an alternative to overcome those weaknesses. In order to implement our model, we present a recurrent neural network architecture based on residual nets that is specially well suited for our task. Finally, we discuss our experimental results on several ATARI games.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer Normalization. arXiv, 2016.

Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. In IJCAI International Joint Conference on Artificial Intelligence, volume 2015-January, pp. 4148-4152, 2015. ISBN 9781577357384. doi: 10.1613/jair.3912.

Q(s, a) = E[r + γ max_{a'} Q(s', a') | s, a]    (1)

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. arXiv, 2015. URL http://arxiv.org/pdf/1512.03385v1.pdf.

Sepp Hochreiter and Jurgen Schmidhuber. Long Short-Term Memory. Neural computation, 9(8):1735-80, 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://www.ncbi.nlm.nih.gov/pubmed/9377276.

For the rest of this subsection, we assume the reader is already familiar with Deep Q-learning and we discuss its main problems. Otherwise, we recommend skipping to the next section directly, as none of the ideas discussed here are necessary to understand our model.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv, 2015. URL http://arxiv.org/abs/1502.03167.

As the true value of the Q-function is not known, the idea of Deep Q-learning is to iteratively approximate this function using a neural network1, which introduces several problems.

First, the Q-values depend on the strategy the network is playing. Thus, the target output for the network given a state-action pair is not constant, since it changes as the network learns. This means that apart from learning a strategy, the network also needs to remember which strategy it is playing. This is one of the main problems when learning multiple tasks, as the network needs to remember how it is acting on each of the different tasks. Rusu et al. (2015) and Parisotto et al. (2015) have managed to successfully learn multiple tasks using Q-learning.
Both approaches follow a similar idea: an expert network learns to play a single game, while a multi-tasking network learns to copy the behavior of an expert for each different game. This means that the multi-tasking network does not iteratively approximate the Q-function; it just learns to copy the function that the single-task expert has approximated. That is why their approach works: they manage to avoid the problem of simultaneously approximating all the Q-functions, as this is done by each single-task expert.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. ISSN 0028-0836. doi: 10.1038/nature14236. URL http://dx.doi.org/10.1038/nature14236.

Apart from that, the network has to change the strategy very slightly at each update, as drastically changing the strategy would change the Q-values a lot and cause the approximation process to diverge or slow down. This forces the model to interact many times with the environment in order to find good strategies. This is not problematic in simulated environments like ATARI games, where the simulation can easily be sped up using more computing power. Still, in real world environments, like for example robotics, this is not the case and data efficiency can be an important issue.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. arXiv, 2016. URL http://arxiv.org/abs/1602.01783.

Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy Distillation. arXiv, 2015. URL http://arxiv.org/abs/1511.06295.

In order to avoid the drawbacks of Deep Q-learning, we present Predictive Reinforcement Learning (PRL). In our approach, we separate the understanding of the environment from the strategy. This has the advantage of being able to learn from different strategies simultaneously, while also being able to play strategies that are completely different from the ones that it learns from. We will also argue that this approach makes generalization easier. But before we present it, we need to define what we want to solve."}, {"section_index": "5", "section_name": "3.1 PREDICTION PROBLEM", "section_text": "The problem we want to solve is the following: given the current state of the environment and the actions we will make in the future, how is our score going to change through time?

To formalize this problem we introduce the following notation:

1We do not explain the process, but Mnih et al. (2015) give a good explanation of how this is done.

As the name indicates, this approach revolves around the Q-function. Given a state s and an action a, Q(s, a) returns the expected future reward we will get if we perform action a in state s. Formally, the Q-function is defined in equation 1.

Andrej Karpathy and Fei Fei Li. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 07-12-June-2015, pp. 3128-3137, 2015. ISBN 9781467369640.
doi: 10.1109/CVPR.2015.7298932.

a_i: The observation of the environment at time i. In the case of ATARI games, this corresponds to the pixels of the screen.
r_i: The total accumulated reward at time i. In the case of ATARI games, this corresponds to the in-game score.
c_i: The control that was performed at time i. In the case of ATARI games, this corresponds to the inputs of the ATARI controller: up, right, shoot, etc."}, {"section_index": "6", "section_name": "Appendices", "section_text": "[Figure 1 image: two consecutive game frames (score 000400 to 000600); the input frame plus the jump action yields a +1 reward. Screenshots omitted.]

Due to the huge cost involved in training the agents, we have not exhaustively searched over all the possible hyperparameters. Still, we present them here for reproducibility of the results.

Figure 1: We chose i = 0 and k = 1. We assume a_0 to be the pixels in the current image (the left one) and c_1 to be the jump action. Then, given that input, we want to predict r_1 - r_0, which is 1 because we earn a reward from time 0 to time 1.

Then, we want to solve the following problem: for a given time i and a positive integer k, let the input to our model be an observation a_i and a set of future controls c_{i+1}, ..., c_{i+k}. Then, we want to predict the change in score for the next k time steps, i.e. (r_{i+1} - r_i), ..., (r_{i+k} - r_i). Figure 1 illustrates this with an example.

Apart from that, at the beginning of each episode, we pick an n in [0, 30] uniformly at random and do not perform any action for the initial n time steps of that episode. This idea was also used by Mnih et al. (2015) to avoid any possible over-fitting. In addition, we also press shoot to start a new episode every time we die in Breakout, since in the first iterations the model learns that the safest option is not to start a new episode, which causes the agent to waste a lot of time without starting a new episode."}, {"section_index": "7", "section_name": "3.2.1 PERCEPTION", "section_text": "The Perception has to be tailored to the kind of observations the environment returns. For now, we will focus only on vision based Perception. As we said before, the idea of this network is to convert the high dimensional input to a low dimensional vector that contains only the necessary information for predicting the score. In the case of video games, it is easy to see that such a vector exists. The input consists of thousands of pixels, but all we care about is the position of a few key objects, like, for example, the main character or the enemies. This information can easily be

Number of strategies: As explained in Section 4.4, we need to pick a number k of strategies that we consider at each step. Initially, we pick k = 25, raise it to k = 100 at iteration 4 and, finally, at iteration 7, we set it to k = 200 for the remainder of the experiment.
Confidence interval: We also need to pick how safe we want to play, i.e., where we set the threshold for the set of actions we consider. For simplicity, in Breakout and Pong, we set it to 0 and only pick the safest option. In Demon Attack, we initially only consider actions with a survival probability higher than 0.2 for three iterations. After that, we reduce it to 0.1 for another three iterations. Then, we set it to 0.005 until iteration 15 and, finally, reduce it to 0.001 for the rest of the iterations.
Learning schedule: For training we use the Adam (Kingma & Ba, 2014) optimizer with a batch size of 100. We use a learning rate of 10^-4 for the first 3 iterations, then reduce it to
5 x 10^-5 for the next 3 iterations and finally set it to 10^-5 for the rest of the experiment. We make a total of 4.8 x 10^4 parameter updates per iteration (1.6 x 10^4 in the case of single-task networks) and divide the learning rate in half after 2.4 x 10^4 updates for the remainder of the iteration. We add a weight decay of 0.0001 and clamp the gradients element-wise to the [-1, 1] range.

Observe that, unlike in Q-learning, our predictions do not depend on the strategy being played. The outputs only depend on the environment we are trying to predict. So, the output for a given state-actions pair is always the same or, in the case of non-deterministic environments, it comes from the same distribution.

We have defined what we want to solve, but we still need to specify how to implement a model that will do it. We will use neural networks for this, and we will divide the model into three different networks, as follows:

Perception: This network reads a state a_i and converts it to a lower dimensional vector h_0 that is used by the Prediction.
Prediction: For each j in {1, ..., k}, this network reads the vector h_{j-1} and the corresponding control c_{i+j} and generates a vector h_j that will be used in the next steps of the Prediction and Valuation. Observe that this is actually a recurrent neural network.
Valuation: For each j in {1, ..., k}, this network reads the current vector h_j of the Prediction and predicts the difference in score between the initial time and the current one, i.e. r_{i+j} - r_i.

Figure 2 illustrates the model. Observe that what we actually want to solve is a supervised learning problem. Thus, the whole model can be jointly trained with simple backpropagation. We will now proceed to explain each of the components in more detail.

Figure 2: Diagram of our predictive model

encoded using very few neurons. In our experiments, we convert an input consisting of 28K pixels into a vector of just 100 real values.

In order to do this, we use deep convolutional networks. These networks have recently achieved super-human performance in very complex image recognition tasks (He et al., 2015). In fact, it has been observed that the upper layers in these models learn lower dimensional abstract representations of the input (Yosinski et al. (2015), Karpathy & Li (2015)). Given this, it seems reasonable to believe that if we use any of the successful architectures for vision, our model will be able to learn a useful representation that can be used by the Prediction."}, {"section_index": "8", "section_name": "3.2.2 PREDICTION", "section_text": "For the Prediction network, we present a new kind of recurrent network based on residual neural networks (He et al., 2015), which is specially well suited for our task and achieved better results than an LSTM (Hochreiter & Schmidhuber, 1997) with a similar number of parameters in our initial tests.

Residual Recurrent Neural Network (RRNN) We define the RRNN in Figure 3 using the following notation: LN is the layer normalization function (Ba et al., 2016), which normalizes the activations to have a median of 0 and a standard deviation of 1; "." is the concatenation of two vectors; f can be any parameterizable and differentiable function, e.g., a multilayer perceptron.

r_j = f(LN(h_{j-1}) . x_j)    (2)
h_j = h_{j-1} + r_j           (3)

Figure 3: The equations of the RRNN and a diagram of the network

As in residual networks, instead of calculating what the new state of the network should be, we calculate how it should change (r_j). As shown by He et al.
(2015), this prevents vanishing gradients or optimization difficulties. LN outputs a vector with mean 0 and standard deviation 1. As we

[Figure 2 panels: (a) The recurrent model. (b) The same model unfolded in time, with the Perception feeding h_0 to a chain of Prediction steps reading c_{i+1}, ..., c_{i+k}, each followed by a Valuation output r_{i+j} - r_i. Diagram omitted.]

prove in Observation 1, this prevents internal exploding values that may arise from repeatedly adding r_j to h_j. It also avoids the problem of vanishing gradients in saturating functions like the sigmoid or the hyperbolic tangent.

Observation 1. Let x in R^n be a vector with median 0 and standard deviation 1. Then, for all 1 <= i <= n, we get that |x_i| <= sqrt(n).

Proof. Taking into account that the median is 0 and the standard deviation is 1, simply substituting the values in the formula for the standard deviation shows the observation."}, {"section_index": "9", "section_name": "3.2.3 VALUATION", "section_text": "The Valuation network reads the h vector at time i + j and outputs the change in reward for that time step, i.e. r_{i+j} - r_i. Still, it is a key part of our model, as it allows us to decouple the representation learned by the Prediction from the reward function. For example, consider a robot in a real world environment. If the Perception learns to capture the physical properties of all surrounding objects (shape, mass, speed, etc.) and the Prediction learns to make a physical simulation of the environment, this model can be used for any possible task in that environment; only the Valuation would need to be changed."}, {"section_index": "10", "section_name": "3.3 STRATEGY", "section_text": "As we previously said, finding an optimal strategy is a very hard problem and this part is the most complicated. So, in order to test our model in the experiments, we opted for hard-coding a strategy. There, we generate a set of future controls uniformly at random and then we pick the one that would maximize our reward, given that the probability of dying is low enough. Because of this, the games we have tried have been carefully selected such that they do not need very sophisticated and long-term strategies.

The bound is not tight, but it is sufficient for our purposes and straightforward to prove:

1 = (1/n) sum_{j=1}^{n} x_j^2 >= x_i^2 / n, and therefore |x_i| <= sqrt(n).

The idea behind this network is mimicking how a video game's logic works. A game has some variables (like positions or speeds of different objects) that are slightly modified at each step. Our intuition is that the network can learn a representation of these variables (h), while f learns how they are transformed at each frame. Apart from that, this model decouples memory from computation, allowing us to increase the complexity of f without having to increase the number of neurons in h. This is specially useful, as the number of real valued neurons needed to represent the state of a game is quite small. Still, the function to move from one frame to the next can be quite complex, as it has to model all the interactions between the objects, such as collisions, movements, etc.

Even if this method looks like it may be just tailored for video games, it should work equally well for real world environments. After all, physics simulations that model the real world work in the same way, with some variables that represent the current state of the system and some equations that define how that system evolves over time.
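To make equations (2)-(3) concrete, here is a minimal NumPy sketch of the RRNN cell unrolled over a few frames. It is our illustration, not the authors' code: the dimensions, the two-layer ReLU choice for f, and the mean-based normalization (the text says median in one place and mean in another) are all assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    d_h, d_x, T = 100, 3, 25  # state size, control size, unrolled steps (illustrative)

    # parameters of f, shared across all time steps (a single hidden layer here)
    W1 = rng.normal(0, 0.1, (d_h + d_x, 256))
    W2 = rng.normal(0, 0.1, (256, d_h))

    def layer_norm(v):
        # normalize to zero center and unit standard deviation
        return (v - v.mean()) / (v.std() + 1e-8)

    def rrnn_step(h, x):
        z = np.concatenate([layer_norm(h), x])  # LN(h_{j-1}) . x_j (concatenation)
        r = np.maximum(z @ W1, 0) @ W2          # r_j = f(...)
        return h + r                            # h_j = h_{j-1} + r_j (residual update)

    h = np.zeros(d_h)  # initial state, produced by the Perception in the full model
    controls = rng.integers(0, 2, (T, d_x)).astype(float)  # random control encodings
    for x in controls:
        h = rrnn_step(h, x)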
Table 1: f function of the Prediction network. We apply the non-linearity before the linear layer; this way we avoid always adding positive values. The ReLU is not applied to the control inputs.

Still, our approach learns a predictive model that is independent of any strategy, and this can be beneficial in two ways. First, the model can play a strategy that is completely different from the ones it learns from. Apart from that, learning a predictive model is a very hard task to over-fit. Consider a game with 10 possible control inputs and a training set where we consider the next 25 time steps. Then, there are 10^25 possible control sequences. This means that every sequence we train on is unique, and this forces the model to generalize. Unfortunately, there is also a downside: our approach is not able to learn from good strategies, because we test our model with many different ones in order to pick the best. Some of these strategies will be quite bad and, thus, the model needs to learn what makes the difference between a good and a bad set of moves."}, {"section_index": "11", "section_name": "4.1 ENVIRONMENT", "section_text": "Our experiments have been performed on a computer with a GeForce GTX 980 GPU and an Intel Xeon E5-2630 CPU. For the neural network, we have used the Torch7 framework, and for the ATARI simulations, we have used Alewrap, which is a Lua wrapper for the Arcade Learning Environment (Bellemare et al., 2015)."}, {"section_index": "12", "section_name": "4.2 MODEL", "section_text": "For the Perception, we used a network inspired by deep residual networks (He et al., 2015). Figure 4 shows the architecture. The reason for this is that, even if the Perception is relatively shallow, when unfolding the Prediction network over time, the resulting model is over 50 layers deep.

For the Prediction, we use a Residual Recurrent Neural Network. Table 1 describes the network used for the f function. Finally, Table 2 illustrates the Valuation network."}, {"section_index": "13", "section_name": "4.3 SETUP", "section_text": "We preprocess the images following the same technique as Mnih et al. (2015). We take the maximum over the last 2 frames to get a single 84 x 84 black and white image for the current observation. The input to the Perception is a 4 x 84 x 84 tensor containing the last 4 observations. This is necessary to be able to use a feed-forward network for the Perception. If we observed a single frame,

Table 2: Valuation network. We apply Layer Normalization to bound the incoming values to the network.

[Figure 4 diagram: 7x7 conv, 16, stride 2 (output 40x40); 3x3 conv, 16; 3x3 conv, 32; max pooling /2 (output 20x20); 3x3 conv, 32; 3x3 conv, 64; max pooling /2 (output 10x10); 3x3 conv, 64; 3x3 conv, 128; max pooling /2 (output 5x5); fc 3200 -> 512; fc 512 -> 100.]

Figure 4: Each layer is followed by Batch Normalization (Ioffe & Szegedy, 2015) and a Rectified Linear Unit.

it would not be possible to infer the speed and direction of a moving object. Not doing this would force us to use a recurrent network for the Perception, making the training of the whole model much slower.

In order to train the Prediction, we unfold the network over time (25 time steps) and treat the model as a feed-forward network with shared weights. This corresponds to approximately 1.7 seconds."}, {"section_index": "14", "section_name": "4.4 GENERATING DATA", "section_text": "We start with k = 25 and increase it every few iterations up to k = 200. For the full details check Appendix A.
In order to accelerate training, we run several games in parallel. This allows us to run the Perception, Prediction and Valuation networks together with the ATARI simulation in parallel, which heavily speeds up the generation of data without any drawback."}, {"section_index": "15", "section_name": "4.5 TRAINING", "section_text": "In the beginning, we generate 400K training cases for each of the games by playing randomly, which gives us a total of 1.2M training cases. Then, for the subsequent iterations, we generate 200K additional training cases per game (600K in total) and train again on the whole dataset. That is, at first we have 1.2M training cases, afterwards 1.8M, then 2.4M and so on.

For our Valuation network, we output two values. First, the probability that our score is higher than in the initial time step. Second, we output the probability of dying. This is trained using a cross-entropy loss.

To train the model, we use an off-line learning approach for simplicity. During training we alternate between two steps: first, generate and store data, and then train the model off-line on that data.

a_i: A 4 x 84 x 84 tensor, containing 4 consecutive black and white frames of size 84 x 84 each.
C: For j in {i + 1, ..., i + 25}, each c_j is a 3 dimensional vector that encodes the control action performed at time j. The first dimension corresponds to the shoot action, the second to horizontal actions and the third to vertical actions. For example, [1, 1, 0] represents pressing shoot and left.
R: For j in {i + 1, ..., i + 25}, we store a 2 dimensional binary vector r_j. r_{j1} is 1 if we die between time i and j. r_{j2} is 1 if we have not lost a life and we also earn a point between time i and j.

Initially, we have an untrained model, so at each time step, we pick an action uniformly at random and perform it. For the next iterations, we pick a k and do the following to play the game:

1. Run the Perception network on the last 4 frames to obtain the initial vector.
2. Generate k - 1 sequences of 25 actions uniformly at random. Apart from that, take the best sequence from the previous time step and also consider it. This gives a total of k sequences. Then, for each sequence, run the Prediction and Valuation networks with the vector obtained in Step 1.
3. Finally, pick a sequence of actions as follows. Consider only the moves that have a low enough probability of dying. From those, pick the one that has the highest probability of earning a point. If none has a high enough probability, just pick the one with the lowest probability of dying.

 | Pong | Breakout | Demon Attack
Human score | 9.3 | 31.8 | 3401
PRL Best (Multi-task) | 14.6 | 316 | 6872
PRL Best (Single-task) | 18.2 | 186 | 6100
A3C (Mnih et al., 2016) | 18.9 | 766.8 | 115202

Table 4: The best iteration of PRL is able to surpass human performance in all three tasks. Still, state of the art model-free approaches work better.

The training is done in a supervised way, as depicted in Figure 2b. a_i and C are given as input to the network and R as the target. We minimize the cross-entropy loss using mini-batch gradient descent. For the full details on the learning schedule check Appendix A.
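A minimal sketch of the action-selection steps 2-3 above. This is our illustration, not the authors' code: `predict` and `evaluate` stand in for the Prediction and Valuation networks, the Valuation is applied only to the final state for brevity, and re-using the previous time step's best sequence is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    def pick_sequence(h0, predict, evaluate, k=200, horizon=25, death_threshold=0.005):
        """Sample k random control sequences, roll the Prediction forward from
        the Perception state h0, and choose among the candidates."""
        best, best_point = None, -1.0
        safest, safest_death = None, 2.0
        for _ in range(k):
            seq = rng.integers(0, 2, size=(horizon, 3)).astype(float)  # shoot/horiz/vert bits
            h = h0
            for x in seq:
                h = predict(h, x)             # one Prediction step per future control
            p_death, p_point = evaluate(h)    # Valuation on the resulting state
            if p_death < safest_death:
                safest, safest_death = seq, p_death
            if p_death <= death_threshold and p_point > best_point:
                best, best_point = seq, p_point
        # if nothing is safe enough, fall back to the least deadly sequence
        return best if best is not None else safest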
In order to accelerate the process, instead of training a new network in each iteration, we keep training the model from the previous iteration. This has the effect that we would train much more on the initial training cases, while the most recent ones would have an ever smaller effect as the training set grows. To avoid this, we assign a weight to each iteration and sample according to these weights during training. Every three iterations, we multiply by three the weights we assign to them. By doing this, we manage to focus on recent training cases, while still preserving the whole training set.

Observe that we never tell our network which game it is playing; it learns to infer it from the observation a_i. Also, at each iteration, we add cases that are generated using a different neural network, so our training set contains instances generated using many different strategies."}, {"section_index": "16", "section_name": "4.6 RESULTS", "section_text": "We have trained a model on the three games for a total of 19 iterations, which corresponds to 4M time steps per game (74 hours of play at 60 Hz). Each iteration takes around two hours on our hardware. We have also trained an individual model for each game for 4M time steps. In the individual models, we reduced the length of the training such that the number of parameter updates per game is the same as in the multi-task case. Unless some kind of transfer learning occurs, one would expect some degradation in performance in the multi-task model. Figure 5 shows that not only is there no degradation in Pong and Demon Attack, but also that there is a considerable improvement in Breakout. This confirms our initial belief that our approach is specially well suited for multi-task learning.

We have also argued that our model can potentially play a very different strategy from the one it has observed. Table 3 shows that this is actually the case. A model that has learned only from random play is able to play at least 7 times better.

Demon Attack's plot in Figure 5c shows a potential problem we mentioned earlier, which also happens in the other two games to a lesser extent. Once the strategy is good enough, the agent dies very rarely. This causes the model to "forget" which actions lead to a death and makes the score oscillate.

Table 3: After one iteration, Predictive Reinforcement Learning (PRL) has only observed random play, but it can play much better. This means that it is able to generalize well to many situations it has not observed during training."}]
ryrGawqex
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Moshe Looks, Marcello Herreshoff, DeLesley Hutchins & Peter Norvig Google Inc\nJacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. In NAACL, 2016..\n{madscience, marcelloh, delesley norvig}@google.com\nDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointl learning to align and translate. In ICLR, 2015.\nAnna Maria Bianucci, Alessio Micheli, Alessandro Sperduti, and Antonina Starita. Application of cascade correlation networks for structures to chemistry. Applied Intelligence, 2000..\nNeural networks that compute over graph structures are a natural fit for problems. in a variety of domains, including natural language (parse trees) and cheminfor-. matics (molecular graphs). However, since the computation graph has a different shape and size for every input, such networks do not directly support batched training or inference. They are also difficult to implement in popular deep learn-. ing libraries, which are based on static data-flow graphs. We introduce a technique called dynamic batching, which not only batches together operations between dif-. ferent input graphs of dissimilar shape, but also between different nodes within a. single input graph. The technique allows us to create static graphs, using popu-. lar libraries, that emulate dynamic computation graphs of arbitrary shape and size. We further present a high-level library' lof compositional blocks that simplifies the. creation of dynamic graph models. Using the library, we demonstrate concise and batch-wise parallel implementations for a variety of models from the literature.\nSamuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, anc. Christopher Potts. A fast unified mode1 for parsing and sentence understanding. In NAACL, 2016\nJohn Hughes. Generalising monads to arrows. Science of Computer Programming, 2000\nSteven Kearnes, Kevin McCloskey, Marc Berndl, Vijay Pande, and Patrick Riley. Molecular graph convolutions: moving beyond fingerprints. Journal of Computer-Aided Molecular Design, 2016."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo lutional neural networks. In NIPS, 2012\nTsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. arXiv, 1607.04315, 2016a\nTsendsuren Munkhdalai and Hong Yu. Neural tree indexers for text understanding. arXiv 1607.04492, 2016b.\nHowever, there is also a long history of neural networks that compute over structures such as parse trees (Pollack||1990), logical terms (Goller & Kuchler!|1996), and molecular graphs (Bianucci et al. 2000). In these models, each distinct input has a different computation graph structure; we say thai. they use dynamic computation graphs (DCGs). Such models continue to be developed and have recently yielded superior results on problems such as sentiment classification and semantic related ness (Tai et al.]2015] Li et al.]2015), question-answering (Andreas et al.]2016), and screening o1 chemical compounds (Kearnes et al.|2016). Despite these successes, most practitioners avoid DCGs for implementation reasons. For example, Bowman et al.(2016) assert that \"because TreeRNNs use. a different model structure for each sentence ... efficient batching is impossible in standard imple. mentations\"'. 
Moreover, even if efficient batching were possible in principle, current libraries such as TensorFlow (Abadi et al., 2016) assume that the data-flow graph is static (i.e. is the same for each input) and impose a significant cost on graph construction, which makes it infeasible to build a new graph for each input.

Jordan B Pollack. Recursive distributed representations. Artificial Intelligence, 1990.

Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv, 1603.05118, 2016.

Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. In NAACL, 2015.

Section 2 introduces dynamic batching, which enables efficient batching for training and inference with DCGs. Dynamic batching runs DCGs efficiently with existing libraries that only support static data-flow graphs; e.g. the same static graph can run a TreeRNN over any parse tree. We present empirical results for our implementation in TensorFlow. Section 3 presents a combinator library for concisely implementing models with DCGs using dynamic batching. Section 4 concludes."}, {"section_index": "3", "section_name": "FEED-FORWARD ATTENTION", "section_text": "The feed-forward attention model from Section 3.4 may be implemented in Fold as follows:

In deep learning libraries like TensorFlow, computations are manually batched. The computation is expressed as a static graph of mathematical operations, such as y = σ(x . w + c), which are polymorphic in batch size; an input x of dimensions (b, n) will yield an output of dimensions (b, m), where b is the batch size. With DCGs, the graph of operations is not static, but is assumed to be different for every input, so multiple inputs no longer naturally batch together in the same way. The dynamic batching algorithm overcomes this difficulty. Given a set of computation graphs as input, each of which has a different size and topology, it will rewrite the graphs by batching together all instances of the same operation that occur at the same depth in the graph. The rewriting process inserts additional concat and gather operations to move data between the batched operations; the
indices to gather encode the topology of the original input graphs.

attention = Composition()
with attention.scope():
  h = attention.input
  exp_e = Map(a >> Function(tf.exp)).reads(h)
  z = (Sum() >> Broadcast()).reads(exp_e)
  alpha = ZipWith(Function(tf.div)).reads(exp_e, z)
  c = (ZipWith(Function(tf.mul)) >> Sum()).reads(alpha, h)
  attention.output.reads(c)

Within a composition scope, blocks may be wired together with reads, provided no directed cycles are formed. The input and output properties are used to define the overall inputs and outputs of the composition block. This example introduces several additional block types:

We distinguish between individual operations appearing as nodes in the underlying data-flow graph, such as addition or matrix-multiply, and small sub-graphs that conceptually act as functions over tensors, such as a feed-forward layer or LSTM cell. We refer to the former as "ops", and to the latter as "operations." Operations (i.e. sub-graphs) form the building-blocks from which neural networks with DCGs are composed; dynamic batching schedules operations, not ops. Our algorithm requires that all operations which might be used be specified in advance, and it enumerates them for scheduling purposes. For example, a binary TreeRNN for NLP parse trees has two operations: embedding table lookups for words at the leaves of the tree, and RNN cells for the non-terminals.

The inputs and outputs of operations have tensor types. Each input or output may have a different type, but all types must be fixed and fully specified in advance. A tensor type consists of a shape, x1, ..., xn, together with a scalar data type (e.g. float32). The inputs to an operation shall be tensors of dimension (b, x1, ..., xn), where b is the batch size and x1, ..., xn is the shape of the corresponding input tensor type. The outputs must all be tensors of dimension (b, y1, ..., ym), where y1, ..., ym is the shape of the corresponding output tensor type. Operations must be polymorphic with respect to the batch size, because the batch size will change each time the operation is invoked, depending on the topologies of the input graphs. However, their tensor types are fixed, so that it is possible to assign a known tensor type to each edge in the input computation graph."}, {"section_index": "4", "section_name": "B GRAPH CONVOLUTIONS", "section_text": "This section implements the graph convolution model introduced by Kearnes et al. (2016), for molecules represented as undirected graphs of atoms. There are real-valued feature vectors for each atom and for each distinct pair of atoms. For a molecule having N atoms, we index its atom feature vectors as a_i in R^n for 1 <= i <= N. We index its pair feature vectors as p_{i,j} in R^m for 1 <= i, j <= N, where p_{i,j} = p_{j,i} and p_{i,i} = 0.

a_i^y = f_A(f_{A->A}(a_i^x), sum_{j=1}^{N} f_{P->A}(p_{i,j}^x))
p_{i,j}^y = f_P(f_{A->P}(a_i^x, a_j^x) + f_{A->P}(a_j^x, a_i^x), f_{P->P}(p_{i,j}^x))

where f_A, f_P,
f_{A->A}, f_{A->P}, f_{P->A} and f_{P->P} are learnable functions.

It is noteworthy that the a^x -> p^y calculation involves a nested scan over the atoms; for each a_i^x we must calculate f_{A->P}(a_i^x, a_j^x) + f_{A->P}(a_j^x, a_i^x) for all 1 <= j <= N:

a_i_to_p = Composition()
with a_i_to_p.scope():
  a_x_i = Broadcast().reads(a_i_to_p.input[0])
  a_x = a_i_to_p.input[1]
  f_i_j = ZipWith(Concat() >> f_a_p).reads(a_x_i, a_x)
  f_j_i = ZipWith(Concat() >> f_a_p).reads(a_x, a_x_i)
  p = ZipWith(Sum()).reads(f_i_j, f_j_i)
  a_i_to_p.output.reads(p)

In our TensorFlow implementation, each dynamic operation is instantiated once in the static data-flow graph. The inputs to each operation are tf.gather ops, and the outputs are fed into tf.concat ops, as described above. These TensorFlow ops are then placed within a tf.while_loop. Each iteration of the loop will evaluate all of the operations at a particular depth. The loop maintains state variables for each tensor type t, and feeds the output of concat for tensor type t and iteration d into the input of the gathers at tensor type t and iteration d + 1. The indices for gather at iteration d are drawn from the edge labels i for depth d in the schedule. The initial values for the state variables at iteration/depth 0 are the constants in the input graph.

The a_i_to_p block takes input of type Tuple(Tensor_float32[n], Sequence(Tensor_float32[n])). We broadcast a_i^x over a^x twice in succession to compute f_{A->P}(a_i^x, a_j^x) and f_{A->P}(a_j^x, a_i^x) for all 1 <= j <= N, yielding f_i_j and f_j_i, which are length-n sequences of vectors. We join and sum

Sum is a specialization of Reduce that performs elementwise addition.
ZipWith is a variant of Map that accepts n sequences as input and applies an n-ary function f elementwise (stopping when the end of the shortest input sequence is reached).
Broadcast creates a Sequence(t) from a single t, repeating the same element endlessly.

The core of the graph convolution model is the weave module, which combines atom-level and pair-level features using six learnable functions (typically fully connected ReLU layers). The weave module can be stacked arbitrarily to create deep graph convolution models. Denoting inputs and outputs by x and y superscripts respectively, the weave module is:

The dynamic batching algorithm takes a directed acyclic computation graph as input. A batch of multiple input graphs can be treated as a single disconnected graph. Source nodes are constant tensors, and non-source nodes are operations. Edges connect one of the outputs of a node to one of the inputs of another node. Scheduling is performed using a greedy algorithm:

Assign a depth to each node in the graph. Nodes with no dependencies (constants) are assigned depth zero. Nodes with only dependencies of depth zero are assigned depth one, nodes whose dependencies have a maximum depth of one get assigned depth two, etc.
Insert pass-through (identity) operations so that an operation at depth d + 1 only refers to results at depth d.
Batch together all nodes invoking the same operation at the same depth into a single node.
Concatenate all outputs which have the same depth and tensor type. The order of concatenation corresponds to the order in which the dynamic batching operations were enumerated.
Assign a label (d, t, i) to each edge in the original graph, where d is the depth, t is the tensor type, and i is the integer index for that edge into the (concatenated) outputs for d, t. The schedule for the graph consists of the indices i for all edges, which are grouped together by
depth and operation.

each of these vectors elementwise to obtain the ultimate output of the block, which is also a length-n sequence of vectors. The overall weave module may now be implemented as follows:

weave = Composition()
with weave.scope():
  a_x = weave.input[0]
  p_x = weave.input[1]
  a_to_a = Map(f_a_a).reads(a_x)
  p_to_a = Map(Map(f_p_a) >> Sum()).reads(p_x)
  a_y = ZipWith(Concat() >> f_a).reads(a_to_a, p_to_a)
  a_to_p = ZipWith(a_i_to_p).reads(a_x, Broadcast().reads(a_x))
  p_to_p = Map(Map(f_p_p)).reads(p_x)
  p_y = ZipWith(ZipWith(Concat() >> f_p)).reads(a_to_p, p_to_p)
  weave.output.reads(a_y, p_y)

[Figure 1 diagram omitted: a while-loop body of gather, embed-lookup, RNN-cell and concat ops over int[] and float32[128] state (left), and the gather indices for an example tree (right).]

Figure 1: The static data-flow graph created by dynamic batching for a binary TreeRNN over parse trees (left), and input graph corresponding to the parse tree ((word1, word3), word5) (right)

Dynamic batching allows us to construct a static TensorFlow graph that contains a single instance of each operation, yet can emulate input graphs of arbitrary size and topology where operations may appear an arbitrary number of times. The TensorFlow concat, gather, and while_loop ops are all differentiable, so gradient calculations and back-propagation do not require any additional code.

For example, a binary TreeRNN as described above yields a TensorFlow data-flow graph with a tf.while_loop whose body is shown on the left of Figure 1. Here each gather has an additional input (the indices for the given op at the given depth) which picks out which elements the operations are to be called with. The long downward arrows are the pass-throughs. The algorithm consumes a tree such as the one shown on the right of Figure 1 and turns it into inputs for the gather operations at each depth (here depth is the loop counter for the tf.while_loop)."}, {"section_index": "5", "section_name": "2.1 EXPERIMENTAL RESULTS", "section_text": "We have implemented dynamic batching as part of a new library, TensorFlow Fold, and designed a synthetic speed benchmark to compare it with manual batching in native TensorFlow. The benchmark uses the same underlying kernels and execution engine in both cases. Native TensorFlow cannot batch together trees of different shapes so, for testing purposes, we use a batch of random binary trees, all of which have the same shape. These test results thus represent a best-case scenario, in which all operations can be batched together perfectly. For the manual batching tests, we construct a static data-flow graph of operations corresponding to the shape of the tree. For the dynamic batching tests, we traverse each tree to construct a schedule, as described above.

Define the functional unit of comparison as an input-output mapping.
Prepare a single file that implements this functionality and nothing else.
Remove import statements, abstract base classes, logging, file i/o, and validation logic.
Count lines of code, ignoring blank lines and comments.

The leaves of the tree are lookups into an embedding table, while the non-terminals implement a variant of the Tree-LSTM (Tai et al., 2015) equations. The tree size is 128, with a state size of 1024 for the LSTM.
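The following is a toy Python sketch of the greedy scheduling pass described above (depth assignment and batching by (depth, operation)); it is our illustration, not the TensorFlow Fold implementation, and pass-through insertion is omitted. The example tree is the ((word1, word3), word5) parse from Figure 1, with hypothetical node names.

    from collections import defaultdict

    def schedule(nodes):
        """`nodes` maps a node id to (op_name, list_of_input_ids);
        constants have op_name 'const' and no inputs."""
        depth = {}
        def node_depth(n):  # depth = 1 + max depth of the node's inputs
            if n not in depth:
                op, inputs = nodes[n]
                depth[n] = 0 if not inputs else 1 + max(node_depth(i) for i in inputs)
            return depth[n]
        for n in nodes:
            node_depth(n)
        batches = defaultdict(list)  # (depth, op) -> nodes evaluated together
        for n, (op, _) in nodes.items():
            if op != 'const':
                batches[(depth[n], op)].append(n)
        return depth, dict(batches)

    # the TreeRNN for ((word1, word3), word5): two operations, 'embed' and 'cell'
    tree = {
        'i1': ('const', []), 'i3': ('const', []), 'i5': ('const', []),
        'w1': ('embed', ['i1']), 'w3': ('embed', ['i3']), 'w5': ('embed', ['i5']),
        'n1': ('cell', ['w1', 'w3']),
        'root': ('cell', ['n1', 'w5']),
    }
    d, b = schedule(tree)
    # b == {(1, 'embed'): ['w1', 'w3', 'w5'], (2, 'cell'): ['n1'], (3, 'cell'): ['root']}
    # note: 'w5' (depth 1) is consumed at depth 3, so the real algorithm would
    # insert pass-through identities at depth 2.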
The CPU tests were run on a Dell z620 workstation with dual 8-core Intel Xeon processors (32 hardware threads), and the GPU tests were done using a consumer Nvidia GeForce GTX-1080 card. We compare manual batching, dynamic batching where all trees have the same shape, and dynamic batching where each tree has a different shape (the column marked "full dynamic"). There is no measurable penalty for dealing with trees of different shapes."}, {"section_index": "6", "section_name": "FEED-FORWARD ATTENTION", "section_text": "The functional unit of comparison is creating the model for the variable-length experiment described in Raffel & Ellis (2016, sec. 2.3). This includes the loss and accuracy calculations, but does not include the training loop or the creation of training data. The original implementation is in Python and uses Theano and Lasagne. The TensorFlow Fold implementation is more concise, partly due to differences between TensorFlow and Lasagne. Fold itself reduces implementation complexity by eliminating the need for manual batching, e.g. x.sum(axis=1), where batching is explicit over axis 0, vs. x >> Sum(), which is implicitly batched.

The test results shown in Table 1 emphasize the importance of batching, especially on GPUs. TensorFlow will launch a GPU kernel for every node in the tree, so there is a fixed overhead, proportional to the size of the tree, that dominates execution for small batch sizes. TensorFlow does not begin to saturate the GPU until relatively large batch sizes, 1024 or higher. The difference in speed between fully-batched and unbatched is over 160x.

Dynamic batching has less kernel invocation overhead because the data-flow graph is smaller. Dynamic batching instantiates each operation only once, and invokes it once for each depth, so the number of kernel invocations is log(n), rather than n, where n is tree size. Dynamic batching thus achieves substantial speedups even at batch size 1, because it batches operations at the same depth within a single tree.

Table 1: Inference timing benchmark; times are wall-clock averages in seconds

batch-size | manual (batch / tree) | dynamic (batch / tree) | full dynamic (batch / tree) | cost ratio | speedup ratio
(CPU)
1024 | 14.62 / 0.014 | 18.68 / 0.018 | 18.37 / 0.017 | 1.27 | 28.86
512 | 7.54 / 0.014 | 9.84 / 0.019 | 9.57 / 0.018 | 1.30 | 27.68
256 | 4.14 / 0.016 | 5.22 / 0.020 | 5.25 / 0.020 | 1.26 | 25.23
128 | 2.48 / 0.019 | 2.95 / 0.023 | 3.08 / 0.024 | 1.18 | 21.47
64 | 1.64 / 0.025 | 1.76 / 0.027 | 1.78 / 0.027 | 1.06 | 18.55
32 | 1.27 / 0.039 | 1.05 / 0.032 | 1.10 / 0.034 | 0.82 | 14.94
1 | 0.52 / 0.517 | 0.26 / 0.258 | 0.26 / 0.262 | 0.49 | 1.97
(GPU)
1024 | 0.978 / 0.0009 | 1.590 / 0.0015 | 1.617 / 0.0015 | 1.62 | 101.79
512 | 0.530 / 0.0010 | 0.715 / 0.0013 | 0.721 / 0.0014 | 1.34 | 114.15
256 | 0.312 / 0.0012 | 0.323 / 0.0012 | 0.340 / 0.0013 | 1.03 | 120.86
128 | 0.236 / 0.0018 | 0.164 / 0.0012 | 0.178 / 0.0013 | 0.69 | 115.05
64 | 0.193 / 0.0030 | 0.093 / 0.0014 | 0.106 / 0.0016 | 0.48 | 96.40
32 | 0.153 / 0.0047 | 0.061 / 0.0019 | 0.074 / 0.0023 | 0.40 | 68.79
1 | 0.161 / 0.1608 | 0.038 / 0.0376 | 0.036 / 0.0359 | 0.23 | 4.47

a_to_a maps over a^x with f_{A->A}, going from Sequence(Tensor) to Sequence(Tensor).
p_to_a maps over p^x with f_{P->A} and sums along the inner dimension, reducing from Sequence(Sequence(Tensor)) to Sequence(Tensor).
a_y zips a_to_a and p_to_a with f_A, going from Tuple(Sequence(Tensor), Sequence(Tensor)) to Sequence(Tensor).
a_to_p broadcasts a^x over itself with a_i_to_p, expanding from Sequence(Tensor) to Sequence(Sequence(Tensor)).
p_to_p maps over p^x with f_{P->P}, going from Sequence(Sequence(Tensor)) to Sequence(Sequence(Tensor)).
p_y zips a_to_p and p_to_p with f_P, going from Tuple(Sequence(Sequence(Tensor)), Sequence(Sequence(Tensor))) to Sequence(Sequence(Tensor)).

6 All of the implementations we examine are formatted with 80-column lines excepting the Tree-LSTM implementation, which has a few lines that are slightly longer; we still count these as single lines."}, {"section_index": "7", "section_name": "TREE-LSTM", "section_text": "The functional unit of comparison is creating a (binary) constituency Tree-LSTM and running an epoch of training for the fine-grained sentiment classification task as described in Tai et al. (2015, sec. 5.1). This does not include loading the word embeddings or dataset, which are provided as inputs.
The original implementation is in Lua and uses Torch. Lua terminates blocks with the end keyword; we do not count these lines. Here, the use of Python and TensorFlow leads to substantially more concise code than with Lua and Torch. Unlike the previous example, manual batching plays no role here, because the original implementation computes gradients and losses one tree at a time. Fold reduces complexity here by using a OneOf block to distinguish between leaves and internal nodes, rather than a recursive function that explicitly traverses the tree.

batch-size    manual           dynamic          full dynamic     cost    speedup
              batch    tree    batch    tree    batch    tree    ratio   ratio
(CPU)
1024          14.62    0.014   18.68    0.018   18.37    0.017   1.27    28.86
512            7.54    0.014    9.84    0.019    9.57    0.018   1.30    27.68
256            4.14    0.016    5.22    0.020    5.25    0.020   1.26    25.23
128            2.48    0.019    2.95    0.023    3.08    0.024   1.18    21.47
64             1.64    0.025    1.76    0.027    1.78    0.027   1.06    18.55
32             1.27    0.039    1.05    0.032    1.10    0.034   0.82    14.94
1              0.52    0.517    0.26    0.258    0.26    0.262   0.49     1.97
(GPU)
1024          0.978    0.0009   1.590   0.0015   1.617   0.0015  1.62   101.79
512           0.530    0.0010   0.715   0.0013   0.721   0.0014  1.34   114.15
256           0.312    0.0012   0.323   0.0012   0.340   0.0013  1.03   120.86
128           0.236    0.0018   0.164   0.0012   0.178   0.0013  0.69   115.05
64            0.193    0.0030   0.093   0.0014   0.106   0.0016  0.48    96.40
32            0.153    0.0047   0.061   0.0019   0.074   0.0023  0.40    68.79
1             0.161    0.1608   0.038   0.0376   0.036   0.0359  0.23     4.47
"}, {"section_index": "8", "section_name": "GRAPH CONVOLUTION", "section_text": "The functional unit of comparison is creating a single weave module as described in Kearnes et al. (2016, sec. 3.3). The original implementation is in Python and uses TensorFlow. Here, both implementations use the same language and deep learning library. Fold helps by eliminating the need for manual batching, as in the first example. This is particularly apparent in the atoms-to-pairs calculation, which requires making n "copies" of an n x d matrix x to get an n x n x d tensor. In native TensorFlow the first dimension is batch, and the copying is explicit, as reshape(tile(x, [1, n, 1]), [batch_size, n, n, d]). In Fold, x >> Broadcast() suffices, because the number of copies needed is determined lazily by subsequent computations.

However, the extra concat and gather ops that dynamic batching inserts do have a cost. The "cost ratio" column above shows the ratio between dynamic and manual batching, in the case where all trees in the batch have the same shape. The cost is only 20% for inference on GPUs with batch-size 1, but rises to 60% for training with backpropagation. The cost is mainly visible at large batch sizes, because it is balanced by the benefit of within-tree batching at smaller sizes.

Even with the cost, dynamic batching yields a 120x speedup over using a batch size of 1 on GPU, and 28x on CPU. The "speedup ratio" column above shows the ratio between the per-tree time for dynamic batching on random shapes ("full dynamic"), versus manual batching with a batch size of 1. Note that using a batch size of 1 is not actually feasible for TensorFlow, because TensorFlow has a large graph construction overhead, which is not included in these measurements, but it may apply to other libraries that lack such overhead.
"}, {"section_index": "9", "section_name": "A COMBINATOR LIBRARY FOR NEURAL NETWORKS", "section_text": "In addition to dynamic batching, the TensorFlow Fold library provides a set of combinators that simplify the task of constructing neural networks for DCGs.
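As a plain-Python preview of the three sequence combinators defined below — Map, Fold, and Reduce — the following sketch shows their semantics only; it is hypothetical illustration, not the Fold API.

from functools import reduce

def map_(f, xs):
    # Map(f): apply f elementwise.
    return [f(x) for x in xs]

def fold(g, z, xs):
    # Fold(g, z): chain g leftward from initial value z, like an RNN scan.
    return reduce(g, xs, z)

def reduce_balanced(g, xs):
    # Reduce(g): combine elements pairwise in a balanced tree (log depth).
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return g(reduce_balanced(g, xs[:mid]), reduce_balanced(g, xs[mid:]))

xs = [1, 2, 3, 4]
print(map_(lambda x: x * x, xs))        # [1, 4, 9, 16]
print(fold(lambda a, x: a + x, 0, xs))  # 10
print(reduce_balanced(max, xs))         # 4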
Our goal here is to show how dynamic batching enables implementing deep learning models (which are growing ever more complex) at a higher level of abstraction than manual batching. This in turn facilitates a more rapid feedback loop for trying out novel model variants, and thus obtaining superior results.

The design of the library was inspired by functional programming techniques such as parser combinators (Hutton & Meijer, 1996) and arrows (Hughes, 2000). In a combinator library, computations are structured compositionally, by plugging together simpler computations in various ways. The basic unit of computation in TensorFlow Fold is a block, essentially a function from input to output. In a typical DCG model, the input is a graph or tree of some kind, and the output is a vector, which can be attached to a loss for training.

For example, consider a model where the inputs are sequences of words, of varying lengths, and the output is a sentence vector. Our library provides several different ways of handling sequences. Given a simpler block f that operates on elements of the sequence, or g on pairs of elements, we define the following combinators:

- Map(f): yields [f(x1), f(x2), ... f(xn)]. Applies f to each element of the sequence, e.g. embedding each of the words of a sentence into R^N.
- Fold(g, z): yields g(...g(g(z, x1), x2), ... xn). Applies g sequentially in a leftward chain, e.g. running an RNN over a sequence. By default z = 0.
- Reduce(g): yields g(Reduce([x1, ... x_{n/2}]), Reduce([x_{n/2+1}, ... xn])). Applies g in a balanced tree,2 e.g. max or sum-pooling over the elements.

Note that it is not necessary to pad or truncate sequences to the same length; dynamic batching handles sequences of differing lengths.

2Reduce uses a balanced tree rather than a chain in order to minimize computation depth and provide more opportunities for batching.

Blocks are statically typed; each block has an input type and an output type. Types are inferred where possible, but must be explicitly specified in some cases. A type is one of the following:

- Input denotes objects in the host language (Python), such as trees and dictionaries.
- Tensor_{dtype,shape} denotes tensors of a particular dtype and shape.
- Tuple(t1, ... tn) denotes a tuple of values of types t1, ... tn.
- Sequence(t) denotes a sequence of elements of type t, of any length.
- Void is the unit type.

For example, Sequence(Sequence(Tuple(Tensor_{float32,[]}, Tensor_{int8,[3,4]}))) denotes jagged arrays whose elements are pairs (float32 scalar, int8 3x4 matrix).

Blocks are composed hierarchically; a block expression is always a tree. The non-terminals in the tree are combinators such as Map and Fold, which take simpler blocks as arguments. The leaves of the tree are atomic blocks, which include the following:

- Scalar: Input -> Tensor. Converts a Python scalar to a tensor.
- Tensor: Input -> Tensor. Converts a NumPy array to a tensor.
- Function(h): [Tensor or Tuple(Tensor, ...)] -> [Tensor or Tuple(Tensor, ...)]. Defines an operation h (see Section 2) over tensors. Operations with multiple inputs and outputs use tuples of tensors.
- InputTransform(h): Input -> Input. Applies a user-defined Python function h to pre-process the input.

In addition to the sequence combinators described above, important combinators in the library include the following:

- b1 >> b2: Function composition; the output of b1 is fed to the input of b2.
- Record({l1: b1, ...
ln: bn}): Input -> Tuple(t1, ... tn). Takes a Python dictionary or tuple as input, and applies each block b_i to the field labeled l_i, to yield an object of type t_i. Returns a tuple of the results for all fields.
- OneOf(b1, ... bn): Input -> t. Conditionally dispatches on its input to one of the blocks b1, ... bn.
- Optional(b): Input -> t. Applies b if the input is not None, otherwise returns zeros. A special case of OneOf.
- AllOf(b1, ... bn): t0 -> Tuple(t1, ... tn). Passes its input of type t0 to each of the blocks b1, ... bn, returning a tuple of results.

Figure 2: Block architectures for a pipeline (Section 3.3), feed-forward attention (Section 3.4), binary Tree-LSTMs (Section 3.5), and the weave module for molecule graphs (Section 3.6).

Assume we have a set of (text, label) pairs as input and wish to predict the label from the text. The text consists of words, and we want to use an array of pretrained word embeddings (word_matrix) and a corresponding dictionary mapping words to indices (word_idx). We call word_idx.get(word) to obtain the index of word in word_matrix, or None if word is unknown.

We start by creating a block which embeds each word into a continuous space:

word2vec = (InputTransform(word_idx.get) >>
            Optional(Scalar('int32')) >>
            Function(Embedding(initializer=word_matrix)))

This block uses an InputTransform to get the index of a word, which is passed to an Optional block that converts the scalar index to a tensor (or 0 if None). This in turn gets passed to an Embedding operation, which performs a lookup into an embedding table.

With word2vec in hand, we can define text2vec, which embeds sentences:

split = InputTransform(str.split)
rnn_cell = Concat() >> Function(FC(d, activation=tf.nn.relu))
text2vec = split >> Map(word2vec) >> Fold(rnn_cell, Zeros(d))

We use an InputTransform to split the string into words. Then we map the words to vectors with word2vec, and combine the word vectors with a simple RNN, which uses a single fully connected layer FC with d hidden units. The Zeros block defines the initial state for the RNN.

Assume there are n labels; we use a linear layer with n outputs to get unscaled logits:

text2logits = text2vec >> Function(FC(n, activation=None))

For training, we create a Record block to convert the label to a tensor as well, and calculate loss:

record = Record([('text', text2logits), ('label', Scalar('int32'))])
loss = record >> Function(tf.nn.sparse_softmax_cross_entropy_with_logits)

Finally, we create a Compiler, which validates a block, performs type-checking, and sets up dynamic batching in TensorFlow. Outputs of a compiled block are available as TensorFlow tensors, so training now proceeds as it would for any other TensorFlow model:

compiler = Compiler.create(loss)
cross_entropy = compiler.output_tensors[0]
train_op = tf.train.AdamOptimizer().minimize(cross_entropy)

Recently, Raffel & Ellis (2016) have introduced an attention model for feed-forward neural networks. The model generalizes average-pooling and is defined as:

e_t = a(h_t), \quad \alpha_t = \frac{\exp(e_t)}{\sum_{t'=1}^{T} \exp(e_{t'})}, \quad c = \sum_{t=1}^{T} \alpha_t h_t

In this model, the block architecture is not a simple pipeline (i.e. a composition using >>) but instead forms a directed acyclic graph, as illustrated in Figure 2. A Composition block allows blocks to be composed into DAGs.
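A minimal NumPy sketch of these attention equations (illustrative only; the learned scoring function a is taken here to be an arbitrary linear map with parameters w):

import numpy as np

def feed_forward_attention(h, w):
    """h: (T, d) sequence of hidden states; w: (d,) parameters of a linear
    scoring function a(h_t) = w . h_t. Returns the attention-weighted average c."""
    e = h @ w                       # e_t = a(h_t)
    alpha = np.exp(e - e.max())     # softmax over time, numerically stabilized
    alpha = alpha / alpha.sum()
    return alpha @ h                # c = sum_t alpha_t h_t

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 3))         # T=5 timesteps, d=3 state size
c = feed_forward_attention(h, rng.normal(size=3))
print(c.shape)                      # (3,) -- a generalized average over time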
The model code and details may be found in Appendix A.
"}, {"section_index": "10", "section_name": "3.5 RECURSIVE DEFINITIONS", "section_text": "N-ary Tree-LSTMs (Tai et al., 2015, sec. 3.2) generalize LSTMs from 1 to N previous states. In Tai et al. (2015, sec. 5.1) they are applied to classify sentences from the Stanford Sentiment Treebank. This corpus consists of binarized constituency parse trees of one-sentence movie reviews, where every node has a sentiment label. At the leaves of the tree, words are mapped to word-embedding vectors which serve as the input to a binary tree-LSTM with 0 for the previous states. At the internal nodes, the LSTM takes 0 as input, and previous states from its two children. More formally,

h_{word} = TreeLSTM(Embedding(word), 0, 0)
h_{left,right} = TreeLSTM(0, h_{left}, h_{right})

where TreeLSTM(x, h_{left}, h_{right}) is a learnable function corresponding to Tai et al. (2015), eqs. 9-14 with N = 2. Since a tree is a recursive data type, a model that processes trees must be recursively defined, as illustrated by the cycle in Figure 2. A ForwardDeclaration allows the creation of recursive models:

expr = ForwardDeclaration()
word = AllOf(Record([('word', word2vec)]),
             Zeros((state_size, state_size)))
pair = AllOf(Zeros(embedding_size),
             Record([('left', expr()), ('right', expr())]))
expr_def = (OneOf(key_fn=len, case_blocks=[(1, word), (2, pair)]) >>
            TreeLSTM(state_size))
expr.resolve_to(expr_def)

A forward declaration like expr is not itself a block, but may be called (using the expr() syntax) to create references - i.e. blocks which refer to the declaration. The subsequent call to resolve_to then updates all the references to refer to expr_def.

The word2vec block is as defined in Section 3.3.

Here we briefly report on some experiments with our implementation of N-ary Tree-LSTMs for sentiment analysis. While we set a new state-of-the-art, that is not really the point here. Our models are not particularly original, and could certainly be implemented without using TensorFlow Fold. What Fold does is to enable simpler and more concise definitions (see Table 3), along with faster execution, thus making it easier to rapidly explore novel model variants.

We used constituency Tree-LSTMs with tuned Glove vectors for word embedding, which achieved the best results of all sentiment models presented in Tai et al. (2015). In addition to this specific model, we have explored several novel variants.4 In particular, Tai et al. (2015) employed non-recurrent dropout and L2 weight regularization. We eliminated weight regularization in favor of the recurrent dropout scheme introduced by Semeniuta et al. (2016) and increased the LSTM state size from 150 to 300, leaving all other hyperparameters unchanged.

4Unsuccessful variants included standard LSTMs (i.e. having only a single forget gate) accepting pooled histories from their children, and models based on character rather than word-level embeddings.

Table 2: Test set accuracies on the Stanford Sentiment Treebank

Table 3: Lines of code comparison

model                     ours   original   ratio
Feed-Forward Attention      26         71    0.37
Tree-LSTM                  119        219    0.54
Graph Convolutions          32         44    0.73

Results are shown in Table 2, including the best previously reported results. Fine-grained accuracy is measured for all trees and calculated based on the five possible labels. Binary accuracy is measured only for trees with non-neutral sentiment, and is based on negative vs. positive classification. The
numbers in parentheses are standard deviations. Tai et al. (2015) report five independent runs; our results are based on thirty independent runs.5 Noting the small size of this dataset (8544/1101/2210 trees for train/dev/test), we further evaluated an ensemble consisting of these thirty independently trained models; this variant sets a new state-of-the-art on both subtasks.

model                       fine-grained   binary
Tai et al. (2015)           51.0 (0.5)     88.0 (0.3)
Munkhdalai & Yu (2016a)     52.8           89.7
Munkhdalai & Yu (2016b)     53.1           89.3
Ours (Single Model)         52.3 (0.7)     89.4 (0.4)
Ours (Ensemble)             53.6           90.2

5Munkhdalai & Yu (2016a;b) do not report standard deviations or number of runs.

As a final example, we have used the Fold library to implement the graph convolution model introduced by Kearnes et al. (2016) for molecules, which are represented as undirected graphs of atoms. The code is more complex than our previous examples because it involves nested Composition blocks, and is given in Appendix B.
"}, {"section_index": "11", "section_name": "4 DISCUSSION", "section_text": "The experimental results presented in Section 2.1 quantify the impact of dynamic batching. The impact of the combinator library is harder to demonstrate quantitatively. One way to approach this (with a large grain of salt) is by comparing lines of code, which we do in Table 3, vs. the original authors' sources. See Appendix C for details on the comparison protocol. Of course, a very short implementation is suboptimal if it comes at the cost of flexibility. The results in Section 3.5.1 show that models from the literature can be reimplemented in Fold, then extended to achieve superior performance. We suspect that other models with DCGs will have quite a bit of "head room" as well, due to simply having less work done tuning them compared with more mainstream architectures.

Neural architectures with dynamic computation graphs suffer from inefficient batching and poor tooling. Dynamic batching solves the former problem in full generality, we believe for the first time. The SPINN architecture (Bowman et al., 2016) is an alternative stack-based approach that also enables efficient batching with DCGs, but it is limited to binary trees and requires padding/truncation to handle trees of different sizes. The Fold library addresses the tooling problem by providing a high-level combinator library which is intended to make it easy for practitioners to rapidly develop and iterate on architectures with DCGs."}]
SkwSJ99ex
[{"section_index": "0", "section_name": "DEEPREBIRTH: A GENERAL APPROACH FOR ACCEL- ERATING DEEP NEURAL NETWORK EXECUTION ON MOBILE DEVICES", "section_text": "Table 5: GoogLeNet Execution Stora vs.En vs. Runtime-Memory Cos1\nComputer Science and Engineering Department Lehigh University 11 1801511S A\nTable 6: AlexNet Result (Accuracy vs. Speed vs. Energy cost\nStep Merged Layer(s) Top-5 Accuracy Speed-up Energy Cost 0 N/A 80.03% 445 ms 688 mJ 1 conv1+norm1 -> conv1 79.99% 343 ms (1.29x) 555 mJ (1.24x) 2 conv2+norm2 -> conv2 79.57% 274 ms (1.63x) 458 mJ (1.51x)\nAnother benefit of layer merging is run-time memory saving. The generated GoogLeNet-Merge model reduces the number of layers and consumes only 13.2 MB to process one image. This feature is also very useful for the cloud based deep learning service which can process a much larger batch at one run. As shown in table[5] one Titan X GPU can run a batch size of 882 with the GoogLeNet- Merge model while the original GoogLeNet can only allow a batch size of 350. On the other hand SqueezeNet though has much less trained parameters, it has much larger run-time memory impact due to the increased number of layers."}, {"section_index": "1", "section_name": "4.2 ALEXNET AND RESNET", "section_text": "To further analyze the generality of proposed DeepRebirth acceleration framework, besides. GoogLeNet, we also apply the proposed framework to other popular deep neural structures: AlexNet (Krizhevsky et al.) and ResNet (He et al.(2015)). Note that we did not apply tensor weights com- pression to those two models which can further reduce the model forwarding latency..\nFirst, we study the classical AlexNet model. We apply streamline merging approach to re-generate new layers by merging the first two convolution layers followed by LRN layers. We illustrate the re sult in Table[6 This indicates that by applying merging to the first two layers, the model forwarding time of AlexNet is reduced from 445 ms to 274 ms on Samsung Galaxy S5, and the Top-5 accuracy is slightly dropped from 80.03% to 79.57%.\nWe also apply the acceleration scheme to the state-of-the-art ResNet model. In the experiment, we. use the popular 50-layer ResNet-50 model as baseline. We mainly apply the acceleration framework. to conv1 and res2a layers (res2a has 2 branches; one branch has 1 convolution layer and anothe. branch has 3 convolution layers). We present the result in Table[7] The time latency on Samsung. Galaxy S5 for the processed layers (i.e., conv1 and res2a) is reduced from 189 ms to 104 ms. More-. over, the run-time memory cost is reduced by 2.21x. The accuracy is only slightly reduced.."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Table 7: ResNet (conv1-res2a) Result (Accuracy vs. Speed up)\nRecent years have witnessed the breakthrough of deep learning techniques for image classification and object recognition. Mobile device becomes more and more popular due to its convenient mo- biles services provided for end users. More and more mobile applications require deep learning techniques to provide accurate, intelligent and effective services. However, the execution speed of the deep learning model on mobile devices becomes a bottleneck for many applications due to the large model size, deep network structure and complicated model parameters, which hinders the real-time deployment. 
However, if the deep learning service is only provided on the cloud side, transmission of data between the mobile device and the cloud introduces additional latency as well as privacy concerns.

Table 7: ResNet (conv1-res2a) Result (Accuracy vs. Speed-up)

Step   Merged Layer(s)      Top-5 Accuracy   Speed-up          Runtime-Mem (Batch)
0      N/A                  92.36%           189 ms            2505 MB
1      conv1                92.13%           162 ms (1.17x)    2113 MB (1.19x)
2      res2a_branch1        92.01%           140 ms (1.35x)    1721 MB (1.46x)
3      res2a_branch2a-2c    91.88%           104 ms (1.82x)    1133 MB (2.21x)

Table 5: GoogLeNet Execution: Energy vs. Storage vs. Runtime Memory vs. Max Batch Size

Model                     Energy          Storage    Runtime Memory   Max Batch Size on Titan X
GoogLeNet                 984 mJ          26.72 MB   33.2 MB          350
GoogLeNet-Tucker          902 mJ          14.38 MB   35.8 MB          323
GoogLeNet-Merge           447 mJ (2.2x)   23.77 MB   13.2 MB          882 (2.52x)
GoogLeNet-Merge-Tucker    226 mJ (4.4x)   11.99 MB   14.8 MB          785 (2.24x)
SqueezeNet                288 mJ          4.72 MB    36.5 MB          321

Xiaolong Wang

Samsung Research America (SRA)
"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Deploying deep neural networks on mobile devices is a challenging task due to computation complexity and memory intensity. Existing works solve this problem by reducing model size using weight compression methods based on dimension reduction (i.e., SVD, Tucker decomposition and quantization). However, the execution speed of these compressed models is still far below the real-time processing requirements of mobile services. To address this limitation, we propose a novel acceleration framework, DeepRebirth, which exploits deep learning model parameter sparsity by merging parameter-free layers with their neighboring convolution layers into a single dense layer. The design of DeepRebirth is motivated by a key observation: some layers (i.e., normalization and pooling) in deep learning models actually consume a large portion of computational time even though few learned parameters are involved, and accelerating these layers has the potential to improve the processing speed significantly. Essentially, the functionality of several merged layers is replaced by a new dense layer - a rebirth layer - in DeepRebirth. In order to preserve the same functionality, the rebirth layer's parameters are retrained to be functionally equivalent to the original merged layers. Extensive experiments performed on ImageNet using several popular mobile devices demonstrate that DeepRebirth not only provides a large speed-up in model deployment and significant memory saving, but also maintains model accuracy, i.e., 3x-5x speed-up and energy saving on GoogLeNet with only a 0.4% accuracy drop on top-5 categorization in ImageNet. Further, by combining with other model compression techniques, DeepRebirth offers an average of 65 ms model forwarding time per image on a Samsung Galaxy S6 with only a 2.4% accuracy drop. In addition, a 2.5x run-time memory saving is achieved with rebirth layers.
"}, {"section_index": "4", "section_name": "5 RELATED WORK", "section_text": "Running deep learning models efficiently on mobile CPUs is a highly desirable feature for many reasons: (1) a CPU is available on all mobile devices, even phones released many years ago; (2) powerful CUDA-enabled GPUs are generally not available on (compact) mobile devices; (3) though a large majority of mobile devices are equipped with mobile GPUs, the speed-up achieved on mobile GPUs is quite limited when compared to the CPU (sh1r0 et al.
(2015)), not to mention the complexity caused by different mobile GPU architectures; (4) major deep learning frameworks such as Caffe (Jia et al. (2014)) and TensorFlow (Abadi et al. (2015)) currently only support CPU implementations on mobile devices, and therefore an efficient CPU-friendly model is highly desirable.

However, most current mobile CPUs cannot meet the needs of deep learning model deployment, because processing an image with a pre-trained deep learning model takes a long time and incurs a high energy cost. For example, it takes more than 651 ms to recognize an image using GoogLeNet on a Samsung Galaxy S5 (Table 4), with 984 mJ of energy cost (Table 5). Therefore, a question that naturally follows is: can we develop an efficient deep learning acceleration framework to facilitate the deployment of deep learning services on mobile devices?

This problem is challenging because a practical solution must support different practical scenarios by addressing the following challenges (C1-C3).

In order to improve network running efficiency, some scalable networks have been proposed that balance running speed and accuracy. Rastegari et al. (2016) designed a binary deep learning network (called XNOR-Net) where both the network weights and the inputs can be binarized for memory and computational savings. However, this network design degrades the accuracy greatly: the top-5 accuracy obtained by this framework is reduced by more than 10% for the ResNet-18 model, along with a 2x speed-up. Another popular newly designed small model, SqueezeNet (Iandola et al. (2016)), has become widely used for its much smaller memory cost and increased speed. However, its near-AlexNet accuracy is far below state-of-the-art performance. Compared with these two new networks, our approach achieves much better accuracy with more significant acceleration.

C2: Leveraging existing trained deep frameworks. In order to provide the best deep learning service, the mechanism is designed to take advantage of existing state-of-the-art deep learning architectures (e.g., GoogLeNet and ResNet) instead of training from scratch.

C3: Supporting different deep learning architecture components. The proposed technique should provide a generic framework that can be applied to popular deep learning models that may consist of different types of layers. In general, all neural network layers can be grouped into two categories: tensor layers and non-tensor layers, based on whether the layer contains tensor-type parameters. For example, the fully connected layer and the convolution layer are both tensor layers, since they contain 2-d and 4-d tensor-type weight parameters, respectively. The pooling layer and the LRN layer are both non-tensor layers, because they do not contain any high-order tensor-type weight parameters. Therefore, the framework is expected to support the optimization of both tensor and non-tensor layers.

Springenberg et al. (2014) show that the conv-relu-pool substructure may not be necessary for a neural network architecture. The authors find that max-pooling can simply be replaced by another convolution layer with increased stride without loss in accuracy. Different from this work, DeepRebirth replaces a complete substructure (e.g., conv-relu-pool, conv-relu-LRN-pool) with a single convolution layer, and aims to speed up model execution on a mobile device.
In addition, our work fine-tunes a trained network by relearning the merged "rebirth" layers and does not require training from scratch.

However, the current solutions for deep learning model acceleration are still quite limited in addressing these challenges. The main goal of prior works (Han et al. (2016b); Li (2013); Kim et al. (2015); Jiaxiang Wu & Cheng (2016)) is to reduce the model size by approximating the tensor-type layers using low-rank approximation and vector quantization techniques. While they can provide some acceleration, it applies only to fully-connected layers (used in AlexNet and VGGNet), so the application scenarios of these methods are very limited, because modern deep learning architectures (e.g., Inception and ResNet) have removed large fully-connected layers. Moreover, for non-tensor layers (e.g., normalization and pooling layers), which are generally used for speeding up network training and obtaining better generalization performance, no work, to the best of our knowledge, has discussed how to accelerate their execution.
"}, {"section_index": "5", "section_name": "6 CONCLUSION", "section_text": "We have proposed the DeepRebirth acceleration framework, which can speed up neural networks with satisfactory accuracy. Our method operates by regenerating new tensor layers from the non-tensor layers and their neighboring units. Moreover, as a generic method, DeepRebirth is compatible with state-of-the-art deep models like GoogLeNet and ResNet, where most parameter weight compression methods fail. By applying DeepRebirth to different deep learning architectures, we obtain significant speed-ups on different processors, especially on mobile CPUs. This will greatly facilitate the deployment of deep learning models on mobile phones and make it possible to provide smarter and more intelligent services in the new wave of AI applications.
"}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "To bridge these gaps, this paper proposes DeepRebirth, a new deep learning model acceleration framework that exploits the sparsity of deep neural network layers to accelerate both non-tensor and tensor layers through two types of rebirth: streamline merging and branch merging. In streamline merging, new tensor layers are generated by merging non-tensor layers with their neighboring sparse tensor layers in the feed-forward structure, as illustrated in Figure 2, while in branch merging, new tensor layers are created by fusing non-tensor branches with the sparse tensor branches at the same level, as shown in Figure 3, i.e., the inception module in GoogLeNet (Szegedy et al. (2014)). The design of DeepRebirth is guided by the key observation:

Non-tensor layers are the major obstacles for real-time mobile CPU execution (Section 2).

Reducing the execution time on non-tensor layers can therefore greatly reduce the overall model forwarding time.
In order to reduce the execution time, both streamline merging and branch merging are applied to merge non-tensor layers into tensor layers.1 Overall, reducing the execution time on non-tensor layers can greatly reduce the model forwarding time, given that tensor layers have already been optimized close to the minimum, as suggested by Han et al. (2016b) and Kim et al. (2015). Ideally, we can combine non-tensor and tensor layer optimization to further reduce latency as well as model size. To summarize, this paper makes the following contributions.

1Other examples of non-tensor layers include the dropout layer, normalization layer, softmax layer, etc.

Table 1: Percentage of Forwarding Time on Non-tensor Layers

Network      Intel x86   Arm      Titan X
AlexNet      32.08%      25.08%   22.37%
GoogLeNet    62.03%      37.81%   26.14%
ResNet-50    55.66%      36.61%   47.87%
ResNet-152   49.77%      N/A      44.49%
Average      49.89%      33.17%   35.22%

Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. CoRR, abs/1310.6343, 2013. URL http://arxiv.org/abs/1310.6343.

Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015.

Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pp. 1269-1277, 2014.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS'10). Society for Artificial Intelligence and Statistics, 2010.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.

Figure 1: Time decomposition for each layer, with panels (a) AlexNet, (b) GoogLeNet and (c) ResNet-50, comparing time percentage on Arm and Intel x86 processors. Non-tensor layers (e.g., dropout, ReLU, LRN, softmax, pooling) are shown in red, while tensor layers (e.g., convolution, inner-product) are shown in black.

- Our approach is the first work that optimizes non-tensor layers and significantly accelerates a deep learning model on CPUs while reducing the required runtime memory, since there are fewer layers in the reconstructed deep learning model.2
- To address the challenges (C1-C3), we perform both streamline merging and branch merging based on the original structure of the old layers, where the new tensor layers are generated by merging non-tensor layers with their neighboring sparse tensor layers vertically and horizontally.

Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv:1602.07360, 2016.

- As demonstrated in the experiments, our approach obtains state-of-the-art speed-ups on popular deep learning models with negligible accuracy loss. Our proposed method enables GoogLeNet to achieve a 3x-5x speed-up for processing a single image with only a 0.4% drop in Top-5 accuracy on ImageNet, without any weight compression method. By further applying model compression techniques, we achieve around 65 ms for processing a single image with a Top-5 accuracy of 86.5%. Furthermore, we show that our method works for state-of-the-art non-tensor layers, e.g., batch normalization, in very deep neural network models such as ResNet (He et al. (2015)).

Experimental Settings To give a better understanding of neural network latency, we evaluate the time cost of the different types of layers within a given network. We measure their latency using the time percentage measurement, where a larger value indicates a longer time.3 Our experiment is carried out on different processors, including an Intel x86 CPU, an Arm CPU and a Titan X GPU. Along with different processors, we also use different state-of-the-art networks for evaluation.

2Tensor weight decomposition methods such as Tucker decomposition effectively reduce the model size (i.e., the number of learned weights) and thus reduce the storage cost on disk. However, since the decomposition methods increase the number of layers of the model, the actual runtime-memory (RAM) cost (which is a much scarcer resource than disk storage) can be even larger than for the model before decomposition.
3The accumulated percentage for a given network is 100%.
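As an illustration of this time-percentage measurement (a hypothetical Python sketch, not the benchmarking code; the per-layer timings below are made-up numbers), per-layer latencies are grouped by layer type and normalized so that each network sums to 100%:

from collections import defaultdict

# Hypothetical per-layer forwarding times in ms: (layer type, latency).
layer_times = [("Convolution", 94.9), ("LRN", 68.4), ("Pooling", 16.3),
               ("Convolution", 153.8), ("ReLU", 4.1), ("Softmax", 3.0)]

by_type = defaultdict(float)
for layer_type, ms in layer_times:
    by_type[layer_type] += ms

total = sum(by_type.values())
for layer_type, ms in sorted(by_type.items(), key=lambda kv: -kv[1]):
    print(f"{layer_type:12s} {100 * ms / total:5.1f}%")
# The share not taken by Convolution comes from the non-tensor layers
# (LRN, Pooling, ReLU, Softmax) -- the quantity reported in Table 1.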
All these numbers confirm ou intuition: there is a great potential to accelerate the model by optimizing those non-tensor layers\nVincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on cpus. 2011."}, {"section_index": "8", "section_name": "3 DEEPREBIRTH", "section_text": "This section covers the design of DeepRebirth in three aspects: streaming merging, branching mer ing and adapting DeepRebirth to the whole model."}, {"section_index": "9", "section_name": "3.1 STREAMLINE MERGING", "section_text": "For deep network architecture with streamline layer connections, in order to accelerate the execution we first identify the layers that have large latency but also have potentials to be merged or processed. The merging design is motivated by the following two key observations..\nMethod The streamline merging regenerates a new tensor layer (i.e., rebirth layer) by merging non tensor layers with its bottom tensor units in the feed-forward structure. After layer-wise regenera- tion, we retrain the deep neural network model by fine-tuning the parameters of the new generated layers. There are two streamline merging operations in the proposed scheme. The choice of merging operation is depending on the type of non-tensor layers.\nExample Figure 2 illustrates how the optimization works using streamline merging. This is one. representative part in GoogLeNet where the convolution layer conv2/3 3 is followed by a LRN. layer conv2/norm2 and a pooling layer poo2/3 3_s2 (The ReLU layer which has negligible. latency is retained to keep accuracy). Before merging, the 2 non-tensor layers without a singl learned parameter weight take even more time than running the convolution layer. After merging.\nLide Zhang, Birjodh Tiwana, Zhiyun Qian, Zhaoguang Wang, Robert P. Dick, Zhuoqing Morley Mao, and Lei Yang. Accurate online power estimation and automatic battery behavior based power model generation for smartphones. In Proceedings of the Eighth IEEE/ACM/IFIP Interna- tional Conference on Hardware/Software Codesign and System Synthesis, CODES/ISSS '10, pp. 105-114, New York, NY, USA, 2010. ACM. ISBN 978-1-60558-905-3. doi: 10.1145/1878961. 1878982. URLhttp://doi.acm.0rg/10.1145/1878961.1878982\nIn general deep learning models, the probability distribution of the dataset can be represented by a large, very sparse deep neural network that is constructed layer after layer. From analyzing the correlations of the current layer and preceding layers (or parallel layers), we can merge the highly correlated layers and substitute it as a new \"rebirth' layer. This process is similar to viewing the Inception model as a logical culmination as suggested byArora et al.(2013).\nNon-tensor layers are usually following a tensor layer such as convolution layer as shown. in Figure2 Several consecutive layers can be viewed as a blackbox for non-linear transformations. and therefore this can be replaced by a new tensor-layer by learning the parameters to. approximate the functionality of original several layers. An example is shown in Figure[2\nMerging Pooling Layer: The pooling layer down-samples feature maps learned from pre-. vious layers. Therefore, to merge a pooling layer to a convolution layer, we remove the pooling layer and set the stride value of the \"merged\"' convolution layer as the product of the stride values for both the original pooling layer and the convolution layer. 
With the larger stride value for the new "merged" convolution layer, this further reduces the computation required to execute the new model (a shape check appears at the end of this subsection).

- Merging Non-Pooling Layer: For non-pooling layers such as LRN and batch normalization, we directly prune those layers from the original deep neural network.

Figure 2: Streamline Merging: a GoogLeNet example; the running time is measured using the bvlc_googlenet model in Caffe on a Samsung Galaxy S5. Left panel: convolution (in green), LRN (in red), pooling (in red). Right panel: a single convolution layer. The three layers in the left panel (conv2/3x3, tensor 3x3x64x192, stride 1, 69.1 ms; conv2/norm2 LRN, 68.4 ms; pool2/3x3_s2, stride 2, 16.3 ms; 153.8 ms in total) are merged and regenerated as a convolution layer (i.e., a rebirth layer, conv2/3x3_merge, tensor 3x3x64x192, stride 2, 16.6 ms) in the right panel.

After the merge, the time spent on the rebirth layer conv2/3x3_merge is greatly reduced compared to the original layers.
"}, {"section_index": "9", "section_name": "3.2 BRANCH MERGING", "section_text": "Example One representative unit is the inception module in GoogLeNet. For example, as illustrated in Figure 3, layer "inception_3a" of GoogLeNet has 4 branches: 3 convolution branches take feature maps from the bottom layer at various scales (1x1, 3x3 and 5x5), and 1 additional 3x3 pooling branch (Szegedy et al. (2014)). The output feature maps of each branch are concatenated as input for the following top layer.

Method For a deep network architecture with parallel branches, the output of each branch constitutes part of the feature maps that form the input for the next layer. We identify non-tensor branches that have large latency (e.g., the pooling branch in Figure 3). Similar to streamline merging, if we can use a faster tensor branch to simulate the function of the non-tensor branch by relearning its parameters, we can achieve a clear speed-up.

The design of branch merging is motivated by the following key observation: given that non-tensor layers require more computation time, if we can learn new tensor layers by fusing the non-tensor layers with tensor units at the same layer level, then the execution time will be decreased.

To merge a non-tensor branch into a tensor branch, we re-create a new tensor layer (i.e., a rebirth layer) by fusing the non-tensor branch and a tensor unit with relatively small latency, to output the feature maps that were originally generated by the non-tensor branch. If the non-tensor branch has a kernel size larger than 1x1 (e.g., the 3x3 pooling branch in Figure 3), the picked tensor branch's kernel size should be at least the size of the non-tensor branch. As shown in this figure, we re-learn a new tensor layer "inception_3a" by merging the 3x3 pooling branch with the 5x5 convolution branch at the same level, and the number of feature maps obtained by the 5x5 convolution is increased from 32 to 64.
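As a quick check of the streamline-merging stride rule above (a NumPy-style sketch under simplifying assumptions: 1-D, single channel, no padding), a convolution with stride 1 followed by pooling with stride 2 produces the same output length as the merged convolution with stride 1 x 2 = 2, at least for these particular kernel sizes:

def conv1d_len(n, k, stride):
    """Output length of a valid 1-D convolution/pooling with kernel size k."""
    return (n - k) // stride + 1

n, k = 224, 3
conv_then_pool = conv1d_len(conv1d_len(n, k, stride=1), k=2, stride=2)
merged = conv1d_len(n, k, stride=1 * 2)   # merged layer: stride = conv stride * pool stride
print(conv_then_pool, merged)             # 111 111 -- the spatial sizes agree

The rebirth layer therefore produces feature maps of the same shape as the original conv-pool pair, while its learned weights are fine-tuned to approximate the pair's combined function.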
Figure 4: An illustration of the GoogLeNet-Merge structure in detail.

- Reducing: Current deep neural networks usually include convolution branches with 1x1 convolution layers (e.g., inception_3a/3x3_reduce in Figure 3) aiming to reduce the number of feature-map channels. This unit will be processed by a following convolution layer with a larger kernel size. For greater speed-up, we further reduce the number of feature maps generated
The method is similar to the merging of non-tensor lay- ers. To keep other layers' structures in network unchanged, we remove the small-kernel convolution branch and increase the number of feature maps generated by the large-kernel convolution layers. For examples, for layer inception_3a/3x3_reduce, we remove the 1 1 convolution branch and increase the number of feature maps generated by the 3 3 convo- lution from 128 to 196.\nTable 2: GoogLeNet Accuracy on each layer after merging\nStep Merged Layer(s) Top-5 Accurac. 0 N/A 88.89% 1 conv1 88.73% 2 conv2 88.82% 3 inception_3a 88.50% 4 inception_3b 88.27% 5 inception_4a 88.60% 6 inception_4b-4d 88.61% 7 inception_4e 88.43% 8 inception_5a 88.41% 9 inception_5b 88.43 % Tucker Decomposition N/A 86.54 %\nis indicated as \"GoogLeNet-Tucker\"'. Thus, we have 4 models to compare, namely GoogLeNet GoogLeNet-Merge, GoogLeNet-Tucker and GoogLeNet-Merge-Tucker.."}, {"section_index": "13", "section_name": "4.1.1 ACCURACY", "section_text": "Since one of our major goals is to propose a new acceleration approach which can speed up th model running time with satisfied accuracy (in constrast to the original model), we list the accurac. changes along with the optimization steps conducted on ImageNet ILSVRC-2012 validation datase as indicated in Table2 During the whole optimization procedure of model training, we set the base learning rate for the re-generated layer as O.01 (the rest layers are O.001). We apply stochastic gradient descent training method (Bottou(2012)) to learn the parameters with a batch size of 32 During our training phase, we set 40,o00 as the step size together with O.1 set for gamma value and 0.9 for momentum parameter. At each step, the model generally converges at around 90,000 iterations (2 epochs).\nThe result indicates that the proposed method has almost negligible impact on the model accu racy, and the accuracy even increases at certain step (e.g., step 5). This indicates that \"the new born\"' layers perfectly simulate the functionalities of previous non-tensor layers before optimiza tion. By applying tucker decomposition method on the merged model to reduce the weights by hal (GoogLeNet-Merge-Tucker), we observer that there is a larger drop on accuracy (around 2%). How ever, directly applying tucker decomposition method (GoogLeNet-Tucker) to reduce the GoogLeNe weights to a half drops the top-5 accuracy to 85.7%. These results imply that our method perform reasonable well even after streamline and branch layer mergings."}, {"section_index": "14", "section_name": "4.1.2 SPEED-UP", "section_text": "To evaluate and compare the latency of different optimization approaches, we evaluate the the layer-. wise running speed on a Samsung Galaxy S5 smart phone which has an ARMv7 quad-core CPU @ 2.5 GHz and 2 GB RAM. We use Caffe's integrated benchmark module to test the model forwarding time. Each test run includes 50 subtests with a random input. We try 10 test runs on each compared model and report the best test run in terms of forwarding time. During the whole experiment, we. turn on phone to the airplane mode and close all other apps..\nAs is demonstrated in Table [3] we observe that for the best case scenario, GoogLeNet-Merge i 3x faster than GoogLeNet and for the worst case scenario, GoogLeNet takes around 950 ms for a single forwarding while GoogLeNet-Merge takes only around 250 ms, which is almost 4x speed up. This is because the original GoogLeNet model has too many small layers and this results i1 performance fluctuation. 
The same finding is also sharply observed in|Kim et al.|(2015) . The Tucke Decomposition method further reduces the computation for around 50% at the cost of around 2% accuracy loss. On the other hand, directly applying tucker decomposition on tensor layers doesn show any significant acceleration.\nTable 3: Breakdown of GoogLeNet forwarding time cost using different methods on each layer\nTable 4: Execution time using different methods (including SqueezeNet) on different mobile device\nNot limited to mobile platform of Samsung Galaxy S5, we also apply the speed-up schemes or other popular processors. These mobile devices include (1) Moto E: a low-end mobile ARM CPU (2) Samsung Galaxy S5: a middle-end mobile ARM CPU, (3) Samsung Galaxy S6: a high-enc mobile ARM CPU, (4) Macbook Pro: an Intel x86 CPU, and (5) Titan X: a powerful server GPU We demonstrate the experimental results in Table4 The promising result indicates that the proposec method achieves significant speed-up on various types of CPUs. Even on the low-end mobile CPL. (i.e., Moto E), around 200 ms model forwarding time is achieved by further applying tensor weight compression method. Finally, we compare the proposed approach with SqueezeNet (Iandola et al. (2016)) which is a state-of-the-art compressed CNN model. We are very excited to see that oui optimization approach can obtain faster speed with higher accuracy compared to SqueezeNet(80% for Top-5)'s performance on all CPU platforms as listed in Table4\nWe measure the energy cost of each compared model using PowerTutor Android app (Zhang et al. (2010) on Samsung Galaxy S5. The original GoogLeNet consumes almost 1 Joule per image while GoogLeNet-Merge consumes only 447 mJ. Applying tucker decomposition further reduces the energy cost to only 1/4 at 226 mJ .\nGoogLeNet GoogLeNet GoogLeNet Device GoogLeNet -Tucker -Merge -Merge-Tucker conv1 94.92 ms 87.85 ms 8.424 ms 6.038 ms conv2 153.8 ms 179.4 ms 16.62 ms 9.259 ms inception_3a 55.23 ms 85.62 ms 21.17 ms 9.459 ms inception_3b 98.41 ms 66.51 ms 25.94 ms 11.74 ms inception_4a 30.53 ms 36.91 ms 16.80 ms 8.966 ms inception_4b 32.60 ms 41.82 ms 20.29 ms 11.65 ms inception_4c 46.96 ms 30.46 ms 18.71 ms 9.102 ms inception_4d 36.88 ms 21.05 ms 24.67 ms 10.05 ms inception_4e 48.24 ms 32.19 ms 28.08 ms 14.08 ms inception_5a 24.64 ms 14.43 ms 10.69 ms 5.36 ms inception_5b 24.92 ms 15.87 ms 14.58 ms 6.65 ms loss3 3.014 ms 2.81 ms 2.97 ms 2.902 ms Total 651.4 ms 614.9 ms (1.06x) 210.6 ms (3.09x) 106.3 ms (6.13x)\nGoogLeNet GoogLeNet GoogLeNet Device GoogLeNet SqueezeNe -Tucker -Merge -Merge-Tucker Moto E 1168.8 ms 897.9 ms 406.7 ms 213.3 ms 291.4 ms Samsung Galaxy S5 651.4 ms 614.9 ms 210.6 ms 106.3 ms 136.3 ms Samsung Galaxy S6 424.7 ms 342.5 ms 107.7 ms 65.34 ms 75.34 ms Macbook Pro (CPU) 91.77 ms 78.22 ms 23.69 ms 15.18 ms 17.63 ms Titan X 10.17 ms 10.74 ms 6.57 ms 7.68 ms 3.29 ms\nWhen deploying to the mobile devices, we remove the loss1 and loss2 branches from the trained models so that the storage cost of each model is reduced by 24.33 MB. GoogLeNet-Merge which achieves significant speed-up does not save much storage cost compared to the original GoogLeNet model. However, for modern mobile devices, storage is not a scarce resource (e.g., Samsung Galaxy S5 has 16 GB or 32 GB storage), so a 20 MB deep learning model is \"affordable' on mobile devices. Meanwhile, we can always perform the tensor weights compression method to further reduce the storage cost."}]
r1VdcHcxx
[{"section_index": "0", "section_name": "RECURRENT BATCH NORMALIZATION", "section_text": "Contrary to previous findings by Laurent et al. (2016); Amodei et al. (2015), we have demonstrated that batch-normalizing the hidden states of recurrent neural networks greatly improves optimiza tion. Indeed, doing so yields benefits similar to those of batch normalization in feed-forward neural networks: our proposed BN-LSTM trains faster and generalizes better on a variety of tasks in cluding language modeling and question-answering. We have argued that proper initialization of the batch normalization parameters is crucial, and suggest that previous difficulties (Laurent et al. 2016; Amodei et al., 2015) were due in large part to improper initialization. Finally, we have shown our model to apply to complex settings involving variable-length data, bidirectionality and highly nonlinear attention mechanisms.\nTim Cooijmans, Nicolas Ballas, Cesar Laurent, Caglar Gulcehre & Aaron Courville\nfirstname.lastname@umontreal.ca\nVe propose a reparameterization of LSTM that brings the benefits of batch noi nalization to recurrent neural networks. Whereas previous works only apply batc ormalization to the input-to-hidden transformation of RNNs, we demonstrate tha is both possible and beneficial to batch-normalize the hidden-to-hidden trans. ion, thereby reducing internal covariate shift between time steps. Ve evaluate our proposal on various sequential problems such as sequence class cation, language modeling and question answering. Our empirical results sho nat our batch-normalized LSTM consistently leads to faster convergence and inr roved generalization."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recurrent neural network architectures such as LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014) have recently exhibited state-of-the-art performance on a wide range of complex sequential problems including speech recognition Amodei et al. (2015), machine transla tion (Bahdanau et al., 2015) and image and video captioning (Xu et al., 2015; Yao et al., 2015) Top-performing models, however, are based on very high-capacity networks that are computation ally intensive and costly to train. Effective optimization of recurrent neural networks is thus an active area of study (Pascanu et al., 2012; Martens & Sutskever, 2011; Ollivier, 2013).\nIt is well-known that for deep feed-forward neural networks, covariate shift (Shimodaira, 2ooo; Ioffe & Szegedy, 2015) degrades the efficiency of training. Covariate shift is a change in the distribution of the inputs to a model. This occurs continuously during training of feed-forward neural networks where changing the parameters of a layer affects the distribution of the inputs to all layers above it As a result, the upper layers are continually adapting to the shifting input distribution and unable. to learn effectively. This internal covariate shift (Ioffe & Szegedy, 2015) may play an especially. important role in recurrent neural networks, which resemble very deep feed-forward networks..\nBatch normalization (Ioffe & Szegedy, 2015) is a recently proposed technique for controlling the distributions of feed-forward neural network activations, thereby reducing internal covariate shift. It involves standardizing the activations going into each layer, enforcing their means and variances to be invariant to changes in the parameters of the underlying layers. 
This effectively decouples each layer's parameters from those of other layers, leading to a better-conditioned optimization problem. Indeed, deep neural networks trained with batch normalization converge significantly faster and generalize better.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv:1609.01704, 2016.

A. Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013.

David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv:1609.09106, 2016.

Although batch normalization has demonstrated significant training speed-ups and generalization benefits in feed-forward networks, it has proven difficult to apply in recurrent architectures (Laurent et al., 2016; Amodei et al., 2015). It has found limited use in stacked RNNs, where the normalization is applied "vertically", i.e. to the input of each RNN, but not "horizontally" between timesteps. RNNs are deeper in the time direction, and as such batch normalization would be most beneficial when applied horizontally. However, Laurent et al. (2016) hypothesized that applying batch normalization in this way hurts training because of exploding gradients due to repeated rescaling.

K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.

S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, 1991.

S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 1997.

Our findings run counter to this hypothesis. We show that it is both possible and highly beneficial to apply batch normalization in the hidden-to-hidden transition of recurrent models. In particular, we describe a reparameterization of LSTM (Section 3) that involves batch normalization and demonstrate that it is easier to optimize and generalizes better. In addition, we empirically analyze the

D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.

D. Krueger and R. Memisevic. Regularizing rnns by stabilizing activations. ICLR, 2016.

The authors would like to acknowledge the following agencies for research funding and computing support: the Nuance Foundation, Samsung, NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs and CIFAR. Experiments were carried out using the Theano (Team et al., 2016) and the Blocks and Fuel (van Merrienboer et al., 2015) libraries for scientific computing. We thank David Krueger, Saizheng Zhang, Ishmael Belghazi and Yoshua Bengio for discussions and suggestions.

D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.

Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 1994.

K. Cho, B. Van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014.

Liao & Poggio (2016) simultaneously investigated batch normalization in recurrent neural networks, albeit only for very short sequences (10 steps). Ba et al. (2016) independently developed a variant of batch normalization that is also applicable to recurrent neural networks and delivers similar improvements as our method."}, {"section_index": "2", "section_name": "2 PREREQUISITES", "section_text": "M. Mahoney.
Large text compression benchmark. 2009."}, {"section_index": "3", "section_name": "2.1 LSTM", "section_text": "$$h_t = \phi(W_h h_{t-1} + W_x x_t + b) \quad (1)$$

Yann Ollivier. Persistent contextual neural networks for learning symbolic data sequences. CoRR, abs/1306.0514, 2013.

Marius Pachitariu and Maneesh Sahani. Regularization and nonlinearities for neural language models: when are they needed? arXiv:1301.5650, 2013.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. arXiv:1211.5063, 2012.

In what follows, we focus on the LSTM architecture (Hochreiter & Schmidhuber, 1997) with recurrent transition given by

The Theano Development Team et al. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.

$$\begin{pmatrix} \tilde{f}_t \\ \tilde{i}_t \\ \tilde{o}_t \\ \tilde{g}_t \end{pmatrix} = W_h h_{t-1} + W_x x_t + b, \qquad c_t = \sigma(\tilde{f}_t) \odot c_{t-1} + \sigma(\tilde{i}_t) \odot \tanh(\tilde{g}_t), \qquad h_t = \sigma(\tilde{o}_t) \odot \tanh(c_t)$$

Bart van Merrienboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015. URL http://arxiv.org/abs/1506.00619.

The LSTM differs from simple RNNs in that it has an additional memory cell $c_t$ whose update is nearly linear, which allows the gradient to flow back through time more easily. In addition, unlike the RNN, which overwrites its content at each timestep, the update of the LSTM cell is regulated by a set of gates. The forget gate $f_t$ determines the extent to which information is carried over from the previous timestep, and the input gate $i_t$ controls the flow of information from the current input $x_t$. The output gate $o_t$ allows the model to read from the cell. This carefully controlled interaction with the cell is what allows the LSTM to robustly retain information for long periods of time."}, {"section_index": "4", "section_name": "2.2 BATCH NORMALIZATION", "section_text": "Covariate shift (Shimodaira, 2000) is a phenomenon in machine learning where the features presented to a model change in distribution. In order for learning to succeed in the presence of covariate shift, the model's parameters must be adjusted not just to learn the concept at hand but also to adapt to the changing distribution of the inputs. In deep neural networks, this problem manifests as internal covariate shift (Ioffe & Szegedy, 2015).

gradient backpropagation and show that proper initialization of the batch normalization parameters is crucial to avoiding vanishing gradient (Section 4). We evaluate our proposal on several sequential problems and show (Section 5) that our LSTM reparameterization consistently outperforms the LSTM baseline across tasks, in terms of both time to convergence and performance.

David Krueger, Tegan Maharaj, Janos Kramar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, and Aaron Courville. Zoneout: Regularizing rnns by randomly preserving hidden activations. arXiv:1606.01305, 2016.

C. Laurent, G. Pereyra, P. Brakel, Y. Zhang, and Y. Bengio. Batch normalized recurrent neural networks. ICASSP, 2016.

Quoc V Le, N. Jaitly, and G. Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv:1504.00941, 2015.

Qianli Liao and Tomaso Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. arXiv:1604.03640, 2016.

M. P. Marcus, M. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 1993.
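As a reading aid for the recurrence of Section 2.1, here is a minimal numpy sketch of a single LSTM step following the equations above. The weight shapes and the toy dimensions are assumptions for illustration, not the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W_h, W_x, b):
    """One step of the LSTM recurrence of Section 2.1.
    W_h: (d_h, 4*d_h), W_x: (d_x, 4*d_h), b: (4*d_h,)."""
    pre = h_prev @ W_h + x_t @ W_x + b        # (batch, 4*d_h) pre-activations
    f, i, o, g = np.split(pre, 4, axis=-1)    # forget, input, output, candidate
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h_t = sigmoid(o) * np.tanh(c_t)
    return h_t, c_t

# toy usage with hypothetical dimensions
d_x, d_h, batch = 5, 8, 2
rng = np.random.default_rng(0)
W_h = rng.normal(size=(d_h, 4 * d_h))
W_x = rng.normal(size=(d_x, 4 * d_h))
b = np.zeros(4 * d_h)
h, c = np.zeros((batch, d_h)), np.zeros((batch, d_h))
for t in range(3):
    h, c = lstm_step(rng.normal(size=(batch, d_x)), h, c, W_h, W_x, b)
print(h.shape)  # (2, 8)
```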
Long Short-Term Memory (LSTM) networks are an instance of a more general class of recurrent neural networks (RNNs), which we review briefly in this paper. Given an input sequence $X = (x_1, x_2, \ldots, x_T)$, an RNN defines a sequence of hidden states $h_t$ according to the recurrence displayed at the start of Section 2.1.

RNNs are popular in sequence modeling thanks to their natural ability to process variable-length sequences. However, training RNNs using first-order stochastic gradient descent (SGD) is notoriously difficult due to the well-known problem of exploding/vanishing gradients (Bengio et al., 1994; Hochreiter, 1991; Pascanu et al., 2012). Gradient vanishing occurs when states $h_t$ are not influenced by small changes in much earlier states $h_\tau$, $\tau < t$, preventing learning of long-term dependencies in the input data. Although learning long-term dependencies is fundamentally difficult (Bengio et al., 1994), its effects can be mitigated through architectural variations such as LSTM (Hochreiter & Schmidhuber, 1997), GRU (Cho et al., 2014) and iRNN/uRNN (Le et al., 2015; Arjovsky et al., 2015).

K. Xu, J. Ba, R. Kiros, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv:1502.03044, 2015.

L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. In ICCV, 2015.

S. Zhang, Y. Wu, T. Che, Z. Lin, R. Memisevic, R. Salakhutdinov, and Y. Bengio. Architectural complexity measures of recurrent neural networks. arXiv:1602.08210, 2016.

where $W_h \in \mathbb{R}^{d_h \times 4d_h}$, $W_x \in \mathbb{R}^{d_x \times 4d_h}$, $b \in \mathbb{R}^{4d_h}$ and the initial states $h_0 \in \mathbb{R}^{d_h}$, $c_0 \in \mathbb{R}^{d_h}$ are model parameters. $\sigma$ is the logistic sigmoid function, and the $\odot$ operator denotes the Hadamard product.

(Figure 5 panels: mean of recurrent term, mean of cell state, variance of recurrent term, variance of cell state; horizontal axes are time steps.)"}, {"section_index": "5", "section_name": "BATCH-NORMALIZED LSTM", "section_text": "Figure 5: Convergence of population statistics to stationary distributions on the Penn Treebank task. The horizontal axis denotes RNN time. Each curve corresponds to a single hidden unit. Only a random subset of units is shown. See Section 3 for discussion.

This section introduces a reparameterization of LSTM that takes advantage of batch normalization. Contrary to Laurent et al. (2016); Amodei et al. (2015), we leverage batch normalization in both the input-to-hidden and the hidden-to-hidden transformations. We introduce the batch-normalizing transform $\text{BN}(\cdot; \gamma, \beta)$ into the LSTM as follows:

$$\begin{pmatrix} \tilde{f}_t \\ \tilde{i}_t \\ \tilde{o}_t \\ \tilde{g}_t \end{pmatrix} = \text{BN}(W_h h_{t-1}; \gamma_h, \beta_h) + \text{BN}(W_x x_t; \gamma_x, \beta_x) + b, \qquad c_t = \sigma(\tilde{f}_t) \odot c_{t-1} + \sigma(\tilde{i}_t) \odot \tanh(\tilde{g}_t), \qquad h_t = \sigma(\tilde{o}_t) \odot \tanh(\text{BN}(c_t; \gamma_c, \beta_c))$$

In Section 4 we investigated the effect of initial $\gamma$ on gradient flow. To show the practical implications of this, we performed several experiments on the pMNIST and Penn Treebank benchmarks. The resulting performances are shown in Figure 6.

The pMNIST training curves confirm that higher initial values of $\gamma$ are detrimental to the optimization of the model. For the Penn Treebank task, however, the effect is gone.

We believe this is explained by the difference in the nature of the two tasks. For pMNIST, the model absorbs the input sequence and only at the end of the sequence does it make a prediction, on which it receives feedback. Learning from this feedback requires propagating the gradient all the way back through the sequence.
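A minimal numpy sketch of one step of the batch-normalized recurrence above may help fix ideas; it is an illustration under assumptions (a simple bn helper with per-timestep minibatch statistics, hypothetical parameter names), not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bn(h, gamma, beta, eps=1e-3):
    """Training-mode BN: statistics come from the current minibatch,
    computed separately at every timestep (see Section 3)."""
    return gamma * (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps) + beta

def bn_lstm_step(x_t, h_prev, c_prev, p):
    # beta_h = beta_x = 0: the shared bias b accounts for both biases.
    pre = (bn(h_prev @ p["W_h"], p["gamma_h"], 0.0)
           + bn(x_t @ p["W_x"], p["gamma_x"], 0.0) + p["b"])
    f, i, o, g = np.split(pre, 4, axis=-1)
    # the cell update itself is left un-normalized to preserve gradient flow
    c_t = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
    h_t = sigmoid(o) * np.tanh(bn(c_t, p["gamma_c"], p["beta_c"]))
    return h_t, c_t

d_x, d_h, batch = 5, 8, 4
rng = np.random.default_rng(0)
params = {"W_h": rng.normal(size=(d_h, 4 * d_h)),
          "W_x": rng.normal(size=(d_x, 4 * d_h)),
          "b": np.zeros(4 * d_h),
          "gamma_h": 0.1, "gamma_x": 0.1,       # small initial gamma, per the paper
          "gamma_c": 0.1, "beta_c": 0.0}
h, c = rng.normal(0, 1e-2, (batch, d_h)), np.zeros((batch, d_h))
h, c = bn_lstm_step(rng.normal(size=(batch, d_x)), h, c, params)
```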
In our formulation, we normalize the recurrent term $W_h h_{t-1}$ and the input term $W_x x_t$ separately. Normalizing these terms individually gives the model better control over the relative contribution of the terms using the $\gamma_h$ and $\gamma_x$ parameters. We set $\beta_h = \beta_x = 0$ to avoid unnecessary redundancy, instead relying on the pre-existing parameter vector $b$ to account for both biases. In order to leave the LSTM dynamics intact and preserve the gradient flow through $c_t$, we do not apply batch normalization in the cell update.

In the Penn Treebank task, on the other hand, the model makes a prediction at each timestep. At each step of the backward pass, a fresh learning signal is added to the backpropagated gradient. Essentially, the model is able to get off the ground by picking up short-term dependencies. This fails on pMNIST, which is dominated by long-term dependencies (Arjovsky et al., 2015).

The batch normalization transform relies on batch statistics to standardize the LSTM activations. It would seem natural to share the statistics that are used for normalization across time, just as recurrent neural networks share their parameters over time. However, we find that simply averaging statistics over time severely degrades performance. Although LSTM activations do converge to a stationary distribution, we observe that their statistics during the initial transient differ significantly (see Figure 5 in Appendix A). Consequently, we recommend using separate statistics for each timestep to preserve information of the initial transient phase in the activations.¹

We evaluate the models on the question answering task using the CNN corpus (Hermann et al., 2015), with placeholders for the named entities. We follow a similar preprocessing pipeline as Hermann et al. (2015). During training, we randomly sample the examples with replacement and shuffle the order of the placeholders in each text inside the minibatch. We use a vocabulary of 65829 words.

Generalizing the model to sequences longer than those seen during training is straightforward thanks to the rapid convergence of the activations to their steady-state distributions (cf. Figure 5). For our experiments we estimate the population statistics separately for each timestep $1, \ldots, T_{max}$, where $T_{max}$ is the length of the longest training sequence. When at test time we need to generalize beyond $T_{max}$, we use the population statistic of time $T_{max}$ for all time steps beyond it.

We deviate from Hermann et al. (2015) in order to save computation: we use only the 4 most relevant sentences from the description, as identified by a string matching procedure. Both the training and validation sets are preprocessed in this way. Due to imprecision this heuristic sometimes strips the answers from the passage, putting an upper bound of 57% on the validation accuracy that can be achieved.

¹ Note that we separate only the statistics over time and not the $\gamma$ and $\beta$ parameters.

Batch Normalization (Ioffe & Szegedy, 2015) is a recently proposed network reparameterization which aims to reduce internal covariate shift. It does so by standardizing the activations using empirical estimates of their means and standard deviations. However, it does not decorrelate the activations due to the computationally costly matrix inversion. The batch normalizing transform is as follows:

$$\text{BN}(h; \gamma, \beta) = \beta + \gamma \odot \frac{h - \widehat{\mathbb{E}}[h]}{\sqrt{\widehat{\text{Var}}[h] + \epsilon}}$$

where $h \in \mathbb{R}^d$ is the vector of (pre)activations to be normalized, $\gamma \in \mathbb{R}^d$, $\beta \in \mathbb{R}^d$ are model parameters that determine the mean and standard deviation of the normalized activation, and $\epsilon \in \mathbb{R}$ is a regularization hyperparameter. The division should be understood to proceed elementwise.
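A minimal sketch of the transform just defined, in numpy, assuming minibatch statistics at training time and externally supplied population statistics at inference (the function name and signature are hypothetical):

```python
import numpy as np

def batch_norm(h, gamma, beta, eps=1e-3, pop_stats=None):
    """BN(h; gamma, beta): standardize, then rescale and shift, elementwise.
    h: (batch, d). At training time the statistics come from the current
    minibatch; at inference, pop_stats = (mean, var) holds estimates
    gathered over the training set."""
    if pop_stats is None:                       # training mode
        mean, var = h.mean(axis=0), h.var(axis=0)
    else:                                       # inference mode
        mean, var = pop_stats
    return beta + gamma * (h - mean) / np.sqrt(var + eps)

h = np.random.randn(32, 100)
out = batch_norm(h, gamma=0.1, beta=0.0)        # gamma initialized small
```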
At training time, the statistics $\mathbb{E}[h]$ and $\text{Var}[h]$ are estimated by the sample mean and sample variance of the current minibatch. This allows for backpropagation through the statistics, preserving the convergence properties of stochastic gradient descent. During inference, the statistics are typically estimated based on the entire training set, so as to produce a deterministic prediction.

Figure 6: Training curves on pMNIST and Penn Treebank for various initializations of $\gamma$. (Panels: pMNIST train/valid cross-entropy and Penn Treebank train/valid bits-per-character against training steps, for $\gamma \in \{0.10, 0.30, 0.50, 0.70, 1.00\}$.)

During training we estimate the statistics across the minibatch, independently for each timestep. At test time we use estimates obtained by averaging the minibatch estimates over the training set.

Although batch normalization allows for easy control of the pre-activation variance through the $\gamma$ parameters, common practice is to normalize to unit variance. We suspect that the previous difficulties with recurrent batch normalization reported in Laurent et al. (2016); Amodei et al. (2015) are largely due to improper initialization of the batch normalization parameters, and $\gamma$ in particular. In this section we demonstrate the impact of $\gamma$ on gradient flow.

Figure 1: (a) We visualize the gradient flow through a batch-normalized tanh RNN as a function of $\gamma$. High variance causes vanishing gradient. (b) We show the empirical expected derivative and interquartile range of the tanh nonlinearity as a function of input standard deviation. High variance causes saturation.

For the reported performances, the first three models (LSTM, BN-LSTM and BN-everywhere) are trained using the exact same hyperparameters, which were chosen because they work well for the baseline. The hidden state is composed of 240 units. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 10 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate $8 \times 10^{-5}$.

In Figure 1(a), we show how the pre-activation variance impacts gradient propagation in a simple RNN on the sequential MNIST task described in Section 5.1. Since backpropagation operates in reverse, the plot is best read from right to left. The quantity plotted is the norm of the gradient of the loss with respect to the hidden state at different time steps. For large values of $\gamma$, the norm quickly goes to zero as gradient is propagated back in time. For small values of $\gamma$ the norm is nearly constant.

To demonstrate what we think is the cause of this vanishing, we drew samples $x$ from a set of centered Gaussian distributions with standard deviation ranging from 0 to 1, and computed the derivative $\tanh'(x) = 1 - \tanh^2(x) \in [0, 1]$ for each. Figure 1(b) shows the empirical distribution of the derivative as a function of standard deviation. When the input standard deviation is low, the input tends to be close to the origin where the derivative is close to 1. As the standard deviation increases, the expected derivative decreases as the input is more likely to be in the saturation regime. At unit standard deviation, the expected derivative is much smaller than 1.

We conjecture that this is what causes the gradient to vanish, and recommend initializing $\gamma$ to a small value. In our trials we found that values of 0.01 or lower caused instabilities during training. Our choice of 0.1 seems to work well across different tasks.
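The saturation experiment just described is easy to reproduce; here is a short numpy sketch (the sample size and the grid of standard deviations are assumptions):

```python
import numpy as np

# Empirical expected derivative of tanh under centered Gaussian input,
# as a function of the input standard deviation (cf. Figure 1(b)).
rng = np.random.default_rng(0)
for std in np.linspace(0.1, 1.0, 10):
    x = rng.normal(0.0, std, size=100_000)
    deriv = 1.0 - np.tanh(x) ** 2          # tanh'(x) in [0, 1]
    print(f"std={std:.1f}  E[tanh'(x)]={deriv.mean():.3f}")
```

At std near 0.1 the expected derivative stays close to 1, while at unit standard deviation it drops well below 1, matching the argument for initializing gamma to 0.1.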
For BN-e* and BN-e**, we use the same hyperparameters except that we reduce the learning rate to $8 \times 10^{-4}$ and the minibatch size to 40."}, {"section_index": "6", "section_name": "D HYPERPARAMETER SEARCHES", "section_text": "Table 5 reports hyperparameter values that were tried in the experiments.

Table 5: Hyperparameter values that have been explored in the experiments.

(a) MNIST and pMNIST: Learning rate: 1e-2, 1e-3, 1e-4; RMSProp momentum: 0.5, 0.9; Hidden state size: 100, 200, 400; Initial γ: 1e-1, 3e-1, 5e-1, 7e-1, 1.0

(b) Penn Treebank: Learning rate: 1e-1, 1e-2, 2e-2, 1e-3; Hidden state size: 800, 1000, 1200, 1500, 2000; Batch size: 32, 64, 100, 128; Initial γ: 1e-1, 3e-1, 5e-1, 7e-1, 1.0

(c) Text8: Learning rate: 1e-1, 1e-2, 1e-3; Hidden state size: 500, 1000, 2000, 4000

(d) Attentive Reader: Learning rate: 8e-3, 8e-4, 8e-5, 8e-6; Hidden state size: 60, 120, 240, 280

For MNIST and pMNIST, the hyperparameters were varied independently. For Penn Treebank, we performed a full grid search on learning rate and hidden state size, and later performed a sensitivity analysis on the batch size and initial $\gamma$. For the text8 task and the experiments with the Attentive Reader, we carried out a grid search on the learning rate and hidden state size.

The same values were tried for both the baseline and our BN-LSTM. In each case, our reported results are those of the model with the best validation performance."}, {"section_index": "7", "section_name": "5 EXPERIMENTS", "section_text": "This section presents an empirical evaluation of the proposed batch-normalized LSTM on four different tasks. Note that for all the experiments, we initialize the batch normalization scale and shift parameters $\gamma$ and $\beta$ to 0.1 and 0 respectively.

Figure 2: Accuracy on the validation set for the pixel-by-pixel MNIST classification tasks. (Panels: validation accuracy against training iteration for lstm and bn_lstm, on MNIST and permuted MNIST.) The batch-normalized LSTM converges faster relative to a baseline LSTM. Batch-normalized LSTM also shows improved generalization on permuted sequential MNIST, which requires preserving long-term memory information.
"}, {"section_index": "8", "section_name": "5.1 SEQUENTIAL MNIST", "section_text": "We evaluate our batch-normalized LSTM on a sequential version of the MNIST classification task (Le et al., 2015). The model processes each image one pixel at a time and finally predicts the label. We consider both sequential MNIST tasks, MNIST and permuted MNIST (pMNIST). In MNIST, the pixels are processed in scanline order. In pMNIST the pixels are processed in a fixed random order.

Our baseline consists of an LSTM with 100 hidden units, with a softmax classifier to produce a prediction from the final hidden state. We use orthogonal initialization for all weight matrices, except for the hidden-to-hidden weight matrix, which we initialize to be the identity matrix, as this yields better generalization performance on this task for both models. The model is trained using RMSProp (Tieleman & Hinton, 2012) with learning rate of $10^{-3}$ and 0.9 momentum. We apply gradient clipping at 1 to avoid exploding gradients.

The in-order MNIST task poses a unique problem for our model: the input for the first hundred or so timesteps is constant across examples since the upper pixels are almost always black. This causes the variance of the hidden states to be exactly zero for a long period of time. Normalizing these zero-variance activations involves dividing zero by a small number at many timesteps, which does not affect the forward-propagated activations but causes the back-propagated gradient to explode. We work around this by adding Gaussian noise to the initial hidden states. Although the normalization amplifies the noise to signal level, we find that it does not hurt performance compared to data-dependent ways of initializing the hidden states.

Table 1: Accuracy obtained on the test set for the pixel-by-pixel MNIST classification tasks.

Model | MNIST | pMNIST
TANH-RNN (Le et al., 2015) | 35.0 | 35.0
iRNN (Le et al., 2015) | 97.0 | 82.0
uRNN (Arjovsky et al., 2015) | 95.1 | 91.4
sTANH-RNN (Zhang et al., 2016) | 98.1 | 94.0
LSTM (ours) | 98.9 | 90.2
BN-LSTM (ours) | 99.0 | 95.4

In Figure 2 we show the validation accuracy while training for both LSTM and batch-normalized LSTM (BN-LSTM). BN-LSTM converges faster than LSTM on both tasks. Additionally, we observe that BN-LSTM generalizes significantly better on pMNIST. It has been highlighted in Arjovsky et al. (2015) that pMNIST contains many longer term dependencies across pixels than the original pixel ordering, where a lot of structure is local. A recurrent network therefore needs to characterize dependencies across varying time scales in order to solve this task. Our results suggest that BN-LSTM is better able to capture these long-term dependencies.

Table 2: Bits-per-character on the Penn Treebank test sequence.

Model | Penn Treebank
LSTM (Graves, 2013) | 1.262
HF-MRNN (Mikolov et al., 2012) | 1.41
Norm-stabilized LSTM (Krueger & Memisevic, 2016) | 1.39
ME n-gram (Mikolov et al., 2012) | 1.37
LSTM (ours) | 1.38
BN-LSTM (ours) | 1.32
Zoneout (Krueger et al., 2016) | 1.27
HM-LSTM (Chung et al., 2016) | 1.24
HyperNetworks (Ha et al., 2016) | 1.22

We evaluate our model on the task of character-level language modeling on the Penn Treebank corpus (Marcus et al., 1993) according to the train/valid/test partition of Mikolov et al. (2012). For training, we segment the training sequence into examples of length 100. The training sequence does not cleanly divide by 100, so for each epoch we randomly crop a subsequence that does and segment that instead.
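Returning to the zero-variance issue on in-order MNIST described above, a minimal sketch of the workaround is below. The noise scale is an assumption (the paper does not state it), and whether noise is also added to the cell state $c_0$ is not specified; this sketch noises only $h_0$.

```python
import numpy as np

# The first ~100 inputs of in-order MNIST are almost always black, so the
# hidden states are identical across the batch and their per-timestep
# variance is exactly zero; normalizing then divides zero by a tiny number
# and the backward pass explodes. Small Gaussian noise on the initial
# hidden state keeps the variance nonzero at every timestep.
rng = np.random.default_rng(0)
batch, d_h = 64, 100
h0 = rng.normal(0.0, 1e-2, size=(batch, d_h))  # noisy initial hidden state
c0 = np.zeros((batch, d_h))                    # cell state left at zero
```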
Our baseline is an LSTM with 1000 units, trained to predict the next character using a softmax classifier on the hidden state $h_t$. We use stochastic gradient descent on minibatches of size 64, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.002. We use orthogonal initialization for all weight matrices. The setup for the batch-normalized LSTM is the same in all respects except for the introduction of batch normalization as detailed in Section 3.

We show the learning curves in Figure 3(a). BN-LSTM converges faster and generalizes better than the LSTM baseline. Figure 3(b) shows the generalization of our model to longer sequences. We observe that using the population statistics improves generalization performance, which confirms that repeating the last population statistic (cf. Section 3) is a viable strategy. In Table 2 we report the performance of our best models (early-stopped on validation performance) on the Penn Treebank test sequence. Follow-up works have since improved the state of the art (Krueger et al., 2016; Chung et al., 2016; Ha et al., 2016).

We evaluate our model on a second character-level language modeling task on the much larger text8 dataset (Mahoney, 2009). This dataset is derived from Wikipedia and consists of a sequence of 100M characters including only alphabetical characters and spaces. We follow Mikolov et al. (2012); Zhang et al. (2016) and use the first 90M characters for training, the next 5M for validation and the final 5M characters for testing. We train on nonoverlapping sequences of length 180.

Both our baseline and batch-normalized models are LSTMs with 2000 units, trained to predict the next character using a softmax classifier on the hidden state $h_t$. We use stochastic gradient descent on minibatches of size 128, with gradient clipping at 1.0 and step rule determined by Adam (Kingma & Ba, 2014) with learning rate 0.001. All weight matrices were initialized to be orthogonal.

Table 1 reports the test set accuracy of the early-stopped model for LSTM and BN-LSTM using the population statistics. Recurrent batch normalization leads to a better test score, especially for pMNIST where models have to leverage long-term temporal dependencies. In addition, Table 1 shows that our batch-normalized LSTM achieves state of the art on both MNIST and pMNIST.

We early-stop on validation performance and report the test performance of the resulting model in Table 3. We observe that BN-LSTM obtains a significant performance improvement over the LSTM baseline. Chung et al. (2016) has since improved on our performance.

Table 3: Bits-per-character on the text8 test sequence.

Model | text8
td-LSTM (Zhang et al., 2016) | 1.63
HF-MRNN (Mikolov et al., 2012) | 1.54
skipping RNN (Pachitariu & Sahani, 2013) | 1.48
LSTM (ours) | 1.43
BN-LSTM (ours) | 1.36
HM-LSTM (Chung et al., 2016) | 1.29

"}, {"section_index": "9", "section_name": "5.4 TEACHING MACHINES TO READ AND COMPREHEND", "section_text": "To demonstrate the generality and practical applicability of our proposal, we apply batch normalization in the Attentive Reader model and show that this drastically improves training.

We evaluate several variants. The first variant, referred to as BN-LSTM, consists of the vanilla Attentive Reader model with the LSTM simply replaced by our BN-LSTM reparameterization.
The second variant, termed BN-everywhere, is exactly like the first, except that we also introduce batch normalization into the attention computations, normalizing each term going into the tanh nonlinearities.

Our third variant, BN-e*, is like BN-everywhere, but improved to more carefully handle variable-length sequences. Throughout this experiment we followed the common practice of padding each batch of variable-length data with zeros. However, this biases the batch mean and variance of $x_t$ toward zero. We address this effect using sequencewise normalization of the inputs as proposed by Laurent et al. (2016); Amodei et al. (2015). That is, we share statistics over time for normalization of the input terms $W_x x_t$, but not for the recurrent terms $W_h h_t$ or the cell output $c_t$. Doing so avoids many issues involving degenerate statistics due to input sequence padding.

Figure 3: (a) Performance in bits-per-character on length-100 subsequences of the Penn Treebank validation sequence during training. (b) Generalization to longer subsequences of Penn Treebank using population statistics. The subsequences are taken from the test sequence.

Recently, Hermann et al. (2015) introduced a set of challenging benchmarks for natural language processing, along with neural network architectures to address them. The tasks involve reading real news articles and answering questions about their content. Their principal model, the Attentive Reader, is a recurrent neural network that invokes an attention mechanism to locate relevant information in the document. Such models are notoriously hard to optimize and yet increasingly popular.

Figure 4: Training curves on the CNN question-answering tasks.

Our fourth and final variant, BN-e**, is like BN-e* but bidirectional. The main difficulty in adapting to bidirectional models also involves padding. Padding poses no problem as long as it is properly ignored (by not updating the hidden states based on padded regions of the input). However, to perform the reverse application of a bidirectional model, it is common to simply reverse the padded sequences, thus moving the padding to the front. This causes similar problems as were observed on the sequential MNIST task (Section 5.1): the hidden states will not diverge during the initial timesteps and hence their variance will be severely underestimated. To get around this, we reverse only the unpadded portion of the input sequences and leave the padding in place.

BN-e* and BN-e** converge faster yet, and reach lower minima: 47.1% and 43.9% respectively.
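A minimal numpy sketch of padding-aware, sequencewise normalization of the input terms, as described for BN-e* above; the function name, tensor layout and masking convention are assumptions for illustration:

```python
import numpy as np

def sequencewise_bn(x, mask, gamma, eps=1e-3):
    """Normalize input terms (e.g. W_x x_t) with statistics shared over time,
    computed only over unpadded positions.
    x: (T, batch, d) pre-activations; mask: (T, batch) with 1 = real input."""
    m = mask[:, :, None]                      # (T, batch, 1)
    n = m.sum()                               # number of real positions
    mean = (x * m).sum(axis=(0, 1)) / n
    var = (((x - mean) ** 2) * m).sum(axis=(0, 1)) / n
    out = gamma * (x - mean) / np.sqrt(var + eps)
    return out * m                            # keep padded positions at zero
```

Computing the statistics only where the mask is 1 is what removes the bias toward zero that padded positions would otherwise introduce.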
Table 4: Error rates on the CNN question-answering task (Hermann et al., 2015).

Model | CNN valid | CNN test
Attentive Reader (Hermann et al., 2015) | 38.4 | 37.0
LSTM (ours) | 45.5 | 45.0
BN-e** (ours) | 37.9 | 36.3

We train and evaluate our best model, BN-e**, on the full task from Hermann et al. (2015). On this dataset we had to reduce the number of hidden units to 120 to avoid severe overfitting. Training curves for BN-e** and a vanilla LSTM are shown in Figure 4(b). Table 4 reports performances of the early-stopped models.

(a) Error rate on the validation set for the Attentive Reader models on a variant of the CNN QA task (Hermann et al., 2015). As detailed in Appendix C, the theoretical lower bound on the error rate on this task is 43%. (b) Error rate on the validation set on the full CNN QA task from Hermann et al. (2015).

Figure 4(a) shows the learning curves for the different variants of the Attentive Reader. BN-LSTM trains dramatically faster than the LSTM baseline. BN-everywhere in turn shows a significant improvement over BN-LSTM. In addition, both BN-LSTM and BN-everywhere show a generalization benefit over the baseline. The validation curves have minima of 50.3%, 49.5% and 50.0% for the baseline, BN-LSTM and BN-everywhere respectively. We emphasize that these results were obtained without any tweaking: all we did was introduce batch normalization."}]
HkyYqU9lx
[{"section_index": "0", "section_name": "REFERENCES", "section_text": "Roee Aharoni & Yoav Goldberg\nComputer Science Departmen Bar-Ilan University\nDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointl learning to align and translate. CoRR, abs/1409.0473, 2014.\nroee.aharoni, yoav.goldberg}@gmail.com\nWe present a supervised sequence to sequence transduction model with a harc attention mechanism which combines the more traditional statistical alignmen nethods with the power of recurrent neural networks. We evaluate the model or he task of morphological inflection generation and show that it provides state o the art results in various setups compared to the previous neural and non-neura approaches. Eventually we present an analysis of the learned representations fo ooth hard and soft attention models, shedding light on the features such models. extract in order to solve the task..\nRyan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. The SIGMORPHON 2016 shared task-morphological reinflection. In Proceedings oj the 2016 Meeting of SIGMORPHON, August 2016.\nMarkus Dreyer and Jason Eisner. Discovering morphological paradigms from plain text using a dirichlet process mixture model. In EMNLP, pp. 616-627, 2011."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Markus Dreyer, Jason R Smith, and Jason Eisner. Latent-variable modeling of string transductions with finite-state methods. In Proceedings of the conference on empirical methods in natural language processing, pp. 1080-1089, 2008.\nNeural sequence to sequence transduction became a prominent approach to natural language pro. cessing tasks like morphological inflection generation (Faruqui et al.|2016) and automatic summa- rization (Rush et al.|2015) among others. A common way to improve the vanilla encoder-decoder framework for sequence to sequence tasks is the (soft) attention mechanism (Bahdanau et al.2014) which enables the decoder to attend at specific elements in the encoded sequence, overcoming the issues in encoding very long sequences to a single vector..\nJason Eisner. Parameter estimation for probabilistic finite-state transducers. In Proceedings of the 40th annual meeting on Association for Computational Linguistics, pp. 1-8. 2002\nIt was also shown that the attention mechanism effectively learns an alignment between the inpui and the output sequences which is naturally found in the data, and practically learns this alignment. to attend at the relevant elements of the input. However, in many NLP tasks like automatic translit-. eration or morphological inflection generation, the data is roughly monotonically aligned - meaning. the ability of the soft attention model to attend at the entire input sequence may be sub-optimal for such tasks, while also requiring a relatively large amount of training examples which is not always. available (especially for morphologically rich, low resource languages)..\nManaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. Morphological inflection gener ation using character sequence to sequence learning. In NAACL HLT 2016, 2016.\nThere have been several works on neural sequence transduction with monotonic assumptions. One approach is to train an alignment-aware RNN-based transducer which is composed of two indepen- dent RNN's - one over the input sequence and one over the output sequence. 
The output distribution is computed by feeding a pair of aligned RNN states through an MLP, where the alignment is de- fined by null symbols in the output (Graves 2012) or by a parameterized transition probability (Yu et al.]2016). In both cases training is performed by marginalizing over all possible alignments using a forward-backward procedure. This approach lacks an attention mechanism, as a dependency be- tween the input and output RNN's would make the inference intractable. Other related approaches employ modifications to the soft-attention mechanism, like attending on a fixed sized window over the input sequence (Jaitly et al.]2015) or '\"smoothing\" and \"sharpening\" the soft-attention weight distribution in different manners (Chorowski et al.||2015). These works are motivated by the need to attend over the very long input sequences found in speech recognition. We suggest that for shorter input sequences like the characters of a word in natural language, a simple, hard attention mecha- nism over the elements of a bi-directional encoder may be sufficient.\nMans Hulden, Markus Forsberg, and Malin Ahlberg. Semi-supervised learning of morphologica paradigms and lexicons. In EACL, pp. 569-578, 2014.\nKatharina Kann and Hinrich Schutze. Single-model encoder-decoder with explicit morphologica representation for reinflection. In ACL, August 2016a.\nKatharina Kann and Hinrich Schutze. Med: The lmu system for the sigmorphon 2016 shared task on morphological reinflection. 2016b.\nMore traditional approaches to monotonic sequence transduction in the NLP literature were hand engineered finite state transducers (FST) (Koskenniemi]1983, Kaplan & Kay1994) which relie on expert knowledge, or weighted finite state transducers (Mohri et al.]1997) Eisner!. 2002) whicl combined expert knowledge with data-driven parameter tuning. While the FST approaches may.\nRonald M. Kaplan and Martin Kay. Regular models of phonological rule systems. Computationa Linguistics, 20(3):331-378, 1994."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Mercedes Garcia-Martinez, Loic Barrault, and Fethi Bougares. Factored neural machine translation arXiv preprint arXiv:1609.04621, 2016\nwork well even on small datasets due to their engineered structure, it may be cumbersome to use them while conditioning on the entire output history as it requires a very large set of states, resulting. in a model conditioning only on the last predicted output symbol (Rastogi et al.|2016)..\nKimmo Koskenniemi. Two-level morphology: A general computational model of word-form recog nition and production. Technical report, 1983\nWe propose a model which handles the above issues by directly modeling a monotonic alignment between the input and output sequences which is used to perform hard attention. The model consists of an encoder-decoder neural network with a dedicated control mechanism: in each step, the decodei is fed with a single attended input state and either writes a symbol to the output sequence or advances the attention pointer to the next input state from the bi-directionally encoded sequence, as described visually in Figure1\nMehryar Mohri, Fernando Pereira, and Michael Riley. A rational design for a weighted finite-state transducer library. In International Workshop on Implementing Automata. pp. 144-158. 1997\nThis modeling suits the natural monotonic alignment between the input and output very well, as the network learns to attend at the relevant inputs before writing the output which they are alignec to. 
A bi-directional encoder together with the hard attention mechanism enables to condition on the entire input sequence, as each element in the input is represented using a concatenation of a forward LSTM and a backward LSTM over the input sequence. Since each element representation is aware of the entire context, non-monotone relations are also captured, which is important in tasks where segments in the output sequence are a result of long range dependencies in the input sequence. The recurrent nature of the decoder, together with a dedicated feedback connection that passes the lasi prediction to the next decoder step explicitly, enables the model to also condition on the entire output history at each prediction step. The hard attention mechanism allows the network to jointly align and transduce while using a focused representation at each step, rather then the weighted sum of representations used in the soft attention model. A simple training procedure using independently learned alignments enables training the network with correct alignments from the first gradient- based update, using a convenient cross-entropy loss.\nPushpendre Rastogi, Ryan Cotterell, and Jason Eisner. Weighting finite-state transductions with neural context. In Proc. of NAACL, 2016.\nAlexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. 2015.\nDavid Yarowsky and Richard Wicentowski. Minimally supervised morphological analysis by mul timodal alignment. In ACL, 2000..\nTo evaluate our model, we perform extensive experiments on three previously studied datasets for the morphological inflection generation task, which involves generating a target word (e.g. \"hartestem',. the German word for \"hardest'), given a source word (e.g. \"hart', the German word for \"hard') and the morpho-syntactic attributes of the target (POS=adjective, gender=masculine, type=superlative. etc.). Several studies showed that inflection generation is beneficial for phrase-based machine trans-. lation (Chahuneau et al.|2013) and more recently for neural machine translation (Garcia-Martinez et al.[2016). We show that while our model is on par or better than the previous neural and non-. neural state-of-the-art models on the task, it is also performing significantly better for very small training sets, being the first neural model to surpass the performance of a weighted FST model with latent variables specifically tailored for the task (Dreyer et al. 2o08). Finally, we analyze and com-. pare our model and the soft attention model, showing how they function very similarly with respect. to the alignments and representations they learn, in spite of our model being much simpler..\nMatthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 2012."}, {"section_index": "3", "section_name": "TRAINING DETAILS, IMPLEMENTATION AND HYPER PARAMETERS", "section_text": "Io train our models, we used the train portion of the datasets as-is and evaluated the model whicr performed best on the development portion of the dataset, without conducting any specific pre- processing steps on the data. We train the models for a maximum of 100 epochs over the training set. To avoid long training time, we trained the model for 20 epochs for datasets larger than 50k examples, and for 5 epochs for datasets larger than 200k examples. 
The models were implemented using the python bindings of the dynet toolkit4|We trained the network by optimizing the expected output sequence likelihood using cross-entropy loss as mentioned in equation [5 For optimization we used ADADELTA (Zeiler2012) without regularization. We updated the weights after every example. We used the dynet toolkit implementation of an LSTM network with two layers, each having 100 entries in both the encoder and decoder. The character embeddings were also vectors with 100 entries for the CELEX experiments, and with 300 entries for the SIGMORPHON and Wiktionary experiments. The morpho-syntactic attribute embeddings were vectors of 20 entries in all experiments. We did not use beam search while decoding for both the hard and soft attention models as it is significantly slower and did not show clear improvement in previous experiments we conducted. In all experiments, for both the hard and soft attention models, we report results using an ensemble of 5 models with different random initializations by using majority voting on the final sequences the models predicted, as reported in Kann & Schutze(2016b). This was done to perform fair comparison to the models of Kann & Schutze(2016b a); Faruqui et al.(2016) which also performed a similar ensembling technique."}, {"section_index": "4", "section_name": "2.1 MOTIVATION", "section_text": "We would like to transduce the input sequence, x1:n E * into the output sequence, y1:m E . where x and y are the input and output vocabularies, respectively. Imagine a machine with read only, random access to the encoding of the input sequence, and a single pointer that determines the current read location. We can then model the sequence transduction as a series of write operation. and pointer movement operations. In the case where the alignment between the sequences is mono. tonic, the pointer movement can be controlled by a single \"move one step forward' operation (step which we add to the output vocabulary. We implement this behavior using an encoder-decoder neu ral network, with a control mechanism which determines in each step of the decoder whether it i. time to predict an output symbol or promote the attention pointer the next element of the encodec input.\nIn prediction time, we seek the output sequence y1:m. E *, for which:\nY1:m = arg max p(y' x1:n,\nWhere: x E * is the input sequence and: f = {f1, ..., fm} is a set of features influencing the transduction task (for example, in the inflection generation task these would be the desired morpho- syntactic features of the output sequence). Since we want our model to force a monotonic alignment between the input and the output, we instead look for a sequence of actions: s1:q E *, where: s = y U{step}. This sequence is the step/write action sequence required to go from x1:n to y1:m according to the monotonic alignment between them. In this case we define:\nI1 S1:q = arg max p(s'|x1:n, f) = arg max p(s[S0...S-1, x1:n, J S s' Es'\nn 0 T e t step n step 0 0 step T step e step </W> + 4 <W> step n step 0 step step e step + F + + + 4 + pos=V mood=IMPER num=PL aspect=IPFV <W> n e T b <W>\nn 0 T e + t step n step 0 0 step T step e step </W> <W> step step 0 0 step step e step 4 + + pos=V mood=IMPER num=PL aspect=IPFV <W> n e T b <W>\nFigure 1: The hard attention network architecture. A round tip expresses concatenation of the inputs it receives. The attention is promoted to the next input element once a step action is predicted.\nNotation We use bold letters for vectors and matrices. 
$$s_{1:q} = \arg\max_{s} \text{NN}(x_{1:n}, f, \Theta) \quad (3)$$

where the network's parameters $\Theta$ are learned using a set of training examples. We will now describe the network architecture.

Notation  We use bold letters for vectors and matrices. We treat LSTM as a parameterized function $\text{LSTM}(x_1 \ldots x_n)$ mapping a sequence of input vectors $x_1 \ldots x_n$ to an output vector $h_n$.

Encoder  For every element in the input sequence $x_{1:n} = x_1 \ldots x_n$, we take the corresponding embedding $e_{x_1} \ldots e_{x_n}$, where $e_{x_i} \in \mathbb{R}^E$. These embeddings are parameters of the model which will be learned during training. We then feed the embeddings into a bi-directional LSTM encoder (Graves & Schmidhuber, 2005), which results in a sequence of vectors $\mathbf{x}_{1:n} = \mathbf{x}_1 \ldots \mathbf{x}_n$, where each $\mathbf{x}_i$ is a concatenation of the forward LSTM and the backward LSTM outputs when fed with $e_{x_i}$.

Decoder  Once the input sequence is encoded, we feed the decoder RNN, $\text{LSTM}_{dec}$, with three inputs at each step:

1. The current attended input, $\mathbf{x}_a \in \mathbb{R}^{2H}$, initialized with the first element of the encoded sequence, $\mathbf{x}_1$.
2. A set of feature embeddings that influence the generation process, concatenated to a single vector: $f = [f_1 \ldots f_m] \in \mathbb{R}^{F \cdot m}$.
3. $y_{i-1} \in \mathbb{R}^E$, which is the embedding of the predicted output symbol in the previous decoder step.

Those three inputs are concatenated into a single vector $z_i = [\mathbf{x}_a, f, y_{i-1}] \in \mathbb{R}^{2H + F \cdot m + E}$, which is fed into the decoder, providing the decoder output vector $\text{LSTM}_{dec}(z_1 \ldots z_i) \in \mathbb{R}^H$. Finally, to model the distribution over the possible actions, we project the decoder output to a vector of $|\Sigma_s|$ elements, followed by a softmax layer:

$$p(s_i = c) = \text{softmax}_c(R \cdot \text{LSTM}_{dec}(z_1 \ldots z_i) + b) \quad (4)$$

Control Mechanism  When the most probable action is step, the attention is promoted so $\mathbf{x}_a$ contains the next encoded input representation to be used in the next step of the decoder. This process is demonstrated visually in Figure 1.

For every example $(x_{1:n}, y_{1:m}, f)$ in the training data, we should produce a sequence of step and write actions $s_{1:q}$ to be predicted by the decoder, which is dependent on the alignment between the input and the output: the network must attend at all the input elements aligned to an output element before writing it. While recent work in sequence transduction advocates jointly training the alignment and the decoding (Bahdanau et al., 2014; Yu et al., 2016), we instead show that in our case it is worthwhile to decouple these stages and learn the hard alignment beforehand, using it to guide the training of the encoder-decoder network and enabling the use of correct alignments for the attention mechanism from the beginning of the network training process. For that purpose, we first run a character-level alignment process on the training data. We use the character alignment model of Sudoh et al. (2013), which is based on a Chinese Restaurant Process which weights single alignments (character-to-character) in proportion to how many times such an alignment has been seen elsewhere out of all possible alignments. Specifically, we use the implementation provided by the organizers of the SIGMORPHON 2016 shared task. Once we have the character-level alignment per input-output sequence pair in the training set, we deterministically infer the sequence of actions $s_{1:q}$ that results in the desired output by attending at all the input elements aligned to an output element (using the step action) before writing it. We then train the network to predict this sequence of actions by using a conventional cross-entropy loss function per example:

$$\mathcal{L}(x_{1:n}, y_{1:m}, f, \Theta) = -\sum_{s_j \in s_{1:q}} \log \text{softmax}_{s_j}(R \cdot \text{LSTM}_{dec}(z_1 \ldots z_j) + b) \quad (5)$$

³ The acronyms stand for: 13SIA=1st/3rd person, singular, indefinite, past; 13SKE=1st/3rd person, subjunctive, present; 2PIE=2nd person, plural, indefinite, present; 13PKE=1st/3rd person, plural, subjunctive, present; 2PKE=2nd person, plural, subjunctive, present; z=infinitive; rP=imperative, plural; pA=past participle.

We perform extensive experiments with three previously studied morphological inflection generation datasets to evaluate our hard attention model in various settings.
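The oracle action derivation described in the training procedure above is deterministic given a monotonic character alignment; here is a short illustrative sketch under simplifying assumptions (0-based indices, word-boundary symbols and the attribute features omitted):

```python
def oracle_actions(alignment, target):
    """Derive the step/write oracle from a monotonic character alignment.
    alignment[j] is the index of the input character that target[j] is
    aligned to; the decoder must step to that input before writing."""
    actions, pointer = [], 0
    for y, a in zip(target, alignment):
        while pointer < a:           # attend inputs up to the aligned one
            actions.append("step")
            pointer += 1
        actions.append(y)            # write the output symbol
    return actions

# e.g. German flog -> fliege: both 'i' and 'e' align to input position 2 ('o'),
# so two consecutive writes follow the step onto the replaced character.
print(oracle_actions(alignment=[0, 1, 2, 2, 3, 3], target="fliege"))
# ['f', 'step', 'l', 'step', 'i', 'e', 'step', 'g', 'e']
```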
In all experiments we report the results of the best performing neural and non-neural baselines which were previously published on those datasets to our knowledge. The implementation details for the models are available in the supplementary material section of this paper. The source code for the models is available on GitHub.

CELEX  In order to examine if our model fits the task, we first evaluate it on a very small dataset, to see if it avoids the tendency to overfit on few training examples. For this purpose we report exact match accuracy on the German inflection generation dataset compiled by Dreyer et al. (2008) from the CELEX database (Baayen et al., 1993). The dataset includes only 500 training examples for each of the four inflection types: 13SIA->13SKE, 2PIE->13PKE, 2PKE->z, and rP->pA, which we refer to as 13SIA, 2PIE, 2PKE and rP, respectively.³ We compare our model to three competitive baselines that reported results on this dataset: the Morphological Encoder-Decoder (MED) of Kann & Schutze (2016b), which is based on the soft-attention model of Bahdanau et al. (2014), the neural-weighted FST of Rastogi et al. (2016), which uses stacked bi-directional LSTM's to weigh its arcs (wFST), and the model of Dreyer et al. (2008), which uses a weighted FST with latent variables structured particularly for morphological string transduction tasks (LAT). Following previous reports on this dataset, we use the same data splits as Dreyer et al. (2008), dividing the data for each inflection type into five folds, each consisting of 500 training, 1000 development and 1000 test examples. We train a separate model for each fold and report exact match accuracy, averaged over the five folds.

Wiktionary  To neutralize the negative effect of very small training sets on the performance of the different learning approaches, we also evaluate our model on the dataset created by Durrett & DeNero (2013), which contains up to 360k training examples per language. It was built by extracting Finnish, German and Spanish inflection tables from Wiktionary, used in order to evaluate their system based on string alignments and a semi-CRF sequence classifier with linguistically inspired features. We also used the expansion made by Nicolai et al. (2015) to include French and Dutch inflections as well. Their system also performs an align-and-transduce approach, extracting rules from the aligned training set and applying them at inference time with a proprietary character sequence classifier. In addition to those systems we also compare to the results of the recent neural approaches of Faruqui et al. (2016), which did not use an attention mechanism, and Yu et al. (2016), which coupled the alignment and transduction tasks, requiring a beam search decoding procedure.

SIGMORPHON  As different languages show different morphological phenomena, we also experiment with how our model copes with this variety using the morphological inflection dataset from the SIGMORPHON 2016 shared task (Cotterell et al., 2016).
Here the training data consists of ten languages, with five morphological system types (detailed in Table 3): Russian (RU), German (DE), Spanish (ES), Georgian (GE), Finnish (FI), Turkish (TU), Arabic (AR), Navajo (NA), Hungarian (HU) and Maltese (MA), with roughly 12,800 training and 1600 development examples per language. We compare our model to two soft attention baselines on this dataset: MED (Kann & Schutze, 2016a), which was the best participating system in the shared task, and our implementation of the global (soft) attention model presented by Luong et al. (2015).

Table 1: Results over the CELEX dataset

On the low resource setting (CELEX), our model significantly outperforms both the recent neural models of Kann & Schutze (2016b) and Rastogi et al. (2016) and the morphologically aware latent variable model of Dreyer et al. (2008), as detailed in Table 1. It is also, to our knowledge, the first model that surpassed in overall accuracy the latent variable model on this dataset. We explain our advantage over the soft attention model by the ability of the hard attention control mechanism to harness the monotonic alignments found in the data, while also conditioning on the entire output history, which wasn't available in the FST models. Figure 2 plots the train-set and dev-set accuracies of the soft and hard attention models as a function of the training epoch. While both models perform similarly on the train-set (with the soft attention model fitting it slightly faster), the hard attention model performs significantly better on the dev-set. This shows the soft attention model's tendency to overfit on the small dataset, as it has significantly more parameters and modeling power and is not enforcing the monotonic assumption of the hard attention model.

Table 2: Results over the Wiktionary datasets

Model | DE-N | DE-V | ES-V | FI-NA | FI-V | FR-V | NL-V | Avg.
DDN13 | 88.31 | 94.76 | 99.61 | 92.14 | 97.23 | 98.80 | 90.50 | 94.47
NCK15 | 88.6 | 97.50 | 99.80 | 93.00 | 98.10 | 99.20 | 96.10 | 96.04
FTND16 | 88.12 | 97.72 | 99.81 | 95.44 | 97.81 | 98.82 | 96.71 | 96.34
YBB16 | 87.5 | 92.11 | 99.52 | 95.48 | 98.10 | 98.65 | 95.90 | 95.32
Hard | 88.87 | 97.35 | 99.79 | 95.75 | 98.07 | 99.04 | 97.03 | 96.55

On the large training set experiments (Wiktionary), our model is the best performing model on German verbs, Finnish nouns/adjectives and Dutch verbs, resulting in the highest reported average accuracy across all the inflection types when compared to the four previous neural and non-neural state of the art baselines, as detailed in Table 2. This shows the robustness of our model also with large amounts of training examples, and the advantage the hard attention mechanism provides over the encoder-decoder approach of Faruqui et al. (2016), which does not employ an attention mechanism. Our model is also significantly more accurate than the model of Yu et al. (2016), showing the advantage in using independently learned alignments to guide the network's attention from the beginning of the training process.

Table 3: Results over the SIGMORPHON 2016 morphological inflection dataset. The text above each language lists the morphological phenomena it includes: circ.=circumfixing, agg.=agglutinative, v.h.=vowel harmony, c.h.=consonant harmony.

Figure 2: Learning curves (train and dev accuracy per epoch) for the soft and hard attention models on the first fold of the CELEX dataset.
suffixing+stem changes (RU, DE, ES) | circ. (GE) | suffixing+agg.+v.h. (FI, TU, HU) | c.h. (NA) | templatic (AR, MA)

Model | RU | DE | ES | GE | FI | TU | HU | NA | AR | MA | Avg.
MED | 91.46 | 95.8 | 98.84 | 98.5 | 95.47 | 98.93 | 96.8 | 91.48 | 99.3 | 88.99 | 95.56
Soft | 92.18 | 96.51 | 98.88 | 98.88 | 96.99 | 99.37 | 97.01 | 95.41 | 99.3 | 88.86 | 96.34
Hard | 92.21 | 96.58 | 98.92 | 98.12 | 95.91 | 97.99 | 96.25 | 93.01 | 98.77 | 88.32 | 95.61

As can be seen in Table 3, on the SIGMORPHON 2016 dataset our model performs better than both soft-attention baselines for the suffixing+stem-change languages (Russian, German and Spanish) and is slightly less accurate than our implementation of the soft attention model on the rest of the languages, which is now the best performing model on this dataset to our knowledge.

We explain this by looking at the languages from a linguistic typology point of view, as detailed in Cotterell et al. (2016). Since Russian, German and Spanish employ a suffixing morphology with internal stem changes, they are more suitable for monotonic alignment, as the transformations they need to model are the addition of suffixes and changing characters in the stem. The rest of the languages in the dataset employ more context-sensitive morphological phenomena like vowel harmony and consonant harmony, which require modeling long-range dependencies in the input sequence, which better suits the soft attention mechanism. While our implementation of the soft attention model and MED are very similar model-wise, we hypothesize that our soft attention results are better due to the fact that we trained the model for 100 epochs and picked the best performing model on the development set, while the MED system was trained for a fixed amount of 20 epochs (although trained on both train and development data).

Figure 3: A comparison of the alignments as predicted by the soft attention (left) and the hard attention (right) models on examples from the CELEX dataset.

In order to see if the alignments our model predicts fit the monotonic alignment structure found in the data, and whether they are more suitable for the task when compared to the alignments found by the soft attention model, we examined alignment predictions of the two models from the CELEX dataset, depicted in Figure 3. First, we notice that the alignments found by the soft attention model are also monotonic, encouraging our modeling approach for the task. We also notice how our model learns to handle morphological phenomena like deletion, as can be seen on the right part of Figure 3, showing the alignments for the inflection legte->lege. This inflection requires the model to delete the fourth character of the input sequence. We can see the model learned to delete by reading and writing each character until it reaches the t character, which is deleted by performing two consecutive step operations. Another notable morphological transformation is the one-to-many alignment, found in the example on the left: flog->fliege, where the model needs to transform a character in the input, o, to two characters in the output, ie. We can see the model learns to do this by performing two consecutive write operations after the step operation of the relevant character to be replaced. We also notice that in this case, the soft attention model performs a slightly different alignment by aligning
When witnessing the success of the hard and soft attention models in sequence to sequence transduction, the following questions arise: how do the models manage to learn monotonic alignments? Does the network perhaps learn to encode the sequential position as part of its encoding of an input element? In an attempt to answer these questions, we performed the following analysis. We took 500 continuous character representations in context from each model, where every representation is a vector in $\mathbb{R}^{200}$ which is the output of the bi-LSTM encoder of the model. Every vector of this form carries information about a specific character with its context. We reduce each of the two sets of vectors from $\mathbb{R}^{500 \times 200}$ to $\mathbb{R}^{500 \times 2}$ using SVD. We then plot the 2D character-in-context representations and color them in two ways. First, we color the 2D representations by the characters they represent, with a different color for each character in the alphabet (Figure 4). In the second plot we color the representations by the location of the character they represent in the input sequence: here blue implies the character is closer to the beginning of the sequence and red implies it is closer to the end (Figure 5).

Figure 4: SVD dimension reduction to 2D of 500 character representations in context from the encoder, for both the soft attention (top) and the hard attention (bottom) models. Colors indicate which character is encoded.

Figure 5: SVD dimension reduction to 2D of 500 character representations in context from the encoder, for both the soft attention (top) and hard attention (bottom) models. Colors indicate the location of the character.

We can see that while both models tend to cluster similar character representations together (Figure 4), the hard attention model tends to have denser character clusters. This is explained by looking at the location information in Figure 5: while both models encode the positional information to some extent, this information is much more pronounced in the soft attention mechanism, where the X dimension correlates very strongly with the position information. It seems that the soft attention mechanism encourages the encoder to encode positional information in its representation. In contrast, our hard attention model has other means of obtaining the position information in the decoder, using the step actions, and indeed does not encode it as strongly in the continuous representations. This behavior allows it to perform well even with fewer examples, as the location information is represented implicitly during training through the step actions.
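The projection itself is a standard truncated SVD; a minimal sketch, with randomly generated stand-ins for the extracted encoder outputs and their metadata:

```python
import numpy as np

# Project 500 encoder outputs (vectors in R^200, one per character-in-context)
# to 2D with a truncated SVD. `reps`, `chars` and `positions` are hypothetical
# placeholders for the data extracted from the trained models.
reps = np.random.randn(500, 200)                   # stand-in for bi-LSTM outputs
chars = np.random.choice(list("abcdefg"), 500)     # character identity per vector
positions = np.random.randint(0, 12, 500)          # position of each character

centered = reps - reps.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:2].T                       # 500 x 2 projection

# Color once by character identity (Figure 4) and once by position (Figure 5):
# plt.scatter(coords[:, 0], coords[:, 1], c=positions, cmap="coolwarm")
```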
Previous works on neural sequence transduction include the RNN Transducer (Graves, 2012), which uses two independent RNNs over monotonically aligned sequences to compute a probability over the possible output symbols in each step, including a null symbol. The model by Yu et al. (2016) improves this approach by replacing the null symbol with a learned transition probability. Both models are trained using a forward-backward approach, marginalizing over all possible alignments. Our model differs from the above by learning the alignments independently, which enables a dependency between the encoder and decoder: the hard attention mechanism. This provided improved results while also greatly simplifying the model training, enabling learning through a simple cross-entropy loss. Jaitly et al. (2015) proposed the Neural Transducer model for online speech recognition, which is also trained on external alignments, similarly to our approach. They divide the input into blocks of a constant size and perform soft attention separately on each block when predicting the output symbols. Lu et al. (2016) used a combination of an RNN encoder together with a CRF layer to model the dependencies in the output sequence. A line of work on attention-based speech recognition (Chorowski et al., 2015; Bahdanau et al., 2016) proposed two relevant improvements to the vanilla attention mechanism: the first adds location awareness by using the previous attention weights when computing the next ones, and the second prevents the model from attending on too many or too few inputs using "sharpening" and "smoothing" techniques on the soft attention weight distributions."}, {"section_index": "5", "section_name": "7 CONCLUSION", "section_text": "We presented the hard attention model for sequence to sequence transduction of monotonically aligned sequences and evaluated it on the well-studied task of morphological inflection generation. The model employs an explicit alignment model, learned independently at training time, which is used to teach a neural network to perform both alignment and transduction when decoding with a hard attention mechanism. We showed that our model performs better than or on par with more complex soft attention and neural transduction models on various morphological inflection datasets, forming a new state of the art on the CELEX dataset and the Wiktionary dataset and outperforming the best system in the SIGMORPHON 2016 inflection generation task. Future work may include experimenting with different external alignment methods, or applying the model to other tasks which require a monotonic align-and-transduce approach, like abstractive summarization or transliteration.

For the morphological inflection task, previous approaches usually make use of manually constructed Finite State Transducers (Koskenniemi, 1983; Kaplan & Kay, 1994), which require expert knowledge, or machine learning methods (Yarowsky & Wicentowski, 2000; Dreyer & Eisner, 2011;
Durrett & DeNero, 2013; Hulden et al., 2014; Ahlberg et al., 2015; Nicolai et al., 2015) with specific assumptions about the set of possible processes that are needed to create the output sequence, requiring feature engineering. More recently, Faruqui et al. (2016) used encoder-decoder neural networks for the task, encoding the input sequence into a vector and decoding it one character at a time into the output sequence. Kann & Schütze (2016a;b) explored the soft attention model proposed for machine translation (Bahdanau et al., 2014), which gave the best results in the SIGMORPHON 2016 shared task (Cotterell et al., 2016). Another notable contribution is the work on weighting finite state transducers with neural context (Rastogi et al., 2016). There, the arcs of an FST are scored by optimizing a global loss function over all the possible paths in the state graph while modeling contextual features with bi-directional LSTMs."}]
r1YNw6sxg
[{"section_index": "0", "section_name": "6.3 COMPARISON OF WEIGHTINGS FROM OTHER OPTIMIZATION METHODS", "section_text": "We compare our policy using conv4_3 feature dynamics, with weights optimized by FQI, agains policies that use these dynamics but with either no feature weighting or weights optimized by othe algorithms.\nFor the case of no weighting, we use a single feature weight w but optimize the relative weightin. of the controls with the cross entropy method (CEM) (De Boer et al.|2005). For the other cases. we learn the weights with Trust Region Policy Optimization (TRPO) (Schulman et al.|2015). Sinc the servoing policy is the minimizer of a quadratic objective (Equation (3)), we represent the polic. as a neural network that has a matrix inverse operation at the output. We train this network for. and 50 sampling iterations, and use a batch size of 4000 samples per iteration. All of these method use the same feature representation as ours, the only difference being how the weights w and A ar. chosen.\nWe use our proposed FQI algorithm to optimize for the weights w, X, and surpass the other methods. in terms of performance on test executions, sample efficiency, and overall computation efficiency'. The updates of the inner iteration of our algorithm are computationally efficient; since the data is fixed for a given sampling iteration, we can precompute $ (st, ut) and certain terms of (st+1,:). The parameters that achieved the best performance on 10 validation trajectories were used for the final policy. The policies are trained with FQI for S = 2 sampling iterations, a batch size of 10. trajectories per sampling iteration, K = 10 inner iterations per sampling iteration, and a regulariza-. tion coefficient of v = O.1. We found that regularization of the parameters was important for the. algorithm to converge. We show sample trajectories of the resulting policies in|Table 3."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "We report the average costs of these methods on the right of Figure 6 In 2 sampling iterations the policy learned with TRPO does not improve by much, whereas our policy learned with FQ significantly outperforms the other policies. The policy learned with TRPO improves further in 5( iterations; however, the cost incurred by this policy is still about one and a half times the cost of ou policy, despite using more than 100 times as many trajectories.\nVisual servoing is a classic problem in robotics that requires moving a camera or robot to match a target configuration of visual features or image intensities. Many robot control tasks that combin perception and action can be posed as visual servoing, including navigation (DeSouza & Kak|2002 Chen et al.]2006), where a robot must follow a desired path; manipulation, where the robot mus servo an end-effector or a camera to a target object to grasp or manipulate it (Malis et al.]1999 Corke1993 Hashimoto1993Hosoda & Asada1994Kragic & Christensen 2002); and various other problems, as surveyed in Hutchinson et al.(1996). Most visual servoing methods assume ac cess to good geometric image features (Chaumette & Hutchinson]|2006f|Collewet et al.||2008f|Caror et al.] 2013) and require knowledge of their dynamics, which are typically obtained from domaii knowledge about the system. 
Using such hand-designed features and models prevents exploitation of statistical regularities in the world, and requires manual engineering for each new system.

We report the average costs of these methods on the right of Figure 6. In 2 sampling iterations, the policy learned with TRPO does not improve by much, whereas our policy learned with FQI significantly outperforms the other policies. The policy learned with TRPO improves further in 50 iterations; however, the cost incurred by this policy is still about one and a half times the cost of our policy, despite using more than 100 times as many trajectories.

The FQI algorithm often achieved most of its performance gain after the first iteration. We ran additional sampling iterations of FQI to see if the policies improved further. For each iteration, we evaluated the performance of the policies on 10 validation trajectories. We did the same for the policies trained with TRPO, and we compare the learning curves of both methods in Figure 7."}, {"section_index": "2", "section_name": "6.4 COMPARISON TO PRIOR METHODS", "section_text": "For one of the prior methods, we train a convolutional neural network (CNN) policy end-to-end with TRPO. The policy is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU activations except for the output layer; the convolutional layers use 16 filters (4 × 4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each. The policy takes in raw pixel intensities and outputs controls.

¹Our policy based on conv4_3 features takes around 650 s to run K = 10 iterations of FQI for a given batch size of 10 training trajectories."}, {"section_index": "3", "section_name": "LEARNING VISUAL SERVOING WITH DEEP FEATURES AND FITTED Q-ITERATION", "section_text": "[Figure 6 (bar chart): average costs of prior methods that do not use learned feature dynamics (ORB feature points IBVS, C-COT visual tracker IBVS, CNN + TRPO with ≥ 20000 trajectories) and of methods that use VGG conv4_3 locally connected feature dynamics (unweighted + CEM with 1500 trajectories, + TRPO with ≥ 80 and ≥ 2000 trajectories, and ours + FQI with 20 trajectories), grouped by feature representation and optimization method.]"}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "We use TRPO to optimize for the full space of parameters for each of the feature dynamics we consider in this work. We use a Gaussian policy, where the mean is the servoing policy of Equation (3) and the standard deviation is fixed to σ_exploration = 0.2 (i.e. we do not learn the standard deviation). Since the parameters are constrained to be non-negative, we parametrize the TRPO policies with √w and √λ. We use a Gaussian baseline, where the mean is a 5-layer CNN, consisting of 2 convolutional and 3 fully connected layers, and a standard deviation that is initialized to 1. The convolutional layers use 16 filters (4 × 4, stride 2) each, the first 2 fully-connected layers use 32 hidden units each, and all the layers except for the last one use ReLU activations. The input of the baseline network is the features (either pixel intensities or VGG features) corresponding to the feature dynamics being used. The parameters of the last iteration were used for the final policy. The policies are trained with TRPO for 50 iterations, a batch size of 4000 samples per iteration, and a step size of 0.01.

In this work, we study how learned visual features, learned predictive dynamics models, and reinforcement learning can be combined to learn visual servoing mechanisms. We focus on target following, with the goal of designing algorithms that can learn a visual servo using low amounts of data of the target in question, so as to be easy and quick to adapt to new targets. Successful target following requires the visual servo to tolerate moderate variation in the appearance of the target, including changes in viewpoint and lighting, as well as occlusions.
Learning invariances to all sucl distractors typically requires a considerable amount of data. However, since a visual servo is typ ically specific to a particular task, it is desirable to be able to learn the servoing mechanism ver quickly, using a minimum amount of data. Prior work has shown that the features learned by larg convolutional neural networks on large image datasets, such as ImageNet classification (Deng et al 2009), tend to be useful for a wide range of other visual tasks (Donahue et al.]2014). We explor whether the usefulness of such features extends to visual servoing.\nThis policy achieves a modest performance (although still worse than the policies based on conv4_: feature dynamics) but it requires significantly more training samples than any of the other learning. based methods. We also trained CNN policies that take in extracted VGG features (without any. dynamics) as inputs, but they perform worse (see|Table 4Jin the Appendix). This suggests that givei. a policy parametrization that is expressive enough and given a large number of training samples, i. is better to directly provide the raw pixel-intensity images to the policy instead of extracted VGC. features. This is because VGG features are not optimized for this task and their representation loses. some information that is useful for servoing.\nTo answer this question, we propose a visual servoing method that uses pre-trained features, in our case obtained from the VGG network (Simonyan & Zisserman 2014) trained for ImageNet classification. Besides the visual features, our method uses an estimate of the feature dynamics in visual space by means of a bilinear model. This allows the visual servo to predict how motion of the robot's camera will affect the perceived feature values. Unfortunately, servoing directly on the high-dimensional features of a pre-trained network is insufficient by itself to impart robustness on the servo: the visual servo must not only be robust to moderate visual variation, but it must also be able to pick out the target of interest (such as a car that the robot is tasked with following) from. irrelevant distractor objects. To that end, we propose a sample-efficient fitted Q-iteration procedure that automatically chooses weights for the most relevant visual features. Crucially, the actual ser- voing mechanism in our approach is extremely simple, and simply seeks to minimize the Euclidean distance between the weighted feature values at the next time step and the target. The form of the. servoing policy in our approach leads to an analytic and tractable linear approximator for the Q- function, which leads to a computationally efficient fitted Q-iteration algorithm. We show that we can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms.\nObservation Modality. ground truth car position 0.59 0.24 raw pixel-intensity images 5.20 0.40 VGG conv1_2 features 8.35 0.44 VGG conv2_2 features 14.01 0.47 VGG conv3_3 features 10.51 0.65\nThe other two prior methods use classical image-based visual servoing (IBvS) (Chaumette &. 
The other two prior methods use classical image-based visual servoing (IBVS) (Chaumette & Hutchinson, 2006) with respect to Oriented FAST and Rotated BRIEF (ORB) feature points (Rublee et al., 2011), or feature points extracted from a visual tracker. For the former, the target features consist of only the ORB feature points that belong to the car, and this specifies that the car is relevant for the task. For the tracker-based method, we use the Continuous Convolution Operator Tracker (C-COT) (Danelljan et al., 2016) (the current state-of-the-art visual tracker) to get bounding boxes around the car and use the four corners of the box as the feature points for servoing. We provide the ground truth car's bounding box of the first frame as an input to the C-COT tracker. For all of the IBVS methods, we provide the ground truth depth values of the feature points, which are used in the algorithm's interaction matrix.

(a) Costs when using the set of cars seen during learning. (b) Costs when using a new set of cars, none of which were seen during learning.

Table 4: Costs on test executions of servoing policies that were trained end-to-end with TRPO. These policies take in different observation modalities: ground truth car position or image-based observations. This table follows the same format as Table 2. The mean of the first policy is parametrized as a 3-layer MLP, with tanh non-linearities except for the output layer; the first 2 fully connected layers use 32 hidden units each. For the other policies, each of their means is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully connected layers, with ReLU non-linearities except for the output layer; the convolutional layers use 16 filters (4 × 4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each. All the policies are trained with TRPO, a batch size of 4000 samples, 500 iterations, and a step size of 0.01. The car position observations are not affected by the appearance of the cars, so the test performance for that modality is the same regardless of which set of cars is used.

The first method performs poorly, in part because ORB features are not discriminative enough for some of the cars, and the target feature points are sometimes matched to feature points that are not on the car. The tracker-based method achieves a relatively good performance. The gap in performance with respect to our method is in part due to the lack of car dynamics information in the IBVS model, whereas our method implicitly incorporates that in the learned feature dynamics. It is also worth noting that the tracker-based policy runs significantly slower than our method. The open-source implementation of the C-COT tracker runs at about 1 Hz, whereas our policy based on conv4_3 features runs at about 16 Hz. Most of the computation time of our method is spent computing features from the VGG network, so there is room for speedups if we use a network that is less computationally demanding.

The environment for the synthetic car following benchmark is available online as the package CitySim3D, and the code to reproduce our method and experiments is also available online. Supplementary videos of all the test executions are available on the project's website."}, {"section_index": "5", "section_name": "2 RELATED WORK", "section_text": "Manual design of visual features and dynamics models can limit the applicability of visual servoing approaches. We described an approach that combines learned visual features with learned predictive dynamics models and reinforcement learning to learn visual servoing mechanisms.
Our experiments demonstrate that standard deep features, in our case taken from a model trained for object classification, can be used together with a bilinear predictive model to learn an effective visual servo that is robust to visual variation, changes in viewing angle and appearance, and occlusions. For control, we propose to learn Q-values, building on fitted Q-iteration, which at execution time allows for one-step lookahead calculations that optimize long-term objectives. Our method can learn an effective visual servo on a complex synthetic car following benchmark using just 20 training trajectory samples for reinforcement learning. We demonstrate substantial improvement over a conventional approach based on image pixels or hand-designed keypoints, and we show an improvement in sample-efficiency of more than two orders of magnitude over standard model-free deep reinforcement learning algorithms.

We use TRPO to train end-to-end servoing policies for various observation modalities and report the performance of the learned policies in Table 4. The policies are trained with the set of training cars, and tested on both this set and on the set of novel cars. The observation modalities that we consider are ground truth car positions (relative to the quadcopter), images of pixel intensities from the quadcopter's camera, and VGG features extracted from those images. Unlike our method and the other experiments, no feature dynamics are explicitly learned for these experiments.

Visual servoing is typically (but not always) performed with calibrated cameras and carefully designed visual features. Ideal features for servoing should be stable and discriminative, and much of the work on visual servoing focuses on designing stable and convergent controllers under the assumption that such features are available (Espiau et al., 2002; Mohta et al., 2014; Wilson et al., 1996). Some visual servoing methods do not require camera calibration (Jagersand et al., 1997; Yoshimi & Allen, 1994), and some recent methods operate directly on image intensities (Caron et al., 2013), but they generally do not use learning to exploit statistical regularities in the world and improve robustness to distractors.

We use a Gaussian policy, where the mean is either a multi-layer perceptron (MLP) or a convolutional neural net (CNN), and the standard deviation is initialized to 1. We also use a Gaussian baseline, which is parametrized just as the corresponding Gaussian policy (but no parameters are shared between the policy and the baseline). For the policy that takes in car positions, the mean is parametrized as a 3-layer MLP, with tanh non-linearities except for the output layer; the first 2 fully connected layers use 32 hidden units each. For the other policies, each of their means is parametrized as a 5-layer CNN, consisting of 2 convolutional and 3 fully-connected layers, with ReLU non-linearities except for the output layer; the convolutional layers use 16 filters (4 × 4, stride 2) each and the first 2 fully-connected layers use 32 hidden units each.

Learning is a relatively recent addition to the repertoire of visual servoing tools. Several methods have been proposed that apply ideas from reinforcement learning to directly acquire visual servoing controllers (Lampe & Riedmiller, 2013; Sadeghzadeh et al., 2015). However, such methods have not been demonstrated under extensive visual variation, and do not make use of state-of-the-art convolutional neural network visual features.
Though more standard deep reinforcement learning methods (Lange et al., 2012; Mnih et al., 2013; Levine et al., 2016; Lillicrap et al., 2015) could in principle be applied to directly learn visual servoing policies, such methods tend to require large numbers of samples to learn task-specific behaviors, making them poorly suited for a flexible visual servoing algorithm that can be quickly repurposed to new tasks (e.g. to following a different object)."}, {"section_index": "6", "section_name": "ACKNOWLEDGEMENTS", "section_text": "The CNN policies would often not converge for several randomly initialized parameters. Thus, at the beginning of training, we tried multiple random seeds until we got a policy that achieved a relatively low cost on validation trajectories, and used the best initialization for training. The MLP policy did not have this problem, so we did not have to try multiple random initializations for it. All the policies are trained with a batch size of 4000 samples, 500 iterations, and a step size of 0.01. The parameters of the last iteration were used for the final policy.

This research was funded in part by the Army Research Office through the MAST program and the Berkeley DeepDrive consortium. Alex Lee was also supported by the NSF GRFP."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Instead, we propose an approach that combines learning of predictive models with pre-trained visual features. We use visual features trained for ImageNet (Deng et al., 2009) classification, though any pre-trained features could in principle be applicable for our method, so long as they provide a suitable degree of invariance to visual distractors such as lighting, occlusion, and changes in viewpoint. Using pre-trained features allows us to avoid the need for large amounts of experience, but we must still learn the policy itself. To further accelerate this process, we first acquire a predictive model that allows the visual servo to determine how the visual features will change in response to an action. General video prediction is an active research area, with a number of complex but data-hungry models proposed in recent years (Oh et al., 2015; Watter et al., 2015; Mathieu et al., 2015; Xue et al., 2016; Lotter et al., 2016; Jia et al., 2016; Walker et al., 2016; Vondrick et al., 2016).

Feature Points                                             (gain)   Cost
corners of bounding box from C-COT tracker                 (0.75)   1.70 ± 0.30
corners of ground truth bounding box                       (0.75)   0.86 ± 0.25
corners of next frame's bounding box from C-COT tracker    (0.65)   1.46 ± 0.22
corners of next frame's ground truth bounding box          (0.65)   0.53 ± 0.05
SIFT feature points                                        (0.30)   14.47 ± 0.75
SURF feature points                                        (0.60)   16.37 ± 0.78
ORB feature points                                         (0.30)   4.41 ± 0.60

Andrea Censi and Richard M. Murray. Bootstrapping bilinear models of simple vehicles. The International Journal of Robotics Research, 34(8):1087-1113, 2015.

Table 5: Costs on test executions when using classical image-based visual servoing (IBVS) with respect to feature points derived from bounding boxes and keypoints derived from hand-engineered features. Since there is no learning involved in this method, we only test with one set of cars: the cars that were used for training in the other methods. This table follows the same format as Table 2. This method has one hyperparameter, which is the gain for the control law. For each feature type, we select the best hyperparameter (shown in parentheses) by validating the policy on 10 validation trajectories for gains between 0.05 and 2, in increments of 0.05. The servoing policies based on bounding box features achieve low cost, and even lower ones if ground truth car dynamics is used. However, servoing with respect to hand-crafted feature points is significantly worse than the other methods.
François Chaumette and Seth Hutchinson. Visual servo control. I. Basic approaches. IEEE Robotics & Automation Magazine, 13(4):82-90, 2006.

However, we observe that convolutional response maps can be interpreted as images and, under mild assumptions, the dynamics of image pixels during camera motion can be well approximated by means of a bilinear model (Censi & Murray, 2015). We therefore train a relatively simple bilinear model for short-term prediction of visual feature dynamics, which we can use inside a very simple visual servo that seeks to minimize the error between the next predicted feature values and a target image.

Jian Chen, Warren E. Dixon, M. Dawson, and Michael McIntyre. Homography-based visual servo tracking control of a wheeled mobile robot. IEEE Transactions on Robotics, 22(2):406-415, 2006.

Unfortunately, simply training predictive models on top of pre-trained features is insufficient to produce an effective visual servo, since it weights the errors of distractor objects the same amount as the object of interest. We address this challenge by using an efficient Q-iteration algorithm to train the weights on the features to maximize the servo's long-horizon reward. This method draws on ideas from regularized fitted Q-iteration (Gordon, 1995; Ernst et al., 2005; Farahmand et al., 2009) and neural fitted Q-iteration (Riedmiller, 2005) to develop a sample-efficient algorithm that can directly estimate the expected return of the visual servo without the use of any additional function approximator.

Peter I. Corke. Visual control of robot manipulators - A review. Visual Servoing, 7:1-31, 1993.

Traditional visual servoing techniques (Feddema & Mitchell, 1989; Weiss et al., 1987) use the image-plane coordinates of a set of points for control. For comparison to our method, we evaluate the servoing performance of feature points derived from bounding boxes and keypoints derived from hand-engineered features, and report the costs of test executions in Table 5.

We use bounding boxes from the C-COT tracker (Danelljan et al., 2016) (the current state-of-the-art visual tracker) and ground truth bounding boxes from the simulator. The latter is defined as the box that tightly fits around the visible portions of the car. We provide the ground truth bounding box of the first frame to the C-COT tracker to indicate that we want to track the car. We use the four corners of the box as the feature points for servoing, to take into account the position and scale of the car in image coordinates.

We provide the ground truth depth values of the feature points for the interaction matrices. In classical image-based visual servoing, the control law involves the interaction matrix (also known as the feature Jacobian), which is the Jacobian of the points in image space with respect to the camera's control (see Chaumette & Hutchinson (2006) for details). The analytical feature Jacobian used in IBVS assumes that the target points are static in the world frame. This is not true for a moving car,
so we consider a variant where the feature Jacobian incorporates the ground truth dynamics of the car. This amounts to adding a non-constant translation bias to the output of the dynamics function, where the translation is the displacement due to the car's movement of the 3-dimensional point in the camera's reference frame. Note that this is still not exactly equivalent to having the car be static, since the roads have different slopes but the pitch and roll of the quadcopter are constrained to be fixed.

Learning this policy amounts to learning the robot dynamics and the distance metric $\|\cdot\|$.

To learn the robot dynamics, we assume that we have access to a dataset of paired observations and controls $x_t, u_t, x_{t+1}$. This data is relatively easy to obtain, as it involves collecting a stream of the robot's observations and controls. We use this dataset to learn a general visual dynamics model that can be used for any task.

Bernard Espiau, François Chaumette, and Patrick Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3):313-326, 2002.

For the hand-crafted features, we consider SIFT (Lowe, 2004), SURF (Bay et al., 2006) and ORB (Rublee et al., 2011) keypoints. We filter out the keypoints of the first frame that do not belong to the car and use these as the target keypoints. However, we use all the keypoints for the subsequent observations.

Amir Massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvári, and Shie Mannor. Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems. In American Control Conference (ACC'09), pp. 725-730. IEEE, 2009.

To learn the distance metric, we assume that the robot interacts with the world and collects tuples of the form $x_t, u_t, c_t, x_{t+1}, x_*$. At every time step during learning, the robot observes $x_t$ and takes action $u_t$. After the transition, the robot observes $x_{t+1}$ and receives an immediate cost $c_t$. This cost is task-specific and quantifies how good that transition was in order to achieve the goal. At the beginning of each trajectory, the robot is given a goal observation $x_*$, and it is the same throughout the trajectory. We define the goal feature map to be the featurization of the goal observation. We learn the distance metric using reinforcement learning and we model the environment as a Markov Decision Process (MDP). The state of the MDP is the tuple of the current observation and the episode's target observation, $s_t = (x_t, x_*)$, the action $u_t$ is the discrete-time continuous control of the robot, and the cost function maps the states and action $(s_t, u_t, s_{t+1})$ to a scalar cost $c_t$.

The servoing policies based on bounding box features achieve low cost, and even lower ones if ground truth car dynamics is used. However, servoing with respect to hand-crafted feature points is significantly worse than the other methods. This is, in part, because the feature extraction and matching process introduces compounding errors. Similar results were found by Collewet & Marchand (2011), who proposed photometric visual servoing (i.e. servoing with respect to pixel intensities) and showed that it outperforms, by an order of magnitude, classical visual servoing that uses SURF features.

Koichi Hashimoto. Visual Servoing, volume 7. World Scientific, 1993.

Martin Danelljan, Andreas Robinson, Fahad Shahbaz Khan, and Michael Felsberg. Beyond correlation filters: Learning continuous convolution operators for visual tracking. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 472-488. Springer, 2016.
Pieter-Tjerk De Boer, Dirk P. Kroese, Shie Mannor, and Reuven Y. Rubinstein. A tutorial on the cross-entropy method. Annals of Operations Research, 134(1):19-67, 2005.

Let $y_t$ be a featurization of the camera's observations $x_t$ and let $y_*$ be some given goal feature map. For the purposes of this work, we define visual servoing as the problem of choosing controls $u_t$ for a fixed number of discrete time steps $t$ so as to minimize the error $\|y_* - y_t\|$.

We use a relatively simple gradient-based servoing policy that uses one-step feature dynamics $f : \{y_t, u_t\} \rightarrow y_{t+1}$. The policy chooses the control that minimizes the distance between the goal feature map and the one-step prediction:

$$\pi(x_t, x_*) = \arg\min_u \left\| y_* - f(y_t, u) \right\|_2^2 \qquad (1)$$

Figure 1: Multiscale bilinear model. The function $h$ maps images $x$ to feature maps $y^{(0)}$, the operator $d$ downsamples the feature maps $y^{(l-1)}$ to $y^{(l)}$, and the bilinear function $f^{(l)}$ predicts the next feature $y^{(l)}_{t+1}$. The number of channels for each feature map is $n_c$, regardless of the scale $l$.

                              Policy Variant
Observation Modality (Pose)   Use Rotation             Ignore Rotation
car pose                      (1.55) 0.58 ± 0.25       (1.90) 0.51 ± 0.25
next frame's car pose         (1.00) 0.0059 ± 0.0020   (1.00) 0.0025 ± 0.0017

Seth Hutchinson, Gregory D. Hager, and Peter I. Corke. A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5):651-670, 1996.

Table 6: Costs on test executions when using classical position-based visual servoing (PBVS). Since there is no learning involved in this method, we only test with one set of cars: the cars that were used for training in the other methods. This table follows the same format as Table 2. This method has one hyperparameter, which is the gain for the control law. For each condition, we select the best hyperparameter (shown in parentheses) by validating the policy on 10 validation trajectories for gains between 0.05 and 2, in increments of 0.05. These servoing policies, which use ground truth car poses, outperform all the other policies based on images. In addition, the performance is more than two orders of magnitude better if ground truth car dynamics is used.

Martin Jagersand, Olac Fuentes, and Randal Nelson. Experimental evaluation of uncalibrated visual servoing for precision manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), volume 4, pp. 2874-2880. IEEE, 1997.

Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V. Gool. Dynamic filter networks. In Advances in Neural Information Processing Systems (NIPS), pp. 667-675, 2016."}, {"section_index": "8", "section_name": "C.6 CLASSICAL POSITION-BASED VISUAL SERVOING", "section_text": "Similar to our IBVS experiments, we consider a variant that uses the car pose of the next time step as a way to incorporate the ground truth car dynamics into the interaction matrix. Since the cost function is invariant to the orientation of the car, we also consider a variant where the policy only minimizes the translational part of the pose error.

Danica Kragic and Henrik I. Christensen. Survey on visual servoing for manipulation. Computational Vision and Active Perception Laboratory, Fiskartorpsv, 15, 2002.

Thomas Lampe and Martin Riedmiller. Acquiring visual servoing reaching and grasping skills using neural reinforcement learning. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2013.
These servoing policies, which use ground truth car poses, outperform all the other policies based on images. In addition, the performance is more than two orders of magnitude better if ground truth car dynamics is used."}, {"section_index": "9", "section_name": "4 VISUAL FEATURES DYNAMICS", "section_text": "We learn a multiscale bilinear model to predict the visual features of the next frame given the current image from the robot's camera and the action of the robot. An overview of the model is shown in Figure 1. The learned dynamics can then be used for visual servoing as described in Section 5.

Sascha Lange, Martin Riedmiller, and Arne Voigtländer. Autonomous reinforcement learning on raw visual input data in a real world application. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2012."}, {"section_index": "10", "section_name": "4.1 VISUAL FEATURES", "section_text": "We consider both pixels and semantic features for the visual representation. We define the function $h$ to relate the image $x$ and its feature $y = h(x)$. Our choice of semantic features is derived from the VGG-16 network (Simonyan & Zisserman, 2014), which is a convolutional neural network trained for large-scale image recognition on the ImageNet dataset (Deng et al., 2009). Since spatial invariance is undesirable for servoing, we remove some of the max-pooling layers and replace the convolutions that followed them with dilated convolutions, as done by Yu & Koltun (2015). The modified VGG network is shown in Figure 2. We use the model weights of the original VGG-16 network, which are publicly available as a Caffe model (Jia et al., 2014). The features that we use are the outputs of some of the intermediate convolutional layers, which have been downsampled to a 32 × 32 resolution (if necessary) and standardized with respect to our training set.

William Lotter, Gabriel Kreiman, and David Cox. Deep predictive coding networks for video prediction and unsupervised learning. CoRR, abs/1605.08104, 2016.

We use multiple resolutions of these features for servoing. The idea is that the high-resolution representations have detailed local information about the scene, while the low-resolution representations have more global information available through the image-space gradients. The features at level $l$ of the multiscale pyramid are denoted as $y^{(l)}$. The features at each level are obtained from the features below through a downsampling operator $d(y^{(l-1)}) = y^{(l)}$ that cuts the resolution in half.

Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. CoRR, abs/1511.05440, 2015.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing Atari with deep reinforcement learning. CoRR, abs/1312.5602, 2013."}, {"section_index": "11", "section_name": "4.2 BILINEAR DYNAMICS", "section_text": "The features $y_t^{(l)}$ are used to predict the corresponding level's features $y_{t+1}^{(l)}$ at the next time step, conditioned on the action $u_t$. We use a bilinear model to represent these dynamics, motivated by prior work (Censi & Murray, 2015). In order to servo at different scales, we learn a bilinear dynamics model at each scale. We consider two variants of the bilinear model in previous work in order to reduce the number of model parameters.

Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L. Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems (NIPS), pp. 2863-2871, 2015.
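As a concrete illustration of the feature extraction in Section 4.1, the following sketch pulls intermediate convolutional outputs from the standard torchvision VGG-16; note that the paper's variant additionally replaces some max-pooling layers with dilated convolutions, which this illustration omits, and the layer indices assume torchvision's standard VGG-16 layout.

```python
import torch
from torchvision import models

# Extract intermediate VGG-16 feature maps (outputs of the ReLUs after the
# named conv layers). The downsampling to 32x32 used in the paper is omitted.
vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()
layer_of = {"conv1_2": 3, "conv2_2": 8, "conv3_3": 15, "conv4_3": 22}

def extract(x, name):
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i == layer_of[name]:
                return x

y = extract(torch.zeros(1, 3, 128, 128), "conv4_3")  # 512-channel feature map
```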
The first variant uses fully connected dynamics as in previous work, but models the dynamics of each channel independently. When semantic features are used, this model interprets the feature maps as being abstract images, with spatial information within a channel and different entities or factors of variation across different channels. This could potentially allow the model to handle moving objects, occlusions, and other complex phenomena.

Koh Hosoda and Minoru Asada. Versatile visual servoing without knowledge of true Jacobian. In Intelligent Robots and Systems '94: 'Advanced Robotic Systems and the Real World' (IROS '94), Proceedings of the IEEE/RSJ/GI International Conference on, volume 1, pp. 186-193. IEEE, 1994.

Figure 2: Dilated VGG-16 network. The intermediate feature maps drawn in a lighter shade are outputs of max-pooling layers. The feature maps in the conv4 and conv5 blocks are outputs of dilated convolutions with dilation factors of 2 and 4, respectively.

Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1-40, 2016.

David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.

The fully connected bilinear model is quite large, so we propose a bilinear dynamics that enforces sparsity in the parameters. In particular, we constrain the prediction to depend only on the features that are in its local spatial neighborhood, leading to the following locally connected bilinear model:

$$y_{t+1,c}^{(l)} = y_{t,c}^{(l)} + \sum_j u_j \left( W_{c,j}^{(l)} * y_{t,c}^{(l)} + B_{c,j}^{(l)} \right) + W_{c,0}^{(l)} * y_{t,c}^{(l)} + B_{c,0}^{(l)} \qquad (2)$$

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.

Jacob Walker, Carl Doersch, Abhinav Gupta, and Martial Hebert. An uncertain future: Forecasting from static images using variational autoencoders. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 835-851. Springer, 2016.

We optimize for the dynamics while keeping the feature representation fixed. This is a supervised learning problem, which we solve with ADAM (Kingma & Ba, 2014). The training set, consisting of triplets $x_t, u_t, x_{t+1}$, was obtained by executing a hand-coded policy that moves the robot around the target with some Gaussian noise."}, {"section_index": "12", "section_name": "5 LEARNING VISUAL SERVOING WITH REINFORCEMENT LEARNING", "section_text": "We propose to use a multiscale representation of semantic features for servoing. The challenge when introducing multiple scales and multi-channel feature maps for servoing is that the features do not necessarily agree on the optimal action when the goal is unattainable or the robot is far away from the goal. To do well, it is important to use a good weighting of each of the terms in the objective. Since there are many weights, it would be impractically time-consuming to set them by hand, so we resort to learning. We want the weighted one-step lookahead objective to encourage good long-term behavior, so we want this objective to correspond to the state-action value function Q. So we propose a method for learning the weights based on fitted Q-iteration."}, {"section_index": "13", "section_name": "5.1 SERVOING WITH WEIGHTED MULTISCALE FEATURES", "section_text": "Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. CoRR, abs/1511.07122, 2015.
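Returning to the locally connected dynamics of Equation (2), here is a minimal numpy sketch of the operator and one prediction step for a single channel at one scale; the shapes are illustrative, and this is not the trained model.

```python
import numpy as np

def locally_connected(W, y):
    """Locally connected operator of footnote 4: a convolution with untied
    filter weights. W has shape (H, W, nf, nf), one filter per location."""
    nf = W.shape[-1]
    H, Wd = y.shape
    yp = np.pad(y, nf // 2)
    out = np.empty_like(y)
    for kh in range(H):
        for kw in range(Wd):
            out[kh, kw] = np.sum(W[kh, kw] * yp[kh:kh + nf, kw:kw + nf])
    return out

def bilinear_step(y, u, W, B):
    """Equation (2): y + sum_j u_j (W_j * y + B_j) + (W_0 * y + B_0).
    W: (n_u + 1, H, W, nf, nf); B: (n_u + 1, H, W); index 0 is the
    action-independent term."""
    y_next = y + locally_connected(W[0], y) + B[0]
    for j, uj in enumerate(u, start=1):
        y_next = y_next + uj * (locally_connected(W[j], y) + B[j])
    return y_next

H = Wd = 8; n_u = 4; nf = 3
y = np.random.randn(H, Wd)
W = 0.01 * np.random.randn(n_u + 1, H, Wd, nf, nf)
B = 0.01 * np.random.randn(n_u + 1, H, Wd)
y_next = bilinear_step(y, np.array([0.1, 0.0, -0.2, 0.05]), W, B)
```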
Instead of attempting to build an accurate predictive model for multi-step planning, we use the simple greedy servoing method in Equation (1), where we minimize the error between the target and predicted features for all the scales. Typically, only a few objects in the scene are relevant, so the errors of some channels should be penalized more than others. Similarly, features at different scales might need to be weighted differently. Thus, we use a weighting $w_c^{(l)} \geq 0$ per channel $c$ and scale $l$:

$$\pi(x_t, x_*) = \arg\min_u \sum_{l=0}^{L} \sum_c \frac{w_c^{(l)}}{|y_c^{(l)}|} \left\| y_{*,c}^{(l)} - f_c^{(l)}\left(y_t^{(l)}, u\right) \right\|_2^2 + \sum_j \lambda_j u_j^2 \qquad (3)$$

where $|\cdot|$ denotes the cardinality operator and the constant $1/|y_c^{(l)}|$ normalizes the feature errors by their spatial resolution. We also use a separate weight $\lambda_j$ for each control coordinate $j$. This optimization can be solved efficiently since the dynamics is linear in the controls (see Appendix A).

Furthermore, the bilinear dynamics allows the Jacobian matrix to be computed efficiently by simply doing a forward pass through the model. For the locally connected bilinear dynamics of Equation (2), the $j$-th column of the Jacobian matrix is given by

$$\frac{\partial f_c^{(l)}\left(y_t^{(l)}, u\right)}{\partial u_j} = W_{c,j}^{(l)} * y_{t,c}^{(l)} + B_{c,j}^{(l)}$$

The parameters are the 4-dimensional tensor $W_{c,j}^{(l)}$ and the matrix $B_{c,j}^{(l)}$ for each channel $c$, scale $l$, and control coordinate $j$. The last two terms are biases that allow modeling action-independent visual changes, such as moving objects. The $*$ is the locally connected operator, which is like a convolution but with untied filter weights⁴.

⁴The locally connected operator, with a local neighborhood of $n_f \times n_f$ (analogous to the filter size in convolutions), is defined as:

$$(W * y)_{k_h, k_w} = \sum_{i_h = k_h - \lfloor n_f/2 \rfloor}^{k_h + \lfloor n_f/2 \rfloor} \; \sum_{i_w = k_w - \lfloor n_f/2 \rfloor}^{k_w + \lfloor n_f/2 \rfloor} W_{k_h, k_w, i_h - k_h, i_w - k_w} \, y_{i_h, i_w}$$

John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the International Conference on Machine Learning (ICML), pp. 1889-1897, 2015.

The loss that we use for training the bilinear dynamics is the sum, over scales, of the l2 losses between the predicted features and the actual features of that level, $\sum_l \| y_{t+1}^{(l)} - \hat{y}_{t+1}^{(l)} \|_2^2$.

Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems (NIPS), pp. 2746-2754, 2015.

Lee E. Weiss, Arthur C. Sanderson, and Charles P. Neuman. Dynamic sensor-based control of robots with visual feedback. IEEE Journal on Robotics and Automation, 3(5):404-417, 1987."}, {"section_index": "14", "section_name": "8 SERVOING COST FUNCTION FOR REINFORCEMENT LEARNING", "section_text": "Reinforcement learning methods that learn a Q-function do so by minimizing the Bellman error:

$$\sum_i \left( Q\left(s_t^{(i)}, u_t^{(i)}\right) - \left( c_t^{(i)} + \gamma \min_u Q\left(s_{t+1}^{(i)}, u\right) \right) \right)^2 \qquad (4)$$

The camera is attached to the vehicle slightly in front of the robot's origin and facing down at an angle of $\pi/6\,\mathrm{rad}$, similar to a commercial quadcopter drone. The robot has 4 degrees of freedom, corresponding to translation and yaw angle. Pitch and roll are held fixed.

In our simulations, the quadcopter follows a car that drives at $1\,\mathrm{m\,s^{-1}}$ along city roads during training and testing. The quadcopter's speed is limited to within $10\,\mathrm{m\,s^{-1}}$ for each translational degree of freedom, and its angular speed is limited to within $\pi/2\,\mathrm{rad\,s^{-1}}$. The simulator runs at 10 Hz. For each trajectory, a car is chosen randomly from a set of cars, and placed randomly on one of the roads. The quadcopter is initialized right behind the car, in the desired relative position for following. The image observed at the beginning of the trajectory is used as the goal observation.
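Because the dynamics of Equation (2) are linear in the controls, the minimization in Equation (3) reduces to a regularized linear least-squares problem with a closed-form solution. A minimal sketch, with stacked feature vectors and an assumed precomputed Jacobian:

```python
import numpy as np

def servoing_control(y_goal, y0, J, w, lam):
    """Closed-form minimizer of Equation (3), assuming f(y_t, u) = y0 + J u.

    y_goal, y0: stacked feature vectors over all scales and channels;
    J: (dim, n_u) Jacobian of the features w.r.t. the controls;
    w: per-feature weights (already divided by each channel's resolution);
    lam: per-control regularization weights lambda_j.
    """
    A = J.T @ (w[:, None] * J) + np.diag(lam)   # normal equations
    b = J.T @ (w * (y_goal - y0))
    return np.linalg.solve(A, b)
```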
It is typically hard or unstable to optimize for both Q-functions that appear in the Bellman error of Equation (4), so it is usually optimized by iteratively optimizing the current Q-function while keeping the target Q-function constant. However, we notice that, for a given state, the action that minimizes its Q-values is the same for any non-negative scaling $\alpha$ of $\theta$ and for any bias $b$. Thus, to speed up the optimization of the Q-function, we first set $\alpha^{(k-\frac{1}{2})}$ and $b^{(k-\frac{1}{2})}$ by jointly solving for the $\alpha$ and $b$ of both the current and target Q-function:

$$\alpha^{(k-\frac{1}{2})}, b^{(k-\frac{1}{2})} = \arg\min_{\alpha \geq 0,\, b} \sum_{i=1}^{N} \left( Q_{\alpha\theta^{(k-1)}, b}\left(s_t^{(i)}, u_t^{(i)}\right) - \left( c_t^{(i)} + \gamma \min_u Q_{\alpha\theta^{(k-1)}, b}\left(s_{t+1}^{(i)}, u\right) \right) \right)^2 \qquad (5)$$

This is similar to how, in policy evaluation, state values can be computed by solving a linear system. We regularize the parameters with an $\ell_2$ penalty, weighted by $\nu > 0$. We use the term FQI iteration to refer to each iteration $k$ of optimizing the Bellman error, and we use the notation $(k-\frac{1}{2})$ to denote an intermediate step between iterations $(k-1)$ and $(k)$. The parameters $\theta$ can then be updated with $\theta^{(k-\frac{1}{2})} = \alpha^{(k-\frac{1}{2})}\theta^{(k-1)}$. Then, we update $\theta^{(k)}$ and $b^{(k)}$ by optimizing for the $\theta$ and $b$ of the current Q-function while keeping the parameters of the target Q-function fixed:

The dynamics of all the features were trained using a dataset of 10000 triplets $x_t, u_t, x_{t+1}$. The observations are 128 × 128 RGB images and the actions are 4-dimensional vectors of real numbers encoding the linear and angular (yaw) velocities. The actions are normalized to between -1 and 1.

$$\theta^{(k)}, b^{(k)} = \arg\min_{\theta \geq 0,\, b} \sum_{i=1}^{N} \left( Q_{\theta, b}\left(s_t^{(i)}, u_t^{(i)}\right) - \left( c_t^{(i)} + \gamma \min_u Q_{\theta^{(k-\frac{1}{2})}, b^{(k-\frac{1}{2})}}\left(s_{t+1}^{(i)}, u\right) \right) \right)^2 + \nu \|\theta\|_2^2 \qquad (6)$$

The training set was generated from 100 trajectories of a quadcopter following a car around the city with some randomness. Each trajectory was 100 steps long. Only 5 training cars were shown during learning. The generation process of each trajectory is as follows: First, a car is chosen at random from the set of available cars and it is randomly placed on one of the roads. Then, the quadcopter is placed at some random position relative to the car's horizontal pose, which is the car's pose after it has been rotated so that its vertical axis matches that of the world. This quadcopter position is uniformly sampled in cylindrical coordinates relative to the car's horizontal pose, with heights in the interval 12 m to 18 m, and azimuthal angles in the interval $-\pi/2\,\mathrm{rad}$ to $\pi/2\,\mathrm{rad}$ (where the origin of the azimuthal angle is the back of the car). The radii and yaw angles are initialized so that the car is in the middle of the image. At every time step, the robot takes an action that moves it towards a target pose, with some additive Gaussian noise ($\sigma = 0.2$). The target pose is sampled according to the same procedure as the initial pose, and it is sampled once at the beginning of each trajectory.

We try the fully and locally connected dynamics for pixel intensities to better understand the performance trade-offs when assuming locally connected dynamics. We do not use the latter for the semantic features, since they are too high-dimensional for the dynamics model to fit in memory. The dynamics models were trained with ADAM using 10000 iterations, a batch size of 32, a learning rate of 0.001, momentums of 0.9 and 0.999, and a weight decay of 0.0005.

The goal of reinforcement learning is to find a policy that maximizes the expected sum of rewards, or equivalently, a policy that minimizes the expected sum of costs. The cost should be one that quantifies progress towards the goal. We define the cost function in terms of the position of the target object (in the camera's local frame) after the action has been taken:

$$c(s_t, u_t, s_{t+1}) = \begin{cases} \left\| p_{t+1} - p_* \right\|_2^2, & \text{if } \|p_{t+1}\|_2 \geq \tau \text{ and the car is in the FOV} \\ (T - t + 1)\, c(s_t), & \text{otherwise} \end{cases}$$
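A direct transcription of this piecewise cost; the early-termination branch follows the reconstruction above, charging the remaining steps at the current state's error.

```python
import numpy as np

def servoing_cost(p_t, p_next, p_target, t, T=100, tau=4.0, car_in_fov=True):
    """Quadratic error on the car's position in the camera frame, with a
    terminal penalty when the episode ends early (camera too close to the
    car, or car outside the field of view)."""
    err = lambda p: float(np.sum((p - p_target) ** 2))
    if np.linalg.norm(p_next) >= tau and car_in_fov:
        return err(p_next)
    return (T - t + 1) * err(p_t)   # remaining steps at the current error
```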
where $T$ is the maximum trajectory length. The episode terminates early if the camera is too close to the car (less than a distance $\tau$) or the car's origin is outside the camera's field of view (FOV). The car's position at time $t$ is $p_t = (p_t^x, p_t^y, p_t^z)$ and the car's target position is $p_* = (0, 0, p_*^z)$, both in the camera's local frame (the z-direction is forward). Our experiments use $T = 100$ and $\tau = 4\,\mathrm{m}$.

We denote the state of the MDP as $s_t = (x_t, x_*)$ and add a bias $b$ to the Q-function:

$$Q_{\theta, b}(s_t, u) = \phi(s_t, u)^\top \theta + b$$

The servoing policy is then simply $\pi_\theta(s_t) = \arg\min_u Q_{\theta, b}(s_t, u)$. For reinforcement learning, we optimized for the weights $\theta$ but kept the feature representation and its dynamics fixed.

In fitted Q-iteration, the agent iteratively gathers a dataset $\{(s_t^{(i)}, u_t^{(i)}, c_t^{(i)}, s_{t+1}^{(i)})\}_{i=1}^{N}$ of $N$ samples according to an exploration policy, and then minimizes the Bellman error using this dataset. We use the term sampling iteration to refer to each iteration $j$ of this procedure. At the beginning of each sampling iteration, the current policy with added Gaussian noise is used as the exploration policy.

Algorithm 1 FQI with initialization of policy-independent parameters
2: for s = 1, ..., S do                              ▷ sampling iterations
3:     gather a dataset of N samples using the exploration policy
4:     for k = 1, ..., K do                          ▷ FQI iterations
5:         fit α^(k-1/2) and b^(k-1/2) using (5); θ^(k-1/2) ← α^(k-1/2) θ^(k-1)
6:         fit θ^(k) and b^(k) using (6)
7:     θ^(0) ← θ^(K)
8: end for

(a) Costs when using the set of cars seen during learning

Policy Optimization Algorithm
Feature Dynamics   unweighted + CEM (1500)   + CEM (3250)   + TRPO (≥ 80)   + TRPO (≥ 2000)   ours, + FQI (20)
pixel, FC          8.20 ± 0.66               7.77 ± 0.66    9.56 ± 0.62     8.03 ± 0.66       7.92 ± 0.67
pixel, LC          8.07 ± 0.74               7.13 ± 0.74    10.11 ± 0.60    7.97 ± 0.72       7.98 ± 0.77
VGG conv1_2        2.22 ± 0.38               -              2.06 ± 0.35     1.66 ± 0.31       1.89 ± 0.32
VGG conv2_2        2.40 ± 0.47               -              2.42 ± 0.47     1.89 ± 0.40       1.40 ± 0.29
VGG conv3_3        2.91 ± 0.52               -              2.87 ± 0.53     1.59 ± 0.42       1.56 ± 0.40
VGG conv4_3        2.70 ± 0.52               -              2.57 ± 0.49     1.69 ± 0.41       1.11 ± 0.29
VGG conv5_3        3.68 ± 0.47               -              3.69 ± 0.48     3.16 ± 0.48       2.49 ± 0.35

Figure 3: Cars used to learn the dynamics and the feature weights. They were also used in some of the test experiments.

Figure 5: Costs of test executions using various feature dynamics models, where the feature weights are optimized with FQI. We test on cars that were used during learning (left plot) and on novel cars that were only used at test time (right plot). The reported values are the mean and standard error across 100 trajectories, of up to 100 time steps each. The policies based on pixel intensities use either fully connected or locally connected dynamics, whereas all the policies based on VGG features use locally connected dynamics. The policies based on deeper VGG features generally achieve better performance, except for the deepest feature representation, VGG conv5_3, which is not as suitable for approximating Q-values. The policies based on pixel intensities and VGG conv5_3 features perform worse on the novel cars. However, VGG features conv1_2 through conv4_3 achieve some degree of generalization on the novel cars.
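For completeness, a minimal numpy sketch of one inner FQI iteration (lines 5-6 of Algorithm 1) for the linear Q-function above; the non-negativity constraint on θ is imposed only by clipping here, and the next-state features of the minimizing actions are assumed to be recomputed outside via the servoing policy.

```python
import numpy as np

def fqi_iteration(phi, phi_next, c, theta, gamma=0.9, nu=0.1):
    """One FQI iteration with the scaling initialization of Equations (5)-(6).

    phi: (N, d) features phi(s_t, u_t); phi_next: (N, d) features of the
    minimizing next actions; c: (N,) immediate costs; theta: (d,) weights.
    """
    # Equation (5): with shared alpha and b, the Bellman residual is
    # alpha (phi - gamma phi_next)^T theta + (1 - gamma) b - c, a linear
    # least-squares problem in (alpha, b).
    X = np.stack([(phi - gamma * phi_next) @ theta,
                  (1.0 - gamma) * np.ones(len(c))], axis=1)
    (alpha, b), *_ = np.linalg.lstsq(X, c, rcond=None)
    theta = max(alpha, 0.0) * theta
    # Equation (6): fit theta and b against the now-fixed target values,
    # with an l2 penalty weighted by nu on theta (the bias is not penalized).
    q_target = c + gamma * (phi_next @ theta + b)
    A = np.hstack([phi, np.ones((len(c), 1))])
    reg = nu * np.eye(A.shape[1]); reg[-1, -1] = 0.0
    sol = np.linalg.solve(A.T @ A + reg, A.T @ q_target)
    return np.maximum(sol[:-1], 0.0), sol[-1]
```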
Table 2: Costs on test executions of the dynamics-based servoing policies for different feature dynamics and weightings of the features. The reported numbers are the mean and standard error across 100 test trajectories, of up to 100 time steps each. We test on executions with the training cars and the novel cars; for consistency, the novel cars follow the same route as the training cars. We compare the performance of policies with unweighted features or weights learned by other methods. For the case of unweighted feature dynamics, we use the cross entropy method (CEM) to learn the relative weights of the controls and the single feature weight w. For the other cases, we learn the weights with CEM, Trust Region Policy Optimization (TRPO) for either 2 or 50 iterations, and our proposed FQI algorithm. CEM searches over the full space of policy parameters w and λ, but it was only run for pixel features since it does not scale for high-dimensional problems. We report the number of training trajectories in parentheses. For TRPO, we use a fixed number of training samples per iteration, whereas for CEM and FQI, we use a fixed number of training trajectories per iteration. We use a batch size of 4000 samples for TRPO, which means that at least 40 trajectories were used per iteration, since trajectories can terminate early, i.e. in less than 100 time steps."}, {"section_index": "15", "section_name": "6 EXPERIMENTS", "section_text": "We evaluate the performance of the model for visual servoing in a simulated environment. The simulated quadcopter is governed by rigid body dynamics. The robot has 4 degrees of freedom, corresponding to translation along three axes and yaw angle. This simulation is inspired by tasks in which an autonomous quadcopter flies above a city, with the goal of following some target object (e.g., a car).

The dynamics for each of the features were trained using a dataset of 10000 samples (corresponding to 100 trajectories) with ADAM (Kingma & Ba, 2014). A single dynamics model was learned for each feature representation for all the training cars (Figure 3). This training set was generated by executing a hand-coded policy that navigates the quadcopter around a car for 100 time steps per trajectory, while the car moves around the city."}, {"section_index": "16", "section_name": "C.3 LEARNING WEIGHTING OF FEATURE DYNAMICS WITH REINFORCEMENT LEARNING", "section_text": "We use CEM, TRPO and FQI to learn the feature weighting and report the performance of the learned policies in Table 2. We use the cost function described in Appendix B, a discount factor of γ = 0.9, and trajectories of up to 100 steps. All the algorithms used initial weights of w = 1 and λ = 1, and a Gaussian exploration policy with the current policy as the mean and a fixed standard deviation σ_exploration = 0.2.

We used the proposed FQI algorithm to learn the weightings of the features and control regularizer. At every sampling iteration, the current policy was executed with Gaussian noise to gather data from 10 trajectories. All the trajectories in our experiments were up to 100 time steps long. The immediate cost received by the agent encodes the error of the target in image coordinates (details in Appendix B). Then, the parameters were iteratively updated by running K = 10 iterations of FQI.
We ran the overall algorithm for only S = 2 sampling iterations and chose the parameters that achieved the best performance on 10 validation trajectories. These validation trajectories were obtained by randomly choosing 10 cars from the set of training cars, randomly sampling initial states, and executing the policy with the parameters of the current iteration. All the experiments share the same set of validation trajectories.

Figure 4: Novel cars used only in the test experiments. They were never seen during training or validation.

    Policy Optimization Algorithm
    Feature        unweighted feat.  feat. dyn.   feat. dyn.    feat. dyn.     ours, feat. dyn.
    dynamics       dyn. + CEM (1500) + CEM (3250) + TRPO (≈80)  + TRPO (≈2000) + FQI (20)
    pixel, FC      8.84 ± 0.68       8.66 ± 0.70  10.01 ± 0.62  8.75 ± 0.67    9.00 ± 0.70
    pixel, LC      8.37 ± 0.75       7.17 ± 0.75  11.29 ± 0.57  8.25 ± 0.71    8.36 ± 0.79
    VGG conv1_2    2.03 ± 0.43       —            1.79 ± 0.36   1.42 ± 0.33    1.78 ± 0.37
    VGG conv2_2    2.01 ± 0.44       —            2.00 ± 0.45   1.26 ± 0.30    1.28 ± 0.30
    VGG conv3_3    2.03 ± 0.47       —            2.08 ± 0.47   1.46 ± 0.37    1.04 ± 0.31
    VGG conv4_3    2.40 ± 0.50       —            2.57 ± 0.53   1.48 ± 0.36    0.90 ± 0.26
    VGG conv5_3    3.31 ± 0.45       —            3.55 ± 0.50   2.76 ± 0.42    2.56 ± 0.41

(a) Costs when using the set of cars seen during learning

For the case of unweighted features, we use CEM to optimize for a single weight w and for the weights λ. For the case of weighted features, we use CEM to optimize over the full space of parameters, but we only do that for the pixel feature dynamics, since CEM does not scale to high-dimensional problems, which is the case for all the VGG features. Each iteration of CEM performs a certain number of noisy evaluations and selects the top 20% for the elite set. The number of noisy evaluations per iteration was 3 times the number of parameters being optimized. Each noisy evaluation […]

    Feature dynamics            Costs of the two shown trajectories
    pixel, fully connected      24.74, 16.69
    pixel, locally connected    24.92, 16.47
    VGG conv1_2                 15.91, 1.57
    VGG conv2_2                 7.53, 2.56
    VGG conv3_3                 6.01, 3.76
    VGG conv4_3                 5.94, 4.31
    VGG conv5_3                 15.51, 17.39
    [Table 1 image strips (observations every 10 steps) omitted.]

Table 1: Sample observations from test executions in our experiments with the novel cars, and the costs for each trajectory, for different feature dynamics. We use the weights learned by our FQI algorithm. In each row, we show the observations of every 10 steps and the last one. The first observation of each trajectory is used as the target observation. The trajectories shown here were chosen to reflect different types of behaviors. The servoing policy based on pixel feature dynamics can generally follow cars that can be discriminated based on RGB pixel intensities (e.g., a yellow car with a relatively uniform background). However, it performs poorly when distractor objects appear throughout the execution (e.g., a lamp) or when they appear in the target image (e.g., the crosswalk markings on the road). On the other hand, VGG conv4_3 features are able to discriminate the car from distractor objects and the background, and the feature weights learned by the FQI algorithm are able to leverage this. Additional sample executions with other feature dynamics can be found in Table 3 in the Appendix.

We compare the servoing performance for various feature dynamics models, where the weights are optimized with FQI. We execute the learned policies on 100 test trajectories and report the average cost of the trajectory rollouts in Figure 5. The cost of a single trajectory is the (undiscounted) sum of costs $c_t$.
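For reference, the CEM weight search described earlier in this section can be sketched as follows: noisy evaluations of the policy parameters, then refitting a Gaussian to the top 20% elite set. The rollout-cost callback and parameter dimensionality are placeholders rather than the actual experimental code.

```python
import numpy as np

def cem(evaluate_cost, dim, iters=10, elite_frac=0.2, init_std=1.0, seed=0):
    """Cross-entropy method over policy parameters (e.g., weights w and lambda).

    evaluate_cost(params) -> scalar rollout cost (lower is better).
    Uses 3 * dim noisy evaluations per iteration, as in the experiments.
    """
    rng = np.random.default_rng(seed)
    mean, std = np.ones(dim), init_std * np.ones(dim)
    n_samples = 3 * dim
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        samples = mean + std * rng.standard_normal((n_samples, dim))
        costs = np.array([evaluate_cost(p) for p in samples])
        elite = samples[np.argsort(costs)[:n_elite]]  # top 20% by cost
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean
```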
We test the policies with cars that were seen during training as well as with a set of novel cars (Figure 4), to evaluate the generalization of the learned dynamics and optimized policies.

Table 3: Sample observations from test executions in our experiments, and the costs for each trajectory, for different feature dynamics. We use the weights learned by our FQI algorithm. This table follows the same format as Table 1. Some of the trajectories were shorter than 100 steps because of the termination condition (e.g., the car is no longer in the image). The first observation of each trajectory is used as the target observation. The trajectories shown here were chosen to reflect different types of behaviors. In the first trajectory, the blue car turns abruptly to the right, making the view significantly different from the target observation. In the second trajectory, a distractor object (i.e., the lamp) shows up in the target image and an occluder object (i.e., the traffic light) appears through the execution. The policies based on deeper VGG features, up to VGG conv4_3, are generally more robust to the appearance changes between the observations and the target observation, which are typically caused by movements of the car, distractor objects, and occlusions.

From these results, we notice that policies based on deeper VGG features, up to VGG conv4_3, generally achieve better performance. However, the deepest feature representation, VGG conv5_3, is not as suitable for approximating Q-values. We hypothesize that this feature might be too spatially invariant and might lack the spatial information necessary to differentiate among different car positions. The policies based on pixel intensities and VGG conv5_3 features perform worse on the novel cars. However, VGG features conv1_2 through conv4_3 achieve some degree of generalization on the novel cars.

We show sample trajectories in Table 1. The policy based on pixel intensities is susceptible to occlusions and distractor objects that appear in the target image or during executions. This is because these occlusions and distractors cannot be distinguished from the cars using RGB features alone.

    Feature dynamics            Costs of the three shown trajectories
    pixel, locally connected    0.95, 6.26, 14.49
    VGG conv4_3                 0.38, 0.48, 1.02
    [Table 3 image strips omitted.]

The test trajectories were obtained by randomly sampling 100 cars (with replacement) from one of the two sets of cars, and randomly sampling initial states (which are different from the ones used for validation). For consistency and reproducibility, the same sampled cars and initial states were used across all the test experiments, and the same initial states were used for both sets of cars. These test trajectories were never used during the development of the algorithm or for choosing hyperparameters."}]
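The shared test conditions described above amount to drawing the cars and initial states once from a seeded generator and reusing them for every model. A minimal sketch, with hypothetical helper names:

```python
import numpy as np

def make_test_conditions(car_pool, sample_initial_state, n=100, seed=42):
    """Draw test cars (with replacement) and initial states once, so every
    model is evaluated under identical conditions, as in the experiments."""
    rng = np.random.default_rng(seed)
    cars = [car_pool[i] for i in rng.integers(len(car_pool), size=n)]
    states = [sample_initial_state(rng) for _ in range(n)]
    return list(zip(cars, states))
```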
BkmM8Dceg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "A crucial aspect of current deep learning architectures is the encoding of invariances. This fac. is epitomized in the success of convolutional neural networks (CNN), where equivariance to image. translation is key: translating the input results in a translated output. When invariances are present ir the data, encoding them explicitly in an architecture provides an important source of regularization. which allows to reduce the amount of training data required for learning. Invariances may also be. used to improve the efficiency of implementations; for instance, a convolutional layer requires orders. of magnitude less memory and also less computation compared to an equivalent fully-connectec layer.\nBaselines and results. The angular error of the proposed equivariant pose estimation, Warped CNN is shown in table 2, along with a number of baselines. Qualitative results are shown in fig. 4. The goal of these experiments is to demonstrate that it is possible to achieve equivariance to complex 3D. rotations. We also wish to disentangle the performance benefits of the warped convolution from the. other architectural aspects. The first baseline, STN+softargmax, is the same as the proposed method,. but without the warp. The large performance drop indicates that the spherical model incorporates important domain knowledge, which is ignored by a translation-equivariant STN. To allow non-. equivariant models, we also test two other baselines where the softargmax is replaced with a fully-. connected (FC) layer. The STN+FC includes an affine Spatial Transformer, while the CNN+FC does not, corresponding to a standard CNN of equivalent capacity. We observe that neither the FC. or the STN components can account up for the performance of the warped convolution, which better exploits the natural 3D rotation equivariance of the data..\nThe success of CNNs indicates that translation invariance is an important property of images. How. ever, this does not explain why translation equivariant operators work well for image understanding. The common interpretation is that such operators are matched to the statistics of natural images. which are well known to be translation invariant (Hyvarinen et al., 2o09). However, natural imag. statistics are also (largely) invariant to other transformations such as isotropic scaling and rotatior which suggests that alternative neural network designs may also work well with images. Further. more, in specific applications, invariances other than translation may be more appropriate.."}, {"section_index": "1", "section_name": "7 CONCLUSIONS", "section_text": "Therefore, it is natural to consider generalizing convolutional architectures to other image transfor. mations, and this has been the subject of extensive study (Kanazawa et al., 2014; Bruna et al., 2013;. Cohen & Welling, 2016). Unfortunately these approaches do not possess the same memory and. speed benefits that CNNs enjoy. The reason is that, ultimately, they have to transform (warp) an image or filter several times (Kanazawa et al., 2014; Marcos et al., 2016; Dieleman et al., 2015), in. curring a high computational burden. Another approach is to consider a basis of filters (analogous to. eigen-images) encoding the desired invariance (Cohen & Welling, 2014; Bruna et al., 2013; Cohen. 
& Welling, 2016), which requires more storage than a convolutional filter..\nIn this work we show that it is possible to reuse highly optimized convolutional blocks, which are equivariant to image translation, and coax them to exhibit equivariance to other operators, includ- ing 3D transformations. This is achieved by a simple warp of the input image, implemented with off-the-shelf components of deep networks, and can be used for image recognition tasks involving a large range of image transformations. Compared to other works, warped convolutions are simpler relying on highly optimized convolution routines, and can flexibly handle many types of continu- ous transformations. Studying generalizations that support more than two parameters seems like a fruitful direction for future work. In addition to the practical aspects, our analysis offers some in- sights into the fundamental relationships between arbitrary image transformations and convolutional architectures."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Although they are able to handle transformations with many pose parameters, in practice most recent proposals are limited to very coarsely discretized transformations, such as horizontal/vertical flips and 90 rotations (Dieleman et al., 2015; Cohen & Welling, 2014)..\nIn this work we propose a generalization of CNNs that overcomes these disadvantages. Our mair. result shows that a linear layer with equivariance w.r.t. a large class of 2-parameters transformations can always be implemented efficiently, using a standard convolution in a warped image space. The. image warp can be implemented using bilinear resampling, a simple and fast operation that has beer. popularized by spatial transformer networks (Jaderberg et al., 2015), and is part of most deep learn. ing toolboxes. Unlike previous proposals, the proposed warped convolutions can handle continuous. transformations, such as fine rotation and scaling.\nThis makes generalized convolution easily implementable in neural networks, including using fast convolution algorithms on GPU hardware, such as Winograd (Lavin, 2015) or the Fast Fourier Trans form (Lyons, 2010). We present these notions in the simplest possible way (sections 2 to 4), but we note that they can be derived in broader generality from well know concepts of group theory (sec- tion 4.2).\nFigure 4: Example pose estimates (yaw and pitch) on the AFLW dataset (Section 6.3)"}, {"section_index": "3", "section_name": "REFERENCES", "section_text": "Joan Bruna, Arthur Szlam, and Yann LeCun. Learning stable group invariant representations with convolutional networks. arXiv preprint arXiv:1301.3537, 2013.\nTaco Cohen and Max Welling. Group equivariant convolutional networks. In Proceedings of the 33rd International Conference on Machine Learning (ICML-16), 2016.\nH(u;I) = I(x) F(x+u) dx\nGerald B Folland. A course in abstract harmonic analysis. 1995\nwhere I(x) and F(x) are continuous functions over a bounded 2D region C R2, that is: I, F :. Q -> R. The real-valued 2D vectors x E now play the role of the indexes k E Z2. Equation 2. reduces to the discrete case of eq. 1 if we define I (x) and F(x) as the sum of delta functions on grids Intermediate values can be obtained by interpolation, such as bilinear (which amounts to convolution. of the delta functions with a triangle filter (Jaderberg et al., 2015)). Importantly, such continuous. images can be deformed by very rich continuous transformations of the input coordinates, whereas. 
strictly discrete operations would be more limiting.

Aapo Hyvärinen, Jarmo Hurri, and Patrick O Hoyer. Natural Image Statistics: A Probabilistic Approach to Early Computational Vision, volume 39. Springer Science & Business Media, 2009.

Over the next sections it will be more convenient to translate the image $I$ instead of the filter $F$. This alternative form of eq. 2 is obtained by replacing $x + u \to x$:

Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017-2025, 2015.

Angjoo Kanazawa, Abhishek Sharma, and David Jacobs. Locally scale-invariant convolutional neural networks. arXiv preprint arXiv:1412.5104, 2014.

Martin Koestinger, Paul Wohlhart, Peter M. Roth, and Horst Bischof. Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization. In First IEEE International Workshop on Benchmarking Facial Image Analysis Technologies, 2011.

The standard convolution operator of eq. 3 can be interpreted as applying the filter to translated versions of the image. Translations can be replaced by other transformations as follows (Henriques et al., 2014):

$$H(t; I) = \int_\Omega I(t(x))\, F(x)\, dx, \qquad t \in G$$

Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 991-999, 2015.

¹Note that eq. 1 defines cross-correlation in the signal processing literature, but here we follow the convention used in machine learning and call it convolution. We also ignore the possibility that the input image has more than one channel, and that convolution layers in CNNs involve banks of filters instead of single ones. All such details are immaterial to our discussion.

We start by looking at the basic building block of CNNs, i.e. the convolution operator. This operator computes the inner product of an image $I \in \mathbb{R}^{m \times n}$ with a translated version of the filter $F \in \mathbb{R}^{r \times s}$, producing a new image as output:

$$H_j = \sum_k I_k F_{k+j}$$

where $k, j \in \mathbb{Z}^2$ are two-dimensional vectors of indexes, and the summation ranges inside the extents of both arrays.¹ To handle continuous deformations of the input, it is more natural to express eq. 1 as an integral over continuous rather than discrete inputs:

Sander Dieleman, Kyle W Willett, and Joni Dambre. Rotation-invariant convolutional neural networks for galaxy morphology prediction. Monthly notices of the royal astronomical society, 450(2):1441-1459, 2015.

$$H(u; I) = \int_\Omega I(x - u)\, F(x)\, dx$$

Andrew Lavin. Fast algorithms for convolutional neural networks. arXiv preprint arXiv:1509.09308, 2015.

where $G$ is a set of transformation functions $t : \Omega \to \Omega$ (assumed to be invertible). Intuitively, this generalized convolution performs an exhaustive search for a pattern, at many different poses (Henriques et al., 2014; Kanazawa et al., 2014). The interest in this definition lies in the fact that it makes convolution equivariant (Lenc & Vedaldi, 2015):

Lemma 1 (Equivariance). Consider the generalized convolution operator $H(t; I)$ of eq. 4. Generalized convolution "commutes" with any transformation $q \in G$ of the image:

$$H(t; I \circ q) = H(q \circ t; I)$$

Proof. One has immediately $H(t; I \circ q) = \int_\Omega I(q(t(x)))\, F(x)\, dx = H(q \circ t; I)$.

A notable case is when transformations have an additive parametrization $t : \Omega \times \mathbb{R}^2 \to \Omega$, with $(x, u) \mapsto t_u(x)$ and $t_u \circ t_v = t_{u+v}$. In this case, the equivariance relation can be written as

Unfortunately, what eq. 4 gains us in generality, it loses in both performance and ease of implementation. Most works in computer vision that looked at filtering under generalized transformations (e.g. scale pyramids (Kanazawa et al., 2014) or rotated filter banks (Marcos et al., 2016; Cohen & Welling, 2014; 2016; Henriques et al., 2014)) compute eq. 4 directly by evaluating a large number of transformations $t \in G$. This entails warping (transforming) either the image or the filter once per transformation $t$, which can be expensive.

Opting to transform the filter instead of the image can be advantageous, since it is smaller in size. On the other hand, the filter and its domain then become spatially-varying, which foregoes the benefit of the regular, predictable, and local pattern of computations in standard convolution. It precludes the use of fast convolution routines such as Winograd's algorithm (Lavin, 2015), or the Fast Fourier Transform (Lyons, 2010), which has lower computational complexity than exhaustive search (eq. 3).

In practice, most recent works focus on very coarse transformations that do not change the filter support and can be implemented strictly via permutations, like horizontal/vertical flips and 90° rotations (Dieleman et al., 2015; Cohen & Welling, 2014). Such difficulties explain why generalized convolutions are not as widespread as CNNs.

In section 4 we will show that, for an important class of transformations, including the ones considered in previous works (such as Kanazawa et al. (2014); Cohen & Welling (2014); Marcos et al. (2016)), it is possible to perform generalized convolution by composing a single warp with a standard convolution, instead of several warps. Thus, we are able to take full advantage of modern convolution implementations (Lavin, 2015; Lyons, 2010), including those with lower computational complexity.
"}, {"section_index": "4", "section_name": "4 MAIN RESULT", "section_text": "Our main contribution is to show that the generalized convolution operator of eq. 4 can be implemented efficiently by a standard convolution, by pre-warping the input image and filter appropriately. The warp is the same for any image, depends solely on the nature of the relevant transformations, and can be written in closed form. This result, given in theorem 1, allows us to implement very efficient generalized convolutions using simple computational blocks, as shown in section 4.1. We name this method warped convolution.

The strongest assumption is that transformations must have an additive parametrization. By this, we mean that there exists a bijection $t_u : \Omega \to \Omega$ such that, for any $u, v \in \mathbb{R}^2$, parameters compose additively: $t_u \circ t_v = t_{u+v}$. The second assumption is that there exists a pivot point $x_0 \in \Omega$ such that $u \mapsto t_u(x_0)$ defines a bijection $\mathbb{R}^2 \to \Omega$ from the parameter space to the real plane.

B. Srinivasa Reddy and Biswanath N. Chatterji. An FFT-based technique for translation, rotation, and scale-invariant image registration. IEEE Transactions on Image Processing, 5(8):1266-1271, 1996.

Uwe Schmidt and Stefan Roth. Learning rotation-aware features: From invariant priors to equivariant descriptors. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2050-2057, 2012.

Georgios Tzimiropoulos, Vasileios Argyriou, Stefanos Zafeiriou, and Tania Stathaki. Robust FFT-

$$H(u; I \circ t_v) = H(v + u; I)$$

In particular, standard convolution is obtained when $t_u(x) = x - u$ is the translation operator. In this case, the lemma above simply states that any translation of the input of the convolution results
in a corresponding translation of the output.

In section 5, we will look in more detail at a few concrete examples of transformations other than translations. Although we will not do so explicitly, in this construction it is also possible to let one or more dimensions of the parameter space $\mathbb{R}^2$ be given modulo a period $Q$, in the sense of replacing $\mathbb{R}$ with $\mathbb{R}/\mathbb{Z}(Q)$; the latter is required to parameterize transformations such as rotation.

Theorem 1. Consider the generalized convolution of eq. 4. Assume that the transformation is additive ($t_u \circ t_v = t_{u+v}$). Assume also that, for a fixed pivot point $x_0$, the function $u \mapsto t_u(x_0)$ is bijective. Then we can rewrite generalized convolution (eq. 4) as the standard convolution

$$H(u; I) = \int \hat{I}(u + v)\, \hat{F}(v)\, dv, \qquad \hat{I}(u) = I(t_u(x_0)) \left| \frac{\partial t_u(x_0)}{\partial u} \right|, \qquad \hat{F}(u) = F(t_u(x_0)).$$

Our simplified model consists of a perspective camera with focal length $f$ and all other camera parameters equal to identity, at a distance $d$ from a centered sphere of radius $r$ (see fig. 1-d).

A 2D point $x$ in image-space corresponds to the 3D point

$$p = (x_1, x_2, f).$$

Raycasting it along the $z$ axis, it will intersect the sphere surface at the 3D point

$$q = \frac{fd - \sqrt{f^2 d^2 - \|p\|^2 (d^2 - r^2)}}{\|p\|^2}\, p.$$

Then, the yaw and pitch coordinates of the point $q$ on the surface of the sphere are

$$\phi_1 = \operatorname{atan2}(q_1,\, d - q_3), \qquad \phi_2 = \arcsin(q_2 / r).$$

These polar coordinates are now rotated by the spatial transformation parameters, $\phi' = \phi + u$. Converting the polar coordinates back to a 3D point $q'$,

$$q' = \left( r \sin \phi'_1 \cos \phi'_2,\ \ r \sin \phi'_2,\ \ d - r \cos \phi'_1 \cos \phi'_2 \right).$$

The warp that is applied to both inputs in eq. 7 can be interpreted as follows. We start with an arbitrary pivot point $x_0$ in the image and then sample other points by repeatedly applying the transformation $t_u(x_0)$ to the pivot (by varying $u$). When discretized, this sampling is performed over a 2D grid of parameters $u$. Finally, sampling the input at these points (for example, by bilinear interpolation) yields the warped input.

Finally, projection of $q'$ into image-space yields

$$x'(u) = \left( f\, q'_1 / q'_3,\ \ f\, q'_2 / q'_3 \right).$$

An illustration is given in fig. 1, for various transformations (each one is discussed in more detail in section 5). The red dot shows the pivot point $x_0$, and the two arrows pointing away from it show the two directions of increasing $u$ values (recall that transformation parameters are two-dimensional). The grids were generated by sampling $u$ at regular intervals. Note that the warp grids are independent of the image contents - they can be computed once offline and then applied to any image.

4.1 PRACTICAL CONSIDERATIONS

There are a few interesting aspects that simplify the use of theorem 1 in practice.

First, since in most applications the filter $F$ is learned, we are free to ignore the constant warp and Jacobian in eq. 7 (which amounts to a simple reparametrization), and learn $\hat{F}$ directly. In practice, this means that we warp only the input image $I$ to obtain $\hat{I}$, and then perform a standard convolution with a filter $\hat{F}$. The learned warped filter $\hat{F}$ has a one-to-one correspondence to an image-space filter $F$ by means of eq. 7, although there is no real need to build the latter explicitly.

Second, we can choose either one or two spatial transformations for the generalized convolution (e.g. scale and rotation, simultaneously). The reason is that the input image is 2D, so the parameter space after warping is also 2D. The choice is not arbitrary though: the two transformations must commute, in order to respect additivity. This will be the case of the pairs we study in section 5.
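Under these assumptions, the warped image $\hat{I}$ is computable by plain resampling followed by one ordinary convolution. A minimal sketch, using SciPy's bilinear interpolation in place of a spatial transformer; the warp-grid construction and choice of pivot are assumptions of the example rather than part of the method's definition:

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.signal import convolve2d

def warped_convolution(image, filt, grid):
    """Sketch of theorem 1: resample the image once on a warp grid, then
    apply a single standard convolution.

    grid has shape (H, W, 2) and stores t_u(x0) for a regular 2D grid of
    transformation parameters u, with grid[..., 0] the x (column) and
    grid[..., 1] the y (row) coordinate. Bilinear interpolation (order=1)
    stands in for the continuous image model. The Jacobian factor of eq. 7
    is omitted, since it can be absorbed into a learned filter."""
    coords = np.stack([grid[..., 1], grid[..., 0]])  # map_coordinates wants (row, col)
    warped = map_coordinates(image, coords, order=1, mode="nearest")
    return convolve2d(warped, filt, mode="valid")    # equivariant to u-translations
```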
The latter requirement means that any point $x \in \Omega$ can be "reached" by transforming $x_0$ under a suitable $t_u$. We then have that:

$$H(u; I) = \int_\Omega I(t_u(x))\, F(x)\, dx = \int I(t_u(t_v(x_0)))\, F(t_v(x_0)) \left| \frac{\partial t_v(x_0)}{\partial v} \right| dv = \int I(t_{u+v}(x_0))\, F(t_v(x_0)) \left| \frac{\partial t_v(x_0)}{\partial v} \right| dv = \int \hat{I}(u + v)\, \hat{F}(v)\, dv.$$

If the argument of the square-root is negative, the ray does not intersect the sphere and so the point transformation is undefined. This means that the domain of the image should be restricted to the sphere region. In practice, in such cases we simply leave the point unmodified.

The last factor in eq. 7 is the determinant of the Jacobian of the image transformation $t_u$. It rescales the image values to account for the stretching and shrinking of space due to non-linear warps. It can also be computed offline, and its application amounts to an element-wise product by a constant array. A generalization using group theory is discussed in section 4.2.

By theorem 1, these steps are equivalent to a generalized convolution, which performs an exhaustive search across the pose-space of transformation $t$, but at a much lower computational cost.

4.2 RELATIONSHIP TO GROUP THEORY

To this end, let $G$ be a group of transformations. Under very mild conditions (the group has to be locally compact and Hausdorff), there exists a unique measure on the group, the Haar measure, which is invariant to the group action, in the sense that, given a measurable function $\tilde{I} : G \to \mathbb{R}$, then $\int_G \tilde{I}(g'g)\, dg = \int_G \tilde{I}(g)\, dg$. Using this measure, one can define generalized convolution as $(\tilde{I} * \tilde{F})(t) = \int_G \tilde{I}(tg)\, \tilde{F}(g^{-1})\, dg$. This resembles our definition (4), although image and filter are defined on the group $G$ instead of the spatial domain $\mathbb{R}^2$. Lemma 1 translates immediately to this case (Folland, 1995).

In order to extend Theorem 1, we need to make this general but abstract construction concrete. Here one assumes that the group acts transitively on a subset $X \subset \mathbb{R}^2$ (which means that any point $x \in X$ can be written as $x = g x_0$, for a fixed point $x_0 \in X$ and a suitable transformation $g \in G$). Then one can define the image as $\tilde{I}(g) = I(g(x_0))$, where $I$ is a function of the spatial domain $X$ instead of the group $G$, and likewise for the filter. Next, it is necessary to explicitly calculate the integral over $G$. If the group is an Abelian (commutative) Lie group, then one can show that there exists a map $\exp : V \to G$, the exponential map, defined on a vector space $V$. Under commutativity, this map is also additive, in the sense that $\exp(u)\exp(v) = \exp(u + v)$. The structure of $V$ depends on the specific group, but under such restrictive conditions, it is a torus, which allows the calculation of the integral.

We now give some concrete examples of pairs of spatial transformations that obey the conditions of theorem 1, and can be useful in practice.

Detection tasks require predicting the extent of an object as a bounding box.
While the location can be found accurately by a standard CNN, which is equivariant to translation, the size prediction could similarly benefit from equivariance to horizontal and vertical scale (equivalently, scale and aspect ratio).

Such a spatial transformation, from which a warp can be constructed, is given by

$$t_u(x) = \begin{pmatrix} s^{u_1}\, x_1 \\ s^{u_2}\, x_2 \end{pmatrix}.$$

This section relates our results, which have been presented using a simple formalism and in a restricted setting, to a more general approach based on group theory (Folland, 1995).

Finally, in order to swap integration over the group parameters with integration over space, one assumes that $x = \exp(u) x_0$ defines a smooth bijection $V \to X$, so that it is possible to use the change of variable $u \to u(x)$ where $\exp(u(x)) x_0 = x$. This allows writing the integral as $\int \tilde{I}(\exp(u) x_0)\, du = \int I(x)\, |du/dx|\, dx$. Note that this Jacobian is the inverse of the one found in (7), due to the fact that we started by defining our convolution using $\tilde{I}$ instead of $\hat{I}$.

(a) translation (b) scale/aspect ratio (c) scale/rotation (d) 3D rotation (yaw/pitch)

$$t_u(x) = \begin{pmatrix} s^{u_2}\, \|x\| \cos(\operatorname{atan2}(x_2, x_1) + u_1) \\ s^{u_2}\, \|x\| \sin(\operatorname{atan2}(x_2, x_1) + u_1) \end{pmatrix}$$

where atan2 is the standard 4-quadrant inverse tangent function. The domain in this case must exclude the origin ($\Omega = \mathbb{R}^2 \setminus \{0\}$), since a pivot $x_0 = 0$ cannot reach any other points in the image by rotation or scaling.

Figure 1: First row: Sampling grids that define the warps associated with different spatial transformations. Second row: An example image (a) after warping with each grid (b-d). Third row: A small translation is applied to each warped image, which is then mapped back to the original space (by an inverse warp). Translation in one axis of the appropriate warped space is equivalent to (b) horizontal scaling; (c) planar rotation; (d) 3D rotation around the vertical axis.

The $s$ constant controls the total degree of scaling applied. Notice that the output must be exponential in the scale parameters $u$; this ensures the additive structure required by theorem 1: $t_u(t_v(x)) = t_{u+v}(x)$. The resulting warp grid can be visualized in fig. 1-b. In this case, the domain of the image must be $\Omega = \mathbb{R}_+^2$, since a pivot $x_0$ in one quadrant cannot reach another quadrant by any amount of (positive) scaling.

Planar scale and rotation are perhaps the most obvious spatial transformations in images, and are a natural test case for works on spatial transformations (Kanazawa et al., 2014; Marcos et al., 2016). Rotating a point $x$ by $u_1$ radians and scaling it by $u_2$, around the origin, can be performed with the transformation displayed above.

[Figure 2 block diagram: input → Warp → CNN → Soft argmax → Scale+bias → pose.]

The resulting warp grid can be visualized in fig. 1-c. It is interesting to observe that it corresponds exactly to the log-polar domain, which is used in the signal processing literature to perform correlation across scale and rotation (Tzimiropoulos et al., 2010; Reddy & Chatterji, 1996). In fact, it was the source of inspiration for this work, which can be seen as a generalization of the log-polar domain to other spatial transformations.

5.3 3D SPHERE ROTATION UNDER PERSPECTIVE

We will now tackle a more difficult spatial transformation, in an attempt to demonstrate the generality of theorem 1. The transformations we will consider are yaw and pitch rotations in 3D space, as seen by a perspective camera. In the experiments (section 6) we will show how to apply it to face pose estimation.

In order to maintain additivity, the rotated 3D points must remain on the surface of a sphere.
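Before moving to the sphere model, the planar scale/rotation (log-polar) warp grid of eq. 9 is simple enough to write out. A minimal sketch, where the pivot, the parameter ranges, and the base $s$ are illustrative choices:

```python
import numpy as np

def log_polar_grid(h, w, u1_range, u2_range, x0=(1.0, 0.0), s=2.0):
    """Sampling grid for the scale/rotation warp of eq. 9: the point t_u(x0)
    for each (u1, u2) on a regular grid. Rows sweep rotation u1, columns
    sweep log-scale u2; the pivot x0 must not be the origin."""
    u1 = np.linspace(*u1_range, h)                 # rotation angles
    u2 = np.linspace(*u2_range, w)                 # log-scale steps
    U1, U2 = np.meshgrid(u1, u2, indexing="ij")
    r0 = np.hypot(*x0)                             # pivot radius
    a0 = np.arctan2(x0[1], x0[0])                  # pivot angle
    R = (s ** U2) * r0                             # scaled radius
    A = a0 + U1                                    # rotated angle
    return np.stack([R * np.cos(A), R * np.sin(A)], axis=-1)  # (h, w, 2)
```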
We consider a simplified camera and world model, whose only hyperparameters are a focal length $f$, the radius of a sphere $r$, and its distance from the camera center $d$. The equations for the spatial transformation corresponding to yaw and pitch rotation under this model are in appendix A.

As mentioned in section 2.2, generalized convolution performs an exhaustive search for patterns across spatial transformations, by varying pose parameters. For tasks where invariance to that transformation is important, it is usual to pool the detection responses across all poses (Marcos et al., 2016; Kanazawa et al., 2014).

In the experiments, however, we will test the framework in pose prediction tasks. As such, we do not want to pool the detection responses (e.g. with a max operation) but rather find the pose with the strongest response (i.e., an argmax operation). To perform this operation in a differentiable manner, we implement a soft argmax operation, defined as follows:

$$s(a) = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} \frac{i}{m}\, \sigma_{ij}(a),\ \ \sum_{i=1}^{m} \sum_{j=1}^{n} \frac{j}{n}\, \sigma_{ij}(a) \right)$$

where $\sigma(a) \in \mathbb{R}^{m \times n}$ is the softmax over all spatial locations, and $\sigma_{ij}(a)$ indexes the element $(i, j)$. The outputs are the two spatial coordinates of the maximum value, $s(a) \in \mathbb{R}^2$.

Our base architecture then consists of the following blocks, outlined in fig. 2. First, the input image is warped with a pre-generated grid, according to section 4. The warped image is then processed by a standard CNN, which is now equivariant to the spatial transformation that was used to generate the warp grid. A soft argmax (eq. 10) then finds the maximum over pose-space. To ensure the pose prediction is well registered to the reference coordinate system, a learnable scale and bias are applied

Figure 2: Equivariant pose estimation strategy used in the experiments (section 6). With an appropriate warp and a standard CNN, the shaded block becomes equivalent to a generalized CNN (by theorem 1), which performs exhaustive searches across pose-space instead of image-space.

The corresponding warp grid can be seen in fig. 1-d. It can be observed that the grid corresponds to what we would expect of a 3D rendering of a sphere with a discrete mesh. An intuitive picture of the effect of the warp grid in such cases is that it wraps the 2D image around the surface of the 3D object, so that translation in the warped space corresponds to moving between vertexes of the 3D geometry.

                              CNN+FC   CNN+softargmax   Warped CNN
    Rotation error (degrees)  28.87    30.6             26.44
    Scale error (px)          17.51    5.783            5.4

Table 1: Results of scale and rotation pose estimation of vehicles in the Google Earth dataset.

Figure 3: Example pose estimates (rotation and scale) on the Google Earth dataset (Section 6.2)

to the outputs. Training proceeds by minimizing the L1 loss between the predicted pose and ground truth pose.

6.2 GOOGLE EARTH

For the first task in our experiments, we will consider aerial photos of vehicles, which have been used in several works that deal with rotation invariance (Liu et al., 2014; Schmidt & Roth, 2012; Henriques et al., 2014).

Dataset. The Google Earth dataset (Heitz & Koller, 2008) contains bounding box annotations, supplemented with angle annotations from (Henriques et al., 2014), for 697 vehicles in 15 large images. We use the first 10 for training and the rest for validation. Going beyond these previous works, we focus on the estimation of both rotation and scale parameters. The object scale is taken to be the diagonal length of the bounding box.

Implementation. A 48 × 48 image around each vehicle is cropped and downscaled by 50%, and then fed to a network for pose prediction. The proposed method, Warped CNN, follows the architecture of section 6.1 (visualized in fig. 2). The CNN block contains 3 convolutional layers with 5 × 5 filters, with 20, 50 and 1 output channels respectively. Recall that the output of the CNN block is a single-channel response map over 2D pose-space, which in this case consists of rotation and scale. Between the convolutional layers there are 3 × 3 max-pooling operators, with a stride of 2, and a ReLU before the last layer. All networks are trained for 20 epochs with SGD, using hyperparameters chosen by cross-validation.

Baselines and results. The results of the experiments are presented in table 1, which shows angular and scale error on the validation set. Qualitative results are shown in fig. 3. To verify whether the proposed warped convolution is indeed responsible for a boost in performance, rather than other architectural details, we compare it against a number of baselines with different components removed. The first baseline, CNN+softargmax, consists of the same architecture but without the warp (section 5.2). This is a standard CNN, with the soft argmax at the end. Since CNNs are equivariant to translation, rather than scale and rotation, we observe a drop in performance. For the second baseline, CNN+FC, we replace the soft argmax with a fully-connected layer, to allow a prediction that is not equivariant with translation. The FC layer improves the angular error, but not the scale error. The proposed Warped CNN has a similar (slightly lower) capacity to the CNN+FC baseline, but we see it achieve better performance, since its architectural equivariance seems to be better matched to the data distribution."}]
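The soft argmax of eq. 10 used throughout these experiments reduces to a spatial softmax followed by expected (normalized) coordinates. A minimal sketch:

```python
import numpy as np

def soft_argmax(a):
    """Differentiable argmax over a 2D response map (eq. 10): spatial
    softmax, then the expectation of the normalized (i, j) coordinates."""
    m, n = a.shape
    p = np.exp(a - a.max())
    p /= p.sum()                                   # softmax over all locations
    ii, jj = np.meshgrid(np.arange(1, m + 1), np.arange(1, n + 1), indexing="ij")
    return np.array([(ii / m * p).sum(), (jj / n * p).sum()])
```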
Sy7m72Ogg
[{"section_index": "0", "section_name": "AN ACTOR-CRITIC ALGORITHM FOR LEARNING RATE LEARNING", "section_text": "Table 3: Error rate of different methods on different network architectures\nChang Xu\nNankai University\nchangxu@nbjl.nankai.edu.cn\nworks, i.e., x; = x; in the algorithm, and in the second implementation, the input x; of the critic. network is different from the input x; of the actor network. It is easy to see from the figure that. setting x; = x; tends to oscillate during training and leads to poor test performance. Thus, we need. to feed different training data to the actor network and the critic network to ensure the performance of the algorithm.\nNankai University\nwgzwp@nbjl.nankai.edu.cn"}, {"section_index": "1", "section_name": "4.4 COMPARISON WITH OTHER ADAPTIVE LEARNING RATE METHOD", "section_text": "Stochastic gradient descent (SGD), which updates the model parameters by adding a local gradient times a learning rate at each step, is widely used in model training of machine learning algorithms such as neural networks. It is observed that the models trained by SGD are sensitive to learning rates and good learning rates are problem specific. To avoid manually searching of learning rates, which is tedious and inefficient, we propose an algorithm to automatically learn learning rates using actor-critic methods from reinforcement learning (RL). In particular, we train a policy network called actor to decide the learning rate at each step during training, and a value network called critic to give feedback about quality of the decision (e.g., the goodness of the learning rate outputted by the actor) that the actor made. Experiments show that our method leads to good convergence of SGD and can prevent overfitting to a certain extent, resulting in better performance than human-designed competitors.\nWe also compare our method with \"vSGD\" from previous by work Schaul et al.(2013), which car automatically adjust learning rates to minimize the expected error. This method tries to compute learning rate at each update by optimizing the expected loss after the next update according tc the square norm of the expectation of the gradient, and the expectation of the square norm of the gradient. Note that our method learns to predict a learning rate at each time step by utilizing the long term reward predicted by a critic network.\nFor a fair comparison, we followed the experiments settings ofSchaul et al.(2013), which design hree different network architectures for MNIST task to measure the performance. The first one denoted by M0' which is simple softmax regression (i.e. a network with no hidden layer). TI second one ('M1') is a fully connected multi-layer perceptron, with a single hidden layer. Tl third one (denoted M2') is a deep, fully connected multi-layer perceptron with two hidden laye The vSGD has three variants in their paper. We referred to the results reported in their paper ar compared our method with all of three variants of their algorithm (vSGD-1, vSGD-b, vSGD-g The learning rates of SGD are decreased according to a human designed schedule, and the hype parameters of SGD, ADAM, Adagrad, RMSprop are carefully determined by their lowest test err among a set of hyper-parameters. All hyper-parameters can be found in Schaul et al. (2013)."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "While facing large scale of training data, stochastic learning such as stochastic gradient descen. 
(SGD) is usually much faster than batch learning and often results in better models. An observation for SGD methods is that their performance is highly sensitive to the choice of learning rate (LeCun et al., 2012). Clearly, setting a static learning rate for the whole training process is insufficient, since intuitively the learning rate should decrease as the model gets closer and closer to a (local) optimum over the course of training (Maclaurin et al., 2015). Although there are some empirical suggestions to guide how to adjust the learning rate over time in training, it is still a difficult task to find a good policy to adjust the learning rate, given that good policies are problem specific and depend on implementation details of a machine learning algorithm. One usually needs to try many times and adjust the learning rate manually to accumulate knowledge about the problem. However, human involvement often needs domain knowledge about the target problems, which is inefficient and difficult to scale up to different problems. Thus, a natural question arises: can we automatically adjust the learning rate? This is exactly the focus of this work, and we aim to automatically learn the learning rates for SGD based machine learning (ML) algorithms without human-designed rules or hand-crafted features.

The experimental results are reported in Table 3. They show that our proposed method performs better than vSGD and the other baseline methods, and is stable across different network architectures.

In this work, we have studied how to automatically learn learning rates for gradient based machine learning methods and proposed an actor-critic algorithm, inspired by the recent success of reinforcement learning. The experiments on two image classification datasets have shown that our method (1) has comparable convergence speed to expert-designed optimizers while achieving better test accuracy, and (2) can successfully adjust the learning rate for different datasets and CNN model structures.

For future work, we will explore the following directions. In this work, we have applied our algorithm to control the learning rates of SGD; we will apply it to other variants of SGD. We have focused on learning a single learning rate for all the model parameters; we will study how to learn an individual learning rate for each parameter. We have considered learning learning rates using RL techniques; we will consider learning other hyperparameters, such as step-dependent dropout rates for deep neural networks.

By examining the current practice of learning rate control/adjustment, we have two observations. First, learning rate control is a sequential decision process. At the beginning, we set an initial learning rate. Then at each step, we decide whether to change the learning rate and how to change it, based on the current model and loss, the training data at hand, and maybe the history of the training process. As suggested in Orr & Müller (2003), one well-principled method for estimating the ideal learning rate is to decrease the learning rate when the weight vector oscillates, and increase it when the weight vector follows a relatively steady direction. Second, although at each step some immediate reward (e.g., the loss decrement) can be obtained by taking actions, we care more about the performance of the final model found by the ML algorithm. Consider two different learning rate
Consider two different learning rate"}, {"section_index": "3", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow. org, 1, 2015.\nMethods Network SGD ADAM Adagrad RMSprop vSGD-1 vSGD-b vSGD-g Our method MO 7.60 8.70 7.52 10.91 7.50 7.89 8.20 7.50 M1 2.34 4.12 2.70 6.17 2.42 2.44 4.14 2.04 M2 2.15 3.85 2.34 3.81 2.16 2.05 3.65 2.03\nTao Qin\nMicrosoft Research Asia\ntaoqin@microsoft.com\nMicrosoft Research Asia\ntie-van.liu@microsoft.com"}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "control policies: the first one leads to fast loss decrease at the beginning but gets saturated and stuck in a local minimum quickly, while the second one starts with slower loss decrease but results in much smaller final loss. Obviously, the second policy is better. That is, we prefer long-term rewards over short-term rewards\nJohn Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research. 12(Jul):2121-2159. 2011\nAlex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009"}, {"section_index": "5", "section_name": "2 RELATED WORK", "section_text": "Yann A LeCun, Leon Bottou, Genevieve B Orr, and Klaus-Robert Muller. Efficient backprop. In Neural networks: Tricks of the trade, pp. 9-48. Springer, 2012\nOur focus is to improve gradient based ML algorithm through automatic learning of learning rate Different approaches have been proposed to improve gradient methods, especially for deep neural networks.\nGenevieve B Orr and Klaus-Robert Muller. Neural networks: tricks of the trade. Springer, 2003\nTom Schaul, Sixin Zhang, and Yann LeCun. No more pesky learning rates. ICML (3), 28:343-351 2013.\nAndrew Senior, Georg Heigold, Ke Yang, et al. An empirical study of learning rates in deep neural networks for speech recognition. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6724-6728. IEEE, 2013.\nDavid Silver, Guy Lever, and Nicolas Heess. Deterministic policy gradient algorithms. 2014\nSenior et al.(2013); Sutton(1992);Darken & Moody(1990) focus on predefining update rules t adjust learning rates during training. A limitation of these methods is that they have additional fre parameters which need to be set manually. Another recent work[Daniel et al.(2016) studies how t automatically select step sizes, but it still requires hand-tuned features. Schaul et al.[(2013) propose a method to choose good learning rate for SGD, which relies on the square norm of the expectatio of the gradient, and the expectation of the square norm of the gradient. The method is much mor constrained than ours and several assumption should be met.\nDavid Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche. Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering. the game of go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.\nRichard S Sutton. Adapting bias by gradient descent: An incremental version of delta-bar-delta. In AAAI, pp. 
171-176, 1992.

Combining the two observations, it is easy to see that the problem of finding a good policy to control/adjust the learning rate falls into the scope of reinforcement learning (RL) (Sutton & Barto, 1998), if one is familiar with RL. Inspired by the recent success of RL for sequential decision problems, in this work we leverage RL techniques and try to learn the learning rate for SGD based methods.

We propose an algorithm to learn the learning rate within the actor-critic framework (Sutton, 1984; Sutton et al., 1999; Barto et al., 1983; Silver et al., 2014) from RL. In particular, an actor network is trained to take an action that decides the learning rate for the current step, and a critic network is trained to give feedback to the actor network about long-term performance and help the actor network adjust itself so as to perform better in future steps. The main contributions of this paper include:

- We propose an actor-critic algorithm to automatically learn the learning rate for ML algorithms. Long-term rewards are exploited by the critic network in our algorithm to choose a better learning rate at each step.
- We propose to feed different training examples to the actor network and the critic network, which improves the generalization performance of the learnt ML model.
- A series of experiments validate the effectiveness of our proposed algorithm for learning rate control.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Since SGD relies solely on a given example (or a mini-batch of examples) to compute the gradient, its model update at each step tends to be unstable and it takes many steps to converge. To solve this problem, momentum SGD (Jacobs, 1988) was proposed to accelerate SGD by using recent gradients. RMSprop (Tieleman & Hinton, 2012) utilizes the magnitude of recent gradients to normalize the gradients: it always keeps a moving average over the root mean squared gradients, by which it divides the current gradient. Adagrad (Duchi et al., 2011) adapts component-wise learning rates, and performs larger updates for infrequent and smaller updates for frequent parameters. Adadelta (Zeiler, 2012) extends Adagrad by reducing its aggressive, monotonically decreasing learning rate: instead of accumulating all past squared gradients, Adadelta restricts the window of accumulated past gradients to a fixed size. Adam (Kingma & Ba, 2014) computes component-wise learning rates using estimates of the first and second moments of the gradients, which combines the advantages of Adagrad and RMSprop.

Since our proposed algorithm is based on RL techniques, here we give a very brief introduction to RL, which will ease the description of our algorithm in the next section.

Reinforcement learning (Sutton, 1988) is concerned with how an agent acts in a stochastic environment by sequentially choosing actions over a sequence of time steps, in order to maximize a cumulative reward. In RL, a state $s^t$ encodes the agent's observation about the environment at time step $t$, and a policy function $\pi(s^t)$ determines how the agent behaves (e.g., which action to take) at state $s^t$. An action-value function (or Q function) $Q(s^t, a^t)$ is usually used to denote the cumulative reward of taking action $a^t$ at state $s^t$ and then following policy $\pi$ afterwards.

Richard S Sutton and Andrew G Barto. Time-derivative models of pavlovian reinforcement. pp. 497-537, 1990.

Many RL algorithms have been proposed (Sutton & Barto, 1998; Watkins & Dayan, 1992), and many of them (Sutton, 1984; Sutton et al., 1999; Barto et al., 1983; Silver et al., 2014) can be described under the actor-critic framework. An actor-critic algorithm learns the policy function and the value function simultaneously and interactively. The policy structure is known as the actor and is used to select actions; the estimated value function is known as the critic, and it criticizes the actions made by the actor.

Richard S Sutton and Andrew G Barto.
Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998.

Richard Stuart Sutton. Temporal credit assignment in reinforcement learning. 1984.

Recently, deep reinforcement learning, which uses deep neural networks to approximate/represent the policy function and/or the value function, has shown promise in various domains, including Atari games (Mnih et al., 2015), Go (Silver et al., 2016), machine translation (Bahdanau et al., 2016), image recognition (Xu et al., 2015), etc.

Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279-292, 1992.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2(3):5, 2015.

Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.

In this section, we present an actor-critic algorithm that can automate the learning rate control for SGD based machine learning algorithms.

[Figure 1 diagram: the optimizee sends its state $s^t = \chi(w^t, x)$ to the automatic learning rate controller; inside, an actor network maps the state to an action (the learning rate) and a critic network produces the reward feedback.]

Figure 1: The framework of our proposed automatic learning rate controller.

$$w^{t+1} = w^t - a^t\, \nabla_w f_{w^t}(x) \qquad (2)$$

It is observed that the performance of SGD based methods is quite sensitive to the choice of $a^t$ for non-convex loss functions $f$. Unfortunately, $f$ is usually non-convex with respect to the parameters

Many machine learning tasks need to train a model with parameters $w$ by minimizing a loss function $f$ defined over a set $X$ of training examples:

$$w^* = \arg\min_w f_w(X) \qquad (1)$$

"}, {"section_index": "6", "section_name": "A APPENDIX", "section_text": "
$w$ in many ML algorithms, especially for deep neural networks. We aim to learn a learning rate controller using RL techniques that can automatically control $a^t$.

3.1 ACTOR NETWORK

The actor network, which is called the policy network in RL, plays the key role in our algorithm: it determines the learning rate control policy for the primary ML algorithm¹ based on the current
model, training data, and maybe historical information during the training process.

A method for automatically controlling the learning rate is proposed in the main body of the paper, in which the learning rate controller adjusts itself during training. Here, we propose an improved version that can leverage experience from several repeated training runs to learn a fixed learning rate controller. Empirically, this algorithm can achieve better performance than the previous one. Given that it requires more time to train the learning rate controller, this method is more suitable for training offline models.

In this algorithm, during every training run, we fix the actor network and compute the weighted sum of the gradients of its parameters $\theta$. The parameters are updated after each run (modified from Equation 9):

$$\nabla\theta = \sum_{t=0}^{T-1} h(t)\, \nabla_\theta \pi_\theta(s^{t+1})\, \nabla_a Q_\phi(s^{t+1}, a)\big|_{a = \pi_\theta(s^{t+1})}$$

Note that $w^t$ could be of huge dimension; e.g., the widely used image recognition model VGGNet (Simonyan & Zisserman, 2014) has more than 140 million parameters. If the actor network took all of those parameters as inputs, its computational complexity would dominate the complexity of the primary algorithm, which is unaffordable. Therefore, we propose to use a function $\chi(\cdot)$ to process $w^t$ and yield a compact vector $s^t$ as the input of the actor network. Following the practice in RL, we call $\chi(\cdot)$ the state function; it takes $w^t$ and the training data $x$ as inputs:

$$s^t = \chi(w^t, x) \qquad (3)$$

$h(t)$ is a weighting function which is used to amplify the feedback signal from the initial training stage. It is defined as $h(t) = 1/t$ in our experiments. An error rate of 0.48% was achieved with 5 repeated training runs in the MNIST experiment (the same setting as Table 1), and in the CIFAR-10 experiment (the same setting as Table 2), 80.23% accuracy was achieved with 10 training runs. This method showed better performance in both experiments.

Then the actor network $\pi_\theta(\cdot)$, parameterized by $\theta$, yields an action $a^t$:

$$a^t = \pi_\theta(s^t) \qquad (4)$$

where the action $a^t \in \mathbb{R}$ is a continuous value. When $a^t$ is determined, we update the model of the primary algorithm by Equation 2.

Note that the actor network has its own parameters, and we need to learn them so that it outputs good actions. To learn the actor network, we need to know how to evaluate the goodness of an actor network. The critic network exactly plays this role.

3.2 CRITIC NETWORK

Recall that our goal is to find a good policy for learning rate control, to ensure that a good model can eventually be learnt by the primary ML algorithm. For this purpose, the actor network needs to output a good action $a^t$ at state $s^t$ so that finally a low training loss $f(\cdot)$ is achieved. In RL, the Q function $Q^\pi(s, a)$ is often used to denote the long-term reward of the state-action pair $(s, a)$ while following policy $\pi$ to take future actions. In our problem, $Q(s^t, a^t)$ indicates the accumulative decrement of training loss starting from step $t$. We define the immediate reward at step $t$ as the one-step loss decrement:

$$r^t = f_{w^t}(x) - f_{w^{t+1}}(x) \qquad (5)$$

The accumulative value $R^t$ of policy $\pi$ at step $t$ is the total discounted reward from step $t$:

$$R^t = \sum_{k \ge 0} \gamma^k\, r^{t+k} \qquad (6)$$

where $\gamma \in (0, 1]$ is the discount factor.

Considering that both the states and the actions are uncountable in our problem, the critic network uses a parametric function $Q_\phi(s, a)$ with parameters $\phi$ to approximate the Q value function $Q(s, a)$.

¹Here we have two learning algorithms. We call the one whose learning rate is being adjusted the primary ML algorithm, and the other one, which optimizes the learning rate of the primary one, the secondary ML algorithm.

Figure 1 illustrates our automatic learning rate controller, which adopts the actor-critic framework in RL. The basic idea is that at each step, given the current model $w^t$ and training sample $x$, an actor network is used to take an action (the learning rate $a^t$, which will be used to update the model $w^t$), and a critic network is used to estimate the goodness of the action. The actor network is updated using the estimated goodness of $a^t$, and the critic network is updated by minimizing the temporal difference (TD) (Sutton & Barto, 1990) error. We describe the details of our algorithm in
the following subsections.

Algorithm 1 Actor-Critic Algorithm for Learning Rate Learning

Require: training steps T; training set X; loss function f; state function χ; discount factor γ.
Ensure: model parameters w, policy parameters θ of the actor network, and value parameters φ of the critic network.
     1: Initialize parameters w⁰, θ⁰, φ⁰
     2: for t = 0, ..., T do
     3:     Sample xᵢ ∈ X, i ∈ 1, ..., N
     4:     Extract the state vector: sᵗ = χ(wᵗ, xᵢ)
     5:     // The actor network selects an action.
     6:     Compute the learning rate aᵗ = π_θ(sᵗ)
     7:     // Update the model parameters w.
     8:     Compute fᵗ(xᵢ)
     9:     Update w: wᵗ⁺¹ = wᵗ − aᵗ ∇fᵗ(xᵢ)
    10:     // Update the critic network by minimizing the square error between estimation and target.
    11:     rᵗ = fᵗ(xᵢ) − fᵗ⁺¹(xᵢ)
    12:     Compute Q_φ(sᵗ⁺¹, π_θ(sᵗ⁺¹)) and Q_φ(sᵗ, aᵗ)
    13:     Extract sᵗ⁺¹ = χ(wᵗ⁺¹, xᵢ)
    14:     Compute δᵗ according to Equation 7: δᵗ = rᵗ + γ Q_φ(sᵗ⁺¹, π_θ(sᵗ⁺¹)) − Q_φ(sᵗ, aᵗ)
    15:     Update φ using the gradient from Equation 8: ∇φ = δᵗ ∇_φ Q_φ(sᵗ, aᵗ)
    16:     // Update the actor network.
    17:     Sample xⱼ ∈ X, j ∈ 1, ..., N, j ≠ i
    18:     Compute aᵗ⁺¹ = π_θ(sᵗ⁺¹)
    19:     Extract sᵗ⁺¹ = χ(wᵗ⁺¹, xⱼ)
    20:     Update θ from Equation 9: ∇θ = ∇_θ π_θ(sᵗ⁺¹) ∇_a Q_φ(sᵗ⁺¹, a)|_{a=π_θ(sᵗ⁺¹)}
    21: end for
    22: return w, θ, φ

The overall algorithm is shown in Algorithm 1. In each step, we sample an example (Line 3), extract the current state vector (Line 4), compute the learning rate using the actor network (Line 6), update the model (Lines 8-9), compute the TD error (Lines 11-14), update the critic network (Line 15), and sample another example (Line 17) to update the actor network (Lines 18-20). We would like to make some discussions about the algorithm.

The critic network has its own parameters $\phi$, which are updated at each step using TD learning. More precisely, the critic is trained by minimizing the square error between the estimation $Q_\phi(s^t, a^t)$ and the target $y^t = r^t + \gamma\, Q_\phi(s^{t+1}, \pi_\theta(s^{t+1}))$:

$$\delta^t = r^t + \gamma\, Q_\phi(s^{t+1}, \pi_\theta(s^{t+1})) - Q_\phi(s^t, a^t) \qquad (7)$$

$$\nabla\phi = \delta^t\, \nabla_\phi Q_\phi(s^t, a^t) \qquad (8)$$

The policy parameters $\theta$ of the actor network are updated by ensuring that it outputs the action with the largest Q value at state $s^t$, i.e., $a^* = \arg\max_a Q(s^t, a)$. Mathematically,

$$\nabla\theta = \nabla_\theta \pi_\theta(s^{t+1})\, \nabla_a Q_\phi(s^{t+1}, a)\big|_{a = \pi_\theta(s^{t+1})} \qquad (9)$$

Second, one may notice that we use one example (e.g., $x_i$) for the model and critic network updates, but a different example (e.g., $x_j$) for the actor network update. Doing so, we can avoid the algorithm overfitting on some (too) hard examples, and we can improve the generalization performance of the algorithm on the test set. Consider a hard example² in a classification task. Since such an example is difficult to classify correctly, intuitively its gradient will be large and the learning rate given by the actor network at this step will also be large. In other words, this hard example will greatly change the model, while it is itself not a good representative of its category, and the learning algorithm should not pay much attention to it. If we feed the same example to both the actor network and the critic network, both of them will encourage the model to change a lot to fit the example, consequently resulting in oscillation of the training, as shown in our experiments. By feeding different examples to the actor and critic networks, it is very likely the critic network will find that the gradient direction of the example fed into the actor network is inconsistent with its own training example, and thus criticize the large learning rate suggested by the actor network.
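A minimal sketch of one step of this controller, with tiny linear stand-ins for the actor and critic networks; the paper's LSTM actor, the state function χ, and all shapes here are simplified assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_state = 4
theta = rng.normal(size=d_state) * 0.01      # actor weights: lr = |theta . s|
phi = rng.normal(size=d_state + 1) * 0.01    # critic weights over (s, a)

def actor(s):                                # a^t = pi_theta(s^t), kept positive
    return abs(theta @ s)

def critic(s, a):                            # Q_phi(s, a), linear stand-in
    return phi @ np.append(s, a)

def step(w, x_i, x_j, chi, loss, grad, gamma=0.9, lr_actor=1e-3, lr_critic=1e-2):
    """One training step: x_i updates the model and the critic; a different
    example x_j drives the actor update, as in Algorithm 1."""
    global theta, phi
    s = chi(w, x_i)
    a = actor(s)                             # learning rate for this step
    w_new = w - a * grad(w, x_i)             # primary model update
    r = loss(w, x_i) - loss(w_new, x_i)      # one-step loss decrement
    s_next = chi(w_new, x_i)
    # Critic: TD update toward y = r + gamma * Q(s', pi(s')).
    y = r + gamma * critic(s_next, actor(s_next))
    delta = y - critic(s, a)
    phi += lr_critic * delta * np.append(s, a)
    # Actor: deterministic policy gradient on a state built from x_j.
    s_j = chi(w_new, x_j)
    dq_da = phi[-1]                          # dQ/da for the linear critic
    da_dtheta = np.sign(theta @ s_j) * s_j   # d|theta.s|/dtheta
    theta += lr_actor * dq_da * da_dtheta
    return w_new

# Usage: w = step(w, x_i, x_j, chi, loss, grad) inside the training loop.
```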
More precisely, the update of w is based on x_i and the learning rate suggested by the actor network, while the training target of the actor network is to maximize the output of the critic network on x_j. If there is a big gradient disagreement between x_i and x_j, the update of w, which is affected by the actor's decision, will cause the critic's output on x_j to be small. To compensate for this effect, the actor network is forced to predict a small learning rate for a too-hard x_i in this situation."}, {"section_index": "9", "section_name": "4 EXPERIMENTS", "section_text": "We conducted a set of experiments to test the performance of our learning rate learning algorithm and compared it with several baseline methods. We report the experimental results in this section."}, {"section_index": "10", "section_name": "4.1 EXPERIMENTAL SETUP", "section_text": "We tested our method on two widely used image classification datasets: MNIST LeCun et al. (1998) and CIFAR-10 Krizhevsky & Hinton (2009). Convolutional neural networks (CNNs) have been the standard model for image classification tasks in recent years, and thus the primary ML algorithm adopted a CNN model in all our experiments.

We specified our actor-critic algorithm in the experiments as follows. Given that stochastic mini-batch training is a common practice in deep learning, the actor-critic algorithm also operated on mini-batches, i.e., each step is a mini-batch in our experiments. We defined the state s^t = χ(w^t, X_t) as the average loss of the learning model w^t on the input mini-batch X_t. We specified the actor network as a two-layer long short-term memory (LSTM) network with 20 units in each layer, considering that a good learning rate for step t depends on and correlates with the learning rates at previous steps, while an LSTM is well suited to model sequences with long-distance dependence. We used the absolute-value activation function for the output layer of the LSTM to ensure a positive learning rate. The LSTM was unrolled for 20 steps during training. We specified the critic network as a simple neural network with one hidden layer and 10 hidden units. We used Adam with the default settings in the TensorFlow optimizer toolbox Abadi et al. (2015) to train the actor and critic networks in all the experiments.

²For example, an example may have an incorrect label because of the limited quality of labelers.

We compared our method with several mainstream SGD algorithms, including SGD, Adam Kingma & Ba (2014), Adagrad Duchi et al. (2011) and RMSprop Tieleman & Hinton (2012). For each of these algorithms and each dataset, we tried the following learning rates: 10^{-4}, 10^{-3}, ..., 10^{0}. We report the best performance of these algorithms over those learning rates. If an algorithm needs some other parameters to be set, such as the decay coefficients for Adam, we used the default settings in the TensorFlow optimizer toolbox.
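Before moving to the results, a minimal PyTorch sketch of the actor and critic architectures specified above may help make the setup concrete. Our implementation used TensorFlow; the class names, the scalar state dimension, and the critic's hidden activation (ReLU) are our assumptions.

```python
import torch
import torch.nn as nn

class ActorLSTM(nn.Module):
    """Actor: two-layer LSTM with 20 units per layer; the absolute value on
    the output layer keeps the predicted learning rate positive."""
    def __init__(self, state_dim=1):  # state_dim=1: scalar average-loss state
        super().__init__()
        self.lstm = nn.LSTM(state_dim, 20, num_layers=2, batch_first=True)
        self.head = nn.Linear(20, 1)

    def forward(self, states, hidden=None):
        # states: (batch, seq_len, state_dim); unrolled for 20 steps in training
        out, hidden = self.lstm(states, hidden)
        return self.head(out).abs(), hidden

class CriticMLP(nn.Module):
    """Critic: one hidden layer with 10 units approximating Q_phi(s, a)."""
    def __init__(self, state_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, 10), nn.ReLU(), nn.Linear(10, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

actor, critic = ActorLSTM(), CriticMLP()
lr_seq, _ = actor(torch.zeros(1, 20, 1))       # 20 unrolled steps -> 20 positive rates
q = critic(torch.zeros(1, 1), lr_seq[:, -1])   # value of the last (state, action) pair
```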
For each benchmark and our proposed method, five independent runs are averaged and reported in all of the following experiments.

Figure 2: Results on MNIST. (a) Training loss. (b) Test loss. (c) Test accuracy. The x-axis is the number of mini-batches. The curves compare SGD, ADAM, Adagrad, RMSprop, and our method.

Figure 3: Results on CIFAR-10. (a) Training loss. (b) Test loss. (c) Test accuracy. The x-axis is the number of mini-batches. The curves compare SGD, ADAM, Adagrad, RMSprop, and our method."}, {"section_index": "11", "section_name": "4.2 RESULTS ON MNIST", "section_text": "MNIST is a dataset for the handwritten digit classification task. Each example in the dataset is a 28 × 28 black-and-white image containing a digit in {0, 1, ..., 9}. The CNN model used in the primary ML algorithm consists of two convolutional layers, each followed by a pooling layer, and finally a fully connected layer. The first convolutional layer filters each input image using 32 kernels of size 5 × 5. The max-pooling layer following the first convolutional layer is performed over 2 × 2 pixel windows, with stride 2. The second convolutional layer takes the outputs of the first max-pooling layer as inputs and filters them with 64 kernels of size 5 × 5. The max-pooling layer following the second convolutional layer is performed over 2 × 2 pixel windows, with stride 2. The outputs of the second max-pooling layer are fed to a fully connected layer with 512 neurons. Dropout was applied to the fully connected layer with a dropout rate of 0.5. ReLU activation functions are used in the CNN model. There are 60,000 training images and 10,000 test images in this dataset. We scaled the pixel values to the [0, 1] range before inputting them to all the algorithms. Each mini-batch contains 50 randomly sampled images.

Figure 2 shows the results of our actor-critic algorithm for learning rate learning and of the baseline methods, including the curves of training loss, test loss, and test accuracy. The final accuracies of these methods are summarized in Table 1. We have the following observations.

Table 1: Error rate comparison on MNIST

Optimizer  | Error Rate (%)
SGD        | 0.75
ADAM       | 0.87
Adagrad    | 0.94
RMSprop    | 0.83
Our method | 0.67

- In terms of training loss, our algorithm has a convergence speed similar to the baseline methods. One may expect that our algorithm should have a significantly faster convergence speed, considering that our algorithm learns both the learning rate and the CNN model while the baselines only learn the CNN model and choose the learning rates per some predefined rules. However, this is not the case. As discussed in Section 3.4, we carefully designed the algorithm to feed different samples to the actor network and the critic network. Doing so, we focus more on generalization performance than on training loss: as shown in Figure 4, our algorithm achieves the best test accuracy.

- Our algorithm achieves the lowest error rate on MNIST. Although the improvement looks small, we would like to point out that, given that the accuracy of the CNN is already close to 100%, it is very difficult to further improve the accuracy, not to mention that we only changed the learning rate policy without changing the CNN model.
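For reference, here is a minimal PyTorch sketch of the MNIST CNN described above. The layer sizes follow the text; the padding scheme (padding=2, which keeps 28 × 28 feature maps before pooling) is our assumption to make the arithmetic work out.

```python
import torch
import torch.nn as nn

class MNISTCNN(nn.Module):
    """Two 5x5 conv layers (32 and 64 kernels), each followed by 2x2
    max-pooling with stride 2, then a 512-unit fully connected layer
    with dropout 0.5 and ReLU activations throughout."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2, stride=2),                   # 28x28 -> 14x14
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2, stride=2))                   # 14x14 -> 7x7
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(512, 10))

    def forward(self, x):
        return self.classifier(self.features(x))

logits = MNISTCNN()(torch.zeros(50, 1, 28, 28))  # mini-batch of 50, as in the text
```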
Table 2: Classification Accuracy on CIFAR-10

Optimizer  | Accuracy (%)
SGD        | 78.74
ADAM       | 77.46
Adagrad    | 78.46
RMSprop    | 62.3
Our method | 79.34

Figure 4: Results on CIFAR-10 with 20% training data. (a) Training loss. (b) Test loss."}, {"section_index": "12", "section_name": "4.3 RESULTS ON CIFAR-10", "section_text": "CIFAR-10 is a dataset consisting of 60,000 natural 32 × 32 RGB images in 10 classes: 50,000 images for training and 10,000 for test. We used a CNN with 2 convolutional layers (each followed by a max-pooling layer performed over 2 × 2 pixel windows with stride 2) and 2 fully connected layers for this task. All convolutional layers filter the input with 64 kernels of size 5 × 5. The outputs of the second pooling layer are fed to a fully connected layer with 384 neurons. The last fully connected layer has 192 neurons. Before inputting an image to the CNN, we subtracted the per-pixel mean computed over the training set from each image.

Figure 6: Results on CIFAR-10 with 20% training data. (a) Training loss. (b) Test loss. Our algorithm with x_j = x_i is shown with the blue line, and our algorithm with x_j ≠ x_i is shown with the orange line.

Figure 3 shows the results of all the algorithms on CIFAR-10, including the curves of training loss, test loss, and test accuracy. Table 2 shows the final test accuracy. We get similar observations as on MNIST: our algorithm achieves a similar convergence speed in terms of training loss and slightly better test accuracy than the baselines. Figure 5 shows the learning rate learned by our method on CIFAR-10. To further understand the generalization performance of our algorithm, we ran all the algorithms on two subsets of the CIFAR-10 training data, one with only 20% of the training data. The curves of training loss and test loss are shown in Figure 4. As can be seen from the figure, the baseline methods easily overfit and their test loss increases after 5000 steps (mini-batches). In contrast, our algorithm is relatively robust and can prevent overfitting to some extent.

As we explained in Section 3.4, feeding different examples to the actor and critic networks is important to guarantee generalization ability. Here we conducted another experiment to verify our intuitive explanation. Figure 6 shows the results of two different implementations of our actor-critic algorithm on CIFAR-10. In the first implementation, we fed the same examples to the two net-