| forum_id | sections |
|---|---|
HkwoSDPgg | [{"section_index": "0", "section_name": "SEMI-SUPERVISED KNOWLEDGE TRANSFER FOR DEEP LEARNING FROM PRIVATE TRAINING DATA", "section_text": "Nicolas Papernot\nMartin Abadi\nGoogle Brain\nPennsylvania State University\ngoodfellow@google.com\nSome machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information.\nTo address this problem, we demonstrate a generally applicable approach to pro- viding strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not pub- lished, but instead used as \"teachers\"' for a \"student\"' model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Some machine learning applications with great benefits are enabled only through the analysis oi. sensitive data, such as users' personal contacts, private photographs or correspondence, or even. medical records or genetic sequences (Alipanahi et al.]2015] Kannan et al.]2016] Kononenko]2001 Sweeney[1997). Ideally, in those cases, the learning algorithms would protect the privacy of users. training data, e.g., by guaranteeing that the output model generalizes away from the specifics of any. individual user. Unfortunately, established machine learning algorithms make no such guarantee indeed, though state-of-the-art algorithms generalize well to the test set, they continue to overfit on. specific training examples in the sense that some of these examples are implicitly memorized..\nRecent attacks exploiting this implicit memorization in machine learning have demonstrated that. private, sensitive training data can be recovered from models. Such attacks can proceed directly, by. analyzing internal model parameters, but also indirectly, by repeatedly querying opaque models to. gather data for the attack's analysis. For example, [Fredrikson et al.(2015) used hill-climbing on the. output probabilities of a computer-vision classifier to reveal individual faces from the training data\nUlfar Erlingsson\nabadi@google.com\nulfar@google.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. 
We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning\nBecause of those demonstrations-and because privacy guarantees must apply to worst-case out liers, not only the average- any strategy for protecting the privacy of training data should prudently assume that attackers have unfettered access to internal model parameters..\nTo protect the privacy of training data, this paper improves upon a specific, structured application of. the techniques of knowledge aggregation and transfer (Breiman|[1994), previously explored byNis- sim et al.(2007), Pathak et al.(2010), and particularlyHamm et al.[(2016). In this strategy, first, an ensemble (Dietterich!20o0) of teacher models is trained on disjoint subsets of the sensitive data Then, using auxiliary, unlabeled non-sensitive data, a student model is trained on the aggregate out-. put of the ensemble, such that the student learns to accurately mimic the ensemble. Intuitively, this. strategy ensures that the student does not depend on the details of any single sensitive training data. point (e.g., of any single user), and, thereby, the privacy of the training data is protected even if. attackers can observe the student's internal model parameters..\nThis paper shows how this strategy's privacy guarantees can be strengthened by restricting student training to a limited number of teacher votes, and by revealing only the topmost vote after care. fully adding random noise. We call this strengthened strategy PATE, for Private Aggregation oJ Teacher Ensembles. Furthermore, we introduce an improved privacy analysis that makes the strat egy generally applicable to machine learning algorithms with high utility and meaningful privacy guarantees--in particular, when combined with semi-supervised learning.\nTo establish strong privacy guarantees, it is important to limit the student's access to its teachers. so that the student's exposure to teachers' knowledge can be meaningfully quantified and bounded Fortunately, there are many techniques for speeding up knowledge transfer that can reduce the rate of student/teacher consultation during learning. We describe several techniques in this paper, the most effective of which makes use of generative adversarial networks (GANs) (Goodfellow et al. 2014) applied to semi-supervised learning, using the implementation proposed by Salimans et al. (2016). For clarity, we use the term PATE-G when our approach is combined with generative, semi- supervised methods. Like all semi-supervised learning methods, PATE-G assumes the student has access to additional, unlabeled data, which, in this context, must be public or non-sensitive. This assumption should not greatly restrict our method's applicability: even when learning on sensitive data, a non-overlapping, unlabeled set of data often exists, from which semi-supervised methods can extract distribution priors. For instance, public datasets exist for text and images, and for medical data.\nIt seems intuitive, or even obvious, that a student machine learning model will provide good privacy when trained without access to sensitive training data, apart from a few, noisy votes from a teacher quorum. However, intuition is not sufficient because privacy properties can be surprisingly hard to reason about; for example, even a single data item can greatly impact machine learning models trained on a large corpus (Chaudhuri et al.|2011). 
Therefore, to limit the effect of any single sensitive data item on the student's learning, precisely and formally, we apply the well-established, rigorous standard of differential privacy (Dwork & Roth!2014). Like all differentially private algorithms, our learning strategy carefully adds noise, so that the privacy impact of each data item can be analyzed and bounded. In particular, we dynamically analyze the sensitivity of the teachers' noisy votes: for this purpose, we use the state-of-the-art moments accountant technique fromAbadi et al.(2016). which tightens the privacy bound when the topmost vote has a large quorum. As a result, for MNIST and similar benchmark learning tasks, our methods allow students to provide excellent utility, while our analysis provides meaningful worst-case guarantees. In particular, we can bound the metric for privacy loss (the differential-privacy e) to a range similar to that of existing, real-world privacy protection mechanisms, such as Google's RAPPOR (Erlingsson et al.]2014).\nFinally, it is an important advantage that our learning strategy and our privacy analysis do not depend on the details of the machine learning techniques used to train either the teachers or their student Therefore, the techniques in this paper apply equally well for deep learning methods, or any such learning methods with large numbers of parameters, as they do for shallow, simple techniques. In comparison, Hamm et al.(2016) guarantee privacy only conditionally, for a restricted class of student classifiers-in effect, limiting applicability to logistic regression with convex loss. Also, unlike the methods of Abadi et al.(2016), which represent the state-of-the-art in differentially private deep learning, our techniques make no assumptions about details such as batch selection, the loss function, or the choice of the optimization algorithm. Even so, as we show in experiments on\nFigure 1: Overview of the approach: (1) an ensemble of teachers is trained on disjoint subsets of the sensitive data, (2) a student model is trained on public data labeled using the ensemble\nOur results are encouraging, and highlight the benefits of combining a learning strategy based on. semi-supervised knowledge transfer with a precise, data-dependent privacy analysis. However, the most appealing aspect of this work is probably that its guarantees can be compelling to both an expert and a non-expert audience. In combination, our techniques simultaneously provide both an intuitive and a rigorous guarantee of training data privacy, without sacrificing the utility of the targeted model. This gives hope that users will increasingly be able to confidently and safely benefit from machine learning models built from their sensitive data..\nIn this section, we introduce the specifics of the PATE approach, which is illustrated in Figure[1 We describe how the data is partitioned to train an ensemble of teachers, and how the predictions made by this ensemble are noisily aggregated. In addition, we discuss how GANs can be used in. training the student, and distinguish PATE-G variants that improve our approach using generative,. semi-supervised methods.\nNot accessible by adversary Accessible by adversary Data 1 Teacher 1 Data 2 Teacher 2 Sensitive Aggregate Student Queries Data Data 3 Teacher 3 Teacher Predicted Incomplete Data n Teacher n. 
MNIST and SVHN, our techniques provide a privacy/utility tradeoff that equals or improves upon bespoke learning methods such as those of Abadi et al. (2016).

Our contributions are as follows:

- We demonstrate a general machine learning strategy, the PATE approach, that provides differential privacy for training data in a "black-box" manner, i.e., independently of the learning algorithm, as demonstrated by Section 4 and Appendix C.
- We improve upon the strategy outlined in Hamm et al. (2016) for learning machine learning models that protect training data privacy. In particular, our student only accesses the teachers' top vote, and the model does not need to be trained with a restricted class of convex losses.
- We explore four different approaches for reducing the student's dependence on its teachers, and show how the application of GANs to semi-supervised learning of Salimans et al. (2016) can greatly reduce the privacy loss by radically reducing the need for supervision.
- We present a new application of the moments accountant technique from Abadi et al. (2016) for improving the differential-privacy analysis of knowledge transfer, which allows the training of students with meaningful privacy bounds.
- We evaluate our framework on MNIST and SVHN, allowing for a comparison of our results with previous differentially private machine learning methods. Our classifiers achieve an (ε, δ) differential-privacy bound of (2.04, 10⁻⁵) for MNIST and (8.19, 10⁻⁶) for SVHN, with accuracies of 98.00% and 90.66% respectively. In comparison, for MNIST, Abadi et al. (2016) obtain a looser (8, 10⁻⁵) privacy bound and 97% accuracy. For SVHN, Shokri & Shmatikov (2015) report approx. 92% accuracy with ε > 2 per each of 300,000 model parameters, naively making the total ε > 600,000, which guarantees no meaningful privacy.
- Finally, we show that the PATE approach can be successfully applied to other model structures and to datasets with different characteristics. In particular, in Appendix C, PATE protects the privacy of medical data used to train a model based on random forests.

Data partitioning and teachers: Instead of training a single model to solve the task associated with dataset (X, Y), where X denotes the set of inputs and Y the set of labels, we partition the data into n disjoint sets (X_n, Y_n) and train a model separately on each set. As evaluated in Section 4.1, assuming that n is not too large with respect to the dataset size and task complexity, we obtain n classifiers f_i called teachers. We then deploy them as an ensemble making predictions on unseen inputs x by querying each teacher for a prediction f_i(x) and aggregating these into a single prediction.

Aggregation: The privacy guarantees of this teacher ensemble stem from its aggregation. Let m be the number of classes in our task. The label count for a given class j ∈ [m] and an input x is the number of teachers that assigned class j to input x: n_j(x) = |{i : i ∈ [n], f_i(x) = j}|. If we simply apply plurality (use the label with the largest count), the ensemble's decision may depend on a single teacher's vote. Indeed, when two labels have a vote count differing by at most one, there is a tie: the aggregated output changes if one teacher makes a different prediction.
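To make the counting concrete, here is a minimal NumPy sketch of the label counts n_j(x) and the un-noised plurality decision just described; the callable-teacher interface is an assumption made for illustration, not the paper's released API.

```python
import numpy as np

def label_counts(teachers, x, m):
    """Compute n_j(x): how many of the n teachers assign class j to input x.

    `teachers` is assumed to be a list of callables mapping an input to a
    class index in [0, m); this interface is illustrative only.
    """
    counts = np.zeros(m, dtype=np.int64)
    for f in teachers:
        counts[f(x)] += 1
    return counts

def plurality(teachers, x, m):
    # Un-noised aggregation: a tie makes the output sensitive to one vote.
    return int(np.argmax(label_counts(teachers, x, m)))
```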
We add random noise to the vote counts n_j to introduce ambiguity:

$$ f(x) = \arg\max_j \left\{ n_j(x) + \mathrm{Lap}\!\left(\tfrac{1}{\gamma}\right) \right\} \qquad (1) $$

In this equation, γ is a privacy parameter and Lap(b) the Laplacian distribution with location 0 and scale b. The parameter γ influences the privacy guarantee we can prove: intuitively, a large γ leads to a strong privacy guarantee, but can degrade the accuracy of the labels, as the noisy maximum f above can differ from the true plurality.

While we could use an f such as above to make predictions, the noise required would increase as we make more predictions, making the model useless after a bounded number of queries. Furthermore, privacy guarantees do not hold when an adversary has access to the model parameters. Indeed, as each teacher f_i was trained without taking privacy into account, it is conceivable that they have sufficient capacity to retain details of the training data. To address these limitations, we train another model, the student, using a fixed number of labels predicted by the teacher ensemble.

We train a student on non-sensitive and unlabeled data, some of which we label using the aggregation mechanism. This student model is the one deployed, in lieu of the teacher ensemble, so as to fix the privacy loss to a value that does not grow with the number of user queries made to the student model. Indeed, the privacy loss is now determined by the number of queries made to the teacher ensemble during student training and does not increase as end-users query the deployed student model. Thus the privacy of users who contributed to the original training dataset is preserved even if the student's architecture and parameters are public or reverse-engineered by an adversary.

We considered several techniques to trade off the student model's quality with the number of labels it needs to access: distillation, active learning, and semi-supervised learning (see Appendix B). Here, we only describe the most successful one, used in PATE-G: semi-supervised learning with GANs.

Training the student with GANs: The GAN framework involves two machine learning models, a generator and a discriminator. They are trained in a competing fashion, in what can be viewed as a two-player game (Goodfellow et al., 2014). The generator produces samples from the data distribution by transforming vectors sampled from a Gaussian distribution. The discriminator is trained to distinguish samples artificially produced by the generator from samples drawn from the real data distribution. Models are trained via simultaneous gradient descent steps on both players' costs. In practice, these dynamics are often difficult to control when the strategy set is non-convex (e.g., a DNN). In their application of GANs to semi-supervised learning, Salimans et al. (2016) made the following modifications: the discriminator is extended from a binary classifier (data vs. generator sample) to a multi-class classifier (one of k classes of data samples, plus a class for generated samples). This classifier is then trained to classify labeled real samples in the correct class, unlabeled real samples in any of the k classes, and the generated samples in the additional class, as sketched below.
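The following sketch spells out the resulting discriminator objective under the parameterization D(x) = Z(x)/(Z(x)+1) with Z(x) = Σ_k exp(l_k(x)) used by Salimans et al. (2016); array shapes and function names are illustrative assumptions, and this is a sketch of the loss, not the released implementation.

```python
import numpy as np

def log_sum_exp(logits):
    # Numerically stable log(sum_k exp(l_k)) along the class axis.
    m = logits.max(axis=1, keepdims=True)
    return m.squeeze(1) + np.log(np.exp(logits - m).sum(axis=1))

def semi_supervised_losses(logits_lab, labels, logits_unl, logits_gen):
    """Semi-supervised discriminator losses, written for k-class logits.

    With z = log Z(x): log D(x) = z - softplus(z) and
    -log(1 - D(x)) = softplus(z). Shapes: logits_* is (N, k), labels (N,).
    """
    # Supervised term: standard cross-entropy on the k real classes.
    l_lab = logits_lab[np.arange(len(labels)), labels]
    loss_lab = -np.mean(l_lab - log_sum_exp(logits_lab))
    # Unsupervised terms: real unlabeled samples should look real,
    # generated samples should look fake.
    z_unl, z_gen = log_sum_exp(logits_unl), log_sum_exp(logits_gen)
    loss_unl = -np.mean(z_unl - np.logaddexp(0.0, z_unl))  # -log D(x)
    loss_gen = np.mean(np.logaddexp(0.0, z_gen))           # -log(1 - D(G(z)))
    return loss_lab, loss_unl + loss_gen
```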
Although no formal results currently explain why, this technique was empirically demonstrated to greatly improve semi-supervised learning of classifiers on several datasets, especially when the classifier is trained with a feature matching loss (Salimans et al., 2016).

Training the student in a semi-supervised fashion makes better use of the entire data available to the student, while still only labeling a subset of it. Unlabeled inputs are used in unsupervised learning to estimate a good prior for the distribution. Labeled inputs are then used for supervised learning.

We now analyze the differential privacy guarantees of our PATE approach. Namely, we keep track of the privacy budget throughout the student's training using the moments accountant (Abadi et al., 2016). When teachers reach a strong quorum, this allows us to bound privacy costs more strictly.

Differential privacy (Dwork et al., 2006b; Dwork, 2011) has established itself as a strong standard. It provides privacy guarantees for algorithms analyzing databases, which in our case is a machine learning training algorithm processing a training dataset. Differential privacy is defined using pairs of adjacent databases: in the present work, these are datasets that only differ by one training example. Recall the following variant of differential privacy introduced in Dwork et al. (2006a).

Definition 1. A randomized mechanism M with domain D and range R satisfies (ε, δ)-differential privacy if for any two adjacent inputs d, d' ∈ D and for any subset of outputs S ⊆ R it holds that:

$$ \Pr[M(d) \in S] \le e^{\varepsilon} \Pr[M(d') \in S] + \delta. $$

Definition 2. Given an auxiliary input aux and an outcome o, the privacy loss at o is defined as:

$$ c(o; M, \mathrm{aux}, d, d') = \log \frac{\Pr[M(\mathrm{aux}, d) = o]}{\Pr[M(\mathrm{aux}, d') = o]}. $$

The privacy loss random variable C(M, aux, d, d') is defined as c(M(d); M, aux, d, d'), i.e., the random variable obtained by evaluating the privacy loss at an outcome sampled from M(d).

A natural way to bound our approach's privacy loss is to first bound the privacy cost of each label queried by the student, and then use the strong composition theorem (Dwork et al., 2010) to derive the total cost of training the student. For neighboring databases d, d', each teacher gets the same training data partition (that is, the same for the teacher with d and with d', not the same across teachers), with the exception of one teacher whose corresponding training data partition differs. Therefore, the label counts n_j(x) for any example x, on d and d', differ by at most 1 in at most two locations. In the next subsection, we show that this yields loose guarantees.

Definition 3. Let M : D → R be a randomized mechanism and d, d' a pair of adjacent databases. Let aux denote an auxiliary input. The moments accountant is defined as

$$ \alpha_M(\lambda) = \max_{\mathrm{aux}, d, d'} \alpha_M(\lambda; \mathrm{aux}, d, d'), $$

where α_M(λ; aux, d, d') = log E[exp(λ C(M, aux, d, d'))] is the moment generating function of the privacy loss random variable.

The following properties of the moments accountant are proved in Abadi et al. (2016).

Theorem 1. 1. [Composability] Suppose that a mechanism M consists of a sequence of adaptive mechanisms M_1, ..., M_k, where M_i : ∏_{j=1}^{i−1} R_j × D → R_i. Then, for any output sequence o_1, ..., o_{k−1} and any λ,

$$ \alpha_M(\lambda; d, d') \le \sum_{i=1}^{k} \alpha_{M_i}(\lambda; o_1, \ldots, o_{i-1}, d, d'). $$

2. [Tail bound] For any ε > 0, the mechanism M is (ε, δ)-differentially private for

$$ \delta = \min_{\lambda} \exp\big(\alpha_M(\lambda) - \lambda \varepsilon\big). $$

At each step, we use the aggregation mechanism with noise Lap(1/γ), which is (2γ, 0)-differentially private. Thus over T steps, we get (4Tγ² + 2γ√(2T ln(1/δ)), δ)-differential privacy. This can be rather large: plugging in values that correspond to our SVHN result, γ = 0.05, T = 1000, δ = 10⁻⁶ gives us ε ≈ 26, and plugging in values that correspond to our MNIST result, γ = 0.05, T = 100, δ = 10⁻⁵ gives us ε ≈ 5.80.

Our data-dependent privacy analysis takes advantage of the fact that when the quorum among the teachers is very strong, the majority outcome has overwhelming likelihood, in which case the privacy cost is small whenever this outcome occurs. The moments accountant allows us to analyze the composition of such mechanisms in a unified framework. Since each application of the mechanism is (2γ, 0)-differentially private, its moments also satisfy the generic bound

Theorem 2.

$$ \alpha(\lambda; \mathrm{aux}, d, d') \le 2\gamma^2 \lambda (\lambda + 1). $$

The following theorem, proved in Appendix A, provides a data-dependent bound on the moments of any differentially private mechanism where some specific outcome is very likely.

Theorem 3. Let M be (2γ, 0)-differentially private and q ≥ Pr[M(d) ≠ o*] for some outcome o*, with q < (e^{2γ} − 1)/(e^{4γ} − 1). Then for any aux and any pair of adjacent databases d, d',

$$ \alpha(\lambda; \mathrm{aux}, d, d') \le \log\!\Big( (1 - q) \Big( \frac{1 - q}{1 - e^{2\gamma} q} \Big)^{\!\lambda} + q\, e^{2\gamma\lambda} \Big). $$

To upper bound q for our aggregation mechanism, we use the following simple lemma, also proved in Appendix A.

Lemma 4. Let n be the label score vector for a database d with n_{j*} ≥ n_j for all j. Then

$$ \Pr[M(d) \ne j^*] \le \sum_{j \ne j^*} \frac{2 + \gamma\, (n_{j^*} - n_j)}{4 \exp\big(\gamma\, (n_{j^*} - n_j)\big)}. $$

This allows us to upper bound q for a specific score vector n, and hence bound specific moments. We take the smaller of the bounds we get from Theorems 2 and 3. We compute these moments for a few values of λ (integers up to 8). Theorem 1 allows us to add these bounds over successive steps, and derive an (ε, δ) guarantee from the final α. Interested readers are referred to the script that we used to empirically compute these bounds, which is released along with our code: https://github.com/tensorflow/models/tree/master/differential_privacy/multiple_teachers

Since the privacy moments are themselves now data dependent, the final ε is itself data-dependent and should not be revealed. To get around this, we bound the smooth sensitivity (Nissim et al., 2007) of the moments and add noise proportional to it to the moments themselves. This gives us a differentially private estimate of the privacy cost. Our evaluation in Section 4 ignores this overhead and reports the un-noised values of ε. Indeed, in our experiments on MNIST and SVHN, the scale of the noise one needs to add to the released ε is smaller than 0.5 and 1.0 respectively.

How does the number of teachers affect the privacy cost? Recall that the student uses a noisy label computed in (1), which has a parameter γ. To ensure that the noisy label is likely to be the correct one, the noise scale 1/γ should be small compared to the additive gap between the two largest values of n_j. While the exact dependence of γ on the privacy cost in Theorem 3 is subtle, as a general principle, a smaller γ leads to a smaller privacy cost. Thus, a larger gap translates to a smaller privacy cost. Since the gap itself increases with the number of teachers, having more teachers would lower the privacy cost. This is true up to a point. With n teachers, each teacher only trains on a fraction of the training data. For large enough n, each teacher will have too little training data to be accurate.

To conclude, we note that our analysis is rather conservative in that it pessimistically assumes that, even if just one example in the training set for one teacher changes, the classifier produced by that teacher may change arbitrarily. One advantage of our approach, which enables its wide applicability, is that our analysis does not require any assumptions about the workings of the teachers. Nevertheless, we expect that stronger privacy guarantees may perhaps be established in specific settings, when assumptions can be made on the learning algorithm used to train the teachers."}, {"section_index": "3", "section_name": "4 EVALUATION", "section_text": "In our evaluation of PATE and its generative variant PATE-G, we first train a teacher ensemble for each dataset.
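As a concrete reference for the per-query mechanism used throughout this evaluation, the sketch below performs a single noisy label query with γ = 0.05 (Laplacian scale 20), the setting used in our experiments; the vote counts shown are hypothetical and the function name is ours, not the released code's.

```python
import numpy as np

def noisy_aggregate(counts, gamma, rng):
    """One label query: perturb each vote count with Lap(1/gamma) noise and
    take the arg max, as in Equation (1). `rng` is a np.random.Generator."""
    noisy = counts + rng.laplace(loc=0.0, scale=1.0 / gamma, size=counts.shape)
    return int(np.argmax(noisy))

rng = np.random.default_rng(0)
counts = np.array([3, 180, 12, 55])  # hypothetical votes from 250 teachers
print(noisy_aggregate(counts, gamma=0.05, rng=rng))  # almost surely class 1
```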
The trade-off between the accuracy and privacy of labels predicted by the ensemble is greatly dependent on the number of teachers in the ensemble: being able to train a large set of teachers is essential to support the injection of noise yielding strong privacy guarantees while having a limited impact on accuracy. Second, we minimize the privacy budget spent on learning the student by training it with as few queries to the ensemble as possible.

Our experiments use MNIST and the extended SVHN datasets. Our MNIST model stacks two convolutional layers with max-pooling and one fully connected layer with ReLUs. When trained on the entire dataset, the non-private model has a 99.18% test accuracy. For SVHN, we add two hidden layers.¹ The non-private model achieves a 92.8% test accuracy, which is shy of the state-of-the-art. However, we are primarily interested in comparing the private student's accuracy with that of a non-private model trained on the entire dataset, for different privacy guarantees. The source code for reproducing the results in this section is available on GitHub.²

¹ The model is adapted from https://www.tensorflow.org/tutorials/deep_cnn
² https://github.com/tensorflow/models/tree/master/differential_privacy/multiple_teachers

As mentioned above, compensating for the noise introduced by the Laplacian mechanism presented in Equation 1 requires large ensembles. We evaluate the extent to which the two datasets considered can be partitioned with a reasonable impact on the performance of individual teachers. Specifically, we show that for MNIST and SVHN, we are able to train ensembles of 250 teachers. Their aggregated predictions are accurate despite the injection of large amounts of random noise to ensure privacy. The aggregation mechanism output has an accuracy of 93.18% for MNIST and 87.79% for SVHN when evaluated on their respective test sets, while each query has a low privacy budget of ε = 0.05.

Prediction accuracy: All other things being equal, the number n of teachers is limited by a trade-off between the classification task's complexity and the available data. We train n teachers by partitioning the training data n-way. Larger values of n lead to larger absolute gaps, hence potentially allowing for a larger noise level and stronger privacy guarantees. At the same time, a larger n implies a smaller training dataset for each teacher, potentially reducing the teacher accuracy. We empirically find appropriate values of n for the MNIST and SVHN datasets by measuring the test set accuracy of each teacher trained on one of the n partitions of the training data. We find that even for n = 250, the average test accuracy of individual teachers is 83.86% for MNIST and 83.18% for SVHN. The larger size of SVHN compensates for its increased task complexity.

Noisy aggregation: For MNIST and SVHN, we consider three ensembles of teachers with varying numbers of teachers n ∈ {10, 100, 250}. For each of them, we perturb the vote counts with Laplacian noise of inverse scale γ ranging between 0.01 and 1. This choice is justified below in Section 4.2. We report in Figure 2 the accuracy of test set labels inferred by the noisy aggregation mechanism for these values of γ. Notice that the number of teachers needs to be large to compensate for the impact of noise injection on the accuracy.

[Figure 2 (plot omitted): How much noise can be injected to a query? Accuracy of the noisy aggregation for three MNIST and SVHN teacher ensembles (n ∈ {10, 100, 250}) and varying γ value per query. The noise introduced to achieve a given γ scales inversely proportionally to the value of γ: small values of γ correspond to large noise amplitudes, and large values to small noise.]

Prediction confidence: As outlined in Section 3, the privacy of predictions made by an ensemble of teachers intuitively requires that a quorum of teachers generalizing well agree on identical labels. This observation is reflected by our data-dependent privacy analysis, which provides stricter privacy bounds when the quorum is strong. We study the disparity of labels assigned by teachers. In other words, we count the number of votes for each possible label, and measure the difference in votes between the most popular label and the second most popular label, i.e., the gap. If the gap is small, introducing noise during aggregation might change the label assigned from the first to the second. Figure 3 shows the gap normalized by the total number of teachers n. As n increases, the gap remains larger than 60% of the teachers, allowing for aggregation mechanisms to output the correct label in the presence of noise.

[Figure 3 (plot omitted): How certain is the aggregation of teacher predictions? Gap between the number of votes assigned to the most and second most frequent labels, normalized by the number of teachers in an ensemble. Larger gaps indicate that the ensemble is confident in assigning the labels, and will be robust to more noise injection. Gaps were computed by averaging over the test data.]

The noisy aggregation mechanism labels the student's unlabeled training set in a privacy-preserving fashion. To reduce the privacy budget spent on student training, we are interested in making as few label queries to the teachers as possible. We therefore use the semi-supervised training approach described previously. Our MNIST and SVHN students with (ε, δ) differential privacy of (2.04, 10⁻⁵) and (8.19, 10⁻⁶) achieve accuracies of 98.00% and 90.66%. These results improve the differential privacy state-of-the-art for these datasets. Abadi et al. (2016) previously obtained 97% accuracy with a (8, 10⁻⁵) bound on MNIST, starting from an inferior baseline model without privacy. Shokri & Shmatikov (2015) reported about 92% accuracy on SVHN with ε > 2 per model parameter and a model with over 300,000 parameters. Naively, this corresponds to a total ε > 600,000.

We apply semi-supervised learning with GANs to our problem using the following setup for each dataset. In the case of MNIST, the student has access to 9,000 samples, among which a subset of either 100, 500, or 1,000 samples are labeled using the noisy aggregation mechanism discussed in Section 2.1. Its performance is evaluated on the 1,000 remaining samples of the test set. Note that this may increase the variance of our test set accuracy measurements, when compared to those computed over the entire test data. For the MNIST dataset, we randomly shuffle the test set to ensure that the different classes are balanced when selecting the (small) subset labeled to train the student. For SVHN, the student has access to 10,000 training inputs, among which it labels 500 or 1,000 samples using the noisy aggregation mechanism. Its performance is evaluated on the remaining 16,032 samples. For both datasets, the ensemble is made up of 250 teachers. We use a Laplacian scale of 20 to guarantee an individual query privacy bound of ε = 0.05. These parameter choices are motivated by the results from Section 4.1.

Figure 4: Utility and privacy of the semi-supervised students: each row is a variant of the student model trained with generative adversarial networks in a semi-supervised way, with a different number of label queries made to the teachers through the noisy aggregation mechanism. The last column reports the accuracy of the student, and the second and third columns the bound ε and failure probability δ of the (ε, δ) differential privacy guarantee.

| Dataset | ε | δ | Queries | Non-Private Baseline | Student Accuracy |
|---|---|---|---|---|---|
| MNIST | 2.04 | 10⁻⁵ | 100 | 99.18% | 98.00% |
| MNIST | 8.03 | 10⁻⁵ | 1000 | 99.18% | 98.10% |
| SVHN | 5.04 | 10⁻⁶ | 500 | 92.80% | 82.72% |
| SVHN | 8.19 | 10⁻⁶ | 1000 | 92.80% | 90.66% |

In Figure 4, we report the values of the (ε, δ) differential privacy guarantees provided and the corresponding student accuracy, as well as the number of queries made by each student. The MNIST student is able to learn a 98% accurate model, which is shy of 1% when compared to the accuracy of a model learned with the entire training set, with only 100 label queries. This results in a strict differentially private bound of ε = 2.04 for a failure probability fixed at 10⁻⁵. The SVHN student achieves 90.66% accuracy, which is also comparable to the 92.80% accuracy of one teacher learned with the entire training set. The corresponding privacy bound is ε = 8.19, which is higher than for the MNIST dataset, likely because of the larger number of queries made to the aggregation mechanism.

We observe that our private student outperforms the aggregation's output in terms of accuracy, with or without the injection of Laplacian noise. While this shows the power of semi-supervised learning, the student may not learn as well on different kinds of data (e.g., medical data), where categories are not explicitly designed by humans to be salient in the input space. Encouragingly, as Appendix C illustrates, the PATE approach can be successfully applied to at least some examples of such data.

Several privacy definitions are found in the literature. For instance, k-anonymity requires information about an individual to be indistinguishable from at least k − 1 other individuals in the dataset (Sweeney, 2002). However, its lack of randomization gives rise to caveats (Dwork & Roth, 2014), and attackers can infer properties of the dataset (Aggarwal, 2005). An alternative definition, differential privacy, established itself as a rigorous standard for providing privacy guarantees (Dwork et al., 2006b). In contrast to k-anonymity, differential privacy is a property of the randomized algorithm and not the dataset itself.

A variety of approaches and mechanisms can guarantee differential privacy. Erlingsson et al. (2014) showed that randomized response, introduced by Warner (1965), can protect crowd-sourced data collected from software users to compute statistics about user behaviors. Attempts to provide differential privacy for machine learning models led to a series of efforts on shallow machine learning models, including work by Bassily et al. (2014); Chaudhuri & Monteleoni (2009); Pathak et al. (2011); Song et al. (2013); and Wainwright et al. (2012).

A privacy-preserving distributed SGD algorithm was introduced by Shokri & Shmatikov (2015). It applies to non-convex models. However, its privacy bounds are given per-parameter, and the large number of parameters prevents the technique from providing a meaningful privacy guarantee. Abadi et al. (2016) provided stricter bounds on the privacy loss induced by a noisy SGD by introducing the moments accountant. In comparison with these efforts, our work increases the accuracy of a private MNIST model from 97% to 98% while improving the privacy bound ε from 8 to 1.9. Furthermore, the PATE approach is independent of the learning algorithm, unlike this previous work. Support for a wide range of architectures and training algorithms allows us to obtain good privacy bounds on an accurate and private SVHN model. However, this comes at the cost of assuming that non-private unlabeled data is available, an assumption that is not shared by Abadi et al. (2016) or Shokri & Shmatikov (2015).

Pathak et al. (2010) first discussed secure multi-party aggregation of locally trained classifiers for a global classifier hosted by a trusted third party. Hamm et al. (2016) proposed the use of knowledge transfer between a collection of models trained on individual devices into a single model guaranteeing differential privacy. Their work studied linear student models with convex and continuously differentiable losses, bounded and c-Lipschitz derivatives, and bounded features. The PATE approach of this paper is not constrained to such applications, but is more generally applicable.

Previous work also studied semi-supervised knowledge transfer from private models. For instance, Jagannathan et al. (2013) learned privacy-preserving random forests. A key difference is that their approach is tailored to decision trees. PATE works well for the specific case of decision trees, as demonstrated in Appendix C, and is also applicable to other machine learning algorithms, including more complex ones. Another key difference is that Jagannathan et al. (2013) modified the classic model of a decision tree to include the Laplacian mechanism. Thus, the privacy guarantee does not come from the disjoint sets of training data analyzed by different decision trees in the random forest, but rather from the modified architecture. In contrast, partitioning is essential to the privacy guarantees of the PATE approach.

To protect the privacy of sensitive training data, this paper has advanced a learning strategy and a corresponding privacy analysis.
The PATE approach is based on knowledge aggregation and transfej from \"teacher'' models, trained on disjoint data, to a \"student\"' model whose attributes may be mad. public. In combination, the paper's techniques demonstrably achieve excellent utility on the MNIST and SVHN benchmark tasks, while simultaneously providing a formal, state-of-the-art bound or users' privacy loss. While our results are not without limits- e.g., they require disjoint training data for a large number of teachers (whose number is likely to increase for tasks with many outpu classes)-they are encouraging, and highlight the advantages of combining semi-supervised learn ing with precise, data-dependent privacy analysis, which will hopefully trigger further work. Ir particular, such future work may further investigate whether or not our semi-supervised approach will also reduce teacher queries for tasks other than MNIST and SVHN, for example when the discrete output categories are not as distinctly defined by the salient input space features.\nA key advantage is that this paper's techniques establish a precise guarantee of training data pri vacy in a manner that is both intuitive and rigorous. Therefore, they can be appealing, and easily. explained, to both an expert and non-expert audience. However, perhaps equally compelling are the. techniques' wide applicability. Both our learning approach and our analysis methods are \"black-. box,' i.e., independent of the learning algorithm for either teachers or students, and therefore apply,. in general, to non-convex, deep learning, and other learning methods. Also, because our techniques. do not constrain the selection or partitioning of training data, they apply when training data is natu-. rally and non-randomly partitioned--e.g., because of privacy, regulatory, or competitive concerns-. or when each teacher is trained in isolation, with a different method. We look forward to such further applications, for example on RNNs and other sequence-based models.."}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "Nicolas Papernot is supported by a Google PhD Fellowship in Security. The authors would like t. thank Ilya Mironov and Li Zhang for insightful discussions about early drafts of this document."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Dana Angluin. Queries and concept learning. Machine learning, 2(4):319-342, 1988\nRaef Bassily, Adam Smith, and Abhradeep Thakurta. Differentially private empirical risk minimiza tion: efficient algorithms and tight error bounds. arXiv preprint arXiv:1405.7085, 2014\nEric B Baum. Neural net algorithms that learn in polynomial time from examples and queries. IEE! Transactions on Neural Networks, 2(1):5-19. 1991.\nLeo Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1994\nKamalika Chaudhuri and Claire Monteleoni. Privacy-preserving logistic regression. In Advances in Neural Information Processing Systems, pp. 289-296, 2009\nKamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(Mar):1069-1109, 2011\nThomas G Dietterich. Ensemble methods in machine learning. In International workshop on multi ple classifier systems, pp. 1-15. Springer, 2000\nple classifier systems, pp. 1-15. Springer, 2000 Cynthia Dwork. A firm foundation for private data analysis. Communications of the ACM, 54(1): 86-95, 2011. Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. 
Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014. Cynthia Dwork and Guy N Rothblum.. Concentrated differential privacy. arXiv preprint arXiv:1603.01887, 2016. Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data. ourselves: privacy via distributed noise generation. In Advances in Cryptology-EUROCRYP7 2006, pp. 486-503. Springer, 2006a. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity. in private data analysis. In Theory of Cryptography, pp. 265-284. Springer, 2006b.. Cynthia Dwork, Guy N Rothblum, and Salil Vadhan. Boosting and differential privacy. In Pro-. ceedings of the 51st IEEE Symposium on Foundations of Computer Science, pp. 51-60. IEEE, 2010. Ulfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. RAPPOR: Randomized aggregatable. privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC Conference on. Computer and Communications Security, pp. 1054-1067. ACM, 2014.\nJane Bromley, James W Bentz, Leon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Sackinger, and Roopak Shah. Signature verification using a \"Siamese\"' time delay neural network International Journal. l of Pattern Recognition and Artificial Intelligence. 7(04):669-688. 1993\nGeoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXi preprint arXiv:1503.02531, 2015.\nIgor Kononenko. Machine learning for medical diagnosis: history, state of the art and perspective Artificial Intelligence in medicine, 23(1):89-109, 2001.\nIlya Mironov. Renyi differential privacy. manuscript, 2016\nJason Poulos and Rafael Valle. Missing data imputation for supervised learning. arXiv preprini arXiv:1610.09075. 2016\nLatanya Sweeney. Weaving technology and policy together to maintain confidentiality. The Journa of Law, Medicine & Ethics, 25(2-3):98-110, 1997.\nStanley L Warner. Randomized response: A survey technique for eliminating evasive answer bias Journal of the American Statistical Association, 60(309):63-69, 1965.\nihun Hamm, Paul Cao, and Mikhail Belkin. Learning privately from multiparty data. arXiv preprin arXiv:1602.03552. 2016\nTim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.\nReza Shokri and Vitaly Shmatikov. Privacy-preserving deep learning. In Proceedings of the 22nd\nx(l; aux, d, d) < log((1 + q exp(2~l))\nfz)=(1- V\nWe next argue that this function is non-decreasing in (0, e4? -1) under the conditions of the lemma Towards this goal, define\nW 2yl gz,w)=(1-\nmma4 Let n be the label score vector for a database d with nj* n; for all j. Ther.\nProof. The probability that ny* + Lap() < n; + Lap() is equal to the probability that the sum. of two independent Lap(1) random variables exceeds y(n;* - n;). The sum of two independent. Lap(1) variables has the same distribution as the difference of two Gamma(2, 1) random variables Recalling that the Gamma(2, 1) distribution has pdf xe-x, we can compute the pdf of the difference. via convolution as\n1 1 + (y+|x|)e-y-|x| (y2 + y|x|)e-2y dy = 4e|x| -\nThe probability mass in the tail can then be computed by integration as 2+y(nj*-nj) Taking S 4exp(y(n;*-nj) union bound over the various candidate j's gives the claimed bound.\nPr[M(d) = o] exp(a(l;aux,d,d')) = >`Pr[M(d) = o Pr[M(d') = o] PrM(a Pr[M(d) =o]( Pr[M(d) Pr[M(d) Pr[M(d') = o*] \\Pr[M(d') = Pr[M(d) = o](e2~)4\nand observe that f(z) = g(z, z). 
We can easily verify by differentiation that g(z, w) is increasing individually in z and in w in the range of interest. This implies that f(q') < f(q) completing the proof.\n2+y(nj*-r Pr[M(d) j*]< 4exp(y(nj* - nj) j#j*\nIn this appendix, we describe approaches that were considered to reduce the number of queries made to the teacher ensemble by the student during its training. As pointed out in Sections3]and4] this effort is motivated by the direct impact of querying on the total privacy cost associated with student training. The first approach is based on distillation, a technique used for knowledge transfer and model compression (Hinton et al.2015). The three other techniques considered were proposed in the context of active learning, with the intent of identifying training examples most useful for learning. In Sections2land4] we described semi-supervised learning, which yielded the best results.. The student models in this appendix differ from those in Sections2land4] which were trained using. GANs. In contrast, all students in this appendix were learned in a fully supervised fashion from. a subset of public, labeled examples. Thus, the learning goal was to identify the subset of labels. yielding the best learning performance.."}, {"section_index": "6", "section_name": "B.1 TRAINING STUDENTS USING DISTILLATION", "section_text": "Distillation is a knowledge transfer technique introduced as a means of compressing large model. into smaller ones, while retaining their accuracy (Bucilua et al.||2006f|Hinton et al.||2015). This is fo. instance useful to train models in data centers before deploying compressed variants in phones. Th. transfer is accomplished by training the smaller model on data that is labeled with probability vector produced by the first model, which encode the knowledge extracted from training data. Distillatioi . is parameterized by a temperature parameter T, which controls the smoothness of probabilitie. output by the larger model: when produced at small temperatures, the vectors are discrete, wherea. at high temperature, all classes are assigned non-negligible values. Distillation is a natural candidat. to compress the knowledge acquired by the ensemble of teachers, acting as the large model, into . student, which is much smaller with n times less trainable parameters compared to the n teachers..\nTo evaluate the applicability of distillation, we consider the ensemble of n = 50 teachers for SVHN In this experiment, we do not add noise to the vote counts when aggregating the teacher predictions We compare the accuracy of three student models: the first is a baseline trained with labels obtained by plurality, the second and third are trained with distillation at T E {1, 5}. We use the first 10,000 samples from the test set as unlabeled data. Figure|5 reports the accuracy of the student model on the last 16,032 samples from the test set, which were not accessible to the model during training. It is plotted with respect to the number of samples used to train the student (and hence the number oi queries made to the teacher ensemble). Although applying distillation yields classifiers that perform more accurately, the increase in accuracy is too limited to justify the increased privacy cost of re vealing the entire probability vector output by the ensemble instead of simply the class assigned the largest number of votes. 
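For reference, the temperature T used in these distillation experiments only rescales the ensemble's logits before the softmax; a minimal sketch with hypothetical logits is:

```python
import numpy as np

def soften(logits, T):
    """Temperature-scaled softmax used in distillation: T = 1 recovers the
    usual softmax, larger T spreads mass over all classes."""
    z = logits / T
    z -= z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([4.0, 1.0, 0.0])
print(soften(logits, T=1))  # sharp, close to one-hot
print(soften(logits, T=5))  # smooth, all classes non-negligible
```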
Thus, we turn to an investigation of active learning."}, {"section_index": "7", "section_name": "B.2 ACTIVE LEARNING OF THE STUDENT", "section_text": "Active learning is a class of techniques that aims to identify and prioritize points in the student' training set that have a high potential to contribute to learning (Angluin]1988] Baum]1991). If the label of an input in the student's training set can be predicted confidently from what we have learne so far by querying the teachers, it is intuitive that querying it is not worth the privacy budget spent In our experiments, we made several attempts before converging to a simpler final formulation\nSiamese networks: Our first attempt was to train a pair of siamese networks, introduced byBrom. ey et al.[(1993) in the context of one-shot learning and later improved byKoch[(2015). The siamese. networks take two images as input and return 1 if the images are equal and 0 otherwise. They are. two identical networks trained with shared parameters to force them to produce similar represen tations of the inputs, which are then compared using a distance metric to determine if the images. are identical or not. Once the siamese models are trained, we feed them a pair of images where the first is unlabeled and the second labeled. If the unlabeled image is confidently matched with a. known labeled image, we can infer the class of the unknown image from the labeled image. In our. experiments, the siamese networks were able to say whether two images are identical or not, but did. not generalize well: two images of the same class did not receive sufficiently confident matches. We. also tried a variant of this approach where we trained the siamese networks to output 1 when the twc.\n90 85 80 7 5 X 7 0 65 xDistilled Vectors x x Labels only XX Distilled Vectors at T=5. 60 0 2000 4000 6000 8000 1000 Student share of samples in SVHN test set (out of 26032)\nFigure 5: Influence of distillation on the accuracy of the SVHN student trained with respect to the. initial number of training samples available to the student. The student is learning from n = 50. teachers, whose predictions are aggregated without noise: in case where only the label is returned,. we use plurality, and in case a probability vector is returned, we sum the probability vectors output by each teacher before normalizing the resulting vector..\nimages are of the same class and 0 otherwise, but the learning task proved too complicated to be al effective means for reducing the number of queries made to teachers..\nCollection of binary experts: Our second attempt was to train a collection of binary experts, one. per class. An expert for class j is trained to output 1 if the sample is in class j and O otherwise. We first trained the binary experts by making an initial batch of queries to the teachers. Using. the experts, we then selected available unlabeled student training points that had a candidate labe. score below 0.9 and at least 4 other experts assigning a score above 0.1. This gave us about 500. unconfident points for 1700 initial label queries. After labeling these unconfident points using the. ensemble of teachers, we trained the student. Using binary experts improved the student's accuracy. when compared to the student trained on arbitrary data with the same number of teacher queries The absolute increases in accuracy were however too limited-between 1.5% and 2.5%..\nIdentifying unconfident points using the student: This last attempt was the simplest yet the mos effective. 
Instead of using binary experts to identify student training points that should be labeled b the teachers, we used the student itself. We asked the student to make predictions on each unlabele training point available. We then sorted these samples by increasing values of the maximum proba bility assigned to a class for each sample. We queried the teachers to label these unconfident input first and trained the student again on this larger labeled training set. This improved the accuracy o the student when compared to the student trained on arbitrary data. For the same number of teache queries, the absolute increases in accuracy of the student trained on unconfident inputs first whe compared to the student trained on arbitrary data were in the order of 4% - 10%.\nAPPENDIX: ADDITIONAL EXPERIMENTS ON THE UCI ADULT AND DIABETES DATASETS\nUCI Adult dataset: The UCI Adult dataset is made up of census data, and the task is to predic when individuals make over $50k per year. Each input consists of 13 features (which include the age workplace, education, occupation---see the UCI website for a full list). The only pre-processing we apply to these features is to map all categorical features to numerical values by assigning an integer value to each possible category. The model is a random forest provided by the scikit-learr Python package. When training both our teachers and student, we keep all the default parameter values, except for the number of estimators, which we set to 100. The data is split between a training set of 32,562 examples, and a test set of 16,282 inputs.\nUCI Diabetes dataset: The UCI Diabetes dataset includes de-identified records of diabetic patients. and corresponding hospital outcomes, which we use to predict whether diabetic patients were read mitted less than 30 days after their hospital release. To the best of our knowledge, no particulai. classification task is considered to be a standard benchmark for this dataset. Even so, it is valuable. to consider whether our approach is applicable to the likely classification tasks, such as readmission. since this dataset is collected in a medical environment-a setting where privacy concerns arise. frequently. We select a subset of 18 input features from the 55 available in the dataset (to avoic. features with missing values) and form a dataset balanced between the two output classes (see the. UCI website for more details4). In class 0, we include all patients that were readmitted in a 30-day. window, while class 1 includes all patients that were readmitted after 30 days or never readmitted a. all. Our balanced dataset contains 34,104 training samples and 12,702 evaluation samples. We use. a random forest model identical to the one described above in the presentation of the Adult dataset\nExperimental results: We apply our approach described in Section2 For both datasets, we trair ensembles of n = 250 random forests on partitions of the training data. We then use the noisy aggregation mechanism, where vote counts are perturbed with Laplacian noise of scale 0.05 tc privately label the first 500 test set inputs. We train the student random forest on these 500 test se inputs and evaluate it on the last 11,282 test set inputs for the Adult dataset, and 6,352 test set input for the Diabetes dataset. 
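A minimal scikit-learn sketch of this random-forest instantiation of PATE follows; the shard-splitting helper and function names are illustrative assumptions, with n_estimators = 100 and γ = 0.05 taken from the setup described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_rf_teachers(X, y, n_teachers=250, seed=0):
    """Partition (X, y) into disjoint shards and fit one forest per shard."""
    order = np.random.RandomState(seed).permutation(len(X))
    return [RandomForestClassifier(n_estimators=100, random_state=seed).fit(X[idx], y[idx])
            for idx in np.array_split(order, n_teachers)]

def noisy_labels(teachers, X_student, n_classes, gamma=0.05, seed=0):
    """Label student inputs via the noisy aggregation mechanism.
    Assumes teachers predict integer classes in [0, n_classes)."""
    rng = np.random.default_rng(seed)
    votes = np.stack([t.predict(X_student) for t in teachers])  # (n_teachers, n_points)
    counts = np.stack([(votes == j).sum(axis=0) for j in range(n_classes)], axis=1)
    return np.argmax(counts + rng.laplace(scale=1.0 / gamma, size=counts.shape), axis=1)
```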
These numbers deliberately leave out some of the test set, which allowe us to observe how the student performance-privacy trade-off was impacted by varying the numbe of private labels, as well as the Laplacian scale used when computing these labels.\nFor the Adult dataset, we find that our student model achieves an 83% accuracy for an (e, ) : (2.66, 10-5) differential privacy bound. Our non-private model on the dataset achieves 85% accu racy, which is comparable to the state-of-the-art accuracy of 86% on this dataset (Poulos & Valle 2016). For the Diabetes dataset, we find that our privacy-preserving student model achieves a. 93.94% accuracy for a (e, ) = (1.44, 10-5) differential privacy bound. Our non-private mode. on the dataset achieves 93.81% accuracy.\nIn order to further demonstrate the general applicability of our approach, we performed experiments on two additional datasets. While our experiments on MNIST and SVHN in Section|used con volutional neural networks and GANs, here we use random forests to train our teacher and student models for both of the datasets. Our new results on these datasets show that, despite the differing data types and architectures. we are able to provide meaningful privacy guarantees."}] |
SyOvg6jxx | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning (RL) studies an agent acting in an initially unknown environment, learning through trial and error to maximize rewards. It is impossible for the agent to act near-optimally unti it has sufficiently explored the environment and identified all of the opportunities for high reward, in all scenarios. A core challenge in RL is how to balance exploration--actively seeking out novel states and actions that might yield high rewards and lead to long-term gains; and exploitation-maximizing short-term rewards using the agent's current knowledge. While there are exploration techniques for finite MDPs that enjoy theoretical guarantees, there are no fully satisfying techniques for high dimensional state spaces; therefore, developing more general and robust exploration techniques is an active area of research.\n*These authors contributed equally\nCount-based exploration algorithms are known to perform near-optimally when. used in conjunction with tabular reinforcement learning (RL) methods for solving. small discrete Markov decision processes (MDPs). It is generally thought that. count-based methods cannot be applied in high-dimensional state spaces, since. most states will only occur once. Recent deep RL exploration strategies are able to. deal with high-dimensional continuous state spaces through complex heuristics. often relying on optimism in the face of uncertainty or intrinsic motivation. In. this work, we describe a surprising finding: a simple generalization of the classic. count-based approach can reach near state-of-the-art performance on various high. dimensional and/or continuous deep RL benchmarks. States are mapped to hash. codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based. exploration theory. We find that simple hash functions can achieve surprisingly good. results on many challenging tasks. Furthermore, we show that a domain-dependent. learned hash code may further improve these results. Detailed analysis reveals. important aspects of a good hash function: 1) having appropriate granularity and. 2) encoding information relevant to solving the MDP. This exploration strategy. achieves near state-of-the-art performance on both continuous control tasks and. Atari 2600 games, hence providing a simple yet powerful baseline for solving. MDPs that require considerable exploration.\nMost of the recent state-of-the-art RL results have been obtained using simple exploration strategies such as uniform sampling (Mnih et al.f2015) and i.i.d./correlated Gaussian noise (Schulman et al. 2015} Lillicrap et al. 2015). Although these heuristics are sufficient in tasks with well-shaped. rewards, the sample complexity can grow exponentially (with state space size) in tasks with sparse rewards (Osband et al.] 2016b). Recently developed exploration strategies for deep RL have led. to significantly improved performance on environments with sparse rewards. Bootstrapped DQN\n(Osband et al.]2016a) led to faster learning in a range of Atari 2600 games by training an ensemble of Q-functions. Intrinsic motivation methods using pseudo-counts achieve state-of-the-art performance. on Montezuma's Revenge, an extremely challenging Atari 2600 game (Bellemare et al.]2016). Variational Information Maximizing Exploration (VIME, Houthooft et al.(2016)) encourages the. 
agent to explore by acquiring information about environment dynamics, and performs well on various robotic locomotion problems with sparse rewards. However, we have not seen a very simple and fast method that can work across different domains.

This paper presents a simple approach for exploration, which extends classic count-based methods to high-dimensional, continuous state spaces. We discretize the state space with a hash function and apply a bonus based on the state-visitation count. The hash function can be chosen to appropriately balance generalization across states and distinguishing between states. We select problems from rllab (Duan et al., 2016) and Atari 2600 (Bellemare et al., 2012) featuring sparse rewards, and demonstrate near state-of-the-art performance on several games known to be hard for naive exploration strategies. The main strength of the presented approach is that it is fast, flexible, and complementary to most existing RL algorithms.

In summary, this paper proposes a generalization of classic count-based exploration to high-dimensional spaces through hashing (Section 2), and demonstrates its effectiveness on challenging deep RL benchmark problems while analyzing key components of well-designed hash functions (Section 3).

"}, {"section_index": "1", "section_name": "2.1 NOTATION", "section_text": "This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by (S, A, P, r, ρ₀, γ, T), in which S is the state space, A the action space, P a transition probability distribution, r : S × A → R≥0 a reward function, ρ₀ an initial state distribution, γ ∈ (0, 1] a discount factor, and T the horizon. The goal of RL is to maximize the total expected discounted reward E_{π,P}[Σ_{t=0}^{T} γ^t r(s_t, a_t)] over a policy π, which outputs a distribution over actions given a state.

Some of the classic, theoretically-justified exploration methods are based on counting state-action visitations, and turning this count into a bonus reward. In the bandit setting, the well-known UCB algorithm of Lai & Robbins (1985) chooses the action a_t at time t that maximizes r̂(a_t) + √(2 log t / n(a_t)), where r̂(a_t) is the estimated reward, and n(a_t) is the number of times action a_t was previously chosen. In the MDP setting, some of the algorithms have a similar structure; for example, Model Based Interval Estimation-Exploration Bonus (MBIE-EB) of Strehl & Littman (2008) counts state-action pairs with a table n(s, a) and adds a bonus reward of the form β/√(n(s, a)) to encourage exploring less visited pairs. Kolter & Ng (2009) show that this inverse-square-root dependence is optimal. MBIE and related algorithms assume that the augmented MDP is solved analytically at each timestep, which is only practical for small finite state spaces.

Our approach discretizes the state space with a hash function φ : S → Z. An exploration bonus r⁺ : S → R is added to the reward function, defined as

r⁺(s) = β / √(n(φ(s)))     (1)

where β ∈ R≥0 is the bonus coefficient. Initially the counts n(·) are set to zero for the whole range of φ. For every state s_t encountered at time step t, n(φ(s_t)) is increased by one. The agent is trained with rewards (r + r⁺), while performance is evaluated as the sum of rewards without bonuses.

Note that our approach is a departure from count-based exploration methods such as MBIE-EB since we use a state-space count n(s) rather than a state-action count n(s, a).
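As a concrete illustration of Eq. (1), the following is a minimal Python sketch of the counting scheme; it assumes a discretizing hash function phi is supplied, and all names are ours rather than from a released implementation.

import math
from collections import defaultdict

class HashingBonus:
    def __init__(self, phi, beta=0.01):
        self.phi = phi                   # maps a state to a hashable code
        self.beta = beta                 # bonus coefficient
        self.counts = defaultdict(int)   # n(.), initially zero everywhere

    def update(self, states):
        # Increase n(phi(s)) by one for every state encountered in the batch.
        for s in states:
            self.counts[self.phi(s)] += 1

    def bonus(self, s):
        # r_plus(s) = beta / sqrt(n(phi(s)))
        return self.beta / math.sqrt(self.counts[self.phi(s)])

During training the agent is given r + bonus(s), while reported performance sums the raw rewards r only.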
State-action counts n(s, a) are investigated in Appendix A.6, but no significant performance gains over state counting could be witnessed.

Algorithm 1: Count-based exploration through static hashing
1 Define state preprocessor g : S → R^D
2 (In case of SimHash) Initialize A ∈ R^{k×D} with entries drawn i.i.d. from the standard Gaussian distribution N(0, 1)
3 Initialize a hash table with values n(·) = 0
4 for each iteration j do
5     Collect a set of state-action samples {(s_m, a_m)}_{m=0}^{M} with policy π
6     Compute hash codes through any LSH method, e.g., for SimHash, φ(s_m) = sgn(A g(s_m))
7     Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(s_m)) ← n(φ(s_m)) + 1
8     Update the policy π using rewards {r(s_m, a_m) + β/√(n(φ(s_m)))}_{m=0}^{M} with any RL algorithm

Algorithm 1 summarizes our method. The main idea is to use locality-sensitive hashing (LSH) to convert continuous, high-dimensional data to discrete hash codes. LSH is a popular class of hash functions for querying nearest neighbors based on certain similarity metrics (Andoni & Indyk, 2006).

Clearly, the performance of this method will strongly depend on the choice of hash function φ. One important choice we can make regards the granularity of the discretization: we would like "distant" states to be counted separately while "similar" states are merged. If desired, we can incorporate prior knowledge into the choice of φ, if there is a set of salient state features which are known to be relevant.

A computationally efficient type of LSH is SimHash (Charikar, 2002), which measures similarity by angular distance. SimHash retrieves a binary code of state s ∈ S as

φ(s) = sgn(A g(s)) ∈ {−1, 1}^k     (2)

where g : S → R^D is an optional preprocessing function and A is a k × D matrix with i.i.d. entries drawn from a standard Gaussian distribution N(0, 1). The value of k controls the granularity: higher values lead to fewer collisions and are thus more likely to distinguish states.
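A minimal sketch of Eq. (2) in Python/numpy follows, with an identity mapping standing in for the optional preprocessor g; the class and parameter names are illustrative.

import numpy as np

class SimHash:
    def __init__(self, dim, k=32, rng=np.random.RandomState(0)):
        # A is k x D with i.i.d. standard Gaussian entries, drawn once and fixed.
        self.A = rng.randn(k, dim)

    def code(self, s, g=lambda x: x):
        # phi(s) = sgn(A g(s)) in {-1, 1}^k, returned as a hashable tuple
        # so it can serve directly as a hash-table key.
        return tuple(np.sign(self.A @ g(np.asarray(s, dtype=np.float64))))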
When the MDP states have a complex structure, as is the case with image observations, measuring their similarity directly in pixel space fails to provide the semantic similarity measure one would desire. Previous work in computer vision (Lowe, 1999; Dalal & Triggs, 2005; Tola et al., 2010) introduced manually designed feature representations of images that are suitable for semantic tasks including detection and classification. More recent methods learn complex features directly from data by training convolutional neural networks (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015). Considering these results, it may be difficult for SimHash to cluster states appropriately using only raw pixels.

Figure 1: The autoencoder (AE) architecture; the solid block represents the dense sigmoidal binary code layer, after which noise U(−a, a) is injected. Downsampled 1 × 52 × 52 input frames pass through three 96-filter 6 × 6 convolutional layers and a 1024-unit dense layer into the binary code; the decoder mirrors this with 1024- and 2400-unit dense layers, three transposed convolutions, and a pixel-wise softmax output with 64 bins.

Therefore, we propose to use an autoencoder (AE) consisting of convolutional, dense, and transposed convolutional layers to learn meaningful hash codes in one of its hidden layers. This AE takes as input states s and contains one special dense layer comprised of K saturating activation functions, more specifically sigmoid functions. By rounding the sigmoid output b(s) of this layer to the closest binary number ⌊b(s)⌉, any state s can be binarized.

Since gradients cannot be back-propagated through a rounding function, an alternative method must be used to ensure that distinct states are mapped to distinct binary codes. Therefore, uniform noise U(−a, a) is added to the sigmoid output. By choosing uniform noise with a sufficiently high variance, the AE is only capable of reconstructing distinct inputs s if its hidden dense layer outputs values b(s) that are sufficiently far apart from each other (Gregor et al., 2016). Feeding a state s to the AE input, extracting b(s), and rounding it to ⌊b(s)⌉ yields a learned binary code. As such, the loss function L(·) over a set of collected states {s_n}_{n=1}^{N} is defined as

L({s_n}_{n=1}^{N}) = −(1/N) Σ_{n=1}^{N} [ log p(s_n) − (λ/K) Σ_{i=1}^{K} min{(1 − b_i(s_n))², b_i(s_n)²} ]     (3)

where p(s_n) is the AE likelihood of reconstructing s_n. This objective function consists of a cross-entropy term and a term that pressures the binary code layer to take on binary values, scaled by λ ∈ R≥0. The reasoning behind the latter is that uniform noise U(−a, a) alone is insufficient in case the AE does not use a particular sigmoid unit; this term ensures that an unused binary code output is assigned an arbitrary binary value. When omitting this term, the code is more prone to oscillations, causing unwanted bit flips and destabilizing the counting process.

In order to make the AE train sufficiently fast, which is required since it is updated during the agent's training, we make use of a pixel-wise softmax output layer (van den Oord et al., 2016) that shares weights between all pixels. The different softmax outputs merge together pixel intensities into discrete bins. The architectural details are described in Appendix A.1 and are depicted in Figure 1. Because the code dimension often needs to be large in order to correctly reconstruct the input, we apply a downsampling procedure to the resulting binary code ⌊b(s)⌉, which can be done through random projection to a lower-dimensional space via SimHash as in Eq. (2).

Algorithm 2: Count-based exploration using learned hash codes
1 Define state preprocessor g : S → B^K as the binary code resulting from the autoencoder (AE)
2 Initialize A ∈ R^{k×K} with entries drawn i.i.d. from the standard Gaussian distribution N(0, 1)
3 Initialize a hash table with values n(·) = 0
4 for each iteration j do
5     Collect a set of state-action samples {(s_m, a_m)}_{m=0}^{M} with policy π
6     Add the state samples {s_m} to a FIFO replay pool R
7     if j mod j_update = 0 then
8         Update the AE by minimizing the loss function in Eq. (3) using samples drawn from the replay pool
9     Compute g(s_m) = ⌊b(s_m)⌉, the K-dim rounded hash code for s_m learned by the AE
10    Project g(s_m) to a lower dimension k via SimHash as φ(s_m) = sgn(A g(s_m))
11    Update the hash table counts ∀m : 0 ≤ m ≤ M as n(φ(s_m)) ← n(φ(s_m)) + 1
12    Update the policy π using rewards {r(s_m, a_m) + β/√(n(φ(s_m)))}_{m=0}^{M} with any RL algorithm

On the one hand, it is important that the mapping from state to code remains relatively consistent over time, which is nontrivial as the AE is constantly updated according to the latest data (Algorithm 2, line 8). An obvious solution would be to significantly downsample the binary code to a very low dimension, or to slow down the training process. But on the other hand, the code has to remain relatively unique for states that are both distinct and close together on the image manifold. This is tackled both by the second term in Eq. (3) and by the saturating behavior of the sigmoid units.
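The binarization pressure of Eq. (3) and the noise injection can be sketched as follows (numpy, with λ = 10 and a = 0.3 as in Appendix A.1; the function names are ours):

import numpy as np

def binarization_penalty(b, lam=10.0):
    # For every code unit, penalize the squared distance to the nearest
    # binary value, scaled by lambda / K and summed over the K code units.
    K = b.shape[-1]
    return lam / K * np.minimum((1.0 - b) ** 2, b ** 2).sum(axis=-1)

def noisy_code(b, a=0.3, rng=np.random):
    # Uniform noise U(-a, a) injected at the sigmoid code layer during training.
    return b + rng.uniform(-a, a, size=b.shape)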
As such, states that are already well represented in the AE hidden layers tend to saturate the sigmoid units, causing the resulting loss gradients to be close to zero and making the code less prone to change.

Our experiments are designed to answer the following questions:
1. Can count-based exploration through hashing improve performance significantly across different domains? How does the proposed method compare to the current state of the art in exploration for deep RL?
2. What is the impact of learned or static state preprocessing on the overall performance when image observations are used?
3. What factors contribute to good performance, e.g., what is the appropriate level of granularity of the hash function?

To answer question 1, we run the proposed method on deep RL benchmarks (rllab and ALE) that feature sparse rewards, and compare it to other state-of-the-art algorithms. Question 2 is answered by trying out different image preprocessors on Atari 2600 games. Finally, we investigate question 3 in Sections 3.3 and 3.4. Trust Region Policy Optimization (TRPO, Schulman et al., 2015) is chosen as the RL algorithm for all experiments, because it can handle both discrete and continuous action spaces, it can conveniently ensure stable improvement in the policy performance, and it is relatively insensitive to hyperparameter changes. The hyperparameter settings are reported in Appendix A.1.

"}, {"section_index": "2", "section_name": "3.1 CONTINUOUS CONTROL", "section_text": "The rllab benchmark (Duan et al., 2016) consists of various control tasks to test deep RL algorithms. We selected several variants of the basic and locomotion tasks that use sparse rewards, as shown in Figure 2, and adopt the experimental setup as defined in Houthooft et al. (2016); a description can be found in Appendix A.2. These tasks are all highly difficult to solve with naive exploration strategies, such as adding Gaussian noise to the actions.

Figure 2: Illustrations of the rllab tasks used in the continuous control experiments, namely MountainCar, CartPoleSwingup, SwimmerGather, and HalfCheetah; taken from Duan et al. (2016).

Figure 3: Mean average return of different algorithms on rllab tasks with sparse rewards ((a) MountainCar, (b) CartPoleSwingup, (c) SwimmerGather, (d) HalfCheetah); the solid line represents the mean average return, while the shaded area represents one standard deviation, over 5 seeds for the baseline and SimHash.

Figure 3 shows the results of TRPO (baseline), TRPO-SimHash, and VIME (Houthooft et al., 2016) on the classic tasks MountainCar and CartPoleSwingup, the locomotion task HalfCheetah, and the hierarchical task SwimmerGather. Using count-based exploration with hashing is capable of reaching the goal in all environments (which corresponds to a nonzero return), while baseline TRPO with Gaussian control noise fails completely. Although TRPO-SimHash picks up the sparse reward on HalfCheetah, it does not perform as well as VIME. In contrast, the performance of SimHash is comparable with VIME on MountainCar, while it outperforms VIME on SwimmerGather.

"}, {"section_index": "3", "section_name": "3.2 ARCADE LEARNING ENVIRONMENT", "section_text": "The Arcade Learning Environment (ALE, Bellemare et al., 2012), which consists of Atari 2600 video games, is an important benchmark for deep RL due to its high-dimensional state space and wide
In order to demonstrate the effectiveness of the proposed exploration strategy, six. games are selected featuring long horizons while requiring significant exploration: Freeway, Frostbite. Gravitar, Montezuma's Revenge, Solaris, and Venture. The agent is trained for 50o iterations in all. experiments, with each iteration consisting of O.1 M steps (the TRPO batch size, corresponds to O.4 M. frames). Policies and value functions are neural networks with identical architectures to (Mnih et al. 2016). Although the policy and baseline take into account the previous four frames, the counting. algorithm only looks at the latest frame..\nBASs To compare with the autoencoder-based learned hash code, we propose using Basic Abstrac. tion of the ScreenShots (BASS, also called Basic; see|Bellemare et al.(2012) as a static preprocessing. function g. BASS is a hand-designed feature transformation for images in Atari 2600 games. BASS. builds on the following observations specific to Atari: 1) the game screen has a low resolution, 2. most objects are large and monochrome, and 3) winning depends mostly on knowing object location and motions. We designed an adapted version of BASs that divides the RGB screen into square. cells, computes the average intensity of each color channel inside a cell, and assigns the resulting. values to bins that uniformly partition the intensity range [0, 255]. Mathematically, let C be the cel. size (width and height), B the number of bins, (i, j) cell location, (x, y) pixel location, and z the. channel.\nfeature(i, j, z) = B (x,y)e cell(i,j) I(x, y, Z 255C2\nAfterwards. the resulting integer-valued feature tensor is converted to an integer hash code ((st) ir Line[6|of Algorithm|1). A BASS feature can be regarded as a miniature that efficiently encodes object locations, but remains invariant to negligible object motions. It is easy to implement and introduces little computation overhead. However, it is designed for generic Atari game images and may not capture the structure of each specific game very well.\nTable 1: Atari 2600: average total reward after training for 50 M time steps. Boldface numbers indicate best results. Italic numbers are the best among our methods..\nFreeway Frostbite1 Gravitar Montezuma Solaris Venture TRPO (baseline) 16.5 2869 486 0 2758 121 TRPO-pixel-SimHash 31.6 4683 468 0 2897 263 TRPO-BASS-SimHash 28.4 3150 604 238 1201 616 TRPO-AE-SimHash 33.5 5214 482 75 4467 445 Double-DQN 33.3 1683 412 0 3068 98.0 Dueling network. 0.0 4672 588 0 2251 497 Gorila 11.7 605 1054 4 N/A 1245 DQN Pop-Art 33.4 3469 483 0 4544 1172 A3C+ 27.3 507 246 142 2175 0 29.2 1450 pseudo-count2 3439 369\n1 WhileVezhnevets et al.(2016) reported best score 8108, their evaluation was based on top 5 agents trained with 500M time steps, hence not comparable. 2 Results reported only for 25 M time steps (100 M frames).\nWe compare our results to double DQN (van Hasselt et al.| 2016b), dueling network (Wang et al 2016), A3C+ (Bellemare et al.] 2016), double DQN with pseudo-counts (Bellemare et al. 2016) Gorila (Nair et al.[2015), and DQN Pop-Art (van Hasselt et al.]|2016a) on the \"null op\" metrid2] We show training curves in Figure4|and summarize all results in Table 1. Surprisingly, TRPO-pixel SimHash already outperforms the baseline by a large margin and beats the previous best result on Frostbite. TRPO-BASS-SimHash achieves significant improvement over TRPO-pixel-SimHash on\n1The original BASS exploits the fact that at most 128 colors can appear on the screen. 
Table 1: Atari 2600: average total reward after training for 50 M time steps. Boldface numbers indicate best results; italic numbers are the best among our methods.

                      Freeway  Frostbite¹  Gravitar  Montezuma  Solaris  Venture
TRPO (baseline)       16.5     2869        486       0          2758     121
TRPO-pixel-SimHash    31.6     4683        468       0          2897     263
TRPO-BASS-SimHash     28.4     3150        604       238        1201     616
TRPO-AE-SimHash       33.5     5214        482       75         4467     445
Double-DQN            33.3     1683        412       0          3068     98.0
Dueling network       0.0      4672        588       0          2251     497
Gorila                11.7     605         1054      4          N/A      1245
DQN Pop-Art           33.4     3469        483       0          4544     1172
A3C+                  27.3     507         246       142        2175     0
pseudo-count²         29.2     1450        -         3439       -        369

¹ While Vezhnevets et al. (2016) reported a best score of 8108, their evaluation was based on the top 5 agents trained with 500 M time steps, hence not comparable.
² Results reported only for 25 M time steps (100 M frames).

We compare our results to double DQN (van Hasselt et al., 2016b), dueling network (Wang et al., 2016), A3C+ (Bellemare et al., 2016), double DQN with pseudo-counts (Bellemare et al., 2016), Gorila (Nair et al., 2015), and DQN Pop-Art (van Hasselt et al., 2016a) on the "null op" metric². We show training curves in Figure 4 and summarize all results in Table 1. Surprisingly, TRPO-pixel-SimHash already outperforms the baseline by a large margin and beats the previous best result on Frostbite. TRPO-BASS-SimHash achieves significant improvement over TRPO-pixel-SimHash on Montezuma's Revenge and Venture, where it captures object locations better than other methods³. TRPO-AE-SimHash achieves near state-of-the-art performance on Freeway, Frostbite and Solaris⁴.

¹ The original BASS exploits the fact that at most 128 colors can appear on the screen. Our adapted version does not make this assumption.
² The agent takes a random number (within 20) of no-op actions at the beginning of each episode.

Figure 4: Atari 2600 games ((a) Freeway, (b) Frostbite, (c) Gravitar, (d) Montezuma's Revenge, (e) Solaris, (f) Venture): the solid line is the mean average undiscounted return per iteration, while the shaded areas represent one standard deviation, over 5 seeds for the baseline, TRPO-pixel-SimHash, and TRPO-BASS-SimHash, and over 3 seeds for TRPO-AE-SimHash.

As observed in Table 1, preprocessing images with BASS or using a learned hash code through the AE leads to much better performance on Gravitar, Montezuma's Revenge and Venture. Therefore, a static or adaptive preprocessing step can be important for a good hash function.

In conclusion, our count-based exploration method is able to achieve remarkable performance gains even with simple hash functions like SimHash on the raw pixel space. If coupled with domain-dependent state preprocessing techniques, it can sometimes achieve far better results.

"}, {"section_index": "4", "section_name": "3.3 GRANULARITY", "section_text": "While our proposed method is able to achieve remarkable results without requiring much tuning, the granularity of the hash function should be chosen wisely. Granularity plays a critical role in count-based exploration, where the hash function should cluster states without under-generalizing or over-generalizing. Table 2 summarizes granularity parameters for our hash functions. In Table 3 we summarize the performance of TRPO-pixel-SimHash under different granularities. We choose Frostbite and Venture, on which TRPO-pixel-SimHash outperforms the baseline, and choose the reward bonus coefficient β = 0.01 √(256/k) to keep average bonus rewards at approximately the same scale. k = 16 only corresponds to 65536 distinct hash codes, which is insufficient to distinguish between semantically distinct states and hence leads to worse performance. We observed that k = 512 tends to capture trivial image details in Frostbite, leading the agent to believe that every state is new and equally worth exploring. Similar results are observed while tuning the granularity parameters for TRPO-BASS-SimHash and TRPO-AE-SimHash.

The best granularity depends on both the hash function and the MDP. While adjusting the granularity parameters, we observed that it is important to lower the bonus coefficient as granularity is increased. This is because a higher granularity is likely to cause lower state counts, leading to higher bonus rewards that may overwhelm the true rewards.

Table 2: Granularity parameters of various hash functions.

Table 3: Average score at 50 M time steps achieved by TRPO-pixel-SimHash.

k          16    64    128   256   512
Frostbite  3326  4029  3932  4683  1117
Venture    0     218   142   263   306

Montezuma's Revenge is widely known for its extremely sparse rewards and difficult exploration (Bellemare et al., 2016). While our method does not outperform Bellemare et al. (2016) on this game,
we investigate the reasons behind this through various experiments. The experiment process below again demonstrates the importance of a hash function having the correct granularity and encoding relevant information for solving the MDP.

Our first attempt is to use game RAM states instead of image observations as inputs to the policy (details in Appendix A.1), which leads to a game score of 2500 with TRPO-BASS-SimHash. Our second attempt is to manually design a hash function that incorporates domain knowledge, called SmartHash, which uses an integer-valued vector consisting of the agent's (x, y) location, room number and other useful RAM information as the hash code (details in Appendix A.3). The best SmartHash agent is able to obtain a score of 3500. Still the performance is not optimal. We observe that a slight change in the agent's coordinates does not always result in a semantically distinct state, and thus the hash code may remain unchanged. Therefore we choose a grid size s and replace the x coordinate by ⌊(x − x_min)/s⌋ (similarly for y). The bonus coefficient is chosen as β = 0.01s to maintain the scale relative to the true reward⁵ (see Table 4). Finally, the best agent is able to obtain 6600 total rewards after training for 1000 iterations (1000 M time steps), with a grid size s = 10.

Table 4: Average score at 50 M time steps achieved by TRPO-SmartHash on Montezuma's Revenge (RAM observations).

Figure 5: SmartHash results on Montezuma's Revenge (RAM observations), comparing exact enemy locations, ignored enemies, and random enemy locations: the solid line is the mean average undiscounted return per iteration, while the shaded areas represent one standard deviation, over 5 seeds.

During our pursuit, we had another interesting discovery: the ideal hash function should not simply cluster states by their visual similarity, but instead by their relevance to solving the MDP. We experimented with including enemy locations in the first two rooms into SmartHash (s = 10), and observed that the average score dropped to 1672 (at iteration 1000). Though it is important for the agent to dodge enemies, the agent also erroneously "enjoys" watching enemy motions at a distance (since new states are constantly observed) and "forgets" that its main objective is to enter other rooms. An alternative hash function keeps the same entry "enemy locations", but instead only puts randomly sampled values in it, which surprisingly achieves better performance (3112). However, by ignoring enemy locations altogether, the agent achieves a much higher score (5661) (see Figure 5). In retrospect, we examined the hash codes generated by BASS-SimHash and found that the codes clearly distinguish between visually different states (including various enemy locations), but fail to emphasize that the agent needs to explore different rooms. Again this example showcases the importance of encoding relevant information in designing hash functions.

⁵ The bonus scaling is chosen by assuming all states are visited uniformly and that the average bonus reward should remain the same for any grid size.
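A SmartHash-style code can be sketched as follows, assuming a 128-byte RAM vector and the indices listed in Table 5 of Appendix A.3; the grid size s coarsens the coordinates as described above, with x_min taken as 0 for illustration.

def smart_hash(ram, s=10):
    # ram: sequence of 128 integers in [0, 255] (Atari 2600 RAM).
    room = ram[3]                        # room number
    x, y = ram[42] // s, ram[43] // s    # floor((x - x_min) / s), x_min = 0
    beam_walls = ram[27]                 # beam walls on/off
    objects = ram[67]                    # objects in the first room
    return (room, x, y, beam_walls, objects)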
"}, {"section_index": "5", "section_name": "4 RELATED WORK", "section_text": "Classic count-based methods such as MBIE (Strehl & Littman, 2005) and MBIE-EB (Kolter & Ng, 2009) solve an approximate Bellman equation as an inner loop before the agent takes an action (Strehl & Littman, 2008). As such, bonus rewards are propagated immediately throughout the state-action space. In contrast, contemporary deep RL algorithms propagate the bonus signal based on rollouts collected from interacting with environments, with value-based (Mnih et al., 2015) or policy gradient-based (Schulman et al., 2015; Mnih et al., 2016) methods, at limited speed. In addition, our proposed method is intended to work with contemporary deep RL algorithms; it differs from classical count-based methods in that it relies on visiting unseen states first, before the bonus reward can be assigned, making uninformed exploration strategies still a necessity at the beginning. Filling the gaps between our method and classic theories is an important direction of future research.

Another type of exploration is curiosity-based exploration. These methods try to capture the agent's surprise about transition dynamics. As the agent tries to optimize for surprise, it naturally discovers novel states. We refer the reader to Schmidhuber (2010) and Oudeyer & Kaplan (2007) for an extensive review on curiosity and intrinsic rewards.

The most related exploration strategy is proposed by Bellemare et al. (2016), in which an exploration bonus is added inversely proportional to the square root of a pseudo-count quantity. A state pseudo-count is derived from its log-probability improvement according to a density model over the state space, which in the limit converges to the empirical count. Our method is similar to the pseudo-count approach in the sense that both methods perform approximate counting to obtain the necessary generalization over unseen states. The difference is that a density model has to be designed and learned to achieve good generalization for pseudo-counts, whereas in our case generalization is obtained by a wide range of simple hash functions (not necessarily SimHash). Another interesting connection concerns the total number of states visited. Another method similar to hashing is proposed by Abel et al. (2016), which clusters states and counts cluster centers instead of the true states, but this method has yet to be tested on standard exploration benchmark problems.

A related line of classical exploration methods is based on the idea of optimism in the face of uncertainty (Brafman & Tennenholtz, 2002) but is not restricted to using counting to implement "optimism", e.g., R-Max (Brafman & Tennenholtz, 2002), UCRL (Jaksch et al., 2010), and E3 (Kearns & Singh, 2002). These methods, similar to MBIE and MBIE-EB, have theoretical guarantees in tabular settings.

Bayesian RL methods (Kolter & Ng, 2009; Guez et al., 2014; Sun et al., 2011; Ghavamzadeh et al., 2015), which keep track of a distribution over MDPs, are an alternative to optimism-based methods. Extensions to continuous state space have been proposed by Pazis & Parr (2013) and Osband et al. (2016b).

Several exploration strategies for deep RL have been proposed recently to handle high-dimensional state spaces. Houthooft et al. (2016) propose VIME, in which information gain is measured in Bayesian neural networks modeling the MDP dynamics and used as an exploration bonus. Stadie et al.
(2015) propose to use the prediction error of a learned dynamics model as an exploration bonus. Thompson sampling through bootstrapping is proposed by Osband et al. (2016a), using bootstrapped Q-functions.

This paper demonstrates that a generalization of classical counting techniques through hashing is able to provide an appropriate signal for exploration, even in continuous and/or high-dimensional MDPs using function approximators, resulting in near state-of-the-art performance across benchmarks. It provides a simple yet powerful baseline for solving MDPs that require informed exploration.

"}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers. Adam Stooke gratefully acknowledges funding from a Fannie and John Hertz Foundation fellowship. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO)."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Marc G Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems (NIPS), 2016.

Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.

Ronen I Brafman and Moshe Tennenholtz. R-max, a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213-231, 2002.

Moses S Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings of the Thirty-Fourth Annual ACM Symposium on Theory of Computing (STOC), 2002.

Graham Cormode and S Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. Journal of Algorithms, 55(1):58-75, 2005.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp. 448-456, 2015.

Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563-1600, 2010.

Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002.

David G Lowe. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pp. 1150-1157. IEEE, 1999.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning.
arXiv preprint arXiv:1602.01783, 2016.

Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015.

John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust region policy optimization. In International Conference on Machine Learning (ICML), 2015.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Bradly C Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.

Alexander L Strehl and Michael L Littman. A theoretical analysis of model-based interval estimation. In International Conference on Machine Learning (ICML), pp. 856-863, 2005.

Alexander L Strehl and Michael L Littman. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.

Yi Sun, Faustino Gomez, and Jurgen Schmidhuber. Planning to be surprised: Optimal Bayesian exploration in dynamic environments. In Artificial General Intelligence, pp. 41-51, 2011.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning (ICML), 2016.

Hado van Hasselt, Arthur Guez, Matteo Hessel, and David Silver. Learning functions across many orders of magnitudes. arXiv preprint arXiv:1602.07714, 2016a.

Alexander Vezhnevets, Volodymyr Mnih, John Agapiou, Simon Osindero, Alex Graves, Oriol Vinyals, and Koray Kavukcuoglu. Strategic attentive writer for learning macro-actions. In Advances in Neural Information Processing Systems (NIPS), 2016.

Ziyu Wang, Nando de Freitas, and Marc Lanctot. Dueling network architectures for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.

"}, {"section_index": "8", "section_name": "A.1 HYPERPARAMETER SETTINGS", "section_text": "For the rllab experiments, we used a batch size of 5000 for all tasks except SwimmerGather, for which we used a batch size of 50000. CartPoleSwingup makes use of a neural network policy with one layer of 32 tanh units. The other tasks make use of a two-layer neural network policy, of 32 tanh units each for MountainCar and HalfCheetah, and of 64 and 32 tanh units for SwimmerGather. The outputs are modeled by a fully factorized Gaussian distribution N(μ, σ²), in which μ is modeled as the network output, while σ is a parameter. CartPoleSwingup makes use of a neural network baseline with one layer of 32 ReLU units, while all other tasks make use of a linear baseline function. For all tasks, we used a TRPO step size of 0.01 and discount factor γ = 0.99. We choose the SimHash parameter k = 32 and bonus coefficient β = 0.01, found through a coarse grid search.

For the Atari experiments, a batch size of 100000 is used, while the KL divergence step size is set to 0.01. The policy and baseline both have the following architecture: 2 convolutional layers with respectively 16 and 32 filters, sizes 8 × 8 and 4 × 4, strides 4 and 2, using no padding, feeding into a single hidden layer of 256 units. The nonlinearities are rectified linear units (ReLUs). The input frames are downsampled to 52 × 52. The input to policy and baseline consists of the 4 previous frames, corresponding to the frame skip of 4. The discount factor was set to γ = 0.995.
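For illustration, the policy trunk described above can be rendered as the following PyTorch sketch (our rendering, not the original implementation):

import torch.nn as nn

class AtariPolicyTrunk(nn.Module):
    # Two conv layers (16 and 32 filters, 8x8/stride 4 and 4x4/stride 2,
    # no padding) over 4 stacked 52x52 frames, then a 256-unit hidden layer.
    def __init__(self, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 5 * 5, 256), nn.ReLU(),   # 52 -> 12 -> 5 spatially
            nn.Linear(256, n_actions),
        )

    def forward(self, frames):   # frames: (batch, 4, 52, 52)
        return self.net(frames)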
All inputs are rescaled to [−1, 1] element-wise. All experiments used 5 different training seeds, except the experiments with the learned hash code, which use 3 different training seeds. Batch normalization (Ioffe & Szegedy, 2015) is used at each policy and baseline layer. TRPO-pixel-SimHash uses binary codes of size k = 256; BASS (TRPO-BASS-SimHash) extracts features using cell size C = 20 and B = 20 bins. The autoencoder for the learned embedding (TRPO-AE-SimHash) uses a binary hidden layer of 512 bits, which are projected down to 64 bits.

RAM states in Atari 2600 games are integer-valued vectors of length 128 in the range [0, 255]. Experiments on Montezuma's Revenge with RAM observations use a policy consisting of 2 hidden layers, each of size 32. RAM states are rescaled to the range [−1, 1]. Unlike images, only the current RAM is shown to the agent. Experiment results are averaged over 10 random seeds.

The autoencoder used for the learned hash code has a 512-bit binary code layer, using sigmoid units, to which uniform noise U(−a, a) with a = 0.3 is added. The loss function in Eq. (3), using λ = 10, is updated every j_update = 3 iterations. The architecture looks as follows: an input layer of size 52 × 52, representing the image luminance, is followed by 3 consecutive 6 × 6 convolutional layers with stride 2 and 96 filters, which feed into a fully connected layer of size 1024 that connects to the binary code layer. This binary code layer feeds into a fully-connected layer of 1024 units, connecting to a fully-connected layer of 2400 units. This layer feeds into 3 consecutive 6 × 6 transposed convolutional layers, of which the final one connects to a pixel-wise softmax layer with 64 bins, representing the pixel intensities. Moreover, label smoothing is applied to the different softmax bins, in which the log-probability of each of the bins is increased by 0.003, before normalizing. The softmax weights are shared among all pixels. All output nonlinearities are ReLUs; Adam (Kingma & Ba, 2015) is used as an optimization scheme; batch normalization (Ioffe & Szegedy, 2015) is applied to each layer. The architecture was shown in Figure 1 of Section 2.3.

"}, {"section_index": "9", "section_name": "A.2 DESCRIPTION OF THE ADAPTED RLLAB TASKS", "section_text": "This section describes the continuous control environments used in the experiments. The tasks are implemented as described in Duan et al. (2016), following the sparse reward adaptation of Houthooft et al. (2016). The tasks have the following state and action dimensions: CartPoleSwingup, S ⊆ R⁴, A ⊆ R¹; MountainCar, S ⊆ R³, A ⊆ R¹; HalfCheetah, S ⊆ R²⁰, A ⊆ R⁶; SwimmerGather, S ⊆ R³³, A ⊆ R². For the sparse reward experiments, the tasks have been modified as follows. In CartPoleSwingup, the agent receives a reward of +1 when cos(θ) > 0.8, with θ the pole angle. Therefore, the agent has to figure out how to swing up the pole in the absence of any initial external rewards. In MountainCar, the agent receives a reward of +1 when the goal state is reached, namely escaping the valley from the right side. In HalfCheetah, the agent receives a reward of +1 when x_body > 5. As such, it has to figure out how to move forward without any initial external reward. The time horizon is set to T = 500 for all tasks.
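The sparse CartPoleSwingup reward can be sketched as a thin wrapper, assuming a Gym-style environment whose observation exposes the pole angle; the observation index used below is an assumption for illustration.

import math

class SparseCartPoleSwingup:
    def __init__(self, env):
        self.env = env

    def step(self, action):
        state, _, done, info = self.env.step(action)
        pole_angle = state[2]                        # assumed index of theta
        reward = 1.0 if math.cos(pole_angle) > 0.8 else 0.0
        return state, reward, done, info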
"}, {"section_index": "10", "section_name": "A.3 EXAMPLES OF ATARI 2600 RAM ENTRIES", "section_text": "Table 5 lists the semantic interpretation of certain RAM entries in Montezuma's Revenge. SmartHash, as described in Section 3.4, makes use of RAM indices 3, 42, 43, 27, and 67. "Beam walls" are deadly barriers that occur periodically in some rooms.

Table 5: Interpretation of particular RAM entries in Montezuma's Revenge.

RAM index  Group       Meaning
3          room        room number
42         agent       x coordinate
43         agent       y coordinate
52         agent       orientation (left/right)
27         beam walls  on/off
83         beam walls  beam wall countdown (on: 0, off: 36 → 0)
0          counter     counts from 0 to 255 and repeats
55         counter     death scene countdown
67         objects     existence of objects (doors, skull and key) in the 1st room
47         skull       x coordinate (both 1st and 2nd rooms)

"}, {"section_index": "11", "section_name": "A.4 ANALYSIS OF LEARNED BINARY REPRESENTATION", "section_text": "Figure 6 shows the downsampled codes learned by the autoencoder for several Atari 2600 games (Frostbite, Freeway, and Montezuma's Revenge). Each row depicts 50 consecutive frames (from 0 to 49, going from left to right, top to bottom). The pictures in the right column depict the binary codes that correspond with each of these frames (one frame per row). Figure 7 shows the reconstructions of several subsequent images according to the autoencoder.

Figure 6: Frostbite, Freeway, and Montezuma's Revenge: subsequent frames (left) and corresponding code (right); the frames are ordered from left (starting with frame number 0) to right, top to bottom; the vertical axis in the right images corresponds to the frame number.

Figure 7: Freeway: subsequent frames and corresponding code (top); the frames are ordered from left (starting with frame number 0) to right, top to bottom; the vertical axis in the right images corresponds to the frame number. Within each image, the left picture is the input frame, the middle picture the reconstruction, and the right picture the reconstruction error.

We experimented with directly building a hashing dictionary with keys φ(s) and values the state counts, but observed an unnecessary increase in computation time. Our implementation converts the integer hash codes into binary numbers and then into the "bytes" type in Python. The hash table is a dictionary using those bytes as keys.

However, an alternative technique called Count-Min Sketch (Cormode & Muthukrishnan, 2005), with a data structure identical to counting Bloom filters (Fan et al., 2000), can count with a fixed integer array and thus reduce computation time. Specifically, let p¹, ..., pˡ be distinct large prime numbers and define φʲ(s) = φ(s) mod pʲ. The count of state s is returned as min_{1≤j≤l} nʲ(φʲ(s)). To increase the count of s, we increment nʲ(φʲ(s)) by 1 for all j. Intuitively, the method replaces φ by weaker hash functions, while it reduces the probability of over-counting by reporting only counts agreed upon by all such weaker hash functions. The final hash code is represented as (φ¹(s), ..., φˡ(s)).

Throughout all experiments above, the prime numbers for the counting Bloom filter are 999931, 999953, 999959, 999961, 999979, and 999983, which we abbreviate as "6 M". In addition, we experimented with 6 other prime numbers, each approximately 15 M, which we abbreviate as "90 M".
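The counting scheme above can be sketched as follows, using the "6 M" primes; class and method names are illustrative.

import numpy as np

PRIMES = [999931, 999953, 999959, 999961, 999979, 999983]

class CountMinTable:
    def __init__(self, primes=PRIMES):
        self.primes = primes
        self.tables = [np.zeros(p, dtype=np.int64) for p in primes]

    def inc(self, code):
        # Increment n_j(phi_j(s)) for all j, where phi_j(s) = code mod p_j.
        for table, p in zip(self.tables, self.primes):
            table[code % p] += 1

    def count(self, code):
        # Report the count agreed upon by all weaker hash functions.
        return min(int(t[code % p]) for t, p in zip(self.tables, self.primes))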
As we can see in Figure 8, counting states with a dictionary or with Bloom filters leads to similar performance, but the computation time of the latter is lower. Moreover, there is little difference between direct counting and using a very large table for Bloom filters, as the average bonus rewards are almost the same, indicating the same degree of exploration-exploitation trade-off. On the other hand, Bloom filters require a fixed table size, which may not be known beforehand.

Figure 8: Statistics of TRPO-pixel-SimHash (k = 256) on Frostbite ((a) mean average undiscounted return, (b) average bonus reward), comparing direct counting with Bloom filters (6 M and 90 M). Solid lines are the mean, while the shaded areas represent one standard deviation. Results are derived from 10 random seeds. Direct counting with a dictionary uses 2.7 times more computation than counting Bloom filters (6 M or 90 M).

Theory of Bloom filters Bloom filters (Bloom, 1970) are popular for determining whether a data sample s′ belongs to a dataset D. Suppose we have l functions φʲ that independently assign each data sample to an integer between 1 and p uniformly at random. Initially 1, 2, ..., p are marked as 0. Then every s ∈ D is "inserted" through marking φʲ(s) as 1 for all j. A new sample s′ is reported as a member of D only if φʲ(s′) is marked as 1 for all j. A Bloom filter has zero false negative rate (any s ∈ D is reported a member), while the false positive rate (probability of reporting a nonmember as a member) decays exponentially in l.

Though Bloom filters support data insertion, they do not allow data deletion. Counting Bloom filters (Fan et al., 2000) maintain a counter n(·) for each number between 1 and p. Inserting/deleting corresponds to incrementing/decrementing n(φʲ(s)) by 1 for all j. Similarly, s is considered a member only if n(φʲ(s)) > 0 for all j.

Count-Min sketch is designed to support memory-efficient counting without introducing too many over-counts. It maintains a separate count nʲ for each hash function φʲ defined as φʲ(s) = φ(s) mod pʲ, where pʲ is a large prime number. For simplicity, we may assume that pʲ ≈ p ∀j and that φʲ assigns s to any of 1, ..., p with uniform probability.

We now derive the probability of over-counting. Let s be a fixed data sample (not necessarily inserted yet) and suppose a dataset D of N samples is inserted. We assume that pˡ ≫ N. Let n̂ := min_{1≤j≤l} nʲ(φʲ(s)) be the count returned by the Bloom filter. We are interested in computing Prob(n̂ > 0 | s ∉ D). Due to the assumptions about φʲ, we know nʲ(φʲ(s)) ∼ Binomial(N, 1/p). Therefore,

Prob(n̂ > 0 | s ∉ D) = Prob(n̂ > 0, s ∉ D) / Prob(s ∉ D)
                     ≥ (Prob(n̂ > 0) − Prob(s ∈ D)) / Prob(s ∉ D)
                     = (∏_{j=1}^{l} Prob(nʲ(φʲ(s)) > 0) − (1 − (1 − 1/pˡ)ᴺ)) / (1 − 1/pˡ)ᴺ
                     = ((1 − (1 − 1/p)ᴺ)ˡ − (1 − (1 − 1/pˡ)ᴺ)) / (1 − 1/pˡ)ᴺ

Together with the matching upper bound Prob(n̂ > 0)/Prob(s ∉ D), this shows the over-counting probability is approximately (N/p)ˡ, which is negligible for the primes used here.

Apart from the experimental results shown in Table 1 and Table 3, additional experiments have been performed to study several properties of our algorithm.

Hyperparameter sensitivity To study the performance sensitivity to hyperparameter changes, we
focus on evaluating TRPO-RAM-SimHash on the Atari 2600 game Frostbite, where the method has a clear advantage over the baseline. Because the final scores can vary between different random seeds, we evaluated each set of hyperparameters with 30 seeds. To reduce computation time and cost, RAM states are used instead of image observations.

Table 6: TRPO-RAM-SimHash performance robustness to hyperparameter changes on Frostbite.

       β   0.01  0.05  0.1   0.2   0.4   0.8   1.6
k = 64     879   2464  2243  2489  1587  1107  441
k = 128    1475  4248  2801  3239  3621  1543  395
k = 256    2583  4497  4437  7849  3516  2260  374

(β = 0, i.e., the baseline without exploration bonus, scores 397 regardless of k.)

The results are summarized in Table 6. Herein, k refers to the length of the binary code for hashing, while β is the multiplicative coefficient for the reward bonus, as defined in Section 2.2. This table demonstrates that most hyperparameter settings outperform the baseline (β = 0) significantly. Moreover, the final scores show a clear pattern in response to changing hyperparameters. Small β-values lead to insufficient exploration, while large β-values cause the bonus rewards to overwhelm the true rewards. With a fixed k, the scores are roughly concave in β, peaking at around 0.2. Higher granularity k leads to better performance. Therefore, it can be concluded that the proposed exploration method is robust to hyperparameter changes in comparison to the baseline, and that the best parameter settings can be obtained from a relatively coarse-grained grid search.

State and state-action counting Continuing the results in Table 6, the performance of state-action counting is studied using the same experimental setup, summarized in Table 7. In particular, a bonus reward r⁺ = β/√(n(s, a)) instead of r⁺ = β/√(n(s)) is assigned. These results show that the relative performance of state counting compared to state-action counting depends highly on the selected hyperparameter settings. However, we notice that the best performance is achieved using state counting with k = 256 and β = 0.2.

Table 7: Performance comparison between state counting (left of the slash) and state-action counting (right of the slash) using TRPO-RAM-SimHash on Frostbite.

       β   0.01         0.05          0.1           0.2           0.4           0.8           1.6
k = 64     879 / 976    2464 / 1491   2243 / 3954   2489 / 5523   1587 / 5985   1107 / 2052   441 / 742
k = 128    1475 / 808   4248 / 4302   2801 / 4802   3239 / 7291   3621 / 4243   1543 / 1941   395 / 362
k = 256    2583 / 1584  4497 / 5402   4437 / 5431   7849 / 4872   3516 / 3175   2260 / 1238   374 / 96
"}]
Sk8csP5ex | [{"section_index": "0", "section_name": "THE LOSS SURFACE OF RESIDUAL NETWORKS: ENSEMBLES & THE ROLE OF BATCH NORMALIZATION", "section_text": "Etai Littwin & Lior Wolf"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Residual Networks (He et al., 2015) (ResNets) are neural networks with skip connections. These networks, which are a specific case of Highway Networks (Srivastava et al., 2015), present state-of-the-art results in the most competitive computer vision tasks, including image classification and object detection.

Our analysis reveals the mechanism for this dynamic behavior and explains the driving force behind it. This mechanism remarkably takes place within the parameters of Batch Normalization (Ioffe & Szegedy, 2015), which is mostly considered as a normalization and a fine-grained whitening mechanism that addresses the problem of internal covariate shift and allows for faster learning rates.

We show that the scaling introduced by batch normalization determines the depth distribution in the virtual ensemble of the ResNet. These scales dynamically grow as training progresses, shifting the effective ensemble distribution to bigger depths.

The main tool we employ in our analysis is spin glass models. Choromanska et al. (2015a) have created a link between conventional networks and such models, which leads to a comprehensive study of the critical points of neural networks based on the spin glass analysis of Auffinger et al. (2013). In our work, we generalize these results and link ResNets to generalized spin glass models. These models allow us to analyze the dynamic behavior presented above. Finally, we apply the results of Auffinger & Arous (2013) in order to study the loss surface of ResNets.

"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Deep Residual Networks present a premium in performance in comparison to conventional networks of the same depth and are trainable at extreme depths. It has recently been shown that Residual Networks behave like ensembles of relatively shallow networks. We show that these ensembles are dynamic: while initially the virtual ensemble is mostly at depths lower than half the network's depth, as training progresses, it becomes deeper and deeper. The main mechanism that controls the dynamic ensemble behavior is the scaling introduced, e.g., by the Batch Normalization technique. We explain this behavior and demonstrate the driving force behind it. As a main tool in our analysis, we employ generalized spin glass models, which we also use in order to study the number of critical points in the optimization of Residual Networks.

The success of residual networks was attributed to the ability to train very deep networks when employing skip connections (He et al., 2016). A complementary view is presented by Veit et al. (2016), who attribute it to the power of ensembles and present an unraveled view of ResNets that depicts ResNets as an ensemble of networks that share weights, with a binomial depth distribution around half depth. They also present experimental evidence that short paths of lengths shorter than half-depth dominate the ResNet gradient during training.

The analysis presented here shows that ResNets are ensembles with a dynamic depth behavior. When starting the training process, the ensemble is dominated by shallow networks, with depths lower than half-depth. As training progresses, the effective depth of the ensemble increases. This
increase in depth allows the ResNet to increase its effective capacity as the network becomes more and more accurate."}, {"section_index": "3", "section_name": "2 A RECAP OF CHOROMANSKA ET AL. (2015A)", "section_text": "A simple feed forward fully connected network N, with p layers and a single output unit, is considered. Let n_i be the number of units in layer i, such that n_0 is the dimension of the input and n_p = 1. It is further assumed that ReLU activation functions, denoted by R(·), are used. The output Y of the network given an input vector x ∈ R^d can be expressed as

Y = Σ_{i=1}^{d} Σ_{j=1}^{γ} x_{ij} A_{ij} ∏_{k=1}^{p} w_{ij}^{(k)}     (1)

where the first summation is over the network inputs x_1...x_d, and the second is over all paths from input to output. There are γ = ∏_{i=1}^{p} n_i such paths and ∀i, x_{i1} = x_{i2} = ... = x_{iγ}. The variable A_{ij} ∈ {0, 1} denotes whether the path is active, i.e., whether all of the ReLU units along this path are producing positive activations, and the product ∏_{k=1}^{p} w_{ij}^{(k)} represents the specific weight configuration w_{ij}^{(1)}...w_{ij}^{(p)} multiplying x_i given path j. It is assumed throughout the paper that the input variables are sampled i.i.d. from a normal Gaussian distribution.

Definition 1. The mass of the network N is defined as ψ = dγ.

The variables A_{ij} are modeled as independent Bernoulli random variables with a success probability ρ, i.e., each path is equally likely to be active. Therefore,

E_A[Y] = ρ Σ_{i=1}^{d} Σ_{j=1}^{γ} x_{ij} ∏_{k=1}^{p} w_{ij}^{(k)}     (2)

The task of binary classification using the network N with parameters w is considered, using either the hinge loss L_h or the absolute loss L_a:

L_h(w) = E_A[max(0, 1 − Y_x Y)],    L_a(w) = E_A[|Y_x − Y|]     (3)

where Y_x is a random variable corresponding to the true label of sample x. In order to equate either loss with the hamiltonian of the p-spherical spin glass model, a few key approximations are made:

A1 Variable independence - The inputs x_{ij} are assumed to be independent of one another.
A2 Redundancy in network parameterization - It is assumed the set of all the network weights {w_1, w_2, ..., w_N} contains only Λ unique weights such that Λ < N.
A3 Uniformity - It is assumed that all unique weights are close to being evenly distributed on the graph of connections defining the network N. Practically, this means that we assume every node is adjacent to an edge with any one of the Λ unique weights.
A4 Spherical constraint - The following is assumed:

(1/Λ) Σ_{i=1}^{Λ} w_i² = C     (4)

for some constant C > 0.

These assumptions are made for the sake of analysis, and do not necessarily hold. The validity of these assumptions was posed as an open problem in Choromanska et al. (2015b), where a different degree of plausibility was assigned to each. Specifically, A1, as well as the independence assumption on the A_{ij}, were deemed unrealistic, and A2 - A4 plausible. For example, A1 does not hold since each input x_i is associated with many different paths and x_{i1} = x_{i2} = ... = x_{iγ}. See Choromanska et al. (2015a) for further justification of these approximations.

Under A1 - A4, the loss takes the form of a centered Gaussian process on the sphere S^{Λ−1}(√Λ). Specifically, it is shown to resemble the hamiltonian of the spherical p-spin glass model given by:

H_{p,Λ}(w) = (1/Λ^{(p−1)/2}) Σ_{i_1,...,i_p=1}^{Λ} x_{i_1,...,i_p} ∏_{k=1}^{p} w_{i_k}     (5)

where the x_{i_1,...,i_p} are independent standard Gaussian variables.
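For small Λ, the hamiltonian of Eq. (5) can be evaluated directly; the following numpy sketch (ours, purely illustrative) makes the model concrete.

import itertools
import numpy as np

def p_spin_hamiltonian(w, x, p):
    # w: weights on the sphere, (1/Lambda) * sum(w**2) == 1.
    # x: coupling tensor with p indices, i.i.d. N(0, 1) entries.
    L = w.shape[0]
    total = 0.0
    for idx in itertools.product(range(L), repeat=p):
        total += x[idx] * np.prod(w[list(idx)])
    return total / L ** ((p - 1) / 2)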
In Auffinger et al. (2013), the asymptotic complexity of the spherical p-spin glass model is analyzed based on random matrix theory. In Choromanska et al. (2015a) these results are used in order to shed light on the optimization process of neural networks. For example, the asymptotic complexity of spherical spin glasses reveals a layered structure of low-index critical points near the global optimum. These findings are then given as a possible explanation for several central phenomena found in neural network optimization, such as the similar performance of large nets and the improbability of getting stuck in a "bad" local minimum.

As part of our work, we follow a similar path. First, a link is formed between residual networks and the hamiltonian of a general multi-interaction spherical spin glass model, as given by

H_{p,Λ}(w) = Σ_{r=1}^{p} (ε_r / Λ^{(r−1)/2}) Σ_{i_1,...,i_r=1}^{Λ} x_{i_1,...,i_r} ∏_{k=1}^{r} w_{i_k}     (6)

where ε_1...ε_p are positive constants. Then, using Auffinger & Arous (2013), we obtain insights on residual networks. The other part of our work studies the dynamic behavior of residual networks, where we relax the assumptions made for the spin glass model.

We begin by establishing a connection between the loss function of deep residual networks and the hamiltonian of the general spherical spin glass model. We consider a simple feed forward fully connected network N, with ReLU activation functions and residual connections. For simplicity of notation, and without loss of generality, we assume n_1 = ... = n_p = n and n_0 = d as before. In our ResNet model, there exist p − 1 identity connections skipping a single layer each, starting from the first hidden layer. The output of layer l > 1 is given by:

N_l(x) = R(W_l^T N_{l−1}(x)) + N_{l−1}(x)     (8)

where W_l denotes the weight matrix connecting layer l − 1 with layer l. Notice that the first hidden layer has no parallel skip connection, and so N_1(x) = R(W_1^T x). Without loss of generality, the scalar output of the network is the sum of the outputs of the output layer p and is expressed as

Y = Σ_{r=1}^{p} Σ_{i=1}^{d} Σ_{j=1}^{γ_r} x_{ij} A_{ij}^{(r)} ∏_{k=1}^{r} w_{ij}^{(r)(k)}     (9)

where A_{ij}^{(r)} ∈ {0, 1} denotes whether path j of length r is open, and ∀j, j′, r, r′: x_{ij}^{(r)} = x_{ij′}^{(r′)}. The residual connections in N imply that the output Y is now the sum of products of different lengths, indexed by r. Since our ResNet model attaches a skip connection to every layer except the first, 1 ≤ r ≤ p. See Sec. 6 regarding models with less frequent skip connections.

Each path of length r includes r − 1 non-skip connections (those involving the first term in Eq. (8) and not the second, identity term) out of layers l = 2...p. Therefore, γ_r = (p−1 choose r−1) n^r. We define the following measure on the network:

Definition 2. The mass of a depth-r subnetwork in N is defined as ψ_r = dγ_r.

The properties of redundancy in network parameters and their uniform distribution, as described in Sec. 2, allow us to re-index Eq. (9).

Lemma 1. Assuming assumptions A2 - A4 hold, and ψ_r/Λ^r ∈ Z, then the output can be expressed after reindexing as:

Y = Σ_{r=1}^{p} Σ_{i_1,...,i_r=1}^{Λ} Σ_{j=1}^{ψ_r/Λ^r} x_{i_1,...,i_r}^{(j)} A_{i_1,...,i_r}^{(j)} ∏_{k=1}^{r} w_{i_k}     (10)

Taking the expectation over the Bernoulli variables A as in Sec. 2 gives

E_A[Y] = ρ Σ_{r=1}^{p} Σ_{i_1,...,i_r=1}^{Λ} Σ_{j=1}^{ψ_r/Λ^r} x_{i_1,...,i_r}^{(j)} ∏_{k=1}^{r} w_{i_k}     (11)

In order to connect ResNets to generalized spherical spin glass models, we denote the variables

ξ_{i_1,...,i_r} = (Σ_{j=1}^{ψ_r/Λ^r} x_{i_1,...,i_r}^{(j)}) / √(E_x[(Σ_{j=1}^{ψ_r/Λ^r} x_{i_1,...,i_r}^{(j)})²])     (12)

Lemma 2. Assuming A2 - A3 hold, and ψ_r/Λ^r ∈ N, then ∀ i_1,...,i_r the following holds:

ψ_r/Λ^r ≤ E_x[(Σ_{j=1}^{ψ_r/Λ^r} x_{i_1,...,i_r}^{(j)})²] ≤ (ψ_r/Λ^r)²     (13)

With these notations, the expected output can be written as

E_A[Y] = ρ Σ_{r=1}^{p} Σ_{i_1,...,i_r=1}^{Λ} √(E_x[(Σ_{j=1}^{ψ_r/Λ^r} x_{i_1,...,i_r}^{(j)})²]) ξ_{i_1,...,i_r} ∏_{k=1}^{r} w_{i_k}     (14)

The independence assumption A1 was not assumed yet, and (14) holds regardless.
Assuming A4 and denoting the scaled weights w, = w;, we can link the distribution of Y to the distribution on x:\nA I Xi1,i2...ir Wik /d A i1,i2...ir=1 k=1 A > I Xi1,i?...ir W i1,i2...ir=1 k=1\nwhere C1, C2 are positiye constants that do not. ffect the optimization process\nNote that since the input variables x1...xd are sampled from a centered Gaussian distribution (de pendent or not), then the set of variables x1,i2.... are dependent normal Gaussian variables.\nWe approximate the expected output EA(Y) with Y by assuming the minimal value in|13|holds. all weight configurations of a particular length in Eq. [10|will appear the same number of times. When A n, the uniformity assumption dictates that each configuration of weights would appear approximately equally regardless of the inputs, and the expectation values would be very close to\np A L I = I1 Wik r=1 i1,2...iz=1 k=1\nThe following lemma gives a generalized expression for the binary and hinge losses of the network\nLN(x) = C1 + CY\nWe denote the important quantities:\nn\nTheorem 1. Assuming p E N, we have that.. 1\n1 lim -arg max( p->0o\nTheorem 2. For any Q1 Q2, and assuming Q1p, Q2p p E N, it holds that. 1+B\nQ2P lim -1 p->0 r=Q1P\nThm.2 implies that for deep residual networks, the contribution of weight products of order far. an ensemble of potentially shallow conventional nets. The next Lemma shows that we can shift the effective depth to any value by simply controlling C..\nLemma 4. For any integer 1 < k < p there exists a global scaling parameter C such tha arg max,(er(C)) = k.\nThe expression for the output of a residual net in Eq.15 provides valuable insights into the machinery at work when optimizing such models. Thm.|1|and|2Jimply that the loss surface resembles that of ar ensemble of shallow nets (although not a real ensemble due to obvious dependencies), with variou depths concentrated in a narrow band. As noticed inVeit et al.(2016), viewing ResNets as ensembles of relatively shallow networks helps in explaining some of the apparent advantages of these models particularly the apparent ease of optimization of extremely deep models, since deep paths barely affect the overall loss of the network. However, this alone does not explain the increase in accuracy of deep residual nets over actual ensembles of standard networks. In order to explain the improvec performance of ResNets, we make the following claims:\nThe model in Eq.16 has the form of a spin glass model, except for the dependency between the variables i1,i2...tr. We later use an assumption similar to A1 of independence between these vari- ables in order to link the two binary classification losses and the general spherical spin glass model However, for the results in this section, this is not necessary.\nThe series (er)P-1 determines the weight of interactions of a specific length in the loss surface. No- tice that for constant depth p and large enough , arg max. (er) = p. Therefore, for wide networks, where n and, therefore, are large, interactions of order p dominate the loss surface, and the effect of the residual connections diminishes. Conversely, for constant and a large enough p (deep net- works), we have that arg max,(er) < p, and can expect interactions of order r < p to dominate the loss. The asymptotic behavior of e is captured by the following lemma:\nAs the next theorem shows. 
the epsilons are concentrated in a narrow band near the maximal value\nA simple global scaling of the weights is, therefore, enough to change the loss surface, from an ensemble of shallow conventional nets, to an ensemble of deep nets. This is illustrated in Fig.1(a-c) for various values of . In a common weight initialization scheme for neural networks, C = - (Orr & Muller2003f[Glorot & Bengio|2010). With this initialization and A = n, = p and the maximal weight is obtained at less than half the network's depth limp->oo arg max,(er) < . Therefore, at the initialization, the loss function is primarily influenced by interactions of considerably lower order than the depth p, which facilitates easier optimization.\n1. The distribution of the depths of the networks within the ensemble is controlled by th scaling parameter C.\np d Yr LLL LN(x,w) =C1 +C2 r)k W r=1 i=1 j=1 k=1\nNotice that the addition of a multiplier r indicates that the derivative is increasingly influenced by deeper networks."}, {"section_index": "4", "section_name": "4.1 BATCH NORMALIZATION", "section_text": "Batch normalization has shown to be a crucial factor in the successful training of deep residua networks. As we will show, batch normalization layers offer an easy starting condition for the. network, such that the gradients from early in the training process will originate from extremely. shallow paths.\nWe consider a simple batch normalization procedure, which ignores the additive terms, has the out- put of each ReLU unit in layer l normalized by a factor oj and then is multiplied by some parameter A. The output of layer l > 1 is therefore:\nR(WNi-1(x))+Ni-1(x) Ni(x) = 0\nwhere oj is the mean of the estimated standard deviations of various elements in the vector R(W,' Ni-1(x)). Furthermore, a typical initialization of batch normalization parameters is to set. Vi, i = 1. In this case, providing that units in the same layer have equal variance ot, the recursive relation E[Wi+1(x)?] = 1 + E[W(x)?] holds for any unit j in layer l. This, in turn, implies that the. output of the ReLU units should have increasing variance o? as a function of depth. Multiplying the weight parameters in deep layers with an increasingly small scaling factor , effectively reduces the influence of deeper paths, so that extremely short paths will dominate the early stages of opti-. mization. We next analyze how the weight scaling, as introduced by batch normalization, provides. a driving force for the effective ensemble to become deeper as training progresses..\nWe consider a simple network of depth p, with a single residual connection skipping p - m layers. We further assume that batch normalization is applied at the output of each ReLU unit as described in Eq.22 We denote by l1...lm the indices of layers that are not skipped by the residual connection.\n2. During training, C changes and causes a shift of focus from a shallow ensemble to deeper and deeper ensembles, which leads to an additional capacity. 3. In networks that employ batch normalization, C is directly embodied as the scale parameter X. The starting condition of X = 1 offers a good starting condition that involves extremely shallow nets.\nFor the remainder of Sec.4, we relax all assumptions, and assume that at some point in time the loss can be expressed:\nwhere C1, C2 are some constants that do not affect the optimization process. In order to gain addi tional insight into this dynamic mechanism, we investigate the derivative of the loss with respect to the scale parameter C. 
Using Eq.[9[for the output, we obtain:\np d 2r 0LN(x,w) rx(?A(?) II r)(k W ac r=1 i=1 j=1 k=1\n0.45 0.35 0.35 0.4 0.3 0.3 0.35 0.25 0.25 0.3 0.25 0.2 0.2 0.2 0.15 0.15 0.15 0.1 0.1 0.1 0.05 0.05 0.05 0 20 40 60 80 100 20 40 60 80 100 20 40 60 80 100 (a) (b) (c) 1.2 0.8 0.6 0.4 0.20 5000 10000 15000 20000 500 1000 1500 2000 (d) (e) (f)\nFigure 1: (a) A histogram of er(), r = 1..p, for = 0.1 and p = 100 . (b) Same for = 0.5. (c) Same for = 2. (d) Values (y-axis) of the batch normalization parameters X, (x-axis) for. 10 layers ResNet trained to discriminate between 50 multivariate Gaussians (see Appendix |C|for. more details). Higher plot lines indicate later stages of training. (e) The norm of the weights of a residual network, which does not employ batch normalization, as a function of the iteration. (f) The. asymptotic of the mean number of critical points of a finite index as a function of 3..\ndYm d Yp p N(x,w) =Xm m (m) I (m)(k) (m) s(p) ,(p)(k) wij W xij i=1 j=1 k=1 i=1 j=1 k=1 Lm(x,w) + Lp (x.u\nWe denote by w, the derivative operator with respect to the parameters w, and the gradient g = VwL(x, w) = gm + gp evaluated at point w..\n0Ln(x,w - g)\naLn(x,w- g) aai\nThm.3 suggests that || will increase for layers l that do not have skip-connections. Conversely, if layer l has a parallel skip connection, then || will increase if ||gp||2 > l|gm|[2, where the later condition implies that shallow paths are nearing a local minima. Notice that an increase in |Aigl...lm results in an increase in [p], while [m] remains unchanged, therefore shifting the balance into deeper ensembles.\nThis steady increase of |], as predicted in our theoretical analysis, is also backed in experimen. tal results, as depicted in Fig.1[d). Note that the first layer, which cannot be skipped, behaves differently than the other layers. More experiments can be found in Appendix|C.\nIt is worth noting that the mechanism for this dynamic property of residual networks can also be. observed without the use of batch normalization, as a steady increase in the L2 norm of the weights as shown in Fig.1[e). In order to model this, consider the residual network as discussed above. without batch normalization layers. Recalling, ||w||2 = CA, w = w, the loss of this network is. expressed as:\nd Ym d Yp p LN(x,w) =Cm m) (m) (m)(k LL m) 1(p) I1 37(p)(k) xij wij in i=1 j=1 k=1 i=1 j=1 k=1 Lm(x,w) + Lp(x,W\n0LN(x,w - g (m|gm|l2 + pl|gp|l2 + (m + p)gp gm) dc\nThm.4|indicates that if either l|gpl|2 or l|gml|2 is dominant (for example, near local minimas of the shallow network, or at the start of training), the scaling of the weights C will increase. This expansion will, in turn, emphasize the contribution of deeper paths over shallow paths, and in- crease the overall capacity of the residual network. This dynamic behavior of the effective depth of residual networks is of key importance in understanding the effectiveness of these models. While optimization starts off rather easily with gradients largely originating from shallow paths, the overall advantage of depth is still maintained by the dynamic increase of the effective depth.\nWe now present the results of[Auffinger & Arous(2013) regarding the asymptotic complexity in the case of limA->oo of the multi-spherical spin glass model given by:.\nA He,^=- Er A r- 2 r=2 i1,...ir=1\nA 8 1 e=1 w=1, ^ i=1 r=2\ner(r- 1) a2 = r=2 r=2\nNote that for the single interaction spherical spin model a2 = 0. 
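The shape of the histograms in Fig. 1(a-c) can be reproduced numerically. This is a minimal sketch assuming, as in the proof of Lemma 4 in the appendix, ε_r ∝ C(p−1, r−1) β^r, normalized so that Σ_r ε_r² = 1; the printed mode r* shifts towards deeper interactions as β grows:

```python
from math import comb
import numpy as np

def epsilons(beta, p=100):
    # Unnormalized weights eps_r ~ C(p-1, r-1) * beta**r, then scaled so
    # that the sum of squares equals one (the spin-glass convention).
    raw = np.array([comb(p - 1, r - 1) * beta ** r
                    for r in range(1, p + 1)], dtype=np.float64)
    return raw / np.sqrt(np.sum(raw ** 2))

for beta in (0.1, 0.5, 2.0):
    eps = epsilons(beta)
    print(beta, int(np.argmax(eps)) + 1)  # dominant interaction order r*
```

For p = 100 this yields dominant orders of roughly 9, 33 and 67 for β = 0.1, 0.5 and 2 respectively, matching the qualitative picture in Fig. 1(a-c).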
The index of a critical point of He,A is defined as the number of negative eigenvalues in the hessian V2 He.A evaluated at the critical. point w.\nDefinition 4. For any O < k < A and u E R, we denote the random number Crtx.k(u, e) as the number of critical points of the hamiltonian in the set BX = {AX|X E (-oo, u)} with index k\nCrtA.k(u, e) = 1{He,A E Au}1{i(V2He,A)=k w:VHe,A=0\nwhere J,... are independent centered standard Gaussian variables, and e = (er)r>2 are positive. real numbers such that r=2 er2r < oo. A configuration w of the spin spherical spin-glass model is a vector in RA satisfying the spherical constraint:.\n1. (29) =1 A =1 r=2 Note that the variance of the process is independent of e: OX E[H?.A] =A1-re? 2 = ^ =A (30) Definition 3. We define the following:. O 8 U' =) er, v\" =er(r-1), Q =v\" + v' (31)\n8 A 8 E[H?,A]= A1-r r e? w?)=^ e=A r=2 i=1 r=1\nEq.33|provides the asymptotic mean total number of critical points with non-diverging index k. It is presumed that the SGD algorithm will easily avoid critical points with a high index that have many descent directions, and maneuver towards low index critical points. We, therefore, investigate how the mean total number of low index critical points vary as the ensemble distribution embodied in er Jr>2 changes its shape by a steady increase in 3.\nFig.1(f) shows that as the ensemble progresses towards deeper networks, the mean amount of low index critical points increases, which might cause the SGD optimizer to get stuck in local minima This is, however, resolved by the the fact that by the time the ensemble becomes deep enough the loss function has already reached a point of low energy as shallower ensembles were more dominant earlier in the training. In the following theorem, we assume a finite ensemble such tha 1 Er2r ~ 0.\nTheorem 5. For any k E N, p > 1, we denote the solution to the following constrained optimization nroblems.\np e = 1 e* = argmax0g(R,e) s.t E r=2\nr = p otherwise\nThm.5|implies that any heterogeneous mixture of spin glasses contains fewer critical points of a. finite index, than a mixture in which only p interactions are considered. Therefore, for any distribu tion of e that is attainable during the training of a ResNet of depth p, the number of critical points is. lower than the number of critical points for a conventional network of depth p.."}, {"section_index": "5", "section_name": "6 DISCUSSION", "section_text": "In this work, we use spin glass analysis in order to understand the dynamic behavior ResNets dis. play during training and to study their loss surface. In particular, we use at one point or another the. assumptions of redundancy in network parameters, near uniform distribution of network weights, in. dependence between the inputs and the paths and independence between the different copies of the. nput as described in Choromanska et al.[(2015a). The last two assumptions, i.e., the two indepen dence assumptions, are deemed in Choromanska et al.[(2015b) as unrealistic, while the remaining. are considered plausible\nOur analysis of critical points in ensembles (Sec. 5) requires all of the above assumptions. However, Thm. 1 and 2, as well as Lemma. 4, do not assume the last assumption, i.e., the independence between the different copies of the input. Moreover, the analysis of the dynamic behavior of residual nets (Sec. 
4) does not assume any of the above assumptions.
Our results are well aligned with some of the results shown in Larsson et al. (2016), where it is noted empirically that the deepest column trains last. This is reminiscent of our claim that the deeper networks of the ensemble become more prominent as training progresses. The authors of Larsson et al. (2016) hypothesize that this is a result of the shallower columns being stabilized at a certain point of the training process. In our work, we discover the exact driving force that comes into play.
In addition, our work offers an insight into the mechanics of the recently proposed densely connected networks (Huang et al. 2016). Following the analysis we provide in Sec. 3, the additional shortcut paths decrease the initial capacity of the network by offering many more short paths from input to output, thereby contributing to the ease of optimization when training starts. The driving force mechanism described in Sec. 4.2 will then cause the effective capacity of the network to increase.
Note that the analysis presented in Sec. 3 can be generalized to architectures with arbitrary skip connections, including dense nets. This is done directly by including all of the induced subnetworks in Eq. 9. The reformulation of Eq. 10 would still hold, given that γ_r is modified accordingly."}, {"section_index": "6", "section_name": "7 CONCLUSION", "section_text": "Ensembles are a powerful model for ResNets, which unravels some of the key questions that have surrounded ResNets since their introduction. Here, we show that ResNets display a dynamic ensemble behavior, which explains the ease of training such networks even at very large depths, while still maintaining the advantage of depth. As far as we know, the dynamic behavior of the effective capacity is unlike anything documented in the deep learning literature. Surprisingly, the dynamic mechanism typically takes place within the outer multiplicative factor of the batch normalization module."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Antonio Auffinger and Gerard Ben Arous. Complexity of random smooth functions on the high dimensional sphere. Annals of Probability, 41(6):4214-4247, 11 2013.
Anna Choromanska, Yann LeCun, and Gerard Ben Arous. Open problem: The landscape of the loss surfaces of multilayer networks. In COLT, pp. 1756-1760, 2015b.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448-456, 2015.
Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
Genevieve B Orr and Klaus-Robert Müller. Neural networks: tricks of the trade. Springer, 2003."}, {"section_index": "8", "section_name": "A SUMMARY OF NOTATIONS", "section_text": "Table 1 presents the various symbols used throughout this work and their meaning.
Anna Choromanska, Mikael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015a.
Andreas Veit, Michael Wilber, and Serge Belongie. Residual networks behave like ensembles of relatively shallow networks.
In NIPS, 2016."}, {"section_index": "9", "section_name": "SYMBOL", "section_text": "The dimensionality of the input x The output of layer i of network given input x The final output of the network V True label of input x Loss function of network V Hinge loss Absolute loss The depth of network V Weights of the network w E RA A positive scale factor such that ||w||2 = C Scaled weights such that w = w The number of units in layers l > 0 The number of unique weights in the network The total number of weights in the network V The weight matrix connecting layer l - 1 to layer l in V. The hamiltonian of the p interaction spherical spin glass model. The hamiltonian of the general spherical spin glass model. A Total number of paths from input to output in network V yd Total number of paths from input to output in network N of length r Yr d ReLU activation function Bernoulli random variable associated with the ReLU activation functio Parameter of the Bernoulli distribution associated with the ReLU unit 3) multiplier associated with paths of length r in V. pnC VA Normalization factor Batch normalization multiplicative factor in layer l. The mean of the estimated standard deviation various elements in R(W\nProof of Lemma[1] There are a total of r paths of length r from input to output, and a total of Ar unique r length configurations of weights. The uniformity assumption then implies that each. configuration of weights is repeated Ir times. By summing over the unique configurations, and re. indexing the input we arrive at Eq.10.\nProof of Lemma[] From[12 we have that S1,2.., is defined as a sum of r inputs. Since there are only p distinct inputs, it holds that for each 1,i2..., there exists a sequence Q = (at)i=1 E N such that -1 Q; = Xr, and Si1,2.i, = 1 Q,x. We, therefore, have that E[?,...,] = |||l3 Note that the minimum value of E[&? ?, 2..r] is a solution to the following:\nmin(E[?,...]) = mina(||a|2) s.ta1 -1 E N."}, {"section_index": "10", "section_name": "DESCRIPTION", "section_text": "lim Bap) = H() + alog() Og p->0\nProof of Thm.2 For brevity, we provide a sketch of the proof. It is enough to show that limp->00 O17 = 0 for < 1. Ignoring the constants in the binomial terms. we have\nQ1P Q1p Q1 1 lim lim 9.- lim p->0o p->0o p->0o r=1\n/here z2 which can be expressed using the Legendre polynomial of order p:\nProof of Lemma|4 For simplicity, we ignore the constants in the binomial coefficient, and assume er = () r. Notice that for * = (), we have that arg max,(er(B*)) = p, arg max,(er(*)) = 1 and arg max,(er(1)) = . From the monotonicity and continuity of r, any value 1 k p can be attained. The linear dependency (C) = pnC completes the proof. A\nOLN(x,w- g) dLN(x,w) dLN(x,w) aai 9 aai aai\nOLN(x,w - g) gp +lgp dai\n-\nJsing taylor series expansion:. dLn(x, w- g). dLN(x,w) dLN(x,w) (40) aLN(x,w) Substituting Vw - (gm + gp) in40|we have: dLN(x,w- gw) I < 0 (41) 9m a 9p 9m. + And hence: dLN(x,w - gw) (42 Finally: (43) 1 OLN(x,w) 2. Since paths of length m skip layer l, we have that .. I, 9p. Therefore: dLv(x,w - g) (44) 9m9p - n? The condition ||gpl|2 > ||gm||2 implies that gmgp + l|gpll2 > 0, completing the proof.\ndLN(x,w- gw) 9m + gp)'(gm + gp) = lgm +gplI2 < 0\ngm+gp|l2)]=|i|(1+\ndLN(x,w) g = (mLm(x,w) +pLp(x,w)) ngm + pgp)' dc = (mLm(x,w) +pLp(x, lgm + pgp) mgm + pgp)\n0LN(x,w - gw mgm + pgp)'(gm + gp) dC -(m||gp|l2 + p||gp|l2 + (m +p)gp gm\neT(V\"_ V)e eT(V+ V)\nnaxe0k(R,e) < max\nFig. 
1(d) and 1(e) report the experimental results of a straightforward setting, in which the task is to classify a mixture of 10 multivariate Gaussians in 50D. The input is therefore of size 50. The loss employed is the cross entropy loss of ten classes. The network has 10 blocks, each containing. 20 hidden neurons, a batch normalization layer, and a skip connection. Training was performed on. 10,000 samples, using SGD with minibatches of 50 samples..\nAs noted in Sec. 4.2, the dynamic behavior can be present in the Batch Normalization multiplica. tive coefficient or in the weight matrices themselves. In the following experiments, it seems that\nis orthogonal to the weights. We have that L(x,w) (mLm(x, w) +pLp(x, w)). Using taylor ac series expansion we have: dLv(x, w - g) dLN(x,w) dLN(x,w) uVw (45) ac ac ac For the last term we have: dLN(x,w) V w g = (mLm(x,w) + pLp(x, w ac =(mLm(x,w) + pLp(x, W mgm + pgp)'g,(46) d n45 we have: dLN(x,w- gw) 0-(mgm+pgp)(gm+ gp) aC -(m|gp|l2+p|gp|l2+(m+p)ggm) (47) Proof of Thm[5]Inserting Eq.31|into Eq.[33|we have that: qr=2er(r-1) _=2r(r-2) (48) r=2 e?r r=2 e?r2 We denote the matrices V' and V\" such that Vf, = ro, and V/f = r(r -- 1)oj. We then have: eT(V\" _V')e (49) eT(V\"+ V')e maxe0k(R,e) < max min V! - V) nax =0k(R,e*) (50)\nOLN(x,w- g) dLN(x,w) dLN(x,w) 9 ac ac ac\n2r(r-2\nFig.2|depicts the results. There are two types of plots: Fig. 2(a,c) presents for CIFAR-10 and CIFAR-100 respectively the magnitude of the various convolutional layers for multiple epochs (sim ilar in type to Fig. 1(d) in the paper). Fig.2(b,d) depict for the two datasets the mean of these norms over all convolutional layers as a function of epoch (similar to Fig. 1(e))\nAs can be seen, the dynamic phenomenon we describe is very prominent in the public ResNe implementation when applied to these conventional datasets: the dominance of paths with fewe. skip connections increases over time. Moreover, once the learning rate is reduced in epoch 81 the phenomenon we describe speeds up\nIn Fig. 3|we present the multiplicative coefficient of the Batch Normalization when not absorbed As future work, we would like to better understand why these coefficients start to decrease once the learning rate is reduced. As shown above, taking the magnitude of the convolutions into account the dynamic phenomenon we study becomes even more prominent at this point. The change oi location from the multiplicative coefficient of the Batch Normalization layers to the convolutions themselves might indicate that Batch Normalization is no longer required at this point. Indeed Batch Normalization enables larger training rates and this shift happens exactly when the training rate is reduced. A complete analysis is left for future work\nuntil the learning rate is reduced, the dynamic behavior is manifested in the Batch Normaliza- tion multiplicative coefficients and then it moves to the convolution layers themselves. We there- fore absorb the BN coefficients into the convolutional layer using the public code of https: //github.com/e-lab/torch-toolbox/tree/master/BN-absorber Note that the multiplicative coefficient of Batch Normalization is typically refereed to as y. However, throughout our paper, since we follow the notation of|Choromanska et al.[(2015a), y refers to the number of paths. The multiplicative factor of Batch normalization appears as A in Sec. 
4.
[Figure 2 panels: 'Norm of the weights of the convolution layers for multiple epochs' and 'Mean norm of convolution layers as a function of epoch', each for CIFAR-10 and CIFAR-100; x-axes: conv layer / epoch.]
Figure 2: (a,c) The norm of the convolutional layers once the factors of the subsequent Batch Normalization layers are absorbed, shown for CIFAR-10 and CIFAR-100 respectively. Each graph is a different epoch, see legend. Waving is due to the interleaving architecture of the convolutional layers. (b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the convolutional layers' weights per epoch.
Figure 3: The norms of the multiplicative Batch Normalization coefficient vectors. (a,c) The norm of the coefficients, shown for CIFAR-10 and CIFAR-100 respectively. Each graph is a different epoch (see legend). Since there is no monotonic increase between the epochs in this graph, it is harder to interpret. (b,d) Respectively for CIFAR-10 and CIFAR-100, the mean of the norm of the multiplicative factors per epoch.
[Figure 3 panels: 'Batch Normalization gamma per layer for multiple epochs' and 'Mean norm of Batch Normalization gamma vectors as a function of epoch', each for CIFAR-10 and CIFAR-100; x-axes: conv layer / epoch.]
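The absorption of the Batch Normalization multiplicative coefficients into the preceding convolutions, used for Fig. 2 above, can be sketched as follows. This is a minimal inference-time folding consistent with the simplified BN of Eq. 22 (no additive term); the function and argument names are illustrative, not the BN-absorber toolbox's API:

```python
import numpy as np

def absorb_bn_scale(conv_w, gamma, running_var, eps=1e-5):
    # Fold the BN scale into the convolution: w' = (gamma / sigma) * w,
    # so the per-layer norm ||w'|| reflects the effective multiplier.
    # conv_w has shape (out_channels, in_channels, kH, kW); gamma and
    # running_var are per output channel. A full BN folding would also
    # handle the running mean and the additive term, which the simplified
    # BN considered here ignores.
    sigma = np.sqrt(running_var + eps)
    return conv_w * (gamma / sigma)[:, None, None, None]
```
"}]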
BJxhLAuxg [{"section_index": "0", "section_name": "A DEEP LEARNING APPROACH FOR JOINT VIDEO FRAME AND REWARD PREDICTION IN ATARI GAMES", "section_text": "Felix Leibfried
felix.leibfried@gmail.com
about the environment dynamics or the reward structure. In this paper we take a step towards using model-based techniques in environments with a high-dimensional visual state space, where system dynamics and the reward structure are both unknown and need to be learned, by demonstrating that it is possible to learn both jointly. Empirical evaluation on five Atari games demonstrates accurate cumulative reward prediction of up to 200 frames. We consider these positive results as opening up important directions for model-based RL in complex, initially unknown environments."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "When humans or animals receive reward for taking a particular action in a given situation, the probability is increased that they will act similarly in similar situations in the future. This is described by principles such as the law of effect (Thorndike 1898), operant conditioning (Skinner 1938) and trial-and-error learning (Thorpe 1979) in behaviorist psychology, and has inspired a discipline of artificial intelligence called reinforcement learning (RL, Sutton & Barto (1998)). RL is concerned with finding optimal behavior policies in order to maximize agents' cumulative future reward.
Approaches to RL can be divided into model-free and model-based approaches. In model-free approaches, agents learn by trial and error but do not aim to explicitly capture the dynamics of the environment or the structure of the reward function underlying the environment. State-of-the-art model-free approaches, such as DQN (Mnih et al. 2015), effectively approximate so-called Q-values, i.e., the value of taking specific actions in a given state, using deep neural networks. The impressive effectiveness of these approaches comes from their ability to learn complex policies directly from high-dimensional input (e.g., video frames). Despite their effectiveness, model-free approaches require large amounts of training data that have to be collected through direct interactions with the environment, which makes them expensive to apply in settings where interactions are costly (such as most real-world applications). Additionally, model-free RL requires access to reward observations during training, which is problematic in environments with sparse reward structure unless coupled with an explicit exploration mechanism.
RL approaches that explicitly learn statistics about the environment or the reward are generally referred to as model-based; in a more narrow definition these statistics comprise environment dynamics and the reward function. In recent work, model-based techniques were successfully used to learn statistics about cumulative future reward (Veness et al. 2015) and to improve exploration by favoring actions that are likely to lead to novel states (Bellemare et al. 2016; Oh et al. 2015),
*Research conducted while interning at Microsoft
Nate Kushman & Katja Hofmann
nkushman@microsoft.com katja.hofmann@microsoft.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Reinforcement learning is concerned with learning to interact with environments that are initially unknown.
State-of-the-art reinforcement learning approaches, such as DQN, are model-free and learn to act effectively across a wide range of environments such as Atari games, but require huge amounts of data. Model-based techniques are more data-efficient, but need to acquire explicit knowledge about the environment dynamics or the reward structure.
resulting in substantially more data-efficient learning compared to model-free approaches. When an accurate model of the true environment dynamics and the true reward function is available, model-based approaches, such as planning via Monte-Carlo tree search (Browne et al. 2012), outperform model-free state-of-the-art approaches (Guo et al. 2014).
A key open question is whether effective model-based RL is possible in complex settings where the environment dynamics and the reward function are initially unknown, and the agent has to acquire such knowledge through experience. In this paper, we take a step towards addressing this question by extending recent work on video frame prediction (Oh et al. 2015), which has been demonstrated to effectively learn system dynamics, to enable joint prediction of future states and rewards using a single latent representation. We propose a network architecture and training procedure for joint state and reward prediction, and evaluate our approach in the Arcade Learning Environment (ALE, Bellemare et al. (2013)).
Our empirical results on five Atari games demonstrate that our approach can successfully predict cumulative reward up to roughly 200 frames. We complement our quantitative results with a detailed error analysis by visualizing example predictions. Our results are the first to demonstrate the feasibility of using a learned dynamics and reward model for accurate planning. We see this as a significant step towards data-efficient RL in high-dimensional environments without prior knowledge."}, {"section_index": "3", "section_name": "RELATED WORK AND MOTIVATION", "section_text": "Two lines of research are related to the work presented in this paper: model-based RL and optimal control theory. Model-based RL utilizes a given or learned model of some aspect of a task to, e.g., reduce data or exploration requirements (Bellemare et al. 2016; Oh et al. 2015; Veness et al. 2015). Optimal control theory describes mathematical principles for deriving control policies in continuous action spaces that maximize cumulative future reward in scenarios with known system dynamics and known reward structure (Bertsekas 2005; 2007).
There has been recent interest in combining principles from optimal control theory and model-based learning in settings where no information on system dynamics is available a priori and instead has to be acquired from visual data (Finn et al. 2016; Wahlstrom et al. 2015; Watter et al. 2015). The general idea behind these approaches is to learn a compressed latent representation of the visual state space from raw images through autoencoder networks (Bengio 2009) and to utilize the acquired latent representation to infer system dynamics. System dynamics are then used to specify a planning problem which can be solved by optimization techniques to derive optimal policies. Watter et al. (2015) introduce an approach for learning system dynamics from raw visual data by jointly training a variational autoencoder (Kingma & Welling 2014; Rezende et al. 2014) and a state prediction model that operates in the autoencoder's compressed latent state representation. A similar approach for jointly learning a compressed state representation and a predictive model is pursued by Wahlstrom et al. (2015). Finn et al. (2016) devise a sequential approach that first learns a latent state representation from visual data and that subsequently exploits this latent representation to augment a robot's initial state space describing joint angles and end-effector positions. The augmented state space is then used to improve estimates of local system dynamics for planning.
The approaches presented above assume knowledge of the functional form of the true reward signal and are hence not directly applicable in settings like ALE (and many real-world settings) where the reward function is initially unknown.
Planning in such settings therefore necessitates learning both system dynamics and the reward function in order to infer optimal behavioral policies. Recent work by Oh et al. (2015) introduced an approach for learning environment dynamics from pixel images and demonstrated that this enabled successful video frame prediction over up to 400 frames. In our current paper, we extend this recent work to enable reward prediction as well by modifying the network's architecture and training objective accordingly. The modification of the training objective bears a positive side effect: since our network must optimize a compound loss consisting of the video frame reconstruction loss and the reward loss, reward-relevant aspects in the video frames to which the reconstruction loss alone might be insensitive are explicitly captured by the optimization objective. In the subsequent section, we elucidate the approach from Oh et al. (2015) as well as our extensions for reward prediction in more detail.
[Figure 1 layout: input frames (4 × 84 × 84) pass through three Conv layers (64 filters, 6 × 6, stride 2; giving 64 × 40 × 40, 64 × 20 × 20, 64 × 10 × 10) into Fc layers of size 1024 and 2048; the one-hot action (at most 18 actions, linear Fc to 2048) enters via element-wise multiplication; the result passes through Fc layers of size 2048 and 1024, is reshaped to 64 × 10 × 10, and three Deconv layers (64 × 20 × 20, 64 × 40 × 40, 1 × 84 × 84) produce the predicted next frame; a Softmax reward head (shown in red) branches off the 2048-dimensional action-conditional encoding; stages labeled 'Encoding', 'Transformation', 'Decoding and reward prediction'.]
Figure 1: Network architecture for joint video frame and reward prediction. The architecture comprises three stages: an encoding stage mapping current input frames to some compressed latent representation, a transformation stage integrating the current action into the latent representation through element-wise vector multiplication (denoted by '×'), and a final predictive stage for reconstructing the frame of the next time step and the current reward. The network uses three different types of neuron layers ('Conv' for convolutional, 'Deconv' for deconvolutional and 'Fc' for forward connection) in combination with three different types of activation functions ('ReLU', 'Softmax' and 'Lin' for linear activations). The dimensional extent of individual layers is either depicted beneath or within layers.
The network part coloured in red highlights the extension for reward prediction."}, {"section_index": "4", "section_name": "3.1 VIDEO FRAME PREDICTION", "section_text": "The deep network proposed by Oh et al. (2015) for video frame prediction in Atari games aims at learning a function that predicts the video frame s_{t+1} at the next time step t+1, given the current history of frames s_{t-h+1:t} with time horizon h and the current action a_t taken by the agent, see Section 3.1. Here, we extend this work to enable joint video frame and reward prediction such that the network anticipates the current reward r_t as well, see Sections 3.2 and 3.3.
The video-frame-predictive architecture from Oh et al. (2015) comprises three information-processing stages: an encoding stage that maps input frames to some compressed latent representation, a transformation stage that integrates the current action into the compressed latent representation, and a decoding stage that maps the compressed latent representation to the predicted next frame, see Figure 1. The initial encoding stage is a sequence of convolutional and forward operations that map the current frame history s_{t-h+1:t}, a three-dimensional tensor, to a compressed feature vector h_enc. The transformation stage converts this compressed feature vector h_enc into an action-conditional representation h_dec in vectorized form by integrating the current action a_t. The current action a_t is represented as a one-hot vector with length varying from game to game since there are at least 3 and at most 18 actions in ALE. The integration of the current action into the compressed feature vector includes an element-wise vector multiplication, depicted as '×' in Figure 1, with the particularity that the two neuron layers involved in this element-wise multiplication are the only layers in the entire network without bias parameters, see Section 3.2 in Oh et al. (2015). Finally, the decoding stage performs a series of forward and deconvolutional operations (Dosovitskiy et al. 2015; Zeiler et al. 2010) by mapping the action-conditional representation h_dec of the current frame history s_{t-h+1:t} and the current action a_t to the predicted video frame s_{t+1} of the next time step t+1. Note that this necessitates a reshape operation at the beginning of the decoding cascade in order to transform the vectorized hidden representation into a three-dimensional tensor. The whole network uses linear and rectified linear units (Glorot et al. 2011) only. In all our experiments, following DQN (Mnih et al. 2015), the video frames processed by the network are 84 × 84 grey-scale images down-sampled from the full-resolution 210 × 160 Atari RGB images from ALE. Following Mnih et al. (2015) and Oh et al. (2015), the history frame time horizon h is set to 4."}, {"section_index": "5", "section_name": "3.2 REWARD PREDICTION", "section_text": "In this section we detail our proposed network architecture for joint state and reward prediction. Our model assumes ternary rewards which result from reward clipping in line with Mnih et al. (2015). Original game scores in ALE are integers that can vary significantly between different Atari games, and the corresponding original rewards are clipped to assume one of three values: -1 for negative rewards, 0 for no reward and 1 for positive rewards (a minimal sketch of this clipping and its one-hot encoding is given below).
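As referenced above, a minimal sketch of the clipping and the one-hot encoding; the value set {-1, 0, 1} is fixed by the text, while the index order (-1, 0, +1) is an assumption made here for illustration:

```python
import numpy as np

def clip_reward(accumulated_score_delta):
    # Ternary clipping in line with Mnih et al. (2015): keep only the
    # sign of the (frame-skip accumulated) game-score difference.
    return int(np.sign(accumulated_score_delta))

def one_hot_reward(clipped):
    # Map {-1, 0, +1} to a one-hot vector r_t of size 3.
    vec = np.zeros(3, dtype=np.float32)
    vec[clipped + 1] = 1.0
    return vec
```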
Because of reward clipping, rewards can be represented as vectors rt in one-hot encoding of size 3.\nIn Figure[1 our extension of the video-frame-predictive architecture from[Oh et al.(2015) to enab] reward prediction is highlighted in red. We add an additional softmax layer to predict the currer reward rt with information contained in the action-conditional encoding hdec. The motivation be hind this extension is twofold. First, our extension makes it possible to jointly train the networ with a compound objective that emphasizes both video frame reconstruction and reward predictior and thus encourages the network to not abstract away reward-relevant features to which the recor struction loss alone might be insensitive. Second, this formulation facilitates the future use of th model for reward prediction through virtual roll-outs in the compressed latent space, without th computational expensive necessity of reconstructing video frames explicitly-note that this require\nI T-1 K 3 1 (i) l] . ln Pt+k t+k A t+k It+k[' 2:I.T.K i=1 t=0 k=1 l=1 video frame reconstruction loss reward prediction loss\nreward prediction loss where s(i), .(i) (i) t+ k denotes the k-step look ahead probability values of the reward-predicting softmax layer--depicted tween video frame reconstruction and reward loss. The parameter T is a time horizon parameter that. determines how often a single trajectory sample i is unrolled into the future, and K determines the. look ahead prediction horizon dictating how far the network predicts into the future by using its own. video frame predicted output as input for the next time step. FollowingOh et al.(2015) and Michal-. ski et al.(2014), we apply a curriculum learning (Bengio et al.2009) scheme by successively in- creasing K in the course of training such that the network initially learns to predict over a short time horizon and becomes fine-tuned on longer-term predictions as training advances (see Section|A.1. for details). The network parameters 0 are updated by stochastic gradient descent, derivatives of the. training objective w.r.t. 0 are computed with backpropagation through time (Werbos1988).\nFollowing previous work (Oh et al.]2015, Mnih et al.]2015), actions are chosen by the agent on. every fourth frame and are repeated on frames that were skipped. Skipped frames and repeated actions are hence not part of the data sets used to train and test the predictive network on, and. original reward values are accumulated over four frames before clipping.\nThe original training objective in Oh et al.(2015) consists of a video frame reconstruction loss in. terms of a squared loss function aimed at minimizing the quadratic l2-norm of the difference vector between the ground truth image and its action-conditional reconstruction. We extend this training objective to enable joint reward prediction. This results in a compound training loss consisting of the original video frame reconstruction loss and a reward prediction loss given by the cross entropy. Simard et al.| 2003) between the ground truth reward and the corresponding prediction:."}, {"section_index": "6", "section_name": "4 RESULTS", "section_text": "Our quantitative evaluation examines whether our joint model of system dynamics and reward func. tion results in a shared latent representation that enables accurate cumulative reward prediction. W assess cumulative reward prediction on test sets consisting of approximately 50,000 video frames pe game, including actions and rewards. 
Each network is evaluated on 1,000 trajectories, suitable to analyze up to 100-step ahead prediction, drawn randomly from the test set. Look ahead prediction is measured in terms of the cumulative reward error, which is the difference between ground truth cumulative reward and predicted cumulative reward. For each game, this results in 100 empirical distributions over the cumulative reward error, one distribution for each look ahead step, consisting of 1,000 samples each (one for each trajectory). We compare our model predictions to a baseline model that samples rewards from the marginal reward distribution observed on the test set for each game (a code sketch of this evaluation protocol is given at the end of this subsection). Note that negative reward values are absent in the games investigated for this study.
Figure 2 illustrates 20 of the 100 empirical cumulative reward error distributions in all games for our network model in blue and for the baseline model in red (histograms, bottom), together with the median and the 5 to 95 percentiles of the cumulative reward error over look ahead steps (top). Across all games, we observe that our joint state and reward prediction model accurately predicts future cumulative rewards for at least 20 look ahead steps, and that it predicts future rewards substantially more accurately than the baseline model. This is evidenced by cumulative reward error distributions that maintain a unimodal form with mode zero and do not flatten out as quickly as the distributions for the random-prediction baseline model. Best results are achieved in Freeway and Q*bert, where the probability of zero cumulative reward error at 51 look ahead steps is still around 80% and 60% respectively, see Figure 2. Note that 51 look ahead steps correspond to 204 frames because the underlying DQN agent, collecting trajectory samples for training and testing our model, skipped every fourth frame when choosing an action, see Section 3.2. Lowest performance is obtained in Seaquest, where the probability of zero cumulative reward error at 26 steps (104 frames) is around 40% and begins to flatten out soon thereafter, see Figure 2. Running the ALE emulator at a frequency of 60fps, 26 steps correspond to more than 1 second of real-time game play because of frame skipping. Since our model is capable of predicting 26 steps ahead in less than 1 second, our model enables real-time planning and could therefore be utilized in an online fashion.
We now turn our attention to error analysis. While the look ahead step at which errors become prominent differs substantially from game to game, we find that overall our model underestimates cumulative reward. This can be seen in the asymmetry towards positive cumulative reward error values when inspecting the 5 to 95 percentile intervals in the first plot for each game in Figure 2. We identify a likely cause in (pseudo-)stochastic transitions inherent in these games. Considering Seaquest as our running example, objects such as divers and submarines can enter the scene randomly from the right and from the left and at the same time have an essential impact on which rewards the agent can potentially collect. In the ground truth trajectories, the agent's actions are reactions to these objects. If the predicted future trajectory deviates from the ground truth, targeted actions such as shooting will miss their target, leading to underestimating the true reward. We analyze this effect in more detail in Section 4.2.
All our experiments were conducted in triplicate with different initial random seeds.
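For concreteness, the evaluation protocol referenced above can be sketched as follows; model.rollout is a hypothetical interface standing in for feeding the network its own frame predictions back as input:

```python
import numpy as np

def cumulative_reward_errors(model, trajectories, max_steps=100):
    # errors[k-1, i]: ground truth minus predicted cumulative reward
    # after k look ahead steps on trajectory i.
    errors = np.zeros((max_steps, len(trajectories)))
    for i, (frames, actions, rewards) in enumerate(trajectories):
        pred = model.rollout(frames, actions, steps=max_steps)  # hypothetical API
        errors[:, i] = (np.cumsum(rewards[:max_steps])
                        - np.cumsum(pred[:max_steps]))
    return errors

def baseline_rewards(marginal_probs, steps, rng=None):
    # Baseline: sample clipped rewards i.i.d. from the marginal reward
    # distribution over {-1, 0, 1} observed on the test set.
    rng = rng or np.random.default_rng()
    return rng.choice(np.array([-1, 0, 1]), size=steps, p=marginal_probs)
```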
Different initial random seeds did not have a significant impact on cumulative reward prediction in all games except Freeway---see Section[A.5|for a detailed analysis. So far, we discussed results concerning reward prediction only. In the appendix, we also evaluate the joint performance of reward and video frame. prediction on the test set in terms of the optimization objective as in Oh et al.[(2015), where the authors report successful video frame reconstruction up to approximately 100 steps (400 frames). and observe similar results-see SectionA.6\nIn our evaluations, we investigate cumulative reward predictions quantitatively and qualitatively on five different Atari games (Q*bert, Seaquest, Freeway, Ms Pacman and Space Invaders). The quan- titative analysis comprises evaluating the cumulative reward prediction error---see Section4.1 The qualitative analysis comprises visualizations of example predictions in Seaquest-see Section4.2\nIn the previous section, we identified stochasticity in state transitions as a likely cause for relativel. low performance in long-term cumulative reward prediction in games such as Seaquest. In Seaques. objects may randomly enter a scene in a non-deterministic fashion. Errors in predicting these event result in predicted possible futures that do not match actually observed future states, resulting i. inaccurate reward predictions. Here, we support this hypothesis by visualizations in Seaquest illus. trating joint video frame and reward prediction for a single network over 20 steps (80 frames)--se. Figure 3|where ground truth video frames are compared to predicted video frames in terms of er. ror maps. Error maps emphasize the difference between ground truth and predicted frames througl. squared error values between pixels in black or white depending on whether objects are absent o. present by mistake in the network's prediction. Actions, ground truth rewards and model-predictec. rewards are shown between state transitions. Peculiarities in the prediction process are shown in red.\nIn step 2, the model predicts reward by mistake because the agent barely misses its target. Steps 4 to 6 report how the model predicts reward correctly but is off by one time step. Steps 7 to 14 depic problems caused by objects randomly entering the scene from the right which the model cannol predict. Steps 26 to 30 show how the model has problems to predict rewards at steps 26 and 28 as. these rewards are attached to objects the model failed to notice entering the scene earlier.."}, {"section_index": "7", "section_name": "CONCLUSION AND FUTURE WORK", "section_text": "Our positive results open up intriguing directions for future work. Our long-term goal is the inte. gration of model-based and model-free approaches for effective interactive learning and planning. in complex environments. Directions for achieving this long-standing challenge include the Dyna. method (Sutton1990), which uses a predictive model to artificially augment expensive training data,. and has been shown to lead to substantial reductions in data requirements in tabular RL approaches.. Alternatively, the model could be could be utilized for planning via Monte-Carlo tree search (Guo. et al. 2014 , Browne et al.f 2012).We hypothesize that such an approach would be particularly. beneficial in multi-task or life-long learning scenarios where the reward function changes but the. environment dynamics are stationary. Testing this hypothesis requires a flexible learning framework. 
where the reward function and the artificial environment can be changed by the experimenter in an arbitrary fashion, which is not possible in ALE where the environment and the reward function are fixed per game. A learning environment providing such flexibility is the recently released Malmo platform for Minecraft (Johnson et al. 2016), where researchers can create user-defined environments and tasks in order to evaluate the performance of artificial agents. In the shorter term, we envision improving the prediction performance of our network by regularization methods such as dropout and max norm regularization (Srivastava et al. 2014), a state-of-the-art regularizer in supervised learning, and by modifying the optimization objective to enforce similarity between hidden encodings in multi-step ahead prediction and one-step ahead prediction, see Watter et al. (2015). Finally, extending our model to non-deterministic state transitions through dropout and variational autoencoder schemes (Kingma & Welling 2014; Rezende et al. 2014) is a promising direction to alleviate the limitations highlighted in Section 4.2, paving the way for models that adequately predict and reason over alternative possible future trajectories.
Figure 2: Cumulative reward error over look ahead steps in five different Atari games. There are two plots for each game.
The top plot per game shows how the median and the 5 to 95 percentiles of the cumulative reward error evolve over look ahead steps for both our model (in blue) and a base- line model that samples rewards from the marginal reward distribution of the test set (in red). Each vertical slice of this concise representation corresponds to a single empirical distribution over the cumulative reward error. We depict these for every fifth look ahead step in the compound plots be- low for both models. These empirical error distributions demonstrate successful cumulative reward prediction over at least 20 steps (80 frames) in all five games as evidenced by their zero-centered and unimodal shape in the first column of each compound plot per game.\nFigure 3: Example predictions in Seaquest. Ground truth video frames, model predictions and error. maps emphasizing differences between ground truth and predicted frames-in form of the squared error between pixel values-are compared column-wise. Error maps highlight objects in black or white respectively depending on whether these objects are absent by mistake or present by mistake. in the model's prediction. Actions taken by the agent as well as ground truth rewards ('rew') and. reward predictions ('pred') are shown below video and error frames. Peculiarities in the prediction process are marked in red. The figure demonstrates how our predictive model fails to anticipate objects that randomly enter the scene from the right and rewards associated to these objects..\nPrediction Error map Steps Ground trutn Error map 1 11 left + fire, rew=0, pred=0 down + right + fire, rew=0, pred=0 2 12 right<rew=0, pred=1 down + right + fire, rew=0, pred=0 3 13 up + left + fire, rew=0, pred=0 down + right + fire, rew=0, pred=0 4 14 left + firecrew=0, pred=1 down + right + fire, rew=0, pred=0 5 down + right<rew=1, pred=0 6 26 down + right, rew=0, pred=0 up + left,<rew=1, pred=0 7 27 left + fire, rew=0, pred=0 down + right + fire, rew=0, pred=0 8 28 down + right + fire, rew=0, pred=0 up + left,<rew=1, pred=0 9 29 left + fire, rew=0, pred=0 down, rew=0, pred=0 10 30 up + fire, rew=0, pred=0 up + fire, rew=0, pred=0\nD P Bertsekas. Dynamic programming & optimal control, volume 1. Athena Scientific, 2005\nD P Bertsekas. Dynamic programming & optimal control, volume 2. Athena Scientific, 2007\nrowne, E Powiey, D wnllenouse. OWIL P Ronnsnagen, S Tavener. D Perez S Samothrakis, and S Colton. A survey of monte carlo tree search methods. IEEE Transactions. on Computational Intelligence and AI in Games, 4(1):1-49, 2012. A Dosovitskiy, J T Springenberg, and T Brox. Learning to generate chairs with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015. C Finn, X Y Tan, Y Duan, T Darrell, S Levine, and P Abbeel. Deep spatial autoencoders for. visuomotor learning. In Proceedings of the IEEE International Conference on Robotics and Au-. tomation, 2016. X Glorot and Y Bengio. Understanding the difficulty of training deep feedforward neural networks.. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2010..\nC Browne, E Powley, D Whitehouse, S Lucas, P I Cowling, P Rohlfshagen, S Tavener, D Perez, S Samothrakis. and S Colton. A survey of monte carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1-49, 2012. A Dosovitskiy, J T Springenberg, and T Brox. Learning to generate chairs with convolutional neural networks. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.

C Finn, X Y Tan, Y Duan, T Darrell, S Levine, and P Abbeel. Deep spatial autoencoders for visuomotor learning. In Proceedings of the IEEE International Conference on Robotics and Automation, 2016.

X Glorot and Y Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2010.

X Glorot, A Bordes, and Y Bengio. Deep sparse rectifier neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2011.

R Goroshin, M Mathieu, and Y LeCun. Learning to linearize under uncertainty. In Advances in Neural Information Processing Systems, 2015.

K Gregor, I Danihelka, A Graves, D J Rezende, and D Wierstra. DRAW: a recurrent neural network for image generation. In Proceedings of the International Conference on Machine Learning, 2015.

X Guo, S Singh, H Lee, R Lewis, and X Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems, 2014.

G E Hinton, A Krizhevsky, and S D Wang. Transforming auto-encoders. In Proceedings of the International Conference on Artificial Neural Networks, 2011.

S Lange, M Riedmiller, and A Voigtlander. Autonomous reinforcement learning on raw visual input data in a real world application. In Proceedings of the International Joint Conference on Neural Networks, 2012.

V Michalski, R Memisevic, and K Konda. Modeling deep temporal dependencies with recurrent grammar cells. In Advances in Neural Information Processing Systems, 2014.

V Mnih, K Kavukcuoglu, D Silver, A A Rusu, J Veness, M G Bellemare, A Graves, M Riedmiller, A K Fidjeland, G Ostrovski, S Petersen, C Beattie, A Sadik, I Antonoglou, H King, D Kumaran, D Wierstra, S Legg, and D Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

J Oh, X Guo, H Lee, R Lewis, and S Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, 2015.

R Pascanu, T Mikolov, and Y Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the International Conference on Machine Learning, 2013.

B F Skinner. The behavior of organisms: an experimental analysis. Appleton-Century-Crofts, 1938.

N Srivastava, G E Hinton, A Krizhevsky, I Sutskever, and R Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014.

N Srivastava, E Mansimov, and R Salakhutdinov. Unsupervised learning of video representations using LSTMs. In Proceedings of the International Conference on Machine Learning, 2015.
R S Sutton and A G Barto. Reinforcement learning: an introduction. MIT Press, 1998.

R S Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the International Conference on Machine Learning, 1990.

E L Thorndike. Animal intelligence: an experimental study of the associative processes in animals. The Psychological Review: Monograph Supplements, 2(4):1-107, 1898.

W H Thorpe. The origins and rise of ethology. Heinemann Educational Books, 1979.

J Veness, M G Bellemare, M Hutter, A Chua, and G Desjardins. Compress and control. In Proceedings of the AAAI Conference on Artificial Intelligence, 2015.

A.1 TRAINING DETAILS

In our experiments, we modified the reward prediction loss slightly in order to prevent exploding gradient values by replacing the term -ln p with a first-order Taylor approximation for p-values smaller than e^-10; a similar technique is used in DQN (Mnih et al. 2015) to improve the stability of the optimization algorithm. To identify optimal values for the reward weight λ, we performed initial experiments on Ms Pacman without applying the aforementioned curriculum learning scheme, instead using a fixed look ahead parameter K = 1. We evaluated the effect of different λ-values ∈ {0.1, 1, 10, 100} on the training objective and identified λ = 1 for conducting further experiments, see Section A.2. After identifying an optimal reward weight, we conducted additional initial experiments without curriculum learning with fixed look ahead parameter K = 1 on all of the five different Atari games used in this paper. We observed periodic oscillations in the reward prediction loss of the training objective in Seaquest, which was fixed by adding gradient clipping (Pascanu et al. 2013) with threshold parameter 1 to our optimization procedure; experiments investigating the effect of gradient clipping in Seaquest are reported in Section A.3. The fine-tuning effect of curriculum learning on the training objective in our final experiments is shown in Section A.4 for all of the five analysed Atari games.
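The stabilized reward loss term described above can be sketched as follows; this is an illustration of the idea in plain numpy rather than the paper's Chainer implementation, with the threshold e^-10 taken from the text and everything else assumed.

```python
import numpy as np

EPS = np.exp(-10.0)  # threshold below which -ln p is linearized

def stable_neg_log(p):
    """-ln p, replaced by its first-order Taylor approximation around EPS
    for p < EPS, which keeps the loss and its gradient bounded:
        -ln p  ~=  -ln EPS - (p - EPS) / EPS    for p < EPS
    """
    p = np.asarray(p, dtype=np.float64)
    taylor = -np.log(EPS) - (p - EPS) / EPS
    return np.where(p < EPS, taylor, -np.log(np.maximum(p, EPS)))

print(stable_neg_log([0.5, 1e-3, 1e-8]))  # finite even for tiny p
```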
We performed all our experiments in Python with Chainer and adhered to the instructions in Oh et al. (2015) as closely as possible. Trajectory samples for learning the network parameters were obtained from a previously trained DQN agent according to Mnih et al. (2015). The dataset for training comprised around 500,000 video frames per game, in addition to actions chosen by the DQN agent and rewards collected during game play. Video frames used as network input were 84 × 84 grey-scale images with pixel values between 0 and 255, down-sampled from the full-resolution 210 × 160 ALE RGB images. We applied a further preprocessing step by dividing each pixel by 255 and subtracting mean pixel values from each image, leading to final pixel values ∈ [-1, 1]. A detailed network architecture is shown in Figure 1 in the main paper. All weights in the network were initialized according to Glorot & Bengio (2010) except for those two layers that participate in the element-wise multiplication in Figure 1: the weights of the action-processing layer were initialized uniformly in the range [-0.1, 0.1] and the weights of the layer receiving the latent encoding of the input video frames were initialized uniformly in the range [-1, 1]. Training was performed for 1,500,000 minibatch iterations with a curriculum learning scheme increasing the look ahead parameter K every 500,000 iterations from 1 to 3 to 5. When increasing the look ahead parameter K for the first time after 500,000 iterations, the minibatch size I was also altered from 32 to 8, as was the learning rate for parameter updates from 10^-4 to 10^-5. Throughout the entire curriculum scheme, the time horizon parameter determining the number of times a single trajectory is unrolled into the future was T = 4. The optimizer for updating weights was Adam (Kingma & Ba 2015) with gradient momentum 0.9, squared gradient momentum 0.95 and epsilon parameter 10^-8. In evaluation mode, network outputs were clipped to [-1, 1] so that strong activations could not accumulate over roll-out time in the network.

A.2 EFFECT OF REWARD WEIGHT

To identify optimal values for the reward weight λ, we conducted initial experiments in Ms Pacman without curriculum learning and a fixed look ahead horizon K = 1. We tested four different λ-values ∈ {0.1, 1, 10, 100} and investigated how the frame reconstruction loss and the reward loss of the training objective evolve over minibatch iterations, see Figure 4. Best results were obtained for λ = 1 and for λ = 10, whereas values of λ = 0.1 and λ = 100 led to significantly slower convergence and worse overall training performance respectively.

Figure 4: Effect of reward weight on training loss in Ms Pacman. Each of the four panels depicts one experiment with a different reward weight λ. Each panel shows how the training loss evolves over minibatch iterations in terms of two subplots reporting video frame reconstruction and reward loss respectively. Each experiment was conducted three times with different initial random seeds depicted in blue, green and red. Graphs were smoothed with an exponential window of size 1000.

A.3 EFFECT OF GRADIENT CLIPPING IN SEAQUEST

After identifying an optimal value for the reward weight, see Section A.2, we observed oscillation in the reward loss of the training objective in Seaquest, see the first column in Figure 5, which was solved by adding gradient clipping to our optimization procedure, see the second and third columns in Figure 5. We tested two different values for the gradient clipping threshold (5 and 1), both of which worked, but for a value of 1 the oscillation vanished completely.
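Gradient clipping in the sense of Pascanu et al. (2013) rescales the gradient whenever its global norm exceeds the threshold (1 in the final experiments). A minimal sketch, assuming the gradients are available as a list of arrays; Chainer provides an equivalent optimizer hook.

```python
import numpy as np

def clip_gradients(grads, threshold=1.0):
    """Rescale a list of gradient arrays so that their global L2 norm
    does not exceed `threshold` (Pascanu et al. 2013)."""
    norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if norm > threshold:
        scale = threshold / norm
        grads = [g * scale for g in grads]
    return grads

g = [np.array([3.0, 4.0])]   # global norm 5
print(clip_gradients(g))     # rescaled to norm 1
```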
Figure 5: Effect of gradient clipping on training loss in Seaquest. The three panels compare experiments with no gradient clipping to those with gradient clipping using the threshold values 5 and 1 respectively. Subplots within each panel are similar to those in Figure 4 but display in the first row the evolution of the compound training loss in addition to the frame reconstruction and reward loss.

A.4 EFFECT OF CURRICULUM LEARNING

In our final experiments with curriculum learning, the networks were trained for 1,500,000 minibatch iterations in total, but the look ahead parameter K was gradually increased every 500,000 iterations from 1 to 3 to 5. The networks were hence initially trained on one-step ahead prediction only and later on fine-tuned on further-step ahead prediction. Figure 6 shows how the training objective evolves over iterations. The characteristic "bumps" in the training objective every 500,000 iterations as training evolves demonstrate improvements in long-term predictions in all games except Freeway, where the training objective already assumed very low values within the first 500,000 iterations and might therefore have been insensitive to further fine-tuning by curriculum learning.
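The schedule just described is a pure function of the iteration counter. The following sketch encodes the reported values (K: 1 to 3 to 5 every 500,000 iterations, with the minibatch size and learning rate changed at the first switch); the function name and dictionary layout are illustrative.

```python
def curriculum_settings(iteration):
    """Look-ahead K, minibatch size I and learning rate as a function of
    the minibatch iteration, following the schedule reported in A.1/A.4."""
    if iteration < 500000:          # phase 1: one-step ahead prediction
        return dict(K=1, batch_size=32, lr=1e-4)
    elif iteration < 1000000:       # phase 2: fine-tune on 3-step prediction
        return dict(K=3, batch_size=8, lr=1e-5)
    else:                           # phase 3: fine-tune on 5-step prediction
        return dict(K=5, batch_size=8, lr=1e-5)

for it in (0, 500000, 1200000):
    print(it, curriculum_settings(it))
```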
Figure 6: Effect of curriculum learning on five different Atari games. Each panel corresponds to a different game; individual panels are structured in the same way as those in Figure 5.

A.5 EFFECT OF RANDOM SEEDS

We conducted three different experiments per game with different initial random seeds. The effect of different initial random seeds on the cumulative reward error is summarized in Figure 7, which reports how the median and the 5 to 95 percentiles of the cumulative reward error evolve over look ahead steps in the different experiments per game. Note that the results of the first column in Figure 7 are shown in Figure 2 from the main paper together with a more detailed analysis depicting empirical cumulative reward error distributions for some look ahead steps. The random initial seed does not seem to have a significant impact on the cumulative reward prediction except for Freeway, where the network in the third experiment starts to considerably overestimate cumulative rewards at around 30 to 40 look ahead steps.

In order to investigate this reward overestimation in Freeway further, we analyse visualizations of joint video frame and reward prediction for this particular seed (similar in style to Figure 3 from Section 4.2 in the main paper). The results are shown in Figure 8, where a peculiar situation occurs after 31 predicted look ahead steps. In Freeway, the agent's job is to cross a busy road from the bottom to the top without bumping into a car in order to receive reward. If the agent bumps into a car, the agent is propelled downwards, further away from the reward-yielding top. This propelled downwards movement happens even when the agent tries to move upwards. Exactly that kind of situation is depicted at the beginning of Figure 8 and occurs for this particular prediction after 31 steps. Our predictive model is however not able to correctly predict the aforementioned downwards movement caused by the agent hitting the car, which is highlighted in red throughout steps 31 to 35, documenting an increasing gap between ground truth and predicted agent position as the propelled downwards movement of the ground truth agent continues. In the course of further prediction, the network model assumes the agent to reach the reward-yielding top side of the road way too early, which results in a sequence of erroneous positive reward predictions throughout steps 41 to 50 and, seemingly as a side effect, the predictive model losing track of other objects in the scene. Concluding, this finding may serve as a possible explanation for cumulative reward overestimation for that particular experiment in Freeway.

Figure 7: Effect of different initial random seeds on cumulative reward error. The plots show how the cumulative reward error evolves over look ahead steps in terms of the median and the 5 to 95 percentiles for our network model (blue) as well as the baseline model (red) in each experiment. Each row refers to a different game, each column refers to a different experiment per game initialized with a different random seed. The first column of this figure is presented in Figure 2 of the main paper, explaining the results in more detail by additionally illustrating empirical distributions over the cumulative reward error for some look ahead steps.
Figure 8: Example predictions in Freeway over 20 steps. The figure is similar in nature to Figure 3 from the main paper, with the only difference that predictions are depicted from time step 31 onwards.

[Figure 8 panels: steps 31 to 50, each showing ground truth, prediction and error map, annotated with the action taken and the true and predicted reward.]

A.6 LOSS ON TEST SET

In the main paper, our analysis focuses on evaluating how well our model serves the purpose of cumulative reward prediction. Here, we evaluate network performance in terms of both the video frame reconstruction loss as well as the reward prediction loss on the test set, following the analysis conducted in Oh et al. (2015). For each game, we sample 300 minibatches of size I = 50 from the underlying test set and compute the test loss over K = 100 look ahead steps with the formula presented in the main paper in Section 3.3 used for learning network parameters, but without averaging over look ahead steps, because we aim to illustrate the test loss as a function of look ahead steps; statistics of this analysis are plotted in Figure 9.

Best overall test loss is achieved in Freeway and for initial look ahead steps (up to roughly between 40 and 60 steps) in Q*bert, which is in accordance with results for cumulative reward prediction from the main paper. Also in line with results from the main paper is the finding that the reward loss on the test set is worse in Seaquest, Ms Pacman and Space Invaders when compared to Q*bert (up to approximately 40 steps) and Freeway. Worst video frame reconstruction loss is observed for Space Invaders, in compliance with Oh et al. (2015), where the authors report that there are objects in the scene moving at a period of 9 time steps, which is hard to predict by a network only taking the last 4 frames from the last 4 steps as input for future predictions. At first sight, it might seem a bit surprising that the reward prediction loss in Space Invaders is significantly lower than in Seaquest and Ms Pacman for long-term ahead prediction despite the higher frame reconstruction loss in Space Invaders. A possible explanation for this paradox might be the frequency at which rewards are collected; this frequency is significantly higher in Seaquest and Ms Pacman than in Space Invaders. A reward prediction model with bias towards zero rewards, as indicated by the main results in the paper, might therefore err less often in absolute terms when rewards are collected at a lower frequency and may hence achieve lower overall reward reconstruction loss.
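The per-step test loss can be sketched as below, assuming frame predictions, targets and reward log-probabilities are already available as arrays. Since the exact loss formula lives in Section 3.3 of the main paper, the squared-error and -ln p terms here are an assumption based on the descriptions in this appendix, with λ = 1 as selected in Section A.2.

```python
import numpy as np

LAMBDA = 1.0  # reward weight selected in Section A.2

def per_step_test_loss(true_frames, pred_frames, true_reward_logp):
    """Loss as a function of the look-ahead step (no averaging over steps).

    true_frames, pred_frames: arrays of shape (batch, steps, H, W).
    true_reward_logp: array (batch, steps) of log-probabilities the model
    assigns to the observed rewards.
    Returns (reconstruction_loss, reward_loss), each of shape (steps,).
    """
    recon = np.mean((pred_frames - true_frames) ** 2, axis=(0, 2, 3))
    reward = np.mean(-true_reward_logp, axis=0)
    return recon, reward

B, K, H, W = 50, 100, 84, 84
rng = np.random.default_rng(1)
recon, reward = per_step_test_loss(
    rng.uniform(-1, 1, (B, K, H, W)),
    rng.uniform(-1, 1, (B, K, H, W)),
    np.log(rng.uniform(0.3, 1.0, (B, K))))
compound = recon + LAMBDA * reward   # first column of Figure 9
print(compound.shape)                # (100,)
```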
Figure 9: Loss on test set over look ahead steps. Each row reports the loss on the test set over 100 look ahead steps for a different game. The first column illustrates the compound loss, consisting of the video frame reconstruction loss (second column) and the reward prediction loss (third column). The loss on the test set is computed according to Oh et al. (2015), similar to the training loss for learning network parameters, however with a different look ahead parameter K = 100 and a different minibatch size I = 50, and without averaging over look ahead steps, since we aim to plot the test loss as a function of look ahead steps. For each game, the test loss is computed for 300 minibatches, resulting in an empirical distribution with 300 loss values per look ahead step. The figure shows the mean (in green), the median (in red), the 5 to 95 percentiles (in shaded blue) as well as minimum and maximum elements (in black dashed lines) of these empirical distributions.
DYVEDEEP: DYNAMIC VARIABLE EFFORT DEEP NEURAL NETWORKS

Sanjay Ganapathy
Department of Computer Science and Engineering
Indian Institute of Technology Madras
Chennai, Tamil Nadu, India

Balaraman Ravindran
Department of Computer Science and Engineering
Indian Institute of Technology Madras, India

Swagath Venkataramani*
Department of Electrical and Computer Engineering
Purdue University
raghunathan@purdue.edu

* Currently a Research Staff Member at IBM T.J. Watson Research Center, Yorktown Heights, NY.

ABSTRACT

Deep Neural Networks (DNNs) have advanced the state-of-the-art in a variety of machine learning tasks and are deployed in increasing numbers of products and services. However, the computational requirements of training and evaluating large-scale DNNs are growing at a much faster pace than the capabilities of the underlying hardware platforms that they are executed upon. In this work, we propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep) to reduce the computational requirements of DNNs during inference. Previous efforts propose specialized hardware implementations for DNNs, statically prune the network, or compress the weights. Complementary to these approaches, DyVEDeep is a dynamic approach that exploits the heterogeneity in the inputs to DNNs to improve their compute efficiency with comparable classification accuracy. DyVEDeep equips DNNs with dynamic effort mechanisms that, in the course of processing an input, identify how critical a group of computations are to classify the input. DyVEDeep dynamically focuses its compute effort only on the critical computations, while skipping or approximating the rest. We propose 3 effort knobs that operate at different levels of granularity, viz. neuron, feature and layer levels. We build DyVEDeep versions for 5 popular image recognition benchmarks: one for CIFAR-10 and four for ImageNet (AlexNet, OverFeat, VGG-16 and weight-compressed AlexNet). Across all benchmarks, DyVEDeep achieves 2.1×-2.6× reduction in the number of scalar operations, which translates to 1.8×-2.3× performance improvement over a Caffe-based implementation, with < 0.5% loss in accuracy.

1 INTRODUCTION

Deep Neural Networks (DNNs) have greatly advanced the state-of-the-art on a variety of machine learning tasks from different modalities, including image, video, text, and natural language processing. However, from a computational standpoint, DNNs are highly compute and data intensive workloads. For example, DNN topologies that have won the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) for the past 5 years contain between 60-150 million parameters and require 2-20 giga operations of compute to classify a single image. These requirements are only projected to increase in the future, as data sets of larger sizes and topologies of larger complexity (more layers, features and feature sizes) are actively explored. Indeed, the growth in computational requirements of DNNs has far outpaced improvements in the capabilities of commodity computational platforms in recent years.

Two key scenarios exemplify the computational challenges imposed by DNNs: (i) Large-scale training, in which DNNs are trained on massive data-sets using high-performance server clusters or in the cloud, and (ii) Low-power inference, in which DNN models are evaluated on energy-constrained platforms such as mobile and deeply-embedded (Internet-of-Things) devices. Towards addressing the latter challenge, we propose Dynamic Variable Effort Deep neural networks (DyVEDeep), a new dynamic approach to improve the computational efficiency of DNN inference.

Related Research Directions. Prior research efforts to improve the computational efficiency of DNNs can be classified into 4 broad directions. The first comprises parallel implementations of
DNNs on commercial multi-core and GPGPU platforms. Parallelization strategies such as model, data and hybrid parallelism (Krizhevsky (2014); Das et al. (2016)), and techniques such as asynchronous SGD (Dean et al. (2012)) and 1-bit SGD (Seide et al. (2014)) to alleviate communication overheads, are representative examples. The next set of efforts design specialized hardware accelerators to realize DNNs, trading off programmability, the cost of specialized hardware and design effort for efficiency. A spectrum of architectures ranging from low-power IP cores to large-scale systems have been proposed (Farabet et al. (2011); Chen et al. (2014); Jouppi). The third set of efforts focus on developing new device technologies whose characteristics intrinsically match the computational primitives in neural networks, leading to improvements in energy efficiency (Liu et al. (2015b); Ramasubramanian et al. (2014)). The final set of efforts exploit the fact that DNNs are typically over-parametrized (Denil et al. (2013)) due to the non-convex nature of the optimization space (Hinton et al. (2012)). Therefore, they approximate DNNs by statically pruning network connections, representing weights with reduced bit precision and/or in a compressed format, thereby improving compute efficiency for a negligible loss in classification accuracy (LeCun et al. (1989); Han et al. (2015b); Liu et al. (2014); Venkataramani et al. (2014); Anwar et al. (2015); Tan & Sim (2016)).

DyVEDeep: Motivation and Concept. In contrast to the above efforts, our proposal, Dynamic Variable Effort Deep neural networks (DyVEDeep)¹, leverages the heterogeneity in the characteristics of inputs to a DNN to improve its compute efficiency. The motivation behind DyVEDeep stems from the following key insights.

First, in real-world data, not all inputs are created equal, i.e., inputs vary considerably in their "difficulty". Intuitively, only inputs that lie very close to the decision boundary require the full effort of the classifier, while the rest could be classified with a much simpler (e.g., linear) decision boundary. In the context of DNNs, we can see that increasing network size provides a valuable, but nevertheless diminishing, increase in accuracy. For example, in the context of ImageNet, increasing a network's computational requirements by over 15× (from AlexNet to VGG) yields an additional 16% increase in classification accuracy. This raises the question of whether some of the inputs can be classified with substantially fewer computations, while expending increased effort only for inputs that require it.

Second, for a given input, the effort that needs to be expended varies across different parts of the network. For example, in an image recognition problem, the computations corresponding to neurons that operate on the image region where an object of interest is located are more critical to the classification output than the others. Also, some features may be less relevant than others in the context of a given input. For example, features that detect sharp edges may be less relevant if the current input is comprised mostly of curved surfaces.

Notwithstanding the above observations, state-of-the-art DNNs are static, i.e., they are computationally agnostic to the nature of the input being processed and expend the same (worst case) computational effort on all inputs, which leads to significant inefficiency. DyVEDeep addresses this limitation by dynamically predicting which computations are critical to classify a given input and
focusing compute effort only on those computations, while skipping or approximating the rest. In effect, the network expends computational effort on different subsets of computations for each input, reducing computational requirements in each case without sacrificing classification accuracy.

Dynamic Effort Knobs. The key to the efficiency of DyVEDeep lies in favorably navigating the trade-off between the cost of identifying critical computations vs. the benefits accrued by skipping or approximating computations. To this end, we identify three dynamic effort mechanisms at different levels of granularity, viz. neuron, feature and layer levels. These mechanisms employ run-time criteria to dynamically evaluate the criticality of groups of computations and appropriately skip or approximate those that are deemed to be less critical.

¹ The name stems from the notion that a network should "dive deep", or expend computational effort, judiciously as and where it is needed.

• Saturation Prediction and Early Termination (SPET) operates at the neuron level. It monitors the intermediate output of each neuron after processing a subset of its inputs (partial dot product between a subset of inputs and corresponding weights) and predicts the likelihood of the neuron eventually saturating after applying the activation function. If the partial sum is deep within the saturation regime (e.g., a large negative value in the case of ReLU), all further computations corresponding to the neuron are deemed to be non-critical and skipped.

• Significance-driven Selective Sampling (SDSS) operates within each feature map, and exploits the spatial locality between neuron activations. A uniformly spatially sampled version of the feature is first computed. The activation of each remaining neuron is either approximated or accurately computed based on the magnitude and variance of its neighbors.

• Similarity-based Feature Map Approximation (SFMA) operates at the layer level, and examines the similarity between neuron activations in each feature map. If all neuron activations are similar, the convolution operation on the feature map is approximated by a single scalar multiplication of the average neuron activation value with the precomputed sum of kernel weights.

We develop a systematic methodology to identify the hyper-parameters for each of these mechanisms during the training phase for any given DNN. We built DyVEDeep versions for 5 popular DNN benchmarks, viz. CIFAR-10, AlexNet, OverFeat-accurate, VGG-16 and a weight-compressed AlexNet model. Our experiments demonstrate that by dynamically exploiting the heterogeneity across inputs, DyVEDeep achieves 2.1×-2.6× reduction in the total number of scalar operations for <0.5% loss in classification accuracy. The reduction in scalar operations translates to 1.8×-2.3× improvement in performance in our software implementation of DyVEDeep using the Caffe deep learning framework on an Intel Xeon 2.7GHz server with 128GB memory.

The rest of the paper is organized as follows. Section 2 describes prior research efforts related to DyVEDeep. Section 3 details the proposed dynamic effort mechanisms and how they are integrated in DyVEDeep. Section 4 outlines the methodology used in our experiments. The experimental results are presented in Section 5, and Section 6 concludes the paper.

2 RELATED WORK

In this section, we provide a brief summary of prior research efforts related to DyVEDeep, and highlight the distinguishing features of our work. Prior research on improving the computational efficiency of DNNs follows 4 distinct directions.

The first class of efforts focus on parallelizing DNNs on commercial multi-cores and GPGPU platforms. Different work distribution strategies such as model, data and hybrid parallelism (Krizhevsky (2014); Das et al. (2016)), and hardware-transparent on-chip memory allocation/management schemes such as virtualized DNNs (Rhu et al. (2016)), are representative examples. The second class of efforts design specialized hardware accelerators that realize the key computation kernels in DNNs. A range of architectures targeting low-power mobile devices (Farabet et al. (2011)) to high-performance server clusters (Chen et al. (2014); Jouppi) have been explored. The third set of
efforts investigate new device technologies whose characteristics intrinsically match the compute primitives present in DNNs. Memristor-based crossbar array architectures (Liu et al. (2015b)) and spintronic neuron designs (Ramasubramanian et al. (2014)) are representative examples.

The final set of efforts improve efficiency by approximating computations in the DNN. DyVEDeep falls under this category, as we propose to dynamically skip or approximate computations based on their criticality in the context of a given input. Therefore, we describe the approaches that fall under this category in more detail. To this end, we classify these approaches into static vs. dynamic optimizations.

Static Techniques. Almost all efforts that approximate computations in DNNs are static in nature, i.e., they apply the same approximation uniformly across all inputs. Static techniques primarily reduce the model size of DNNs by using mechanisms such as pruning connections (LeCun et al. (1989); Han et al. (2015b); Liu et al. (2014)), reducing the precision of computations (Venkataramani et al. (2014); Anwar et al. (2015)), and storing weights in a compressed format (Han et al. (2015a)). For example, in the context of fully connected layers, HashNets (Chen et al. (2015)) use a hash function to randomly group weights into bins, which share a common parameter value, thereby reducing the number of parameters needed to represent the network. Deep compression (Han et al. (2015a)) attempts to prune connections in the network by adding a regularization term during training, and removing connections with weights below a certain threshold.

In the context of convolution layers, Denton et al. (2014); Jaderberg et al. (2014) exploit the linear structure of the network to find a suitable low-rank approximation. On the other hand, Liu et al. (2015a) propose sparse convolutional DNNs, wherein almost 90% of the parameters in the kernels are zeroed out by adding a weight sparsity term to the objective function. In contrast, Mathieu et al. (2013) demonstrate that performing convolution in the Fourier domain can yield substantial improvement in efficiency. Finally, Figurnov et al. (2015) propose perforated CNNs, in which only a subset of the neurons in a feature are evaluated. The neurons to be evaluated for each feature are determined statically at training time.

Dynamic Techniques. Dynamic optimizations adapt the computations that are approximated based on the input currently being processed. Dynamic techniques are more powerful than statically optimised DNNs, as they can capture additional input-dependent opportunities for efficiency that static methods lack. Notwithstanding this, very little focus has been devoted to developing dynamic DNN approximation techniques. One of the first efforts in this direction (Bengio (2013)) utilizes stochastic neurons to gate regions within the DNN.
Along similar lines, Ba & Frey (2013) propose Standout, where the dropout probability of each neuron is estimated using a binary belief network. The dropout mask is computed for the network in one shot, conditioned on the input to the network. Bengio et al. (2015) extend a similar idea, wherein the dropout distribution of each layer is computed based on the output of the preceding layer.
The dynamic effort mechanisms proposed in DyVEDeep are qualitatively different from the aforementioned efforts. Rather than stochastically dropping computations, effort knobs in DyVEDeep exploit properties such as the saturating nature of the activation function to directly predict the effect of approximation on the neuron output. Further, prior dynamic approaches have only been applied to fully-connected networks trained on small datasets; their applicability to large-scale DNNs remains unexplored. On the other hand, DyVEDeep is naturally applicable to both convolutional and fully connected layers, and we demonstrate substantial benefits on large-scale networks for ImageNet.

3 DYVEDEEP: DESIGN APPROACH AND DYNAMIC EFFORT KNOBS

The key idea behind DyVEDeep is to improve the computational efficiency of DNNs by modulating the effort that they expend based on the input that is being processed. As shown in Figure 1, we achieve this by equipping the DNN with dynamic effort mechanisms ("effort knobs") that dynamically predict the criticality of groups of computations with very low overhead, and correspondingly skip or approximate them, thereby improving efficiency with negligible impact on classification accuracy. We identify three such dynamic effort mechanisms in DNNs that operate at different levels of granularity. We also propose a methodology to tune the hyper-parameters associated with these mechanisms so that variable effort versions of any DNN can be obtained with negligible loss in classification accuracy.

[Figure 1: Overview of DyVEDeep. Dynamic effort knobs operate at neuron, feature and layer granularity; a monitor-and-predict mechanism uses the network state to skip or approximate operations.]

3.1 SATURATION PREDICTION AND EARLY TERMINATION

Saturation Prediction and Early Termination (SPET) works at the finest level of granularity, which is at the level of each neuron in the DNN. In this case, we leverage the fact that almost all convolutional and fully connected layers are followed by an activation function that saturates on at least one side. For example, the commonly used Rectified Linear Unit (ReLU) activation function saturates at one end by truncating the negative inputs to zero, while passing the positive inputs as is.

The key idea in SPET is that the actual value of the weighted sum (dot product between a neuron's inputs and weights) does not impact the neuron's output, provided the sum will eventually cause the neuron's activation function to saturate. In the case of ReLU, it is unnecessary to compute the actual sum if it will eventually be a negative value, as any negative value would result in a neuron output of zero. Based on the above observation, as shown in Figure 2, SPET monitors the partial weighted sum of a neuron after a predefined fraction of its inputs have been multiplied-and-accumulated. SPET then predicts whether the final partial sum would cause the neuron's activation function to saturate. To this end, we introduce the following hyper-parameters:

• SPETlThresh and SPETuThresh: We set two thresholds on the partial sum value of each neuron. At the time of prediction, as shown in Equation 1, if the partial sum is found to be smaller than SPETlThresh or greater than SPETuThresh, the partial sum computation is terminated early, and the appropriate saturated activation function value is returned as the neuron's output. If not, we continue to completely evaluate the partial sum value for the neuron.

SPETout = Terminate & Saturate High, if Partial Sum > SPETuThresh
        = Terminate & Saturate Low,  if Partial Sum < SPETlThresh        (1)
        = Continue,                  otherwise

We note that if the activation function saturates in just one direction, only one of the SPET thresholds will be useful to predict saturation. For example, in the case of ReLU, only the SPETlThresh is used to predict saturation.

[Figure 2: Saturation Prediction and Early Termination (SPET). Inputs and weights are accumulated into a partial sum that is compared against SPETlThresh and SPETuThresh at the prediction interval, to either skip the remaining inputs or continue through the activation function.]

To demonstrate the potential benefits from SPET, Figure 3 shows the fraction of neurons in the convolutional layers of the CIFAR-10 DNN that saturate. We find that between 50%-73% of the neuron activations are zeros due to the ReLU activation function. Figure 3 also reveals that the fraction of neurons saturating increases as we proceed deeper into the network. We observed similar trends for larger networks such as AlexNet and OverFeat. Since a majority of neuron activations saturate in typical DNNs, SPET has the potential to achieve significant improvements in processing efficiency.

[Figure 3: Probability of deactivation in the CIFAR-10 network, for layers C1-C3.]
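Equation 1 maps directly onto an early-terminating dot product. The sketch below renders SPET for a single ReLU neuron in numpy (so only the lower threshold applies), with the prediction made after a fixed fraction of the inputs; it is an illustration, not the paper's C++/Caffe implementation.

```python
import numpy as np

def spet_neuron(inputs, weights, lthresh=0.0, predict_frac=0.5):
    """SPET evaluation of one ReLU neuron (Equation 1, lower threshold only).

    The partial sum over the first `predict_frac` of the inputs is checked
    against SPETlThresh; if it is below the threshold, the neuron is
    predicted to saturate at 0 and the remaining products are skipped.
    """
    split = int(len(inputs) * predict_frac)
    partial = float(np.dot(inputs[:split], weights[:split]))
    if partial < lthresh:
        return 0.0                        # terminate early, saturate low
    partial += float(np.dot(inputs[split:], weights[split:]))
    return max(partial, 0.0)              # ReLU on the full sum

rng = np.random.default_rng(0)
x, w = rng.standard_normal(128), rng.standard_normal(128)
print(spet_neuron(x, w))
```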
Saturation Prediction Interval. A key aspect of SPET is the interval at which we predict for saturation. On the one hand, predicting saturation after processing a small number of inputs to each neuron would frequently result in the prediction being incorrect, leading to a loss in classification accuracy. On the other hand, a larger prediction interval yields progressively smaller computational savings. Quantifying the above trade-off, Figure 4 illustrates, for the CIFAR-10 DNN, the fraction of neurons that were predicted to be saturated correctly at various prediction intervals. For the illustration in Figure 4, we assume a SPETlThresh of 0, i.e., a neuron is predicted to saturate if its partial sum at the point of prediction is negative. We find that the fraction of neurons predicted correctly increases with the prediction interval.

Figure 4: Saturation prediction accuracy at different prediction intervals.

The SPETlThresh and SPETuThresh hyper-parameters are determined during DNN training. We note that the prediction interval could also be learnt during the training process. However, we found that a simpler scheme where we fix the prediction interval at 50% (i.e., we predict for saturation after half the inputs to a neuron have been processed) worked quite well in practice.

Rearranging Neuron Inputs. For SPET to be most effective, the weights should be processed in decreasing order of magnitude, as larger weights are likely to have the most impact on the partial sum. However, this is not feasible in practice, as it affects the regularity in the memory access pattern, directly offsetting the savings from skipping computations. Also, in the case of convolutional layers, if the prediction interval is set to 50%, inputs from half of the feature maps are ignored at the time of prediction. To maximize the range of inputs processed before prediction, while maintaining regularity in the memory access pattern, we rearrange the neuron inputs such that all odd-indexed inputs are processed first, after which the prediction is made. The even-indexed inputs are computed only if the neuron was not predicted to saturate.

3.2 SIGNIFICANCE-DRIVEN SELECTIVE SAMPLING

Significance-driven Selective Sampling (SDSS) operates at the granularity of each feature in the convolutional layers of the DNN. SDSS leverages the spatial locality in neuron activations within each feature. For example, in the context of images, adjacent pixels in the input image frequently take similar values. As the neuron activations are computed by sliding the kernel over the image, the spatial locality naturally permeates to the feature outputs of convolutional layers. This behavior is also observed in deeper layers in the network. In fact, the saturating nature of the activation function enhances locality, as variations in the weighted sum between neighbors are masked if they both fall within the same saturation regime.

[Figure 5: Significance-Driven Selective Sampling (SDSS). A uniformly sampled version of the feature (sampling stride = 2) is computed first; for each remaining neuron, the max, min and average of its neighbors' activations are compared against MaxActthresh and DelActthresh to decide whether to approximate or evaluate it.]
Uniform Feature Sampling. In the first step, we compute the activation values for a subset of neurons in the feature by uniformly sampling the feature. For this purpose, we define a parameter SP that denotes the periodicity of sampling in each dimension. The value of SP is chosen based on the size of the feature and the correlation between adjacent neuron activations. In our experiments, we used a sampling period of 2 across all convolutional layers in a DNN.

Significance-driven Selective Evaluation. In the second step, as shown in Figure 5, we selectively approximate activation values of neurons that were not sampled in the first step. To this end, we define the following two hyper-parameters: (i) Maximum Activation Value Threshold (MaxActthresh), and (ii) Delta Activation Value Threshold (DelActthresh). For each neuron in the feature that is yet to be computed, we examine the activation values of its immediate neighbors in all directions, and compute the maximum and range (difference between max and min) of the neighbors' activation values. If the maximum value is below the MaxActthresh threshold and the range is less than the DelActthresh, then the activation value of the neuron is approximated to be the average of its neighbors. If not, the actual activation value of the neuron is evaluated.

Thus, the SDSS effort knob utilizes the magnitude and variance of neighbors to gauge whether a neuron lies within a region of interest, and accordingly expends computational effort to compute its activation value.
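The two SDSS steps can be sketched as follows, with exact_eval standing in for the exact computation of one output neuron (e.g., one convolution window); threshold values, the stride and all names are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def sdss_feature(shape, exact_eval, sp=2, max_act=0.5, del_act=0.1):
    """Significance-driven Selective Sampling for one output feature.

    exact_eval(y, x) -> activation value computed exactly.
    Neurons on the stride-`sp` grid are evaluated exactly; every other
    neuron is approximated by its neighbors' average when the neighbors
    are small (max < MaxActthresh) and flat (max - min < DelActthresh).
    """
    H, W = shape
    out = np.zeros((H, W))
    done = np.zeros((H, W), dtype=bool)
    for y in range(0, H, sp):                 # step 1: uniform sampling
        for x in range(0, W, sp):
            out[y, x], done[y, x] = exact_eval(y, x), True
    for y in range(H):                        # step 2: selective evaluation
        for x in range(W):
            if done[y, x]:
                continue
            nbr = [out[j, i] for j, i in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                   if 0 <= j < H and 0 <= i < W and done[j, i]]
            if nbr and max(nbr) < max_act and max(nbr) - min(nbr) < del_act:
                out[y, x] = sum(nbr) / len(nbr)    # approximate
            else:
                out[y, x] = exact_eval(y, x)        # evaluate exactly
            done[y, x] = True
    return out

print(sdss_feature((6, 6), lambda y, x: 0.01 * (y + x)).round(2))
```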
3.3 SIMILARITY-BASED FEATURE MAP APPROXIMATION

Similarity-based Feature Map Approximation (SFMA) also exploits the correlation between activation values in a feature, but in a very different way. In SDSS, the spatial locality was exploited in computing the neuron activations themselves. In contrast, in the case of SFMA, the spatial locality is used to approximate computations that use the feature as their input. Consider a convolutional layer in which one of the input features has all of its neuron activations similar to each other. When a convolution operation is performed on this input feature by sliding the kernel matrix, all the entries in the convolution output are likely to be close to each other. Therefore, as shown in Figure 6, we approximate the entire convolution operation as follows. First, the average value of all neuron activations in the feature is computed. Next, the sum of all weights in the kernel matrix is evaluated. We note that the sum can be precomputed and stored along with the kernel matrix. We then approximate all outputs of the convolution as the product of the average input activation and the sum of all kernel weights.

Figure 6: Similarity-based Feature Map Approximation.

Mathematically, the above approximation can be expressed as follows:

ConvOut_W = Σ_{i∈W} w_i · x_i = μ · Σ_{i∈W} w_i + Σ_{i∈W} w_i · (x_i − μ) ≈ μ · Σ_{i∈W} w_i        (2)

In the above equation, ConvOut_W is the convolution output for a window W of size k × k, where k is the kernel size, and μ is the mean of all the activation values in the feature. This approximation is valid when Σ_{i∈W} w_i · (x_i − μ) is negligible.

To determine on which convolutions to apply the aforementioned approximation, we define the following 2 hyper-parameters:

• Weight Significance Threshold (WSigthresh): We set this threshold on the sum of absolute values of the kernel weights. This is an approximate measure of the significance of the current convolution to the output feature.

• Feature Variance Threshold (FeaVarthresh): We set this threshold on the variance of the neuron activations in the feature.

Given the hyper-parameters, the convolution is approximated when (i) the sum of the kernel weights is below WSigthresh, indicating that the convolution is relatively less significant to the output feature, and (ii) the variance of neuron activations in the feature is below FeaVarthresh, indicating that the error due to replacing the entire feature with its average is tolerable.

When the feature sizes are large, we do not check for the variance across the entire feature. Instead, we split the feature into multiple regions that overlap on each dimension by the size of the kernel window. We check for variance within each region, and if the variance is below FeaVarthresh, the kernel windows that fit entirely within the region are approximated.
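Equation 2 together with the two thresholds yields a simple per-feature test; the sketch below approximates an entire (valid-mode) convolution with a single multiply when both conditions hold. Threshold values are placeholders, and the exact convolution loop is deliberately naive.

```python
import numpy as np

def sfma_conv(feature, kernel, w_sig=0.05, fea_var=0.01):
    """SFMA: approximate conv(feature, kernel) by mean(feature) * sum(kernel)
    when the kernel is insignificant (sum of |w| below WSigthresh) and the
    feature is flat (variance below FeaVarthresh); otherwise convolve."""
    k = kernel.shape[0]
    H, W = feature.shape
    out_shape = (H - k + 1, W - k + 1)
    if np.sum(np.abs(kernel)) < w_sig and np.var(feature) < fea_var:
        return np.full(out_shape, feature.mean() * kernel.sum())  # Eq. (2)
    out = np.empty(out_shape)
    for y in range(out_shape[0]):                                 # exact conv
        for x in range(out_shape[1]):
            out[y, x] = np.sum(feature[y:y+k, x:x+k] * kernel)
    return out

rng = np.random.default_rng(0)
f = np.full((8, 8), 0.2) + 1e-3 * rng.standard_normal((8, 8))
print(sfma_conv(f, 0.001 * np.ones((3, 3)))[0, 0])
```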
3.4 INTEGRATING EFFORT KNOBS

We now describe how the different effort knobs (SPET, SDSS and SFMA) are combined in DyVEDeep. Since each effort knob operates at a different level of granularity, they can be easily integrated with each other. To combine SPET and SDSS, each neuron activation in the uniformly sampled features of SDSS is computed with SPET. However, we do not apply SPET to the neurons that are selectively computed in SDSS, as they are located in the midst of neurons with large activation values and/or variance, and are hence unlikely to saturate. SFMA fundamentally amounts to grouping a set of inputs (within a convolution window) to a neuron into a single input, and therefore directly fits with the process of evaluating a neuron with SPET/SDSS.

In summary, the SPET effort knob applies to both convolutional and fully connected layers of DNNs and is most effective when a majority of the neurons saturate. Since the convolutional layers towards the middle of the DNN have a large number of inputs per neuron and contain a substantial fraction of saturated neurons, we expect SPET to be most beneficial for those layers. The SDSS effort knob primarily applies only to convolutional layers, and is most effective when the feature sizes are large. Therefore, the initial convolutional layers would benefit the most from SDSS. On the other hand, SFMA works best when there are a large number of features in the layer and when the feature sizes are small. Hence the middle and later convolutional layers are likely to benefit from SFMA.

3.5 HYPER-PARAMETER TUNING

As described in the previous subsections, the dynamic effort knobs together contain 6 hyper-parameters, viz. SPETlThresh, SPETuThresh, MaxActthresh, DelActthresh, WSigthresh and FeaVarthresh. These hyper-parameters control how aggressively the effort knobs skip or approximate computations, thereby yielding a direct trade-off between computational savings vs. classification accuracy. Using a pre-trained network and a training dataset, we systematically determine the DyVEDeep hyper-parameters before the DNN model is deployed. Ideally, we could define these parameters uniquely for each neuron in the DNN. For example, each neuron could have its unique SPETlThresh threshold to predict when it saturates (SPET), or FeaVarthresh threshold to deem if an input feature map can be approximated during its partial sum evaluation (SFMA). Clearly, this results in a prohibitively large hyper-parameter search space, and adds substantial overhead to the overall size of the DNN model. Since neurons in a given layer are computationally similar (same set of inputs, number of computations etc.), we define the hyper-parameters at a layer-wise granularity, i.e., all neurons within a layer share the same set of hyper-parameters. Also, since all our benchmarks utilized the ReLU activation function, we ignored the SPETuThresh when identifying the hyper-parameter configuration.

Algorithm 1 shows the pseudocode for the hyper-parameter tuning process. Empirically, we observed that parameters corresponding to each effort knob can be independently tuned. Therefore, we adopt a strategy wherein we first identify a range of possible values for each hyper-parameter. Since computational savings monotonically increase or decrease with the value of each parameter, we perform a greedy binary search on its range. The range of each parameter can be identified as follows. The SPETlThresh and MaxActthresh parameters vary over the entire range of values the partial sum of neurons can take in a layer. However, we typically observe that zero is a good lower bound for these parameters, as ReLU sets all negative values to 0. The upper bound is determined by evaluating the DNN on each input in the training dataset and recording the maximum partial sum value for each layer. The other parameters DelActthresh, WSigthresh and FeaVarthresh are naturally lower bounded by 0, as they are thresholds on absolute magnitudes. Similar to SPETlThresh and MaxActthresh, the upper limits of the other parameters are also estimated by evaluating the DNN on the training set.

Given a hyper-parameter and its range, the highest possible value for the parameter yields the maximum computation savings but adversely affects the classification accuracy. On the other extreme, the lowest value of the parameter does not impact the classification accuracy; however, it yields no computation savings and in fact adds a penalty for criticality prediction. Therefore, we perform a binary search on the range to identify the highest value of the parameter that yields negligible loss in classification accuracy (<0.5% in our experiments). In the case of SFMA, we observed that the two hyper-parameters (FeaVarthresh and WSigthresh) need to be searched together. Since the range of FeaVarthresh is coarser than WSigthresh, we loop over the values of FeaVarthresh, and search for possible values of WSigthresh in each case.
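Algorithm 1 is referenced above but not reproduced in this copy of the paper; the sketch below captures the greedy binary search it describes, assuming an accuracy(params) evaluator over the validation set. The toy evaluator and all names are illustrative.

```python
def tune_threshold(accuracy, params, name, lo, hi, baseline,
                   max_loss=0.005, iters=10):
    """Greedy binary search for the largest value of one hyper-parameter
    whose accuracy drop on the validation set stays within `max_loss`."""
    best = lo
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        params[name] = mid
        if baseline - accuracy(params) <= max_loss:
            best, lo = mid, mid          # acceptable: push the knob higher
        else:
            hi = mid                     # too aggressive: back off
    params[name] = best
    return best

# Toy evaluator: accuracy degrades smoothly as the threshold grows.
acc = lambda p: 0.90 - 0.02 * p.get("SPETlThresh", 0.0)
params = {}
print(tune_threshold(acc, params, "SPETlThresh", 0.0, 1.0, baseline=0.90))
```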
4 EXPERIMENTAL METHODOLOGY

Benchmarks. To evaluate DyVEDeep, we utilized pre-trained DNN models available publicly on the Caffe Model Zoo (BVLC (a)) benchmark repository. This reinforces DyVEDeep's ability to adapt to any given trained network. We used the following 5 DNN benchmarks in our experiments: the CIFAR-10 Caffe network (BVLC (b)) for the CIFAR-10 dataset (Krizhevsky (2009)), and AlexNet (Krizhevsky et al. (2012)), OverFeat-accurate (Sermanet et al.), VGG-16 (Simonyan & Zisserman (2014)), and compressed AlexNet (Han et al. (2015a)) for the ImageNet ILSVRC 2012 data set (Deng et al. (2009)). The inputs for the ImageNet dataset are generated by using a 224 × 224 center crop of the images in the test set. We randomly selected 5% of the test inputs and used it as a validation set to tune the hyper-parameters. We report speedup and classification accuracy results on the remaining 95% of the test inputs.

Performance Measurement. We implemented DyVEDeep in C++ within the Caffe deep learning framework (Jia et al. (2014)). However, we could not directly integrate DyVEDeep within Caffe, as it composes all computations within a layer for a given batch size into a single GEMM (GEneral Matrix Multiplication) operation, which is offered by BLAS (Basic Linear Algebra Subprograms) libraries. BLAS libraries specifically optimize matrix operations at the assembly level. Since DyVEDeep requires more fine-grained computation skipping/approximation, we were unable to directly incorporate it within these routines. Therefore, we prototyped our own implementation for the convolutional layers within Caffe and used it in our experiments.

Our experiments were conducted on an Intel Xeon server operating at 2.7GHz frequency and 128GB memory. We added performance counters to both DyVEDeep and the baseline DNN implementation to measure the software execution time. All our timing results are reported for a single-threaded sequential execution. Also, for our experiments, we introduced dynamic effort knobs only in the convolutional layers of the DNN, as they dominated the overall runtime for all our benchmarks. However, we note that the reported execution times and performance benefits include the time taken by all layers in the network.

In summary, by embedding dynamic effort knobs into DNNs, DyVEDeep seamlessly varies computational effort across inputs to achieve significant computational savings while maintaining classification accuracy.

5 RESULTS

In this section, we present the results of our experiments that demonstrate the benefits of DyVEDeep.

5.1 IMPROVEMENT IN SCALAR OPERATIONS AND EXECUTION TIME

Figure 7: Normalized improvement in scalar operations and execution time.

We first present the reduction in scalar operations and execution time achieved by DyVEDeep in Figure 7. Please note that the Y-axis in Figure 7 is a normalized scale to represent the benefits in both scalar operations and runtime. We find that, across all benchmarks, DyVEDeep consistently achieves substantial reduction in operation count, ranging between 2.1×-2.6×. This translates to 1.8×-2.3× benefits in software execution time. In all the above cases, the difference in classification accuracy between the baseline DNN and DyVEDeep was <0.5%. On average, the runtime overhead of the dynamic effort knobs in DyVEDeep was 5% of the baseline DNN. Also, while the runtime benefits with DyVEDeep are quite significant, they are smaller compared to the reduction in scalar operations. This is expected, as applying the knobs requires us to alter memory access patterns and perform additional book-keeping operations. Also, control operations, such as loop counters etc., that are inherent to any software implementation limit the fraction of runtime DyVEDeep can benefit.

5.2 LAYER-WISE AND KNOB-WISE BREAKDOWN OF COMPUTE SAVINGS
Our experiments were conducted on an Intel Xeon server operating at a 2.7GHz frequency with 128GB of memory. We added performance counters to both DyVEDeep and the baseline DNN implementation to measure the software execution time. All our timing results are reported for single-threaded sequential execution. Also, for our experiments, we introduced dynamic effort knobs only in the convolutional layers of the DNN, as they dominated the overall runtime for all our benchmarks. However, we note that the reported execution times and performance benefits include the time taken by all layers in the network.

In summary, by embedding dynamic effort knobs into DNNs, DyVEDeep seamlessly varies computational effort across inputs to achieve significant computational savings while maintaining classification accuracy.
"}, {"section_index": "10", "section_name": "5 RESULTS", "section_text": "In this section, we present the results of our experiments that demonstrate the benefits of DyVEDeep.

5.1 IMPROVEMENT IN SCALAR OPERATIONS AND EXECUTION TIME

[Figure 7: bar plot of normalized scalar operations and normalized runtime (baseline = 1) for CIFAR-10, AlexNet, OverFeat-Acc., VGG-16, CmprAlexNet, and GeoMean.]"}, {"section_index": "11", "section_name": "Figure 7: Normalized improvement in scalar operations and execution time", "section_text": "We first present the reduction in scalar operations and execution time achieved by DyVEDeep in Figure 7. Please note that the Y-axis in Figure 7 is a normalized scale representing the benefits in both scalar operations and runtime. We find that, across all benchmarks, DyVEDeep consistently achieves a substantial reduction in operation count, ranging between 2.1×-2.6×. This translates to 1.8×-2.3× benefits in software execution time. In all the above cases, the difference in classification accuracy between the baseline DNN and DyVEDeep was <0.5%. On average, the runtime overhead of the dynamic effort knobs in DyVEDeep was 5% of the baseline DNN. Also, while the runtime benefits with DyVEDeep are quite significant, they are smaller than the reduction in scalar operations. This is expected, as applying the knobs requires us to alter memory access patterns and perform additional bookkeeping operations. Also, control operations, such as loop counters etc., that are inherent to any software implementation limit the fraction of runtime DyVEDeep can benefit."}, {"section_index": "12", "section_name": "5.2 LAYER-WISE AND KNOB-WISE BREAKDOWN OF COMPUTE SAVINGS", "section_text": "Figure 8a shows the breakdown of runtime savings across different layers of AlexNet, with the layers plotted on the X-axis and the average runtime per layer, normalized to the total baseline DNN runtime, on the Y-axis. We achieve a 1.5× reduction in runtime in the initial convolutional layers (C1, C2), which increases to 2.6× in the deeper convolutional layers (C3-C5). The C1 layer in AlexNet has a kernel size of 11 × 11 and operates with a stride of 4. Hence, its output is less likely to have the correlation that SDSS expects. Also, since there are very few input features, SFMA is also not very effective. Further, the fraction of neurons saturating is relatively small in the first layers, which impacts the effectiveness of SPET. Hence, we achieve better savings in the deeper convolutional layers compared to the initial ones.

Figure 8b compares the contribution of each effort knob to the overall savings for each convolutional layer in AlexNet. Over all layers, the SDSS knob yields the highest savings, eliminating 31% of the total scalar operations. The SPET and SFMA knobs contribute 19% and 7%, respectively. We find that the effectiveness of each knob is more pronounced in the deeper convolutional layers.

Figure 8: (a) Layer-wise breakdown of runtime benefits in AlexNet (b) Contribution of each effort knob to ops improvement

Figures 10 and 11 illustrate the normalised effort map of DyVEDeep for all features in layer C1 for two sample images (Figure 9) from the CIFAR-10 data set. We use layer C1, as this is the closest layer to the actual image and allows for better visualization. The normalization is done with respect to the number of operations that would have been performed to compute the neuron, had our knobs not been in place. Darker regions represent more computations. It is remarkable to see that DyVEDeep focuses more effort on precisely the regions of the image that contain the object of interest. We compare this with the activation map of the corresponding features. Here, the darker regions represent activated neurons. This has been done to highlight the correlation between the activation values and the effort that DyVEDeep expends on the corresponding neurons. The activation map demonstrates that regions where the activation values of neurons are high have a higher variance in those values, which makes them harder to approximate. However, the DelActThresh parameter ensures that DyVEDeep constrains the effort spent in regions with uniform activation values. These effort maps corroborate our knobs' effectiveness in identifying the critical computations for the current input.

Figure 9: Two sample images, horse and dog, from the CIFAR-10 data set used to visualize the effort map of DyVEDeep

Figure 10: Comparing the activation map and effort map of DyVEDeep for features in CIFAR-10 network layer C1 for the horse input

Figure 11: Comparing the activation map and effort map of DyVEDeep for features in CIFAR-10 network layer C1 for the dog input

Deep Neural Networks have significantly impacted the field of machine learning by enabling state-of-the-art functional accuracies on a variety of machine learning problems involving image, video, text, speech and other modalities. However, their large-scale structure renders them compute- and data-intensive, which remains a key challenge. We observe that state-of-the-art DNNs are static, i.e., they perform the same set of computations on all inputs. However, in many real-world datasets, there exists significant heterogeneity in the compute effort required to classify each input. Leveraging this opportunity, we propose Dynamic Variable Effort Deep Neural Networks (DyVEDeep), or DNNs that modulate their compute effort dynamically, ascertaining which computations are critical to classify a given input. We build DyVEDeep versions of 4 popular image recognition benchmarks. Our experiments demonstrate that DyVEDeep achieves a 2.1×-2.6× reduction in scalar operations and a 1.9×-2.3× reduction in runtime on a Caffe-based sequential software implementation, while
maintaining the same level of classification accuracy."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Fixed point optimization of deep convolutional neural networks for object recognition. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2015, South Brisbane, Queensland, Australia, April 19-24, 2015.

Yoshua Bengio. Estimating or propagating gradients through stochastic neurons. CoRR, abs/1305.2982, 2013. URL http://arxiv.org/abs/1305.2982.

Lei Jimmy Ba and Brendan J. Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pp. 3084-3092, 2013. URL http://papers.nips.cc/paper/5032-adaptive-dropout-for-training-deep-neural-networks.

BVLC. Caffe CIFAR-10 network, b. URL https://github.com/BVLC/caffe/blob/master/examples/cifar10/cifar10_quick_train_test.prototxt.

Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, 2012.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pp. 248-255, 2009. doi: 10.1109/CVPRW.2009.5206848. URL http://dx.doi.org/10.1109/CVPRW.2009.5206848.

Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pp. 2148-2156, 2013. URL http://papers.nips.cc/paper/5025-predicting-parameters-in-deep-learning.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In British Machine Vision Conference, BMVC 2014, Nottingham, UK, September 1-5, 2014, 2014. URL http://www.bmva.org/bmvc/2014/papers/paper073/index.html.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.

Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. CoRR, abs/1404.5997, 2014. URL http://arxiv.org/abs/1404.5997.

Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall F. Tappen, and Marianna Pensky. Sparse convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pp. 806-814, 2015a. doi: 10.1109/CVPR.2015.7298681. URL http://dx.doi.org/10.1109/CVPR.2015.7298681.

Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks through FFTs. CoRR, abs/1312.5851, 2013.
URL http://arxiv.org/abs/1312.5851.

Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, and Stephen W. Keckler. Virtualizing deep neural networks for memory-efficient neural network design. CoRR, abs/1602.08124, 2016. URL http://arxiv.org/abs/1602.08124.

Yann LeCun, John S. Denker, and Sara A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems 2, [NIPS Conference, Denver, Colorado, USA, November 27-30, 1989], pp. 598-605, 1989. URL http://papers.nips.cc/paper/250-optimal-brain-damage.

Chao Liu, Zhiyong Zhang, and Dong Wang. Pruning deep neural networks by optimal brain damage. In INTERSPEECH 2014, 15th Annual Conference of the International Speech Communication Association, Singapore, September 14-18, 2014, pp. 1092-1095, 2014. URL http://www.isca-speech.org/archive/interspeech_2014/i14_1092.html.

Shankar Ganesh Ramasubramanian, Rangharajan Venkatesan, Mrigank Sharad, Kaushik Roy, and Anand Raghunathan. SPINDLE: Spintronic deep learning engine for large-scale neuromorphic computing. In Proceedings of the 2014 International Symposium on Low Power Electronics and Design, ISLPED '14, pp. 15-20, New York, NY, USA, 2014. ACM. ISBN 978-1-4503-2975-0. doi: 10.1145/2627369.2627625. URL http://doi.acm.org/10.1145/2627369.2627625.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556.

Shawn Tan and Khe Chai Sim. Towards implicit complexity control using variable-depth deep neural networks for automatic speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2016, Shanghai, China, March 20-25, 2016, pp. 5965-5969, 2016. doi: 10.1109/ICASSP.2016.7472822. URL http://dx.doi.org/10.1109/ICASSP.2016.7472822."}]
BJbD_Pqlg | [{"section_index": "0", "section_name": "HUMAN PERCEPTION IN COMPUTER VISION", "section_text": "Ron Dekel

Department of Neurobiology, Weizmann Institute of Science, Rehovot 7610001, Israel

Computer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprecedented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now be solved more easily based on previous learning. Biological vision (learned in life and through evolution) is also accurate and general-purpose. Is it possible that these different learning regimes converge to similar problem-dependent optimal computations? We therefore asked whether the human system-level computation of visual perception has DNN correlates and considered several anecdotal test cases. We found that perceptual sensitivity to image changes has DNN mid-computation correlates, while sensitivity to segmentation, crowding and shape has DNN end-computation correlates. Our results quantify the applicability of using DNN computation to estimate perceptual loss, and are consistent with the fascinating theoretical view that properties of human perception are a consequence of architecture-independent visual learning."}, {"section_index": "1", "section_name": "QUICK EXPERT SUMMARY", "section_text": "Considering the learned computation of ImageNet-trained DNNs, we find:

Large computation changes for perceptually salient image changes (Figure 1). Gestalt: segmentation, crowding, and shape interactions in computation (Figure 2). Contrast constancy: bandpass transduction in first layers is later corrected (Figure 3).

These properties are reminiscent of human perception, perhaps because learned general-purpose classifiers (human and DNN) tend to converge.

Deep neural networks (DNNs) are a class of computer learning algorithms that have become widely used in recent years (LeCun et al., 2015). By training with millions of examples, such models achieve unparalleled degrees of task-trained accuracy (Krizhevsky et al., 2012). This is not unprecedented on its own - steady progress has been made in computer vision for decades, and to some degree current designs are just scaled versions of long-known principles (Lecun et al., 1998). In previous models, however, only the design is general-purpose, while learning is mostly specific to the context of a trained task. Interestingly, for current DNNs trained to solve a large-scale image recognition problem (Russakovsky et al., 2014), the learned computation is useful as a building block for drastically different and untrained visual problems (Huh et al., 2016; Yosinski et al., 2014).

For example, orientation- and frequency-selective features (Gabor patches) can be considered general-purpose visual computations. Such features are routinely discovered by DNNs (Krizhevsky et al., 2012; Zeiler & Fergus, 2013), by other learning algorithms (Hinton & Salakhutdinov, 2006;
Lee et al., 2008; 2009; Olshausen & Field, 1997), and are extensively hard-coded in computer vision (Jain & Farrokhnia, 1991). Furthermore, a similar computation is believed to underlie the spatial response properties of visual neurons of diverse animal phyla (Carandini et al., 2005; DeAngelis et al., 1995; Hubel & Wiesel, 1968; Seelig & Jayaraman, 2013), and is evident in human visual perception (Campbell & Robson, 1968; Fogel & Sagi, 1989; Neri et al., 1999). This diversity culminates in satisfying theoretical arguments as to why Gabor-like features are so useful in general-purpose vision (Olshausen, 1996; Olshausen & Field, 1997).

As an extension, general-purpose computations are perhaps of universal use. For example, a dimensionality reduction transformation that optimally preserves recognition-relevant information may constitute an ideal computation for both DNN and animal. More formally, different learning algorithms with different physical implementations may converge to the same computation when similar (or sufficiently general) problems are solved near-optimally. Following this line of reasoning, DNN models with good general-purpose computations may be computationally similar to biological visual systems, even more so than less accurate and less general biologically plausible simulations (Kriegeskorte, 2015; Yamins & DiCarlo, 2016).

Related work seems to be consistent with computation convergence. First, different DNN training regimes seem to converge to a similar learned computation (Li et al., 2015; Zhou et al., 2014). Second, image representation may be similar in trained DNN and in biological visual systems. That is, when the same images are processed by DNN and by humans or monkeys, the final DNN computation stages are strong predictors of human fMRI and monkey electrophysiology data collected from visual areas V4 and IT (Cadieu et al., 2014; Khaligh-Razavi & Kriegeskorte, 2014; Yamins et al., 2014). Furthermore, more accurate DNN models exhibit stronger predictive power (Cadieu
et al., 2014; Dubey & Agarwal, 2016; Yamins et al., 2014), and the final DNN computation stage is even a strong predictor of human-perceived shape discrimination (Kubilius et al., 2016). However, some caution is perhaps unavoidable, since measured similarity may be confounded with categorization consistency, view-invariance resilience, or similarity in the inherent difficulty of the tasks undergoing comparison. A complementary approach is to consider images that were produced by optimizing trained DNN-based perceptual metrics (Gatys et al., 2015a;b; Johnson et al., 2016; Ledig et al., 2016), which perhaps yields undeniable evidence of non-trivial computational similarity, although a more objective approach may be warranted.

Here, we quantify the similarity between human visual perception, as measured by psychophysical experiments, and individual computational stages (layers) in feed-forward DNNs trained on a large-scale image recognition problem (ImageNet LSVRC). Comparison is achieved by feeding the experimental image stimuli to the trained DNN and comparing a DNN metric (mean mutual information or mean absolute change) to perceptual data. The use of reduced (simplified and typically non-natural) stimuli ensures identical inherent task difficulty across compared categories and prevents confounding of categorization consistency with measured similarity. Perception, a system-level computation, may be influenced less by the architectural discrepancy (biology vs. DNN) than are neural recordings.

From a perceptual perspective, an image change of fixed size has different saliency depending on image context (Polat & Sagi, 1993). To investigate whether the computation in trained DNNs exhibits similar contextual modulation, we used the Local Image Masking Database (Alam et al., 2014), in which 1080 partially-overlapping images were subjected to different levels of the same random additive noise perturbation, and for each image, a psychophysical experiment determined the threshold noise level at which the added-noise image is discriminated from two noiseless copies at 75% (Figure 1a). Threshold is the objective function that is compared with an L1-distance correlate in the DNN representation. The scale of measured threshold was:

$20 \cdot \log_{10} \frac{\mathrm{std}(noise)}{T}$

where std(noise) is the standard deviation of the additive noise, and T is the mean image pixel value calculated over the region where the noise is added (i.e. image center).

Figure 1: Predicting perturbation thresholds. a, For a fixed image perturbation, perceptual detection threshold (visualized by red arrow) depends on image context. b, Measured perceptual threshold is correlated with the average L1 change in DNN computation due to image perturbation (for DNN model VGG-19, image scale = 100%). c, Explained variability (R2) of perceptual threshold data when L1 change is based on isolated computational layers, for different input image scales. Same VGG-19 model as in (b). X-axis labels: data refers to raw image pixel data, conv*_1 and fc_* are the before-ReLU output of a convolution and a fully-connected operation, respectively, and prob is the output class label probabilities vector. d, Example images for which predicted threshold in b is much higher than perceptually measured ("Overshoot", where perturbation saliency is better than predicted), or vice versa ("Undershoot"). Examples are considered from several perceptual threshold ranges (±2 dB of shown number).

The DNN correlate of perceptual threshold we used was the average L1 change in DNN computation between added-noise images and the original, noiseless image. Formally,

$L_1^n(I) = \overline{\left| a_i(I + \mathrm{noise}(n)) - a_i(I) \right|}$

where $a_i(X)$ is the activation value of neuron i during the DNN feedforward pass for input image X, and the inner average (denoted by bar) is taken over repetitions with random n-sized noise (noise is introduced at random phase spectra in a fixed image location, an augmentation that follows the between-image randomization described by Alam et al., 2014; the number of repetitions was 10 or more). Unless otherwise specified, the final L1 prediction is $L_1^n$ averaged across noise levels (-40 to 25 dB with 5-dB intervals) and computational neurons (first within and then across computational stages). Using L1 averaged across noise levels as a correlate for the noise level of perceptual threshold is a simple approximation with minimal assumptions.

Results show that the L1 metric is correlated with the perceptual threshold for all tested DNN architectures (Figure 1b, 4a-c). In other words, higher values of the L1 metric (indicating larger changes in DNN computation due to image perturbation, consistent with higher perturbation saliency) correspond to lower measured perceptual thresholds.
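The L1 correlate of Equation 2 is straightforward to compute from layer activations. The following is a minimal Python sketch, not the code used in the paper; the activation and noise callables are assumed helpers.

```python
import numpy as np

# Sketch of the L1 perturbation metric of Equation 2 (illustrative).
# activations_fn(image) -> list of per-layer activation arrays (one forward pass);
# noise_fn(image, db)   -> random additive noise at the given dB level.
# Both callables are hypothetical helpers, not from the paper.

def l1_saliency(activations_fn, image, noise_fn, noise_dbs, n_reps=10):
    base = activations_fn(image)
    levels = []
    for db in noise_dbs:                  # e.g. -40 to 25 dB in 5-dB steps
        reps = []
        for _ in range(n_reps):           # bar average over random noise draws
            pert = activations_fn(image + noise_fn(image, db))
            per_layer = [np.abs(p - b).mean() for p, b in zip(pert, base)]
            reps.append(np.mean(per_layer))   # first within, then across stages
        levels.append(np.mean(reps))
    return float(np.mean(levels))         # finally, average across noise levels
```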
To quantify and compare predictive power, we considered the percent of linearly explained variability (R2). For all tested DNN architectures, the prediction explains about 60% of the perceptual variability (Tables 1, 2; baselines in Tables 3-5), where the inter-person similarity representing the theoretical maximum is 84% (Alam et al., 2014). The DNN prediction is far more accurate than a prediction based on simple image statistical properties (e.g. RMS contrast), and is on par with a detailed perceptual model that relies on dozens of psychophysically collected parameters (Alam et al., 2014). The Spearman correlation coefficient is much higher compared with the perceptual model (with an absolute SROCC value of about 0.79 compared with 0.70, Table 1), suggesting that the L1 metric gets the order right but not the scale. We did not compare these results with models that fit the experimental data (e.g. Alam et al., 2015; Liu & Allebach, 2016), since the L1 metric has no explicit parameters. Also, different DNN architectures exhibited high similarity in their predictions (R2 of about 0.9, e.g. Figure 4d).

Prediction can also be made from isolated computational stages, instead of across all stages as before. This analysis shows that the predictive power peaks mid-computation across all tested image scales (Figure 1c). This peak is consistent with the use of middle DNN layers to optimize perceptual metrics (Gatys et al., 2015a;b; Ledig et al., 2016), and is reminiscent of cases in which low- to mid-level vision is the performance-limiting computation in the detection of at-threshold stimuli (Campbell & Robson, 1968; Del Cul et al., 2007).

Finally, considering the images for which the L1-based prediction has a high error suggests a factor which causes a systematic inconsistency with perception (Figures 1d, 6). This factor may be related to the mean image luminance: by introducing noise perturbations according to the scale of Equation 1, a fixed noise size (in dB) corresponds to smaller pixel changes in dark compared with bright images. (Using this scale reflects an assumption of multiplicative rather than additive conservation; this assumption may be justified for the representation at the final, but perhaps not the intermediate, computational stages, considering the log-linear contrast response discussed in Section 5.) Another factor may be the degree to which image content is identifiable.

Table 1: Prediction accuracy. Percent of linearly explained variability (R2), absolute value of Spearman rank-order correlation coefficient (SROCC), and the root mean squared error of the linear prediction (RMSE) are presented for each prediction model. Note the measurement scale of the threshold data being predicted (Eq. 1). (*) Thresholds linearized through a logistic transform before prediction (see Larson & Chandler, 2010), possibly increasing but not decreasing measured predictive strength. (**) Average of four similar alternatives.

The previous analysis suggested gross computational similarity between human perception and trained DNNs. Next, we aimed to extend the comparison to more interpretable properties of perception by considering more highly controlled designs. To this end, we considered cases in which a static background context modulates the difficulty of discriminating a foreground shape, despite no spatial overlap of foreground and background. This permits interpretation by considering the cause of the modulation.
We first consider segmentation, in which arrangement is better discriminated for arrays of consistently oriented lines compared with inconsistently oriented lines (Figure 2a) (Pinchuk-Yacobi et al., 2016). Crowding is considered next, where surround clutter that is similar to the discriminated target leads to deteriorated discrimination performance (Figure 2b) (Livne & Sagi, 2007). Last to be addressed is object superiority, in which a target line location is better discriminated when it is in a shape-forming layout (Figure 2c) (Weisstein & Harris, 1974). In this case, clutter is controlled by having the same fixed number of lines in context. To measure perceptual discrimination, these works introduced performance-limiting manipulations such as location jittering, brief presentation, and temporal masking. While different manipulations showed different measured values, order-of-difficulty was typically preserved. Here we changed all the original performance-limiting manipulations to location jittering (whole-shape or element-wise, see Section 8.4).

To quantify discrimination difficulty in DNNs, we measured the target-discriminative information of isolated neurons (where performance is limited by location jittering noise), then averaged across all neurons (first within and then across computational layer stages). Specifically, for each neuron, we measured the reduction in categorization uncertainty due to observation, termed mutual information (MI):

$MI(A_i; C) = H(C) - H(C \mid A_i)$

where H stands for entropy, and $A_i$ is a random variable for the value of neuron i when the DNN processes a random image from a category defined by the random variable C. For example, if a neuron gives a value in the range of 100.0 to 200.0 when the DNN processes images from category A, and 300.0 to 400.0 for category B, then the category is always known by observing the value, and so mutual information is high (MI = 1 bit). On the other extreme, if the neuron has no discriminative task information, then MI = 0 bits. To measure MI, we quantized activations into eight equal-amount bins, and used 500 samples (repetitions having different location jittering noise) across categories. The motivation for this correlate is the assumption that the perceptual order-of-difficulty reflects the quantity of task-discriminative information in the representation.
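The per-neuron MI estimate can be computed directly from quantized activation histograms. Below is a minimal sketch under the stated choices (eight equal-count bins); restricting to two categories and the function name are assumptions for illustration, not the paper's code.

```python
import numpy as np

# Sketch of the per-neuron mutual information estimate described above:
# activations are quantized into eight equal-count bins and MI(A_i; C) is
# computed from the joint histogram (two categories assumed, in bits).

def neuron_mi(acts_a, acts_b, n_bins=8):
    both = np.concatenate([acts_a, acts_b])
    edges = np.quantile(both, np.linspace(0, 1, n_bins + 1))  # equal-count bins
    edges[-1] += 1e-9                                         # include maximum
    joint = np.zeros((n_bins, 2))
    for c, acts in enumerate([acts_a, acts_b]):
        idx = np.clip(np.searchsorted(edges, acts, side="right") - 1, 0, n_bins - 1)
        for i in idx:
            joint[i, c] += 1
    joint /= joint.sum()                                      # joint p(bin, category)
    pa = joint.sum(1, keepdims=True)                          # p(bin)
    pc = joint.sum(0, keepdims=True)                          # p(category)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pc)[nz])).sum())
```

With 500 jittered samples per category, as in the text, acts_a and acts_b would each hold 500 activation values of one neuron.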
Results show that, across hundreds of configurations (varying pattern element size, target location, jitter magnitude, and DNN architecture; see Section 8.4), the qualitative order of difficulty in terms of the DNN MI metric is consistent with the order of difficulty measured in human psychophysical experiments, for the conditions addressing segmentation and crowding (Figures 2d, 7; for baseline models see Figure 8). It is interesting to note that the increase in similarity develops gradually along different layer types in the DNN computation (i.e. not just pooling layers), and is accompanied by a gradual increase in the quantity of task-relevant information (Figure 2e-g). This indicates a link between task relevance and computational similarity for the tested conditions. Note that, unlike the evident increase in isolated-unit task information, the task information from all units combined decreases by definition along any computational hierarchy. An intuition for this result is that the total hidden information decreases, while more accessible per-unit information increases.

For shape formation, four out of six shapes consistently show an order of difficulty like perception, and two shapes consistently do not (caricature at Figure 2h; actual data at Figure 9).

Figure 2: Background context. a-c, Illustrations of reproduced discrimination stimuli for three psychophysical experiments (actual images used were white-on-black rather than black-on-white, and pattern size was smaller, see Figures 12-14). d, Number of configurations for which the order-of-difficulty in discrimination is qualitatively consistent with perception according to a mutual information DNN metric. Configurations vary in pattern (element size, target location, and jitter magnitude; see Section 8.4) and in DNN architecture used (CaffeNet, GoogLeNet, VGG-19, and ResNet-152). The DNN metric is the average across neurons of the isolated-neuron target-discriminative information (averaged first within, and then across, computational layer stages), where performance is limited by location jittering (e.g. evident jitter in illustrations). e-g, The value of the MI metric across computational layers of model VGG-19 for a typical pattern configuration. The six "hard" (gray) lines in Shape MI correspond to six different layouts (see Section 8.4.3). Analysis shows that for isolated computation stages, similarity to perception is evident only at the final DNN computation stages. h, A caricature summarizing the similarity and discrepancy of perception and the MI-based DNN prediction for Shape (see Figure 9).

A cornerstone of biological vision research is the use of sine gratings at different frequencies, orientations, and contrasts (Campbell & Robson, 1968). Notable are results showing that the lowest perceivable contrast in human perception depends on frequency. Specifically, high spatial frequencies are attenuated by the optics of the eye, and low spatial frequencies are believed to be attenuated due to processing inefficiencies (Watson & Ahumada, 2008), so that the lowest perceivable contrast is found at intermediate frequencies. (To appreciate this yourself, examine Figure 3a.) Thus, for low-contrast gratings, the physical quantity of contrast is not perceived correctly: it is not preserved across spatial frequencies. Interestingly, this is corrected for gratings of higher contrasts, for which perceived contrast is more constant across spatial frequencies (Georgeson & Sullivan, 1975).

The DNN correlate we considered is the mean absolute change in DNN representation between a gray image and sinusoidal gratings, at all combinations of spatial frequency and contrast. Formally, for neurons in a given layer, we measured:

$L_1(\mathrm{contrast}, \mathrm{frequency}) = \frac{1}{N_{\mathrm{neurons}}} \sum_{i=1}^{N_{\mathrm{neurons}}} \left| \bar{a}_i(\mathrm{contrast}, \mathrm{frequency}) - \bar{a}_i(0, 0) \right|$

where $\bar{a}_i(\mathrm{contrast}, \mathrm{frequency})$ is the average activation value of neuron i over 250 sine images (random orientation, random phase), $\bar{a}_i(0, 0)$ is the response to a blank (gray) image, and $N_{\mathrm{neurons}}$ is the number of neurons in the layer. This measure reflects the overall change in response vs. the gray image.
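The response surface of Equation 3 can be sketched as follows; this is an illustrative Python sketch in which the layer-response and grating-synthesis callables are assumed helpers, not the paper's code.

```python
import numpy as np

# Sketch of Equation 3: mean absolute change of a layer's trial-averaged
# response to sine gratings, relative to a blank gray image.
# layer_response_fn(image) -> activation array for one layer;
# make_grating(contrast, frequency) -> image with random orientation and phase.

def l1_vs_gray(layer_response_fn, make_grating, contrast, frequency, n=250):
    acts = np.mean([layer_response_fn(make_grating(contrast, frequency))
                    for _ in range(n)], axis=0)        # trial-averaged a_i
    gray = layer_response_fn(make_grating(0.0, 0.0))   # blank (gray) image
    return float(np.abs(acts - gray).mean())           # mean over neurons i
```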
Results show a bandpass response for low-contrast gratings (blue lines strongly modulated by frequency, Figures 3, 10), and what appears to be a mostly constant response at high contrast for end-computation layers (red lines appear more invariant to frequency), in accordance with perception.

Figure 3: Contrast sensitivity. a, Perceived contrast is strongly affected by spatial frequency at low contrast, but less so at high contrast (which preserves the physical quantity of contrast and is thus termed constancy). b, The L1 change in VGG-19 representation between a gray image and images depicting sinusoidal gratings at each combination of sine spatial frequency (x-axis) and contrast (color) (random orientation, random phase), considering the raw image pixel data representation (data), the before-ReLU output of the first convolutional layer representation (conv1_1), the output of the last fully-connected layer representation (fc8), and the output class label probabilities representation (prob).

We next aimed to compare these results with perception. Data from human experiments is generally iso-output (i.e. for a pre-set output, such as 75% detection accuracy, the input is varied to find the value which produces the preset output). However, the DNN measurements here are iso-input (i.e. for a fixed input contrast the L1 is measured). As such, human data should be compared to the interpolated inverse of the DNN measurements. Specifically, for a set output value, the interpolated contrast value which produces the output is found for every frequency (Figure 11). This analysis permits quantifying the similarity of iso-output curves for human and DNN, measured here as the percent of log-contrast variability in the human measurements which is explained by the DNN predictions. This showed a high explained variability at the end-computation stage (prob layer, R2 = 94%), but, importantly, a similarly high value at the first computational stage (conv1_1 layer, R2 = 96%). Intuitively, while the "internal representation" variability in terms of L1 is small, the iso-output number-of-input-contrast-changes variability is still high. For example, for the prob layer, about the same L1 is measured for (Contrast=1, freq=75) and for (Contrast=0.18, freq=12).

An interesting, unexpected observation is that the logarithmically spaced contrast inputs are linearly spaced at the end-computation layers. That is, the average change in DNN representation scales logarithmically with the size of the input change. This can be quantified by the correlation of output L1 with log-contrast input, which showed R2 = 98% (averaged across spatial frequencies) for prob, while much lower values were observed for early and middle layers (up to layer fc7). The same computation when scrambling the learned parameters of the model showed R2 = 60%. Because the degree of log-linearity observed was extremely high, it may be an important emergent property of the learned DNN computation, which may deserve further investigation. However, this property is only reminiscent of, and not immediately consistent with, the perceptual power-law scaling (Gottesman et al., 1981).
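The iso-output inversion described above amounts to a one-dimensional interpolation per frequency. A minimal sketch, assuming (as the text does implicitly) that L1 increases monotonically with contrast at a fixed frequency:

```python
import numpy as np

# Sketch of the iso-output inversion: given L1 measured on a grid of contrasts
# at one frequency, find the interpolated contrast producing a target L1 level.
# Monotonicity of L1 in contrast is assumed.

def iso_output_contrast(contrasts, l1_values, target_l1):
    logc = np.log10(contrasts)            # interpolate in log-contrast
    return 10 ** np.interp(target_l1, l1_values, logc)
```

Applying this once per frequency yields an iso-output curve that can be compared directly with the human contrast-sensitivity data (Figure 11).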
It may be tempting to believe that what we see is the result of a simple transformation of visual input. Centuries of psychophysics have, however, revealed complex properties in perception, by crafting stimuli that isolate different perceptual properties. In our study, we used the same stimuli to investigate the learned properties of deep neural networks (DNNs), which are the leading computer vision algorithms to date (LeCun et al., 2015).

The DNNs we used were trained in a supervised fashion to assign labels to input images. To some degree, this task resembles the simple verbal explanations given to children by their parents. Since human perception is obviously much richer than the simple external supervision provided, we were not surprised to find that the best correlate for perceptual saliency of image changes is a part of the DNN computation that is only supervised indirectly (i.e. the mid-computation stage). This similarity is so strong that, even with no fine-tuning to human perception, the DNN metric is competitively accurate, even compared with a direct model of perception.

This strong, quantifiable similarity to a gross aspect of perception may, however, reflect a mix of similarities and discrepancies in different perceptual properties. To address isolated perceptual effects, we considered experiments that manipulate a spatial interaction, where the difficulty of discriminating a foreground target is modulated by a background context. Results showed modulation of DNN target-diagnostic, isolated unit information, consistent with the modulation found in perceptual discrimination. This was shown for contextual interactions reflecting grouping/segmentation (Harris et al., 2015), crowding/clutter (Livne & Sagi, 2007; Pelli et al., 2004), and shape superiority (Weisstein & Harris, 1974). DNN similarity to these grouping/gestalt phenomena appeared at the end-computation stages.

No less interesting are the cases in which there is no similarity. For example, perceptual effects related to 3D (Erdogan & Jacobs, 2016) and symmetry (Pramod & Arun, 2016) do not appear to have a strong correlate in the DNN computation. Indeed, it may be interesting to investigate the influence of visual experience in these cases. And, equally important, similarity should be considered in terms of specific perceptual properties rather than as a general statement.

In the human hierarchy of visual processing areas, information is believed to be processed in a feedforward sweep, followed by recurrent processing loops (top-down and lateral) (Lamme & Roelfsema, 2000). Thus, for example, the early visual areas can perform deep computations. Since mapping from visual areas to DNN computational layers is not simple, it will not be considered here. (Note that ResNet connectivity is perhaps reminiscent of unrolled recurrent processing.)

Interestingly, debate is ongoing about the degree to which visual perception is dependent on recurrent connectivity (Fabre-Thorpe et al., 1998; Hung et al., 2005): recurrent representations are obviously richer, but feedforward computations converge much faster. An implicit question here regarding the extent of feasible feed-forward representations is, perhaps: Can contour segmentation, contextual influences, and complex shapes be learned?
Based on the results reported here for feed-forward DNNs, a feedforward representation may seem sufficient. However, the extent to which this is true may be very limited. In this study we used small images with a small number of lines, while effects such as contour integration seem to take place even in very large configurations (Field et al., 1993). Such scaling seems more likely in a recurrent implementation. As such, a reasonable hypothesis may be that the full extent of contextual influence is only realizable with recurrence, while feedforward DNNs learn a limited version by converging towards a useful computation.

The use of DNNs in modeling of visual perception (or of biological visual systems in general) is subject to a tradeoff between accuracy and biological plausibility. In terms of architecture, other deep models better approximate our current understanding of the visual system (Riesenhuber & Poggio, 1999; Serre, 2014). However, the computation in trained DNN models is quite general-purpose (Huh et al., 2016; Yosinski et al., 2014) and offers unparalleled accuracy in recognition tasks (LeCun et al., 2015). Since visual computations are, to some degree, task- rather than architecture-dependent, an accurate and general-purpose DNN model may better resemble biological processing than less accurate biologically plausible ones (Kriegeskorte, 2015; Yamins & DiCarlo, 2016). We support this view by considering a controlled condition in which similarity is not confounded with task difficulty or categorization consistency."}, {"section_index": "3", "section_name": "6.3.2 USE IN PSYCHOPHYSICS", "section_text": "Our results imply that trained DNN models have good predictive value for outcomes of psychophysical experiments, permitting a zero-cost first-order approximation. Note, however, that the scope of such simulations may be limited, since learning (Sagi, 2011) and adaptation (Webster, 2011) were not considered here.

Another fascinating option is the formation of hypotheses in terms of mathematically differentiable trained-DNN constraints, whereby it is possible to efficiently solve for the visual stimuli that optimally dissociate the hypotheses (see Gatys et al. 2015a;b; Mordvintsev et al. 2015 and note Goodfellow et al. 2014; Szegedy et al. 2013). The conclusions drawn from such stimuli can be independent of the theoretical assumptions about the generating process (for example, creating new visual illusions that can be seen regardless of how they were created)."}, {"section_index": "4", "section_name": "6.3.3 USE IN ENGINEERING (A PERCEPTUAL LOSS METRIC)", "section_text": "As proposed previously (Dosovitskiy & Brox, 2016; Johnson et al., 2016; Ledig et al., 2016), the saliency of small image changes can be estimated as the representational distance in trained DNNs. Here, we quantified this approach by relying on data from a controlled psychophysical experiment (Alam et al., 2014). We found the metric to be far superior to simple image statistical properties and on par with a detailed perceptual model (Alam et al., 2014). This metric can be useful in image compression, whereby optimizing degradation across image sub-patches by comparing perceptual loss may minimize visual artifacts and content loss."}, {"section_index": "5", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Yoram Bonneh for his valuable questions which led to much of this work."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Md Mushfiqul Alam, Kedarnath P Vilankar, David J Field, and Damon M Chandler. Local masking in natural images: A database and analysis. Journal of Vision, 14(8):22, jan 2014. ISSN 1534-7362. doi: 10.1167/14.8.22.

Matteo Carandini, Jonathan B Demb, Valerio Mante, David J Tolhurst, Yang Dan, Bruno A Olshausen, Jack L Gallant, and Nicole C Rust. Do we know what the early visual system does? The Journal of Neuroscience, 25(46):10577-97, nov 2005. ISSN 1529-2401.
doi: 10.1523/JNEUROSCI.3726-05.2005.

Antoine Del Cul, Sylvain Baillet, and Stanislas Dehaene. Brain dynamics underlying the nonlinear threshold for access to consciousness. PLoS Biol, 5(10):e260, 2007. ISSN 1545-7885.

Goker Erdogan and Robert A Jacobs. A 3D shape inference model matches human visual object similarity judgments better than deep convolutional neural networks. In Proceedings of the 38th Annual Conference of the Cognitive Science Society. Cognitive Science Society, Austin, TX, 2016.

Michele Fabre-Thorpe, Ghislaine Richard, and Simon J Thorpe. Rapid categorization of natural images by rhesus monkeys. Neuroreport, 9(2):303-308, 1998. ISSN 0959-4965.

David J Field, Anthony Hayes, and Robert F Hess. Contour integration by the human visual system: evidence for a local association field. Vision Research, 33(2):173-193, 1993. ISSN 0042-6989.

Itzhak Fogel and Dov Sagi. Gabor filters as texture discriminator. Biological Cybernetics, 61(2):103-113, 1989. ISSN 0340-1200.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. A neural algorithm of artistic style. aug 2015a.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. may 2015b.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Jon Gottesman, Gary S Rubin, and Gordon E Legge. A power law for perceived contrast in human vision. Vision Research, 21(6):791-799, 1981. ISSN 0042-6989.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. dec 2015.

Minyoung Huh, Pulkit Agrawal, and Alexei A. Efros. What makes ImageNet good for transfer learning? aug 2016.

Hinton and Salakhutdinov. Reducing the dimensionality of data with neural networks. Science (New York, N.Y.), 313(5786):504-7, jul 2006. ISSN 1095-9203. doi: 10.1126/science.1127647.

Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. arXiv preprint arXiv:1603.08155, 2016.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, may 2015. ISSN 0028-0836. doi: 10.1038/nature14539.

Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent learning: Do different neural networks learn the same representations? arXiv preprint arXiv:1511.07543, 2015.

Yucheng Liu and Jan P. Allebach. Near-threshold perceptual distortion prediction based on optimal structure classification. In 2016 IEEE International Conference on Image Processing (ICIP), pp. 106-110. IEEE, sep 2016. ISBN 978-1-4673-9961-6. doi: 10.1109/ICIP.2016.7532328.

Tomer Livne and Dov Sagi. Configuration influence on crowding. Journal of Vision, 7(2):4, 2007. ISSN 1534-7362.

Honglak Lee, Chaitanya Ekanadham, and Andrew Y. Ng.
Sparse deep belief net model for visual area V2. In Advances in Neural Information Processing Systems, pp. 873-880, 2008.

Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09, pp. 1-8, New York, New York, USA, jun 2009. ACM Press. ISBN 9781605585161. doi: 10.1145/1553374.1553453.

Peter Neri, Andrew J Parker, and Colin Blakemore. Probing the human stereoscopic system with reverse correlation. Nature, 401(6754):695-698, 1999. ISSN 0028-0836.

Bruno A Olshausen. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996. ISSN 0028-0836.

Denis G Pelli, Melanie Palomares, and Najib J Majaj. Crowding is unlike ordinary masking: Distinguishing feature integration from detection. Journal of Vision, 4(12):12, 2004. ISSN 1534-7362.

Noga Pinchuk-Yacobi, Ron Dekel, and Dov Sagi. Expectation and the tilt aftereffect. Journal of Vision, 15(12):39, sep 2015. ISSN 1534-7362. doi: 10.1167/15.12.39.

Noga Pinchuk-Yacobi, Hila Harris, and Dov Sagi. Target-selective tilt aftereffect during texture learning. Vision Research, 124:44-51, 2016. ISSN 0042-6989.

U Polat and D Sagi. Lateral interactions between spatial channels: suppression and facilitation revealed by lateral masking experiments. Vision Research, 33(7):993-9, may 1993. ISSN 0042-6989.

R T Pramod and S P Arun. Do computational models differ systematically from human object perception? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1601-1609, 2016.

Johannes D Seelig and Vivek Jayaraman. Feature detection and orientation tuning in the Drosophila central complex. Nature, 503(7475):262-266, 2013. ISSN 0028-0836.

Thomas Serre. Hierarchical models of the visual system. In Encyclopedia of Computational Neuroscience, pp. 1-12. Springer, 2014. ISBN 1461473209.

Eero P Simoncelli and William T Freeman. The steerable pyramid: a flexible architecture for multi-scale derivative computation. In ICIP (3), pp. 444-447, 1995.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. sep 2014.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Andrew B Watson and Albert J Ahumada. Predicting visual acuity from wavefront aberrations. Journal of Vision, 8(4):17.1-19, jan 2008. ISSN 1534-7362. doi: 10.1167/8.4.17.

Michael A Webster. Adaptation and visual coding. Journal of Vision, 11(5), jan 2011. ISSN 1534-7362.

N. Weisstein and C. S. Harris. Visual detection of line segments: An object-superiority effect. Science, 186(4165):752-755, nov 1974. ISSN 0036-8075. doi: 10.1126/science.186.4165.752.

Daniel L K Yamins and James J DiCarlo. Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19(3):356-365, 2016. ISSN 1097-6256.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.

Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. nov 2013.

Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object detectors emerge in deep scene CNNs. pp.
12, dec 2014.

Figure 4: Predicting perceptual sensitivity to image changes (following Figure 1). a-c, The L1 change in CaffeNet, GoogLeNet, and ResNet-152 DNN architectures as a function of perceptual threshold. d, The L1 change in GoogLeNet as a function of the L1 change in VGG-19 (R2 = 0.87).

Figure 5: Prediction accuracy as a function of computational stage. a, Predicting perceptual sensitivity for model VGG-19 using the best single kernel (i.e. using one fitting parameter, no cross-validation), vs. the standard L1 metric (reproduced from Figure 1). b, For non-branch computational stages of model ResNet-152.

Model | R2 | SROCC | RMSE | Recognition accuracy
CaffeNet | .59 | .78 | 5.44 | 56%
GoogLeNet | .59 | .79 | 5.45 | 66%
VGG-19 | .60 | .79 | 5.40 | 70%
ResNet-152 | .53 | .74 | 5.82 | 75%

Table 2: Accuracy of perceptual sensitivity prediction and task-trained ImageNet center-crop top-1 validation accuracy for different DNN models (following Table 1, from which the third row is reproduced; used scale: 100%). The quality of prediction for ResNet-152 improves dramatically if only the first tens of layers are considered (see Figure 5b).

Model | R2 | SROCC | RMSE
VGG-19, scrambled weights | .18 | .39 | 7.76
Gabor filter bank | .32 | .12 | 8.03
Steerable-pyramid filter bank | .37 | .15 | 7.91

Table 3: Accuracy of perceptual sensitivity prediction for baseline models (see Section 8.2; used scale: 100%).

Model | R2 | SROCC | RMSE | Recognition accuracy
CaffeNet iter 1 | .46 | .67 | 6.30 | 0%
CaffeNet iter 50K | .59 | .79 | 5.43 | 37%
CaffeNet iter 100K | .60 | .79 | 5.41 | 39%
CaffeNet iter 150K | .60 | .78 | 5.43 | 53%
CaffeNet iter 200K | .59 | .78 | 5.45 | 54%
CaffeNet iter 250K | .59 | .78 | 5.43 | 56%
CaffeNet iter 300K | .59 | .78 | 5.44 | 56%
CaffeNet iter 310K | .59 | .78 | 5.44 | 56%

Table 4: Accuracy of perceptual sensitivity prediction during CaffeNet model standard training (used scale: 100%). Last row reproduced from Table 2.

Scale | Metric | Augmentation | Noise range | R2 | SROCC | RMSE
100% | L1 | noise phase | -40:25 dB | .60 | .79 | 5.40
66% | L1 | noise phase | -40:25 dB | .60 | .79 | 5.42
50% | L1 | noise phase | -40:25 dB | .57 | .77 | 5.57
100% | L2 | noise phase | -40:25 dB | .62 | .80 | 5.29
100% | L1 | None | -40:25 dB | .58 | .77 | 5.55
100% | L1 | noise phase | -20:25 dB | .59 | .78 | 5.46
100% | L1 | noise phase | -40:5 dB | .59 | .79 | 5.43

Table 5: Robustness of perceptual sensitivity prediction for varying prediction parameters for model VGG-19. First three rows reproduced from Table 1. Measurements for the lower noise range of -60:-40 dB were omitted by mistake.
Model | Day 1 | Days 2-4 | Masked
VGG-19 | .36 | .37 | .15
GoogLeNet | .31 | .22 | .16
MRSA-152 | .26 | .26 | .11
CaffeNet iter 1 | .32 | .29 | .39
CaffeNet iter 50K | .15 | .19 | .16
CaffeNet iter 310K | .16 | .12 | .18
Gabor Decomposition | .26 | .27 | .48
Steerable Pyramid | .24 | .32 | .25

Table 6: Background context for Shape. Shown is the Spearman rank-order correlation coefficient (SROCC) of perceptual data vs. model-based MI prediction across shapes (i.e. considering all shapes rather than only Easy vs. Hard; note that the original robust finding is the superiority of the Easy shape). Perceptual data from Weisstein & Harris (1974), where "Day 1" and "Days 2-4" (averaged) are for the reduced-masking condition depicted in their Figure 3.

Figure 6: Images where predicted threshold is too high ("Overshoot", where perturbation saliency is better than predicted) or too low ("Undershoot"), considered from several perceptual threshold ranges (±2 dB of shown number). Some images are reproduced from Figure 1.

Figure 7: Background context for different DNN models (following Figure 2).

Figure 8: Background context for baseline DNN models (following Figure 2). "CaffeNet iter 310K" is reproduced from Figure 7.

Figure 9: Background context for Shape. Shown for each model is the measured MI for the six "Hard" shapes as a function of the MI for the "Easy" shape. The last panel shows an analogous comparison measured in human subjects by Weisstein & Harris (1974).
A data point which lies below the dashed diagonal indicates a configuration for which discriminating line location is easier for the Easy shape compared with the relevant Hard shape.

Figure 10: Contrast sensitivity (following Figure 3) for DNN architectures CaffeNet, GoogLeNet and ResNet-152.

Figure 11: Comparison of contrast sensitivity. Shown are iso-output curves, for which perceived contrast is the same (Human), or for which the L1 change relative to a gray image is the same (DNN model VGG-19). To obtain a correspondence between human frequency values (given in cycles per degree of visual field) and DNN frequency values (given in cycles per image), a scaling was chosen such that the minima of the blue curves occur at the same frequency value. Human data is for subject M.A.G. as measured by Georgeson & Sullivan (1975)."}, {"section_index": "7", "section_name": "8.1 DNN MODELS", "section_text": "To collect DNN computation snapshots, we used MATLAB with MatConvNet version 1.0-beta2 (Vedaldi & Lenc, 2015). All MATLAB code will be made available upon acceptance of this manuscript. The pre-trained DNN models we used are: CaffeNet (which is a variant of AlexNet provided in Caffe, Jia et al., 2014), GoogLeNet (Szegedy et al., 2014), VGG-19 (Simonyan & Zisserman, 2014), and ResNet-152 (He et al., 2015). The models were trained on the same ImageNet LSVRC. The CaffeNet model was trained using Caffe with the default ImageNet training parameters (stopping at iteration 310,000) and imported into MatConvNet. For the GoogLeNet model, we used the imported pre-trained reference-Caffe implementation. For VGG-19 and ResNet-152, we used the imported pre-trained original versions. In all experiments the input image size was 224 × 224 or 227 × 227."}, {"section_index": "8", "section_name": "8.2 BASELINE MODELS", "section_text": "As baselines to compare with pre-trained DNN models, we consider: (a) a multiscale linear filter bank of Gabor functions, (b) a steerable-pyramid linear filter bank (Simoncelli & Freeman, 1995), (c) the VGG-19 model for which the learned parameters (weights) were randomly scrambled within layer, and (d) the CaffeNet model at multiple time points during training. For the Gabor decomposition, the following Gabor filters were used: all combinations of σ = {1, 2, 4, 8, 16, 32, 64} px, λ/σ = {1, 2}, orientation = {0, π/3, 2π/3, π, 4π/3, 5π/3}, and phase = {0, π/2}."},
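A filter bank of this form can be generated in a few lines. The sketch below uses a standard Gabor parameterization with the grid listed above; note that treating the two scale parameters as σ and the wavelength ratio λ/σ is an assumption about symbols lost in extraction, and this is not the authors' MATLAB code.

```python
import numpy as np

# Illustrative multiscale Gabor filter bank (baseline (a) above).

def gabor(sigma, ratio, theta, phase, size=None):
    size = size or int(6 * sigma) | 1                   # odd support, ~±3 sigma
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))       # Gaussian envelope
    return env * np.cos(2 * np.pi * xr / (ratio * sigma) + phase)

bank = [gabor(s, r, th, ph)
        for s in [1, 2, 4, 8, 16, 32, 64]
        for r in [1, 2]
        for th in np.arange(6) * np.pi / 3
        for ph in [0, np.pi / 2]]
```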
{"section_index": "9", "section_name": "8.3 IMAGE PERTURBATION EXPERIMENT", "section_text": "The noiseless images were obtained from Alam et al. (2014). In the main text, "image scale" refers to the percent coverage of the DNN input. Since the size of the original images (149 × 149) is smaller than the DNN input of (224 × 224) or (227 × 227), the images were resized by a factor of 1.5 so that a 100% image scale covers approximately the entire DNN input area.

Human psychophysics and DNN experiments were done for nearly identical images. A slight discrepancy relates to how the image is blended with the background in the special case where the region where noise is added has no image surround at one or two sides. On these sides (which depend on the technical procedure with which the images were obtained, see Alam et al., 2014), the surround blending here was hard, while the original was smooth."}, {"section_index": "10", "section_name": "8.4.1 SEGMENTATION", "section_text": "The images used are based on the Texture Discrimination Task (Karni & Sagi, 1991). In the variant considered here (Pinchuk-Yacobi et al., 2015), subjects were presented with a grid of lines, all of which were horizontal, except two or three that were diagonal. Subjects discriminated whether the arrangement of diagonal lines is horizontal or vertical, and this discrimination was found to be more difficult when the central line is horizontal rather than diagonal ('Hard' vs. 'Easy' in Figure 2a). To limit human performance in this task, two manipulations were applied: (a) the location of each line in the pattern was jittered, and (b) a noise mask was presented briefly after the pattern. Here we only retained (a).

A total of 90 configurations were tested, obtained by combinations of the following alternatives:

Three scales: line length of 9, 12.3, or 19.4 px (the number of lines co-varied with line length, see Figure 12). Three levels of location jittering, defined as a multiple of line length: {1, 2, 3} · 0.0625 · l px, where l is the length of a line in the pattern; jittering was applied separately to each line in the pattern (see the sketch after the Figure 12 caption). Ten locations of diagonal lines: center, random, four locations of half-distance from center to corners, four locations of half-distance from center to image borders.

For each configuration, the discriminated arrangement of diagonal lines was either horizontal or vertical, and the central line was either horizontal or diagonal (i.e. hard or easy).

Figure 12: Pattern scales used in the different configurations of the Segmentation condition. Actual images used were white-on-black rather than black-on-white.
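Element-wise location jittering of the kind described above can be sketched as follows; this is illustrative only, and the uniform jitter distribution is an assumption (the text does not specify it).

```python
import numpy as np

# Sketch of element-wise location jittering for the line patterns above.
# Jitter magnitude is a multiple of line length l: level * 0.0625 * l px.

def jitter_positions(positions, line_length, level, rng=np.random):
    amp = level * 0.0625 * line_length
    return positions + rng.uniform(-amp, amp, size=np.shape(positions))
```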
The images used are based on the object superiority effect by Weisstein & Harris (1974), where discriminating a line location is easier when, combined with surrounding lines, a shape is formed.

A total of 90 configurations were tested, obtained by combinations of the following alternatives:

- Three scales: discriminated-line length of 9, 15.1, or 22.7 px (see Figure 14).
- Five levels of whole-pattern location jittering, defined as a multiple of discriminated-line length: {1, 2, 5, 10, 15} · 0.0625 · l px, where l is the length of the discriminated line.
- Six 'hard' background line layouts (patterns b-f of their Figure 2 and the additional pattern f of their Figure 3 in Weisstein & Harris, 1974). The 'easy' layout was always the same (pattern a).

For each configuration, the line whose location is discriminated had four possible locations (two locations are shown in Figure 2c), and the surrounding background line layout could compose a shape (easy) or not (hard).

Figure 14: Pattern scales used in the different configurations of the Shape condition. Actual images used were white-on-black rather than black-on-white.
"}, {"section_index": "11", "section_name": "8.5 CONTRAST SENSITIVITY EXPERIMENT", "section_text": "The images used depicted sine gratings at different contrast, spatial frequency, sine phase, and sine orientation combinations.
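As a rough illustration of such stimuli (a sketch of my own, not the authors' generation code), one grating for a given parameter combination can be rendered as:

```python
import numpy as np

def sine_grating(size=224, contrast=0.5, cycles_per_image=7.0,
                 phase=0.0, orientation=0.0):
    """Render a sine grating in [0, 1] around a mid-gray background.

    contrast: Michelson contrast of the grating.
    cycles_per_image: spatial frequency, matching the units of Figure 10.
    orientation: grating orientation in radians.
    """
    coords = np.arange(size) / size          # normalized pixel coordinates
    x, y = np.meshgrid(coords, coords)
    # Rotate coordinates so the grating varies along `orientation`.
    u = x * np.cos(orientation) + y * np.sin(orientation)
    grating = np.sin(2 * np.pi * cycles_per_image * u + phase)
    return 0.5 + 0.5 * contrast * grating    # mid-gray mean luminance

img = sine_grating(contrast=0.1, cycles_per_image=7.0)
print(img.shape, img.min(), img.max())
```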
HJ0NvFzxl [{"section_index": "0", "section_name": "LEARNING GRAPHICAL STATE TRANSITIONS", "section_text": "Daniel D. Johnson
Department of Computer Science, Harvey Mudd College
301 Platt Boulevard
"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Graph-structured data is important in modeling relationships between multiple entities, and can be used to represent states of the world as well as many data structures. Li et al. (2016) describe a model known as a Gated Graph Sequence Neural Network (GGS-NN) that produces sequences from graph-structured input. In this work I introduce the Gated Graph Transformer Neural Network (GGT-NN), an extension of GGS-NNs that uses graph-structured data as an intermediate representation. The model can learn to construct and modify graphs in sophisticated ways based on textual input, and also to use the graphs to produce a variety of outputs. For example, the model successfully learns to solve almost all of the bAbI tasks (Weston et al., 2016), and also discovers the rules governing graphical formulations of a simple cellular automaton and a family of Turing machines.
"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Many different types of data can be formulated using a graph structure. One form of data that lends itself to a graphical representation is data involving relationships (edges) between entities (nodes). Abstract maps of places and paths between them also have a natural graph representation, where places are nodes and paths are edges. In addition, many data structures can be expressed in graphical form, including linked lists and binary trees.

Substantial research has been done on producing output when given graph-structured input (Kashima et al., 2003; Shervashidze et al., 2011; Perozzi et al., 2014; Bruna et al., 2013; Duvenaud et al., 2015). Of particular relevance to this work are Graph Neural Networks (Gori et al., 2005; Scarselli et al., 2009), or GNNs, which extend recursive neural networks by assigning states to each node in a graph based on the states of adjacent nodes. Recently Li et al. (2016) have modified GNNs to use gated state updates and to produce output sequences. The resulting networks, called GG-NNs and GGS-NNs, are successful at solving a variety of tasks with graph-structured input.

The current work further builds upon GG-NNs and GGS-NNs by allowing graph-structured intermediate representations, as well as graph-structured outputs. This is accomplished using a more flexible graph definition, along with a set of graph transformations which take a graph and other information as input and produce a modified version of the graph. This work also introduces the Gated Graph Transformer Neural Network model (GGT-NN), which combines these transformations with a recurrent input model to incrementally construct a graph given natural language input, and can either produce a final graph representing its current state, or use the graph to produce natural language output.

Extending GG-NNs in this way opens up a wide variety of applications. Since many types of data can be naturally expressed as a graph, it is possible to train a GGT-NN model to manipulate a meaningful graphical internal state. In this paper I demonstrate the GGT-NN model on the bAbI task dataset, which contains a set of stories about the state of the world. By encoding this state as a graph and providing these graphs to the model at training time, a GGT-NN model can be trained to construct the correct graph from the input sentences and then answer questions based on this
internal graph. I also demonstrate that this architecture can learn complex update rules by training it to model a simple 1D cellular automaton and arbitrary 4-state Turing machines. This requires the network to learn how to transform its internal state based on the rules of each task.
"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "[Figure: the node and edge matrices for the example graph. Panels: Nodes (columns Annotation, Strength, State) and Connectivity (Source vs. Destination, nodes 1-7).]

Figure 1: Diagram of the differentiable encoding of a graphical structure, as described in section 3. On the left, the desired graph we wish to represent, in which there are 6 node types (shown as blue, purple, red, orange, green, and yellow) and two edge types (shown as blue/solid and red/dashed). Node 3 and the edge between nodes 6 and 7 have a low strength. On the right, depictions of the node and edge matrices: annotations, strengths, states, and connectivity correspond to x_v, s_v, h_v, and C, respectively. Saturation represents the value in each cell, where white represents 0, and fully saturated represents 1. Note that each node's annotation only has a single nonzero entry, corresponding to each node having a single well-defined type, with the exception of node 3, which has an annotation that does not correspond to a single type. State vectors are shaded arbitrarily to indicate that they can store network-determined data. The edge connectivity matrix C is three-dimensional, indicated by stacking the blue-edge cell on top of the red-edge cell for a given source-destination pair. Also notice the low strength for cell 3 in the strength vector and for the edge between node 6 and node 7 in the connectivity matrix.
"}, {"section_index": "2", "section_name": "2 BACKGROUND", "section_text": "Gated Recurrent Units (GRU) are a type of recurrent network cell introduced by Cho et al. (2014). Each unit uses a reset gate r and an update gate z, and updates according to

r^(t) = σ(W_r x^(t) + U_r h^(t-1) + b_r)
z^(t) = σ(W_z x^(t) + U_z h^(t-1) + b_z)                  (1)
h̃^(t) = φ(W x^(t) + U (r^(t) ⊙ h^(t-1)) + b)
h^(t) = z^(t) ⊙ h^(t-1) + (1 - z^(t)) ⊙ h̃^(t)

where σ is the logistic sigmoid function, φ is an activation function (here tanh is used), x^(t) is the input vector at timestep t, h^(t) is the hidden output vector at timestep t, and W, U, W_r, U_r, W_z, U_z, b, b_r and b_z are learned weights and biases. Note that ⊙ denotes elementwise multiplication.
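A minimal NumPy sketch of the update in equation (1) (the weight shapes and names here are illustrative only, not tied to any particular library):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, p):
    """One GRU update following equations (1); p holds the learned weights."""
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h_prev + p["br"])   # reset gate
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h_prev + p["bz"])   # update gate
    h_tilde = np.tanh(p["W"] @ x + p["U"] @ (r * h_prev) + p["b"])
    return z * h_prev + (1.0 - z) * h_tilde                  # gated blend

rng = np.random.default_rng(0)
D, N = 8, 5   # hidden size, input size
p = {k: rng.standard_normal(s) for k, s in {
    "Wr": (D, N), "Ur": (D, D), "br": (D,),
    "Wz": (D, N), "Uz": (D, D), "bz": (D,),
    "W":  (D, N), "U":  (D, D), "b":  (D,)}.items()}
h = gru_step(rng.standard_normal(N), np.zeros(D), p)
print(h.shape)
```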
"}, {"section_index": "4", "section_name": "2.2 GG-NN AND GGS-NN", "section_text": "The Gated Graph Neural Network (GG-NN) is a form of graphical neural network model described by Li et al. (2016). In a GG-NN, a graph G = (V, E) consists of a set V of nodes v with unique values and a set E of directed edges e = (v, v') ∈ V × V oriented from v to v'. Each node has an annotation x_v ∈ R^N and a hidden state h_v ∈ R^D, and each edge has a type y_e ∈ {1, ..., M}.

GG-NNs operate by first initializing the state h_v of each node to correspond to the annotation x_v. Then, a series of propagation steps occur. In each step, information is transferred between nodes across the edges, and the types of edge determine what information is sent. Each node sums the input it receives from all adjacent nodes, and uses that to update its own internal state, in the same manner as a GRU cell. Finally, the states of all nodes are used either to create a graph-level aggregate output, or to classify each individual node.

GGS-NNs extend GG-NNs by performing a large number of propagation-output cycles. At each stage, two versions of the GG-NN propagation process are run. The first is used to predict an output for that timestep, and the second is used to update the annotations of the nodes for the next timestep. This allows GGS-NNs to predict a sequence of outputs from a single graph.

Figure 2: Summary of the graph transformations. Input and output are represented as gray squares. a) Node addition (T_add), where the input is used by a recurrent network (white box) to produce new nodes, of varying annotations and strengths. b) Node state update (T_h), where each node receives input (dashed line) and updates its internal state. c) Edge update (T_c), where each existing edge (colored) and potential edge (dashed) is added or removed according to the input and states of the adjacent nodes (depicted as solid arrows meeting at circles on each edge). d) Propagation (T_prop), where nodes exchange information along the current edges, and update their states. e) Aggregation (T_repr), where a single representation is created using an attention mechanism, by summing information from all nodes weighted by relevance (with weights shown by saturation of arrows).
"}, {"section_index": "5", "section_name": "3 DIFFERENTIABLE GRAPH TRANSFORMATIONS", "section_text": "In this section, I describe some modifications to the graph structure to make it fully differentiable, and then propose a set of transformations which can be applied to a graph structure in order to transform it. In particular, I redefine a graph G = (V, C) ∈ Γ as a set V of nodes v, and a connectivity matrix C ∈ R^{|V| × |V| × Y}, where Y is the number of possible edge types. As before, each node has an annotation x_v ∈ R^N and a hidden state h_v ∈ R^D. However, there is an additional constraint that the elements of x_v lie in [0, 1], where x_{v,n} represents the level of belief that node v has type n, out of N possible node types. Each node also has a strength s_v ∈ [0, 1]. This represents the level of belief that node v should exist, where s_v = 1 means the node exists, and s_v = 0 indicates that the node should not exist and thus should be ignored.

Similarly, elements of C are constrained to the range [0, 1], and thus one can interpret C_{v,v',y} as the level of belief that there should be a directed edge of type y from v to v'. (Note that it is possible for there to be edges of multiple types between the same two nodes v and v', i.e. it is possible for C_{v,v',y} = C_{v,v',y'} = 1 where y ≠ y'.) Figure 1 shows the values of x_v, s_v, h_v, and C corresponding to a particular graphical structure.

There are five classes of graph transformation:

a) Node addition (T_add), which modifies a graph by adding new nodes and assigning them annotations x_v and strengths s_v based on an input vector.
b) Node state update (T_h), which modifies the internal state of each node using an input vector (similar to a GRU update step). Optionally, different input can be given to nodes of each type, based on direct textual references to specific node types. This version is called a direct reference update (T_h,direct).
c) Edge update (T_c), which modifies the edges between each pair of nodes based on the internal states of the two nodes and an external input vector.
d) Propagation (T_prop), which allows nodes to trade information across the existing edges and then update their internal states based on the information received.
e) Aggregation (T_repr), which uses an attention mechanism to select relevant nodes and then generates a graph-level output.

Each transformation has its own trainable parameters. Together, these transformations can be combined to process a graph in complex ways. An overview of these operations is shown in Figure 2. For details about the implementation of each of these transformations, see Appendix B.
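As a concrete picture of the graph encoding defined at the start of this section, the sketch below lays out the same quantities as plain NumPy arrays (dimensions follow Figure 1; the class and field names are mine):

```python
import numpy as np

N_TYPES, N_EDGE_TYPES, MAX_NODES, STATE_DIM = 6, 2, 7, 4

class GraphState:
    """Differentiable graph: per-node annotation, strength, and state,
    plus a soft connectivity tensor, as described in Section 3."""
    def __init__(self):
        self.x = np.zeros((MAX_NODES, N_TYPES))      # annotations x_v
        self.s = np.zeros(MAX_NODES)                 # strengths s_v in [0, 1]
        self.h = np.zeros((MAX_NODES, STATE_DIM))    # hidden states h_v
        # C[v, v', y]: belief in an edge of type y from v to v'.
        self.C = np.zeros((MAX_NODES, MAX_NODES, N_EDGE_TYPES))

g = GraphState()
g.s[0] = 1.0                  # node 0 fully exists...
g.x[0, 2] = 1.0               # ...and has type 2
g.s[1], g.x[1, 3] = 0.5, 1.0  # node 1 exists with strength 0.5
g.C[0, 1, 0] = 1.0            # a type-0 edge from node 0 to node 1
print(g.C.shape)
```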
"}, {"section_index": "6", "section_name": "4 GATED GRAPH TRANSFORMER NEURAL NETWORK (GGT-NN)", "section_text": "In this section I introduce the Gated Graph Transformer Neural Network (GGT-NN), which is constructed by combining a series of these transformations. Depending on the configuration of the transformations, a GGT-NN can take textual or graph-structured input, and produce textual or graph-structured output. Here I describe one particular GGT-NN configuration, designed to build and modify a graph based on a sequence of input sentences, and then produce an answer to a query.

When run, the model performs the following: For each sentence k, each word is converted to a one-hot vector w_l^(k), and the sequence of words (of length L) is passed through a GRU layer to produce a sequence of partial-sentence representation vectors p_l^(k). The full sentence representation vector i^(k) is initialized to the last partial representation vector p_L^(k). Furthermore, a direct-reference input matrix D^(k) is set so that its n-th row D_n^(k) is the sum of the partial representation vectors corresponding to the words that directly refer to node type n. This acts like an attention mechanism, by accumulating the partial representation vectors for the words that directly reference each type, and masking out the vectors corresponding to other words.

Algorithm 1 Graph Transformation Pseudocode
 1: G ← ∅
 2: for k from 1 to K do
 3:   G ← T_h(G, i^(k))
 4:   if direct reference enabled then
 5:     G ← T_h,direct(G, D^(k))
 6:   end if
 7:   if intermediate propagation enabled then
 8:     G ← T_prop(G)
 9:   end if
10:   h_add ← T_repr(G)
11:   G ← T_add(G, [i^(k) h_add])
12:   G ← T_c(G, i^(k))
13: end for
14: G ← T_h^query(G, i^query)
15: if direct reference enabled then
16:   G ← T_h,direct^query(G, D^query)
17: end if
18: G ← T_prop^query(G)
19: h_answer ← T_repr^query(G)
20: return f_output(h_answer)

Next, a series of graph transformations are applied, as depicted in Algorithm 1. Depending on the task, direct reference updates and per-sentence propagation can be enabled or disabled. The output function f_output will depend on the specific type of answer desired. If the answer is a single word, f_output can be a multilayer perceptron followed by a softmax operation. If the answer is a sequence of words, f_output can use a recurrent network (such as a GRU) to produce a sequence of outputs. The query-processing steps (lines 14-19 of Algorithm 1) use transformations with different learned weights.

Since the processing of the input and all of the graph transformations are differentiable, at this point the network output can be compared with the correct output for that query and used to update the network parameters, including both the GRU parameters used when processing the input and the internal weights associated with each transformation.
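Read as code, the loop of Algorithm 1 might look like the following sketch, where T and Tq are stand-ins for the learned transformations (Tq holding the separately parameterized query versions); the dictionary interface is my own framing, not the paper's implementation:

```python
def ggtnn_forward(g, sentences, query, T, Tq, f_output,
                  direct_reference=True, intermediate_propagation=False):
    for i_k, D_k in sentences:                     # lines 2-13 of Algorithm 1
        g = T["h"](g, i_k)                         # node state update
        if direct_reference:
            g = T["h_direct"](g, D_k)              # direct-reference update
        if intermediate_propagation:
            g = T["prop"](g)                       # per-sentence propagation
        h_add = T["repr"](g)                       # aggregate before adding nodes
        g = T["add"](g, (i_k, h_add))              # node addition
        g = T["c"](g, i_k)                         # edge update
    i_q, D_q = query                               # lines 14-19
    g = Tq["h"](g, i_q)
    if direct_reference:
        g = Tq["h_direct"](g, D_q)
    g = Tq["prop"](g)
    return f_output(Tq["repr"](g))                 # line 20

# Smoke test with trivial stand-in transformations.
identity = {k: (lambda g, *a: g) for k in ["h", "h_direct", "prop", "c", "add"]}
identity["repr"] = lambda g: g
out = ggtnn_forward("G0", [("i1", "D1")], ("iq", "Dq"),
                    identity, dict(identity), lambda h: h)
print(out)  # -> "G0"
```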
"}, {"section_index": "7", "section_name": "4.1 SUPERVISION", "section_text": "As with many supervised models, one can evaluate the loss based on the likelihood of producing an incorrect answer, and then minimize the loss by backpropagation. However, based on initial experiments, the model appeared to require additional supervision to extract meaningful graph-structured data. To provide this additional supervision, I found it beneficial to provide the correct graph at each timestep and train the network to produce that graph. This occurs in two stages, first when new nodes are proposed, and then when edges are adjusted. For the edge adjustment, the edge loss between a correct edge matrix C* and the computed edge matrix C is given by

L_edge = -C* ln(C) - (1 - C*) ln(1 - C)

The node adjustment is slightly more complex. Multiple nodes are added in each timestep, but the order of those nodes is arbitrary, and only their existence is important. Thus it should be possible for the network to determine the optimal ordering of the nodes. In fact, this is important because there is no guarantee that the nodes will be ordered consistently in the training data.

Vinyals et al. (2016) demonstrate a simple method for training a network to output unordered sets: the network produces a sequence of outputs, and these outputs are compared with the closest ordering of the training data, i.e., the ordering of the training data which would produce the smallest loss when compared with the network output. Vinyals et al. show that when using this method, the network arbitrarily chooses an ordering which may not be the optimal ordering for the task. However, in this case any ordering should be sufficient, and I found the arbitrary orderings selected in this way to work well in practice. In particular, letting s*(π)(v) and x*(π)(v) denote the correct strength and annotations of node v under ordering π, the loss becomes

L_node = max_π Σ_{v=|V_old|+1}^{|V_new|} [ s*(π)(v) ln(s_v) + (1 - s*(π)(v)) ln(1 - s_v) + x*(π)(v) · ln(x_v) ]
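A NumPy sketch of both losses (my own rendering of the two equations above; the brute-force search over orderings is only feasible because few nodes are added per timestep):

```python
import itertools
import numpy as np

def edge_loss(C_true, C, eps=1e-8):
    """Binary cross-entropy over the soft connectivity tensor."""
    return -np.sum(C_true * np.log(C + eps)
                   + (1 - C_true) * np.log(1 - C + eps))

def node_loss(s_true, x_true, s, x, n_old, eps=1e-8):
    """Log-likelihood of the new nodes under the best ordering pi of
    the target nodes, matching the node loss equation above."""
    new = range(n_old, len(s))
    best = -np.inf
    for pi in itertools.permutations(new):
        ll = sum(s_true[p] * np.log(s[v] + eps)
                 + (1 - s_true[p]) * np.log(1 - s[v] + eps)
                 + x_true[p] @ np.log(x[v] + eps)
                 for v, p in zip(new, pi))
        best = max(best, ll)
    return best

C = np.full((3, 3, 2), 0.5)
C_true = np.zeros_like(C); C_true[0, 1, 0] = 1.0
print(edge_loss(C_true, C))
s = np.array([1.0, 0.9, 0.2]); x = np.full((3, 4), 0.25)
print(node_loss(s, x, s, x, n_old=1))
```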
"}, {"section_index": "8", "section_name": "4.2 OTHER TRANSFORMATION CONFIGURATIONS", "section_text": "The structure described in Algorithm 1 is designed for question-answering tasks. However, due to the composability of the individual graph transformations, other configurations could be used to solve other tasks that operate on structured data.

For instance, if a task consists of tracking relationships between a fixed set of objects, one could construct a version of the model that does not use the new-nodes transformation (T_add), but instead only modifies edges. If the task was to extract information from an existing graph, a structure similar to the GGS-NNs could be built by using only the propagation and aggregation transformations. If the task was to construct a graph based on textual input, the query processing steps could be omitted, and instead the final graph could be returned for processing. And if information should be gathered from a sequence of graphs instead of from a single graph, the query processing steps could be modified to run in parallel on the full sequence of graphs and extract information from each graph. This last modification is demonstrated in Appendix D.
"}, {"section_index": "9", "section_name": "5.1 BABI TASKS", "section_text": "I evaluated the GGT-NN model on the bAbI tasks, a set of simple natural-language tasks, where each task is structured as a sequence of sentences followed by a query (Weston et al., 2016). The generation procedure for the bAbI tasks includes a "Knowledge" object for each sentence, representing the current state of knowledge after that sentence. I exposed this knowledge object in graph format, and used this to train a GGT-NN in supervised mode. The knowledge object provides names for each node type, and direct reference was performed based on these names: if a word in the sentence matched a node type name, it was parsed as a direct reference to all nodes of that type. For details on this graphical format, see Appendix C.

I trained two versions of the GGT-NN model for each task: one with and one without direct reference. Tasks 3 and 5, which involve a complex temporal component, were trained with intermediate propagation, whereas all of the other tasks were not because the structure of the tasks made such complexity unnecessary. Most task models were configured to output a single word, but task 19 (pathfinding) used a GRU to output multiple words, and task 8 (listing) was configured to output a strength for each possible word to allow multiple words to be selected without having to consider ordering.
"}, {"section_index": "10", "section_name": "5.1.1 ANALYSIS AND RESULTS", "section_text": "Results are shown in Tables 1 and 2. The GGT-NN model was able to reach 95% accuracy in all but one of the tasks, and reached 100% accuracy in eleven of them (see Table 2). Additionally, for fourteen of the tasks, the model was able to reach 95% accuracy using 500 or fewer of the 1000 training examples (see Table 1).

The only task that the GGT-NN was unable to solve with 95% accuracy was task 17 (Positional Reasoning), for which the model was not able to attain a high accuracy. Task 17 has a larger number of possible entities than the other tasks: each entity consists of a color (chosen from five options) and a shape (chosen from four shapes), for a total of 20 unique entities that must be represented separately. Additionally, the stories are much shorter than those in other tasks (2 facts for each set of 8 questions). It is likely that these additional complexities caused the network performance to suffer.

Table 1: Number of training examples needed before the GGT-NN model could attain at most 5% error on each of the bAbI tasks. Experiments were run with 50, 100, 250, 500, and 1000 examples. "GGT-NN + direct ref" denotes the performance of the model with direct reference, and "GGT-NN" denotes the performance of the model without direct reference. Dashes indicate that the model was unable to reach the desired accuracy with 1000 examples.

Task                            GGT-NN + direct ref   GGT-NN
1 - Single Supporting Fact      100                   1000
2 - Two Supporting Facts        250                   -
3 - Three Supporting Facts      1000                  -
4 - Two Arg. Relations          1000                  1000
5 - Three Arg. Relations        500                   -
6 - Yes/No Questions            100                   -
7 - Counting                    250                   -
8 - Lists/Sets                  250                   1000
9 - Simple Negation             250                   -
10 - Indefinite Knowledge       1000                  -
11 - Basic Coreference          100                   1000
12 - Conjunction                500                   1000
13 - Compound Coref.            100                   1000
14 - Time Reasoning             1000                  -
15 - Basic Deduction            500                   500
16 - Basic Induction            100                   500
17 - Positional Reasoning       -                     -
18 - Size Reasoning             1000                  -
19 - Path Finding               500                   -
20 - Agent's Motivations        250                   250
Table 2: Error rates of various models on the bAbI tasks. Bold indicates at most 5% error. For descriptions of each of the tasks, see Table 1. "GGT-NN + dr" denotes the GGT-NN model with direct reference, and "GGT-NN" denotes the version without direct reference. See text for details regarding the models used for comparison. Results from LSTM and MemNN reproduced from Weston et al. (2016). Results from other existing models reproduced from Henaff et al. (2016). Columns GGT-NN + dr through D-NTM use 1,000 training examples; the remaining columns use 10,000 examples.

Task  GGT-NN+dr  GGT-NN  LSTM  MemNN  MemN2N  EntNet  NTM   D-NTM  MemN2N*  DNC   DMN+  EntNet
1     0          0.7     50.0  0      0       0.7     31.5  4.4    0        0     0     0
2     0          5.7     80.0  0      8.3     56.4    54.5  27.5   0.3      0.4   0.3   0.1
3     1.3        12.0    80.0  0      40.3    69.7    43.9  71.3   2.1      1.8   1.1   4.1
4     1.2        2.2     39.0  0      2.8     1.4     0     0      0        0     0     0
5     1.6        10.9    30.0  2.0    13.1    4.6     0.8   1.7    0.8      0.8   0.5   0.3
6     0          7.7     52.0  0      7.6     30.0    17.1  1.5    0.1      0     0     0.2
7     0          5.6     51.0  15.0   17.3    22.3    17.8  6.0    2.0      0.6   2.4   0
8     0          3.3     55.0  9.0    10.0    19.2    13.8  1.7    0.9      0.3   0     0.5
9     0          11.6    36.0  0      13.2    31.5    16.4  0.6    0.3      0.2   0     0.1
10    3.4        28.6    56.0  2.0    15.1    15.6    16.6  19.8   0        0.2   0     0.6
11    0          0.2     28.0  0      0.9     8.0     15.2  0      0        0     0     0.3
12    0.1        0.7     26.0  0      0.2     0.8     8.9   6.2    0        0     0.2   0
13    0          0.8     6.0   0      0.4     9.0     7.4   7.5    0        0     0     1.3
14    2.2        55.1    73.0  1.0    1.7     62.9    24.2  17.5   0.2      0.4   0.2   0
15    0.9        0       79.0  0      0       57.8    47.0  0      0        0     0     0
16    0          0       77.0  0      1.3     53.2    53.6  49.6   51.8     55.1  45.3  0.2
17    34.5       48.0    49.0  35.0   51.0    46.4    25.5  1.2    18.6     12.0  4.2   0.5
18    2.1        10.6    48.0  5.0    11.1    8.8     2.2   0.2    5.3      0.8   2.1   0.3
19    0          70.6    92.0  64.0   82.8    90.4    4.3   39.5   2.3      3.9   0     2.3
20    0          1.0     9.0   0      0       2.6     1.5   0      0        0     0     0

For comparison, accuracy on the bAbI tasks is also included for a variety of existing state-of-the-art approaches (see Table 2): a simple sequence-to-sequence LSTM model, as implemented in Weston et al. (2016), a modified Memory Network model (MemNN, Weston et al., 2016), End-To-End Memory Network (MemN2N, Sukhbaatar et al., 2015), Recurrent Entity Network (EntNet, Henaff et al., 2016), Neural Turing Machine (NTM, Graves et al., 2014), Dynamic NTM (D-NTM, Gulcehre et al., 2016), a larger version of the MemN2N model with weight tying and nonlinearity (MemN2N*, Sukhbaatar et al., 2015), Differentiable Neural Computer (DNC, Graves et al., 2016), and Dynamic Memory Network (DMN+, Xiong et al., 2016). Although the GGT-NN model was trained using only 1,000 training examples, results using 10,000 examples have also been reproduced here for comparison. Also, it is important to note that the GGT-NN and MemNN models were trained with strong supervision: the GGT-NN model was trained with full graph information, and the MemNN model was trained with information on which sentences were relevant to the query. All other models were trained end-to-end without additional supervision.

Since the GGT-NN and MemNN models are both strongly supervised, it is interesting to note that each approach outperforms the other on a subset of the tasks. In particular, the GGT-NN model with direct reference attains a higher level of accuracy on the following tasks, with an improvement of 0.4-64% depending on the task: task 5 (0.4%), task 7 (15%), task 8 (9%), task 17 (0.5%), task 18 (2.9%), and task 19 (64%). This may indicate that a graphical representation is superior to a list of sentence memories for solving these tasks.
On the other hand, the MemNN model outperforms the GGT-NN model (0.1-2.9% greater accuracy) on tasks 3, 4, 10, 12, 14, and 15.

Of particular interest is the performance on task 19, the pathfinding task, for which the GGT-NN model with direct reference performs better than all but one of the other models (DMN+), and shows a large improvement over the performance of the MemNN model. This is reasonable, since pathfinding is a task that is naturally suited to a graphical representation. The shortest path between two nodes can be easily found by sending information across all paths away from one of the nodes in a distributed fashion, which the GGT-NN model allows. Note that the preexisting GGS-NN model (discussed in Section 2.2) was also able to successfully learn the pathfinding task, but required the input to be preprocessed into graphical form even when evaluating the model, and thus could not be directly evaluated on the textual form of any of the bAbI tasks (Li et al., 2016). The current results demonstrate that the proposed GGT-NN model is able to solve the pathfinding task when given textual input.

Similarly, both variants of the GGT-NN model show improvement over many other models on task 16, the induction task. Solving the induction task requires being able to infer relationships based on similarities between entities. (One example from this task: Lily is a swan. Lily is white. Bernhard is green. Greg is a swan. What color is Greg? A: white.) In a graphical setting, this can be done by following a sequence of edges (Greg -> swan -> Lily -> white), and the performance of the GGT-NN model indicates that this task is particularly suited to such a representation.

In general, the GGT-NN model with direct reference performs better than the model without it. The model with direct reference reaches 95% accuracy on 19/20 of the bAbI tasks, while the model without direct reference reaches that level of accuracy on 9/20 of the tasks (see Table 2). Additionally, when compared to the direct-reference model, the model without direct reference requires more training examples in order to reach the accuracy threshold (see Table 1). This indicates that although the model can be used without direct reference, adding direct reference greatly improves the training of the model.
"}, {"section_index": "11", "section_name": "5.2 RULE DISCOVERY TASKS", "section_text": "To demonstrate the power of GGT-NN to model a wide variety of graph-based problems, I applied the GGT-NN to two additional tasks. In each task, a sequence of data structures were transformed into a graphical format, and the GGT-NN was tasked with predicting the data for the next timestep based on the current timestep. No additional information was provided as textual input; instead, the
network was tasked with learning the rules governing the evolution of the graph structure over time.
"}, {"section_index": "12", "section_name": "5.2.1 CELLULAR AUTOMATON TASK", "section_text": "The first task used was a 1-dimensional cellular automaton, specifically the binary cellular automaton known as Rule 30 (Wolfram, 2002). Rule 30 acts on an infinite set of cells, each with a binary state (either 0 or 1). At each timestep, each cell deterministically changes state based on its previous state and the states of its neighbors. In particular, the update rules are:

Current neighborhood   111   110   101   100   011   010   001   000
Next value              0     0     0     1     1     1     1     0

Cell states can be converted into graphical format by treating the cells as a linked list. Each of the cells is represented by a node with edges connecting it to the cell's neighbors, and a value edge is used to indicate whether the cell is 0 or 1. This format is described in more detail in Appendix C.
"}, {"section_index": "13", "section_name": "5.2.2 TURING MACHINES", "section_text": "The second task was simulating an arbitrary 2-symbol 4-state Turing machine. A Turing machine operates on an infinite tape of cells, each containing a symbol from a finite set of possible symbols. It has a head, which points at a particular cell and can read and write the symbol at that cell. It also has an internal state, from a finite set of states. At each timestep, based on the current state and the contents of the cell at the head, the machine writes a new symbol, changes the internal state, and can move the head left or right or leave it in place. The action of the machine depends on a finite set of rules, which specify the actions to take for each state-symbol combination. Note that the version of Turing machine used here has only 2 symbols, and requires that the initial contents of the tape be all 0 (the first symbol) except for finitely many 1s (the second symbol).

When converting a Turing machine to graphical format, the tape of the machine is modeled as a linked list of cells. Additionally, each state of the machine is denoted by a state node, and edges between these nodes encode the transition rules. There is also a head node, which connects both to the current cell and to the current state of the machine. See Appendix C for more details.
"}, {"section_index": "14", "section_name": "5.2.3 ANALYSIS AND RESULTS", "section_text": "The GGT-NN model was trained on 1000 examples of the Rule 30 automaton with different initial states, each of which simulated 7 timesteps of the automaton, and 20,000 examples of Turing machines with different rules and initial tape contents, each of which simulated 6 timesteps of the Turing machine. Performance was then evaluated on 1000 new examples generated with the same format. The models were evaluated by picking the most likely graph generated by the model, and comparing it with the correct graph. The percent accuracy denotes the fraction of the examples for which these two graphs were identical at all timesteps. In addition to evaluating the performance on identical tasks, the generalization ability of the models was also assessed. The same trained models were evaluated on versions of the task with 20 and 30 timesteps of simulation.

Results are shown in Table 3. The models successfully learned the assigned tasks, reaching high levels of accuracy for both tasks.

Table 3: Accuracy of GGT-NN on the Rule 30 Automaton and Turing Machine tasks
               Original Task    Generalization: 20    Generalization: 30

Figure 3: Visualization of network performance on the Rule 30 Automaton task. Top node (purple) represents zero, bottom node (blue) represents 1, and middle nodes (green, orange, and red) represent individual cells. Blue edges indicate adjacent cells, and gold edges indicate the value of each cell. Three timesteps occur between each row. [Panels: 1000 iterations, 2000 iterations, 3000 iterations, 7000 iterations, Ground truth.]
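For concreteness, the Rule 30 update table from Section 5.2.1 can be written directly as code (a sketch of mine; the model itself must discover this mapping from the graph sequences alone):

```python
RULE30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
          (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def rule30_step(cells):
    """One timestep, growing by one cell on each side as in the task."""
    padded = [0, 0] + list(cells) + [0, 0]
    return [RULE30[tuple(padded[i:i + 3])]
            for i in range(len(padded) - 2)]

row = [1]                      # a single 1 cell, as in Figure 3
for _ in range(4):
    print(row)
    row = rule30_step(row)
```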
Additionally, the models show the ability to generalize to large inputs, giving a perfect output in the majority of extended tasks. For visualization purposes, Figure 3 shows the model at various stages of training when evaluated starting with a single 1 cell.

Many methods have been proposed for combining neural networks with graphs. These methods generally require the input to the network to be in graphical format. For instance, GNNs and GGS-NNs take a graph as input, and propagate information between nodes according to the graph structure (Gori et al., 2005; Scarselli et al., 2009; Li et al., 2016). Similarly, graph convolutional networks extract information from an existing graph structure by using approximations to spectral graph convolutions (Kipf & Welling, 2016). These methods are similar to GGT-NNs in that they all store information in the nodes of a graph and use edges to determine how information flows. However, they all use a graph with fixed structure, and can only accept graphical data. The GGT-NN model, on the other hand, allows the graph structure to be built and modified based on unstructured input.

Giles et al. (1992) describe a method for extracting a finite state machine from a trained recurrent neural network by quantizing the hidden states of the network, recording all possible state transitions, and using them to construct a minimal directed graph representing the state machine. This method, however, requires postprocessing of the network to extract the graph, and is limited to extracting graphs that represent state machines. Additionally, although the FSM extraction method described by Giles et al. (1992) and the GGT-NN model both produce graphs using neural networks, the goals are different: the FSM extraction method aims to learn a single graph that can classify sequences, whereas the GGT-NN model aims to learn a neural network that can manipulate graphs.

The lifted relational neural network (LRNN) is another approach to working with structured data (Sourek et al., 2015). LRNNs require the input to be formatted as a combination of weighted predicate logic statements, encompassing both general rules and specific known facts. For each training example, the statements are used to construct a "ground neural network", with a connection pattern determined by the dependencies between the statements. LRNNs can learn to extract information by adjusting the weights of each statement, but require the rules to be composed by hand based on the task structure. Furthermore, unlike in GGT-NNs, a LRNN has no internal state associated with the objects it describes (which are instead represented by single neurons), and the relationships between objects cannot be constructed or modified by the network.

Multiple recent architectures have included differentiable internal states. Memory Networks, as described in Weston et al. (2014), and the fully differentiable end-to-end memory networks, described in Sukhbaatar et al. (2015), both utilize a differentiable long-term memory component, consisting of a set of memories that are produced by encoding the input sentences. To answer a query, an attention mechanism is used to select a subset of these memories, and the resulting memories are processed to produce the desired output. Differentiable Neural Computers (DNCs), described in Graves et al. (2016), interact with a fixed-size memory using a set of read and write "heads", which can be moved within the memory either by searching for particular content or by following temporal
\"links of association'' that track the order in which data was written..\nMemory networks and DNCs share with the GGT-NN model the ability to iteratively construct an internal state based on textual input, and use that internal state to answer questions about the underlying structured data. However, in these models, the structure of the internal state is implicit: although the network can store and work with structured data, the actual memory consists of a set of vectors that cannot be easily interpreted, except by monitoring the network access patterns. The GGT-NN model, on the other hand, explicitly models the internal state as a graph with labeled\nnodes and edges. This allows the produced graph to be extracted, visualized, and potentially used i downstream applications that require graph-structured input.\nHierarchical Attentive Memory (HAM) is a memory-based architecture that consists of a binary tre. built on top of an input sequence (Andrychowicz & Kurach]2016). A recurrent controller accesses. the HAM module by performing a top-down search through the tree, at each stage choosing tc. attend to either the left or right subtrees. Once this process reaches a leaf, the value of the leaf is provided to the controller to use in predicting the next output, and this leaf's value can be updatec. with a new value. This architecture is especially suited toward sequence-based tasks, and has beer. shown to generalize to longer sequences very efficiently due to the tree structure. However, it i. unclear whether a HAM module would work well with non-sequential structured data, since the tre structure is fixed by the network.\nOne advantage of the GGT-NN model over existing works is that it can process data in a distributec. fashion. Each node independently processes its surroundings, which can be beneficial for complex. tasks such as pathfinding on a graph. This is in contrast to memory networks, DNCs, and HAM. modules, which are restricted to processing only a fixed number of locations in a given timestep. On the other hand, the distributed nature of the GGT-NN model means that it is less time and space. efficient than these other networks. Since every node can communicate with every other node, the. time and space required to run a GGT-NN step scales quadratically with the size of the input. A. DNC or memory network, on the other hand, either scales linearly (since it attends to all storec. data or memories) or is constant (if restricted to a fixed-size memory), and a HAM module scale. logarithmically (due to the tree structure)."}, {"section_index": "15", "section_name": "7 CONCLUSION", "section_text": "The GGT-NN architecture has a few advantages over the architectures described in existing works In contrast to other approaches to working with structured data, GGT-NNs are designed to work witl unstructured input, and are able to modify a graphical structure based on the input. And in contras to memory networks or DNCs, the internal state of the network is explicitly graph structured, anc complex computations can be distributed across the nodes of the graph..\nOne downside of the current model is that the time and space required to train the model increase very quickly as the complexity of the task increases, which limits the model's applicability. It would be very advantageous to develop optimizations that would allow the model to train faster and with. smaller space requirements, such as using sparse edge connections, or only processing some subset. of the nodes at each timestep. 
Another promising direction of future work is in reducing the level of supervision needed to obtain meaningful graphs, for example by combining a few examples that have full graph-level supervision with a larger set of examples that do not have graph-level information, or using additional regularization to enable the GGT-NN model to be trained without any graph information.

The results presented here show that GGT-NNs are able to successfully model a wide variety of tasks using graph-structured states and potentially could be useful in solving many other types of problems. The specific GGT-NN model described here can be used as-is for tasks consisting of a sequence of input sentences and graphs, optionally followed by a query. In addition, due to the modular nature of GGT-NNs, it is possible to reconfigure the order of the transformations to produce a model suitable for a different task.

There are exciting potential uses for the GGT-NN model. One particularly interesting application would be using GGT-NNs to extract graph-structured information from unstructured textual descriptions. More generally, the graph transformations provided here may allow machine learning to interoperate more flexibly with other data sources and processes with structured inputs and outputs.
"}, {"section_index": "16", "section_name": "ACKNOWLEDGMENTS", "section_text": "I would like to thank Harvey Mudd College for computing resources. I would also like to thank the developers of the Theano library, which I used to run my experiments. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575.
"}, {"section_index": "17", "section_name": "REFERENCES", "section_text": "Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2224-2232, 2015.

Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969, 2016.

Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. Marginalized kernels between labeled graphs. In ICML, volume 3, pp. 321-328, 2003.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. ICLR, 2016.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, 2015.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merrienboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. ICLR, 2016.

Stephen Wolfram. A new kind of science, volume 5.
Wolfram Media, Champaign, 2002.

Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2397-2406, 2016.
"}, {"section_index": "18", "section_name": "APPENDIX A BACKGROUND ON GG-NNS AND GGS-NNS", "section_text": "This section gives additional background on the implementation of GG-NNs and GGS-NNs, described by Li et al. (2016).

Recall from section 2.2 that GG-NNs represent a graph G = (V, E) as a set V of nodes v with unique values 1, ..., |V| and a set E of directed edges e = (v, v') ∈ V × V oriented from v to v'. Each node has an annotation x_v ∈ R^N and a hidden state h_v ∈ R^D. Additionally, each edge has a type y_e ∈ {1, ..., M}.

Initially, h_v^(0) is set to the annotation x_v padded with zeros. Then nodes exchange information for some fixed number of timesteps T according to the propagation model

a_v^(t) = Σ_{v'∈V} Σ_{y=1}^{M} ( δ_edge(v, v', y) P_y + δ_edge(v', v, y) P'_y ) h_{v'}^(t-1),    (2)
r_v^(t) = σ(W_r a_v^(t) + U_r h_v^(t-1)),    z_v^(t) = σ(W_z a_v^(t) + U_z h_v^(t-1)),
h̃_v^(t) = tanh(W a_v^(t) + U (r_v^(t) ⊙ h_v^(t-1))),    h_v^(t) = z_v^(t) ⊙ h_v^(t-1) + (1 - z_v^(t)) ⊙ h̃_v^(t),

where δ_edge(v, v', y) is 1 if e = (v, v') ∈ E and y_e = y, and 0 otherwise.

Here a_v^(t) represents the information received by each node from its neighbors in the graph, and the matrix A ∈ R^{D|V| × 2D|V|} has a specific structure that determines how nodes communicate. The first half of A, denoted A^(out) ∈ R^{D|V| × D|V|}, corresponds to outgoing edges, whereas the second half A^(in) ∈ R^{D|V| × D|V|} corresponds to incoming edges.

The states of all nodes are used to create a graph-level aggregate output according to

h_G = tanh( Σ_{v∈V} σ(i([h_v, x_v])) ⊙ tanh(j(h_v)) ).    (3)

Gated Graph Sequence Neural Networks (GGS-NN) are an extension of GG-NNs to sequential output o^(1), ..., o^(K). At each output step k, the annotation matrix is given by X^(k) ∈ R^{|V| × N}. A GG-NN F_o is trained to predict an output sequence o^(k) from X^(k), and another GG-NN F_X is trained to predict X^(k+1) from X^(k). Prediction of the output at each step is performed as in a normal GG-NN, and prediction of X^(k+1) from the set of all final hidden states H^(k,T) (after T propagation steps of F_X) occurs according to the equation

X^(k+1) = σ( j_X(H^(k,T)) ),    (4)

where j_X is a neural network applied to the final hidden state of each node.

Figure 4: Diagram of the operations performed for each class of transformation. Graph state is shown in the format given by Figure 1. Input and output are shown as gray boxes. Black dots represent concatenation, and + and × represent addition and multiplication, respectively. 1 - · represents taking the input value and subtracting it from 1. Note that for simplicity, operations are only shown for single nodes or edges, although the operations act on all nodes and edges in parallel. In particular, the propagation section focuses on information sent and received by the first node only. In that section the strengths of the edges in the connectivity matrix determine what information is sent to each of the other nodes. Light gray connections indicate the value zero, corresponding to situations where a given edge is not present.
"}, {"section_index": "19", "section_name": "APPENDIX B GRAPH TRANSFORMATION DETAILS", "section_text": "In this section I describe in detail the implementations of each type of differentiable graph transformation.¹ A diagram of the implementation of each transformation is shown in Figure 4. Note that it is natural to think of these transformations as operating on a single graphical state, and each modifying the state in place.
However, in the technical descriptions of these transformations, the operations will be described as functions that take in an old graph and produce a new one, similarly to unrolling a recurrent network over time.
"}, {"section_index": "20", "section_name": "B.1 NODE ADDITION", "section_text": "The node addition transformation T_add : Γ × R^a → Γ takes as input a graph G and an input vector a ∈ R^a, and produces a graph G' with additional nodes. The annotation and strength of each new node is determined by a function f_add : R^a × R^δ → R × R^N × R^δ, where a is the length of the input vector, δ is the length of the internal state vector, and as before N is the number of node types. The new nodes are then produced according to

(s_{|V_G|+i}, x_{|V_G|+i}, h_i) = f_add(a, h_{i-1}),

starting with h_0 initialized to some learned initial state, and recurrently computing s_i and x_i for each new node, up to some maximum number of nodes. Based on initial experiments, I found that implementing f_add as a GRU layer followed by 2 hidden tanh layers was effective, although other recurrent networks would likely be similarly effective. The node hidden states h_v are initialized to zero. The recurrence should be computed as many times as the maximum number of nodes that might be produced. The recurrent function f_add can learn to output s_v = 0 for some nodes to create fewer nodes, if necessary.

Note that in order to use information from all of the existing nodes to produce the new nodes, the input to this transformation should include information provided by an aggregation transformation T_repr, described in section B.5.

¹The code for each transformation, and for the GGT-NN model itself, is available at https://github.com/hexahedria/gated-graph-transformer-network
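A minimal sketch of this recurrence (a plain tanh recurrent cell stands in here for the GRU layer, and the sigmoid/softmax output heads are my assumption for keeping s_v in [0, 1] and x_v summing to 1; all names are mine):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def add_nodes(a, p, max_new=3):
    """Sketch of T_add: a recurrent cell proposes up to `max_new` nodes,
    each with a strength s and an annotation x over node types."""
    h, out = p["h0"], []
    for _ in range(max_new):
        # Simple recurrent update standing in for the GRU layer.
        h = np.tanh(p["Wx"] @ a + p["Wh"] @ h + p["b"])
        y = np.tanh(p["V2"] @ np.tanh(p["V1"] @ h))
        s = 1.0 / (1.0 + np.exp(-y[0]))   # strength in [0, 1]
        x = softmax(y[1:])                # annotation over node types
        out.append((s, x))                # s near 0 => "no node here"
    return out

rng = np.random.default_rng(0)
D, A, N = 6, 4, 5
p = {"h0": np.zeros(D), "Wx": rng.standard_normal((D, A)),
     "Wh": rng.standard_normal((D, D)), "b": np.zeros(D),
     "V1": rng.standard_normal((D, D)),
     "V2": rng.standard_normal((N + 1, D))}
for s, x in add_nodes(rng.standard_normal(A), p):
    print(round(float(s), 3), np.round(x, 2))
```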
"}, {"section_index": "21", "section_name": "B.2 NODE STATE UPDATE", "section_text": "The node state update transformation T_h : Γ × R^a → Γ takes as input a graph G and an input vector a ∈ R^a, and produces a graph G' with updated node states. This is accomplished by performing a GRU-style update for each node, where the input is a concatenation of a and that node's annotation vector x_v, and the state is the node's hidden state, according to

r_v = σ(W_r [a, x_v] + U_r h_v + b_r),    z_v = σ(W_z [a, x_v] + U_z h_v + b_z),
h̃_v = tanh(W [a, x_v] + U (r_v ⊙ h_v) + b),    h'_v = z_v ⊙ h_v + (1 - z_v) ⊙ h̃_v.

For some tasks, performance can be improved by providing information to nodes of a particular type only. For instance, if the input is a sentence, and one word of that sentence directly refers to a node type (e.g., if nodes of type 1 represent Mary, and Mary appears in the sentence), it can be helpful to allow all nodes of type 1 to perform an update using this information. To accomplish this, T_h can be modified to take node types into account. (This modification is denoted T_h,direct.) Instead of a single vector a ∈ R^a, the direct-reference transformation takes in A ∈ R^{N × a}, where A_n ∈ R^a is the input vector for nodes with type n. The update equations then become

r_v = σ(W_r [a_v, x_v] + U_r h_v + b_r),    z_v = σ(W_z [a_v, x_v] + U_z h_v + b_z),
h̃_v = tanh(W [a_v, x_v] + U (r_v ⊙ h_v) + b),    h'_v = z_v ⊙ h_v + (1 - z_v) ⊙ h̃_v,

where a_v = A_n is the input vector for a node v of type n.
"}, {"section_index": "22", "section_name": "B.3 EDGE UPDATE", "section_text": "The edge update transformation T_c : Γ × R^a → Γ takes a graph G and an input vector a ∈ R^a, and produces a graph G' with updated edges. For each pair of nodes (v, v'), the update equations are

c_{v,v'} = f_set(a, x_v, h_v, x_{v'}, h_{v'}),
r_{v,v'} = f_reset(a, x_v, h_v, x_{v'}, h_{v'}),
C'_{v,v'} = (1 - C_{v,v'}) ⊙ c_{v,v'} + C_{v,v'} ⊙ (1 - r_{v,v'}).

The functions f_set, f_reset : R^a × R^{2N} × R^{2D} → [0, 1]^Y are implemented as neural networks. (In my experiments, I used a simple 2-layer fully connected network.) c_{v,v',y} gives the level of belief in [0, 1] that an edge from v to v' of type y should be created if it does not exist, and r_{v,v',y} gives the level of belief in [0, 1] that an edge from v to v' of type y should be removed if it does. Setting both to zero results in no change for that edge, and setting both to 1 toggles the edge state.
"}, {"section_index": "23", "section_name": "B.4 PROPAGATION", "section_text": "The propagation transformation T_prop : Γ → Γ takes a graph G = G^(0) and runs a series of T propagation steps (as in GG-NN), returning the resulting graph G' = G^(T). The GG-NN propagation step is extended to handle node and edge strengths, as well as to allow more processing to occur to the information transferred across edges. The full propagation equations for step t are

a_v^(t) = Σ_{v'∈V} s_{v'} Σ_{y=1}^{Y} ( C_{v',v,y} · f_fwd^y([x_{v'}, h_{v'}^(t-1)]) + C_{v,v',y} · f_bwd^y([x_{v'}, h_{v'}^(t-1)]) ),    (5)
r_v^(t) = σ(W_r [a_v^(t), x_v] + U_r h_v^(t-1) + b_r),    (6)
z_v^(t) = σ(W_z [a_v^(t), x_v] + U_z h_v^(t-1) + b_z),    (7)
h̃_v^(t) = tanh(W [a_v^(t), x_v] + U (r_v^(t) ⊙ h_v^(t-1)) + b),    h_v^(t) = z_v^(t) ⊙ h_v^(t-1) + (1 - z_v^(t)) ⊙ h̃_v^(t).    (8)

Equation 5 has been adjusted in the most significant manner (relative to Equation 2). In particular, s_{v'} restricts propagation so that nodes with low strength send less information to adjacent nodes. δ_edge has been replaced with C to allow edges with fractional strength, and the propagation matrices P_y, P'_y have been replaced with arbitrary functions f_fwd, f_bwd : R^N × R^D → R^δ, where δ is the length of the vector a. I used a fully connected layer to implement each function in my experiments. Equations 6, 7, and 8 have also been modified slightly to add a bias term.
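A rough NumPy rendering of equation (5) (simplified: f_fwd and f_bwd are collapsed to per-edge-type linear maps, and the GRU-style update of equations (6)-(8) is omitted; all array names are mine):

```python
import numpy as np

def propagate_once(g, P_fwd, P_bwd):
    """One propagation step: a_v sums messages over soft edges.

    g["h"]: (V, D) states, g["x"]: (V, N) annotations, g["s"]: (V,)
    strengths, g["C"]: (V, V, Y) soft connectivity;
    P_fwd/P_bwd: (Y, M, N + D) linear maps standing in for f_fwd/f_bwd.
    """
    feats = np.concatenate([g["x"], g["h"]], axis=1)       # [x_v, h_v]
    msg_f = np.einsum("ynd,vd->vyn", P_fwd, feats)         # per edge type
    msg_b = np.einsum("ynd,vd->vyn", P_bwd, feats)
    sent_f = msg_f * g["s"][:, None, None]                 # weak nodes send less
    sent_b = msg_b * g["s"][:, None, None]
    # Forward messages follow edges v -> v'; backward ones go against them.
    a = (np.einsum("vwy,vyn->wn", g["C"], sent_f)
         + np.einsum("wvy,vyn->wn", g["C"], sent_b))
    return a   # fed into a GRU-style update of g["h"], as in eqs. (6)-(8)

rng = np.random.default_rng(0)
V, N, D, Y, M = 4, 3, 5, 2, 6
g = {"h": rng.standard_normal((V, D)), "x": rng.standard_normal((V, N)),
     "s": rng.uniform(size=V), "C": rng.uniform(size=(V, V, Y))}
print(propagate_once(g, rng.standard_normal((Y, M, N + D)),
                     rng.standard_normal((Y, M, N + D))).shape)  # (4, 6)
```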
"}, {"section_index": "24", "section_name": "B.5 AGGREGATION", "section_text": "The aggregation transformation T_repr : Γ → R^δ produces a graph-level representation vector from a graph. It functions very similarly to the output representation of a GG-NN (equation 3), combining an attention mechanism with a node representation function, but is modified slightly to take into account node strengths:

h_G = tanh( Σ_{v∈V} s_v · σ(i([h_v, x_v])) ⊙ tanh(j(h_v)) ).

As in GG-NN, both i and j are neural networks, and in practice a single fully connected layer appears to be adequate for both.
"}, {"section_index": "25", "section_name": "APPENDIX C", "section_text": "The knowledge graph object used during generation of the bAbI tasks is structured as a dictionary relating entities to each other with specific relationship types. Entities are identified based on their names, and include people (John, Mary, Sandra), locations (bedroom, kitchen, garden), objects (football, apple, suitcase), animals (mouse, wolf, cat), and colors (white, yellow, green), depending on the particular task. Relationships between entities are also expressed as strings, and are directed: if John is holding the milk there is an "is_in" relationship from "milk" to "John"; if Sandra is in the bedroom there is an "is_in" relationship from "Sandra" to "bedroom"; if Lily is green there is a "has_color" relationship from "Lily" to "green", etc.

The transformation from the knowledge object to a graph is straightforward: each entity used is assigned to a new node type, and relationships between entities are represented as edges between the corresponding nodes. To avoid confusion from overloaded relationships (such as "is_in" being used to represent an object being held by a person as well as a person being in a room), relation names are given a distinct edge type depending on the usage context. For instance, when a person is carrying an object, the generic "is_in" relationship becomes an edge of type "gettable_is_in_actor".

Some of the graph representations had to be modified in order to ensure that they contained all of the necessary information. For instance, task 3 requires the network to remember where items were in the past, but the knowledge object only contained references to their current locations. In these cases, a linked list structure was added to the knowledge object to allow the history information to be represented in the graph.

In particular, each time an item changed locations, a new "record" node was added, with a "previous" edge to the previous history node and a "value" edge to the current location of the item. Each item then connected to the most recent history node using a "history-head" edge. This ensures that the history of each node is present in the graph.

1. John grabbed the milk.
2. John travelled to the bedroom.
3. Sandra took the football.
4. John went to the garden.
5. John let go of the milk.
6. Sandra let go of the football.
7. John got the football.
8. John grabbed the milk.
Where is the milk?

[Graph nodes: John, Garden, Bedroom, Football, Milk, Sandra; edges labeled gettable_is_in_location and gettable_is_in_actor.]

Figure 5: Diagram of one sample story from the bAbI dataset (Task 2), along with a graphical representation of the knowledge state after the italicized sentence.

An example of a graph produced from the bAbI tasks is given in Figure 5.

The cellular automaton task was mapped to graphical format as follows: Nodes have 5 types: zero, one, init-cell, left-cell, and right-cell. Edges have 2 types: value, and next-r. There is always exactly one "zero" node and one "one" node, and all of the cell nodes form a linked list, with a "value" edge connecting to either zero or one, and a "next-r" edge pointing to the next cell to the right (or no edge for the rightmost cell).

At the start of each training example, there are 13 timesteps with input of the form "init X" where X is 0 or 1. These timesteps indicate the first 13 initial cells. Afterward, there are 7 "simulate" inputs. At each of these timesteps, one new left-cell node is added on the left, one new right-cell node is added on the right, and then all cells update their value according to the Rule 30 update rules.

[Figure legend: Zero, One; Value edges, Neighbor edges; Initial cells, New cells (left), New cells (right).]

Figure 6: Diagram of one example from the automaton task, along with a graphical representation of the automaton state after the fourth simulate command (italicized).

An example of the graphical format for the cellular automaton task is given in Figure 6.

For the Turing machine task, nodes were assigned to 8 types: state-A, state-B, state-C, state-D, head, cell, 0, and 1. Edges have 16 types: head-cell, next-left, head-state, value, and 12 types of the form rule-R-W-D, where R is the symbol read (0 or 1), W is the symbol written (0 or 1), and D is the direction to move afterward (Left, Right, or None).

10. input symbol_0 head
11. input symbol_0
12. input symbol_0
13. input symbol_1
14. run
15. run
16. run
17. run
18. run
19. run

[Figure legend: States and rules, Current state, Head, Current cell, Cells, Zero, One.]

Figure 7: Diagram of an example from the Turing machine task, with a graphical representation of the machine state after the second run command (italicized).
State nodes are connected with rule edges, which together specify the rules governing the Turing machine. Cell nodes are connected to adjacent cells with next-left edges, and to the symbol on the tape with value edges. Finally, the head node is connected to the current state with a head-state edge, and to the current cell of the head with a head-cell edge.

In a few of the tasks, specific entities had multi-word representations. While this works for normal input, it makes it difficult to do direct reference, since direct reference is checked on an individual word level. These tasks were modified slightly so that the entities are referred to with single words (e.g. "red_square" instead of "red square").

At the start of each training example, each of the rules for the Turing machine are given, in the form "rule state-X R W state-Y D". Next, the initial state is given in the format "start state-X", and the initial contents of the tape (of length 4) are given sequentially in the format "input symbol-X", with the position for the head to start marked by "input symbol-X head". Finally, there are 6 "run" inputs, after each of which the head node updates its edges and the cell at the head updates its value according to the rules of the Turing machine. If the head leaves the left or right of the tape, a new node is introduced there.

An example of the graphical format for the Turing machine task is given in Figure 7.
"}, {"section_index": "26", "section_name": "APPENDIX D GRAPH SEQUENCE INPUT", "section_text": "The model described in Section 4 conditions the output of the model on the final graph produced by the network. This is ideal when the graph represents all of the necessary knowledge for solving the task. However, it may also be desirable for each graph to represent a subset of knowledge corresponding to a particular time, and for the output to be based on the sequence of graphs produced. For instance, in the third bAbI task (which requires reasoning about the temporal sequence of events), each graph could represent the state of the world at that particular time, instead of representing the full sequence of events prior to that time. In Appendix C, section C.1, I describe a transformation to the tasks which allows all information to be contained in the graph. But this adds complexity to the graphical structure. If it were possible for the model to take into account the full sequence of graphs, instead of just the final one, we could maintain the simplicity of the graph transformation.

To this end, I present an extension of the GGT-NN model that can produce output using the full graphical sequence. In the extended model, the graphical output of the network after each input sentence is saved for later use. Then, when processing the query, the same set of query transformations are applied to every intermediate graph, producing a sequence of representation vectors h_answer. These are then combined into a final summary representation vector h_summary^answer using a recurrent network such as a GRU layer, from which the output can be produced.
Task                          Direct reference   No direct reference
                              Accuracy           Accuracy
3 - Three Supporting Facts    90.3%              65.4%
5 - Three Arg. Relations      89.8%              74.2%

Table 4: Performance of the sequence-extended GGT-NN on the two bAbI tasks with a temporal component.

Algorithm 2 Sequence-Extended Pseudocode
G^0 <- empty graph
for k from 1 to K do                         > Process each sentence
    G^k <- T_h(G^{k-1}, i^(k))
    if direct reference enabled then
        G^k <- T_direct(G^k, D^(k))
    end if
    if intermediate propagation enabled then
        G^k <- T_prop(G^k)
    end if
    h_add^(k) <- T_repr(G^k)
    G^k <- T_add(G^k, [i^(k), h_add^(k)])
    G^k <- T_c(G^k, i^(k))
end for
h_summary <- 0                               > Initialize h_summary to the zero vector
for k from 1 to K do                         > Process the query for each graph
    G_q^k <- T_query(G^k, i_query)
    if direct reference enabled then
        G_q^k <- T_direct(G_q^k, D_query)
    end if
    G_q^k <- T_prop(G_q^k)
    h_answer^(k) <- T_repr(G_q^k)
    h_summary <- GRU(h_answer^(k), h_summary)
end for
return f_output(h_summary)

I evaluated the extended model on bAbI tasks 3 and 5, the two tasks which ask questions about a sequence of events. (Note that although Task 14 also involves a sequence of events, it uses a set of discrete named time periods and so is not applicable to this modification.) The model was trained on each of these tasks without the extra record and history nodes used to store the sequence, instead simply using the sequence of graphs to encode the relevant information. Due to the simpler graphs produced, intermediate propagation was also disabled.

Results from training the model are shown in Table 4. The accuracy of the extended model appears to be slightly inferior to the original model in general, although the extended direct-reference model of task 5 performs slightly better than its original counterpart. One possible explanation for the inferiority of the extended model is that the increased amount of query processing made the model more likely to overfit on the training data. Even so, the extended model shows promise, and could be advantageous for modeling complex tasks for which preprocessing the graph would be impractical."}]
S1Bb3D5gg | [{"section_index": "0", "section_name": "LEARNING END-TO-END GOAL-ORIENTED DIALOG", "section_text": "Antoine Bordes, Y-Lan Boureau & Jason Weston"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols in order to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "The most useful applications of dialog systems such as digital personal assistants or bots are currently goal-oriented and transactional: the system needs to understand a user request and complete a related task with a clear goal within a limited number of dialog turns. The workhorse of traditional dialog systems is slot-filling (Lemon et al., 2006; Wang and Lemon, 2013; Young et al., 2013), which predefines the structure of a dialog state as a set of slots to be filled during the dialog. For a restaurant reservation system, such slots can be the location, price range or type of cuisine of a restaurant. Slot-filling has proven reliable but is inherently hard to scale to new domains: it is impossible to manually encode all features and slots that users might refer to in a conversation.

End-to-end dialog systems, usually based on neural networks (Shang et al., 2015; Vinyals and Le, 2015; Sordoni et al., 2015; Serban et al., 2015a; Dodge et al., 2016), escape such limitations: all their components are directly trained on past dialogs, with no assumption on the domain or dialog state structure, thus making it easy to automatically scale up to new domains. They have shown promising performance in non goal-oriented chit-chat settings, where they were trained to predict the next utterance in social media and forum threads (Ritter et al., 2011; Wang et al., 2013; Lowe et al., 2015) or movie conversations (Banchs, 2012). But the performance achieved on chit-chat may not necessarily carry over to goal-oriented conversations. As illustrated in Figure 1, in a restaurant reservation scenario, conducting goal-oriented dialog requires skills that go beyond language modeling, e.g., asking questions to clearly define a user request, querying Knowledge Bases (KBs), interpreting results from queries to display options to users or completing a transaction. This makes it hard to ascertain how well end-to-end dialog models would do, especially since evaluating chit-chat performance in itself is not straightforward (Liu et al., 2016).
In particular, it is unclear whether end-to-end models are in a position to replace traditional dialog methods in a goal-directed setting: can end-to-end dialog models be competitive with traditional methods even in the well-defined, narrow-domain tasks where the latter excel? If not, where do they fall short?

This paper aims to make it easier to address these questions by proposing an open resource to test end-to-end dialog systems in a way that 1) favors reproducibility and comparisons, and 2) is lightweight and easy to use. We aim to break down a goal-directed objective into several subtasks to test some crucial capabilities that dialog systems should have (and hence provide error analysis by design).

In the spirit of the bAbI tasks conceived as question answering testbeds (Weston et al., 2015b), we designed a set of five tasks within the goal-oriented context of restaurant reservation. Grounded with an underlying KB of restaurants and their properties (location, type of cuisine, etc.), these tasks cover several dialog stages and test if models can learn various abilities such as performing dialog management, querying KBs, interpreting the output of such queries to continue the conversation or dealing with new entities not appearing in dialogs from the training set. In addition to showing how the set of tasks we propose can be used to test the goal-directed capabilities of an end-to-end dialog system, we also propose results on two additional datasets extracted from real interactions with users, to confirm that the pattern of results observed in our tasks is indeed a good proxy for what would be observed on real data, with the added benefit of better reproducibility and interpretability.

The goal here is explicitly not to improve the state of the art in the narrow domain of restaurant booking, but to take a narrow domain where traditional handcrafted dialog systems are known to perform well, and use that to gauge the strengths and weaknesses of current end-to-end systems with no domain knowledge. Solving our tasks requires manipulating both natural language and symbols from a KB. Evaluation uses two metrics, per-response and per-dialog accuracies, the latter tracking completion of the actual goal. Figure 1 depicts the tasks and Section 3 details them. Section 4 compares multiple methods on these tasks. As an end-to-end neural model, we tested Memory Networks (Weston et al., 2015a), an attention-based architecture that has proven competitive for non goal-oriented dialog (Dodge et al., 2016). Our experiments in Section 5 show that Memory Networks can be trained to perform non-trivial operations such as issuing API calls to KBs and manipulating entities unseen in training. We confirm our findings on real human-machine dialogs from the restaurant reservation dataset of the 2nd Dialog State Tracking Challenge, or DSTC2 (Henderson et al., 2014a), which we converted into our task format, showing that Memory Networks can outperform a dedicated slot-filling rule-based baseline. We also evaluate on a dataset of human-human dialogs extracted from an online concierge service that books restaurants for users. Overall, the per-response performance is encouraging, but the per-dialog one remains low, indicating that end-to-end models still need to improve before being able to reliably handle goal-oriented dialog.

User: Hi!
Bot:  Hello, what can I help you with today?
User: I'd like to book a table for six people in an expensive price range with British food.
Bot:  I'm on it!                                          [Task 1: Issuing API calls]
Bot:  Where should it be?
User: In London.
Bot:  Ok, let me look into some options for you.
Bot:  api_call(British, London, Six, Expensive)
User: Actually I would prefer for four.                   [Task 2: Updating API calls]
Bot:  Sure. Is there anything else to update?
User: No.
Bot:  Ok, let me look into some options for you.
Bot:  api_call(British, London, Four, Expensive)
KB:   The_Place R_phone The_Place_phone
KB:   The_Place R_cuisine british
KB:   The_Place R_address The_Place_address
KB:   The_Place R_location london
KB:   The_Place R_availability four
KB:   The_Place R_price expensive
KB:   The_Place R_rating 7
KB:   The_Fancy_Pub R_phone The_Fancy_Pub_phone
KB:   The_Fancy_Pub R_cuisine british
KB:   The_Fancy_Pub R_address The_Fancy_Pub_address
KB:   The_Fancy_Pub R_location london
KB:   The_Fancy_Pub R_availability four
KB:   The_Fancy_Pub R_price expensive
KB:   The_Fancy_Pub R_rating 8
Bot:  What do you think of this option: The_Fancy_Pub     [Task 3: Displaying options]
User: No, I don't like that.
Bot:  Sure, let me find another option for you.
Bot:  What do you think of this option: The_Place
User: Let's do it!
Bot:  Great, let me do the reservation.
User: Can you provide me the address?                     [Task 4: Providing extra information]
Bot:  Here it is: The_Place_address.
User: You rock!
Bot:  Is there anything else I can help you with?
User: No thanks.
Bot:  You're welcome.                                     [Task 5: Conducting full dialogs]

Figure 1: Goal-oriented dialog tasks. A user (in green) chats with a bot (in blue) to book a table at a restaurant. Models must predict bot utterances and API calls (in dark red). Task 1 tests the capacity of interpreting a request and asking the right questions to issue an API call. Task 2 checks the ability to modify an API call. Tasks 3 and 4 test the capacity of using outputs from an API call (in light red) to propose options (sorted by rating) and to provide extra information. Task 5 combines everything.

Tasks                                 T1      T2      T3      T4      T5      T6      Concierge
DIALOGS (average statistics)
  Number of utterances                12      17      43      15      55      54      8
  - user utterances                   5       7       7       4       13      6       4
  - bot utterances                    7       10      10      4       18      8       4
  - outputs from API calls            0       0       23      7       24      40      0
DATASETS                              Tasks 1-5 (share the same data source)  T6      Concierge
  Vocabulary size                     3,747                                   1,229   8,629
  Candidate set size                  4,212                                   2,406   11,482
  Training dialogs                    1,000                                   1,618   3,249
  Validation dialogs                  1,000                                   500     403
  Test dialogs                        1,000 (*)                               1,117   402

Table 1: Data used in this paper. Tasks 1-5 were generated using our simulator and share the same KB. Task 6 was converted from the 2nd Dialog State Tracking Challenge (Henderson et al., 2014a). Concierge is made of chats extracted from a real online concierge service. (*) Tasks 1-5 have two test sets, one using the vocabulary of the training set and the other using out-of-vocabulary words.

The most successful goal-oriented dialog systems model conversation as partially observable Markov decision processes (POMDP) (Young et al., 2013). However, despite recent efforts to learn modules (Henderson et al., 2014b), they still require many hand-crafted features for the state and action space representations, which restrict their usage to narrow domains. Our simulation, used to generate goal-oriented datasets, can be seen as an equivalent of the user simulators used to train POMDPs (Young et al., 2013; Pietquin and Hastie, 2013), but for training end-to-end systems.

Serban et al. (2015b) list available corpora for training dialog systems. Unfortunately, no good resources exist to train and test end-to-end models in goal-oriented scenarios. Goal-oriented datasets are usually designed to train or test dialog state tracker components (Henderson et al., 2014a) and are hence of limited scale and not suitable for end-to-end learning (annotated at the state level and noisy). However, we do convert the Dialog State Tracking Challenge data into our framework. Some datasets are not open source, and require a particular license agreement or participation in a challenge (e.g., the end-to-end task of DSTC4 (Kim et al., 2016)) or are proprietary (e.g., Chen et al. (2016)).
Datasets are often based on interactions between users and existing systems (or ensembles of systems) like the DSTC datasets, SFCore (Gasic et al., 2014) or ATIS (Dahl et al., 1994). This creates noise and makes it harder to interpret the errors of a model. Lastly, resources designed to connect dialog systems to users, in particular in the context of reinforcement learning, are usually built around a crowdsourcing setting such as Amazon Mechanical Turk, e.g., (Hixon et al., 2015; Wen et al., 2015; Su et al., 2015a;b). While this has clear advantages, it prevents reproducibility and consistent comparisons of methods in the exact same setting.

The closest resource to ours might be the set of tasks described in (Dodge et al., 2016), since some of them can be seen as goal-oriented. However, those are question answering tasks rather than dialog, i.e. the bot only responds with answers, never questions, which does not reflect full conversation.

"}, {"section_index": "3", "section_name": "GOAL-ORIENTED DIALOG TASKS", "section_text": "All our tasks involve a restaurant reservation system, where the goal is to book a table at a restaurant. The first five tasks are generated by a simulation, the last one uses real human-bot dialogs. The data for all tasks is available at http://fb.ai/babi. We also give results on a proprietary dataset extracted from an online restaurant reservation concierge service with anonymized users.

"}, {"section_index": "4", "section_name": "3.1 RESTAURANT RESERVATION SIMULATION", "section_text": "The simulation is based on an underlying KB, whose facts contain the restaurants that can be booked and their properties. Each restaurant is defined by a type of cuisine (10 choices, e.g., French, Thai), a location (10 choices, e.g., London, Tokyo), a price range (cheap, moderate or expensive) and a rating (from 1 to 8). For simplicity, we assume that each restaurant only has availability for a single party size (2, 4, 6 or 8 people). Each restaurant also has an address and a phone number listed in the KB.

The KB can be queried using API calls, which return the list of facts related to the corresponding restaurants. Each query must contain four fields: a location, a type of cuisine, a price range and a party size. It can return facts concerning one, several or no restaurant (depending on the party size).

Using the KB, conversations are generated in the format shown in Figure 1. Each example is a dialog comprising utterances from a user and a bot, as well as API calls and the resulting facts. Dialogs are generated after creating a user request by sampling an entry for each of the four required fields: e.g., the request in Figure 1 is [cuisine: British, location: London, party size: six, price range: expensive]. We use natural language patterns to create user and bot utterances. There are 43 patterns for the user and 20 for the bot (the user can use up to 4 ways to say something, while the bot always uses the same). Those patterns are combined with the KB entities to form thousands of different utterances; a sketch of this generation step follows.
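As a rough illustration of how a sampled user request turns into an API call in a simulator of this kind (a sketch with invented helper names and truncated entity lists, not the actual generator):

import random

CUISINES = ["british", "french", "indian", "italian", "spanish"]  # 10 choices in the real KB
LOCATIONS = ["london", "paris", "tokyo", "rome", "madrid"]        # 10 choices in the real KB
SIZES = ["two", "four", "six", "eight"]
PRICES = ["cheap", "moderate", "expensive"]

def sample_request():
    # A user request fixes one value for each of the four required fields.
    return {"cuisine": random.choice(CUISINES),
            "location": random.choice(LOCATIONS),
            "size": random.choice(SIZES),
            "price": random.choice(PRICES)}

def to_api_call(request):
    # The bot must emit the four slots in this fixed order once all are known.
    return "api_call({cuisine}, {location}, {size}, {price})".format(**request)

print(to_api_call(sample_request()))  # e.g. api_call(british, london, six, expensive)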
"}, {"section_index": "5", "section_name": "3.1.1 TASK DEFINITIONS", "section_text": "We now detail each task. Tasks 1 and 2 test dialog management, to see if end-to-end systems can learn to implicitly track dialog state (never given explicitly), whereas Tasks 3 and 4 check if they can learn to use KB facts in a dialog setting. Task 3 also requires learning to sort. Task 5 combines all tasks.

Task 1: Issuing API calls. A user request implicitly defines a query that can contain from 0 to 4 of the required fields (sampled uniformly; in Figure 1, it contains 3). The bot must ask questions for filling the missing fields and eventually generate the correct corresponding API call. The bot asks for information in a deterministic order, making prediction possible.

Task 2: Updating API calls. Starting by issuing an API call as in Task 1, users then ask to update their requests between 1 and 4 times (sampled uniformly). The order in which fields are updated is random. The bot must ask users if they are done with their updates and issue the updated API call.

Task 3: Displaying options. Given a user request, we query the KB using the corresponding API call and add the facts resulting from the call to the dialog history. The bot must propose options to users by listing the restaurant names sorted by their corresponding rating (from higher to lower) until users accept. For each option, users have a 25% chance of accepting. If they do, the bot must stop displaying options, otherwise propose the next one. Users always accept the option if it is the last remaining one. We only keep examples with API calls retrieving at least 3 options.

Task 4: Providing extra information. Given a user request, we sample a restaurant and start the dialog as if users had agreed to book a table there. We add all KB facts corresponding to it to the dialog. Users then ask for the phone number of the restaurant, its address or both, with proportions 25%, 25% and 50% respectively. The bot must learn to use the KB facts correctly to answer.

Task 5: Conducting full dialogs. We combine Tasks 1-4 to generate full dialogs just as in Figure 1. Unlike in Task 3, we keep examples if API calls return at least 1 option instead of 3.

"}, {"section_index": "6", "section_name": "3.1.2 DATASETS", "section_text": "We want to test how well models handle entities appearing in the KB but not in the dialog training sets. We split types of cuisine and locations in half, and create two KBs, one with all facts about restaurants within the first halves and one with the rest. This yields two KBs of 4,200 facts and 600 restaurants each (5 types of cuisine × 5 locations × 3 price ranges × 8 ratings) that only share price ranges, ratings and party sizes, but have disjoint sets of restaurants, locations, types of cuisine, phones and addresses. We use one of the KBs to generate the standard training, validation and test dialogs, and use the other KB only to generate test dialogs, termed Out-Of-Vocabulary (OOV) test sets.

For training, systems have access to the training examples and both KBs. We then evaluate on both test sets, plain and OOV. Beyond the intrinsic difficulty of each task, the challenge on the OOV test sets is for models to generalize to new entities (restaurants, locations and cuisine types) unseen in any training dialog - something natively impossible for embedding methods. Ideally, models could, for instance, leverage information coming from the entities of the same type seen during training.
We generate five datasets, one per task defined in 3.1.1. Table 1 gives their statistics. Training sets are relatively small (1,000 examples) to create realistic learning conditions. The dialogs from the training and test sets are different, never being based on the same user requests. Thus, we test if models can generalize to new combinations of fields. Dialog systems are evaluated in a ranking, not a generation, setting: at each turn of the dialog, we test whether they can predict bot utterances and API calls by selecting a candidate, not by generating it.¹ Candidates are ranked from a set of all bot utterances and API calls appearing in the training, validation and test sets (plain and OOV) for all tasks combined.

¹ Lowe et al. (2016) termed this setting Next-Utterance-Classification.

"}, {"section_index": "7", "section_name": "3.2 DIALOG STATE TRACKING CHALLENGE", "section_text": "Since our tasks rely on synthetically generated language for the user, we supplement our dataset with real human-bot dialogs. We use data from DSTC2 (Henderson et al., 2014a), which is also in the restaurant booking domain. Unlike our tasks, its user requests only require 3 fields: type of cuisine (91 choices), location (5 choices) and price range (3 choices). The dataset was originally designed for dialog state tracking, hence every dialog turn is labeled with a state (a user intent + slots) to be predicted. As our goal is to evaluate end-to-end training, we did not use that, but instead converted the data into the format of our 5 tasks and included it in the dataset as Task 6.

We used the provided speech transcriptions to create the user and bot utterances, and given the dialog states we created the API calls to the KB and their outputs, which we added to the dialogs. We also added ratings to the restaurants returned by the API calls, so that the options proposed by the bots can be consistently predicted (by using the highest rating). We did use the original test set but use a slightly different training/validation split. Our evaluation differs from the challenge (we do not predict the dialog state), so we cannot compare with the results from (Henderson et al., 2014a).

This dataset has similar statistics to our Task 5 (see Table 1) but is harder. The dialogs are noisier and the bots made mistakes due to speech recognition errors or misinterpretations, and also do not always have a deterministic behavior (the order in which they can ask for information varies).

"}, {"section_index": "8", "section_name": "3.3 ONLINE CONCIERGE SERVICE", "section_text": "Tasks 1-6 are, at least partially, artificial. This provides perfect control over their design (at least for Tasks 1-5), but no guarantee that good performance would carry over from such synthetic to more realistic conditions. To quantify this, we also evaluate the models from Section 4 on data extracted from a real online concierge service performing restaurant booking: users make requests through a text-based chat interface that are handled by human operators who can make API calls. All conversations are between native English speakers.

We collected around 4k chats to create this extra dataset, denoted Concierge. All conversations have been anonymized by (1) removing all user identifiers, (2) using the Stanford NER tagger to remove named entities (locations, timestamps, etc.), (3) running some manually defined regexes to filter out any remaining salient information (phone numbers, etc.). The dataset does not contain results from API calls, but still records when operators made use of an external service (Yelp or OpenTable) to gather information. Hence, these have to be predicted, but without any argument (unlike in Task 2).

The statistics of Concierge are given in Table 1. The dialogs are shorter than in Tasks 1-6, especially since they do not include results of API calls, but the vocabulary is more diverse and so is the candidate set; the candidate set is made of all utterances of the operator appearing in the training, validation and test sets.
Beyond the higher variability of the language used by human operators compared to bots, the dataset offers additional challenges. The set of user requests is much wider, ranging from managing restaurant reservations to asking for recommendations or specific information. Users do not always stay focused on the request. API calls are not always used (e.g., the operator might use neither Yelp nor OpenTable to find a restaurant), and facts about restaurants are not structured nor constrained as in a KB. The structure of dialogs is thus much more variable. Users and operators also make typos, spelling and grammar mistakes.

"}, {"section_index": "9", "section_name": "4 MODELS", "section_text": "To demonstrate how to use the dataset and provide baselines, we evaluate several learning methods on our goal-oriented dialog tasks: rule-based systems, classical information retrieval methods, supervised embeddings, and end-to-end Memory Networks.

"}, {"section_index": "10", "section_name": "4.1 RULE-BASED SYSTEMS", "section_text": "Our tasks T1-T5 are built with a simulator so as to be completely predictable. Thus it is possible to hand-code a rule-based system that achieves 100% on them, similar to the bAbI tasks of Weston et al. (2015b). Indeed, the point of these tasks is not to check whether a human is smart enough to be able to build a rule-based system to solve them, but to help analyze in which circumstances machine learning algorithms are smart enough to work, and where they fail.

However, the Dialog State Tracking Challenge task (T6) contains some real interactions with users. This makes rule-based systems less straightforward and not so accurate (which is where we expect machine learning to be useful). We implemented a rule-based system for this task in the following way. We initialized a dialog state using the 3 relevant slots for this task: cuisine type, location and price range. Then we analyzed the training data and wrote a series of rules that fire for triggers like word matches, positions in the dialog, entity detections or dialog state, to output particular responses, API calls and/or update a dialog state. Responses are created by combining patterns extracted from the training set with entities detected in the previous turns or stored in the dialog state. Overall we built 28 rules and extracted 21 patterns. We optimized the choice of rules and their application priority (when needed) using the validation set, reaching a validation per-response accuracy of 40.7%. We did not build a rule-based system for Concierge data as it is even less constrained.

"}, {"section_index": "11", "section_name": "4.2 CLASSICAL INFORMATION RETRIEVAL MODELS", "section_text": "TF-IDF Match. For each possible candidate response, we compute a matching score between the input and the response, and rank the responses by score. The score is the TF-IDF weighted cosine similarity between the bag-of-words of the input and the bag-of-words of the candidate response. We consider the case of the input being either only the last utterance or the entire conversation history, and choose the variant that works best on the validation set (typically the latter).
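A minimal sketch of this scoring with scikit-learn (my own illustration, not the paper's implementation; a real system would fit the IDF weights on the training utterances rather than on this toy list):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

candidates = ["what do you think of this option: The_Place",
              "here it is: The_Place_address",
              "you're welcome"]
history = "can you provide me the address"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(candidates + [history])
scores = cosine_similarity(matrix[-1], matrix[:-1])  # input vs. every candidate
best = candidates[scores.argmax()]                   # highest TF-IDF cosine wins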
Nearest Neighbor. Using the input, we find the most similar conversation in the training set, and output the response from that example. In this case we consider the input to only be the last utterance, and consider the training set as (utterance, response) pairs that we select from. We use word overlap as the scoring method. When several responses are associated with the same utterance in training, we sort them by decreasing co-occurrence frequency.

"}, {"section_index": "12", "section_name": "4.3 SUPERVISED EMBEDDING MODELS", "section_text": "A standard, often strong, baseline is to use supervised word embedding models for scoring (conversation history, response) pairs. The embedding vectors are trained directly for this goal. In contrast, word embeddings are most well-known in the context of unsupervised training on raw text, as in word2vec (Mikolov et al., 2013). Such models are trained by learning to predict the middle word given the surrounding window of words, or vice-versa. However, given training data consisting of dialogs, a much more direct and strongly performing training procedure can be used: predict the next response given the previous conversation. In this setting a candidate response y is scored against the input x: f(x, y) = (Ax)^T By, where A and B are d × V word embedding matrices, i.e. input and response are treated as summed bags-of-embeddings. We also consider the case of enforcing A = B, which sometimes works better, and optimize the choice on the validation set.

The embeddings are trained with a margin ranking loss: f(x, y) > m + f(x, ȳ), with m the size of the margin; we sample N negative candidate responses ȳ per example, and train with SGD. This approach has been previously shown to be very effective in a range of contexts (Bai et al., 2009). A sketch of this scoring and loss follows.
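A minimal PyTorch sketch of this scoring function and loss, using the notation above (my own illustration; the dimensions and margin are placeholder values):

import torch
import torch.nn as nn

V, d, m = 10000, 64, 0.1                  # vocabulary size, embedding dim, margin
A = nn.EmbeddingBag(V, d, mode="sum")     # input embeddings (summed bag-of-words)
B = nn.EmbeddingBag(V, d, mode="sum")     # response embeddings; set B = A to tie them

def score(x_ids, y_ids):
    # f(x, y) = (Ax)^T By on batches of word-id tensors of shape (batch, length)
    return (A(x_ids) * B(y_ids)).sum(dim=-1)

def margin_ranking_loss(x_ids, y_ids, y_neg_ids):
    # Hinge enforcing f(x, y) > m + f(x, y_neg) for a sampled negative candidate
    return torch.clamp(m + score(x_ids, y_neg_ids) - score(x_ids, y_ids), min=0).mean()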
Memory Networks (Weston et al., 2015a; Sukhbaatar et al., 2015) are a recent class of models that have been applied to a range of natural language processing tasks, including question answering (Weston et al., 2015b), language modeling (Sukhbaatar et al., 2015), and non-goal-oriented dialog (Dodge et al., 2016). By first writing and then iteratively reading from a memory component (using hops) that can store historical dialogs and short-term context to reason about the required response, they have been shown to perform well on those tasks and to outperform some other end-to-end architectures based on Recurrent Neural Networks. Hence, we chose them as our end-to-end model baseline.

We use the MemN2N architecture of Sukhbaatar et al. (2015), with an additional modification to leverage exact matches and types, described shortly. Apart from that addition, the main components of the model are (i) how it stores the conversation in memory, (ii) how it reads from the memory to reason about the response, and (iii) how it outputs the response. The details are given in Appendix A.

Words denoting entities have two important traits: 1) exact matches are usually more appropriate to deal with them than approximate matches, and 2) they frequently appear as OOV words (e.g., the name of a new restaurant). Both are a challenge for embedding-based methods. Firstly, embedding into a low dimensional space makes it hard to differentiate between exact word matches and matches between words with similar meaning (Bai et al., 2009). While this can be a virtue (e.g. when using synonyms), it is often a flaw when dealing with entities (e.g. failure to differentiate between phone numbers, since they have similar embeddings). Secondly, when a new word is used (e.g. the name of a new restaurant) not seen before in training, no word embedding is available, typically resulting in failure (Weston et al., 2015a).

Both problems can be alleviated with match type features. Specifically, we augment the vocabulary with 7 special words, one for each of the KB entity types (cuisine type, location, price range, party size, rating, phone number and address). For each type, the corresponding type word is added to the candidate representation if a word is found that appears 1) as a KB entity of that type, 2) in the candidate, and 3) in the input or memory. Any word that matches as a KB entity can be typed even if it has never been seen before in training dialogs. These features allow the model to learn to rely on type information, using exact word-match cues when OOV entity embeddings are not known, as long as it has access to a KB with the OOV entities. We assess the impact of such features for TF-IDF Match, Supervised Embeddings and Memory Networks.

"}, {"section_index": "13", "section_name": "5 EXPERIMENTS", "section_text": "Our main results across all the models and tasks are given in Table 2 (extra results are also given in Table 10 of Appendix D). The first 5 rows show tasks T1-T5, and rows 6-10 show the same tasks in the out-of-vocabulary setting. Rows 11 and 12 give results for the Dialog State Tracking Challenge task (T6) and Concierge respectively. Columns 2-7 give the results of each method tried, in terms of per-response accuracy and per-dialog accuracy, the latter given in parentheses. Per-response accuracy counts the percentage of responses that are correct (i.e., the correct candidate is chosen out of all possible candidates). Per-dialog accuracy counts the percentage of dialogs where every response is correct. Ultimately, if only one response is incorrect this could result in a failed dialog, i.e. failure to achieve the goal (in this case, of achieving a restaurant booking). Note that we test Memory Networks (MemNNs) with and without match type features; the results are shown in the last two columns. The hyperparameters for all models were optimized on the validation sets; values for best performing models are given in Appendix C.

The classical IR method TF-IDF Match performs the worst of all methods, and much worse than the Nearest Neighbor IR method, which is true on both the simulated tasks T1-T5 and on the real data of T6 and Concierge. Supplementing TF-IDF Match with match type features noticeably improves performance, which however still remains far behind Nearest Neighbor IR (adding bigrams to the dictionary has no effect on performance). This is in sharp contrast to other recent results on data-driven non goal-directed conversations, e.g. over dialogs on Twitter (Ritter et al., 2011) or Reddit (Dodge et al., 2016), where it was found that TF-IDF Match outperforms Nearest Neighbor, as general conversations on a given subject typically share many words. We conjecture that the goal-oriented nature of the conversation means that the conversation moves forward more quickly, sharing fewer words per (input, response) pair, e.g. consider the example in Figure 1.
Task                            Rule-based   TF-IDF Match          Nearest      Supervised   Memory Networks
                                Systems      no type    + type     Neighbor     Embeddings   no match type   + match type
T1: Issuing API calls           100 (100)    5.6 (0)    22.4 (0)   55.1 (0)     100 (100)    99.9 (99.6)     100 (100)
T2: Updating API calls          100 (100)    3.4 (0)    16.4 (0)   68.3 (0)     68.4 (0)     100 (100)       98.3 (83.9)
T3: Displaying options          100 (100)    8.0 (0)    8.0 (0)    58.8 (0)     64.9 (0)     74.9 (2.0)      74.9 (0)
T4: Providing information       100 (100)    9.5 (0)    17.8 (0)   28.6 (0)     57.2 (0)     59.5 (3.0)      100 (100)
T5: Full dialogs                100 (100)    4.6 (0)    8.1 (0)    57.1 (0)     75.4 (0)     96.1 (49.4)     93.4 (19.7)
T1 (OOV): Issuing API calls     100 (100)    5.8 (0)    22.4 (0)   44.1 (0)     60.0 (0)     72.3 (0)        96.5 (82.7)
T2 (OOV): Updating API calls    100 (100)    3.5 (0)    16.8 (0)   68.3 (0)     68.3 (0)     78.9 (0)        94.5 (48.4)
T3 (OOV): Displaying options    100 (100)    8.3 (0)    8.3 (0)    58.8 (0)     65.0 (0)     74.4 (0)        75.2 (0)
T4 (OOV): Providing information 100 (100)    9.8 (0)    17.2 (0)   28.6 (0)     57.0 (0)     57.6 (0)        100 (100)
T5 (OOV): Full dialogs          100 (100)    4.6 (0)    9.0 (0)    48.4 (0)     58.2 (0)     65.5 (0)        77.7 (0)
T6: Dialog state tracking 2     33.3 (0)     1.6 (0)    1.6 (0)    21.9 (0)     22.6 (0)     41.1 (0)        41.0 (0)
Concierge (*)                   n/a          1.1 (0.2)  n/a        13.4 (0.5)   14.6 (0.5)   16.7 (1.2)      n/a (†)

Table 2: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seen during training. Task T6 is the Dialog State Tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within 0.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parentheses. (*) For Concierge, an example is considered correctly answered if the correct response is ranked among the top 10 candidates by the bot, to accommodate the much larger range of semantically equivalent responses among candidates (see ex. in Tab. 7). (†) We did not implement MemNNs+match type on Concierge, because this method requires a KB and there is none associated with it.

Supervised embeddings outperform classical IR methods in general, indicating that learning a mapping between words (via word embeddings) is important. However, only one task (T1, Issuing API calls) is completely successful. In the other tasks, some responses are correct, as shown by the per-response accuracy; however there is no dialog where the goal is actually achieved (i.e., the mean dialog accuracy is 0). Typically the model can provide correct responses for greeting messages, asking to wait, making API calls and asking if there are any other options necessary. However, it fails to interpret the results of API calls to display options, provide information or update the calls with new information, resulting in most of its errors, even when match type features are provided.

Memory Networks (without match type features) outperform classical IR and supervised embeddings across all of the tasks. They can solve the first two tasks (issuing and updating API calls) adequately. On the other tasks, they give improved results, but do not solve them. While the per-response accuracy is improved, the per-dialog accuracy is still close to 0 on T3 and T4. On the OOV tasks performance is again improved, but this is all due to better performance on known words, as unknown words are simply not used without the match type features. As stated in Appendix C, optimal hyperparameters on several of the tasks involve 3 or 4 hops, indicating that iteratively accessing and reasoning over the conversation helps, e.g. on T3 using 1 hop gives 64.8% while 2 hops yield 74.7%. Appendix B displays illustrative examples of Memory Networks predictions on T1-4 and Concierge.

Memory Networks with match type features give two performance gains over the same models without match type features: (i) T4 (providing information) becomes solvable, because matches can be made to the results of the API call; and (ii) out-of-vocabulary results are significantly improved as well. Still, tasks T3 and T5 remain fail cases, performance drops slightly on T2 compared to not using match type features, and no relative improvement is observed on T6. Finally, note that matching words on its own is not enough, as evidenced by the poor performance of TF-IDF matching; this idea must be combined with types and the other properties of the MemNN model.

Unsurprisingly, perfectly coded rule-based systems can solve the simulated tasks T1-T5 perfectly, whereas our machine learning methods cannot. However,
it is not easy to build an effective rule-based system for a task like T6 that contains real user language: as Table 2 shows, our rule-based baseline reaches only 33.3% per-response accuracy on T6, below the 41.1% of Memory Networks.

Overall, while the methods we tried made some inroads into these tasks, there are still many challenges left unsolved. Our best models can learn to track implicit dialog states and manipulate OOV words and symbols (T1-T2) to issue API calls and progress in conversations, but they are still unable to perfectly handle interpreting knowledge about entities (from returned API calls) to present results to the user, e.g. displaying options in T3. The improvement observed on the simulated tasks, e.g. where MemNNs outperform supervised embeddings which in turn outperform IR methods, is also seen on the realistic data of T6 with similar relative gains. This is encouraging as it indicates that future work on breaking down, analysing and developing models over the simulated tasks should help in the real tasks as well. Results on Concierge confirm this observation: the pattern of relative performances of methods is the same on Concierge and on our series of tasks. This suggests that our synthetic data can indeed be used as an effective evaluation proxy.

We have introduced an open dataset and task set for evaluating end-to-end goal-oriented dialog learning methods in a systematic and controlled way. We hope this will help foster progress of end-to-end conversational agents because (i) existing measures of performance either prevent reproducibility (different Mechanical Turk jobs) or do not correlate well with human judgements (Liu et al., 2016), (ii) the breakdown in tasks will help focus research and development to improve the learning methods, and (iii) goal-oriented dialog has clear utility in real applications. We illustrated how to use the testbed using a variant of end-to-end Memory Networks, which prove an effective model on these tasks relative to other baselines, but are still lacking in some key areas.

"}, {"section_index": "14", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Martin Raison, Alex Lebrun and Laurent Landowski for their help with the Concierge data.

"}, {"section_index": "15", "section_name": "REFERENCES", "section_text": "Bai, B., Weston, J., Grangier, D., Collobert, R., Sadamasa, K., Qi, Y., Chapelle, O., and Weinberger, K. (2009). Supervised semantic indexing. In Proceedings of ACM CIKM, pages 187-196. ACM.

Banchs, R. E.
(2012). Movie-DiC: a movie dialogue corpus for research and development. In Proceedings of the 50th Annual Meeting of the ACL.

Chen, Y.-N., Hakkani-Tür, D., Tur, G., Gao, J., and Deng, L. (2016). End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding. In Proceedings of Interspeech.

Dahl, D. A., Bates, M., Brown, M., Fisher, W., Hunicke-Smith, K., Pallett, D., Pao, C., Rudnicky, A., and Shriberg, E. (1994). Expanding the scope of the ATIS task: The ATIS-3 corpus. In Proceedings of the Workshop on Human Language Technology, pages 43-48. Association for Computational Linguistics.

Dodge, J., Gane, A., Zhang, X., Bordes, A., Chopra, S., Miller, A., Szlam, A., and Weston, J. (2016). Evaluating prerequisite qualities for learning end-to-end dialog systems. In Proceedings of ICLR.

Henderson, M., Thomson, B., and Williams, J. (2014a). The second dialog state tracking challenge. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 263.

Isbell, C. L., Kearns, M., Kormann, D., Singh, S., and Stone, P. (2000). Cobot in LambdaMOO: A social statistics agent. In AAAI/IAAI, pages 36-41.

Jafarpour, S., Burges, C. J., and Ritter, A. (2010). Filter, rank, and transfer the knowledge: Learning to chat. Advances in Ranking, 10.

Lowe, R., Serban, I. V., Noseworthy, M., Charlin, L., and Pineau, J. (2016). On the evaluation of dialogue systems with next utterance classification. arXiv preprint arXiv:1605.05414.

Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv:1301.3781.

Pietquin, O. and Hastie, H. (2013). A survey on metrics for the evaluation of user simulations. The Knowledge Engineering Review, 28(01), 59-73.

Sordoni, A., Galley, M., Auli, M., Brockett, C., Ji, Y., Mitchell, M., Nie, J.-Y., Gao, J., and Dolan, B. (2015). A neural network approach to context-sensitive generation of conversational responses. In Proceedings of NAACL.

Su, P.-H., Vandyke, D., Gasic, M., Kim, D., Mrksic, N., Wen, T.-H., and Young, S. (2015a). Learning from real users: Rating dialogue success with neural networks for reinforcement learning in spoken dialogue systems. arXiv preprint arXiv:1508.03386.

Su, P.-H., Vandyke, D., Gasic, M., Mrksic, N., Wen, T.-H., and Young, S. (2015b). Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. arXiv preprint arXiv:1508.03391.

Vinyals, O. and Le, Q. (2015). A neural conversational model. arXiv preprint arXiv:1506.05869.

Wang, H., Lu, Z., Li, H., and Chen, E. (2013). A dataset for research on short-text conversations. In EMNLP.

Wang, Z. and Lemon, O. (2013). A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Proceedings of the SIGDIAL 2013 Conference.

Wen, T.-H., Gasic, M., Mrksic, N., Su, P.-H., Vandyke, D., and Young, S. (2015). Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745.

Weston, J., Chopra, S., and Bordes, A. (2015a). Memory networks. In Proceedings of ICLR.

Young, S., Gasic, M., Thomson, B., and Williams, J. D. (2013). POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5), 1160-1179.

Kim, S., D'Haro, L. F., Banchs, R. E., Williams, J. D., and Henderson, M. (2016). The fourth dialog state tracking challenge.
In Proceedings of the 7th International Workshop on Spoken Dialogue Systems (IWSDS).

Ritter, A., Cherry, C., and Dolan, W. B. (2011). Data-driven response generation in social media. In Proceedings of EMNLP.

Weston, J., Bordes, A., Chopra, S., and Mikolov, T. (2015b). Towards AI-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.

"}, {"section_index": "16", "section_name": "MEMORY NETWORKS IMPLEMENTATION", "section_text": "Storing and representing the conversation history. As the model conducts a conversation with the user, at each time step t the previous utterance (from the user) and response (from the model) are appended to the memory. Hence, at any given time there are c_1^u, ..., c_{t-1}^u user utterances and c_1^r, ..., c_{t-1}^r model responses stored (i.e. the entire conversation).¹ The aim at time t is thus to choose the next response c_t^r. We train on existing, full dialog transcripts, so at training time we know the upcoming utterance c_t^r and can use it as a training target. Following Dodge et al. (2016), we represent each utterance as a bag-of-words, and in memory it is represented as a vector using the embedding matrix A, i.e. the memory is an array with entries:

m = (AΦ(c_1^u), AΦ(c_1^r), ..., AΦ(c_{t-1}^u), AΦ(c_{t-1}^r))

where Φ(·) maps the utterance to a bag-of-words of dimension V (the vocabulary), and A is a d × V matrix, where d is the embedding dimension. We retain the last user utterance c_t^u as the "input" to be used directly in the controller. The contents of each memory slot m_i so far do not contain any information about which speaker spoke an utterance, and at what time during the conversation. We therefore encode both of those pieces of information in the mapping Φ by extending the vocabulary to contain T = 1000 extra "time features", which encode the index i into the bag-of-words, and two more features that encode whether the utterance was spoken by the user or the model.

¹ API calls are stored as bot utterances c^r, and KB facts resulting from such calls as user utterances c^u.

Attention over the memory. The last user utterance c_t^u is embedded using the same matrix A, giving q = AΦ(c_t^u), which can also be seen as the initial state of the controller. At this point the controller reads from the memory to find salient parts of the previous conversation that are relevant to producing a response. The match between q and the memories is computed by taking the inner product followed by a softmax: p_i = Softmax(q^T m_i), giving a probability vector over the memories. The vector that is returned back to the controller is then computed by o = R Σ_i p_i m_i, where R is a d × d square matrix. The controller state is then updated with q_2 = o + q. The memory can be iteratively reread to look for additional pertinent information, using the updated state of the controller q_2 instead of q, and in general using q_h on iteration h, with a fixed number of iterations N (termed N hops). Empirically we find improved performance on our tasks with up to 3 or 4 hops.

Choosing the response. The final prediction is then computed as:

â = Softmax(q_{N+1}^T WΦ(y_1), ..., q_{N+1}^T WΦ(y_C))

where there are C candidate responses in y, and W is of dimension d × V.
In our tasks the set y is a (large) se of candidate responses which includes all possible bot utterances and API calls..\nThe entire model is trained using stochastic gradient descent (SGD), minimizing a standard cross-entropy loss between a and the true label a."}, {"section_index": "17", "section_name": "B EXAMPLES OF PREDICTIONS OF A MEMORY NETWORK", "section_text": "Tables[3][4][5|and[6|display examples of predictions of the best performing Memory Network on full dialogs Task 5, (with 3 hops) on test examples of Tasks 1-4 along with the values of the attention over each memory for each hop (p as defined in Sec.[A). This model does not use match type features. Then, Table[7|displays an example of prediction of the best performing Memory Network on Concierge (with 2 hops) on a test example along with the values of the attention over each memory for each hop..\nTables|8|and 9respectively display the values of the hyperparameters of the best Supervised Embeddings anc Memory Networks selected for each task. These models were selected using the best validation validation sets"}, {"section_index": "18", "section_name": "D ADDITIONAL RESULTS", "section_text": "Table[10|provides results for additional variants of supervised embeddings, using either a dictionary that includes. all bigrams to leverage some word order information, or match type features. On some tasks, supervised embeddings perform better when the last user utterance is used as sole input, without the full dialog history (see Table 8). When no history is used, we slightly adapt match type features to only record type: a special word corresponding to type T (e.g., phone, address, etc) is appended to the representation of a candidate if the.\nAPI calls are stored as bot utterances c, and KB facts resulting from such calls as user utterances c\na = Softmax(qN+1 W(y1),...,qN+1 W(yc))\nTime Dialog History Locutor Hop #1 Hop #2 Hop #3 1 User hi .054 .016 .020 2 Bot hello what can i help you with today .040 .024 .008 3 User may i have a table in paris .099 .137 .141 4 Bot i'm on it .048 .028 .004 5 User <silence> .181 .099 .037 6 Bot any preference on a type of cuisine .056 .090 .014 7 User i love indian food .159 .188 .238 8 Bot how many people would be in your party .051 .022 .010 9 User we will be six .129 .144 .114 10 Bot which price range are looking for .039 .028 .006 11 User in a moderate price range please .128 .213 .405 12 Bot ok let me look into some options for you .016 .011 .003 User input <silence> Correct answer api_call j indian paris six moderate Predicted answer api_cal1 indian paris six moderate [Correct]\nTable 4: Task 2 (Update API call) Out of the multiple memories from the current dialog, the mode. correctly focuses on the 2 important pieces: the original API call and the utterance giving the update\nTime Locutor Dialog history Hop #1 Hop #2 Hop #3 3 User may i have a table in paris. .061 .072 .040 4 Bot i'm on it .026 .012 .001 5 User <silence> .087 .042 .012 6 Bot any preference on a type of cuisine. .026 .023 .001 7 User i love indian food. .081 .070 .055 8 Bot how many people would be in your party. .025 .006 .001 9 User we will be six .059 .051 .018 10 Bot which price range are looking for. .038 .043 .004 11 User in a moderate price range please. .080 .095 .096 12 Bot ok let me look into some options for you. .025 .042 .003 13 User <silence> .127 .069 .032 14 Bot api_call indian paris six moderate .062 .113 .043 15 User instead could it be with french food. 
.188 .311 .683 16 Bot sure is there anything else to update. .016 .007 .001 17 User no .028 .013 .007 18 Bot ok let me look into some options for you. .011 .006 .000 User input <silence> Correct answer api_call 1 french paris six moderate. Predicted answer api_call french paris six moderate. [Correct]\nTable 3: Task 1 (Issue API call) The model learns to direct its attention towards the 4 memories containing the information key to issue the API call. More hops help to strengthen this signal. <silence> is a special token used to indicate that the user did not speak at this turn - the model has to carry out the conversation with no additional input.\nTable 5: Task 3 (Displaying options) The model knows it has to display options but the attention is wrong: it should attend on the ratings to select the best option (with highest rating). It cannot learn that properly and match type features do not help. It is correct here by luck, the task is not solved overall (see Tab.[2). We do not show all memories in the table, only those with meaningful attention.\nTable 6: Task 4 (Providing extra-information) The model knows it must display a phone or an address but, as explained in Section[A|the embeddings mix up the information and make it hard to distinguish between different phone numbers or addresses, making answering correctly very hard. As shown in the results of Tab.2 this problem can be solved by adding match type features, that allow to emphasize entities actually appearing in the history. The attention is globally wrong here.\nTime Locutor Dialog history Hop #1 Hop #2 Hop #3 14 Bot api_call indian paris six moderate .012 .000 .000 15 User instead could it be with french food. .067 .103 .147 20 Bot api_call french paris six moderate .012 .000 .000 21 User resto_1 r_phone rest_1_phone .018 .004 .000 23 User resto_1 r_cuisine french. .029 .005 .000 24 User resto_1 r_location paris. .060 .292 .094 25 User resto_1 r_number six. .050 .298 .745 26 User resto_1 r_price moderate .060 .090 .002 27 User resto_1 r_rating 6. .016 .002 .000 30 User resto_2 r_cuisine french. .031 .007 .000 31 User resto_2 r_location paris. .040 .081 .004 32 User resto_2 r_number six. .020 .012 .000 33 User resto_2 r_price moderate. .029 .009 .000 37 User resto_3 r_cuisine french. .014 .001 .000 38 User resto_3 r_location paris. .028 .016 .001 39 User resto_3 r_number six .024 .022 .004 40 User resto_3 r_price moderate .039 .015 .001 User input <silence> Correct answer what do you think of this option: resto_. Predicted answer what do you think of this option: resto_. [Correct]\nTime Locutor Dialog history. 
Hop #1 Hop #2 Hop #3 14 Bot api_call indian paris six moderate .006 .000 .000 15 User instead could it be with french food .024 .011 .007 20 Bot api_call french paris six moderate .005 .000 .001 21 User resto_1 r_phone resto_1_phone .011 .005 .004 22 User resto_1 r_address resto_l_address .018 .004 .001 23 User resto_1 r_cuisine french .018 .003 .001 24 User resto_1 r_location paris .068 .091 .108 25 User resto_1 r_number six .086 .078 .020 26 User resto_1 r_price moderate .070 .225 369 27 User resto_1 r_rating 6 .014 .006 .008 28 User resto_2 r_phone resto_2_phone .015 .009 .006 29 User resto_2 r_address resto_2_address .014 .004 .001 31 User resto_2 r_location paris .075 .176 .193 32 User resto_2 r_number six .100 .126 .026 33 User resto_2 r_price moderate .038 .090 .167 35 User resto_3 r_phone resto_3_phone .004 .001 .001 36 User resto_3 r_address resto_3_address .005 .002 .001 37 User resto_3 r_location paris .028 .028 .026 39 User resto_3 r_number six .039 .013 .002 40 User resto_3 r_price moderate .018 .008 .013 42 Bot what do you think of this option: resto_1 .074 .001 .000 43 User let's do it. .032 .004 .001 14 Bot great let me do the reservation .003 .000 .000 User input do you have its address Correct answer here it is resto 1 address Predicted answer here it is: resto 8 address [Incorrect]\nTable 7: Concierge Data The model is also able to learn from human-human dialogs. <person>, <org>. <number> and <date> are special tokens used to anonymize the data. We report the top 5 answers predicted by the model. They are all semantically equivalent. Note that the utterances, while all produced by humans, are no perfect English (\"rservation\", \"I'll check into it\").\nTable 8: Hyperparameters of Supervised Embeddings. When Use History is True, the whole conversation history is concatenated with the latest user utterance to create the input. If False, only the latest utterance is used as input.\nTable 9: Hyperparameters of Memory Networks. The longer and more complex the dialogs are, the more hops are needed.\nTime Locutor Dialog History Hop #1 Hop #2 1 User hey concierge .189 .095 2 User could you check if i can get a rservation at <org> <date> for brunch .209 .178 3 User <number> people .197 .142 4 User <silence> .187 .167 5 Bot hi <person> unfortunately <org> is fully booked for <date> .225 .410 and there's <number> people on the waiting list. User input when's the earliest availability. Correct answer i'll check Pred. answer #1 i'm on it [Incorrect] Pred. answer #2 i'll find out [Incorrect] Pred. answer #3 i'll take a look [Incorrect] Pred. answer #4 i'll check [Correct] Pred. answer #5 i'll check into it [Incorrect]\nTask Learning Rate Margin m Embedding Dim d Negative Cand. N Use History Task 1 0.01 0.01 32 100 True Task 2 0.01 0.01 128 100 False Task 3 0.01 0.1 128 1000 False Task 4 0.001 0.1 128 1000 False Task 5 0.01 0.01 32 100 True Task 6 0.001 0.01 128 100 False Concierge 0.001 0.1 64 100 False\nTask Learning Rate Margin m Embedding Dim d Negative Cand. N Nb Hops Task 1 0.01 0.1 128 100 1 Task 2 0.01 0.1 32 100 1 Task 3 0.01 0.1 32 100 3 Task 4 0.01 0.1 128 100 2 Task 5 0.01 0.1 32 100 3 Task 6 0.01 0.1 128 100 4 Concierge 0.001 0.1 128 100 2\ncandidate contains a word that appears in the knowledge base as an entity of type T, regardless of whether the same word appeared earlier in the conversation. 
As seen on Table[10] match type features improve performance on out-of-vocabulary tasks 1 and 5, bringing it closer to that of Memory Networks without match type features but still quite lagging Memory Networks with match type features. Bigrams slightly hurt rather than help performance, except in Task 5 in the standard in-vocabulary setup (performance is lower in the OOV setup).\nTable 10: Test results across all tasks and methods. For tasks T1-T5 results are given in the standard setup and the out-of-vocabulary (OOV) setup, where words (e.g. restaurant names) may not have been seer during training. Task T6 is the Dialog state tracking 2 task with real dialogs, and only has one setup. Best performing methods (or methods within O.1% of best performing) are given in bold for the per-response accuracy metric, with the per-dialog accuracy given in parenthesis.\nSupervised Embeddings Memory Networks. Task no match type. + match type + bigrams no match type. + match type no bigram no bigram no match type. T1: Issuing API calls. 100 (100) 83.2 (0) 98.6 (92.4) 99.9 (99.6) 100 (100) T2: Updating API calls 68.4 (0) 68.4 (0) 68.3 (0) 100 (100) 98.3 (83.9) T3: Displaying options. 64.9 (0) 64.9 (0) 64.9 (0) 74.9 (2.0) 74.9 (0) T4: Providing information 57.2 (0) 57.2 (0) 57.3 (0) 59.5 (3.0) 100 (100) T5: Full dialogs. 75.4 (0) 76.2 (0) 83.4 (0) 96.1 (49.4) 93.4 (19.7) T1(OOV): Issuing API calls 60.0 (0) 67.2 (0) 58.8 (0) 72.3 (0) 96.5 (82.7) T2(OOV): Updating API calls 68.3 (0) 68.3 (0) 68.3 (0) 78.9 (0) 94.5 (48.4) T3(OOV): Displaying options 65.0 (0) 65.0 (0) 62.1 (0) 74.4 (0) 75.2 (0) T4(OOV): Providing inform 57.0 (0) 57.1 (0) 57.0 (0) 57.6 (0) 100 (100) T5(OOV): Full dialogs 58.2 (0) 64.4 (0) 50.4 (0) 65.5 (0) 77.7 (0) T6: Dialog state tracking 2 22.6 (0) 22.1 (0) 21.8 (0) 41.1 (0) 41.0 (0)"}] |
r10FA8Kxg | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Cybenko(1989) proved that a network with a large enough single hidden layer of sigmoid units car. approximate any decision boundary. Empirical work, however, suggests that it can be difficult to. train shallow nets to be as accurate as deep nets.Dauphin and Bengio (2013) trained shallow nets. on SIFT features to classify a large-scale ImageNet dataset and found that it was difficult to train. large, high-accuracy, shallow nets. A study of deep convolutional nets suggests that for vision tasks. deeper models are preferred under a parameter budget (e.g. Eigen et al.(2014); He et al.(2015) Simonyan and Zisserman(2014); Srivastava et al.[(2015)). Similarly, Seide et al.[(2011) and Geras et al.(2015) show that deeper models are more accurate than shallow models in speech acoustic modeling. More recently, Romero et al.[(2015) showed that it is possible to gain increases in accuracy. in models with few parameters by training deeper, thinner nets (FitNets) to mimic much wider nets. Cohen and Shashua(2016);Liang and Srikant|(2016) suggest that the representational efficiency of deep networks scales exponentially with depth, but it is unclear if this applies only to pathological. problems, or is encountered in practice on data sets such as TIMIT and CIFAR.\nBa and Caruana (2014), however, demonstrated that shallow nets sometimes can learn the functions learned by deep nets, even when restricted to the same number of parameters as the deep nets. They did this by first training state-of-the-art deep models, and then training shallow models to mimic the deep models. Surprisingly, and for reasons that are not well understood, the shallow models learned more accurate functions when trained to mimic the deep models than when trained on the original data used to train the deep models. In some cases shallow models trained this way were as accurate as state-of-the-art deep models. But this demonstration was made on the TIMIT speech recognition benchmark. Although their deep teacher models used a convolutional layer, convolution is less important for TIMIT than it is for other domains such as image classification.\nBa and Caruana (2014) also presented results on CIFAR-10 which showed that a shallow mode could learn functions almost as accurate as deep convolutional nets. Unfortunately, the results on CIFAR-10 are less convincing than those for TIMIT. To train accurate shallow models on CIFAR-10"}, {"section_index": "1", "section_name": "DO DEEP CONVOLUTIONAL NETS REALLY NEED TO BE DEEP AND CONVOLUTIONAL?", "section_text": "Gregor Urban1, Krzysztof J. Geras?, Samira Ebrahimi Kahou3, Ozlem Aslan4, Shengjie Wang Abdelrahman Mohamed6, Matthai Philipose6, Matt Richardson6, Rich Caruana6"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper we show that the methods Ba and Caruana used to train shallow students to mimic deep teacher models on TIMIT do not work as well on problems such as CIFAR-1O where multiple layer of convolution are required to train accurate teacher models. If the student models have a simila number of parameters as the deep teacher models, high accuracy can not be achieved without multipl layers of convolution even when the student models are trained via distillation.\nTo ensure that the shallow student models are trained as accurately as possible, we use Bayesian optimization to thoroughly explore the space of architectures and learning hyperparameters. 
Although this combination of distillation and hyperparameter optimization allows us to train the most accurate shallow models ever trained on CIFAR-10, the shallow models still are not as accurate as deep models. Our results clearly suggest that deep convolutional nets do, in fact, need to be both deep and convolutional, even when trained to mimic very accurate models via distillation (Hinton et al. 2015).

In this paper, we revisit the CIFAR-10 experiments in Ba and Caruana (2014). Unlike in that work, here we compare shallow models to state-of-the-art deep convolutional models, and restrict the number of parameters in the shallow student models to be comparable to the number of parameters in the deep convolutional teacher models. Because we anticipated that our results might be different, we follow their approach closely to eliminate the possibility that the results differ merely because of changes in methodology. Note that the goal of this paper is not to train models that are small or fast as in Bucila et al. (2006), Hinton et al. (2015), and Romero et al. (2015), but to examine if shallow models can be as accurate as deep convolutional models given the same parameter budget.

There are many steps required to train shallow student models to be as accurate as possible: train state-of-the-art deep convolutional teacher models, form an ensemble of the best deep models, collect and combine their predictions on a large transfer set, and then train carefully optimized shallow student models to mimic the teacher ensemble. For negative results to be informative, it is important that each of these steps be performed as well as possible. In this section we describe the experimental methodology in detail. Readers familiar with distillation (model compression), training deep models on CIFAR-10, data augmentation, and Bayesian hyperparameter optimization may wish to skip to the empirical results in Section 3."}, {"section_index": "3", "section_name": "2.1 MODEL COMPRESSION AND DISTILLATION", "section_text": "The key idea behind model compression is to train a compact model to approximate the function learned by another larger, more complex model. Bucila et al. (2006) showed how a single neural net of modest size could be trained to mimic a much larger ensemble. Although the small neural nets contained 1000x fewer parameters, often they were as accurate as the large ensembles they were trained to mimic.

Model compression works by passing unlabeled data through the large, accurate teacher model to collect the real-valued scores it predicts, and then training a student model to mimic these scores. Hinton et al. (2015) generalized the methods of Bucila et al. (2006) and Ba and Caruana (2014) by incorporating a parameter to control the relative importance of the soft targets provided by the teacher model relative to the hard targets in the original training data, as well as a temperature parameter that regularizes learning by pushing targets towards the uniform distribution. Hinton et al. (2015) also demonstrated that much of the knowledge passed from the teacher to the student is conveyed as dark
knowledge contained in the relative scores (probabilities) of outputs corresponding to other classes, as opposed to the scores given to just the output for the one correct class.

Surprisingly, distillation often allows smaller and/or shallower models to be trained that are nearly as accurate as the larger, deeper models they are trained to mimic, yet these same small models are not as accurate when trained on the 1-hot hard targets in the original training set. The reason for this is not yet well understood. Similar compression and distillation methods have also successfully been used in speech recognition (e.g. Chan et al. (2015); Geras et al. (2015); Li et al. (2014)) and reinforcement learning (Parisotto et al. (2016); Rusu et al. (2016)). Romero et al. (2015) showed that distillation methods can be used to train small students that are more accurate than the teacher models by making the student models deeper, but thinner, than the teacher model.

We train shallow mimic nets using data labeled by an ensemble of deep teacher nets trained on the original 1-hot CIFAR-10 training data. The deep teacher models are trained in the usual way using softmax outputs and a cross-entropy cost function. Following Ba and Caruana (2014), the student mimic models are not trained with cross-entropy on the ten p values, where p_k = e^{z_k} / Σ_j e^{z_j} is output by the softmax layer of the deep teacher model, but instead are trained on the un-normalized log probability values z (the logits) before the softmax activation. Training on the logarithms of the predicted probabilities (the logits) helps provide the dark knowledge that regularizes students by placing emphasis on the relationships learned by the teacher model across all of the outputs.

As in Ba and Caruana (2014), the student is trained as a regression problem given training data {(x^(1), z^(1)), ..., (x^(T), z^(T))}:

L(W) = 1/(2T) Σ_t ‖g(x^(t); W) − z^(t)‖²_2,

where W represents all of the weights in the network, and g(x^(t); W) is the model prediction on the t-th training data sample."}, {"section_index": "4", "section_name": "2.3 USING A LINEAR BOTTLENECK TO SPEED UP TRAINING", "section_text": "A shallow net has to have more hidden units in each layer to match the number of parameters in a deep net. Ba and Caruana (2014) found that training these wide, shallow mimic models with backpropagation was slow, and introduced a linear bottleneck layer between the input and non-linear layers to speed learning. The bottleneck layer speeds learning by reducing the number of parameters that must be learned, but does not make the model deeper because the linear terms can be absorbed back into the non-linear weight matrix after learning. See Ba and Caruana (2014) for details. To match their experiments we use linear bottlenecks when training student models with 0 or 1 convolutional layers, but did not find the linear bottlenecks necessary when training student models with more than 1 convolutional layer."}, {"section_index": "5", "section_name": "2.4 BAYESIAN HYPERPARAMETER OPTIMIZATION", "section_text": "The goal of this work is to determine empirically if shallow nets can be trained to be as accurate as
deep convolutional models using a similar number of parameters in the deep and shallow models. If we succeed in training a shallow model to be as accurate as a deep convolutional model, this provides an existence proof that shallow models can represent and learn the complex functions learned by deep convolutional models. If, however, we are unable to train shallow models to be as accurate as deep convolutional nets, we might fail only because we did not train the shallow nets well enough.

In all our experiments we employ Bayesian hyperparameter optimization using Gaussian process regression to ensure that we thoroughly and objectively explore the hyperparameters that govern learning. The implementation we use is Spearmint (Snoek et al. 2012). The hyperparameters we optimize with Bayesian optimization include the initial learning rate, momentum, scaling of the initial random weights, scaling of the inputs, and terms that determine the width of each of the network's layers (i.e. number of convolutional filters and neurons). More details of the hyperparameter optimization can be found in Sections 2.5, 2.7, 2.8 and in the Appendix."}, {"section_index": "6", "section_name": "2.5 TRAINING DATA AND DATA AUGMENTATION", "section_text": "The CIFAR-10 (Krizhevsky 2009) data set consists of a set of natural images from 10 different object classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. The dataset is a labeled subset of the 80 million tiny images dataset (Torralba et al. 2008) and is divided into 50,000 train and 10,000 test images. Each image is 32x32 pixels in 3 color channels, yielding input vectors with 3072 dimensions. We prepared the data by subtracting the mean and dividing by the standard deviation of each image vector. We train all models on a subset of 40,000 images and use the remaining 10,000 images as the validation set for the Bayesian optimization. The final trained models only used 80% of the theoretically available training data (as opposed to retraining on all of the data after hyperparameter optimization).

We employ the HSV data augmentation technique as described by Snoek et al. (2015). Thus we shift hue, saturation and value by uniform random amounts: Δh ~ U(−D_h, D_h), Δs ~ U(−D_s, D_s), Δv ~ U(−D_v, D_v). Saturation and value are additionally scaled globally by random factors a_s and a_v whose ranges are controlled by two further constants A_s and A_v; the five constants D_h, D_s, D_v, A_s, A_v enter as additional hyperparameters in the Bayesian hyperparameter optimization.

All training images are mirrored left-right randomly with a probability of 0.5. The input images are further scaled and jittered randomly by cropping windows of size 24x24 up to 32x32 at random locations and then scaling them back to 32x32. The procedure is as follows: we sample an integer value S ~ U(24, 32) and then a pair of integers x, y ~ U(0, 32 − S). The transformed resulting image is R = f_spline,3(I[x : x + S, y : y + S]), with I denoting the original image and f_spline,3 denoting the 3rd-order spline interpolation function that maps the 2D array back to 32x32 (applied to the three color channels separately).

Because augmentation allows us to generate large training sets from the original 50,000 images, we use augmented data as the transfer set for model compression. No extra unlabeled data is required.

All data augmentations for the teacher models are computed on the fly using different random seeds. For student models trained to mimic the ensemble (see Section 2.7 for details of the ensemble teacher model), we pre-generated 160 epochs worth of randomly augmented training data, evaluated the ensemble's predictions (logits) on these samples, and saved all data and predictions to disk. All student models thus see the same training data in the same order. The parameters for HSV augmentation in
this case had to be selected beforehand; we chose to use the settings found with the best single mode. (Dn = 0.06, Ds = 0.26, D, = 0.20, As = 0.21, A, = 0.13). Pre-saving the logits and augmentec data is important to reduce the computational cost at training time, and to ensure that all studen models see the same training data"}, {"section_index": "7", "section_name": "2.6 LEARNING-RATE SCHEDULE", "section_text": "We train all models using SGD with Nesterov momentum. The initial learning rate and momentum. are chosen by Bayesian optimization. The learning rate is reduced according to the evolution of the model's validation error: it is halved if the validation error does not drop for ten epochs in a row. It is. not reduced within the next eight epochs following a reduction step. Training ends if the error did not. drop for 30 epochs in a row or if the learning rate was reduced by a factor of more than 2000 in total.\nOne limitation of the CIFAR-10 experiments performed in Ba and Caruana (2014) is that the teacher models were not state-of-the-art. The best deep models they trained on CIFAR-10 had only 88% accuracy, and the ensemble of deep models they used as a teacher had only 89% accuracy. The accuracies were not state-of-the-art because they did not use augmentation and because their deepest models had only three convolutional layers. Because our goal is to determine if shallow models can be as accurate as deep convolutional models, it is important that the deep models we compare to (and. use as teachers) are as accurate as possible..\nWe train deep neural networks with eight convolutional layers, three intermittent max-pooling layer and two fully-connected hidden layers. We include the size of these layers in the hyperparamete optimization, by allowing the first two convolutional layers to contain from 32 to 96 filters each, the next two layers to contain from 64 to 192 filters, and the last four convolutional layers to contair\nBecause augmentation allows us to generate large training sets from the original 50,o00 images, we use augmented data as the transfer set for model compression. No extra unlabeled data is required\nThis schedule provides a way to train the highly varying models in a fair manner (it is not feasible to optimize all of the parameters that define the learning schedule). It also decreases the time spent to train each model compared to using a hand-selected overestimate of the number of epochs to train thus allowing us to train more models in the hyperparameter search..\nfrom 128 to 384 filters. The two fully-connected hidden layers can contain from 512 to 1536 neurons We parametrize these model-sizes by four scalars (the layers are grouped as 2-2-4) and include the scalars in the hyperparameter optimization. All models are trained using Theano (Bastien et al.]2012 Bergstra et al.|2010).\nVe optimize eighteen hyperparameters overall: initial learning rate on [0.01, 0.05], momentum o 0.80, 0.91], l2 weight decay on [5 : 10-5,4 : 10-4], initialization coefficient on [0.8, 1.35] whic cales the initial weights of the CNN, four separate dropout rates, five constants controlling th HSV data augmentation, and the four scaling constants controlling the networks' layer widths. Th earning rate and momentum are optimized on a log-scale (as opposed to linear scale) by optimizin he exponent with appropriate bounds, e.g. LR = e-x optimized over x on [3.0, 4.6]. See th Appendix for more details about hyperparameter optimization.\nWe trained 129 deep CNN models with Spearmint. 
The best model obtained an accuracy of 92.78%. the fifth best achieved 92.67%. See Table1Ifor the sizes and architectures of the three best models\nWe are able to construct a more accurate model on CIFAR-10 by forming an ensemble of multipl. deep convolutional neural nets, each trained with different hyperparameters, and each seeing slightl. different training data (as the augmentation parameters vary). We experimented with a number o. ensembles of the many deep convnets we trained, using accuracy on the validation set to select th best combination. The final ensemble contained 16 deep convnets and had an accuracy of 94.0% or. the validation set, and 93.8% on the final test set. We believe this is among the top published result for deep learning on CIFAR-10. The ensemble averages the logits predicted by each model before. the softmax layers.\nWe used this very accurate ensemble model as the teacher model to label the data used to train the shallower student nets. As described in Section|2.2] the logits (the scores just prior to the final softmax layer) from each of the CNN teachers in the ensemble model are averaged for each class, and the average logits are used as final regression targets to train the shallower student neural nets.\n2.8 TRAINING SHALLOW STUDENT MODELS TO MIMIC AN ENSEMBLE OF DEEP CONVOLUTIONAL MODELS\nWe trained student mimic nets with 1, 3.1d' 10 and 31.6 million trainable parameters on the. pre-computed augmented training data (Section|2.5) that was re-labeled by the teacher ensemble. (Section|2.7). For each of the four student sizes we trained shallow fully-connected student MLPs containing 1, 2, 3, 4, or 5 layers of non-linear units (ReLU), and student CNNs with 1, 2, 3 or 4 convolutional layers. The convolutional student models also contain one fully-connected ReLU layer Models with zero or only one convolutional layer contain an additional linear bottleneck layer to speed up learning (cf. Section 2.3). We did not need to use a bottleneck to speed up learning for the deeper models as the number of learnable parameters is naturally reduced by the max-pooling layers\nThe student CNNs use max-pooling and Bayesian optimization controls the number of convolutiona filters and hidden units in each layer. The hyperparameters we optimized in the student models are initial learning rate, momentum, scaling of the initially randomly distributed learnable parameters scaling of all pixel values of the input, and the scale factors that control the width of all hidder and convolutional layers in the model. Weights are initialized as in|Glorot and Bengio(2010). We intentionally do not optimize and do not make use of weight decay and dropout when training studen models because preliminary experiments showed that these consistently reduced the accuracy o1 student models by several percent. Please refer to the Appendix for more details on the individua architectures and hyperparameter ranges.\nTable[1summarizes results after Bayesian hyperparameter optimization for models trained on the original 0/1 hard CIFAR-10 labels. All of these models use weight decay and are trained with the. dropout hyperparameters included in the Bayesian optimization. The table shows the accuracy of. the best three deep convolutional models we could train on CIFAR-10. as well as the accuracy of\n13.16 ~ Sqrt(10) falls halfway between 1 and 10 on log scale\nTable 1: Accuracy on CIFAR-10 of shallow and deep models trained on the original 0/1 hard clas. 
labels using Bayesian optimization with dropout and weight decay. Key: c = convolution layer; m. - max-pooling layer; fc = fully-connected layer; lfc = linear bottleneck layer; exponents indicat repetitions of a layer. The last two models (*) are numbers reported byBa and Caruana[(2014). Th. models with 1-4 convolutional layers at the top of the table are included for comparison with studer. models of similar architecture in Table[2]. All of the student models in Table2|with 1, 2, 3, and. convolutional layers are more accurate than their counterparts in this table that are trained on th. original O/1 hard targets -- as expected distillation yields shallow models of higher accuracy tha. shallow models trained on the original training data..\nModel Architecture # parameters Accuracy 1 conv. layer c-mp-lfc-fc 10M 84.6% 2 conv. layer c-mp-c-mp-fc 10M 88.9% 3 conv. layer c-mp-c-mp-c-mp-fc 10M 91.2% 4 conv. layer c-mp-c-c-mp-c-mp-fc 10M 91.75% Teacher CNN 1st 76c2-mp-126c2-mp-148c4-mp-1200fc2 5.3M 92.78% Teacher CNN 2nd 96c2-mp-171c2-mp-128c4-mp-512fc2 2.5M 92.77% Teacher CNN 3rd 54c2-mp-158c2-mp-189c4-mp-1044fc2 5.8M 92.67% Ensemble of 16 CNNs c2-mp-c2-mp-c4-mp-fc2 83.4M 93.8% Teacher CNN (*) 128c-mp-128c-mp-128c-mp-1k fc 2.1M 88.0% Ensemble, 4 CNNs (*) 128c-mp-128c-mp-128c-mp-1k fc 8.6M 89.0%\nTable 2: Comparison of student models with varying number of convolutional layers trained to mimic. the ensemble of 16 deep convolutional CIFAR-10 models in Table[1]. The best performing student models have 3-4 convolutional layers and 10M -31.6M parameters. The student models in this table are more accurate than the models of the same architecture in Table[1|that were trained on the original O/1 hard targets -- shallow models trained with distillation are more accurate than shallow. models trained on 0/1 hard targets. The student model trained byBa and Caruana(2014) is shown in. the last line for comparison; it is less accurate and much larger than the student models trained here that also have 1 convolutional layer.\n1 M 3.16 M 10 M 31.6 M 70 M Bottleneck, 1 hidden layer 65.8% 68.2% 69.5% 70.2% 2 hidden layers 66.2% 70.9% 73.4% 74.3% 3 hidden layers 66.8% 71.0% 73.0% 73.9% 4 hidden layers 66.7% 69.5% 71.6% 72.0% 5 hidden layers 66.4% 70.0% 71.4% 71.5% 1 conv. layer, 1 max-pool, Bottleneck 84.5% 86.3% 87.3% 87.7% 2 conv. layers, 2 max-pool 87.9% 89.3% 90.0% 90.3% 3 conv. layers, 3 max-pool 90.7% 91.6% 91.9% 92.3% 4 conv. layers, 3 max-pool 91.3% 91.8% 92.6% 92.6% SNN-ECNN-MIMIC-30k 128c-p-1200L-30k 85.8% trained on ensemble (Ba and Caruana2014)\nthe ensemble of 16 deep CNNs. For comparison, the accuracy of the ensemble trained by Ba anc Caruana(2014)) is included at the bottom of the table.\nTable 2summarizes the results after Bayesian hyperparameter optimization for student mod- els of different depths and number of parameters trained on soft targets (average logits) to mimic the teacher ensemble of 16 deep CNNs. For comparison, the student model trained byBa and Caruana (2014) also is shown.\nThe first four rows in Table|1|show the accuracy of convolutional models with 10 million param- eters and 1. 2. 3. and 4 convolutional layers The accuracies of these same architectures with 1M. 
3.16M, 10M, and 31.6M parameters when trained as students on the soft targets predicted by the teacher ensemble are shown in Table 2. Comparing the accuracies of the models with 10 million parameters in both tables, we see that training student models to mimic the ensemble leads to significantly better accuracy in every case. The gains are more pronounced for shallower models, most likely because their learnable internal representations do not naturally lead to good generalization in this task when trained on the 0/1 hard targets: the difference in accuracy for models with one convolutional layer is 2.7% (87.3% vs. 84.6%) and only 0.8% (92.6% vs. 91.8%) for models with four convolutional layers.

Figure 1 summarizes the results in Table 2 for student models of different depth, number of convolutional layers, and number of parameters when trained to mimic the ensemble teacher model. Student models trained on the ensemble logits are able to achieve accuracies previously unseen on CIFAR-10 for models with so few layers. Also, it is clear that there is a huge gap between the convolutional student models at the top of the figure, and the non-convolutional student models at the bottom of the figure: the most accurate student MLP has accuracy less than 75%, while the least accurate convolutional student model with the same number of parameters but only one convolutional layer has accuracy above 87%. And the accuracy of the convolutional student models increases further as more layers of convolution are added. Interestingly, the most accurate student MLPs with no convolutional layers have only 2 or 3 hidden layers; the student MLPs with 4 or 5 hidden layers are not as accurate.

Figure 1: Accuracy of student models with different architectures trained to mimic the CIFAR10 ensemble. The average performance of the five best models of each hyperparameter-optimization experiment is shown, together with dashed lines indicating the accuracy of the best and the fifth best model from each setting. The short horizontal lines at 10M parameters are the accuracy of models trained without compression on the original 0/1 hard targets.

Comparing the student MLP with only one hidden layer (bottom of the graph) to the student CNN with 1 convolutional layer clearly suggests that convolution is critical for this problem even when models are trained via distillation, and that it is very unlikely that a shallow non-convolutional model with 100 million parameters or less could ever achieve accuracy comparable to a convolutional model. It appears that if convolution is critical for teacher models trained on the original 0/1 hard targets, it
Although the students do not need as many layers as teacher models trained on the original O/1 hard targets, accuracy increases significantly as multiple convolutional layers are added to the model. For example, the best student with only on convolutional layer has 87.7% accuracy, while the student with the same number of parameters (31M and 4 convolutional layers has 92.6% accuracy.\nOne pattern that is clear in the graph is that all student models benefit when the number of parameters increases from 1 million to 31 million parameters. It is interesting to note, however, that the largest student (31M) with a one convolutional layer is less accurate than the smallest student (1M) with two convolutional layers, further demonstrating the value of depth in convolutional models..\nIn summary, depth-constrained student models trained to mimic a high-accuracy ensemble of deep convolutional models perform better than similar models trained on the original hard targets (the \"compression\"' gaps in Figure[1), student models need at least 3-4 convolutional layers to have high accuracy on CIFAR-10, shallow students with no convolutional layers perform poorly on CIFAR-10 and student models need at least 3-10M parameters to perform well. We are not able to compress deep convolutional models to shallow student models without significant loss of accuracy.\nWe are currently running a reduced set of experiments on ImageNet, though the chances of shallow models performing well on a more challenging problem such as ImageNet appear to be slim.."}, {"section_index": "8", "section_name": "4 DISCUSSION", "section_text": "Although we are not able to train shallow models to be as accurate as deep models, the models trained via distillation are the most accurate models of their architecture ever trained on CIFAR-10. For example, the best single-layer fully-connected MLP (no convolution) we trained achieved an accuracy. of 70.2%. We believe this to be the most accurate shallow MLP ever reported for CIFAR-10 (in. comparison to 63.1% achieved byLe et al.[(2013), 63.9% by[Memisevic et al.(2015) and 64.3% by Geras and Sutton (2015)). Although this model cannot compete with convolutional models, clearly. distillation helps when training models that are limited by architecture and/or number of parameters.. Similarly, the student models we trained with 1, 2, 3, and 4 convolutional layers are, we believe. the most accurate convnets of those depths reported in the literature. For example, the ensemble. teacher model in|Ba and Caruana[(2014) was an ensemble of four CNNs, each of which had 3. convolutional layers, but only achieved 89% accuracy, whereas the single student CNNs we train via. distillation achieve accuracies above 90% with only 2 convolutional layers, and above 92% with 3. convolutional layers. The only other work we are aware of that achieves comparable high accuracy. with non-convolutional MLPs is recent work by Lin et al.(2016). They train multi-layer Z-Lin. networks, and use a powerful form of data augmentation based on deformations that we did not use..\nInterestingly, we noticed that mimic networks perform consistently worse when trained using dropou. This surprised us, and suggests that training student models on the soft-targets from a teacher provide.. significant regularization for the student models obviating the need for extra regularization method. such as dropout. This is consistent with the observation made by Ba and Caruana[(2014) that studen. mimic models did not seem to overfit.Hinton et al. 
(2015) claim that soft targets convey more. information per sample than Boolean hard targets. The also suggest that the dark knowledge in the. soft targets for other classes further helped regularization, and that early stopping was unnecessary. Romero et al. (2015) extend distillation by using the intermediate representations learned by the. teacher as hints to guide training deep students, and teacher confidences further help regularizatior. by providing a measure of sample simplicity to the student, akin to curriculum learning. In othe. work,Pereyra et al.(2017) suggest that the soft targets provided by a teacher provide a form o. confidence penalty that penalizes low entropy distributions and label smoothing, both of whicl. improve regularization by maintaining a reasonable ratio between the logits of incorrect classe.\nFigure[1includes short horizontal lines at 10M parameters indicating the accuracy of non-student models trained on the original O/1 hard targets instead of on the soft targets. This \"compression gap\" is largest for shallower models, and as expected disappears as the student models become architecturally more similar to the teacher models with multiple layers of convolution. The benefits of distillation are most significant for shallow models, yielding an increase in accuracy of 3% or more\nZhang et al.[(2016) question the traditional view of regularization in deep models. Although they dc not discuss distillation, they suggest that in deep learning traditional function approximation appears to be deeply intertwined with massive memorization. The multiple soft targets used to train student models have a high information density (Hinton et al.2015) and thus provide regularization by reducing the impact of brute-force memorization."}, {"section_index": "9", "section_name": "5 CONCLUSIONS", "section_text": "We train shallow nets with and without convolution to mimic state-of-the-art deep convolutiona nets. If one controls for the number of learnable parameters, nets containing a single fully-connectec non-linear layer and no convolutional layers are not able to learn functions as accurate as deepe convolutional models. This result is consistent with those reported in Ba and Caruana (2014 However, we also find that shallow nets that contain only 1-2 convolutional layers also are unable to achieve accuracy comparable to deeper models if the same number of parameters are used ir the shallow and deep models. Deep convolutional nets are significantly more accurate than shallov convolutional models, given the same parameter budget. We do, however, see evidence that mode compression allows accurate models to be trained that are shallower and have fewer convolutiona layers than the deep convolutional architectures needed to learn high-accuracy models from th original 1-hot hard-target training data. The question remains why extra layers are required to trair accurate models from the original training data."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In NIPs, 2014\nFrederic Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeror Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning an Unsupervised Feature Learning NIPS 2012 Workshop, 2012.\nJames Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins. Joseph Turian, David Warde-Farley, and Yoshua Bengio. 
Theano: a CPU and GPU math expression compiler In SciPy, 2010.\nCristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In KDD, 2006\nNadav Cohen and Amnon Shashua. Convolutional rectifier networks as generalized tensor decomposition arXiv preprint arXiv:1603.00162, 2016.\nGeorge Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303-314, 1989.\nYann N. Dauphin and Yoshua Bengio. Big neural networks waste capacity. arXiv:1301.3583, 2013\nKrzysztof J. Geras and Charles Sutton. Scheduled denoising autoencoders. In ICLR, 2015.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition arXiv:1512.03385, 2015.\nGeoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv:1503.02531 2015.\nAlex Krizhevsky. Learning multiple layers of features from tiny images, 2009\nKrzysztof J. Geras, Abdel-rahman Mohamed, Rich Caruana, Gregor Urban, Shengjie Wang, Ozlem Aslan,. Matthai Philipose, Matthew Richardson, and Charles Sutton. Blending LSTMs into CNNs. arXiv:1511.06433. 2015.\nJinyu Li, Rui Zhao, Jui-Ting Huang, and Yifan Gong. Learning small-size dnn with output-distribution-base criteria. In INTERSPEECH, 2014\nShiyu Liang and R Srikant. Why deep neural networks? arXiv preprint arXiv:1610.04161, 2016\nEmilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforce ment learning. In ICLR, 2016.\nGabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey Hinton. Regularizing neura networks by penalizing output distributions. ICLR, 2017..\nAdriana Romero, Ballas Nicolas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengic FitNets: Hints for thin deep nets. ICLR, 2015.\nAndrei A. Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick. Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. In ICLR, 2016\nFrank Seide, Gang Li, and Dong Yu. Conversational speech transcription using context-dependent deep neura networks. In INTERSPEECH. 2011\nKaren Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition In ICLR, 2014.\nJasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. NIPS, 2012.\nRupesh K Srivastava, Klaus Greff, and Juergen Schmidhuber. Training very deep networks. In NIPs, 2015\nAntonio Torralba, Robert Fergus, and William T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. TPAMI, 30(11), 2008\nChiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learnin, requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.\nQuoc Le, Tamas Sarlos, and Alexander Smola. Fastfood-computing hilbert space expansions in loglinear time In ICML, 2013.\nZhouhan Lin, Roland Memisevic, Shaoqing Ren, and Kishore Konda. How far can we go without convolution: Improving fully-connected networks. arXiv:1511.02580v1, 2016.\nRoland Memisevic, Kishore Konda, and David Krueger. Zero-bias autoencoders and the benefits of co-adapting features. In ICLR. 2015.\nWeights of trained nets are initialized as inGlorot and Bengio (2010). 
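For reference, here is a minimal sketch of that initialization scheme (the uniform variant); the appendix's separate initialization-scale hyperparameter would simply multiply these weights:

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng=None):
    # Glorot & Bengio (2010): W ~ U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)),
    # chosen so activation/gradient variances stay roughly constant across layers.
    rng = rng or np.random.default_rng(0)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = glorot_uniform(3072, 1200)                # e.g. CIFAR-10 input -> 1200 hidden units
print(W.std(), np.sqrt(2.0 / (3072 + 1200)))  # empirical std ~ sqrt(2/(fan_in+fan_out))
```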
The models trained in Section2.7 contain eight convolutional layers organized into three groups (2-2-4) and two fully-connected hidden layers The Bayesian hyperparameter optimization controls four constants C1, C2, C3, H1 all in the range [0, 1] that are then linearly transformed to the number of filters/neurons in each layer. The hyperparameters for which ranges were not shown in Section|2.7 are: the four separate dropout rates (DOc1, DOc2, DOc3, DOf) and the five constants Dn, Ds, D,, As, A, controlling the HSV data augmentation. The ranges we selected are DOc1 E [0.1, 0.3], DOc2 E [0.25, 0.35], DOc3 E [0.3, 0.44], DOf1 E [0.2, 0.65], DOf2 E [0.2, 0.65], Dn E [0.03, 0.11], Ds E [0.2, 0.3], D, E [0.0, 0.2], As E [0.2, 0.3], A, E [0.03, 0.2], partly guided bySnoek et al. (2015) and visual inspection of the resulting augmentations.\nThe number of filters and hidden units for the models have the following bounds:. 1 conv. layer: 50 - 500 filters, 200 - 2000 hidden units, number of units in bottleneck is the dependent variable 2 conv. layers: 50 - 500 filters, 100 - 400 filters, number of hidden units is the dependent variable.. 3 conv. layers: 50 - 500 filters (layer 1), 100 - 300 filters (layers 2-3), # of hidden units is dependent the variable 4 conv. layers: 50 - 300 filters (layers 1-2), 100 - 300 filters (layers 3-4), # of hidden units is the dependent Variable.\nTable 3: Optimization bounds for student models. (Models trained on O/1 hard targets were described in Sections[6.1and[6.2]) Abbreviations: fc (fully-connected layer, ReLu), c (convolutional, ReLu) linear (fully-connected bottleneck layer, linear activation function), dependent (dependent variable. chosen s.t. parameter budget is met)..\n1st layer 2nd layer 3rd layer 4th layer 5th layer No conv. layer (1M) 500 - 5000 (fc) dependent (linear) No conv. layer (3.1M) 1000 - 20000 (fc) dependent (linear) No conv. layer (10M) 5000 - 30000 (fc) dependent (linear) No conv. layer (31M) 5000 - 45000 (fc) dependent (linear) 1 conv. layer (1M) 40 - 150 (c) dependent (linear) 200 - 1600 (fc) 1 conv. layer (3.1M) 50 - 300 (c) dependent (linear) 100 - 4000 (fc) 1 conv. layer (10M) 50 - 450 (c) dependent (linear) 500 - 20000 (fc) 1 conv. layer (31M) 200 - 600 (c) dependent (linear) 1000 - 4100 (fc) 2 conv. layers (1M) 20 - 120 (c) 20 - 120 (c) dependent (fc) 2 conv. layers (3.1M) 50 - 250 (c) 20 - 120 (c) dependent (fc) 2 conv. layers (10M) 50 - 350 (c) 20 - 120 (c) dependent (fc) 2 conv. layers (31M) 50 - 800 (c) 20 - 120 (c) dependent (fc) 3 conv. layers (1M) 20 - 110 (c) 20 - 110 (c) 20 - 110 (c) dependent (fc) 3 conv. layers (3.1M) 40 - 200 (c) 40 - 200 (c) 40 - 200 (c) dependent (fc) 3 conv. layers (10M) 50 - 350 (c) 50 - 350 (c) 50 - 350 (c) dependent (fc) 3 conv. layers (31M) 50 - 650 (c) 50 - 650 (c) 50 - 650 (c) dependent (fc) 4 conv. layers (1M) 25 - 100 (c) 25 - 100 (c) 25 - 100 (c) 25 - 100 (c) dependent (fc) 4 conv. layers (3.1M) 50 - 150 (c) 50 - 150 (c) 50 - 200 (c) 50 - 200 (c) dependent (fc) 4 conv. layers (10M) 50 - 300 (c) 50 - 300 (c) 50 - 350 (c) 50 - 350 (c) dependent (fc) 4 conv. layers (31M) 50 - 500 (c) 50 - 500 (c) 50 - 650 (c) 50 - 650 (c) dependent (fc)\n2n d layer\nModels in the first four rows in Table[1are trained similarly to those in Section 6.1 and are architecturally equivalent to the four convolutional student models shown in Table2|with 10 million parameters. 
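The "dependent" widths in Table 3 can be resolved against a parameter budget as in the following sketch. The 5x5 filters and layer bookkeeping below follow the architectures described in this appendix, but the exact accounting is an assumption for illustration:

```python
def dependent_width(budget, param_count):
    # Binary-search the width of the "dependent" layer (cf. Table 3) so the total
    # parameter count lands as close as possible to the target budget.
    lo, hi = 1, 10**7
    while lo < hi:
        mid = (lo + hi) // 2
        if param_count(mid) < budget:
            lo = mid + 1
        else:
            hi = mid
    return lo

# Illustrative 2-conv-layer student (c-mp-c-mp-fc) on 32x32x3 inputs with 5x5 filters
# and 2x2 pooling; the fully-connected width h is the dependent variable.
def two_conv_params(h, f1=200, f2=100, n_out=10):
    conv = (3 * 25 + 1) * f1 + (f1 * 25 + 1) * f2  # two 5x5 conv layers with biases
    fc_in = f2 * 8 * 8                             # 32 -> 16 -> 8 after two poolings
    return conv + (fc_in + 1) * h + (h + 1) * n_out

h = dependent_width(10_000_000, two_conv_params)
print(h, two_conv_params(h))                       # h chosen so the total is ~10M
```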
The following hyperparameters are optimized: initial learning rate [0.0015, 0.025] (optimized on a log scale), momentum [0.68, 0.97] (optimized on a log scale), constants C1, C2 E [0, 1] that control the number of filters or neurons in different layers, and up to four different dropout rates DOc1 E [0.05, 0.4], DOc2 E [0.1, 0.6], DOc3 E [0.1, 0.7], DOf1 E [0.1, 0.7] for the different layers. Weight decay was set to 2 : 10-4 and we used the same data augmentation settings as for the student models. We use 5 5 convolutional filters, one nonlinear hidden layer in each model and each max-pooling operation is followed by dropout with a separately optimized rate We use 22 max-pooling except in the model with only one convolutional layer where we apply 3 3 pooling as this seemed to boost performance and reduces the number of parameters.\nAll convolutional filters in the model are sized 3 3, max-pooling is applied over windows of 22 and we use ReLU units throughout all our models. We apply dropout after each max-pooling layer with the three rates. DOc1, DOc2, DOc3 and after each of the two fully-connected layers with the same rate DOf.."}, {"section_index": "11", "section_name": "6.2 DETAILS OF TRAINING MODELS OF VARIOUS DEPTHS ON CIFAR-1O HARD O/1 LABELS", "section_text": "Our student models have the same architecture as models in Section[6.2] The model without convolutional layers consists of one linear layer that acts as a bottleneck followed by a hidden layer of ReLU units. The following. hyperparameters are optimized: initial learning rate [0.0013, 0.016] (optimized on a log scale), momentum [0.68, 0.97] (optimized on a log scale), input-scale E [0.8, 1.25], global initialization scale (after initialization E 0.4, 2.0], layer-width constants C1, C2 E [0, 1| that control the number of filters or neurons. The exact ranges. for the number of filters and implicitly resulting number of hidden units was chosen for all twenty optimization. experiments independently, as architectures, number of units and number of parameters strongly interact..\nFor the non-convolutional models we chose a slightly different hyper-parameterization. Given that all layers (in. models with \"two layers\" or more) are nonlinear and fully connected we treat all of them similarly from the hyperparameter-optimizer's point of view. In order to smoothly enforce the parameter budgets without rejecting. any samples from the Bayesian optimizer we instead optimize the ratios of hidden units in each layer (numbers between O and 1), and then re-normalize and scale them to the final number of neurons in each layer to match. the target parameter budget."}, {"section_index": "12", "section_name": "6.3 DETAILS OF TRAINING STUDENT MODELS OF VARIOUS DEPTHS ON ENSEMBLE LABELS", "section_text": "Figure|2|is similar to|1|but includes preliminary re sults from experiments for models with 100M param. eters. We are also running experiments with 300M. parameters. Unfortunately, Bayesian optimization. on models with 100M and 300M parameters is even more expensive than for the other points in the graph.\nAs expected, adding capacity to the convolutional. students (top of the figure) modestly increases their. accuracy. Preliminary results for the MLPs however. (too preliminary to include in the graph) may not. show the same increase in accuracy with increasing. model size. Models with two or three hidden layers may benefit from adding capacity to each layer, but. we have yet to see any benefit from adding capacity. 
to the MLPs with four or five hidden layers.

Figure 2: See Figure 1 (same legend and axes, extended with the preliminary results for models with 100M parameters)."}]
Hk85q85ee | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "In this paper, we focus on the first problem and use dynamical system to analyze the nonlinea. gradient descent dynamics of certain two-layered nonlinear network in the following form:.\nwhere o(x) = max(x, O) is the ReLU nonlinearity. We consider the following setting: a student. network learns the parameters that minimize the l, distance between its prediction and the super vision provided by the teacher network of the same size with a fixed set of parameters w*. We. assume all inputs x to follow Gaussian distribution and thus the network is bias-free. Eqn.1is. highly nonconvex and could contain exponential number of symmetrically equivalent solutions..\nTo analyze this, we first derive novel and concise gradient update rules for multilayer ReLU networks (See Lemma[2.1) in the teacher-student setting under l2 loss. Then for K = 1, we prove that the nonlinear gradient dynamics of Eqn.1|has a close form and converges to w* with at least (1 -"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "In this paper, we use dynamical system to analyze the nonlinear weight dynam ics of two-layered bias-free networks in the form of g(x; w) = j=1 (wJx), where o() is ReLU nonlinearity. We assume that the input x follow Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters w* using l, loss. We first show that when K = 1, the nonlinear dynamics can be written in close form. and converges to w* with at least (1 - e)/2 probability, if random weight ini- tializations of proper standard derivation (~ 1/d) is used, verifying empirical practice [Glorot & Bengio (2010);He et al.(2015);LeCun et al.(2012)]. For net- works with many ReLU nodes (K > 2), we apply our close form dynamics and symmetric weight initialization yields a convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to w* without local minima. To our knowledge, this is the first proof that shows global convergence in nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with l2 loss. Simulations verify our theoretical analysis.\nDeep learning has made substantial progress in many applications, including Computer Vision [He. et al.(2016); Simonyan & Zisserman(2015); Szegedy et al.(2015); Krizhevsky et al.(2012)], Nat- ural Language Processing [Sutskever et al.[(2014)] and Speech Recognition [Hinton et al.[(2012)] However, till now, how and why it works remains elusive due to a lack of theoretical understanding First, how simple approaches like gradient descent can solve a very complicated non-convex opti mization effectively. Second, how the deep models, especially deep convolutional models, achieve generalization power despite massive parameters.\nK g(x;w)=) 0(w[x) j=1\ng b (d) C layer c Student Teacher Network Network Wjk W c W W Xc k 8 layer c+ 1 X\n111 c (d) layer c Student Teacher Wik Network Network W c W * W W c Xc k O Ok' layer c + 1\ne)/2 probability, if initialized randomly with standard derivation on the order of 1/d, verifying. commonly used initialization techniques [Glorot & Bengio (2010); He et al.(2015); LeCun et al (2012)],. When K 2, we prove that when the teacher parameters {w}j=1 form orthonorma. 
bases, (1) a symmetric initialization of a student network gets stuck at a saddle point and (2) under a certain symmetry-breaking weight initialization, the dynamics converges to w*, without getting stuck into any local minima. Note that in both cases, the initialization can be arbitrarily close to the origin for a fixed ‖w*‖, showing that such a convergence behavior is beyond the local convex structure at w*. To our knowledge, this is the first proof of its kind.

Previous works also use dynamical system to analyze deep neural networks. [Saxe et al. (2013)] analyzes the dynamics of multilayer linear network, and [Kawaguchi (2016)] shows every local minimum is global for multilinear network. Very little theoretical work has been done to analyze the dynamics of nonlinear networks, especially deep ones. [Mei et al. (2016)] shows the global convergence when K = 1 with activation function σ(x) when its derivatives σ', σ'', σ''' are bounded and σ' > 0. Similar to our approach, [Saad & Solla (1996)] also uses the student-teacher setting and analyzes the dynamics of the student network when the teacher's parameters w* form orthonormal bases; however, it uses σ(x) = erf(x) as the nonlinearity and only analyzes the local behaviors of the two critical points (the saddle point in symmetric initializations, and w*). In contrast, we prove the global convergence behavior in certain symmetry-breaking cases.

The paper is organized as follows. Sec. 2 introduces the basic formulation and some interesting novel properties of ReLU in multilayered ReLU networks. Sec. 3 and Sec. 4 then analyze the two-layered model Eqn. 1 for K = 1 and K ≥ 2, respectively. Sec. 5 shows that simulation results are consistent with theoretical analysis. Finally Sec. 7 gives detailed proofs for all theorems."}, {"section_index": "2", "section_name": "2.1 NOTATION", "section_text": "Denote X as the N-by-d input data matrix and w* as the parameter of the teacher network with desired N-by-1 output u = g(X; w*). Now suppose we have an estimator w and the estimated output v = g(X; w). We want to know, with the l2 loss E(w) = ½‖u − v‖² = ½‖u − g(X; w)‖², whether gradient descent will converge to the desired solution w*.

Figure 1: (a) We consider the student and teacher network as nonlinear neural networks with ReLU nonlinearity. The student network updates its weight w from the output of the teacher, whose weights w* are fixed. (b)-(c) The network structure we consider in the K = 1 and K ≥ 2 cases. (d) Notations used in the multilayer ReLU gradient update rule (Sec. 2.2).

Many previous works analyze nonlinear networks based on the assumption of independent activations: the activations of ReLU (or other nonlinear) nodes are independent of the input and/or mutually independent. For example, [Choromanska et al. (2015a;b)] relate the nonlinear ReLU network with spin-glass models when several assumptions hold, including the assumption of independent activations (A1p and A5u). [Kawaguchi (2016)] proves that every local minimum in nonlinear network is global based on similar assumptions. [Soudry & Carmon (2016)] shows the global optimality of the local minimum in a two-layered ReLU network, by assuming small sample size and applying independent multiplicative Bernoulli noise on the activations. In practice, the activations are highly dependent due to their common input. Ignoring such dependency also misses important behaviors, and may lead to misleading conclusions. In this paper, no assumption of independent activation is made.
For sigmoid activation, [Fukumizu & Amari (2000)] gives quite complicated conditions for a local minimum to be global when adding a new node to a two-layered network. [Janzamin et al. (2015)] gives guarantees on recovering the parameters of a 2-layered neural network learnt with tensor decomposition. In comparison, we analyze ReLU networks trained with gradient descent, which is a more popular setting in practice.

The gradient descent update is w(t+1) = w(t) + η∆w(t), where ∆w(t) = −∇E(w(t)). If we let η → 0, then the update rule becomes a first-order differential equation dw/dt = −∇E(w), or more concisely, ẇ = −∇E(w). In this case, Ė = ∇E(w)ᵀẇ = −‖∇E(w)‖² ≤ 0, i.e., the function value E is nonincreasing over time. The key is to check whether there exist other critical points w ≠ w* so that ∇E(w) = 0.

In our analysis, we assume entries of the input X follow a Gaussian distribution. In this situation, the gradient is a random variable and ∆w = −E[∇E(w)]. The expected E[E(w)] is also nonincreasing no matter whether we follow the expected gradient or the gradient itself, because

E[Ė] = −E[∇E(w)ᵀ∇E(w)] ≤ −E[∇E(w)]ᵀE[∇E(w)] ≤ 0.

In this paper, we discover a few useful properties of ReLU that make our analysis much simpler. Denote D = D(w) = diag(Xw > 0) as an N-by-N binary diagonal matrix. The l-th diagonal element of D is a binary variable showing whether the neuron is on for sample l. Using this notation, we could write σ(Xw) = DXw. Note that D only depends on the direction of w but not its magnitude.

Note that for ReLU, D is also "transparent" on derivatives. For example, the Jacobian J_w[σ(Xw)] = σ'(Xw)X = DX at differentiable regions. This gives a very concise rule for gradient descent in a ReLU network: suppose we have negative gradient inflow vector g (of dimension N-by-1) on the current ReLU node with weights w, then we can simply write the update ∆w as:

∆w = J_w[σ(Xw)]ᵀg = XᵀDg.

This can be easily applied to multilayer ReLU networks. Denote j ∈ [c] if node j is in layer c, d_c as the width of layer c, and u_j and v_j as the output of the teacher network and the student network, respectively. A simple deduction yields the following lemma:

Lemma 2.1 For a neural network with ReLU nonlinearity and using l2 loss to match with a teacher network of the same size, the negative gradient inflow g_j for node j at layer c has the following form:

g_j = L_j ( Σ_i L*_i u_i − Σ_i L_i v_i ).

The intuition here is to start from g = u − v (true for the l2 loss) at the top layer and use induction. With this formulation, we could write the finite dynamics for w_c (all parameters in layer c). Denote the N-by-d_{c+1}d_c matrices R_c = [L_j D_j]_{j∈[c]} X_c and R*_c = [L*_j D*_j]_{j∈[c]} X*_c. Using the gradient descent rule:

∆w_j = Xᵀ D_j g_j = Xᵀ D_j L_j (R*_c w*_c − R_c w_c),

or, collecting all nodes of layer c,

∆w_c = R_cᵀ (R*_c w*_c − R_c w_c).

When K = 1, the teacher-student dynamics becomes

∆w(t) = Xᵀ D(t) g(t) = Xᵀ D(t) (D* X w* − D(t) X w(t)).

Linear case. In this situation D(t) ≡ D* ≡ I (no gating in either forward or backward propagation), so ∆w = XᵀX(w* − w), whose expectation is E[∆w] = N(w* − w), and the expected dynamics converges to w*.

Nonlinear (ReLU) case. In this case, ∆w = XᵀD(D*Xw* − DXw), in which D is a function of w. Intuitively, this term goes to zero when w → w*, and should be approximated as N/2 (w* − w) in the i.i.d Gaussian case, since roughly half of the samples are blocked. However, once we make such an approximation, we lose the nonlinear behavior of the network and would draw the wrong conclusion of global convergence.

Then how should we analyze it? Notice that in ∆w, both of the two terms have the form F(e, w) = XᵀD(e)D(w)Xw. Using this form, E[∆w] = E[F(w/‖w‖, w*)] − E[F(w/‖w‖, w)]. Here e is a unit vector called the "projected" weight.
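Before taking expectations, the finite dynamics above can be simulated directly. A minimal sketch for K = 1 (the learning rate, batch size and iteration count are arbitrary choices here):

```python
import numpy as np

# One-ReLU student following Delta w = X^T D(w) (D(w*) X w* - D(w) X w),
# i.e. gradient descent on E(w) = 1/2 ||sigma(X w*) - sigma(X w)||^2.
rng = np.random.default_rng(1)
d, N, eta = 10, 4096, 1e-4
w_star = rng.standard_normal(d)
w = 0.01 * rng.standard_normal(d)           # small random init near the origin (Sec. 3)
for _ in range(500):
    X = rng.standard_normal((N, d))         # fresh i.i.d. Gaussian inputs
    u = np.maximum(X @ w_star, 0.0)         # teacher output D* X w*
    v = np.maximum(X @ w, 0.0)              # student output D X w
    gate = (X @ w > 0).astype(float)        # diagonal of D(w)
    w += eta * (X.T @ (gate * (u - v)))     # Delta w = X^T D (u - v)
print(np.linalg.norm(w - w_star) / np.linalg.norm(w_star))
# with decent probability over the random init this ratio shrinks toward 0
```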
In the following, we will show that E[F(e, w)] has the following close form under the i.i.d Gaussian assumption on X:

Lemma 3.1 Denote F(e, w) = XᵀD(e)D(w)Xw, where e is a unit vector, X = [x₁, x₂, ..., x_N]ᵀ is the N-by-d sample matrix and D(w) = diag(Xw > 0) is a binary diagonal matrix. If x_i ~ N(0, I) and are i.i.d (and thus bias-free), then:

E[F(e, w)] = N/2π [ (π − θ) w + ‖w‖ sin θ · e ],    (10)

where θ = ∠(e, w) ∈ [0, π].

Note that the expectation analysis smooths out the non-differentiable property of ReLU, leaving only one singularity at e = 0. The intuition is that expectation analysis involves an integration over the data distribution. With simple algebraic manipulation, E[∆w] takes the following close form:

E[∆w] = N/2 (w* − w) + N/2π ( α sin θ · w − θ · w* ),    (11)

where α = ‖w*‖/‖w‖ and θ ∈ [0, π] is the angle between w and w*. The first term is expected, while the last two terms show the nonlinear behavior. Using Lyapunov's method, we show that the dynamics (if treated continuously) converges to w* when w(1) ∈ Ω = {w : ‖w − w*‖ < ‖w*‖}:

Lemma 3.2 When w(1) ∈ Ω = {w : ‖w − w*‖ < ‖w*‖}, following the dynamics of Eqn. 11, the Lyapunov function V(w) = ½‖w − w*‖² has V̇ < 0 and the system is asymptotically stable, and thus w(t) → w* when t → +∞.

See Appendix for the proof. The intuition is to represent V̇ as a 2-by-2 bilinear form of the vector [‖w‖, ‖w*‖], whose bilinear coefficient matrix is positive definite. One question arises: will the same approach show the dynamics converges when the initial conditions lie outside the region Ω, in particular for any region that includes the origin? The answer is probably no. Note that w = 0 is a singularity at which ∆w is not continuous (if approaching from different directions towards w = 0, ∆w is different). It is due to the fact that the ReLU function is not differentiable at the origin. We could remove this singularity by "smoothing out" ReLU around the origin. This will yield ∆w → 0 when w → 0. In this case, V̇(0) = 0, so the Lyapunov method could only tell that the dynamics is stable but not convergent. Note that for ReLU activation, σ'(x) = 0 for certain negative x even after a local smoothing, so the global convergence claim in [Mei et al. (2016)] for l2 loss does not apply.

Random Initialization. Then we study how to sample w(1) so that w(1) ∈ Ω. We would like to sample within Ω, but we don't know where w* is. Sampling around the origin with a big radius r ≥ 2‖w*‖ is inefficient, in particular in high-dimensional space. This is because when the sample is uniform, the probability of hitting the ball is proportional to (‖w*‖/r)^d ≤ 2^{−d}, which is exponentially small.

Figure 2: (a) Sampling strategy to maximize the probability of convergence. (b) Relationship between sampling range r and desired probability of success (1 − ε)/2. (c) Geometry of the K = 1 2D case. There is a singularity at the origin. Initialization with random weights around the origin has a decent probability to converge to w*.

A better idea is to sample around the origin with a very small radius (but not at w = 0), so that the convergent hypersphere behaves like a hyperplane near the origin, and thus almost half of the samples are useful (Fig. 2(a)), as shown in the following theorem:

The intuition here is to lower-bound the probability of the shaded area (Fig. 2(b)). From the proof, the conclusion could be made stronger to show r ~ 1/d, consistent with common initialization techniques [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)].
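Lemma 3.1 is also easy to check numerically. A minimal Monte-Carlo sketch (the sample size and dimension are arbitrary choices):

```python
import numpy as np

def F(e, w, X):
    # F(e, w) = X^T D(e) D(w) X w with D(v) = diag(Xv > 0)
    mask = ((X @ e > 0) & (X @ w > 0)).astype(float)
    return X.T @ (mask * (X @ w))

def F_expected(e, w, N):
    # Lemma 3.1: E[F(e, w)] = N/(2 pi) ((pi - theta) w + ||w|| sin(theta) e)
    theta = np.arccos(np.clip(e @ w / np.linalg.norm(w), -1.0, 1.0))
    return N / (2 * np.pi) * ((np.pi - theta) * w + np.linalg.norm(w) * np.sin(theta) * e)

rng = np.random.default_rng(0)
d, N = 20, 200_000
w = rng.standard_normal(d)
e = rng.standard_normal(d); e /= np.linalg.norm(e)
X = rng.standard_normal((N, d))
emp, thy = F(e, w, X), F_expected(e, w, N)
print(np.linalg.norm(emp - thy) / np.linalg.norm(emp))  # relative RMS error; shrinks with N
```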
Fig.2(c) shows ar example in the 2D case, in which there is a singularity at the origin, and sampling towards w* yields. the convergence. This is consistent with the analysis above.."}, {"section_index": "3", "section_name": "4 MULTIPLE RELUS CASE", "section_text": "Now we are ready to analyze the network g(x) = j=1 (wJx) for K 2 (Fig.1(c)). Theoretical analysis of such networks is also the main topic in many previous works [Saad & Solla (1996) Soudry & Carmon(2016);Fukumizu & Amari](2000)]. In this case, L; = L* = I for 1 j K Then we have the following nonlinear dynamics from Eqn.7\nWj! wi l|wj|I\nEqn.12|(and its expected version) gives very complicated nonlinear dynamics and could be hard to solve in general. Unlike K = 1, a similar approach with Lyaponov function does not yield a decisive conclusion. However, if we consider the symmetric case: w; = P,w and w* = P,w where P, is a cyclic permutation matrix that maps index j' + 1 to (j' + j mod K) + 1 (and P1 is the identity matrix), then RHS of the expected version of Eqn.12[can be simplified as follows:\nE[wj] =>`E[f(wj,Wj,w*)]=>`E[f(P;w,Pj,w,Pj,w*)] E[f(P;w,PjPj,w,PjPj,w*)] ({Pj}j1is agroup) Pj>`E[f(w,P,w,P;,w*)] (|Pw1||=||wi|l,Z(Pw1,Pw2)=(w1,w2 P;E[w1] (14\nK wj=f(Wj,Wj,W j'=1\nwj]= E[f(wj,Wj,w*)]=`E[f(P;w,Pjw,Pjw*)] E[f(Pjw,PjPj\"w,PjPjw*)] ({Pj}j=1is agroup) Pj>`E[f(w,Pj,w,Pj,w*)] (|Pwi|=|w1|l, Z(Pw1,Pw2)=Z(w1,w2)) PjE[w1] (14)\nK E[w]= E[f(w, P;w,P,w*) j=1\n271 )(x-1+(K-1)y)] N [(K - 1)(a sin $* - sin ) + a sin\n=(x2+(K-1)y2)-1/2 cos 0 = ax, cos * = ay, cos$ = a?(2xy+(K-2)y)\nCorollary 4.2 For a bias-free two-layered ReLU network g(x; w) = , (wJx) that takes Gaus-. sian i.i.d inputs (Fig. 1), if the teacher's parameters {w*} form orthogonal bases, then when = x(1)w* + y(1) i+j Wj, where (x(1), y(1)) E = {x E (0, 1], y E [0, 1], x > y}, then the dynamics (Eqn.12) converges to { w*} without being trapped into local minima..\nWhen symmetry is broken, since the closure of includes the origin, there exists a path starting. at arbitrarily small neighborhood of origin to w*, regardless of how large w* is. In contrast tc traditional convex analysis that only gives the local parameter-dependent convergence basin around w*, here we obtain a convergence basin that is parameter-independent. In comparison, [Saad &. Solla(1996)] uses a different activation function (o(x) = erf(x)) and only analyzes local behaviors near the two fixed points (the symmetric saddle point and the teacher's weights w*), leaving sym metry breaking an empirical procedure. Here we show that it is possible to give global convergence. analysis on certain symmetry breaking cases for two-layered ReLU network..\nwhich means that if all w; and w* are symmetric under the action of cyclic group, so does their expected gradient. Therefore, the trajectory {w(t)} keeps such cyclic structure. Instead of solving a system of K equations, we only need to solve one:\n(a) Distribution of relative RMS error on angle. (b) Relative RMS error w.r.t #sample (Gaussian distribution) (c) Relative RMS error w.r.t #sample (Uniform distri.). 0.7 0.40 0.40 d=5 d=5 Id=5 0.6 /2 0.35 d=10 d=10 0.35 d=10 0.30 d=20 d=20 0.30 I d =20 d =50 d=50 d =50 0.25 0.25 0.20 0.20 0.15 0.15 0.2 0.10 0.10 0.1 0.05 0.05 0.0 0.00 0.00 0.0 0.5 1.0 1.5 2.0 2.5 3.0 10 10 102 106 10 10 10 10 10 107 103 10 0 10 10 Angle (in radius) #Samples #Samples #Samples\nFigure 3: (a) Distribution of relative RMS error with respect to 0 = (w, e). 
(b) Relative RMS error decreases with sample size, showing the asympototic behavior of the close form expression Eqn. 10] (c) Eqn.10]also works well when the input data X are generated by other zero-mean distribution X, e.g., uniform distribution in [1/2, 1/2].\n(a) Vector field in (x, y) plane (K = 2) (b Vector field in (x, y) plane (K = 5). (c) Trajectory in (x, y) plane.. 1.0 1.0 0.6 y =x K =2 0.5 K=5 Saddle points K = 10 0.4 0.8V 0.8 y o.3 iter200 \"iter100 0.2 iter100 iter200 0.6 0.6 0.1 iter100 y > 0.2 0.4 X 0.6 0.8 1.0 1.0 (d) Convergence K = 2 0.4 0.4 K=5 0.8 K =10 0.2 0.2 .2 0.0 0.0 0.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 200 600 800 1000\nFigure 4: (a)-(b) Vector field in (x, y) plane following 2D dynamics (Eqn.16) for K = 2 and. K = 5. Saddle points are visible. The parameters of teacher's network are at (1,0). (c) Trajectory in (x, y) plane for K = 2, K = 5, and K = 10. All trajectories start from (10-3,0). Even the. starting point are aligned with w*, gradient descent dynamics takes detours. (d) Training curve When K is larger the convergence is faster..\nFrom the simulation shown in Fig.] we could see that gradient descent takes a detour to reach th desired solution w*, even when the initialization is aligned with w*. This is because in the firs stage, all ReLU nodes receive the residue and try to explain the data in the same way (both x anc y increases); when the \"obvious\" component has been explained away, then the residue change its direction and pushes some ReLU nodes to explain other components as well (x increases but g decreases).\nEmpirically this path also converges to w* under noise. We leave it a conjecture that the system con verges in the presence of reasonably large noise. If this conjecture is true, then with high probability a random initialization stays in the convergence basin and converges to a permutation of w*. The reason is that a random initialization almost never gives ties. Without a tie, there exists one leading component which will dominate the convergence.\nConjecture 4.3 When the initialization w(1) = x(1)w* j'+j w*, + e, where e is Gaussian. noise and (x(1), y(1)) E Q, then the dynamics Eqn.12|also converges to w* without trapped into local minima.\nWe verify our close form expression of E [F(e, w)] = E [XT D(e)D(w)Xw] (Eqn.[10) with sim ulation. We randomly pick e and w so that their angle (e, w) is uniformly distributed in [0, ]. We prepare the input data X with standard Gaussian distribution and compare the close form so- lution E [F(e, w)] with F(e, w), the actual data term in gradient descent without expectation. We. use relative RMS error: err = ||E [F(e, w)] - F(e, w)|l/||F(e, w)|l. As shown in Fig.3[a), The error distribution on angles shows the properties of the close-form solution. For small 0, D(w) and\n(a) Distribution of relative RMS error on angle (b) Relative RMS error w.r.t #sample (Gaussian distribution) (c) Relative RMS error w.r.t #sample (Uniform distri.) 
0.7 0.40 0.40 d =5 d=5 d=5 0.6 t/2 TT 0.35 d=10 d=10 0.35 d=10 0.30 d=20 d=20 0.30 d=20 d =50 d=50 d =50 0.25 0.25 0.20 0.20 0.3 0.15 0.15 0.2 0.10 0.10 0.1 0.05 0.05 0.0 1.0 2.0 2.5 3.0 0.00 0.00 0.0 0.5 1.5 103 104 105 106 10 103 104 105 106 10 103 104 105 106 107\n1.0 1.0 1.0 1.0 0.8 noise = 0.5, top-w = 1 0.8 noise = 1.0, top-w = 1 errrr 0.8 0.8 0.6 0.6 0.6 0.6 0.4 0.4 0.4 noise = 1.5, top-w = 1 0.4 noise = 2.0, top-w = 1 0.2 0.2 0.2 0.2 0.0 0.0 0.0 0.0 60 20 40 80 100 20 40 60 80 100 20 0 20 40 80 100 0 60 0 0 40 60 80 100 #lteration #lteration #Iteration #Iteration 1.0 1.0 1.0 1.0 p noise = 0.5, top-w E [1, 2] noise = 0.5, top-w E [0.1, 1.1] 0.8 0.8 0.8 0.8 0.6 0.6 0.6 0.6 0.4 0.4 0.4 0.4 0.2 0.2 noise = 0.5, top-w E [0.01, 0.11] 0.2 noise = 0.5, top-w ~ N(0, 1) 0.2 0.0 0.0 60 0.0 0.0 0 20 40 60 80 100 20 40 80 100 0 20 40 60 80 100 0 20 0 40 60 80 100 #lteration #lteration #Iteration #Iteration\nFigure 5: Top row: Convergence when the initial weights deviates from symmetric initialization: w(1) = 10-3w* + e. Here e ~ N(0, 10-3 * noise). The 2-layered network converges to w* until experiment has 8 runs. Bottom row: Convergence when we use g2(x) = j=1 ajo(wJx). Here the top weights a; is fixed at different numbers (rather than 1). Large positive a, correponds to fast convergence. When a; has positive/negative components, the network does not converge to w*.\nFig.3(a) shows that the close form expression becomes more accurate with more samples. We also examine other zero-mean distributions of X, e.g., uniform distribution in [-1/2, 1/2]. As shown in Fig.3(d), the close form expression still works for large d, showing that it could be quite general. Note that the error is computed up to a scaling constant, due to the difference in normalization constants among different distributions. We leave it to the future work to prove its usability for broader distributions.\nFig.4[a) and (b) shows the 2D vector field given by the 2D dynamics (Eqn.16) and Fig.4(c) shows the 2D trajectory towards convergence to the teacher's parameters w*. Interestingly, even when we initialize the weights as (10-3, 0), aligning with w*, the gradient descent takes detours to reach the destination. One explanation is, at the beginning all nodes move similar direction trying to explain the data, once the data have been explained partly, specialization follows (y decreases).\nIn this paper, we analyze the nonlinear dynamical behavior of certain two-layered bias-free ReLl networks in the form of g(x; w) = j=1 o(wJx), where = max(x, 0) is the ReLU node. We assume that the input x follows Gaussian distribution and the output is generated by a teacher net work with parameters w*. In K = 1 we show a close-form nonlinear dynamics can be obtained and its convergence to w* can be proven, if we sample the initialization properly. Such initialization is consistent with common practice [Glorot & Bengio (2010); He et al.[(2015)] and is independent of the value of w*. For K 2, when the teacher parameters {w* } form a orthonormal bases, we prove that the trajectory from symmetric initialization is trapped into a saddle point, while certain sym- metric breaking initialization converges to w* without trapped into any local minima. 
Future work includes analysis of general cases (or symmetric case plus noise) for K 2, and a generalization to multilayer ReLU (or other nonlinear) networks.\n1.0 1.0 1.0 1.0 0.8 noise = 0.5, top-w = 1 0.8 noise = 1.0, top-w = 1 errrr 0.8 0.8 0.6 0.6 0.6 0.6 0.4 0.4 0.4 noise = 1.5, top-w = 1 0.4 noise = 2.0, top-w = 1 0.2 0.2 0.2 0.2 0.0 0.0 0.0 0.0 0 20 40 60 80 100 0 20 40 60 80 100 o: 20 40 60 80 100 20 40 60 80 100 #lteration #Iteration #lteration #lteration 1.0 1.0 1.0 1.0 p noise = 0.5, top-w E [1, 2] noise = 0.5, top-w E [0.1, 1.1] errorr 0.8 0.8 0.8 0.8 0.6 0.6 0.6 0.6 RN 0.4 0.4 0.4 0.4 RRel 0.2 0.2 0.2 noise = 0.5, top-w E [0.01, 0.11] 0.2 noise = 0.5, top-w ~ N(0, 1) 0.0 0.0 0.0 0.0 0 20 40 60 80 100 0 20 40 60 80 100 0 20 40 60 80 100 20 40 60 80 0 100 #lteration #lteration #lteration #lteration\nFig.5 shows empirical convergence for K > 2, when the initialization deviates from symmetric initialization in Thm.4.1 Unless the deviation is large, gradient descent converges to w*. We also check the convergence of a more general network g2(x) = j=1 ajo(wJx). When aj > 0 convergence follows; however, when some a; is negative, the network does not converge to w*"}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Choromanska, Anna, Henaff, Mikael, Mathieu, Michael, Arous, Gerard Ben, and LeCun, Yann. The loss surfaces of multilayer networks. In A1STATS, 2015a.\nFukumizu, Kenji and Amari, Shun-ichi. Local minima and plateaus in hierarchical structures oi multilayer perceptrons. Neural Networks, 13(3):317-327, 2000.\nHe, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Sur- passing human-level performance on imagenet classification. In Proceedings of the IEEE Inter-. national Conference on Computer Vision, pp. 1026-1034, 2015.\nHe, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. Computer Vision anad Pattern Recognition (CVPR), 2016\nHinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep. Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural net works for acoustic modeling in speech recognition: The shared views of four research groups IEEE Signal Processing Magazine, 29(6):82-97, 2012\nJanzamin, Majid, Sedghi, Hanie, and Anandkumar, Anima. Beating the perils of non-convexity Guaranteed training of neural networks using tensor methods. CoRR abs/1506.08473. 2015\nKawaguchi, Kenji. Deep learning without poor local minima. Advances in Neural Informatior Processing Systems, 2016\nLeCun, Yann A, Bottou, Leon, Orr, Genevieve B, and Muller, Klaus-Robert. Efficient backprop. Ir Neural networks: Tricks of the trade. pp. 9-48. Springer. 2012.\nSaxe, Andrew M, McClelland, James L, and Ganguli, Surya. Exact solutions to the nonlinear dy namics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013\nSimonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognitionIntern tions (CLR) 2015\nSoudry, Daniel and Carmon, Yair. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016.\nSutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. Sequence to sequence learning with neural net works. In Advances in neural information processing systems, pp. 
3104-3112, 2014.\nSzegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott, Anguelov, Dragomir Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions In Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015.\nSaad, David and Solla, Sara A. Dynamics of on-line gradient descent learning for multilayer neural networksAduances .302-308.1996\nHere we list all detailed proof for all the theorems\nLemma 7.1 For neural network with ReLU nonlinearity and using l2 loss to match with a teacher network of the same size, the negative gradient inflow gj for node j at layer c has the following form\nwhere Lj and L* are N-by-N diagonal matrices. For any k E [c +1], Lk = jE[c] WjkDjLj an similarly for Lk.\nProof We prove by induction on layer. For the first layer, there is only one node with g = u - v. therefore L; = L,, = I. Suppose the condition holds for all node j E [c]. Then for node k E [c+1] we have:\ngk = LkLf,Uk'- LkLk'Vk'= Lk k k'\n(a x=y+e Saddle point x*X* Ne Teacher's params (e,0) (1,0) x e > 0 0 <0\nFigure 6: (a)-(b) Two cases in Lemma7.2 (c) Convergence analysis in the symmetric two-layered case.\n>`(L*,u- L gj = Lj j'\nWjkDjgj = gk WikD;L L*,u LjD,WjkUx-LjDjWjk'Vk WjkD;Lj i A WjkDjLjLh,D*,w*k,Uk-WjkD;LjLzDj \"Wjk'Vk Uk! Wiki WjkD;L i k Vk k k\nN E[F(e, w)] = (( -0)w + w sin 0e) 2\nProof Note that F can be written in the following form\n290 - sin 290 1 cos 2@o 0 1 1 R(9o E X;X! 1 cos 2o 290 + sin 200 0 N 4 0 0 290Id-2 i:$iE[0,$0] sin 200 1 - cos 2o 0] 1 1 - cos 2o sin 2o 0 2T 4 0 0 0]\nE[F(e,w)]=N(R() R(O)) w - sin 20 1 cos 20 0 cos 0 N 2(-0)w-||w| 1 cos 20 sin 20 0 sin 0 4T 0 0 0 0 N sin 0 T - 0 2T N W + sin 0e\nN EF(e,w)]=N(R(+0)-R(O))w +0)w-wsin0e 27\nF(e, w) = X i:x]e>0,x[w>0\nX;x=E[X;x[i E [0,$o]]P[i E [O,o] N i:$iE[0,$0] r sin o r cos o r sin$r cos$ .xd] p(r)p(0) 1p(xk)rdrdodx3...dxd k=3 X d\nNotice that by abuse of notation, the 0 appears in Eqn.20 is the absolute value and Eqn.20|follows\n1 sin 20 + 2 - 20 -(2 - 0) cos0 - sin 0 M -(2 0) cos 0 - sin 0 2T 2\n1det(M) 2(sin 20 + 2 20) [(2 0) cos 0 + sin 0]4 2(sin 20 + 2 - 20) - (2 - 0)2 cos2 0 + (2 - 0) sin 20 + si (42 - 1) sin? 0 - 40 + 40 cos? 0 - 02 cos? 0 + 0 sin 20 (42 40 - 1) sin? 0 + 0 cos0(2 sin 0 - 0 cos 0)\nVa(1) W\nwhere Vd(1) is the volume of the unit ball. Since the volume of d-dimensional unit ball is\nLemma 7.3 In the region ||w(1) - w*|| < ||w*|, following the dynamics (Eqn.11), the Lyapunov function V(w) = ||w - w*||2 has V < 0 and the system is asymptotically stable and thus w(t) >- w* when t -> +oo.\nIn the following we will show that M is positive definite when 0 E (0, /2]. It suffices to show that Mj1 > 0, M22 > 0 and det(M) > 0. The first two are trivial, while the last one is:.\n2\n1 1 Va(r) - 8Va-1> 2 2\nVd e 8 < 2 Vd-1\nVa(1) = r(d/2+1)\nI(x+1) (x+s s1- x>0,0<s<1\n-1/2 d+1\\ r(d/2+1/2) d 2 T(d/2+1) 2\nLemma 7.5 For *. 0 and defined in Eqn.[17\nwe have the following relations in the triangular region eo = {(x, y) : x 0, y 0, x y + eo (Fig.6(c)):\n(1) $, * E [0, /2[ and 0 E 0, 0o) where 0o = arccos (2) cos $ = 1- a2(x-y)2 and sin =a(x-y) Q2(x - y)2 3 * > (equality holds only when y = 0) and * > 0\n2) cos = 1-a2(x-y)2 and sin = a(x-y)/2-a2(x-y)\ncOs $ a2(2xy + (K -2)y2) a(2x+(K-2)y)>a(x+(K-1)y)>1 cOs * Qy\nProof We discuss the three boundaries as follows:\nCase 1: y = 0, 0 < x < 1, horizontal line. In this case, 0 = 0, = /2 and * = /2. The component of the dynamics in this line is:.\nNDCC 'dt. 
So we have Va(1) T(d/2 + 1/2) (40) Vd-1(1) I(d/2 + 1) From Gautschi's Inequality T(x + 1) x1-s (x+ 8 x > 0, 0 < s < 1 (41) T(x + s) with s = 1/2 and x = d/2 we have: 1 / 2 I(d/2 + 1/2) (42) T(d/2 + 1) Therefore, it suffices to have 27T (43) Note that this upper bound is tight when & -> 0 and d -> +oo, since all inequality involved asymp\nVa(1) r(d/2+1/2) Vd-1(1) r(d/2+1)\n2 d +\nNote that this upper bound is tight when -> 0 and d -> +oo, since all inequality involved asymp totically becomes equal.\nQ (x2+(K-1)y2)-1/2 cos 0 = ax cos = Qy cos a?(2xy+(K -2)y2) -\nProof Propositions (1) and (2) are computed by direct calculations. In particular, note that since cos 0 = ax = 1/1 + (K - 1)(y/x)2 and x > y 0, we have cos 0 e (1/K,1] and 0 E [0, 0o). For Preposition (3), $* = arccos ay > 0 = arccos ax because x > y. Finally, for x > y > 0, we have\nTheorem 7.6 For the dynamics defined in Eqn.16J there exists eo > 0 so that the trianglar region Neo = {(x, y) : x 0, y 0, x y + eo} (Fig.6(c)) is a convergent region. That is, the flow goes inwards for all three edges and any trajectory starting in Seo stays.\n2 A f1 = 1)>0 N 2\n2T f2 7x -(-)(K-1)y-0+(K-1)asin$*-sin)+ asi N (K1) [(-$)y (asin*- sin$)]+ (a sin0-0)\n2TT - 0 - e + (K - 1)(a sin * - sin ) + a sin 0e (56 N K - 1\nLemma 7.7 (Reparametrization) Denote e = x - y > 0. The terms ax, ay and ae involved in the trigometric functions in Egn.16 has the following parameterization..\n[y] -2 1 3 +(K -1)2 a x K K 2\nProof This transformation can be checked by simple algebraic manipulation. For example\nK ar K\n(-$)y+a(x-y)2-a2(x-y)2-a1-a2y2 -a2-a2(x-y)2 V2-a2(x-y)2-1-a2y2\nTT 2-a2(x-y)2 T- 2\n1 1 1 Oy /(x/y)2+(K-1) /(1+e/y)2+(K-1 /K\n0 = cos0 +vK - 1sin 0\nTo prove Eqn.59 first we notice that K cos0 = Kax = + (K - 1)2. Therefore, we have (K cos 0 )2 - (K - 1)2? = 0, which gives 2 - 2 cos 0 + 1 - K sin2 0 = 0. Solving this quadratic equation and notice that 1, 0 e [0, /2] and we get:\n= cos0 + cos2 0 + K sin? - 1 = cos 0 + K 1sin 0\nDenote f3(, e) = f31 + f32 where\nf31(, e') *-0- D + e' a sin 6. f32(, e') (K - 1)(a sin * - sin )e\n0 f31 = e'($* $) + (1- e')($* - 0)- e'0 + 2 sin0 -e'0 + 2 sin0 2 Sin 0\n1 f33(0) = sin0-0 = sin 20 + VK - 1sin2 0 - 0 2\ne-1+Ky B2-a+ -2) 0 Q Q Q\n1 ((K - 1)(a sin $* - sin $) + a sin0) = -\n2T --)e-1+Ky)-* $y + ((K - 1)(a sin $* - sin $) + a sin 0 N --e-1+Ky)- (6\nf3 = hi()-(+(K-1) sin)\nWhen is fixed, f3 now is a monotonously decreasing function with respect to e > 0. Therefore. f3(, e) f3(, e') for 0 < e e' = 2/. If we could prove f3(, e) 0 and only attain zero at known critical point (, e) = (1, 1), the proof is complete\nTheorem 7.10 Any trajectory in Ne, converges to (y, e) = (1, 0), following the dynamics defined in Eqn.16\nProof We have Lyaponov function V = E [E] so that V = -E [ww] -E [w] E [w] 0. By Thm.7.9 other than the optimal solution w*, there is no other symmetric critical point. w 0 and thus V < 0. On the other hand, by Thm.7.6 the triangular region Neo is convergent, in. which the 2D dynamics is C differentiable. Therefore, any 2D solution curve &(t) will stay within. By PoincareBendixson theorem, when there is a unique critical point, the curve either converges to a limit circle or the critical point. However, limit cycle is not possible since V is strictly monotonous decreasing along the curve. 
Therefore, (t) will converge to the unique critical point, which is (y, e) = (1, 0) and so does the symmetric system (Eqn.12).\nProof The 1D system can be computed with simple algebraic manipulations (note that when x = y,. = 0 and 0 = * = arccos(1/K)). Note that the 1D system is linear and its close form solution is x(t) = xo + Ce-K/2Nt and thus convergent.\n211 \\x= -K(x-x* N"}] |
Skvgqgqxe | [{"section_index": "0", "section_name": "LEARNING TO COMPOSE WORDS INTO SENTENCES WITH REINFORCEMENT LEARNING", "section_text": "Dani Yogatama', Phil Blunsom1,2, Chris Dyer', Edward Grefenstette', and Wang Ling 1DeepMind and 2University of Oxford\n{dyogatama, pblunsom, cdyer, etg, lingwang}@google. com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Our work can be understood as a compromise between the first two approaches. Rather than using. explicit supervision of tree structure, we use reinforcement learning to learn tree structures (and thus, sentence-specific compositional architectures), taking performance on a downstream task that. uses the computed sentence representation as the reward signal. In contrast to sequential RNNs.. which ignore tree structure, our model still generates a latent tree for each sentence and uses it tc."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We use reinforcement learning to learn tree-structured neural networks for com outing representations of natural language sentences. In contrast with prior work. on tree-structured models, in which the trees are either provided as input or pre-. dicted using supervision from explicit treebank annotations, the tree structures. n this work are optimized to improve performance on a downstream task. Ex periments demonstrate the benefit of learning task-specific composition orders.. outperforming both sequential encoders and recursive encoders based on treebank. annotations. We analyze the induced trees and show that while they discover. some linguistically intuitive structures (e.g., noun phrases, simple verb phrases) hey are different than conventional English syntactic structures..\nanguages encode meaning in terms of hierarchical, nested structures on sequences o. vords (Chomsky 1957). However, the degree to which neural network architectures that com. ute representations of the meaning of sentences for practical applications should explicitly reflec uch structures is a matter for debate. In this work, we use reinforcement learning to learn to con. truct trees for computing sentence representations, guided by feedback from downstream tasks tha. epend on these representations. The space of structures that are considered by the learner includes. oth fully sequential structures (corresponding to traditional recurrent neural network \"encoders'') s well as all projective binary trees. Thus, although we take seriously the notion that good compo itional architectures might be tree-structured, we specify neither the form of the tree nor whether a. ree is necessary at all, and instead leave those decisions up to the learner (and the data)..\nTo place this work in context, there are three predominant approaches for constructing vector rep resentations of sentences from a sequence of words. The first composes words sequentially using a recurrent neural network, treating the RNN's final hidden state as the representation of the sen- tence (Cho et al.[[2014]Sutskever et al.J[2014]|Kiros et al.[2015). In such models, there is no explicit hierarchical organization imposed on the words, and the RNN's dynamics must learn to simulate it. The second approach uses tree-structured networks to recursively compose representations of words and phrases to form representations of larger phrases and, finally, the complete sentence. 
In con- trast to sequential models, these models' architectures are organized according to each sentence's syntactic structure, that is, the hierarchical organization of words into nested phrases that charac- terizes human intuitions about how words combine to form grammatical sentences. Prior work on tree-structured models has assumed that trees are either provided together with the input sentences [Clark et al.]2008]Grefenstette & Sadrzadeh]2011} Socher et al.]2012}2013] Tai et al.]2015) or that they are predicted based on explicit treebank annotations jointly with the downstream task (Bowman et al.]2016] Dyer et al.]2016). The last approach for constructing sentence representa- tions uses convolutional neural networks to produce the representation in a bottom up manner, either with syntactic information (Ma et al.]2015) or without (Kim]2014] Kalchbrenner et al.]2014).\nstructure the composition. Our hypothesis is that encouraging the model to learn tree-structured compositions will bias the model toward better generalizations about how words compose to form sentence meanings, leading to better performance on downstream tasks.\nThis work is related to unsupervised grammar induction (Klein & Manning2004]Blunsom & Cohr. 2010, Spitkovsky et al.]2011, inter alia), which seeks to infer a generative grammar of an infinit. language from a finite sample of strings from the language-but without any semantic feedbacl. Previous work on unsupervised grammar induction that incorporates semantic supervision involve designing complex models for Combinatory Categorial Grammars (Zettlemoyer & Collins||2005) c marginalizing over latent syntactic structures (Naradowsky et al.2012). Since semantic feedbac. has been proposed as crucial for the acquisition of syntax (Pinker|[1984), our model offers a simple. alternative||However, our primary focus is on improving performance on the downstream model, s. the learner may settle on a different solution than conventional English syntax. We thus also explor. what kind of syntactic structures are derivable from shallow semantics..\nExperiments on various tasks (i.e., sentiment analysis, semantic relatedness, natural language infer. ence, and sentence generation) show that reinforcement learning is a promising direction to discove. hierarchical structures of sentences. Notably, representations learned this way outperformed botl conventional left-to-right models and tree-structured models based on linguistic syntax in down. stream applications. This is in line with prior work showing the value of learning tree structures ir. statistical machine translation models (Chiang2007). Although the induced tree structures mani. fested a number of linguistically intuitive structures (e.g., noun phrases, simple verb phrases), there. are a number of marked differences to conventional analyses of English sentences (e.g., an overall. left-branching structure)."}, {"section_index": "3", "section_name": "2 MODEL", "section_text": "Our model consists of two components: a sentence representation model and a reinforcement learn. ing algorithm to learn the tree structure that is used by the sentence representation model."}, {"section_index": "4", "section_name": "2.1 TREE LSTM", "section_text": "Our sentence representation model follows the Stack-augmented Parser-Interpreter Neural Networ (SPINN; Bowman et al., 2016), SPINN is a shift-reduce parser that uses Long Short-Term Memor (LSTM; Hochreiter and Schmidhuber, 1997) as its composition function. 
Given an input sentenc of N words x = {x1, 2,...,xN}, we represent each word by its embedding vector x, E RD The parser maintains an index pointer p starting from the leftmost word (p = 1) and a stack. T parse the sentence, it performs a sequence of operations a = {a1, a2,..., a2v-1}, where at {sHIFT, REDUCE}. A sHIFT operation pushes xp to the stack and moves the pointer to the nex word (p++); while a REDucE operation pops two elements from the stack, composes them to single element, and pushes it back to the stack. SPINN uses Tree LSTM (Tai et al.I|2015)Zhu et al 2015) as the REDUCE composition function, which we follow. In Tree LSTM, each element of th stack is represented by two vectors, a hidden state representation h and a memory representation c Two elements of the stack (h;, c;) and (h;, c) are composed as:\ni=o(W1[hi,hj]+b1) o = o(Wo[h;,h;]+ b1 fL =o(WFL[hi,hj]+bFL] fR =o(WFR[hi,hj]+ bFR g = tanh(Wg[hi,h;] + bG) c=fOc;+fROc;+iOg h =oO c\nwhere [h,, h,] denotes concatenation of h, and h;, and o is the sigmoid activation function\nA unique sequence of {sHIFT, REDUcE} operations corresponds to a unique binary parse tree of the sentence. A sHIFT operation introduces a new leaf node in the parse tree, while a REDUCE operation combines two nodes by merging them into a constituent. See Figure1for an example. We note that for a sentence of length N, there are exactly N sHIFT operations and N 1 REDUCE operations that are needed to produce a binary parse tree of the sentence. The final sentence representation produced\n' Our model only produces an interpretation grammar that parses language instead of a generative gramma\nS, S, R, S, S, R, R S, S, S, R, R, S, R S, S, R, S, R, S, R S, S, S, S, R, R, R\nby the Tree LSTM is the hidden state of the final element of the stack hy-1 (i.e., the topmost node of the tree).\nTracking LSTM. SPINN optionally augments Tree LSTM with another LSTM that incorporate contextual information in sequential order called tracking LSTM, which has been shown to improv performance for textual entailment. It is a standard recurrent LSTM network that takes as input th hidden states of the top two elements of the stack and the embedding vector of the word indexed b the pointer at timestep t. Every time a REDUcE operation is performed, the output of the trackin LSTM e is included as an additional input in Eq.1(i.e., the input to the REDucE compositior function is [h;, h;, e] instead of [hi, h;]).\nIn previous work (Tai et al.2015 Bowman et al.2016), the tree structures that guided compositior orders of Tree LSTM models are given directly as input (i.e., a is observed and provided as an input) Formally, each training data is a triplet {x, a, y}.Tai et al.(2015) consider models where a is alsc. given at test time, whereas|Bowman et al.[(2016) explore models where a can be either observed or. not at test time. When it is only observed during training, a policy is trained to predict a at test time. Note that in this case the policy is trained to match explicit human annotations (i.e., Penn TreeBank annotations), so the model learns to optimize representations according to structures that follows. human intuitions. They found that models that observe a at both training and test time are better than models that only observe a during training..\nOur main idea is to use reinforcement learning (policy gradient methods) to discover the best tree. structures for the task that we are interested in. 
We do not place any kind of restrictions wher learning these structures other than that they have to be valid binary parse trees, so it may resuli. in tree structures that match human linguistic intuition, heavily right or left branching, or othe. solutions if they improve performance on the downstream task..\nWe parameterize each action a E {sHIfT, REDucE} by a policy network (a s; W R), where s i. a representation of the current state and W r is the parameter of the network. Specifically, we use two-layer feedforward network that takes the hidden states of the top two elements of the stack h,. and h, and the embedding vector of the word indexed by the pointer x, as its input:.\nIf a is given as part of the training data, the policy network can be trained--in a supervised training regime-to predict actions that result in trees that match human intuitions. Our training data, o the other hand, is a tuple {x, y}. We use REINFORcE (Williams1992), which is an instance of broader class of algorithms called policy gradient methods, to learn W r such that the sequence o actions a = { a1, ..., aT} maximizes:\nT R(W) = E(a,s;WR) rtat t=1\nFigure 1: Four examples of trees and their corresponding sHIFT (S) and REDUcE (R) sequences. In each of the examples, there are 4 input words (4 leaf nodes), so 7 operations (4 S, 3 R) are needed to construct a valid tree. The nodes are labeled with the timesteps in which they are introduced to the trees t E {1,..., 7}. A sHIFT operation introduces a leaf node, whereas a REDUCE operation introduces a non-leaf node by combining two previously introduced nodes. We can see that different S-R sequences lead to different tree structures.\nwhere rt is the reward at timestep t. We use performance on a downstream task as the reward func- tion. For example, if we are interested in using the learned sentence representations in a classification task, our reward function is the probability of predicting the correct label using a sentence represen- tation composed in the order given by the sequence of actions sampled from the policy network, so R(W) = log p(y | T-LSTM(x); W), where we use W to denote all model parameters (Tree LSTM, policy network, and classifier parameters), y is the correct label for input sentence x, and x is rep- resented by the Tree LSTM structure in 2.1| For a natural language generation task where the goal is to predict the next sentence given the current sentence, we can use the probability of predicting words in the next sentence as the reward function, so R(W) = log p(xs+1 | T-LSTM(xs); W).\nNote that in our setup, we do not immediately receive a reward after performing an action at timestej. t. The reward is only observed at the end after we finish creating a representation for the curren. sentence with Tree LSTM and use the resulting representation for the downstream task. At eacl. timestep t, we sample a valid action according to (a s; WR). We add two simple constraints t. make the sequence of actions result in a valid tree: REDUcE is forbidden if there are fewer than twc. elements on the stack, and sHiFT is forbidden if there are no more words to read from the sentence After reaching timestep 2N - 1, we construct the final representation and receive a reward that is. used to update our model parameters.\nWe experiment with two learning methods: unsupervised structures and semi-supervised structures Suppose that we are interested in a classification task. In the unsupervised case, the objective func. tion that we maximize is logp(y T-LSTM(x); W). 
In the semi-supervised case, the objectiv. function for the first E epochs also includes a reward term for predicting the correct sHIFT or RE. DUcE actions obtained from an external parser-in addition to performance on the downstream task so we maximize log p(y | T-LSTM(x); W) + log (a | s; W R). The motivation behind this mode. is to first guide the model to discover tree structures that match human intuitions, before letting i. explore other structures close to these ones. After epoch E, we remove the second term from our ob jective function and continue maximizing the first term. Note that unsupervised and semi-supervise. here refer to the tree structures, not the nature of the downstream task.."}, {"section_index": "5", "section_name": "3.1 BASELINES", "section_text": "The goal of our experiments is to evaluate our hypothesis that we can discover useful task-specific tree structures (composition orders) with reinforcement learning. We compare the following com position methods (the last two are unique to our work):.\n2We choose to include right to left as a baseline since a right-branching tree structure---which is the output of a right to left composition order--has been shown to be a reliable baseline for unsupervised grammar induction (Klein & Manning2004)\n:Right to left: words are composed from right to left|2. Left to right: words are composed from left to right. This is the standard recurrent neura. network composition order. Bidirectional: A bidirectional right to left and left to right models, where the final sentenc. embedding is an average of sentence embeddings produced by each of these models.. Balanced binary tree: words are composed according to a balanced binary parse tree o. the sentence. Supervised syntax: words are composed according to a predefined parse tree of the ser tence. When parse tree information is not included in the dataset, we use Stanford parse. (Klein & Manning2003) to parse the corpus. Semi-supervised syntax: a variant of our reinforcement learning method, where for th. first E epochs we include rewards for predicting predefined parse trees given in the supei. vised model, before letting the model explore other kind of tree structures at later epoch. (i.e., semi-supervised structures in 2.2). Latent syntax: another variant of our reinforcement learning method where there is n predefined structures given to the model at all (i.e., unsupervised structures in 2.2)..\nFor learning, we use stochastic gradient descent with minibatches of size 1 and l2 regularization con. stant tune on development data from {10-4, 10-5, 10-6, 0}. We use performance on development data to choose the best model and decide when to stop training.\nTable 1: Descriptive statistics of datasets used in our experiments\nDataset # of train # of dev # of test Vocab size SICK 4,500 500 4,927 2,172 SNLI 550,152 10,000 10,000 18,461 SST 98,794 872 1,821 8,201 IMDB 441,617 223,235 223,236 29,209\nStanford Sentiment Treebank. We evaluate our model on a sentiment classification task from the Stanford Sentiment Treebank (Socher et al.] 2013). We use the binary classification task where the. goal is to predict whether a sentence is a positive or a negative movie review..\nWe set the word embedding size to 100 and initialize them with Glove vectors (Pennington et al. 20143 For each sentence, we create a 100-dimensional sentence representation s E R100 with. Tree LSTM, project it to a 200-dimensional vector and apply ReLU: q = ReLU(Wps + bp), anc. 
compute p(y = cq; wq) x exp(wq,cq+ bq\nTable 2: Classification accuracy on Stanford Sentiment Treebank dataset. The number of parameter. includes word embedding parameters and is our approximation when not reported in previous work\nModel Acc. # params. 100D-Right to left 83.9 1.2m 100D-Left to right 84.7 1.2m 100D-Bidirectional 84.7 1.5m 100D-Balanced binary tree 85.1 1.2m 100D-Supervised syntax 85.3 1.2m 100D-Semi-supervised syntax 86.1 1.2m 100D-Latent syntax 86.5 1.2m RNTN (Socher et al.. 2013) 85.4 DCNN (Kalchbrenner et al.) 2014 86.8 CNN-random(Kim 2014 82.7 CNN-word2vec (Kim 2014 87.2 CNN-multichannel (Kim.) 2014 88.1 NSE (Munkhdalai & Yu) 2016a 89.7 5.4m NTI-SLSTM (Munkhdalai & Yu) 2016b 87.8 4.4m NTI-SLSTM-LSTM (Munkhdala1 & Yu 2016b 89.3 4.8m Left to Right LSTM Tai et al 2015 84.9 2.8m Bidirectional LSTM Tai et al 87.5 2.8m Constituency Tree-LSTM-random Tai et al 82.0 2.8m Constituency Tree-LSTM-GloVe Tai et al 115 88.0 2.8m Dependency Tree-LSTM 85.7 2.8m Tai et al. 2015\nhttp://nlp.stanford.edu/projects/glove,\nWe evaluate our method on four sentence representation tasks: sentiment classification, semantic relatedness, natural language inference (entailment), and sentence generation. We show statistics of the datasets in Table[1land describe each task in detail in this subsection.\nWe run each model 3 times (corresponding to 3 different initialization points) and use the devel. opment data to pick the best model. We show the results in Table2 Our results agree with prior work that have shown the benefits of using syntactic parse tree information on this dataset (i.e., su- pervised recursive model is generally better than sequential models). The best model is the latent syntax model, which is also competitive with results from other work on this dataset. Both the latent and semi-supervised syntax models outperform models with predefined structures, demonstrating. the benefit of learning task-specific composition orders..\nSemantic relatedness. The second task is to predict the degree of relatedness of two sentences. from the Sentences Involving Compositional Knowledge corpus (SICK; Marelli et al., 2014) . In. this dataset, each pair of sentences are given a relatedness score on a 5-point rating scale. For each. sentence, we use Tree LSTM to create its representations. We denote the final representations by {S1, s2} E R100.We construct our prediction by computing: u = (S2.-s1)?, v =.S1 O s2,. R200, bg E R1 are model parameters, and [u, v] denotes concatenation of vectors inside the brackets. We learn the model to minimize mean squared error..\nWe run each model 5 times and use the development data to pick the best model. Our results are shown in Table 3| Similarly to the previous task, they clearly demonstrate that learning the tree structures yields better performance..\nWe also provide results from other work on this dataset for comparisons. Some of these models (La & Hockenmaier2014]Jimenez et al.]2014]Bjerva et al.]2014) rely on feature engineering and are designed specifically for this task. Our Tree LSTM implementation performs competitively wit most models in terms of mean squared error. Our best model-semi-supervised syntax-is bette than most models except LSTM models of Tai et al.(2015) which were trained with a differen objective function4Nonetheless, we observe the same trends with their results that show the benefi of using syntactic information on this dataset\nTable 3: Mean squared error on SICK dataset\nStanford Natural Language Inference. 
We next evaluate our model for natural language infer. ence (i.e., recognizing textual entailment) using the Stanford Natural Language Inference corpus. (SNLI; Bowman et al., 2015) . Natural language inference aims to predict whether two sentence. are entailment, contradiction, or neutral, which can be formulated as a three-way classification prob lem. Given a pair of sentences, similar to the previous task, we use Tree LSTM to create sentenc representations {S1, s2} E R100 for each of the sentences. FollowingBowman et al. (2016), we con- struct our prediction by computing: u = (S2-s1)2, v = S1 Os2, q = ReLU(Wp[u, v, S1, S2]+bp) and p(y =c|q;wq) x exp(wq,cq+bq), where Wp E IR200x400,bp E R200,wq E R200,bq E R are model parameters. The objective function that we maximize is the log likelihood of the correc label under the models.\nWe show the results in Table 4 The latent syntax method performs the best. Interestingly, the. sequential left to right model is better than the supervised recursive model in our experiments, which. contradicts results from Bowman et al.(2016) that show 300D-LSTM is worse than 300D-SPINN. A possible explanation is that our left to right model has identical number of parameters with the supervised model due to the inclusion of the tracking LSTM even in the left to right model (the. only difference is in the composition order), whereas the models in Bowman et al.[(2016) have.\n4Our experiments with the regularized KL-divergence objective function (Tai et al.2015) do not result ir significant improvements, so we choose to report results with the simpler mean squared error objective function\nIOanr sqaae Model MSE # params. 100D-Right to left 0.461 1.0m 100D-Left to right 0.394 1.0m 100D-Bidirectional 0.373 1.3m 100D-Balanced binary tree 0.455 1.0m 100D-Supervised syntax 0.381 1.0m 100D-Semi-supervised syntax 0.320 1.0m 100D-Latent syntax 0.359 1.0m Illinois-LH (Lai & Hockenmaier 2014 0.369 UNAL-NLP(Jimenez et al 2014 0.356 Meaning Factory (Bjerva et al. 2014 0.322 DT-RNN (Socher et al.) 2014 0.382 Mean Vectors Tai et al 2015 0.456 650k Left to Right LSTM. Tai et al. 2015 0.283 1.0m Bidirectional LSTM. Tai et al 2015 0.274 1.0m Constituency Tree-LSTM. Ia1 et al. 2015 0.273 1.0m Dependency Tree-LSTM Tai et al. 2015 0.253 1.0m\ndifferent number of parameters. Due to the poor performance of the supervised model relative to the unsupervised model, semi-supervised training can only mitigate the loss in accuracy, rathel than improve over unsupervised learning. Our models underperform state-of-the-art models on this dataset that have almost four times the number of parameters. We only experiment with smaller models since tree-based models with dynamic structures (e.g., our semi-supervised and latent syntax models) take longer to train. See d4|for details and discussions about training time.\nTable 4: Classification accuracy on SNLI dataset\nSentence generation. The last task that we consider is natural language generation. Given a sen- tence, the goal is to maximize the probability of generating words in the following sentence. This is a similar setup to the Skip Thought objective (Kiros et al.J|2015), except that we do not generate the previous sentence as well. Given a sentence, we encode it with Tree LSTM to obtain s E R100. We use a bag-of-words model as our decoder, so p(w; | s; V) exp(vT s), where V E R10029,209 and v; E R100 is the i-th column of V. 
Using a bag-of-words decoder as opposed to a recurrent neural network decoder increases the importance of producing a better representation of the current sentence, since the model cannot rely on a sophisticated decoder with a language model component to predict better. This also greatly speeds up our training time.\nWe use IMDB movie review corpus (Diao et al.]2014) for this experiment, The corpus consists of 280,593, 33,793, and 34,029 reviews in training, development, and test sets respectively. We construct our data using the development and test sets of this corpus. For training, we process 33,793 reviews from the original development set to get 441,617 pairs of sentences. For testing. we use 34,029 reviews in the test set (446,471 pairs of sentences). Half of these pairs is used as our development set to tune hyperparamaters, and the remaining half is used as our final test set Our results in Table|5|further demonstrate that methods that learn tree structures perform better than methods that have fixed structures.\nTable 5: Word perplexity on the sentence generation task. We also show perplexity of the mode that does not condition on the previous sentence (unconditional) when generating bags of words for comparison.\nIaatasol Model Acc. # params. 100D-Right to left 79.1 2.3m 100D-Left to right 80.2 2.3m 100D-Bidirectional 80.2 2.6m 100D-Balanced binary tree 77.4 2.3m 100D-Supervised syntax 78.5 2.3m 100D-Semi-supervised syntax 80.2 2.3m 100D-Latent syntax 80.5 2.3m 100D-LSTM (Bowman et a1. 2015 77.6 5.7m 300D-LSTM Bowman et al. 2016 80.6 8.5m 300D-SPINN (Bowman et al. 2016 83.2 9.2m 1024D-GRU TVendrov et al. 2016 81.4 15.0m 300D-CNN (Mou et al. 2016 82.1 9m 300D-NTI (Munkhdala1 & Yu 2016b 83.4 9.5m 300D-NSE (Munkhdalai & Yu 2016a 84.6 8.5m\nModel Perplexity # params. 100D-Unconditional 105.6 30k 100D-Right to left 101.4 6m 100D-Left to right 101.1 6m 100D-Bidirectional 100.2 6.2m 100D-Balanced binary tree 103.3 6.2m 100D-Supervised syntax 100.8 6m 100D-Semi-supervised syntax 98.4 6m 100D-Latent syntax 99.0 6m\nFigure 2: Examples of tree structures learned by our model which show that the model discover. simple concepts such as noun phrases and verb phrases\nme fami stan outs hom men playi frisb mbe ding ide tWO are ng in the ee park V rs e\nFigure 3: Examples of unconventional tree structures"}, {"section_index": "6", "section_name": "4 DISCUSSION", "section_text": "LearnedStructures. Our results in 3show that our proposed method outperforms competing methods with predefined composition order on all tasks. The right to left model tends to perform worse than the left to right model. This suggests that the left to right composition order, similar to how human reads in practice, is better for neural network models. Our latent syntax method is able to discover tree structures that work reasonably well on all tasks, regardless of whether the task is better suited for a left to right or supervised syntax composition order.\nWe inspect what kind of structures the latent syntax model learned and how closely they match human intuitions. We first compute unlabeled bracketing F1 scores|for the learned structures and parses given by Stanford parser on SNLI and Stanford Sentiment Treebank. In the SNLI dataset, there are 10,000 pairs of test sentences (20,000 sentences in total), while the Stanford Sentiment Treebank test set contains 1,821 test sentences. The F1 scores for the two datasets are 41.73 and 40.51 respectively. 
For comparisons, F1 scores of a right (left) branching tree are 19.94 (41.37) for SNLI and 12.96 (38.56) for SST.\nWe also manually inspect the learned structures. We observe that in SNLI, the trees exhibit overall. left-branching structure, which explains why the F1 scores are closer to a left branching tree struc-. ture. Note that in our experiments on this corpus, the supervised syntax model does not perform. as well as the left-to-right model, which suggests why the latent syntax model tends to converge. towards the left-to-right model. We handpicked two examples of trees learned by our model and show them in Figure[2] We can see that in some cases the model is able to discover concepts such as. noun phrases (e.g., a boy, his sleds) and simple verb phrases (e.g., wearing sunglasses, is frowning). Of course, the model sometimes settles on structures that make little sense to humans. We show two. such examples in Figure[3] where the model chooses to compose playing frisbee in and outside a as. phrases.\nsun wo wea frow drag sled thro glas S boy his ring ning S ugh the man ses S W\nTraining Time. A major limitation of our proposed model is that it takes much longer to train compared to models with predefined structures. We observe that our models only outperforms mod- els with fixed structures after several training epochs; and on some datasets such as SNLI or IMDB. an epoch could take a 5-7 hours (we use batch size 1 since the computation graph needs to be recon- structed for every example at every iteration depending on the samples from the policy network). This is also the main reason that we could only use smaller 100-dimensional Tree LSTM models in\nall our experiments. While for smaller datasets such as SiCK the overall training time is approxi. mately 6 hours, for SNLI or IMDB it takes 3-4 days for the model to reach convergence. In general the latent syntax model and semi-supervised syntax models take about two or three times longer to. converge compared to models with predefined structures.."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large anno tated corpus for learning natural language inference. In Proc. of EMNLP, 2015..\nDavid Chiang. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228 2007.\nNoam Chomsky. Syntactic Structures. Mouton. 1957\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8) 1735-1780, 1997.\nSergio Jimenez, George Duenas, Julia Baquero, Alexander Gelbukh, Av Juan Dios Batiz, and Av Mendizabal. UNAL-NLP: Combining soft cardinality features for semantic textual similarity. relatedness and entailment. In Proc. of SemEval, 2014..\nNal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network fo. modelling sentences. In Prof. of ACL, 2014..\nWe presented a reinforcement learning method to learn hierarchical structures of natural language. sentences. We demonstrated the benefit of learning task-specific composition order on four tasks. sentiment analysis, semantic relatedness, natural language inference, and sentence generation. We. qualitatively and quantitatively analyzed the induced trees and showed that they both incorporate some linguistically intuitive structures (e.g., noun phrases, simple verb phrases) and are different than conventional English syntactic structures..\nJohannes Bjerva, Johan Bos, Rob van der Goot, and Malvina Nissim. 
The meaning factory: Formal semantics for recognizing textual entailment and determining semantic similarity. In Proc. oj SemEval, 2014.\nKyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Hol ger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder. for statistical machine translation. arXiv preprint, 2014.\nYoon Kim. Convolutional neural networks for sentence classification. In Proc. EMNLP, 2014\nDan Klein and Christopher D. Manning. Accurate unlexicalized parsing. In Proc. of ACL, 2003\nAlice Lai and Julia Hockenmaier. Illinois-lh: A denotational and distributional approach to seman tics. In Proc. of SemEval, 2014\nMarco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proc. of SemEval, 2014.\nsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. arXiv preprint, 2016a\nJason Naradowsky, Sebastian Riedel, and David A. Smith. Improving nlp through marginalizatior of hidden syntactic structure. In Proc. of EMNLP, 2012..\nJeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In Proc. of EMNLP, 2014.\nSteven Pinker. Lan Learnability and L Development. Harvard, 1984\nRichard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. Semantic composition ality through recursive matrix-vector spaces. In Proc. of EMNLP, 2012..\nKai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. In Proc. of ACL, 2015.\nIvan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. In Proc. of ICLR, 2016.\nRonald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcemen earningMachinee 8:229-256.1992\nLuke S. Zettlemoyer and Michael Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proc. of UAI, 2005.\nXiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. Long short-term memory over recursive struc tures. In Proc. of ICML, 2015.\nDan Klein and Christopher D. Manning. Corpus-based induction of syntactic structure: Models of dependency and constituenc In Proc. of ACL. 2004"}] |
BymIbLKgl | [{"section_index": "0", "section_name": "INTRODUCTION", "section_text": "The discussion on invariance is a strong component of the solutions to many classical problems in numerical differential geometry. A typical example is that of planar shape analysis where one desires to have a local function of the contour which is invariant to rotations, translations and reflections like the Euclidean curvature. This representation can be used to obtain correspondence between the shapes and also to compare and classify them. However, the numerical construction of such functions from discrete sampled data is non-trivial and requires robust numerical techniques for their stable and efficient computation.\nConvolutional neural networks have been very successful in recent years in solving problems ir. image processing, recognition and classification. Efficient architectures have been studied and de-. veloped to extract semantic features from images invariant to a certain class or category of transfor-. mations. Coupled with efficient optimization routines and more importantly, a large amount of data. a convolutional neural network can be trained to construct invariant representations and semanti-. cally significant features of images as well as other types of data such as speech and language. It. is widely acknowledged that such networks have superior representational power compared to more. principled methods with more handcrafted features such as wavelets, Fourier methods, kernels etc which are not optimal for more semantic data processing tasks..\nIn Section2|we begin by giving a brief summary of the theory and history of invariant curve repre. sentations. In Section|3|we explain our main contribution of casting the problem into the form which"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "In this paper we connect two seemingly different fields: convolutional neural network based metric. learning methods and numerical differential geometry. The results we present are the outcome of investigating the question: \"Can metric learning methods be used to construct invariant geometric. quantities?\" By training with a Siamese configuration involving only positive and negative examples. of Euclidean transformations, we show that the network is able to train for an invariant geometric function of the curve which can be contrasted with a theoretical quantity: Euclidean curvature. An example of each can be seen Figure[1] We compare the learned invariant functions with axiomatic counterparts and provide a discussion on their relationship. Analogous to principled constructions like curvature-scale space methods and integral invariants, we develop a multi-scale representation. using a data-dependent learning based approach. We show that network models are able to con-. struct geometric invariants that are numerically more stable and robust than these more principled. approaches. We contrast the computational work-flow of a typical numerical geometry pipeline with. that of the convolutional neural network model and develop a relationship among them highlighting. important geometric ideas.\nFigure 1: Comparing the axiomatic and learned invariants of a curve\nAn invariant representation of a curve is the set of signature functions assigned to every point of. the curve which does not change despite the action of a certain type of transformation. A powerful. theorem from E. Cartan (Cartan((1983)) and Sophus Lie (Ackerman((1976)) characterizes the space of these invariant signatures. 
It begins with the concept of arc-length, which is a generalized notion of length along a curve. Given a type of transformation, one can construct an intrinsic arc-length that is independent of the parameterization of the curve, and compute the curvature with respect to this arc-length. The fundamental invariants of the curve, known as differential invariants (Bruckstein & Netravali (1995); Calabi et al. (1998)), are the set of functions comprising the curvature and its successive derivatives with respect to the invariant arc-length. These differential invariants are unique in the sense that two curves are related by the group transformation if and only if their differential invariant signatures are identical. Moreover, every invariant of the curve is a function of these fundamental differential invariants. For a curve C(p) = (x(p), y(p)), the Euclidean arc-length and curvature are given by

s(p) = ∫ |C_p| dp = ∫ √(x_p^2 + y_p^2) dp,   (1)

κ(p) = det(C_p, C_pp) / |C_p|^3 = (x_p y_pp - y_p x_pp) / (x_p^2 + y_p^2)^{3/2}.   (2)

Thus, we have the Euclidean differential invariant signatures given by the set {κ, κ_s, κ_ss, ...} for every point on the curve. Cartan's theorem provides an axiomatic construction of invariant signatures, and the uniqueness property of the theorem guarantees their theoretical validity. Their importance is highlighted by the fact that any invariant is a function of the fundamental differential invariants.

The difficulty with differential invariants is their stable numerical computation. Equations (1) and (2) involve non-linear functions of derivatives of the curve, and this poses serious numerical issues for their practical implementation where noise and poor sampling are involved. Apart from methods like Pajdla & Van Gool (1995) and Weiss (1993), numerical considerations motivated the development of multi-scale representations. These methods used alternative constructions of invariant signatures which were robust to noise. More importantly, they allowed a hierarchical representation, in which the strongest and most global components of variation in the contour of the curve are encoded in signatures of higher scale, and, as we go lower, the more localized and rapid changes get injected into the representation. Two principal methods in this category are scale-space methods and integral invariants. In scale-space methods (Mokhtarian & Mackworth (1992); Sapiro & Tannenbaum (1995); Bruckstein et al. (1996)), the curve is subjected to an invariant evolution process where it can be evolved to different levels of abstraction; see Figure 5. The curvature function at each evolved time t is then recorded as an invariant. For example, {κ(s, t), κ_s(s, t), κ_ss(s, t), ...} would be the Euclidean-invariant representations at scale t.
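The numerical instability described above is easy to reproduce. The following is a minimal NumPy sketch of the differential invariant of Eqs. (1) and (2) computed directly from sampled points; the finite-difference scheme and the stabilizing constant are our choices for illustration, not part of the original formulation:

```python
import numpy as np

def euclidean_curvature(curve):
    """Numerical Euclidean curvature (Eq. 2) of a sampled planar curve.

    curve : (N, 2) array of x, y samples along the contour.
    Derivatives with respect to the curve parameter are estimated with
    central finite differences (np.gradient).
    """
    x, y = curve[:, 0], curve[:, 1]
    xp, yp = np.gradient(x), np.gradient(y)      # x_p, y_p
    xpp, ypp = np.gradient(xp), np.gradient(yp)  # x_pp, y_pp
    denom = (xp**2 + yp**2) ** 1.5               # |C_p|^3
    return (xp * ypp - yp * xpp) / (denom + 1e-12)  # avoid division by zero
```

Because the second derivatives amplify sampling noise, small perturbations of the points produce large swings in this signature, which is exactly the motivation for the smoothed, integral and learned constructions discussed here.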
Figure 2: The Siamese configuration: two identical networks with shared weights process Curve1 C1 and Curve2 C2, producing outputs S_Θ(C1) and S_Θ(C2) that feed the cost L(Θ); each training pair carries a label λ ∈ {0, 1}.

Integral invariants (Manay et al. (2004); Fidler et al. (2008); Pottmann et al. (2009); Hong & Soatto (2015)) are invariant signatures which compute integral measures along the curve. For example, for each point on the contour, the integral area invariant computes the area of the region obtained from the intersection of a ball of radius r placed at that point and the interior of the contour. The integral nature of the computation gives the signature robustness to noise, and by adjusting the radius r of the ball one can associate a scale-space of responses with this invariant. Fidler et al. (2008) and Pottmann et al. (2009) provide a detailed treatise on different types of integral invariants and their properties.

It is easy to observe that differential and integral invariants can be thought of as being obtained from non-linear operations on convolution filters. The construction of differential invariants employs filters whose action is equivalent to numerical differentiation (high-pass filtering), whereas integral invariants use filters which act like numerical integrators (low-pass filtering) to stabilize the invariant. This provides a motivation to adopt a learning-based approach, and we demonstrate that the process of estimating these filters and functions can be outsourced to a learning framework. We use the Siamese configuration for implementing this idea. Such configurations have been used in signature verification (Bromley et al. (1993)), face verification and recognition (Sun et al. (2014); Taigman et al. (2014); Hu et al. (2014)), metric learning (Chopra et al. (2005)), image descriptors (Carlevaris-Bianco & Eustice (2014)), dimensionality reduction (Hadsell et al. (2006)) and also for generating 3D shape descriptors for correspondence and retrieval (Masci et al. (2015); Xie et al. (2015)). In these papers, the goal was to learn the descriptor, and hence the similarity metric, from data using notions of only positive and negative examples. We use the same framework for the estimation of geometric invariants. However, in contrast to these methods, we contribute an analysis of the output descriptor and provide a geometric context to the learning process. The contrastive loss function driving the training ensures that the network chooses filters which push and pull different features of the curve into the invariant by balancing a mix of robustness and fidelity.

A planar curve can be represented either explicitly, by sampling points on the curve, or using an implicit representation such as level sets (Kimmel (2012)). We work with an explicit representation of simple curves (open or closed) with random variable sampling of the points along the curve. Thus, every curve is an N × 2 array denoting the X and Y coordinates of the N points. We build a convolutional neural network which inputs a curve and outputs a representation or signature for every point on the curve. We can interpret this architecture as an algorithmic scheme for representing a function over the curve. However, feeding in a single curve is insufficient; instead we run this convolutional architecture in a Siamese configuration (Figure 2) that accepts a curve and a transformed version (positive) of the curve or an unrelated curve (negative). By using two identical copies of the same network sharing weights to process these two curves, we are able to extract geometric invariance by using a loss function to require that the two arms of the Siamese configuration produce values that are minimally different for curves related by Euclidean transformations (positive examples) and maximally different for carefully constructed negative examples. To fully enable training of our network we build a large dataset comprising positive and negative examples of the relevant transformations from a database of curves. We choose to
minimize the contrastive loss between the two outputs of the Siamese network, as this directs the network architecture to model a function over the curve which is invariant to the transformation."}, {"section_index": "2", "section_name": "3.1 LOSS FUNCTION", "section_text": "We employ the contrastive loss function (Chopra et al. (2005); LeCun et al. (2006)) for training our network. The Siamese configuration comprises two identical networks of Figure 3 computing signatures for two separate inputs of data. Associated with each input pair is a label which indicates whether that pair is a positive (λ = 1) or a negative (λ = 0) example (Figure 2). Let C1_i and C2_i be the curves input to the first and second arms of the configuration for the i-th example of the data, with label λ_i. Let S_Θ(C) denote the output of the network for a given set of weights Θ and input curve C. The contrastive loss function is given by

L(Θ) = Σ_i [ λ_i ‖S_Θ(C1_i) - S_Θ(C2_i)‖ + (1 - λ_i) max(0, μ - ‖S_Θ(C1_i) - S_Θ(C2_i)‖) ],   (3)

where μ is a cross-validated hyper-parameter known as the margin, which defines the metric threshold beyond which negative examples are penalized."
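A minimal PyTorch-style sketch of Eq. (3) is given below for concreteness; the paper's own implementation uses the Lua Torch library, and the batch-sum reduction here is an assumption:

```python
import torch

def contrastive_loss(s1, s2, label, margin=1.0):
    """Contrastive loss of Eq. (3).

    s1, s2 : signatures S(C1_i), S(C2_i) from the two arms, shape (batch, N)
    label  : float tensor lambda_i; 1 for positive pairs, 0 for negative pairs
    margin : the metric threshold mu beyond which negatives incur no penalty
    """
    d = torch.norm(s1 - s2, dim=1)                         # ||S(C1_i) - S(C2_i)||
    loss = label * d + (1 - label) * torch.clamp(margin - d, min=0.0)
    return loss.sum()                                      # summed over the batch
```

Positive pairs are pulled together by the first term, while negative pairs are pushed apart only until they clear the margin, after which they contribute nothing.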
}, {"section_index": "3", "section_name": "3.2 ARCHITECTURE", "section_text": "The network inputs an N × 2 array representing the coordinates of N points along the curve. Given the sequential nature of the curves and the mostly 1D convolution operations, the problem could also be approached as a temporal signal using recurrent neural network architectures. Here, however, we choose instead to use a multistage CNN pipeline. The network, given by one arm of the Siamese configuration, comprises three stages that use layer units typically considered the basic building blocks of modern CNN architectures. Each stage contains two sequential batches of convolutions, each appended with rectified linear units (ReLU), and ends with a max unit. The convolutional unit comprises convolutions with 15 filters of width 5, as depicted in Figure 3. The max unit computes the maximum of the 15 responses per point to yield an intermediate output after each stage. The final stage is followed by a linear layer which linearly combines the responses to yield the final output. Since every iteration of convolution results in a reduction of the sequence length, sufficient padding is provided on both ends of the curve. This ensures that the value of the signature at a point is the result of the computation of the filter centered around that point.

Figure 3: One arm of the network: the input curve passes through three stages of [Conv (15 filters, width 5) → ReLU → Conv (15 filters, width 5) → ReLU], the first two stages ending with a max unit, followed by a linear layer that produces the output signature.
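The following PyTorch sketch mirrors one arm of Figure 3. Treating the x and y coordinates as two input channels and implementing the max unit as a maximum over the 15 filter responses are our reading of the description above, not code from the paper (which uses Lua Torch):

```python
import torch.nn as nn

def conv_stage(in_channels, filters=15, width=5):
    pad = width // 2  # padding keeps the signature aligned with the input points
    return nn.Sequential(
        nn.Conv1d(in_channels, filters, width, padding=pad), nn.ReLU(),
        nn.Conv1d(filters, filters, width, padding=pad), nn.ReLU(),
    )

class SignatureNet(nn.Module):
    """One arm of the Siamese configuration (cf. Figure 3)."""

    def __init__(self):
        super().__init__()
        self.stage1 = conv_stage(in_channels=2)   # x and y coordinates as channels
        self.stage2 = conv_stage(in_channels=1)
        self.stage3 = conv_stage(in_channels=1)
        self.linear = nn.Conv1d(15, 1, kernel_size=1)  # linearly combines 15 responses

    def forward(self, curve):                     # curve: (batch, 2, N)
        h = self.stage1(curve).max(dim=1, keepdim=True).values  # max unit
        h = self.stage2(h).max(dim=1, keepdim=True).values      # max unit
        h = self.stage3(h)                        # final stage: no max unit
        return self.linear(h).squeeze(1)          # (batch, N) per-point signature
```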
"}, {"section_index": "4", "section_name": "3.3 BUILDING REPRESENTATIVE DATASETS AND IMPLEMENTATION", "section_text": "In order to train for invariance, we need to build a dataset with two major attributes: first, it needs to contain a large number of examples of the transformation, and second, the curves involved in the training need to have sufficient richness in terms of different patterns of sharp edges, corners, smoothness, noise and sampling factors to ensure generalizability of the model. To sufficiently span the space of Euclidean transformations, we generate random two-dimensional rotations by uniformly sampling angles from [-π, π]. The curves are normalized by removing the mean and dividing by the standard deviation, thereby achieving invariance to translations and uniform scaling. The contours are extracted from the shapes of the MPEG7 Database (Latecki et al. (2000)), as shown in the first part of Figure 4. It comprises a total of 1400 shapes containing 70 different categories of objects; 700 of the total were used for training and 350 each for testing and validation. The positive examples are constructed by taking a curve, randomly transforming it by a rotation, translation and reflection, and pairing the two together. The negative examples are obtained by pairing curves which are deemed dissimilar, as explained in Section 4. Each extracted contour is sub-sampled to 500 points. We build a training dataset of 10,000 examples with approximately 50% each of positive and negative examples. The network and training are implemented using the Torch library (Collobert et al. (2002)). We trained using Adagrad (Duchi et al. (2011)) at a learning rate of 5 × 10^-4 and a batch size of 10. We set the contrastive loss hyperparameter margin μ = 1, and Figure 4 shows the error plot for training and the convergence of the loss to a minimum. The rest of this work describes how we can observe and extend the efficacy of the trained network on new data.

Figure 4: Contours extracted from the MPEG7 Database, and the train/test error versus epoch for training."}, {"section_index": "5", "section_name": "MULTI-SCALE REPRESENTATIONS", "section_text": "Invariant representations at varying levels of abstraction have a theoretical interest as well as practical importance. Enumeration at different scales enables a hierarchical method of analysis which is useful when there is noise and hence stability is desired in the invariant. As mentioned in Section 2, the invariants constructed from scale-space methods and integral invariants naturally allow for such a decomposition by construction.

A valuable insight for multi-scale representations is provided by the theorems of Gage, Hamilton and Grayson (Gage & Hamilton (1986); Grayson (1987)): if we evolve any smooth non-intersecting planar curve with mean curvature flow, which is invariant to Euclidean transformations, it will ultimately converge into a circle before vanishing into a point. The curvature corresponding to this evolution follows a profile as shown in Figure 5, going from a possibly noisy descriptive feature to a constant function.

Figure 5: Curve evolution and the corresponding curvature profile.

In our framework, we observe an analogous behavior in a data-dependent setting. The positive part of the loss function (λ = 1) forces the network to push the outputs of the positive examples closer, whereas the negative part (λ = 0) forces the weights of the network to push the outputs of the negative examples apart, beyond the distance barrier of μ. If the training data does not contain any negative examples, it is easy to see that the weights of the network will converge to a point which yields a constant output, trivially minimizing the loss function in Equation 3.

Designing the negative examples of the training data provides the means to obtain a multi-scale representation. Since we are training for a local descriptor of a curve, that is, a function whose value at a point depends only on its local neighborhood, a negative example must pair curves such that corresponding points on each curve have different local neighborhoods. One such possibility is to construct negative examples which pair curves with their smoothed or evolved versions, as in Table 1. Minimizing the loss function in Equation 3 then pushes apart the signatures of the curve and its evolved or smoothed counterpart, thereby injecting the signature with fidelity and descriptiveness. We construct separate datasets where the negative examples are drawn as shown in the rows of Table 1, and train a network model for each of them using loss function 3 (a sketch of this pair construction is given after Figure 6 below). In our experiments we perform smoothing using local polynomial regression with weighted linear least squares to obtain the evolved contour. Figure 6 shows the outputs of these different networks, which demonstrate a scale-space-like behavior.

[Table 1 layout: columns Positive Example and Negative Example; rows indexed by Scale Index from Low to High.]
Table 1: Examples of training pairs for different scales. Each row indicates the pattern of training examples for a different scale.

Figure 6: Experiments with multi-scale representations. Each signature is the output of a network trained on a dataset with training examples formed as per the rows of Table 1. Index 1 indicates low and 5 indicates a higher level of abstraction.
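A hedged sketch of how training pairs at a given scale index might be generated follows. The moving-average smoothing is a crude stand-in for the local polynomial regression used in the paper, and the pairing logic is an assumption consistent with the description above:

```python
import numpy as np

def normalize(curve):
    # translation and uniform-scale invariance (Section 3.3)
    c = curve - curve.mean(axis=0)
    return c / c.std()

def smooth(curve, window):
    # circular moving average; stand-in for local polynomial regression
    k = np.ones(window) / window
    pad = lambda c: np.r_[c[-window:], c, c[:window]]
    return np.stack([np.convolve(pad(c), k, mode='same')[window:-window]
                     for c in curve.T], axis=1)

def training_pair(curve, positive, scale_window, rng):
    """curve: (N, 2) samples; rng: an np.random.Generator instance."""
    if positive:   # curve vs. a random Euclidean transform of itself
        t = rng.uniform(-np.pi, np.pi)
        R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        other, label = curve @ R.T, 1
    else:          # curve vs. its smoothed/evolved version (one row of Table 1)
        other, label = smooth(curve, scale_window), 0
    return normalize(curve), normalize(other), label
```

Growing the smoothing window with the scale index reproduces the pattern of Table 1: low-scale negatives differ only in fine detail, while high-scale negatives differ in coarse structure.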
Ability to handle low signal-to-noise ratios and efficiency of computation are typical qualities desired in a geometric invariant. To test the numerical stability and robustness of the invariant signatures, we designed two experiments. In the first experiment, we add increasing levels of zero-mean Gaussian noise to the curve and compare the three types of signatures: differential (Euclidean curvature), integral (integral area invariant) and the output of our network (henceforth termed the network invariant), as shown in Figure 7. Apart from adding noise, we also rotate the curve to obtain a better assessment of the Euclidean invariance property.

Figure 7: Stability of the differential, integral and network invariant signatures under varying levels of noise and Euclidean transformations. The correspondence between the shape and the signature is indicated by color. All signatures are normalized.

In Figure 8, we test the descriptiveness of the signature under noisy conditions in a shape retrieval task for a set of 30 shapes from 6 different categories. For every curve, we generate 5 signatures at different scales for the integral and the network invariant and use them as a representation of that shape. We use the Hausdorff distance (Bronstein et al. (2008)) between the two sets of signatures to rank the shapes for retrieval. Figures 7 and 8 demonstrate the robustness of the network, especially at high noise levels.

In the second experiment, we decimate a high-resolution contour at successive resolutions by randomly sub-sampling and redistributing a set of its points (marked blue in Figure 9) and observe the signatures at certain fixed points (marked red in Figure 9) on the curve. Figure 9 shows that the network is able to handle these changes in sampling and compares well with the integral invariant. Figures 7 and 9 represent the behavior of geometric signatures in two different regimes: large noise at a moderate strength of signal, and low signal at a moderate level of noise.

[Figure 8 plot: precision-recall curves for the network invariant and the integral invariant at noise levels σ = 0.1 and σ = 0.3.]
Figure 8: 5 shape contours of 6 different categories and the shape retrieval results for this set at different noise levels.

Figure 9: Testing robustness of signatures to different sampling conditions. The signatures are evaluated at the fixed red points on each contour, while the density and distribution of the blue points along the curve is varied from 70% to 5% of the total number of points of a high-resolution curve.

We have demonstrated a method to learn geometric invariants of planar curves. Using just positive and negative examples of Euclidean transformations, we showed that a convolutional neural network is able to effectively discover and encode transform-invariant properties of curves while remaining numerically robust in the face of noise. By using a geometric context in the training process, we were able to develop novel multi-scale representations through a learning-based approach without explicitly enforcing such behavior. As compared to a more axiomatic framework of modeling with differential geometry and engineering with numerical analysis, we demonstrated a way of replacing this pipeline with a deep learning framework which combines both these aspects.
The non-specific nature of this framework can be seen as providing the groundwork for future data-driven deep learning approaches to problems in differential geometry."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 664800)."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "M. Ackerman. Sophus Lie's 1884 Differential Invariant Paper. Math Sci Press, 1976.

Jane Bromley, James W Bentz, Leon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Sackinger, and Roopak Shah. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.

Alexander M Bronstein, Michael M Bronstein, and Ron Kimmel. Numerical geometry of non-rigid shapes. Springer Science & Business Media, 2008.

Ronan Collobert, Samy Bengio, and Johnny Mariethoz. Torch: a modular machine learning software library. Technical report, Idiap, 2002.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Thomas Fidler, Markus Grasmair, and Otmar Scherzer. Identifiability and reconstruction of shapes from integral invariants. Inverse Problems and Imaging, 2(3):341-354, 2008.

Matthew A Grayson. The heat equation shrinks embedded plane curves to round points. Journal of Differential Geometry, 26(2):285-314, 1987.

Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pp. 1735-1742. IEEE, 2006.

Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 539-546. IEEE, 2005.

Yann LeCun, Sumit Chopra, and Raia Hadsell. A tutorial on energy-based learning. 2006.

Jonathan Masci, Davide Boscaini, Michael Bronstein, and Pierre Vandergheynst. Geodesic convolutional neural networks on riemannian manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 37-45, 2015.

Farzin Mokhtarian and Alan K Mackworth. A theory of multiscale, curvature-based shape representation for planar curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(8):789-805, 1992.

Helmut Pottmann, Johannes Wallner, Qi-Xing Huang, and Yong-Liang Yang. Integral invariants for robust geometry processing. Computer Aided Geometric Design, 26(1):37-60, 2009.

Guillermo Sapiro and Allen Tannenbaum. Area and length preserving geometric invariant scale-spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(1):67-72, 1995.

Jin Xie, Yi Fang, Fan Zhu, and Edward Wong. Deepshape: Deep learned shape descriptor for 3d shape matching and retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1275-1283, 2015.

Siddharth Manay, Byung-Woo Hong, Anthony J Yezzi, and Stefano Soatto. Integral invariant signatures. In European Conference on Computer Vision, pp. 87-99. Springer, 2004.

Figure 10: (a) Standard 1D Gaussian filters and their derivatives used for curvature and curvature scale-space calculations.
(b) Some of the filters from the first layer of the network proposed in this paper. One can interpret the shapes of the filters in (b) as derivative kernels which are learned from data and therefore adapted to its sampling conditions.

[Figure 10 plots: panel (a) shows the Gaussian g(x, σ) = e^{-x^2/(2σ^2)} together with its first and second derivatives for several values of σ; panel (b) shows learned first-layer filters.]"}]
rJ8Je4clg | [{"section_index": "0", "section_name": "LEARNING TO PLAY IN A DAY: FASTER DEEP REIN- FORCEMENT LEARNING BY OPTIMALITY TIGHTENING", "section_text": "Frank S. He\nDepartment of Computer Science University of Illinois at Urbana-Champaign Zhejiang University\nfrankheshibi@qmail.com\nDepartment of Electrical and Computer Engineering University of Illinois at Urbana-Champaign.\nWe propose a novel training algorithm for reinforcement learning which com bines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel tech nique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improve ments in both training time and accuracy."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The recent advances of supervised deep learning techniques (LeCun et al., 2015) in computer vision speech recognition and natural language processing have tremendously improved the performance on challenging tasks, including image processing (Krizhevsky et al., 2012), speech-based transla- tion (Sutskever et al., 2014) and language modeling (Hinton et al., 2012). The core idea of deep learning is to use artificial neural networks to model complex hierarchical or compositional data abstractions and representations from raw input data (Bengio et al., 2013). However, we are still far from building intelligent solutions for many real-world challenges, such as autonomous driv- ing, human-computer interaction and automated decision making, in which software agents need tc consider interactions with a dynamic environment and take actions towards goals. Reinforcement learning (Bertsekas & Tsitsiklis, 1996; Powell, 2011; Sutton & Barto, 1998; Kaelbling et al., 1996) studies these problems and algorithms which learn policies to make decisions so as to maximize a reward signal from the environment. One of the promising algorithms is Q-learning (Watkins, 1989: Watkins & Dayan, 1992). Deep reinforcement learning with neural function approximation (Tsit- siklis & Roy, 1997; Riedmiller, 2005; Mnih et al., 2013; 2015), possibly a first attempt to combine deep learning and reinforcement learning, has been proved to be effective on a few problems which classical AI approaches were unable to solve. Notable examples of deep reinforcement learning include human-level game playing (Mnih et al., 2015) and AlphaGo (Silver et al., 2016).\nDespite these successes, its high demand of computational resources makes deep reinforcemer learning not yet applicable to many real-world problems. For example, even for an Atari game, th deep Q-learning algorithm (also called deep Q-networks, abbreviated as DQN) needs to play up t hundreds of millions of game frames to achieve a reasonable performance (van Hasselt et al., 2015 AlphaGo trained its model using a database of game records of advanced players and, in additior about 30 million self-played game moves (Silver et al., 2016). The sheer amount of required com putational resources of current deep reinforcement learning algorithms is a major bottleneck for it applicability to real-world tasks. Moreover, in many tasks, the reward signal is sparse and delayed thus making the convergence of learning even slower.\nDepartment of Computer Science. 
University of Illinois at Urbana-Champaign
liu30l@illinois.edu
Department of Computer Science"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Here we propose optimality tightening, a new technique to accelerate deep Q-learning by fast reward propagation. While current deep Q-learning algorithms rely on a set of experience replays, they only consider a single forward step for the Bellman optimality error minimization, which becomes highly inefficient when the reward signal is sparse and delayed. To better exploit long-term high-reward strategies from past experience, we design a new algorithm to capture rewards from both forward and backward steps of the replays via a constrained optimization approach. This encourages faster reward propagation, which reduces the training time of deep Q-learning.

We evaluate our proposed approach using the Arcade Learning Environment (Bellemare et al., 2013) and show that our new strategy outperforms competing techniques in both accuracy and training time on 30 out of 49 games despite being trained with significantly fewer data frames."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Nonetheless, the original DQN algorithm required millions of training steps to achieve human-level performance on Atari games. To improve the stability, double Q-learning was recently combined with deep neural networks, with the goal of alleviating the overestimation issue observed in Q-learning (Thrun & Schwartz, 1993; van Hasselt, 2010; van Hasselt et al., 2015). The key idea is to use two Q-networks for the action selection and Q-function value calculation, respectively. The greedy action of the target is first chosen using the current Q-network parameters, then the target value is computed using a set of parameters from a previous iteration. Another notable advance is 'prioritized experience replay' (Schaul et al., 2016), or 'prioritized sweeping,' for deep Q-learning. The idea is to increase the replay probability of experience tuples that have a high expected learning progress, measured by temporal difference errors.

In addition to the aforementioned variants of Q-learning, other network architectures have been proposed. The dueling network architecture applies an extra network structure to learn the importance of states and uses advantage functions (Wang et al., 2015). A distributed version of the deep actor-critic algorithm without experience replay was introduced very recently (Mnih et al., 2016). It deploys multiple threads learning directly from current transitions. The approach is applicable to both value-based and policy-based methods, off-policy as well as on-policy methods, and in discrete as well as in continuous domains. The model-free episodic control approach evaluates state-action pairs based on episodic memory using k-nearest neighbors with hashing functions (Blundell et al., 2016). Bootstrapped deep Q-learning carries out temporally-extended (or deep) exploration, thus leading to much faster learning (Osband et al., 2016).

More precisely, consider an agent operating over time t ∈ {1, ..., T}. At time t the agent is in an environment state s_t and reacts upon it by choosing an action a_t ∈ A. The agent will then observe a new state s_{t+1} and receive a numerical reward r_t ∈ R.
Throughout, we assume the set of possible actions, i.e., the set A, to be discrete.

There have been a number of approaches improving the stability, convergence and runtime of deep reinforcement learning since deep Q-learning, also known as deep Q-network (DQN), was first proposed (Mnih et al., 2013; 2015). DQN combined techniques such as deep learning, reinforcement learning and experience replays (Lin, 1992; Wawrzynski, 2009).

Our fast reward propagation differs from all of the aforementioned approaches. The key idea of our method is to propagate delayed and sparse rewards during Q-network training, and thus greatly improve the efficiency and performance. We formulate this propagation step via a constrained program. Note that our program is also different from earlier work on off-policy Q*(λ) algorithms with eligibility traces and n-step Q-learning (Munos et al., 2016; Watkins, 1989; Mnih et al., 2016), which have recently been shown to perform poorly when used for training deep Q-networks on Atari games.

Reinforcement learning considers agents which are able to take a sequence of actions in an environment. By taking actions and experiencing at most one scalar reward per action, their task is to learn a policy which allows them to act such that a high cumulative reward is obtained over time.

A well-established technique to address the aforementioned reinforcement learning task is Q-learning (Watkins, 1989; Watkins & Dayan, 1992). Generally, Q-learning algorithms maintain an action-value function, often also referred to as the Q-function, Q(s, a). Given a state s, the action-value function provides a 'value' for each action a ∈ A which estimates the expected future reward if action a ∈ A is taken. The estimated future reward is computed based on the current state s or a series of past states s_t if available.

The core idea of Q-learning is the use of the Bellman equation as a characterization of the optimal future reward function Q* via a state-action-value function

Q*(s_t, a) = E[r_t + γ max_{a'} Q*(s_{t+1}, a')].   (1)

Hereby the expectation is taken w.r.t. the distribution of state s_{t+1} and reward r_t obtained after taking action a, and γ is a discount factor. Intuitively, the reward for taking action a plus the best future reward should equal the best total return from the current state.

The choice of Q-function is crucial for the success of Q-learning algorithms. While classical methods use linear Q-functions based on a set of hand-crafted features of the state, more recent approaches use nonlinear deep neural networks to automatically mine intermediate features from the state (Riedmiller, 2005; Lange & Riedmiller, 2010; Mnih et al., 2013; 2015). This change has been shown to be very effective for many applications of reinforcement learning. However, automatic mining of intermediate representations comes at a price: larger quantities of data and more computational resources are required. Even though it is sometimes straightforward to extract large amounts of data, e.g., when training on video games, for successful optimization it is crucial that the algorithms operate on un-correlated samples from a dataset D for stability. A technique called "experience replay" (Lin, 1992; Wawrzynski, 2009) encourages this property and quickly emerged as a standard step in the well-known deep Q-learning framework (Mnih et al., 2013; 2015).
Experience replays are stored as a dataset D = {(s_j, a_j, r_j, s_{j+1})} which contains state-action-reward-future-state tuples (s_j, a_j, r_j, s_{j+1}), including past observations from previous plays.

The characterization of optimality given in Eq. (1) combined with an "experience replay" dataset D results in the following iterative algorithmic procedure (Mnih et al., 2013; 2015): start an episode in the initial state s_0; sample a mini-batch of tuples B = {(s_j, a_j, r_j, s_{j+1})} ⊆ D; compute and fix the targets y_j = r_j + γ max_a Q_{θ⁻}(s_{j+1}, a) for each tuple using a recent estimate Q_{θ⁻} (the maximization is only considered if s_j is not a terminal state); update the Q-function by optimizing the following program w.r.t. the parameters θ, typically via stochastic gradient descent:

min_θ Σ_{(s_j,a_j,r_j,s_{j+1})∈B} (Q_θ(s_j, a_j) - y_j)^2.   (2)

After having updated the parameters of the Q-function, we perform an action simulation, either choosing an action at random with a small probability ε, or by following the strategy arg max_a Q_θ(s_t, a) which is currently estimated. This strategy is also called the ε-greedy policy. We then obtain the actual reward r_t. Subsequently we augment the replay memory with the new tuple (s_t, a_t, r_t, s_{t+1}) and continue the simulation until this episode terminates or reaches an upper limit of steps, and we restart a new episode. When optimizing w.r.t. the parameter θ, a recent Q-network is used to compute the target y_j = r_j + γ max_a Q_{θ⁻}(s_{j+1}, a). This technique is referred to as 'semi-gradient descent,' i.e., the dependence of the target on the parameter θ is ignored.
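For reference, the classical one-step semi-gradient update of Eq. (2) can be sketched as follows (PyTorch-style; the tensor layouts and the termination mask are our assumptions):

```python
import torch

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One-step Bellman loss of Eq. (2) with fixed target parameters theta^-."""
    s, a, r, s_next, done = batch      # done = 1 if s_next is terminal
    with torch.no_grad():              # semi-gradient: no gradient through y_j
        y = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return ((q - y) ** 2).mean()
```

The `no_grad` block is exactly the 'semi-gradient' choice: the target y_j is treated as a constant during differentiation.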
"}, {"section_index": "4", "section_name": "FAST REWARD PROPAGATION VIA OPTIMALITY TIGHTENING", "section_text": "Investigating the cost function given in Eq. (2) more carefully, we observe that it operates on a set of short one-step sequences, each characterized by the tuple (s_j, a_j, r_j, s_{j+1}). Intuitively, each step encourages an update of the parameters θ such that the action-value function for the chosen action a_j, i.e., Q_θ(s_j, a_j), is closer to the obtained reward plus the best achievable future value, i.e., y_j = r_j + γ max_a Q(s_{j+1}, a). As we expect from the Bellman optimality equation, it is instructive to interpret this algorithm as propagating reward information from time j + 1 backwards to time j.

To understand the shortcomings of this procedure, consider a situation where the agent only receives a sparse and delayed reward once it reaches a target in a maze. Further, let P characterize the shortest path from the agent's initial position to the target. For a long time, no real reward is available and the aforementioned algorithm propagates randomly initialized future rewards. Once the target is reached, real reward information is available. Due to the cost function and its property of propagating reward time-step by time-step, it is immediately apparent that it takes at least an additional O(|P|) iterations until the observed reward impacts the initial state.

In the following we propose a technique which increases the speed of propagation and achieves improved convergence for deep Q-learning. We achieve this improvement by taking advantage of longer state-action-reward sequences which are readily available in the "experience replay" memory. Not only do we propagate information from time instances in the future to our current state, but we also pass information from states several steps in the past.

Even though we expect to see substantial improvements on sequences where rewards are sparse or only available at terminal states, we also demonstrate significant speedups for situations where rewards are obtained frequently. This is intuitive, as the Q-function represents an estimate for any reward encountered in the future. Faster propagation of future and past rewards to a particular state is therefore desirable.

Subsequently we discuss our technique for fast reward propagation, a new deep Q-learning algorithm that exploits longer state-transitions in experience replays by tightening the optimization via constraints. For notational simplicity, we assume that the environmental dynamics is deterministic, i.e., the new state and the reward are solely determined by the current state and action. It is possible to show that our proposed approach also approximately works in stochastic environments; please see the details in the appendix. From the Bellman optimality equation we know that the following series of equalities holds for the optimal Q-function Q*:

Q*(s_j, a_j) = r_j + γ max_a Q*(s_{j+1}, a) = r_j + γ max_a [ r_{j+1} + γ max_{a'} [ r_{j+2} + γ max_{a''} Q*(s_{j+3}, a'') ] ] = ...

Evaluating such a sequence exactly is not possible in a reinforcement learning setting, since the enumeration of intermediate states s_{j+i} requires exponential time complexity O(|A|^k). It is however possible to take advantage of the episodes available in the replay memory D by noting that the following sequence of inequalities holds for the optimal action-value function Q* (with the greedy policy), irrespective of whether the policy generating the sequence of actions a_j, a_{j+1}, etc., which results in rewards r_j, r_{j+1}, etc., is optimal or not:

Q*(s_j, a_j) = r_j + γ max_a Q*(s_{j+1}, a) ≥ ... ≥ Σ_{i=0}^{k} γ^i r_{j+i} + γ^{k+1} max_a Q*(s_{j+k+1}, a) = L_{j,k}.

Note the definition of the lower bounds L_{j,k} for sample j and time horizon k in the aforementioned series of inequalities.

We can also use this series of inequalities to define upper bounds. To see this, note that

Q*(s_{j-k-1}, a_{j-k-1}) - Σ_{i=0}^{k} γ^i r_{j-k-1+i} - γ^{k+1} Q*(s_j, a_j) ≥ 0,

which follows from the definition of the lower bound by dropping the maximization over the actions and a change of indices from j → j - k - 1. Reformulating the inequality yields an upper bound U_{j,k} for sample j and time horizon k, by fixing state s_j and action a_j, as follows:

U_{j,k} := γ^{-k-1} [ Q*(s_{j-k-1}, a_{j-k-1}) - Σ_{i=0}^{k} γ^i r_{j-k-1+i} ] ≥ Q*(s_j, a_j).
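Reading the bounds off a stored episode requires only the rewards and the target network's values; a minimal NumPy sketch follows (array layouts and the omission of episode-boundary handling are simplifications of ours):

```python
import numpy as np

def optimality_bounds(q_max, q_taken, rewards, j, K, gamma=0.99):
    """Largest lower bound and smallest upper bound for sample j.

    q_max[t]   : max_a Q_{theta^-}(s_t, a)
    q_taken[t] : Q_{theta^-}(s_t, a_t)
    rewards[t] : r_t along one stored episode
    """
    L = []
    for k in range(1, K + 1):          # k = 0 would reproduce the target y_j
        ret = sum(gamma**i * rewards[j + i] for i in range(k + 1))
        L.append(ret + gamma**(k + 1) * q_max[j + k + 1])          # L_{j,k}
    U = []
    for k in range(1, K + 1):
        ret = sum(gamma**i * rewards[j - k - 1 + i] for i in range(k + 1))
        U.append(gamma**(-k - 1) * (q_taken[j - k - 1] - ret))     # U_{j,k}
    return max(L), min(U)
```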
In contrast to classical techniques which optimize the Bellman criterion given in Eq. (2), we propose to optimize it subject to constraints derived from these bounds: Q_θ(s_j, a_j) ≥ L_j^max = max_{k∈{1,...,K}} L_{j,k}, which defines the largest lower bound, and Q_θ(s_j, a_j) ≤ U_j^min = min_{k∈{1,...,K}} U_{j,k}, which specifies the smallest upper bound. Hereby, L_{j,k} and U_{j,k} are computed using the Q-function Q_{θ⁻} with a recent estimated parameter θ⁻ rather than the unknown optimal Q-function Q*, and the integer K specifies the number of future and past time steps which are considered. Also note that the target used in the Bellman equation is obtained from y_j = L_{j,0} = r_j + γ max_a Q_{θ⁻}(s_{j+1}, a). In this way, we ignore the dependence of the bounds and the target on the parameter θ to stabilize the training. Taking all the aforementioned definitions into account, we propose the following program for reinforcement learning tasks:

min_θ Σ_{(s_j,a_j,r_j,s_{j+1})∈B} (Q_θ(s_j, a_j) - y_j)^2   s.t.   Q_θ(s_j, a_j) ≥ L_j^max ∀(s_j, a_j) ∈ B   and   Q_θ(s_j, a_j) ≤ U_j^min ∀(s_j, a_j) ∈ B.   (3)

This program differs from the classical approach given in Eq. (2) via the constraints, which is crucial. Intuitively, the constraints encourage faster reward propagation, as we show next, and result in tremendously better results, as we demonstrate empirically in Sec. 5.

Before doing so, we describe our optimization procedure for the constrained program in Eq. (3) more carefully. The cost function is generally non-convex in the parameters θ, and so are the constraints. We therefore make use of a quadratic penalty method to reformulate the program into

min_θ Σ_{(s_j,a_j,r_j,s_{j+1})∈B} [ (Q_θ(s_j, a_j) - y_j)^2 + λ(L_j^max - Q_θ(s_j, a_j))_+^2 + λ(Q_θ(s_j, a_j) - U_j^min)_+^2 ],   (4)

where λ is a penalty coefficient and (x)_+ = max(0, x) is the rectifier function. Augmenting the cost function with λ(L_j^max - Q_θ(s_j, a_j))_+^2 and/or λ(Q_θ(s_j, a_j) - U_j^min)_+^2 results in a penalty whenever any optimality bounding constraint gets violated. The quadratic penalty function is chosen for simplicity. The penalty coefficient λ can be set to a large positive value or adjusted in an annealing scheme during training. In this work, we fix its value, due to time constraints. We optimize this cost function with stochastic (sub-)gradient descent using an experience replay memory from which we randomly draw samples, as well as their successors and predecessors. We emphasize that the derivatives correcting the prediction of Q(s_j, a_j) not only depend on the Q-function from the immediately successive time step Q(s_{j+1}, a) stored in the experience replay memory, but also on more distant time instances if constraints are violated. Our proposed formulation and the resulting optimization technique hence encourage faster reward propagation, and the number of time steps depends on the constant K and the quality of the current Q-function. We summarize the proposed method in Algorithm 1.

Output: parameters θ of a Q-function
Initialize: θ randomly; set θ⁻ = θ
for episode = 1 to M do
  initialize s_1
  for t = 1 to T do
    choose action a_t according to the ε-greedy strategy
    observe reward r_t and next state s_{t+1}
    store the tuple (s_t, a_t, r_t, ·, s_{t+1}) in replay memory D
    sample a mini-batch of tuples B = {(s_j, a_j, r_j, R_j, s_{j+1})} from replay memory D
    update θ with one gradient step of the cost function given in Eq. (4)
    reset θ⁻ = θ every C steps
  end for
  for t = T to 1 do
    compute R_t = r_t + γR_{t+1}
    insert R_t into the corresponding tuple in replay memory D
  end for
end for
Algorithm 1: Our algorithm for fast reward propagation in reinforcement learning tasks.
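A PyTorch-style sketch of the penalized objective of Eq. (4); the mean reduction and the fixed λ = 4 (the value used in the experiments below) are assumptions of ours:

```python
import torch
import torch.nn.functional as F

def tightened_loss(q_sa, y, L_max, U_min, lam=4.0):
    """Quadratic-penalty objective of Eq. (4) for one mini-batch.

    q_sa  : Q_theta(s_j, a_j);  y : one-step targets y_j = L_{j,0}
    L_max : largest lower bounds; U_min : smallest upper bounds
    """
    bellman = (q_sa - y) ** 2
    below = F.relu(L_max - q_sa) ** 2   # violated lower-bound constraints
    above = F.relu(q_sa - U_min) ** 2   # violated upper-bound constraints
    return (bellman + lam * (below + above)).mean()
```

The rectified terms are zero whenever the constraints hold, so the objective reduces to the classical Bellman error of Eq. (2) in that case.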
The computational complexity of the proposed approach increases with the number of considered time steps K, since additional forward passes are required to compute the bounds L_j^max and U_j^min. However, we can increase the memory size on the GPU to compute both the bounds and targets in a single forward pass if K is not too large. If at all a problem, we can further alleviate this increase by randomly sampling a subset of the constraints rather than exhaustively using all of them. More informed strategies regarding the choice of constraints are possible as well, since we may expect lower bounds in the more distant future to have a larger impact early in the training. In contrast, once the algorithm is almost converged, we may expect lower bounds close to the considered time-step to have a bigger impact.

To efficiently compute the discounted reward over multiple time steps, we add a new element to the experience replay structure. Specifically, in addition to state, action, reward and next state for time-step j, we also store the real discounted return R_j, which is the discounted cumulative return achieved by the agent in its game episode. R_j is computed via R_j = Σ_{τ=j}^{T} γ^{τ-j} r_τ, where T is the end of the episode and γ is the discount factor. R_j is then inserted into the replay memory after the termination of the current episode or after reaching the limit of steps. All in all, the structure of our experience replay memory consists of tuples of the form (s_j, a_j, r_j, R_j, s_{j+1}). In practice, we also found that incorporating R_j in the lower bound calculation can further improve the stability of the training.

We leave the questions regarding a good choice of penalty function and a good choice of the penalty coefficients to future work. At the moment we use a quadratic penalty function and a constant penalty coefficient λ identical for both bounds. More complex penalty functions and sophisticated optimization approaches may yield even better results than the ones we report in the following.

Figure 1: Improvements of our method trained on 10M frames compared to results of 200M frame DQN training presented by Mnih et al. (2015), using the metric given in Eq. (5)."}, {"section_index": "5", "section_name": "5 EXPERIMENTS", "section_text": "We evaluate the proposed algorithm on a set of 49 games from the Arcade Learning Environment (Bellemare et al., 2013), as suggested by Mnih et al. (2015). This environment is considered to be one of the most challenging reinforcement learning tasks because of its high-dimensional output. Moreover, the intrinsic mechanism varies tremendously for each game, making it extremely demanding to find a single, general and robust algorithm and a corresponding single hyperparameter setting which works well across all 49 games.

Following existing work (Mnih et al., 2015), our agent predicts an action based only on raw image pixels and reward information received from the environment. A deep neural network is used as the function approximator for the Q-function. The game image is resized to an 84 × 84 grayscale image s_t. The first layer is a convolutional layer with 32 filters of size 8 × 8 and a stride of 4; the second layer is a convolutional layer with 64 filters of size 4 × 4 and a stride of 2; the third layer is a convolutional layer with 64 filters of size 3 × 3 and a stride of 1; the next fully connected layer transforms the input to 512 units, which are then transformed by another fully connected layer to an output size equal to the number of actions in each game. The rectified linear unit (ReLU) is used as the activation function for each layer. We used the hyperparameters provided by Mnih et al. (2015) for annealing ε-greedy exploration and also applied RMSProp for gradient descent. As in previous work, we combine four frames into a single step for processing. We chose the hyperparameter K = 4, for GPU memory efficiency when dealing with mini-batches.
In addition, we also include the discounted return R_j = L_{j,∞} in the lower bound calculation to further stabilize the training. We use the penalty coefficient λ = 4, which was obtained by coarsely tuning performance on the games 'Alien,' 'Amidar,' 'Assault,' and 'Asterix.' Gradients are also rescaled so that their magnitudes are comparable with or without penalty. All experiments are performed on an NVIDIA GTX Titan-X 12GB graphics card.

Figure 2: Improvements of our method trained on 10M frames compared to results of 10M frame DQN training, using the metric given in Eq. (5)."}, {"section_index": "6", "section_name": "5.1 EVALUATION", "section_text": "We strictly follow the evaluation procedure of Mnih et al. (2015), which is often referred to as '30 no-op evaluation.' During both training and testing, at the start of the episode the agent always performs a random number of at most 30 no-op actions. During evaluation, our agent plays each game 30 times for up to 5 minutes, and the obtained score is averaged over these 30 runs. An ε-greedy policy with ε = 0.05 is used. Specifically, for each run, the game episode starts with at most 30 no-op steps and ends with 'death' or after a maximum of 5 minutes of game-play, which corresponds to 18000 frames.

Our training consists of M = 40 epochs, each containing 250000 frames, thus 10M frames in total. For each game, we evaluate our agent at the end of every epoch and, following common practice (van Hasselt et al., 2015; Mnih et al., 2015), we select the best agent evaluation as the result of the game. Almost all hyperparameters are selected identical to those of Mnih et al. (2015) and Nair et al. (2015).

Fig. 1 shows the improvement of our algorithm over the DQN baseline proposed by Mnih et al. (2015) and trained for 200M frames, i.e., 50M steps. Even though our agent is only trained for 10M frames, we observe that our technique outperforms the baseline significantly. In 30 out of 49 games, our algorithm exceeds the baseline using only 5% of the baseline's training frames, sometimes drastically, e.g., in games such as 'Atlantis,' 'Double Dunk,' and 'Krull.' The remaining 19 games often require a long training time. Nonetheless, our algorithm still reaches a satisfactory level of performance.

In order to further illustrate the effectiveness of our method, we also compare our results with our implementation of DQN trained on 10M frames. The results are illustrated in Fig. 2. We observe a better performance on 46 out of 49 games, demonstrating in a fair way the potential of our technique.

In previous work (Mnih et al., 2015; van Hasselt et al., 2015; Schaul et al., 2016; Wang et al., 2015), the Q-function is trained on each game using 200 million (200M) frames or 50M training steps. We compare to those baseline results obtained after 200M frames using our proposed algorithm, which ran for only 10M frames or 2.5M steps, i.e., 20 times less data, due to time constraints. Instead of training for more than 10 days, we manage to finish training in less than one day. Furthermore, for a fair comparison, we replicate the DQN results and compare the performance of the proposed algorithm after 10M frames to those obtained when training DQN on only 10M frames.

To compare the performance of our algorithm to the DQN baseline, we follow the approach of Wang et al. (2015) and measure the improvement in percent using

(Score_Agent - Score_Baseline) / (max{Score_Human, Score_Baseline} - Score_Random).   (5)

We select this approach because the denominator choice of either human or baseline score prevents insignificant changes or negative scores from being interpreted as large improvements.
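Both summary metrics are simple ratios; a small sketch for clarity (the function names are ours, and Eq. (6) is stated just below):

```python
def improvement_percent(agent, baseline, human, random):
    """Relative improvement of Eq. (5), in percent."""
    return 100.0 * (agent - baseline) / (max(human, baseline) - random)

def normalized_score(agent, human, random):
    """Human-normalized score of Eq. (6): 100% corresponds to human level."""
    return 100.0 * (agent - random) / (human - random)
```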
As suggested by van Hasselt et al. (2015), we use the score

(Score_Agent - Score_Random) / (Score_Human - Score_Random)   (6)

to summarize the performance of our algorithm in a single number. We normalize the scores of our algorithm, the baseline reported by Mnih et al. (2015), and double DQN (D-DQN) (van Hasselt et al., 2015), and report the training time, mean and median in Table 1. We observe our technique with 10M frames to achieve scores comparable to the D-DQN method trained on 200M frames (van Hasselt et al., 2015), while it outperforms the DQN method (Mnih et al., 2015) by a large margin. We believe that our method can be readily combined with other techniques developed for DQN, such as D-DQN (van Hasselt et al., 2015), prioritized experience replay (Schaul et al., 2016), dueling networks (Wang et al., 2015), and asynchronous methods (Mnih et al., 2016), to further improve accuracy and training speed.

Table 1: Mean and median human-normalized scores. The DQN baseline and D-DQN results are from Mnih et al. (2015); van Hasselt et al. (2015) and trained with 200M frames, while our method is trained with 10M frames. Note that our approach can be combined with the D-DQN method.

[Figure 3 panels: Frostbite, Atlantis, Zaxxon, H.E.R.O., Q*Bert and Chopper Command; score versus training frames (1e6) for Ours, DQN, DQN+return and DQN(λ).]

In Fig. 3 we illustrate the evolution of the score for our algorithm and the DQN approach. In addition, we demonstrate two further techniques: 'DQN+return' and 'DQN(λ).' 'DQN+return' uses only the discounted future return as a bound, but does not take advantage of the additional constraints we propose. 'DQN(λ)' combines TD(λ) with the DQN algorithm. We illustrate the performance of those four algorithms on the six games 'Frostbite,' 'Atlantis,' 'Zaxxon,' 'H.E.R.O,' 'Q*Bert,' and 'Chopper Command.' We observe our method to achieve higher scores than the three baselines on the majority of the games. We refer the reader to the supplementary material for additional results."}, {"section_index": "7", "section_name": "6 CONCLUSION", "section_text": "In this paper we proposed a novel program for deep Q-learning which propagates promising rewards to achieve significantly faster convergence than the classical DQN. Our method significantly outperforms competing approaches even when trained on a small fraction of the data on the Atari 2600 domain.
In the future, we plan to investigate the impact of penalty functions and advanced constrained optimization techniques, and to explore potential synergies with other techniques.

Figure 3: Game scores for our algorithm (blue), DQN (black), DQN+return (red) and DQN(λ) (yellow) using 10M training frames. 30 no-op evaluation is used and a moving average over 4 points is applied."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. J. of Artificial Intelligence Research, 2013.
Y. Bengio, A. Courville, and P. Vincent. Representation Learning: A Review and New Perspectives. PAMI, 2013.
D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
C. Blundell, B. Uria, A. Pritzel, Y. Li, A. Ruderman, J. Z. Leibo, J. Rae, D. Wierstra, and D. Hassabis. Model-Free Episodic Control. In http://arxiv.org/pdf/1606.04460v1.pdf, 2016.
G. E. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.
L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. JMLR, 1996.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proc. NIPS, 2012.
S. Lange and M. Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In Proc. Int. Jt. Conf. Neural Netw., 2010.
Y. LeCun, Y. Bengio, and G. E. Hinton. Deep learning. Nature, 2015.
L.-J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 1992.
V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with Deep Reinforcement Learning. In NIPS Deep Learning Workshop, 2013.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 2015.
V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. In https://arxiv.org/abs/1602.01783, 2016.
R. Munos, T. Stepleton, A. Harutyunyan, and M. G. Bellemare. Safe and efficient off-policy reinforcement learning. In Proc. NIPS, 2016.
A. Nair, P. Srinivasan, S. Blackwell, C. Alcicek, R. Fearon, V. Panneershelvam, A. De Maria, M. Suleyman, C. Beattie, S. Petersen, S. Legg, V. Mnih, K. Kavukcuoglu, and D. Silver. Massively Parallel Methods for Deep Reinforcement Learning. In https://arxiv.org/abs/1507.04296, 2015.
I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep Exploration via Bootstrapped DQN. In http://arxiv.org/abs/1602.04621, 2016.
W. P. Powell. Approximate Dynamic Programming. Wiley, 2011.
M. Riedmiller.
Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In Proc. ECML, 2005. T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized Experience Replay. In Proc. ICLR, 2016. D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 2016. I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Proc. NIPS 2014. R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. S. Thrun and A. Schwartz. Issues in using function approxima- tion for reinforcement learning. In Proc. Connectionist Models Summer School, 1993. J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. 1997 H. van Hasselt. Double Q-learning. In Proc. NIPs, 2010. H. van Hasselt, A. Guez, and D. Silver. Deep Reinforcement Learning with Double Q-learning. In https://arxiv.org/abs/1509.06461, 2015. Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling Network Architectures for Deep Reinforcement Learning. In https://arxiv.org/abs/1511.06581, 2015. C. J. C. H. Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge England, 1989. C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 1992. P. Wawrzynski. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural Networks, 2009."}, {"section_index": "9", "section_name": "OPTIMALITY TIGHTENING FOR STOCHASTIC ENVIRONMENTS", "section_text": "Similar to the inequalities we obtained for deterministic environments, we can also derive the fol lowing sequence of inequalities holds for the optimal action-value function Q* (with the greedy policy), under the expectation of the environmental dynamics:.\nSo we have the following. expectation constraint. on traiectories from state s: and action a.\nV EQ* maxQ*(Sj+k+1,a))] 0 Si,aj i=0\nE[Q*(si,aj) Lj,k] 0\nWe can also use this series of inequalities to define upper bounds, on trajectories to state s; and action aj.\nk c|Q*(s;,a; *S-k-1,a-k-1)- -k-1+i)< i=0\nE[Q*(s;,aj) -Uj,k]\nWith these expectation constraints, we can formulate a constrained optimization problem as follows.\nApplying the quadratic penalty function method, we obtain the objective\nIt is easy to see that, since we have trajectory samples in the replay memory which were drawr under the environmental dynamics, we can perform stochastic optimization using these trajectories In this way, a sample of this upper bound is identical to that in the deterministic setting in Eq. (4) As a result, our proposed algorithm can be used to optimize an upper bound of the above constrainec optimization in stochastic environments.\nPlease note that here we provide a mathematical derivation of our approach for stochastic environ ments. We expect that it would work in practice, but due to time constraints and the lack of good. stochastic simulators, we cannot provide any empirical results here.\nE[rj + y max Q*(sj+1,a) k IE max ( (Sj+k+1,a) i=0\nmin (Qe(Si,aj 0 (Sj,aj,Sj+1,rj)EB minx E[Qe(Sj,aj) - Lj,k] 0 )V(si,aj) E B S.t. 
Applying the quadratic penalty function method, we obtain the objective

$$\min_\theta \sum_{(s_j, a_j, r_j, s_{j+1}) \in \mathcal{B}} \left[\left(Q_\theta(s_j, a_j) - y_j\right)^2 + \lambda\left(\max_k \mathbb{E}\left[L_{j,k} - Q_\theta(s_j, a_j)\right]\right)_+^2 + \lambda\left(\max_k \mathbb{E}\left[Q_\theta(s_j, a_j) - U_{j,k}\right]\right)_+^2\right].$$

By applying Jensen's inequality, we are able to obtain an upper bound by first exchanging the expectation with the max and then exchanging the expectation with the rectifier function, because both the max function and the rectifier function are convex:

$$\min_\theta \sum_{(s_j, a_j, r_j, s_{j+1}) \in \mathcal{B}} \left[\left(Q_\theta(s_j, a_j) - y_j\right)^2 + \mathbb{E}\left[\lambda\left(\max_k L_{j,k} - Q_\theta(s_j, a_j)\right)_+^2\right] + \mathbb{E}\left[\lambda\left(Q_\theta(s_j, a_j) - \min_k U_{j,k}\right)_+^2\right]\right].$$

It is easy to see that, since we have trajectory samples in the replay memory which were drawn under the environmental dynamics, we can perform stochastic optimization using these trajectories. In this way, a sample of this upper bound is identical to that in the deterministic setting in Eq. (4). As a result, our proposed algorithm can be used to optimize an upper bound of the above constrained optimization in stochastic environments.

Please note that here we provide a mathematical derivation of our approach for stochastic environments. We expect that it would work in practice, but due to time constraints and the lack of good stochastic simulators, we cannot provide any empirical results here.

Game | Random | Human | DQN 200M | Ours 10M
Alien | 227.80 | 6875 | 3069 | 1864
Amidar | 5.8 | 1676 | 739.5 | 565.67
Assault | 222.4 | 1496 | 3359 | 5142.37
Asterix | 210 | 8503 | 6012 | 5408.33
Asteroids | 719.1 | 13157 | 1629 | 1481.67
Atlantis | 12850 | 29028 | 85641 | 316766.67
Bank Heist | 14.2 | 734.4 | 429.7 | 596
Battle Zone | 2360 | 37800 | 26300 | 30800
Beam Rider | 363.9 | 5775 | 6846 | 8069
Bowling | 23.1 | 154.8 | 42.4 | 49.3
Boxing | 0.1 | 4.3 | 71.8 | 81.17
Breakout | 1.7 | 31.8 | 401.2 | 229.79
Centipede | 2091 | 11963 | 8309 | 4470.06
Chopper Command | 811 | 9882 | 6687 | 6360
Crazy Climber | 10781 | 35411 | 114103 | 114146
Demon Attack | 152.1 | 3401 | 9711 | 5738.67
Double Dunk | -18.6 | -15.5 | -18.1 | -10.07
Enduro | 0 | 309.6 | 301.8 | 672.83
Fishing Derby | -91.7 | 5.5 | -0.8 | 5.27
Freeway | 0 | 29.6 | 30.3 | 31.3
Frostbite | 65.2 | 4335 | 328.3 | 3974.11
Gopher | 257.6 | 2321 | 8520 | 4660
Gravitar | 173 | 2672 | 306.7 | 346.67
H.E.R.O | 1027 | 25763 | 19950 | 19975
Ice Hockey | -11.2 | 0.9 | -1.6 | -3.43
Jamesbond | 29 | 406.7 | 576.7 | 1088.33
Kangaroo | 52 | 3035 | 6740 | 11716.67
Krull | 1598 | 2395 | 3805 | 9461.1
Kung-Fu Master | 258.5 | 22736 | 23270 | 27820
Montezuma's Revenge | 0 | 4376 | 0 | 23.33
Ms. Pacman | 307.3 | 15693 | 2311 | 1805
Name This Game | 2292 | 4076 | 7257 | 7314.67
Pong | -20.7 | 9.3 | 18.9 | 19.4
Private Eye | 24.9 | 69571 | 1788 | 342.37
Q*Bert | 163.9 | 13455 | 10596 | 12355
River Raid | 1339 | 13513 | 8316 | 8028.33
Road Runner | 11.5 | 7845 | 18257 | 29346.67
Robotank | 2.2 | 11.9 | 51.6 | 34.5
Seaquest | 68.4 | 20182 | 5286 | 4070
Space Invaders | 148 | 1652 | 1976 | 995
Star Gunner | 664 | 10250 | 57997 | 16653.95
Tennis | -23.8 | -8.9 | -2.5 | -1
Time Pilot | 3568 | 5925 | 5947 | 5423.33
Tutankham | 11.4 | 167.6 | 186.7 | 232
Up and Down | 533.4 | 9082 | 8456 | 14406
Venture | 0 | 1188 | 380 | 286.67
Video Pinball | 16257 | 17298 | 42684 | 74873.2
Wizard of Wor | 563.5 | 4757 | 3393 | 4716.67
Zaxxon | 32.5 | 9173 | 4977 | 10598

Table S1: Raw scores across 49 games, using 30 no-op start evaluation (5 minutes emulator time, 18000 frames, ε = 0.05). Results of DQN are taken from Mnih et al. (2015).

We present our quantitative results in Table S1 and Table S2. We also illustrate the normalized score provided in Eq. (6) over the number of episodes in Fig. S1.

Game | DQN 200M | Ours 10M
Alien | 42.74% | 24.62%
Amidar | 43.93% | 33.52%
Assault | 246.27% | 386.31%
Asterix | 69.96% | 62.68%
Asteroids | 7.32% | 6.13%
Atlantis | 449.94% | 1878.60%
Bank Heist | 57.69% | 80.78%
Battle Zone | 67.55% | 80.25%
Beam Rider | 119.79% | 142.39%
Bowling | 14.65% | 19.89%
Boxing | 1707.14% | 1930.24%
Breakout | 1327.24% | 757.77%
Centipede | 62.99% | 24.10%
Chopper Command | 64.78% | 61.17%
Crazy Climber | 419.50% | 419.67%
Demon Attack | 294.22% | 171.95%
Double Dunk | 16.13% | 275.16%
Enduro | 97.48% | 217.32%
Fishing Derby | 93.52% | 99.76%
Freeway | 102.36% | 105.74%
Frostbite | 6.16% | 91.55%
Gopher | 400.43% | 213.36%
Gravitar | 5.35% | 6.95%
H.E.R.O | 76.50% | 76.60%
Ice Hockey | 79.34% | 64.22%
Jamesbond | 145.00% | 280.47%
Kangaroo | 224.20% | 391.04%
Krull | 276.91% | 986.59%
Kung-Fu Master | 102.38% | 122.62%
Montezuma's Revenge | 0% | 0.53%
Ms. Pacman | 13.02% | 9.73%
Name This Game | 278.31% | 281.54%
Pong | 132% | 133.67%
Private Eye | 2.54% | 0.46%
Q*Bert | 78.49% | 91.73%
River Raid | 57.31% | 54.95%
Road Runner | 232.92% | 374.48%
Robotank | 509.28% | 332.99%
Seaquest | 25.94% | 19.90%
Space Invaders | 121.54% | 56.31%
Star Gunner | 598.10% | 166.81%
Tennis | 142.95% | 153.02%
Time Pilot | 100.93% | 78.72%
Tutankham | 112.23% | 141.23%
Up and Down | 92.68% | 162.38%
Venture | 31.99% | 24.13%
Video Pinball | 2538.62% | 5630.76%
Wizard of Wor | 67.47% | 99.04%
Zaxxon | 54.09% | 115.59%

Table S2: Normalized scores across 49 games.

[Figure S1: normalized score (%) against training frames (1e6); curves for our method's 10M-frame mean and median, with the Nature DQN 200M mean and median as horizontal reference lines.]

Figure S1: Convergence of mean and median of normalized percentages on 49 games."}]
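The lower and upper bounds L_{j,k} and U_{j,k} defined in the stochastic-environments appendix above are straightforward to compute from a replayed trajectory. The sketch below is our own illustration, not the paper's code; `q_max[t]` and `q_taken[t]` stand for target-network evaluations of max_a Q(s_t, a) and Q(s_t, a_t), which the training loop would supply.

```python
def lower_bounds(r, q_max, j, K, gamma=0.99):
    """L_{j,k} for k = 0..K-1 on one trajectory (assumes j + K < len(r))."""
    return [
        sum(gamma**i * r[j + i] for i in range(k + 1))
        + gamma**(k + 1) * q_max[j + k + 1]
        for k in range(K)
    ]

def upper_bounds(r, q_taken, j, K, gamma=0.99):
    """U_{j,k} for k = 0..K-1 (assumes j - K >= 0)."""
    return [
        gamma**(-k - 1) * q_taken[j - k - 1]
        - sum(gamma**(i - k - 1) * r[j - k - 1 + i] for i in range(k + 1))
        for k in range(K)
    ]
```

The tightest bounds are then max over k of the lower bounds and min over k of the upper bounds, as in the penalty terms above.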
By1snw5gl | [{"section_index": "0", "section_name": "L-SR1: A SECOND ORDER OPTIMIZATION METHOD FOR DEEP LEARNING", "section_text": "Vivek Ramamurthy\nvivek.ramamurthy@sentient.ai\nWe describe L-SR1, a new second order method to train deep neural networks. Second order methods hold great promise for distributed training of deep net- works. Unfortunately, they have not proven practical. Two significant barriers to their success are inappropriate handling of saddle points, and poor conditioning of the Hessian. L-SR1 is a practical second order method that addresses these concerns. We provide experimental results showing that L-SR1 performs at least as well as Nesterov's Accelerated Gradient Descent, on the MNIST and CIFAR10 datasets. For the CIFAR10 dataset, we see competitive performance on shallow networks like LeNet5, as well as on deeper networks like residual networks. Fur- thermore, we perform an experimental analysis of L-SR1 with respect to its hyper- parameters to gain greater intuition. Finally, we outline the potential usefulness of L-SR1 in distributed training of deep neural networks."}, {"section_index": "1", "section_name": "1 MOTIVATION", "section_text": "Second order methods hold great potential for distributing the training of deep neural networks Due to their use of curvature information, they can often find good minima in far fewer steps thar first order methods such as stochastic gradient descent (SGD). Moreover, stochastic second ordei methods can benefit from larger mini-batches (Le et al.12011). This is because they estimate seconc derivatives via differences between estimated gradients. The gradient estimates need to have less variance, so that when we take their differences, the result has low variance. As a result they provid a different trade-off between number of steps and mini-batch size than do SGD-like methods. This trade-off is interesting, because while steps must be evaluated sequentially, a mini-batch may be evaluated in parallel. Thus, second order methods present an opportunity to extract more parallelisn in neural network training. In particular, when mini-batches are sufficiently large, their evaluatior may be distributed. Furthermore, there are relatively fewer hyperparameters to tune in second order methods, compared to variants of stochastic gradient descent.\nL-BFGS (Nocedal] [1980] Liu & Nocedal]1989) is perhaps the most commonly used second orde1 method in machine learning. BFGS is a quasi-Newton method that maintains an approximation tc. the inverse Hessian of the function being optimized. L-BFGS is a limited memory version of BFGS. that stores the most recent updates to the inverse Hessian approximation and can therefore be used. practically for large scale problems. L-BFGS is typically combined with a line search technique tc choose an appropriate step size at each iteration. L-BFGS has been used to good effect in convex. optimization problems in machine learning, but has not found effective use in large scale non-convex problems such as deep learning.\nThree critical weaknesses have been identified. First, we know that training deep neural networks involves minimizing non-convex error functions over continuous, high dimensional spaces. It has been argued that the proliferation of saddle points in these problems presents a deep and profounc difficulty for quasi-Newton optimization methods (Dauphin et al.2014). 
Furthermore, it has beer argued that curvature matrices generated in second order methods are often ill-conditioned, anc these need to be carefully repaired. A variety of approaches to this have been suggested, including the use of an empirical Fisher diagonal matrix (Martens2016). Finally, popular quasi-Newtor\nNigel Duffy"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We propose L-SR1, a second order method that addresses each of these concerns. SR1 (Symmetri. Rank One) is a quasi-Newton method that uses a rank one update for updating the Hessian approx. imation of the function being optimized (Nocedal & Wright]2006). Unlike BFGS, the SR1 update. does not guarantee positive definiteness of the updated matrix. This was considered a major problen in the early days of nonlinear optimization when only line search iterations were used, and possibl led to the obscurity of SR1 outside the optimization community. However, with the development o trust-region methods, the SR1 updating formula is potentially very useful, and its ability to generat. indefinite Hessian approximations can actually prove to be advantageous..\nTwo other insights make L-SR1 practical by removing the requirement for a line search and ad- dressing the conditioning problem. First, we replace the line search using a trust region approach While L-BFGS using line search is well studied, recently, an L-BFGS method that uses a trust- region framework has also been proposed (Burke et al.]2008). Second, we combine L-SR1 with batch normalization. Batch normalization is a technique of normalizing inputs to layers of a neural network, used to address a phenomenon known as internal covariate shift during training (Ioffe & Szegedy 2015). Our hypothesis is that batch normalization may cause parameters of a neural net- work to be suitably scaled so that the Hessian becomes better conditioned. We tested this hypothesis empirically and outline the results below.\nWe now briefly summarize some other second order approaches that have been suggested in the literature, in order to place our approach in context.Pearlmutter(1994) derived a technique that directly calculated the product of the Hessian with an arbitrary vector, and applied this technique to a few variants of backpropagation, thereby showing a way to use the full Hessian without needing to compute and store it. Martens[(2010) used a generalization of this technique, introduced by Schrau- dolph (2002), to develop a second order optimization method based on the \"Hessian-free\" approach, using it to train deep auto-encoders (Martens!2010), as well as recurrent neural networks (Martens & Sutskever2011). The \"Hessian-free\" approach is essentially a line search Newton-CG (Conju- gate Gradient) method, also known as the truncated Newton method (Nocedal & Wright]2006), in which the search direction is computed by applying CG to the Newton method, and terminating it once it has made sufficient progress. This approach differs from ours in its use of line search instead of a trust region method. Moreover, it computes Hessian-vector products using finite differencing, as opposed to the limited-memory symmetric rank one update with trust region method, used in our approach. The cost of skipping the Hessian calculation in a truncated Newton method is one ad- ditional gradient evaluation per CG iteration (Nocedal & Wright| 2006). 
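The Hessian-vector trick mentioned above, in its finite-differencing form used by Hessian-free methods, fits in a few lines. This sketch is ours, shown on a quadratic whose Hessian is known; no Hessian matrix is ever formed.

```python
import numpy as np

def hvp_fd(grad_f, x, v, eps=1e-5):
    """Approximate H(x) @ v from two gradient calls (central differences)."""
    return (grad_f(x + eps * v) - grad_f(x - eps * v)) / (2.0 * eps)

# Check on f(x) = 0.5 x^T A x, whose Hessian is exactly A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x
x0, v = np.zeros(2), np.array([1.0, -1.0])
print(hvp_fd(grad, x0, v), A @ v)  # both approximately [2., -1.]
```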
As mentioned previously, Dauphin et al.[(2014) argue, that in high dimensional problems of practical interest, the proliferation of saddle points poses greater difficulty than local minima. In a bid to escape these saddle points. they propose second order optimization method called the saddle-free Newton method. Key to this\n1 The reference[Brust et al.(2016) describes an approach to solve the trust region sub-problem encounterec in an L-SR1 method, but does not describe the L-SR1 method itself..\nWe believe that it is possible to overcome saddle points using rank-one update based second order methods. The more common rank-two methods, e.g. L-BFGS, maintain a positive definite approx imation to the inverse of the Hessian, by design (Nocedal & Wright]2006). At saddle-points, the true Hessian cannot be well approximated by a positive definite matrix, causing commonly used second order methods to go uphill (Dauphin et al.] 2014). On the other hand, rank-one approaches such as SR1 don't maintain this invariant, so they can go downhill at saddle points. Numerical ex- periments (Conn et al.1991) suggest that the approximate Hessian matrices generated by the SR1 method show faster progress towards the true Hessian than those generated by BFGS. This suggests that a limited memory SR1 method (L-SR1, if you like) could potentially outperform L-BFGS in the task of high dimensional optimization in neural network training. The building blocks needed to construct an L-SR1 method have been suggested in the literature (Byrd et al.|1994] Khalfan et al. 1993). To the best of our knowledge, however, there is no complete L-SR1 method previously de scribed in the literature[ This prompted us to develop and test the approach, specifically in the large scale non-convex problems that arise in deep learning.\napproach is the definition of a class of generalized trust region methods. This class extends classica. trust region methods in a couple of ways. A first order Taylor expansion of the function is mini. mized, instead of the second order Taylor expansion. Moreover, the constraint on the step norm is. replaced by generalized constraint on the distance between consecutive iterates. Our approach, by. contrast, uses a a classical trust-region method. Rather than compute the Hessian exactly, Dauphil et al.(2014) use an approach similar Krylov subspace descent (Vinyals & Povey2012). The func tion is optimized in a lower-dimensional Krylov subspace, which is determined through Lanczos. iteration of the Hessian (Vinyals & Povey2012). The Lanczos method may be considered a gen. eralization of the CG method that can be applied to indefinite systems, and may be used to aid the. CG method by gathering negative curvature information (Nocedal & Wright2006). The Lanczos. method also involves finding an approximate solution to a trust-region subproblem in the range of a. Krylov basis that it generates. This trust region problem differs from the one we solve, in that the. Krylov basis generated has a special structure due to its mapping to a tridiagonal matrix (Nocedal &. Wright2006).\nIt is worth noting that several approaches have been proposed to overcome the weaknesses of L. BFGS. First, it has been proposed to initialize L-BFGS with a number of SGD steps. However, this. diminishes the potential for parallelism (Dean et al.]2012) Le et al.]2011). Second, it has beer. proposed to use \"forgetting\"', where every few (say, for example, 5) steps, the history for L-BFGS is. discarded. 
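To make the saddle-point argument concrete, consider a single quasi-Newton update on the quadratic saddle f(x) = ½(x₁² − x₂²). The toy check below is ours: the curvature condition sᵀy > 0 required to keep BFGS positive definite fails, so a safeguarded BFGS implementation must skip the update, whereas the SR1 update recovers the indefinite curvature exactly.

```python
import numpy as np

H = np.diag([1.0, -1.0])   # true Hessian at a saddle point
B = np.eye(2)              # current (positive definite) approximation
s = np.array([0.0, 1.0])   # step along the negative-curvature direction
y = H @ s                  # gradient difference for this quadratic

print(s @ y)               # -1.0: curvature condition s^T y > 0 fails for BFGS

u = y - B @ s
B_sr1 = B + np.outer(u, u) / (u @ s)
print(np.linalg.eigvalsh(B_sr1))  # [-1., 1.]: SR1 captures the negative curvature
```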
However, this greatly reduces the ability to use second order curvature information. There has also been a recent spurt of work on stochastic quasi-Newton methods for optimization. Byrd et al. (2016) propose a stochastic quasi-Newton method which uses the classical L-BFGS formula, but collects curvature information pointwise, at regular intervals, through sub-sampled Hessian vector products, rather than at every iteration. Mokhtari & Ribeiro (2014) propose RES, a regularized stochastic version of BFGS to solve convex optimization problems with stochastic objectives, and prove its convergence for bounded Hessian eigenvalues. Mokhtari & Ribeiro (2015) propose an online L-BFGS method for solving optimization problems with strongly convex stochastic objectives, and establish global almost sure convergence of their approach for bounded Hessian eigenvalues of sample functions. In the case of nonconvex stochastic optimization, Wang et al. (2014) propose, based on a general framework, two concrete stochastic quasi-Newton update strategies, namely a stochastic damped-BFGS update and a stochastic cyclic Barzilai-Borwein-like update, to adaptively generate positive definite Hessian approximations. They also analyze the almost sure convergence of these updates to stationary points. Keskar & Berahas (2015) propose ADAQN, a stochastic quasi-Newton algorithm for training RNNs. This approach retains a low per-iteration cost while allowing for non-diagonal scaling through a stochastic L-BFGS updating scheme. The method also uses a novel L-BFGS scaling initialization scheme and is judicious in storing and retaining L-BFGS curvature pairs. Finally, Curtis (2016) proposes a variable-metric algorithm for stochastic nonconvex optimization which exploits fundamental self-correcting properties of BFGS-type updating, and uses it to solve a few machine learning problems. As one may notice, all of these approaches adapt the BFGS-style rank-two updates in different ways to solve convex and non-convex problems. In contrast, our approach uses SR1-type updates, which we think can help better navigate the pathological saddle points present in the non-convex loss functions found in deep learning, by not constraining the Hessian approximation to be positive definite, as in the case of BFGS-style updates. Comparison of our approach with one of these recent stochastic second order methods is an interesting next step. In the Appendix, we provide a brief primer on line search and trust region methods, as well as on quasi-Newton methods and their limited memory variants.

Our algorithm is synthesized as follows. We take the basic SR1 algorithm described in Nocedal & Wright (2006) (Algorithm 6.2), and represent the relevant input matrices using the limited-memory representations described in Byrd et al. (1994). The particular limited-memory representations used in the algorithm vary, depending on whether we use trust region or line search methods as subroutines to make parameter updates, as does some of the internal logic. For instance, if k updates have been performed, the resulting matrix B_k can be expressed as (Nocedal & Wright, 2006)

$$B_k = B_0 + (Y_k - B_0 S_k)\left(D_k + L_k + L_k^T - S_k^T B_0 S_k\right)^{-1}(Y_k - B_0 S_k)^T,$$

where S_k, Y_k, D_k, and L_k are defined as follows:

$$S_k = [s_0, \ldots, s_{k-1}], \qquad Y_k = [y_0, \ldots, y_{k-1}],$$

$$(L_k)_{i,j} = \begin{cases} s_{i-1}^T y_{j-1} & \text{if } i > j \\ 0 & \text{otherwise,} \end{cases} \qquad D_k = \mathrm{diag}\left[s_0^T y_0, \ldots, s_{k-1}^T y_{k-1}\right].$$

The self-duality of the SR1 method (Nocedal & Wright, 2006) allows the inverse formula H_k to be obtained simply by replacing B, s, and y by H, y, and s, respectively, using standard matrix identities.
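For reference, the compact representation above translates directly into a cheap matrix-vector product. The numpy sketch below is our own, with the common choice B₀ = γI; it assumes the stored pairs already satisfy the SR1 skip test, so that the small m × m system is well posed.

```python
import numpy as np

def lsr1_matvec(v, S, Y, gamma=1.0):
    """Compute B_k @ v using the compact SR1 representation
    B_k = B_0 + (Y - B_0 S)(D + L + L^T - S^T B_0 S)^{-1}(Y - B_0 S)^T,
    with B_0 = gamma * I; S and Y are n x m with the stored pairs as columns."""
    Psi = Y - gamma * S                  # n x m
    SY = S.T @ Y                         # m x m; (i, j) entry is s_i^T y_j
    D = np.diag(np.diag(SY))
    L = np.tril(SY, k=-1)                # strictly lower-triangular part
    M = D + L + L.T - gamma * (S.T @ S)  # small m x m system
    return gamma * v + Psi @ np.linalg.solve(M, Psi.T @ v)
```

Only S, Y, and a few m × m products are needed, which is where the O(mn) per-iteration cost quoted below comes from.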
Limited-memory SR1 methods can be derived exactly like in the case of the BFGS method. Additional details are present in the pseudocode provided in the Appendix. The algorithm we develop is general enough to work with any line search or trust region method. While we tested the algorithm with line search approaches described in Dennis Jr. & Schnabel(1983), and with. the trust region approach described in|Brust et al.(2016), in this paper, we focus our experimental. investigations on using the trust region approach, and the advantage that provides over using other. first and second order optimization methods..\nWe also make a note here about the space and time complexity of our algorithm. We respectively denote by m and n, the memory size, and parameter dimensions. We assume m << n. As dis cussed in Section 7.2 ofNocedal & Wright(2006), the limited-memory updating procedure of B requires approximately 2mn + O(m) operations, and matrix vector products of the form Bv can be performed at a cost of (4m + 1)n + O(m4) multiplications. Moreover, the Cholesky and eigen value decompositions we perform within our trust-region method for m m matrices require O(m3 operations. It follows quite easily2[from this that the space complexity of our algorithm is O(mn) and the per iteration time complexity of our algorithm is O(mn)."}, {"section_index": "3", "section_name": "4 EXPERIMENTS", "section_text": "In the following, we summarize the results of training standard neural networks on the MNIST and CIFAR10 datasets using our approach, and benchmarking the performance with respect to other. first and second order methods. First, we compared our L-SR1 (with trust region) approach, with Nesterov's Accelerated Gradient Descent (NAG), L-BFGS with forgetting every 5 steps, defauli. SGD, AdaDelta, and SGD with momentum, by training small standard networks on the MNIST anc CIFAR10 datasets. On these problems, we also studied the effect of varying the minibatch size, fo. L-SR1, Adam (Kingma & Ba2014), and NAG. Next, we compared our L-SR1 with trust regior. approach with default hyperparameters, with a benchmark SGD with momentum, and Adam, by. training a 20-layer deep residual network on the CIFAR10 dataset. Following that, we varied each. hyperparameter of the L-SR1 with trust region approach to observe its effect on training the residual. network on CIFAR10."}, {"section_index": "4", "section_name": "4.1 LENET-LIKE NETWORKS", "section_text": "For each approach, and for each dataset, we considered the case where our networks had batch normalization layers within them, and the case where they did not. The parameters of the networks were randomly initialized. All experiments were repeated 10 times to generate error bars"}, {"section_index": "5", "section_name": "4.1.1 MNIST", "section_text": "We considered the LeNet5 architecture in this case, which comprised 2 convolutional layers, fol lowed by a fully connected layer and an outer output layer. Each convolutional layer was followed by a max-pooling layer. In the case where we used batch-normalization, each convolutional and fully connected layer was followed by a spatial batch normalization layer. We used a mini-batch size of 20 for the first order methods like NAG, SGD, AdaDelta and SGD with momentum, and a mini-batch size of 400 for the second order methods like L-SR1 and L-BFGS. The memory size was set to 5 for both L-SR1 and L-BFGS. The networks were trained for 20 epochs. 
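For readers who want the concrete topology, the LeNet5 variant just described (its full layer list is in the Appendix) can be written down compactly. This is our own PyTorch rendering under the stated layer sizes; the Gaussian initialization noise and the L2 term are omitted for brevity.

```python
import torch.nn as nn

# MNIST inputs are 1 x 28 x 28; after two conv/pool stages the feature
# map is 50 x 4 x 4 = 800 units, matching the sizes described above.
lenet5_bn = nn.Sequential(
    nn.Conv2d(1, 20, kernel_size=5), nn.ReLU(), nn.BatchNorm2d(20),
    nn.MaxPool2d(2),
    nn.Conv2d(20, 50, kernel_size=5), nn.ReLU(), nn.BatchNorm2d(50),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(800, 500), nn.Tanh(), nn.BatchNorm1d(500),
    nn.Linear(500, 10),
)
```

In the "without batch normalization" experiments, the BatchNorm layers above are simply removed.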
Further details on the network architecture and other parameter settings are provided in the Appendix.

²Deep neural networks typically have parameter dimensions in the tens of millions, while the memory size typically does not exceed 10. So n is indeed several orders of magnitude larger than m.

[Figure 1: two panels (with and without batch normalization) of test loss against epochs on MNIST, comparing NAG, L-SR1, L-BFGS with forgetting, SGD, AdaDelta, and SGD with momentum.]

Figure 1: Variation of test loss with number of epochs, on the MNIST dataset, with and without batch normalization. Note that the scales on the y-axes are different."}, {"section_index": "7", "section_name": "4.1.2 CIFAR10", "section_text": "We considered a slight modification to the 'LeNet5' architecture described above. We used a mini-batch size of 96 for NAG, SGD, AdaDelta and SGD with momentum. The other mini-batch sizes and memory sizes for L-SR1 and L-BFGS were as above. As above, the networks were trained for 20 epochs. Further details on the network architecture and other parameter settings are provided in the Appendix.

[Figure 2: the corresponding two panels for CIFAR10, with the same methods and axes.]

Figure 2: Variation of test loss with number of epochs, on the CIFAR10 dataset, with and without batch normalization. Note that the scales on the y-axes are different."}, {"section_index": "8", "section_name": "4.1.3 VARIATION OF MINIBATCH SIZE", "section_text": "We also compared the variation of test loss between L-SR1, Adam and NAG, as we varied the mini-batch size from 500 to 1000 to 10000, in the presence of batch normalization. The network architectures were as above. For minibatch sizes 500 and 1000,
we trained the networks for 50 epochs, while for the minibatch size of 10000, the networks were trained for 200 epochs.

[Figure 3: three panels (minibatch sizes 500, 1000, and 10000) of test loss against epochs on MNIST with batch normalization, comparing NAG, L-SR1, and Adam.]

Figure 3: Variation of test loss with number of epochs, on the MNIST dataset, with batch normalization, for varying minibatch sizes. Note that the scales on the x and y-axes across figures are different.

[Figure 4: the corresponding three panels for CIFAR10 with batch normalization.]

Figure 4: Variation of test loss with number of epochs, on the CIFAR10 dataset, with batch normalization, for varying minibatch sizes. Note that the scales on the x and y-axes across figures are different."}, {"section_index": "9", "section_name": "4.1.4 DISCUSSION", "section_text": "Our first set of experiments (Figures 1, 2) suggests that L-SR1 performs as well as, or slightly better than, all the first order methods on both the MNIST and CIFAR10 datasets, with or without batch normalization. L-SR1 is substantially better than L-BFGS in all settings, with or without forgetting. Forgetting appears to be necessary in order to get L-BFGS to work. Without forgetting, the approach appears to be stuck where it is initialized. For this reason, the plots for L-BFGS without forgetting have not been included. Batch normalization appears to improve the performance of all approaches, particularly the early performance of second order approaches like L-SR1 and L-BFGS.

The experiments with variation of minibatch sizes (Figures 3, 4) seem to provide compelling evidence of the potential for distributed training of deep networks, as may be seen from Table 1. First, we note that first order methods like NAG are not as sensitive to the size of the minibatch as commonly understood. For example, a 20-fold increase in minibatch size did not decrease the speed of convergence by the same or higher order of magnitude. Furthermore, approaches like L-SR1 and Adam appear to be much less sensitive to increasing minibatch size than NAG. This strengthens the case for their application to distributed training of deep neural networks. Finally, while Adam makes much faster initial progress than the other approaches, its final test loss by the end of training is worse than in the case of L-SR1.

One of the limitations of SR1 updating is that the denominator in the update can vanish. The literature however suggests that this happens rarely enough that the updates can be skipped when this phenomenon occurs, without affecting performance. In this regard, we had some interesting observations from our experiments.
While in most cases, updates were either never skipped, or skippec. less than 2.5% of the time, the cases of MNIST training with batch normalization, yielded abnor.\nFigure 3: Variation of test loss with number of epochs, on the MNIST dataset, with batch normal ization, for varying minibatch sizes. Note that the scales on the x and y-axes across figures are different.\nTable 1: Speed of conve nce of NAG, L-SR1, and Adam, with varying minibatch sizes\nmally high levels of skipped updates, ranging all the way from 7% to higher than 60% (for minibatcl size 10oo0). While this did not seem to affect performance adversely, it certainly warrants future investigation. Moreover, a better understanding of the interplay between batch normalization and optimization could help inform potential improvements in optimization approaches"}, {"section_index": "9", "section_name": "4.2 RESIDUAL NETWORKS", "section_text": "We next considered a deeper residual network architecture described in section 4.2 of He et al. (2015b), with n = 3. This led to a 20-layer residual network including 9 shortcut connections. As in He et al.(2015b), we used batch normalization (Ioffe & Szegedy2015) and the same initialization method (He et al.2015a).\nWe trained the residual network using the benchmark SGD with momentum, and other parameter. settings as described in He et al.[(2015b). We also trained the network using L-SR1 with defauli. settings. These included, a memory size of 5, a trust-region radius decrease factor of 0.5, and. a trust-region radius increase factor of 2.0. Finally, we also compared with Adam, with defauli. settings (Kingma & Ba] 2014). We used the same mini-batch size of 128 for all algorithms. Based. on the learning rate schedule used, the learning rate was equal to 0.1 through the first 80 epochs. 0.01 up to 120 epochs, and 0.001 thereafter, for SGD with momentum. Figure 5|shows variation. of test loss, over epochs, and by time. It needs to be noted that default L-SR1, with no parameter. tuning at all, has a superior final test loss to Adam, and is competitive with SGD with momentum which used custom parameters that were tuned carefully. L-SR1 does make slower progress over. time, which can be further optimized. Finally, we note that the test loss for L-SR1 bounces around. a lot more than the test loss for the other algorithms. This bears further exploration.."}, {"section_index": "10", "section_name": "4.2.2 VARIATION OF L-SR1 HYPERPARAMETERS", "section_text": "We varied the hyperparameters of L-SR1 in turn, keeping the remaining fixed. In each case, we trained the network for 200 epochs. We first considered varying the increase and decrease factors together. We considered a trust-region radius decrease factor of 0.2, 0.5 and 0.8, and a trust-region radius increase factor 1.2 and 2.0. The respective default values of these factors are 0.5 and 2.0 respectively. This led to six different combinations of decrease and increase factors. We kept the memory size and mini-batch size fixed at 5 and 128 respectively. Next, we considered memory sizes of 2 and 10 (in addition to 5, which we tried earlier), keeping the mini-batch size, decrease factor, and increase factor fixed at 128, 0.5, and 2.0 respectively. Finally, we considered mini-batch sizes of 512, 2048 and 8192 (in addition to 128, which we tried earlier), keeping the memory size, decrease factor, and increase factor fixed at 5, 0.5, and 2.0 respectively. 
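For readers unfamiliar with trust-region bookkeeping: the decrease and increase factors above enter through an acceptance-ratio test, and the skipped updates discussed earlier come from a standard SR1 safeguard. The sketch below is ours; the 0.25/0.75 thresholds are the conventional choices from Nocedal & Wright (2006), not values stated in this paper.

```python
import numpy as np

def update_radius(rho, delta, step_norm, dec=0.5, inc=2.0):
    """rho = (actual decrease) / (model-predicted decrease) of the trial step;
    dec and inc are the decrease/increase factors varied in this section."""
    if rho < 0.25:
        return dec * delta                        # poor model fit: shrink the region
    if rho > 0.75 and step_norm >= 0.99 * delta:
        return inc * delta                        # good fit and the step hit the boundary
    return delta

def sr1_update_is_safe(s, y, B, r=1e-8):
    """Safeguard against a vanishing SR1 denominator; pairs failing this test
    are the 'skipped updates' counted in the experiments."""
    u = y - B @ s
    return abs(s @ u) > r * np.linalg.norm(s) * np.linalg.norm(u)
```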
Figure 6 shows the results.

The following may be noted, based on the experiments with L-SR1 for training a residual network on CIFAR10. While there is potential value in increasing and decreasing the trust region radius at different rates, our experiments suggest that it may not be necessary to tune these hyperparameters. There is no noticeable performance gain from using a higher memory size in L-SR1. Furthermore, using a smaller memory size performs at least as well as in the default case. This is good news, due to the consequent savings in storage and computational resources. L-SR1 is relatively insensitive to a 4-fold increase in mini-batch size from 128 to 512, and a further 4-fold increase to 2048. The minibatch sensitivity of L-SR1 seems to be higher in the case of the residual network, compared with the LeNet-like networks seen earlier. Finally, we found the proportion of skipped updates in the case of residual networks to be less than 0.5% in all cases.

[Figure 5: test loss of L-SR1 (default), SGD with momentum (benchmark), and Adam (default) on the residual network, against epochs (left) and wall-clock seconds (right).]

Figure 5: L-SR1 vs SGD vs Adam, on the CIFAR10 dataset, using a residual network. The x-axis on the left shows number of epochs, while the x-axis on the right shows time in seconds.

[Figure 6: test loss against epochs when varying (left) the trust-region radius increase and decrease factors, (center) the mini-batch size, and (right) the memory size.]

Figure 6: Variation of trust region radius increase and decrease factors, mini-batch size and memory size with number of epochs, on the CIFAR10 dataset, using a residual network. Note that the scales on the y-axes are different."}, {"section_index": "12", "section_name": "5 CONCLUSIONS", "section_text": "In this paper, we have described L-SR1, a new second order method to train deep neural networks. Our experiments suggest that this approach is, at the very least, competitive with other first order
methods, and substantially better than L-BFGS, a well-known second order method. Our experi- ments also appear to validate our intuition about the ability of L-SR1 to overcome key challenges. associated with second order methods, such as inappropriate handling of saddle points, and poor conditioning of the Hessian. Our experimentation with the hyperparameters of L-SR1 suggested that it is relatively robust with respect to them, and requires minimal tuning. Furthermore, we have evidence to suggest that L-SR1 is much more insensitive to larger minibatch sizes than a first order method like NAG. This suggests that L-SR1 holds promise for distributed training of deep networks,. and we see our work as an important step toward that goal.."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Johannes Brust, Jennifer B. Erway, and Roummel F. Marcia. On solving 1-sr1 trust-region subprob lems. arXiv.org, 8 2016. arXiv:1506.07222v3.\nRichard H. Byrd, Jorge Nocedal, and Robert B. Schnabel. Representations of quasi-newton matrice and their use in limited-memory methods. Mathematical Programming. 63(1):129-156. 1 1994\nYann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex op timization. CoRR, abs/1406.2572, 2014. URLhttp://arxiv.0rg/abs/1406.2572\nJohn E. Dennis Jr. and Robert B. Schnabel. Numerical methods for unconstrained optimization ana nonlinear equations. Prentice Hall, 1 edition, 1983.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog nition. CoRR. abs/1512.03385.2015b. URLhttp://arxiv.org/abs/1512.03385\nHumaid Khalfan, Richard H. Byrd, and Robert B. Schnabel. A theoretical and experimental study of the symmetric rank one update. SIAM Journal on Optimization, 3(1):1-24, 1993\nDong C. Liu and Jorge Nocedal. On the limited memory bfgs method for large scale optimizatior Mathematical Programming, 45(1):503-528. 1989\nAryan Mokhtari and Alejandro Ribeiro. RES: regularized stochastic BFGS algorithm. IEEE Trans Signal Processing, 62(23):6089-6104, 2014. doi: 10.1109/TSP.2014.2357775. URLhttp: //dx.d01.0rg/10.1109/TSP.2014.2357775\nJorge Nocedal. Updating quasi-newton matrices with limited storage. Mathematics of Computation 35(151):773-782, 7 1980.\nJorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer-Verlag, New York, 2 edition, 2006."}, {"section_index": "13", "section_name": "BACKGROUND", "section_text": "In the following, we provide a brief primer on line search and trust region methods, as well as or. quasi-Newton methods and their limited memory variants. Further details may be found in Noceda & Wright (2006)\nIn any optimization algorithm, there are two main ways of moving from the current point xk tc a new iterate xk+1. One of them is line search. In it, the algorithm picks a descent direction pk and searches along this direction from the current iterate xk for a new iterate with a lower function value. The distance to move along px can be found by solving the following one-dimensional minimization problem:\nmin f(xk + apk a>0\nInstead of an exact minimization which may be expensive, the line search algorithm generates a limited number of trial step lengths until it finds one that generates a sufficient decrease in function\nJames Martens. Deep learning via hessian-free optimization. 
In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pp. 735-742, 2010. URL http://www.icml2010.org/papers/458.pdf.
James Martens and Ilya Sutskever. Learning recurrent neural networks with hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pp. 1033-1040, 2011.
Aryan Mokhtari and Alejandro Ribeiro. RES: Regularized stochastic BFGS algorithm. IEEE Trans. Signal Processing, 62(23):6089-6104, 2014. doi: 10.1109/TSP.2014.2357775. URL http://dx.doi.org/10.1109/TSP.2014.2357775.
Jorge Nocedal. Updating quasi-newton matrices with limited storage. Mathematics of Computation, 35(151):773-782, 7 1980.
Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer-Verlag, New York, 2 edition, 2006.
Barak A. Pearlmutter. Fast exact multiplication by the hessian. Neural Computation, 6:147-160, 1994."}, {"section_index": "14", "section_name": "BACKGROUND", "section_text": "In the following, we provide a brief primer on line search and trust region methods, as well as on quasi-Newton methods and their limited memory variants. Further details may be found in Nocedal & Wright (2006).

In any optimization algorithm, there are two main ways of moving from the current point x_k to a new iterate x_{k+1}. One of them is line search. In it, the algorithm picks a descent direction p_k and searches along this direction from the current iterate x_k for a new iterate with a lower function value. The distance to move along p_k can be found by solving the following one-dimensional minimization problem:

$$\min_{\alpha > 0} f(x_k + \alpha p_k).$$

Instead of an exact minimization, which may be expensive, the line search algorithm generates a limited number of trial step lengths until it finds one that generates a sufficient decrease in function value. At the new point, the process of computing the descent direction and step length is repeated. The other way is to use a trust region method. In a trust region method, the information about f is used to construct a model function m_k, which is supposed to approximate f near the current point x_k. Since the model m_k may not approximate f well when x is far from x_k, the search for a minimizer of m_k is restricted to some trust region of radius Δ_k around x_k. To wit, the candidate step p approximately solves the following sub-problem:

$$\min_{p \,:\, \|p\| \le \Delta_k} m_k(x_k + p).$$

If the candidate solution does not produce a sufficient decrease in f, the trust region is considered too large for the model function to approximate f well, so we shrink the trust region and re-solve. Essentially, the line search and trust region approaches differ in the order in which they choose the direction and magnitude of the move to the next iterate. In line search, the descent direction p_k is fixed first, and then the step length α_k to be taken along that direction is computed. In trust region, a maximum distance equal to the trust-region radius is first set, and then a direction is determined within this radius that achieves the best improvement in the objective value. If such a direction does not yield sufficient improvement, the model function is determined to be a poor approximation to the function, and the trust-region radius is reduced until the approximation is deemed good enough. Conversely, as long as the model function appears to approximate the objective function well, the trust region radius is increased until the approximation is not good enough."}, {"section_index": "15", "section_name": "LIMITED MEMORY QUASI-NEWTON METHODS", "section_text": "Quasi-Newton methods are a useful alternative to Newton's method in that they do not require computation of the exact Hessian, and yet still attain good convergence. In place of the true Hessian ∇²f_k, they use an approximation B_k, which is updated after each step based on information gained during the step. At each step, the new Hessian approximation B_{k+1} is required to satisfy the following condition, known as the secant equation:

$$B_{k+1} s_k = y_k, \qquad \text{where} \quad s_k = x_{k+1} - x_k, \quad y_k = \nabla f_{k+1} - \nabla f_k.$$

Typically, B_{k+1} is also required to be symmetric (like the exact Hessian), and the difference between successive approximations B_k and B_{k+1} is constrained to have low rank. One of the most popular formulae for updating the Hessian approximation B_k is the BFGS formula, named after its inventors, Broyden, Fletcher, Goldfarb, and Shanno, which is defined by

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}.$$

A less well known formula, particularly in the machine learning community, is the symmetric-rank-one (SR1) formula, defined by

$$B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^T}{(y_k - B_k s_k)^T s_k}.$$

The former update is a rank-two update, while the latter is a rank-one update. Both updates satisfy the secant equation and maintain symmetry. The BFGS update always generates positive definite approximations whenever the initial approximation B_0 is positive definite and s_k^T y_k > 0. Often, in practical implementations of quasi-Newton methods, the inverse Hessian approximation H_k is used instead of B_k, and the corresponding update formulae can be generated using the Sherman-Morrison-Woodbury matrix identity (Hager, 1989).

Limited-memory quasi-Newton methods are useful for solving large problems where computation of Hessian matrices is costly or when these matrices are dense. Instead of storing fully dense n × n approximations, these methods save only a few vectors of length n that capture the approximations. Despite these modest storage requirements, they often converge well. The most popular limited-memory quasi-Newton method is L-BFGS, which uses curvature information from only the most recent iterations to construct the inverse Hessian approximation. Curvature information from earlier iterations, which is less likely to be useful to modeling the actual behavior of the Hessian at the current iteration, is discarded in order to save memory.

Limited-memory quasi-Newton approximations can be used with line search or trust region methods. As described in Byrd et al. (1994), we can derive efficient limited memory implementations of several quasi-Newton update formulae, and their inverses."}, {"section_index": "16", "section_name": "NETWORK ARCHITECTURES AND HYPERPARAMETER SETTINGS", "section_text": ""}, {"section_index": "17", "section_name": "MNIST", "section_text": "The layers of the LeNet5 architecture used are described below. All the batch normalization layers were removed in the 'without batch normalization' case.

- Convolutional Layer - filter size 5 × 5, 20 feature maps, stride 1, padding 0, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
- Spatial Batch Normalization Layer
- Max Pooling Layer - filter size 2
- Convolutional Layer - filter size 5 × 5, 50 feature maps, stride 1, padding 0, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
- Spatial Batch Normalization Layer
- Max Pooling Layer - filter size 2
- Fully Connected Layer - 500 hidden units, and a tangent hyperbolic activation function
- Spatial Batch Normalization Layer
- Outer Output Layer - 10 outputs and output standard deviation of 0.1

Additionally, the network was trained with L2 regularization with parameter 0.0001. Training loss was measured as softmax cross entropy, while test loss was measured as multi-class error count. In the case of the first order methods, the learning rate was set to 0.003 where needed, and the momentum was set to 0.9, where needed. AdaDelta did not take any parameters."}, {"section_index": "18", "section_name": "CIFAR10", "section_text": "The layers of the architecture used are described below. All the batch normalization layers were removed in the 'without batch normalization' case.

- Convolutional Layer - filter size 5 × 5, 32 feature maps, stride 1, padding 2, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.01
- Spatial Batch Normalization Layer
- Max Pooling Layer - filter size 2
- Activation Layer - ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
- Convolutional Layer - filter size 5 × 5, 32 feature maps, stride 1, padding 2, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.01
- Spatial Batch Normalization Layer
- Max Pooling Layer - filter size 2
- Convolutional Layer - filter size 5 × 5, 64 feature maps, stride 1, padding 2, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.01
- Spatial Batch Normalization Layer
- Max Pooling Layer - filter size 2
- Fully Connected Layer - 64 hidden units, and a ReLU activation function with bias 0 and Gaussian noise with mean 0 and standard deviation 0.1
- Spatial Batch Normalization Layer
- Outer Output Layer - 10 outputs and output standard deviation of 0.1

Additionally, the network was trained with L2 regularization with parameter 0.001. Training loss was measured as softmax cross entropy, while test loss was measured as multi-class error count. In the case of the first order methods, the learning rate was set to 0.01 where needed, and the momentum was set to 0.9, where needed. AdaDelta did not take any parameters."}, {"section_index": "19", "section_name": "PSEUDOCODE", "section_text": "Algorithm 1 provides the pseudocode for L-SR1 with trust region method, while Algorithm 2 provides the pseudocode for L-SR1 with line search."}]
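Algorithms 1 and 2 themselves did not survive extraction, so purely as an illustration of how the pieces above fit together, here is a minimal dense Python sketch of an L-SR1 trust-region loop. It is our own code, not the paper's: a simple Cauchy-point step stands in for the exact subproblem solver the paper cites (Brust et al., 2016), and the radius rule is simplified.

```python
import numpy as np

def lsr1_tr_minimize(f, grad, x, iters=200, m=5, delta=1.0, r=1e-8):
    S, Y = [], []                                  # at most m curvature pairs
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:
            break
        B = np.eye(x.size)                         # rebuild B from stored pairs
        for s, y in zip(S, Y):
            u = y - B @ s
            if abs(u @ s) > r * np.linalg.norm(s) * np.linalg.norm(u):
                B += np.outer(u, u) / (u @ s)      # safeguarded SR1 update
        tau = delta / np.linalg.norm(g)            # Cauchy point in the region
        gBg = g @ B @ g
        if gBg > 0:
            tau = min(tau, (g @ g) / gBg)
        p = -tau * g
        pred = -(g @ p + 0.5 * p @ (B @ p))        # predicted model decrease (> 0)
        rho = (f(x) - f(x + p)) / max(pred, 1e-12)
        if rho > 1e-4:                             # accept step, store the pair
            S.append(p); Y.append(grad(x + p) - g)
            S, Y = S[-m:], Y[-m:]                  # limited memory: keep m newest
            x = x + p
        delta *= 0.5 if rho < 0.25 else (2.0 if rho > 0.75 else 1.0)
    return x

# Hypothetical sanity check on the Rosenbrock function.
f = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
grad = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                           200 * (z[1] - z[0]**2)])
print(f(lsr1_tr_minimize(f, grad, np.array([-1.2, 1.0]))))  # decreases from 24.2
```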
S1Y0td9ee | [{"section_index": "0", "section_name": "SHIET AGGREGATE EXTRACT NETWORKS", "section_text": "Francesco Orsini. Daniele Baracchi and Paolo Frasconi\nThe Shift Aggregate Extract Network (sAEN) is an architecture for learning repre sentations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying shift, aggregate and extract operations on the vector representations of its parts. We propose an algorithm for domain compression which takes ad- vantage of symmetries in hierarchical decompositions to reduce the memory us- age and obtain significant speedups. Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Many different problems in various fields of science require the classification of structured data. i.e. collections of objects bond together by some kind of relation. A natural way to represent such structures is through graphs, which are able to encode both the individual objects composing the. collection (as vertices) and the relationships between them (as edges). A number of approaches to. the graph classification problem has been studied in graph kernel and neural network literature..\nGraph kernels decompose input graphs in substructures such as shortest paths (Borgwardt & Kriegel. 2005), graphlets (Shervashidze et al., 2009) or neighborhood subgraph pairs (Costa & De Grave,. 2010). The similarity between two graphs is then computed by comparing the respective sets of. parts. Methods based on recursive neural networks unfold a neural network over input graphs and learn vector representations of their nodes employing backpropagation though structure (Goller &. Kuchler, 1996). Recursive neural networks have been successfully applied to domains such as nat-. ural language (Socher et al., 2011) and biology (Vullo & Frasconi, 2004; Baldi & Pollastri, 2003).. An advantage of recursive neural networks over graph kernels, is that the vector representations of. the input graphs are learnt rather than handcrafted..\nWe propose Shift Aggregate Extract Networks (sAEN), a neural network architecture for learning. representations of input graphs. SAEN decomposes input graphs into H-hierarchies made of multiple. strata of objects. Objects in each stratum are connected by \"part-of' relations to the objects to the stratum above."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Learning on social network data can be considerably hard due to their peculiar structure: as opposed. to chemical compounds and parse trees, the structure of social network graphs is highly irregular Indeed in social networks it is common to have nodes in the same graph whose degree differs by. orders of magnitude. This poses a significant challenge for the substructure matching approach used. by some graph kernels as the variability in connectivity generates a large number of unique patterns. leading to diagonally dominant kernel matrices..\nIn case we wish to classify graphs we can use an H-hierarchical decomposition in which the top stratum contains the graph G that we want to classify, while the intermediate strata contain subgraphs of G, subgraphs of subgraphs of G and so on, until we reach the bottom stratum which contains the. vertices v of G.\nUnlike R-convolution relations in kernel methods (which decompose objects into the set of thei. 
parts), H-hierarchical decompositions are deep as they can represent the parts of the parts of al object.\nRecursive neural networks associate to the vertices of the input graphs vector representations impos ing that they have identical dimensions. Moreover, the propagation follows the edge connectivity and weights are shared over the whole input graph. If we consider that vector representations of nodes (whose number of parents can differ by orders of magnitude) must share the same weights. learning on social network data with recursive neural networks might be nontrivial..\nSAEN compensates the limitations of recursive neural networks by adding the following degrees o flexibility:\n1. the SAEN computation schema unfolds a neural network over H-decompositions instead of the input graph, 2. SAEN imposes weight sharing and fixed size of the learnt vector representations on a per stratun basis instead of globally..\nAnother contribution of this paper is the introduction of a domain compression algorithm, that we. use in our experiments to reduce memory usage and runtime. Domain compression collapses objects in the same stratum of an H-hierarchical decomposition into a compressed one whenever these objects are indistinguishable for the SAEN computation schema. In particular objects made of the. same sets of parts are indistinguishable. In order obtain a lossless compression an H-hierarchical. decomposition we store counts on symmetries adopting some mathematical results from lifted linear programming (Mladenov et al., 2012). The domain compression algorithm is also reminiscent of the. work of Sperduti & Starita (1997) in which common substructures of recursive neural networks are collapsed in order to reduce the computational cost.\nMost graph kernels decompose graphs into parts by using an R-convolution relation (Haussler 1999). We extend this approach by decomposing graphs into a hierarchy of -parametrized \"part of\"' relations. Formally, an H-hierarchical decomposition is a pair ({St}|=o, {R,n}I=1) where:\n{St}I=o are disjoint sets of objects St called strata, or levels of the hierarchy. The bottom stratum So contains non-decomposable objects (e.g. individual vertices), while the other strata S, l =- 1, . .., L contain composite objects, o; E St, whose parts o; E St-1 belong to the preceding stratum, S1-1. o {Rt,}f=1 is a set of l, -parametrized R,n-convolution relations. A pair (0i, 0j) E St St-1 belongs to Rt. iff \"o; is part of o, with membership type \". For notational convenience, the parts of o: are denoted as R-1(o:) -\nThe membership type is used to represent the roles of the parts of an object. For example, we could decompose a graph as a multiset of -neighborhood subgraphs ' in which is the radius o the neighborhoods (see Figure 1 on the left). Another possible use of the r membership type is tc\nIThe r-neighborhood subgraph (or ego graph) of a vertex v in a graph G is the induced subgraph of G consisting of all vertices whose shortest-path distance from v is at most r.\nIndeed sAeN allows to use vector representations of different sizes for different strata of objects (e.g. graphs, subgraphs, subgraphs of subgraphs, edges, vertices etc.) The SAEN schema computes the vector representation of each object by applying shift, aggregate and extract operations on the vector representations of its parts.\nWe propose a neural network architecture that takes as input an undirected attributed graph G =. 
(V, E, X), where V is the vertex set, E ⊆ V × V is the edge set, and X = {x_v ∈ R^p}_{v∈V} is a set of p-dimensional vertex attributes. When vertices do not have associated attributes (for example this happens in some of the social network datasets of § 4.1), we can set x_v to some vertex invariant such as node centrality or betweenness.

Figure 1: Image of an H-hierarchical decomposition (in particular the EGNN explained in § 4.2). On the left we decompose a graph into rooted ego graphs of radius 0 and 1, while on the right we decompose an ego graph into the set of its vertices. The directed arrows represent "part of" relations labeled with their membership type π. The membership type represents the radius r = 0, 1 of the ego graphs (decomposition on the left) and the role (i.e. π = ROOT, ELEM) of a vertex in the ego graph (decomposition on the right), respectively.

An H-hierarchical decomposition is a multilevel generalization of R-convolution relations, and it reduces to an R-convolution relation for L = 1.

We propose Shift Aggregate Extract Network (SAEN) to learn vector representations for all the objects of an H-hierarchical decomposition. SAEN unfolds a neural network architecture over the H-hierarchical decomposition by using the Shift Aggregate Extract (SAE) schema.

According to the SAE schema, the vector representation of each object in the H-hierarchical decomposition is either computed by applying a neural network on the vertex attributes (for the objects in the bottom stratum) or defined in terms of the vector representations of its parts (for the other objects).

More formally, the SAE schema associates a d_l-dimensional representation h_i ∈ R^{d_l} to each object o_i ∈ S_l of the H-hierarchical decomposition according to the following formula:

$$h_i = \begin{cases} f_0(x_{v_i}; \theta_0) & \text{if } o_i \in S_0 \\[4pt] f_l\left(\sum_{\pi \in \Pi_l} \sum_{o_j \in R_{l,\pi}^{-1}(o_i)} z_\pi \otimes h_j ;\; \theta_l\right) & \text{otherwise,} \end{cases} \qquad (1)$$

where f_l(·; θ_l), l = 0, …, L, are multilayer neural networks with parameters θ_l. The Kronecker product z_π ⊗ h_j (with z_π the one-hot indicator of membership type π) is the shift step, the double summation is the aggregate step, and f_l is the extract step.

With respect to the base case (first branch of Eq. 1), we have that each object o_i in the bottom stratum S_0 is in one-to-one correspondence with the vertices v_i ∈ V of the graph that we are decomposing. Indeed, the vector representations h_i are computed by evaluating f_0(·; θ_0) in correspondence of the vertex attributes x_{v_i} ∈ X.

The recursion step (second branch of Eq. 1) follows the Shift Aggregate Extract (SAE) schema:

- Shift: each part representation h_j is lifted by the Kronecker product z_π ⊗ h_j so as to make sure that vector representations h_j of object parts will fall in the same slot if and only if they have the same membership type π.
- Aggregate: the shifted representations (z_π ⊗ h_j) of the parts o_j are then aggregated with a sum.
- Extract: the aggregated representation is compressed to a d_l-dimensional space by a θ_l-parametrized nonlinear map f_l(·, θ_l) : R^{|Π_l| d_{l−1}} → R^{d_l} implemented with a multilayer neural network.

Figure 2: Pictorial representation of the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see § 4.1) together with its compressed version.

The shift and aggregate steps, that we have seen so far, are identical to those used in kernel design when computing the explicit feature of a kernel k(x, z) derived from a sum Σ_{π∈Π} k_π(x, z) of base kernels k_π(x, z), π ∈ Π. In principle, it would be indeed possible to turn SAEN into a kernel method by removing the extraction step E from the SAE schema. However, such an approach would increase the dimensionality of the feature space by a multiplicative factor |Π_l| for each level l of the H-hierarchical decomposition, thus leading to an exponential number of features. When using SAEN, the feature space growth is prevented by exploiting a distributed representation (via a multilayered neural network) during the E step of the SAE schema.
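A direct, unoptimized rendering of Eq. 1 may help make the three steps concrete. The code below is our own sketch: `parts[i]` lists the (membership type, part index) pairs of object i, and the extract network f_l is replaced by a fixed random tanh map.

```python
import numpy as np

def sae_layer(parts, H_prev, f_l):
    """One Shift-Aggregate-Extract step (Eq. 1) for a whole stratum.
    parts[i] is a list of (pi, j): object i has part j of membership type pi."""
    n_pi = 1 + max(pi for plist in parts for pi, _ in plist)
    d = H_prev.shape[1]
    out = []
    for plist in parts:
        z = np.zeros(n_pi * d)
        for pi, j in plist:                       # shift: one-hot Kronecker slot
            z[pi * d:(pi + 1) * d] += H_prev[j]   # aggregate: sum within slots
        out.append(f_l(z))                        # extract: learned nonlinear map
    return np.stack(out)

# Toy usage: two ego graphs over three vertex encodings, |Pi| = 2 roles.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 2 * 3))               # extract map R^{|Pi| d} -> R^4
f_l = lambda z: np.tanh(W @ z)
H0 = rng.standard_normal((3, 3))                  # bottom-stratum representations
parts = [[(0, 0), (1, 1), (1, 2)], [(0, 2), (1, 0)]]
print(sae_layer(parts, H0, f_l).shape)            # (2, 4)
```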
"}, {"section_index": "3", "section_name": "2.3 EXPLOITING SYMMETRIES FOR DOMAIN COMPRESSION", "section_text": "In this section we propose a technique, called domain compression, which allows us to save memory and speed up the SAEN computation. Domain compression exploits symmetries in H-hierarchical decompositions by collapsing equivalent objects in each stratum. The greater the number of collapsed objects, the higher the compression ratio.

Two objects $a, b$ in a stratum $S_l$ are collapsable, $a \sim b$, if they share the same representation (i.e. $h_a = h_b$) for all the possible values of the parameters. A compressed stratum $S_l^{comp}$ is the quotient set of stratum $S_l$ w.r.t. the collapsibility relation $\sim$. We assume that the attributes of the elements in the bottom stratum $S_0$ are categorical, so that the same vector representation can be shared by multiple elements with non-zero probability.² While objects in the bottom stratum $S_0$ are collapsable when their attributes are identical, for all the other strata $S_l$, $l = 1, \ldots, L$, objects are collapsable if they are made by the same sets of parts for all the membership types $\pi$.

² Vectors of real-valued attributes could be discretized using clustering techniques. However, we leave discretization in SAEN to future works.

In Figure 2 we provide a pictorial representation of the domain compression of an H-hierarchical decomposition (EGNN, described in § 4.2). On the left we show the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see § 4.1) together with its compressed version on the right."}, {"section_index": "4", "section_name": "2.3.1 DOMAIN COMPRESSION ALGORITHM", "section_text": "In order to compress H-hierarchical decompositions we adapt the lifted linear programming technique proposed by Mladenov et al. (2012) to the SAEN architecture. If a matrix $M \in \mathbb{R}^{n \times p}$ has $m \le n$ distinct rows it can be decomposed as the product $D M^{comp}$ where $M^{comp}$ is a compressed version of $M$ in which the distinct rows of $M$ appear exactly once. The Boolean decompression matrix $D$ encodes the collapsibility relation among the rows of $M$, so that $D_{ij} = 1$ iff the $i$th row of $M$ falls in the equivalence class $j$ of $\sim$. A pseudo-inverse $C$ of $D$ can be computed by dividing the rows of $D^\top$ by their sum (where $D^\top$ is the transpose of $D$).

Example 1. If we look at matrix $M$ in Eq. 2 we notice that rows 1 and 4 share the encoding $[0, 0, 0]$, rows 3 and 5 share the encoding $[1, 1, 0]$, while the encoding $[1, 0, 1]$ appears only once at row 2. Matrix $M^{comp}$ is the compressed version of $M$:

$$M = \begin{bmatrix} 0&0&0\\ 1&0&1\\ 1&1&0\\ 0&0&0\\ 1&1&0 \end{bmatrix} \quad M^{comp} = \begin{bmatrix} 0&0&0\\ 1&0&1\\ 1&1&0 \end{bmatrix} \quad D = \begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&1\\ 1&0&0\\ 0&0&1 \end{bmatrix} \quad C = \begin{bmatrix} 1/2&0&0&1/2&0\\ 0&1&0&0&0\\ 0&0&1/2&0&1/2 \end{bmatrix} \qquad (2)$$

Matrix $M$ can be expressed as the matrix product between the decompression matrix $D$ and the compressed version $M^{comp}$ (i.e. $M = D M^{comp}$), while the matrix multiplication between the compression matrix $C$ and $M$ leads to the compressed matrix $M^{comp}$ (i.e. $M^{comp} = C M$).
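The COMPUTE-CD step used below can be sketched in a few lines of NumPy (our rendering, reproducing Example 1; note that np.unique orders the equivalence classes lexicographically, so the row order of $M^{comp}$ may differ from Eq. 2):

```python
import numpy as np

def compute_cd(M):
    """Return (C, D, M_comp) with M = D @ M_comp and M_comp = C @ M."""
    M_comp, inverse = np.unique(M, axis=0, return_inverse=True)  # distinct rows
    D = np.zeros((M.shape[0], M_comp.shape[0]))
    D[np.arange(M.shape[0]), inverse] = 1.0                      # row i -> its class
    C = D.T / D.T.sum(axis=1, keepdims=True)                     # pseudo-inverse of D
    return C, D, M_comp

M = np.array([[0, 0, 0], [1, 0, 1], [1, 1, 0], [0, 0, 0], [1, 1, 0]], float)
C, D, M_comp = compute_cd(M)
assert np.allclose(D @ M_comp, M) and np.allclose(C @ M, M_comp)
```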
To apply domain compression we rewrite Eq. 1 in matrix form as follows:

$$H_l = \begin{cases} f_0(X;\, \theta_0) & \text{if } l = 0 \\[4pt] f_l\Big(\big[R_{l,1}, \ldots, R_{l,\pi}, \ldots, R_{l,|\Pi_l|}\big]\ \mathrm{blockdiag}\big(H_{l-1}, \ldots, H_{l-1}\big);\ \theta_l\Big) & \text{otherwise} \end{cases} \qquad (3)$$

where the concatenated block $[R_{l,1}, \ldots, R_{l,|\Pi_l|}]$ has size $|S_l| \times |\Pi_l||S_{l-1}|$, the block-diagonal matrix stacks $|\Pi_l|$ copies of $H_{l-1}$ and has size $|\Pi_l||S_{l-1}| \times |\Pi_l| d_{l-1}$, and $H_l$ has size $|S_l| \times d_l$. Furthermore:

- $H_l \in \mathbb{R}^{|S_l| \times d_l}$ is the matrix that represents the $d_l$-dimensional encodings of the objects in $S_l$. The rows of $H_l$ are the vector representations $h_i$ in Eq. 1, while the rows of $H_{l-1}$ are the vector representations $h_j$ in Eq. 1;
- $X \in \mathbb{R}^{|S_0| \times p}$ is the matrix that represents the $p$-dimensional encodings of the vertex attributes in $V$ (i.e. the rows of $X$ are the $x_{v_i}$ of Eq. 1);
- $f_l(\cdot\,; \theta_l)$ is unchanged w.r.t. Eq. 1 and is applied to its input matrices row-wise;
- $R_{l,\pi} \in \mathbb{R}^{|S_l| \times |S_{l-1}|}\ \forall \pi \in \Pi_l$ are the matrix representations of the $\mathcal{R}_{l,\pi}$-convolution relations of Eq. 1, whose elements are $(R_{l,\pi})_{ij} = 1$ if $(o_i, o_j) \in \mathcal{R}_{l,\pi}$ and 0 otherwise.

Domain compression on Eq. 3 is performed by the DOMAIN-COMPRESSION procedure (see Algorithm 3) that takes as input the attribute matrix $X$ and the part-of matrices $R_{l,\pi}$ and returns their compressed versions. The procedure starts by invoking (line 1) the procedure COMPUTE-CD on $X$ to obtain the compression and decompression matrices $C_0$ and $D_0$ respectively. The compression matrix $C_0$ is used to compress $X$ (line 2), then we start iterating over the levels $l = 1, \ldots, L$ of the H-hierarchical decomposition (line 4) and compress the $R_{l,\pi}$ matrices. The compression of the $R_{l,\pi}$ matrices is done by right-multiplying them by the decompression matrix $D_{l-1}$ of the previous level $l - 1$ (line 5). In this way we collapse the parts of relation $\mathcal{R}_{l,\pi}$ (i.e. the columns of $R_{l,\pi}$) as these were identified in stratum $S_{l-1}$ as identical objects (i.e. those objects corresponding to the rows of $X$ or $R_{l-1,\pi}$ collapsed during the previous step). The result is a list $R^{col\_comp} = [R_{l,\pi} D_{l-1},\ \forall \pi = 1, \ldots, |\Pi_l|]$ of column-compressed $R_{l,\pi}$-matrices. We proceed collapsing equivalent objects in stratum $S_l$, i.e. those made of identical sets of parts: we find symmetries in $R^{col\_comp}$ by invoking COMPUTE-CD (line 6) and obtain a new pair $C_l$, $D_l$ of compression and decompression matrices respectively. Finally the compression matrix $C_l$ is applied to the column-compressed matrices in $R^{col\_comp}$ in order to obtain the $|\Pi_l|$ compressed matrices of stratum $S_l$ (line 8).

Algorithm 3 DOMAIN-COMPRESSION(X, R)
1  C_0, D_0 = COMPUTE-CD(X)
2  X_comp = C_0 X  // Compress the X matrix.
3  R_comp = {}  // Initialize an empty container for compressed matrices.
4  for l = 1 to L
5      R_col_comp = [R_{l,pi} D_{l-1}, for all pi = 1, ..., |Pi_l|]  // column compression
6      C_l, D_l = COMPUTE-CD(R_col_comp)
7      for pi = 1 to |Pi_l|
8          R_comp_{l,pi} = C_l R_col_comp_pi  // row compression
9  return X_comp, R_comp

Algorithm 3 allows us to compute the domain-compressed version of Eq. 3, which can be obtained by replacing $X$ with $X^{comp} = C_0 X$, $R_{l,\pi}$ with $R^{comp}_{l,\pi} = C_l R_{l,\pi} D_{l-1}$ and $H_l$ with $H^{comp}_l$. Willing to recover the original encodings $H_l$ we just need to employ the decompression matrix $D_l$ on the compressed encodings $H^{comp}_l$, since $H_l = D_l H^{comp}_l$.

As we can see by substituting $S_l$ with $S^{comp}_l$, the more are the symmetries (i.e. when $|S^{comp}_l| \ll |S_l|$) the greater the domain compression will be.
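Putting it together, a compact NumPy sketch (ours) of the DOMAIN-COMPRESSION loop, reusing the compute_cd helper sketched above:

```python
import numpy as np

def domain_compression(X, R):
    """X: (|S_0|, p) attribute matrix; R[l] = list of (|S_l|, |S_{l-1}|) matrices R_{l,pi}.
    Returns the compressed attribute matrix and compressed part-of matrices."""
    C, D, X_comp = compute_cd(X)                   # lines 1-2
    R_comp = []                                    # line 3
    for R_l in R:                                  # line 4, l = 1..L
        col_comp = [R_pi @ D for R_pi in R_l]      # line 5: column compression
        stacked = np.hstack(col_comp)              # one row per candidate object of S_l
        C, D, _ = compute_cd(stacked)              # line 6: collapse identical part sets
        R_comp.append([C @ M for M in col_comp])   # lines 7-8: row compression
    return X_comp, R_comp
```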
When learning with graph inputs two fundamental design aspects must be taken into account: the choice of the pattern generator and the choice of the matching operator. The former decomposes the graph input in substructures while the latter allows us to compare the substructures.

Among the patterns considered in the graph kernel literature we have paths, shortest paths, walks (Kashima et al., 2003), subtrees (Ramon & Gartner, 2003; Shervashidze et al., 2011) and neighborhood subgraphs (Costa & De Grave, 2010). The similarity between graphs G and G' is computed by counting the number of matches between their common substructures (i.e. a kernel on the sets of the substructures). The match between two substructures can be defined by using graph isomorphism or some other weaker graph invariant.

When the number of substructures to enumerate is infinite or exponential with the size of the graph (as may be the case for random walks and shortest paths respectively), the kernel between the two graphs is computed without generating an explicit feature map. Learning with an implicit feature map is not scalable as it has a space complexity quadratic in the number of training examples (because we need to store the Gram matrix in memory).

Other graph kernels such as the Weisfeiler-Lehman Subtree Kernel (WLST) (Shervashidze et al., 2011) and the Neighborhood Subgraph Pairwise Distance Kernel (NSPDK) (Costa & De Grave, 2010) deliberately choose a pattern generator that scales polynomially and produces an explicit feature map. However, the vector representations produced by WLST and NSPDK are handcrafted and not learned.

A recent work by Yanardag & Vishwanathan (2015) proposes to use pattern generators such as graphlets, shortest paths and WLST subtrees to transform input graphs into documents. The generated substructures are then treated as words and embedded in the Euclidean space with a CBOW or a Skip-gram model. The deep upgrade of existing graph kernels is performed by reweighing the counts of the substructures by the square root of their word-vector self similarity.

Another recent work by Niepert et al. (2016) upgrades the convolutional neural networks (CNNs) for images to graphs. While the receptive field of a CNN is usually a square window, Niepert et al. (2016) employ neighborhood subgraphs as receptive fields. As nodes in graphs do not have a specific temporal or spatial order, Niepert et al. (2016) employ vertex invariants to impose an order on the nodes of the subgraphs/receptive fields."}, {"section_index": "5", "section_name": "4.1 DATASETS", "section_text": "In order to answer the experimental questions we tested our method on six publicly available datasets first proposed by Yanardag & Vishwanathan (2015).

- COLLAB is a dataset where each graph represents the ego-network of a researcher, and the task is to determine the field of study of the researcher between High Energy Physics, Condensed Matter Physics and Astro Physics.
- IMDB-BINARY and IMDB-MULTI are datasets derived from IMDB where in each graph the vertices represent actors/actresses and the edges connect people who have performed in the same movie. Collaboration graphs are generated from movies belonging to genres Action and Romance for IMDB-BINARY and Comedy, Romance and Sci-Fi for IMDB-MULTI, and for each actor/actress in those genres an ego-graph is extracted. The task is to identify the genre from which the ego-graph has been generated.
- REDDIT-BINARY, REDDIT-MULTI5K and REDDIT-MULTI12K are datasets where each graph is derived from a discussion thread from Reddit. In those datasets each vertex represents a distinct user and two users are connected by an edge if one of them has responded to a post of the other in that discussion. The task in REDDIT-BINARY is to discriminate between threads originating from a discussion-based subreddit (TrollXChromosomes, atheism) or from a question/answer-based subreddit (IAmA, AskReddit). The task in REDDIT-MULTI5K and REDDIT-MULTI12K is a multiclass classification problem where each graph is labeled with the subreddit where it has originated (worldnews, videos, AdviceAnimals, aww, mildlyinteresting for REDDIT-MULTI5K and AskReddit, AdviceAnimals, atheism, aww, IAmA, mildlyinteresting, Showerthoughts, videos, todayilearned, worldnews, TrollXChromosomes for REDDIT-MULTI12K).
"}, {"section_index": "6", "section_name": "4.2 EXPERIMENTS", "section_text": "In our experiments we chose an H-hierarchical decomposition called Ego Graph Neural Network (EGNN), that mimics the graph kernel NSPDK with the distance parameter set to 0.

Before applying EGNN we turn unattributed graphs (V, E) into attributed graphs (V, E, X) by annotating their vertices v ∈ V with attributes x_v ∈ X. We label vertices v of G with their degree and encode this information into the attributes x_v by employing the 1-hot encoding of the degree.

EGNN decomposes attributed graphs G = (V, E, X) into a 3-level H-hierarchical decomposition with the following strata (see Figure 1 for a pictorial representation of EGNN):

- stratum S_0 contains objects o_v that are in one-to-one correspondence with the vertices v ∈ V;
- stratum S_1 contains v_root-rooted r-neighborhood subgraphs (i.e. ego graphs) e = (v_root, V_e, E_e) of radius r = 0, 1, ..., R and has part-of alphabet Π_1 = {ROOT, ELEM}. Objects o_v ∈ S_0 are "ELEM-part-of" ego graph e if v ∈ V_e \ {v_root}, while they are "ROOT-part-of" ego graph e if v = v_root;
- stratum S_2 contains the graph G that we want to classify and has part-of alphabet Π_2 = {0, 1}, which corresponds to the radius of the ego graphs e ∈ S_1 of which G is made.

E1 We experimented with SAEN applying the EGNN H-decomposition on all the datasets. For each dataset, we manually chose the parameters of SAEN, i.e. the number of hidden layers for each stratum, the size of each layer and the maximum radius R. We used the Leaky ReLU (Maas et al.) activation function on all the units. We report the chosen parameters in Table A1 of the appendix. In all our experiments we trained the neural networks by using the Adam algorithm to minimize a cross entropy loss.

The classification accuracy of SAEN was measured with 10-times 10-fold cross-validation. We manually chose the number of layers and units for each level of the part-of decomposition; the number of epochs was chosen manually for each dataset and we kept the same value for all the 100 runs of the 10-times 10-fold cross-validation.

Table 4: Comparison of accuracy results.

DATASET           DGK (Yanardag & Vishwanathan, 2015)   PSCN (Niepert et al., 2016)   SAEN (our method)
COLLAB            73.09 ± 0.25                          72.60 ± 2.16                  75.63 ± 0.31
IMDB-BINARY       66.96 ± 0.56                          71.00 ± 2.29                  71.26 ± 0.74
IMDB-MULTI        44.55 ± 0.52                          45.23 ± 2.84                  49.11 ± 0.64
REDDIT-BINARY     78.04 ± 0.39                          86.30 ± 1.58                  86.08 ± 0.53
REDDIT-MULTI5K    41.27 ± 0.18                          49.10 ± 0.70                  52.24 ± 0.38
REDDIT-MULTI12K   32.22 ± 0.10                          41.32 ± 0.42                  46.72 ± 0.23

The mean accuracies and their standard deviations obtained by our method are reported in Table 4, where we compare these results with those obtained by Yanardag & Vishwanathan (2015) and by Niepert et al. (2016).

Although our method was conceived for social network data, it can also handle other types of graphs. For the sake of completeness, in Table 5 we report the mean accuracies obtained with SAEN on the molecule and protein datasets studied in previous works (e.g. Niepert et al. (2016)).

Table 1: Comparison of sizes and runtimes of the datasets before and after the compression.

                  SIZE (MB)                   RUNTIME
DATASET           ORIGINAL  COMP.   RATIO     ORIGINAL   COMP.    SPEEDUP
COLLAB            1190      448     0.38      43' 18"    8' 20"   5.2
IMDB-BINARY       68        34      0.50      3' 9"      0' 30"   6.3
IMDB-MULTI        74        40      0.54      7' 41"     1' 54"   4.0
REDDIT-BINARY     326       56      0.17      TO         2' 35"   100.0
REDDIT-MULTI5K    952       162     0.17      OOM        9' 51"   -
REDDIT-MULTI12K   1788      347     0.19      OOM        29' 55"  -
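As a side note, the degree-based vertex annotation described at the start of this section is straightforward to implement; a minimal sketch (ours; the clipping of very large degrees is our own simplification, not stated in the paper):

```python
import numpy as np

def one_hot_degree_attributes(adjacency, max_degree):
    """Annotate unattributed vertices with a 1-hot encoding of their degree.

    adjacency[v] -- iterable of neighbours of vertex v
    Returns X of shape (|V|, max_degree + 1), one row per vertex.
    """
    X = np.zeros((len(adjacency), max_degree + 1))
    for v, neigh in enumerate(adjacency):
        X[v, min(len(neigh), max_degree)] = 1.0   # clip very large degrees
    return X

# toy graph: a path 0-1-2
print(one_hot_degree_attributes([[1], [0, 2], [1]], max_degree=3))
```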
E2 In Table 1 we show the file sizes of the preprocessed datasets before and after the compression, together with the data compression ratio.³ We also estimate the benefit of the relational compression from a computational time point of view and report the measurement of the runtime for 1 run with and without compression, together with the speedup factor.

³ The sizes of the uncompressed files are shown for the sole purpose of computing the data compression ratio. Indeed the last version of our code compresses the files on the fly.

For the purpose of this experiment, all tests were run on a computer with two 8-core Intel Xeon E5-2665 processors and 94 GB RAM. Uncompressed datasets which exhausted our server's memory during the test are marked as "OOM" (out of memory) in the table, while those which exceeded the time limit of 100 times the time needed for the compressed version are marked as "TO" (timeout)."}, {"section_index": "7", "section_name": "4.3 DISCUSSION", "section_text": "A1 As shown in Table 4, EGNN performs consistently better than the other two methods on all the social network datasets. This confirms that the chosen H-hierarchical decomposition is effective on this kind of problems. Also the results for molecule and protein datasets (see Table 5) are in line with the current state of the art.

Table 5: Comparison of accuracy on bio-informatics datasets.

A2 The compression algorithm has proven to be effective in improving the computational cost of our method. Most of the datasets improved their runtimes by a factor of at least 4 while maintaining the same expressive power. Moreover, experiments on REDDIT-MULTI5K and REDDIT-MULTI12K have only been possible thanks to the size reduction operated by the algorithm, as the script exhausted the memory while executing the training step on the uncompressed files.

We proposed SAEN, a novel architecture for learning vector representations of H-decompositions of input graphs. We applied SAEN for graph classification on 6 real-world social network datasets, outperforming the current state of the art on 4 of them and obtaining state-of-the-art classification accuracy on the others. Another important contribution of this paper is the domain compression algorithm, which greatly reduces memory usage and allowed us to speed up the training time by a factor of at least 4."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "P. Baldi and G. Pollastri. The principled design of large-scale recursive neural network architectures: DAG-RNNs and the protein structure prediction problem. Journal of Machine Learning Research, 4(Sep):575-602, 2003.

D. Haussler. Convolution kernels on discrete structures. Technical report, Citeseer, 1999.

H. Kashima, K. Tsuda, and A. Inokuchi. Marginalized kernels between labeled graphs. In ICML-03, volume 3, pp. 321-328, 2003.

M. Mladenov, B. Ahmadi, and K. Kersting. Lifted linear programming. In AISTATS-12, pp. 788-797, 2012.

A. Vullo and P. Frasconi. Disulfide connectivity prediction using recursive neural networks and evolutionary information. Bioinformatics, 20(5):653-659, 2004.

P. Yanardag and S.V.N. Vishwanathan. Deep graph kernels. In Proc. of KDD-15, pp.
1365-1374, 2015.

Francesco Orsini, Daniele Baracchi and Paolo Frasconi"}, {"section_index": "9", "section_name": "APPENDIX: SHIFT AGGREGATE EXTRACT NETWORKS", "section_text": "In Table A1 we report for each dataset: the radii r of the neighborhood subgraphs used in the EGNN decomposition and the number of units in the hidden layers for each stratum.

Table A1: Parameters for the neural networks used in the experiments.

                                HIDDEN UNITS
DATASET           RADIUSES r    S_0       S_1      S_2
COLLAB            0, 1          15-5      5-2      5-3
IMDB-BINARY       0, 1, 2       2         5-2      5-3-1
IMDB-MULTI        0, 1, 2       2         5-2      5-3
REDDIT-BINARY     0, 1          10-5      5-2      5-3-1
REDDIT-MULTI5K    0, 1          10        10       6-5
REDDIT-MULTI12K   0, 1          10        10       20-11
MUTAG             0, 1, 2, 3    10        5-5      5-5-1
PTC               0, 1          15        15       15-1
NCI1              0, 1, 2, 3    15        15       15-10-1
PROTEINS          0, 1, 2, 3    3-2       6-5-4    6-3-1
D&D               0, 1, 2, 3    10        5-2      5-3-1"}]
Sy2fzU9gl

[{"section_index": "0", "section_name": "β-VAE: LEARNING BASIC VISUAL CONCEPTS WITH A CONSTRAINED VARIATIONAL FRAMEWORK", "section_text": "Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner

Google DeepMind

{irinah, lmatthey, arkap, cpburgess, glorotx, botvinick, shakir, lerchner}@google.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Unsupervised learning of a disentangled posterior distribution over the underlying generative factors of sensory data is a major challenge in AI research (Bengio et al., 2013; Lake et al., 2016). Most previous attempts required a priori knowledge of the number and/or nature of the data generative factors (Hinton et al., 2011; Rippel & Adams, 2013; Reed et al., 2014; Zhu et al., 2014; Yang et al., 2015; Goroshin et al., 2015; Kulkarni et al., 2015; Cheung et al., 2015; Whitney et al., 2016; Karaletsos et al., 2016). This is not always feasible in the real world, where the newly initialised learner may be exposed to complex data where no a priori knowledge of the generative factors exists, and little to no supervision for discovering the factors is available."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce β-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter β that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that β-VAE with appropriately tuned β > 1 qualitatively outperforms VAE (β = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, β-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter β, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.

The difficulty of learning a task for a given machine learning approach can vary significantly depending on the choice of the data representation. Having a representation that is well suited to the particular task and data domain can significantly improve the learning success and robustness of the chosen model (Bengio et al., 2013). It has been suggested that learning a disentangled representation of the generative factors in the data can be useful for a large variety of tasks and domains (Bengio et al., 2013; Ridgeway, 2016). A disentangled representation can be defined as one where single latent units are sensitive to changes in single generative factors, while being relatively invariant to changes in other factors (Bengio et al., 2013).
For example, a model trained on a dataset of 3D objects might learn independent latent units sensitive to single independent data generative factors, such as object identity, position, scale, lighting or colour, thus acting as an inverse graphics model (Kulkarni et al., 2015). In a disentangled representation, knowledge about one factor can generalise to novel configurations of other factors. According to Lake et al. (2016), disentangled representations could boost the performance of state-of-the-art AI approaches in situations where they still struggle but where humans excel. Such scenarios include those which require knowledge transfer, where faster learning is achieved by reusing learnt representations for numerous tasks; zero-shot inference, where reasoning about new data is enabled by recombining previously learnt factors; or novelty detection.

Figure 1: Manipulating latent variables on celebA: Qualitative results comparing disentangling performance of β-VAE (β = 250), VAE (Kingma & Welling, 2014) (β = 1) and InfoGAN (Chen et al., 2016). In all figures of latent code traversal each block corresponds to the traversal of a single latent variable while keeping others fixed to either their inferred (β-VAE, VAE and DC-IGN where applicable) or sampled (InfoGAN) values. Each row represents a different seed image used to infer the latent values in the VAE-based models, or a random sample of the noise variables in InfoGAN. β-VAE and VAE traversal is over the [-3, 3] range. InfoGAN traversal is over ten dimensional categorical latent variables. Only β-VAE and InfoGAN learnt to disentangle factors like azimuth (a), emotion (b) and hair style (c), whereas VAE learnt an entangled representation (e.g. azimuth is entangled with emotion, presence of glasses and gender). InfoGAN images adapted from Chen et al. (2016). Reprinted with permission."}, {"section_index": "3", "section_name": "approaches to disentangled factor learning have not scaled well (Schmidhuber, 1992; Desjardins et al., 2012; Tang et al., 2013; Cohen & Welling, 2014; 2015)", "section_text": "Recently a scalable unsupervised approach for disentangled factor learning has been developed, called InfoGAN (Chen et al., 2016). InfoGAN extends the generative adversarial network (GAN; Goodfellow et al., 2014) framework to additionally maximise the mutual information between a subset of the generating noise variables and the output of a recognition network. It has been reported to be capable of discovering at least a subset of data generative factors and of learning a disentangled representation of these factors. The reliance of InfoGAN on the GAN framework, however, comes at the cost of training instability and reduced sample diversity. Furthermore, InfoGAN requires some a priori knowledge of the data, since its performance is sensitive to the choice of the prior distribution and the number of the regularised noise variables. InfoGAN also lacks a principled inference network (although the recognition network can be used as one). The ability to infer the posterior latent distribution from sensory input is important when using the unsupervised model in transfer learning or zero-shot inference scenarios. Hence, while InfoGAN is an important step in the right direction, we believe that further improvements are necessary to achieve a principled way of using unsupervised learning for developing more human-like learning and reasoning in algorithms as described by Lake et al. (2016).
Finally, there is currently no general method for quantifying the degree of learnt disentanglement. Therefore there is no way to quantitatively compare the degree of disentanglement achieved by different models or when optimising the hyperparameters of a single model.

Figure 2: Manipulating latent variables on 3D chairs: Qualitative results comparing disentangling performance of β-VAE (β = 5), VAE (Kingma & Welling, 2014) (β = 1), InfoGAN (Chen et al., 2016) and DC-IGN (Kulkarni et al., 2015). InfoGAN traversal is over the [-1, 1] range. VAE always learns an entangled representation (e.g. chair width is entangled with azimuth and leg style (b)). All models apart from VAE learnt to disentangle the labelled data generative factor, azimuth (a). InfoGAN and β-VAE were also able to discover unlabelled factors in the dataset, such as chair width (b). Only β-VAE, however, learnt about the unlabelled factor of chair leg style (c). InfoGAN and DC-IGN images adapted from Chen et al. (2016) and Kulkarni et al. (2015), respectively. Reprinted with permission.

In this paper we attempt to address these issues. We propose β-VAE, a deep unsupervised generative approach for disentangled factor learning that can automatically discover the independent latent factors of variation in unsupervised data.

We propose augmenting the original VAE framework with a single hyperparameter β that modulates the learning constraints applied to the model. These constraints impose a limit on the capacity of the latent information channel and control the emphasis on learning statistically independent latent factors. β-VAE with β = 1 corresponds to the original VAE framework (Kingma & Welling, 2014; Rezende et al., 2014). With β > 1 the model is pushed to learn a more efficient latent representation of the data, which is disentangled if the data contains at least some underlying factors of variation that are independent. We show that this simple modification allows β-VAE to significantly improve the degree of disentanglement in learnt latent representations compared to the unmodified VAE framework (Kingma & Welling, 2014; Rezende et al., 2014). Furthermore, we show that β-VAE achieves state of the art disentangling performance against both the best unsupervised (InfoGAN: Chen et al., 2016) and semi-supervised (DC-IGN: Kulkarni et al., 2015) approaches for disentangled factor learning on a number of benchmark datasets, such as CelebA (Liu et al., 2015), chairs (Aubry et al., 2014) and faces (Paysan et al., 2009) using qualitative evaluation. Finally, to help quantify the differences, we develop a new measure of disentanglement and show that β-VAE significantly outperforms all our baselines on this measure (ICA, PCA, VAE Kingma & Ba (2014), DC-IGN Kulkarni et al. (2015), and InfoGAN Chen et al. (2016)).

Our main contributions are the following: 1) we propose β-VAE, a new unsupervised approach for learning disentangled representations of independent visual data generative factors; 2) we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models; 3) we demonstrate both qualitatively and quantitatively that our β-VAE approach achieves state-of-the-art disentanglement performance compared to various baselines on a variety of complex datasets.
Our approach is based on the variational autoencoder (VAE) framework (Kingma & Welling, 2014; Rezende et al., 2014), which brings scalability and training stability. While the original VAE work has been shown to achieve limited disentangling performance on simple datasets, such as FreyFaces or MNIST (Kingma & Welling, 2014), disentangling performance does not scale to more complex datasets (e.g. Aubry et al., 2014; Paysan et al., 2009; Liu et al., 2015), prompting the development of more elaborate semi-supervised VAE-based approaches for learning disentangled factors (e.g. Kulkarni et al., 2015; Karaletsos et al., 2016).

Figure 3: Manipulating latent variables on 3D faces: Qualitative results comparing disentangling performance of β-VAE (β = 20), VAE (Kingma & Welling, 2014) (β = 1), InfoGAN (Chen et al., 2016) and DC-IGN (Kulkarni et al., 2015). InfoGAN traversal is over the [-1, 1] range. All models learnt to disentangle lighting (b) and elevation (c). DC-IGN and VAE struggled to continuously interpolate between different azimuth angles (a), unlike β-VAE, which additionally learnt to encode a wider range of azimuth angles than other models. InfoGAN and DC-IGN images adapted from Chen et al. (2016) and Kulkarni et al. (2015), respectively. Reprinted with permission.

Figure 4: Latent factors learnt by β-VAE on celebA: traversal of individual latents demonstrates that β-VAE discovered in an unsupervised manner factors that encode skin colour (a), transition from an elderly male to younger female (b), and image saturation (c).

Let D = {X, V, W} be the set that consists of images $x \in \mathbb{R}^N$ and two sets of ground truth data generative factors: conditionally independent factors $v \in \mathbb{R}^K$, where $\log p(v|x) = \sum_k \log p(v_k|x)$; and conditionally dependent factors $w \in \mathbb{R}^H$. We assume that the images x are generated by the true world simulator using the corresponding ground truth data generative factors: $p(x|v, w) = \mathrm{Sim}(v, w)$.

We want to develop an unsupervised deep generative model that, using samples from X only, can learn the joint distribution of the data x and a set of generative latent factors z ($z \in \mathbb{R}^M$, where $M \ge K$) such that z can generate the observed data x; that is, $p(x|z) \approx p(x|v, w) = \mathrm{Sim}(v, w)$. Thus a suitable objective is to maximise the marginal (log-)likelihood of the observed data x in expectation over the whole distribution of latent factors z:

$$\max_{\theta}\ \mathbb{E}_{p_\theta(z)}\big[p_\theta(x|z)\big] \qquad (1)$$

For a given observation x, we describe the inferred posterior configurations of the latent factors z by a probability distribution $q_\phi(z|x)$. Our aim is to ensure that the inferred latent factors $q_\phi(z|x)$ capture the generative factors v in a disentangled manner. The conditionally dependent data generative factors w can remain entangled in a separate subset of z that is not used for representing v. In order to encourage this disentangling property in the inferred $q_\phi(z|x)$, we introduce a constraint over it by trying to match it to a prior p(z) that can both control the capacity of the latent information bottleneck and embodies the desiderata of statistical independence mentioned above.
This can be achieved if we set the prior to be an isotropic unit Gaussian ($p(z) = \mathcal{N}(0, I)$), hence arriving at the constrained optimisation problem in Eq. 2, where $\epsilon$ specifies the strength of the applied constraint:

$$\max_{\phi,\theta}\ \mathbb{E}_{x \sim D}\big[\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]\big] \quad \text{subject to} \quad D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) < \epsilon \qquad (2)$$

Re-writing Eq. 2 as a Lagrangian under the KKT conditions (Karush, 1939; Kuhn & Tucker, 1951), we obtain:

$$\mathcal{F}(\theta, \phi, \beta; x, z) = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \beta\big(D_{KL}(q_\phi(z|x)\,\|\,p(z)) - \epsilon\big) \qquad (3)$$

where the KKT multiplier β is the regularisation coefficient that constrains the capacity of the latent information channel z and puts implicit independence pressure on the learnt posterior due to the isotropic nature of the Gaussian prior p(z). Since β, ε ≥ 0 according to the complementary slackness KKT condition, Eq. 3 can be re-written to arrive at the β-VAE formulation, namely the familiar variational free energy objective function as described by Jordan et al. (1999), but with the addition of the β coefficient:

$$\mathcal{F}(\theta, \phi, \beta; x, z) \ \geq\ \mathcal{L}(\theta, \phi; x, z, \beta) = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \beta\, D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) \qquad (4)$$

Varying β changes the degree of applied learning pressure during training, thus encouraging different learnt representations. β-VAE where β = 1 corresponds to the original VAE formulation of Kingma & Welling (2014). We postulate that in order to learn disentangled representations of the conditionally independent data generative factors v, it is important to set β > 1, thus putting a stronger constraint on the latent bottleneck than in the original VAE formulation of Kingma & Welling (2014). These constraints limit the capacity of z, which, combined with the pressure to maximise the log likelihood of the training data x under the model, should encourage the model to learn the most efficient representation of the data. Since the data x is generated using at least some conditionally independent ground truth factors v, and the D_KL term of the β-VAE objective function encourages conditional independence in q_φ(z|x), we hypothesise that higher values of β should encourage learning a disentangled representation of v. The extra pressures coming from high β values, however, may create a trade-off between reconstruction fidelity and the quality of disentanglement within the learnt latent representations. Disentangled representations emerge when the right balance is found between information preservation (reconstruction cost as regularisation) and latent channel capacity restriction (β > 1). The latter can lead to poorer reconstructions due to the loss of high frequency details when passing through a constrained latent bottleneck. Hence, the log likelihood of the data under the learnt model is a poor metric for evaluating disentangling in β-VAEs. Instead we propose a quantitative metric that directly measures the degree of learnt disentanglement in the latent representation.

Since our proposed hyperparameter β directly affects the degree of learnt disentanglement, we would like to estimate the optimal β for learning a disentangled latent representation directly. However, it is not possible to do so. This is because the optimal β will depend on the value of ε in Eq. 2. Different datasets and different model architectures will require different optimal values of ε. However, when optimising β in Eq. 4, we are indirectly also optimising ε for the best disentanglement (see Sec. A.7 for details), and while we can not learn the optimal value of β directly, we can instead estimate it using either our proposed disentanglement metric (see Sec. 3) or through visual inspection heuristics.
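For concreteness, a minimal sketch (ours) of the Eq. 4 objective as a training loss, assuming the standard VAE setup of a diagonal Gaussian encoder emitting mu and log_var and a Bernoulli decoder; the only departure from the plain VAE loss is the β factor on the KL term:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta):
    """Negative of Eq. 4: reconstruction term + beta * KL(q(z|x) || N(0, I))."""
    eps = 1e-7
    # Bernoulli reconstruction log-likelihood (binary cross-entropy), summed over pixels
    recon = -np.sum(x * np.log(x_recon + eps) + (1 - x) * np.log(1 - x_recon + eps), axis=1)
    # Closed-form KL between the diagonal Gaussian posterior and the unit Gaussian prior
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1)
    return np.mean(recon + beta * kl)   # beta = 1 recovers the standard VAE bound
```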
3 DISENTANGLEMENT METRIC

Figure 5: Schematic of the proposed disentanglement metric: over a batch of L samples, each pair of images has a fixed value for one target generative factor y (here y = scale) and differs on all others. A linear classifier is then trained to identify the target factor using the average pairwise difference z_diff in the latent space over L samples.

It is important to be able to quantify the level of disentanglement achieved by different models. Designing a metric for this, however, is not straightforward. We begin by defining the properties that we expect a disentangled representation to have. Then we describe our proposed solution for quantifying the presence of such properties in a learnt representation.

As stated above, we assume that the data is generated by a ground truth simulation process which uses a number of data generative factors, some of which are conditionally independent, and we also assume that they are interpretable. For example, the simulator might sample independent factors corresponding to object shape, colour and size to generate an image of a small green apple. Because of the independence property, the simulator can also generate small red apples or big green apples. A representation of the data that is disentangled with respect to these generative factors, i.e. which encodes them in separate latents, would enable robust classification even using very simple linear classifiers (hence providing interpretability). For example, a classifier that learns a decision boundary that relies on object shape would perform as well when other data generative factors, such as size or colour, are varied.

Note that a representation consisting of independent latents is not necessarily disentangled, according to our desiderata. Independence can readily be achieved by a variety of approaches (such as PCA or ICA) that learn to project the data onto independent bases. Representations learnt by such approaches do not in general align with the data generative factors and hence may lack interpretability. For this reason, a simple cross-correlation calculation between the inferred latents would not suffice as a disentanglement metric.

Our proposed disentangling metric, therefore, measures both the independence and interpretability (due to the use of a simple classifier) of the inferred latents. To apply our metric, we run inference on a number of images that are generated by fixing the value of one data generative factor while randomly sampling all others. If the independence and interpretability properties hold for the inferred representations, there will be less variance in the inferred latents that correspond to the fixed generative factor. We use a low capacity linear classifier to identify this factor and report the accuracy value as the final disentanglement metric score. Smaller variance in the latents corresponding to the target factor will make the job of this classifier easier, resulting in a higher score under the metric. See Fig. 5 for a representation of the full process.

More formally, we start from a dataset D = {X, V, W} as described in Sec. 2, assumed to contain a balanced distribution of ground truth factors (v, w), where image data points are obtained using a ground truth simulator process x ~ Sim(v, w). We also assume we are given labels identifying a subset of the independent data generative factors v ∈ V for at least some instances. We then construct a batch of B vectors z^b_diff, to be fed as inputs to a linear classifier, as follows:

1. Choose a target factor index y (e.g. y = scale in Fig. 5).
2. For a batch of L samples:
   (a) Sample two sets of latent representations, v_{1,l} and v_{2,l}, enforcing [v_{1,l}]_k = [v_{2,l}]_k if k = y (so that the value of factor k = y is kept fixed).
   (b) Simulate image x_{1,l} ~ Sim(v_{1,l}), then infer z_{1,l} = μ(x_{1,l}), using the encoder q(z|x) ~ N(μ(x), σ(x)). Repeat the process for v_{2,l}.
   (c) Compute the difference z^l_diff = |z_{1,l} - z_{2,l}|, the absolute linear difference between the inferred latent representations.
3. Use the average z^b_diff = (1/L) Σ_{l=1}^{L} z^l_diff to predict p(y | z^b_diff) (again, y = scale in Fig. 5) and report the accuracy of this predictor as the disentanglement metric score.

The classifier's goal is to predict the index y of the generative factor that was kept fixed for a given z^b_diff. We choose a linear classifier with low VC-dimension in order to ensure it has no capacity to perform nonlinear disentangling by itself. We take differences of two inferred latent vectors to reduce the variance in the inputs to the classifier, and to reduce the conditional dependence on the inputs x. This completes the process.
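A schematic NumPy sketch (ours) of the metric construction above; simulate and encode_mean are placeholders for the ground-truth simulator Sim and the encoder mean μ(x), and the linear classifier itself is left abstract:

```python
import numpy as np

def disentanglement_batch(simulate, encode_mean, factor_ranges, y, L, rng):
    """Build one classifier input z_diff^b for target factor index y (steps 1-3)."""
    z_diffs = []
    for _ in range(L):
        v1 = [rng.choice(r) for r in factor_ranges]   # sample all factors
        v2 = [rng.choice(r) for r in factor_ranges]
        v2[y] = v1[y]                                 # keep factor y fixed
        z1, z2 = encode_mean(simulate(v1)), encode_mean(simulate(v2))
        z_diffs.append(np.abs(z1 - z2))               # step 2(c)
    return np.mean(z_diffs, axis=0)                   # step 3: average over L

# The metric score is the accuracy of a low-capacity linear classifier
# (e.g. logistic regression) trained to predict y from many such z_diff^b vectors.
```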
"}, {"section_index": "4", "section_name": "4.1 QUALITATIVE BENCHMARKS", "section_text": "In this section we first qualitatively demonstrate that our proposed β-VAE framework consistently discovers more latent factors and disentangles them in a cleaner fashion than either unmodified VAE (Kingma & Welling, 2014) or state of the art unsupervised (InfoGAN: Chen et al., 2016) and semi-supervised (DC-IGN: Kulkarni et al., 2015) solutions for disentangled factor learning on a variety of benchmarks. We then quantify and characterise the differences in disentangled factor learning between our β-VAE framework and a variety of benchmarks using our proposed new disentangling metric.

We trained β-VAE (see Tbl. 1 for architecture details) on a variety of datasets commonly used to evaluate disentangling performance of models: celebA (Liu et al., 2015), chairs (Aubry et al., 2014) and faces (Paysan et al., 2009). Figures 1-3 provide a qualitative comparison of the disentangling performance of β-VAE, VAE (β = 1) (Kingma & Welling, 2014), InfoGAN (Chen et al., 2016) and DC-IGN (Kulkarni et al., 2015) as appropriate.

It can be seen that across all datasets β-VAE is able to automatically discover and learn to disentangle all of the factors learnt by the semi-supervised DC-IGN (Kulkarni et al., 2015): azimuth (Fig. 3a, Fig. 2a), lighting and elevation (Fig. 3b,c). Often it acts as a more convincing inverse graphics network than DC-IGN (e.g. Fig. 3a) or InfoGAN (e.g. Fig. 2a, Fig. 1a-c or Fig. 3a). Furthermore, unlike DC-IGN, β-VAE requires no supervision and hence can learn about extra unlabelled data generative factors that DC-IGN can not learn by design, such as chair width or leg style (Fig. 2b,c). The unsupervised InfoGAN (Chen et al., 2016) approach shares this quality with β-VAE, and the two frameworks tend to discover overlapping, but not necessarily identical sets of data generative factors. For example, both β-VAE and InfoGAN (but not DC-IGN) learn about the width of chairs (Fig. 2b). Only β-VAE, however, learns about the chair leg style (Fig. 2c).
It is interesting to note how β-VAE is able to generate an armchair with a round office chair base, even though such armchairs do not exist in the dataset (or, perhaps, reality). Furthermore, only β-VAE is able to discover all three factors of variation (chair azimuth, width and leg style) within a single model, while InfoGAN learns to allocate its continuous latent variable to either azimuth or width. InfoGAN sometimes discovers factors that β-VAE does not precisely disentangle, such as the presence of sunglasses in celebA. β-VAE does, however, discover numerous extra factors such as skin colour, image saturation, and age/gender that are not reported in the InfoGAN paper (Chen et al., 2016) (Fig. 4). Furthermore, β-VAE latents tend to learn a smooth continuous transformation over a wider range of factor values than InfoGAN (e.g. rotation over a wider range of angles, as shown in Figs. 1-3a).

Overall β-VAE tends to consistently and robustly discover more latent factors and learn cleaner disentangled representations of them than either InfoGAN or DC-IGN. This holds even on such challenging datasets as celebA. Furthermore, unlike InfoGAN and DC-IGN, β-VAE requires no design decisions or assumptions about the data, and is very stable to train.

When compared to the unmodified VAE baseline (β = 1), β-VAE consistently learns significantly more disentangled latent representations. For example, when learning about chairs, VAE entangles chair width with leg style (Fig. 2b). When learning about celebA, VAE entangles azimuth with emotion and gender (Fig. 1a); emotion with hair style, skin colour and identity (Fig. 1b); while the VAE fringe latent also codes for baldness and head size (Fig. 1c). Although VAE performs relatively well on the faces dataset, it still struggles to learn a clean representation of azimuth (Fig. 3a). This, however, suggests that a continuum of disentanglement quality exists, and it can be traversed by varying β within the β-VAE framework. While increasing β often leads to better disentanglement, it may come at the cost of blurrier reconstructions and losing representations for some factors, particularly those that correspond to only minor changes in pixel space."}, {"section_index": "5", "section_name": "4.2 QUANTITATIVE BENCHMARKS", "section_text": "In order to quantitatively compare the disentangling performance of β-VAE against various baselines we created a synthetic dataset of 737,280 binary 2D shapes (heart, oval and square) generated from the Cartesian product of the shape and four independent generative factors v_k defined in vector graphics: position X (32 values), position Y (32 values), scale (6 values) and rotation (40 values over the 2π range). To ensure smooth affine object transforms, each two subsequent values for each factor v_k were chosen to ensure minimal differences in pixel space given 64x64 pixel image resolution. This dataset was chosen because it contains no confounding factors apart from its five independent data generative factors (identity, position X, position Y, scale and rotation). This gives us knowledge of the ground truth for comparing the disentangling performance of different models in an objective manner.
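Since the dataset is the Cartesian product of the factor values, it can be enumerated directly; a small sketch (ours, with render standing in for the vector-graphics rasteriser, which is not specified in the paper):

```python
from itertools import product
import numpy as np

shapes = ["heart", "oval", "square"]
pos_x, pos_y = range(32), range(32)
scales = range(6)
rotations = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)

factors = list(product(shapes, pos_x, pos_y, scales, rotations))
assert len(factors) == 737_280  # 3 * 32 * 32 * 6 * 40

# dataset = np.stack([render(*f) for f in factors])  # 64x64 binary images
```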
We used our proposed disentanglement metric (see Sec. 3) to quantitatively compare the ability of β-VAE to automatically discover and learn a disentangled representation of the data generative factors of the synthetic dataset of 2D shapes described above with that of a number of benchmarks (see Tbl. 1 in Appendix for model architecture details). The table in Fig. 6 (left) reports the classification accuracy of the disentanglement metric for 5,000 test samples. It can be seen that β-VAE (β = 4) significantly outperforms all baselines, such as an untrained VAE and the original VAE formulation of Kingma & Welling (2014) (β = 1) with the same architecture as β-VAE, the top ten PCA or ICA components of the data (see Sec. A.3 for details), or when using the raw pixels directly. β-VAE also does better than InfoGAN. Remarkably, β-VAE performs on the same level as DC-IGN despite the latter being semi-supervised and the former wholly unsupervised. Furthermore, β-VAE achieved similar classification accuracy as the ground truth vectors used for data generation, thus suggesting that it was able to learn a very good disentangled representation of the data generative factors.

Figure 6: Disentanglement metric classification accuracy for 2D shapes dataset. Left: Accuracy for different models and training regimes. Right: Positive correlation is present between the size of z and the optimal normalised values of β for disentangled factor learning for a fixed β-VAE architecture. β values are normalised by latent z size m and input x size n. Note that β values are not uniformly sampled. Orange approximately corresponds to unnormalised β = 1. Good reconstructions are associated with entangled representations (lower disentanglement scores). Disentangled representations (high disentanglement scores) often result in blurry reconstructions.

Model            Disentanglement metric score
Ground truth     100%
Raw pixels       45.75 ± 0.8%
PCA              84.9 ± 0.4%
ICA              42.03 ± 10.6%
DC-IGN           99.3 ± 0.1%
InfoGAN          73.5 ± 0.9%
VAE untrained    44.14 ± 2.5%
VAE              61.58 ± 0.5%
β-VAE            99.23 ± 0.1%
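The latent traversals analysed next (and shown in Fig. 7, rows 4-8) amount to decoding while sweeping one latent unit and keeping the rest fixed; a minimal sketch (ours, with decode standing in for the trained decoder):

```python
import numpy as np

def traverse_latent(decode, z_base, unit, values=np.linspace(-3.0, 3.0, 10)):
    """Decode images while varying a single latent unit, keeping the rest fixed.

    decode -- decoder mapping a latent vector to an image
    z_base -- latent means inferred from a seed image, shape (M,)
    unit   -- index of the latent to traverse
    """
    frames = []
    for v in values:
        z = z_base.copy()
        z[unit] = v                 # sweep over ~3 std of the N(0, I) prior
        frames.append(decode(z))
    return np.stack(frames)
```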
We also examined qualitatively the representations learnt by β-VAE, VAE, InfoGAN and DC-IGN on the synthetic dataset of 2D shapes. Fig. 7A demonstrates that after training, β-VAE with β = 4 learnt a good (while not perfect) disentangled representation of the data generative factors, and its decoder learnt to act as a rendering engine. Its performance was comparable to that of DC-IGN (Fig. 7C), with the difference that DC-IGN required a priori knowledge about the quantity of the data generative factors, while β-VAE was able to discover them in an unsupervised manner. The most informative latent units z_m of β-VAE have the highest KL divergence from the unit Gaussian prior (p(z) = N(0, I)), while the uninformative latents have KL divergence close to zero. Fig. 7A demonstrates the selectivity of each latent z_m to the independent data generating factors: z̄_m = f(v_k) ∀v_k ∈ {v_positionX, v_positionY, v_scale, v_rotation} (top three rows), where z̄_m is the learnt Gaussian mean of latent unit z_m. The effect of traversing each latent z_m on the resulting reconstructions is shown in the bottom five rows of Fig. 7A. The latents z_6 and z_2 learnt to encode X and Y coordinates of the objects respectively; unit z_1 learnt to encode scale; and units z_5 and z_7 learnt to encode rotation. The frequency of oscillations in each rotational latent corresponds to the rotational symmetry of the corresponding object (2π for heart, π for oval and π/2 for square). Furthermore, the two rotational latents seem to encode cos and sin rotational coordinates, while the positional latents align with the Cartesian axes.

Fig. 7B demonstrates that the unmodified VAE baseline (β = 1) is not able to disentangle generative factors in the data as well as β-VAE with appropriate learning pressures. Instead each latent z_m (apart from z_9, which learnt rotation) encodes at least two data generative factors. InfoGAN also achieved a degree of disentangling (see Fig. 7D), particularly for positional factors. However, despite our best efforts to train InfoGAN, we were not able to achieve the same degree of disentangling in other factors, such as rotation, scale and shape. We also found its ability to generate the different shapes in the dataset to be inaccurate and unstable during training, possibly due to reported limitations of the GAN framework, which can struggle to learn the full data distribution and instead will often learn a small subset of its modes (Salimans et al., 2016; Zhao et al., 2016).

Understanding the effects of β. We hypothesised that constrained optimisation is important for enabling deep unsupervised models to learn disentangled representations of the independent data generative factors (Sec. 2). In the β-VAE framework this corresponds to tuning the β coefficient. One way to view β is as a mixing coefficient (see Sec. A.6 for a derivation) for balancing the magnitudes of gradients from the reconstruction and the prior-matching components of the VAE lower bound formulation in Eq. 4 during training. In this context it makes sense to normalise β by latent z size m and input x size n in order to compare its different values across different latent layer sizes. We found that larger latent layer sizes m require stronger constraint pressures (higher β values), see Fig. 6 (Right). Furthermore, the relationship of β for a given m is characterised by an inverted U curve. When β is too low or too high the model learns an entangled latent representation due to either too much or too little capacity in the latent z bottleneck. We find that in general β > 1 is necessary to achieve good disentanglement. However if β is too high and the resulting capacity of the latent channel is lower than the number of data generative factors, then the learnt representation necessarily has to be entangled (as a low-rank projection of the true data generative factors will compress them in a non-factorial way to still capture the full data distribution well). We also note that VAE reconstruction quality is a poor indicator of learnt disentanglement. Good disentangled representations often lead to blurry reconstructions due to the restricted capacity of the latent information channel z, while entangled representations often result in the sharpest reconstructions. We therefore suggest that one should not necessarily strive for perfect reconstructions when using β-VAEs as unsupervised feature learners, though it is often possible to find the right β-VAE architecture and the right value of β to have both well disentangled latent representations and good reconstructions.

We proposed a principled way of choosing β for datasets with at least weak label information. If label information exists for at least a small subset of the independent data generative factors of variation, one can apply the disentanglement metric described in Sec. 3 to approximate the level of learnt disentanglement for various β choices during a hyperparameter sweep.
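For reference, the normalisation mentioned above can be written explicitly; this is our reading of the text and the Fig. 6 caption, the paper itself only states that β is normalised by the latent size m and input size n:

$$\beta_{\text{norm}} = \frac{\beta\, m}{n}$$

For example, with m = 10 latents and n = 64 × 64 = 4096 inputs, an unnormalised β = 1 gives β_norm ≈ 0.002 and β = 4 gives β_norm ≈ 0.01, matching the scale of the normalised values shown in Fig. 6 (right).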
When such labelled information is not available, the optimal value of β can be found through visual inspection of what effect the traversal of each single latent unit z_m has on the generated images (x|z) in pixel space (as shown in Fig. 7, rows 4-8). For the 2D shapes dataset, we have found that the optimal values of β as determined by visual inspection match closely the optimal values as determined by the disentanglement metric.

Figure 7: A: Representations learnt by a β-VAE (β = 4). Each column represents a latent z_m, ordered according to the learnt Gaussian variance (last row). Row 1 (position) shows the mean activation (red represents high values) of each latent z_m as a function of all 32x32 locations, averaged across objects, rotations and scales. Rows 2 and 3 show the mean activation of each unit z_m as a function of scale (respectively rotation), averaged across rotations and positions (respectively scales and positions). Square is red, oval is green and heart is blue. Rows 4-8 (second group) show reconstructions resulting from the traversal of each latent z_m over three standard deviations around the unit Gaussian prior mean while keeping the remaining 9/10 latent units fixed to the values obtained by running inference on an image from the dataset. B: Similar analysis for VAE (β = 1). C: Similar analysis for DC-IGN, clamping a single latent each for scale, positions, orientation and 5 for shape. D: Similar analysis for InfoGAN, using 5 continuous latents regularized using the mutual information cost, and 5 additional unconstrained noise latents (not shown).

In this paper we have reformulated the standard VAE framework (Kingma & Welling, 2014; Rezende et al., 2014) as a constrained optimisation problem with strong latent capacity constraint and independence prior pressures. By augmenting the lower bound formulation with the β coefficient that regulates the strength of such pressures and, as a consequence, the qualitative nature of the representations learnt by the model, we have achieved state of the art results for learning disentangled representations of data generative factors. We have shown that our proposed β-VAE framework significantly outperforms both qualitatively and quantitatively the original VAE (Kingma & Welling, 2014), as well as state-of-the-art unsupervised (InfoGAN: Chen et al., 2016) and semi-supervised (DC-IGN: Kulkarni et al., 2015) approaches to disentangled factor learning. Furthermore, we have shown that β-VAE consistently and robustly discovers more factors of variation in the data, and it learns a representation that covers a wider range of factor values and is disentangled more cleanly than other benchmarks, all in a completely unsupervised manner.
Unlike InfoGAN and DC-IGN, our approach does not depend on any a priori knowledge about the number or the nature of data generative factors. Our preliminary investigations suggest that the performance of the β-VAE framework may depend on the sampling density of the data generative factors within a training dataset (see Appendix A.8 for more details). It appears that having more densely sampled data generative factors results in better disentangling performance of β-VAE; however, we leave a more principled investigation of this effect to future work.

β-VAE is robust with respect to different architectures, optimisation parameters and datasets, hence requiring few design decisions. Our approach relies on the optimisation of a single hyperparameter β, which can be found directly through a hyperparameter search if weakly labelled data is available to calculate our new proposed disentangling metric. Alternatively the optimal β can be estimated heuristically in purely unsupervised scenarios. Learning an interpretable factorised representation of the independent data generative factors in a completely unsupervised manner is an important precursor for the development of artificial intelligence that understands the world in the same way that humans do (Lake et al., 2016). We believe that using our approach as an unsupervised pretraining stage for supervised or reinforcement learning will produce significant improvements for scenarios such as transfer or fast learning.

We would like to thank Charles Blundell, Danilo Rezende, Tejas Kulkarni and David Pfau for helpful comments that improved the manuscript."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "M. Aubry, D. Maturana, A. Efros, B. Russell, and J. Sivic. Seeing 3d chairs: exemplar part-based 2d-3d alignment using a large dataset of CAD models. In CVPR, 2014.

Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. In IEEE Transactions on Pattern Analysis & Machine Intelligence, 2013.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv, 2016.

Brian Cheung, Jesse A. Levezey, Arjun K. Bansal, and Bruno A. Olshausen. Discovering hidden factors of variation in deep networks. In Proceedings of the International Conference on Learning Representations, Workshop Track, 2015.

T. Cohen and M. Welling. Transformation properties of learned visual representations. In ICLR, 2015.

Taco Cohen and Max Welling. Learning the irreducible representations of commutative Lie groups. arXiv, 2014.

G. Desjardins, A. Courville, and Y. Bengio. Disentangling factors of variation via generative entangling. arXiv, 2012.

Carl Doersch. Tutorial on variational autoencoders. arXiv, 2016.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 2011.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. NIPS, pp. 2672-2680, 2014.

Ross Goroshin, Michael Mathieu, and Yann LeCun. Learning to linearize under uncertainty. NIPS, 2015.

G. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. International Conference on Artificial Neural Networks, 2011.

Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul.
An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
Theofanis Karaletsos, Serge Belongie, and Gunnar Ratsch. Bayesian representation learning with oracle constraints. ICLR, 2016.
W. Karush. Minima of Functions of Several Variables with Inequalities as Side Constraints. Master's thesis, Univ. of Chicago, Chicago, Illinois, 1939.
D. P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv, 2014.
D. P. Kingma and M. Welling. Auto-encoding variational bayes. ICLR, 2014.
H. W. Kuhn and A. W. Tucker. Nonlinear programming. In Proceedings of 2nd Berkeley Symposium, pp. 481-492, 1951.
Tejas Kulkarni, William Whitney, Pushmeet Kohli, and Joshua Tenenbaum. Deep convolutional inverse graphics network. NIPS, 2015.
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. arXiv, 2016.
Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. ICCV, 2015.
P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter. A 3d face model for pose and illumination invariant face recognition. AVSS, 2009.
Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, and David Cournapeau. Scikit-learn: Machine learning in python. Journal of Machine Learning Research, 2011.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv, 2015.
Scott Reed, Kihyuk Sohn, Yuting Zhang, and Honglak Lee. Learning to disentangle factors of variation with manifold interaction. ICML, 2014.
Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv, 2014.
Karl Ridgeway. A survey of inductive biases for factorial representation-learning. arXiv, 2016. URL http://arxiv.org/abs/1612.05299.
Oren Rippel and Ryan Prescott Adams. High-dimensional probability estimation with deep density models. arXiv, 2013.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv, 2016. URL http://arxiv.org/abs/1606.03498.
Jurgen Schmidhuber. Learning factorial codes by predictability minimization.
Neural Computation, 4(6):863-869, 1992.
Wenzhe Shi, Jose Caballero, Ferenc Huszar, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. arXiv, 2016.
Yichuan Tang, Ruslan Salakhutdinov, and Geoffrey Hinton. Tensor analyzers. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, USA, 2013.
William F. Whitney, Michael Chang, Tejas Kulkarni, and Joshua B. Tenenbaum. Understanding visual concepts with continuation learning. arXiv, 2016. URL http://arxiv.org/pdf/1602.06822.pdf.
Jimei Yang, Scott Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. NIPS, 2015.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv, 2016. URL http://arxiv.org/abs/1609.03126.
Z. Zhu, P. Luo, X. Wang, and X. Tang. Multi-view perceptron: a deep model for learning face identity and view representations. In Advances in Neural Information Processing Systems 27, 2014.

A summary of all model architectures used in this paper can be seen in Tbl. 1.

Dataset | Optimiser | Architecture
2D shapes (VAE) | Adagrad, 1e-2 | Input: 4096 (flattened 64x64x1). Encoder: FC 1200, 1200; ReLU activation. Latents: 10. Decoder: FC 1200, 1200, 1200, 4096; Tanh activation. Bernoulli.
2D shapes (DC-IGN) | rmsprop (as in Kulkarni et al., 2015) | Input: 64x64x1. Encoder: Conv 96x3x3, 48x3x3, 48x3x3 (padding 1); ReLU activation and max pooling 2x2. Latents: 10. Decoder: unpooling, Conv 48x3x3, 96x3x3, 1x3x3; ReLU activation, Sigmoid.
2D shapes (InfoGAN) | Adam, 1e-3 (gen), 2e-4 (dis) | Generator: FC 256, 256, Deconv 128x4x4, 64x4x4 (stride 2); Tanh. Discriminator: Conv and FC reverse of generator; Leaky ReLU activation; FC 1, Sigmoid activation. Recognition: Conv and FC shared with discriminator; FC 128, 5; Gaussian. Latents: 10: z_{1...5} ~ Unif(-1, 1), c_{1...5} ~ Unif(-1, 1).
Chairs (VAE) | Adam, 1e-4 | Input: 64x64x1. Encoder: Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride 2), FC 256; ReLU activation. Latents: 32. Decoder: Deconv reverse of encoder; ReLU activation. Bernoulli.
CelebA (VAE) | Adam, 1e-4 | Input: 64x64x3. Encoder: Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride 2), FC 256; ReLU activation. Latents: 32. Decoder: Deconv reverse of encoder; ReLU activation. Gaussian.
3DFaces (VAE) | Adam, 1e-4 | Input: 64x64x1. Encoder: Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride 2), FC 256; ReLU activation. Latents: 32. Decoder: Deconv reverse of encoder; ReLU activation. Bernoulli.

A.2 INFOGAN TRAINING

To train the InfoGAN network described in Tbl. 1 on the 2D shapes dataset (Fig. 7), we followed the training paradigm described in Chen et al. (2016) with the following modifications. For the mutual information regularised latent code, we used 5 continuous variables c_i sampled uniformly from (-1, 1). We used 5 noise variables z_i, as we found that using a reduced number of noise variables improved the quality of generated samples for this dataset. To help stabilise training, we used the instance noise trick described in Shi et al. (2016), adding Gaussian noise to the discriminator inputs (0.2 standard deviation on images scaled to [-1, 1]). We followed Radford et al. (2015) for the architecture of the convolutional layers, and used batch normalisation in all layers except the last in the generator and the first in the discriminator.

A.3 ICA AND PCA BASELINES

In order to calculate the ICA benchmark, we applied the fastICA (Pedregosa et al., 2011) algorithm to the whitened pixel data. Due to memory limitations we had to apply the algorithm to pairwise combinations of the subsets of the dataset corresponding to the transforms of each of the three 2D object identities. We calculated the disentangling metric for all three ICA models trained on each of the three pairwise combinations of 2D objects, before presenting the average of these scores in Fig. 6.

We performed PCA on the raw and whitened pixel data. Both approaches resulted in similar disentangling metric scores. Fig. 6 reports the PCA results calculated using whitened pixel data for more direct comparison with the ICA score.

We used a linear classifier to learn the identity of the generative factor that produced z_diff (see Equation (5) for the process used to obtain samples of z_diff). We used a fully connected linear
classifier to predict p(y|z_diff), where y is one of four generative factors (position X, position Y, scale and rotation). We used softmax output nonlinearity and a negative log likelihood loss function. The classifier was trained using the Adagrad (Duchi et al., 2011) optimisation algorithm with learning rate of 1e-2 until convergence.

D = {V ∈ R^K, W ∈ R^H, X ∈ R^M},   y ~ Unif[1...K]

v_{1,l} ~ p(v),  w_{1,l} ~ p(w),  w_{2,l} ~ p(w),  [v_{2,l}]_k = [v_{1,l}]_k if k = y, [v_{2,l}]_k ~ p(v_k) otherwise   (5)

z_diff = (1/L) ∑_{l=1}^{L} |z_{1,l} − z_{2,l}|

All disentanglement metric score results reported in the paper were calculated in the following manner. Ten replicas of each model with the same hyperparameters were trained using different random seeds to obtain disentangled representations. Each of the ten trained model replicas was evaluated three times using the disentanglement metric score algorithm, each time using a different random seed to initialise the linear classifier. We then discarded the bottom 50% of the thirty resulting scores and reported the remaining results. This was done to control for the outlier results from the few experiments that diverged during training.

The results reported in the table in Fig. 6 (left) were calculated using the following data. Ground truth uses independent data generating factors v (our dataset did not contain any correlated data generating factors w). PCA and ICA decompositions keep the first ten components (PCA components explain 60.8% of variance). β-VAE (β = 4), VAE (β = 1) and VAE untrained have the same fully connected architecture with ten latent units z. InfoGAN uses "inferred" values of the five continuous latents that were regularised with the mutual information objective during training.

A.5 CLASSIFYING THE GROUND TRUTH DATA GENERATIVE FACTORS VALUES

In order to further verify the validity of our proposed disentanglement metric we ran an extra quantitative test: we trained a linear classifier to predict the ground truth value of each of the five data generative factors used to generate the 2D shapes dataset. While this test does not measure disentangling directly (since it does not measure independence of the latent representation), a disentangled representation should make such a classification trivial. It can be seen in Table 2 that the representation learnt by β-VAE is on average the best representation for factor classification across all five factors. It is closely followed by DC-IGN. It is interesting to note that ICA does well only at encoding object identity, while PCA manages to learn a very good representation of object position.

Table 2: Linear classifier classification accuracy for predicting the ground truth values for each data generative factor from different latent representations. Each factor could take a variable number of possible values: 3 for id, 6 for scale, 40 for rotation and 32 for position X or Y. Best performing model results in each column are printed in bold.

L(θ, φ; x, z, β) = E_{q_φ(z|x)}[log p_θ(x|z)] − β D_KL(q_φ(z|x) || p(z))   (6)

E_{q_φ(z|x)}[log p_θ(x|z)] = E_{q_φ(z|x)}[log ∏_n p_θ(x_n|z)] = E_{q_φ(z|x)}[∑_n log p_θ(x_n|z)]

L(θ, φ; x, z, β) ∝ E_{q_φ(z|x)} E_n[log p_θ(x_n|z)] − (β/N) D_KL(q_φ(z|x) || p(z))

We design β-VAE to learn conditionally independent factors of variation in the data. Hence we assume conditional independence of every latent z_m given x (where m ∈ 1...M, and M is the dimensionality of z). Since our prior p(z) is an isotropic unit Gaussian, we can re-write the second term of Eq. 6 as:

D_KL(q_φ(z|x) || p(z)) = ∑_m D_KL(q_φ(z_m|x) || p(z_m))

L(θ, φ; x, z, β) ∝ E_{q_φ(z|x)} E_n[log p_θ(x_n|z)] − (βM/N) E_m[D_KL(q_φ(z_m|x) || p(z_m))]   (10)

Optimising the objective in Eq. 10 is equivalent to optimising the original β-VAE formulation from Sec. 2, but with the additional independence assumptions that let us calculate the data log likelihood and KL divergence terms in expectation over the individual pixels x_n and individual latents z_m.

A.7 RELATIONSHIP BETWEEN β AND ε

For a given ε we can solve the constrained optimisation problem in Eq. 3 (find the optimal (θ*, φ*, λ*) such that F(θ*, φ*, λ*) = 0). We can then re-write our optimal solution to the original optimisation problem in Eq. 2 as a function of ε:

G(θ*(ε), φ*(ε)) = E_{q_{φ*(ε)}(z|x)}[log p_{θ*(ε)}(x|z)]

Now β can be interpreted as the rate of change of the optimal solution to G when varying the constraint ε:

β = δG/δε

We hypothesise that data continuity plays a role in guiding unsupervised models towards learning the correct data manifolds. To test this idea we measure how the degree of learnt disentangling changes with reduced continuity in the 2D shapes dataset. We trained a β-VAE with β = 4 (Figure 7A) on subsamples of the original 2D shapes dataset, where we progressively decreased the generative factor sampling density. Reduction in data continuity negatively correlates with the average pixel-wise (Hamming) distance between two consecutive transforms of each object (normalised by the average number of pixels occupied by each of the two adjacent transforms of an object to account for object scale). Figure 8 demonstrates that as the continuity in the data reduces, the degree of disentanglement in the learnt representations also drops. This effect holds after additional hyperparameter tuning and can not solely be explained by the decrease in dataset size, since the same VAE can learn disentangled representations from a data subset that preserves data continuity but is approximately 55% of the original size (results not shown).

[Figure 8 plot: disentanglement metric score against normalised average Hamming distance (pixels), for Bernoulli noise levels 0.0-0.5.]

Figure 8: Negative correlation between data transform continuity and the degree of disentangling achieved by β-VAE. Abscissa is the average normalised Hamming distance between each of the two consecutive transforms of each object. Ordinate is disentanglement metric score. Disentangling performance is robust to Bernoulli noise added to the data at test time, as shown by slowly degrading classification accuracy up to 10% noise level, considering that the 2D objects occupy on average between 2-7% of the image depending on scale. Fluctuations in classification accuracy for similar Hamming distances are due to the different nature of subsampled generative factors (i.e. symmetries are present in rotation but are lacking in position).

Samples from β-VAE that learnt disentangled (β = 4) and entangled (β = 1) representations can be seen in Figure 9.

We present extra latent traversal plots from β-VAE that learnt disentangled representations of the 3D chairs (Figures 10, 11) and CelebA (Figures 12-14) datasets. Here we show traversals from all informative latents from a large number of seed images.

[Figure 9 panels: β-VAE samples (left), data samples (middle), VAE samples (right).]

Figure 9: Samples from β-VAE trained on the dataset of 2D shapes that learnt either a disentangled (left, β = 4) or an entangled (right, β = 1) representation of the data generative factors. It can be
seen that sampling from an entangled representation results in some unrealistic looking samples. A disentangled representation that inverts the original data generation process does not suffer from such errors.

[Figure 10 panels: traversals of a width/size latent and an azimuth latent.]

Figure 10: Latent traversal plots from β-VAE that learnt disentangled representations on the 3D chairs dataset.

[Figure 11 panels: traversals of a width/size latent and an azimuth latent.]

Figure 11: Latent traversal plots from β-VAE that learnt disentangled representations on the 3D chairs dataset.

[Figure 12 panels: traversals of z1 (background), z2 (skin colour) and z3 (age/gender).]

Figure 12: Latent traversal plots from β-VAE that learnt disentangled representations on the CelebA dataset.

[Figure 13 panels: traversals of z4 (azimuth), z5 (hair parting) and z6 (fringe).]

Figure 13: Latent traversal plots from β-VAE that learnt disentangled representations on the CelebA dataset.

[Figure 14 panels: traversals of z7 (sunglasses/smile) and z8 (saturation).]

Figure 14: Latent traversal plots from β-VAE that learnt disentangled representations on the CelebA dataset.
SJg498clg

NEURAL GRAPH MACHINES: LEARNING NEURAL NETWORKS USING GRAPHS

Thang D. Bui, University of Cambridge, tdb40@cam.ac.uk
Sujith Ravi, Google Research

ABSTRACT

Label propagation is a powerful and flexible semi-supervised learning technique on graphs. Neural network architectures, on the other hand, have proven track records in many supervised learning tasks. In this work, we propose a training objective for neural networks, Neural Graph Machines, for combining the power of neural networks and label propagation. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs. The proposed method is experimentally validated on a wide range of tasks (multi-label classification on social graphs, news categorization and semantic intent classification) using different architectures (NNs, CNNs, and LSTM RNNs).

1 INTRODUCTION

Semi-supervised learning is a powerful machine learning paradigm that can improve the prediction performance compared to techniques that use only labeled data, by leveraging a large amount of unlabeled data. The need for semi-supervised learning arises in many problems in computer vision, natural language processing or social networks, in which getting labeled datapoints is expensive or unlabeled data is abundant and readily available.

There exist a plethora of semi-supervised learning methods. The simplest one uses bootstrapping techniques to generate pseudo-labels for unlabeled data generated from a system trained on labeled data. However, this suffers from label error feedback (Lee, 2013). In a similar vein, autoencoder-based methods often need to rely on a two-stage approach: train an autoencoder using unlabeled data to generate an embedding mapping, and use the learnt embeddings for prediction. In practice, this procedure is often costly and inaccurate. Another example is transductive SVMs (Joachims, 1999), which are too computationally expensive to be used for large datasets. Methods that are based on generative models and amortized variational inference (Kingma et al., 2014) can work well for images and videos, but it is not immediately clear how to extend such techniques to handle sparse and multi-modal inputs or graphs over the inputs. In contrast to the methods above, graph-based techniques such as label propagation (Zhu & Ghahramani; Bengio et al., 2006) often provide a versatile, scalable, and yet effective solution to a wide range of problems. These methods construct a smooth graph over the unlabeled and labeled data. Graphs are also often a natural way to describe the relationships between nodes, such as similarities between embeddings, phrases or images, or connections between entities on the web or relations in a social network. Edges in the graph connect semantically similar nodes or datapoints, and if present, edge weights reflect how strong such similarities are.
By providing a set of labeled nodes, such techniques iteratively refine the node labels by aggregating information from neighbours and propagate these labels to the nodes' neighbours. In practice, these methods often converge quickly and can be scaled to large datasets with a large label space (Ravi & Diao, 2016). We build upon the principle behind label propagation for our method.

* Work done during an internship at Google.

Vivek Ramavajjala, Google Research, vramavaj@google.com

Another key motivation of our work is the recent advances in neural networks and their performance on a wide variety of supervised learning tasks such as image and speech recognition or sequence-to-sequence learning (Krizhevsky et al., 2012; Hinton et al., 2012; Sutskever et al., 2014). Such results are however conditioned on training very large networks on large datasets, which may need millions of labeled training input-output pairs. This begs the question: can we harness previous state-of-the-art semi-supervised learning techniques to jointly train neural networks using limited labeled data and unlabeled data to improve their performance?

Contributions: We propose a discriminative training objective for neural networks with graph augmentation, that can be trained with gradient descent and efficiently scaled to large graphs. In particular, we introduce a regularization term for generic neural network architectures that enforces similarity between nodes in the graphs. This is inspired by the objective function of label propagation. The resulting cost is amenable to stochastic training and can be applied to various model classes. We also investigate using graphs as direct inputs to train neural network classifiers and experimentally demonstrate that this procedure is more efficient and accurate than previous two-stage approaches such as finding embeddings and using them for classification.

The closest approach to our work is the framework proposed by Weston et al. (2012); we extend their work in several ways: (a) our proposed training scheme is flexible, for example multiple graphs from multiple domains can be combined, (b) we provide extensive experiments on different types of neural networks and on properly constructed graphs (in contrast to nearest neighbor graphs in Weston et al. (2012)), (c) we propose using graphs as inputs to the neural networks if there are no input features. Our work is also different from recent works on using neural networks on graphs (e.g. see Niepert et al. (2016)). Instead, we advocate a training objective that uses graphs to augment neural network learning.

In this section, we will lay out the groundwork for our proposed training objective in section 3.

We first provide a concise introduction to label propagation and its training objective. Suppose we are given a graph G = (V, E, W) where V is the set of nodes, E the set of edges and W the edge weight matrix. Let V_l, V_u be the labeled and unlabeled nodes in the graph. The goal is to predict a soft assignment of labels for each node in the graph, Ŷ, given the training label distribution for the seed nodes, Y.
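To make the iterative refinement described above concrete, the following is a minimal NumPy sketch of one style of Jacobi update for this setting. The weights mu1, mu2, mu3 correspond to the trade-off hyperparameters of the objective introduced next; the function name, simplex handling and defaults are our illustrative assumptions, not the paper's implementation.

import numpy as np

def propagate_labels(W, Y_seed, seed_mask, mu1=1.0, mu2=1.0, mu3=0.01, n_iters=50):
    """Jacobi-style label propagation sketch.

    W         : (n, n) symmetric edge weight matrix.
    Y_seed    : (n, L) label distributions; only rows with seed_mask True are trusted.
    seed_mask : (n,) boolean array marking the labeled (seed) nodes.
    """
    n, L = Y_seed.shape
    U = np.full(L, 1.0 / L)                 # uniform prior over the L labels
    Y_hat = np.tile(U, (n, 1))              # start every node at the prior
    seed = seed_mask[:, None].astype(float)
    for _ in range(n_iters):
        # each node aggregates its neighbours' current label distributions
        neigh = mu2 * (W @ Y_hat)
        denom = mu1 * seed + mu2 * W.sum(axis=1, keepdims=True) + mu3
        Y_hat = (mu1 * seed * Y_seed + neigh + mu3 * U) / denom
        Y_hat /= Y_hat.sum(axis=1, keepdims=True)   # approximate the simplex constraint
    return Y_hat

Each pass touches every edge once, which is what allows such updates to be run in parallel or distributed over very large graphs.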
Mathematically, label propagation performs minimization of the following convex objective function, for L labels,

C_LP(Ŷ) = μ1 ∑_{v∈V_l} ||Ŷ_v − Y_v||² + μ2 ∑_{v∈V, u∈N(v)} w_{u,v} ||Ŷ_v − Ŷ_u||² + μ3 ∑_{v∈V} ||Ŷ_v − U||²   (1)

subject to ∑_{l=1}^{L} Ŷ_{vl} = 1, where N(v) is the neighbour node set of the node v, U is the prior distribution over all labels, w_{u,v} is the edge weight between nodes u and v, and μ1, μ2, and μ3 are hyperparameters that balance the contribution of individual terms in the objective. The terms in the objective function above encourage that: (a) the label distribution of seed nodes should be close to the ground truth, (b) the label distribution of neighbouring nodes should be similar, and, (c) if relevant, the label distribution should stay close to our prior belief. This objective function can be solved efficiently using iterative methods such as the Jacobi procedure. That is, in each step, each node aggregates the label distributions from its neighbours and adjusts its own distribution, which is then repeated until convergence. In practice, the iterative updates can be done in parallel or in a distributed fashion, which then allows large graphs with a large number of nodes and labels to be trained efficiently. Bengio et al. (2006) and Ravi & Diao (2016) are good surveys on the topic for interested readers.

Neural networks are a class of non-linear mappings from inputs to outputs, comprised of multiple layers that can potentially learn useful representations for predicting the outputs. We will view various models such as feedforward neural networks, recurrent neural networks and convolutional neural networks in this way. They are often trained by performing maximum likelihood learning, that is, tuning their parameters so that the networks' outputs are close to the ground truth under some criterion,

C_NN(θ) = ∑_n c(g_θ(x_n), y_n)   (2)

where g_θ(·) denotes the overall mapping, parameterized by θ, and c(·) denotes a loss function such as l-2 for regression or cross entropy for classification. The cost function c and the mapping g are typically differentiable w.r.t. θ, which facilitates optimisation via gradient descent. Importantly, this can be scaled to a large number of training instances by employing stochastic training using minibatches of data. However, it is not clear how unlabeled data, if available, can be treated using this objective, or if extra information about the training set, such as relational structures, can be used.

In this section, we devise a discriminative training objective for neural networks, that is inspired by the label propagation objective, uses both labeled and unlabeled data, and can be trained by stochastic gradient descent.

First, we take a close look at the two objective functions discussed in section 2. The label propagation objective in equation 1 makes sure the predicted label distributions of neighbouring nodes are similar, while those of labeled nodes are close to the ground truth. For example: if a cat image and a dog image are strongly connected in a graph, and if the cat node is labeled as animal, the predicted probability of the dog node being animal is also high. In contrast, the neural network training objective in equation 2 only takes into account the labeled instances, and ensures correct predictions on the training set. As a consequence, a neural network trained on the cat image alone will not make an accurate prediction on the dog image.

Such shortcoming of neural network training can be rectified by biasing the network using prior knowledge about the relationship between instances in the dataset. In particular, for the domains we are interested in, training instances (either labeled or unlabeled) that are connected in a graph, for example, dog and cat in the above example, should have similar predictions. This can be done by encouraging neighboring data points to have a similar hidden representation learnt by a neural network, resulting in a modified objective function for training neural network architectures using both labeled and unlabeled datapoints:

C_NGM(θ) = ∑_{n=1}^{V_l} c(g_θ(x_n), y_n) + α1 ∑_{(u,v)∈E_LL} w_{uv} d(h_θ(x_u), h_θ(x_v)) + α2 ∑_{(u,v)∈E_LU} w_{uv} d(h_θ(x_u), h_θ(x_v)) + α3 ∑_{(u,v)∈E_UU} w_{uv} d(h_θ(x_u), h_θ(x_v))   (3)

where E_LL, E_LU, and E_UU are sets of labeled-labeled, labeled-unlabeled and unlabeled-unlabeled edges correspondingly, h(·) represents the hidden representations of the inputs produced by the neural network, d(·) is a distance metric, and {α1, α2, α3} are hyperparameters. We call architectures trained using this objective Neural Graph Machines, and schematically illustrate the concept in figure 1. In practice, we choose an l-1 or l-2 distance metric for d(·), and h(x) to be the last layer of the neural network. However, these choices can be changed, to a customized metric, or to using an intermediate hidden layer instead.

[Figure 1 graphic: a feedforward network (left) and an RNN (right) applied to neighbouring inputs x_i and x_j.]

Figure 1: Illustration of Neural Graph Machines: the training objective ensures the neural net makes accurate node-level predictions and biases the hidden representations of neighbouring nodes to be similar. [Left: feedforward NNs, Right: RNNs]

3.1 CONNECTIONS TO PREVIOUS METHODS

Note that we have separated the terms based on the edge types, as these can affect the training differently. The graph-dependent α hyperparameters control the balance of these terms. When α_i = 0, the proposed objective ignores the similarity constraint and becomes a supervised-only objective as in equation 2. When g_θ(x) = h_θ(x) = ŷ, where ŷ is the label distribution, the individual cost functions (c and d) are squared l-2 norms, and the objective is trained using ŷ directly instead of θ, we arrive at the label propagation objective in equation 1. Therefore, the proposed objective could be thought of as a non-linear version of the label propagation objective, and a graph-regularized version of the neural network training objective.

Similar to graph-based label propagation, the choice of the input graphs is critical, to correctly bias the neural network's prediction. Depending on the type of the graphs and nodes on the graphs, they can be readily available to use such as social networks or protein linking networks, or they can be constructed (a) using generic graphs such as Knowledge Bases, that consist of links between vertices on the graph, (b) using embeddings learnt by an unsupervised learning technique, or, (c) using sparse feature representations for each vertex. Additionally, the proposed training objective can be easily modified for directed graphs.

We have discussed using node-level features as inputs to the neural network. In the absence of such inputs, our training scheme can still be deployed using input features derived from the graph itself. We show in figure 2 and in the experiments that neighbourhood information, such as rows in the adjacency matrix, is simple to construct yet provides a powerful input to the network.
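To make the objective concrete, here is a minimal Python sketch of how it decomposes over a minibatch of edges (the per-edge rewriting is made precise in equation 4 below). The dictionary-based edge encoding, the squared l-2 choice for d, and all helper names are illustrative assumptions rather than the paper's implementation.

import numpy as np

def cross_entropy(p, y):
    # p: predicted distribution, y: one-hot target
    return -np.sum(y * np.log(p + 1e-12))

def l2(a, b):
    return np.sum((a - b) ** 2)

def ngm_minibatch_loss(g, h, edges, alpha=(0.1, 0.1, 0.1)):
    """Graph-augmented objective for one minibatch of edges.

    g     : callable mapping an input x to predicted class probabilities.
    h     : callable mapping an input x to its hidden representation.
    edges : dicts with keys x_u, x_v, w, type in {"LL", "LU", "UU"},
            plus one-hot targets y_u / y_v where the endpoint is labeled.
    """
    a_ll, a_lu, a_uu = alpha
    loss = 0.0
    for e in edges:
        # similarity term pulls neighbouring hidden representations together
        sim = e["w"] * l2(h(e["x_u"]), h(e["x_v"]))
        if e["type"] == "LL":
            loss += a_ll * sim + cross_entropy(g(e["x_u"]), e["y_u"]) \
                    + cross_entropy(g(e["x_v"]), e["y_v"])
        elif e["type"] == "LU":
            loss += a_lu * sim + cross_entropy(g(e["x_u"]), e["y_u"])
        else:  # unlabeled-unlabeled
            loss += a_uu * sim
    return loss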
These features can also be combined with existing features.

[Figure 2 graphic: a graph with labeled nodes y_i, y_j, y_k; each node's row of the adjacency matrix (e.g. x_j = [1, 1, 0, ..., 1]) is used as its input vector.]

Figure 2: Illustration of how we can construct inputs to the neural network using the adjacency matrix.

The proposed objective function in equation 3 has several summations over the labeled points and edges, and can be equivalently written as follows,

C_NGM(θ) = ∑_{(u,v)∈E_LL} [α1 w_{uv} d(h_θ(x_u), h_θ(x_v)) + c(g_θ(x_u), y_u) + c(g_θ(x_v), y_v)] + ∑_{(u,v)∈E_LU} [α2 w_{uv} d(h_θ(x_u), h_θ(x_v)) + c(g_θ(x_u), y_u)] + ∑_{(u,v)∈E_UU} α3 w_{uv} d(h_θ(x_u), h_θ(x_v))   (4)

The objective in its new form enables stochastic training to be deployed. In particular, in each training iteration, we use a minibatch of edges and obtain the stochastic gradients of the objective. To further reduce noise, we can select a labeled node and sample from the set of edges that are incident to that node. The number of edges per node to be sampled can be controlled.

3.4 COMPLEXITY

The complexity of each training epoch using equation 4 is O(M), where M = |E| is the number of edges in the graph. In practice, unlabeled-unlabeled edges do not seem to help learning and could be ignored, which further reduces the above complexity.

4 EXPERIMENTS

In this section, we provide several experiments showing the efficacy of the proposed training objective on a wide range of tasks, datasets and network architectures. All the experiments are done using TensorFlow (Abadi et al., 2015).

We first consider a multi-label classification on nodes of a graph. We use the BlogCatalog dataset (Agarwal et al., 2009), which has 10,312 nodes and 333,983 edges, and there are 39 labels. This graph represents a network of social relationships given by bloggers and the labels are the bloggers' interests. We train a feedforward neural network with one hidden layer of 50 units and train each class as a one-vs-rest binary classification task. Since there are no features for each node, we use the rows of the adjacency matrix as inputs to the network, as discussed in section 3.2. Since we use the test set to construct the graph and augment the training objective, the learning in this experiment is transductive. Since the training set is extremely unbalanced, we employ weighted sampling during training, i.e. making sure each minibatch has both positive and negative examples. In this experiment, we fix the α_i to be equal, and experiment with α = 0 and 0.1 (0 means no edge information during training); we use the l-2 metric to compute the distance between the hidden representations. We compare our method against a two-stage approach: use node2vec (Grover & Leskovec, 2016) to generate node embeddings and use a linear one-vs-rest classifier for classification. The methods are evaluated using two metrics: Macro F1 and Micro F1. The results for different train/test splits and different α values, together with the baseline, are included in table 1. The results demonstrate that: 1. using the graph itself as direct inputs to the neural network and letting the network learn a non-linear mapping is more effective than the two-stage approach considered; 2. using the graph information improves the performance in the small data regime (for example, when the training set is only 20% of the dataset). We observe the same improvement over Node2vec on the Micro F1 metric, and α = 0.1 is comparable to α = 0 but performs better on the recall metric.

* These results are different compared to Grover & Leskovec (2016), since we treat the classifiers (one per label) independently. This setting is the same as for our NGM-NN classifiers.

Table 1: Results for BlogCatalog dataset averaged over 10 random splits. Higher is better.

Macro F1
Train amount | α = 0 | α = 0.1 | Node2vec*
0.2 | 0.180 | 0.191 | 0.168
0.5 | 0.238 | 0.242 | 0.174
0.8 | 0.263 | 0.262 | 0.177

We evaluate the proposed objective function on a multi-class text classification task using a character-level convolutional neural network (CNN). We use the AG news dataset from Zhang et al. (2015), where the task is to classify a news article into one of 4 categories. Each category has 30,000 examples for training and 1,900 examples for testing. In addition to the train and test sets, there are 111,469 examples that are treated as unlabeled examples.

We restrict the graph construction to only the train set and the unlabeled examples and keep the test set only for evaluation. We use the Google News word2vec corpus to calculate the average embedding for each news article and use the cosine similarity of document embeddings as a similarity metric. Each node is restricted to 5 neighbors.

We construct the CNN in the same way as Zhang et al. (2015), but with significantly smaller layers, as shown in table 2:

Table 2: Settings of CNNs for the text classification experiment

Setting | Baseline "small" CNN | "Tiny" CNN
# of convolutional layers | 6 | 3
Frame size in conv. layers | 256 | 32
# of fully-connected layers | 3 | 3
Hidden units in fully-connected layers | 1024 | 256

The network is trained with the same parameters as Zhang et al. (2015) but only for 20 epochs. We compare the final outputs using the cross entropy loss, that is d = cross_entropy(g(x_u), g(x_v)). Using the proposed objective function, the NGM-CNN provides a 1.8% absolute and 2.1% relative improvement in accuracy, despite using a smaller network. We show the results in table 3.

Table 3: Results for News Categorization using CNNs

Network | Accuracy %
Baseline: "small" CNN | 84.35
Baseline: "small" CNN with thesaurus augmentation | 85.20
Baseline: "tiny" CNN | 85.07
"Tiny" CNN with NGM | 86.90

Finally, we compare the performance of our approach for training RNN sequence models (LSTM) for a semantic intent classification task as described in the recent work on SmartReply (Kannan et al., 2016) for automatically generating short email responses. One of the underlying tasks in SmartReply is to discover and map short response messages to semantic intent clusters.¹ We choose 20 intent classes and created a dataset comprised of 5,483 samples (3,832 for training, 560 for validation and 1,091 for testing). Each sample instance corresponds to a short response message text paired with a semantic intent category that was manually verified by human annotators. For example, "That sounds awesome!" and "Sounds fabulous" belong to the sounds good intent cluster. We construct a sparse graph in a similar manner as the news categorization task using word2vec embeddings over the message text and computing similarity to generate a response message graph with fixed node degree (k=10). We use l-2 for the distance metric d(·) and choose α based on the development set.

¹ For details regarding SmartReply and how the semantic intent clusters are generated, refer to Kannan et al. (2016).

We run the experiments for a fixed number of time steps and pick the best results on the development set. A multilayer LSTM architecture (2 layers, 100 dimensions) is used for the RNN sequence model. The LSTM model and its NGM variant are also compared against other baseline systems: the Random baseline ranks the intent categories randomly and the Frequency baseline ranks them in order of their frequency in the training corpus. To evaluate the intent prediction quality of different approaches, for each test instance, we compute the rank of the actual intent category, rank_i, with respect to the ranking produced by the method and use this to calculate the Mean Reciprocal Rank:

MRR = (1/N) ∑_{i=1}^{N} 1/rank_i

We show in table 4 that LSTM RNNs with our proposed graph-augmented training objective function outperform standard baselines by offering a better MRR.

Table 4: Results for Semantic Intent Classification using LSTM RNNs

Model | Mean Reciprocal Rank (MRR)
Random | 0.175
Frequency | 0.258
LSTM | 0.276
NGM-LSTM | 0.284

5 CONCLUSIONS

We have proposed a training objective for neural network architectures that can leverage both labeled and unlabeled data. Inspired by the label propagation objective function, the proposed objective biases the neural networks to learn similar hidden representations for nodes connected by an edge on the graph. Importantly, this objective can be trained by stochastic gradient descent, as in supervised neural network training. We validate the efficacy of the graph-augmented objective on various state-of-the-art neural network architectures on bloggers' interest, text category and semantic intent classification problems. Additionally, the node-level input features can be combined with graph features as inputs to the neural network. We showed that a neural network that simply takes the adjacency matrix of a graph and produces node labels can perform better than a recently proposed two-stage approach using sophisticated graph embeddings and a linear classifier.

While our objective can be applied to multiple graphs which come from different domains, we have not fully explored this aspect and leave this as future work. We expect the domain-specific networks can interact with the graphs to determine the importance of each domain/graph source in prediction. Another possible future work is to use our objective on directed graphs, that is to control the direction
of influence between nodes during training.

We would like to thank the Google Expander team for insightful feedback.

REFERENCES

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.
Thorsten Joachims. Transductive inference for text classification using support vector machines. In International Conference on Machine Learning, 1999.
Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos, Greg Corrado, Laszlo Lukacs, Marina Ganea, Peter Young, and Vivek Ramavajjala. Smart reply: Automated response suggestion for email. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2016.
Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. arXiv preprint arXiv:1605.05273, 2016.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.
Jason Weston, Frederic Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639-655. Springer, 2012.
Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical report, School of Computer Science, Carnegie Mellon University.
B16dGcqlx

THIRD-PERSON IMITATION LEARNING

ABSTRACT

Reinforcement learning (RL) makes it possible to train agents capable of achieving sophisticated goals in complex and uncertain environments. A key difficulty in reinforcement learning is specifying a reward function for the agent to optimize. Traditionally, imitation learning in RL has been used to overcome this problem. Unfortunately, hitherto imitation learning methods tend to require that demonstrations are supplied in the first-person: the agent is provided with a sequence of states and a specification of the actions that it should have taken. While powerful, this kind of imitation learning is limited by the relatively hard problem of collecting first-person demonstrations. Humans address this problem by learning from third-person demonstrations: they observe other humans perform tasks, infer the task, and accomplish the same task themselves.

In this paper, we present a method for unsupervised third-person imitation learning. Here third-person refers to training an agent to correctly achieve a simple goal in a simple environment when it is provided a demonstration of a teacher achieving the same goal but from a different viewpoint; and unsupervised refers to the fact that the agent receives only these third-person demonstrations, and is not provided a correspondence between teacher states and student states. Our method's primary insight is that recent advances from domain confusion can be utilized to yield domain agnostic features which are crucial during the training process. To validate our approach, we report successful experiments on learning from third-person demonstrations in a pointmass domain, a reacher domain, and inverted pendulum.

1 INTRODUCTION

Reinforcement learning (RL) is a framework for training agents to maximize rewards in large, unknown, stochastic environments. In recent years, combining techniques from deep learning with reinforcement learning has yielded a string of successful applications in game playing and robotics (Mnih et al., 2015; 2016; Schulman et al., 2015a; Levine et al., 2016). These successful applications, and the speed at which the abilities of RL algorithms have been increasing, make it an exciting area of research with significant potential for future applications.

One of the major weaknesses of RL is the need to manually specify a reward function. For each task we wish our agent to accomplish, we must provide it with a reward function whose maximizer will precisely recover the desired behavior. This weakness is addressed by the field of Inverse Reinforcement Learning (IRL). Given a set of expert trajectories, IRL algorithms produce a reward function under which the expert trajectories enjoy the property of optimality. Recently, there has been a significant amount of work on IRL, and current algorithms can infer a reward function from a very modest number of demonstrations (e.g., Abbeel & Ng, 2004; Ratliff et al., 2006; Levine et al., 2011; Ho & Ermon, 2016; Finn et al., 2016).

While IRL algorithms are appealing, they impose the somewhat unrealistic requirement that the
demonstrations should be provided from the first-person point of view with respect to the agent. Human beings learn to imitate entirely from third-person demonstrations, i.e., by observing other humans achieve goals. Indeed, in many situations, first-person demonstrations are outright impossible to obtain. Meanwhile, third-person demonstrations are often relatively easy to obtain.

The goal of this paper is to develop an algorithm for third-person imitation learning. Future advancements in this class of algorithms would significantly improve the state of robotics, because they would enable people to easily teach robots new skills and abilities. Importantly, we want our algorithm to be unsupervised: it should be able to observe another agent perform a task, infer that there is an underlying correspondence to itself, and find a way to accomplish the same task.

We offer an approach to this problem by borrowing ideas from domain confusion (Tzeng et al., 2014) and generative adversarial networks (GANs) (Goodfellow et al., 2014). The high-level idea is to introduce an optimizer under which we can recover both a domain-agnostic representation of the agent's observations, and a cost function which utilizes this domain-agnostic representation to capture the essence of expert trajectories. We formulate this as a third-person RL-GAN problem, and our solution builds on the first-person RL-GAN formulation by Ho & Ermon (2016).

Surprisingly, we find that this simple approach has been able to solve the problems that are presented in this paper (illustrated in Figure 1), even though the student's observations are related in a complicated way to the teacher's demonstrations (given that the observations and the demonstrations are pixel-level). As techniques for training GANs become more stable and capable, we expect our algorithm to be able to solve harder third-person imitation tasks without any direct supervision.

2 RELATED WORK

Imitation learning (also learning from demonstrations or programming by demonstration) considers the problem of acquiring skills from observing demonstrations. Imitation learning has a long history, with several good survey articles, including (Schaal, 1999; Calinon, 2009; Argall et al., 2009). Two main lines of work within imitation learning are: 1) behavioral cloning, where the demonstrations are used to directly learn a mapping from observations to actions using supervised learning, potentially with interleaving learning and data collection (e.g., Pomerleau, 1989; Ross et al., 2011); 2) inverse reinforcement learning (Ng et al., 2000), where a reward function is estimated that explains the demonstrations as (near) optimal behavior. This reward function could be represented as nearness to a trajectory (Calinon et al., 2007; Abbeel et al., 2010), as a weighted combination of features (Abbeel & Ng, 2004; Ratliff et al., 2006; Ramachandran & Amir, 2007; Ziebart et al., 2008; Boularias et al., 2011; Kalakrishnan et al., 2013; Doerr et al., 2015), or could also involve feature learning (Ratliff et al., 2007; Levine et al., 2011; Wulfmeier et al., 2015; Finn et al., 2016; Ho & Ermon, 2016).

Figure 1: From left to right, the three domains we consider in this paper: pointmass, reacher, and pendulum. Top row is the third-person view of a teacher demonstration. Bottom row is the agent's view in their version of the environment.
For the point and reacher environments, the camera angles differ by approximately 40 degrees. For the pendulum environment, the color of the pole differs.

This past work, however, is not directly applicable to the third-person imitation learning setting. In third-person imitation learning, the observations and actions obtained from the demonstration are not the same as what the imitator agent will be faced with. A typical scenario would be: the imitator agent watches a human perform a demonstration, and then has to execute that same task. As discussed in Nehaniv & Dautenhahn (2001), the "what and how to imitate" questions become significantly more challenging in this setting. To directly apply existing behavioral cloning or inverse reinforcement learning techniques would require knowledge of a mapping between observations and actions in the demonstrator space to observations and actions in the imitator space. Such a mapping is often difficult to obtain, and it typically relies on providing feature representations that capture the invariance between both environments (Carpenter et al., 2002; Shon et al., 2005; Calinon et al., 2007; Nehaniv, 2007; Gioioso et al., 2013; Gupta et al., 2016). Contrary to prior work, we consider third-person imitation learning from raw sensory data, where no such features are made available.

The most closely related work to ours is by Finn et al. (2016); Ho & Ermon (2016); Wulfmeier et al. (2015), who also consider inverse reinforcement learning directly from raw sensory data. However, the applicability of their approaches is limited to the first-person setting. Indeed, matching raw sensory observations is impossible in the third-person setting.

Our work also closely builds on advances in generative adversarial networks (Goodfellow et al., 2014), which are very closely related to imitation learning as explained in Finn et al. (2016); Ho & Ermon (2016). In our optimization formulation, we apply the gradient flipping technique from Ganin & Lempitsky (2014).

The problem of adapting what is learned in one domain to another domain has been studied extensively in computer vision in the supervised learning setting (Yang et al., 2007; Mansour et al., 2009; Kulis et al., 2011; Aytar & Zisserman, 2011; Duan et al., 2012; Hoffman et al., 2013; Long & Wang, 2015). It has also been shown that features trained in one domain can often be relevant to other domains (Donahue et al., 2014). The work most closely related to ours is Tzeng et al. (2014; 2015), who also consider an explicit domain confusion loss, forcing trained classifiers to rely on features that don't allow to distinguish between two domains. This work in turn relates to earlier work by Bromley et al. (1993); Chopra et al. (2005), which also considers supervised training of deep feature embeddings.

Our approach to third-person imitation learning relies on reinforcement learning from raw sensory data in the imitator domain. Several recent advances in deep reinforcement learning have made this practical, including Deep Q-Networks (Mnih et al., 2015), Trust Region Policy Optimization (Schulman et al., 2015a), A3C (Mnih et al., 2016), and Generalized Advantage Estimation (Schulman et al., 2015b). Our approach uses Trust Region Policy Optimization.

A discrete-time finite-horizon discounted Markov decision process (MDP) is represented by a tuple M = (S, A, P, r, p0, γ, T), in which S is a state set, A an action set, P : S × A × S → R+ a transition probability distribution, r : S × A → R a reward function, p0 : S → R+ an initial state distribution, γ ∈ [0, 1] a discount factor, and T the horizon.

In the (first-person) imitation learning setting, we are not given the reward function. Instead we are given traces (i.e., sequences of states traversed) by an expert who acts according to an unknown policy π_E. The goal is to find a policy π_θ that performs as well as the expert against the unknown reward function. It was shown in Abbeel & Ng (2004) that this can be achieved through inverse reinforcement learning by finding a policy π_θ that matches the expert's empirical expectation over the discounted sum of all features that might contribute to the reward function.
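For intuition, the matched quantity, the expert's empirical discounted feature expectations, can be computed as in the sketch below; the feature map phi and the trajectory format are illustrative assumptions, not the original implementation.

import numpy as np

def feature_expectations(trajectories, phi, gamma=0.99):
    """Empirical discounted feature expectations from a set of traces.

    trajectories : iterable of state sequences.
    phi          : maps a state to a NumPy feature vector.
    """
    mu = None
    for traj in trajectories:
        for t, s in enumerate(traj):
            f = (gamma ** t) * phi(s)
            mu = f if mu is None else mu + f
    return mu / len(trajectories)

Matching these expectations guarantees expert-level performance for any reward that is linear in phi, which is the key observation of Abbeel & Ng (2004).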
The work by Ho & Ermon (2016) generalizes this to the setting when no features are provided, as follows: find a policy π_θ that makes it impossible for a discriminator (in their work a deep neural net) to distinguish states visited by the expert from states visited by the imitator agent. This can be formalized as follows:

max_{π_θ} min_{D_R}  −E_{π_θ}[log D_R(s)] − E_{π_E}[log(1 − D_R(s))]   (1)

Here, the expectations are over the states experienced by the policy of the imitator agent, π_θ, and by the policy of the expert, π_E, respectively. D_R is the discriminator, which outputs the probability of a state having originated from a trace from the imitator policy π_θ. If the discriminator is perfectly able to distinguish which policy originated state-action pairs, then D_R will consistently output a probability of 1 in the first term, and a probability of 0 in the second term, making the objective its lowest possible value of zero. It is the role of the imitator agent π_θ to find a policy that makes it difficult for the discriminator to make that distinction. The desired equilibrium has the imitator agent making it impractical for the discriminator to distinguish, hence forcing the discriminator to assign probability 0.5 in all cases. Ho & Ermon (2016) present a practical approach for solving this type of game when representing both π_θ and D_R as deep neural networks. Their approach repeatedly performs gradient updates on each of them. Concretely, for a current policy π_θ traces can be collected, which together with the expert traces form a data-set on which D_R can be trained with supervised learning minimizing the negative log-likelihood (in practice only performing a modest number of updates). For a fixed D_R, this is a policy optimization problem where −log D_R(s, a) is the reward, and policy gradients can be computed from those same traces. Their approach uses trust region policy optimization (Schulman et al., 2015a) to update the imitator policy from those gradients.

In our work we will have more terms in the objective, so for compactness of notation, we will realize the discriminative minimization from Eqn. (1) as follows:

max_{π_θ} min_{D_R}  L_R = ∑_i CE(D_R(s_i), c_{ℓ_i})   (2)

where s_i is state i, c_{ℓ_i} is the correct class label (was the state s_i obtained from an expert vs. from a non-expert), and CE is the standard cross entropy loss.

Formally, the third-person imitation learning problem can be stated as follows. Suppose we are given two Markov Decision Processes M_{π_E} and M_{π_θ}. Suppose further there exists a set of traces ρ = {(s_1, ..., s_n)}_{i=0}^{n} which were generated under a policy π_E acting optimally under some unknown reward R_{π_E}.
In third-person imitation learning, one attempts to recover by proxy through ρ a policy π_θ = f(ρ) which acts optimally with respect to R_{π_θ}.

In third-person learning, observations are more typically available rather than direct state access, so going forward we will work with observations o_t instead of states s_t as representing the expert traces. The top row of Figure 1 illustrates what these observations are like in our experiments.

5.1 GAME FORMULATION

In this section, we discuss a simple algorithm for third-person imitation learning. This algorithm is able to successfully discriminate between expert and novice policies, even when the policies are executed under different environments. Subsequently, this discrimination signal can be used to train expert policies in new domains via RL by training the novice policy to fool the discriminator, thus forcing it to match the expert policy.

We begin by recalling that in the algorithm proposed by Ho & Ermon (2016) the loss in Equation 2 is utilized to train a discriminator D_R capable of distinguishing expert vs non-expert policies. Unfortunately, (2) will likely fail in cases when the expert and non-expert act in different environments, since D_R will quickly learn these differences and use them as a strong classification signal.

To handle the third-person setting, where expert and novice are in different environments, we consider that D_R works by first extracting features from o_t, and then using these features to make a
The classifier then makes a prediction D_R(ō_t, ō_{t+n}) = cℓ. This renders the following formulation:

max_{π_θ} min_{D_R} max_{D_D}  L_R + L_D = Σ_i [ CE(D_R(ō_i, ō_{i+n}), cℓ_i) + CE(D_D(ō_i), dℓ_i) ]

Note we also want to optimize over D_F, the feature extractor, but it feeds both into D_R and into D_D, which are competing (hidden under ō); we address this now.

To deal with the competition over D_F, we introduce a function G that acts as the identity when moving forward through a directed acyclic graph and flips the sign when backpropagating through the graph. This technique has enjoyed recent success in computer vision; see, for example, Ganin & Lempitsky (2014). With this trick, the problem reduces to its final form:

max_{π_θ} min_{D_R, D_D, D_F}  L_R + λL_D = Σ_i [ CE(D_R(ō_i, ō_{i+n}), cℓ_i) + λ CE(D_D(G(ō_i)), dℓ_i) ]    (5)

In Equation (5), we flip the gradient's sign during backpropagation of D_F with respect to the domain classification loss. This corresponds to stochastic gradient ascent away from features that are useful for domain classification, thus ensuring that D_F produces domain-agnostic features. Equation (5) can be solved efficiently with stochastic gradient descent. Here λ is a hyperparameter that determines the trade-off made between the objectives that are competing over D_F.

To ensure sufficient signal for discrimination between expert and non-expert, we collect third-person demonstrations in the expert domain from both an expert and from a non-expert.

Our complete formulation is graphically summarized in Figure 2.

Figure 2: Architecture diagram for third-person imitation learning. Images at time t and t + 4 are sent through a feature extractor to obtain F(o_t) and F(o_{t+4}). Subsequently, these feature vectors are reused in two places. First, they are concatenated and used to predict whether the samples are drawn from expert or non-expert trajectories. Second, F(o_t) is utilized to predict a domain label (expert vs. novice domain). During backpropagation, the sign on the domain loss L_D is flipped to destroy information that was useful for distinguishing the two domains. This ensures that the feature extractor F is domain agnostic. Finally, the class probabilities that were computed using this domain-agnostic feature vector are utilized as a cost signal in TRPO, which is subsequently utilized to train the novice policy to take expert-like actions and collect further rollouts.

"}, {"section_index": "5", "section_name": "5.2 ALGORITHM", "section_text": "To solve the game formulation in Equation (5), we perform alternating (partial) optimization over the policy π_θ and the reward function and domain confusion encoded through D_R, D_D, D_F. The optimization over D_R, D_D, D_F is done through stochastic gradient descent with ADAM (Kingma & Ba, 2014).

Our generator (π_θ) step is similar to the generator step in the algorithm by Ho & Ermon (2016). We simply use log D_R as the cost (equivalently, −log D_R as the reward). Using policy gradient methods (TRPO), we train the generator to minimize this cost and thus push the policy further towards replicating expert behavior. Once the generator step is done, we start again with the discriminator step. The entire process is summarized in Algorithm 1.

Algorithm 1: A third-person imitation learning algorithm.

We seek to answer the following questions through experiments:

1. Is it possible to solve the third-person imitation learning problem in simple settings? I.e., given a collection of expert image-based rollouts in one domain, is it possible to train a policy in a different domain that replicates the essence of the original behavior?
2. Does the algorithm we propose benefit from both domain confusion and velocity?
3. How sensitive is our proposed algorithm to the selection of hyper-parameters used in deployment?
4. How sensitive is our proposed algorithm to changes in camera angle?
5. How does our method compare against some reasonable baselines?

To evaluate our algorithm, we consider three environments in the MuJoCo physics simulator. There are two different versions of each environment, an expert variant and a novice variant. Our goal is to train a cost function that is domain agnostic, and hence can be trained with images on the expert domain but nevertheless produce a reasonable cost on the novice domain. See Figure 1 for a visualization of the differences between expert and novice environments for the three tasks.

Point: A pointmass attempts to reach a point in a plane. The color of the target and the camera angle change between domains.

Reacher: A two-DOF arm attempts to reach a designated point in the plane. The camera angle, the length of the arms, and the color of the target point are changed between domains. Note that changing the camera angle significantly alters the image background color from largely gray to roughly 30 percent black. This presents a significant challenge for our method.

Inverted Pendulum: A classic RL task wherein a pendulum must be made to balance via control. For this domain, we only change the color of the pendulum and not the camera angle. Since there is no target point, we found that changing the camera angle left the domain-invariant representation with too little information and resulted in a failure case. In contrast to some traditional renderings of this problem, we do not terminate an episode when the agent falls but rather allow data collection to continue for a fixed horizon.

Is it possible to solve the third-person imitation learning problem in simple settings? In Figure 3 we see that our proposed algorithm is indeed able to recover reasonable policies for all three tasks we examined. Initially, the training is quite unstable due to the domain confusion wreaking havoc on the learned cost. However, after several iterations the policies eventually head towards reasonable local minima and the standard deviation over the reward distribution shrinks substantially. Finally, we note that the extracted feature representations used to complete this task are in fact domain-agnostic, as seen in Figure 9. Hence, the learning is properly taking place from a third-person perspective.

Figure 3: Reward vs training iteration for reacher, inverted pendulum, and point environments.
The learning curves are averaged over 5 trials with error bars represent one standard deviation in the reward distribution at the given point.\nReacher domain class acc vs iteration Pendulum domain class acc vs iteration Point domain class acc vs iteration. 1.00 1.00 1.00 0.25 0.25 0.25 0.00 0.00 0.00 12 12 iteration 12 iteration iteration\nReacher domain class acc vs iteration Pendulum domain class acc vs iteration Point domain class acc vs iteration 1.00 1.00 1.00 0.00 12 0.00 0.00- 8 8 12 8 12 iteration iteration iteration\nFigure 4: Domain accuracy vs. training iteration for reacher, inverted pendulum, and point environ ments."}, {"section_index": "6", "section_name": "Does the algorithm we ose benefit from both domain confusion and the multi-time step input", "section_text": "aigoriinm we propose oene eee-leneeseepeepl We answer this question with the experiments summarized in Figure 5] This experiment compare. our approach with: (i) our approach without the domain confusion loss; (ii) our approach without th multi-time step input; (iii) our approach without the domain confusion loss and without the multi. time step input (which is very similar to the approach in Ho & Ermon (2016)). We see that adding. domain confusion is essential for getting strong performance in all three experiments. Meanwhile. adding multi-time step input marginally improves the results. See also Figure7|for an analysis o the effects of multi-time step input on the final results..\nvelo and domain confusion reacher velo and domain confusion inverted pendulum velo and domain confusion point 25 -2000- variable variable variable vanilla vanilla dom vanilla dom_plus_velo dom_plus_velo dom_plus_velo 15 -6000 12 -8000 30 2.5 5.0 7.5 10.0 2.5 5.0 7.5 10.0 Iteration Iteration Iteration\nvelo and domain confusion reacher velo and domain confusion inverted pendulum velo and domain confusion point -2000 25 Reerrp variable variable variable vanilla dem vanilla vom vanilla dom_plus_velo dom_plus_velo dom_plus_velo -6000 -12 10 8000 10 20 30 2.5 5.0 7.5 10.0 2.5 5.0 7.5 10.0 Iteration Iteration Iteration\nFigure 5: Reward vs iteration for reacher, inverted pendulum, and point environments with no do main confusion and no velocity (red), domain confusion (orange), velocity (brown), and both do main confusion and velocity (blue).\nHow sensitive is our proposed algorithm to the selection of hyper-parameters used in deployment?. Figure 6|shows the effect of the domain confusion coefficient X, which trades off how much we. should weight the domain confusion objective vs. the standard cost-recovery objective, on the final. performance of the algorithm. Setting too low results in slower learning and features that are not domain-invariant. Setting A too high results in an objective that is too quick to destroy information.. which makes it impossible to recover an accurate cost..\nFor multi-time step input, one must choose the number of look-ahead frames that are utilized. Ii too small a window is chosen, the agent's actions have not affected a large amount of change in the environment and it is difficult to discern any additional class signal over static images. If too large a time-frame passes, causality becomes difficult to interpolate and the agent does worse than simply being trained on static frames. Figure 7 illustrates that no number of look-ahead frames is consistently optimal across tasks. However, a value of 4 showed good performance over all tasks. 
and so this value was utilized in all other experiments.\nReacher Reward vs dom confusion coefficient Pendulum Reward vs dom confusion coefficient Point Reward vs dom confusion coefficient 30 -10 2000 10 -6000- -20 0.00 0.25 0.50 0.75 0.00 .25 0.50 0.75 0.00 0.25 0.50 0.75 1.00 Domain Confusion Coefficient Domain Confusion Coefficient Domain Confusion Coefficient\nFigure 6: Reward of final trained policy vs domain confusion weight X for reacher, inverted pendu lum, and point environments.\nReacher Reward vs look-ahead frames Inverted Pendulum Reward vs look-ahead frames Point Reward vs look-ahead frames -5.75 -1000 27.5 -6.00 22.5 1500 -6.50 15 1750 0 15 20 0 20 0 O 10 15 Look-ahead frames Look-ahead frames Look-ahead frames 20\nReacher Reward vs look-ahead frames Inverted Pendulum Reward vs look-ahead frames Point Reward vs look-ahead frames -5.75 -1000 27.5 -6.00 22.5 1500 -6.50 -1750 0 5 1O 15 20 0 15 20 5 15 20 Look-ahead frames Look-ahead frames Look-ahead frames\nFigure 7: Reward of final trained policy vs number of look-ahead frames for reacher, inverted pen dulum, and point environments.\nReacher Reward vs dom confusion coefficient Pendulum Reward vs dom confusion coefficient Point Reward vs dom confusion coefficient 30 10 2000 10 6000 -20 0.00 Domain Confusion Coefficient .25 0.50 0.75 0.00 Domain Confusion Coefficient 0.75 0.00 Domain Confusion Coefficient 0.50 0.75 1.00\nHow sensitive is our algorithm to changes in camera angle? We present graphs for the reacher. and point experiments wherein we exam the final reward obtained by a policy trained with third-. person imitation learning vs the camera angle difference between the first-person and third-person perspective. We omit the inverted double pendulum experiment, as the color and not the camera. angle changes in that setting and we found the case of slowly transitioning the color to be the.. definition of uninteresting science.\nPoint Experiment Third-Person vs. Baselines 1000- 3000- first on third 0009- first-person third-persor Iteration Reacher Experiment Third-Person vs. Baselines 6 8 10- first on third first-person r third-person Iteration\n1000- Reearp 3000- first on third 3000- first-person r/ third-person\nFigure 9: Learning curves for third-person imitation vs. three baselines: 1)RL with true reward, 2) first-person imitation, 3) attempting to use first-person features on the third-person agent\nPoint Camera Angle vs Reward. Reacher Camera Angle vs Reward -400 4.5 500 -5.0 600 Rey -700 -6.0 -800 -6.5 0 10 20 30 0 5 10 15 Difference in Camera Angle (degrees). Difference in Camera Angle (degrees).\nFigure 8: Point and reacher final reward after 20 epochs of third-person imitation learning vs the camera angle difference between the first and third-person perspective. We see that the point follows a fairly linear slope in regards to camera angle differences, whereas the reacher environment is more stochastic against these changes.\n8 10- first on third first-person rl 2 third-person\nHow does our method compare against reasonable baselines? We consider the following base. lines for comparisons against third-person imitation learning. 1) Standard reinforcement learning with using full state information and the true reward signal. This agent is trained via TRPO. 2)\nWe compare all three of these baselines to third-person imitation learning. As we see in figure 9: 1) Standard RL, which (unlike the imitation learning approaches) has access to full state anc true reward, helps calibrate performance of the other approaches. 
2) First-person imitation learning is faced with a simpler imitation problem and accordingly outperforms third-person imitation, ye third-person imitation learning is nevertheless competitive. 3) Applying the first-person policy tc the third-person agent fails miserably, illustrating that explicitly considering third-person imitatior is important in these settings\nSomewhat unfortunately, the different reward function scales make it difficult to capture information on the variance of each learning curve. Consequently, in Appendix A we have included the full learning curves for these experiments with variance bars, each plotted with an appropriate scale to examine the variance of the individual curves."}, {"section_index": "7", "section_name": "DISCUSSION AND FUTURE WORK", "section_text": "In this paper, we presented the problem of third-person imitation learning. We argue that this prob lem will be important going forward, as techniques in reinforcement learning and generative adver sarial learning improve and the cost of collecting first-person samples remains high. We presented an algorithm which builds on Generative Adversarial Imitation Learning and is capable of solving simple third-person imitation tasks.\nOne promising direction of future work in this area is to jointly train policy features and cost features at the pixel level, allowing the reuse of image features. Code to train a third person imitation learnin. agent on the domains from this paper is presented here: https://github. com/bstadie/"}, {"section_index": "8", "section_name": "ACKNOWLEDGEMENTS", "section_text": "This work was done partially at OpenAI and partially at Berkeley. Work done at Berkeley was supported in part by Darpa under the Simplex program and the FunLoL program.\nD. Barber and F. V. Agakov. Kernelized infomax clustering. NIPs, 2005.\nStandard GAIL (first-person imitation learning). Here, the agent receives first-person demonstration. and attempts to imitate the correct behavior. This is an upper bound on how well we can expect to do, since we have the correct perspective. 3) Training a policy using first-person data and applying. it to the third-person environment..\nYusuf Aytar and Andrew Zisserman. Tabula rasa: Model transfer for object category detection. In 2011 International Conference on Computer Vision, pp. 2252-2259. IEEE, 2011.\nMalinda Carpenter, Josep Call, and Michael Tomasello. Understanding prior intentions enables. two-year-olds to imitatively learn a complex task. Child development. 73(5):1431-1441. 2002.\nLixin Duan, Dong Xu, and Ivor Tsang. Learning with augmented features for heterogeneous domai adaptation. arXiv preprint arXiv:1206.4660, 2012\nBrian Kulis. Kate Saenko. and Trevor Darrell. What you saw is not what you get: Domain adaptatio. using asymmetric kernel transforms. In Computer Vision and Pattern Recognition (CVPR), 201. IEEE Conference on, pp. 1785-1792. IEEE, 2011.\nJudy Hoffman, Erik Rodner, Jeff Donahue, Trevor Darrell, and Kate Saenko. Efficient learning of domain-invariant image representations. arXiv preprint arXiv:1301.3224, 2013.\nDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR). 2014\nYishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bound. and algorithms. arXiv preprint arXiv:0902.3430. 2009\nChrystopher L Nehaniv and Kerstin Dautenhahn. Like me?-measures of correspondence and imita tion. 
Cybernetics & Systems, 32(1-2):11-51, 2001.

Dean A. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems, pp. 305-313, 1989.

N. Ratliff, D. Bradley, J. A. Bagnell, and J. Chestnutt. Boosting structured prediction for imitation learning. 2007.

Stephane Ross, Geoffrey J. Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, volume 1, pp. 6, 2011.

John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. Arxiv preprint 1502.05477, 2015a.

Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Xingchao Peng, Pieter Abbeel, Sergey Levine, Kate Saenko, and Trevor Darrell. Towards adapting deep visuomotor representations from simulated to real environments. arXiv preprint arXiv:1511.07111, 2015.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Jun Yang, Rong Yan, and Alexander G. Hauptmann. Cross-domain video concept detection using adaptive SVMs. In Proceedings of the 15th ACM International Conference on Multimedia, pp. 188-197. ACM, 2007.

B. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI Conference on Artificial Intelligence, 2008.

Here, we plot the learning curves for each of the baselines mentioned in the experiments section as standalone plots. This allows one to better examine the variance of each individual learning curve.

Figure 10: Inverted Pendulum performance under a policy trained on RL, first-person imitation learning, third-person imitation, and a first-person policy applied to a third-person agent.

Figure 11: Reacher performance under a policy trained on RL, first-person imitation learning, third-person imitation, and a first-person policy applied to a third-person agent.

Figure 12: Point performance under a policy trained on RL, first-person imitation learning, third-person imitation, and a first-person policy applied to a third-person agent.

Joint Feature Extractor: Input images are of size 50 x 50 with 3 channels (RGB). Layers are 2 convolutional layers, each followed by a max pooling layer of size 2. Layers use 5 filters of size 3 each.

Domain Discriminator and the Class Discriminator: Input is the domain-agnostic output of the convolutional layers. Layers are two feed-forward layers of size 128, followed by a final feed-forward layer of size 2 and a soft-max layer to get the log probabilities.

ADAM is used for discriminator training with a learning rate of 0.001. The RL generator uses the off-the-shelf TRPO implementation available in RLLab."}]
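As a companion to the architecture just described, here is a minimal PyTorch sketch of the shared feature extractor with a gradient-reversal domain head (the function G in Eqn. (5)). Layer sizes follow the appendix; the class and module names, and the glue code, are our own assumptions rather than the authors' released implementation.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # The function G: identity on the forward pass, -lambda * grad on the
    # backward pass (the sign flip in Eqn. (5)).
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class ThirdPersonDiscriminator(nn.Module):
    def __init__(self, lam=0.5):
        super().__init__()
        self.lam = lam
        # Feature extractor D_F: two conv layers (5 filters of size 3), each
        # followed by 2x2 max pooling; input is a 50x50 RGB observation.
        self.features = nn.Sequential(
            nn.Conv2d(3, 5, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(5, 5, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten())
        feat_dim = 5 * 11 * 11  # 50 -> 48 -> 24 -> 22 -> 11 spatially
        # Class head D_R sees features from frames t and t+n concatenated.
        self.class_head = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 2))
        # Domain head D_D sees gradient-reversed features from frame t.
        self.domain_head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, obs_t, obs_tn):
        f_t, f_tn = self.features(obs_t), self.features(obs_tn)
        class_logits = self.class_head(torch.cat([f_t, f_tn], dim=1))
        domain_logits = self.domain_head(GradReverse.apply(f_t, self.lam))
        return class_logits, domain_logits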
HysBZSqlx | [{"section_index": "0", "section_name": "PLAYING SNES IN THE RETRO LEARNING ENVIRONMENT", "section_text": "Nadav Bhonker*, Shai Rozenberg* and Itay Hubara\n{nadavbh, shairoz}@tx.technion.ac.il itayhubara@gmail.com\nMastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer pro- gram is a far more challenging task. In recent years, extensive research was carried out in the field of reinforcement learning and numerous algorithms were intro duced, aiming to learn how to perform human tasks such as playing video games As a result, the Arcade Learning Environment (ALE) (Bellemare et al.[|2013) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games. In many games the state-of-the-art algorithms outper form humans. In this paper we introduce a new learning environment, the Retro Learning Environment -- RLE, that can run games on the Super Nintendo Enter- tainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing for more video games and consoles to be easily added to the environment, while maintaining the same interface as ALE Moreover, RLE is compatible with Python and Torch. SNES games pose a signif- icant challenge to current algorithms due to their higher level of complexity and versatility."}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "Controlling artificial agents using only raw high-dimensional input data such as image or sound is. a difficult and important task in the field of Reinforcement Learning (RL). Recent breakthroughs in. the field allow its utilization in real-world applications such as autonomous driving (Shalev-Shwartz. et al.]2016), navigation (Bischoff et al.2013) and more. Agent interaction with the real world is. usually either expensive or not feasible, as the real world is far too complex for the agent to perceive.. Therefore in practice the interaction is simulated by a virtual environment which receives feedback. on a decision made by the algorithm. Traditionally, games were used as a RL environment, dating back to Chess (Campbell et al.]2002), Checkers (Schaeffer et al.[1992), backgammon (Tesauro 1995) and the more recent Go (Silver et al.][2016). Modern games often present problems and tasks. which are highly correlated with real-world problems. For example, an agent that masters a racing. game, by observing a simulated driver's view screen as input, may be usefull for the development of. an autonomous driver. For high-dimensional input, the leading benchmark is the Arcade Learning. Environment (ALE) (Bellemare et al.]2013) which provides a common interface to dozens of Atari. 2600 games, each presents a different challenge. ALE provides an extensive benchmarking plat-. form, allowing a controlled experiment setup for algorithm evaluation and comparison. The main. challenge posed by ALE is to successfully play as many Atari 2600 games as possible (i.e., achiev-. ing a score higher than an expert human player) without providing the algorithm any game-specific. information (i.e., using the same input available to a human - the game screen and score). A key work to tackle this problem is the Deep Q-Networks algorithm (Mnih et al.|2015), which made a breakthrough in the field of Deep Reinforcement Learning by achieving human level performance. on 29 out of 49 games. 
In this work we present a new environment - the Retro Learning Environ-. ment (RLE). RLE sets new challenges by providing a unified interface for Atari 2600 games as well as more advanced gaming consoles. As a start we focused on the Super Nintendo Entertainment"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "System (SNES). Out of the five SNES games we tested using state-of-the-art algorithms, only one was able to outperform an expert human player. As an additional feature, RLE supports research of multi-agent reinforcement learning (MARL) tasks (Busoniu et al.|2010). We utilize this feature by training and evaluating the agents against each other, rather than against a pre-configured in-game AI. We conducted several experiments with this new feature and discovered that agents tend to learn how to overcome their current opponent rather than generalize the game being played. However, if an agent is trained against an ensemble of different opponents, its robustness increases. The main contributions of the paper are as follows:\nThe Arcade Learning Environment is a software framework designed for the development of RI algorithms, by playing Atari 2600 games. The interface provided by ALE allows the algorithms to select an action and receive the Atari screen and a reward in every step. The action is the equivalent to a human's joystick button combination and the reward is the difference between the scores at time stamp t and t - 1. The diversity of games for Atari provides a solid benchmark since differen games have significantly different goals. Atari 2600 has over 500 games, currently over 70 of them are implemented in ALE and are commonly used for algorithm comparison."}, {"section_index": "3", "section_name": "2.2 INFINITE MARIO", "section_text": "Infinite Mario (Togelius et al.]2o09) is a remake of the classic Super Mario game in which levels are. randomly generated. On these levels the Mario AI Competition was held. During the competition. several algorithms were trained on Infinite Mario and their performances were measured in terms o. the number of stages completed. As opposed to ALE, training is not based on the raw screen data. but rather on an indication of Mario's (the player's) location and objects in its surrounding. This. environment no longer poses a challenge for state of the art algorithms. Its main shortcoming lii in the fact that it provides only a single game to be learnt. Additionally, the environment provide. hand-crafted features, extracted directly from the simulator, to the algorithm. This allowed the use. of planning algorithms that highly outperform any learning based algorithm.."}, {"section_index": "4", "section_name": "2.4 OPENAI UNIVERSE", "section_text": "Universe (Universel2016) is a platform within the OpenAI framework in which RL algorithms can train on over a thousand games. Universe includes very advanced games such as GTA V, Portal as well as other tasks (e.g. browser tasks). Unlike RLE, Universe doesn't run the games locally and requires a VNC interface to a server that runs the games. This leads to a lower frame rate and thus longer training times.\nIntroducing a novel RL environment with significant challenges and an easy agent evalu- ation technique (enabling agents to compete against each other) which could lead to new and more advanced RL algorithms. A new method to train an agent by enabling it to train against several opponents, making the final policy more robust. 
Encapsulating several different challenges to a single RL environment.\nThe OpenAI gym (Brockman et al.2016) is an open source platform with the purpose of creating an interface between RL environments and algorithms for evaluation and comparison purposes. OpenAI Gym is currently very popular due to the large number of environments supported by it.. For example ALE, Go, MouintainCar and VizDoom (Zhu et al.]2016), an environment for the learning of the 3D first-person-shooter game \"Doom\"'. OpenAI Gym's recent appearance and wide. usage indicates the growing interest and research done in the field of RL..\nMalmo (Johnson et al.| 2016) is an artificial intelligence experimentation platform of the famous. game \"Minecraft\". Although Malmo consists of only a single game, it presents numerous challenges since the \"Minecraft\" game can be configured differently each time. The input to the RL algorithms include specific features indicating the \"'state\"' of the game and the current reward.."}, {"section_index": "5", "section_name": "2.6 DEEPMIND LAB", "section_text": "DeepMind Lab (Dee) is a first-person 3D platform environment which allows training RL algorithms. on several different challenges: static/random map navigation, collect fruit (a form of reward) anc a laser-tag challenge where the objective is to tag the opponents controlled by the in-game AI. In. LAB the agent observations are the game screen (with an additional depth channel) and the velocity. of the character. LAB supports four games (one game - four different modes).."}, {"section_index": "6", "section_name": "2.7 DEEP O-LEARNING", "section_text": "In our work, we used several variant of the Deep Q-Network algorithm (DQN) (Mnih et al. 2015) an RL algorithm whose goal is to find an optimal policy (i.e., given a current state, choose action that maximize the final score). The state of the game is simply the game screen, and the action is a combination of joystick buttons that the game responds to (i.e., moving ,jumping). DQN learns through trial and error while trying to estimate the \"Q-function\", which predicts the cumulative discounted reward at the end of the episode given the current state and action while following a policy . The Q-function is represented using a convolution neural network that receives the screer as input and predicts the best possible action at it's output. The Q-function weights 0 are updated according to:\n0t+1(St,at) = 0t+ a(Rt+1+ymax(Qt(St+1,a;0))-Qt(St,at;0t))VeQt(St,at;0t)\nwhere st, St+1 are the current and next states, at is the action chosen, a is the step size, y is the. discounting factor Rt+1 is the reward received by applying at at st. 0' represents the previous. weights of the network that are updated periodically. Other than DQN, we examined two leadin, algorithms on the RLE: Double Deep Q-Learning (D-DQN) (Van Hasselt et al.]2015), a DQN based algorithm with a modified network update rule. Dueling Double DQN (Wang et al.2015] a modification of D-DQN's architecture in which the Q-function is modeled using a state (screen dependent estimator and an action dependent estimator.\nThe Super Nintendo Entertainment System (SNES) is a home video game console developed by Nintendo and released in 1990. A total of 783 games were released, among them, the iconic Supei Mario World, Donkey Kong Country and The Legend of Zelda. 
Table (1) presents a comparisor between Atari 2600, Sega Genesis and SNES game consoles, from which it is clear that SNES and Genesis games are far more complex."}, {"section_index": "7", "section_name": "3.2 IMPLEMENTATION", "section_text": "To allow easier integration with current platforms and algorithms, we based our environment on the ALE, with the aim of maintaining as much of its interface as possible. While the ALE is highly coupled with the Atari emulator, Stella| RLE takes a different approach and separates the learning environment from the emulator. This was achieved by incorporating an interface named LibRetro (li- bRetro site), that allows communication between front-end programs to game-console emulators Currently, LibRetro supports over 15 game consoles, each containing hundreds of games, at an esti- mated total of over 7,000 games that can potentially be supported using this interface. Examples of supported game consoles include Nintendo Entertainment System, Game Boy, N64, Sega Genesis,\n0t+1(St,at) =0t+a(Rt+1+ ,a;0D)) Qt(St,At;0t))VoQt(St,At;0t), C\nSaturn, Dreamcast and Sony PlayStation. We chose to focus on the SNES game console imple mented using the snes9x2|as it's games present interesting, yet plausible to overcome challenges Additionally, we utilized the Genesis-Plus-Gx3|emulator, which supports several Sega consoles: Genesis/Mega Drive, Master System, Game Gear and SG-1000."}, {"section_index": "8", "section_name": "3.3 SOURCE CODE", "section_text": "RLE is fully available as open source software for use under GNU's General Public Licensd' The environment is implemented in C++ with an interface to algorithms in C++, Python and Lua. Adding a new game to the environment is a relatively simple process.."}, {"section_index": "9", "section_name": "3.4 RLE INTERFACE", "section_text": "RLE provides a unified interface to all games in its supported consoles, acting as an RL-wrapper to. the LibRetro interface. Initialization of the environment is done by providing a game (ROM file). and a gaming-console (denoted by 'core'). Upon initialization, the first state is the initial frame of the game, skipping all menu selection screens. The cores are provided with the RLE and installed together with the environment. Actions have a bit-wise representation where each controller button is represented by a one-hot vector. Therefore a combination of several buttons is possible using. the bit-wise OR operator. The number of valid buttons combinations is larger than 7O0, therefore only the meaningful combinations are provided. The environments observation is the game screen, provided as a 3D array of 32 bit per pixel with dimensions which vary depending on the game. The. reward can be defined differently per game, usually we set it to be the score difference between. two consecutive frames. By setting different configuration to the environment, it is possible to alter in-game properties such as difficulty (i.e easy, medium, hard), its characters, levels, etc..\nAtari 2600 SNES Genesis Number of Games. 565 783 928 CPU speed 1.19MHz 3.58MHz 7.6 MHz ROM size 2-4KB 0.5-6MB 16 MBytes RAM size 128 bytes 128KB 72KB Color depth. 8 bit 16 bit 16 bit Screen Size 160x210 256x224 or 512x448 320x224 Number of controller buttons. 
5 12 11 Possible buttons combinations 18 over 720 over 100"}, {"section_index": "10", "section_name": "3.5 ENVIRONMENT CHALLENGES", "section_text": "Integrating SNES and Genesis with RLE presents new challenges to the field of RL where visua information in the form of an image is the only state available to the agent. Obviously, SNES games are significantly more complex and unpredictable than Atari games. For example in sports games such as NBA, while the player (agent) controls a single player, all the other nine players' behavior is determined by pre-programmed agents, each exhibiting random behavior. In addition, many SNES games exhibit delayed rewards in the course of their play (i.e., reward for an actions is given many time steps after it was performed). Similarly, in some of the SNES games, an agent can obtain a reward that is indirectly related to the imposed task. For example, in platform games, such as Supe Mario, reward is received for collecting coins and defeating enemies, while the goal of the challenge is to reach the end of the level which requires to move to keep moving to the right. Moreover upon completing a level, a score bonus is given according to the time required for its completion Therefore collecting coins or defeating enemies is not necessarily preferable if it consumes too much time. Analysis of such games is presented in section4.2 Moreover, unlike Atari that consists of\nTable 1: Atari 2600. SNES and Genesis comparison\neight directions and one action button, SNES has eight-directions pad and six actions buttons. Since combinations of buttons are allowed, and required at times, the actual actions space may be large than 700, compared to the maximum of 18 actions in Atari. Furthermore, the background in SNES is very rich, filled with details which may move locally or across the screen, effectively acting a non-stationary noise since it provided little to no information regarding the state itself. Finally, w note that SNES utilized the first 3D games. In the game Wolfenstein, the player must navigate a maze from a first-person perspective, while dodging and attacking enemies. The SNES offers plent of other 3D games such as flight and racing games which exhibit similar challenges. These games are much more realistic, thus inferring from SNES games to \"'real world'' tasks, as in the case o self driving cars, might be more beneficial. A visual comparison of two games, Atari and SNES, is presented in Figure (1).\nFigure 1: Atari 2600 and SNES game screen comparison: Left: \"Boxing\"' an Atari 2600 fighting game , Right: 'Mortal Kombat' a SNES fighting game. Note the exceptional difference in the amount of details between the two games. Therefore, distinguishing a relevant signal from noise is much more difficult.\nTable 2: Comparison between RLE and the latest RL environments"}, {"section_index": "11", "section_name": "4.1 EVALUATION METHODOLOGY", "section_text": "The evaluation methodology that we used for benchmarking the different algorithms is the popular method proposed by (Mnih et al.]2015). Each examined algorithm is trained until either it reached convergence or 100 epochs (each epoch corresponds to 50,000 actions), thereafter it is evaluated by performing 30 episodes of every game. Each episode ends either by reaching a terminal state or after 5 minutes. The results are averaged per game and compared to the average result of a human player. 
For each game the human player was given two hours for training, and his performances were evaluated over 20 episodes. As the various algorithms don't use the game audio in the learning process, the audio was muted for both the agent and the human. From both, humans and agents\n12 1:33 PUSHSTART SUB-ZERO SCORPION AETIVISION\nCharacteristics RLE OpenAI Inifinte ALE Project DeepMind Universe Mario Malmo Lab Number of Games 8 out of 7000+ 1000+ 1 74 1 4 In game Yes NO No No Yes Yes adjustments1 530fps(SNES) 60fps Frame rate 5675fps2 120fps <7000fps <1000fps Observation (Input) screen, Screen hand crafted screen, hand crafted screen + depth RAM features RAM features and velocity\nscore, a random agent score (an agent performing actions randomly) was subtracted to assure tha learning indeed occurred. It is important to note that DQN's e-greedy approach (select a randon action with a small probability e) is present during testing thus assuring that the same sequenc of actions isn't repeated. While the screen dimensions in SNES are larger than those of Atari, ir our experiments we maintained the same pre-processing of DQN (i.e., downscaling the image t 84x84 pixels and converting to gray-scale). We argue that downscaling the image size doesn't affec a human's ability to play the game, therefore suitable for RL algorithms as well. To handle the large action space, we limited the algorithm's actions to the minimal button combinations whicl provide unique behavior. For example, on many games the R and L action buttons don't have an use therefore their use and combinations were omitted."}, {"section_index": "12", "section_name": "4.1.1 RESULTS", "section_text": "A thorough comparison of the four different agents' performances on SNES games can be seen in Figure Q. The full results can be found in Table (3). Only in the game Mortal Kombat a trained. agent was able to surpass a expert human player performance as opposed to Atari games where the same algorithms have surpassed a human player on the vast majority of the games..\nOne example is Wolfenstein game, a 3D first-person shooter game, requires solving 3D vision tasks navigating in a maze and detecting object. As evident from figure (2), all agents produce poor results indicating a lack of the required properties. By using e-greedy approach the agents weren't able tc explore enough states (or even other rooms in our case). The algorithm's final policy appeared as a random walk in a 3D space. Exploration based on visited states such as presented in|Bellemare et al.(2016) might help addressing this issue. An interesting case is Gradius III, a side-scrolling flight-shooter game. While the trained agent was able to master the technical aspects of the game which includes shooting incoming enemies and dodging their projectiles, it's final score is still fa from a human's. This is due to a hidden game mechanism in the form of 'power-ups\", which can be accumulated, and significantly increase the players abilities. The more power-ups collected withou being use - the larger their final impact will be. While this game-mechanism is evident to a human the agent acts myopically and uses the power-up straight away"}, {"section_index": "13", "section_name": "4.2 REWARD SHAPING", "section_text": "As part of the environment and algorithm evaluation process, we investigated two case studies. 
Firs is a game on which DQN had failed to achieve a better-than-random score, and second is a game on which the training duration was significantly longer than that of other games.\nIn the first case study, we used a 2D back-view racing game 'F-Zero\". In this game, one is requirec to complete four laps of the track while avoiding other race cars. The reward, as defined by the score of the game, is only received upon completing a lap. This is an extreme case of a reward delay. A lap may last as long as 30 seconds, which span over 450 states (actions) before reward is received. Since DQN's exploration is a simple e-greedy approach, it was not able to produce a useful strategy. We approached this issue using reward shaping, essentially a modification of the reward to be a functior of the reward and the observation, rather than the reward alone. Here, we define the reward to be the sum of the score and the agent's speed (a metric displayed on the screen of the game). Indeec when the reward was defined as such, the agents learned to finish the race in first place within a shor training period.\nThe second case study is the famous game of Super Mario. In this game the agent, Mario, is require. to reach the right-hand side of the screen, while avoiding enemies and collecting coins. We foun this case interesting as it involves several challenges at once: dynamic background that can chang. drastically within a level, sparse and delayed rewards and multiple tasks (such as avoiding enemie. and pits, advancing rightwards and collecting coins). To our surprise, DQN was able to reach the. end of the level without any reward shaping, this was possible since the agent receives rewards fo. events (collecting coins, stomping on enemies etc.) that tend to appear to the right of the player. causing the agent to prefer moving right. However, the training time required for convergence wa. significantly longer than other games. We defined the reward as the sum of the in-game reward anc. a bonus granted according the the player's position, making moving right preferable. This rewar.\n5 A video demonstration can be found at https://youtu.be/nU19XLMveEU\n120 100 DQN Noore annnrrnre D-DQN 80 Duel-DDQN 60 40 20 0 F-Zero (speed bonus) Gradius 3 Mortal Kombat Super Mario Wolfenstein Algorithms\nFigure 2: DQN, DDQN and Duel-DDQN performance. Results were normalized by subtracting the a random agent's score and dividing by the human player score. Thus 100 represents a human player. and zero a random agent.\nproved useful, as training time required for convergence decreased significantly. The two game. above can be seen in Figure (3)\nFigure (4) illustrates the agent's average value function . Though both were able complete the stage. trained upon, the convergence rate with reward shaping is significantly quicker due to the immediate realization of the agent to move rightwards.\nFigure 3: Left: The game Super Mario with added bonus for moving right, enabling the agent to master them game after less training time. Right: The game F-Zero. 
By granting a reward for speed the agent was able to master this game, as oppose to using solely the in-game reward.\nMARIO WORLD TIME 000200 X01 386 00000 POIWER 101 SAFE SPEEODOOmh RANE READY\nWORLD TIME 01 00000 POWER 101 386 SAFE SPEEDOOOMh RANK 000\"00 READY\n0.8 0.6 0.4 0.2 - Super Mario With Right Bonus Super Mario Without Right Bonus 10 20 30 40 50 60 70 Epoch\nFigure 4: Averaged action-value (Q) for Super Mario trained with reward bonus for moving righ (blue) and without (red)."}, {"section_index": "14", "section_name": "4.3.1 MULTI-AGENT REINFORCEMENT LEARNING RESULTS", "section_text": "We chose the game Mortal Kombat, a two character side viewed fighting game (a screenshot of. the game can be seen in Figure (1), as a testbed for the above, as it exhibits favorable properties:. both players share the same screen, the agent's optimal policy is heavily dependent on the rival's behavior, unlike racing games for example. In order to evaluate two agents fairly, both were trained. using the same characters maintaining the identity of rival and agent. Furthermore, to remove the. impact of the starting positions of both agents on their performances, the starting positions were. initialized randomly.\nIn the first experiment we evaluated all combinations of DQN against D-DQN and Dueling D-DQN Each agent was trained against the in-game AI until convergence. Then 50 matches were performed between the two agents. DQN lost 28 out of 50 games against Dueling D-DQN and 33 against D-DQN. D-DQN lost 26 time to Dueling D-DQN. This win balance isn't far from the random case, since the algorithms converged into a policy in which movement towards the opponent is not\nIn this section we describe our experiments with RLE's multi-agent capabilities. We consider the case where the number of agents, n = 2 and the goals of the agents are opposite, as in r1 = -r2 This scheme is known as fully competitive (Busoniu et al. 2010). We used the simple single agent RL approach (as described byBusoniu et al.(2010) section 5.4.1) which is to apply to sin gle agent approach to the multi-agent case. This approach was proved useful in Crites and Barto (1996) and Mataric(1997). More elaborate schemes are possible such as the minimax-Q algo- rithm (Littman1994), (Littman2001). These may be explored in future works.We conducted hree experiments on this setup: the first use was to train two different agents against the in-game AI, as done in previous sections, and evaluate their performance by letting them compete against each other. Here, rather than achieving the highest score, the goal was to win a tournament which consist of 50 rounds, as common in human-player competitions. The second experiment was to initially train two agents against the in-game AI, and resume the training while competing against each other. In this case, we evaluated the agent by playing again against the in-game AI, separately. Finally, in our last experiment we try to boost the agent capabilities by alternated it's opponents, switching between the in-game AI and other trained agents.\nrequired rather than generalize the game. Therefore, in many episodes, little interaction between the two agents occur, leading to a semi-random outcome.\nIn our second experiment, we continued the training process of a the D-DQN network by letting it. compete against the Dueling D-DQN network. We evaluated the re-trained network by playing 30. episodes against the in-game AI. After training, D-DQN was able to win 28 out of 30 games, yet. 
when faced again against the in-game AI its performance deteriorated drastically (from an average of 17000 to an average of -22000). This demonstrated a form of catastrophic forgetting (Goodfellow. 2012 oentcnlovedth\nIn our third experiment, we trained a Dueling D-DQN agent against three different rivals: the in game AI, a trained DQN agent and a trained Dueling-DQN agent, in an alternating manner, sucl that in each episode a different rival was playing as the opponent with the intention of preventing the agent from learning a policy suitable for just one opponent. The new agent was able to achieve a score of 162,966 (compared to the 'normal'' dueling D-DQN which achieved 169,633). As new and objective measure of generalization, we've configured the in-game AI difficulty to be \"'ver hard' (as opposed to the default \"medium' difficulty). In this metric the alternating version achieve 83,400 compared to -33,266 of the dueling D-DQN which was trained in default setting. Thus proving that the agent learned to generalize to other policies which weren't observed while training"}, {"section_index": "15", "section_name": "4.4 FUTURE CHALLENGES", "section_text": "As demonstrated, RLE presents numerous challenges that have yet to be answered. In addition to being able to learn all available games, the task of learning games in which reward delay is extreme,. such as F-Zero without reward shaping, remains an unsolved challenge. Additionally, some games,. such as Super Mario, feature several stages that differ in background and the levels structure. The task of generalizing platform games, as in learning on one stage and being tested on the other, is another unexplored challenge. Likewise surpassing human performance remains a challenge since current state-of-the-art algorithms still struggling with the many SNES games.."}, {"section_index": "16", "section_name": "5 CONCLUSION", "section_text": "We introduced a rich environment for evaluating and developing reinforcement learning algorithm which presents significant challenges to current state-of-the-art algorithms. In comparison to othe environments RLE provides a large amount of games with access to both the screen and the in game state. The modular implementation we chose allows extensions of the environment with nev consoles and games, thus ensuring the relevance of the environment to RL algorithms for years tc come (see Table (2)). We've encountered several games in which the learning process is highly dependent on the reward definition. This issue can be addressed and explored in RLE as rewarc definition can be done easily. The challenges presented in the RLE consist of: 3D interpretation delayed reward, noisy background, stochastic AI behavior and more. Although some algorithm were able to play successfully on part of the games, to fully overcome these challenges, an agen must incorporate both technique and strategy. Therefore, we believe, that the RLE is a great platforn for future RL research.\nThe authors are grateful to the Signal and Image Processing Lab (SIPL) staff for their support, Alfred. Agrell and the LibRetro community for their support and Marc G. Bellemare for his valuable inputs\nM. G. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifying count based exploration and intrinsic motivation. arXiv preprint arXiv:1606.01868, 2016\nM. Campbell, A. J. Hoane, and F.-h. Hsu. Deep blue. Artificial Intelligence, 134(1):57-83, 2002\nlibRetro site. Libretro. www.libretro.com. Accessed: 2016-11-03\nM. J. 
Mataric. Reinforcement learning in the multi-robot domain. In Robot colonies, pages 73-83 Springer, 1997.\nUniverse. Universe. universe.openai.com, 2016. Accessed: 2016-12-13.\nB. Bischoff, D. Nguyen-Tuong, I.-H. Lee, F. Streichert, and A. Knoll. Hierarchical reinforcement learning for robot navigation. In ESANN, 2013. G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016. L. Busoniu, R. Babuska, and B. De Schutter. Multi-agent reinforcement learning: An overview. In Innovations in Multi-Agent Systems and Applications-1, pages 183-221. Springer, 2010.\nR. Crites and A. Barto. Improving elevator performance using reinforcement learning. In Advances in Neural Information Processing Systems 8. Citeseer, 1996. I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013. M. Johnson, K. Hofmann, T. Hutton, and D. Bignell. The malmo platform for artificial intelligence experimentation. In International Joint Conference On Artificial Intelligence (IJCAI), page 4246, 2016.\nSpringer, 1997 V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Ried-. miller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. J. Schaeffer, J. Culberson, N. Treloar, B. Knight, P. Lu, and D. Szafron. A world championship. caliber checkers program. Artificial Intelligence, 53(2):273-289, 1992. S. Shalev-Shwartz, N. Ben-Zrihem, A. Cohen, and A. Shashua. Long-term planning by short-term prediction. arXiv preprint arXiv:1602.01580, 2016. D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neural. networks and tree search. Nature, 529(7587):484- 489, 2016. G. Tesauro. Temporal difference learning and td-gammon. Communications of the ACM, 38(3): 58-68, 1995. J. Togelius, S. Karakovskiy, J. Koutnik, and J. Schmidhuber. Super mario evolution. In 2009 IEEE Symposium on Computational Intelligence and Games, pages 156-161. IEEE, 2009.\nUniverse. Universe. universe.openai.com, 2016. Accessed: 2016-12-13. H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning. CoRR. abs/1509.06461, 2015. Z. Wang, N. de Freitas, and M. Lanctot. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015. Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual. navigation in indoor scenes using deep reinforcement learning. arXiv preprint arXiv:1609.05143,. 2016."}, {"section_index": "17", "section_name": "Appendices", "section_text": "Experimental Results\nTable 3: Average results of DON, D-DON, Dueling D-DON and a Human player\nDQN D-DQN Dueling D-DQN Human F-Zero 3116 3636 5161 6298 Gradius III 7583 12343 16929 24440 Mortal Kombat 83733 56200 169300 132441 Super Mario 11765 16946 20030 36386 Wolfenstein 100 83 40 2952"}] |
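To make the reward-shaping recipe of Section 4.2 concrete, the following short Python sketch wraps an emulator so that the raw score-difference reward is augmented with a shaping bonus (e.g. the on-screen speed in F-Zero, or the player's x-position in Super Mario). The reset/step interface and the bonus functions here are illustrative assumptions, not RLE's exact API.

class RewardShapingWrapper:
    """Augments the score-difference reward with a shaping bonus
    computed from the current observation (Section 4.2)."""

    def __init__(self, env, bonus_fn, scale=1.0):
        self.env = env            # assumed to expose reset() and step(action)
        self.bonus_fn = bonus_fn  # maps an observation to a scalar bonus
        self.scale = scale

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, score_delta, done = self.env.step(action)
        shaped = score_delta + self.scale * self.bonus_fn(obs)
        return obs, shaped, done

# Hypothetical usage, assuming some decoder for the player's x coordinate:
#   shaped_env = RewardShapingWrapper(raw_env, bonus_fn=read_player_x)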
rkE3y85ee | [{"section_index": "0", "section_name": "CATEGORICAL REPARAMETERIZATION I GUMBEL-SOFTMAX WITH", "section_text": "Shixiang Gu\nEric Jang\nUniversity of Cambridge MPI Tubingen\nGoogle Brain\nejang@google.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Stochastic neural networks with discrete random variables are a powerful technique for representing distributions encountered in unsupervised learning, language modeling, attention mechanisms, anc reinforcement learning domains. For example, discrete variables have been used to learn probabilis tic latent representations that correspond to distinct semantic classes (Kingma et al.2014), imag regions (Xu et al.]2015), and memory locations (Graves et al.]2014] Graves et al.2016). Discrete representations are often more interpretable (Chen et al.2016) and more computationally efficien (Rae et al.|2016) than their continuous analogues.\nHowever, stochastic networks with discrete variables are difficult to train because the backprop. agation algorithm - while permitting efficient computation of parameter gradients - cannot be applied to non-differentiable layers. Prior work on stochastic gradient estimation has traditionally. focused on either score function estimators augmented with Monte Carlo variance reduction tech- niques (Paisley et al.]2012f Mnih & Gregor2014] Gu et al.[2016]Gregor et al.2013), or biased path derivative estimators for Bernoulli variables (Bengio et al.]2013). However, no existing gra. dient estimator has been formulated specifically for categorical variables. The contributions of this. work are threefold:\nThe practical outcome of this paper is a simple, differentiable approximate sampling mechanism fo categorical variables that can be integrated into neural networks and trained using standard back propagation.\nWork done during an internship at Google Brain\nBen Poole\nStanford University\npoole@cs.stanford.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables. due to the inability to backpropagate through samples. In this work, we present an. efficient gradient estimator that replaces the non-differentiable sample from a cat- egorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax esti- mator outperforms state-of-the-art gradient estimators on structured output predic tion and unsupervised generative modeling tasks with categorical latent variables. and enables large speedups on semi-supervised classification..\n1. We introduce Gumbel-Softmax, a continuous distribution on the simplex that can approx- imate categorical samples, and whose parameter gradients can be easily computed via the reparameterization trick. 2. We show experimentally that Gumbel-Softmax outperforms all single-sample gradient es-. timators on both Bernoulli variables and categorical variables.. 3. We show that this estimator can be used to efficiently train semi-supervised models (e.g Kingma et al.(2014)) without costly marginalization over unobserved categorical latent. variables.\nWe begin by defining the Gumbel-Softmax distribution, a continuous distribution over the simplex that can approximate samples from a categorical distribution. 
Let z be a categorical variable with class probabilities π_1, π_2, ..., π_k. For the remainder of this paper we assume categorical samples are encoded as k-dimensional one-hot vectors lying on the corners of the (k - 1)-dimensional simplex, Δ^{k-1}. This allows us to define quantities such as the element-wise mean E_p[z] = [π_1, ..., π_k] of these vectors.

The Gumbel-Max trick (Gumbel 1954; Maddison et al. 2014) provides a simple and efficient way to draw samples z from a categorical distribution with class probabilities π:

z = one_hot(arg max_i [g_i + log π_i])

where g_1, ..., g_k are i.i.d. samples drawn from Gumbel(0, 1)^1. We use the softmax function as a continuous, differentiable approximation to arg max, and generate k-dimensional sample vectors y ∈ Δ^{k-1}, where

y_i = exp((log(π_i) + g_i)/τ) / Σ_{j=1}^{k} exp((log(π_j) + g_j)/τ),  for i = 1, ..., k

The density of the Gumbel-Softmax distribution (derived in Appendix B) is:

p_{π,τ}(y_1, ..., y_k) = Γ(k) τ^{k-1} (Σ_{i=1}^{k} π_i / y_i^τ)^{-k} Π_{i=1}^{k} (π_i / y_i^{τ+1})

This distribution was independently discovered by Maddison et al. (2016), where it is referred to as the concrete distribution. As the softmax temperature τ approaches 0, samples from the Gumbel-Softmax distribution become one-hot and the Gumbel-Softmax distribution becomes identical to the categorical distribution p(z).

^1 The Gumbel(0,1) distribution can be sampled using inverse transform sampling by drawing u ~ Uniform(0, 1) and computing g = -log(-log(u)).

Figure 1: The Gumbel-Softmax distribution interpolates between discrete one-hot-encoded categorical distributions and continuous categorical densities. (a) For low temperatures (τ = 0.1, τ = 0.5), the expected value of a Gumbel-Softmax random variable approaches the expected value of a categorical random variable with the same logits. As the temperature increases (τ = 1.0, τ = 10.0), the expected value converges to a uniform distribution over the categories. (b) Samples from Gumbel-Softmax distributions are identical to samples from a categorical distribution as τ -> 0. At higher temperatures, Gumbel-Softmax samples are no longer one-hot, and become uniform as τ -> ∞."}, {"section_index": "3", "section_name": "2.1 GUMBEL-SOFTMAX ESTIMATOR", "section_text": "The Gumbel-Softmax distribution is smooth for τ > 0, and therefore has a well-defined gradient ∂y/∂π with respect to the parameters π. Thus, by replacing categorical samples with Gumbel-Softmax samples we can use backpropagation to compute gradients (see Section 3.1). We denote this procedure of replacing non-differentiable categorical samples with differentiable Gumbel-Softmax samples as the Gumbel-Softmax estimator.

While Gumbel-Softmax samples are differentiable, they are not identical to samples from the corresponding categorical distribution for non-zero temperature. For learning, there is a tradeoff between small temperatures, where samples are close to one-hot but the variance of the gradients is large, and large temperatures, where samples are smooth but the variance of the gradients is small (Figure 1). In practice, we start at a high temperature and anneal to a small but non-zero temperature.

In our experiments, we find that the softmax temperature τ can be annealed according to a variety of schedules and still perform well.
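As a concrete illustration of the sampling procedure above, the following is a minimal sketch in JAX; the function and variable names are ours, not from the paper.

```python
import jax
import jax.numpy as jnp

def gumbel_softmax_sample(key, logits, tau):
    # g_i ~ Gumbel(0, 1); equivalently g = -log(-log(u)) with u ~ Uniform(0, 1).
    g = jax.random.gumbel(key, logits.shape)
    # Softmax of the perturbed logits; as tau -> 0 samples approach one-hot.
    return jax.nn.softmax((logits + g) / tau, axis=-1)

key = jax.random.PRNGKey(0)
y = gumbel_softmax_sample(key, jnp.log(jnp.array([0.1, 0.2, 0.7])), tau=0.5)
```

Because every operation here is differentiable for τ > 0, gradients with respect to the logits can be taken directly through the sample.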
If τ is a learned parameter (rather than annealed via a fixed schedule), this scheme can be interpreted as entropy regularization (Szegedy et al. 2015; Pereyra et al. 2016), where the Gumbel-Softmax distribution can adaptively adjust the "confidence" of proposed samples during the training process."}, {"section_index": "4", "section_name": "2.2 STRAIGHT-THROUGH GUMBEL-SOFTMAX ESTIMATOR", "section_text": "Continuous relaxations of one-hot vectors are suitable for problems such as learning hidden representations and sequence modeling. For scenarios in which we are constrained to sampling discrete values (e.g. from a discrete action space for reinforcement learning, or quantized compression), we discretize y using arg max but use our continuous approximation in the backward pass by approximating ∇_θ z ≈ ∇_θ y. We call this the Straight-Through (ST) Gumbel Estimator, as it is reminiscent of the biased path derivative estimator described in Bengio et al. (2013). ST Gumbel-Softmax allows samples to be sparse even when the temperature τ is high."}, {"section_index": "5", "section_name": "3 RELATED WORK", "section_text": "In this section we review existing stochastic gradient estimation techniques for discrete variables (illustrated in Figure 2). Consider a stochastic computation graph (Schulman et al. 2015) with discrete random variable z whose distribution depends on parameter θ, and cost function f(z). The objective is to minimize the expected cost L(θ) = E_{z~p_θ(z)}[f(z)] via gradient descent, which requires us to estimate ∇_θ E_{z~p_θ(z)}[f(z)]."}, {"section_index": "6", "section_name": "3.1 PATH DERIVATIVE GRADIENT ESTIMATORS", "section_text": "For distributions that are reparameterizable, we can compute the sample z as a deterministic function g of the parameters θ and an independent random variable e, so that z = g(θ, e). The path-wise gradients from f to θ can then be computed without encountering any stochastic nodes:

∂/∂θ E_e[f(g(θ, e))] = E_{e~p_e}[(∂f/∂g)(∂g/∂θ)]

For example, the normal distribution z ~ N(μ, σ) can be re-written as μ + σ · N(0, 1), making it trivial to compute ∂z/∂μ and ∂z/∂σ. This reparameterization trick is commonly applied to training variational autoencoders with continuous latent variables using backpropagation (Kingma & Welling 2013; Rezende et al. 2014b).
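For instance, the Gaussian case can be sketched in a few lines (a minimal example, assuming JAX; the toy cost function is ours):

```python
import jax
import jax.numpy as jnp

def sample_and_cost(params, key):
    mu, log_sigma = params
    eps = jax.random.normal(key, mu.shape)   # independent noise, eps ~ N(0, 1)
    z = mu + jnp.exp(log_sigma) * eps        # z = g(theta, eps), differentiable in theta
    return jnp.sum(z ** 2)                   # toy cost f(z)

params = (jnp.zeros(3), jnp.zeros(3))
grads = jax.grad(sample_and_cost)(params, jax.random.PRNGKey(0))
```

Gradients with respect to (mu, log_sigma) flow through the sample z because the noise eps does not depend on the parameters.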
As shown in Figure 2, we exploit such a trick in the construction of the Gumbel-Softmax estimator.

Figure 2: Gradient estimation in stochastic computation graphs. (1) ∇_θ f(x) can be computed via backpropagation if x(θ) is deterministic and differentiable. (2) The presence of stochastic node z precludes backpropagation as the sampler function does not have a well-defined gradient. (3) The score function estimator and its variants (NVIL, DARN, MuProp, VIMCO) obtain an unbiased estimate of ∇_θ f(x) by backpropagating along a surrogate loss f̂ log p_θ(z), where f̂ = f(x) - b and b is a baseline for variance reduction. (4) The Straight-Through estimator, developed primarily for Bernoulli variables, approximates ∇_θ z ≈ 1. (5) Gumbel-Softmax is a path derivative estimator for a continuous distribution y that approximates z. Reparameterization allows gradients to flow from f(y) to θ. y can be annealed to one-hot categorical variables over the course of training.

Biased path derivative estimators can be utilized even when z is not reparameterizable. In general, we can approximate ∇_θ z ≈ ∇_θ m(θ), where m is a differentiable proxy for the stochastic sample. For Bernoulli variables with mean parameter θ, the Straight-Through (ST) estimator (Bengio et al. 2013) approximates m = μ_θ(z), implying ∇_θ m = 1. For k = 2 (Bernoulli), ST Gumbel-Softmax is similar to the slope-annealed Straight-Through estimator proposed by Chung et al. (2016), but uses a softmax instead of a hard sigmoid to determine the slope. Rolfe (2016) considers an alternative approach where each binary latent variable parameterizes a continuous mixture model. Reparameterization gradients are obtained by backpropagating through the continuous variables and marginalizing out the binary variables.

One limitation of the ST estimator is that backpropagating with respect to the sample-independent mean may cause discrepancies between the forward and backward pass, leading to higher variance. Gumbel-Softmax avoids this problem because each sample y is a differentiable proxy of the corresponding discrete sample z.

The score function estimator (SF) uses the identity ∇_θ p_θ(z) = p_θ(z) ∇_θ log p_θ(z) to obtain an unbiased estimate of the gradient:

∇_θ E_z[f(z)] = E_z[f(z) ∇_θ log p_θ(z)]

SF only requires that p_θ(z) is continuous in θ, and does not require backpropagating through f or the sample z. However, SF suffers from high variance and is consequently slow to converge. In particular, the variance of SF scales linearly with the number of dimensions of the sample vector (Rezende et al. 2014a), making it especially challenging to use for categorical distributions.

The variance of a score function estimator can be reduced by subtracting a control variate b(z) from the learning signal f, and adding back its analytical expectation μ_b = E_z[b(z) ∇_θ log p_θ(z)] to keep the estimator unbiased:

∇_θ E_z[f(z)] = E_z[f(z) ∇_θ log p_θ(z) + (b(z) ∇_θ log p_θ(z) - b(z) ∇_θ log p_θ(z))]
             = E_z[(f(z) - b(z)) ∇_θ log p_θ(z)] + μ_b

NVIL (Mnih & Gregor 2014) uses two baselines: (1) a moving average f̄ of f to center the learning signal, and (2) an input-dependent baseline computed by a 1-layer neural network fitted to f - f̄ (a control variate for the centered learning signal itself). Finally, variance normalization divides the learning signal by max(1, σ_f), where σ_f^2 is a moving average of Var[f].

DARN (Gregor et al. 2013) uses b = f(z̄) + f'(z̄)(z - z̄), where the baseline corresponds to the first-order Taylor approximation of f(z) from f(z̄). z̄ is chosen to be 1/2 for Bernoulli variables, which makes the estimator biased for non-quadratic f, since it ignores the correction term μ_b in the estimator expression.

MuProp (Gu et al. 2016) also models the baseline as a first-order Taylor expansion: b = f(z̄) + f'(z̄)(z - z̄) and μ_b = f'(z̄) ∇_θ E_z[z]. To overcome backpropagation through discrete sampling, a mean-field approximation f_MF(μ_θ(z)) is used in place of f(z) to compute the baseline and derive the relevant gradients.

VIMCO (Mnih & Rezende 2016) is a gradient estimator for multi-sample objectives that uses the mean of the other samples b = 1/m Σ_{j≠i} f(z_j) to construct a baseline for each sample z_i ∈ z_{1:m}. We exclude VIMCO from our experiments because we are comparing estimators for single-sample objectives, although Gumbel-Softmax can be easily extended to multi-sample objectives."}, {"section_index": "7", "section_name": "3.3 SEMI-SUPERVISED GENERATIVE MODELS", "section_text": "Semi-supervised learning considers the problem of learning from both labeled data (x, y) ~ D_L and unlabeled data x ~ D_U, where x are observations (i.e. images) and y are corresponding labels (e.g. semantic class). For semi-supervised classification, Kingma et al. (2014) propose a variational autoencoder (VAE) whose latent state is the joint distribution over a Gaussian "style" variable z and a categorical "semantic class" variable y (Figure 6, Appendix). The VAE objective trains a discriminative network q_φ(y|x), inference network q_φ(z|x, y), and generative network p_θ(x|y, z) end-to-end by maximizing a variational lower bound on the log-likelihood of the observation under the generative model. For labeled data, the class y is observed, so inference is only done on z ~ q_φ(z|x, y). The variational lower bound on labeled data is given by:

log p_θ(x, y) ≥ -L(x, y) = E_{z~q_φ(z|x,y)}[log p_θ(x|y, z)] - KL[q_φ(z|x, y) || p_θ(y)p(z)]

For unlabeled data, difficulties arise because the categorical distribution is not reparameterizable. Kingma et al. (2014) approach this by marginalizing out y over all classes, so that for unlabeled data, inference is still done on q_φ(z|x, y) for each y. The lower bound on unlabeled data is:

log p_θ(x) ≥ -U(x) = E_{z~q_φ(y,z|x)}[log p_θ(x|y, z) + log p_θ(y) + log p(z) - log q_φ(y, z|x)]
            = Σ_y q_φ(y|x)(-L(x, y)) + H(q_φ(y|x))

The full maximization objective is:

J = E_{(x,y)~D_L}[-L(x, y)] + E_{x~D_U}[-U(x)] + α · E_{(x,y)~D_L}[log q_φ(y|x)]

where α is the scalar trade-off between the generative and discriminative objectives.

One limitation of this approach is that marginalization over all k class values becomes prohibitively expensive for models with a large number of classes. If D, I, G are the computational cost of sampling from q_φ(y|x), q_φ(z|x, y), and p_θ(x|y, z) respectively, then training the unsupervised objective requires O(D + k(I + G)) for each forward/backward step. In contrast, Gumbel-Softmax allows us to backpropagate through y ~ q_φ(y|x) for single-sample gradient estimation, and achieves a cost of O(D + I + G) per training step. Experimental comparisons in training speed are shown in Figure 5."}, {"section_index": "8", "section_name": "EXPERIMENTAL RESULTS", "section_text": "In our first set of experiments, we compare Gumbel-Softmax and ST Gumbel-Softmax to other stochastic gradient estimators: Score-Function (SF), DARN, MuProp, Straight-Through (ST), and Slope-Annealed ST. Each estimator is evaluated on two tasks: (1) structured output prediction and (2) variational training of generative models. We use the MNIST dataset with fixed binarization for training and evaluation, which is common practice for evaluating stochastic gradient estimators (Salakhutdinov & Murray 2008; Larochelle & Murray 2011).

Learning rates are chosen from {3e-5, 1e-5, 3e-4, 1e-4, 3e-3, 1e-3}; we select the best learning rate for each estimator using the MNIST validation set, and report performance on the test set. Samples drawn from the Gumbel-Softmax distribution are continuous during training, but are discretized to one-hot vectors during evaluation.
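As a concrete illustration of this discretization, the following is a hedged sketch of the Straight-Through Gumbel-Softmax estimator of Section 2.2 (assuming JAX; the function and variable names are ours):

```python
import jax
import jax.numpy as jnp

def st_gumbel_softmax(key, logits, tau):
    g = jax.random.gumbel(key, logits.shape)
    y_soft = jax.nn.softmax((logits + g) / tau, axis=-1)
    y_hard = jax.nn.one_hot(jnp.argmax(y_soft, axis=-1), logits.shape[-1])
    # Forward pass returns the one-hot y_hard; the backward pass sees only
    # y_soft, so gradients flow through the continuous relaxation.
    return y_soft + jax.lax.stop_gradient(y_hard - y_soft)
```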
We also found that variance normalization was necessary to obtain competitive performance for SF, DARN, and MuProp. We used sigmoid activation functions for binary (Bernoulli) neural networks and softmax activations for categorical variables. Models were trained using stochastic gradient descent with momentum 0.9."}, {"section_index": "9", "section_name": "4.1 STRUCTURED OUTPUT PREDICTION WITH STOCHASTIC BINARY NETWORKS", "section_text": "The objective of structured output prediction is to predict the lower half of a 28 × 28 MNIST digit given the top half of the image (14 × 28). This is a common benchmark for training stochastic binary networks (SBN) (Raiko et al. 2014; Gu et al. 2016; Mnih & Rezende 2016). The minimization objective for this conditional generative model is an importance-sampled estimate of the likelihood objective, E_{h_i ~ p_θ(h_i | x_upper)}[log (1/m) Σ_{i=1}^{m} p_θ(x_lower | h_i)], where m = 1 is used for training and m = 1000 is used for evaluation.

We trained an SBN with two hidden layers of 200 units each. This corresponds to either 200 Bernoulli variables (denoted as 392-200-200-392) or 20 categorical variables (each with 10 classes) with binarized activations (denoted as 392-(20 × 10)-(20 × 10)-392).

As shown in Figure 3, ST Gumbel-Softmax is on par with the other estimators for Bernoulli variables and outperforms on categorical variables. Meanwhile, Gumbel-Softmax outperforms other estimators on both Bernoulli and categorical variables. We found that it was not necessary to anneal the softmax temperature for this task, and used a fixed τ = 1.

Figure 3: Test loss (negative log-likelihood) on the structured output prediction task with binarized MNIST using a stochastic binary network with (a) Bernoulli latent variables (392-200-200-392) and (b) categorical latent variables (392-(20 × 10)-(20 × 10)-392)."}, {"section_index": "10", "section_name": "4.2 GENERATIVE MODELING WITH VARIATIONAL AUTOENCODERS", "section_text": "We train variational autoencoders (Kingma & Welling 2013), where the objective is to learn a generative model of binary MNIST images. In our experiments, we modeled the latent variable as a single hidden layer with 200 Bernoulli variables or 20 categorical variables (20 × 10). We use a learned categorical prior rather than a Gumbel-Softmax prior in the training objective. Thus, the minimization objective during training is no longer a variational bound if the samples are not discrete. In practice,
N E {500, 1000} and r E {1e-5, 1e-4} are hyperparameters for which we select the best-performing estimator on the validation set and report test performance\nAs shown in Figure4] ST Gumbel-Softmax outperforms other estimators for Categorical variables and Gumbel-Softmax drastically outperforms other estimators in both Bernoulli and Categorical variables.\nBernoulli VAE Categorical VAE SF SF DARN DARN ST ST Slope-Annealed ST Slope-Annealed ST 120 120 MuProp MuProp Gumbel-Softmax Gumbel-Softmax ST Gumbel-Softmax ST Gumbel-Softmax 110 110 105 100 200 0 400 500 100 200 400 500 Steps (xle3) Steps (xle3) (a) (b)\nFigure 4: Test loss (negative variational lower bound) on binarized MNIST VAE with (a) Bernoulli latent variables (784 - 200 784) and (b) categorical latent variables (784 - (20 10) 200).\nWe apply the Gumbel-Softmax estimator to semi-supervised classification on the binary MNIST dataset. We compare the original marginalization-based inference approach (Kingma et al.|2014) to single-sample inference with Gumbel-Softmax and ST Gumbel-Softmax\nWe trained on a dataset consisting of 100 labeled examples (distributed evenly among each of the 10 classes) and 50,000 unlabeled examples, with dynamic binarization of the unlabeled examples for each minibatch. The discriminative model qo(y|x) and inference model q(z|x, y) are each im- plemented as 3-layer convolutional neural networks with ReLU activation functions. The generative model pe(x[y, z) is a 4-layer convolutional-transpose network with ReLU activations. Experimental details are provided in Appendix A\nEstimators were trained and evaluated against several values of a = {0.1, 0.2,0.3, 0.8,1.0} ang the best unlabeled classification results for test sets were selected for each estimator and reportec\nBernoulli VAE Categorical VAE SF SF DARN DARN ST ST Slope-Annealed ST Slope-Annealed ST MuProp MuProp Gumbel-Softmax Gumbel-Softmax ST Gumbel-Softmax ST Gumbel-Softmax 100 400 400 500 Steps (x1e3) Steps (x1e3) (a) (b)\nTable 1: The Gumbel-Softmax estimator outperforms other estimators on Bernoulli and Categorical latent variables. For the structured output prediction (SBN) task, numbers correspond to negative log-likelihoods (nats) of input images (lower is better). For the VAE task, numbers correspond to. negative variational lower bounds (nats) on the log-likelihood (lower is better)..\nSF DARN MuProp ST Annealed ST Gumbel-S ST Gumbel-S. SBN (Bern.) 72.0 59.7 58.9 58.9 58.7 58.5 59.3 SBN (Cat.) 73.1 67.9 63.0 61.8 61.1 59.0 59.7 VAE (Bern.) 112.2 110.9 109.7 116.0 111.5 105.0 111.5 VAE (Cat.) 110.6 128.8 107.0 110.9 107.8 101.5 107.8\nTable 2: Marginalizing over y and single-sample variational inference perform equally well when applied to image classification on the binarized MNIST dataset (Larochelle & Murray) 2011).We report variational lower bounds and image classification accuracy for unlabeled data in the test set.\n35 Gumbel 30 Marginalization (rte/sdees) 25 20 15 5 0 K=5 K=10 K=100 Number of classes y (a) (b)\nsppeeeeeee) peeee 25 20 15 10 5 0"}, {"section_index": "11", "section_name": "5 DISCUSSION", "section_text": "The primary contribution of this work is the reparameterizable Gumbel-Softmax distribution, whose. corresponding estimator affords low-variance path derivative gradients for the categorical distri-. bution. We show that Gumbel-Softmax and Straight-Through Gumbel-Softmax are effective on structured output prediction and variational autoencoder tasks, outperforming existing stochastic. 
gradient estimators for both Bernoulli and categorical latent variables. Finally, Gumbel-Softmax. enables dramatic speedups in inference over discrete latent variables."}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "in Table[2 We used an annealing schedule of t = max(0.5, exp(-3e-5 : t)), updated every 2000 steps.\nIn|Kingma et al.[(2014), inference over the latent state is done by marginalizing out y and using the reparameterization trick for sampling from qo(z|x, y). However, this approach has a computational cost that scales linearly with the number of classes. Gumbel-Softmax allows us to backpropagate directly through single samples from the joint qo(y, z[x), achieving drastic speedups in training without compromising generative or classification performance. (Table|2 Figure[5)\nIn Figure5l we show how Gumbel-Softmax versus marginalization scales with the number of cat- egorical classes. For these experiments, we use MNIST images with randomly generated labels. Training the model with the Gumbel-Softmax estimator is 2 as fast for 10 classes and 9.9 as fast for 100 classes.\n85 Gumbel 80 Marginalization 023 79 25 20 5 L0 0193456789 5 0183456789 0 K=5 K=10 K=100 Number of classes.\nFigure 5: Gumbel-Softmax allows us to backpropagate through samples from the posterior qo(y[x) providing a scalable method for semi-supervised learning for tasks with a large number of classes. (a) Comparison of training speed (steps/sec) between Gumbel-Softmax and marginaliza tion (Kingma et al.]2014) on a semi-supervised VAE. Evaluations were performed on a GTX Titan X GPU. (b) Visualization of MNIST analogies generated by varying style variable z across each row and class variable y across each column.\nWe sincerely thank Luke Vilnis, Vincent Vanhoucke, Luke Metz, David Ha, Laurent Dinh, George Tucker, and Subhaneil Lahiri for helpful discussions and feedback.\nBeng1o.N. eonara.and Courv1lle Estimating or propagating gradients through stochastic. neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Info-. gan: Interpretable representation learning by information maximizing generative adversarial nets.. CoRR, abs/1606.03657, 2016. J. Chung, S. Ahn, and Y. Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint. arXiv:1609.01704, 2016. P. W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the. ACM, 33(10):75-84, 1990. A. Graves. G. Wayne. M. Reynolds. T. Harley. I. Danihelka. A. Grabska-Barwinska. S. G. Col-\nGregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive network. arXiv preprint arXiv:1310.8499, 2013. Gu, S. Levine, I. Sutskever, and A Mnih. MuProp: Unbiased Backpropagation for Stochast. Neural Networks. ICLR, 2016. J. Gumbel. Statistical theory of extreme values and some practical applications: a series. lectures. Number 33. US Govt. Print. Office, 1954.. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.611. 2013. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with dee. generative models. In Advances in Neural Information Processing Systems, pp. 3581-3589, 201 Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, volume. pp. 2, 2011. J. Maddison, D. Tarlow, and T. Minka. A* sampling. In Advances in Neural Information Pr cessing Systems, pp. 3086-3094, 2014. J. Maddison, A. 
Mnih, and Y. Whye Teh. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. ArXiv e-prints, November 2016.
A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. ICML, 31, 2014.
A. Mnih and D. J. Rezende. Variational inference for monte carlo objectives. arXiv preprint arXiv:1602.06725, 2016.
J. Paisley, D. Blei, and M. Jordan. Variational Bayesian Inference with Stochastic Search. ArXiv e-prints, June 2012.
Gabriel Pereyra, Geoffrey Hinton, George Tucker, and Lukasz Kaiser. Regularizing neural networks by penalizing confident output distributions. 2016.
J. W. Rae, J. J. Hunt, T. Harley, I. Danihelka, A. Senior, G. Wayne, A. Graves, and T. P. Lillicrap. Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes. ArXiv e-prints, October 2016.
T. Raiko, M. Berglund, G. Alain, and L. Dinh. Techniques for learning binary stochastic feedforward neural networks. 2014.
D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014a.
D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278-1286, 2014b.
J. T. Rolfe. Discrete Variational Autoencoders. ArXiv e-prints, September 2016.

Figures 6 and 7 describe the architecture used in our experiments for semi-supervised classification (Section 4.3).

Figure 6: Semi-supervised generative model proposed by Kingma et al. (2014). (a) Generative model p_θ(x|y, z) synthesizes images from latent Gaussian "style" variable z and categorical class variable y. (b) Inference model q_φ(y, z|x) samples latent state y, z given x. Gaussian z can be differentiated with respect to its parameters because it is reparameterizable. In previous work, when y is not observed, training the VAE objective requires marginalizing over all values of y. (c) Gumbel-Softmax reparameterizes y so that backpropagation is also possible through y without encountering stochastic nodes.

Figure 7: Network architecture for (a) classification q_φ(y|x), (b) inference q_φ(z|x, y), and (c) generative p_θ(x|y, z) models. The output of these networks parameterize Categorical, Gaussian, and Bernoulli distributions which we sample from."}, {"section_index": "13", "section_name": "B DERIVING THE DENSITY OF THE GUMBEL-SOFTMAX DISTRIBUTION", "section_text": "Here we derive the probability density function of the Gumbel-Softmax distribution with probabilities π_1, ..., π_k and temperature τ. We first define the logits x_i = log π_i, and Gumbel samples g_1, ..., g_k, where g_i ~ Gumbel(0, 1). A sample from the Gumbel-Softmax can then be computed as:

y_i = exp((x_i + g_i)/τ) / Σ_{j=1}^{k} exp((x_j + g_j)/τ),  for i = 1, ..., k

The mapping from the Gumbel samples g to the Gumbel-Softmax sample y is not invertible as the normalization of the softmax operation removes one degree of freedom. To compensate for this, we define an equivalent sampling process that subtracts off the last element, (x_k + g_k)/τ, before the softmax:

y_i = exp((x_i + g_i - (x_k + g_k))/τ) / Σ_{j=1}^{k} exp((x_j + g_j - (x_k + g_k))/τ),  for i = 1, ..., k

To derive the density of this equivalent sampling process, we first derive the density for the 'centered' multivariate Gumbel density corresponding to:

u_i = x_i + g_i - (x_k + g_k),  for i = 1, ..., k - 1

Conditioned on g_k, the u_i are independent, so (writing f(g, μ) = e^{-(g-μ)} e^{-e^{-(g-μ)}} for the density of a Gumbel distribution with location μ):

p(u_1, ..., u_{k-1}) = ∫ dg_k p(u_1, ..., u_{k-1} | g_k) p(g_k)
                     = ∫ dg_k p(g_k) Π_{i=1}^{k-1} p(u_i | g_k)
                     = ∫ dg_k f(g_k, 0) Π_{i=1}^{k-1} f(x_k + g_k, x_i - u_i)

We perform a change of variables with v = e^{-g_k}, so dv = -e^{-g_k} dg_k and dg_k = -dv/v, and define u_k = 0 to simplify notation:

p(u_1, ..., u_{k-1}) = ∫_0^∞ dv (Π_{i=1}^{k-1} e^{x_i - u_i - x_k}) v^{k-1} exp(-v Σ_{i=1}^{k} e^{x_i - u_i - x_k})
                     = Γ(k) (Π_{i=1}^{k-1} e^{x_i - u_i - x_k}) (Σ_{i=1}^{k} e^{x_i - u_i - x_k})^{-k}
                     = Γ(k) exp(Σ_{i=1}^{k} (x_i - u_i) - k x_k) (Σ_{i=1}^{k} e^{x_i - u_i - x_k})^{-k}

Given samples u_1, ..., u_{k-1} from the centered Gumbel distribution, we can apply a deterministic transformation h to yield the first k - 1 coordinates of the sample from the Gumbel-Softmax:

y_{1:k-1} = h(u_{1:k-1}),  h_i(u_{1:k-1}) = exp(u_i/τ) / (1 + Σ_{j=1}^{k-1} exp(u_j/τ))

Note that the k-th coordinate is recovered from the normalization of the softmax:

y_k = 1 / (1 + Σ_{j=1}^{k-1} exp(u_j/τ))

We can thus compute the probability of a sample from the Gumbel-Softmax using the change of variables formula on only the first k - 1 variables:

p(y_{1:k}) = p(h^{-1}(y_{1:k-1})) |∂h^{-1}(y_{1:k-1}) / ∂y_{1:k-1}|

So to compute the probability of the Gumbel-Softmax we need two more pieces: the inverse of h and its Jacobian determinant. The inverse of h is:

h^{-1}(y_{1:k-1})_i = τ × (log y_i - log(1 - Σ_{j=1}^{k-1} y_j)) = τ × (log y_i - log y_k)

The determinant of the Jacobian can then be computed:

|∂h^{-1}(y_{1:k-1}) / ∂y_{1:k-1}| = τ^{k-1} (Π_{i=1}^{k-1} y_i)^{-1} (1 - Σ_{j=1}^{k-1} y_j)^{-1} = τ^{k-1} Π_{i=1}^{k} y_i^{-1}

Substituting u_i = τ(log y_i - log y_k) into the centered density and multiplying by the Jacobian determinant yields the density of the Gumbel-Softmax distribution:

p(y_1, ..., y_k) = Γ(k) τ^{k-1} (Σ_{i=1}^{k} exp(x_i)/y_i^τ)^{-k} Π_{i=1}^{k} (exp(x_i)/y_i^{τ+1})"}]
HyEeMu_xx | [{"section_index": "0", "section_name": "PROGRESSIVE ATTENTION NETWORKS FOR VISUAI ATTRIBUTE PREDICTION", "section_text": "Paul Hongsuck Seo', Zhe Lin', Scott Cohen', Xiaohui Shen' & Bohyung Han\n{hsseo, bhhan}@postech.ac.kr {zlin, scohen, xshen} @adobe.cor"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Attentive mechanisms often play important roles in modern neural networks (NNs) especially in computer vision tasks. Many visual attention models have been introduced in the previous literature and they have shown that attaching an attention to NNs can improve the accuracy in various tasks such as image classification (Jaderberg et al.| 2015]Ba et al.| 2015}Mnih et al.| 2014] Larochelle & Hinton2010), image generation (Gregor et al.|[2015), image caption generation (Xu et al.2015) and visual question answering (Yang et al.|2015} Andreas et al. 2016] Xu & Saenko2015).\nThere are several motivations for incorporating attentive mechanisms in NNs. One of them is tha it is analogous to the perceptual process of human beings. The human visual system concentrates attention to a region of interest instead of processing an entire scene. Likewise, in a neural attentior model, we can focus processing only on attended areas of the input image. This benefits us in term of computational resources; the number of hidden units may be reduced since the hidden activation only need to encode the region with attention (Mnih et al.]2014).\nAnother important motivation is that some computer vision tasks, e.g. visual question answering. (VQA), require identifying the object for accurate attribute prediction. For example, when the input image contains multiple objects, the task should focus on the object specified by the question Figure|1|illustrates an example task to predict the color (answer) of a given input number (query) The query specifies a particular object in the input image (number 7 in this example) for answering its. attribute (red). To address this type of tasks, the network architecture should incorporate an attentive mechanism either explicitly or implicitly.\nOne of the most popular attention mechanisms for NNs is the soft attention method (Xu et al. 2015), which aggregates responses in a feature map weighted by their attention probabilities (see Appendix A|for more details). This process results in a single attended feature vector. Since the soft attention method is fully differentiable, the entire network can be trained end-to-end with standard backpropagation. However, it can only model attention to local regions with a certain size depending on the receptive field of the layer chosen for attention. This makes the soft attention method inappropriate for complicated cases, where objects involve significant variations in their scales, and shapes."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We propose a novel attention model which can accurately attend to target objects. of various scales and shapes in images. The model is trained to gradually suppress. rrelevant regions in an input image via a progressive attentive process over multiple. ayers of a convolutional neural network. The attentive process in each layei. determines whether to pass or suppress features at certain spatial locations for use. n the next layer. We further employ local contexts to estimate attention probability. at each location since it is difficult to infer accurate attention by observing a feature vector from a single location only. 
The experiments on synthetic and real datasets show that the proposed attention network outperforms traditional attention methods in visual attribute prediction tasks.

(a) input image (b) first attention (c) second attention (d) third attention (e) final attention

Figure 1: An example reference problem (with the query 7 and the answer red) and intermediate attention maps using our progressive attention model. It shows that attention is gradually refined through the network layers for resolving the reference problem. Distracting patterns at smaller scales are suppressed at earlier layers while those at larger scales (e.g. 9) are suppressed at later layers with larger receptive fields. All attended images are independently rescaled for the visualization.

To overcome this limitation, we propose a novel attention network, referred to as progressive attention network (PAN), which enables precise attention over objects of different scales and shapes by attaching attentive mechanisms to multiple layers within a convolutional neural network (CNN). More specifically, the proposed network forces attention prediction in intermediate feature maps by forwarding the attended feature maps in each layer to the subsequent layers in the CNN. Since features to be attended in the current feature map are obtained by combining lower-level features with smaller receptive fields, the network can learn to distill the precise spatial support relevant to the target objects as final attention. The contribution of this work is three-fold:

A novel attention model (progressive attention network) which can be learned to predict attention matching the accurate scale and shape of a target object.
Use of local contexts to improve the stability of the progressive attention model.
Achievement of significant performance improvement over traditional soft and hard attention approaches in query-specific visual attribute prediction tasks.

The rest of this paper is organized as follows. We first review related work in Section 2. In Section 3, we describe the proposed model with local context information. We then present our experimental results on several datasets in Section 4 and conclude the paper in Section 5.

Attention on Features The most straightforward attention mechanism is a feature based method which selects a subset of features by explicitly attaching an attention model to NN architectures. The approaches relying on this attention mechanism have improved performance in many tasks (Xu et al. 2015; Yang et al. 2015; Andreas et al. 2016; Xu & Saenko 2015; Bahdanau et al. 2015; Luong et al. 2015; Weston et al. 2015; Graves et al. 2014). For example, they have been used to handle sequences of variable lengths in neural machine translation models (Bahdanau et al. 2015; Luong et al. 2015), speech recognition (Chorowski et al. 2014) and handwriting generation (Graves 2013), and to manage memory access mechanisms for memory networks (Weston et al. 2015) and neural turing machines (Graves et al. 2014). When applied to computer vision tasks to resolve reference problems, these models are designed to pay attention to CNN features corresponding to subregions in the input image. Image caption generation and visual question answering are typical examples benefited from this attention mechanism (Xu et al. 2015; Yang et al. 2015; Andreas et al. 2016; Xu & Saenko 2015).

Attention by Image Transformation Another stream of attention models is based on image transformations. These approaches transform a regular grid and sample from the input image with the transformed grid whose element corresponds to a location in the input image. Ba et al. (2015) and Mnih et al. (2014) transform an input image with predicted translation parameters (t_x and t_y) and a fixed scale factor (s < 1) for image classification or multiple object recognition. Scale factor is also predicted in (Gregor et al. 2015) for image generation, where the network uses Gaussian filters for sampling. Spatial transformer networks (STNs) predict all six parameters of the affine transformation matrix, and even extend it to a projective transformation and a 16-point thin plate spline transformation (Jaderberg et al. 2015). Because all these transformations used in (Jaderberg et al. 2015) involve scale factors, STNs are capable of dealing with objects in different sizes. However, STN is limited when there are multiple candidate regions for attention. Our model overcomes this problem by formulating attention as progressive filtering on feature maps instead of assuming objects can be roughly aligned by a single spatial transformation.

Multiple Attention Processes There have been several approaches iteratively performing attentive processes to resolve relations between targets. Yang et al. (2015) iteratively attend to images conditioned on the previous attention states for visual question answering, as the objects of interest are often not specified explicitly in questions but implicitly in relational expressions about the target objects. Also, Weston et al. (2015) and Graves et al. (2014) incorporate attention mechanisms to memory cells iteratively to retrieve different values stored in the memory. Our proposed model is similar in spirit of iterative attention but aimed at attending to a single target object via operating on multiple CNN layers progressively, i.e., attention information is predicted progressively from feature maps through multiple layers of CNN to capture the fine shapes of the target object.

In (Jaderberg et al. 2015), the authors also conducted an experiment with a network with multiple transformer layers. However, the attention shapes of STNs are still constrained to the type of transformation regardless of the number of transformers. In contrast, the quality of the attention shapes is improved through the progressive attention process in the proposed method. Stollenga et al. (2014) introduced a deep network which manipulates intermediate features of a fixed classifier through a channel-wise attention process. Although the channel-wise attention process is used at multiple layers of the network to manipulate the intermediate feature representations, they never explored a spatial attention process. More importantly, this method requires an accurate pretrained classifier for the target classes prior to learning attention, while pretraining a general query-specific attribute classifier is not trivial. It is also notable that both (Jaderberg et al. 2015) and (Stollenga et al. 2014) target simple classification tasks without queries, while we aim to tackle the query-specific attribute prediction task where answers from a single input image can be very different depending on the input query.

Training Attention Models The networks with soft attention are fully differentiable and thus trainable end-to-end by backpropagation.
Xu et al. (2015) and Zaremba & Sutskever (2015) introduced a stochastic hard attention, where the network explicitly selects a single feature based on the predicted attention probability map. Because the explicit selection (or sampling) procedure is not differentiable, the REINFORCE learning rule (Williams 1992) is used to make networks trainable. Transformation based attention models (Ba et al. 2015; Mnih et al. 2014) are mostly trained by the REINFORCE learning rule, but STN (Jaderberg et al. 2015) proposed a fully differentiable formulation and made it possible to train end-to-end. Compared to these attention networks, the proposed network is also trainable end-to-end by the standard backpropagation without any extra techniques since every operation within the network is differentiable.

To overcome the limitation of existing attention models in handling variable object scales and shapes, we propose a progressive attention mechanism. In the proposed model, irrelevant features at different scales are suppressed by attention filtering steps in different CNN layers, and computation is focused on the features corresponding to regions of interest. At each attention layer, the model predicts an attention map given the input query and the current feature map via an attention module, and the attention map is then multiplied to the feature map channel-wise to obtain the attended feature map. In each layer, the attended feature map is then forwarded to the next layer of the CNN for construction of the following feature map, which is illustrated in Figure 2. This progressive attention process allows us to estimate precise details of attention areas while maintaining deep representations appropriate for high-level inference tasks.

Figure 2: Overall procedure of progressive attention. Attentive processes are repeatedly applied to feature maps at multiple layers and the resulting attended feature maps are used as input feature maps for the next convolution layers in the CNN. Attention probabilities α^l are estimated from feature maps and the input query. In the last attention layer, the attended feature maps are aggregated to a single feature vector (by sum pooling) and fed to the final attribute classifier."}, {"section_index": "3", "section_name": "3.1 PROGRESSIVE ATTENTIVE PROCESS", "section_text": "Let f^l ∈ R^{H_l × W_l × C_l} be an output feature map of a layer l ∈ {0, ..., L} in the CNN with width W_l, height H_l and C_l channels, and f^l_{i,j} ∈ R^{C_l} be a feature at (i, j) of the feature map f^l. In the proposed PAN, an attentive process is applied to multiple layers of the CNN and we obtain the attended feature map f̂^l = [f̂^l_{i,j}], which is given by

f̂^l_{i,j} = α^l_{i,j} · f^l_{i,j}    (1)

where the attention probability α^l_{i,j} is computed from an attention score s^l_{i,j}:

s^l_{i,j} = g^l_att(f^l_{i,j}, q; θ^l_att)    (2)
α^l_{i,j} = softmax_{i,j}(s^l) if l = L, and σ(s^l_{i,j}) otherwise    (3)

where g^l_att(·) denotes the attention function with a set of parameters θ^l_att for layer l, s^l_{i,j} is the attention score at (i, j) in layer l, q is the query, and σ(·) is a sigmoid function. The attention probability at each location is independent of others in the same feature map, where a sigmoid function is employed to constrain attention probabilities between 0 and 1. For the last layer of attention, we use a softmax function over the entire spatial region for final aggregation of features.

Unlike the soft attention model (see Appendix A), in the intermediate attention layers, the attended feature map f̂^l is not summed up to generate a single vector representation of the attended regions. Instead, the attended feature map is forwarded to the next layer as an input to compute the next feature map, which is given by

f^{l+1} = g^{l+1}_CNN(f̂^l; θ^{l+1}_CNN)    (4)

where g^{l+1}_CNN denotes the next convolution (and pooling) block of the CNN with parameters θ^{l+1}_CNN. This feedforward procedure with attentive processes in the CNN is repeated from the input of the CNN, i.e., f^0 = I, until f^L is obtained. Then, the attended feature f_att is finally retrieved by summing up all the features in the final attended feature map f̂^L as in soft attention, which is given by

f_att = Σ_{i}^{H} Σ_{j}^{W} f̂^L_{i,j} = Σ_{i}^{H} Σ_{j}^{W} α^L_{i,j} f^L_{i,j}    (5)

The attended feature f_att obtained by such a process is then used as the input to the visual attribute classifier as illustrated in Figure 2.

In our models, we place the attention layers at the outputs of max pooling layers instead of every layer in the CNN because the reduction of feature resolution within the CNN mainly comes from pooling layers. In practice, we can also skip the first few pooling layers and only attach the attention module to the outputs of the last K pooling layers."}, {"section_index": "4", "section_name": "3.2 MULTI-RESOLUTION ATTENTION ESTIMATION", "section_text": "In Eq. (3), the resolution of the attention probability map α^l depends on the size of the feature map in the corresponding layer. Due to the nature of a CNN with convolution and pooling layers, the resolution of α^l will decrease with the increasing depth of a layer. Since the attentive processes are performed over multiple layers recursively in our framework, it is possible to attend to regions of specific sizes and shapes. Note that the proposed network can exploit high-level semantics in deep representations for inference without losing attention resolution.

The progressive attention model is still very effective in predicting fine attention shapes as the attention information is aggregated over multiple layers to suppress irrelevant structures at different granularity. In lower layers, features whose receptive fields contain small distractors are suppressed first. Meanwhile, the features from a part of large distractors remain intact but are passed to the next layer, delaying their suppression. In higher layers, features of these large distractors would get low attention probability as each feature contains information from larger receptive fields, allowing the attention module to distinguish whether the feature is from a distractor or the target object. This phenomenon is well demonstrated in the qualitative results in our experiments (Section 4). An additional benefit of progressive attention is that it is more straightforward during inference since it is a pure feedforward network."}, {"section_index": "5", "section_name": "3.3 LOCAL CONTEXT", "section_text": "A basic version of PAN discussed so far predicts an attention probability α^l_{i,j} based solely on the feature f^l_{i,j} at a single feature map location. We can improve the quality of attention estimation by allowing the attention layers to observe a local context of the target feature. The local context F^l_{i,j} of a feature f^l_{i,j} is composed of its spatially adjacent features. For example, the local context can be given by F^l_{i,j} = {f^l_{s,t} | i - δ ≤ s ≤ i + δ, j - δ ≤ t ≤ j + δ} as illustrated in Figure 3. The attention score is now predicted by the attention network with local context as

s^l_{i,j} = g^l_att(F^l_{i,j}, q; θ^l_att)    (6)

Figure 3: Attention estimation (a) without local context and (b) with local context. In (a), α^l_{i,j} is predicted from f^l_{i,j} only, while its spatially adjacent features are also used to estimate α^l_{i,j} in (b).

In this architecture, the area of the local context is given by the filter size corresponding to the composite operation of convolution followed by pooling in the next layer.
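To make the overall procedure concrete, the following is a minimal sketch of the progressive attentive process of Eqs. (1)-(6), assuming JAX; g_cnn and g_att are illustrative placeholders for the per-layer CNN blocks and attention functions, not the paper's exact implementation.

```python
import jax
import jax.numpy as jnp

def progressive_attention(feat, query, g_cnn, g_att):
    # feat: (H_0, W_0, C_0) feature map; g_att/g_cnn: lists of callables.
    num_layers = len(g_att)
    for l in range(num_layers):
        scores = g_att[l](feat, query)           # s^l, one score per location (H_l, W_l)
        if l < num_layers - 1:
            alpha = jax.nn.sigmoid(scores)       # independent per-location probabilities
        else:                                    # last layer: softmax over all locations
            alpha = jax.nn.softmax(scores.reshape(-1)).reshape(scores.shape)
        feat = alpha[..., None] * feat           # channel-wise gating (Eq. 1)
        if l < num_layers - 1:
            feat = g_cnn[l](feat)                # next convolution / pooling block (Eq. 4)
    return jnp.sum(feat, axis=(0, 1))            # f_att by sum pooling (Eq. 5)
```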
The local context does not need to be considered in the last layer of attention since its activations are used to compute the final attended feature map. Local context improves attention prediction as it enables the centroid feature to be compared with surrounding features, which makes the estimated attention more discriminative."}, {"section_index": "6", "section_name": "3.4 TRAINING PROGRESSIVE ATTENTION NETWORKS", "section_text": "Training a PAN is as simple as training a soft attention network (Xu et al. 2015) because every operation within the network is differentiable. The entire network is trained end-to-end by the standard backpropagation minimizing the binary cross entropies of the object-specific visual attributes. When we train it from a pretrained CNN, the CNN part should always be fine-tuned together since the intermediate attention maps may change the input distributions of their associated layers in the CNN."}, {"section_index": "7", "section_name": "4.1 MNIST REFERENCE", "section_text": "Datasets We conduct experiments on a synthetic dataset created from MNIST (LeCun et al. 1998). The synthetic dataset is referred to as MNIST Reference (MREF; Figure 4a), where each training example is a triple of an image, a query number and its color label. The task on this dataset is to predict the color of the number identified by a query. Five to nine distinct MNIST numbers with different colors in {green, yellow, white, red, blue} and scales in [0.5, 3.0] are randomly sampled and located in each 100 × 100 image. When coloring numbers, Gaussian noise is added to the reference color value. To simulate more realistic situations, we made two variants of MREF by changing backgrounds to either distractors (MDIST; Figure 4b) or natural images (MBG; Figure 4c). Background images in MDIST are constructed with randomly cropped 5 × 5 patches of MNIST images, whereas backgrounds of MBG are filled with natural scene images randomly chosen from the SUN Database (Xiao et al. 2014). The training, validation and test sets contain 30,000, 10,000 and 10,000 images, respectively.

Figure 4: Example of the MREF datasets: (a) MREF, (b) MDIST, (c) MBG.

Figure 5: Detailed illustration of network architectures on MNIST Reference experiments. (b) Architecture of the attention function g_att(·); local contexts F^l_{i,j} are used only in PAN-CTX. (a) Network architectures of models on MREF (STN, SAN, HAN, PAN); an arrow represe
nts direct connection to the next layer without attention.

Experimental Settings We implement the proposed network with and without the local context observation, referred to as PAN-CTX and PAN, respectively. In addition, the soft attention network (SAN), the hard attention network (HAN) (Xu et al. 2015) and two variants of the spatial transformer network (STN-S and STN-M) (Jaderberg et al. 2015) are used as baseline models for comparisons. While STN-S is the model with a single transformer layer, STN-M contains multiple transformer layers in the network. We reimplemented SAN and STNs following the descriptions in (Xu et al. 2015) and (Jaderberg et al. 2015), respectively, and trained HAN by optimizing the marginal log-likelihood loss as it is more accurate and feasible due to the small search space in our task. The architecture of the image encoding network in SAN and HAN and the localization networks in STNs are all identical for fair comparisons. The CNN in the proposed network also has the same architecture except for the additional layers for hierarchical attention. The CNN is composed of four stacks of 3 × 3 convolutions with 32 channels (stride 1) followed by a 2 × 2 max pooling layer (stride 2), as illustrated in Figure 5a. We used a single fc layer for classification because the task requires simple color prediction. The attention functions g_att(·) for all models are formed as multi-layer perceptrons with two layers (Figure 5b). The function takes the concatenation of a query q, which is a one-hot vector representing the target object, and a feature vector f^l_{i,j}, and outputs an attention score s^l_{i,j}. In PAN-CTX, the attention functions of att1, att2 and att3 additionally take the local context F^l_{i,j} containing the adjacent features with δ = 2. Every model is trained from scratch.

Table 1: Performance of attention models on MREF, MDIST, and MBG datasets.

(a) Color prediction accuracy [%]
            MREF     MDIST    MBG
STN-S       39.10    38.32    32.27
STN-M       93.89    85.09    52.25
SAN         82.94    75.73    53.77
HAN         81.84    78.49    55.84
PAN         95.92    91.65    69.46
PAN-CTX     98.51    96.02    85.55

(b) True-positive ratio [%]
            MREF     MDIST    MBG
Uniform     2.34     2.35     2.39
SAN         13.61    12.56    6.73
HAN         13.95    13.81    7.64
PAN         17.39    13.10    8.62
PAN-CTX     22.59    22.80    11.01

Figure 6: Analysis of algorithms on MREF (left), MDIST (middle), and MBG (right). (a) Attribute prediction accuracies of different models on the test subsets in different scales. (b) The precision-recall curves of object segmentation with attention probability.

Results Table 1a presents the color prediction accuracy of all compared algorithms. It is obvious that PAN outperforms all the previous approaches with significant margins and PAN-CTX further improves the performance by exploiting the local contexts for attention estimation. While STN-S often fails to predict the correct answers, STN-M learns to predict the color of the target object through multiple transformations and shows comparable performance to PAN in MREF. However,
the performance of STN-M dramatically drops as the dataset becomes more complex and realistic resulting in even lower performance than SAN and HAN. Also, note that STN-S is capable o attending to any region attended by STN-M since both models predict attention regions by estimating. an affine transformation. STN-M achieves the improvement by learning multiple transformers fron gradients coming from different levels of features. In contrast to those parametric models, th proposed network can predict attention map with more fine-grained shapes capturing the spatia. support of the target object better.\nTo evaluate the scale sensitivity of each model, we divided the test images into five subsets based on target object scales with uniform interval and computed the accuracies of the models. The results. are presented in Figure[6a] where SAN and HAN tend to predict the correct answers only in a scale range between 1.0 and 2.0, while their performance is degraded significantly with wild scale changes STN-M becomes vulnerable to scale variations in more realistic settings. In contrast, PAN and PAN-CTX are robust to scale variations due to their multi-scale attention machanism especially when. the local contexts are incorporated.\nUnlike STNs whose attention is constrained to rhombic regions, those models based on feature-wise attention maps can produce attention regions adaptive to the shapes of the target object. We evaluate the attention quality of these models using two complementary criteria: true-positive ratio (TPR)\nFigure 7: Qualitative results of SAN, HAN and PAN-CTX. (a) Input images faded by attended mapped to original image space by spreading activations to their receptive fields. (c) Magnitude of activations in attended feature maps fl,, which shows the effect of attention in contrast to (b). (d)\nFigure 7: Qualitative results of SAN, HAN and PAN-CTX. (a) Input images faded by attendec feature map (c). (b) Magnitude of activations in feature maps ff., before attention: the activations are mapped to original image space by spreading activations to their receptive fields. (c) Magnitude oi activations in attended feature maps f., which shows the effect of attention in contrast to (b). (d map. For PAN-CTX, only last three attention layers are visualized and attentions of ealier layers are accumulated for visualizing higher attention layers. For HAN, (c) and (d) represent attentior probability because attended feature map is not available. Every image except for input image is rescaled into [0, 1] by (x - min)/(max - min).\nand precision-recall (PR) curve. TPR measures how strong attention is given to proper location b computing the ratio of the aggregated attention probability within the desired area (a.k.a., ground truth segmentation) to the attention probability in the whole image (Table 1b). PR measures the overlaps between ground-truth segmentations and binarized segmentation predictions constructed with different thresholds (Figure6b). Note that the proposed model with the local context observation gives the best results with significant margin compared to all the other methods in terms of both criteria. These results suggest that PAN-CTX constructs more accurate shapes of attended regions than all other attention models.\nFigure7|shows the qualitative results of the proposed method and two baselines on the MBG dataset The proposed model yields accurate attention regions eventually by gradually augmenting attention and suppressing irrelevant regions in the image. 
We can observe that the proposed model could. maintain the high attention resolution through the progressive attention process. In contrast, the baseline models attend to the target objects only once at the top layer resulting in a coarse attention in size and shape. More qualitative results in these experiments are presented in Appendix[C"}, {"section_index": "8", "section_name": "4.2 ATTRIBUTE PREDICTION ON VISUAL GENOME", "section_text": "Dataset Visual Genome (VG) (Krishna et al. 2016) is an image dataset containing several types of. annotations: question/answer pairs, image captions, objects, object attributes and object relationship. We formulate the object attribute prediction as a multi-label classification task with reference. Given. an input image and a query (i.e., an object category), we predict the binary attributes of individual. objects specified by a query. We used 827 object classes and 749 attribute classes that appear more\nInput & Outputs PAN-CTX SAN HAN attention 2 attention 3 attention 4 (a) 7 query: 8 D answer: red SAN: white 3 3 (b) HAN: yellow 8 PAN: red O (c) (d)\nTable 2: Weighted mAP of the attribute prediction and TPR of attentions measured with ground-trut bounding boxes on VG dataset.\nFigure 8: Visualization of example attentions of HAN and PAN-CTX on VG dataset. Attention maps present magnitude of attended features and red boxes show ground truth bounding boxes of query\nthan 100 times. A total of 86,674 images with 667,882 object attribute labels are used for our experiment, and they are split into training, validation and test sets each containing 43,337, 8,667 and 34,670 images. The task is challenging because scales of objects largely vary and the attributes may be associated with very small objects.\nWe proposed a novel hierarchical attention network, which progressively attends to regions of interes. through multiple layers of a CNN. As the model is recursively applied to multiple layers of CNN. with an inherent feature hierarchy, it accurately predicts regions of interest with variable sizes anc shapes. We also incorporate local contexts into our attention network for more robust estimatior. The proposed network can be trained end-to-end with standard error backpropagation. We tested the model on both synthetic and real datasets, and demonstrated significant performance improvemen over existing attention methods.\nattention only w/ prior mAP TPR mAP TPR SAN 27.62 15.01 31.84 17.65 HAN 27.72 17.24 31.93 19.70 PAN-CTX 29.38 18.01 32.50 20.17 Query: shoe HAN PAN-CTX Input Image attention map masked image attention map 2 attention map 3 masked image 3\nExperimental Settings and Results We mainly compare our algorithm with SAN and HAN since. STNs could not learn a proper attention process on VG. The transformer layers of STNs generated. padded images of different sizes and rotations to encode the query vector to fit the query-specific. biases. All the networks share the same CNN architecture of VGG-16 network (Simonyan &. Zisserman2015), which is pretrained on ImageNet (Deng et al.|2009) and is further fine-tuned. on the VG dataset for the attribute prediction. For SAN and HAN, an attention layer is attached. to the last pooling layer in VGG-16 while PAN stacks an additional attention layer with the local. contexts F,, with 8 = 2 on top of each of the last three pooling layers in VGG-16. We skip to place. attention layers at the first two pooling layers (pool1 and pool2) because the features in those layers. are not discriminative enough to filter out. 
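For reference, the layer placement just described can be summarized as a small configuration sketch; the dictionary structure and field names are ours, not from the paper.

```python
# Illustrative summary of the attention placement described above (assumed
# names; only the layer choices and delta value come from the text).
PAN_VG_SETUP = {
    "backbone": "VGG-16, ImageNet-pretrained, fine-tuned on Visual Genome",
    "attention_after": ["pool3", "pool4", "pool5"],  # att1, att2, att3
    "skipped_pools": ["pool1", "pool2"],             # features too weak to filter
    "local_context_delta": 2,                        # 5x5 neighborhood per location
}
```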
We also test models with an object class conditional prior. In these models, the final attended feature is fused with the query once more by a fully connected layer, allowing the network to reflect the conditional distribution of the attributes given the query. Refer to Appendix B for more detailed descriptions of the network architectures.

All three models are evaluated in terms of mean average precision (mAP) weighted by the frequencies of the attribute labels in the test set, where the computation of mAP follows the PASCAL VOC protocol (Everingham et al., 2010). The proposed method consistently achieves the best weighted mAP scores in both experimental settings, as shown in Table 2, but the gain reduces with the object class conditional prior. Table 2 also shows the TPR of each model measured with the ground-truth bounding boxes for evaluating the attention qualities, and the proposed method shows the best TPR. Figure 8 presents the qualitative results of the proposed network and HAN on the VG dataset.

REFERENCES

Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Deep compositional question answering with neural module networks. In CVPR, 2016.

Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. In ICLR, 2015.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In ICML, pp. 1462-1471, 2015.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In NIPS, pp. 2008-2016, 2015.

Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, et al. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. arXiv preprint arXiv:1602.07332, 2016.

Hugo Larochelle and Geoffrey E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In NIPS, pp. 1243-1251, 2010.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In EMNLP, 2015.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In ICLR, 2015.

Jianxiong Xiao, Krista A. Ehinger, James Hays, Antonio Torralba, and Aude Oliva. SUN database: Exploring a large collection of scene categories. International Journal of Computer Vision, pp. 1-20, 2014.

Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In NIPS, pp. 2204-2212, 2014.

Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering. arXiv preprint arXiv:1511.02274, 2015.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.

Appendices

In this appendix section, we explain the soft attention network introduced in Xu et al. (2015) and used as one of the baseline models in our experiments.
Given a feature map, the soft attention network calculates an attention probability map and uses it to compute the attended feature. To decide where to attend, the soft attention model first obtains a feature map f ∈ R^{H×W×C}, where W is the width, H is the height, and C is the number of channels. The input feature map f is generally the CNN output for an input image I, which is given by

f = CNN(I).

For each feature f_{i,j} ∈ R^C at location (i, j) of the feature map f and the query q, the attention probability map, denoted by α = [α_{i,j}], is given by

s_{i,j} = g_att(f_{i,j}, q; θ_att),
α_{i,j} = softmax_{i,j}(s),   with 0 ≤ α_{i,j} ≤ 1 and Σ_{i,j} α_{i,j} = 1,

where g_att(·) is the attention network parameterized by θ_att and s = [s_{i,j}] is an attention score map. The attention score map is normalized with a softmax to produce the attention probabilities α_{i,j}. Note that g_att(·) can be any kind of network, such as a multilayer perceptron.

Let f_{i,j} ∈ R^C be a vector of the feature map f at (i, j). Then, the attended feature, denoted f_att ∈ R^C, is computed by a weighted sum of features as

f_att = Σ_{i=1}^{H} Σ_{j=1}^{W} α_{i,j} f_{i,j}.

Ideally, the locations in the feature map corresponding to the receptive fields containing an object of interest should have the maximum attention probability while the others have zero probability, similarly to hard attention. This holds true only if the target object is perfectly aligned with the receptive fields in terms of position and scale. In practice, however, object location and size vary, whereas the structure of the receptive fields is fixed. Note that there exists a trade-off between attention resolution and representation power: if we choose to extract deep, high-level features, we give up high resolution in attention; on the other hand, we need to rely on shallow representations to increase attention resolution. This trade-off limits the performance of existing attention models.
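The following minimal NumPy sketch implements the equations above, using a single linear layer as g_att; the paper allows g_att to be any network (e.g., an MLP), so the weight shapes here are illustrative assumptions rather than the architecture used in the experiments.

```python
import numpy as np

def softmax2d(scores):
    """Normalize an H x W score map into attention probabilities."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def soft_attention(feature_map, query, w, b):
    """Soft attention: score each location, normalize, take a weighted sum.

    feature_map: (H, W, C) array, e.g. a CNN output f = CNN(I).
    query:       (Q,) array encoding the query.
    w, b:        parameters of a linear g_att over [f_ij, q] (a stand-in
                 for the multilayer perceptron allowed by the text).
    """
    H, W, C = feature_map.shape
    flat = feature_map.reshape(H * W, C)
    inputs = np.concatenate([flat, np.tile(query, (H * W, 1))], axis=1)
    scores = inputs @ w + b                      # s_ij = g_att(f_ij, q)
    alpha = softmax2d(scores.reshape(H, W))      # 0 <= alpha_ij <= 1, sums to 1
    f_att = (alpha.reshape(H * W, 1) * flat).sum(axis=0)  # weighted sum
    return f_att, alpha

H, W, C, Q = 4, 4, 8, 5
rng = np.random.default_rng(0)
f_att, alpha = soft_attention(rng.normal(size=(H, W, C)),
                              rng.normal(size=Q),
                              rng.normal(size=(C + Q,)) * 0.1, 0.0)
print(f_att.shape, alpha.sum())  # (8,) 1.0
```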
NETWORK ARCHITECTURES ON VISUAL GENOME

In PAN, the convolution and pooling layers of the VGG-16 network (Simonyan & Zisserman, 2015), pretrained on ImageNet (Deng et al., 2009), are used, and three additional attention layers att1, att2 and att3 are stacked on top of the last three pooling layers pool3, pool4 and pool5, respectively. In att1 and att2, the attention function takes the local context in addition to the query q and the target feature f_{i,j} to obtain the attention score s_{i,j}. The size of the local contexts is matched to that of the receptive fields of the next three convolution layers before the next attention, by setting δ = 3. Three convolutions, identical to the next three convolution layers in the CNN, first encode the target feature and the local context, and are initialized with the same weights as in the CNN (Figure 9b). This embedding is then concatenated with the one-hot query vector and fed to two fully connected layers, one fusing the two modalities and the other estimating the attention score. In att3, the attention function takes the concatenation of the query and the target feature and feeds it to two fully connected layers (Figure 9c). The attended feature f_att obtained from the last attention layer att3 is finally fed to a classification layer to predict the attributes.

The baseline networks share the same VGG-16 CNN architecture as PAN (Figure 9a). In SAN, the soft attention described in Appendix A is attached to the top of the CNN. In HAN, the hard attention (Xu et al., 2015) is attached to the top of the CNN instead. The hard attention is implemented to maximize the marginal likelihood directly during training, while the original paper maximized the variational lower bound of the marginal likelihood because of the large attention search space. For testing, we also directly calculate the marginal likelihood instead of picking a single prediction with the highest attention probability. This is possible because of the relatively small attention search space in our problem compared to image captioning, where the search space of attention increases exponentially with the length of the sequence. The attention functions in the baselines consist of two fully connected layers taking the concatenation of the query and the target feature, as in the attention function of att3 in PAN.

The proposed network and the baselines described above use the query only for obtaining the attention probabilities, which gives us the pure strength of the attention models. However, the target object class, represented by the query, gives much more information than just attention: it confines the possible attributes and filters out irrelevant attributes. For these reasons, we additionally experiment with a set of models that incorporate the target object class conditional prior for attribute prediction. In these models, the query is fused with the attended feature f_att by an additional fully connected layer, and the fused feature is used as the input of the classification layer.

Figure 9: Detailed illustration of network architectures in the Visual Genome experiments. (a) Network architectures of the models: the VGG-16 convolution/pooling stack (conv1_1 through pool5), with att1-att3 after pool3-pool5 for PAN, and a single soft/hard attention layer after pool5 for SAN/HAN, followed by a classification layer. (b) Architecture of the intermediate attention function g_att(·) in att1 and att2 of PAN: a feature+context embedding (two 3×3 convolution layers), a fusion fully connected layer, and an estimation layer with one activation. (c) Architecture of the attention functions of SAN and HAN and the last attention function of PAN: a fusion fully connected layer with 512 activations and an estimation layer with one activation.

Figure 10: Qualitative results of SAN, HAN and PAN-CTX on the MREF and MDIST datasets. For each example, attended images are shown in the first row and the corresponding attention maps are shown in the second row. In the case of the progressive attention network, the last three attention maps (attention 2, 3 and 4) are visualized. As can be seen, attention maps at deeper layers reveal the evidence of aggregation over earlier attention maps.
[Figure 11 panels: input images with queries and ground-truth answers, per-model predictions, and attention maps for SAN, HAN and PAN-CTX (attention 2-4).]
Figure 11: More qualitative results of SAN, HAN and PAN-CTX on the MBG dataset.

Figure 12: Two common failure cases of attention models on the MBG dataset. (a) The models attend to a part of a larger structure which resembles the target object. (b) The models are confused by background distractors that are similar to the target object. Although they fail, the examples show that the results of PAN-CTX are more visually interpretable (attended to query-like structures).

Figure 13: Qualitative results of SAN, HAN and PAN-CTX on the VG dataset. For each example, the attended images are presented in the first row while their attended feature maps are shown in the second row. In the case of PAN, the last two attention maps are visualized, where the attention maps at deeper layers reveal the evidence of aggregation of attention information over previous layers. The red boxes within the final attended images represent the ground-truth bounding boxes for the query object annotated in the VG dataset. Each object may have multiple bounding boxes annotated by different annotators. The annotated answer is presented in the first column. The percentage for each method is the probability assigned to the ground-truth answer by the corresponding method.

Figure 14: More qualitative results of SAN, HAN and PAN-CTX on the VG dataset.
SyVVJ85lg

PALEO: A PERFORMANCE MODEL FOR DEEP NEURAL NETWORKS

Evan R. Sparks
sparks@cs.berkeley.edu

Hang Qi
hanggi@cs.ucla.edu

Ameet Talwalkar
ameet@ucla.edu

ABSTRACT

Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called PALEO. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, PALEO can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that PALEO is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.

1 INTRODUCTION

Deep learning has been successfully applied in many areas including natural language processing and computer vision. The scale of modern datasets and the millions to billions of parameters in these deep networks pose new challenges when designing computational systems that leverage parallel and distributed computing.
Indeed, several important open questions remain:

- How fast can we train or evaluate a model on a user's given hardware?
- For a given architecture, how can a user best leverage parallel and distributed computation?
- How can we design a new neural network architecture that can be trained and evaluated efficiently under common hardware setups?

In response to these fundamental questions, various software packages and systems have been painstakingly developed, e.g., DistBelief (Dean et al., 2012), TensorFlow (Abadi et al., 2015), MXNet (Chen et al., 2015), SparkNet (Moritz et al., 2015), and FireCaffe (Iandola et al., 2016). Moreover, expensive benchmarking efforts, e.g., Chintala et al. (2016), have performed brute-force profiling of some of these deep learning systems on a handful of network architectures.

In this work we aim to tackle these questions by taking an analytical approach to model the performance of arbitrary learning systems. Our work hinges on the observation that a neural network architecture is a declarative specification of the forward and backward propagation steps required for training and deploying the network. However, given this specification, there is a rich design space of algorithms, hardware choices, and communications strategies to most efficiently execute these specifications. We build a novel performance model called PALEO(1) that maps this declarative specification to arbitrary points in this design space to estimate the execution time of training and deploying deep neural networks.(2) PALEO applies broadly to a wide variety of neural network architectures and for arbitrary learning systems within this design space, and thus can serve as a valuable tool for practitioners and developers to answer the questions mentioned above.

(1) Open-sourced at https://github.com/TalwalkarLab/paleo

Training deep neural networks can be very time and resource consuming, and it is not uncommon for the training of a model to take days across tens or hundreds of machines. Several high-level strategies have been proposed to accelerate this process, and these strategies collectively define the design space considered by PALEO.

Hardware acceleration approaches are designed to accelerate the computation of the forward and backward passes and often make use of specialized hardware, such as GPUs (Coates et al., 2013), or more recently custom hardware designed specifically for deep learning (Jouppi, 2016). PALEO accepts constants associated with hardware as input (e.g., peak FLOPS, network bandwidth) and automatically adapts to changes in this input.

Software acceleration via specialized libraries, e.g., cuda-convnet (Krizhevsky, 2014a) and cuDNN (Chetlur et al., 2014), and highly-optimized algorithms for commonly used primitives, e.g., Chetlur et al. (2014) and Lavin (2016), can also be used to accelerate deep model training. PALEO dynamically picks among the best available implementations for each layer at execution time.

Parallelization is a natural approach to consider, and can involve training a neural network with many computational devices (e.g., CPUs, GPUs) on a single machine, or across a network. There are two major parallelization strategies when it comes to training deep neural network models at scale: data parallelism and model parallelism. In classical data parallel systems, each worker stores an identical copy of the model and computes gradients only on a shard of the training examples, and these gradients are aggregated to update the model. In contrast, model parallel systems shard the model itself across the workers, while the training data may be stored on each worker or sharded across the workers. PALEO models both data and model parallel settings.

Communication schemes have also been explored to accelerate incremental model updates across distributed workers. Three of the most common schemes are (Iandola et al., 2016; Zhao & Canny, 2013): (i) the OneToAll scheme, which has a 2KT communication time, as a master node must communicate with all K workers individually, where T is the time for communicating the data through one link in the network; (ii) the Tree AllReduce scheme, which takes 2 log2(K)T for weights to be aggregated and broadcast to all workers following a tree topology; and (iii) the Butterfly AllReduce scheme, in which all workers receive aggregated weights in log2(K)T using a butterfly network. We restrict the focus of PALEO to distributed communication schemes that return equivalent results to serial executions, and we thus do not consider the recently introduced butterfly mixing scheme of Zhao & Canny (2013), or non-deterministic asynchronous parameter servers.

We now present PALEO, a model for the lean consumption of resources during the training of DNNs. PALEO decomposes the total execution time into computation time and communication time; both are estimated for each pass of a neural network's evaluation given user-specified choices within the design space of algorithms, hardware, and communication strategies. Figure 1 illustrates the overall idea. The computation time is calculated from factors including the size of the computation inputs imposed by the network architecture, the complexity of the algorithms and operations involved in the network layers, and the performance of the hardware to be used.
The communication time is estimated based on the computational dependencies imposed by the network, the communication bandwidth of the hardware, and the assumed parallelization schemes. Once the network architecture and design space choices are fixed, all of the key factors in PALEO can be derived, and we can estimate execution time without actually implementing the entire network and/or an underlying software package.

(2) Training a neural network involves both forward and backward propagation, whereas deploying a trained network on a new data point involves only forward propagation. Thus, estimating the execution time of model training encompasses both model training and deployment, and is the focus of this work.

[Figure 1 diagram: the network architecture determines memory (data, weights, gradients, activations), dependencies, and complexity (FLOP counts); design-space choices (operation selection: GEMM, FFT, tiled FFT; parallelization strategies: model parallel, data parallel; communication schemes: OneToAll, Tree AllReduce, Butterfly AllReduce) and hardware parameters (computation speed in TFLOPS, communication bandwidth in GB/s; scale-up GPUs on one machine, scale-out CPU/GPU clusters) feed the computation and communication terms of execution time.]
Figure 1: Overview of the PALEO modeling approach. PALEO decomposes execution time into computation time and communication time, which can be derived from various factors implicitly specified by network architectures and hardware configurations.

3.1 COMPUTATION MODELING

We first describe the computation model on a single machine. The computation in a neural network can be expressed as a directed graph N = ({u^(i)}_{i=1}^{n}, {(u^(i), u^(j))}), where each node u^(i) is associated with an operation f^(i) on a device d^(i); each directed edge (u^(i), u^(j)) represents the dependency that operation f^(j) cannot be executed until f^(i) is finished. We use Pa(u^(j)) to represent the set of immediate parent nodes of u^(j). We model each layer in the neural network as a node, and the connections between layers as edges. In the following text, we omit the superscript index when there is no ambiguity.

3.1.1 COMPUTATION TIME FOR A SINGLE LAYER

To model the runtime of a layer u, we consider the operation f and decompose the execution time of this operation into three terms (as shown in Figure 2a): the time to fetch the input produced by its parent layers, R(Pa(u)); the time to perform the computation of f on the designated device d, i.e., C(f, d); and the time to write the outputs to the local memory, W(f, d). Assuming a sequential execution, the runtime for a node u can be written as a simple summation:

T(u) = R(Pa(u)) + C(f, d) + W(f, d)

Among the three terms, the computation time C(f, d) is calculated as the FLOP (floating-point operation) count of the operation divided by the computation speed (FLOPS; floating-point operations per second) of the device: C(f, d) = FLOPs(f)/speed(d). The IO times R(Pa(u)) and W(u) are calculated as the size of the memory footprints involved in the computation divided by the IO bandwidth of the device. When inputs must be fetched from other devices, e.g., in the case of model parallelism, this IO bandwidth refers to the communication bandwidth between the two devices. PALEO treats the speed and bandwidth of devices as parameters given to the model so that users can configure them to reflect their specific setups.
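The per-layer model above reduces to a few divisions once FLOP counts, memory footprints, and device constants are fixed; a minimal sketch follows, with hypothetical device numbers.

```python
def layer_time(flops, bytes_in, bytes_out, device):
    """T(u) = R(Pa(u)) + C(f, d) + W(f, d) for one node, in seconds.

    device: dict with sustained 'flops' (FLOP/s) and 'bandwidth' (byte/s),
    treated as user-supplied constants as described in the text.
    """
    read = bytes_in / device["bandwidth"]     # fetch inputs, R(Pa(u))
    compute = flops / device["flops"]         # C(f, d)
    write = bytes_out / device["bandwidth"]   # W(f, d)
    return read + compute + write

# Hypothetical device: 6 TFLOPS compute, 200 GB/s memory bandwidth.
device = {"flops": 6e12, "bandwidth": 200e9}
# A layer performing 2 GFLOPs over 50 MB of inputs and 50 MB of outputs:
print(layer_time(2e9, 50e6, 50e6, device))
```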
Using this per-layer model, we next describe how to model the computation time of an entire network. We subsequently present FLOP counts for layer operations commonly used in modern DNNs in Section 3.1.3.

We first consider simple sequential structures where layers are constructed one after another, as in Figure 2b. The total execution time can be calculated as the sum of the execution times of all layers, T(N) = Σ_{i=1}^{n} T(u^(i)). While this calculation may seem trivial at first glance, it forms the foundation for modeling execution time for more complex architectures.

[Figure 2 panels: (a) a node with fetch-inputs, operation f, and write-outputs stages; (b) a sequential Conv-Pooling-Conv-Pooling-FC chain; (c) two parallel branches G^(1) on Device 1 and G^(2) on Device 2.]
Figure 2: (a) The execution time of a node in the computation graph consists of the time for fetching input, computing results, and writing results to memory. (b) An example of a sequential computation graph segment. (c) An example of a parallel computation graph segment.

Parallel structures are not uncommon in DNNs; for example, the Inception model (Szegedy et al., 2015a) contains layers that can be evaluated simultaneously, and layers on different workers can run in parallel in model parallel setups (Dean et al., 2012). Figure 2c illustrates a parallel structure where two convolutional layers (each followed by a pooling layer) are scheduled to be executed on two devices.

To model the computation time of parallel structures, we identify synchronization barriers before and after every parallel structure and introduce the notion of a supernode U = {G^(i)}_{i=1}^{k} as a set of disjoint subgraphs sandwiched by the synchronization barriers in the computation graph. When substituting the subgraphs with the supernode, the network is reduced to the sequential structure described above. For the supernode, the execution time T(U) is within the range [max_i T(G^(i)), Σ_i T(G^(i))], where the lower bound corresponds to perfect parallelization and the upper bound corresponds to sequential execution. Note that the execution time of a subgraph T(G^(i)) can be calculated recursively.
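The sequential sum and the supernode bounds can be captured in a few lines; the timings below are made-up placeholders rather than measurements.

```python
def sequential_time(layer_times):
    """T(N) = sum_i T(u_i) for a purely sequential chain."""
    return sum(layer_times)

def supernode_bounds(subgraph_times):
    """Execution-time range for a parallel supernode U = {G_1, ..., G_k}:
    the lower bound assumes perfect parallelization across devices,
    the upper bound assumes fully sequential execution."""
    return max(subgraph_times), sum(subgraph_times)

# Two branches of 30 ms and 50 ms between synchronization barriers:
lo, hi = supernode_bounds([0.030, 0.050])
print(lo, hi)  # 0.05 0.08
# Reduced sequential structure: pre-branch, supernode (optimistic), post-branch.
print(sequential_time([0.010, lo, 0.005]))  # 0.065
```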
3.1.3 COMPUTATION MODELING FOR LAYER OPERATIONS

In modern DNNs, the convolutional layer is one of the most commonly used and computationally intensive types of layer. For this reason, there have been many heavily optimized implementations (Chetlur et al., 2014; Vasilache et al., 2015; Lavin, 2016). Deriving plausible FLOP counts for other types of layers is a straightforward process, and in this section we consider two leading implementations of convolutional operations: matrix multiplication and Fast Fourier Transform.

Following the notation used by Chetlur et al. (2014), a 2D convolutional layer during forward propagation(3) takes an input feature map D ∈ R^{N×C×H×W} (a batch of N input feature maps of shape H × W with C channels) and a set of convolutional filters F ∈ R^{K×C×R×S} (K filters of shape R × S with C channels). It produces N × K feature maps, each of shape P × Q, which can be calculated from the shapes of the inputs and filters together with additional striding and padding parameters. The FLOP count for the convolution operation can be expressed as 2KCRS · NPQ. A commonly used implementation is to reduce convolution operations to matrix multiplications, which can be efficiently computed with well-optimized SGEMM routines on various platforms. Although these FLOP counts ignore auxiliary operations (e.g., indexing arithmetic in efficient implementations), they nonetheless provide a good estimate of FLOP counts for matrix multiplication implementations.

(3) Our arguments generalize to N-dimensional settings, and similar arguments apply for the backward pass.

Another implementation is based on the Fast Fourier Transform (Vasilache et al., 2015): both input feature maps and filters are transformed into the frequency domain, element-wise multiplications are performed, and an inverse Fourier transform follows. This implementation introduces computation and memory overhead in the discrete Fourier transforms, but reduces the computational complexity to O(NCKHW + (NC + CK + NK) · HW · log(HW)). Convolutional layers with large filters or a large problem size can benefit from FFT implementations. When counting FLOPs, it is not possible to get exact counts without knowing the underlying implementation details. In PALEO, we adopt the commonly used FFT complexity 5n log2(n) as the FLOP count for complex-valued transformations of size n (Cooley & Tukey, 1965). To account for the IO overhead caused by auxiliary memory, PALEO estimates the memory size required for complex-valued matrices in the frequency domain and incorporates it into the data reading and writing terms. For FFT-based implementations with tilings, PALEO estimates the number of tiles from the convolution specifications.

The choice between matrix multiplication and FFT is problem specific, as it depends on the filter size, strides, input size of the convolutional layers, and memory workspace. In order to derive reasonable estimates for user-specified DNNs comparable to real executions, it is important for PALEO to make decisions comparable to real-world systems. Two common approaches are employed in existing DNN software frameworks and libraries to choose between these algorithms: (i) using predefined heuristics based on offline benchmarks; (ii) autotuning to empirically evaluate available algorithms on the given specification. Since autotuning is tied to platform and software implementations, for maximum generality PALEO by default takes the first approach. In particular, PALEO uses heuristics from cuDNN to make algorithm choices while also accounting for user preferences.
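A small sketch of the two FLOP-count formulas above, using an AlexNet-like layer shape for illustration; the constants follow the text, and real implementations add bookkeeping that the model deliberately ignores.

```python
import math

def conv_gemm_flops(n, c, k, r, s, p, q):
    """Matrix-multiplication lowering: 2 * K * C * R * S * N * P * Q
    (one multiply and one add per contribution to each output element)."""
    return 2 * k * c * r * s * n * p * q

def conv_fft_flops(n, c, h, w, k):
    """Rough FFT-based cost: elementwise products in the frequency domain
    plus 5 m log2(m) per size-m transform; the (NC + CK + NK) factor counts
    transforms of inputs, filters, and outputs."""
    m = h * w
    return n * c * k * m + 5 * (n * c + c * k + n * k) * m * math.log2(m)

# AlexNet-like layer: N=128, C=64, 27x27 input, K=192 filters of 5x5, 27x27 output.
gemm = conv_gemm_flops(n=128, c=64, k=192, r=5, s=5, p=27, q=27)
fft = conv_fft_flops(n=128, c=64, h=27, w=27, k=192)
print(f"GEMM ~{gemm / 1e9:.1f} GFLOPs, FFT ~{fft / 1e9:.1f} GFLOPs")
```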
3.2 COMMUNICATION MODELING

We now describe our modeling of communication among multiple workers. Let |D| be the size of the data to be communicated between two workers, and define B as the bandwidth of the communication channel. Then the communication time can simply be written as T_comm = |D|/B. By using different bandwidth configurations, PALEO works for both scale-up setups (multiple GPUs on one machine) and scale-out setups (multiple machines in a cluster). Moreover, in data-parallel settings, an AllReduce operation is performed to synchronize model parameters across all workers after every backward pass. PALEO considers three communication schemes: OneToAll, Tree AllReduce, and Butterfly AllReduce. The communication time under these three schemes is described in Section 2.

3.3 PLATFORM PERCENT OF PEAK

Thus far, we have assumed that deep learning software platforms make perfect use of their underlying hardware. That is, that the CPUs and GPUs are operating at "peak FLOPS", and that network and IO links are fully saturated. This has allowed our model to be platform independent.

However, this assumption is unreasonable in practice. For instance, achieving peak FLOPS is a difficult proposition, usually requiring customized libraries developed by organizations with intimate knowledge of the underlying hardware, e.g., Intel's MKL (Intel, 2009), ATLAS (Whaley & Petitet, 2005), and cuDNN. Even these specially tuned libraries may fall short of peak execution by as much as 40% (atl). Further, any computation done outside the scope of PALEO (e.g., job scheduling, data copying) will exacerbate the observed inefficiency in practice. Sometimes such inefficiencies are warranted from the perspective of ease of programmability or maintenance of the learning platforms.

Rather than trying to measure and capture every source of inefficiency in every learning framework, we take a small number of representative deep learning workloads which contain convolutions, pooling, dropout, and fully connected layers and run them for a short time on a single GPU. Given observed total throughput and estimated total throughput on this benchmark, we fit a scaling constant to estimate a platform percent of peak (PPP) parameter, which captures the average relative inefficiency of the platform compared to peak FLOPS. Highly specialized frameworks (e.g., cuDNN) will in general have a computational PPP that is close to 100%, while frameworks with higher overhead may have PPP constants closer to 50% or less.

We follow a similar benchmarking procedure to estimate the PPP of the communication link for TensorFlow. For the FireCaffe experiments, we estimate the communication PPP based on the empirical results for communication reported in Table 4 of the paper.
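The per-scheme costs from Section 2 combine with T_comm = |D|/B and a PPP discount into a short function; the worker counts, bandwidth, and parameter sizes below are illustrative assumptions.

```python
import math

def comm_time(num_bytes, bandwidth, workers, scheme, ppp=1.0):
    """One AllReduce-style synchronization: T = |D| / B per link traversal,
    scaled by the scheme-dependent number of traversals. ppp discounts the
    nominal bandwidth by the platform percent of peak."""
    t_link = num_bytes / (bandwidth * ppp)
    if scheme == "one_to_all":
        return 2 * workers * t_link
    if scheme == "tree_allreduce":
        return 2 * math.log2(workers) * t_link
    if scheme == "butterfly_allreduce":
        return math.log2(workers) * t_link
    raise ValueError(scheme)

# ~50M float32 parameters over a 20 Gbps link with 32 workers and PPP = 0.9:
params_bytes = 50e6 * 4
for s in ("one_to_all", "tree_allreduce", "butterfly_allreduce"):
    print(s, round(comm_time(params_bytes, 20e9 / 8, 32, s, ppp=0.9), 3), "s")
```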
4 EXPERIMENTS

We now present empirical results which illustrate that PALEO is robust to the choice of network architecture, hardware, communication schemes, and parallelization strategies.

We first compare PALEO-estimated runtimes with actual runtimes measured from TensorFlow(4) (Abadi et al., 2015) execution on two popular CNN architectures: the one-tower variant of AlexNet (Krizhevsky, 2014b) and the 16-layer VGG network (Simonyan & Zisserman, 2014). PALEO uses cuDNN heuristics to choose algorithms, and the auto-tuning mechanism in TensorFlow is disabled. Experiments are run on an NVIDIA TITAN X GPU with a 4 GB workspace limit.

For convolutional and fully connected layers, we evaluate forward computation, backward computation with respect to layer inputs, and backward computation with respect to filters separately (see Figure 4 in the appendix for the layer-by-layer comparison). Table 1 shows a comparison of the full forward pass and backward pass with all layers included. PALEO's per-layer estimates are quite close to the actual TensorFlow execution, with only one layer, 'fc6', consistently being underestimated by PALEO.(5) In spite of this issue with 'fc6', our full-pass estimates are remarkably accurate.

(4) TensorFlow 0.9 with cuDNN 4 backend.
(5) Examining the TensorFlow execution with the NVIDIA profiler revealed that TensorFlow spent two-thirds of its reported 'fc6' time in transforming data layout between NHWC and NCHW when calling the underlying cuBLAS primitives.

Table 1: Full pass time of TensorFlow and PALEO estimation on AlexNet and VGG-16.

                            Forward pass (ms)   Backward pass (ms)
AlexNet   TensorFlow              44.00              155.10
          PALEO Estimation        45.96              118.44
VGG-16    TensorFlow             400.46             1117.48
          PALEO Estimation       435.46             1077.27

We now revisit the questions posed at the beginning of the paper and demonstrate how PALEO can help in answering them. In this subsection we present three case studies. We extract experiment setups, including network architectures, hardware specifications, communication schemes, and parallelization strategies, from selected publications focusing on the scalability of CNNs. We then plug those configurations into PALEO and compare the simulated scalability results with the reported results in the original publications. Table 2 summarizes the configurations of PALEO in these experiments.

Table 2: PALEO configurations used in the case studies.

                      Case 1            Case 2             Case 3
Net                   NiN               Inception v3       AlexNet
Device                NVIDIA K20X       NVIDIA K20         NVIDIA K20
Workers               Up to 128         Up to 100          Up to 8
Bandwidth             70 Gbps           10 Gbps            6 GB/s
Communication         Tree AllReduce    Parameter Server   Various
Parallelization       Data Parallelism  Data Parallelism   Hybrid
Platform              Caffe             TensorFlow         cuda-convnet2
One Step Time(6):
  PALEO Estimation    1918 ms           4269 ms            402 ms
  Reported Time(7)    2275 ms           --                 418 ms

(6) Total time of forward pass, backward pass, and parameter update for one mini-batch on one worker.
(7) Reported times for Cases 1 and 3 are derived approximately from information in the publications. For Case 2 no run time information is provided.

FireCaffe (Iandola et al., 2016) adopts the Tree AllReduce communication scheme when training the NiN model (Lin et al., 2013) in data-parallel settings with up to 128 servers on the Titan supercomputer. They report a 38x speedup for NiN with batch size 1024 relative to single-GPU performance. Table 3 shows the results from PALEO compared with the results reported by FireCaffe.

Table 3: Comparison between PALEO estimation and FireCaffe for training NiN.

                      FireCaffe                PALEO Estimation
Workers  Batch size   Train Time   Speedup    Train Time   Speedup
1        256          5.8 days     1x         4.9 days     1x
32       256          11 hours     13x        7.6 hours    15.5x
32       1024         6 hours      23x        4.6 hours    25.3x
128      1024         3.6 hours    39x        2.3 hours    51.6x

Murray et al. (2016) reported their results in synchronously training the Inception model (Szegedy et al., 2015b) with TensorFlow and achieved a 56x speedup with 100 workers. They apply a weak scaling strategy with batch size 256 to keep GPUs saturated. Although Murray et al. (2016) leveraged a distributed parameter server rather than one of the three communication schemes considered in PALEO, the communication cost of Butterfly AllReduce can be viewed as a lower bound (Zhao & Canny, 2013). To account for the fact that they train with worker nodes, each of which has 8 GPUs, we assume a linear speedup for GPUs on the same host.
Figure 3a shows a comparison between the reported speedups and the PALEO-estimated speedups. For absolute runtime, in one of their experiments the model completes 20 epochs of training after 100 hours when using 8 Tesla K40s and a batch size of 256. PALEO projects a 111-hour runtime under the same setting.

4.2.3 CASE 3: ALEXNET WITH HYBRID PARALLELISM

Krizhevsky (2014b) describes a hybrid model and data parallelism approach for training AlexNet using up to 8 GPUs with a weak scaling strategy. In his setup, each of two CPUs connects to 4 GPUs, and the communication bandwidth is penalized by 50% across the two groups, as mentioned in the paper. Table 4 shows the comparison between PALEO's projection and the original result, which are quite similar. Moreover, whereas Krizhevsky (2014b) does not quantify the speedup of hybrid parallelism relative to strict data parallelism, PALEO simulates training the entire network with only data parallelism (see the last two columns of Table 4) in order to estimate this speedup.

Table 4: Comparison between PALEO estimation and Krizhevsky (2014b) for training AlexNet.

         One Weird Trick         PALEO Estimation
         Hybrid parallelism      Hybrid parallelism      Data parallelism
Workers  Time (h)    Speedup     Time (h)    Speedup     Time (h)    Speedup
1        98.95       1x          96.31       1x          96.31       1x
2        50.24       1.95x       49.57       1.94x       55.90       1.72x
4        26.20       3.74x       25.42       3.79x       32.82       3.03x
8        16.68       6.25x       14.37       6.70x       23.65       5.40x

In this subsection, we use PALEO in two hypothetical setups to analyze the scalability of AlexNet and a GAN model under different communication schemes.

4.3.1 ALEXNET IN A CLOUD-BASED SETUP

In this study, we present an analysis of data-parallel training of AlexNet. We assume a modern cloud setup with a cluster of servers, each equipped with an NVIDIA K80 GPU connected to a 20 Gbps network. In contrast to the Inception model with 23 million parameters, the one-tower variant of AlexNet has 50 million parameters and therefore doubles the communication workload when training with data parallelism.

We show strong scaling for all three communication schemes in Figure 3c. Even when assuming a fairly large batch size of 2048, which is beneficial in distributed settings, we see very modest speedups. The OneToAll scheme achieves a max speedup of less than 2x using 4 workers, while the communication-efficient Butterfly AllReduce scheme achieves a max speedup of roughly 5x when using 32 workers. The weak scaling results, shown in Figure 3b, show drastically improved scaling, as we observe nearly linear speedups as we increase the number of workers. However, it is important to note that we are increasing the effective batch size as we increase the number of workers, and it is well known that training with large effective batch sizes can yield models with substandard accuracy (Breuel, 2015).

[Figure 3 panels: projected speedup versus number of workers (up to 128) for OneToAll, Tree AllReduce and Butterfly AllReduce, with the reported speedups of Murray et al. (2016) overlaid in (a).]
Figure 3: Comparison of PALEO projected speedups for various networks under different scaling strategies and communication schemes. (a) Inception / weak scaling. (b) AlexNet / weak scaling. (c) AlexNet / strong scaling. (d) GAN / strong scaling.
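Speedup curves like those in Figure 3 follow from combining the computation and communication terms. A sketch under the simplifying assumption that per-step compute scales perfectly with batch sharding; all numbers below are illustrative, not the paper's configurations.

```python
import math

def comm_butterfly(num_bytes, bandwidth, workers):
    """Butterfly AllReduce: log2(K) link traversals of |D|/B each."""
    return math.log2(workers) * num_bytes / bandwidth

def strong_scaling_speedup(t_compute, workers, comm):
    """Fixed global batch: per-worker compute shrinks as 1/K."""
    return t_compute / (t_compute / workers + comm(workers))

def weak_scaling_speedup(t_compute, workers, comm):
    """Fixed per-worker batch: throughput gain over one worker."""
    return workers * t_compute / (t_compute + comm(workers))

params = 50e6 * 4            # ~50M float32 parameters (AlexNet-like)
link = 20e9 / 8              # 20 Gbps link, in bytes/s
comm = lambda k: comm_butterfly(params, link, k)
for k in (2, 4, 8, 16, 32, 64):
    print(k, round(strong_scaling_speedup(2.0, k, comm), 2),
             round(weak_scaling_speedup(2.0, k, comm), 2))
```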
4.3.2 GAN ARCHITECTURE

PALEO can be applied to architectures other than CNNs. We profile a generative adversarial network (GAN) inspired by Radford et al. (2015) for the LSUN dataset, with the same hardware assumptions as in the previous case study. Table 5 shows that the PALEO estimates are close to the empirical TensorFlow run times for both the discriminator and generator networks. Figure 3d plots the estimated speedups for training the model with a batch size of 256 on up to 128 workers under strong scaling. Without communication-intensive fully connected layers, training this GAN architecture is more scalable than AlexNet, but PALEO still predicts only an 8x sub-linear speedup with 64 workers.

Table 5: Full pass time of the discriminator and generator in a GAN architecture.

                                Forward pass (ms)   Backward pass (ms)
Discriminator  TensorFlow             30.19               77.39
               PALEO Estimation       27.55               79.25
Generator      TensorFlow            110.11              374.18
               PALEO Estimation      117.02              324.49

5 CONCLUSION

We introduced PALEO, an analytical performance model for exploring the space of scalable deep learning systems. By extracting computational requirements carried by neural network architectures and mapping them to the design space of software, hardware, and communication strategies, PALEO can effectively and accurately model the expected scalability and performance of a putative deep learning system.

REFERENCES

Atlas timings. URL http://math-atlas.sourceforge.net/timing/.

Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

Thomas Breuel. The effects of hyperparameters on SGD training of neural networks. arXiv:1508.02788, 2015.

Tianqi Chen et al. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274, 2015.

Jeffrey Dean et al. Large scale distributed deep networks. In NIPS, pp. 1223-1231, 2012.

Forrest N. Iandola, Khalid Ashraf, Matthew W. Moskewicz, and Kurt Keutzer. FireCaffe: Near-linear acceleration of deep neural network training on compute clusters. In CVPR, 2016.

Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv:1404.5997, 2014b.

Andrew Lavin. Fast algorithms for convolutional neural networks. In CVPR, 2016.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Nicolas Vasilache, Jeff Johnson, Michaël Mathieu, Soumith Chintala, Serkan Piantino, and Yann LeCun. Fast convolutional nets with fbfft: A GPU performance evaluation. In ICLR, 2015.

Huasha Zhao and John Canny. Butterfly mixing: Accelerating incremental-update algorithms on clusters. In SIAM Conf. on Data Mining. SIAM, 2013.

Intel Math Kernel Library. Reference Manual. Intel Corporation, Santa Clara, USA, 2009. ISBN 630813-054US.
Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv:1410.0759, 2014.

Philipp Moritz, Robert Nishihara, Ion Stoica, and Michael I. Jordan. SparkNet: Training deep networks in Spark. arXiv:1511.06051, 2015.

We include supplementary figures in this appendix due to the space constraint.

[Figure 4 panels: per-layer forward, backward-w.r.t.-inputs, and backward-w.r.t.-filters times in ms, PALEO Estimation versus TensorFlow, for conv1-conv5 and fc6-fc8 of AlexNet and for conv1_1 through conv5_3 and fc6-fc8 of VGG-16.]
Figure 4: (a) Layer-wise comparison in AlexNet. (b) Layer-wise comparison in VGG-16.
rJY0-Kcll

OPTIMIZATION AS A MODEL FOR FEW-SHOT LEARNING

Sachin Ravi* and Hugo Larochelle
Twitter, Cambridge, USA
{sachinr,hugo}@twitter.com

ABSTRACT

Though deep neural networks have shown great success in the large-data domain, they generally perform poorly on few-shot learning tasks, where a classifier has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high-capacity classifiers requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network classifier in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner (classifier) network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.

Deep learning has shown great success in a variety of tasks with large amounts of labeled data, in image classification (He et al., 2015), machine translation (Wu et al., 2016), and speech modeling (Oord et al., 2016). These achievements have relied on the fact that optimization of these deep, high-capacity models requires many iterative updates across many labeled examples. This type of optimization breaks down in the small-data regime, where we want to learn from very few labeled examples. In this setting, rather than having one large dataset, we have a set of datasets, each with few annotated examples per class. The motivation for this task lies not only in the fact that humans, even children, can usually generalize after just one example of a given object, but also because models excelling at this task would have many useful applications. Firstly, they would help alleviate data collection, as we would not require millions of labeled examples to attain reasonable performance. Furthermore, in many fields, data exhibits the characteristic of having many different classes but few examples per class. Models that are able to generalize from few examples would be able to capture this type of data effectively.

There seem to be two main reasons why gradient-based optimization fails in the face of few labeled examples. Firstly, the variants of gradient-based optimization algorithms, such as momentum (Nesterov, 1983), Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), and ADAM (Kingma & Ba, 2014), weren't designed specifically to perform well under the constraint of a set number of updates. Specifically, when applied to non-convex optimization problems, with a reasonable choice of hyperparameters these algorithms don't have very strong guarantees of speed of convergence, beyond that they will eventually converge to a good solution after what could be many millions of iterations. Secondly, for each separate dataset considered, the network would have to start from a random initialization of its parameters, which considerably hurts its ability to converge to a good solution after a few updates. Transfer learning (Caruana, 1995; Bengio et al., 2012; Donahue et al.,
2013) can be applied to alleviate this problem by fine-tuning a pre-trained network from another task which has more labelled data; however, it has been observed that the benefit of a pre-trained network greatly decreases as the task the network was trained on diverges from the target task (Yosinski et al., 2014). What is needed is a systematic way to learn a beneficial common initialization that would serve as a good point to start training for the set of datasets being considered. This would provide the same benefits as transfer learning, but with the guarantee that the initialization is an optimal starting point for fine-tuning.

* Work done as an intern at Twitter. Sachin is a PhD student at Princeton University and can be reached at sachinr@princeton.edu.

Previous work has suggested one manner in which to acquire quick knowledge from few examples, through the idea of meta-learning (Thrun, 1998; Schmidhuber et al., 1997). Meta-learning suggests framing the learning problem at two levels. The first is quick acquisition of knowledge within each separate task presented. This process is guided by the second, which involves slower extraction of information learned across all the tasks.

We present a method here that addresses the weakness of neural networks trained with gradient-based optimization on the few-shot learning problem by framing the problem within a meta-learning setting. We propose an LSTM-based meta-learner optimizer that is trained to optimize a learner neural network classifier. The meta-learner captures both short-term knowledge within a task and long-term knowledge common among all the tasks. By using an objective that directly captures an optimization algorithm's ability to have good generalization performance given only a set number of updates, the meta-learner model is trained to converge a learner classifier to a good solution quickly on each task. Additionally, the formulation of our meta-learner model allows it to learn a task-common initialization for the learner classifier, which captures fundamental knowledge shared among all the tasks.

TASK DESCRIPTION

We first begin by detailing the meta-learning formulation we use. In the typical machine learning setting, we are interested in a dataset D and usually split D so that we optimize parameters θ on a training set D_train and evaluate its generalization on the test set D_test. In meta-learning, however, we are dealing with meta-sets 𝒟 containing multiple regular datasets, where each D ∈ 𝒟 has a split of D_train and D_test.

We consider the k-shot, N-class classification task, where for each dataset D, the training set consists of k labelled examples for each of N classes, meaning that D_train consists of k · N examples, and D_test has a set number of examples for evaluation. We note that previous work (Vinyals et al., 2016) has used the term episode to describe each dataset consisting of a training and test set.

In meta-learning, we thus have different meta-sets for meta-training, meta-validation, and meta-testing (𝒟_meta-train, 𝒟_meta-validation, and 𝒟_meta-test, respectively). On 𝒟_meta-train, we are interested in training a learning procedure (the meta-learner) that can take as input one of its training sets D_train and produce a classifier (the learner) that achieves high average classification performance on its corresponding test set D_test. Using 𝒟_meta-validation we can perform hyper-parameter selection of the meta-learner, and we evaluate its generalization performance on 𝒟_meta-test.
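A small sketch of how one such dataset (episode) can be drawn from a meta-set; the dictionary layout and the helper name are our own illustration.

```python
import random

def sample_episode(meta_set, n_classes=5, k_shot=1, n_test_per_class=2, rng=random):
    """Draw one few-shot dataset D = (D_train, D_test) from a meta-set.

    meta_set: dict mapping class label -> list of examples; the classes in the
    meta-training meta-set are disjoint from those used at meta-test time.
    Labels are re-indexed per episode (0..N-1)."""
    classes = rng.sample(sorted(meta_set), n_classes)
    d_train, d_test = [], []
    for episode_label, cls in enumerate(classes):
        examples = rng.sample(meta_set[cls], k_shot + n_test_per_class)
        d_train += [(x, episode_label) for x in examples[:k_shot]]
        d_test += [(x, episode_label) for x in examples[k_shot:]]
    return d_train, d_test

# Toy meta-set: 20 classes with 10 dummy examples each.
meta_train = {c: [f"img_{c}_{i}" for i in range(10)] for c in range(20)}
d_train, d_test = sample_episode(meta_train, n_classes=5, k_shot=1)
print(len(d_train), len(d_test))  # 5 10
```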
For this formulation to correspond to the few-shot learning setting, each training set in the datasets D ∈ 𝒟 will contain few labeled examples (we consider k = 1 or k = 5) that must be used to generalize to good performance on the corresponding test set. An example of this formulation is given in Figure 1.

[Figure 1 schematic: meta-training and meta-testing rows of datasets, each split into D_train (left of the dashed line) and D_test (right).]
Figure 1: Example of meta-learning setup. The top represents the meta-training set 𝒟_meta-train, where inside each gray box is a separate dataset that consists of the training set D_train (left side of dashed line) and the test set D_test (right side of dashed line). In this illustration, we are considering the 1-shot, 5-class classification task where for each dataset, we have one example from each of 5 classes (each given a label 1-5) in the training set and 2 examples for evaluation in the test set. The meta-test set 𝒟_meta-test is defined in the same way, but with a different set of datasets that cover classes not present in any of the datasets in 𝒟_meta-train (similarly, we additionally have a meta-validation set that is used to determine hyper-parameters).

We now move to the description of our proposed model for meta-learning.

3.1 MODEL DESCRIPTION

Consider a single dataset, or episode, D ∈ 𝒟_meta-train. Suppose we have a learner neural net classifier with parameters θ that we want to train on D_train. The standard optimization algorithms used to train deep neural networks are some variant of gradient descent, which uses updates of the form

θ_t = θ_{t−1} − α_t ∇_{θ_{t−1}} L_t ,

where θ_{t−1} are the parameters of the learner after t − 1 updates, α_t is the learning rate at time t, L_t is the loss optimized by the learner for its t-th update, ∇_{θ_{t−1}} L_t is the gradient of that loss with respect to the parameters θ_{t−1}, and θ_t is the updated parameters of the learner.

Our key observation that we leverage here is that this update resembles the update for the cell state in an LSTM (Hochreiter & Schmidhuber, 1997):

c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t ,

if f_t = 1, c_{t−1} = θ_{t−1}, i_t = α_t, and c̃_t = −∇_{θ_{t−1}} L_t.

Thus, we propose training a meta-learner LSTM to learn an update rule for training a neural network. We set the cell state of the LSTM to be the parameters of the learner, or c_t = θ_t, and the candidate cell state c̃_t = −∇_{θ_{t−1}} L_t, given how valuable information about the gradient is for optimization. We define parametric forms for i_t and f_t so that the meta-learner can determine optimal values through the course of the updates.

Let us start with i_t, which corresponds to the learning rate for the updates. We let

i_t = σ( W_I · [∇_{θ_{t−1}} L_t, L_t, θ_{t−1}, i_{t−1}] + b_I ) ,

meaning that the learning rate is a function of the current parameter value θ_{t−1}, the current gradient ∇_{θ_{t−1}} L_t, the current loss L_t, and the previous learning rate i_{t−1}. With this information, the meta-learner should be able to finely control the learning rate so as to train the learner quickly while avoiding divergence.

As for f_t, it seems possible that the optimal choice isn't the constant 1. Intuitively, what would justify shrinking the parameters of the learner and forgetting part of its previous value would be if the learner is currently in a bad local optimum and needs a large change to escape. This would correspond to a situation where the loss is high but the gradient is close to zero. Thus, one proposal for the forget gate is to have it be a function of that information, as well as the previous value of the forget gate:

f_t = σ( W_F · [∇_{θ_{t−1}} L_t, L_t, θ_{t−1}, f_{t−1}] + b_F ) .

Additionally, notice that we can also learn the initial value of the cell state c_0 for the LSTM, treating it as a parameter of the meta-learner. This corresponds to the initial weights of the classifier (that the meta-learner is training). Learning this initial value lets the meta-learner determine the optimal initial weights of the learner so that training begins from a beneficial starting point that allows optimization to proceed rapidly. Lastly, note that though the meta-learner's update rule matches the cell-state update of the LSTM, the meta-learner also bears similarity to the GRU (Cho et al., 2014) hidden-state update, with the exception that the forget and input gates aren't tied to sum to one.
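A coordinate-wise NumPy sketch of the update equations above; the gate weights act on [∇, L, θ, gate] per coordinate, and the shapes and initial gate biases are illustrative (the bias choices anticipate the initialization discussed later for training stability).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def meta_lstm_step(theta, grad, loss, i_prev, f_prev, W_I, b_I, W_F, b_F):
    """One meta-learner update, shared across all learner coordinates.

    theta, grad: learner parameters and gradient, flattened to shape (n,).
    loss:        scalar learner loss, broadcast to every coordinate."""
    loss_vec = np.full_like(theta, loss)
    x_i = np.stack([grad, loss_vec, theta, i_prev], axis=1)   # (n, 4)
    x_f = np.stack([grad, loss_vec, theta, f_prev], axis=1)
    i_t = sigmoid(x_i @ W_I + b_I)            # learning-rate gate
    f_t = sigmoid(x_f @ W_F + b_F)            # forget gate
    c_tilde = -grad                           # candidate cell state
    theta_next = f_t * theta + i_t * c_tilde  # c_t = f_t . c_{t-1} + i_t . c~_t
    return theta_next, i_t, f_t

n = 6
rng = np.random.default_rng(1)
theta = rng.normal(size=n)
theta, i_t, f_t = meta_lstm_step(theta, rng.normal(size=n), 1.3,
                                 np.full(n, 0.01), np.full(n, 0.99),
                                 rng.normal(size=4) * 0.01, -4.0,  # small i_0
                                 rng.normal(size=4) * 0.01, 4.0)   # f_0 near 1
print(theta.shape, float(i_t.mean()), float(f_t.mean()))
```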
3.2 PARAMETER SHARING & PREPROCESSING

Because we want our meta-learner to produce updates for deep neural networks, which consist of tens of thousands of parameters, to prevent an explosion of meta-learner parameters we need to employ some sort of parameter sharing. Thus, as in Andrychowicz et al. (2016), we share parameters across the coordinates of the learner gradient. This means each coordinate has its own hidden and cell state values, but the LSTM parameters are the same across all coordinates. This allows us to use a compact LSTM model, and additionally has the nice property that the same update rule is used for each coordinate, while still depending on the respective history of each coordinate during optimization. We can easily implement parameter sharing by having the input be a batch of gradient coordinates and loss inputs (∇_{θ_{t−1},i} L_t, L_t) for each dimension i.

Because the different coordinates of the gradients and the losses can be of very different magnitudes, we need to be careful in normalizing the values so that the meta-learner is able to use them properly during training. Thus, we found that the preprocessing method of Andrychowicz et al. (2016) worked well when applied to both the dimensions of the gradients and the losses at each time step:

x → ( log(|x|)/p , sgn(x) )   if |x| ≥ e^{−p} ;
x → ( −1 , e^{p} x )          otherwise.

This preprocessing adjusts the scaling of gradients and losses, while also separating the information about their magnitude and their sign (the latter being mostly useful for gradients). We found that the suggested value of p = 10 in the above formula worked well in our experiments.
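A direct NumPy transcription of this preprocessing rule (the small epsilon guard only avoids log(0) warnings, since np.where evaluates both branches).

```python
import numpy as np

def preprocess(x, p=10.0):
    """Map each scalar to a two-channel representation: a log-magnitude and
    sign pair for well-scaled inputs, and (-1, e^p * x) for tiny inputs."""
    x = np.asarray(x, dtype=np.float64)
    big = np.abs(x) >= np.exp(-p)
    mag = np.where(big, np.log(np.abs(x) + 1e-300) / p, -1.0)
    sgn = np.where(big, np.sign(x), np.exp(p) * x)
    return mag, sgn

print(preprocess(np.array([2.0, -0.5, 1e-6])))
```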
The question now is how do we train the LSTM meta-learner model to be effective at few-shot learning tasks? As observed in Vinyals et al. (2016), in order to perform well at this task, it is key to have training conditions match those of test time. During evaluation of the meta-learner, for each dataset (episode) D = (Dtrain, Dtest) ∈ Dmeta-test, a good meta-learner model will, given a series of learner gradients and losses on the training set Dtrain, suggest a series of updates for the classifier that pushes it towards good performance on the test set Dtest.

Thus, to match test time conditions, when considering each dataset D ∈ Dmeta-train, the training objective we use is the loss Ltest of the produced classifier on D's test set Dtest. While iterating over the examples in D's training set Dtrain, at each time step t the LSTM meta-learner receives (∇_{θ_{t-1}} L_t, L_t) from the learner (the classifier) and proposes the new set of parameters θ_t. The process repeats for T steps, after which the classifier and its final parameters are evaluated on the test set to produce the loss that is then used to train the meta-learner. The training algorithm is described in Algorithm 1 and the corresponding computational graph is shown in Figure 2.

Algorithm 1: Meta-training of the meta-learner. Input: Meta-training set Dmeta-train, Learner M with parameters θ, Meta-Learner R with parameters Θ.

Figure 2: Computational graph for the forward pass of the meta-learner. The dashed line divides examples from the training set Dtrain and test set Dtest. Each (X_i, Y_i) is the i-th batch from the training set, whereas (X, Y) is all the elements from the test set. The dashed arrows indicate that we do not back-propagate through that step when training the meta-learner. We refer to the learner as M, where M(X; θ) is the output of learner M using parameters θ for inputs X. We also use ∇_t as a shorthand for ∇_{θ_{t-1}} L_t."}, {"section_index": "5", "section_name": "3.3.1 GRADIENT INDEPENDENCE ASSUMPTION", "section_text": "Notice that our formulation would imply that the losses L_t and gradients ∇_{θ_{t-1}} L_t of the learner are dependent on the parameters of the meta-learner. Gradients on the meta-learner's parameters should normally take this dependency into account. However, as discussed by Andrychowicz et al. (2016), this complicates the computation of the meta-learner's gradients. Thus, following Andrychowicz et al. (2016), we make the simplifying assumption that these contributions to the gradients aren't important and can be ignored, which allows us to avoid taking second derivatives, a considerably expensive operation. We were still able to train the meta-learner effectively in spite of this simplifying assumption.
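The following runnable toy illustrates both the meta-training loop and the gradient-independence assumption; the meta-learner is deliberately reduced to a learnable initialization plus per-step learning rates (a simplified stand-in for the LSTM), the learner is 1-D linear regression, and all names and sizes are ours:

```python
import torch

torch.manual_seed(0)
# Simplified meta-training sketch: theta0 plays the role of the learned c_0,
# log_lr plays the role of the learned input gates. Not the paper's model.
theta0 = torch.zeros(2, requires_grad=True)       # learned initial weights c_0
log_lr = torch.zeros(5, requires_grad=True)       # one learning rate per step
meta_opt = torch.optim.Adam([theta0, log_lr], lr=1e-2)

def episode():
    w = torch.randn(2)                            # a fresh toy "task"
    X = torch.randn(20, 2)
    return (X[:5], X[:5] @ w), (X[5:], X[5:] @ w) # (D_train, D_test)

for it in range(200):
    (Xtr, ytr), (Xte, yte) = episode()
    theta = theta0
    for t in range(5):                            # T learner updates
        loss = ((Xtr @ theta - ytr) ** 2).mean()
        grad, = torch.autograd.grad(loss, theta)
        # Sec. 3.3.1: treat the gradient as independent of the meta-parameters,
        # so we detach it and never take second derivatives.
        theta = theta - torch.exp(log_lr[t]) * grad.detach()
    test_loss = ((Xte @ theta - yte) ** 2).mean() # L_test drives the meta-update
    meta_opt.zero_grad()
    test_loss.backward()
    meta_opt.step()
print(torch.exp(log_lr).detach())                 # the learned per-step rates
```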
When training LSTMs, it is advised to initialize the LSTM with small random weights and to set the forget gate bias to a large value so that the forget gate is initialized to be close to 1, thus enabling gradient flow (Zaremba, 2015). In addition to the forget gate bias setting, we found that we needed to initialize the input gate bias to be small so that the input gate value (and thus the learning rate) used by the meta-learner LSTM starts out being small. With this combined initialization, the meta-learner starts close to normal gradient descent with a small learning rate, which helps initial stability of training.

Batch Normalization (Ioffe & Szegedy, 2015) is a recently proposed method to stabilize and thus speed up learning of deep neural networks by reducing internal covariate shift within the learner's hidden layers. This reduction is achieved by normalizing each layer's pre-activation, subtracting the mean and dividing by the standard deviation. During training, the mean and standard deviation are estimated using the current batch being trained on, whereas during evaluation a running average of both statistics calculated on the training set is used. We need to be careful with batch normalization for the learner network in the meta-learning setting, because we do not want to collect mean and standard deviation statistics during meta-testing in a way that allows information to leak between the different datasets (episodes) being considered. One easy way to prevent this issue is to not collect statistics at all during the meta-testing phase, and just use our running averages from meta-training. This, however, has a bad impact on performance, because we have changed meta-training and meta-testing conditions, causing the meta-learner to learn a method of optimization that relies on batch statistics which it now does not have at meta-testing time. In order to keep the two phases as similar as possible, we found that a better strategy was to collect statistics for each dataset D ∈ Dmeta-test during meta-testing, but then erase the running statistics when we consider the next dataset. Thus, during meta-training, we use batch statistics for both the training and testing set, whereas during meta-testing, we use batch statistics for the training set (and to compute our running averages) but then use the running averages during testing. This does not cause any information to leak between different datasets, but also allows the meta-learner to be trained on conditions that are matched between training and testing. Lastly, because we are doing very few training steps, we computed the running averages so that higher preference is given to the later values.

While this work falls within the broad literature of transfer learning in general, we focus here on positioning it relative to previous work on meta-learning and few-shot learning."}, {"section_index": "6", "section_name": "4.1 META-LEARNING", "section_text": "Meta-learning has a long history, but has grown to prominence recently as many have advocated for it as a key to achieving human-level intelligence in the future (Lake et al., 2016). The ability to learn at two levels (learning within each task presented, while accumulating knowledge about the similarities and differences between tasks) is seen as being crucial to improving AI. Previous work has used a variety of techniques in the meta-learning setting.

Schmidhuber (1992; 1993) explored using networks that learn how to modify their own weights over a number of computation steps on the input. The updating of the weights is defined in a parametric form that allows the prediction and weight-change process to be differentiable end-to-end. The work of Bengio et al. (1990; 1995) and Bengio (1993) considered learning update rules for neural networks that are biologically plausible. This property is enforced by allowing the parametric form of the update to only have as input local information at each hidden unit to determine the weight change. Different optimization methods, such as genetic programming or simulated annealing, are used to train the learning rule.

In Santoro et al. (2016), a memory-augmented neural network is trained to learn how to store and retrieve memories to use for each classification task. The work of Andrychowicz et al. (2016) uses an LSTM to train a neural network; however, they are interested in learning a general optimization algorithm to train neural networks for large-scale classification, whereas we are interested in the few-shot learning problem. This work also builds upon Hochreiter et al. (2001) and Bosc, both of which used LSTMs to train multi-layer perceptrons to learn on binary classification and time-series prediction tasks. Another related method is the work of Bertinetto et al. (2016), who train a meta-learner to map a training example to the weights of a neural network that is then used to classify future examples from this class; however, unlike our method the classifier network is directly produced rather than being fine-tuned after multiple training steps.
Our work also bears similarity to Maclaurin et al. (2015), who tune the hyperparameters of gradient descent with momentum by backpropagating through the chain of gradient steps to optimize the validation performance.

The best performing methods for few-shot learning have been mainly metric learning methods. Deep siamese networks (Koch, 2015) train a convolutional network to embed examples so that items in the same class are close while items in different classes are far away, according to some distance metric. Matching networks (Vinyals et al., 2016) refine this idea so that training and testing conditions match, by defining a differentiable nearest-neighbor loss involving the cosine similarities of embeddings produced by a convolutional network."}, {"section_index": "7", "section_name": "5 EVALUATION", "section_text": "In this section, we describe the results of experiments, examining the properties of our model and comparing our method's performance against different approaches.* Following Vinyals et al. (2016), we consider the k-shot, N-class classification setting where a meta-learner trains on many related but small training sets of k examples for each of N classes. We first split the list of all classes in the data into disjoint sets and assign them to each meta-set of meta-training, meta-validation, and meta-testing. To generate each instance of a k-shot, N-class task dataset D = (Dtrain, Dtest), we do the following: we first sample N classes from the list of classes corresponding to the meta-set we consider. We then sample k examples from each of those classes. These k examples together compose the training set Dtrain. Then, an additional fixed amount of the rest of the examples is sampled to yield a test set Dtest. We generally have 15 examples per class in the test sets. When training the meta-learner, we iterate by sampling these datasets (episodes) repeatedly. For meta-validation and meta-testing, however, we produce a fixed number of these datasets to evaluate each method. We produce enough datasets to ensure that the confidence interval of the mean accuracy is small.

*Code can be found at https://github.com/twitter/meta-learning-lstm

For the learner, we use a simple CNN containing 4 convolutional layers, each of which is a 3x3 convolution with 32 filters, followed by batch normalization, a ReLU non-linearity, and lastly a 2x2 max-pooling. The network then has a final linear layer followed by a softmax for the number of classes being considered. The loss function is the average negative log-probability assigned by the learner to the correct class. For the meta-learner, we use a 2-layer LSTM, where the first layer is a normal LSTM and the second layer is our modified LSTM meta-learner. The gradients and losses are preprocessed and fed into the first-layer LSTM, and the regular gradient coordinates are also used by the second-layer LSTM to implement the state update rule shown in Equation (2). At each time step, the learner's loss and gradient is computed on a batch consisting of the entire training set Dtrain, because we consider training sets with only a total of 5 or 25 examples. We train our LSTM with ADAM using a learning rate of 0.001 and with gradient clipping using a value of 0.25."}, {"section_index": "8", "section_name": "5.1 EXPERIMENT RESULTS", "section_text": "The Mini-ImageNet dataset was proposed by Vinyals et al. (2016) as a benchmark offering the challenges of the complexity of ImageNet images, without requiring the resources and infrastructure necessary to run on the full ImageNet dataset. Because the exact splits used in Vinyals et al. (2016) were not released, we create our own version of the Mini-ImageNet dataset by selecting a random 100 classes from ImageNet and picking 600 examples of each class. We use 64, 16, and 20 classes for training, validation and testing, respectively. We consider 1-shot and 5-shot classification for 5 classes. We use 15 examples per class for evaluation in each test set. We compare against two baselines and a recent metric-learning technique, Matching Networks (Vinyals et al., 2016), which has achieved state-of-the-art results in few-shot learning. The results are shown in Table 1.
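A minimal sketch of the episode-generation procedure described above (sample N classes, then k training and a fixed number of test examples per class); class_to_items and the toy data are assumptions for illustration:

```python
import random

def sample_episode(class_to_items, n_way=5, k_shot=1, n_test_per_class=15):
    """Sample one k-shot, N-class dataset D = (D_train, D_test): choose N
    classes, then k train and a fixed number of test examples per class."""
    classes = random.sample(sorted(class_to_items), n_way)
    d_train, d_test = [], []
    for label, cls in enumerate(classes):        # relabel sampled classes 0..N-1
        items = random.sample(class_to_items[cls], k_shot + n_test_per_class)
        d_train += [(x, label) for x in items[:k_shot]]
        d_test += [(x, label) for x in items[k_shot:]]
    return d_train, d_test

# Usage with toy data: 10 classes of 20 dummy items each
d_train, d_test = sample_episode({c: list(range(20)) for c in range(10)})
print(len(d_train), len(d_test))   # 5 and 75 for the defaults above
```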
The first baseline we use is a nearest-neighbor baseline (Baseline-nearest-neighbor), where we first train a network to classify between all the classes jointly in the original meta-training set. At meta-test time, for each dataset D, we embed all the items in the training set using our trained network and then use nearest-neighbor matching among the embedded training examples to classify each test example. The second baseline we use (Baseline-finetune) represents a coarser version of our meta-learner model. As in the first baseline, we start by training a network to classify jointly between all classes in the meta-training set. We then use the meta-validation set to search over SGD hyperparameters, where each training set is used to fine-tune the pre-trained network before evaluating on the test set. We use a fixed number of updates for fine-tuning and search over the learning rate and learning rate decay used during the course of these updates.

Model                     | 1-shot (5-class) | 5-shot (5-class)
Baseline-finetune         | 28.86 ± 0.54%    | 49.79 ± 0.79%
Baseline-nearest-neighbor | 41.08 ± 0.70%    | 51.04 ± 0.65%
Matching Network          | 43.40 ± 0.78%    | 51.09 ± 0.71%
Matching Network FCE      | 43.56 ± 0.84%    | 55.31 ± 0.73%
Meta-Learner LSTM (OURS)  | 43.44 ± 0.77%    | 60.60 ± 0.71%

Table 1: Average classification accuracies on Mini-ImageNet with 95% confidence intervals. Marked in bold are the best results for each scenario, as well as other results with an overlapping confidence interval.

For Matching Networks, we implemented our own version of both the basic and the fully-conditional embedding (FCE) versions. In the basic version, a convolutional network is trained to learn independent embeddings for examples in the training and test set. In the FCE version, a bidirectional LSTM is used to learn an embedding for the training set such that each training example's embedding is also a function of all the other training examples. Additionally, an attention-LSTM is used so that a test example embedding is also a function of all the embeddings of the training set. We do not consider fine-tuning the network using the train set during meta-testing to improve performance as mentioned in Vinyals et al. (2016), but do note that our meta-learner could also be fine-tuned using this data. Note that to remain consistent with Vinyals et al. (2016), our baseline and matching nets convolutional networks have 4 layers, each with 64 filters. We also added dropout to each convolutional block in matching nets to prevent overfitting.

For our meta-learner, we train different models for the 1-shot and 5-shot tasks, that make 12 and 5 updates, respectively. We noticed that better performance for each task was attained if the meta-learner is explicitly trained to do the set number of updates during meta-training that will be used during meta-testing.

We attain results that are much better than the baselines discussed and competitive with Matching Networks. For 5-shot, we are able to do much better than Matching Networks, whereas for 1-shot, the confidence interval for our performance intersects the interval for Matching Networks. Again, we note that the numbers do not match the ones provided by Vinyals et al. (2016) simply because we created our own version of the dataset and implemented our own versions of their model. It is interesting to note that the fine-tuned baseline is worse than the nearest-neighbor baseline. Because we are not regularizing the classifier, with very few updates the fine-tuning model overfits, especially in the 1-shot case. This propensity to overfit speaks to the benefit of meta-training the initialization of the classifier end-to-end as is done in the meta-learning LSTM.

We also visualize the optimization strategy learned by the meta-learner, in Figure 3. We can look at the i_t and f_t gate values in Equation (2) at each update step, to try to get an understanding of how the meta-learner updates the learner during training. We visualize the gate values while training on different datasets Dtrain, to observe whether there are variations between training sets.
We consider both 1-shot and 5-shot classification settings, where the meta-learner is making 10 and 5 updates, respectively. For the forget gate values for both tasks, the meta-learner seems to adopt a simple weight decay strategy that seems consistent across different layers. The input gate values are harder to interpret to glean the meta-learner's strategy. However, there seems to be a lot of variability between different datasets, indicating that the meta-learner isn't simply learning a fixed optimization strategy. Additionally, there seem to be differences between the two tasks, suggesting that the meta-learner has adopted different methods to deal with the different conditions of each setting.

(a) Forget gate values for 1-shot meta-learner. (b) Input gate values for 1-shot meta-learner. (c) Forget gate values for 5-shot meta-learner. (d) Input gate values for 5-shot meta-learner.

Figure 3: Visualization of the input and forget values output by the meta-learner during the course of its updates. Layers 1-4 represent the values for a randomly selected parameter from each of the 4 convolutional layers, and layer 5 represents the values for a random parameter from the fully-connected layer. The different curves represent training steps on different datasets."}, {"section_index": "9", "section_name": "6 CONCLUSION", "section_text": "We described an LSTM-based model for meta-learning, which is inspired by the parameter updates suggested by gradient descent optimization algorithms. Our LSTM meta-learner uses its state to represent the learning updates of the parameters of a classifier. It is trained to discover both a good initialization for the learner's parameters, as well as a successful mechanism for updating the learner's parameters given a small training set for some new classification task. Our experiments demonstrate that our approach outperforms natural baselines and is competitive with the state-of-the-art in metric learning for few-shot learning.

In this work, we focused our study on the few-shot and few-classes setting. However, it would be more valuable to train meta-learners that can perform well across a full spectrum of settings, i.e. for few or lots of training examples and for few or lots of possible classes. Our future work will thus consider moving towards this more challenging scenario."}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Jake Snell, Kevin Swersky, and Oriol Vinyals for helpful discussions of this work."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. CoRR, abs/1606.04474, 2016. URL http://arxiv.org/abs/1606.04474.
Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Universite de Montreal, Departement d'informatique et de recherche operationnelle, 1990.

Yoshua Bengio et al. Deep learning of representations for unsupervised and transfer learning. ICML Unsupervised and Transfer Learning, 27:17-36, 2012.

Luca Bertinetto, Joao F. Henriques, Jack Valmadre, Philip H. S. Torr, and Andrea Vedaldi. Learning feed-forward one-shot learners. CoRR, abs/1606.05233, 2016. URL http://arxiv.org/abs/1606.05233.

Tom Bosc. Learning to learn neural networks.

Rich Caruana. Learning many related tasks at the same time with backpropagation. Advances in neural information processing systems, pp. 657-664, 1995.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014. URL http://arxiv.org/abs/1406.1078.

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531, 2013. URL http://arxiv.org/abs/1310.1531.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.

Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87-94. Springer, 2001.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. URL http://arxiv.org/abs/1502.03167.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.

Gregory Koch. Siamese neural networks for one-shot image recognition. PhD thesis, University of Toronto, 2015.

Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.

Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter optimization through reversible learning. In ICML, 2015.

Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In ICML, 2016.

Jurgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131-139, 1992.

Jurgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning, 28(1):105-130, 1997.

Sebastian Thrun. Lifelong learning algorithms. In Learning to learn, pp. 181-209.
Springer, 1998.

Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In NIPS, 2016.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? CoRR, abs/1411.1792, 2014. URL http://arxiv.org/abs/1411.1792.

Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.
rkEFLFqee | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Understanding videos has been one of the most important tasks in the field of computer visior. Compared to still images, the temporal component of videos provides much richer descriptions o. the visual world, such as interaction between objects, human activities, and so on. Amongst th. various tasks applicable on videos, the task of anticipating the future has recently received increase attention in the research community. Most prior works in this direction focus on predicting high-leve. semantics in a video such as action (Vondrick et al.]2015) Ryoo]2011) Lan et al.]2014), event (Yue) and Torralba]2010fHoai and Torre[[2013) and motion (Pintea et al.]2014]Walker et al.[2014Pickup et al.[2014f Walker et al.[[2016). Forecasting semantics provides information about what will happe in a video, and is essential to automate decision making. However, the predicted semantics ar. often specific to a particular task and provide only a partial description of the future. Also, training. such models often requires heavily labeled training data which leads to tremendous annotation cost. especially with videos.\nIn this work, we aim to address the problem of prediction of future frames in natural video sequences Pixel-level predictions provide dense and direct description of the visual world, and existing video recognition models can be adopted on top of the predicted frames to infer various semantics of the future. Spatio-temporal correlations in videos provide a self-supervision for frame prediction, which. enables purely unsupervised training of a model by observing raw video frames. Unfortunately. estimating frames is an extremely challenging task; not only because of the inherent uncertainty of the future, but also various factors of variation in videos leading to complicated dynamics in raw pixel. values. There have been a number of recent attempts on frame prediction (Srivastava et al.|2015. Mathieu et al.[2015fOh et al.2 2015Goroshin et al.|2015} Lotter et al. 2015] Ranzato et al.2014)\n*This work was done while SH and XL were visiting the University of Michigan\nXunyu Lin4,*\nHonglak Lee1,5"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We propose a deep neural network for the prediction of future frames in natural video sequences. To effectively handle complex evolution of pixels in videos, we propose to decompose the motion and content, two key components generating dynamics in videos. Our model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which inde pendently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame content by the identified motion features, which simplifies the task of prediction Our model is end-to-end trainable over multiple time steps, and naturally learns to decompose motion and content without separate training. We evaluate the proposed network architecture on human activity videos using KTH, Weizmann action, and UCF-101 datasets. We show state-of-the-art performance in comparison to recent approaches. 
To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatio-temporal dynamics for pixel-level future prediction in natural videos.

which use a single encoder that needs to reason about all the different variations occurring in videos in order to make predictions of the future, or require extra information like foreground-background segmentation masks and static background (Vondrick et al., 2016).

We propose a Motion-Content Network (MCnet) for robust future frame prediction. Our intuition is to split the inputs for video prediction into two easily identifiable groups, motion and content, and independently capture each information stream with separate encoder pathways. In this architecture, the motion pathway encodes the local dynamics of spatial regions, while the content pathway encodes the spatial layout of the salient parts of an image. The prediction of the future frame is then achieved by transforming the content of the last observed frame given the identified dynamics up to the last observation. Somewhat surprisingly, we show that such a network is end-to-end trainable without individual pathway supervision. Specifically, we show that an asymmetric architecture for the two pathways enables such decompositions without explicit supervision. The contributions of this paper are summarized below:

- We propose MCnet for the task of frame prediction, which separates the information streams (motion and content) into different encoder pathways.
- The proposed network is end-to-end trainable and naturally learns to decompose motion and content without separate training, and reduces the task of frame prediction to transforming the last observed frame into the next by the observed motion.
- We evaluate the proposed model on challenging real-world video datasets, and show that it outperforms previous approaches on frame prediction.

The rest of the paper is organized as follows. We briefly review related work in Section 2, and introduce an overview of the proposed algorithm in Section 3. The detailed configuration of the proposed network is described in Section 4. Section 5 describes the training and inference procedure. Section 6 illustrates implementation details and experimental results on challenging benchmarks.

The problem of visual future prediction has received growing interest in the computer vision community. It has led to various tasks depending on the objective of future prediction, such as human activity (Vondrick et al., 2015; Ryoo, 2011; Lan et al., 2014), event (Yuen and Torralba, 2010; Hoai and Torre, 2013) and geometric path (Walker et al., 2014). Although previous work achieved reasonable success in specific tasks, they are often limited to estimating predefined semantics, and require fully-labeled training data. To alleviate this issue, approaches predicting representations of the future beyond semantic labels have been proposed. Walker et al. (2014) proposed a data-driven approach to predict the motion of a moving object, and coarse hallucination of the predicted motion. Vondrick et al. (2015) proposed a deep regression network to predict feature representations of the future frames. These approaches are supervised and provide coarse predictions of how the future will look. Our work also focuses on unsupervised learning for prediction of the future, but on a more direct visual prediction task: frame prediction.

Compared to predicting semantics, pixel-level prediction has been less investigated due to the difficulties in modeling the evolution of raw pixels over time. Fortunately, recent advances in deep learning provide a powerful tool for sequence modeling, and enable the creation of novel architectures for modeling complex sequential data. Ranzato et al. (2014) applied a recurrent neural network developed for language modeling to frame prediction by posing the task as classification of each image region to one of quantized patch dictionaries. Srivastava et al. (2015) applied a sequence-to-sequence model to video prediction, and showed that Long Short-Term Memory (LSTM) is able to capture pixel dynamics. Oh et al. (2015) proposed an action-conditional encoder-decoder network to predict future frames in Atari games. In addition to the different choices of architecture, some
other works addressed the importance of selecting the right objective function: Lotter et al. (2015) used an adversarial loss with combined CNN and LSTM architectures, and Mathieu et al. (2015) employed a similar adversarial loss with additional regularization using a multi-scale encoder-decoder network. Finn et al. (2016) constructed a network that predicts transformations on the input pixels for next frame prediction. Patraucean et al. (2015) proposed a network that, by explicitly predicting optical flow features, is able to predict the next frame in a video. Vondrick et al. (2016) proposed a generative adversarial network for video which, by generating a background-foreground mask, is able to generate realistic-looking video sequences. However, none of the previously mentioned approaches exploit spatial and temporal information separately in an unsupervised fashion. In terms of the way data is observed, the closest work to ours is Xue et al. (2016). The differences are: (1) our model is deterministic and theirs is probabilistic, (2) our motion encoder is based on convolutional LSTM (Shi et al., 2015), which is a more natural module to model long-term dynamics, (3) our content encoder observes a single-scale input and theirs observes many scales, and (4) we directly generate image pixel values, which is a more complicated task. We aim to exploit the existing spatio-temporal correlations in videos by decomposing the motion and content in our network architecture.

To the best of our knowledge, the idea of separating motion and content has not been investigated in the task of unsupervised deterministic frame prediction. The proposed architecture shares similarities with the two-stream CNN (Simonyan and Zisserman, 2014), which is designed for action recognition to jointly exploit the information from frames and their temporal dynamics. However, in contrast to their network, we aim to learn features for temporal dynamics directly from the raw pixels, and we use the identified features from the motion in combination with spatial features to make pixel-level predictions of the future."}, {"section_index": "2", "section_name": "3 ALGORITHM OVERVIEW", "section_text": "In this section, we formally define the task of frame prediction and the role of each component in the proposed architecture. Let x_t ∈ R^{w×h×c} denote the t-th frame in an input video x, where w, h, and c denote width, height, and number of channels, respectively.
The objective of frame prediction is to generate the future frame x̂_{t+1} given the input frames x_{1:t}.

At the t-th time step, our network observes a history of previous consecutive frames up to frame t and generates the prediction of the next frame x̂_{t+1} as follows:

- Motion Encoder recurrently takes an image difference input between frame x_t and x_{t-1} starting from t = 2, and produces the hidden representation d_t encoding the temporal dynamics of the scene components (Section 4.1).
- Content Encoder takes the last observed frame x_t as an input, and outputs the hidden representation s_t that encodes the spatial layout of the scene (Section 4.2).
- Multi-Scale Motion-Content Residual takes the computed features, from both the motion and content encoders, at every scale right before pooling and computes residuals r_t (He et al., 2015) to aid the information loss caused by pooling in the encoding phase (Section 4.3).
- Combination Layers and Decoder takes the outputs from both encoder pathways and residual connections, d_t, s_t, and r_t, and combines them to produce a pixel-level prediction of the next frame x̂_{t+1} (Section 4.4).

The overall architecture of the proposed algorithm is described in Figure 1. The prediction of multiple frames, x̂_{t+1:t+T}, can be achieved by recursively performing the above procedures over T time steps (Section 5). Each component in the proposed architecture is described in the following section.

Figure 1: Overall architecture of the proposed network. (a) illustrates MCnet without the Motion-Content Residual skip connections, and (b) illustrates MCnet with such connections. Our network observes a history of image differences through the motion encoder and the last observed image through the content encoder. Subsequently, our network proceeds to compute motion-content features and communicates them to the decoder for the prediction of the next frame."}, {"section_index": "3", "section_name": "4 ARCHITECTURE", "section_text": "This section describes the detailed configuration of the proposed architecture, including the two encoder pathways, multi-scale residual connections, combination layers, and decoder."}, {"section_index": "4", "section_name": "4.1 MOTION ENCODER", "section_text": "The motion encoder captures the temporal dynamics of the scene's components by recurrently observing subsequent difference images computed from x_{t-1} and x_t, and outputs motion features by

[d_t, c_t] = f^dyn(x_t - x_{t-1}, d_{t-1}, c_{t-1}),

where x_t - x_{t-1} denotes the element-wise difference image, d_t is the hidden representation encoding the observed dynamics, and c_t is a memory cell. The motion encoder captures the local dynamics of spatial regions rather than complicated global motion. For this, we use an encoder CNN with a Convolutional LSTM (Shi et al., 2015) layer on top."}, {"section_index": "5", "section_name": "4.2 CONTENT ENCODER", "section_text": "The content encoder extracts important spatial features from a single frame, such as the spatial layout of the scene and salient objects in a video.
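For concreteness, here is a minimal convolutional LSTM cell together with the motion-encoder recurrence over image differences; in the actual model the ConvLSTM sits on top of an encoder CNN, whereas this sketch applies it to raw frames, and the sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell (Shi et al., 2015); hyper-parameters
    here are illustrative, not the paper's exact configuration."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Motion-encoder recurrence: [d_t, c_t] = f_dyn(x_t - x_{t-1}, d_{t-1}, c_{t-1})
cell = ConvLSTMCell(in_ch=3, hid_ch=32)
frames = torch.randn(11, 1, 3, 64, 64)           # toy video: T x B x C x H x W
h = torch.zeros(1, 32, 64, 64)
state = (h, torch.zeros_like(h))
for t in range(1, frames.size(0)):
    d_t, state = cell(frames[t] - frames[t - 1], state)   # d_t: motion features
```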
Specifically, it takes the last observed frame x_t as an input and produces content features by

s_t = f^cont(x_t).

It is important to note that our model employs an asymmetric architecture for the motion and content encoders. The content encoder takes the last observed frame, which keeps the most critical clue to reconstruct the spatial layout of the near future, but has no information about dynamics. On the other hand, the motion encoder takes a history of previous image differences, which are less informative about the future spatial layout compared to the last observed frame, yet contain important spatio-temporal variations occurring over time. This asymmetric architecture encourages the encoders to exploit each of the two pieces of critical information to predict the future content and motion individually, and enables the model to learn motion and content decomposition naturally without any supervision.

To prevent information loss after the pooling operations in our motion and content encoders, we use residual connections (He et al., 2015). The residual connections in our network communicate motion-content features at every scale into the decoder layers after unpooling operations. The residual feature at layer l is computed by

r_t^l = f^res([s_t^l, d_t^l])."}, {"section_index": "6", "section_name": "4.4 COMBINATION LAYERS AND DECODER", "section_text": "The outputs from the two encoder pathways, d_t and s_t, encode a high-level representation of motion and content, respectively. Given these representations, the objective of the decoder is to generate a pixel-level prediction of the next frame x̂_{t+1} ∈ R^{w×h×c}. To this end, it first combines the motion and content back into a unified representation by

f_t = g^comb([d_t, s_t]),

where [d_t, s_t] ∈ R^{w'×h'×2c'} denotes the concatenation of the higher-level motion and content features in the depth dimension, and f_t ∈ R^{w'×h'×c'} denotes the combined high-level representation of motion and content. g^comb is implemented by a CNN with bottleneck layers (Hinton and Salakhutdinov, 2006); it first projects both d_t and s_t into a lower-dimensional embedding space, and then puts it back to the original size to construct the combined feature f_t. Intuitively, f_t can be viewed as the content feature of the next time step, s_{t+1}, which is generated by transforming s_t using the observed dynamics encoded in d_t. Then our decoder places f_t back into the original pixel space by

x̂_{t+1} = g^dec(f_t, r_t),

where r_t is a list containing the residual connections from every layer of the motion and content encoders before pooling, sent to every layer of the decoder after unpooling. We employ the deconvolution network (Zeiler et al., 2011) for our decoder network g^dec, which is composed of multiple successive operations of deconvolution, rectification and unpooling, with the addition of the motion-content residual connections after each unpooling operation. The output layer is passed through a tanh(·) activation function. Unpooling with fixed switches is used to upsample the intermediate activation maps.
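The sketch below shows the shape of the combination layers (a bottleneck CNN) and a one-scale stand-in for the decoder; we fold a single residual connection in before one transposed convolution, whereas the actual decoder uses unpooling with fixed switches and residuals at every scale, and the channel sizes are assumptions:

```python
import torch
import torch.nn as nn

C = 256
g_comb = nn.Sequential(                      # bottleneck: project down, then back up
    nn.Conv2d(2 * C, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, C, 3, padding=1), nn.ReLU(),
)
g_dec = nn.Sequential(                       # one upsampling stage + tanh output
    nn.ConvTranspose2d(C, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)

d_t = torch.randn(1, C, 16, 16)              # motion features
s_t = torch.randn(1, C, 16, 16)              # content features
r_t = torch.randn(1, C, 16, 16)              # residual from the encoders
f_t = g_comb(torch.cat([d_t, s_t], dim=1))   # f_t = g_comb([d_t, s_t])
x_next = g_dec(f_t + r_t)                    # predicted frame in [-1, 1]
```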
Section 4 describes the procedures for single frame prediction, while this section presents the extension of our algorithm for the prediction of multiple time steps.

Given an input video, our network observes the first n frames as image differences between frame x_t and x_{t-1}, starting from t = 2 up to t = n, to encode the initial temporal dynamics through the motion encoder. The last frame x_n is given to the content encoder to be transformed into the first prediction x̂_{n+1} by the identified motion features.

For each time step t ∈ [n + 1, n + T], where T is the desired number of prediction steps, our network takes the difference image between the prediction x̂_t and the previous image x_{t-1}, together with the prediction x̂_t itself, to predict the next frame x̂_{t+1}, and so forth.

To train our network, we minimize

L = α · L_img + β · L_GAN,

where the image loss is L_img = L_p(x̂_{t+k}, x_{t+k}) + L_gdl(x̂_{t+k}, x_{t+k}), with

L_p(y, z) = Σ_{k=1}^{T} ||y_k - z_k||_p^p,

L_gdl(y, z) = Σ_{k=1}^{T} Σ_{i,j}^{h,w} ( | |y_{i,j}^k - y_{i-1,j}^k| - |z_{i,j}^k - z_{i-1,j}^k| | + | |y_{i,j-1}^k - y_{i,j}^k| - |z_{i,j-1}^k - z_{i,j}^k| | ).

Here, x_{t+k} and x̂_{t+k} are the target and predicted frames, respectively, and p and λ are hyper-parameters for L_p and L_gdl, respectively. Intuitively, L_p guides our network to match the average pixel values directly, while L_gdl guides our network to match the gradients of such pixel values. Overall, L_img guides our network to learn parameters towards generating the correct average sequence given the input. Training to generate average sequences, however, results in somewhat blurry generations, which is the reason we use an additional sub-loss. L_GAN is the generator loss in adversarial training, allowing our model to predict realistic-looking frames; it is defined by

L_GAN = -log D([x_{1:t}, G(x_{1:t})]),

where x_{1:t} is the concatenation of the input images, x_{t+1:t+T} is the concatenation of the ground-truth future images, G(x_{1:t}) = x̂_{t+1:t+T} is the concatenation of all predicted images along the depth dimension, and D(·) is the discriminator in adversarial training. The discriminative loss in adversarial training is defined by

L_disc = -log D([x_{1:t}, x_{t+1:t+T}]) - log(1 - D([x_{1:t}, G(x_{1:t})])).

L_GAN, in addition to L_img, allows our network to not only generate the target sequence, but also simultaneously enforce realism in the images through visual sharpness that fools the human eye. Note that our model uses its predictions as input for the next time step during training, which enables the gradients to flow through time and makes the network robust to error propagation during prediction. For a more detailed description of adversarial training, please refer to Appendix D.
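A hedged sketch of the image losses (with the GDL exponent set to 1, and means in place of sums); the adversarial terms would require a discriminator network and are omitted:

```python
import torch

def lp_loss(y, z, p=2):
    """L_p term: penalizes per-pixel differences between prediction and target."""
    return (y - z).abs().pow(p).mean()

def gdl_loss(y, z):
    """Gradient difference loss: penalizes mismatched image gradients so that
    predictions keep sharp edges (exponent 1, mean instead of sum)."""
    dy_i = (y[..., 1:, :] - y[..., :-1, :]).abs()   # vertical image gradients
    dz_i = (z[..., 1:, :] - z[..., :-1, :]).abs()
    dy_j = (y[..., :, 1:] - y[..., :, :-1]).abs()   # horizontal image gradients
    dz_j = (z[..., :, 1:] - z[..., :, :-1]).abs()
    return (dy_i - dz_i).abs().mean() + (dy_j - dz_j).abs().mean()

# L_img = L_p + lambda * L_gdl over the T predicted frames (lambda = 1, p = 2
# in the experiments); toy tensors shaped B x T x C x H x W.
pred, target = torch.rand(2, 8, 3, 64, 64), torch.rand(2, 8, 3, 64, 64)
loss = lp_loss(pred, target) + 1.0 * gdl_loss(pred, target)
```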
"}, {"section_index": "7", "section_name": "6 EXPERIMENTS", "section_text": "In this section, we present experiments using our network for video generation. We first evaluate our network, MCnet, on the KTH (Schuldt et al., 2004) and Weizmann action (Gorelick et al., 2007) datasets, and compare against a baseline convolutional LSTM (ConvLSTM) (Shi et al., 2015). We then proceed to evaluate on the more challenging UCF-101 (Soomro et al., 2012) dataset, on which we compare against the same ConvLSTM baseline and also the current state-of-the-art method by Mathieu et al. (2015). For all our experiments, we use α = 1, λ = 1, and p = 2 in the loss functions.

Architectures. The content encoder of MCnet is built with the same architecture as VGG16 (Simonyan and Zisserman, 2015) up to the third pooling layer. The motion encoder of MCnet is also similar to VGG16 up to the third pooling layer, except that we replace its consecutive 3x3 convolutions with single 5x5, 5x5, and 7x7 convolutions in each layer. The combination layers are composed of 3 consecutive 3x3 convolutions (256, 128, and 256 channels in each layer). The multi-scale residuals are composed of 2 consecutive 3x3 convolutions. The decoder is the mirrored architecture of the content encoder, where we perform unpooling followed by deconvolution. For the baseline ConvLSTM, we use the same architecture as the motion encoder, residual connections, and decoder, except we increase the number of channels in the encoder in order to have an overall comparable number of parameters with MCnet."}, {"section_index": "8", "section_name": "6.1 KTH AND WEIZMANN ACTION DATASETS", "section_text": "Experimental settings. The KTH human action dataset (Schuldt et al., 2004) contains 6 categories of periodic motions on a simple background: running, jogging, walking, boxing, hand-clapping and hand-waving. We use persons 1-16 for training and 17-25 for testing, and also resize frames to 128x128 pixels. We train our network and baseline by observing 10 frames and predicting 10 frames into the future on the KTH dataset. We set β = 0.02 for training. We also select the walking, running, one-hand waving, and two-hands waving sequences from the Weizmann action dataset (Gorelick et al., 2007) for testing the networks' generalizability.

For all the experiments, we test the networks on predicting 20 time steps into the future. As for evaluation, we use the same SSIM and PSNR metrics as in Mathieu et al. (2015). The evaluation on KTH was performed on sub-clips within each video in the test set. We sample sub-clips every 3 frames for running and jogging, and sample sub-clips every 20 frames (skipping the frames we have already predicted) for walking, boxing, hand-clapping, and hand-waving. Sub-clips for running, jogging, and walking were manually trimmed to ensure humans are always present in the frames. The evaluation on Weizmann was performed on all sub-clips in the selected sequences.

In addition to the results in this section, we also provide more qualitative comparisons in the supplementary material and in the videos on the project website: https://sites.google.

Figure 2: Quantitative comparison between MCnet and the ConvLSTM baseline with and without multi-scale residual connections (indicated by "+ RES"). Given 10 input frames, the models predict 20 frames recursively, one by one. Left column: evaluation on the KTH dataset (Schuldt et al., 2004). Right column: evaluation on the Weizmann (Gorelick et al., 2007) dataset.

Results. Figure 2 summarizes the quantitative comparisons among our MCnet, the ConvLSTM baseline, and their residual variations. On the KTH test set, our network outperforms the ConvLSTM baseline by a small margin. However, when we test the residual versions of MCnet and ConvLSTM on the Weizmann dataset (Gorelick et al., 2007) with similar motions, we can see that our network generalizes well to the unseen contents by showing clear improvements, especially in long-term prediction. One reason for this result is that the test and training partitions of the KTH dataset have simple and similar image contents, so that ConvLSTM can memorize the average background and human appearance to make reasonable predictions. However, when tested on unseen data, ConvLSTM has to internally take care of both scene dynamics and image contents in a mingled representation, which gives it a hard time for generalization. In contrast, the reason our network outperforms the ConvLSTM baseline
on unseen data is that our network focuses on identifying general motion features and applying them to a learned content representation.

Figure 3 presents qualitative results of multi-step prediction by our network and ConvLSTM. As expected, prediction results by our full architecture preserve human shapes more accurately than the baseline. It is worth noticing that our network produces very sharp predictions over long-term time steps; it shows that MCnet is able to capture periodic motion cycles, which reduces the uncertainty of future prediction significantly.

Figure 3: Qualitative comparison between our MCnet model and ConvLSTM. We display predictions starting from the 12th frame, in every 3 timesteps. The first 3 rows correspond to the KTH dataset for the action of jogging and the last 3 rows correspond to the Weizmann dataset for the action of walking.

Experimental settings. This section presents results on the challenging real-world videos in the UCF-101 (Soomro et al., 2012) dataset. Having been collected from YouTube, the dataset contains 101 realistic human actions taken in the wild and exhibits various challenges, such as background clutter, occlusion, and complicated motion. We employed the same network architecture as in the KTH dataset, but resized frames to 240x320 pixels, and trained the network to observe 4 frames and predict a single frame. We set β = 0.001 for training. We also trained our convolutional LSTM baseline in the same way. Following the same protocol as Mathieu et al. (2015) for data pre-processing and evaluation metrics on full images, all networks were trained on the Sports-1M (Karpathy et al., 2014) dataset and tested on UCF-101 unless otherwise stated.¹

Results. Figure 4 shows the quantitative comparisons between our network trained for single-step prediction and Mathieu et al. (2015). We can clearly see the advantage of our network over the baseline. The separation of motion and content in two encoder pathways allows our network to identify key motion and content features, which are then fed into the decoder to yield predictions of higher quality compared to the baseline.² In other words, our network only moves what shows motion in the past, and leaves the rest untouched. We also trained a residual version of MCnet on UCF-101, indicated by "MCnet + RES UCF101", to compare how well our model generalizes when trained and tested on the same or different dataset(s). To our surprise, when tested on UCF-101, the MCnet trained on Sports-1M (MCnet + RES) roughly matches the performance of the MCnet trained on UCF-101 (MCnet + RES UCF101), which suggests that our model learns effective representations which can generalize to new datasets. Figure 5 presents qualitative comparisons between frames generated by our network and Mathieu et al. (2015). Since the ConvLSTM and Mathieu et al. (2015) lack explicit motion and content modules, they lose the sense of the dynamics in the video and therefore the contents become distorted quickly. More qualitative comparisons are shown in the supplementary material and the project website.
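For reference, the PSNR metric used in Figures 2 and 4 can be computed per time step as below (SSIM can be obtained analogously, e.g. with skimage.metrics.structural_similarity); the toy arrays are assumptions:

```python
import numpy as np

def psnr(y, z, max_val=1.0):
    """Peak signal-to-noise ratio between a predicted and a target frame,
    assuming pixel values scaled to [0, max_val]."""
    mse = np.mean((np.asarray(y, np.float64) - np.asarray(z, np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# Per-time-step curves: average PSNR over test clips at each prediction step.
preds = [np.random.rand(8, 64, 64, 3) for _ in range(4)]    # 4 toy clips, T = 8
targets = [np.random.rand(8, 64, 64, 3) for _ in range(4)]
curve = [np.mean([psnr(p[t], g[t]) for p, g in zip(preds, targets)])
         for t in range(8)]
print(curve)
```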
¹We were not able to get the model fine-tuned on UCF-101 from the authors, so it is not included in Figure 4.

Figure 4: Quantitative comparison between our model, convolutional LSTM (Shi et al., 2015), and Mathieu et al. (2015). Given 4 input frames, the models predict 8 frames recursively, one by one.

Figure 5: Qualitative comparisons among MCnet, ConvLSTM, and Mathieu et al. (2015). We display predicted frames (in every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground-truth. More clear motion prediction can be seen on the project website."}, {"section_index": "9", "section_name": "7 CONCLUSION", "section_text": "We proposed a motion-content network for pixel-level prediction of future frames in natural video sequences. The proposed model employs two separate encoding pathways, and learns to decompose motion and content without explicit constraints or separate training. Experimental results suggest that separate modeling of motion and content improves the quality of pixel-level future prediction, and our model overall achieves state-of-the-art performance in predicting future frames in challenging real-world video datasets."}, {"section_index": "10", "section_name": "ACKNOWLEDGEMENTS", "section_text": "This work was supported in part by ONR N00014-13-1-0762, NSF CAREER IIS-1453651, gifts from the Bosch Research and Technology Center, and a Sloan Research Fellowship. We also thank NVIDIA for donating K40c and TITAN X GPUs. We thank Ye Liu, Junhyuk Oh, Xinchen Yan, Lajanugen Logeswaran, Yuting Zhang, Sungryull Sohn, Kibok Lee, Rui Zhang, and other collaborators for helpful discussions. R. Villegas was partly supported by the Rackham Merit Fellowship."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "C. Finn, I. J. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.

L. Gorelick, M. Blank, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. Transactions on Pattern Analysis and Machine Intelligence, 29(12):2247-2253, December 2007.

R. Goroshin, M. Mathieu, and Y. LeCun. Learning to linearize under uncertainty. In NIPS, 2015.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.

G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.

M. Hoai and F. Torre. Max-margin early event detectors. IJCV, 2013.

A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.

T. Lan, T. Chen, and S. Savarese. A hierarchical representation for future action prediction. In ECCV, 2014.

W. Lotter, G. Kreiman, and D. Cox. Unsupervised learning of visual structure using predictive generative networks. arXiv preprint arXiv:1504.08023, 2015.

M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in Atari games. In NIPS, 2015.

V. Patraucean, A. Handa, and R. Cipolla. Spatio-temporal video autoencoder with differentiable memory. CoRR, abs/1511.06309, 2015.

L. C. Pickup, Z. Pan, D. Wei, Y. Shih, C. Zhang, A. Zisserman, B. Scholkopf, and W. T. Freeman. Seeing the arrow of time. In CVPR, 2014.

S. L. Pintea, J. C. van Gemert, and A. W. M. Smeulders. Dejavu: Motion prediction in static images. In European Conference on Computer Vision, 2014.

M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.

M. S. Ryoo. Human activity prediction: Early recognition of ongoing activities from streaming videos. In ICCV, 2011.

C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach. In ICPR, 2004.

X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-k. Wong, and W.-c. Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In NIPS, 2015.

K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.

K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.

K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.

N. Srivastava, E. Mansimov, and R. Salakhudinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.

C. Vondrick, H. Pirsiavash, and A. Torralba. Anticipating the future by watching unlabeled video. arXiv preprint arXiv:1504.08023, 2015.

C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. In NIPS, 2016.

J. Walker, A. Gupta, and M. Hebert. Patch to the future: Unsupervised visual prediction. In CVPR, 2014.

J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from static images using variational autoencoders. CoRR, abs/1606.07873, 2016.

P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid. DeepFlow: Large displacement optical flow with deep matching. In ICCV, 2013.

T. Xue, J. Wu, K. L. Bouman, and W. T. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. NIPS, 2016.

J. Yuen and A. Torralba. A data-driven approach for event prediction. In ECCV, 2010.

M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In ICCV, 2011.

Figure 6: Qualitative comparisons on KTH testset. We display predictions starting from the 12th frame, for every 3 timesteps. More clear motion prediction can be seen on the project website."}, {"section_index": "12", "section_name": "Walking", "section_text": "Figure 7: Qualitative comparisons on KTH testset. We display predictions starting from the 12th frame, for every 3 timesteps. More clear motion prediction can be seen on the project website.

Figure 8: Qualitative comparisons on UCF-101. We display predictions (in every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground-truth. More clear motion prediction can be seen on the project website.

In this section, we show frame prediction examples in which considerable camera motion occurs. We
analyze the effects of camera motion on our best network and the corresponding baselines. First, we analyze qualitative examples on UCF101 (more complicated camera motion) and then on KTH (zoom-in and zoom-out camera effects).

UCF101 Results. As seen in Figure 9 and Figure 10, our model handles foreground and camera motion for a few steps. We hypothesize that for the first few steps, motion signals from images are clear. However, as images are predicted, motion signals start to deteriorate due to prediction errors. When a considerable amount of camera motion is present in image sequences, the motion signals are very dense. As predictions evolve into the future, our motion encoder has to handle large motion deterioration due to prediction errors, which causes motion signals to get easily confused and lost quickly.

Figure 9: Qualitative comparisons on UCF-101. We display predictions (in every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground-truth. More clear motion prediction can be seen on the project website.

Figure 10: Qualitative comparisons on UCF-101. We display predictions (in every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground-truth. More clear motion prediction can be seen on the project website.

KTH Results. We were unable to find videos with background motion in the KTH dataset, but we found videos where the camera is zooming in or out for the actions of boxing, handclapping, and handwaving. In Figure 11, we display qualitative results for such videos. Our model is able to predict the zoom change in the cameras, while continuing the action motion. In comparison to the performance observed in UCF101, the background does not change much. Thus, the motion signals are well localized in the foreground motion (human), and do not get confused with the background and lost as quickly.

Figure 11: Qualitative comparisons on KTH testset. We display predictions starting from the 12th frame, in every 3 timesteps. More clear motion prediction can be seen on the project website.

In this section, we show an additional quantitative comparison with a baseline based on copying the last observed frame through time for the KTH and UCF101 datasets. Copying the last observed frame through time ensures perfect background prediction in videos where most of the motion comes from the foreground (i.e. a person performing an action). However, if such foreground composes a small part of the video, it will result in a high prediction quality score regardless of the simple copying action.

In Figure 12 below, we can see the quantitative comparison on the two datasets. Copying the last observed frame through time does a reasonable job in both datasets; however, the impact is larger in UCF101. Videos in the KTH dataset comprise simple backgrounds with minimal camera motion, which allows our network to easily predict both foreground and background motion, resulting in better image quality scores.
However, videos in UCF101 contain more complicated and diverse backgrounds which, in combination with camera motion, present a much greater challenge to video prediction networks. From the qualitative results in Section A and Figures 5, 8, 9, and 10, we can see that our network performs better in videos that contain isolated areas of motion compared to videos with dense motion. A simple copy/paste operation of the last observed frame ensures very high prediction scores in videos where very small motion occurs. The considerable score boost from videos with small motion causes the simple copy/paste baseline to outperform MCnet in the overall performance on UCF101.
[Figure 12 plots: PSNR and SSIM over time steps for KTH and UCF101, comparing Conv LSTM, Conv LSTM + RES, MCnet, MCnet + RES, Mathieu et al., and the copy-last-frame baseline.]
Figure 12: Extended quantitative comparison including a baseline based on copying the last observed frame through time."}, {"section_index": "13", "section_name": "UCF101 MOTION DISAMBIGUATION EXPERIMENTS", "section_text": "Due to the observed bias from videos with small motion, we perform experiments by measuring the image quality scores on areas of motion. These experiments are similar to the ones performed in Mathieu et al. (2015). We compute DeepFlow optical flow (Weinzaepfel et al., 2013) between the previous and the current ground-truth image of interest, compute its magnitude, and normalize it to [0, 1]. The computed optical flow magnitude is used to mask the pixels where motion was observed. We set to zero the pixels where the optical flow magnitude is less than 0.2, and leave all other pixels untouched in both the ground-truth and predicted images. Additionally, we separate the test videos by the average l2-norm of the time difference between target frames. We separate the test videos into deciles based on the computed average l2-norms, and compute image quality on each decile. Intuitively, the 1st decile contains videos with the least overall motion (i.e. frames that show the smallest change over time), and the 10th decile contains videos with the most overall motion (i.e. frames that show the largest change over time).
As shown in Figure 13, when we only evaluate on pixels where rough motion is observed, MCnet reflects higher PSNR and SSIM, and clearly outperforms all the baselines in terms of SSIM. The SSIM results show that our network is able to predict a structure (i.e. textures, edges, etc.) similar to the ground-truth images within the areas of motion. The PSNR results, however, show that our method outperforms the simple copy/paste baseline for the first few steps, but then our method performs slightly worse. The discrepancies observed between PSNR and SSIM scores could be due to the fact that some of the predicted images may not reflect the exact pixel values of the ground truth regardless of the structures being similar. SSIM scores are known to take into consideration features in the image that go beyond directly matching pixel values, reflecting more accurately how humans perceive image quality.
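As a small, non-authoritative sketch of the masking and decile protocol described above (the flow field, frame arrays, and helper names here are all assumptions for illustration; the paper uses DeepFlow and does not specify its evaluation code):

```python
import numpy as np

def motion_masked_psnr(pred, target, flow_mag, thresh=0.2):
    """PSNR restricted to pixels where motion was observed.

    pred, target: float arrays in [0, 1] of shape (H, W, C).
    flow_mag: optical-flow magnitude between the previous and current
    ground-truth frames, normalised to [0, 1]. Pixels with magnitude
    below `thresh` are set to zero in both images, as in the text.
    """
    mask = flow_mag >= thresh
    p = np.where(mask[..., None], pred, 0.0)
    t = np.where(mask[..., None], target, 0.0)
    mse = np.mean((p - t) ** 2)
    return 10.0 * np.log10(1.0 / max(mse, 1e-10))

def avg_motion_norm(frames):
    """Average l2-norm of the time difference between target frames.

    frames: (T, H, W, C). Deciles can then be formed by binning these
    per-video scores with np.percentile over the whole test set.
    """
    diffs = np.diff(frames.astype(np.float64), axis=0)
    return np.mean(np.sqrt(np.sum(diffs ** 2, axis=(1, 2, 3))))
```

SSIM can be computed the same way on the masked images (e.g. with an off-the-shelf implementation); only the masking and the decile assignment are specific to this experiment.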
[Figure 13 plots: PSNR and SSIM over time steps on UCF101 with the motion-based pixel mask, comparing Conv LSTM, Conv LSTM + RES, MCnet, MCnet + RES, Mathieu et al., and the copy-last-frame baseline.]
Figure 13: Extended quantitative comparison on UCF101 including a baseline based on copying the last observed frame through time, using a motion-based pixel mask.
Figures 15 and 14 show the evaluation obtained by separating the test videos into deciles based on the average l2-norm of the time difference between target frames. From this evaluation, it is evident that the copy-last-frame baseline scores higher in videos where motion is the smallest. The first few deciles (videos with small motion) show that our network is not just copying the last observed frame through time, otherwise it would perform similarly to the copy-last-frame baseline. The last deciles (videos with large motion) show our network outperforming all the baselines, including the copy-last-frame baseline, effectively confirming that our network does predict motion similar to the motion observed in the video.
[Figure 14 plots: per-decile PSNR and SSIM curves (10th down to 6th decile) for Conv LSTM, Conv LSTM + RES, MCnet, MCnet + RES, Mathieu et al., and the copy-last-frame baseline.]
Figure 14: Quantitative comparison on UCF101 using a motion-based pixel mask, separating the dataset by the average l2-norm of the time difference between target frames.
[Figure 15 plots: per-decile PSNR and SSIM curves (5th down to 1st decile) for the same methods.]
Figure 15: Quantitative comparison on UCF101 using a motion-based pixel mask, separating the dataset by the average l2-norm of the time difference between target frames."}, {"section_index": "14", "section_name": "D ADVERSARIAL TRAINING", "section_text": "Mathieu et al. (2015) proposed an adversarial training for frame prediction. Inspired by Goodfellow et al. (2014), they proposed a training procedure that involves a generative model G and a discriminative model D. The two models compete in a two-player minimax game. The discriminator D is optimized to correctly classify its inputs as either coming from the training data (real frame sequence) or from the generator G (synthetic frame sequence). The generator G is optimized to generate frames that fool the discriminator into believing that they come from the training data. At training time, D takes the concatenation of the input frames that go into G and the images produced by G. The adversarial training objective is defined as follows:

\min_G \max_D \; \log D([x_{1:t}, x_{t+1:t+T}]) + \log(1 - D([x_{1:t}, G(x_{1:t})])),

where [.,.] denotes concatenation in the depth dimension, x_{1:t} denotes the input frames to G, x_{t+1:t+T} are the target frames, and G(x_{1:t}) = \hat{x}_{t+1:t+T} are the frames predicted by G. In practice, we split the minimax objective into two separate, but equivalent, objectives: L_GAN and L_disc. During optimization, we minimize the adversarial objective alternating between L_GAN and L_disc. L_GAN is defined by

L_GAN = -\log D([x_{1:t}, G(x_{1:t})]),

where we optimize the parameters of G to minimize L_GAN while the parameters of D stay untouched. As a result, G is optimized to generate images that make D believe that they come from the training data. Thus, the generated images look sharper and more realistic. L_disc is defined by

L_disc = -\log D([x_{1:t}, x_{t+1:t+T}]) - \log(1 - D([x_{1:t}, G(x_{1:t})])),

where we optimize the parameters of D to minimize L_disc, while the parameters of G stay untouched. D tells us whether its input came from the training data or the generator G. Alternating between the two objectives causes G to generate very realistic images, and D to become unable to distinguish between generated frames and frames from the training data.
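A minimal sketch of the alternating updates above, assuming PyTorch and that `G` and `D` are user-supplied `nn.Module`s (D ending in a sigmoid so it outputs a probability, frames concatenated along the channel dimension); this is an illustration of the two objectives, not the authors' implementation:

```python
import torch

def disc_step(D, G, x, y, opt_D, eps=1e-8):
    # L_disc = -log D([x, y]) - log(1 - D([x, G(x)])); G is held fixed.
    real = D(torch.cat([x, y], dim=1))
    fake = D(torch.cat([x, G(x).detach()], dim=1))
    loss = -(torch.log(real + eps) + torch.log(1.0 - fake + eps)).mean()
    opt_D.zero_grad(); loss.backward(); opt_D.step()
    return loss.item()

def gen_step(D, G, x, opt_G, eps=1e-8):
    # L_GAN = -log D([x, G(x)]); the parameters of D stay untouched.
    fake = D(torch.cat([x, G(x)], dim=1))
    loss = -torch.log(fake + eps).mean()
    opt_G.zero_grad(); loss.backward(); opt_G.step()
    return loss.item()
```

In practice the adversarial loss is combined with the reconstruction losses of the predictor; the two steps are simply alternated over mini-batches.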
"}]
BkIqod5ll | [{"section_index": "0", "section_name": "CONVOLUTIONAL NEURAL NETWORKS GENERALIZATION UTILIZING THE DATA GRAPH STRUCTURE", "section_text": "Yotam Hechtlinger, Purvasha Chakravarti & Jining Qin
{yhechtli, pchakrav, jiningq}@stat.cmu.edu"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Convolutional Neural Networks have proved to be very efficient in image and audio processing. Their success is mostly attributed to the convolutions which utilize the geometric properties of a low-dimensional grid structure. This paper suggests a generalization of CNNs to graph-structured data with varying graph structure, that can be applied to standard regression or classification problems by learning the graph structure of the data. We propose a novel convolution framework approach on graphs which utilizes a random walk to select relevant nodes. The convolution shares weights on all features, providing the desired parameter efficiency. Furthermore, the additional computations in the training process are only executed once, in the pre-processing step. We empirically demonstrate the performance of the proposed CNN on the MNIST data set, and challenge the state-of-the-art on the Merck molecular activity data set."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Convolutional Neural Networks (CNNs) (LeCun et al., 1998) are variants of multi-layer perceptrons that have been inspired by biological cells in the visual cortex. The cells act as local filters over the input space and are well-suited to exploit the strong local spatial correlation present in natural images (Hubel & Wiesel, 1968). In recent years, following a breakthrough by Krizhevsky et al. (2012) at the 2012 ImageNet challenge, CNN has repeatedly demonstrated significant improvements in a large number of computer vision problems.
The major success of CNN for visual data is justly credited to the convolution. But its strength is dependent on three crucial underlying attributes found in visual data:
1. Local connectivity assumption: The signal in visual data tends to be highly correlated in local regions, and mostly uncorrelated in global regions.
2. Shared weights assumption: The same convolution is globally valid across the image, resulting in a significant parameter reduction.
3. Grid structure of the image: Enabling a straightforward re-scaling of the feature layers through the process of max pooling.
These assumptions make it challenging to duplicate the success of CNN on a different data structure. Nevertheless, CNNs have also proved effective for non-image data, usually relying on the grid structure of the inputs. Results on acoustic data (Hinton et al., 2012), videos (Le et al., 2011) and even the Go board (Silver et al., 2016) indicate that it might be sensible to generalize CNN to other data structures that lack the underlying grid structure.
The main contribution of this work is a generalization of CNNs to general graph-structured data, directed or undirected, offering a single method that incorporates the structural information present in the graph of the features into supervised learning algorithms. Due to the active research on learning the graph structure of features, this proves to be quite a general framework. As demonstrated by the examples, a large number of standard continuous regression and classification problems fall within this framework.
The main hurdle for generalizing CNNs to graph-structured data is to find a corresponding generalized convolution operator.
We first consider a random walk on the graph in order to select the top k neighbors for every node during the pre-processing step, as Figure 1 shows. Then during the training process, the convolution is performed as an inner product of the weights and the selected top neighbors of the corresponding node in the preference order. Thus the weights are shared by each node and reflect the dependency between each node and its closest neighbors. When an image is considered as an undirected graph, this convolution operation is the same as the standard convolution. The proposed convolution is also applicable when the graph structure varies between observations.
In order to demonstrate the potential of the suggested method, we perform a set of experiments on the Merck molecular activity challenge and the MNIST data sets. The Merck molecular activity challenge data can be seen as a standard regression problem with significant correlation between the features. Essentially, for any regression or classification problem, the data can be visualized as a graph and its correlation matrix can be used to learn the corresponding graph structure. By treating the data as a graph, we show that a simple application of the graph convolutional neural network gives results that are comparable to state-of-the-art models."}, {"section_index": "3", "section_name": "2 LITERATURE REVIEW", "section_text": "Figure 1: Visualization of the graph convolution of size 5. For a given node, the convolution is applied on the node and its 4 closest neighbors selected by the random walk. As the right figure demonstrates, the random walk can expand further into the graph to higher degree neighbors. The convolution weights are shared according to the neighbors' closeness to the nodes and applied globally on all nodes.
Graph theory has been heavily studied in the last few decades, both from mathematical and statistical/computational perspectives, with a large body of algorithms developed for a variety of problems. Despite that, research on algorithms that incorporate CNNs with graph-structured data is still emerging. The idea of extending CNN to graph-structured data was recently explored by Bruna et al. (2013) and Henaff et al. (2015). They suggested two solutions. The first uses multi-scale clustering to define the network architecture, with the convolutions being defined per cluster without any weight sharing. The second defines the convolution through the eigenvalues of the graph Laplacian, weighting out the distance induced by the graph's similarity matrix. The drawback of these methods is that there is no easy way to induce weight sharing among the different nodes of the graph. Also, these methods only handle inputs of a fixed size, as the graph structure is fixed.
Standard CNN architectures use a fixed-dimensional input, which makes it difficult to apply them on data with changing graph structure. Recently, Kalchbrenner et al. (2014) developed a CNN for modeling sentences of varying lengths. Another interesting example of a convolution over a changing graph structure has recently been suggested by Duvenaud et al. (2015).
Several deep neural networks have been suggested in the past for predicting the properties of molecules (for example, Glen et al. (2006) and Lusci et al. (2013)). One of the proposed ideas is to extract features from the molecular structure into a fixed-dimensional feature vector and then use it as an input in a machine learning method.
Specifically, Duvenaud et al. (2015) propose a neural network to extract features or molecular fingerprints from molecules that can be of arbitrary size and shape. Their neural network consists of layers which are local filters being applied to all the nodes and their neighbors. After using several such convolutional layers to create representations of the original data, they apply a global pooling step to the features and feed that into a standard classifier. However, this method is limited in its ability to propagate information across the graph, limited by the depth of the network in its pooling stage.
The problem of selecting nodes for a convolution on a graph is a particular instance of the problem of selecting local receptive fields in a general neural network. The work of Coates & Ng (2011) suggests selecting the local receptive fields in a general neural network according to the closest neighbors induced by the similarity matrix.
In contrast to previous research, we suggest a novel efficient convolution that captures the local connectivity reflected in the graph structure. The convolution weights are shared among the different nodes and can even be applied to changing graph structures. We do so by considering the closest neighbors obtained in a random walk, using information contained in the similarity matrix."}, {"section_index": "4", "section_name": "GRAPH CONVOLUTIONAL NEURAL NETWORK", "section_text": "The key step which differentiates CNNs on images from regular neural networks is the selection of neighbors on the grid in a k x k window, combined with the shared weight assumption. We propose a convolution operator analogous to the convolution performed on images in standard CNNs. In order to select the local neighbors of a given node, we use the graph transition matrix and calculate the expected number of visits of a random walk starting from the given node. The convolution is then applied on the nodes being visited the most. In this section we discuss the application of the convolution in a single layer on a single graph. It is immediate to extend the definition to more complex structures, and this is explicitly explained in Section 3.4. We introduce some notation in order to proceed into further discussion.
Notation: Let G = (V, E) be a graph over a set of N features, V = (X_1, ..., X_N), and a set of edges E. Let P denote the transition matrix of a random walk on the graph, such that P_{ij} is the probability of moving from node X_i to X_j. Let the similarity matrix and the correlation matrix of the graph be given by S and R respectively.
Define D as a diagonal matrix where D_{ii} = \sum_j S_{ij}.
This work assumes the existence of the graph transition matrix P. This is not a restriction. If the graph structure of the data is already known, i.e. if the similarity matrix S is already known, then the transition matrix can be obtained, as explained in Lovász et al. (1996), by

P = D^{-1} S.

If the graph structure is unknown, it can be learned using several unsupervised or supervised graph learning algorithms. Learning the data graph structure is an active research topic and is not in the scope of this paper. The interested reader can start with Belkin & Niyogi (2001) and Henaff et al. (2015), which discuss similarity matrix estimation. We use the absolute value of the correlation matrix as the similarity matrix, following Roux et al. (2008), who showed that correlation between the features is usually enough to capture the geometrical structure of images. That is, we assume

S_{i,j} = |R_{i,j}|  \forall i, j.

Once we derive the transition matrix P, we define Q^{(k)} = \sum_{i=0}^{k} P^i, where [P^k]_{i,j} is the probability of transitioning from X_i to X_j in k steps. That is,

Q^{(0)} = I,  Q^{(1)} = I + P,  ...,  Q^{(k)} = \sum_{i=0}^{k} P^i,

so that [Q^{(k)}]_{i,j} can be read as the expected number of visits to node X_j within the first k steps of a random walk on the graph starting at X_i. As k increases, we incorporate neighbors further away from the node, while the act of summation still gives proper weights to the node itself and its closest neighbors. Figure 2 provides a visualization of the matrix Q over the 2-D grid.
To the best of our knowledge, this is the first use of the expected number of visits on a graph to select a neural network architecture. Coates & Ng (2011) and others suggest using the similarity matrix. This definition extends the notion of the similarity matrix, since Q^{(1)} agrees with the variable order induced by the similarity matrix. Furthermore, higher powers of k emphasize more the graph structure of the data, giving major hubs more weight. This might be valuable, for example, in social network data.
For every node i, let \pi_i denote the permutation of 1, 2, ..., N that orders the i-th row of Q^{(k)}, such that Q^{(k)}_{i,\pi_i(1)} \geq Q^{(k)}_{i,\pi_i(2)} \geq ... \geq Q^{(k)}_{i,\pi_i(N)}; the p nearest neighbors of node i are then X_{\pi_i(1)}, ..., X_{\pi_i(p)}.
The notion of ordered distance between the nodes is a global feature of all graphs and nodes. Therefore, we can take advantage of it to satisfy the desired shared weights assumption. We define Conv_1, the size p convolution over the graph G with nodes x \in R^N and weights w \in R^p, applied on the p nearest neighbors of each node, as the inner product

Conv_1(x)_i = \sum_{j=1}^{p} w_j x_{\pi_i(j)},  i = 1, ..., N.

The order of the weights follows from the distance induced by the transition matrix. That is, w_1 will be convoluted with the variable which has the largest value in each row according to the matrix Q^{(k)}. For example, when Q^{(1)} = I + P, w_1 will always correspond to the node itself, and w_2 will correspond to the node's closest neighbor. For higher values of k, the order will be defined by the unique graph structure. An interesting attribute of this convolution, as compared to other convolutions on graphs, is that it preserves locality while still being applicable over different graphs with different structures.
It should be noted that Conv_1 is susceptible to the effects of negative correlation between the features, and does not take into account the actual distance between the nodes (it only uses that for the selection of the closest neighbors of a node). Since the weights are being learned globally, in order to account for that, we have also defined Conv_2 as

Conv_2(x)_i = \sum_{j=1}^{p} w_j y_{i,\pi_i(j)},  where  y_{i,j} = sign(R_{i,j}) Q^{(k)}_{i,j} x_j,  \forall i = 1, ..., N,  j = 1, ..., N.    (5)

In practice Conv_2 performs slightly better than Conv_1, although the major differences between them are mostly smoothed out during the training process.
An important feature of the suggested convolution is its operation complexity. For a graph with N nodes, a single size-p convolution only requires O(N * p) operations, where p is a very small natural number (the number of neighbors considered). The major computational effort goes into the computation of Q, which is done once per graph structure in the pre-processing step.
[Figure 2 panels: a row of Q^{(k)} visualized for k = 1, 2, 4, 10.]
Figure 2: Visualization of a row of Q^{(k)} on the graph generated over the 2-D grid at a node near the center, when connecting each node to its 8 adjacent neighbors. For k = 1, most of the weight is on the node, with smaller weights on the first order neighbors. This corresponds to a standard 3 x 3 convolution. As k increases, the number of active neighbors also increases, providing greater weight to neighbors farther away, while still keeping the local information.
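A small sketch of the pre-processing step and of Conv_1 as defined above, assuming numpy and a toy (observations x features) data matrix; function names here are illustrative, not the authors' code:

```python
import numpy as np

def graph_conv_preprocess(X, k=1, p=5):
    """Build P = D^{-1} S, accumulate Q^(k), and pick top-p neighbours."""
    R = np.corrcoef(X, rowvar=False)           # feature correlation matrix
    S = np.abs(R)                              # similarity, S_ij = |R_ij|
    P = S / S.sum(axis=1, keepdims=True)       # transition matrix P = D^{-1} S
    Q, Pk = np.eye(len(P)), np.eye(len(P))
    for _ in range(k):                         # Q^(k) = I + P + ... + P^k
        Pk = Pk @ P
        Q += Pk
    # pi[i, j] = index of the j-th closest neighbour of node i under Q^(k)
    pi = np.argsort(-Q, axis=1)[:, :p]
    return Q, pi, R

def conv1(x, w, pi):
    """Conv_1(x)_i = sum_j w_j * x_{pi_i(j)}; weights shared over nodes."""
    return x[pi] @ w                           # (N, p) @ (p,) -> (N,)

# Conv_2 differs only in that neighbours are first rescaled:
# y_ij = sign(R_ij) * Q_ij * x_j, and the inner product is taken over y.
```

Note that everything up to and including `pi` is computed once per graph structure; only the inner product with the shared weights w is repeated during training.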
The selection of the value of k is data dependent, as with every hyper-parameter. But there are two main components affecting its value. Firstly, it is necessary for k to be large enough to detect the top p neighbors of every node. If the transition matrix P is sparse, it might require higher values of k. Secondly, from properties of stochastic processes, we know that if we denote \pi as the Markov chain stationary distribution, then

\lim_{k \to \infty} Q^{(k)}_{i,j} / k = \pi_j  \forall i, j.

This implies that for large values of k, local information will be smoothed out and the convolution will repeatedly be applied on the features with maximum connections. For this reason, we suggest the value of k be kept relatively low (but large enough to capture a sufficient number of features when needed).
For every graph convolution layer, we have as input a 3D tensor of observations, their features and depth. We first extend the input with an additional dimension that includes the top p neighbors of each feature selected by Q^{(k)}, transforming the input from a 3D to a 4D tensor:

(Observations, Features, Depth) -> (Observations, Features, Neighbors, Depth).

Now for every graph convolution layer, the weights are a 3D tensor with dimensions (Neighbors, Depth, Filters). Therefore the application of a graph convolution, which is a tensor dot product between the input and the weights along the (Neighbors, Depth) axes, results in the output dimensions

(Observations, Features, Neighbors, Depth) . (Neighbors, Depth, Filters) -> (Observations, Features, Filters).

Implementation of the algorithm has been done using the Keras and Theano libraries in Python, inheriting all the tools provided by the libraries to train neural networks, such as dropout regularization, advanced optimizers and efficient initialization methods. Source code will be publicly available prior to the ICLR conference on the authors' website.
[Figure 3 right panel: R^2 on the test set over 40 epochs for the architectures C10, C10-C20-FC300, C10-FC300 and FC300-FC100.]
Figure 3: Left: Visualization of the correlation matrix between the first 100 molecular descriptors (features) in the DPP4 Merck molecular activity challenge training set. The proposed method utilizes the correlation structure between the features. Right: Convergence of R^2 for the different methods on the test set. The graph convolution makes the convergence steadier by reducing the number of parameters.
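The layer itself reduces to a single tensor dot product. A minimal numpy illustration of the shapes described above (shapes and names are assumptions for the sketch; the paper's implementation uses Keras/Theano):

```python
import numpy as np

def expand_neighbors(X, pi):
    """(Observations, Features, Depth) -> (Obs, Features, Neighbors, Depth).

    pi: (Features, Neighbors) array of top-p neighbour indices from Q^(k).
    """
    return X[:, pi, :]

def graph_conv_layer(X4, W):
    """Tensor dot over the (Neighbors, Depth) axes.

    X4: (Obs, Features, Neighbors, Depth); W: (Neighbors, Depth, Filters).
    Returns (Obs, Features, Filters).
    """
    return np.tensordot(X4, W, axes=([2, 3], [0, 1]))

X = np.random.randn(32, 100, 1)                 # toy batch: 100 features, depth 1
pi = np.argsort(np.random.rand(100, 100), axis=1)[:, :5]  # stand-in for Q^(k) order
W = np.random.randn(5, 1, 10)                   # 5 neighbours, depth 1, 10 filters
out = graph_conv_layer(expand_neighbors(X, pi), W)         # (32, 100, 10)
```

Because `pi` is precomputed, the per-layer cost is just the dot product, which is what gives the O(N * p) complexity per convolution.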
"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "In order to test the feasibility of the proposed CNN on graphs, we have conducted experiments on well known data sets functioning as benchmarks - the Merck molecular activity challenge and MNIST. Both data sets are popular and well-studied challenges in computational biology and computer vision, respectively.
In all the implementations we kept the architecture shallow and simple, instead of deep and complex. This was done to enable better comparisons between the models, and to reduce the chance of over-fitting the test set by the model selection process. The hyper-parameters were chosen arbitrarily when possible rather than being tuned and optimized. Nevertheless, we still report state-of-the-art, or competitive, results on the experimented data sets.
In this section, we denote a graph convolution layer with k feature maps by Ck and a fully connected layer with k hidden units by FCk.
The Merck molecular activity challenge is a Kaggle challenge based on 15 molecular activity data sets. The target is to predict activity levels for different molecules based on the structure between the different atoms in the molecule. This helps to identify molecules in medicines which hit the intended target and do not cause side effects.
Following Henaff et al. (2015), we apply our algorithm on the DPP4 dataset. DPP4 contains 6148 training and 2045 test molecules. Some of the features of the molecules are very sparse and are only active in a handful of molecules. For these features, the correlation estimation is not very accurate. Therefore we use features that are active in at least 20 molecules (observations). This results in 2153 features. As can be seen in Figure 3, there is significant correlation structure between different features. This implies strong connectivity among the features, which is important for the application of the proposed method.
The training was performed using the Adam optimization procedure (Kingma & Ba, 2014), where the gradients are derived from the back-propagation algorithm. We used learning rate \alpha = 0.001, fixed the number of epochs to 40 and implemented dropout regularization on every layer during the optimization procedure. The correlation matrix absolute values were used to learn the graph structure. We found that a small number of nearest neighbors (p) between 5 and 10 works the best, and used p = 5 in all models.
Since this is a regression problem, we used the root mean-squared error (RMSE) loss. Following the standard set by the Kaggle challenge, results are reported in terms of the squared correlation (R^2),

R^2 = Corr(Y, \hat{Y})^2,

where Y is the actual activity level and \hat{Y} is the predicted one.
The convergence plot given in Figure 3 demonstrates the convergence of the selected architectures. The contribution of the suggested convolution is explained in view of the alternatives:
- Fully connected Neural Network: Models first applying convolution, followed by a fully connected hidden layer, converge better than more complex fully connected models. Furthermore, convergence is more stable in comparison to the fully connected models, due to the parameter reduction.
- Linear Regression: Optimizing over the set of convolutions is often considered as automation of the feature extraction process. From that perspective, a simple application of one layer of convolution, followed by linear regression, significantly outperforms the results of a standalone linear regression.
Table 1 provides more thorough R^2 results for the different architectures explored, and compares them to two of the winners of the Kaggle challenge, namely the Deep Neural Network and the Random Forest in Ma et al. (2015).

Method                                  Architecture              R^2
OLS Regression                          -                         0.135
Random Forest                           -                         0.232
Merck winner DNN                        -                         0.224
Spectral Networks                       C64-P8-C64-P8-FC1000      0.204
Spectral Networks (supervised graph)    C16-P4-C16-P4-FC1000      0.277
Fully connected NN                      FC300-FC100               0.192
Graph CNN                               C10                       0.246
Graph CNN                               C10-FC100                 0.258
Graph CNN                               C10-C20-FC300             0.268

Table 1: The squared correlation between the actual activity levels and predicted activity levels, R^2, for different methods on the DPP4 data set from the Merck molecular activity challenge.
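As a small self-contained illustration of the evaluation metric reported above (assuming numpy arrays of actual and predicted activity levels):

```python
import numpy as np

def r_squared(y, y_hat):
    # R^2 = Corr(Y, Y_hat)^2, the squared correlation used in Table 1.
    return np.corrcoef(y, y_hat)[0, 1] ** 2
```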
We perform better than both of the winners of the Kaggle contest. Since the challenge is already over, and we had full access to the test set, the results should mostly be considered as a proof of concept.
The models in Henaff et al. (2015) and Bruna et al. (2013) use a spectral approach, and currently hold the state-of-the-art. In comparison to them, we perform better than the Spectral Networks CNN on an unsupervised graph structure, which is equivalent to what was done by using the correlation matrix as the similarity matrix. The one using Spectral Networks on a supervised graph structure holds the state-of-the-art by learning the graph structure. This is a direction we have not yet explored, as graph learning is beyond the scope of this paper, although it will be straightforward to apply the proposed graph CNN in a similar way to any learned graph."}, {"section_index": "6", "section_name": "4.2 MNIST DATA", "section_text": "The MNIST data often functions as a benchmark data set to test new machine learning methods. We have experimented with two different graph structures for the images. First, we considered the images as observations from an undirected graph on the 2-D grid, where each pixel is connected to its 8 adjacent neighbor pixels. We used the convolutions over the grid structure as presented in Figure 2, and Q^{(3)} with p = 25 as the number of nearest neighbors. Due to the symmetry of the graph in most regions of the image, many pixels have equal distance from the pixel being convoluted. If ties were broken in a consistent manner, this example would reduce to the regular convolution on a 5 x 5 window for exactly the entire space except pixels 3 steps away from the boundary. In order to make the example more compelling, we have broken ties arbitrarily, making the training process harder compared to a regular CNN. Imitating LeNet, with C20, Pooling(2x2), C50, Pooling(2x2), FC100 followed by a linear classifier, resulted in a 1.1% error rate. This is worse than a regular CNN, which achieves around a 0.75% error rate with a similar architecture, and better than a fully connected neural network, which achieves around 1.4%, as expected from the complexity differences of the models.

Method                 Error Rate (%)    # of Parameters
Logistic Regression    7.49              7,180
C20                    2.24              143,410
C20-C20                1.71              145,970
C20-FC512              1.39              7,347,862
FC512-FC512            1.42              635,402

Table 2: Error rates of different methods on the MNIST digit recognition task.

Second, we used the correlation matrix to estimate the graph structure directly from the pixels. Since some of the MNIST pixels are constant (e.g. the corners are always black), we restricted the data to the 717 active pixels that are not constant.
We used Q^{(1)} with p = 6 as the number of neighbors. This was done in order to ensure that the spatial structure of the image no longer affects the results. With only 6 neighbors, and a partial subset of the pixels, the relative location of the top correlated pixels necessarily varies per pixel. As a result, regular CNNs are no longer applicable on the data, and we have compared the performance to fully connected neural networks.
Table 2 presents the experiment results. During training, a dropout rate of 0.2 has been applied on all layers to prevent overfitting. In all the experiments the final layer is the standard softmax logistic regression classifier. The Graph CNNs perform on par with fully connected neural networks, with a lower number of parameters. Also, a single layer of graph convolution followed by logistic regression greatly improves the performance of logistic regression, demonstrating the potential of the graph convolution for feature extraction purposes. As with regular convolutions, C20-FC512 had over 7M parameters, due to the fact that the convolution uses a small number of parameters to generate different maps of the input. This implies that the graph convolution might be even more effective with the development of efficient pooling methods on graphs, a problem that will be covered in future research."}, {"section_index": "7", "section_name": "5 CONCLUSIONS", "section_text": "We suggest a method to address the problem of supervised learning over graph-structured data, by extending convolutional neural networks to graph input. Our main contribution is a new way to define a convolution over a graph that can handle different graph structures as its input. The convolution can be applied to standard regression or classification problems by learning the graph structure in the data, using the correlation matrix or other methods. Compared to a fully connected layer, the suggested convolution has a significantly lower number of parameters, while providing stable convergence and comparable performance. We validated and demonstrated the predictive performance of our proposed method on benchmark machine learning data sets such as the Merck molecular activity data set and MNIST data.
Convolutional Neural Networks have already revolutionized the fields of computer vision, speech recognition and language processing. We think an important step forward is to extend them to all other problems which have an inherent graph structure within them."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Alessandro Rinaldo, Ruslan Salakhutdinov and Matthew Gormley for suggestions, insights and remarks that have greatly improved the quality of this paper."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Belkin, Mikhail and Niyogi, Partha. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, volume 14, pp. 585-591, 2001.
Coates, Adam and Ng, Andrew Y. Selecting receptive fields in deep networks. In NIPS, pp. 2528-2536, 2011.
Hubel, David H and Wiesel, Torsten N. Receptive fields and functional architecture of monkey striate cortex. The Journal of Physiology, 195(1):215-243, 1968.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.
Le, Quoc V, Zou, Will Y, Yeung, Serena Y, and Ng, Andrew Y. Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 3361-3368. IEEE, 2011.
LeCun, Yann, Bottou, Leon, Bengio, Yoshua, and Haffner, Patrick. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Ma, Junshui, Sheridan, Robert P, Liaw, Andy, Dahl, George E, and Svetnik, Vladimir. Deep neural nets as a method for quantitative structure-activity relationships. Journal of Chemical Information and Modeling, 55(2):263-274, 2015.
Roux, Nicolas L, Bengio, Yoshua, Lamblin, Pascal, Joliveau, Marc, and Kegl, Balazs. Learning the 2-d topology of images. In Advances in Neural Information Processing Systems, pp. 841-848, 2008.
Silver, David, Huang, Aja, Maddison, Chris J, Guez, Arthur, Sifre, Laurent, Van Den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016."}]
rJfMusFll | [{"section_index": "0", "section_name": "BATCH POLICY GRADIENT METHODS FOR IMPROVING NEURAL CONVERSATION MODELS", "section_text": "Kirthevasan Kandasamy
Carnegie Mellon University, Pittsburgh, PA, USA kandasamy@cs.cmu.edu
{ryoto, dtarlow, dacart}@microsoft.com
Yoram Bachrach
DigitalGenius Ltd., London, UK"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We study reinforcement learning of chatbots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chatbot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Chatbots are one of the classical applications of artificial intelligence and are now ubiquitous in technology, business and everyday life. Many corporate entities are now increasingly using chatbots to either replace or assist humans in customer service contexts. For example, Microsoft is currently actively building a chatbot to optimise and streamline its technical support service.
In these scenarios, there is usually an abundance of historical data since past conversations between customers and human customer service agents are usually recorded by organisations. An apparently straightforward solution would be to train chatbots to reproduce the responses by human agents using standard techniques such as maximum likelihood. While this seems natural, it is far from desirable for several reasons. It has been observed that such procedures have a tendency to produce very generic responses (Sordoni et al., 2015). For instance, when we trained chatbots via maximum likelihood on a restaurant recommendations dataset, they repeatedly output responses to the effect of "How large is your group?", "What is your budget?", etc. Further, they also produce responses such as "Let me look that up." or "Give me a second.", which, although permissible for a human agent to say, are not appropriate for a chatbot. Although there are ways to increase the diversity of responses (Li et al., 2015), our focus is on encouraging the bot to meaningfully advance the conversation. One way to address this problem is to provide some form of weak supervision for responses generated by a chatbot. For example, a human labeller, such as a quality assurance agent, could score each response generated by a chatbot in a conversation with a customer. This brings us to the reinforcement learning (RL) paradigm where these rewards (scores) are to be used to train a good chatbot. In this paper we will use the terms score, label, and reward interchangeably. Labelled data will mean conversations which have been assigned a reward of some form as explained above.
Nonetheless, there are some important differences in the above scenario when compared to the more popular approaches for RL.
- Noisy and expensive rewards: Obtaining labels for each conversation can be time consuming and economically expensive.
As a result, there is a limited amount of labelled data available. Moreover, labels produced by humans are invariably noisy due to human error and subjectivity.
- Off-line evaluations: Unlike conventional RL settings, such as games, where we try to find the optimal policy while interacting with the system, the rewards here are not immediately available. Previous conversations are collected, labelled by human experts, and then given to an algorithm which has to manage with the data it has.
If labelled data is in short supply, reinforcement learning could be hopeless. However, if unlabelled data can be used to train a decent initial bot, say via maximum likelihood, we can use policy iteration techniques to refine this bot by making local improvements using the labelled data (Bellman, 1956). Besides chatbots, this framework also finds applications in tasks such as question answering (Ferrucci et al., 2010; Hermann et al., 2015; Sachan et al., 2016), generating image descriptions (Karpathy & Fei-Fei, 2015) and machine translation (Bahdanau et al., 2014), where a human labeller can provide weak supervision in the form of a score to a sentence generated by a bot.
To contextualise the work in this paper, we make two important distinctions in policy iteration methods in reinforcement learning. The first is on-policy vs off-policy. In on-policy settings, the goal is to improve the current policy on the fly while exploring the space. On-policy methods are used in applications where it is necessary to be competitive (achieve high rewards) while simultaneously exploring the environment. In off-policy, the environment is explored using a behaviour policy, but the goal is to improve a different target policy. The second distinction is on-line vs batch (off-line). In on-line settings one can interact with the environment. In batch methods, which is the setting for this work, one is given past exploration data from possibly several behaviour policies and the goal is to improve a target policy using this data. On-line methods can be either on-policy or off-policy, whereas batch methods are necessarily off-policy.
In this paper, we study reinforcement learning in batch settings, for improving chatbots with Seq2Seq recurrent neural network (RNN) architectures. One of the challenges when compared to on-line learning is that we do not have interactive control over the environment. We can only hope to do as well as our data permits us to. On the other hand, the batch setting affords us some luxuries. We can reuse existing data and use standard techniques for hyper-parameter tuning based on cross validation. Further, in on-line policy updates, we have to be able to "guess" how an episode will play out, i.e. actions the behaviour/target policies would take in the future and corresponding rewards. However, in batch learning, the future actions and rewards are directly available in the data. This enables us to make more informed choices when updating our policy."}, {"section_index": "3", "section_name": "RELATED WORK", "section_text": "Recently there has been a surge of interest in deep learning approaches to reinforcement learning, many of them adopting Q-learning, e.g. (He et al., 2015; Mnih et al., 2013; Narasimhan et al., 2015). In Q-learning, the goal is to estimate the optimal action value function Q*. Then, when an agent is at a given state, it chooses the best greedy action according to Q*. While Q-learning has been successful in several applications, it is challenging in the settings we consider since estimating Q* over large action and state spaces will require a vast number of samples. In this context, policy iteration methods are more promising since we can start with an initial policy and make incremental local improvements using the data we have.
This is especially true given that we can use maximum likelihood techniques to estimate a good initial bot using unlabelled data.
Policy gradient methods, which fall within the paradigm of policy iteration, make changes to the parameters of a policy along the gradient of a desired objective (Sutton et al., 1999). Recently, the natural language processing (NLP) literature has turned its attention to policy gradient methods for improving language models. Ranzato et al. (2015) present a method based on the classical REINFORCE algorithm (Williams, 1992) for improving machine translation after preliminary training with maximum likelihood objectives. Bahdanau et al. (2016) present an actor-critic method, also for machine translation. In both cases, as the reward, the authors use the BLEU (bilingual evaluation understudy) score of the output and the translation in the training dataset. This setting, where the rewards are deterministic and cheaply computable, does not reflect the difficulties inherent to training chatbots, where labels are noisy and expensive. Li et al. (2016) develop a policy gradient method for chatbots. However, they use user-defined rewards (based on some simple rules) which, once again, are cheaply obtained and deterministic. Perhaps the closest to our work is that of Williams & Zweig (2016), who use a REINFORCE-based method for chatbots. We discuss the differences of this and other methods in greater detail in Section 3. The crucial difference between all of the above efforts and ours is that they use on-policy and/or on-line updates in their methods.
The remainder of this manuscript is organised as follows. In Section 2 we review Seq2Seq models and Markov decision processes (MDP) and describe our framework for batch reinforcement learning. Section 3 presents our method BPG and compares it with prior work in the RL and NLP literature. Section 4 presents experiments on a synthetic task and a customer service dataset for restaurant recommendations.
The goal of a Seq2Seq model in natural language processing is to produce an output sequence y = [a_1, a_2, ..., a_T] given an input sequence x (Cho et al., 2014; Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014). Here a_i \in A, where A is a vocabulary of words. For example, in machine translation from French to English, x is the input sequence in French, and y is its translation in English. In customer service chatbots, x is the conversation history until the customer's last query and y is the response by an agent/chatbot. In a Seq2Seq model, we use an encoder network to represent the input sequence as a euclidean vector and then a decoder network to convert this vector to an output sequence. Typically, both the encoder and decoder networks are recurrent neural networks (RNN) (Mikolov et al., 2010), where the recurrent unit processes each word in the input/output sequences one at a time. In this work, we will use the LSTM (long short term memory) (Hochreiter & Schmidhuber, 1997) as our recurrent unit due to its empirical success in several applications.
In its most basic form, the decoder RNN can be interpreted as assigning a probability distribution over A given the current "state".
At time t, the state s_t is the input sequence x and the words y_{t-1} = [a_1, ..., a_{t-1}] produced by the decoder thus far, i.e. s_t = (x, y_{t-1}). We sample the next word a_t from this probability distribution \pi(.|s_t), then update our state s_{t+1} = (x, y_t) where y_t = [y_{t-1}, a_t], and proceed in a similar fashion. The vocabulary A contains an end-of-statement token <EOS>. If we sample <EOS> at time T + 1, we terminate the sequence and output y_T."}, {"section_index": "4", "section_name": "2.2 A REVIEW OF MARKOV DECISION PROCESSES (MDP)", "section_text": "We present a formalism for MDPs simplified to our setting. In an MDP, an agent takes an action a in a state s and transitions to a state s'. An episode refers to a sequence of transitions s_1 -> a_1 -> s_2 -> a_2 -> ... -> a_T -> s_{T+1} until the agent reaches a terminal state s_{T+1}. At a terminal state, the agent receives a reward. Formally, an MDP is the triplet (S, A, R). Here, S is a set of states and A is a set of actions. When we take an action a at state s we transition to a new state s' = s'(s, a) which, in this work, will be deterministic. A will be a finite but large discrete set and S will be discrete but potentially infinite. R : S -> R is the expected reward function such that when we receive a reward r at state s \in S, E[r] = R(s). Let S_0 \subset S be a set of terminal states. When we transition to any s \in S_0, the episode ends. In this work, we will assume that the rewards are received only at a terminal state, i.e. R(s) is nonzero only on S_0.
A policy is a rule to select an action at a given state. We will be focusing on stochastic policies \pi : A x S -> R_+, where \pi(a|s) denotes the probability that an agent will execute action a at state s. We define the value function V^\pi : S -> R of policy \pi, where V^\pi(s) is the expected reward at the end of the episode when we follow policy \pi from state s. For any terminal state s \in S_0, V^\pi(s) = R(s) regardless of \pi. We will also find it useful to define the action-value function Q^\pi : S x A -> R, where Q^\pi(s, a) is the expected reward of taking action a at state s and then following policy \pi. With deterministic state transitions this is simply Q^\pi(s, a) = V^\pi(s'(s, a)). It can be verified that V^\pi(s) = E_{a~\pi(.|s)}[Q^\pi(s, a)] (Sutton & Barto, 1998).
We now frame our learning from labels scenario for RNN chatbots as an MDP. The treatment has similarities to some recent RL work in the NLP literature discussed above.
Let x be the input and y_{t-1} = [a_1, ..., a_{t-1}] be the words output by the decoder until time t. The state of our MDP at time t of the current episode will be s_t = (x, y_{t-1}). Therefore, the set of states S will be all possible pairs of inputs and partial output sequences. The actions A will be the vocabulary. The terminal states S_0 will be (x, y) such that the last literal of y is <EOS>. The stochastic policy will be a Seq2Seq RNN which produces a distribution over A given state s_t. When we wish to make the dependence of the policy on the RNN parameters \theta explicit, we will write \pi_\theta. When we sample an action a_t ~ \pi(.|s_t), we deterministically transition to state (x, [y_{t-1}, a_t]). If we sample a_{T+1} = <EOS> at time T + 1, the episode terminates and we observe a stochastic reward.
We are given a dataset of input-output-reward triples {(x^(i), y^(i), r^(i))}_{i=1}^n, where y^(i) = (a_1^(i), ..., a_{T^(i)}^(i), <EOS>) is the sequence of output words. This data was collected from possibly multiple behaviour policies which output y^(i) for the given input x^(i).
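A minimal sketch of one episode under this MDP framing (the `policy` callable stands in for the decoder RNN and is an assumption of the sketch; it maps the state s_t = (x, y_{t-1}) to a probability vector over the vocabulary):

```python
import numpy as np

EOS = "<EOS>"

def run_episode(policy, x, vocab, max_len=64):
    """Sample a_t ~ pi(.|s_t) until <EOS>; the reward arrives only at the end."""
    y = []                                    # words produced so far (y_{t-1})
    for _ in range(max_len):
        probs = policy(x, y)                  # distribution over vocab at s_t
        a = np.random.choice(vocab, p=probs)  # a_t ~ pi(.|s_t)
        if a == EOS:                          # terminal state reached
            break
        y.append(a)                           # deterministic transition to s_{t+1}
    return y
```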
In the above customer service example, the behaviour policies could be chatbots, or even humans, which were used for conversations with a customer. The rewards r_i are scores assigned by a human quality assurance agent to each response of the chatbot. Our goal is to use this data to improve a given target policy \pi_\theta. We will use q to denote the distribution of the data. q(s) is the distribution of the states in the dataset, q(a|s) is the conditional distribution of an action given a state, and q(s, a) = q(s)q(a|s) is the joint distribution over states and actions. q will be determined by the initial distribution of the inputs x^(i) and the behaviour policies used to collect the training data. Our aim is to find a policy that does well with respect to q. Specifically, we wish to maximise the following objective:

J(\theta) = \sum_{s \in S} q(s) V^{\pi_\theta}(s).    (1)

Here, the value function V^{\pi_\theta} is not available to us but has to be estimated from the data. This is similar to objectives used in the on-line off-policy policy gradient literature, where q is replaced by the limiting distribution of the behaviour policy (Degris et al., 2012). In the derivation of our algorithm, we will need to know q(a|s) to compute the gradient of our objective. In off-policy reinforcement learning settings this is given by the behaviour policy, which is readily available. If the behaviour policy is available to us, then we can use this too. Otherwise, a simple alternative is to "learn" a behaviour policy. For example, in our experiments we used an RNN trained using the unlabelled data to obtain values for q(a|s). As long as this learned policy can capture the semantics of natural language (for example, the word apple is more likely than car when the current state is (x, I ate an)), then it can be expected to do reasonably well. In the following section, we will derive a stochastic gradient descent (SGD) procedure that will approximately maximise (1).
Before we proceed, we note that it is customary in the RL literature to assume stochastic transitions between states and use rewards at all time steps instead of only the terminal step. Further, the future rewards are usually discounted by a discount factor \gamma < 1. While we use the above formalism to simplify the exposition, the ideas presented here extend naturally to more conventional settings.
Our derivation follows the blueprint in Degris et al. (2012), who derive an off-policy on-line actor-critic algorithm. Following standard policy gradient methods, we will aim to update the policy by taking steps along the gradient of the objective J(\theta):

\nabla J(\theta) = \nabla E_{s~q} [ \sum_{a \in A} \pi_\theta(a|s) Q^{\pi_\theta}(s, a) ] = E_{s~q} [ \sum_{a \in A} \nabla\pi_\theta(a|s) Q^{\pi_\theta}(s, a) + \pi_\theta(a|s) \nabla Q^{\pi_\theta}(s, a) ].

The latter term inside the above summation is difficult to work with, so the first step is to ignore it and work with the approximate gradient g(\theta) = E_{s~q}[ \sum_{a \in A} \nabla\pi_\theta(a|s) Q^{\pi_\theta}(s, a) ] \approx \nabla J(\theta). Degris et al. (2012) provide theoretical justification for this approximation in off-policy settings by establishing that J(\theta) \leq J(\theta + \alpha g(\theta)) for all small enough \alpha. Expanding on g(\theta), we obtain:

g(\theta) = E_{s~q} [ \sum_{a \in A} q(a|s) \frac{\pi_\theta(a|s)}{q(a|s)} \psi(a, s) Q^{\pi_\theta}(s, a) ] = E_{(s_t,a_t)~q(.,.)} [ \rho(s_t, a_t) \psi(a_t, s_t) (Q^{\pi_\theta}(s_t, a_t) - V^{\pi_\theta}(s_t)) ].    (2)

Here \psi(a, s) = \nabla\pi_\theta(a|s) / \pi_\theta(a|s) = \nabla \log \pi_\theta(a|s) is the score function of the policy and \rho(s, a) = \pi_\theta(a|s)/q(a|s) is the importance sampling coefficient. In the last step, we have used the fact that E_{a~\pi_\theta(.|s)}[\psi(a, s) h(s)] = 0 for any function h : S -> R of the current state (Szepesvári, 2010).
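To make the summand in (2) concrete, here is a toy illustration for a tabular softmax policy \pi_\theta(a|s) = softmax(\theta[s])_a (the parametrisation and stand-in estimates Q_hat, V_hat are assumptions of the sketch, not the paper's RNN policy):

```python
import numpy as np

def bpg_term(theta, s, a, q_probs, Q_hat, V_hat):
    """rho(s,a) * psi(a,s) * (Q - V) for one observed (s, a) pair.

    theta: (num_states, num_actions) logits; q_probs: q(.|s) from the
    behaviour policy; Q_hat, V_hat: scalar estimates.
    """
    logits = theta[s]
    pi = np.exp(logits - logits.max()); pi = pi / pi.sum()
    rho = pi[a] / q_probs[a]          # importance weight pi_theta / q
    psi = -pi                          # d log pi(a|s) / d logits_j = 1{j=a} - pi_j
    psi[a] += 1.0
    return rho * psi * (Q_hat - V_hat)
```

The same three ingredients (importance weight, score function, advantage) appear per time step in the full algorithm below; only the score function changes when \theta parametrises an RNN.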
The purpose of introducing the value function V^{\pi_\theta} is to reduce the variance of the SGD updates: we want to assess how good/bad action a_t is relative to how well \pi_\theta will do at state s_t in expectation. If a_t is a good action (Q^{\pi_\theta}(s_t, a_t) is large relative to V^{\pi_\theta}(s_t)), the coefficient of the score function is positive and it will change \theta so as to assign a higher probability to action a_t at state s_t.
The Q^{\pi_\theta}, V^{\pi_\theta} functions are not available to us, so we will replace them with estimates. For V^{\pi_\theta}(s_t) we will use an estimate \hat{V}(s_t); we will discuss choices for this shortly. However, the action-value function is usually not estimated in RL policy gradient settings, to avoid the high sample complexity. A sensible stochastic approximation for Q^{\pi_\theta}(s_t, a_t) is to use the sum of future rewards from the current state (Sutton & Barto, 1998).¹ If we receive reward r at the end of the episode, we can then use Q^{\pi_\theta}(s_t, a_t) \approx r for all time steps t in the episode.
¹Note Q^{\pi_\theta}(s_t, a_t) = V^{\pi_\theta}(s_{t+1}) for deterministic transitions. However, it is important not to interpret the term in (2) as the difference in the value function between successive states. Conditioned on the current time step, V^{\pi_\theta}(s_t) is deterministic, while V^{\pi_\theta}(s_{t+1}) is stochastic. In particular, while a crude estimate suffices for the former, the latter is critical and should reflect the rewards received during the remainder of the episode.
However, since q(a_t|s_t) is different from \pi_\theta(a_t|s_t), we will need to re-weight future rewards via importance sampling, r \prod_{i=t}^{T} \rho(s_i, a_i). This is to account for the fact that an action a given s may have been more likely under the policy \pi_\theta(.|s) than it was under q(.|s), or vice versa. Instead of directly using the re-weighted rewards, we will use the so-called \lambda-return, which is a convex combination of the re-weighted rewards and the value function (Sutton, 1988; 1984). In our setting, the \lambda-returns are defined recursively from the end of the episode t = T + 1 to t = 1 as follows. For \lambda \in (0, 1],

r^\lambda_{T+1} = r,    r^\lambda_t = (1 - \lambda) \hat{V}(s_{t+1}) + \lambda \rho(s_t, a_t) r^\lambda_{t+1}  for t = T, ..., 1.    (3)

Plugging these estimates into (2) yields the stochastic update

\theta <- \theta + \alpha \rho(s_t, a_t) \psi(s_t, a_t) (r^\lambda_t - \hat{V}(s_t)).

In Algorithm 1, we have summarised the procedure, where the updates are performed after an entire pass through the dataset. In practice, we perform the updates in mini-batches.

Algorithm 1 Batch Policy Gradient (BPG)
Given: Data {(x_i, y_i, r_i)}_{i=1}^n, step size \alpha, return coefficient \lambda, initial \theta_0.
Set \theta <- \theta_0.
For each epoch k = 1, 2, ...:
  Set \Delta\theta <- 0.
  For each episode i = 1, ..., n:
    r^\lambda_{T+1} <- r_i.
    For each time step in reverse t = T^(i), ..., 1:
      (i)   r^\lambda_t <- (1 - \lambda) \hat{V}(s^(i)_{t+1}) + \lambda \rho(s^(i)_t, a^(i)_t) r^\lambda_{t+1}
      (ii)  \Delta\theta <- \Delta\theta + \rho(s^(i)_t, a^(i)_t) \psi(s^(i)_t, a^(i)_t) (r^\lambda_t - \hat{V}(s^(i)_t))
      (iii) Compute updates for the value function estimate \hat{V}.
  Update the policy: \theta <- \theta + \alpha \Delta\theta.
  Update the value function estimate \hat{V}.
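A minimal sketch of one BPG epoch as reconstructed above, assuming that for each episode the per-step importance weights rho_t, (flattened) score functions psi_t, and value estimates have already been computed from the RNN; extracting those from the network is the part elided here:

```python
import numpy as np

def bpg_epoch(episodes, theta, alpha, lam):
    """One pass of Algorithm 1 over the labelled data.

    Each episode is a dict with:
      "rho": (T,) importance weights pi_theta/q per step,
      "psi": (T, dim(theta)) score functions grad log pi_theta(a_t|s_t),
      "V":   (T+1,) value estimates V_hat(s_1), ..., V_hat(s_{T+1}),
      "r":   scalar terminal reward.
    """
    delta = np.zeros_like(theta)
    for ep in episodes:
        T, ret = len(ep["rho"]), ep["r"]      # lambda-return starts at r
        for t in reversed(range(T)):          # traverse the episode backwards
            # (i)  r_t = (1 - lam) V_hat(s_{t+1}) + lam * rho_t * r_{t+1}
            ret = (1.0 - lam) * ep["V"][t + 1] + lam * ep["rho"][t] * ret
            # (ii) accumulate rho_t * psi_t * (r_t - V_hat(s_t))
            delta += ep["rho"][t] * ep["psi"][t] * (ret - ep["V"][t])
        # (iii) value-function updates (e.g. GTD(lambda)) would go here
    return theta + alpha * delta              # policy update after the pass
```

Note how the batch setting is exploited: the backward recursion for the \lambda-return uses the actions and rewards actually recorded in the data, rather than guesses about how the episode would play out.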
However, since q(a_t|s_t) is different from π_θ(a_t|s_t), we will need to re-weight future rewards via importance sampling, $r \prod_{i=t}^{T} \rho(s_i, a_i)$. This is to account for the fact that an action a given s may have been more likely under the policy π_θ(·|s) than it was under q(·|s), or vice versa. Instead of directly using the re-weighted rewards, we will use the so called λ-return, which is a convex combination of the re-weighted rewards and the value function (Sutton, 1988; 1984). In our setting, they are defined recursively from the end of the episode t = T + 1 to t = 1 as follows. For λ ∈ (0, 1],

$$r^\lambda_{T+1} = r, \qquad r^\lambda_t = (1-\lambda)\,\hat V(s_{t+1}) + \lambda\,\rho(s_t,a_t)\,r^\lambda_{t+1} \quad \text{for } t = T,\dots,1. \qquad (3)$$

This yields the following update for each time step t:

$$\theta \leftarrow \theta + \alpha\,\rho(s_t,a_t)\,\psi(s_t,a_t)\big(r^\lambda_t - \hat V(s_t)\big).$$

In Algorithm 1, we have summarised the procedure where the updates are performed after an entire pass through the dataset. In practice, we perform the updates in mini-batches.

Algorithm 1 Batch Policy Gradient (BPG)
Given: Data {(x_i, y_i, r_i)}_{i=1}^n, step size α, return coefficient λ, initial θ_0.
- Set θ ← θ_0.
- For each epoch k = 1, 2, ...:
  - Set Δθ ← 0.
  - For each episode i = 1, ..., n:
    - r^λ_{T+1} ← r_i
    - For each time step in reverse t = T^(i), ..., 1:
      (i) r^λ_t ← (1-λ) V̂(s^(i)_{t+1}) + λ ρ(s^(i)_t, a^(i)_t) r^λ_{t+1}
      (ii) Δθ ← Δθ + ρ(s^(i)_t, a^(i)_t) ψ(s^(i)_t, a^(i)_t)(r^λ_t - V̂(s^(i)_t))
      (iii) Compute updates for the value function estimate V̂.
  - Update the policy: θ ← θ + α Δθ.
  - Update the value function estimate V̂.

An Estimator for the Value Function: All that is left to do is to specify an estimator V̂ for the value function. We first need to acknowledge that this is a difficult problem: S is quite large, and for typical applications of this work there might not be enough data since labels are expensive. That said, the purpose of V̂ in (2), (3) is to reduce the variance of our SGD updates and speed up convergence, so it is not critical that this be precise - even a bad estimator will converge eventually. Secondly, standard methods for estimating the value function based on minimising the projected Bellman error require the second derivatives, which might be intractable for highly nonlinear parametrisations of V̂ (Maei, 2011). For these two statistical and computational reasons, we resort to simple estimators for V^{π_θ}. We will study two options. The first is a simple heuristic used previously in the RL literature, namely a constant estimator for V̂ which is equal to the mean of all rewards in the dataset (Williams, 1992). The second uses the parametrisation V̂(s) = σ(ξᵀφ(s)), where σ is the logistic function and φ(s) ∈ ℝ^d is a Euclidean representation of the state. For V̂(s) of the above form, the Hessian ∇²_ξ V̂(s) can be computed in O(d) time. To estimate this value function, we use the GTD(λ) estimator from Maei (2011). As φ(s) we will be using the hidden state of the LSTM. The rationale for this is as follows. In an LSTM trained using maximum likelihood, the hidden state contains useful information about the objective. If there is overlap between the maximum likelihood and reinforcement learning objectives, we can expect the hidden state to also carry useful information about the RL objective. Therefore, we can use the hidden state to estimate the value function whose expectation is the RL objective. We have described our implementation of GTD(λ) in Appendix A and specified some implementation details in Section 4.

COMPARISON WITH OTHER RL APPROACHES IN NLP

Policy gradient methods have been studied extensively in on-policy settings where the goal is to improve the current policy on the fly (Amari, 1998; Williams, 1992). To our knowledge, all RL approaches in Seq2Seq models have also adopted on-policy policy gradient updates (Bahdanau et al., 2016; Li et al., 2016; Ranzato et al., 2015; Williams & Zweig, 2016). However, on-policy methods break down in off-policy settings, because any update must account for the probability of the action under the target policy. For example, suppose the behaviour policy took action a at state s and received a low reward. Then we should modify the target policy θ so as to reduce π_θ(a|s). However, if the target policy is already assigning low probability to a|s, then we should not be as aggressive when making the updates. The re-weighting ρ(s, a) via importance sampling does precisely this.

A second difference is that we study batch RL. Standard on-line methods are designed for settings where we have to continually improve the target while exploring using the behaviour policy. Critical to such methods are the estimation of future rewards at the current state and the future actions that will be taken by both the behaviour and target policies. In order to tackle this, previous research either ignores future rewards altogether (Williams, 1992), resorts to heuristics to distribute a delayed reward to previous time steps (Bahdanau et al., 2016; Williams & Zweig, 2016), or makes additional assumptions about the distribution of the states, such as stationarity of the Markov process (Degris et al., 2012; Maei, 2011). However, in batch settings, the λ-return from a given time step can be computed directly (3), since the future actions and rewards are available in the dataset. Access to this information provides a crucial advantage over techniques designed for on-line settings.

4 EXPERIMENTS

Implementation Details: We implement our methods using Chainer (Tokui et al., 2015), and group sentences of the same length together in the same batch to make use of GPU parallelisation. Since different batches could be of different length, we do not normalise the gradients by the batch size, as we should take larger steps after seeing more data. However, we normalise by the length of the output sequence to allocate equal weight to all sentences.
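As a concrete illustration of the λ-return recursion (3) and steps (i)-(ii) of Algorithm 1, here is a minimal per-episode sketch. The helper `V_hat` and the per-step arrays are assumed inputs with illustrative names; the actual implementation batches this over sentences in Chainer.

```python
def bpg_episode_update(states, rho, psi, final_reward, V_hat, lam):
    """Accumulate the policy update for one episode (Algorithm 1, steps (i)-(ii)).

    states: s_1, ..., s_{T+1} (T+1 states, including the terminal one)
    rho:    importance weights rho(s_t, a_t), length T
    psi:    score functions psi(s_t, a_t), e.g. gradients of log pi, length T
    V_hat:  value function estimate, a callable on states
    """
    T = len(rho)
    delta_theta = 0.0
    ret = final_reward                              # r^lambda_{T+1} <- r
    for t in reversed(range(T)):                    # t = T, ..., 1
        # (i) lambda-return recursion, Equation (3)
        ret = (1 - lam) * V_hat(states[t + 1]) + lam * rho[t] * ret
        # (ii) accumulate the policy gradient step
        delta_theta += rho[t] * psi[t] * (ret - V_hat(states[t]))
    return delta_theta
```

After summing these contributions over all episodes of an epoch, the caller applies θ ← θ + α Δθ, as in the last update step of Algorithm 1.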
We truncate all output sequences to length 64 and use a maximum batch size of 32. We found it necessary to use a very small step size (10^-5); otherwise the algorithm has a tendency to get stuck at bad parameter values. While importance re-weighting is necessary in off-policy settings, it can increase the variance of the updates, especially when q(a_t|s_t) is very small. A common technique to alleviate this problem is to clip the ρ(s_t, a_t) value (Swaminathan & Joachims, 2015). In addition to single ρ(s_t, a_t) values, our procedure has a product of ρ(s_t, a_t) values when computing the future rewards (3). The effect of large ρ values is a large weight ρ_t(r^λ_t - V̂(s_t)) for the score function in step (ii) of Algorithm 1. In our implementation, we clip this weight at 5, which controls the variance of the updates and ensures that a single example does not disproportionately affect the gradient.

RNN Design: In both experiments we use deep LSTMs with two layers for the encoder and decoder RNNs. The output of the bottom layer is fed to the top layer, and in the decoder RNN, the output of the top layer is fed to a softmax layer of size |A|. When we implement GTD(λ) to estimate V̂, we use the hidden state of the bottom LSTM as φ(s). When performing our policy updates, we only change the parameters of the top LSTM and the softmax layer in our decoder RNN. If we were to change the bottom LSTM too, then the state representation φ(s) would also change as the policy changes. This violates the MDP framework. In other words, we treat the bottom layer as part of the environment in our MDP. To facilitate a fair comparison, we only modify the top LSTM and softmax layers in all methods. We have illustrated this set up in Fig. 1. We note that if one is content with using the constant estimator, then one can change all parameters of the RNN.

Figure 1: Illustration of the encoder and decoder RNNs used in our experiments. In this example, the input to the encoder is x = (..., A, B, <EOS>) and the output of the decoder is y = (U, V, W, ...). We use four different LSTMs for the bottom and top layers of the encoder and decoder networks. In our RL algorithms, we only change the top LSTM and the softmax layer of the decoder RNN, as shown in the dashed lines.

4.1 SOME SYNTHETIC EXPERIMENTS ON THE EUROPARL DATASET

To convey the main intuitions of our method, we compare our methods against other baselines on a synthetic task on the European parliament proceedings corpus (Koehn, 2005). We describe the experimental set up briefly, deferring details to Appendix B.1. The input sequence to the RNN was each sentence in the dataset. Given an input, the goal was to reproduce the words in the input without repeating words in a list of forbidden words. The RL algorithm does not explicitly know either goal of the objective but has to infer it from the stochastic rewards assigned to input output sequences in the dataset. We used a training set of 500 input-output-reward triplets for the RL methods.

We initialised all methods by maximum likelihood training on 6000 input output sequences where the output sequence was the reverse of the input sequence. The maximum likelihood objective captures part of the RL objective.
This set up reflects naturally occurring practical scenarios for the algorithm, where a large amount of unlabelled data can be used to bootstrap a policy if the maximum likelihood and reinforcement learning objectives are at least partially aligned. We trained the RL algorithms for 200 epochs on the training set. At the end of each epoch, we generated outputs from the policy on a test set of 500 inputs and scored them according to our criterion. We plot the test set error against the number of epochs for various methods in Fig. 2.

Figure 2: Results for synthetic experiments. (a): Comparison of BPG with and without maximum likelihood (ML) initialisation and BPG without importance sampling (BPG-NIS). The dotted line indicates performance of ML alone. (b): Comparison of BPG with its online counterparts OPG. We compare both methods using a constant estimator (CONST) for the value function and GTD(λ). (c): Comparison of BPG with different values of λ. All curves were averaged over 10 experiments where the training set was picked randomly from a pool. The test set was the same in all 10 experiments. The error bars indicate one standard error.

Fig. 2(a) compares 3 methods: BPG with and without maximum likelihood initialisation, and a version of BPG which does not use importance sampling. Clearly, bootstrapping an RL algorithm with ML can be advantageous, especially if data is abundantly available for ML training. Further, without importance sampling, the algorithm is not as competitive, for reasons described in Section 3. In all 3 cases, we used a constant estimator for V̂ and λ = 0.5. The dashed line indicates the performance of ML training alone. BPG-NIS is similar to the algorithms of Ranzato et al. (2015); Williams & Zweig (2016), except that there, their methods implicitly use λ = 1.

Fig. 2(b) compares 4 methods: BPG and its on-line version OPG, each with constant (CONST) and GTD(λ) estimators for V̂. The on-line versions of the algorithms are a direct implementation of the method in Degris et al. (2012), which does not use the future rewards as we do. The first observation is that while GTD(λ) is slightly better in the early iterations, it performs roughly the same as using a constant estimator in the long run. Next, BPG performs significantly better than OPG. We believe this is due to the following two reasons. First, the online updates assume stationarity of the MDP. When this does not hold, such as in limited data instances like ours, the SGD updates can be very noisy. Secondly, the value function estimate plays a critical role in the online version. While obtaining a reliable estimate V̂ is reasonable in on-line settings where we can explore indefinitely to collect a large number of samples, it is difficult when one only has a limited number of labelled samples. Finally, we compare BPG with different choices for λ in Fig. 2(c). As noted previously,
λ < 1 is useful with stochastic rewards, but choosing too small a value is detrimental. The optimal λ value may depend on the problem.

4.2 RESTAURANT RECOMMENDATIONS

We use data from an on-line restaurant recommendation service. Customers log into the service and chat with a human agent, asking for recommendations for restaurants. The agents ask a series of questions such as food preferences, group size etc. before recommending a restaurant. The goal is to train a chatbot (policy) which can replace or assist the agent. For reasons explained in Section 1, maximum likelihood training alone will not be adequate. By obtaining reward labels for responses produced by various other bots, we hope to improve on a bot initialised using maximum likelihood.

Data Collection: We collected data for RL as follows. We trained five different RNN chatbots with different LSTM parameters via maximum likelihood on a dataset of 6000 conversations from this dataset. The bots were trained to reproduce what the human agent said (output y) given the past conversation history (input x). While the dataset is relatively small, we can still expect our bots to do reasonably well since we work in a restricted domain. Next, we generated responses from these bots on 1216 separate conversations and had them scored by workers on Amazon Mechanical Turk (AMT). For each response by the bots in each conversation, the workers were shown the history before the particular response and asked to score (label) each response on a scale of 0 - 1 - 2. We collected scores from three different workers for each response and used the mean as the reward.

Policies and RL Application: Next, we initialised 2 bots via maximum likelihood and then used BPG to improve them using the labels collected from AMT. For the 2 bots we used the following LSTM hidden state size H, word embedding size E and BPG parameters. These parameters were chosen arbitrarily and are different from those of the bots used in the data collection described above.

Bot-1: H = 512, E = 256. BPG: λ = 0.5, GTD(λ) estimator for V̂.
Bot-2: H = 400, E = 400. BPG: λ = 0.5, constant estimator for V̂.

Testing: We used a separate test set of 500 conversations which had a total of more than 3500 input output (conversation history - response) pairs. For each of Bot-1 and Bot-2, we generated responses before and after applying BPG, totalling 4 responses per input. We then had them scored by workers on AMT using the same set up described above. The same worker labels the before-BPG and after-BPG responses from the same bot. This controls spurious noise effects and allows us to conduct a paired test. We collected 16,808 before and after label pairs each for Bot-1 and Bot-2 and compare them using a paired t-test and a Wilcoxon signed rank test.

Results: The results are shown in Table 1. The improvements on Bot-2 are statistically significant at the 10% level on both tests, while Bot-1 is significant only on the Wilcoxon test. The large p-values for Bot-1 are due to the noisy nature of AMT experiments, and we believe that we can attain significance if we collect more labels, which will reduce the standard error in both tests. In Appendix B.2 we present some examples of conversation histories and the responses generated by the bots before and after applying BPG. We qualitatively discuss specific kinds of issues that we were able to overcome via reinforcement learning.
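The significance tests reported in Table 1 can be computed with standard tools; a minimal sketch, assuming `before` and `after` are equal-length NumPy arrays of AMT scores for the same (conversation, worker) pairs:

```python
import numpy as np
from scipy import stats

def compare_before_after(before, after):
    """Paired significance tests for before/after-BPG scores.

    Because the same worker labels both responses, a paired test controls
    for per-worker noise.
    """
    _, p_t = stats.ttest_rel(after, before)     # paired t-test
    _, p_w = stats.wilcoxon(after, before)      # Wilcoxon signed rank test
    return {"mean_before": float(np.mean(before)),
            "mean_after": float(np.mean(after)),
            "p_paired_t": p_t, "p_wilcoxon": p_w}
```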
Table 1: The results on the Mechanical Turk experiments using the restaurant dataset. The first two columns are the mean labels of all responses before and after applying BPG on the bots initialised via maximum likelihood. The last two columns are the p-values using a paired t-test and a paired Wilcoxon signed rank test. For both Bot-1 and Bot-2, we obtained 16,808 before and after responses scored by the same worker. Bot-2 is statistically significant at the 10% level on both tests, while Bot-1 is significant on the Wilcoxon test.

5 CONCLUSION

We presented a policy gradient method for batch reinforcement learning to train chatbots. The data for this algorithm are input-output sequences generated using other chatbots/humans and stochastic rewards for each output in the dataset. This setting arises in many applications, such as customer service systems, where there is usually an abundance of unlabelled data, but labels (rewards) are expensive to obtain and can be noisy. Our algorithm is able to efficiently use minimal labelled data to improve chatbots previously trained through maximum likelihood on unlabelled data. While our method draws its ideas from previous policy gradient work in the RL and NLP literature, there are some important distinctions that contribute to its success in the settings of interest for this work. Via importance sampling we ensure that the probability of an action is properly accounted for in off-policy updates. By explicitly working in the batch setting, we are able to use knowledge of future actions and rewards to converge faster to the optimum. Further, we use the unlabelled data to initialise our method and also to learn a reasonable behaviour policy. Our method outperforms baselines on a series of synthetic and real experiments.

The ideas presented in this work extend beyond chatbots. They can be used in applications such as question answering, generating image descriptions and machine translation, where an output sentence generated by a policy is scored by a human labeller to provide a weak supervision signal.

ACKNOWLEDGEMENTS

We would like to thank Christoph Dann for the helpful conversations and Michael Armstrong for helping us with the Amazon Mechanical Turk experiments.

REFERENCES

Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.

Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941, 2015.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.

Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge, 1998.

Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057-1063, 1999.

A IMPLEMENTATION OF GTD(λ)

We present the details of the GTD(λ) algorithm (Maei, 2011) to estimate a value function in Algorithm 2. However, while Maei (2011) gives an on-line version, we present the batch version here, where the future rewards of an episode are known. We use a parametrisation of the form V̂(s) = V_ξ(s) = σ(ξᵀφ(s)), where ξ ∈ ℝ^d is the parameter to be estimated and σ(z) = 1/(1 + e^{-z}) is the logistic function.

The gradient and Hessian of V_ξ have the following forms:

$$\nabla_\xi V_\xi(s) = V_\xi(s)\,(1 - V_\xi(s))\,\phi(s), \qquad \nabla^2_\xi V_\xi(s) = V_\xi(s)\,(1 - V_\xi(s))\,(1 - 2V_\xi(s))\,\phi(s)\,\phi(s)^\top.$$

The Hessian product in step (d) of Algorithm 2 can be computed in O(d) time via

$$\nabla^2_\xi V_\xi(s)\cdot w = V_\xi(s)\,(1 - V_\xi(s))\,(1 - 2V_\xi(s))\,(\phi(s)^\top w)\,\phi(s).$$
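A direct implementation of these expressions is straightforward; the following minimal sketch (with illustrative names) computes the value, its gradient and the O(d) Hessian-vector product without ever forming the d × d Hessian:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def value_grad_and_hvp(xi, phi_s, w):
    """Value, gradient and O(d) Hessian-vector product of V_xi(s) = sigmoid(xi^T phi(s)).

    xi, phi_s, w: arrays of shape (d,).
    """
    v = sigmoid(xi @ phi_s)
    grad = v * (1 - v) * phi_s                               # nabla_xi V_xi(s)
    hvp = v * (1 - v) * (1 - 2 * v) * (phi_s @ w) * phi_s    # (nabla^2 V_xi(s)) w
    return v, grad, hvp
```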
Algorithm 2 GTD(λ)
Given: Data {(x_i, y_i, r_i)}_{i=1}^n, step sizes α', α'', return coefficient λ, initial ξ_0.
- Set ξ ← ξ_0, w ← 0.
- For each epoch k = 1, 2, ...:
  - Set Δξ ← 0, Δw ← 0.
  - For each episode i = 1, ..., n:
    - Set r^λ_{T+1} ← r_i, q_{T+1} ← 0, and ρ_t ← π_θ(a^(i)_t|s^(i)_t)/q(a^(i)_t|s^(i)_t) for t = 1, ..., T^(i), with r^λ_t as in Equation (3).
    - For each time step in reverse t = T^(i), ..., 1:
      (a) g_t ← ρ_t((1-λ) V_ξ(s^(i)_{t+1}) + λ r^λ_{t+1})
      (b) q_t ← ρ_t(1-λ) ∇_ξ V_ξ(s^(i)_{t+1}) + λ q_{t+1}
      (c) δ_t ← g_t - V_ξ(s^(i)_t)
      (d) h_t ← (δ_t - wᵀ∇_ξ V_ξ(s^(i)_t)) ∇²_ξ V_ξ(s^(i)_t) w
      (e) Δw ← Δw + (δ_t - wᵀ∇_ξ V_ξ(s^(i)_t)) ∇_ξ V_ξ(s^(i)_t)
      (f) Δξ ← Δξ + δ_t ∇_ξ V_ξ(s^(i)_t) - q_t wᵀ∇_ξ V_ξ(s^(i)_t) - h_t
  - w ← w + α'' Δw.
  - ξ ← ξ + α' Δξ.

The algorithm requires two step sizes α', α'' for the updates to ξ and the ancillary parameter w. Following the recommendations in Borkar (1997), we use α'' < α'. In our implementations, we used α' = 10^-5 and α'' = 10^-6. When we run BPG, we perform steps (a)-(f) of Algorithm 2 in step (iii) of Algorithm 1, and the last two update steps of Algorithm 2 in the last update step of Algorithm 1.

B ADDENDUM TO EXPERIMENTS

Given an input and output sequence, we used the average of five Bernoulli rewards Bern(r), where the parameter r was r = 0.75 r_c + 0.25 r_f. Here r_c was the fraction of common words in the input and output sequences, while r_f = 0.01^{p_f}, where p_f is the fraction of forbidden words in the output sequence. As the forbidden words, we used the 50 most common words in the dataset. So if an input had 10 words, of which 2 were forbidden, an output sequence repeating 7 of the allowed words and 1 forbidden word would receive an expected score of 0.75 × (8/10) + 0.25 × 0.01^{1/8} ≈ 0.7406.

The training and testing sets for reinforcement learning were obtained as follows. We trained 4 bots using maximum likelihood on 6000 input output sequences, as indicated in Section 4.1. The LSTM hidden state size H and word embedding size E for the 4 bots were (H, E) = (256, 128), (128, 64), (64, 32), (32, 16). The vocabulary size was |A| = 12000. We used these bots to generate outputs for 500 different input sequences each. This collection of input and output pairs was scored stochastically as described above to produce a pool of 2000 input-output-score triplets. From this pool we use a fixed set of 500 triplets for testing across all our experiments. From the remaining 1500 data points, we randomly select 500 for training for each execution of an algorithm. For all RL algorithms, we used an LSTM with hidden state size 16 and 16-dimensional word embeddings.
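The stochastic scoring described above has a short direct implementation; a minimal sketch, where the word sequences, forbidden-word set and random generator are assumed inputs:

```python
import numpy as np

def synthetic_score(input_words, output_words, forbidden, rng, n_draws=5):
    """Average of five Bernoulli(r) draws with r = 0.75 r_c + 0.25 * 0.01 ** p_f."""
    # fraction of the input's words that the output reproduces
    r_c = len(set(input_words) & set(output_words)) / len(input_words)
    # fraction of forbidden words in the output
    p_f = sum(w in forbidden for w in output_words) / len(output_words)
    r = 0.75 * r_c + 0.25 * 0.01 ** p_f
    return rng.binomial(1, r, size=n_draws).mean()

# rng = np.random.default_rng(0)
```

With the 10-word input from the worked example above (8 reproduced words, 1 forbidden word in an 8-word output), `r` evaluates to ≈ 0.7406, matching the expected score quoted in the text.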
We collected the initial batch of training data for RL as follows: we trained, via maximum likelihood on 6000 conversations, five RNN bots whose LSTM hidden size H and word embedding size E were (H, E) = (512, 512), (256, 256), (128, 128), (512, 256), (256, 64). The inputs x were all words from the history of the conversation truncated at length 64, i.e. the most recent 64 words in the conversation history. The outputs were the actual responses of the agent, which were also truncated to length 64. As the vocabulary we use the |A| = 4000 most commonly occurring words in the dataset, and replace the rest with an <UNK> token.

Using the bots trained this way, we generated responses on 1216 separate conversations. This data was sent to AMT workers who were asked to label the conversations on the following scale:

- 2: The response is coherent and appropriate given the history and advances the conversation forward.
- 1: The response has some minor flaws but is discernible and appropriate.
- 0: The response is either completely incoherent or inappropriate and fails to advance the conversation forward.

SOME QUALITATIVE RESULTS

In Tables 2 and 3 we have presented some examples. The text in black/grey shows the conversation history, the response in blue is by the bot trained via maximum likelihood (ML) alone, and the response in red is by the bot after improvement using our BPG reinforcement learning algorithm.

The first two examples of Table 2 present cases where the ML algorithm repeated generic questions (on budget, group size etc.) even though they had already been answered previously. After applying BPG, we are able to correct such issues, even though there are some grammatical errors. In the second, third and fourth examples, we see that the ML+BPG bot is able to take context into consideration well when responding. For example, the customer asks for oriental/Mexican/Italian food. While the ML bot doesn't take this into consideration, the ML+BPG bot is able to provide relevant answers. However, in the third example, the name of the restaurant suggests that the food might be Indian and not Mexican. In the final example of Table 2, the customer asks a direct question about smoking. The ML bot provides an irrelevant answer, whereas the ML+BPG bot directly responds to the question.

In some examples, the ML bot had a tendency to produce sentences that were grammatically correct but nonsensical, sensible but grammatically incorrect, or just complete gibberish. We were able to correct such issues via RL. The first three examples of Table 3 present such cases. Occasionally the opposite happened. The last example of Table 3 is one such instance.

Customer: Hi there, I'm looking for a restaurant recommendation.
Agent: How many in your party?
Customer: 5, including children.
ML: How many people will be joining you? [0]
ML+BPG: That is a fun size, and what is your budget joining you? [1.5]

Customer: Hey... Sorry, it's the anniversary of my wife's death. I always do oriental tonight.
Agent: I'm sorry to hear that. I will do my best to find just what you need.
Customer: Thank you very much.
Agent: What kind of budget are you looking at?
Customer: Doesn't matter... $100 sounds like a good number.
Agent: Will it be just you, or are others going to join you?
Customer: It's just me tonight, but if you are off soon I wouldn't mind a friendly ear. No pressure.
ML: What type of budget are you looking to stay within? [0.33]
ML+BPG: How about: Cafe Hungry <UNK>? They serve sea American to oriental on site such? [1.33]
Customer: I would love to find a great Mexican restaurant!
Agent: Great, and will anybody be joining you?
Customer: There will be a total of 3 of us.
Customer: Could you find me a place with outdoor seating?
Agent: How about: The Painted Burrito. They serve Mexican food.
Customer: Eh, I've heard that one isn't great. Can you please suggest another?
ML: How about: The Hungry .w seafood restaurant? they serve sea and American food. [0.33]
ML+BPG: I am happy to, one moment. How about: Diva Indian Bistro Boston bar? They serve Mexican food. [1.67]

Customer: Hello, I'm looking for a restaurant.
Agent: I can help! Please tell me what kind of foods you most enjoy. I see you are in the Cambridge area.
Customer: I would like a restaurant that serves Italian food.
Agent: Are you looking for something a little less expensive or are you looking for a high end restaurant?
Customer: I am within a budget, 20 to 50 dollars.
Agent: I highly recommend The Redhouse Restaurant. It serves ...
Customer: Is there another restaurant? The restaurant would also need to host a party size of 5 people.
ML: How about: Amuse? They serve French and American food. [0]
ML+BPG: How about: Ocean prime - Boston Seaport? They serve sea, American and Italian food. [2]

Agent: How about Cafe at Taj Boston?
Customer: Oh, I just ate there last night. Anything else?
Agent: Try Mela Restaurant at 578 Tremont St. Anything else I can help you with today?
Customer: That sounds nice. Do they allow smoking?
ML: This sure is within your budget for all. [0.5]
ML+BPG: No. [1.5]

Table 2: Some qualitative results on the AMT experiment using restaurant data. In each cell, the text in black/grey shows the history of a conversation between the customer and a human agent. The dots ... mean that part of the conversation history is hidden. The blue response is by the bot trained using maximum likelihood (ML) alone. The red response is by the bot improved with BPG after ML. The numbers on the right of the blue/red responses are the average scores each response received from 2-3 AMT workers.

Table 3: Qualitative results on the AMT experiment using restaurant data, continued. See caption under Table 2 for more details.

Agent: How may I help you?
Customer: I'd like to find a French restaurant.
Agent: Do you have a budget in mind?
Customer: Between $50 and $75.
Agent: How many will be in your party?
Customer: Just me.
ML: And how will you be dining? [0]
ML+BPG: How will you be traveling to the restaurant? [2]

Agent: Hello, how may I help you?
Customer: We are kinda hungry, any suggestion in mind?
Agent: What type of cuisine do you like?
Customer: I want them Americano food.
ML: Very evening. Will you be dining alone? [1]
ML+BPG: Excellent choice. How many people will be in your party? [2]

Customer: I'd like to find a good local restaurant.
Agent: What type of cuisine would you like?
Customer: I have a hankering for Indian curry.
Agent: Will you be needing parking?
Customer: We'll be driving altogether, so yes.
ML: Thank you for 1. How about: T. w your curry this place should be. dining? [0]
ML+BPG: How about: The Maharaja? They serve Indian and American food. [2]

Agent: How can I help you today?
Customer: I would like a restaurant recommendation.
Agent: Okay what type of resturant?
Customer: Chinese would be great.
Agent: Who will be dining, how will you be travelling?
Customer: Just me and by bus.
ML: Okay, thanks. Do you have a budget in mind? [2]
ML+BPG: Do you want a wonderful place evening, or whats your party adults, and a place. [0]
DIFFERENTIABLE CANONICAL CORRELATION ANALYSIS

Matthias Dorfer
Department of Computational Perception, Johannes Kepler University Linz, Linz, 4040, Austria

Jan Schlüter
The Austrian Research Institute for Artificial Intelligence, Vienna, 1010, Austria

ABSTRACT

Canonical Correlation Analysis (CCA) computes maximally-correlated linear projections of two modalities. We propose Differentiable CCA, a formulation of CCA that can be cast as a layer within a multi-view neural network. Unlike Deep CCA, an earlier extension of CCA to nonlinear projections, our formulation enables gradient flow through the computation of the CCA projection matrices, and free choice of the final optimization target. We show the effectiveness of this approach in cross-modality retrieval experiments on two public image-to-text datasets, surpassing both Deep CCA and a multi-view network with freely-learned projections. We assume that Differentiable CCA could be a useful building block for many multi-modality tasks.

1 INTRODUCTION

Deep Canonical Correlation Analysis (DCCA) (Andrew et al., 2013) is a non-linear extension of classic Canonical Correlation Analysis (CCA) (Hotelling, 1936) that learns highly correlated latent representations on top of two different neural networks. The central idea of our work is to extend this formulation and cast CCA as a fully differentiable neural network layer which allows for parameter optimization via back-propagation through the CCA projection matrices. This is in contrast to DCCA, where correlation analysis is the topmost part of the network and only used as an optimization target for maximizing the correlation between the respective views. DCCA in general gained a lot of attention recently. It inspired related methods such as Deep Linear Discriminant Analysis (Dorfer et al., 2015) as well as a discriminative re-formulation of DCCA (Elmadany et al., 2016) applied to improve speech-based emotion recognition. Wang et al. (2015a) show that joint optimization of correlation and reconstruction error in auto-encoder configurations is successfully used for representation learning on a multi-modal speech production dataset. We take this as a motivation to evolve and extend the applicability of DCCA.

In our experiments, we employ the proposed differentiable CCA layer in a cross-modality retrieval setup. Cross-modality retrieval is the task of retrieving relevant data of another type when a sample of a different modality is given as a search query. A recent survey by Wang et al. (2016) categorizes the task into binary and real-valued representation learning. In the case of real-valued representation learning, End-to-End DCCA (Yan & Mikolajczyk, 2015) achieves state of the art retrieval results in combination with retrieval by cosine distance computation. With differentiable CCA, it becomes possible to train the networks to directly minimize the objective which will be used for retrieval (e.g., the cosine distance), while still benefitting from the optimally-correlated projections obtained by CCA. Results on two publicly available datasets (Flickr30k (Young et al., 2014), IAPR TC-12
The remainder of our paper is structured as follows. In Section 2, we review classic and deep CCA, which are the basis for the differentiable CCA layer proposed in Section 3. In Section 4, we show results of an experimental evaluation in a cross-modality retrieval setting and provide further investigations on the representations learned by our networks. Finally, Section 5 concludes the paper.

In this section, we review the concepts of classical and deep Canonical Correlation Analysis, the basis for the methodology proposed in this work.

2.1 CANONICAL CORRELATION ANALYSIS (CCA)

Let x ∈ ℝ^{d_x} and y ∈ ℝ^{d_y} denote two random vectors with covariances Σ_xx and Σ_yy and cross-covariance Σ_xy. The objective of CCA is to find two matrices A* ∈ ℝ^{d_x×k} and B* ∈ ℝ^{d_y×k} (with k ≤ d_x and k ≤ d_y) that project x and y into a common space maximizing their cross-correlation:

$$(A^*, B^*) = \operatorname*{argmax}_{A,B}\; \operatorname{corr}(A'x, B'y). \qquad (1)$$

As the correlation is invariant to scaling of A and B, the projections can be constrained to have unit variance:

$$(A^*, B^*) = \operatorname*{argmax}_{A'\Sigma_{xx}A \,=\, B'\Sigma_{yy}B \,=\, I}\; A'\Sigma_{xy}B. \qquad (2)$$

Let $T = \Sigma_{xx}^{-1/2}\,\Sigma_{xy}\,\Sigma_{yy}^{-1/2}$, and let $T = U \operatorname{diag}(d)\, V'$ be its singular value decomposition (SVD) with singular values d_i in descending order. A* and B* are obtained from the top k left and right singular vectors of T:

$$A^* = \Sigma_{xx}^{-1/2}\, U_{:k}, \qquad B^* = \Sigma_{yy}^{-1/2}\, V_{:k}. \qquad (3)$$

Moreover, the cross-correlation of the projections is the sum of the top k singular values:

$$\operatorname{corr}(A^{*\prime}x, B^{*\prime}y) = \sum_{i \le k} d_i. \qquad (4)$$

In practice, the covariances and cross-covariance of x and y are usually not known, but estimated from a training set of m paired vectors, expressed as matrices X ∈ ℝ^{d_x×m}, Y ∈ ℝ^{d_y×m}:

$$\bar X = X - \tfrac{1}{m} X \mathbf{1}, \qquad \bar Y = Y - \tfrac{1}{m} Y \mathbf{1}, \qquad (5)$$

$$\hat\Sigma_{xx} = \tfrac{1}{m-1} \bar X \bar X' + rI, \qquad \hat\Sigma_{xy} = \tfrac{1}{m-1} \bar X \bar Y', \qquad \hat\Sigma_{yy} = \tfrac{1}{m-1} \bar Y \bar Y' + rI. \qquad (6)$$

Here, r is a regularization parameter ensuring the matrices are positive definite. Substituting these estimates for Σ_xx, Σ_xy and Σ_yy, respectively, we can estimate A* and B* using Equation 3.

2.2 DEEP CANONICAL CORRELATION ANALYSIS (DCCA)

Andrew et al. (2013) propose an extension of CCA that allows learning parametric nonlinear transformations of two variables maximizing the cross-correlation after optimal projection. Specifically, let a ∈ ℝ^{d_a} and b ∈ ℝ^{d_b} denote two random vectors, and let x = f(a; Θ_f) and y = g(b; Θ_g) denote their nonlinear transformations, parameterized by Θ_f and Θ_g. For example, f and g could be feed-forward neural networks. As before, Equation 3 gives the linear transformations of x and y optimizing the CCA objective in Equation 2. Deep CCA optimizes Θ_f and Θ_g to further increase the cross-correlation. For d_x = d_y = k, the CCA objective is equal to the sum of all singular values of T (Equation 4), which is equal to its trace norm:

$$\operatorname{corr}(f(a; \Theta_f), g(b; \Theta_g)) = \operatorname{corr}(x, y) = \lVert T \rVert_{\mathrm{tr}} = \operatorname{tr}\big((T'T)^{1/2}\big). \qquad (7)$$

Figure 1: Comparison of DCCA and the proposed differentiable CCA layer. DCCA optimizes the correlation of the two different views and is therefore the topmost part of the network. In contrast, our CCA layer establishes gradient flow over the CCA computation. This allows us to use the projection output of CCA as input for subsequent components in a multi-view network (e.g., a retrieval objective such as cosine distance). (a) DCCA: a Trace Norm Objective on top of the two views; (b) CCA Layer: a retrieval objective on top of the CCA projections of the two views.

Andrew et al. (2013) show how to compute the gradient of this Trace Norm Objective (TNO) with respect to x and y. Assuming f and g are differentiable with respect to Θ_f and Θ_g (as is the case for neural networks), this allows to optimize the nonlinear transformations via a gradient-based method. Figure 1a shows a schematic sketch of DCCA, as a fixed objective backpropagated through two neural networks.
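For reference, the classical CCA computation of Section 2.1 (Equations 5, 6 and 3) fits in a few lines; a minimal NumPy sketch with illustrative names, not the paper's code:

```python
import numpy as np

def cca_projections(X, Y, k, r=1e-3):
    """Classical CCA: estimate A*, B* from paired data X (d_x, m), Y (d_y, m)."""
    m = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)                 # centering, Equation (5)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Sxx = Xc @ Xc.T / (m - 1) + r * np.eye(X.shape[0])     # estimates, Equation (6)
    Syy = Yc @ Yc.T / (m - 1) + r * np.eye(Y.shape[0])
    Sxy = Xc @ Yc.T / (m - 1)

    def inv_sqrt(S):                                       # S^(-1/2) for symmetric S
        e, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(e)) @ V.T

    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, d, Vt = np.linalg.svd(T)
    A = inv_sqrt(Sxx) @ U[:, :k]                           # Equation (3)
    B = inv_sqrt(Syy) @ Vt[:k].T
    return A, B, d[:k].sum()                               # total correlation, Equation (4)
```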
In this section, we further extend DCCA to allow not only an arbitrary nonlinear transformation of the inputs, but also arbitrary transformations of (or objectives on) the projected vectors. This allows CCA to be used as a building block within a multi-modality neural network, instead of as a final objective only. In the following, we will discuss how to enable backpropagation through CCA, what to consider when doing stochastic updates, and how to apply it for cross-modality retrieval.

3.1 GRADIENT OF CCA

As mentioned above, we can compute the canonical correlation along with the optimal projection matrices from the singular value decomposition $T = \hat\Sigma_{xx}^{-1/2}\,\hat\Sigma_{xy}\,\hat\Sigma_{yy}^{-1/2} = U \operatorname{diag}(d)\, V'$. Specifically, for DCCA, it suffices to compute the gradient of the total correlation wrt. x and y in order to backpropagate it through the two networks f and g. Using the chain rule, Andrew et al. (2013) decompose this into the gradients of the total correlation wrt. Σ_xx, Σ_xy and Σ_yy, and the gradients of those wrt. x and y. Their derivations of the former make use of the fact that both the gradient of Σ_i d_i wrt. T and the gradient of ||T||_tr (the trace norm objective in Equation 7) wrt. T'T have a simple form; see Andrew et al. (2013, Sec. 7) for details.

For our differentiable CCA, we instead need the gradients of the projected data A*'x and B*'y wrt. x and y, which require ∂U/∂x,y and ∂V/∂x,y. We could again decompose this into the gradients of U and V wrt. T, the gradients of T wrt. Σ_xx, Σ_xy and Σ_yy, and the gradients of those wrt. x and y. However, while the gradients of U and V wrt. T are known (Papadopoulo & Lourakis, 2000), they involve solving O((d_x d_y)²) linear 2×2 systems. To arrive at a more practical implementation that does not require the gradient of the SVD, we reformulate the solution to use two symmetric eigendecompositions TT' = U diag(e) U' and T'T = V diag(e) V' (Petersen & Pedersen, 2012, Eq. 270). This gives us the same left and right eigenvectors we would obtain from the SVD (save for possibly flipped signs, which are easy to fix), along with the squared singular values (e_i = d_i²). The gradients of eigenvectors of symmetric real eigensystems have a simple form (Magnus, 1985, Eq. 7), and both TT' and T'T are differentiable wrt. x and y, enabling a sufficiently efficient implementation in a graph-based, auto-differentiating math compiler such as Theano (Theano Development Team, 2016).
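A minimal sketch of this forward computation in PyTorch (the paper's implementation used Theano; the eigendecomposition-based formulation is the same): gradients flow through every step, including the eigendecompositions, so the projections can feed any subsequent objective. Note that gradients of eigenvectors become unstable for (near-)repeated eigenvalues, which the covariance regularization r helps to avoid.

```python
import torch

def differentiable_cca(x, y, k, r=1e-3):
    """Differentiable CCA layer: x (d_x, m), y (d_y, m) network activations."""
    m = x.shape[1]
    xc = x - x.mean(dim=1, keepdim=True)
    yc = y - y.mean(dim=1, keepdim=True)
    eye = lambda d: torch.eye(d, dtype=x.dtype, device=x.device)
    Sxx = xc @ xc.T / (m - 1) + r * eye(x.shape[0])
    Syy = yc @ yc.T / (m - 1) + r * eye(y.shape[0])
    Sxy = xc @ yc.T / (m - 1)

    def inv_sqrt(S):
        e, V = torch.linalg.eigh(S)           # differentiable symmetric eigendecomposition
        return V @ torch.diag(e.clamp_min(1e-12).rsqrt()) @ V.T

    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    # Left/right singular vectors via the symmetric eigensystems TT' and T'T;
    # eigh returns ascending eigenvalues, so take and reorder the last k columns.
    _, U = torch.linalg.eigh(T @ T.T)
    _, V = torch.linalg.eigh(T.T @ T)
    U = torch.flip(U[:, -k:], dims=[1])
    V = torch.flip(V[:, -k:], dims=[1])
    # Fix the sign ambiguity so that U' T V has a positive diagonal.
    signs = torch.sign(torch.diagonal(U.T @ T @ V)).detach()
    V = V * signs
    A = inv_sqrt(Sxx) @ U
    B = inv_sqrt(Syy) @ V
    return A.T @ xc, B.T @ yc                 # correlated projections of both views
```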
3.2 STOCHASTIC OPTIMIZATION

For classical CCA, Σ_xx, Σ_xy and Σ_yy are estimated from a large set of m training examples (Equation 6). In contrast, gradient-based optimization of neural networks usually estimates the gradients wrt. network parameters from mini-batches of n randomly drawn examples, with n ≪ m. In Deep CCA as well as in our extension, the correlations are functions of the network parameters that we need to backpropagate through, effectively enforcing m = n.

Andrew et al. (2013) solve this discrepancy by optimizing the network parameters with L-BFGS on the full training set, which is infeasible for very large datasets. Yan & Mikolajczyk (2015) instead train on small mini-batches, estimating correlation matrices of size 4096×4096 from 100 examples only, which seems risky. We will choose a way in between, training on large mini-batches to obtain stable estimates. This approach was also taken by Wang et al. (2015b, Sec. 5.1), who found mini-batches of 400-1000 examples to even outperform full-batch L-BFGS. In addition, for testing, we optionally re-estimate the correlation matrices (and the corresponding projection matrices) using a larger set of m > n examples.

Another tempting option is to train on small mini-batches, but use exponential moving averages updated with each mini-batch as follows:

$$\Sigma_{xx} \leftarrow \Sigma_{xx}(1-\alpha) + \hat\Sigma_{xx}\,\alpha, \qquad \Sigma_{xy} \leftarrow \Sigma_{xy}(1-\alpha) + \hat\Sigma_{xy}\,\alpha, \qquad \Sigma_{yy} \leftarrow \Sigma_{yy}(1-\alpha) + \hat\Sigma_{yy}\,\alpha.$$

With proper initialization and a sufficiently small coefficient α, this gives stable estimates even for small n. However, since only the estimates from the current mini-batch Σ̂_xx, Σ̂_xy and Σ̂_yy can be practically considered in backpropagation, this changes the learning dynamics: For too small α, the projection matrices will be virtually degraded to constants. Empirically, we found that large mini-batches perform slightly better than small batches with moving averages (see Appendix B).

3.3 CROSS-MODALITY RETRIEVAL WITH DIFFERENTIABLE CCA

DCCA maximizes the correlation between the latent representations of two different neural networks. When the two network inputs a and b represent different views of an entity (e.g., an image and its textual description), DCCA projects them into a common space where they are highly correlated. This can be exploited for cross-modality retrieval: Projecting one modality of an entity, we can find the best-matching representations of the second modality (e.g., an image for a textual description, or vice versa). To find the best matches, a common option is to compute nearest neighbors in terms of cosine distance (Yan & Mikolajczyk, 2015), which is closely related to correlation.

Given the methodology introduced above, we now have the means to optimize DCCA projections directly for the task at hand. In Figure 1b, we show a possible setting where we put the differentiable CCA layer on top of a multi-view network. Instead of optimizing the networks to maximize the correlation of the projected views (the TNO), we can optimize the networks towards a task-specific objective and still benefit from the optimality of the CCA projections.

For this work, we optimize towards minimal cosine distance between the correlated views, the very metric used for retrieval. In the next section, we empirically show that this is indeed beneficial in terms of quantitative retrieval performance as well as convergence speed of network training.
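A sketch of the squared cosine distance objective (CCAL-cos2) on the projected views; `px` and `py` are assumed to be the paired projections of the two views, e.g. the outputs of a differentiable CCA layer such as the sketch above:

```python
import torch

def ccal_cos2_loss(px, py, eps=1e-8):
    """Mean squared cosine distance between corresponding projections.

    px, py: (k, m) tensors whose columns are paired samples.
    """
    px = px / (px.norm(dim=0, keepdim=True) + eps)
    py = py / (py.norm(dim=0, keepdim=True) + eps)
    cos_sim = (px * py).sum(dim=0)          # cosine similarity per paired sample
    return ((1.0 - cos_sim) ** 2).mean()
```

Replacing the squaring by the plain residual or by a squared Euclidean distance gives the CCAL-cos and CCAL-l2 variants compared in the experiments.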
This i. done for all three sets. Protocol 5 captions pools only the captions of the train set and keeps five. separate annotations for validation and test set. The IAPR TC-12 dataset contains 20000 natura. mages where only one - but compared to Flickr30k more detailed - caption is available for eacl. mage. As no predefined train-validation-test split is provided, we randomly select 2o00 images fo. testing, 1000 for validation and keep the rest for training. Yan & Mikolajczyk (2015) also use 2000. mages for testing, but did not explicitly mention hold out images for validation. Table 1 shows ar. example image along with its corresponding captions or caption for either dataset..\nThe task at hand for both datasets is to retrieve the correct counterpart - either text or image -. when given a query element of the other modality. We follow Yan & Mikolajczyk (2015) and use. the cosine distance for retrieval in the projection space. As evaluation measures we consider the Recall@k (R@k) as well as the Median Rank (MR) and the Mean Average Precision (MAP). The R@k rate (high is better) is the ratio of queries which have the correct corresponding counterpart in. the first k retrieval results. The MR is the median position (low is better) of the target in a similarity. ordered list of available candidates. Finally, we define the MAP (high is better) as the mean value of. 1/ Rank over all queries.\nThe input to our networks is a 4096-dimensional image feature vector along with a correspond ing text vector representation (5793 for Flickr30k, 2048 for IAPR TC-12). In terms of text pre- processing, we follow Yan & Mikolajczyk (2015), tokenizing and lemmatizing the raw captions as the first step. Based on the lemmatized captions, we compute l2-normalized TF/IDF-vectors. omitting words with an overall occurrence smaller than 5 times for Flickr30k and 3 times for IAPR TC-12, respectively. The image represenations are computed from the last hidden layer of a network pretrained on ImageNet (layer fc7 of CNN_S by Chatfield et al. (2014))."}, {"section_index": "9", "section_name": "4.2 NETWORK ARCHITECTURES AND OPTIMIZATION DETAILS", "section_text": "We feed 4096-dimensional image vectors along with the corresponding text representation into ou networks. The image representation is followed by a linear dense layer with 128 units (this will als be the dimensionality k = 128 of the resulting CCA retrieval space). The text vector is processe by two batch-normalized (Ioffe & Szegedy, 2015) dense layers of 1024 units each and an ELI activation function (Clevert et al., 2015). As a last layer for the text representation network, w again apply a dense layer with 128 linear units. For a fair comparison, we keep the structure (an number of parameters) of all networks in our experiments the same. The only parameters that var are the objectives and the corresponding optimization/regularization strategies. In particular, w apply a grid search on the respective hyper-parameters and report the best results for each method Optimization is performed either using Stochastic Gradient Descent (SGD) with momentum or b the adam (Kingma & Ba, 2014) update rule.\nA man in a white cowboy hat reclines in front of a window in an airport\nyoung man rests on an airport seat with a cowboy hat over his face\nA man is sleepin. 
Table 1: Example images for Flickr30k (top) and IAPR TC-12 (bottom). Example Flickr30k captions: "A man in a white cowboy hat reclines in front of a window in an airport."; "A young man rests on an airport seat with a cowboy hat over his face."; "A man is sleeping inside on a bench with his hat over his eyes."; "A person is sleeping at an airport with a hat on their head.". Example IAPR TC-12 caption: "a green and brown embankment with brown houses on the right and a light brown sandy beach at the dark blue sea on the left; a dark mountain range behind it and white clouds in a light blue sky in the background".

As optimization targets, we consider the following candidates: (1) The Trace Norm Objective (TNO) as our baseline for cross-modality retrieval (Yan & Mikolajczyk, 2015). (2) The proposed differentiable CCA layer in combination with the objectives cosine distance (CCAL-cos), squared cosine distance (CCAL-cos2) and euclidean distance (CCAL-l2). As an additional setting, we consider a freely-learnable projection layer where the projection matrices A and B are randomly initialized weights that can be optimized by the network using SGD in the conventional way. This allows to assess the benefit of using CCA-derived projections within a multi-view network under otherwise unchanged objectives. For this experiment, we optimize for the squared cosine distance and denote the setting by learned-cos2. The batch size is set to 1000 samples to allow stable covariance estimates for the CCA (Section 3.2). For further stabilization, we regularize the covariance matrices (Andrew et al., 2013) by adding scaled (r = 10^-3) identity matrices to the estimates Σ̂_xx, Σ̂_yy and T (Section 2.1). The variants based on differentiable CCA are additionally regularized by L2 weight decay. No dropout is used in these settings as it harmed optimization in our experiments. When optimizing with the TNO, we follow Yan & Mikolajczyk (2015) and use dropout (p = 0.5) after the first two dense layers of the text network. In Table 4 in Appendix A we provide the optimization settings for all configurations in detail, found using a grid search optimizing MAP on the validation set.

4.3 EXPERIMENTAL RESULTS ON CROSS-MODALITY RETRIEVAL

Table 2: Cross-modality retrieval results on Flickr30k. "E2E-DCCA" is taken from Yan & Mikolajczyk (2015), all other results are our own. Methods marked with "*" re-estimate projection matrices from a larger batch than used during training (10,000 training examples), see Section 3.2.

Protocol: pooled
                Image-to-Text                Text-to-Image
Method          R@1   R@5   R@10  MR         R@1   R@5   R@10  MR
E2E-DCCA        27.9  56.9  68.2  4          26.8  52.9  66.9  4
TNO*            29.9  57.9  67.9  4          21.8  48.1  64.0  6
learned-cos2     9.0  23.3  32.8  28          8.5  23.3  32.8  26
CCAL-l2         18.2  42.0  53.6  9          17.7  42.2  53.2  9
CCAL-cos        28.9  57.5  69.1  4          25.1  53.1  66.4  5
CCAL-cos2       30.7  58.8  70.1  4          28.0  56.2  68.3  4
CCAL-cos2*      34.1  60.0  70.6  3.5        29.2  58.3  69.7  4

Protocol: 5 captions
Method          R@1   R@5   R@10  MR         R@1   R@5   R@10  MR
E2E-DCCA        16.7  39.3  52.9  8          12.6  31.0  43.0  15
TNO*            17.5  39.3  51.4  10         13.4  31.7  41.3  19
CCAL-cos2       21.2  44.4  55.8  8          14.9  35.9  47.5  12
CCAL-cos2*      20.6  45.9  57.2  7          15.6  37.0  49.4  11

Table 2 lists our results on Flickr30k. Along with our experiments, we also show the results reported in (Yan & Mikolajczyk, 2015) as a reference (E2E-DCCA). However, a direct comparison to our results may not be fair: E2E-DCCA uses a different ImageNet-pretrained network for the image representation, and finetunes this network, while we keep it fixed (as we are only interested in comparing differentiable CCA to alternatives, not in obtaining the best possible results).
Our TNO results use the same objective as E2E-DCCA, but with our network architecture, permitting a direct comparison.

Table 3: Cross-modality retrieval results on IAPR TC-12.

                Image-to-Text                Text-to-Image
Method          R@1   R@5   MAP   MR         R@1   R@5   MAP   MR
E2E-DCCA        30.2  57.0  0.426 -          29.5  60.0  0.415 -
TNO*            30.0  56.7  0.424 4          28.0  55.4  0.410 5
CCAL-cos2*      31.1  58.4  0.439 4          26.8  55.1  0.403 4

Figure 2: Comparison of the TNO and CCAL-cos2 based on the total amount of canonical correlation (sum over singular values d) as well as the cosine distance between corresponding samples. (a) Evolution of correlation (train) and cosine distance (validation) over training epochs; (b) MAP over training epochs (train and validation); (c) individual correlation coefficients.

When comparing the performance of our networks, we observe a gain both for image-to-text and text-to-image retrieval when training with the CCAL-cos2 objective compared to TNO (e.g., R@1 of 34.1 compared to 29.9 under protocol pooled). This indicates that training a network directly on the objective used for retrieval (using differentiable CCA) is a reasonable design choice. A closer look at the results also reveals that the squared cosine distance is superior compared to the remaining objectives. We further observe that the randomly initialized projection matrices learned entirely by SGD (learned-cos2) show poor performance compared to their CCA counterpart (even though in theory, they could converge to exactly the same solution). This suggests that exploiting the beneficial properties of the CCA projections directly within a network during training is a powerful tool, supporting optimization of related objectives. CCAL-l2 for example performs poorer than the variants including cosine losses, but still better than the version with learned weights. On protocol 5 captions, we only report the best results (CCAL-cos2) along with the TNO and observe similar
For easier comparison, we re-train both networks with a reduced projection. dimensionality of h = 64 - otherwise, the TNO takes much longer to converge than the CCA layer. This results in slightly decreased performance for both, but the relative tendences are preserved\nFigure 2a shows the evolution of the mean correlation (mean over singular values with maximum 1.0) on the training set during optimization. Allong with the correlation, we also plot the average. cosine distance between corresponding pairs on the validation set. As expected, for the TNO we. observe a continous decrease of cosine distance when the correlation increases. Interestingly, this is not the case for CCAL-cos2. The result suggests that the network found a way of minimizing. the cosine distance other than by increasing correlation between the representations - the latter even. decreases after a few training epochs. In Figure 2b, we plot the corresponding evolution of MAP. on the training and validation set, confirming that the decreased cosine distance indeed also leads to improved retrieval performance. Finally, in Figure 2c we compare the individual correlation coefficients (magnitudes of CCA singular values on the training set) of both representations after the. last training epoch. This details the observation in Figure 2a: not only the total correlation, but also the individual correlation coefficients are considerably higher when training with TNO, even though the retrieval performance is lower."}, {"section_index": "12", "section_name": "5 CONCLUSION", "section_text": "We presented a fully differentiable version of Canonical Correlation Analysis which enables us. to back-propagate errors directly through the computation of CCA. As this requires to establish. gradient flow through CCA, we formulate it to allow easy computation of the partial derivatives. ?A and B* of CCA's projection matrices A* and B* with respect to the input data x and y. Ox, y Ox, y With this formulation, we can incorporate CCA as a building block within multi-modality neural. networks that produces maximally-correlated projections of its inputs. In our experiments, we use. this building block within a cross-modality retrieval setting, optimizing a network to minimize the. cosine distance of the correlated CCA projections. Experimental results show that when using the. cosine distance for retrieval (as is common for correlated views), this is superior to optimizing a. network for maximally-correlated projections (as done in Deep CCA), or not using CCA at all. We. further observed (Section 4.4) that it is not necessarily required to have maximum correlation to. achieve a high retrieval performance. Finally, our differentiable CCA layer could provide a useful basis for further research, e.g., as an intermediate processing step for learning binary cross-modality. retrieval representations."}, {"section_index": "13", "section_name": "ACKNOWLEDGMENTS", "section_text": "The research reported in this paper has been supported by the Austrian Federal Ministry for Trans- port, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the Province of Upper Austria in the frame of the COMET center SCCH, as well as by the Federal Ministry for Transport, Innovation & Technology (BMVIT) and the Austrian Science Fund (FWF): TRP 307-N23. 
The Tesla K40 used for this research was donated by the NVIDIA Corporation.

REFERENCES

Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep canonical correlation analysis. In International Conference on Machine Learning (ICML), pp. 1247-1255, 2013.

Matthias Dorfer, Rainer Kelz, and Gerhard Widmer. Deep linear discriminant analysis. International Conference on Learning Representations (ICLR) (arXiv:1511.04707), 2015.

Harold Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321-377, 1936.

Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.

Jan R. Magnus. On differentiating eigenvalues and eigenvectors. Econometric Theory, 1(2):179-191, 1985.

Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L Yuille. Explain images with multimodal recurrent neural networks. arXiv preprint arXiv:1410.1090, 2014.

K.V. Mardia, J.T. Kent, and J.M. Bibby. Multivariate Analysis. Probability and Mathematical Statistics. Academic Press, 1979.

Theodore Papadopoulo and Manolis I.A. Lourakis. Estimating the Jacobian of the singular value decomposition: Theory and applications. In Proceedings of the 6th European Conference on Computer Vision (ECCV), 2000.

K. B. Petersen and M. S. Pedersen. The Matrix Cookbook, November 2012. Version 20121115.

Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2:207-218, 2014.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.

Kaiye Wang, Qiyue Yin, Wei Wang, Shu Wu, and Liang Wang. A comprehensive survey on cross-modal retrieval. arXiv preprint arXiv:1607.06215, 2016.

Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78, 2014.

APPENDIX A: OPTIMIZATION SETTINGS

The table below provides a detailed listing of the optimization strategies for all our experiments. All our configurations are of course also available in our experimental code published at (will be added).

Table 4: Details on optimization strategies for the respective networks.

Flickr30k
Objective      Optimizer  Units  lr_ini  lr-schedule          Dropout  L2      r
TNO            momentum   2048   0.05    constant             0.5      none    10^-3
CCAL           momentum   1024   0.5     x0.7 from epoch 10   none     0.002   10^-3
learned-cos2   momentum   1024   0.25    none                 none     0.002   10^-3

IAPR TC-12
Objective      Optimizer  Units  lr_ini  lr-schedule          Dropout  L2      r
TNO            adam       1024   0.001   x0.1 in epoch 30     none     0.0001  10^-3
CCAL           adam       1024   0.001   x0.1 in epoch 50     none     0.0002  10^-3

APPENDIX B: INFLUENCE OF RUNNING AVERAGE STATISTICS

Figure 3: Influence of parameter α (validation MAP over α for batch sizes 200 and 1000).

In this additional section, we investigate the influence of the weighting coefficient α when using exponential moving average estimates of the covariance matrices for CCA computation (see Section 3). A high α (close to 1.0) means that the averaged estimate of Σ_xx, Σ_yy and Σ_xy mostly depends
To assess whether and under what circumstances exponential moving averages are helpful, we run an additional experiment on the IAPR TC-12 dataset as follows: We re-train one of the models of Section 4 both with batch size 1000 and with batch size 200, varying α from 1.0 to 0.1 with a step size of 0.1 and measuring the MAP achieved on the validation set. We run each setting three times and report the average over the three runs. Figure 3 shows the results of this experiment. For batch size 1000, we draw the same conclusion as was reported in (Wang et al., 2015a;b): If the batch size is sufficiently large and representative of the entire population, learning on distribution parameters (in this case covariance matrices) is feasible, and the network performs best when trained with an α close to one. This is not the case for batch size 200. In particular, the configurations with a large α (small effective running average window) perform poorly. We conclude that a batch size of 200 is too small to obtain stable and representative covariances. However, when choosing a small α, it is still possible to train the models and achieve reasonable retrieval performance. As a practical recommendation, we suggest using large batch sizes whenever feasible with the available hardware. If the batch size needs to be reduced (e.g., for very large models and limited memory), using small α values still allows for training canonically correlated retrieval networks. For this work, we use a batch size of 1000 and fix α = 1, disabling moving averages."}]
HkuVu3ige | [{"section_index": "0", "section_name": "ON ORTHOGONALITY AND LEARNING RECURRENT NETWORKS WITH LONG TERM DEPENDENCIES", "section_text": "Eugene Vorontsov 1,3, Chris Pal 1,2

It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Both expansivity and contractivity of linear transformations can also be limited by more tightly bounding their spectra. By limiting the transformations to be orthogonal, their singular spectra are limited to unitary gain, causing the transformations to be norm-preserving. Le et al. (2015) and Henaff et al. (2016) have respectively shown that identity initialization and orthogonal initialization can be beneficial. Arjovsky et al. (2015) have gone beyond initialization, building unitary recurrent neural network (RNN) models with transformations that are unitary by construction, which they achieved by composing multiple basic unitary transformations. The resulting transformations, for some n-dimensional input, cover only some subset of possible n x n unitary matrices but appear to perform well on simple tasks and have the benefit of having low complexity in memory and computation.

The entire set of possible unitary or orthogonal parameterizations forms the Stiefel manifold. At a much higher computational cost, gradient descent optimization directly along this manifold can be done via geodesic steps (Nishimori, 2005; Tagare, 2011). Recent work (Wisdom et al., 2016) has proposed the optimization of unitary matrices along the Stiefel manifold using geodesic gradient descent. To produce a full-capacity parameterization for unitary matrices they use some insights from Tagare (2011), combining the use of a canonical inner product and Cayley transformations. Their experimental work indicates that full-capacity unitary RNN models can solve the copy memory problem whereas both LSTM networks and restricted-capacity unitary RNN models having similar complexity appear unable to solve the task for a longer sequence length (T = 2000)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The depth of deep neural networks confers representational power, but also makes model optimization more challenging. Training deep networks with gradient descent based methods is known to be difficult as a consequence of the vanishing and exploding gradient problem (Hochreiter & Schmidhuber, 1997). Typically, exploding gradients are avoided by clipping large gradients (Pascanu et al., 2013) or introducing an L2 or L1 weight norm penalty.
The latter has the effect of bounding the spectral radius of the linear transformations, thus limiting the maximal gain across the transformation. Krueger & Memisevic (2015) attempt to stabilize the norm of propagating signals directly by penalizing differences in successive norm pairs in the forward pass, and Pascanu et al. (2013) propose to penalize successive gradient norm pairs in the backward pass. These regularizers affect the network parameterization with respect to the data instead of penalizing weights directly.

In contrast, here we explore the optimization of real valued matrices within a configurable margin about the Stiefel manifold. We suspect that a strong constraint of orthogonality limits the model's representational power, hindering its performance, and may make optimization more difficult. We explore this hypothesis empirically by employing a factorization technique that allows us to limit the degree of deviation from the Stiefel manifold. While we use geodesic gradient descent, we simultaneously update the singular spectra of our matrices along Euclidean steps, allowing optimization to step away from the manifold while still curving about it.

The issue of vanishing and exploding gradients as it pertains to the parameterization of neural networks can be illuminated by looking at the gradient back-propagation chain through a network whose pre-activations are given by

a_i(h_{i-1}) = W_i h_{i-1} + b_i, \quad i \in \{2, \ldots, n\}    (1)

For notational convenience, we combine parameters W_i and b_i to form an affine matrix θ. We can see that for some loss function L at layer n, the derivative with respect to parameters θ_i is:

\frac{\partial L}{\partial \theta_i} = \frac{\partial a_{n+1}}{\partial \theta_i} \frac{\partial L}{\partial a_{n+1}}    (2)

The partial derivatives for the pre-activations can be decomposed as follows:

\frac{\partial a_{i+1}}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \frac{\partial h_i}{\partial a_i} \frac{\partial a_{i+1}}{\partial h_i} = \frac{\partial a_i}{\partial \theta_i} D_i W_{i+1} \;\Rightarrow\; \frac{\partial a_{i+1}}{\partial a_i} = D_i W_{i+1}    (3)

where D_i is the Jacobian corresponding to the activation function, containing partial derivatives of the hidden units at layer i + 1 with respect to the pre-activation inputs. Typically, D is diagonal. Following the above, the gradient in equation 2 can be fully decomposed into a recursive chain of matrix products:

\frac{\partial L}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \Big( \prod_{j=i}^{n} D_j W_{j+1} \Big) \frac{\partial L}{\partial a_{n+1}}    (4)

The 2-norm of \partial a_{t+1} / \partial a_t is bounded by the product of the norms of the non-linearity's Jacobian and transition matrix at time t (layer i), as follows:

\Big\| \frac{\partial a_{t+1}}{\partial a_t} \Big\| \le \|D_t\| \, \|W_t\| \le \lambda_{D_t} \lambda_{W_t} = \eta_t, \quad \lambda_{D_t}, \lambda_{W_t} \in \mathbb{R}    (5)

where λ_{D_t} and λ_{W_t} are the largest singular values of the non-linearity's Jacobian D_t and the transition matrix W_t. In RNNs, W is shared across time and can be simply denoted as W.

Equation 5 shows that the gradient can grow or shrink at each layer depending on the gain of each layer's linear transformation W and the gain of the Jacobian D. The gain caused by each layer is magnified across all time steps or layers. It is easy to have extreme amplification in a recurrent neural network where W is shared across time steps and a non-unitary gain in W is amplified exponentially. The phenomena of extreme growth or contraction of the gradient across time steps or layers are known as the exploding and the vanishing gradient problems, respectively. It is sufficient for RNNs to have η_t ≤ 1 at each time t to enable the possibility of vanishing gradients, typically for some large number of time steps T. The rate at which a gradient (or forward signal) vanishes depends on both the parameterization of the model and on the input data. The parameterization may be conditioned by placing appropriate constraints on W. It is worth keeping in mind that the Jacobian D is typically contractive (thus tending to be norm-reducing) and is also data-dependent, whereas W can vary from being contractive to norm-preserving, to expansive, and applies the same gain on the forward signal as on the back-propagated gradient signal.

Vanishing and exploding gradients can be controlled to a large extent by controlling the maximum and minimum gain of W. The maximum gain of a matrix W is given by the spectral norm, which is given by

\|W\|_2 = \max_x \frac{\|Wx\|}{\|x\|}    (6)

By keeping our weight matrix W close to orthogonal, one can ensure that it is close to a norm-preserving transformation (where the spectral norm is equal to one, but the minimum gain is also one). One way to achieve this is via a simple soft constraint or regularization term of the form:

\lambda \, \|W^T W - I\|_2^2    (7)

However, it is possible to formulate a more direct parameterization or factorization for W which permits hard bounds on the amount of expansion and contraction induced by W. This can be achieved by simply parameterizing W according to its singular value decomposition, which consists of the composition of orthogonal basis matrices U and V with a diagonal spectral matrix S containing the singular values, which are real and positive by definition. We have

W = U S V^T    (8)

Since the spectral norm or maximum gain of a matrix is equal to its largest singular value, this decomposition allows us to control the maximum gain or expansivity of the weight matrix by controlling the magnitude of the largest singular value. Similarly, the minimum gain or contractivity of a matrix can be obtained from the minimum singular value.

We can keep the bases U and V orthogonal via geodesic gradient descent along the set of weights that satisfy U^T U = I and V^T V = I respectively. The submanifolds that satisfy these constraints are called Stiefel manifolds. We discuss how this is achieved in more detail below, then discuss our construction for bounding the singular values.

During optimization, in order to maintain the orthogonality of an orthogonally-initialized matrix M, i.e. where M = U, M = V or M = W if so desired, we employ a Cayley transformation of the update step onto the Stiefel manifold of (semi-)orthogonal matrices, as in Nishimori (2005) and Tagare (2011). Given an orthogonally-initialized parameter matrix M and its Jacobian G with respect to the objective function, an update is performed as follows:

A = G M^T - M G^T, \quad M_{new} = \big(I + \tfrac{\eta}{2} A\big)^{-1} \big(I - \tfrac{\eta}{2} A\big) M    (9)

where A is a skew-symmetric matrix (that depends on the Jacobian and on the parameter matrix) which is mapped to an orthogonal matrix via a Cayley transform, and η is the learning rate.
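As a reference, a minimal NumPy sketch of one such geodesic step; the helper name, signature, and learning-rate handling are our own illustrative assumptions, not the paper's released code:

import numpy as np

def cayley_update(M, G, lr):
    """One geodesic step keeping M (semi-)orthogonal, as in equation (9).

    M  : orthogonally-initialized parameter matrix
    G  : gradient of the loss with respect to M
    lr : learning rate (eta)
    """
    n = M.shape[0]
    A = G @ M.T - M @ G.T               # skew-symmetric by construction
    I = np.eye(n)
    # (I + lr/2 A)^{-1} (I - lr/2 A) is the Cayley transform of A,
    # which maps a skew-symmetric matrix to an orthogonal matrix.
    return np.linalg.solve(I + (lr / 2) * A, (I - (lr / 2) * A) @ M)

If M starts orthogonal, it remains (semi-)orthogonal up to numerical error under this update, since the Cayley transform of a skew-symmetric matrix is orthogonal.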
While the update rule in (9) allows us to maintain an orthogonal hidden to hidden transition matrix W if desired, we are interested in exploring the effect of stepping away from the Stiefel manifold. As such, we parameterize the transition matrix W in factorized form, as a singular value decomposition with orthogonal bases U and V updated by geodesic gradient descent using the Cayley transform approach above.

If W is an orthogonal matrix, the singular values in the diagonal matrix S are all equal to one. However, in our formulation we allow these singular values to deviate from one and employ a sigmoidal parameterization to apply a hard constraint on the maximum and minimum amount of deviation. Specifically, we define a margin m around 1 within which the singular values must lie. This is achieved with the parameterization

s_i = 2m \, (\sigma(p_i) - 0.5) + 1, \quad s_i \in \{\mathrm{diag}(S)\}, \quad m \in [0, 1]    (10)

The singular values are thus restricted to the range [1 - m, 1 + m] and the underlying parameters p_i are updated freely via stochastic gradient descent. Note that this parameterization strategy also has implications on the step sizes that gradient descent based optimization will take when updating the singular values; they tend to be smaller compared to models with no margin constraining their values. Specifically, a singular value's progression toward a margin is slowed the closer it is to the margin. The sigmoidal parameterization can also impart another effect on the step size along the spectrum which needs to be accounted for. Considering (10), the gradient backpropagation of some loss L toward parameters p_i is found as

\frac{dL}{dp_i} = \frac{dL}{ds_i} \frac{ds_i}{dp_i} = \frac{dL}{ds_i} \frac{d\sigma(p_i)}{dp_i} \, 2m    (11)

From (11), it can be seen that the magnitude of the update step for p_i is scaled by the margin hyperparameter m. This means for example that for margins less than one, the effective learning rate for the spectrum is reduced in proportion to the margin. Consequently, we adjust the learning rate along the spectrum to be independent of the margin by renormalizing it by 2m.

This margin formulation both guarantees that singular values lie within a well defined range and slows deviation from orthogonality. Alternatively, one could enforce the orthogonality of U and V and impose a regularization term corresponding to a mean one Gaussian prior on these singular values. This encourages the weight matrix W to be norm preserving with a controllable strength equivalent to the variance of the Gaussian. We also explore this approach further below.
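To make the margin construction concrete, the following is a minimal NumPy sketch of assembling the factorized transition matrix with the spectral margin of (10); the function names are illustrative, not the paper's released code:

import numpy as np

def sigmoid(p):
    return 1.0 / (1.0 + np.exp(-p))

def spectral_margin_weights(U, V, p, m):
    """Build W = U S V^T with singular values confined to [1 - m, 1 + m].

    U, V : orthogonal bases, kept on the Stiefel manifold by geodesic steps
    p    : unconstrained spectral parameters, updated by plain SGD
    m    : spectral margin in [0, 1]
    """
    s = 2.0 * m * (sigmoid(p) - 0.5) + 1.0   # equation (10)
    return (U * s) @ V.T                      # same as U @ np.diag(s) @ V.T

In training, the effective learning rate along the spectrum would additionally be renormalized by 2m, following (11).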
In this section, we explore hard and soft orthogonality constraints on factorized weight matrices for recurrent neural network hidden to hidden transitions. With hard orthogonality constraints on U and V, we investigate the effect of widening the spectral margin or bounds on convergence and performance. Loosening these bounds allows increasingly larger margins within which the transition matrix W can deviate from orthogonality. We confirm that orthogonal initialization is useful as noted in Henaff et al. (2016), and we show that although strict orthogonality guarantees stable gradient norm, loosening orthogonality constraints can increase the rate of gradient descent convergence. We begin our analyses on tasks that are designed to stress memory: a sequence copying task and a basic addition task (Hochreiter & Schmidhuber, 1997). We then move on to tasks on real data that require models to capture long-range dependencies: digit classification based on sequential and permuted MNIST vectors (Le et al., 2015; LeCun et al., 1998). Finally, we look at a basic language modeling task using the Penn Treebank dataset (Marcus et al., 1993).

The copy and adding tasks, introduced by Hochreiter & Schmidhuber (1997), are synthetic benchmarks with pathologically hard long distance dependencies that require long-term memory in models. The copy task consists of an input sequence that must be remembered by the network, followed by a series of blank inputs terminated by a delimiter that denotes the point at which the network must begin to output a copy of the initial sequence. We use an input sequence of T + 20 elements that begins with a sub-sequence of 10 elements to copy, each containing a symbol a_i ∈ {a_1, ..., a_p} out of p = 8 possible symbols. This sub-sequence is followed by T - 1 elements of the blank category a_0, which is terminated at step T by a delimiter symbol a_{p+1} and 10 more elements of the blank category. The network must learn to remember the initial 10 element sequence for T time steps and output it after receiving the delimiter symbol.

The goal of the adding task is to add two numbers together after a long delay. Each number is randomly picked at a unique position in a sequence of length T. The sequence is composed of T values sampled from a uniform distribution in the range [0, 1), with each value paired with an indicator value that identifies the value as one of the two numbers to remember (marked 1) or as a value to ignore (marked 0). The two numbers are positioned randomly in the sequence, the first in the range [0, T/2 - 1] and the second in the range [T/2, T - 1], where 0 marks the first element. The network must learn to identify and remember the two numbers and output their sum.
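As a concrete reference for how such data can be constructed, here is a minimal sketch of the copy-task generation just described (NumPy; the variable names are ours):

import numpy as np

def make_copy_example(T, p=8, copy_len=10):
    """Input of length T + 2*copy_len: copy_len symbols from {1..p},
    T-1 blanks (category 0), a delimiter (p+1), then copy_len more blanks.
    The target reproduces the symbols over the final copy_len steps."""
    seq = np.random.randint(1, p + 1, size=copy_len)
    x = np.zeros(T + 2 * copy_len, dtype=np.int64)
    x[:copy_len] = seq
    x[copy_len + T - 1] = p + 1          # delimiter at step T
    y = np.zeros_like(x)
    y[-copy_len:] = seq                  # network must output the copy here
    return x, y

The adding task can be generated analogously by pairing T uniform samples with a two-hot indicator channel marking the positions to be summed.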
In the sequential MNIST task from Le et al. (2015), MNIST digits are flattened into vectors that can be traversed sequentially by a recurrent neural network. The goal is to classify the digit based on the sequential input of pixels. The simple variant of this task uses a simple flattening of the image matrices; the harder variant includes a random permutation of the pixels in the input vector that is determined once for an experiment. The latter formulation introduces longer distance dependencies between pixels that must be interpreted by the classification model.

The English Penn Treebank (PTB) dataset from Marcus et al. (1993) is an annotated corpus of English sentences, commonly used for benchmarking language models. We employ a sequential character prediction task: given a sentence, a recurrent neural network must predict the next character at each step, from left to right. We use input sequences of variable length, with each sequence containing one sentence. We model 49 characters including lowercase letters (all strings are in lowercase), numbers, common punctuation, and an unknown character placeholder. We run our experiments on two subsets of the data: in the first, we use 23% of the data, with strings of up to 75 characters, and in the second we include over 99% of the dataset, picking strings with up to 300 characters.

In this section, we experimentally explore the effect of loosening hard orthogonality constraints through loosening the spectral margin defined above for the hidden to hidden transition matrix.

In all experiments, we employed RMSprop (Tieleman & Hinton, 2012) when not using geodesic gradient descent. We used minibatches of size 50 and, for generated data (the copy and adding tasks), we assumed an epoch length of 100 minibatches. We cautiously introduced gradient clipping at magnitude 100 (unless stated otherwise) in all of our RNN experiments, although it may not be required, and we consistently applied a small weight decay of 0.0001. Unless otherwise specified, we trained all simple recurrent neural networks with the hidden to hidden matrix factorization as in (8), using geodesic gradient descent on the bases (learning rate 10^-6) and RMSprop on the other parameters (learning rate 0.0001), using a tanh transition nonlinearity, and clipping gradients of magnitude 10. The neural network code was built on the Theano framework (Theano Development Team, 2016). When parameterizing a matrix in factorized form, we apply the weight decay on the composite matrix rather than on the factors in order to be consistent across experiments. For MNIST and PTB, test set metrics were computed based on the parameterization that gave the best validation set accuracy.

For different sequence lengths T of the copy and adding tasks, we trained a factorized RNN with 128 hidden units and various spectral margins m. For the copy task, we used Elman networks without a transition non-linearity as in Henaff et al. (2016). We discuss our investigations into the use of a non-linearity on the copy task in the Appendix.

As shown in Figure 1, we see an increase in the rate of convergence as we increase the spectral margin. This observation generally holds across the tested sequence lengths (T = 200, T = 500, T = 1000, T = 10000); however, large spectral margins hinder convergence on extremely long sequence lengths. At sequence length T = 10000, parameterizations with spectral margins larger than 0.001 converge slower than when using a margin of 0.001. In addition, the experiment without a margin failed to converge on the longest sequence length. This follows the expected pattern where stepping away from the Stiefel manifold may help with gradient descent optimization, but loosening orthogonality constraints can reduce the stability of signal propagation through the network.

For the adding task, we trained a factorized RNN on T = 1000 length sequences, using a ReLU activation function on the hidden to hidden transition matrix. The mean squared error (MSE) is shown for different spectral margins in Figure 5 in the Appendix. Testing spectral margins m = 0, m = 1, m = 10, m = 100, and no margin, we find that the models with the purely orthogonal (m = 0) and the unconstrained (no margin) transition matrices failed to begin converging beyond baseline MSE within 2000 epochs.

Figure 1: Accuracy curves on the copy task for sequence lengths of (from left to right) T=200, T=500, T=1000, T=10000 given different spectral margins. Convergence speed increases with margin size; however, large margin sizes are ineffective at longer sequence lengths (T=10000, right).

Table 1: Ordered sequential MNIST classification with different margin sizes and an LSTM."}, {"section_index": "3", "section_name": "3.1.2 PERFORMANCE ON REAL DATA", "section_text": "Having confirmed that an orthogonality constraint can negatively impact convergence rate, we seek to investigate the effect on model performance for tasks on real data. We show the results of experiments on permuted sequential MNIST in Table 2 and ordered sequential MNIST in Table 1. The loss curves are shown in Figure 6 in the Appendix and reveal an increased convergence rate for larger spectral margins. We trained the factorized RNN models with 128 hidden units for 120 epochs.
We also trained an LSTM with 128 hidden units on both tasks for 150 epochs, configured with peephole connections, orthogonally initialized (and forget gate bias initialized to one), and trained with RMSprop (learning rate 0.0001, clipping gradients of magnitude 1).

We show the results of experiments on PTB character prediction, in terms of bits per character (bpc) and prediction accuracy, for a subset of short sequences (up to 75 characters; 23% of data) in Table 3 and for a subset of long sequences (up to 300 characters; 99% of data) in Table 4. We trained factorized RNN models with 512 hidden units for 200 epochs with geodesic gradient descent on the bases (learning rate 10^-6) and RMSprop on the other parameters (learning rate 0.001), using a tanh transition nonlinearity, and clipping gradients of magnitude 30.

Table 3: Character prediction on PTB sentences of up to 75 characters, using different margins

margin  initialization  bpc   accuracy
0       orthogonal      2.16  55.31
0.01    orthogonal      2.16  55.33
0.1     orthogonal      2.12  55.37
1       orthogonal      2.06  57.07
100     orthogonal      2.04  57.51
none    orthogonal      2.06  57.38
none    Glorot normal   2.08  57.37
none    identity        2.25  53.83

Table 2: Permuted sequential MNIST classification with different margin sizes and an LSTM.

Table 4: Character prediction on PTB sentences of up to 300 characters, using different margins

margin  initialization  bpc   accuracy
0       orthogonal      2.20  54.88
0.01    orthogonal      2.20  54.83
0.1     orthogonal      2.24  54.10
1       orthogonal      2.36  51.12
100     orthogonal      2.36  51.20
none    orthogonal      2.34  51.30
none    Glorot normal   2.34  51.04
none    identity        2.68  45.35

Interestingly, for both the ordered and permuted sequential MNIST tasks, models with a non-zero margin significantly outperform those that are constrained to have purely orthogonal transition matrices (margin of zero). The best results on both the ordered and permuted sequential MNIST tasks were yielded by models with a spectral margin of 0.1, at 94.10% accuracy and 91.44% accuracy, respectively. An LSTM outperformed the RNNs in both tasks; nevertheless, RNNs with hidden to hidden transitions initialized as orthogonal matrices performed admirably without a memory component and without all of the additional parameters associated with gates. Indeed, orthogonally initialized RNNs performed almost on par with the LSTM in the permuted sequential MNIST task, which presents longer distance dependencies than the ordered task. Although the optimal margin appears to be 0.1, RNNs with large margins perform almost identically to an RNN without a margin, as long as the transition matrix is initialized as orthogonal. On these tasks, orthogonal initialization appears to significantly outperform Glorot normal initialization (Glorot & Bengio, 2010) or initializing the matrix as identity. It is interesting to note that for the MNIST tasks, orthogonal initialization appears useful while orthogonality constraints appear mainly detrimental. This suggests that while orthogonality helps early training by stabilizing gradient flow across many time steps, orthogonality constraints may need to be loosened on some tasks so as not to over-constrain the model's representational ability.

Curiously, larger margins and even models without sigmoidal constraints on the spectrum
(no margin) performed well as long as they were initialized to be orthogonal, suggesting that evolution away from orthogonality is not a serious problem on MNIST. It is not surprising that orthogonality is useful for the MNIST tasks since they depend on long distance signal propagation with a single output at the end of the input sequence. On the other hand, character prediction with PTB produces an output at every time step. Constraining deviation from orthogonality proved detrimental for short sentences (Table 3) and beneficial when long sentences were included (Table 4). Furthermore, Glorot normal initialization did not perform worse than orthogonal initialization for PTB. Since an output is generated for every character in a sentence, short distance signal propagation is possible. Thus it is possible that the RNN is first learning very local dependencies between neighbouring characters and that, given enough context, constraining deviation from orthogonality can help force the network to learn longer distance dependencies."}, {"section_index": "4", "section_name": "3.1.3 SPECTRAL AND GRADIENT EVOLUTION", "section_text": "It is interesting to note that even long sequence lengths (T=1000) in the copy task can be solved efficiently with rather large margins on the spectrum. In Figure 2 we look at the gradient propagation of the loss from the last time step in the network with respect to the hidden activations. We can see that for a purely orthogonal parameterization of the transition matrix (when the margin is zero), the gradient norm is preserved across time steps, as expected. We further observe that with increasing margin size, the number of update steps over which this norm preservation survives decreases, though surprisingly not as quickly as expected.

Figure 2: The norm of the gradient of the loss from the last time step with respect to the hidden units at a given time step for a length 220 RNN over 1000 update iterations for different margins. Iterations are along the abscissa and time steps are denoted along the ordinate. The first column margins are: 0, 0.001, 0.01. The second column margins are: 0.1, 1, no margin. Gradient norms are normalized across the time dimension.

Although the deviation of singular values from one should be slowed by the sigmoidal parameterizations, even parameterizations without a sigmoid (no margin) can be effectively trained for all but the longest sequence lengths. This suggests that the spectrum is not deviating far from orthogonality and that inputs to the hidden to hidden transitions are mostly not aligned along the dimensions of greatest expansion or contraction. We evaluated the spread of the spectrum in all of our experiments and found that indeed, singular values tend to stay well within their prescribed bounds and only reach the margin when using a very large learning rate that does not permit convergence. Furthermore, when transition matrices are initialized as orthogonal, singular values remain near one throughout training even without a sigmoidal margin for tasks that require long term memory (copy, adding, sequential MNIST).
On the other hand, singular value distributions tend to drift away from one for PTB character prediction, which may help explain why enforcing an orthogonality constraint can be helpful for this task when modeling long sequences. Interestingly, singular values spread out less for longer sequence lengths (nevertheless, the T=10000 copy task could not be solved with no sigmoid on the spectrum).

We visualize the spread of singular values for different model parameterizations on the permuted sequential MNIST task in Figure 3. Curiously, we find that the distribution of singular values tends to shift upward to a mean of approximately 1.05 on both the ordered and permuted sequential MNIST tasks. We note that in those experiments, a tanh transition nonlinearity was used, which is contractive in both the forward signal pass and the gradient backward pass. An upward shift in the distribution of singular values of the transition matrix would help compensate for that contraction. Indeed, Saxe et al. (2013) describe this as a possibly good regime for learning in deep neural networks. That the model appears to evolve toward this regime suggests that deviating from it may incur a cost. This is interesting because the cost function cannot take into account numerical issues such as vanishing or exploding gradients (or forward signals); we do not know what could make this deviation costly. That the transition matrix may be compensating for the contraction of the tanh is supported by further experiments: applying a 1.05 pre-activation gain appears to allow a model with a margin of 0 to nearly match the top performance reached on both of the MNIST tasks. Furthermore, when using the OPLU norm-preserving activation function (Chernodub & Nowicki, 2016), we found that orthogonally initialized models performed equally well with all margins, achieving over 90% accuracy on the permuted sequential MNIST task. Unlike orthogonally initialized models, the RNN on the bottom right of Figure 3, with Glorot normal initialized transition matrices, begins and ends with a wide singular spectrum. While there is no clear positive shift in the distribution of singular values, the mean value appears to very gradually increase for both the ordered and permuted sequential MNIST tasks. If the model is expected to positively shift singular values to compensate for the contractivity of the tanh nonlinearity, it is not doing so well in the Glorot-initialized case; however, this may be due to the inefficiency of training as a result of vanishing gradients, given that initialization.

Figure 3: Singular value evolution on the permuted sequential MNIST task for factorized RNNs with different margin sizes. Margins are, from left to right: top row: 0.001, 0.01, 0.1; bottom row: 1, no margin, no margin. The singular value distributions are summarized with the mean (green line, center) and standard deviation (green shading about mean), minimum (red, bottom) and maximum (blue, top) values.
All models are initialized with orthogonal hidden to hidden transition matrices except for the model on the bottom right, where Glorot normal initialization is used.

Having established that it may indeed be useful to step away from orthogonality, here we explore two forms of soft constraints (rather than hard bounds as above) on hidden to hidden transition matrix orthogonality. The first is a simple penalty that directly encourages a transition matrix W to be orthogonal, of the form λ ||W^T W - I||_2^2. This is similar to the orthogonality penalty introduced by Henaff et al. (2016). In the first two subfigures on the left of Figure 4, we explore the effect of weakening this form of regularization. We trained both a regular non-factorized RNN on the T = 200 copy task and a factorized RNN with orthogonal bases on the T = 500 copy task. For the regular RNN, we had to reduce the learning rate to 10^-5. Here again we see that weakening the strength of the orthogonality-encouraging penalty can increase convergence speed.

The second approach we explore replaces the sigmoidal margin parameterization with a mean one Gaussian prior on the singular values. In the two right subfigures of Figure 4, we visualize the accuracy on the length 200 copy task, using geoSGD (learning rate 10^-6) to keep U and V orthogonal, and different strengths of a Gaussian prior with mean one on the singular values. We trained these experiments with regular SGD on the spectrum and other non-orthogonal parameter matrices, using a 10^-5 learning rate. We see that priors which are too strong lead to slow convergence. Loosening the strength of the prior makes the optimization more efficient. Furthermore, we compare a direct parameterization of the spectrum (no sigmoid) in Figure 4 with a sigmoidal parameterization, using a large margin of 1. Without the sigmoidal parameterization, optimization quickly becomes unstable; on the other hand, the optimization also becomes unstable if the prior is removed completely in the sigmoidal formulation (margin 1). These results further motivate the idea that parameterizations that deviate from orthogonality may perform better than purely orthogonal ones, as long as they are sufficiently constrained to avoid instability during training."}, {"section_index": "5", "section_name": "4 CONCLUSIONS", "section_text": "We have explored a number of methods for controlling the expansivity of gradients during backpropagation based learning in RNNs through manipulating orthogonality constraints and regularization on matrices. Our experiments indicate that while orthogonal initialization may be beneficial, maintaining constraints on orthogonality can be detrimental. Indeed, moving away from hard constraints on matrix orthogonality can help improve optimization convergence rate and model performance. However, we also observe with synthetic tasks that relaxing regularization which encourages the
spectral norms of weight matrices to be close to one, or allowing bounds on the spectral norms of weight matrices to be too wide, can reverse these gains and may lead to unstable optimization."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "Figure 4: Accuracy curves on the copy task for different strengths of soft orthogonality constraints. A soft orthogonality constraint is applied to the transition matrix W for a regular RNN on T = 200 (Left) and the same is applied on a factorized RNN on T = 500 (Left center). Another constraint in the form of a mean one Gaussian prior on the singular values is applied to a factorized RNN on T = 200 (Right center); the same is applied to a factorized RNN with a sigmoidal parameterization of the spectrum, using a large margin of 1 (Right). Loosening orthogonality speeds convergence."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Artem Chernodub and Dimitri Nowicki. Norm-preserving orthogonal permutation linear unit activation functions (OPLU). arXiv preprint arXiv:1604.02313, 2016.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249-256, 2010.

Mikael Henaff, Arthur Szlam, and Yann LeCun. Orthogonal RNNs and long-memory tasks. arXiv preprint arXiv:1602.06662, 2016.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310-1318, 2013.

Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.

Hemant D Tagare. Notes on optimization on Stiefel manifolds. Technical report, Yale University, 2011.

Scott Wisdom, Thomas Powers, John R. Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. To appear in NIPS, 2016.

Figure 5: Mean squared error (MSE) curves on the adding task for different spectral margins m. For a trivial baseline solution of always outputting the same number, the expected baseline MSE is 0.167.

Figure 6: Loss curves for different factorized RNN parameterizations on the sequential MNIST task (left) and the permuted sequential MNIST task (right).
The spectral margin is denoted by m; models with no margin have singular values that are directly optimized with no constraints; Glorot refers to a factorized RNN with no margin that is initialized with Glorot normal initialization."}, {"section_index": "8", "section_name": "5.2 COPY TASK NONLINEARITY", "section_text": "We found that nonlinearities such as a rectified linear unit (ReLU) (Nair & Hinton, 2010) or hyperbolic tangent (tanh) made the copy task far more difficult to solve. Using tanh, a short sequence length (T = 100) copy task required both a soft constraint that encourages orthogonality and thousands of epochs for training. It is worth noting that in the unitary evolution recurrent neural network of Arjovsky et al. (2015), the non-linearity (referred to as the "modReLU") is actually initialized as an identity operation that is free to deviate from identity during training. Furthermore, Henaff et al. (2016) derive a solution mechanism for the copy task that drops the non-linearity from an RNN. To explore this further, we experimented with a parametric leaky ReLU activation function (PReLU) which introduces a trainable slope α for negative valued inputs x, producing f(x) = max(x, 0) + α min(x, 0) (He et al., 2015). Setting the slope to one would make the PReLU equivalent to an identity function. We experimented with clamping α to 0.5, 0.7 or 1 in a factorized RNN with a spectral margin of 0.3 and found that only the model with α = 1 solved the T = 1000 length copy task. We also experimented with a trainable slope α, initialized to 0.7, and found that it converges to 0.96, further suggesting that the optimal solution for the copy task is without a transition nonlinearity. Since the copy task is purely a memory task, one may imagine that a transition nonlinearity such as a tanh or ReLU may be detrimental to the task as it can lose information. Thus, we also tried a recent activation function that preserves information, called an orthogonal permutation linear unit (OPLU) (Chernodub & Nowicki, 2016). The OPLU preserves norm, making a fully norm-preserving RNN possible. Interestingly, this activation function allowed us to recover identical results on the copy task to those without a nonlinearity for different spectral margins.
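The OPLU is norm-preserving because it only permutes values within pairs of units; a minimal NumPy sketch (our own naming, assuming an even number of hidden units):

import numpy as np

def oplu(x):
    """Orthogonal permutation linear unit (Chernodub & Nowicki, 2016).

    Splits the units into consecutive pairs and outputs (max, min) of each
    pair; this is a data-dependent permutation and so preserves the norm.
    """
    a, b = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = np.maximum(a, b)
    out[..., 1::2] = np.minimum(a, b)
    return out

Because ||oplu(x)|| = ||x||, stacking this activation with orthogonal transitions yields a fully norm-preserving RNN, consistent with the observation that all margins performed equally well with it.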
Although the method proposed in Section 2 relies on a matrix inversion, an operation with O(n^3) complexity for an n x n matrix, the running time of an RNN factorized in such a way actually remains reasonable. This running time is summarized in Table 5 and includes all computations in the graph, together with the matrix inversion. As this method is meant to be used only for the analysis in this work, we find the running times acceptable for that purpose. Models were run on an Nvidia GTX-770 GPU and were run against the T=100 length copy task.

hidden units  SGD          geoSGD
128           21.9 ± 0.2   40.4 ± 0.1
500           46.7 ± 0.2   161.4 ± 0.2
1000          95.4 ± 0.3   711.2 ± 0.8

Table 5: Run time in seconds for 1000 iterations on a T=100 copy task of a regular RNN trained with stochastic gradient descent (SGD), compared against a factorized RNN trained with geodesic SGD on the bases (geoSGD) and regular SGD for the other parameters."}]
B1KBHtcel | [{"section_index": "0", "section_name": "HERE'S MY POINT: ARGUMENTATION MINING WITH POINTER NETWORKS", "section_text": "Peter Potash, Alexey Romanov & Anna Rumshisky

{ppotash, aromanov, arum}@cs.uml.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Computational approaches to argument mining/understanding have become very popular (Persing & Ng, 2016; Cano-Basave & He, 2016; Wei et al., 2016; Ghosh et al., 2016; Palau & Moens, 2009; Habernal & Gurevych, 2016). One important avenue in this work is to understand the structure in argumentative text (Persing & Ng, 2016; Peldszus & Stede, 2015; Stab & Gurevych, 2016; Nguyen & Litman, 2016). One fundamental assumption when working with argumentative text is the presence of Argument Components (ACs). The types of ACs are generally characterized as a claim or a premise (Govier, 2013), with premises acting as support (or possibly attack) units for claims. To model more complex structures of arguments, some annotation schemes also include a major claim AC type (Stab & Gurevych, 2016; 2014b).

Generally, the task of processing argument structure encapsulates four distinct subtasks: 1) Given a sequence of tokens that represents an entire argumentative text, determine the token subsequences that constitute non-intersecting ACs; 2) Given an AC, determine the type of AC (claim, premise, etc.); 3) Given a set/list of ACs, determine which ACs have a link, which determines the overall argument structure; 4) Given two linked ACs, determine whether the link is of a supporting or attacking relation. In this work, we focus on subtasks 2 and 3.

There are two key assumptions our work makes going forward. First, we assume subtask 1 has been completed, i.e. ACs have already been identified. Second, we follow previous work that assumes a tree structure for the linking of ACs (Palau & Moens, 2009; Cohen, 1987; Peldszus & Stede, 2015; Stab & Gurevych, 2016). Specifically, a given AC can only have a single outgoing link, but can have numerous incoming links. Furthermore, there is a 'head' component that has"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "One of the major goals in automated argumentation mining is to uncover the argument structure present in argumentative text. In order to determine this structure, one must understand how different individual components of the overall argument are linked. General consensus in this field dictates that the argument components form a hierarchy of persuasion, which manifests itself in a tree structure. This work provides the first neural network-based approach to argumentation mining, focusing on extracting links between argument components, with a secondary focus on classifying types of argument components. In order to solve this problem, we propose to use a modification of a Pointer Network architecture. A Pointer Network is appealing for this task for the following reasons: 1) It takes into account the sequential nature of argument components; 2) By construction, it enforces certain properties of the tree structure present in argument relations; 3) The hidden representations can be applied to auxiliary tasks. In order to extend the contribution of the original Pointer Network model, we construct a joint model that simultaneously attempts to learn the type of argument component, as well as continuing to predict links between argument components. The proposed model achieves state-of-the-art results on two separate evaluation corpora.
Furthermore, our results show that optimizing for both tasks, as well as adding a fully-connected layer prior to recurrent neural network input, is crucial for high performance."

First, [cloning will be beneficial for many people who are in need of organ transplants]AC1. In addition, [it shortens the healing process]AC2. Usually, [it is very rare to find an appropriate organ donor]AC3 and [by using cloning in order to raise required organs the waiting time can be shortened tremendously]AC4.

Figure 1: An example of argument structure with four ACs. The left side shows raw text that has been annotated for the presence of ACs. Squiggly and straight underlining means an AC is a claim or premise, respectively. The ACs in the text have also been annotated for links to other ACs, which is shown in the right figure. ACs 3 and 4 are premises that link to another premise, AC2. Finally, AC2 links to a claim, AC1. AC1 therefore acts as the central argumentative component.

no outgoing link (the top of the tree). Figure 1 shows an example that we will use throughout the paper to concretely explain how our approach works. First, the left side of the figure presents the raw text of a paragraph in a persuasive essay (Stab & Gurevych, 2016), with the ACs contained in square brackets. Squiggly versus straight underlining differentiates between claims and premises, respectively. The ACs have been annotated as to how the ACs are linked, and the right side of the figure reflects this structure. The argument structure with four ACs forms a tree, where AC2 has two incoming links, and AC1 acts as the head, with no outgoing links. We also specify the type of AC, with the head AC marked as claim and the remaining ACs marked as premise. Lastly, we note that the order of argument components can be a strong indicator of how components should relate: linking to the first argument component can provide a competitive baseline heuristic (Peldszus & Stede, 2015; Stab & Gurevych, 2016).

Given the task at hand, we propose a modification of a Pointer Network (PN) (Vinyals et al., 2015b). A PN is a sequence-to-sequence model that outputs a distribution over the encoding indices at each decoding timestep. The PN is a promising model for link extraction in argumentative text because it inherently possesses three important characteristics: 1) it is able to model the sequential nature of ACs; 2) it constrains ACs to have a single outgoing link, thus partly enforcing the tree structure; 3) the hidden representations learned by the model can be used for jointly predicting multiple subtasks. We also note that since a PN is a type of sequence-to-sequence model (Sutskever et al., 2014), it allows the entire sequence to be seen before making predictions. This is important because if the problem were to be approached as standard sequence modeling (Graves & Schmidhuber, 2009; Robinson, 1994), making predictions at each forward timestep, it would only allow links to ACs that have already been seen.
This is equivalent to only allowing backward links. We note that we do test a simplified model that only uses hidden states from an encoding network to make predictions, as opposed to the sequence-to-sequence architecture present in the PN (see Section 5).

PNs were originally proposed to allow a variable length decoding sequence (Vinyals et al., 2015b). The PN we implement differs from the original model in that we decode for the same number of timesteps as there are input components. We also propose a joint PN for both extracting links between ACs and predicting the type of AC. The model uses the hidden representation of ACs produced during the encoding step (see Section 3.4). Aside from the partial assumption of tree structure in the argumentative text, our models do not make any additional assumptions about the AC types or connectivity, unlike the work of Peldszus (2014). We evaluate our models on the corpora of Stab & Gurevych (2016) and Peldszus (2014), and compare our results with the results of the aforementioned authors.

Recent work in argumentation mining offers data-driven approaches for the task of predicting links between ACs. Stab & Gurevych (2014b) approach the task as a binary classification problem. The authors train an SVM with various semantic and structural features. Peldszus & Stede (2015) have also used classification models for predicting the presence of links. Various authors have also proposed to jointly model link extraction with other subtasks from the argumentation mining pipeline, using either an Integer Linear Programming (ILP) framework (Persing & Ng, 2016; Stab & Gurevych, 2016) or directly feeding previous subtask predictions into another model. The former joint approaches are evaluated on annotated corpora of persuasive essays (Stab & Gurevych, 2014a; 2016), and the latter on a corpus of microtexts (Peldszus, 2014). The ILP framework is effective in enforcing a tree structure between ACs when predictions are made from otherwise naive base classifiers.

Unrelated to argumentation mining specifically, recurrent neural networks have previously been proposed to model tree/graph structures in a linear manner. Vinyals et al. (2015c) use a sequence-to-sequence model for the task of syntactic parsing. The authors linearize input parse graphs using a depth-first search, allowing them to be consumed as sequences, achieving state-of-the-art results on several syntactic parsing datasets. Bowman et al. (2015) experiment on an artificial entailment dataset that is specifically engineered to capture recursive logic (Bowman et al., 2014). The text is annotated with brackets, in an original attempt to provide easy input into a recursive neural network. However, standard recurrent neural networks can take in complete sentence sequences, brackets included, and perform competitively with a recursive neural network.

In this section we will describe how we use a PN for the problem of extracting links between ACs. We begin by giving a general description of the PN model."}, {"section_index": "3", "section_name": "3.1 POINTER NETWORK", "section_text": "A PN is a sequence-to-sequence model (Sutskever et al., 2014) with attention (Bahdanau et al., 2014) that was proposed to handle decoding sequences over the encoding inputs, and can be extended to arbitrary sets (Vinyals et al., 2015a).
The original motivation for a pointer network was to allow networks to learn solutions to algorithmic problems, such as the traveling salesperson and convex hull problems, where the solution is a sequence over candidate points. The PN model is trained on input/output sequence pairs (E, D), where E is the source and D is the target (our choice of E, D is meant to represent the encoding, decoding steps of the sequence-to-sequence model). Given model parameters Θ, we apply the chain rule to determine the probability of a single training example:

p(D|E; \Theta) = \prod_{i=1}^{m(E)} p(D_i \mid D_1, \ldots, D_{i-1}, E; \Theta)    (1)

where the function m signifies that the number of decoding timesteps is a function of each individual training example. We will discuss shortly why we need to modify the original definition of m for our application. By taking the log-likelihood of Equation 1, we arrive at the optimization objective:

\Theta^* = \arg\max_{\Theta} \sum_{E,D} \log p(D \mid E; \Theta)    (2)

which is the sum over all training example pairs.

The PN uses Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) for sequential modeling, which produces a hidden layer h at each encoding/decoding timestep. In practice, the PN has two separate LSTMs, one for encoding and one for decoding. Thus, we refer to encoding hidden layers as e, and decoding hidden layers as d.

The PN uses a form of content-based attention (Bahdanau et al., 2014) to allow the model to produce a distribution over input elements. This can also be thought of as a distribution over input indices, wherein a decoding step 'points' to the input. Formally, given encoding hidden states (e_1, ..., e_n), the model calculates p(D_i | D_1, ..., D_{i-1}, E) as follows:

u_j^i = v^T \tanh(W_1 e_j + W_2 d_i)    (3)

p(D_i \mid D_1, \ldots, D_{i-1}, E) = \mathrm{softmax}(u^i)    (4)

where matrices W_1, W_2 and vector v are parameters of the model (along with the LSTM parameters used for encoding and decoding). In Equation 3, prior to taking the dot product with v, the resulting transformation can be thought of as creating a joint, hidden representation of inputs i and j. Vector u^i in Equation 4 is of length n, and index j corresponds to input element j. Therefore, by taking the softmax of u^i, we are able to create a distribution over the input.

Figure 2: Applying a Pointer Network to the example paragraph in Figure 1, with LSTMs unrolled over time.

In order to make the PN applicable to the problem of link extraction, we explicitly set the number of decoding timesteps to be equal to the number of input components. Using notation from Equation 1, the decoding sequence length for an encoding sequence E is simply m(E) = |{C_1, ..., C_n}|, which is trivially equal to n. By constructing the decoding sequence in this manner, we can associate decoding timestep i with AC C_i.

From Equation 4, decoding timestep D_i will output a distribution over input indices. The result of this distribution will indicate to which AC component C_i links. Recall there is a possibility that an AC has no outgoing link, such as if it's the root of the tree. In this case, we state that if AC C_i does not have an outgoing link, decoding step D_i will output index i. Conversely, if D_i outputs index j, such that j is not equal to i, this implies that C_i has an outgoing link to C_j. For the argument structure in Figure 1, the corresponding decoding sequence is (1, 1, 2, 2). The topology of this decoding sequence is illustrated in Figure 2. Note how C_1 points to itself since it has no outgoing link.

Finally, we note that we modify the PN structure to have a Bidirectional LSTM as the encoder. Thus e_i is the concatenation of forward and backward hidden states, produced by two separate LSTMs. The decoder remains a standard forward LSTM.
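To make the decoding-target construction concrete, the following minimal sketch converts per-AC link annotations into the PN decoding targets described above (the helper name is ours; indices are 0-based internally):

def link_targets(links):
    """links[i] is the index of the AC that AC i points to, or None for
    no outgoing link (e.g. the head of the tree). Returns decoding targets
    where an AC with no outgoing link points to its own index."""
    return [i if tgt is None else tgt for i, tgt in enumerate(links)]

# Figure 1: AC1 is the head, AC2 links to AC1, AC3 and AC4 link to AC2.
print(link_targets([None, 0, 1, 1]))   # -> [0, 0, 1, 1], i.e. (1, 1, 2, 2) 1-indexed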
At each timestep of the decoder, the network takes in the representation of an AC. Each AC is itself a sequence of tokens, similar to the recently proposed Question-Answering dataset (Weston et al., 2015). We follow the work of Stab & Gurevych (2016) and focus on three different types of features to represent our ACs: 1) Bag-of-Words of the AC; 2) Embedding representation based on GloVe embeddings (Pennington et al., 2014); 3) Structural features: whether or not the AC is the first AC in a paragraph, and whether the AC is in an opening, body, or closing paragraph. See Section 6 for an ablation study of the proposed features.

A given piece of text has a set of ACs, which occur in a specific order in the text, (C_1, ..., C_n). Therefore, at encoding timestep i, the model is fed a representation of C_i. Since the representation is large and sparse (see Section 3.3 for details on how we represent ACs), we add a fully-connected layer before the LSTM input. Given a representation R_i for AC C_i, the LSTM input A_i becomes:

A_i = \sigma(W_{rep} R_i + b_{rep})    (5)

where W_{rep}, b_{rep} in turn become model parameters, and σ is the sigmoid function¹ (similarly, the decoding network applies a fully-connected layer with sigmoid activation to its inputs, see Figure 3). At encoding step i, the encoding LSTM produces hidden layer e_i, which can be thought of as a hidden representation of AC C_i.

¹ We also experimented with relu and elu activations, but found sigmoid to yield the best performance.

Figure 3: Architecture of the joint model applied to the example in Figure 1.

Up to this point, we focused on the task of extracting links between ACs. However, recent work has shown that joint models that simultaneously try to complete multiple aspects of the subtask pipeline outperform models that focus on a single subtask (Persing & Ng, 2016; Stab & Gurevych, 2014b; Peldszus & Stede, 2015). Therefore, we will modify the architecture we proposed in Section 3 so that it would allow us to perform AC classification (Kwon et al., 2007; Rooney et al., 2012) together with link prediction. Knowledge of an individual subtask's predictions can aid in other subtasks. For example, claims do not have an outgoing link, so knowing the type of AC can aid in the link prediction task. This can be seen as a way of regularizing the hidden representations from the encoding component (Che et al., 2015).

Predicting AC type is a straightforward classification task: given AC C_i, we need to predict whether it is a claim or premise. Some annotation schemes also include the class major claim (Stab & Gurevych, 2014a), which means this can be a multi-class classification task. For encoding timestep i, the model creates hidden representation e_i.
Up to this point, we have focused on the task of extracting links between ACs. However, recent work has shown that joint models that simultaneously address multiple subtasks of the pipeline outperform models that focus on a single subtask (Persing & Ng, 2016; Stab & Gurevych, 2014b; Peldszus & Stede, 2015). Therefore, we modify the architecture we proposed in Section 3 so that it also performs AC classification (Kwon et al., 2007; Rooney et al., 2012) together with link prediction. Knowledge of an individual subtask's predictions can aid other subtasks. For example, claims do not have an outgoing link, so knowing the type of an AC can aid the link prediction task. This can be seen as a way of regularizing the hidden representations from the encoding component (Che et al., 2015).

Predicting AC type is a straightforward classification task: given AC C_i, we need to predict whether it is a claim or premise. Some annotation schemes also include the class major claim (Stab & Gurevych, 2014a), which means this can be a multi-class classification task. For encoding timestep i, the model creates hidden representation e_i, which can be thought of as a representation of AC C_i. Therefore, our joint model simply passes this representation through a fully connected layer:

z_i = W_cls e_i + b_cls

Consequently, the probability of predicting the component type at timestep i is defined as:

p(C_i) = p(E_i | E_1, ..., E_i; Θ)

p(E_i | E_1, ..., E_i; Θ) = softmax(z_i)

Finally, combining this new prediction task with Equation 2, we arrive at the new training objective:

Θ* = argmax_Θ [ α Σ_{E,D} log p(D | E; Θ) + (1 − α) Σ_E log p(E | Θ) ]

which simply sums the costs of the individual prediction tasks; the second summation is the cost for the new task of predicting argument component type. α ∈ [0, 1] is a hyperparameter that specifies how we weight the two prediction tasks in our cost function. The architecture of the joint model, applied to our ongoing example, is illustrated in Figure 3.

Figure 3: Architecture of the joint model applied to the example in Figure 1.

"}, {"section_index": "4", "section_name": "4 EXPERIMENTAL DESIGN", "section_text": "

As we have previously mentioned, our work assumes that ACs have already been identified; that is, the token sequence that comprises a given AC is already known. The order of ACs corresponds directly to the order in which the ACs appear in the text. Since ACs are non-overlapping, there is no ambiguity in this ordering. We test the effectiveness of our proposed model on a dataset of persuasive essays (Stab & Gurevych, 2016), as well as a dataset of microtexts (Peldszus, 2014). The feature space for the persuasive essay corpus has roughly 3,000 dimensions, and the microtext corpus feature space has between 2,500 and 3,000 dimensions, depending on the data split (see below).

The persuasive essay corpus contains a total of 402 essays, with a frozen set of 80 essays held out for testing. There are three AC types in this corpus: major claim, claim, and premise. We follow the creators of the corpus and only evaluate ACs within a given paragraph; that is, each training/test example is a sequence of ACs from a paragraph. This results in a 1,405/144 training/test split. The microtext corpus contains 112 short texts. Unlike the persuasive essay corpus, each text in this corpus is itself a complete example. Since the dataset is small, the authors have created 10 sets of 5-fold cross-validation, reporting the average across all splits for final model evaluation. This corpus contains only two types of ACs (claim and premise). The annotation of argument structure in the microtext corpus also differs from the persuasive essay corpus: ACs can be linked to links, as opposed to only other ACs. Therefore, if AC C_i is annotated as linked to link l, we create a link to the source AC of l. On average, this corpus has 5.14 ACs per text. Lastly, we note that predicting the presence of links is directional (ordered): predicting a link between the pair C_i, C_j (i ≠ j) is different from predicting one between C_j, C_i.

We implement our models in TensorFlow (Abadi et al., 2015). Our model has the following parameters: hidden input dimension size 512, hidden layer size 256 for the bidirectional LSTMs, hidden layer size 512 for the LSTM decoder, α equal to 0.5, and dropout (Srivastava et al., 2014) of 0.9. We believe the need for such high dropout is due to the small amount of training data (Zarrella & Marsh, 2016), particularly in the microtext corpus. All models are trained with the Adam optimizer (Kingma & Ba, 2014) with a batch size of 16. For a given training set, we randomly select 10% to become the validation set. Training occurs for 4,000 epochs.
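As a concrete reading of the joint objective above, the sketch below computes the α-weighted negative log-likelihood from per-timestep link and type distributions. The probability arrays here are hypothetical stand-ins, not outputs of the actual model.

```python
import numpy as np

def joint_loss(link_probs, link_targets, type_probs, type_targets, alpha=0.5):
    """Negative joint log-likelihood: alpha * link term + (1 - alpha) * type term."""
    idx = np.arange(len(link_targets))
    link_ll = np.log(link_probs[idx, link_targets]).sum()   # sum_i log p(D_i | ...)
    type_ll = np.log(type_probs[idx, type_targets]).sum()   # sum_i log p(E_i | ...)
    return -(alpha * link_ll + (1.0 - alpha) * type_ll)

# Toy example mirroring Figure 1: decoding sequence (1, 1, 2, 2), i.e. 0-based (0, 0, 1, 1)
link_probs = np.array([[.7, .1, .1, .1],
                       [.6, .2, .1, .1],
                       [.2, .6, .1, .1],
                       [.1, .7, .1, .1]])
type_probs = np.array([[.8, .2], [.3, .7], [.2, .8], [.1, .9]])  # claim vs. premise
print(joint_loss(link_probs, [0, 0, 1, 1], type_probs, [0, 1, 1, 1], alpha=0.5))
```

Setting α = 0.5, as in our experiments, weights the two tasks equally.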
Once training is completed, we select the model with the highest validation accuracy (on the link prediction task) and evaluate it on the held-out test set. At test time, we take a greedy approach and select the index of the probability distribution (whether link or type prediction) with the highest value.

"}, {"section_index": "5", "section_name": "5 RESULTS", "section_text": "

The results of our experiments are presented in Tables 1 and 2. For each corpus, we present f1 scores for the AC type classification experiment, with a macro-averaged score of the individual class f1 scores. We also present the f1 scores for predicting the presence/absence of links between ACs, as well as the associated macro-average between these two values.

We implement and compare four types of neural models: 1) the previously described PN-based model depicted in Figure 3 (called PN in the tables); 2) the same as 1), but without the fully-connected input layers; 3) the same as 1), but the model only predicts the link task and is therefore not optimized for type prediction; and 4) a non-sequence-to-sequence model that uses the hidden layers produced by the BLSTM encoder with the same type of attention as the PN (called BLSTM in the tables); that is, d_i in Equation 3 is replaced by e_i.

In both corpora we compare against the following previously proposed models: the Base Classifier (Stab & Gurevych, 2016) is a feature-rich, task-specific (AC type or link extraction) SVM classifier. Neither of these classifiers enforces structural or global constraints. Conversely, the ILP Joint Model (Stab & Gurevych, 2016) provides constraints by sharing prediction information between the base classifiers. For example, the model attempts to enforce a tree structure among ACs within a given paragraph, as well as using incoming link predictions to better predict the type class claim. For the microtext corpus only, we have the following comparative models: Simple (Peldszus & Stede, 2015) is a feature-rich logistic regression classifier. Best EG (Peldszus & Stede, 2015) creates an Evidence Graph (EG) from the predictions of a set of base classifiers; the EG models the potential argument structure and offers a global optimization objective that the base classifiers attempt to optimize by adjusting their individual weights. Lastly, MP+p (Peldszus & Stede, 2015) combines predictions from base classifiers with an MSTParser, which applies 1-best MIRA structured learning.

Table 1: Results on the persuasive essay corpus.

                    Type prediction                       Link prediction
Model               Macro f1  MC f1  Cl f1  Pr f1        Macro f1  Link f1  No Link f1
Base Classifier     .794      .891   .611   .879         .717      .508     .917
ILP Joint Model     .826      .891   .682   .903         .751      .585     .918
BLSTM               .810      .830   .688   .912         .754      .589     .919
PN No FC Input      .791      .826   .642   .906         .708      .514     .901
PN No Type          -         -      -      -            .709      .511     .906
PN                  .849      .894   .732   .921         .767      .608     .925

Table 2: Results on the microtext corpus.

                    Type prediction                Link prediction
Model               Macro f1  Cl f1  Pr f1        Macro f1  Link f1  No Link f1
Simple              .817      -      -            .663      .478     .848
Best EG             .869      -      -            .693      .502     .884
MP+p                .831      -      -            .720      .546     .894
Base Classifier     .830      .712   .937         .650      .446     .841
ILP Joint Model     .857      .770   .943         .683      .486     .881
PN                  .813      .692   .934         .740      .577     .903

"}, {"section_index": "6", "section_name": "6 DISCUSSION", "section_text": "

First, we point out that the PN model achieves state-of-the-art results on 10 of the 13 metrics in Tables 1 and 2, including the highest results in all metrics on the persuasive essay corpus, as well as link prediction on the microtext corpus. The performance on the microtext corpus is very encouraging for several reasons. First, the fact that the model can perform so well with only around a hundred training examples is rather remarkable. Second, although we motivate the use of a PN by the fact that it partially enforces the tree structure in argumentation, other models explicitly contain further constraints; for example, only premises can have outgoing links, and there can be only one claim in a text. As for the other neural models, the BLSTM model performs competitively with the ILP Joint Model on the persuasive essay corpus, but trails the performance of the PN model. We believe this is because the PN model is able to create two different representations for each AC, one each in the encoding/decoding state, which benefits performance on the dual tasks, whereas the BLSTM model must encode information relating to type as well as link prediction in a single hidden representation. On the one hand, the BLSTM model outperforms the ILP model on link prediction; yet it is not able to match the ILP Joint Model's performance on type prediction, primarily due to the BLSTM's poor performance on predicting the major claim class. Another interesting outcome is the importance of the fully-connected layer before the LSTM input. The results show that this extra layer of depth is crucial for good performance on this task; without it, the PN model is only able to perform competitively with the Base Classifier. The results indicate that even a simple fully-connected layer with sigmoid activation can provide a useful dimensionality reduction for the feature representation. Finally, the PN model that only extracts links suffers a large drop in performance, conveying that the joint aspect of the PN model is crucial for high performance in the link prediction task.

Table 3 shows the results of an ablation study for the AC feature representation. Regarding link prediction, BOW features are clearly the most important, as their absence results in the largest drop in performance. Conversely, the presence of structural features provides the smallest boost in performance, as the model is still able to record state-of-the-art results compared to the ILP Joint Model. This shows that, on the one hand, the PN model is able to capture structural cues through sequence modeling and semantics (the ILP Joint Model directly integrates these structural features); however, the PN model still benefits from their explicit presence in the feature representation. When considering type prediction, both BOW and structural features are important, and it is the embedding features that provide the least benefit. The ablation results also provide an interesting insight into the effectiveness of different 'pooling' strategies for using individual token embeddings to create a multi-word embedding. The popular method of averaging embeddings (which is used by Stab & Gurevych (2016) in their system) is in fact the worst method, although its performance is still competitive with the previous state-of-the-art. Conversely, max pooling produces results that are on par with the PN results from Table 1.

Table 3: Feature ablation study. * indicates that both BOW and structural features are present, as well as the stated embedding type.

                    Type prediction                       Link prediction
Model               Macro f1  MC f1  Cl f1  Pr f1        Macro f1  Link f1  No Link f1
No structural       .808      .824   .694   .907         .760      .598     .922
No BOW              .796      .833   .652   .902         .728      .543     .912
No embeddings       .827      .874   .695   .911         .750      .581     .918
Only avg emb*       .832      .873   .717   .917         .751      .583     .918
Only max emb*       .843      .874   .732   .923         .766      .608     .924
Only min emb*       .838      .878   .719   .918         .763      .602     .924
All features        .849      .894   .732   .921         .767      .608     .925

Table 4: Results of binning the test data by length of AC sequence. * indicates that this bin does not contain any major claim labels, and this average only applies to the claim and premise classes. However, we do not disable the model from predicting this class: the model was able to avoid predicting this class on its own.

                    Type prediction                       Link prediction
Bin                 Macro f1  MC f1  Cl f1  Pr f1        Macro f1  Link f1  No Link f1
1 ≤ len < 4         .863      .902   .798   .889         .918      .866     .969
4 ≤ len < 8         .680      .444   .675   .920         .749      .586     .912
8 ≤ len < 12        .862*     .000*  .762   .961         .742      .542     .941

Table 4 shows the results on the persuasive essay test set with the examples binned by sequence length. First, it is not a surprise to see that the model performs best when the sequences are shortest. As the sequence length increases, the accuracy on link prediction drops. This is possibly due to the fact that as the length increases, a given AC has more possibilities as to which other ACs it can link to, making the task more difficult. Conversely, there is actually a rise in no-link prediction accuracy from the second to the third row.
This is likely due to the fact that, since the model predicts at most one outgoing link per AC, it indirectly predicts no link for the remaining ACs in the sequence. Since the chance probability of a link between a given pair of ACs is low in a long sequence, the no-link performance is actually better in longer sequences.

"}, {"section_index": "7", "section_name": "7 CONCLUSION", "section_text": "

In this paper we have shown how to use a modified PN (Vinyals et al., 2015b) to extract links between ACs in argumentative text. We evaluate our models on two corpora: a corpus of persuasive essays (Stab & Gurevych, 2016) and a corpus of microtexts (Peldszus, 2014). The PN model records state-of-the-art results on the persuasive essay corpus, as well as achieving state-of-the-art results for link prediction on the microtext corpus, despite only having 90 training examples. The results show that jointly modeling the two prediction tasks is crucial for high performance, as is the presence of a fully-connected layer prior to the LSTM input. Future work can attempt to learn the AC representations themselves, such as in Kumar et al. (2015). Lastly, future work can integrate subtasks 1 and 4 into the model. The representations produced by Equation 3 could potentially be used to predict the type of link connecting ACs, i.e., supporting or attacking; this is the fourth subtask in the pipeline. In addition, a segmenting technique, such as the one proposed by Weston et al. (2014), can accomplish subtask 1.

"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Samuel R. Bowman, Christopher Potts, and Christopher D. Manning. Recursive neural networks can learn logical semantics. arXiv preprint arXiv:1406.1827, 2014.

Amparo Elizabeth Cano-Basave and Yulan He. A study of the impact of persuasive argumentation in political debates. In Proceedings of NAACL-HLT, pp. 1405-1413, 2016.
Robin Cohen. Analyzing the structure of argumentative discourse. Computational Linguistics, 13(1-2):11-24, 1987.

Trudy Govier. A Practical Study of Argument. Cengage Learning, 2013.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Namhee Kwon, Liang Zhou, Eduard Hovy, and Stuart W. Shulman. Identifying and classifying subjective claims. In Proceedings of the 8th Annual International Conference on Digital Government Research: Bridging Disciplines & Domains, pp. 76-81. Digital Government Society of North America, 2007.

Huy V. Nguyen and Diane J. Litman. Context-aware argumentative relation mining. 2016.

Raquel Mochales Palau and Marie-Francine Moens. Argumentation mining: the detection, classification and structure of arguments in text. In Proceedings of the 12th International Conference on Artificial Intelligence and Law, pp. 98-107. ACM, 2009.

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Samuel R. Bowman, Christopher D. Manning, and Christopher Potts. Tree-structured composition in neural networks without tree-structured architectures. arXiv preprint arXiv:1506.04834, 2015.

Zhengping Che, David Kale, Wenzhe Li, Mohammad Taha Bahadori, and Yan Liu. Deep computational phenotyping. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 507-516. ACM, 2015.

Alex Graves and Jurgen Schmidhuber. Offline handwriting recognition with multidimensional recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 545-552, 2009.

Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285, 2015.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-1543, 2014.

Anthony J. Robinson. An application of recurrent nets to phone probability estimation. IEEE Transactions on Neural Networks, 5(2):298-305, 1994.

Niall Rooney, Hui Wang, and Fiona Browne. Applying kernel methods to argumentation mining. In FLAIRS Conference, 2012.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Isaac Persing and Vincent Ng. End-to-end argumentation mining in student essays. In Proceedings of NAACL-HLT, pp. 1384-1394, 2016.
"}]
ryTYxh5ll

[{"section_index": "0", "section_name": "CONTENT2VEC: SPECIALIZING JOINT REPRESENTATIONS OF PRODUCT IMAGES AND TEXT FOR THE TASK OF PRODUCT RECOMMENDATION", "section_text": "

Thomas Nedelec, Elena Smirnova & Flavian Vasile
{t.nedelec,e.smirnova,f.vasile}@criteo.com

"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "

We propose a unified embedded product representation that is optimized for the task of retrieval-based product recommendation. We generate this representation using Content2Vec, a new deep architecture that merges product content information such as text and image, and we analyze its performance on hard recommendation setups such as cold-start and cross-category recommendations. In the case of a normal recommendation regime, where a collaborative information signal is available, we merge in the product co-occurrence information and propose a second architecture, Content2Vec+, and show its lift in performance versus non-hybrid approaches in both cold-start and normal recommendation regimes.

"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "

Online product recommendation is now a key driver of demand, not only in e-commerce businesses that recommend physical products, such as Amazon (Marshall, 2006), TaoBao (Xiang, 2013) and eBay (Academy, 2013), but also in online websites that recommend digital content such as news (Yahoo! - Agarwal et al. (2013), Google - Liu et al. (2010)), movies (Netflix - Bell & Koren (2007)), music (Spotify - Johnson (2015)), videos (YouTube - Covington et al. (2016)) and games (Xbox - Koenigstein et al. (2012)).

Two of the most challenging aspects of recommendation in general, and of product recommendation in particular, are scalability and freshness. The first addresses the problem of making fast recommendations in parallel; the second addresses the problem of updating recommendations based on real-time user interaction. One of the most common architectural solutions for recommendation at scale divides the recommendation process into two stages: a candidate generation stage that prunes the number of recommendable items from billions to a couple of hundred, followed by a second item selection stage that decides the final set of items to be displayed to the user, as shown in Figure 1 (see Mazare (2016), Cheng et al. (2016), Covington et al. (2016)).

Figure 1: 2-Stage Recommender System Architecture (Stage 1: candidate item set generation, via item representations indexed in an inverted index and a retrieval service; Stage 2: final recommendation item set generation, via ranking of the candidate items).

The first stage generally implies the pre-generation of an inverted index over the set of recommendable products, paired with a real-time retrieval module, similar to a search engine architecture. In our current paper we focus on the cases where the system supports vectorial product queries. The sources of the vectorial representations range from the set of co-occurring products, as in the case of neighborhood-based collaborative filtering, to a low-dimensional representation produced via matrix factorization, or to an embedded representation produced via a deep neural network.

The second stage takes the candidate set and decides the final list of recommendations, usually by optimizing a ranking metric. This stage generally has far more constraints in terms of latency, due to its use of real-time signal that makes its predictions not cacheable. Therefore, in terms of model choice, the first stage can be a lot more complex than the second. In terms of impact, the quality of the candidate set coming from the first stage is crucial, since it constitutes a hard ceiling on the performance of the second stage and of the overall system.

Because of the feasibility of using a more complex model and the potential impact on final recommendation performance, we choose to concentrate our efforts on the task of optimal candidate generation. We formalize the problem as a link prediction task: given a set of past co-purchased products, we try to predict unseen pairs of products. Related work in representation learning for recommendation has investigated the use of collaborative filtering (CF), text and product images, but, to our knowledge, there has been no attempt to unify all of these signals in a single representation. We see this as an opportunity to investigate the leveraging effect of generating a Unified Product Representation via a deep-learning approach. In the following, we formally define the set of associated requirements we would like to satisfy:

Relevance: the representation should be optimized for product recommendation relevance, as measured by the associated target metrics (in this case, modeling it as a link prediction task and optimizing for the AUC of product pair prediction).

Coverage: the representation should leverage all available product information (in our case, all product information available in the product catalog together with observed product co-occurrences).

Cross-modality expressiveness: the representation should be able to account for interactions between various information sources such as text and image (e.g., it can take into account the fact that the word 'red' and the 'red' color detector are correlated).

Pair-wise expressiveness: the representation should be able to account for interactions between the two products.

Robustness: the representation should operate well (recommendation performance should not degrade dramatically) in hard recommendation situations such as product cold-start (new products, new product pairs) and cross-category recommendation. These are important use-cases in product recommendation, when the product catalog has high churn (as in the case of flash-sales websites or classifieds) or the recommendation needs to leverage cross-advertiser signal (as in the case of new users and user-acquisition advertising campaigns). This is a different goal from simply trying to optimize for relevance metrics, due to the inherent limitations of offline metrics in predicting future online performance.

Retrieval-optimized: the representation should be adapted to a content-retrieval setup, both on the query and on the indexing side, meaning that the vectors should be either small, sparse, or both.

We propose a modular deep architecture that leverages state-of-the-art architectures for generating embedded representations for image, text and CF input, re-specializes the resulting product embeddings, and combines them into a single product vector. This is a very general architecture that can plug in arbitrary networks in the image and text domains and re-use them for the problem of product recommendation, along with their gains in representation learning for the two domains. We investigate multiple ways of merging the modality-specific product information and propose a new type of residual-inspired unit, which we name the Pairwise Residual Unit, that can model the joint aspects of the different product embeddings, and we show that it leads to good improvements.
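As a small illustration of the retrieval-optimized requirement above, once such product vectors are indexed, Stage-1 candidate generation reduces to a top-k inner-product search. The snippet below is a hypothetical sketch with random embeddings, not the production retrieval service; catalog sizes and dimensions are arbitrary.

```python
import numpy as np

def top_k_candidates(query_vec, item_vecs, k=5):
    """Return indices of the k items with the largest inner-product score."""
    scores = item_vecs @ query_vec
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(1)
catalog = rng.normal(size=(10_000, 128))   # pre-computed product embeddings
query = rng.normal(size=128)               # embedding of the query product
print(top_k_candidates(query, catalog, k=5))
```

In practice this brute-force scan would be replaced by an approximate nearest-neighbor index, which is why small or sparse vectors matter.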
We analyze our proposed architecture on an Amazon dataset (McAuley et al., 2015) containing information on co-purchased products. We report our improvements versus text- and image-based baselines introduced in previous work by McAuley et al. (2015), and show improvements both in normal and in hard recommendation regimes such as cold-start and cross-category setups.

Our approach is similar to the recent work by Covington et al. (2016), which proposes a solution for video recommendation at YouTube. Unlike their proposed solution, where, in order to support user-vector queries, the candidate generation step co-embeds users and items, we are interested in co-embedding just the product pairs, which generally has a much smaller dimension. In our approach, the personalization step can happen after the per-item candidates are retrieved.

Our main contributions are the following:

We propose a novel way of integrating deep-learning item representations in the context of a large-scale recommender system with a 2-stage serving architecture, and introduce the new task of Unified Product Representation for optimal candidate selection in both cold-start and normal recommendation setups.

We introduce a new deep architecture that merges content and CF signal for the task of product recommendation, and propose the Pairwise Residual Unit, a new learning component that models the joint product representations.

We introduce two novel experimental setups (hard cold-start, cross-category) and test that the proposed Content2Vec architecture satisfies the requirements we defined.

Though the focus of our work is on improving product recommendation through representation learning, we believe that simple extensions of our approach can be applied to many other recommendation scenarios.

The rest of the paper is organized as follows: in Section 2 we cover previous related work and its relationship with our method; in Section 3 we present the Content2Vec model, followed by a detailed description of the resulting architecture in Section 4; in Section 5 we present the experimental setup and go over the results in Section 5.2; in Section 6 we summarize our findings and conclude with future directions of research.

"}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "

Our work fits into the new wave of deep-learning-based recommendation solutions, which, similarly to classical approaches, fall into three categories, namely collaborative-filtering-based, content-based, or hybrid approaches.

Several approaches use neural networks to build better item representations based on the co-occurrence matrix. The Prod2Vec algorithm (see Grbovic et al. (2015)) applies Word2Vec (Mikolov et al., 2013a), an algorithm that is at origin a shallow neural language model, to sequences of product ids, to reach a low-dimensional representation of each product. Among other embedding solutions that use the item relationship graph are the more recent extensions to the Word2Vec algorithm, such as GloVe (Pennington et al., 2014) and Swivel (Shazeer et al., 2016), and the graph embedding solutions proposed in Node2Vec (Grover & Leskovec, 2016) and SDNE (Wang et al., 2016).

Content-based methods recommend an item to a user based upon an item description and a user profile (Pazzani & Billsus, 2007). This idea was deeply investigated in the information retrieval literature: in the context of web search, DSSM (Huang et al., 2013) and its extensions C-DSSM (Shen et al., 2014) and Shan et al. (2016) are some of the most successful methods that specialize query and document text embeddings in order to predict implicit feedback signals such as document click-through rate. In the context of product recommendation, in McAuley et al. (2015) the authors feed a pre-trained CNN (a CNN trained on the ImageNet dataset, which is an image classification task that is very different from the task of image-based product recommendation) with product images and use the last layer of the network as the product embedding. This representation is subsequently used to compute similarities between products. Similarly, the authors in Van den Oord et al. (2013) use CNNs to compute similarities between songs. Yosinski et al. (2014) show that the lower layers of DNNs trained on different tasks are often similar and that good performance can be reached by fine-tuning a network previously trained on another task. In the case of recommender systems, this fine-tuning was implemented in Veit et al. (2015), where the authors specialize a GoogLeNet architecture to the task of predicting co-view events based on product pictures.

The performance of Collaborative Filtering (CF) models is often higher than that of content-based ones, but CF suffers from the cold-start problem. To take advantage of the best of both worlds, hybrid models use both sources of information in order to make recommendations. One possible way to incorporate product information is using it as side information in the product sequence model, as proposed in Meta-Prod2Vec (Vasile et al., 2016), leading to better product embeddings for products with low signal (low number of co-occurrences). In this work we continue the investigation of using both types of signal, this time both at training and at product recommendation time.

Our proposed approach takes the idea of specializing the input representations to the recommendation task and generalizes it for multi-modality inputs, in order to leverage all product information and, in particular, product images and product title and description text.

The main criterion for the Content2Vec architecture is to allow us to easily plug in new sources of signal and to replace existing embedding solutions with new versions. We are also interested in separating product-level embeddings from pair-level embeddings, such that the network can generate product vectors that are readily indexable. As a result, the Content2Vec architecture has three types of modules, as shown in Figure 2:

Content-specific embedding modules that take raw product information and generate the associated vectors. In this paper we cover embedding modules for text, image, categorical attributes and product co-occurrences (for an example, see Figure 3).

Overall product embedding modules that merge all the product information into a unified product representation.

A pair embedding module that merges the product-to-product interactions and computes the final similarity score. In the case of retrieval-optimized product embeddings, this module becomes the inner-product between the two items, and all interactions between them are to be approximated within the product-level embedding modules.

Figure 2: Content2Vec architecture combines content-specific modules with a residual vector to produce an embedding vector for each product, then uses these vectors to compute similarities between products.

Content2Vec training follows the architecture, learning module-by-module. In the first stage, we initialize the content-specific modules with embeddings from proxy tasks (classification for image, language modeling for text) and re-specialize them to the task of product recommendation. For the specialization task, as mentioned in Section 1, we frame the objective as a link prediction task where we try to predict the pairs of products purchased together. We describe the loss function in Section 3.1. In the second stage, we stack the modality-specific embeddings generated in the first stage into a general product vector and learn an additional residual vector using the same learning objective as in the first stage. This will be described in depth in Section 4.2.

The previous work on learning pair-wise item distances concentrated on using ranking (McFee & Lanckriet, 2010), siamese (Hadsell et al., 2006) or logistic losses (Zheng et al., 2015). For optimizing the link prediction objective, we choose the logistic similarity loss (eq. 1), which has the advantage of having a fast approximation via the Negative Sampling loss (Mikolov et al., 2013b) shown in eq. 2. By using Negative Sampling, the prediction step can scale up to a large number of items, by using all positive pairs and sampling the negatives on the fly:

L(θ) = − Σ_{ij∈POS} log σ(sim(a_i, b_j)) − Σ_{ij∈NEG} log σ(−sim(a_i, b_j))    (1)

L_NS(θ) = − Σ_{ij∈POS} ( log σ(sim(a_i, b_j)) + Σ_{l=1}^{k} E_{n_l∼P_D} log σ(−sim(a_i, n_l)) )    (2)

Content-specific modules can have various architectures and are meant to be used separately in order to increase modularity. Their role is to map all types of item signal into embedded representations.

Figure 3: An example of using the content-specific modules to create embedded representations of two products, with images, text and CF signal (Product A: "The Art of War", a book, also bought with B, C; Product B: "Seven Samurai", a movie, also bought with A, D).

In the following we analyze four types of input signal and embedding solutions for each of them. For all of the modules, we use the L_NS loss (see eq. 2) as the specialization loss.

Model and proxy task: CNN for Image Classification. For generating the image embeddings, we propose reusing a model trained for image classification, as in previous work by Krizhevsky et al. (2012) and He & McAuley (2015). In He & McAuley (2015), the authors have shown how to use the Inception architecture (Szegedy et al., 2015) and specialize it for the product recommendation task. However, the Inception architecture is very deep and requires extensive training time. For ease of experimentation we use AlexNet, which is a simpler architecture that was also a winner on the ImageNet task (Krizhevsky et al., 2012), prior to the Inception NN. In Section 5.2 we will show that, even if simpler, when combined with additional product text information, the AlexNet-based solution can perform very well on the recommendation task. For our experiments, we use the pretrained version of AlexNet available on the University of Toronto's website. We experimented with two different ways to specialize the representation in order to compute product similarities. In the first one, we learn a weighted inner product between the two representations (the fc7 layer of AlexNet). In the second one, we specialize the fc7 layer to detect product similarities. The second approach led to much better performance and is the one for which we report results.

Model and proxy task: Word2Vec for Product Language Modeling. For generating word embeddings, we propose reusing Word2Vec (Mikolov et al., 2013b), a model for generating language models that has been employed in a variety of text understanding tasks. More recently, it has been shown in Pennington et al. (2014) that Word2Vec is closely linked with matrix factorization techniques applied to the word co-occurrence matrix. For Content2Vec, we chose to pretrain Word2Vec on the entire product catalog text and not use an available set of word embeddings such as the one created on the Google corpus. The main reason is that the text distribution within product descriptions is quite different from the general distribution; for example, the word 'jersey' has a very different conditional distribution within the product description corpus versus general online text. Text CNN (Kim, 2014) offers a simple solution for sentence-level embeddings using convolutions. The convolutions act as a form of n-gram filters, allowing the network to embed sentence-level information and specialize word embeddings to higher-order tasks such as text classification or sentiment analysis. To the best of our knowledge, this is the first attempt to employ them for the task of product recommendation. For our task, we generate sentences based on the product titles and descriptions.

"}, {"section_index": "4", "section_name": "4.1.3 EMBEDDING PRODUCT CO-OCCURRENCES: PROD2VEC", "section_text": "

Prod2Vec (Grbovic et al., 2015) is an extension of the Word2Vec algorithm to product shopping sequences. As a result, Prod2Vec can be seen as a matrix factorization technique on the product co-occurrence matrix. In Content2Vec, the Prod2Vec-based similarity contains all of the information that can be derived from the sequential aspect of the user behavior, without taking into account the per-product meta-data.

Meta-Prod2Vec (Vasile et al., 2016) improves upon Prod2Vec by using the product meta-data side information to regularize the final product embeddings. In Content2Vec, we can use the similar technique of co-embedding product categorical information with product ids to generate the embedding values for the categorical features.

"}, {"section_index": "5", "section_name": "4.2 JOINT PRODUCT EMBEDDING: PAIRWISE RESIDUAL UNIT", "section_text": "

As stated in Section 1, the function of the product embedding module is two-fold: first, to model all interactions that exist between the modality-specific embeddings with respect to the final optimization objective, and second, to approximate interaction terms between the products that cannot be explained by a linear combination of the modality-specific similarities. With this in mind, we introduce a new type of learning unit, the Pairwise Residual Unit (eq. 4), which, similarly to the original residual unit introduced in He et al. (2015) (eq. 3), allows the layers to learn incremental, i.e. residual, representations (see Figure 4).
In Hardt & Ma (2016) the authors motivate the use of residual units as helping preserve the representations learned in the previous layers. In our case we are interested in preserving the specialized image and text representations and learning an additional representation for their interactions. Though in previous work most of the residual units use at least two ReLU layers in the residual unit, we observe good results using just one. In order to model interactions between modalities, we could also learn a fully connected layer initialized with identity that takes as input the concatenated modality-specific vectors. However, in order to have a smaller number of parameters and increase model comprehensibility, we would like to keep the modality-specific representations separate and model the final prediction model as an ensemble:

y = F(x) + x    (3)

y = sim(F(x_1), F(x_2)) + sim(x_1, x_2)    (4)

Figure 4: Pairwise Residual Unit (left: the original Residual Unit, computing F(x) + x; right: the Pairwise Residual Unit, computing sim(F(x_1), F(x_2)) + sim(x_1, x_2)).

To be able to measure the incremental value of introducing a residual vector, we introduce a baseline architecture that computes the final prediction based on the linear combination of the modality-specific similarities, denoted Content2Vec-linear, with the associated similarity function defined in eq. 5:

sim_C2V(a_i, b_j) = Σ_{m∈Modalities} w_m σ(sim_m(a_i, b_j))    (5)

Under this notation, the residual-based architecture, denoted Content2Vec-res, minimizes L_NS with the similarity function defined in eq. 6:

sim_C2V-res(a_i, b_j) = Σ_{m∈(Modalities+Residual)} w_m σ(sim_m(a_i, b_j))    (6)

In order to learn the residual vector, we keep the modality-specific similarities fixed and co-train the final weights of each of the modalities together with the product-specific residual layers. For example, in the case of using only image and text signals, our final predictor can be defined as in eq. 7, where P_txt and P_img are pre-set and w_txt, w_img, w_res and P_res are learned together:

P(pos | a, b) = σ( w_txt P_txt(pos | a_txt, b_txt) + w_img P_img(pos | a_img, b_img) + w_res P_res(pos | a_res, b_res) )    (7)

In Section 5.2 we compare the performance of Content2Vec-res and Content2Vec-linear and show that, as expected, the proposed architecture surpasses the performance of the linear model, while allowing for a retrieval-based candidate scoring solution.
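To make eqs. (5)–(7) concrete, the following sketch combines fixed per-modality similarities with a residual similarity into the final pair probability. The similarity values and weights are placeholder numbers chosen for illustration; in the actual model the weights and the residual layers are trained.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def c2v_res_score(modality_sims, modality_weights, residual_sim, w_res):
    """sim_C2V-res(a, b): sum of w_m * sigmoid(sim_m) over modalities plus residual term."""
    score = sum(w * sigmoid(s) for s, w in zip(modality_sims, modality_weights))
    return score + w_res * sigmoid(residual_sim)

# Placeholder similarities for one product pair (image and text modalities)
sim_img, sim_txt = 1.3, -0.4
sim_res = 0.2                       # inner product of the learned residual vectors
score = c2v_res_score([sim_img, sim_txt], [0.6, 0.5], sim_res, w_res=0.3)
print(sigmoid(score))               # P(pos | a, b), in the spirit of eq. (7)
```

Dropping the residual term recovers the Content2Vec-linear baseline of eq. (5).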
"}, {"section_index": "6", "section_name": "4.3 PAIR EMBEDDING MODULE", "section_text": "

In a retrieval-based architecture, the pair embedding module cannot support more than a simple linear combination of the product embedding vectors, such that the final score can be computed via inner product. However, we are still interested to know the trade-off in performance between inner-product-based candidate scoring and a model that allows for explicit interaction terms between the items. To this end, we introduce two explicit interaction models: Content2Vec-crossfeat, a model where we discretize the text- and image-specific similarity scores and create explicit feature conjunctions between them, and Content2Vec-embedpairs, a model where we use a similar technique with the Pairwise Residual Unit, in this case modeling the residual of the linear similarity directly as a vector in the pair embedding layer, as shown in Figure 5. In Section 5.2 we show that the two models have, as expected, better performance than the linear model and that the pair embedding is slightly better.

Figure 5: The two types of Pairwise Residual Units. By comparison with the first version, which outputs a scalar, the second one outputs a vector that goes directly into the final prediction layer.

"}, {"section_index": "7", "section_name": "5.1 DATASET", "section_text": "

We perform our evaluation on the publicly available Amazon dataset (McAuley et al., 2015), which represents a collection of products that were co-bought on the Amazon website. Each item has a rich description containing product image, text and category (any of the modalities can be missing). In terms of dimensionality, the dataset contains around 10M pairs of products. We concentrate on the subgraph of Book and Movie product pairs, because both categories are large and they have a reasonably sized intersection. This allows us to look at recommendation performance on cross-category pairs (to evaluate a model trained only on Book pairs on predicting Movie co-bought items) and mixed-category pairs (to evaluate the models on Book-Movie product pairs).

Based on the full Book & Movies data we generate three datasets with different characteristics. The first dataset simulates a hard cold-start regime, where all product pairs used in validation and testing are over products unseen in training. This tests the hardest recommendation setup, where all testing data is new. We decided to bench all of our hyperparameters on this regime and use the best setup on all datasets, since tuning on the harder dataset ensures the best generalization error (results shown in Table 1). The second dataset simulates a non-cold-start regime, where the vast majority of the products in the test set are available at training time. The dataset is generated by taking the top 100k most connected products in the original dataset and keeping the links between them (results shown in Table 2). The third dataset simulates a soft cold-start regime, where some of the products in the test set are available at training time. The dataset is generated by taking the top 200k most connected products in the original dataset and sampling 10% of the links between them (results shown in Table 3).

Evaluation task. We evaluate the recommendation methods on the product link prediction task, similar to He & McAuley (2015). We consider the observed product pairs as positive examples and all unknown pairs as negatives. We generate negative pairs according to the popularity of the products in the positive pairs (negative examples between popular products are more likely to be generated), with a positive-to-negative ratio of 1:2.

Evaluation metrics. For the link prediction task, we use the Area Under the Curve (AUC) of the precision/recall curve as our evaluation metric.
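The popularity-proportional negative generation described above can be sketched as follows. This is a hypothetical helper written purely for illustration (a real implementation would also filter out any sampled pair that happens to be a true positive); the negatives it produces play the role of the n_l ∼ P_D terms in the L_NS loss of eq. (2).

```python
import numpy as np

def sample_negatives(pos_pairs, n_items, ratio=2, seed=0):
    """Draw ratio * len(pos_pairs) negative pairs, favoring popular products."""
    rng = np.random.default_rng(seed)
    counts = np.bincount(np.asarray(pos_pairs).ravel(), minlength=n_items)
    p = counts / counts.sum()                        # popularity distribution
    m = ratio * len(pos_pairs)                       # positive:negative ratio of 1:ratio
    negs = rng.choice(n_items, size=(m, 2), p=p)     # both ends drawn by popularity
    return [tuple(pair) for pair in negs]

pos = [(0, 1), (1, 2), (2, 3), (0, 2)]
print(sample_negatives(pos, n_items=5))
```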
We implement and compare the following models:

ImageCNN: prediction based on specialized image embedding similarity.

TextCNN: prediction based on specialized text embedding similarity.

Content2Vec-linear: prediction based on the linear combination of text and image similarities.

Content2Vec-crossfeat: prediction based on the linear combination of discretized image and text similarities and their conjunctions.

Content2Vec-res: prediction based on the linear combination of text and image similarities plus product-level residual vector similarities.

Content2Vec-embedpairs: prediction based on the linear combination of text and image similarities and a pair-level residual component.

Prod2Vec: prediction based on the product vectors coming from the decomposition of the co-purchase matrix.

Content2Vec+: prediction based on the ensemble of Prod2Vec and Content2Vec models.

Hyper-parameters. We fixed the sizes of the embedding vectors to 4096 hidden units for the image CNN module, 256 for the text CNN module, 50 for the Prod2Vec module, and 128 for the residual representation. For optimization we use the Adam algorithm, and we manually set the initial learning rate based on validation set performance. The batch sizes vary for the different datasets. We train all the models until the validation set performance stops increasing.

"}, {"section_index": "8", "section_name": "5.2 RESULTS", "section_text": "

The results on the hard and soft cold-start datasets (Tables 1 and 3) show that our main proposed method, Content2Vec-res, can leverage the additional signal provided by each of the input modalities in a joint manner, leading to significant gains in AUC versus the one-signal baselines (ImageCNN, TextCNN) and their linear combination (Content2Vec-linear). From the point of view of robustness, Content2Vec-res learns product representations that perform better than the baseline methods on out-of-sample recommendations such as cross-category pairs and mixed-category pairs (Table 1). We observe that adding an additional layer that represents pair-level interactions does not lead to big improvements in either of the two models we investigated (Content2Vec-crossfeat, Content2Vec-embedpairs), confirming that a product retrieval-based recommender system can achieve state-of-the-art results. Finally, Content2Vec-res+, our proposed hybrid architecture that combines content and CF signal, achieves better performance than the content-only and CF-only models, with bigger lifts in the case of the third dataset (Table 3), where the CF signal is weaker due to higher sparsity.

Table 1: AUC results of image- and text-based embeddings on the hard cold-start dataset, on Book, Movie and Mixed-category test product pairs.

Recommendation Model                 Books   Movies  Mixed
Models trained on the Books dataset:
Book ImageCNN specialized            81%     78%     64%
Book TextCNN                         72%     79%     76%
Book Content2Vec-linear              83%     83%     76%
Book Content2Vec-crossfeat           86%     83%     83%
Book Content2Vec-res                 89%     83%     77%
Book Content2Vec-embedpairs          90%     82%     77%
Models trained on the Movies dataset:
Movie ImageCNN specialized           59%     92%     60%
Movie TextCNN                        63%     90%     65%
Movie Content2Vec-linear             64%     94%     65%
Movie Content2Vec-crossfeat          62%     94%     63%
Movie Content2Vec-res                60%     95%     66%
Movie Content2Vec-embedpairs         64%     94%     65%

Table 2: AUC results on the non-cold-start dataset.

Table 3: AUC results on the soft cold-start dataset.

"}, {"section_index": "9", "section_name": "6 CONCLUSION", "section_text": "

This work has several key contributions. We show how to use all product signals for the task of product recommendation, using a modular architecture that can leverage fast-evolving solutions for each type of input modality. We define a set of requirements for evaluating the resulting product embeddings and show that our method leads to significant improvements over the single-signal approaches in hard recommendation situations such as cold-start and cross-category evaluation. Finally, in order to model the joint aspects of the product embeddings, we introduce a new type of learning unit, named the Pairwise Residual Unit, and show the resulting gains on a real product co-purchases dataset. In the current work we have addressed all but one of the desired requirements, namely generating retrieval-optimized embeddings. For the next steps, we want to pursue sparse and compressed product representations, in order to help the performance of the final product retrieval system.

"}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "

Deepak Agarwal, Bee-Chung Chen, Pradheep Elango, and Raghu Ramakrishnan. Content recommendation on web portals. Communications of the ACM, 56(6):92-101, 2013.

Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for YouTube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, pp. 191-198. ACM, 2016.

Mihajlo Grbovic, Vladan Radosavljevic, Nemanja Djuric, Narayan Bhamidipati, Jaikit Savla, Varun Bhagwan, and Doug Sharp. E-commerce in your inbox: Product recommendations at scale. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15, pp. 1809-1818, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3664-2. doi: 10.1145/2783258.2788627. URL http://doi.acm.org/10.1145/2783258.2788627.

Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. 2016.

Moritz Hardt and Tengyu Ma. Identity matters in deep learning. arXiv preprint arXiv:1611.04231, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Ruining He and Julian McAuley. VBPR: Visual Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1510.01784, 2015.

Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, pp. 2333-2338. ACM, 2013.

Chris Johnson. Algorithmic music recommendations at Spotify, 2015.

Noam Koenigstein, Nir Nice, Ulrich Paquet, and Nir Schleyen. The Xbox recommender system. In Proceedings of the Sixth ACM Conference on Recommender Systems, pp. 281-284. ACM, 2012.

Michael J. Pazzani and Daniel Billsus. Content-based recommendation systems. In The Adaptive Web, pp. 325-341. Springer, 2007.

Ying Shan, T. Ryan Hoens, Jian Jiao, Haijing Wang, Dong Yu, and JC Mao. Deep crossing: Web-scale modeling without manually crafted combinatorial features. 2016.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.

Flavian Vasile, Elena Smirnova, and Alexis Conneau. Meta-Prod2Vec: Product embeddings using side-information for recommendation. arXiv preprint arXiv:1607.07326, 2016.

Andreas Veit, Balazs Kovacs, Sean Bell, Julian McAuley, Kavita Bala, and Serge Belongie. Learning visual clothing style with heterogeneous dyadic co-occurrences. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4642-4650, 2015.

Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. 2016.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.

Lilei Zheng, Khalid Idrissi, Christophe Garcia, Stefan Duffner, and Atilla Baskurt. Logistic similarity metric learning for face verification. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1951-1955. IEEE, 2015.
"}]
SyZprb5xg

[{"section_index": "0", "section_name": "ON ROBUST CONCEPTS AND SMALL NEURAL NETS", "section_text": "

Amit Deshpande
Microsoft Research, Vigyan, 9 Lavelle Road, Bengaluru 560001, India
amitdesh@microsoft.com

Department of Computer Science, The University of Texas at Austin, 2317 Speedway, Stop D9500, Austin, TX 78712, USA

"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "

The universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates, but it does not provide good bounds on the number of hidden-layer nodes or the weights. However, robust concepts often have small neural networks in practice. We show an efficient analog of the universal approximation theorem on the boolean hypercube in this context.

We prove that any noise-stable boolean function on n boolean-valued input variables can be well-approximated by a two-layer linear threshold circuit with a small number of hidden-layer nodes and small weights that depend only on the noise-stability and approximation parameters, and are independent of n. We also give a polynomial-time learning algorithm that outputs a small two-layer linear threshold circuit that approximates such a given function. We also show weaker generalizations of this to noise-stable polynomial threshold functions and noise-stable boolean functions in general.

"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "

The universal approximation theorem of Hornik et al. (1989) and Cybenko (1992) provides a foundation for the mathematical theory of artificial neural networks. It states that any continuous function on a compact subset of the Euclidean space can be approximated arbitrarily well by a feed-forward artificial neural network with only one hidden layer containing finitely many neurons, under mild assumptions on the activation function. In such neural networks, each node applies an activation function to a weighted linear combination of its inputs, and the above theorem holds true for many different choices of activation functions, as shown by Hornik (1991). However, the universal approximation theorem and its quantitative improvements by Barron (1993) and others have certain limitations: namely, they do not provide reasonable, practical bounds or efficient learning algorithms for the parameters of these neural networks, that is, the number of neurons in the hidden layer and the size of the weights used in the linear combinations. For a detailed survey of these results in approximation theory, we point the reader to Pinkus (1999).

In practice, we notice that even moderate-sized neural networks can be trained to learn various natural concepts in computer-vision tasks, and the typical rules of thumb followed for their model and size selection are usually guided by the domain knowledge, the learning algorithm, and the available computational resources more than by any theoretical bounds; see Simard et al. (2003). The known theoretical bounds are either based on the Network Information Criterion (NIC) by Amari (1998), which is a generalization of the Akaike Information Criterion (AIC) by Akaike (1974) used in statistical inference, or based on the Vapnik-Chervonenkis dimension; see Baum & Haussler (1989), Bartlett (1993), Maass (1995), Karpinski & Macintyre (1997). These bounds do not adequately explain the observed efficiency of learning many natural concepts in practice.

*This work was done during an internship at Microsoft Research India, when the author was a student at Chennai Mathematical Institute, H1, SIPCOT IT Park, Siruseri, Chennai 603103, India.

Most natural concepts are often based on a small number of relevant attributes or features, and can be learnt efficiently once we implicitly map our input to the correct attribute space and focus on these relevant attributes or features. Moreover, most natural concepts are also robust, that is, their positive and negative examples are reasonably unambiguous and far from each other. Thus, an important theoretical question is to understand the underlying cognitive process, find a reasonably close and accurate model for it, and answer why certain models like artificial neural networks can mimic this cognitive process in practice.

The implicit mapping of our input coordinates to the space of attributes is formalized by the kernel method in machine learning; see Hofmann et al. (2008). Attribute-efficient learning, proposed by Valiant (2000) and Littlestone (1988), captures the ease of learning via improved VC-dimension bounds that depend only on a small number of relevant attributes. Robust concepts are often defined using large-margin classifiers studied in the context of Support Vector Machines; see Cortes & Vapnik (1995). We use a different notion of robustness suited to the boolean hypercube, known as noise-stability. Due to known results from Fourier analysis over the boolean hypercube, noise-stability also implies closeness to a function that depends only on a small number of attributes.

Since the universal approximation theorem gives a depth-2 neural network with only one hidden layer, the effect of depth on the power of neural networks has attracted considerable interest in approximation theory as well as boolean circuit complexity; see de Villiers & Barnard (1993) and Siu et al. (1995). Note that on the boolean hypercube, depth-d circuits with sigmoid gates and linear threshold gates are essentially equivalent. An important result relevant to our paper is due to a long line of work including Goldmann et al. (1992), Goldmann & Karpinski (1998), and Hofmeister (1996), which proved that any depth-d linear threshold circuit with polynomially (in the number n of input variables) many nodes but arbitrary weights can be efficiently simulated by a depth-(d+1) linear threshold circuit with polynomially many nodes and polynomially bounded integer weights.

"}, {"section_index": "3", "section_name": "2 OUR RESULTS", "section_text": "

We work with linear threshold circuits with boolean inputs and outputs, which are discrete analogs of the neural networks with real-valued inputs and continuous activation functions. They are also known as multi-layer perceptrons, as in Minsky & Papert (1987), which are simply feed-forward neural networks where each node computes a weighted linear combination of its inputs and applies a threshold function for activation. As mentioned above, the notion of robustness we use is noise-stability, or low noise-sensitivity. The noise sensitivity of a boolean function is simply the fraction of inputs whose output changes if we change each coordinate of the input independently with a small probability, say some ε > 0.
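Noise sensitivity is straightforward to estimate empirically. The following Monte Carlo sketch, included here purely as an illustration and not part of the paper, estimates NS_ε for a majority function, a classic noise-stable halfspace.

```python
import numpy as np

def noise_sensitivity(f, n, eps, trials=20_000, seed=0):
    """Estimate NS_eps(f) = Pr[f(x) != f(y)], where y flips each bit of x w.p. eps."""
    rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=(trials, n))
    flips = rng.random((trials, n)) < eps
    y = np.where(flips, -x, x)
    fx = np.apply_along_axis(f, 1, x)
    fy = np.apply_along_axis(f, 1, y)
    return np.mean(fx != fy)

majority = lambda z: 1 if z.sum() > 0 else -1   # a noise-stable halfspace (n odd)
print(noise_sensitivity(majority, n=101, eps=0.05))
```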
noise-stable, that is, if it has low noise-sensitivity, then it can be approximated by a depth-2 linear. threshold circuit (that is, with one hidden layer), that depends only on constantly many variables in. the input, and its number of hidden nodes and the weights are also constants, all independent of n Here we quantify approximation or closeness based on the fraction of inputs where two functions. differ. This result may be folklore although we are not aware of any reference..\nTheorem 1. Any f : {-1, 1}n -> {-1, 1} that has small noise-sensitivity for e-perturbations, tha is, NSe (f) = O (se), is 8-close to a depth-2 linear threshold circuit that depends only on O(1 variables of the input with O(1) hidden nodes and O(1) weights, where the constants O(1) depen on e and s but are independent of n..\nWhen the given function is actually a linear threshold function, that is, when it represents a halfs. oace, we can improve the above theorem with constants O(1) that are polynomial in 1/e and 1/d and thus, give an efficient analog of the universal approximation theorem for neural networks ove. he boolean hypercube. Note that this is consistent with the intuition that better noise-stable con. cepts can be approximated by smaller neural networks. It also shows that a given concept may bc. linearly separable in a high n-dimensional kernel space but its approximation by neural networks. only depends on an inherent parameter like robustness or noise-sensitivity, independent of n..\nTheorem 2. Any linear threshold function f : {-1, 1}n -> {-1, 1} that has small noise-sensitivity. for e-perturbations, that is, NSe (f) = O (s3e), is 8-close to a depth-2 linear threshold circuit\nEquipped with this, we show the following implication for learning. Given oracle access to such a linear threshold function f of low noise-sensitivity, we can learn a depth-2 linear threshold circuit that approximates f well, in polynomial time.\nWe would also like to note that it is possible to extend our result for halfspaces to polynomial threshold functions. This uses the facts that any degree-d polynomial threshold function e-close to a J-junta, is close to junta that is a polynomial threshold function of degree at most d, and that the machinery from De et al.[(2014) extends to small weight polynomial threshold functions as well.\nWe now discuss some obstacles to possible improvements of our results\nWeak, proper, agnostic learning of halfspaces under non-uniform distributions is NP-hard as shown by Guruswami & Raghavendra(2006), and extended to improper learning byDaniely et al.(2013] and Daniely(2015). Daniely's result rules out efficient, constant factor approximation for even improper learning of halfspaces using any hypothesis class on the boolean hypercube under non- uniform distributions'However,Daniely[(2014) can get around this by giving a PTAS for improper learning of halfspaces on the unit sphere under uniform distribution. Our result can be seen as another way to circumvent the hardness results. We learn noise-stable halfspaces on the boolean hypercube under uniform distribution, by giving an efficient, agnostic-type learning algorithm where the output hypothesis is a depth-2 neural network. 
This is arguably more natural than other improper learning results for halfspaces via low-degree polynomials.\nNot having an efficient version of Bourgain's theorem for arbitrary noise-stable boolean functions, where the number of junta variables is polynomial in the noise-sensitivity parameters is another ob- stacle to efficient generalizations of our result. Note that the proof of this for noise-stable halfspaces does not generalize to higher depth linear threshold circuits. Another approach is to approximate any noise-stable function first using a halfspace and then by a depth-2 linear threshold circuit, but this has been ruled out by Mossel & Neeman (2016) with an example of a noise-stable function that is far from any halfspace.\n'Results inDaniely et al.(2013) are under certain assumptions that are refuted in [Allen et al.. (2015 However, Daniely(2015) recovers a slightly weaker but very similar result for halfspaces under different as. sumptions\nIn a recent paper, Feldman & Vondrak (2013) have shown that sub-modular functions are e close to O (1/e2 : log (1/e))-juntas. Note that this tells us that we can e-approximate submodular func- tions by polynomials of degree O (1/e2 . log(1/e)). This means we can approximate submodular functions by depth-3 neural networks with linear threshold gates everywhere except for the top gate.\nThe nk running time is needed to identify the specific set of O (1/e2 : log(1/e) : log(1/)) relevant coordinates. This nO(k) factor is unavoidable while learning k-juntas, and a candidate hard case is presented in Blum et al.(1994). Only recently[Valiant (2015) gave an improved algorithm to learn\nWe now give a brief outline of the proofs of the above theorems.Bourgain (2002) proved that any. function with small noise-sensitivity can be approximated by another function that is a junta, which. means that it depends on very few coordinates. In Theorem[1] we show that such a function can also be represented by a small depth-2 linear threshold circuit with small size and small integer weights.. Moreover, any linear threshold function that is close to a junta is actually close to a linear threshold.\nfunction defined over those junta coordinates. Thus, we can approximate the given noise-stable. function by a linear threshold function on a small number of inputs, however, its weights may be. large. Therefore, we use the size-depth-weight trade-off from Goldmann et al.(1992) to simulate this linear threshold function by a depth-2 linear threshold circuit with small size as well as small weights in Theorem|2] We also use a recent improvement over Bourgain's theorem by|Diakonikolas. et al.(2014) to get bounds polynomial in the noise-stability parameters. Theorem [3|follows by combining a result of De et al. (2014) on agnostic-type learning by a linear threshold function with a constructive, efficient simulation of the Goldmann et al.[(1992) result by|Goldmann & Karpinski. (1998)."}, {"section_index": "4", "section_name": "3 RELATED WORK", "section_text": "Motivated by the recent advances in neural networks, there have been various attempts to build a theory to understand why neural networks can efficiently simulate many natural concepts and why. their models and parameters can be learnt efficiently, for example, [Andoni et al.(2014) and [Arora. et al.(2014). Our objective is to show efficient analogs of the universal approximation theorem for neural networks, a question that has been studied in approximation theory as well as boolear. circuit complexity. 
We combine the size-depth-weight trade-off results from about two decades. ago such as [Goldmann et al.(1992) and Goldmann & Karpinski(1998) with more recent work or the Fourier analysis of boolean functions and its corollaries in learning. Also note that There are. known NP-hardness results for learning halfspaces by Guruswami & Raghavendra(2009) and fo. approximately learning depth-2 threshold circuits by Bartlett & Ben-David (2002). However, these. are for arbitrary threshold circuits. As we will show, the noise-stability constraint allows us to get a. polynomial time algorithm to learn a depth-2 threshold circuit approximating the original function..\nThe low effective-dimension of hyperparameters has been observed and exploited to learn using neural networks in practice by Bergstra & Bengio(2012). We propose noise-stability as an approach to study this theoretically.\nArriaga & Vempala(2006) showed that robust or large-margin halfspaces in Rn can be learnt effi ciently using random projections. Their learning algorithm outputs a depth-2 neural network with different activation functions in different layers. We define robustness using noise-stability instead and show that better noise-stability reduces learning complexity. Our results also generalize tc polynomial threshold functions, that is, a noise-stable polynomial threshold function (PTF) can be represented by a small, depth-2 neural network."}, {"section_index": "5", "section_name": "4 PRELIMINARIES", "section_text": "Here we give a compilation of definitions and known results that we will use to prove Theorems|1 2] and 3] Noise-stable boolean functions have low noise-sensitivity. Noise-sensitivity of a boolean function, with respect to e-perturbations, is defined as the fraction of inputs whose output changes when we change each bit of the input independently with a small probability e.\nDefinition 1. The noise sensitivity of a boolean function f : {-1, 1}n -> {-1, 1} at a given nois. rate e > 0 is defined as\nNSc(f) = Probx.y(f(x) F f(y)\nwhere x is uniformly distributed in {-1,1}n, and y is obtained from x by flipping each bit of x independently with probability e.\nA theorem of|Bourgain[(2002) states that boolean functions with small noise-sensitivity are close t untas. which are boolean functions that depend on yery few coordinates. Note that the number o these relevant coordinates is independent of n.\nLemma 1. Any f : {-1,1}n > {-1, 1} that satisfies NSe (f) = O(e) is S-close to a k-junt where\nO(1/e) 1 k = de\nHere, S-closeness means agreement on 1 - 8 fraction of the inputs\nf(x) = sgn n WXi-W0\nLemma 2. Any linear threshold function f : 1,1} satisfies NSe (f) < 2/e\nThe bounds in Proposition|1|can be improved when f is a linear threshold function as shown by the result of Diakonikolas et al.[(2014) mentioned below. Thus, a noise-stable linear threshold functior is close to a k-junta, where k is polynomial dependent on the noise and approximation parameters but is independent of n\n1 log k = 0 log\nThe following lemma from O'Donnell & Servedio(2011) ties it up nicely to say that if any linea. threshold function is close to a junta, then it must be close to a linear threshold function defined over those junta coordinates.\nLemma 4. 
If a linear threshold function f : {-1, 1}n -> {-1, 1} is 8-close to a junta over a subse J C [n] of coordinates, then f is S-close to a linear threshold function defined over that subse J C n of coordinates.\nLinear threshold circuits where each gate computes a linear threshold function forms an important class in circuit complexity. We borrow the standard definitions and notation from Siu et al. (1995) and Goldmann et al.(1992).\nDefinition 3. LTg is defined as the class of linear threshold circuits of depth d on n inputs with the number of nodes polynomial in n but arbitrary weights inside the linear threshold functions. LT'd is defined as the class of linear threshold circuit of depth d on n inputs with both the number of nodes and weights inside the linear threshold functions polynomially bounded in n.\nThe size-depth-weight trade-offs for linear threshold circuits have been studied in circuit complexity. with keen interest, and a long line of work culminated in the following result by Goldmann et al. (1992). Here, the weight bounds are bounds on the ratio of the maximum and the minimum weights.. when all of them are integers..\nThis means that any depth-d linear threshold circuit of polynomial size but arbitrary weights can be simulated by a depth-(d + 1) linear threshold circuit whose size and weights are both polynomially bounded. WhileGoldmann et al.(1992) gives an existence result, Goldmann & Karpinski(1998 gives a constructive proof and it is easy to check that the underlying simulation is efficient and can be computed in polynomial time as well. Hofmeister (1996) has a simplified proof of Goldmann & Karpinski(1998) with improved explicit bounds.\nBourgain's theorem has also been extended to the case of boolean functions with inputs that com from constant biased distributions over {-1, 1}n in Kindler & Safra|(2002). Our general result car be extended to these cases as well. For this we need to define the X-noise-sensitivity of a booleal function with respect to p, where p is the distribution that picks -1 with probability p and 1 witl. probability 1 - p.\nNote that the /e in the bound has a special significance for linear threshold functions, as we explain below.\nA theorem of Peres(2004) states that the noise sensitivity of any linear threshold function at noise rate e is at most 2/e\nRemark: For convenience, we use NSe (f) = O(83/e) in our assumption whenever using the above theorem.\nDefinition 4. The X-noise-sensitivity of a Boolean funciton f : {-1, 1}n -> {-1, 1} with respe to p is defined as\nwhere x ~ and y is constructed by first sampling coordinates I from [n] according to 3 anc then replacing those coordinates in x by coordinates independently sampled from !\nLemma 6. For any parameter X > 0, fix k = logi-x(1/2). Then every Boolean functior f : {-1,1}n {-1, 1} whose A-noise-sensitivity with respect to p is bounded by (e/k)?, is a max[O(e log(1/p)/p2), J]-junta, where\nLemma 7. Any f : {-1, 1}n -> {-1, 1} that is a k-junta can be represented by a depth-2 linear threshold circuit with the number of nodes and weights bounded by 2O(k).\nProof. Since f is a k-junta we can pretend that f : {1, 1}k -> {-1, 1}. Each positive example x E (-1, 1}k such that f(x) = 1 can be isolated by a single halfspace h(y) = sgn ((x, y) - (k - 1/2)) which outputs positive value for y E {-1, 1}k iff x = y. We can build a depth-2 linear threshold. circuit where all the hidden nodes correspond to h(x), one for each positive examples of f. Thus. 
for a positive example of f, exactly one of the hidden layer node outputs 1. Otherwise, all hidder layer nodes output -1. Now we can have a linear threshold gate are the top with all weights 1 anc threshold 1 - p, where p is the number of positive examples of f. Note that all the hidden threshold. gates have integer weights bounded by k and they are at most 2k in number. The top gate has integer. weights bounded by 2k. Thus, f can be represented by an LT, or depth-2 linear threshold circui. where the size of the circuit and the integer weights used in it are bounded by 2O(k).\nTherefore, combining this with Proposition[1 we get that any noise-stable f as required in Theorem 1is d-close to a depth-2 linear threshold circuit whose size and integer weights are bounded by 20(k), where\nindependent of n\nSince Bourgain's theorem can be improved for linear threshold functions with polynomial depen. dency in the noise and approximation parameters, we can approximate the given function using. junta where the number of junta variables is polynomially bounded. Due to Lemma[4] we can more over, say that our function is not just close to a junta but close to a linear threshold function define. over these junta variables. The only caveat is that the weights used in this linear threshold functio may be large. This is where we invoke size-depth-weight trade-off result such as Proposition|5|fror circuit complexity to simulate this linear threshold function by a linear threshold circuit with a extra depth but polynomially bounded weights..\nNSx.p(f) = Probx,y(f(x) # f(y)\nO(1/e) 1 k = de\nProof. (Proof of Theorem2) From Proposition[3] we see that any linear threshold function f with low noise-sensitivity NSe (f) = O (s3e) is -close to an O (1/e2 log(1/e) log(1/))-junta. From Lemma4] moreover, it must be d-close a linear threshold function over these junta variables.\nThus, f is &-close to an LT function over these junta variables but the weights could be large. How ever, Proposition 5|shows that this can be simulated by an LT2 function over these junta variables with weights polynomially bounded in the number of junta variables. Therefore, f is d-close to an LT2 function over O (1/e2 log(1/e) log (1/8)) variables with the size of the circuits and the weights at the threshold gates polynomially bounded in 1/e and 1/, but independent of n. This concludes the proof of Theorem2"}, {"section_index": "6", "section_name": "PROOF OF THEOREM3", "section_text": "Proof. (Proof of Theorem[3) Looking at Theorem[2 the broad outline of the algorithm is as follows As seen in the proof of Theorem[2] we know that the given linear threshold function of low noise- sensitivity is close to another linear threshold function that depends only on a small, constant number of input variables. We can go over each small subset by brute force. Now over each small subset, we can try to learn a linear threshold function over them that is closest to the given function. Here. we use a result fromDe et al.[(2014) (see Theorem 36 of|De et al.(2014) on agnostic-type learning halfspaces via reconstructing the Chow parameters of a linear threshold function; Chow parameters. are the level-0 and level-1 Fourier coefficients which are known to completely determine a linear threshold function.\nLemma 8. Let f : {-1, 1}n -> {-1,1} and let opt be the minimum disagreement (in fraction oJ the inputs) of f with its closest linear threshold function. 
Then given any O < e, y < 1/2 and access to independent uniform samples (x, f(x)), we can output a linear threshold function g (given by its weights) such that, with probability 1 - ,\nwhere the algorithm runs in time\nCorollary 1. Let f : {-1, 1}n -> {-1, 1} be a boolean function that is &-close to a linear threshol function in a given subset S C [n] of k input variables. Then, for O < 8,y < 1/2, and given acces. to independent uniform examples (x, f(x)), we can output a linear threshold function g (given b) its weights) such that, with probability 1 - y,\nd(f,g) 2-(/log(1/s)) + 8,\nwhere the algorithm runs in time\nO(log-(1/8)) log\nO(log(1/e)) log\nThus, we go over all subsets of size O (1/e2 : log(1/e) : log(1/)) and run the agnostic-type learn-. ing of linear threshold functions byDe et al.(2014). We take the best of these and convert the. corresponding output, which is a linear threshold function with weights possibly exponential in 1/e and 1/, and applyGoldmann & Karpinski[(1998) to convert it into a depth-2 linear threshold circuit. whose size and weights both are polynomially bounded in 1/e and 1/8..\nWe show an efficient analog of the universal approximation theorem for neural networks in the case of noise-sensitive halfspaces of boolean hypercube, and gave efficient learning algorithms for the same. We do this via an interplay of techniques from Fourier analysis over the boolean hypercube and size-weight-depth trade-off results on linear threshold circuits from circuit complexity.\nOne might be able to extend these result to continuous domains where the input is sampled uniformly from 1, 1|n by using the ANOvA (analysis of variance) decomposition of a function. However. to do this one will have to prove a Bourgain-type theorem for these settings.."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Sarah R. Allen, Ryan O'Donnell, and David Witmer. How to refute a random CsP. CoR abs/1505.04383,2015. URLhttp://arxiv.0rg/abs/1505.04383\nShun-ichi Amari. The handbook of brain theory and neural networks. chapter Learning and Sta tistical Inference, pp. 522-526. MIT Press, Cambridge. MA. USA. 1998. ISBN 0-262-51102-9 URLhttp://dl.acm.0rq/citation.cfm?id=303568.303829\nRosa I. Arriaga and Santosh Vempala. An algorithmic theory of learning: Robust concepts and random projection. Machine Learning, 63(2):161-182, 2006. 1SsN 1573-0565. doi: 10.1007/ s10994-006-6265-7. URLhttp://dx.doi.0rg/10.1007/s10994-006-6265-7\nPeter L. Bartlett and Shai Ben-David. Hardness results for neural network approximation problems Theoretical Computer Science, 284(1):53 - 66, 2002. ISsN 0304-3975. doi: http://dx.doi.org/ 10.1016/S0304-3975(01)00057-3. URL http://www.sciencedirect.com/science/ artic1e/pii/s0304397501000573 Computing Learining Theory.\nEric B. Baum and David Haussler. What size net gives valid generalization? Neural Comput 1(1):151-160, March 1989. ISSN 0899-7667. doi: 10.1162/neco.1989.1.1.151. URL http : //dx.doi.0rg/10.1162/neco.1989.1.1.151\nJames Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. J. Mach Learn. Res., 13:281-305, February 2012. ISsN 1532-4435. URL http://d1.acm.org/ citation.cfm?id=2188385.2188395\nAlexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. Learning polynomials with neural networks. In Proceedings of the 31th International Conference on Machine Learning ICML 2014, Beijing, China, 21-26 June 2014, pp. 1908-1916, 2014. URL http://jmlr. 
org/proceedings/papers/v32/andoni14.htm1\nSanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In Proceedings of the 31th International Conference on Machine Learning. ICML 2014, Beijing, China, 21-26 June 2014, pp. 584-592, 2014. URLhttp: // jm1r.0rg/ proceedings/papers/v32/arora14.html\nPeter L. Bartlett. Vapnik-chervonenkis dimension bounds for two- and three-layer networks. Neural Computation, 5(3):371-373, 1993. doi: 10.1162/neco.1993.5.3.371. URLhttp: / /dx. doi. org/10.1162/neco.1993.5.3.371\nAvrim Blum, Merrick L. Furst, Michael J. Kearns, and Richard J. Lipton. Cryptographic primitives based on hard learning problems. In Proceedings of the 13th Annual International Cryptology Conference on Advances in Cryptology, CRYPTO '93, pp. 278-291, London, UK, UK, 1994. Springer-Verlag. ISBN 3-540-57766-1. URL http://dl.acm.org/citation.cfm?id= 646758.759585\nGeorge Cybenko. Approximation by superpositions of a sigmoidal function. MCss, 5(4):455, 1992 doi: 10.1007/BF02134016. URLhttp://dx.doi.0rg/10.1007/BF02134016\nSI1lap lailspacCs. 61(2):11:1-11:36, 2014. doi: 10.1145/2590772. URL http://doi.acm.0rg/10.1145/ 25 90772 J. de Villiers and E. Barnard. Backpropagation neural nets with one and two hidden layers. Neural Networks, IEEE Transactions on, 4(1):136-141, Jan 1993. ISSN 1045-9227. doi: 10.1109/72. 182704. I. Diakonikolas, R. Jaiswal, R. A. Servedio, L.-Y. Tan, and A. Wan. Noise Stable Halfspaces are. Close to Very Small Juntas. November 2014. Vitaly Feldman and Jan Vondrak. Optimal bounds on approximation of submodular and xos func-. tions by juntas. In Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, FOCS '13, pp. 227-236, Washington, DC, USA, 2013. IEEE Computer So ciety. ISBN 978-0-7695-5135-7. doi: 10.1109/FOCS.2013.32. URLhttp://dx.doi.org/ 10.1109/F0CS.2013.32\nVenkatesan Guruswami and Prasad Raghavendra. Hardness of learning halfspaces with noise. SIAA Journal on Computing, 39(2):742-765, 2009. doi: 10.1137/070685798. URLhttp://dx do1.0rg/10.1137/070685798\nThomas Hofmann, Bernhard Schlkopf, and Alexander J. Smola. Kernel methods in machine learn ing. Ann. Statist., 36(3):1171-1220, 06 2008. doi: 10.1214/009053607000000677. URL http://dx.doi.0rg/10.1214/009053607000000677\nMathematics, 131:269-276, 2002. doi: 10.1007/BF02785861. Corinna Cortes and Vladimir Vapnik. Support-vector networks. Mach. Learn., 20(3):273-297 September 1995. ISSN 0885-6125. doi: 10.1023/A:1022627411411. URL http: / /dx. doi. org/10.1023/A:1022627411411\nAmit Daniely. A PTAS for agnostically learning halfspaces. CoRR, abs/1410.7050, 2014. URL http://arxiv.0rg/abs/1410.7050\nThomas Hofmeister. Computing and Combinatorics: Second Annual International Conference. COCOON '96 Hong Kong, June 17-19, 1996 Proceedings, chapter A note on the simulation of exponential threshold weights, pp. 136-141. Springer Berlin Heidelberg, Berlin, Heidelberg. 1996. ISBN 978-3-540-68461-9. doi: 10.1007/3-540-61332-3_146. URLhttp: //dx. doi. 1 4 6\nMarek Karpinski and Angus Macintyre. Polynomial bounds for {VC} dimension of sigmoidal and general pfaffian neural networks. Journal of Computer and System Sciences, 54(1):169 - 176 1997. ISSN 0022-0000. doi: http://dx.doi.org/10.1006/jcss.1997.1477. URL http: / /www. sciencedirect.com/science/article/pii/s002200009791477X\nGuy Kindler and Shmuel Safra. Noise-resistant boolean functions are juntas. preprint, 2002\nNick Littlestone. 
Learning quickly when irrelevant attributes abound: A new linear-threshol algorithm. Mach. Learn., 2(4):285-318, April 1988. ISsN 0885-6125. doi: 10.1023/A 1022869011914. URLhttp://dx.doi.0rq/10.1023/A:1022869011914\nMarvin Minsky and Seymour Papert. Perceptrons - an introduction to computational geometry. MIT Press, 1987. ISBN 978-0-262-63111-2"}] |
ryWKREqxx | [{"section_index": "0", "section_name": "EMERGENT PREDICATION STRUCTURE IN VECTOR REPRESENTATIONS OF NEURAL READERS", "section_text": "Hai Wang. Takeshi Onishi Kevin Gimpel David McAllester\nReading comprehension is a question answering task where the answer is to be. found in a given passage about entities and events not mentioned in general knowl-. edge sources. A significant number of neural architectures for this task (neural. readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of \"predication structure'' in. the hidden state vectors of a class of neural readers including the Attentive Reader. and Stanford Reader. We posits that the hidden state vectors can be viewed as (a representation of) a concatenation [P, c] of a \"predicate vector' P and a \"constant. symbol vector\"' c and that the hidden state represents the atomic formula P(c). This predication structure plays a conceptual role in relating \"aggregation read-. ers\" such as the Attentive Reader and the Stanford Reader to \"explicit reference. readers'' such as the Attention-Sum Reader. the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show thai. the addition of linguistics features to the input to existing neural readers signifi. cantly boosts performance yielding the best results to date on the Who-did-What. dataset"}, {"section_index": "1", "section_name": "INTRODUCTION AND OVERVIEW", "section_text": "Reading comprehension is a type of question answering task where the answer is to be found in a passage about particular entities and events not otherwise familiar to the reader. In particular, the entities and events should not be mentioned in structured databases of general knowledge. Reading comprehension problems are intended to measure a systems ability to extract semantic information about entities and relations directly from unstructured text. Several large scale reading comprehen- sion datasets have been introduced recently. In particular the CNN & DailyMail datasets (Hermann et al.l2015), the Children's Book Test (CBT) (Hill et al.l|2016), and the Who-did-What dataset (On- ishi et al.|2016). The large sizes of these datasets enable the application of deep learning. These are all cloze-style datasets where a question is constructed by deleting a word or phrase from an article summary (in CNN/DailyMail), from a sentence in a Children's story (in CBT), or by delet ing a person from the first sentence of a different news article on the same entities and events (in Who-did-What).\nIn this paper we present empirical evidence for the emergence of predication structure in a certai class of neural readers. To understand predication structure is it helful to review the anonymizatio performed in the CNN/DailyMail dataset. In this dataset named entities are replaced by anonymou entity identifiers such as \"entity37'. The passage might contain \"entity52 gave entity24 a rousin applause\"' and the question might be \"X received a rounding applause from entity52\". The tas is to fill in X from a given multiple choice list of candidate entity identifiers. A fixed relativel small set of the same entity identifiers are used over all the problems and the same problem i presented many times with the entity identifiers shuffled. This prevents a given entity identifier fror having any semantically meaningful vector embedding. 
The embeddings of the entity identifiers ar"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "presumably just pointers to semantics-free tokens. We will write entity identifiers as logical constant symbols such as c rather than strings such as \"entity37'\nAggregation readers, including Memory Networks (Weston et al. Sukhbaatar et al. 2015), the At- tentive Reader (Hermann et al.| 2015) and the Stanford Reader (Chen et al.[2016), use bidirectional LSTMs or GRUs to construct a contextual embedding ht of each position t in the passage and also an embedding q of the question. They then select and answer c using a criterion similar to\nargmax ht. (c) t\nargmax < e(c) t\nHere . aght is viewed as a vector representation of the passage\nWe argue that for aggregation readers, roughly defined by (2), the hidden state ht of the passage at position (or word) t can be viewed as a vector concatenation ht = e(t), e'(ct) where t is a property (or statement or predicate) being stated of a particular constant symbol ct. A logician might write this as ht = t[ct]. Furthermore, the question can be interpreted as having the form I[x] where the problem is to find a constant symbol c such that the passage implies I[c]. Assuming h+ = [e(+).e'(ct)l and q = [e().0l and e(c) 1 we can rewrite (1) as 0.e\nargmax < e(t),e() > <e'(ct),e'(c) > C t\nThe first inner product in (3) is interpreted as measuring the extent to which t[x] implies I[x] for any x. The second inner product is interpreted as restricting t to positions talking about the constant symbol c.\nNote that the posited decomposition of ht is not explicit in (2) but instead must emerge during. training. We present empirical evidence that this structure does emerge. The empirical evidence is somewhat tricky as the direct sum structure that divides ht into its two parts need not be axis aligned. and therefore need not literally correspond to vector concatenation..\nWe also consider a second class of neural readers that we call explicit reference readers. Explicit reference readers avoid (2) and instead use.\nargmax Qt C tER(c)\nwhere R(c) is the subset of the positions where the constant symbol (entity identifier) c occurs Note that if we identify at with < e(t), e() > and assume that < e'(c), e'(ct) > is either O or 1 depending on whether c = ct, then (3) and (4) agree. In explicit reference readers the hidden state ht need not carry a pointer to ct as the restriction on t is independent of learned representations. Ex- plicit reference readers include the Attention Sum Reader (Kadlec et al.|2016), the Gated Attention Reader (Dhingra et al.]2016), the Attention-over-Attention Reader (Cui et al.]2016) and others (a list can be found in section6)\nSo far we have only considered anonymized datasets that require the handling of semantics-free constant symbols. However, even for non-anonymized datasets such as Who-Did-What, it is helpful to add features which indicate which positions in the passage are referring to which candidate an- swers. This indicates, not surprisingly, that reference is important in question answering. The fact that explicit reference features are needed in aggregation readers on non-anonymized data indicates that reference is not being solved by the aggregation readers. However, as reference seems to be important for cloze-style question answering, these problems may ultimately provide training data from which reference resolution can be learned.\nSections 2 and 3 review various existing datasets and models respectively. 
Section4 presents the logical structure interpretation of aggregation readers in more detail and the empirical evidence supporting it. Section|5|proposes new models that enforce the direct sum structure of the hidden\nwhere e(c) is the vector embedding of the constant symbol (entity identifier) c. In practice the inner-product < ht, q > is normalized over t using a softmax to yield an attention at over t and (1. becomes.\nBefore presenting various models for machine comprehension we give a general formulation of th. machine comprehension task. We take an instance of the task be a four tuple (q,p, a, A), wher. q is a question given as sequence of words containing a special taken for a \"blank'' to be filled ir p is a document consisting of a sequence of words, A is a set of possible answers and a E A i the ground truth answer. All words are drawn from a vocabulary V. We assume that all possibl answers are words from the vocabulary, that is A C V, and that the ground truth answer appears i the document, that is a E p. The problem can be described as that of selecting the answer a E . that answers question q based on information from p..\nCNN & DailyMail:Hermann et al.(2015) constructed these datasets from a large number of news. articles from the CNN and Daily Mail news websites. The main article is used as the context.. while the cloze style question is formed from one short highlight sentence appearing in conjunction. with the published article. To avoid the model using external world knowledge when answering. the question, the named entities in the entire dataset were replaced by anonymous entity IDs which were then further shuffled for each example. This forces models to rely on the context document to.. answer each question. In this anonymized corpus the entity identifiers are taken to be a part of the . vocabulary and the answer set A consists of the entity identifiers occurring in the passage..\nWho-did-What (WDW): The Who-did-What dataset (Onishi et al.]2016) contains 127,000 mul tiple choice cloze questions constructed from the LDC English Gigaword newswire corpus (David & Cieri]2003). In contrast with CNN and Daily Mail, it avoids using article summaries for ques tion formation. Instead, each problem is formed from two independent articles: one is given as the passage to be read and a different article on the same entities and events is used to form the ques tion. Further, Who-did-What avoids anonymization, as each choice is a person named entity. In thi dataset the answer set A consists of the person named entities occurring in the passage. Finally, the problems have been filtered to remove a fraction that are easily solved by simple baselines. It has two training sets. The larger training set (\"relaxed') is created using less baseline filtering, while the smaller training set (\"strict') uses the same filtering as the validation and test sets.\nChildren's Book Test (CBT) Hill et al.(2016) developed the CBT dataset in a slightly different fashion to the CNN/DailyMail datasets. They take any sequence of 21 consecutive sentences from a children's book: the first 20 sentences are used as the passage, and the goal is to infer a missing word in the 21st sentence. The task complexity varies with the type of the omitted word (verb, preposition, named entity, or common noun). According to the original study on this dataset (Hill et al.]2016), n-gram and recurrent neural network language models are sufficient for predicting verbs or prepositions. 
However, for named entities and common nouns, current solvers are still far from human performance.\nOtherRelated Datasets.] It is also worth mentioning several related datasets. The MCTest dataset (Richardson et al.l|2013) consists of children's stories and questions written by crowdsourced workers. The dataset only contains 660 documents and is too small to train deep models. The bAbI dataset (Weston et al.]2016) is constructed automatically using synthetic text generation and can be perfectly answered by hand-written algorithms (Lee et al.|2016). The SQuAD dataset (Ra- jpurkar et al.2016) consists passage-question pairs where the passage is a wikipedia article and the questions are written by crowdsourced workers. Although crowdsourcing is involved, the dataset contains over 200,O00 problems. But the answer is often a word sequence which is dificult to handle with the reader models considered here. The LAMBADA dataset (Denis et al.]2016) is a word prediction dataset which requires a broad discourse context and the correct answer might not in the context. Nonetheless, when the correct answer is in the context, neural readers can be applied effectively(Chu et al.]2016).\nstate vectors. It is shown that these new models perform well on the Who-did-What dataset provided. that reference annotations are added as input features. Section 5 also describes additional linguistic features that can be added to the input embeddings and show that these improve the performance of existing models resulting in the best single-model performance to date on the Who-did-What. dataset."}, {"section_index": "3", "section_name": "AGGREGATION READERS AND EXPLICIT REFERENCE READERS", "section_text": "Here we classify readers into aggregation readers and explicit reference readers. Aggregation readers. appeared first in the literature and include Memory Networks (Weston et al.) Sukhbaatar et al.. 2015), the Attentive Reader (Hermann et al. 2015), and the Stanford Reader (Chen et al. 2016). Aggregation readers are defined by equations (8) and (10) below. Explicit reference readers incluce. the Attention-Sum Reader (Kadlec et al.]2016), the Gated-Attention Reader (Dhingra et al.]2016), and the Attention-over-Attention Reader (Cui et al.|[2016). Explicit reference readers are defined by equation (14) below. We first present the Stanford Reader as a paradigmatic aggregation Reader and the Attention-Sum Reader as a paradigmatic explicit reference reader.."}, {"section_index": "4", "section_name": "3.1 AGGREGATION READERS", "section_text": "h biLSTM(e(p)) [fLSTM(e(q))|q], bLSTM(e(q))1 q\n1(C(9))|q|;0LO 11V1(C(9))1] In equations (5) and (6) we have that e(p) is the sequence of word embeddings e(w;) for w; E p and similarly for e(q). The expression biLSTM(s) denotes the sequence of hidden state vectors resulting from running a bi-directional LSTM on the vector sequence s. We write biLSTM(s); for the ith vector in this sequence. Similarly fLSTM(s) and bLSTM(s) denote the sequence of vectors resulting from running a forward LSTM and a backward LSTM respectively and :, :] denotes vector concatenation. The Stanford Reader, and various other readers, then compute a bilinear attention over the passage which is then used to construct a single weighted vector representation of the passage\nHere e,(a) is an \"output embedding\" of the answer a. On the CNN dataset the Stanford Reader trains an output embedding for each the roughly 500 entity identifiers used in the dataset. 
In cases where the answer might be any word in V an output embedding must be trained for the entire vocabulary.\nMemory Networks. Memory Networks (Weston et al.; Sukhbaatar et al.]2015) use (8) and (10) but have more elaborate methods of constructing \"memory vectors\" ht not involve LSTMs. Memory networks use (8) and (10) but replace (9) with\nP(w|p,q,A) = P(w|p,q) = softmax e,(w)To WEV\nP(w(p,q,A) = P(w(p,q) = softmaxe,(w)To\nIt should be noted that (11) trains output vectors over the whole vocabulary rather than just those items occurring in the choice set A. This is empirically significant in non-anonymized datasets such as CBT and Who-did-What where choices at test time may never have occurred as choices in the training data.\nAttentive Reader. The Stanford Reader was derived from the Attentive Reader (Hermann et al 2015). The Attentive Reader uses Qt = softmaxt MLP([ht, ql) instead of (7). Here MLP(x) is th output of a multi layer perceptron (MLP) given input x. Also, the answer distribution in the attentiv. reader is defined over the full vocabulary rather than just the candidate answer set A.\nStanford Reader. The the Stanford Reader (Chen et al.]2016) computes a bi-directional LSTM representation of both the passage and the question..\nsoftmax h' Wa q Qt t 0\np(a[d, q,A) softmax eo. aEA a argmax eo aEA\nThe reader is trained with log-loss ln 1/P(a[p, q, A) where a is the correct answer. At test time the reader is scored on the percentage of problems where a = a.\nP(w[p,q, A) = P(w|p, q) = softmax e,(w)MLP([o, q]) WEV\nEquation (12) is similar to (11) in that it leads to the training of output vectors for the full vocabulary rather than just those items appearing in choice sets in the training data. As in memory networks. this leads to improved performance on non-anonymized data sets..\nHere we think of R(a, p) as the set of references to a in the passage p. It is important to note that (13) is an equality and that P(a|p, q, A) is not normalized to the members of R(a, p). When training with the log-loss objective this drives the attention at to be normalized - to have support only on the positions t with t E R(a, p) for some a. See the heat maps in the appendix\nrated-Attention Reader. The Gated Attention Reader Dhingra et al.(2016) involves a K-laye iGRU architecture defined by the following equations..\n[fGRU(e(q))Iql,bGRU(e(q))1] 1 < l < K h1 biGRU(e(p)) hl biGRU(hl 2 <l< K\nAttention-over-Attention Reader, The Attention-over-Attention Reader (Cui et al.. 2016) uses a more elaborate method to compute the attention dt. We will use t to range over positions in the passage and j to range over positions in the question. The model is then defined by the following equations.\nBj=pt t,j Qt=j jQt,j\nNote that the final equation defining Qt can be interpreted as applying the attention , to the atten tions Qt,j. This reader uses (13) and (14)\nAs discussed in the introduction the entity identifiers such as \"entity37' introduced in the CNN/DailyMail dataset cannot be assigned any semantics other than their identity. We should think of them as pointers or semantics-free constant symbols. Despite this undermining of semantics aggregation readers using (8) and (10) are able to perform well. Here we posit that this is due to an. emergent predication structure in the hidden vectors ht. Intuitively we want to think of the hidder state vector ht as a concatenation [e(t), e'(at)] where t carries semantic information true of at. 
We think of ht as representing t[at] for semantic statement t[x] asserted of the constant symbol.\nAttention-Sum Reader. In the Attention-Sum Reader (Kadlec et al.]2016) h and q are computed with equations (5) and (6) as in the Stanford Reader but using GRUs rather than LSTMs. The attention at is computed similarly to (7) but using a simple inner product Qt = softmaxt ht q rather than a trained bilinear form. Most significanlty, however, equations (9) and (10) are replaced by the following where t E R(a, p) indicates that a reference to candidate answer a occurs at position t in p.\nP(a|p,q,A) Qt tER(a,p) a argmax Qt a tER(a,p)\nfGRU(e(q))lql,bGRU(e(q))1] 1 < l < K h1 biGRU(e(p)) hl biGRU(hl 2 <l < K\nHere the question embeddings q' for different values of l are computed with different GRU model parameters. Here h O q abbreviates the sequence h1 O q, h2 O q, ... h|pl O q. Note that for K = 1. we have only q' and h' as in the attention-sum reader. An attention is then computed over the final layer hK with at = softmaxt (hK)T qK in the attention-sum reader. This reader uses (13) and. (14).\nat. We also think of the vector representation q of the question as having the form [e(), 0] and vector embedding eo(a) as having the form [0, e'(a)].\nif t E R(a,p C 0 otherwise\nand hence (10) and (14) agree - the aggregation readers and the explicit reference readers are using essentially the same answer selection criterion..\nEmpirical evidence for (16) is given in the first three rows of table [1 The first row empirically measures the \"constant' c in (16) by measuring eo(a) ht for those cases where t E R(a,p). The second row measures \"0' in (16) by measuring eo(a)' ht in those cases where t R(a, p). Addi- tional evidence for (16) is given in figure[1[showing that the output vectors eo(a) for different entity identifiers a are nearly orthogonal. Orthogonality of the output vectors is required by (16) provided that each output vector eo(a) is in the span of the hidden state vectors ht,p for which t E R(a, p) Intuitively, the mean of all vectors ht,p with t E R(a, p) should be approximately equal to eo(a). Of course empirically this will only be approximately true.\nEquation (16) would suggest that the vector embedding of the constant symbols should have di-. mension at least as large as the number of distinct constants. However, in practice is sufficient that. e(a)' e(a') is small for a a'. This allows the vector embeddings of the constants to have dimen-. sion much smaller than the number of constants. We have experimented with two-sparse constant. symbol embeddings where the number of embedding vectors in dimention d is 2d(d - 1) (d choose. 2 times the four ways of setting the signs of the non-zero coordinates). Although we do not report. results here. these designed and untrained constant embeddings worked reasonably well\nUnfortunately, the decomposition of hy into this predication structure need not be axis aligned Rather than posit an axis-aligned concatenation we posit that the hidden vector space H is a possibly non-aligned direct sum\nH=SE\nwhere S is a subspace of \"statement vectors' and E is an orthogonal subspace of \"entity pointers\". Each hidden state vector h E H then has a unique decomposition as h = +e for E S and e E E This is equivalent to saying that the hidden vector space H is some rotation of a concatenation of. the vector spaces S and E.\nWe now present empirical evidence for this decomposition structure. 
We first note that the predi cation decomposition implies that e,(a) ' ht equals eo(a)' eo(at). This suggests the following for some fixed positive constant c.\nAssuming the predication structure we have c = eo(a)|[2. We note that if different entity constants. had different norms then answers would be biased toward occurrences of the constant symbol of larger norm. But we need to have that all constant symbols are equivalent. We note that (??) gives.\nargmax eo argmax eo(a)' Qtht a a t argmax Qt eo(a) ' ht = argmax Qt a a t tER(a,p)\nCNN Dev CNN Test samples mean variance samples mean variance (a)Tht, tE R(a,p) 222,001 10.66 2.26 164,746 10.70 2.45 (a)ht, tR(a,p) 93,072,682 -0.57 1.59 68,451,660 -0.58 1.65 (a)'ht1, tE R(a,p) 443,878 2.32 1.79 329,366 2.25 1.84 osine(q,ht),a t E R(a,p) 222,001 0.22 0.11 164,746 0.22 0.12 osine(q, eo(a)), Va 103,909 -0.03 0.04 78,411 -0.03 0.04\n0 210 100 180 150 200 120 90 300 60 30 400 0 -30 500 0 100 200 300 400 500\nFigure 1: Plot of e,(a;)' e,(a;) from Stanford Reader trained on CNN dataset. Off-diagonal values have mean 25.6 and variance 17.2 while diagonal values have mean 169 and variance 17.3.\nThis interpretation is exactly correct if some of the dimensions of the vector space correspond tc predicates, is a 0-1 vector representing a conjunction predicates, and is also 0-1 on these di mensions indicating whether a predicate is implied by the context. Of course in practice one expect. the dimension to be smaller than the number of possible predicates.."}, {"section_index": "5", "section_name": "5 POINTER ANNOTATION READERS", "section_text": "It is of course important to note that anonymization provides reference information - anonymiza. tion assumes that one can determine coreference so as to replace coreferent phrases with the same entity identifier. Anonymization allows the reference set R(a, p) to be directly read off of the pas-. sage. Still, an aggregation reader must learn to recover this explicit reference structure..\nAggregation readers can have difficulty when anonymization is not done. The Stanford Reade. achieves just better than 45% on Who-did-What dataset while Attention Sum Reader can get neai 60%. But if we anonymize the Who-did-What dataset and then re-train the Stanford Reader, the accuracy jumps to near 65%. Anonymization has two effects. First, it greatly reduces the number. of output word e,(a) to be learned - we need only learn output embeddings for the relatively small number of entity identifiers needed. Second, anonymization suppresses the semantics of the reference phrases and leaves only a semantics-free entity identifier. This suppression of semantics may facilitate the separation of the hidden state vector space H into a direct sum S E with q E S. and eo(a) E E.\nWe can think of anonymization as providing additional linguistic input for the reader - it explicitly marks positions of candidate answers and establishes coreference. A natural question is whether\nAs another testable predication we note that the posited decomposition of the hidden state vectors implies\nq(h+ eo(a))=qh,\nOC This equation is equivalent to q' eo(a) = 0. Experimentally, however, we cannot expect q' eo(a). to be exactly zero and (17) seems to provides a more experimentally meaningful test. Empirical evidence for (17) is given in the fourth and fifth row of table1 The fourth row measures the cosine. of the angle between the question vector q and the hidden state ht averaged over passage positions. t at which some entity identifier occurs. 
The fifth row measures the cosine of the angle between q. and eo(a) averaged over the entity identifiers a.\nA question asks for a value of x such that a statement I[x] is implied by the passage. For a question I we might even suggest the following vectorial interpretation of entailment..\nI[x] implies |x iff q'y>1\nOne-Hot Pointer Annotation: The Stanford Reader involves both input embeddings of words anc output embeddings of entity identifiers. In the Who-did-What dataset each problem has at most five. choices in the multiple choice answer list. This means that we need only five entity identifiers and we can use a five dimensional one-hot vector representation for answer identifiers. If an answer choice exists at position t in the passage let it be the index of that choice on the choice list. If nc choice occurs t take it to be zero. Take e'(i) to be the zero vector if i = 0 and otherwise to be the one-hot vector for i. We defined pointer annotation to be the result of adding e'(it) as additional features to the input embedding.\nWe then define a one-hot pointer reader by designates five dimensions of the hidden state as indica tors of the answer and take the probability of choice i to be defined as.\np(i|d, q) = softmax 0i 2\nGeneral Pointer Annotation: In the CNN dataset there are roughly 500 entity identifier and a one hot representation is not desirable. Instead we can let e'(i) be a fixed set of \"pointers vectors'' - vectors distributed widely on the unit sphere so that for i j we have that e'(i) ' e'(j) is small. We. again use (18) but replace (19) with\np(i[d, q) = softmax [0, e'(i)]' c\nIn the general pointer reader the pointer embeddings e'(i) are held fixed and not trained\nBinary feature: whether current token occurs in the question Real value feature: the frequency of current token in the passage.\nTable 2: Accuracy on WDW dataset. All these results are based on single model. Results for neural readers other than NSE are based on replications of those systems. All models were trained on the. relaxed training set which uniformly yields better performance than the restricted training set. The first group of models are explicit reference models and the second group are aggregation models. + indicates anonymization with better reference identifier..\nthis information can be provided without anonymization by simply adding additional coreference features to the input. Here we evaluate two architectures inspired by this question. This evaluation is done on the Who-did-What dataset which is not anonymized. In each architecture we add features to the input to mark the occurrences of candidate answers. These models are simpler than the Stanford reader but perform comparably. This comparable performance in table 2|further supports our analysis of logical structure in aggregation readers.\ne(wt) =[e(wt),e'(it)\nLinguistic Features. Each model can be modified to include additional input features for each input token in the question and passage. More specifically we can add the following features to the word embeddings.\nThe performance of various recent readers on CNN, DailyMail and CBTest are summarized in Table 3] For purposes of comparison we only present results on single models. Model ensembles generally. perform better than single models but are require more computation to train making comparisons more difficult. More experimental details can be found in appendix..\nTable 3: Accuracy on CNN, DailyMail, CBTest NE and CBTest CN. All results are based on a singl model. 
Results other than those involving pointer or linguistic feature annotations are taken fron. the original publications. Readers in the first group are explicit reference readers. Readers in th second group are aggregation readers. The final reader defies this classification..\nIn table 3] all the high-performance approaches are proposed very recently. Blue color represents the second highest accuracy and bold font indicates the state-of-the-art accuracy. Note that the result of Stanford Reader we report here is the one without relabeling since relabeling procedure doesn't follow the protocol used in Hermann et al.(2015)."}, {"section_index": "6", "section_name": "1 DISCUSSION", "section_text": "Explicit reference architectures rely on reference resolution - a specification of which phrases in the given passage refer to candidate answers. Our experiments indicate that all existing readers ben- efit greatly from this externally provided information. Aggregation readers seem to demonstrate a stronger learning ability in that they essentially learn to mimic explicit reference readers by iden tifying reference annotation and using it appropriately. This is done most clearly in the pointer reader architectures. Furthermore, we have argued for, and given experimental evidence for, an in terpretation of aggregation readers as learning emergent logical structure - a factoring of neural representations into a direct sum of a statement (predicate) representation and an entity (argument) representation.\nReal value feature: position of the token's first occurrence in the passage as a percentage. of the passage length. Binary feature: whether the text surrounding token match the text surrounding the place holder in the question. We only have features for matching both left and right one word. One hot vector: Part-of-speech (POS) tagging. We only use such feature on CBT dataset.. One hot vector: Name Entity Recognition (NER). We only use such feature on CBT dataset.\nAt a very high level our analysis and experiments support a central role for reference resolution in reading comprehension. Automating reference resolution in neural models, and demonstrating its value on appropriate datasets, would seem to be an important area for future research..\nOf course there is great interest in \"learning representations\"'. The current state of the art in reading comprehension is such that systems still benefit from externally provided linguistic features includ ing externally annotated reference resolution. It would seem desirable to develop fully automatec neural readers that perform as well as readers using externally provided annotations. It is of course important to avoid straw man baselines when making any such claim."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thanks the support of NVIDIA Corporation with the donation of GPUs used for this work"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Danqi Chen, Jason Bolton, and Christopher D Manning. A thorough examination of the cnn/daily mail reading comprehension task. In Proceedings of the ACL, 2016.\nZewei Chu, Hai Wang, Kevin Gimpel, and David McAllester. Broad context language modeling reading comprehension. Arxiv, 2016.\nYiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over attention neural networks for reading comprehension. Arxiv, 2016.\nPaperno. 
Denis, Germn Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernndez. The lambada dataset: Word prediction requiring a broad discourse context. In Proceedings of the ACL, 2016.\nBhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. Arxiv, 2016\nKarm Moritz Hermann, Tom Kocisk, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su leyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), 2015.\nDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceeding. of the 3rd International Conference on Learning Representations, 2015..\nWe are hesitant to make any more detailed comments on the differences between the architectural details of the readers discussed in this paper. The differences in scores between the leading read- ers are comparable to differences in scores that can be achieved by aggressive search over meta parameters or the statistical fluctuations in the quality of models learned by noisy statistical train- ing procedures. More careful experiments over a longer period of time are needed. More dramatic improvements in performance would of course provide better support for particular innovations.\nTsendsuren Munkhdalai and Hong Yu. Reasoning with memory augmented neural networks foi language comprehension. Arxiv, 2016.\nTakeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Who did what: A large-scale person-centered cloze dataset. In Proceedings of the EMNLP, 2016.\nPranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 1oo,ooo+ questions for machine comprehension of text. In Proceedings of International Conference on Empirical Methods in Natural Language Processing, 2016.\nPascanu Razvan, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neura networks. In Proceedings of 1CML, pp. 1310-1318, 2013.\nAndrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dy namics of learning in deep linear neural networks. Arxiv, 2013.\nYelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. Reasonet: Learning to stop reading in machine comprehension. Arxiv, 2016..\nAlessandro Sordonif, Phillip Bachmanf, and Yoshua Bengio. Iterative alternating neural attention for machine reading. Arxiv, 2016..\nAdam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. Arxiv, 2016.\nBart van Merrienboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel : Frameworks for deep learning. Arxiv, 2015.\nDirk Weissenborn. Separating answers from queries for neural reading comprehension. Arxiv, 2016\nJason Weston. Sumit Chopra. and Antoine Bordes\nJason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merrinboer, Armand Joulin, and Tomas Mikolov. Towards ai complete question answering: A set of prerequisite toy tasks. In Proceedings of the 4th International Cc onference on Learning Representations. 2016\nSainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks\nFre de ric Bastien. Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud. Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. 
NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2012.

For the Stanford Reader and the One-Hot Pointer Reader, we simply follow the Stanford Reader's settings and did not tune them on each dataset. For the Gated Attention Reader, the lookup table was randomly initialized with a uniform distribution over the interval [-0.2, 0.2] on the CBT dataset; on CNN & DailyMail, the lookup table was initialized with Glove vectors (Pennington et al., 2014) trained on the train & validation set (we found that the pre-trained word vectors do not improve the accuracy but do accelerate training). On the WDW dataset, the lookup table was initialized with pre-trained Glove vectors.² It should be noted that if we initialize the lookup table with the pre-trained Glove vectors from http://nlp.stanford.edu/data/glove.6B.zip, it slightly boosts the accuracy compared with using Glove vectors trained on the train & validation set. Input-to-hidden-state weights were initialized with random orthogonal matrices (Saxe et al., 2013) and biases were initialized to zero. Hidden-to-hidden-state weights were initialized with identity matrices so that the model can retain information over longer spans. To compute the attention weights, we use a_t = h_t^T W_a q and initialize W_a with a random uniform distribution. We also used gradient clipping (Pascanu et al., 2013) with a threshold of 10 and batches of size 32.

During training we randomly shuffled all examples within each epoch. To speed up training, we always pre-fetched 10 batches worth of examples and sorted them according to document length, as done by Kadlec et al. (2016). When training on the CNN, DailyMail and WDW (anonymized case) datasets, we randomly reshuffled the entity identifiers to match the procedure proposed in Hermann et al. (2015).

During training we evaluated the accuracy after each epoch and stopped training when the accuracy on the validation set started decreasing. We tried limiting the vocabulary to the most frequent tokens but did not observe any performance improvement compared with using all the distinct tokens as the vocabulary. Since part of our experiments needs to check word embedding assignment issues, we finally use all the distinct tokens as the vocabulary. To find the optimal embedding and hidden state dimensions, we tried several different combinations; the optimal values and the corresponding training statistics of the Gated Attention Readers are summarized in Table 4. When anonymizing the Who-did-What dataset, we can either use simple string matching to replace the answer in the question and story with an entity identifier, or we can use a named entity recognition (NER) tool³ to detect name entities and then replace the answer name entities in the question and story with entity identifiers; we found the latter generally brings a 2% improvement compared with simple string matching. More experimental details can be found in the code.

Table 4: Training details on the different datasets.

Dataset        Embedding   Hidden State   Time Per Epoch   Trained Epochs   K
CNN            128         256            18 hours         5                3
DailyMail      128         256            2 days           5                3
WDW Relaxed    200         384            2.5 hours        8                1
CBT NE         384         384            1 hour           8                1
CBT CN         384         256            1 hour           7                1

We randomly choose one article from the CNN dataset and show softmax(e_o(a)^T h_t) for t in [0, |p|] for each answer candidate a in Figures 2 to 6. Red color indicates larger probability, orange indicates smaller probability, and the remaining tokens have very low probability that can be ignored.

² http://nlp.stanford.edu/data/glove.6B.zip
³ http://nlp.stanford.edu/software/CRF-NER.shtml
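To make the quantities visualized in these figures concrete, the following is a minimal NumPy sketch (our own illustration, not the code used for the experiments) of how such heat maps can be computed. Here `h` stands for the per-token hidden states of the passage, `q` for the question representation, `W_a` for the bilinear attention matrix, and `e_a` for an output embedding e_o(a) of an answer candidate; all names and shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def attention_heat_map(h, q, W_a):
    # h: (T, d) hidden states over passage tokens, q: (d,) question vector
    # attention weights a_t = softmax_t(h_t^T W_a q), as in Figures 7-9
    return softmax(h @ (W_a @ q))

def candidate_heat_map(h, e_a):
    # per-candidate occurrence map softmax_t(e_o(a)^T h_t), as in Figures 2-6
    return softmax(h @ e_a)

# toy usage with random values (W_a = identity recovers the Attention Sum case)
T, d = 12, 8
rng = np.random.RandomState(0)
h, q, e_a = rng.randn(T, d), rng.randn(d), rng.randn(d)
print(attention_heat_map(h, q, np.eye(d)).round(3))
print(candidate_heat_map(h, e_a).round(3))
```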
From those figures, we can see that our assumption that e_o(a) is used to pick up the occurrences of candidate a is reasonable.

We also randomly choose one article from the CNN dataset and show the attention map softmax(q^T W_a h_t) for the different readers (in the Attention Sum and Gated Attention Readers, W_a is the identity matrix). From Figures 7, 8 and 9, we can see that the different readers essentially put the weights on the entity identifiers.

The passage and query used for Figures 2 to 6 are the following:

@entity0 ( @entity1 ) six survivors of the @entity0 kosher supermarket siege in january are suing a @entity5 media outlet for what they call dangerous live broadcasting during the hostage - taking . according to @entity0 prosecutor 's spokeswoman @entity10 , the lawsuit was filed march 27 and a preliminary investigation was opened by the prosecutor 's office wednesday . the media outlet , @entity1 affiliate @entity16 , is accused of endangering the lives of the hostages , who were hiding in a cold room during the attack , by broadcasting their location live during the siege . @entity23 in a statement friday said one of its journalists " mentioned only once the presence of a woman hidden inside the @entity27 , on the basis of police sources on the ground . " " immediately , the chief editor felt that this information should not be released . it therefore has subsequently never been repeated on air or posted on - screen . @entity16 regrets that the mention of this information could cause concern to the hostages , as well as their relatives , that their lives were in danger , " the statement said . gunman @entity47 , also suspected in the slaying of a police officer , stormed the @entity27 @entity51 supermarket on january 9 , killing four people and taking others hostage . he was killed in the police operation to end the siege . a 24 - year - old supermarket employee , @entity57 - born @entity56 , was hailed as a hero afterward when it emerged that he had risked his life to hide 15 customers from @entity47 in the cold room . the hostage - taking was the culmination of three days of terror in @entity0 that began with the january 7 shooting of 12 people at the offices of @entity5 satirical magazine @entity69 . the two brothers blamed for that attack , @entity72 and @entity73 , were killed on january 9 after a violent standoff at an industrial site . the terror attacks claimed the lives of 17 people and put @entity5 on a heightened state of alert . @entity1 's @entity80 reported from @entity0 , and @entity81 wrote from @entity82 . @entity1 's @entity83 contributed to this report .

query: they hid in a cold room during the attack in @entity0 by gunman @placeholder

Figure 2: Heat map of softmax(e_o(a)^T h_t) when a = entity0.

Figure 4: Heat map of softmax(e_o(a)^T h_t) when a = entity16.

Figure 6: Heat map of softmax(e_o(a)^T h_t) when a = entity47.

The passage and query used for Figures 7 to 9 are the following:

( @entity3 ) suspected @entity2 militants this week attacked civilians inside @entity5 for the first time in a month , killing at least 16 villagers , a military spokesman told @entity3 saturday . six attackers were killed by @entity5 forces , said maj. @entity10 , an operations officer with a special military unit set up to fight @entity2 . the attackers came thursday " in the hundreds ... torched @entity14 village in the @entity15 , " he said . @entity14 is a village that borders @entity17 and has been identified as a recruiting ground for @entity2 . regional gov. @entity19 said the insurgents have been attacking border villages in @entity5 in search of supplies . @entity5 troops retook cattle that was stolen by the attackers in @entity14 , @entity10 said . the last attack in @entity5 by the @entity29 - based militants was march 10 , when the assailants struck the locality of @entity32 in a failed attempt to overrun a military base . @entity2 , whose name translates as " @entity44 education is sin , " has been waging a years - long campaign of terror aimed at instituting its extreme version of @entity42 law in @entity29 . @entity2 's tactics have intensified in recent years , from battling @entity29 government soldiers to acts disproportionately affecting civilians -- such as raids on villages , mass kidnappings , assassinations , market bombings and attacks on churches and unaffiliated mosques . much of this violence has taken place in @entity29 , but neighboring countries -- @entity5 included -- have also been hit increasingly hard . journalist @entity61 in @entity63 , @entity5 , contributed to this report .

query: @placeholder is based in @entity29 but has attacked across the border of several neighbor

Figure 8: Heat map of the attention weights softmax(q^T W_a h_t) for the Gated Attention Reader.

Figure 9: Heat map of the attention weights softmax(q^T W_a h_t) for the Attention Sum Reader."}]
r1LXit5ee | [{"section_index": "0", "section_name": "EPISODIC EXPLORATION FOR DEEP DETERMINISTIC POLICIES FOR STARCRAFT MICROMANAGEMENT", "section_text": "Nicolas Usunier*, Gabriel Synnaeve*, Zeming Lin, Soumith Chintala Facebook AI Research

{usunier,gab,zlin,soumith}@fb.com
"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We consider scenarios from the real-time strategy game StarCraft as benchmarks for reinforcement learning algorithms. We focus on micromanagement, that is, the short-term, low-level control of team members during a battle. We propose several scenarios that are challenging for reinforcement learning algorithms because the state-action space is very large, and there is no obvious feature representation for the value functions. We describe our approach to tackle the micromanagement scenarios with deep neural network controllers from raw state features given by the game engine. We also present a heuristic reinforcement learning algorithm which combines direct exploration in the policy space and backpropagation. This algorithm collects traces for learning using deterministic policies, which appears much more efficient than, e.g., epsilon-greedy exploration. Experiments show that this algorithm allows to successfully learn non-trivial strategies for scenarios with armies of up to 15 agents, where both Q-learning and REINFORCE struggle."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "StarCraft is a real-time strategy (RTS) game in which each player must build an army and control individual units to destroy the opponent's army. As of today, StarCraft is considered one of the most difficult games for computers, and the best bots only reach the level of high amateur human players (Churchill, 2015). The main difficulty comes from the need to control a large number of units in a partially observable environment, with very large state and action spaces: for example, in a typical game, there are at least 10^1685 possible states, whereas the game of Go has about 10^170 states. Because of simultaneous and durative actions, StarCraft provides an ideal environment to study the control of many agents at large scale, and an opportunity to define tasks of increasing difficulty, from micromanagement, which concerns the short-term, low-level control of fighting units during battles, to long-term strategic and hierarchical planning under uncertainty. While building a controller for the full game based on machine learning is out of reach with current methods, we propose, as a first step, to study reinforcement learning (RL) algorithms in micromanagement scenarios in StarCraft.

Both the work on Atari games (Mnih et al., 2013) and the recent Minecraft scenarios studied by researchers (Abel et al., 2016; Oh et al., 2016) focus on the control of a single agent, with a fixed, limited set of actions. Coherently controlling multiple agents (units) is the main challenge of reinforcement learning for micromanagement tasks. This comes with two main challenges. The first one is to efficiently explore the large action space. The implementation of a coherent strategy requires the units to take actions that depend on each other, but it also implies that any small alteration of a strategy must be maintained for a sufficiently long time to properly evaluate the long-term effect of that change.
In contrast to this requirement of consistency in exploration, the reinforcement learning algorithms that have been successful in training deep neural network policies, such as Q-learning (Watkins & Dayan, 1992; Sutton & Barto, 1998) and REINFORCE (Williams, 1992; Deisenroth et al., 2013), perform exploration by randomizing actions. In the case of micromanagement, randomizing actions mainly disorganizes the units, which then rapidly lose the battle without collecting relevant feedback. The second challenge of micromanagement is that there is no obvious way to parameterize the policy given the state and the actions, because actions are relations between entities of the state, e.g. (unit A, attack, unit B) or (unit A, move, position B), and are not restricted to a few constant symbols such as "move left" or "move right". Multi-class architectures, such as those used for Atari games (Mnih et al., 2015), cannot evaluate actions that are parameterized by an entity of the state.

The contribution of this paper is twofold. First, we propose several micromanagement tasks from StarCraft (Section 3), then we describe our approach to tackle them and evaluate well-known reinforcement learning algorithms on these tasks (Section 4). In particular, we present an approach of greedy inference to break out the complexity of taking the actions at each step. We also describe the features used to jointly represent states and actions, as well as a deep neural network model for the policy (Section 5). Second, we propose the zero order (ZO) reinforcement learning algorithm to address the difficulty of exploration in these tasks (Section 6). Compared to algorithms for efficient direct exploration in parameter space, the novelty of our algorithm is to explore directly in policy space by mixing parameter randomization and plain gradient descent.

Multi-agent reinforcement learning has been an active area of research (Busoniu et al., 2008). Most of the focus has been on learning agents in competitive environments with adaptive adversaries (Littman, 1994; Hu & Wellman, 1998; Tesauro, 2003). Some work has looked at learning control policies for individual agents in a collaborative setting with communication constraints (Tan, 1993; Bernstein et al., 2002), with applications such as soccer robot control (Stone & Veloso, 1999), and methods such as hierarchical reinforcement learning for communicating high-level goals (Ghavamzadeh et al., 2006), or learning an efficient communication protocol (Sukhbaatar et al., 2016). While the decentralized control framework is most likely relevant for playing full games of StarCraft, here we avoid the difficulty of imperfect information, therefore we use the multi-agent structure only as a means to structure the action space. As in the approach of Maes et al. (2009) with reinforcement learning for structured output prediction, we use a greedy sequential inference scheme at each time frame: each unit decides on its action based solely on the state combined with the actions of units that came before it in the sequence.

Algorithms that have been used to train deep neural network controllers in reinforcement learning include Q-learning (Watkins & Dayan, 1992; Mnih et al., 2015), the method of temporal differences (Sutton, 1988; Tesauro, 1995), policy gradient and their variants (Williams, 1992; Deisenroth et al., 2013), and actor/critic architectures (Barto et al., 1983; Silver et al., 2014; 2016).
Except for the deterministic policy gradient (DPG) (Silver et al., 2014), these algorithms rely on randomizing the actions at each step for exploration. DPG collects traces by following deterministic policies that remain constant throughout an episode, but can only be applied when the action space is continuous. Hausknecht & Stone (2015) apply DPG with parameterized action spaces, in which discrete actions (e.g. "move") are parameterized by continuous variables (e.g. the target location). Our work is most closely related to works that explore the parameter space of policies rather than the action space. Several approaches have been proposed that randomize the parameters of the policy at the beginning of an episode and run a deterministic policy throughout the entire episode, borrowing ideas from gradient-free optimization, e.g. (Mannor et al., 2003; Sehnke et al., 2008; Szita & Lorincz, 2006). However, these algorithms rely on gradient-free optimization for all parameters, which does not scale well with the number of parameters. Osband et al. (2016b) describe another type of algorithm where the parameters of a deterministic policy are randomized at the beginning of an episode, and learn a posterior distribution over the parameters as in Thompson sampling (Thompson, 1933). Their approach was proved to be efficient, but applies only to linear functions and scales quadratically with the number of parameters. The bootstrapped deep Q-networks (BDQN) (Osband et al., 2016a) are a practical implementation of the ideas of (Osband et al., 2016b) for deep neural networks. However, BDQN still performs exploration in the action space at the beginning of the training, and there is no randomization of the parameters. BDQN keeps several versions of the last layer of the deep neural network, and selects a single version per episode to perform Q-learning updates, while it ensembles all such "heads" at test time. In contrast, we randomize the parameters of the last layer once at the beginning of each episode.

In the context of RTS micromanagement, a large spectrum of AI approaches have been studied. There has been work on Bayesian fusion of hand-designed influence maps (Synnaeve & Bessiere, 2011), fast heuristic search in a simplified simulator (Churchill et al., 2012), and even evolutionary optimization (Liu et al., 2014). Overmind (Klein et al., 2010) used threat-aware A* pathing and RL-tuned potential fields. Closer to this work, Marthi et al. (2005) employ concurrent hierarchical Q-learning (the units' Q-functions are combined at the group level), and Wender & Watson (2012) successfully applied tabular Q-learning (Watkins & Dayan, 1992) and SARSA (Sutton & Barto, 1998), with and without experience replay ("eligibility traces"), with a reward similar to the one used in several of our experiments. However, the action space was reduced to pre-computed "meta-actions": fight and retreat, and the features were hand-crafted. None of these approaches are used as is in existing StarCraft bots, for a lack of robustness, completeness (both can be attributed to hand-crafting), or computational efficiency. For a more detailed overview of AI research on StarCraft, the reader should consult (Ontanon et al., 2013).

We focus on micromanagement, which consists of optimizing each unit's actions during a battle. The tasks presented in this paper represent only a subset of the complexity of playing StarCraft. As StarCraft is a real-time strategy (RTS) game, actions are durative (they are not fully executed on the next frame), and there are approximately 24 frames per second.
As we take an action for each unit every few frames (e.g. every 9 frames here; more details can be found in Appendix D), we only consider actions that can be executed in this time frame, which are: the 8 move directions, holding the current position, and an attack action for each of the existing enemy units. During training, we always control all units from one side, and the opponent (built-in AI in the experiments) is attacking us:

m5v5 is a task in which we control 5 Marines (ranged ground unit), against 5 opponent Marines. A good strategy here is to focus fire, e.g. order all Marines to attack a single opponent.
m15v16: same as above, except we have 15 Marines and the opponent has 16. A good strategy here is also to focus fire, while avoiding "overkill": 7 Marines attacking simultaneously kill an opponent in a single volley, so using more Marines to simultaneously target an enemy causes attacks to be wasted, resulting in "overkill".
dragoons_zealots: symmetric armies with two types of units: 3 Zealots (melee ground unit) and 2 Dragoons (ranged ground unit). Here a strategy requires to focus fire, and if possible to 1) not spend too much time having the Zealots walk instead of fight, 2) focus the Dragoons, who die more easily but deal more damage.
w15v17: we control 15 Wraiths (ranged flying unit) while the opponent has 17. Flying units have no "collision", so multiple units can occupy the same tile and reach their target more quickly. It only takes 6 Wraiths to kill an opponent in a single volley, hence it is important not to "overkill" on this map.
other mXvY or wXvY scenarios: the 4 scenarios above are the ones on which we train our models, but they can learn strategies that overfit a given number of units, so we have similar scenarios but with different numbers of units (on each side).

For all these scenarios, a human expert can win 100% of the time against the built-in AI, by moving away units that are hurt (thus conserving firepower) and with proper focus firing.

Formalism The environment is approximated as a Markov Decision Process (MDP), with a finite set of states denoted by S. Each state s has a set of units U(s), and a policy has to issue a command c ∈ C to each of them. The set of commands is finite. An action in that MDP is represented as a sequence of (unit, command) pairs a = ((u_1, c_1), ..., (u_{|s|}, c_{|s|})) such that {u_1, ..., u_{|s|}} = U(s), where |s| denotes the number of units in state s, and A(s) = (U(s) × C)^{|s|} is the set of actions in state s. We denote by p(s'|s, a) the transition probability of the MDP and by ρ_1 the probability distribution of initial states. When there is a transition from state s_t to a state s_{t+1}, the agent receives the reward r_{t+1} = r(s_t, s_{t+1}), where r : S × S → R is the reward function. We assume that commands are received and executed concurrently, so that the order of commands in an action does not alter the transition probabilities. Finally, we consider the episodic reinforcement learning scenario, with finite horizon T and undiscounted rewards. The learner has to learn a (stochastic) policy π(a|s), which defines a probability distribution over actions in A(s) for every s ∈ S. The objective is to maximize the expected sum of rewards E[Σ_{t=1}^{T-1} r(s_t, s_{t+1})], where the expectation is taken with respect to s_1 ∼ ρ_1, s_{t+1} ∼ p(·|s_t, a_t) and a_t ∼ π(·|s_t).
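To make the formalism concrete, an action can be encoded as a sequence of (unit, command) pairs. The following minimal Python sketch is our own illustration (the types and names are assumptions, not the authors' code):

```python
from dataclasses import dataclass
from typing import List, Tuple

# Commands: one of the 8 move directions, hold position, or attack an enemy unit.
MOVE_DIRS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

@dataclass(frozen=True)
class Command:
    kind: str              # "move", "hold" or "attack"
    target: object = None  # direction for "move", enemy unit id for "attack"

# An action assigns exactly one command to every friendly unit in the state:
# a = ((u_1, c_1), ..., (u_|s|, c_|s|)) with {u_1, ..., u_|s|} = U(s).
Action = Tuple[Tuple[int, Command], ...]

def make_action(unit_ids: List[int], commands: List[Command]) -> Action:
    assert len(unit_ids) == len(commands)
    return tuple(zip(unit_ids, commands))

a = make_action([3, 7], [Command("attack", target=12), Command("move", "NE")])
print(a)
```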
We expand on this in Appendix[B\nNormalized cumulative rewards Immediate rewards are necessary to provide feedback that guide exploration. In the case of micromanagement, a natural reward signal is the difference betwee damage inflicted and incurred between two states. The cumulative reward over an episode is th total damage inflicted minus the total damage incurred along the episode. However, the scale of thi quantity heavily depends on the number of units (both our units and enemy units, which significantl decreases along an episode) that are present in the state. Without proper normalization with respec to the number of units in the current state z(s), learning will be artificially biased towards the larg immediate rewards at the beginning of the episode. Then, instead of considering cumulative reward from a starting state st, we define normalized cumulative rewards nt..T as the following recursiv computation over an episode:\nnt+1..1 Vt E{1,...,T-1}, nt..T\nWe use the sum of maximum hit points of all units in the state s' as normalization factor z(st), which implies that nt..T E [--0.5, 0.5]. One way to look at this normalization process is to consider that the reward is rt+1 plays the role of an (adaptive) discount factor, which is chosen to be at z(st), and z(st+1) Z(st) most 1, and strictly smaller than 1 when the number of units change\nFor policy gradient and our algorithm described in section[6. we directly use nt..T. We describe i Appendix[C|how we adapted the update rule for Q-learning.."}, {"section_index": "3", "section_name": "FEATURES AND MODEL FOR MICROMANAGEMENT IN STARCRAFT", "section_text": "We describe in this section the features and the neural network architecture we use to parameterize the policy. Since we consider the greedy inference described in the previous section, the underlying MDP will contain states of the form s = (s, a1..k, uk+1), where: s is the current state of the game given by the game engine, k is the number of units which already \"played' at this frame, a1..k is the sequence of the k pairs (unit, command) that correspond to the k commands the have already been chosen, and finally uk+1 is the unit to play. For each unit, we consider two types of commands (1) attack a given enemy unit, and (2) move to a specific position. In order to reduce the number oi possible move commands, we only consider 9 move commands, which either correspond to a move in one of the 8 basic directions, or staying at the same position.\nThere are several challenges to represent states and actions in RTS games.\nThe number of units and actions are not bound a priori and varies in time Commands must be evaluated in context of all currently executing command Attack actions must resolve the reference to its target\nexecuted concurrently, so that the order of commands in an action does not alter the transition. probabilities. Finally, we consider the episodic reinforcement learning scenario, with finite horizon. T and undiscounted rewards. The learner has to learn a (stochastic) policy (a|s), which defines. a probability distribution over actions in A(s) for every s E S. The objective is to maximize the where the expectation is taken with respect to s1 ~ 01, st+1 p(.at. st) and at ~ (.st\nTo address the first two challenges, we adopt an approach based on a joint encoding of states and commands. Denoting by s = (s, a1..k, uk+1) the current state of the greedy MDP and c a\nTable 1: Unit features as given by the game engine, their abbreviated name and their type: cat. 
"}, {"section_index": "3", "section_name": "FEATURES AND MODEL FOR MICROMANAGEMENT IN STARCRAFT", "section_text": "We describe in this section the features and the neural network architecture we use to parameterize the policy. Since we consider the greedy inference described in the previous section, the underlying MDP will contain states of the form s~ = (s, a_{1..k}, u_{k+1}), where: s is the current state of the game given by the game engine, k is the number of units which already "played" at this frame, a_{1..k} is the sequence of the k pairs (unit, command) that correspond to the k commands that have already been chosen, and finally u_{k+1} is the unit to play. For each unit, we consider two types of commands: (1) attack a given enemy unit, and (2) move to a specific position. In order to reduce the number of possible move commands, we only consider 9 move commands, which either correspond to a move in one of the 8 basic directions, or staying at the same position.

There are several challenges to represent states and actions in RTS games:

The number of units and actions is not bounded a priori and varies over time.
Commands must be evaluated in the context of all currently executing commands.
Attack actions must resolve the reference to their target.

Table 1: Unit features as given by the game engine, their abbreviated name and their type: "cat." means the feature is categorical and 1-hot encoded; real-valued features come with their re-scaling constant.

hit points (hp, ∈ R, /20)     shield (shield, ∈ R, /20)                cooldown (cd, ∈ R, /10)                is enemy (nmy, bool)              unit type (type, cat.)
position (pos, ∈ R^2, /20)    previous target (tgt_pos, ∈ R^2, /20)    chosen target (next_pos, ∈ R^2, /20)   prev. cmd type (prev_cmd, cat.)   chosen cmd type (next_cmd, cat.)

To address the first two challenges, we adopt an approach based on a joint encoding of states and commands. Denoting by s~ = (s, a_{1..k}, u_{k+1}) the current state of the greedy MDP and c a candidate command, we learn the parameters w and θ of a (state, command) value function of the form f(s~, c) = <w, Ψ_θ(s~, c)>, where w ∈ R^d and Ψ_θ(s~, c) is the output of an embedding network that maps (state, command) pairs to R^d, with parameters θ. In Q-learning and our algorithm presented in the next section, we directly use f as the state/action value function, whereas in policy gradient the probability to take command c in state s~ is given by the Gibbs distribution over f(s~, c) with temperature τ: π(c|s~) ∝ exp(f(s~, c)/τ).

To tackle the last challenge, we identify units with their (x, y) coordinates in the map. We add two fields to the unit features that contain the coordinates of their corresponding target, or their own location if they do not have a target. To evaluate a command c = (<actor unit>, <attack or move>, <target>), we compute pairwise distances between the actor and the target. Note that with this kind of representation, the input of the embedding network Ψ_θ is a joint representation of the state s~ and the command c to evaluate. A complete list of unit features is given in Table 1. Hit points are the remaining life points of the unit, shield corresponds to additional hit points that are not affected by armor and regenerate slowly, and cooldown is the time to wait until damage can be inflicted.

The full scoring approach is depicted in Figure 1. In our approach, a state is represented as a list of units. The raw features are transformed by a featurizer that takes the 3 unit features (pos, tgt_pos and next_pos) and computes their distances to the position of the acting unit and of its target (pos_c and tgt_c). All 4 categorical variables are passed through a 10-dimensional linear embedding (not shown in the figure). In addition to the 4 real-valued unit features, we have a 40-dimensional feature vector per unit as input to our network.

Each unit feature vector then goes through the unit-level embedding network. We then concatenate the max and mean poolings across units with an embedding of the command type. The resulting 210-dimensional vector is passed through a final state-command embedding network. Both the unit-level and state-command embedding networks have a hidden dimension of 100, and ELU nonlinearities in the intermediate layer (Clevert et al., 2015). We use tanh for the final unit-level network nonlinearity, and a ReLU for the final state-command network nonlinearity. We did not extensively experiment with the structure of the network, but we found the max pooling and the tanh nonlinearity to be particularly important.

Figure 1: Representation of the joint (state, command) featurization and scoring process.

The advantage of this approach is that it relies on raw features only, and does not require any encoding of the game dynamics, in contrast to previous works on RL for micromanagement (see e.g. Wender & Watson (2012)) that used domain knowledge handcrafted in the features (such as the damage inflicted by an attack). The distance-based encoding is also a simple way to represent the different relationships between units that correspond to previous/chosen attacks.
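A minimal NumPy forward pass mirroring the architecture of Figure 1; the shapes follow the text above, but the random initialization and the featurization details are illustrative assumptions (the trained parameters and the exact featurizer are not reproduced here):

```python
import numpy as np

rng = np.random.RandomState(0)

def elu(x):  return np.where(x > 0, x, np.exp(x) - 1)
def relu(x): return np.maximum(x, 0)

def linear(d_in, d_out):
    return rng.randn(d_out, d_in) * 0.1, np.zeros(d_out)

# unit-level embedding network: Linear(40x100) -> ELU -> Linear(100x100) -> tanh
W1, b1 = linear(40, 100)
W2, b2 = linear(100, 100)
# command-type embedding (10 command types -> 10 dims; the count is illustrative)
cmd_embed = rng.randn(10, 10) * 0.1
# state-command network: Linear(210x100) -> ELU -> Linear(100x100) -> ReLU
W3, b3 = linear(210, 100)
W4, b4 = linear(100, 100)
w_out = rng.randn(100) * 0.1   # the last-layer weights w of f = <w, Psi>

def score(unit_feats, cmd_type):
    # unit_feats: (n_units, 40) joint featurization of state and candidate command
    h = np.tanh(elu(unit_feats @ W1.T + b1) @ W2.T + b2)       # (n_units, 100)
    pooled = np.concatenate([h.max(axis=0), h.mean(axis=0),    # (210,)
                             cmd_embed[cmd_type]])
    psi = relu(elu(pooled @ W3.T + b3) @ W4.T + b4)            # Psi_theta(s~, c)
    return float(w_out @ psi)                                  # f(s~, c) = <w, Psi>

print(score(rng.randn(5, 40), cmd_type=2))
```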
"}, {"section_index": "4", "section_name": "COMBINING BACKPROPAGATION AND ZERO-ORDER OPTIMIZATION", "section_text": "Our preliminary experiments with Q-learning or REINFORCE made it clear that structured exploration was necessary to learn non-trivial strategies with substantial armies. The randomization of actions leads to the disorganization of the army and a rapid defeat, which prevents the algorithms from evaluating alterations to the current policy in the long run. Whereas gradient-free optimization that performs episode-based exploration (e.g. Mannor et al. (2003); Sehnke et al. (2010)) would be a valid choice, it only scales to few parameters. Preliminary experiments with direct exploration in the parameter space of the deep neural network confirmed that a more efficient scheme was needed.

The deterministic policy π_{w,θ} we consider takes action a in state s according to the rule

π_{w,θ}(s) = argmax_{a ∈ A(s)} <w, Ψ_θ(s, a)>.

We use the notation (s, a) for states and actions in an MDP for the presentation of the algorithm, even though in our experiments we use it with states s~ of the greedy MDP and unit-level commands c. Likewise, we describe the algorithm in the standard cumulative reward setup, while in our experiments we use the normalized cumulative rewards.

This form of policy naturally allows to perform structured exploration by only randomizing parts of the network. More specifically, the parameters w of the last layer affect all states and actions in a similar way along an episode. The approach we follow is then to perform gradient-free optimization on these parameters w only. Following stochastic methods for zeroth-order optimization (Kiefer et al., 1952; Nemirovsky et al., 1982; Spall, 1997; Duchi et al., 2013; Ghadimi & Lan, 2013), the gradient of a differentiable function f at x ∈ R^d can be estimated by

∇f(x) ≈ (d/δ) E[f(x + δu) u],

where the expectation is taken over the vector u sampled on the unit sphere (Nemirovsky et al., 1982, chapter 9.3). The constant d is absorbed by learning rates, so we ignore it in the following. Given a (state, action) pair (s, a) and the observed cumulative reward r_{1..t} for an episode of length t, an estimate of the gradient of the expected cumulative reward with respect to w is thus r_{1..t} u. In practice, we use (1/t) Σ_{k=1}^{t} r_{k..t} rather than r_{1..t}, which aggregates the cumulative rewards observed from each step of the episode.

The overall algorithm is described in Algorithm 1. At the beginning of an episode, a perturbation u is sampled from the unit sphere of R^d and the policy s ↦ π_{w+δu,θ}(s) is run through the entire episode (δ is a hyperparameter of the algorithm). The perturbation vector plays both the role of performing structured exploration and of providing the gradient estimate of the cumulative reward with respect to w. The algorithm performs a minibatch update at the end of the episode. In the second loop of Algorithm 1, backprop_θ(s~, a)(z) denotes the gradient with respect to θ obtained by backpropagation when the network input is (s~, a) and the backward step uses z as input.
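The zero-order estimate above can be checked numerically. The following toy sketch (our own illustration, unrelated to the StarCraft setup) averages f(x + δu) u over random unit vectors and compares the result, up to the d/δ scaling that is absorbed into learning rates, with the true gradient of a simple quadratic:

```python
import numpy as np

rng = np.random.RandomState(0)
d, delta, n = 10, 0.5, 100_000
A = rng.randn(d, d); A = A @ A.T          # f(x) = 0.5 x^T A x, so grad f(x) = A x
x = rng.randn(d)

U = rng.randn(n, d)
U /= np.linalg.norm(U, axis=1, keepdims=True)   # u uniform on the unit sphere
X = x + delta * U
fvals = 0.5 * np.sum((X @ A) * X, axis=1)       # f(x + delta * u) for each sample
est = (d / (delta * n)) * (fvals[:, None] * U).sum(axis=0)

g = A @ x
print(np.dot(est, g) / (np.linalg.norm(est) * np.linalg.norm(g)))  # cosine close to 1
```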
The deterministic exploration along an episode does not provide any update rule for the parameters of the embedding network, because the randomization is the same for every (state, action) pair. We propose a heuristic rule to update the parameters θ of the embedding network, motivated by the following remark: given a function (w ∈ R^d, v ∈ R^d) ↦ F(<w, v>) ∈ R, we have ∇_w F = F'(<w, v>) v and ∇_v F = F'(<w, v>) w. Denoting by ⊘ the term-by-term division of vectors (assuming v contains only non-zero values) and ⊙ the term-by-term multiplication operator, we obtain:

∇_v F = ((∇_w F) ⊘ v) ⊙ w.

Taking v = Ψ_θ(s~, a), the zero-order estimate of the gradient with respect to w can thus be turned into an update direction for the output of the embedding network, which is then propagated to θ by backpropagation. In practice, we use the sign of the term-by-term ratio to avoid exploding gradients due to the division by Ψ_θ(s~, a), giving the accumulated update

g(θ) ← g(θ) + backprop_θ(s~, a)(sign(u ⊘ Ψ_θ(s~, a)) ⊙ w).    (**)

Algorithm 1: Zero-order (ZO) backpropagation algorithm
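The pseudo-code of Algorithm 1 itself is not reproduced in this extraction; the following is a minimal sketch of its episode loop as we read it from the description above. The environment interface, the backprop_theta routine, and the exact scaling of the (**) update are assumptions of ours, not the authors' implementation:

```python
import numpy as np

def run_episode_zo(env, psi, backprop_theta, w, delta, lr_w, lr_theta, rng):
    """One episode of zero-order exploration and updates (illustrative).

    Assumed interfaces: env with reset/step/actions/done; psi(s, a) -> (d,)
    array computing Psi_theta(s, a); backprop_theta(s, a, z) accumulating the
    theta-gradient obtained by backpropagation with z as the output gradient."""
    d = w.shape[0]
    u = rng.randn(d)
    u /= np.linalg.norm(u)                 # one perturbation per episode
    trace, ret = [], 0.0

    s = env.reset()
    while not env.done():
        # deterministic policy with perturbed last layer: argmax <w + delta*u, Psi>
        a = max(env.actions(s), key=lambda a: float((w + delta * u) @ psi(s, a)))
        trace.append((s, a))
        s, r = env.step(a)
        ret += r                           # (normalized) cumulative reward

    # minibatch update at the end of the episode
    w += lr_w * ret * u                    # zero-order gradient estimate r * u
    for s, a in trace:
        # heuristic update (**): propagate sign(u / Psi) * w elementwise;
        # scaling by the return is one plausible reading of the algorithm
        z = np.sign(u / psi(s, a)) * w     # assumes Psi has no zero entries
        backprop_theta(s, a, lr_theta * ret * z)
    return ret
```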
The reasoning above is only an intuitive motivation of the update rule (**) of Algorithm 1, because we neglected that a single u is sampled for an entire episode. We also neglected the argmax operation that chooses the actions. Nonetheless, considering (**) as a crude approximation to some real estimator of the gradient seems to work very well in practice, as we shall see in our experiments. Finally, we use Adagrad (Duchi et al., 2011) to update the parameters of the different layers. We found the use of Adagrad's update scheme fairly important in practice, compared to other approaches such as e.g. RMSProp (Tieleman & Hinton, 2012), even though RMSProp tended to work slightly better with Q-learning or REINFORCE in our experiments.

We use Torch7 (Collobert et al., 2011) for all our experiments. We connect our Torch code and models to StarCraft through a socket server, as described in (Synnaeve et al., 2016). We ran experiments with deep Q-networks (DQN) (Mnih et al., 2013), policy gradient (PG) (Williams, 1992) (detailed in Appendix A), and zero order (ZO). We did an extensive hyper-parameter search, in particular over ε (for epsilon-greedy exploration in DQN), τ (for policy gradient's softmax), learning rates, optimization methods, RL algorithm variants, and potential annealings (detailed in Appendix E)."}, {"section_index": "5", "section_name": "7.2 BASELINE HEURISTICS", "section_text": "As all the results that we report are against the built-in AI, we compare our win rates to those of baseline heuristics. Some of these heuristics often perform the micromanagement in full-fledged StarCraft bots (Ontanon et al., 2013), and are the basis of heuristic search (Churchill et al., 2012). The baselines are the following:

random no change (rand_nc): select a random target for each of our units and do not change the target before it dies (or our unit dies). This spreads damage over several enemy units, but when there are collisions, it may make our units move a lot to be in range of their target.
noop: send no action. In this case, the built-in AI will control our units, so this exhibits the symmetry (or not!) of a given scenario. As we are always in a defensive position, with the enemy commanded to walk towards us, all other things considered equal, it should be easier for the defending built-in AI than for the attacking one. Our models cannot send a noop command.
closest (c): each of our units targets the enemy unit closest to it. This is not a bad heuristic: the enemy units' formation will make it so that several of our units have the same closest opponent unit (some form of focus firing), but not all of them (no overkill). It is also quite robust for melee units (e.g. Zealots) as it means they spend less time moving and more time attacking.
weakest closest (wc): each of our units targets the weakest enemy unit. The distance of the enemy unit to the center of mass of our units is used for tie-breaking. This may lead to overkill.
no overkill no change (nok_nc): same as the weakest closest heuristic, but registers the number of our units that target each opponent unit, choosing another target to focus fire on when keeping targeting a given unit would be overkill. Each of our units keeps firing on its target without changing (that would lead to erratic behavior). Our implementation of the "no overkill" component does not take all the dynamics of the game into account, so if our units die without doing the expected damage to their target, "no overkill" can be detrimental."}, {"section_index": "6", "section_name": "7.3 RESULTS", "section_text": "The first thing that we looked at were sliding average win rates over 400 battles during training against the built-in AI of the various models. In Figure 2, we can see that DQN is much more dependent on initialization and more variable than zero order (ZO). DQN can unlearn, reach a suboptimal plateau, or overall need a lot of exploration to start learning (high sample complexity).

Figure 2: Example of the training uncertainty (one standard deviation) on 5 different initializations for DQN (left) and zero-order (right) on the m5v5 scenario.

For all the results that we present in Tables 2 and 3, we ran the models in "test mode" by making them deterministic. For DQN we remove the epsilon-greedy exploration (set ε = 0), for PG we do not sample from the Gibbs policy but instead take the value-maximizing action, and for ZO we do not add noise to the last layer.

We can see in Table 2 that m15v16 is at the advantage of our player's side (noop is at 81% win rate), whereas w15v17 is hard (c is at 20% win rate). By looking just at the results of the heuristics, we can see that overkill is a problem on m15v16 and w15v17 (nok_nc is better than wc). "Attack closest" (c) is approximately as good as nok_nc at spreading damage, and thus better on m15v16 because there are lots of collisions (and attacking the closest unit is going to trigger less movement).

Table 2: Test win rates over 1000 battles for the training scenarios, for all methods and for heuristic baselines. The best result for a given map is in bold.

                   heuristics                                 RL
map                rand_nc   noop   c      wc     nok_nc     DQN    PG     ZO
dragoons_zealots   .14       .49    .67    .83    .50        .61    .69    .90
m5v5               .49       .84    .94    .96    .83        .99    .92    1.
m15v16             .00       .81    .81    .10    .68        .13    .19    .79
w15v17             .19       .10    .20    .02    .12        .16    .14    .49

Table 3: Win rates over 1000 games for out-of-training-domain maps, for all methods. The map on which each method was trained is indicated on the left. The best result is in bold, and the best result among the reinforcement learning methods is in italics.

train map   test map   best heuristic    DQN    PG     ZO
m15v16      m5v5       .96 (wc/c)        .96    .79    .80
m15v16      m15v15     .97 (c)           .27    .16    .80
m15v16      m18v18     .98 (c/noop)      .18    .25    .82
m15v16      m18v20     .63 (noop)        .00    .01    .17
w15v17      w5v5       .78 (c)           .70    .70    .74
w15v17      w15v13     1. (rand_nc/c)    1.     .99    1.
w15v17      w15v15     .95 (c)           .87    .61    .99
w15v17      w18v18     .99 (c)           .92    .56    1.
w15v17      w18v20     .71 (c)           .31    .24    .76

Overall, the zero-order optimization outperforms both DQN and PG (REINFORCE) on most of the maps. The only map on which DQN and PG perform well is m5v5. It seems to be easier to learn
We interpret the learned behaviors in Appendix[F.\nWe then studied how well a model trained on one map performs on maps with a different number of. units, to test generalization. Table 3 contains the results for this experiment. We observe that DQN performs the best on m5v5 when trained on m15v16, because it learned a simpler (but more efficient on m5v5) heuristic. \"Noop\"' and \"attack closest\"' are quite good with the large Marines map because they generate less moves (and less collisions). Overall, ZO is consistently significantly better than other RL algorithms on these generalization tasks, even though it does not reach an optimal strategy\nWe also played the best model on each map against each other. We modify the maps in this case such that they are all symmetric, but with the same army composition. Table|4|shows the results for this experiment. It seems that PG and DQN learned very different strategies on wXvY, DQN beats PG consistently when trained on w15v17, while the PG model trained on w15v15 has an edge over DQN Overall, ZO comes out ahead in every match-up except for m5v5, often by a significant margin."}, {"section_index": "7", "section_name": "8 CONCLUSION", "section_text": "This paper presents two main contributions. First, it establishes StarCraft micromanagement scenarios as complex benchmarks for reinforcement learning: with durative actions, delayed rewards, and large action spaces making random exploration infeasible. Second, it introduces a new reinforcement learning algorithm that performs better than prior work (DQN, PG) for discrete action spaces in these micromanagement scenarios, with robust training (see Figure[2) and episodically consistent exploration (exploring in the policy space).\nThis work leaves several doors open and calls for future work. Simpler embedding models of state and actions, and variants of the model presented here, have been tried, none of which produced efficien units movement (e.g. taking a unit out of the fight when its hit points are low). There is ongoing\nTable 4: Win rates over 2000 games against each other\nheuristics. RL map rand_nc noop c wc nok_nc DQN PG ZO dragoons_zealots .14 .49 .67 .83 .50 .61 .69 .90 m5v5 .49 .84 .94 .96 .83 .99 .92 1. m15v16 .00 .81 .81 .10 .68 .13 .19 .79 w15v17 .19 .10 .20 .02 .12 .16 .14 .49\ntrained on. dragoons_zealots m15v16 m5v5 w15v15 w15v17 tested on dragoons_zealots m15v15 m18v18 m5v5 w15v15 w18v18 w15v15 w18v18 PG > DQN .74 .46 .47 .49 .61 .69 .09 .04 ZO > PG .76 .82 .79 .44 .82 .77 .98 .99 ZO > DQN .93 .85 .86 .39 .88 .90 .79 .80\nwork on convolutional networks based models that conserve the 2D geometry of the game (while embedding the discrete components of the state and actions). The zero order optimization technique presented here should be studied more in depth, and empirically evaluated on domains other thar StarCraft (e.g. Atari). As for StarCraft scenarios specifically, the subsequent experiments will include self-play in training, multi-map training (more generic models), and more complex scenarios whicl include several types of advanced units with actions other than move and attack. Finally, the goal o playing full games of StarCraft should not get lost, so future scenarios would also include the actions of \"recruiting\" units (deciding which types of unit to use), and how to best make use of them."}, {"section_index": "8", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Y-Lan Boureau. Antoine Bordes. 
Florent Perronnin, Dave Churchill, Leon Bottou and Alexander Miller for helpful discussions and feedback about this work and earlier versions of the paper We thank Timothee Lacroix and Alex Auvolat for technical contributions to our StarCraft/Torch bridge. We thank Davide Cavalca for his support on Windows virtual machines in our cluster environment."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Lucian Busoniu. Robert Babuska, and Bart De Schutter. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, And Cybernetics-Part C: Applications and Reviews, 38 (2) 2008, 2008.\nDavid Churchill, Abdallah Saffidine, and Michael Buro. Fast heuristic search for rts game combat scenarios. Ir AIIDE, 2012.\nDjork-Arne Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289, 2015.\nMarc Peter Deisenroth, Gerhard Neumann, and Jan Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1-142, 2013\nJohn Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochasti optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.\nSaeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic progran ming. SIAM Journal on Optimization, 23(4):2341-2368, 2013\nMohammad Ghavamzadeh, Sridhar Mahadevan, and Rajbala Makar. Hierarchical multi-agent reinforcemer learning. Autonomous Agents and Multi-Agent Systems, 13(2):197-229, 2006.\nJunling Hu and Michael P Wellman. Multiagent reinforcement learning: theoretical framework and an algorithn In ICML, volume 98, pp. 242-250, 1998\nDavid Abel. Alekh Agarwal, Fernando Diaz, Akshay Krishnamurthy, and Robert E Schapire. Exploratory gradient boosting for reinforcement learning in complex domains. arXiv preprint arXiv:1603.04119, 2016.\nAndrew G Barto. Richard S Sutton. and Charles W Anderson. Neuronlike adaptive elements that can solv difficult learning control problems. IEEE transactions on systems, man, and cybernetics, (5):834-846, 198\nDaniel S Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. The complexity of decentralizec control of markov decision processes. Mathematics of operations research, 27(4):819-840, 2002\nRonan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376. 2011..\nohn C Duchi. Michael I Jordan. Martin J Wainwright. and Andre Wibisono. Optimal rates for zero-order convex optimization: the power of two function evaluations. arXiv preprint arXiv:1312.2139, 2013\nSylvain Gelly and Yizao Wang. Exploration exploitation in go: Uct for monte-carlo go. In NIPs: Neura. Information Processing Systems Conference On-line trading of Exploration and Exploitation Workshop, 2006\nMatthew Hausknecht and Peter Stone. Deep reinforcement learning in parameterized action space. arXiv preprint arXiv:1511.04143, 2015.\nJack Kiefer, Jacob Wolfowitz, et al. Stochastic estimation of the maximum of a regression function. The Annal of Mathematical Statistics, 23(3):462-466, 1952\nMichael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings o the eleventh international conference on machine learning, volume 157, pp. 157-163, 1994.\nFrancis Maes, Ludovic Denoyer, and Patrick Gallinari. 
Structured prediction with reinforcement learning. Machine Learning, 77(2-3):271-301, 2009.

Bhaskara Marthi, Stuart J Russell, David Latham, and Carlos Guestrin. Concurrent hierarchical reinforcement learning. In IJCAI, pp. 779-785, 2005.

Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in Minecraft. arXiv preprint arXiv:1605.09128, 2016.

Santiago Ontanon, Gabriel Synnaeve, Alberto Uriarte, Florian Richoux, David Churchill, and Mike Preuss. A survey of real-time strategy game AI research and competition in StarCraft. Computational Intelligence and AI in Games, IEEE Transactions on, 5(4):293-311, 2013.

Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. arXiv preprint arXiv:1602.04621, 2016a.

Frank Sehnke, Christian Osendorfer, Thomas Rückstieß, Alex Graves, Jan Peters, and Jürgen Schmidhuber. Policy gradients with parameter-based exploration for control. In Artificial Neural Networks, ICANN 2008, pp. 387-396. Springer, 2008.

Frank Sehnke, Christian Osendorfer, Thomas Rückstieß, Alex Graves, Jan Peters, and Jürgen Schmidhuber. Parameter-exploring policy gradients. Neural Networks, 23(4):551-559, 2010.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.

James C Spall. A one-measurement form of simultaneous perturbation stochastic approximation. Automatica, 33(1):109-112, 1997.

Peter Stone and Manuela Veloso. Team-partitioned, opaque-transition reinforcement learning. In Proceedings of the Third Annual Conference on Autonomous Agents, pp. 206-212. ACM, 1999.

Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT Press, 1998.

Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057-1063, 1999.

Dan Klein, David Burkett, David Hall, Taylor Berg-Kirkpatrick, John Blitzer, John DeNero, Haomiao Huang, Eugene Ma, Yewen Pu, Jie Tang, Nicholas Hay, Oriol Vinyals, and Jason Wolfe. The Berkeley Overmind project, 2010. URL http://overmind.cs.berkeley.edu/.

Siming Liu, Sushil J Louis, and Christopher Ballinger. Evolving effective micro behaviors in RTS game. In Computational Intelligence and Games (CIG), 2014 IEEE Conference on, pp. 1-8. IEEE, 2014.

Shie Mannor, Reuven Y Rubinstein, and Yohai Gat. The cross entropy method for fast policy search. In ICML, pp. 512-519, 2003.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. In Proceedings of NIPS, 2013.

Ian Osband, Benjamin Van Roy, and Zheng Wen. Generalization and exploration via randomized value functions, 2016b.

Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Learning multiagent communication with backpropagation. arXiv preprint arXiv:1605.07736, 2016.

Richard S Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9-44, 1988.

Gabriel Synnaeve and Pierre Bessiere. A Bayesian model for RTS units control applied to StarCraft. In Computational Intelligence and Games (CIG), 2011 IEEE Conference on, pp. 190-196. IEEE, 2011.

Gerald Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58-68, 1995.

T. Tieleman and G. Hinton.
Lecture 6.5, RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.

Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. arXiv preprint arXiv:1509.06461, 2015.

Christopher JCH Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279-292, 1992.

Istvan Szita and Andras Lorincz. Learning Tetris using the noisy cross-entropy method. Neural Computation, 18(12):2936-2941, 2006.

Ming Tan. Multi-agent reinforcement learning: Independent vs. cooperative agents. In Proceedings of the Tenth International Conference on Machine Learning, pp. 330-337, 1993.

Gerald Tesauro. Extending Q-learning to general adaptive multi-agent systems. In Advances in Neural Information Processing Systems, 2003.

William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

}, {"section_index": "10", "section_name": "We here briefly describe the two algorithms we use as baselines, Q-learning (Sutton & Barto, 1998) and REINFORCE (Williams, 1992).", "section_text": "Q-learning. The Q-learning algorithm in the finite-horizon setting learns an action-value function Q by solving the Bellman equation

\forall s \in S, \forall a \in A(s): \quad Q_t(s, a) = \sum_{s' \in S} p(s'|s, a) \left( r(s, s') + \max_{a' \in A(s')} Q_{t+1}(s', a') \right), \quad (2)

where Q_t is the state-action value function at stage t of an episode, and Q_T(s, a) = 0 by convention. Q_t(s, a) is also 0 whenever a terminal state is reached, and transitions from a terminal state only go to the same terminal state.

Learning is carried out with epsilon-greedy exploration: at state s and stage t, an action in argmax_{a \in A(s)} Q_t(s, a) is chosen with probability 1 - \epsilon, or an action in A(s) is chosen uniformly at random with probability \epsilon. In practice, we use stationary Q functions (i.e., Q_t = Q_{t+1}), which are neural networks, as described in Section 5. Training is carried out using the standard online update rule for Q-learning with function approximation (see (Mnih et al., 2015) for DQN), which we apply in mini-batches (hyper-parameters are detailed in Appendix E).

This training phase is distinct from the test phase, in which we record the average cumulative reward of the deterministic policy^2 s -> argmax_{a \in A(s)} Q(s, a).

^2 The policy may not be deterministic if we break ties randomly in the argmax.

REINFORCE. The algorithm REINFORCE belongs to the family of policy gradient algorithms (Sutton et al., 1999). Given a stochastic policy \pi_\Theta parameterized by \Theta, learning is carried out by generating traces (s_t, a_t, s_{t+1}, r_{t+1})_{t=1..T-1} by following the current policy. Then, stochastic gradient updates are performed, using the gradient estimate:

\sum_{t=1}^{T} r(s_{t..T}) \nabla_\Theta \log(\pi_\Theta(a_t|s_t)). \quad (3)

We use a Gibbs policy (with temperature parameter \tau) as the stochastic policy:

\pi_\Theta(a|s) = \frac{\exp(\phi_\Theta(a, s)/\tau)}{\sum_{b \in A(s)} \exp(\phi_\Theta(b, s)/\tau)},

where \phi_\Theta is a neural network with parameters \Theta that gives a real-valued score to each (state, action) pair. For testing, we use the deterministic policy \pi(s) = argmax_{a \in A(s)} \phi_\Theta(a, s).
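For concreteness, the sketch below implements one such stochastic gradient update in NumPy, using a linear scorer \phi_\Theta(a, s) = \theta_a \cdot s in place of the neural network; the scorer, feature sizes, trace, and learning rate are illustrative stand-ins, not the setup used in the experiments.

```python
import numpy as np

def gibbs_probs(theta, s, tau=1.0):
    """Gibbs policy pi(a|s) ~ exp(phi(a, s)/tau) with a linear score phi."""
    scores = theta @ s / tau          # one score per action
    scores -= scores.max()            # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def reinforce_update(theta, trace, tau=1.0, lr=1e-2):
    """One REINFORCE step from a trace [(s_t, a_t, r_{t+1}), ...].

    Uses the estimate sum_t r(s_{t..T}) * grad log pi(a_t|s_t); for the
    Gibbs policy, grad_theta log pi(a|s) = (onehot(a) - pi(.|s)) s^T / tau.
    """
    grad = np.zeros_like(theta)
    rewards = [r for (_, _, r) in trace]
    for t, (s, a, _) in enumerate(trace):
        ret = sum(rewards[t:])                       # cumulative reward from t
        p = gibbs_probs(theta, s, tau)
        onehot = np.zeros(len(p)); onehot[a] = 1.0
        grad += ret * np.outer(onehot - p, s) / tau
    return theta + lr * grad

# Toy usage: 3 actions, 4 state features, a fabricated 2-step trace.
rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 4))
trace = [(rng.normal(size=4), 1, 0.5), (rng.normal(size=4), 0, 1.0)]
theta = reinforce_update(theta, trace)
```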
}, {"section_index": "11", "section_name": "B THE GREEDY MDP", "section_text": "A natural way to define the greedy MDP (Section 4) is to define the set of atomic actions of the greedy policy as all possible (unit, command) pairs for the units whose command is still not decided. This would lead to an inference with quadratic complexity with respect to the number of units, which is undesirable.

We settled on iteratively choosing a unit, then a command to apply to that unit, which yields an algorithm with 2|s| steps for state s, linear in the number of units. Since the commands are executed concurrently by the environment after all commands have been decided, the cumulative reward does not depend on the order in which we choose the units; we can for instance pick them uniformly at random among the remaining units. More formally, using the notation a_{1..k} to denote the first k (unit, command) pairs of an action a (with the convention a_{1..0} = \emptyset), the state space \tilde{S} of the greedy MDP is defined by

\tilde{S} = \{ (s, a_{1..k}, u_{k+1}) \mid s \in S, 0 \le k < |s|, a = ((u_1, c_1), \ldots, (u_{|s|}, c_{|s|})) \in A(s) \}.

The action space \tilde{A}(\tilde{s}) of each state \tilde{s} \in \tilde{S} is constant and equal to the set of commands C. Moreover, for each state s of the original MDP and any action a = ((u_1, c_1), \ldots, (u_{|s|}, c_{|s|})) \in A(s), the transition probabilities \tilde{p} in the greedy MDP are defined by

\forall k \in \{0, \ldots, |s| - 1\}: \quad \tilde{p}\big((s, a_{1..k}, u_{k+1}) \mid (s, a_{1..k-1}, u_k), c_k\big) = \frac{1}{|s| - k},
\forall s' \in S, \forall u' \in U(s'): \quad \tilde{p}\big((s', \emptyset, u') \mid (s, a_{1..|s|-1}, u_{|s|}), c_{|s|}\big) = \frac{p(s'|s, a)}{|s'|}.

Finally, using the same notation as above, the reward function \tilde{r} between states that represent intermediate steps of the algorithm is 0, and the last unit to play receives the reward:

\tilde{r}\big((s, a_{1..k-1}, u_k), (s, a_{1..k}, u_{k+1})\big) = 0, \quad \text{and} \quad \tilde{r}\big((s, a_{1..|s|-1}, u_{|s|}), (s', \emptyset, u')\big) = r(s, s').

It can be shown that an optimal policy for this greedy MDP chooses actions that are optimal for the original MDP, because the immediate reward in the original MDP does not depend on the order in which the actions are taken. This result only applies if the family of policies has enough capacity. In practice, some orderings may be easier to learn than others, but we did not investigate this issue because the gain, in terms of computation time, of the random ordering was critical for the experiments.
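A minimal sketch of how inference proceeds in this greedy MDP; the `score` argument is a stand-in for the learned state-action scorer, which may condition on the pairs already fixed.

```python
import random

def greedy_joint_action(units, commands, score):
    """Build a joint action one (unit, command) pair at a time.

    Units are visited in uniformly random order, which is valid because
    the cumulative reward does not depend on the ordering. This takes
    2|s| steps (pick a unit, then pick its command), linear in the
    number of units, instead of the quadratic all-pairs alternative.
    """
    order = list(units)
    random.shuffle(order)
    chosen = []
    for u in order:
        best = max(commands, key=lambda c: score(chosen, u, c))
        chosen.append((u, best))
    return chosen  # executed concurrently by the environment

# Toy usage with a fabricated scorer over 3 units and 4 commands.
pairs = greedy_joint_action(units=[0, 1, 2],
                            commands=["noop", "n", "s", "atk"],
                            score=lambda ch, u, c: hash((len(ch), u, c)) % 97)
```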
The normalized rewards (from Section 4) maintain the invariant \tilde{n}_{t..T} = n_{t..T} / z(s_t); more importantly, the normalization can be applied to the Bellman equation (2), which becomes

\forall s \in S, \forall a \in A(s): \quad Q(s, a) = \sum_{s' \in S} p(s'|s, a) \left( \tilde{n}(s, s') + \frac{z(s')}{z(s)} \max_{a' \in A(s')} Q(s', a') \right).

This normalization does not change the optimal policy because it maintains the invariant that the expected normalized cumulative reward from a given state s to the end of an episode (by following the optimal deterministic policy) is the expected cumulative reward from this s divided by a value that depends only on s.

The stochastic gradient updates for Q-learning can easily be modified accordingly, as can the gradient estimate in REINFORCE (3), in which we replace r by \tilde{n}.

}, {"section_index": "12", "section_name": "D STARCRAFT SPECIFICS", "section_text": "We advocate that using existing video games for RL experiments is interesting because the simulators are oftentimes complex, and we (the AI programmers) do not have control over the source code of the simulator. In RTS games like StarCraft, we do not have access to a simulator (and writing one would be a daunting task), so we cannot use (Monte Carlo) tree search (Gelly & Wang, 2006) directly, even less so in the setting of full games (Ontanon et al., 2013). In this paper, we consider the problem of micromanagement scenarios, a subset of full RTS play. Micromanagement is about making good use of a given set of units in an RTS game. Units have different features, like range, cooldown, hit points (health), attack power, move speed, collision box, etc. These numerous features and the dynamics of the game advantage players that take the right actions at the right times. Specifically in StarCraft, for which there are professional players, very good competitive players and professional players perform more than 300 actions per minute during intense battles.

We ran all our experiments on simple scenarios of battles of an RTS game: StarCraft: Brood War. These scenarios can be considered small scale for StarCraft, but they already prove challenging for existing RL approaches. The joint action space is in O((#commands per unit)^{#units}), with a peak number of units of about 400 (Synnaeve & Bessiere, 2011). For an example scenario of 15 units (that we control) against 16 enemy units, even while reducing the action space to "atomic" actions (surrounding moves, and attacks), we obtain 24 (8 + 16) possible discrete actions per unit for our controller to choose from (24^15 joint actions total) at the beginning of the battle. Battles last for tens of seconds, with durative actions, simultaneous moves, and at 24 frames per second. The strategies that we need to learn consist in coordinated sets of actions that may need to be repeated, e.g., focus firing without overkill. We use a featurization that gives access only to the state from the game; we do not

Our tasks ("maps") represent battles with homogeneous types of units, or with little diversity (2 types of unit for each of the players). For instance, they may use a unit of type Marine: one soldier with 40 hit points, an average move speed, an average range (approximately 10 times its collision size), 15 frames of cooldown, and 6 attack power of normal damage type (so a damage per second of 9.6 hit points per second, on a unit without armor). On symmetric and/or mono-typed maps, the strategies that are required to win (on average) are focus firing, without overkill (not more units targeting a unit than what is needed to kill it). For perfect win rates, some maps may require that the AI moves its units out from the focus firing of the opponent.

For most of these tasks ("maps"), the number of units that our RL agent has to consider changes over an episode (a battle), as does its number of actions. A specific property of playing in this adversarial environment is that if the units do not follow a coherent strategy for a sufficient amount of time, they will suffer an unrecoverable loss: the game then reaches a state in which the units die very rapidly and deal little damage, independently of how they play, a state that is mostly useless for learning.

}, {"section_index": "13", "section_name": "E HYPER-PARAMETERS", "section_text": "Taking an action on every frame (24 times per second, the speed at which humans play StarCraft) for every unit would spam the game needlessly, and it would actually prevent the units from moving^3. We take actions for all units synchronously on the same frame, every skip_frames frames. We tried several values of this hyper-parameter (5, 7, 9, 11, 13, 17) and we only saw smooth changes in performance. We ran all the following experiments with a skip_frames of 9 (meaning that we take about 2.6 actions per unit per second). We also report the strongest numbers for the baselines over all these skip frames. We optimize all the models after each battle (episode), with RMSProp (momentum 0.99 or 0.95), except for zero-order, which we optimized with Adagrad (Adagrad did not seem to work better for DQN nor REINFORCE). In any case, the learning rate was chosen among {10^-2, 10^-3, 10^-4}.

^3 Because several actions are durative, including moves. Moves have a dynamic consisting of per-unit-type turn rate, max speed, and acceleration parameters.

For Q-learning (DQN), we tried two schemes of annealing for epsilon-greedy exploration: \epsilon = \epsilon_0 / \sqrt{1 + \epsilon_a \epsilon_0 t}, with t the optimization batch, and \epsilon = \max(0.01, \epsilon_0 - \epsilon_a t), both with \epsilon_0 \in \{0.1, 1\}, and respectively \epsilon_a \in \{0, \epsilon_0\} and \epsilon_a \in \{10^-5, 10^-4, 10^-3\}. We found that the first works marginally better and used it in the subsequent experiments, with \epsilon_0 = 1 and \epsilon_a = 1 for most of the scenarios. We also used Double DQN as in (Van Hasselt et al., 2015) (thus implemented as target DQN). For the target/double network, we used a lag of 100 optimizations, thus a lag of 100 battles in all the following experiments. According to our initial runs/sweep, it seems to slightly help for some cases of over-estimation of the Q value.
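The two schedules, as reconstructed from the partially garbled formulas above, can be written as follows; in particular the linear form of the second schedule is our reading of the original text, not a verified formula.

```python
import math

def eps_sqrt(t, eps0=1.0, eps_a=1.0):
    # First scheme: eps_t = eps0 / sqrt(1 + eps_a * eps0 * t),
    # with t counted in optimization batches (battles).
    return eps0 / math.sqrt(1.0 + eps_a * eps0 * t)

def eps_linear(t, eps0=1.0, eps_a=1e-4, floor=0.01):
    # Second scheme: decay floored at 0.01; the linear decay here is a
    # reconstruction of the garbled original, not a verified formula.
    return max(floor, eps0 - eps_a * t)

# With eps0 = 1 and eps_a = 1, the retained square-root schedule decays as:
print([round(eps_sqrt(t), 3) for t in (0, 10, 100, 1000)])
# [1.0, 0.302, 0.1, 0.032]
```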
For all methods, we tried experience replay, either with episodes (battles) as batches (of sizes 20, 50, 100), or additionally with random batches of (s_t, a_t, r_{t+1}, s_{t+1}, terminal?) quintuplets in the case of Q-learning; it did not seem to help compared to batching with the last battle. So, for consistency, we only present results where the training batches consisted of the last episode (battle).

For REINFORCE we searched over \tau \in \{0.1, 0.5, 1, 10\}.

For zero-order, we tried values in \{0.1, 0.01, 0.001\}.

We visually inspected the model's performance on large battles. On the larger Marines map (m15v16), DQN learned to focus fire. Because this map has many units, focus firing leads to units bumping into each other while trying to focus on a single unit. The PG player seemed to have a policy that attacks the closest Marine, though it does not do a good job of switching targets. The Marines that are not in range often bump into each other. Our zero-order optimization learns a hybrid between focus firing and attacking the closest unit. Units switch to other units in range if possible, but still focus on specific targets. This leads to most Marines attacking constantly, as well as focus firing when they can. However, the learned strategy was not perfected, since Marines would still occasionally split their fire when left with few units.

In the Wraiths map (w15v17), the DQN player's strategy was hard to decipher. The most likely explanation is that the units tried to attack the closest target, though it is likely the algorithm did not converge to a specific strategy. The PG player learned to focus fire. However, because it only takes 6 Wraiths to kill another, 9 actions are "wasted" during the focus firing (at the beginning of the fight, when all our units are alive). Our zero-order player learns that focusing on only one enemy is not good, but it does not learn how many attacks are necessary. This leads to a much higher win rate, but the player still assigns more than 6 Wraiths to an enemy target (maybe for robustness to the loss of one of our units), and occasionally will not focus fire when only a few Wraiths are remaining. This is similar to what the zero-order player learned during the Marines scenario."}]
By14kuqxx
[{"section_index": "0", "section_name": "BIT-PRAGMATIC DEEP NEURAL NETWORK COMPUTING", "section_text": "Jorge Albericio, Patrick Judd, Alberto Delmas Lascorz, Sayeh Sharify & Andreas Moshovos
Electrical and Computer Engineering
{jorge, juddpatr, delmasll, sayeh, moshovos}@ece.utoronto.ca

}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We quantify a source of ineffectual computations when processing the multiplications of the convolutional layers in Deep Neural Networks (DNNs) and propose Pragmatic (PRA), an architecture that exploits it, improving performance and energy efficiency. The source of these ineffectual computations is best understood in the context of conventional multipliers, which internally generate multiple terms, that is, products of the multiplicand and powers of two, which added together produce the final product (Wallace, 1964). At runtime, many of these terms are zero, as they are generated when the multiplicand is combined with the zero-bits of the multiplier. While conventional bit-parallel multipliers calculate all terms in parallel to reduce individual product latency, PRA calculates only the non-zero terms, resulting in a design whose execution time for convolutional layers is ideally proportional to the number of activation bits that are 1. Measurements demonstrate that for the convolutional layers of Convolutional Neural Networks and during inference, PRA improves performance by 4.3x over the DaDianNao (DaDN) accelerator (Chen et al., 2014) and by 4.5x when DaDN uses an 8-bit quantized representation (Warden, 2016). DaDN was reported to be 300x faster than commodity graphics processors.

}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep Neural Network (DNN) hardware typically uses either 16-bit fixed-point (Chen et al., 2014) or quantized 8-bit numbers (Warden, 2016) and bit-parallel compute units. The convolutional layers, which account for most of the execution time in Convolutional Neural Networks (CNNs) during image classification, perform many ineffectual computations on these bit-parallel engines. Specifically, these layers compute many inner products, where multiple pairs of weights and activations are multiplied and then reduced into an output activation. Any time a zero bit of an activation or a weight is multiplied, it adds nothing to the final output activation. These ineffectual bits are introduced by the conventional positional number representation; if they were avoided, it would take even less time to calculate each product, improving energy and performance. As a first step, this work targets the ineffectual bits of activations only. Section 2 shows that in recent image classification networks, 93% and 69% of activation bit and weight products are ineffectual when using respectively 16-bit fixed-point and 8-bit quantized representations.

This work presents Pragmatic (PRA), a DNN accelerator whose goal is to process only the essential (non-zero) bits of the input activations. PRA employs the following four key techniques: 1) on-the-fly conversion of activations from a storage representation (e.g., conventional positional numbers or quantized) into an explicit representation of the essential bits only, 2) bit-serial activation/bit-parallel weight processing, an idea borrowed from STR (Judd et al., 2016b;a) but adapted for the aforementioned representation, 3) judicious SIMD (single instruction multiple data) lane grouping to maintain wide memory accesses and to avoid fragmenting and enlarging the multi-MB on-chip weight memories (Sections 5 and 5.1), and 4) computation re-arrangement (Section 5.1) to reduce datapath area.
All evaluated PRA variants maintain wide memory accesses and use highly-parallel SIMD-style (single-instruction, multiple-data) computational units. PRA introduces an additional dimension upon which software can improve performance and energy efficiency: by controlling activation values judiciously, it can reduce their essential bit content while maintaining accuracy. This work explores such an alternative, where the software explicitly communicates how many prefix and suffix bits to discard after each layer.

Experimental measurements with recent CNNs for image classification demonstrate that the most straightforward PRA variant boosts average performance for the convolutional layers to 2.59x over the state-of-the-art DaDN accelerator. Pragmatic's average energy efficiency is 1.48x over DaDN and its area overhead is 1.35x. Another variant further boosts performance to 3.1x over DaDN at the expense of an additional 0.7% area.

}, {"section_index": "3", "section_name": "2 MOTIVATION", "section_text": "Let us assume a p-bit bit-parallel multiplier using a straightforward implementation of the "Shift and Add" algorithm, where n \times s is calculated as \sum_{i=0}^{p-1} n_i \cdot (s \ll i), with n_i the i-th bit of n. The multiplier computes p terms, each a product of s and of a bit of n, and adds them to produce the final result. The terms and their sum can be calculated concurrently to reduce latency (Wallace, 1964).

With such a hardware arrangement there are two sources of ineffectual computations that result from: 1) an Excess of Precision (EoP), and 2) Lack of Explicitness (LoE). Figure 1 shows an example illustrating these sources with an 8-bit unsigned fixed-point number with 4 fractional and 4 integer bits. While 10.101(2) requires just five bits, our 8-bit bit-parallel multiplier will zero-extend it with two prefix and one suffix bits. This is an example of EoP and is due to the fixed-precision hardware. Two additional ineffectual bits appear at positions 1 and -2 as a result of LoE in the positional number representation. In total, five ineffectual bits will be processed, generating five ineffectual terms.

Figure 1: Sources of ineffectual computation with conventional positional representation and fixed-length hardware precision. (The figure contrasts the bit-parallel hardware precision, with its required prefix and suffix padding, against the three essential bits of 10.101(2), whose explicit representation is (1, -1, -3).)

Our number could instead be represented with an explicit list of its three constituent powers of 2: (1, -1, -3). While such a representation may require more bits and thus be undesirable for storage, coupled with the abundant parallelism that is present in DNN layers, it provides an opportunity to revisit hardware design, improving performance and energy efficiency.

Table 1 reports the essential bit content of the activation stream of recent CNNs for two commonly used fixed-length representations: 1) the 16-bit fixed-point of DaDianNao (Chen et al., 2014), and 2) the 8-bit quantized of Tensorflow (Warden, 2016). The essential bit content is the average number of bits that are 1. Two measurements are presented per representation: over all neuron values ("All"), and over the non-zero neurons ("NZ"), as accelerators that can skip zero activations for fixed-point representations have been recently proposed (Han et al., 2016; Albericio et al., 2016).

Table 1: Average fraction of non-zero bits per activation for two fixed-length representations: 16-bit fixed-point, and 8-bit quantized. All: over all activations. NZ: over non-zero activations only.

When considering all activations, the essential bit content is at most 12.7% and 38.4% for the fixed-point and the quantized representations respectively. Even when considering only the non-zero activations, the essential bit content remains well below 50%, suggesting that the potential exists to improve performance and energy efficiency over approaches that target zero-valued activations only.
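As an illustration of how such statistics can be gathered, here is a small sketch that computes the essential bit content of a recorded activation stream; the fabricated input stands in for activations captured while running a network.

```python
import numpy as np

def essential_bit_content(acts, bits=16):
    """Mean fraction of 1-bits ("essential" bits) per activation.

    `acts` holds non-negative integers already quantized to `bits`-bit
    fixed point (the scale factor is irrelevant to the bit count).
    Returns the mean over all activations and over non-zero ones,
    mirroring the All/NZ columns of Table 1.
    """
    acts = np.asarray(acts)
    ones = np.array([bin(int(a)).count("1") for a in acts])
    all_mean = ones.mean() / bits
    nz = ones[acts != 0]
    nz_mean = (nz.mean() / bits) if nz.size else 0.0
    return all_mean, nz_mean

# Toy usage on fabricated activations; real measurements would use the
# activation streams recorded while running each network.
acts = np.random.default_rng(0).integers(0, 2**16, size=10_000)
print(essential_bit_content(acts))
```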
This section illustrates the idea behind Pragmatic via a simplified example.

The bit-parallel unit of Figure 2a multiplies two activations with their respective weights and, via an adder, reduces the two products. The unit reads all activation and weight bits, (n0 = 001(2), n1 = 010(2)) and (s0 = 001(2), s1 = 111(2)) respectively, in a single cycle. As a result, the two sources of inefficiency, EoP and LoE, manifest here: n0 and n1 are represented using 3 bits instead of 1 and 2 bits respectively due to EoP, and even in those bits they each contain a zero bit due to LoE. As a result, four ineffectual terms are processed when using standard multipliers such as those derived from the Shift-and-Add algorithm. In general, given N activation and weight pairs, this unit will take \lceil N/2 \rceil cycles to process them regardless of their precision and the essential bit content of the activations.

Figure 2: An example illustrating how Pragmatic skips ineffectual activation bits yet exceeds the performance of a bit-parallel engine. (a) Bit-parallel unit. (b) Pragmatic unit.

Figure 2b shows a simplified PRA engine. In this example, activations are no longer represented as vectors of bits but as vectors of offsets of the essential bits. For example, activation n0 = 001(2) is represented as on0 = (0), and an activation value of 111(2) would be represented as (2, 1, 0). An out-of-band bit (wire), not shown, indicates the activation's end. A shifter per activation uses the offsets to effectively multiply the corresponding weight with the respective power of 2 before passing it to the adder tree. As a result, PRA processes only the non-zero terms, avoiding all ineffectual computations that were due to EoP or LoE. To match the throughput of the bit-parallel engine of Figure 2a, PRA takes advantage of weight reuse and processes multiple activation groups in parallel. In this example, six activations (n0 = 001(2), n1 = 010(2), n'0 = 000(2), n'1 = 010(2), n''0 = 010(2), n''1 = 000(2)) are combined with the two weights as shown. For this example, PRA would process the six activation and weight pairs in a single cycle, a speedup of 3x over the bit-parallel engine.
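The following sketch mirrors the Figure 2b arithmetic in software: only the essential bits of the activation generate shift-and-add terms, so the term count, and ideally the cycle count, tracks the number of 1 bits.

```python
def oneffsets(n):
    """Positions of the essential (non-zero) bits of n, MSB first."""
    return [i for i in range(n.bit_length() - 1, -1, -1) if (n >> i) & 1]

def pra_product(weight, activation):
    """Multiply via one shift-and-add term per essential activation bit only."""
    return sum(weight << f for f in oneffsets(activation))

# The example above: n0 = 0b001 pairs weight s0 with a single term, so one
# term is processed instead of the three a 3-bit bit-parallel unit computes.
assert pra_product(0b001, 0b001) == 0b001 * 0b001
assert pra_product(0b111, 0b010) == 0b111 * 0b010
```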
}, {"section_index": "4", "section_name": "4 BASELINE SYSTEM: DADIANNAO", "section_text": "Pragmatic is demonstrated as a modification of the DaDianNao accelerator (DaDN) proposed by Chen et al. (2014). Figure 3a shows a DaDN tile, which processes 16 filters concurrently, calculating 16 activation and weight products per filter, for a total of 256 products per cycle. To do so, each cycle the tile accepts 16 weights per filter, for a total of 256 weights, and 16 input activations. The tile multiplies each weight with only one activation, whereas each activation is multiplied with 16 weights, one per filter. The tile reduces the 16 products into a single partial output activation per filter, for a total of 16 partial output activations for the tile. Each DaDN chip comprises 16 such tiles, each processing a different set of 16 filters per cycle. Accordingly, each cycle, the whole chip processes 16 activations and 256 x 16 = 4K weights, producing 16 x 16 = 256 partial output activations.

Figure 3: a) DaDianNao tile. b) Pragmatic tile. (Each design shows the NBin neuron/offset lanes, the SB (eDRAM) synapse lanes feeding the filter lanes, the inner-product units (IPs) or Pragmatic inner-product units (PIPs), and the NBout output buffers.)

Internally, each tile has: 1) a synapse buffer (SB) that provides 256 weights per cycle, one per synapse lane, 2) an input neuron buffer^1 (NBin) which provides 16 activations per cycle through 16 neuron lanes, and 3) a neuron output buffer (NBout) which accepts 16 partial output activations per cycle. In the tile's datapath, or the Neural Functional Unit (NFU), each neuron lane is paired with 16 synapse lanes, one from each filter. Each synapse and neuron lane pair feeds a multiplier, and an adder tree per filter lane reduces the 16 per-filter products into a partial sum. In all, the filter lanes each produce a partial sum per cycle, for a total of 16 partial output activations per NFU. Once a full window is processed, the 16 resulting sums are fed through a non-linear activation function, f, to produce the 16 final output activations. The multiplications and reductions needed per cycle are implemented via 256 multipliers, one per synapse lane, and sixteen 17-input (16 products plus the partial sum from NBout) adder trees, one per filter lane.

^1 Chen et al. (2014) used the terms neuron and synapse to refer to activations and weights respectively and named the various components accordingly. We maintain this terminology for the design's components.

DaDN's main goal was minimizing off-chip bandwidth while maximizing on-chip compute utilization. To avoid fetching weights from off-chip, DaDN uses a 2MB eDRAM SB per tile for a total of 32MB of eDRAM. All inter-layer activations except for the initial input and the final output are stored in a 4MB shared central eDRAM Neuron Memory (NM), which is connected via a broadcast interconnect to the 16 NBin buffers. Off-chip accesses are needed only for reading the input image, for reading the filter weights once per layer, and for writing the final output.

Processing Approach: Processing starts by reading from external memory the first layer's weights (synapses) and the input image. The weights are distributed over the SBs and the input is stored into NM. Each cycle an input activation brick is broadcast to all units. Each unit reads 16 weight bricks from its SB and produces a partial output activation brick, which it stores in its NBout.
Once computed, the output activations are stored through NBout to NM and then fed back through the NBins when processing the next layer. Loading the next set of activations from external memory can be overlapped with the processing of the current layer as necessary.

Terminology: For clarity, in what follows n(x, y, i) and o(x, y, i) refer to an input and an output activation at coordinates (x, y, i) respectively. The weight of filter f at coordinates (x, y, i) is denoted as s^f(x, y, i). The term brick refers to a set of 16 elements of a 3D activation or weight array which are contiguous along the i dimension, e.g., n(x, y, i)...n(x, y, i + 15). Bricks will be denoted by their origin element with a B subscript, e.g., n_B(x, y, i). The term pallet refers to a set of 16 bricks corresponding to adjacent windows, using a stride S, along the x or y dimensions, e.g., n_B(x, y, i)...n_B(x, y + 15 x S, i), and will be denoted as n_P(x, y, i). The number of activations per brick and the number of bricks per pallet are design parameters.

}, {"section_index": "5", "section_name": "5 PRAGMATIC", "section_text": "PRA's goal is to process only the essential bits of the activations. To do so, PRA a) converts, on-the-fly, the input activation representation into one containing only the essential bits, and b) processes one essential bit per activation and a full 16-bit weight per cycle. Since PRA processes activation bits serially, it may take up to 16 cycles to produce a product of an activation and a weight. To always match or exceed the performance of the bit-parallel units of DaDN, PRA processes more activations concurrently, exploiting the abundant parallelism of the convolutional layers. The remainder of this section describes in turn: 1) an appropriate activation representation, 2) the way PRA calculates terms, 3) how multiple terms are processed concurrently to maintain performance on par with DaDN in the worst case, and 4) how PRA's units are supplied with the necessary activations from NM.

Input Activation Representation: PRA starts with an input activation representation where it is straightforward to identify the next essential bit each cycle. One such representation is an explicit list of oneffsets, that is, of the constituent powers of two. For example, an activation n = 5.5(10) = 0101.1(2) would be represented as n = (2, 0, -1). In the implementation described herein, activations are stored in 16-bit fixed-point in NM, and converted on-the-fly into the PRA representation as they are broadcast to the tiles. A single oneffset is processed per activation per cycle. Each oneffset is represented as (pow, eon), where pow is a 4-bit value and eon a single bit which, if set, indicates the end of an activation. For example, n = 101(2) is represented as n_PRA = ((0010, 0), (0000, 1)).

Calculating a (weight, activation) product: PRA calculates the product of weight s and activation n as

s \times n = \sum_{f \in n_{PRA}} s \times 2^f = \sum_{f \in n_{PRA}} (s \ll f). \quad (1)

That is, each cycle, the weight s is multiplied by f, the next constituent power of two of n, and the result is accumulated. This multiplication can be implemented as a shift and an AND.
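A software model of this encoding follows; the handling of a zero-valued activation is one assumed convention, as the text does not spell out that case.

```python
def oneffset_stream(n):
    """Encode n as (pow, eon) pairs, one per essential bit, MSB first.

    pow is the bit position (4 bits suffice for 16-bit activations) and
    eon marks the last oneffset of the activation. A zero activation is
    encoded as a single (0, 1) null/end marker here -- an assumed
    convention, not one stated by the text.
    """
    offs = [i for i in range(n.bit_length() - 1, -1, -1) if (n >> i) & 1]
    if not offs:
        return [(0, 1)]
    return [(p, int(i == len(offs) - 1)) for i, p in enumerate(offs)]

# n = 0b101 -> ((2, 0), (0, 1)), matching the example above.
assert oneffset_stream(0b101) == [(2, 0), (0, 1)]
```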
Boosting Compute Bandwidth over DaDN: To match DaDN's performance, PRA needs to process the same number of effectual terms per cycle. Each DaDN tile calculates 256 activation and weight products per cycle, or 256 x 16 = 4K terms. While most of these terms will, in practice, be ineffectual, to guarantee that PRA always performs as well as DaDN it should process 4K terms per cycle. For the time being, let us assume that all activations contain the same number of essential bits, so that when processing multiple activations in parallel, all units complete at the same time and thus can proceed with the next set of activations in sync. The next section will relax this constraint.

Since PRA processes activation bits serially, it produces one term per activation bit and weight pair, and thus needs to process 4K such pairs concurrently. The choice of which 4K activation bit and weight pairs to process concurrently can adversely affect complexity and performance. For example, it could force an increase in SB capacity and width, or an increase in NM width, or be ineffective due to unit underutilization given the commonly used layer sizes.

Fortunately, it is possible to avoid increasing the capacity and the width of the SB and the NM while keeping the units utilized as in DaDN. Specifically, a PRA tile can read 16 weight bricks and the equivalent of 256 activation bits, as DaDN's tiles do (DaDN processes 16 16-bit activations, or 256 activation bits, per cycle). Specifically, as in DaDN, each PRA tile processes 16 weight bricks concurrently, one per filter. However, differently than DaDN, where the 16 weight bricks are combined with just one activation brick which is processed bit-parallel, PRA combines each weight brick with 16 activation bricks, one from each of 16 windows, which are processed bit-serially. The same 16 activation bricks are combined with all weight bricks. These activation bricks form a pallet, enabling the same weight brick to be combined with all. For example, in a single cycle a PRA tile processing filters 0 through 15 could combine s^0_B(x, y, 0), ..., s^15_B(x, y, 0) with n_PRA(x, y, 0), n_PRA(x + 2, y, 0), ..., n_PRA(x + 31, y, 0), assuming a layer with a stride of 2. In this case, s^4(x, y, 2) would be paired with n_PRA(x, y, 2), n_PRA(x + 2, y, 2), ..., n_PRA(x + 31, y, 2) to produce the output activations o_B(x, y, 4) through o_B(x + 15, y, 4).

As the example illustrates, this approach allows each weight to be combined with one activation per window, whereas in DaDN each weight is combined with one activation only. In total, 256 essential activation bits are processed per cycle, and given that there are 256 weights and 16 windows, PRA processes 256 x 16 = 4K activation bit and weight pairs, or terms, per cycle, producing 256 partial output activations: 16 per filter, or 16 partial output activation bricks per cycle.
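At the loop-nest level, one PRA tile cycle can be sketched as follows; the names and data layout are illustrative, not the hardware interface.

```python
# A minimal sketch of one PRA tile cycle: 16 weight bricks (one per filter)
# are each combined with one oneffset from every activation in 16 windows,
# yielding 256 x 16 = 4K terms per cycle.
def tile_cycle(weight_bricks, window_oneffsets, partials):
    """weight_bricks: [16 filters][16 weights];
    window_oneffsets: [16 windows][16 lanes], the current oneffset of each
    activation, or None for a lane that has drained (a null term);
    partials: [16 filters][16 windows] running sums, updated in place."""
    for f, brick in enumerate(weight_bricks):          # 16 filters
        for w, offs in enumerate(window_oneffsets):    # 16 windows
            for s, p in zip(brick, offs):              # 16 lanes each
                if p is not None:
                    partials[f][w] += s << p
    return partials

# Toy usage: all weights 1, all current oneffsets 0.
partials = tile_cycle([[1] * 16 for _ in range(16)],
                      [[0] * 16 for _ in range(16)],
                      [[0] * 16 for _ in range(16)])
```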
Supplying the Inputs: Thus far it was assumed that all input activations have the same number of essential bits. Under this assumption, all neuron lanes complete processing their terms at the same time, allowing PRA to move on to the next activation pallet and the next set of weight bricks in one step. This allows PRA to reuse STR's approach for fetching the next pallet from the single-ported NM (Judd et al., 2016b;a). Briefly, with unit stride the 256 activations would typically all be stored in the same NM row, or at most over two adjacent NM rows, and thus can be fetched in at most two cycles. When the stride is more than one, the activations will be spread over multiple rows and thus multiple cycles will be needed to fetch them all. Fortunately, fetching the next pallet can be overlapped with processing the current one. Accordingly, if it takes NM_C cycles to access the next pallet from NM, while the current pallet requires P_C cycles to process, the next pallet will begin processing after max(NM_C, P_C) cycles. When NM_C > P_C, performance is lost waiting for NM.

In practice, it is highly unlikely that all activations will have the same number of essential bits. In general, each neuron lane, if left unrestricted, will advance at a different rate. In the worst case, each neuron lane may end up needing activations from a different activation brick, thus breaking PRA's ability to reuse the same weight brick. This is undesirable, if not impractical, as it would require partitioning and replicating the SB so that 4K unrelated weights could be read per cycle, and it would also increase NM complexity and bandwidth.

Fortunately, these complexities can be avoided with pallet-level neuron lane synchronization, where all neuron lanes "wait" (a neuron lane that has detected the end of its activation forces zero terms while waiting) for the one with the most essential bits to finish before proceeding with the next pallet. Under this approach, it does not matter which bits are essential per activation, only how many exist. Since it is unlikely that most pallets will contain an activation with 16 essential terms, PRA will improve performance over DaDN. Section 5.1 will discuss finer-grain synchronization schemes that lead to even better performance. Before doing so, however, we detail PRA's design.
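Under pallet-level synchronization, the cycle count per pallet is simply the largest essential-bit count the pallet contains, which the following sketch makes explicit; the one-cycle floor for an all-zero pallet is an assumption of this model.

```python
def pallet_cycles(pallet_activations):
    """Cycles to drain one pallet under pallet-level synchronization:
    every neuron lane waits for the activation with the most essential
    bits (zero terms are injected while waiting), so the cycle count is
    the max 1-bit count over the pallet. An all-zero pallet is charged
    one cycle here, an assumed floor."""
    return max(1, max(bin(a).count("1") for a in pallet_activations))

# Speedup estimate vs. the 16 cycles a worst-case (all bits essential)
# pallet would take, which is when PRA merely matches DaDN.
pallet = [0x0101, 0x0003, 0x8000, 0x0000]
print(16 / pallet_cycles(pallet))  # 8.0 for this fabricated pallet
```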
}, {"section_index": "6", "section_name": "5.1 STRUCTURE AND PERFORMANCE AND AREA OPTIMIZATIONS", "section_text": "Figure 3b shows the Pragmatic tile architecture, which comprises an array of 16 x 16 = 256 Pragmatic inner product units (PIPs). PIP(i,j) processes an activation oneffset from the i-th window and its corresponding weight from the j-th filter. Specifically, all the PIPs along the i-th row receive the same weight brick, belonging to the i-th filter, and all PIPs along the j-th column receive an oneffset from each activation of one activation brick belonging to the j-th window. The necessary activation oneffsets are read from NBin, where they have been placed by the Dispatcher and the Oneffset generator units, as this section explains. Every cycle NBin sends 256 oneffsets, 16 per window lane. All the PIPs in a column receive the same 16 oneffsets, corresponding to the activations of a single window. When the tile starts to process a new activation pallet, 256 weights are read from SB through its 256 synapse lanes, as in DaDN, and are stored in the synapse registers (SR) of each PIP. The weights and oneffsets are then processed by the PIPs.

Pragmatic Inner-Product Unit: Figure 4 shows the PIP internals. Every cycle, 16 weights are combined with their corresponding oneffsets. Each oneffset controls a shifter, effectively multiplying the weight with a power of two. The shifted weights are reduced via the adder tree. An AND gate per weight supports the injection of a null term when necessary. In the most straightforward design, the oneffsets use 4 bits, each shifter accepts a 16-bit weight and can shift it by up to 15 bit positions, producing a 31-bit output. Finally, the adder tree accepts 31-bit inputs. The remainder of this section presents an enhanced design that requires narrower components, improving area and energy.

Figure 4: Pragmatic Inner Product Unit.

Dispatcher and Oneffset Generators: The Dispatcher reads 16 activation bricks from NM, as expected by the PRA tiles. The oneffset generator converts their activations on-the-fly to the oneffset representation, and broadcasts one oneffset per activation per cycle, for a total of 256 oneffsets, to all tiles. Fetching and assembling the 16 activation bricks from NM is akin to fetching words with a stride of S from a cache structure. Once the 16 activation bricks have been collected, 256 oneffset generators operate in parallel to locate and communicate the next oneffset per activation. A straightforward 16-bit leading-one detector is sufficient. The latency of the oneffset generators and the dispatcher can be readily hidden, as they can be pipelined as desired, overlapping them with processing in the PRA tiles.

Reducing Tile Area with 2-Stage Shifting: Any shift can be performed in two stages as two smaller shifts: a \ll K = a \ll (K' + C) = ((a \ll K') \ll C). Thus, to shift and add T weights by different offsets K_0, ..., K_{T-1}, we can decompose the offsets into sums with a common term C, e.g., K_i = K'_i + C. Accordingly, PIP processing can be rearranged using a two-stage process where the first stage uses a per-weight specific offset K'_i, and the second stage the offset C common across all weights. This arrangement can be used to reduce the width of the weight shifters and of the adder tree by sharing one common shifter after the adder tree, as Figure 5a shows. A design parameter, L, defines the number of bits controlling the weight shifters, so that the design can process, in a single cycle, oneffsets which differ by less than 2^L. This reduces the size of the weight shifters and reduces the size of the adder tree to support terms of 16 + 2^L - 1 bits only.

Figure 5: 2-stage shifting. a) Modified PIP. b) Example: Processing three 9-bit weight and activation pairs with L = 2.
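The decomposition can be checked with a few lines of Python; choosing C = min(K_i) is one possible policy, consistent with the constraint that oneffsets processed together differ by less than 2^L, though the actual grouping is up to the control logic.

```python
def two_stage_shift_sum(weights, oneffsets, L=2):
    """a << K == (a << K') << C with K = K' + C: pick C = min(K_i) so
    every per-weight shift K'_i fits in L bits, then one shared shifter
    applies C after the adder tree."""
    C = min(oneffsets)
    assert all(k - C < (1 << L) for k in oneffsets), \
        "oneffsets processed together must differ by less than 2**L"
    return sum(w << (k - C) for w, k in zip(weights, oneffsets)) << C

# Matches the single-stage result whenever the grouping constraint holds.
assert two_stage_shift_sum([3, 5], [6, 7]) == (3 << 6) + (5 << 7)
```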
Increasing Performance with Per-Column Neuron Lane Synchronization: The pallet neuron lane synchronization scheme of Section 5 is one of many possible synchronization schemes. Finer-grain neuron lane synchronization schemes are possible, leading to higher performance, albeit at a cost. Among them, per-column neuron lane synchronization is an appealing scheme offering a good balance of cost vs. performance. Here each PIP column operates independently, but all the PIPs along the same column synchronize before moving to the next activation brick. Since the PIPs along the same column operate in sync, they all process one set of 16 weight bricks, which can be read using the existing SB interface. However, given that different PIP columns now operate out-of-sync, the SB would be accessed more frequently and could become a bottleneck. There are two concerns: 1) different PIP columns may need to perform two independent SB reads while there is only one SB port and one common bus connecting the PIP array to the SB, and 2) there will be repeat accesses to the SB that will increase SB energy, while the SB is already a major consumer of energy. These concerns are addressed as follows: 1) only one SB access can proceed per cycle, thus a PIP column may need to wait when collisions occur; 2) a set of registers, or synapse set registers (SSRs), are introduced in front of the SB, each holding a recently read set of 16 weight bricks. Since all PIP columns will eventually need the same set of weight bricks, temporarily buffering them avoids fetching them repeatedly from the SB. Once a weight set has been read into an SSR, it stays there until all PIP columns have copied it (a 4-bit down counter is sufficient for tracking how many PIP columns have yet to read the weight set). This policy guarantees that the SB is accessed the same number of times as in DaDN. However, stalls may occur, as a PIP column has to be able to store a new set of weights into an SSR when it reads it from the SB. Figure 6 shows an example. Since each neuron lane advances independently, in the worst case the dispatcher may need to fetch 16 independent activation bricks, each from a different pallet. The Dispatcher can buffer those pallets to avoid rereading NM, which would, at worst, require a 256-pallet buffer. However, given that the number of SSRs restricts how far apart the PIP columns can be, and since Section 6.2 shows that only one SSR is sufficient, a two-pallet buffer in the dispatcher is all that is needed.

Figure 6: Per-column synchronization example: one extra synapse register and a 1x2 PIP array capable of processing two windows in parallel. The two numbers per brick show: first, from the top, the brick's index, (0, 1, 2) and (0', 1', 2') for the bricks of the first and second window; second, the maximum count of oneffsets in its activations, (2, 4, 4) and (5, 2, 2) respectively. The numbers in the registers indicate the index of the corresponding bricks, i.e., a synapse register containing a K stores the weights corresponding to activation bricks with indexes K and K'. In cycles 3 to 8, thicker lines indicate registers being loaded or wires being used.

Further Increasing Performance with Improved Oneffset Encoding: Since PIPs in Pragmatic can negate any input term, it is possible to enhance the oneffset generator to generate fewer oneffsets for neuron values containing runs of ones, by allowing signed oneffsets (Booth, 1951). This improved generator reduces runs of adjacent oneffsets a...b into pairs of the form (a + 1, -b). Single oneffsets or gaps inside runs are represented by a positive or negative oneffset, respectively. For example, a neuron value of 11011 that would normally be encoded with oneffsets (4, 3, 1, 0) can instead be represented with (5, -3, +2, -0), or even more economically with (5, -2, -0). This is equivalent to a Radix-4 Booth encoding and will never emit more than \lceil x/2 \rceil + 1 oneffsets, where x is the neuron precision.

This encoding will never produce more oneffsets than the baseline encoding. However, because of the 2-stage shifting, it is possible that this encoding will increase the number of cycles needed. This will happen when the oneffset distribution among the bit groups being processed together during 2-stage shifting changes.

Finally, Booth encoding is conventionally used to reduce the number of cycles needed to perform multiplication in single-shift-and-add multipliers, typically reserved for low-cost, low-performance designs, or to reduce the depth of bit-parallel multipliers. Pragmatic, with its 2-stage shifting and judicious lane synchronization, enables its practical use in a massively data-parallel accelerator, boosting performance beyond what is possible with bit-parallel units.
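One standard way to realize such a generator in software is a non-adjacent-form (Booth-style) recoding; the sketch below is an assumed implementation for illustration, not a description of the hardware.

```python
def signed_oneffsets(n):
    """Booth-style recoding: runs of adjacent 1s are replaced by a
    +2^(top+1) / -2^(bottom) pair, emitting at most ceil(x/2)+1 signed
    oneffsets for an x-bit value. Returns (sign, offset) pairs, MSB
    first, so a negative offset 0 is representable."""
    out, pos = [], 0
    while n:
        if n & 1:
            d = 2 - (n & 3)   # +1 if the next bit is 0, else -1
            out.append((d, pos))
            n -= d
        n >>= 1
        pos += 1
    return out[::-1]

# 0b11011: the plain encoding needs 4 oneffsets (4, 3, 1, 0); this form
# needs 3: ((+1, 5), (-1, 2), (-1, 0)), i.e. 32 - 4 - 1 = 27, matching
# the economical (5, -2, -0) example above.
assert sum(d * (1 << p) for d, p in signed_oneffsets(0b11011)) == 0b11011
assert len(signed_oneffsets(0b11011)) == 3
```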
Pragmatic with its 2-stage shifting and judi cious lane synchronization enables its practical use in a massively data-parallel accelerator boostin, performance beyond what is possible with bit-parallel units.\nThe Role of Software: PRA enables an additional dimension upon which hardware and software can attempt to further boost performance and energy efficiency, that of controlling the essential activation value content. This work investigates a software guided approach where the precision requirements of each layer are used to zero out a number of prefix and suffix bits at the output of each layer. Using the profiling method of Judd et al., Judd et al.(2015), software communicates the precisions needed by each layer as meta-data. The hardware trims the output activations before. writing them to NM using AND gates and precision derived bit masks..\nAfter reviewing the experimental methodology the rest of this section is organized as follows: Sec- tions 6.1and6.2 explore the PRA design space considering respectively single- and 2-stage shift ing configurations, and column synchronization. Section |6.2|reports energy efficiency for the best\nFurther Increasing Performance with Improved Oneffset Encoding: Since PIPs in Pragmatic can negate any input term, it is possible to enhance the oneffset generator to generate fewer oneffsets for neuron values containing runs of ones by allowing signed oneffsets[Booth (1951)\nThe performance, area and energy efficiency of Pragmatic is compared against DaDN Chen et al.. (2014) and Stripes Judd et al.(2016b), two state-of-the-art DNN accelerators. DaDN is the fastest bit-parallel accelerator proposed to date that processes all activations regardless of theirs values, and. STR improves upon DaDN by exploiting the per layer precision requirements of DNNs. Cnvlutin improves upon DaDN by skipping most zero- or near-zero-valued activations[Albericio et al. (2016) however, Stripes has been shown to outperform it..\nTable 2: Per convolutional layer activation precision profiles\nconfiguration. Section |6.4|analyzes the contribution of the software provided precisions. Finally Section|6.5|reports performance for designs using an 8-bit quantized representation.\nMethodology: The same methodology is used for all systems for consistency. A custom cycle. accurate simulator models execution time. For all systems, computation was scheduled to minimize energy, which led to the same schedule for all. To estimate power and area, the designs were synthe sized with the Synopsis Design CompilerSynopsys for a TSMC 65nm library. The NBin and NBou. SRAM buffers were modeled using CACTI|Muralimanohar & Balasubramonian The eDRAM are. and energy were modeled with Destiny Poremba et al.(2015). To compare against STR, the pe. layer numerical representation requirements reported in Table[2|were found using the methodology. of Judd et al.Judd et al.(2016b). All PRA configurations studied exploit software provided preci sions as per Section[5.1] Section|6.4|analyzes the impact of this information on overall performance All performance measurements are for the convolutional layers only which account for more thar 92% of the overall execution time in DaDN Chen et al.(2014). PRA does not affect the executior. time of the remaining layers.\nPerformance: Figure7|shows the performance of STR (leftmost bars) and of PRA variants relative to DaDN. 
Performance: Figure 7 shows the performance of STR (leftmost bars) and of PRA variants relative to DaDN. The PRA systems are labelled with the number of bits used to operate the first-stage weight shifters; e.g., the weight shifters of "2-bit", or PRA_2b, are able to shift to four bit positions (0-3). "4-bit", or PRA_4b, is the single-stage Pragmatic, or PRA_single, of Sections 5 and 5.1, whose weight shifters can shift to 16 bit positions (0-15); it has no second-stage shifter.

PRA_single improves performance by 2.59x on average over DaDN, compared to the 1.85x average improvement with STR. Performance improvements over DaDN vary from 2.11x for VGG19 to 2.97x for VGGM. As expected, the 2-stage PRA variants offer slightly lower performance than PRA_single; however, performance with PRA_2b and PRA_3b is always within 0.2% of PRA_single. Even PRA_0b, which does not include any weight shifters, outperforms STR by 20% on average. Given a set of oneffsets, PRA_0b will accommodate the minimum non-zero oneffset per cycle via its second-level shifter.

Figure 7: Pragmatic's performance relative to DaDianNao using 2-stage shifting and per-pallet synchronization.

Area and Power: Table 3 shows the absolute and relative (to DaDN) area and power. Two area measurements are reported: 1) for the unit, excluding the SB, NBin and NBout memory blocks, and 2) for the whole chip, comprising 16 units and all memory blocks. Since the SB and NM dominate chip area, the whole-chip area overheads are modest. Given the performance advantage of PRA, the area and power overheads are justified. PRA_2b is particularly appealing, as its overall area cost over BASE is only 1.35x and its power 2.03x, while its performance is 2.59x on average. Accordingly, we restrict attention to this configuration in the rest of this evaluation.

Table 3: Area [mm^2] and power [W] for the unit and the whole chip. Pallet synchronization.

                  DDN    STR    0-bit  1-bit  2-bit  3-bit  4-bit
Area U.           1.55   3.05   3.11   3.16   3.54   4.41   5.75
Area U. (rel.)    1.00   1.97   2.01   2.04   2.29   2.85   3.71
Area T.           90     114    115    116    122    136    157
Area T. (rel.)    1.00   1.27   1.28   1.29   1.35   1.51   1.75
Power T.          18.8   30.2   31.4   34.5   38.2   43.8   51.6
Power T. (rel.)   1.00   1.60   1.67   1.83   2.03   2.33   2.74

Performance: Figure 8 reports the relative performance for PRA_2b with column synchronization, as a function of the number of SSRs, as per Section 5.1.
Figure 8: Relative performance of PRA_2b with column synchronization, as a function of the SB registers (SSRs) used.

Area and Power: Table 4 reports the area per unit, and the area and power per chip.

Table 4: Area [mm^2] and power [W] for the unit and the whole chip, for column synchronization and PRA_2b.

Energy Efficiency: Figure 10 shows the energy efficiency of various configurations of Pragmatic. Energy efficiency, or simply efficiency, for a system NEW relative to BASE is defined as the ratio E_BASE/E_NEW of the energy required by BASE to compute all of the convolutional layers over that of NEW. For the selected networks, STR is 16% more efficient than DaDN. The power overhead of PRA_single (PRA_4b) is more than the speedup, resulting in a circuit that is 5% less efficient than DaDN. PRA_2b reduces that power overhead while maintaining performance, yielding an efficiency of 28%. PRA_2b-1R yields the best efficiency, at 48% over DaDN.

Figure 10: Relative energy efficiency.

}, {"section_index": "7", "section_name": "6.3 IMPROVED ONEFFSET ENCODING", "section_text": "Figure 9 reports performance for Pragmatic when using the enhanced oneffset generator described in Section 5.1. The considered configurations include PRA_0b, PRA_1b and PRA_2b (with pallet synchronization), whose performance improves by 26%, 48%, and 41% respectively. A cause of degradation for PRA_0b is the increased spread of oneffset values (for example, the pair of neurons 011101, 010101 takes 4 cycles with conventional encoding and 5 with enhanced encoding, even though the total count of oneffsets is reduced from 7 to 6).

Figure 9: Relative performance of Pragmatic using Improved Oneffset Encoding for different configurations. Marked: performance without using IOE.

}, {"section_index": "8", "section_name": "6.4 THE IMPACT OF SOFTWARE", "section_text": "All PRA configurations studied thus far used software-provided per-layer activation precisions to reduce essential bit content. PRA does not require these precisions to operate. Table 5 shows the performance benefit that software guidance confers for the configurations studied. The results demonstrate that: 1) PRA would outperform the other architectures even without software guidance, and 2) on average, software guidance improves performance by 19%.

Table 5: Performance benefit due to software guidance.

Figure 11 reports performance for DaDN and PRA configurations using the 8-bit quantized representation used in Tensorflow (Warden, 2016; Google, 2016). This quantization uses 8 bits to specify arbitrary minimum and maximum limits per layer for the activations and the weights separately, and maps the 256 available 8-bit values linearly into the resulting interval. This representation has higher flexibility and better utilization than the reduced-precision approach of Stripes, since the range doesn't have to be symmetrical and the limits don't have to be powers of two, while still allowing straightforward multiplication of the values. The limit values are set to the maximum and the minimum activation values for each layer, and the quantization uses the recommended rounding mode.

Figure 11: Performance with the 8-bit quantized representation (marked: without IOE).

A detailed evaluation of such designs is left for future work; however, the absolute area and energy needed by all will be lower due to the narrower representation. Moreover, given that the tile logic will occupy relatively less area for the whole chip, and given that the SB and NM account for significant area and energy, the overall overheads of the PRA designs over DaDN will be lower than those measured for the 16-bit fixed-point configurations.

}, {"section_index": "9", "section_name": "7 RELATED WORK", "section_text": "The acceleration of Deep Learning is an active area of research and has yielded numerous proposals for hardware acceleration. DaDianNao (DaDN) is the de facto standard for high-performance DNN acceleration (Chen et al., 2014). In the interest of space, this section restricts attention to methods that are either directly related to DaDN, or that follow a value-based approach to DNN acceleration, as Pragmatic falls under this category of accelerators. Value-based accelerators exploit the properties of the values being processed to further improve performance or energy beyond what is possible by exploiting computation structure alone. Cnvlutin (Albericio et al., 2016) and Stripes (Judd et al.,
"}, {"section_index": "9", "section_name": "7 RELATED WORK", "section_text": "The acceleration of Deep Learning is an active area of research and has yielded numerous proposals for hardware acceleration. DaDianNao (DaDN) is the de facto standard for high-performance DNN acceleration (Chen et al. (2014)). In the interest of space, this section restricts attention to methods that are either directly related to DaDN, or that follow a value-based approach to DNN acceleration, as Pragmatic falls under this category of accelerators. Value-based accelerators exploit the properties of the values being processed to further improve performance or energy beyond what is possible by exploiting computation structure alone. Cnvlutin (Albericio et al. (2016)) and Stripes (Judd et al. (2016b); Judd et al. (2016a)) are such accelerators, and they have already been discussed and compared against in this work.

PuDianNao is a hardware accelerator that supports seven machine learning algorithms including DNNs (Liu et al. (2015)). ShiDianNao is a camera-integrated low-power accelerator that exploits integration to reduce communication overheads and to further improve energy efficiency (Du et al. (2015)). Cambricon is the first instruction set architecture for Deep Learning (Liu et al. (2016)). Minerva is a highly automated software and hardware co-design approach targeting ultra-low-voltage, highly-efficient DNN accelerators (Reagen et al. (2016)). Eyeriss is a low-power, real-time DNN accelerator that exploits zero-valued activations for memory compression and energy reduction (Chen, Yu-Hsin and Krishna, Tushar and Emer, Joel and Sze, Vivienne (2016)). The Efficient Inference Engine (EIE) exploits efficient activation and weight representations and pruning to greatly reduce communication costs, to improve energy efficiency, and to boost performance by avoiding certain ineffectual computations (Han et al. (2016); Han et al. (2015)). EIE targets fully-connected (FC) layers and was shown to be 12x more efficient than DaDN on FC layers, and 2x less efficient for convolutional layers. All aforementioned accelerators use bit-parallel units. While this work has demonstrated Pragmatic as a modification of DaDN, its computation units, and potentially its general approach, could be compatible with all aforementioned accelerator designs. This investigation is interesting future work.

Profiling has been used to determine the precision requirements of a neural network for a hardwired implementation (Kim et al. (2014)). EoP has been exploited in general-purpose hardware and other application domains. For example, Brooks et al. (Brooks & Martonosi (1999)) exploit the prefix bits due to EoP to turn off parts of the datapath, improving energy. Park et al. (Park et al. (2010)) use a similar approach to trade off image quality for improved energy efficiency. Neither approach directly improves performance.

"}, {"section_index": "10", "section_name": "8 CONCLUSION", "section_text": "To the best of our knowledge, Pragmatic is the first DNN accelerator that exploits not only the per-layer precision requirements of CNNs but also the essential bit information content of the activation values. While this work targeted high-performance implementations, Pragmatic's core approach should be applicable to other hardware accelerators. We have investigated Pragmatic only for inference and with image classification convolutional neural networks. While desirable, applying the same concept to other network types, and to layers other than the convolutional one, is left for future work. It would also be interesting to study how the Pragmatic concepts can be applied to more general-purpose accelerators or even graphics processors.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Jorge Albericio, Patrick Judd, Tayler Hetherington, Tor Aamodt, Natalie Enright Jerger, and Andreas Moshovos. Cnvlutin: Ineffectual-neuron-free deep neural network computing. In 2016 IEEE/ACM International Conference on Computer Architecture (ISCA), 2016.
David Brooks and Margaret Martonosi. Dynamically exploiting narrow width operands to improve processor power and performance. In Proceedings of the 5th International Symposium on High Performance Computer Architecture, HPCA '99, Washington, DC, USA, 1999. IEEE Computer Society. ISBN 0-7695-0004-8. URL http://dl.acm.org/citation.cfm?id=520549.822763.

Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. EIE: Efficient Inference Engine on Compressed Deep Neural Network. arXiv:1602.01528 [cs], February 2016. URL http://arxiv.org/abs/1602.01528.

Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor Aamodt, Natalie Enright Jerger, Raquel Urtasun, and Andreas Moshovos. Reduced-precision strategies for bounded memory in deep neural nets. arXiv:1511.05236v4 [cs.LG], 2015.

Patrick Judd, Jorge Albericio, Tayler Hetherington, Tor Aamodt, and Andreas Moshovos. Stripes: Bit-serial deep neural network computing. In Proceedings of the 49th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-49, 2016a.

Patrick Judd, Jorge Albericio, and Andreas Moshovos. Stripes: Bit-serial deep neural network computing. Computer Architecture Letters, 2016b.

Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, and O. Temam. DaDianNao: A machine-learning supercomputer. In Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pp. 609-622, Dec 2014. doi: 10.1109/MICRO.2014.58.

Daofu Liu, Tianshi Chen, Shaoli Liu, Jinhong Zhou, Shengyuan Zhou, Olivier Teman, Xiaobing Feng, Xuehai Zhou, and Yunji Chen. PuDianNao: A polyvalent machine learning accelerator. In Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS '15, pp. 369-381, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-2835-7. doi: 10.1145/2694344.2694358. URL http://doi.acm.org/10.1145/2694344.2694358.

Naveen Muralimanohar and Rajeev Balasubramonian. Cacti 6.0: A tool to understand large caches.

Synopsys. Design Compiler. http://www.synopsys.com/Tools/Implementation/RTLSynthesis/DesignCompiler/Pages.

"}, {"section_index": "12", "section_name": "9.1 PRAGMATIC'S POTENTIAL", "section_text": "This appendix complements the analysis of Section 2 by estimating the potential of an idealized Pragmatic accelerator that can skip any term (product of a full-precision weight and one input activation bit) while also improving execution time proportionally. Note that the number of terms is considered before the Improved Oneffset Encoding described in Section 5.1 is applied.

To estimate PRA's potential, this section compares the number of terms that would be processed by various computing engines for the convolutional layers of recent CNNs (see Section 6), for the two aforementioned baseline activation representations.
16-bit Fixed-Point Representation: The following computing engines are considered: 1) a baseline representative of DaDN using 16-bit fixed-point bit-parallel units (Chen et al. (2014)), 2) a hypothetical enhanced baseline ZN that can skip all zero-valued activations, 3) Cnvlutin (CVN), a practical design that can skip zero-valued activations for all but the first layer (Albericio et al. (2016)), 4) STR, which avoids EoP (see Table 2, Section 6) (Judd et al. (2016b)), 5) an ideal, software-transparent PRA, PRA-fp16, that processes only the essential activation bits, and 6) an ideal PRA, PRA-red, where software communicates in advance how many prefix and suffix bits can be zeroed out after each layer (see Section 5.1).

Figure 12a reports the number of terms normalized over DaDN, where each multiplication is accounted for using an equivalent number of terms, or equivalently additions: 16 for DaDN, ZN, and CVN; p for a layer using a precision of p bits for STR; and the number of essential activation bits for PRA-fp16 and PRA-red. For example, for n = 10.001(2), the number of additions counted would be 16 for DaDN and CVN, 5 for STR as it could use a 5-bit fixed-point representation, and 2 for PRA-fp16 and PRA-red.

On average, STR reduces the number of terms to 53% compared to DaDN, while skipping just the zero-valued activations could reduce them to 39% if ZN were practical, and to 63% in practice with CVN. PRA-fp16 can ideally reduce the number of additions to just 10% on average, while with software-provided precisions per layer, PRA-red reduces the number of additions further to 8% on average. The potential savings are robust across all CNNs, remaining above 87% for all DNNs with PRA-red.

[Figure 12 charts: number of terms relative to the bit-parallel baseline for each network and the average; panel (a) 16-bit fixed-point, comparing ZN, CVN, STR, PRA-fp16 and PRA-red; panel (b) 8-bit quantized, comparing ZN and PRA.]

Figure 12: Convolutional layer computational demands.

8-bit Quantized Representation: Figure 12b shows the relative number of terms processed for: 1) a bit-parallel baseline, 2) an ideal, yet impractical, bit-parallel engine that skips all zero activations, and 3) PRA. In the interest of space, and since PRA subsumes STR and CVN, they are not considered. Pragmatic's benefits are significant even with an 8-bit quantized representation. On average, skipping all the zero-valued activations would eliminate only 30% of the terms, whereas Pragmatic would remove up to 71% of the terms.

"}, {"section_index": "13", "section_name": "9.2 ESSENTIAL BIT CONTENT DISTRIBUTIONS", "section_text": "This section reports the distributions of the essential bit count for the activations processed per convolutional layer for the networks studied. Three distributions are shown per network, one for each of three activation representations: 1) 16-bit fixed-point, 2) per-layer fixed-point, and 3) 8-bit quantized. A peak appears at four 1-bits for the quantized representation, since the value zero is mapped to a non-zero index having four bits that are one (114). Note that, as in Section 9.1, the distributions are taken before Improved Oneffset Encoding is applied.
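The term-counting rules above are easy to state in code. The sketch below (ours; the binary point of the text's 10.001(2) example is dropped, so the pattern is written 10001 in binary) counts additions per activation for the bit-parallel baseline, for STR under a p-bit per-layer precision, and for PRA, and also produces essential-bit histograms of the kind reported in this section.

```python
import numpy as np

terms_baseline = lambda a: 16                 # DaDN, ZN, CVN: 16 terms per product
terms_str      = lambda a, p: p               # STR: one term per bit of precision p
terms_pra      = lambda a: bin(a).count("1")  # PRA: one term per essential bit

a = 0b10001                                   # the example pattern from the text
print(terms_baseline(a), terms_str(a, p=5), terms_pra(a))   # -> 16 5 2

acts = np.random.randint(0, 2 ** 16, size=10_000)           # stand-in activations
counts = np.bincount([terms_pra(int(v)) for v in acts], minlength=17)
print(counts / counts.sum())                  # essential-bit ('1'-bit) count distribution
```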
[Figures 13-18: for each network, per-layer histograms of activation '1'-bit counts under (a) the 16-bit full-precision representation, (b) the 16-bit per-layer-precision representation, and (c) the quantized representation.]

Figure 13: AlexNet: Per Layer '1'-bit Count Distributions.

Figure 14: NiN: Per Layer '1'-bit Count Distributions.

Figure 15: GoogLeNet: Per Layer '1'-bit Count Distributions.

Figure 16: VGG_M: Per Layer '1'-bit Count Distributions.

Figure 17: VGG_S: Per Layer '1'-bit Count Distributions.

Figure 18: VGG_19: Per Layer '1'-bit Count Distributions.

"}]
BkVsEMYel

[{"section_index": "0", "section_name": "INDUCTIVE BIAS OF DEEP CONVOLUTIONAL NETWORKS THROUGH POOLING GEOMETRY", "section_text": "Nadav Cohen & Amnon Shashua

{cohennadav,shashua}@cs.huji.ac.il

Our formal understanding of the inductive bias that drives the success of convolutional networks on computer vision tasks is limited. In particular, it is unclear what makes hypotheses spaces born from convolution and pooling operations so suitable for natural images. In this paper we study the ability of convolutional networks to model correlations among regions of their input. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on other types of convolutional networks as well. Correlations are formalized through the notion of separation rank, which for a given partition of the input measures how far a function is from being separable. We show that a polynomially sized deep network supports exponentially high separation ranks for certain input partitions, while being limited to polynomial separation ranks for others. The network's pooling geometry effectively determines which input partitions are favored, thus serves as a means for controlling the inductive bias. Contiguous pooling windows as commonly employed in practice favor interleaved partitions over coarse ones, orienting the inductive bias towards the statistics of natural images. Other pooling schemes lead to different preferences, and this allows tailoring the network to data that departs from the usual domain of natural imagery. In addition to analyzing deep networks, we show that shallow ones support only linear separation ranks, and by this gain insight into the benefit of functions brought forth by depth - they are able to efficiently model strong correlation under favored partitions of the input.

"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A central factor in the application of machine learning to a given task is the inductive bias, i.e. the choice of hypotheses space from which learned functions are taken. The restriction posed by the inductive bias is necessary for practical learning, and reflects prior knowledge regarding the task at hand. Perhaps the most successful exemplar of inductive bias to date manifests itself in the use of convolutional networks (LeCun and Bengio (1995)) for computer vision tasks. These hypotheses spaces are delivering unprecedented visual recognition results (e.g. Krizhevsky et al. (2012); Szegedy et al. (2015); Simonyan and Zisserman (2014); He et al. (2015)), largely responsible for the resurgence of deep learning (LeCun et al. (2015)). Unfortunately, our formal understanding of the inductive bias behind convolutional networks is limited - the assumptions encoded into these models, which seem to form an excellent prior knowledge for imagery data, are for the most part a mystery.

Existing works studying the inductive bias of deep networks (not necessarily convolutional) do so in the context of depth efficiency, essentially arguing that for a given amount of resources, more layers result in higher expressiveness. More precisely, depth efficiency refers to a situation where a function realized by a deep network of polynomial size requires super-polynomial size in order to be realized (or approximated) by a shallower network.
In recent years, a large body of research was devoted to proving existence of depth efficiency under different types of architectures (see for example Delalleau and Bengio (2011); Pascanu et al. (2013); Montufar et al. (2014); Telgarsky (2015); Eldan and Shamir (2015); Poggio et al. (2015); Mhaskar et al. (2016)). Nonetheless, despite the wide attention it is receiving, depth efficiency does not convey the complete story behind the inductive bias of deep networks. While it does suggest that depth brings forth functions that are otherwise unattainable, it does not explain why these functions are useful. Loosely speaking, the hypotheses space of a polynomially sized deep network covers a small fraction of the space of all functions. We would like to understand why this small fraction is so successful in practice.

A specific family of convolutional networks gaining increased attention is that of convolutional arithmetic circuits. These models follow the standard paradigm of locality, weight sharing and pooling, yet differ from the most conventional convolutional networks in that their point-wise activations are linear, with non-linearity originating from product pooling. Recently, Cohen et al. (2016b) analyzed the depth efficiency of convolutional arithmetic circuits, showing that besides a negligible (zero measure) set, all functions realizable by a deep network require exponential size in order to be realized (or approximated) by a shallow one. This result, termed complete depth efficiency, stands in contrast to previous depth efficiency results, which merely showed existence of functions efficiently realizable by deep networks but not by shallow ones. Besides their analytic advantage, convolutional arithmetic circuits are also showing promising empirical performance. In particular, they are equivalent to SimNets - a deep learning architecture that excels in computationally constrained settings (Cohen and Shashua (2014); Cohen et al. (2016a)) - and, in addition, have recently been utilized for classification with missing data (Sharir et al. (2016)). Motivated by these theoretical and practical merits, we focus our analysis in this paper on convolutional arithmetic circuits, viewing them as representative of the class of convolutional networks. We empirically validate our conclusions with both convolutional arithmetic circuits and convolutional rectifier networks - convolutional networks with rectified linear (ReLU, Nair and Hinton (2010)) activation and max or average pooling. Adaptation of the formal analysis to networks of the latter type, similarly to the adaptation of the analysis in Cohen et al. (2016b) carried out by Cohen and Shashua (2016), is left for future work.

Our analysis approaches the study of inductive bias from the direction of function inputs. Specifically, we study the ability of convolutional arithmetic circuits to model correlation between regions of their input. To analyze the correlations of a function, we consider different partitions of input regions into disjoint sets, and ask how far the function is from being separable w.r.t. these partitions. Distance from separability is measured through the notion of separation rank (Beylkin and Mohlenkamp (2002)), which can be viewed as a surrogate of the L2 distance from the closest separable function.
For a given function and partition of its input, high separation rank implies that the function induces strong correlation between sides of the partition, and vice versa.

We show that a deep network supports exponentially high separation ranks for certain input partitions, while being limited to polynomial or linear (in network size) separation ranks for others. The network's pooling geometry effectively determines which input partitions are favored in terms of separation rank, i.e. which partitions enjoy the possibility of exponentially high separation rank with polynomial network size, and which require the network to be exponentially large. The standard choice of square contiguous pooling windows favors interleaved (entangled) partitions over coarse ones that divide the input into large distinct areas. Other choices lead to different preferences; for example, pooling windows that join together nodes with their spatial reflections lead to favoring partitions that split the input symmetrically. We conclude that in terms of modeled correlations, pooling geometry controls the inductive bias, and the particular design commonly employed in practice orients it towards the statistics of natural images (nearby pixels more correlated than ones that are far apart). Moreover, when processing data that departs from the usual domain of natural imagery, prior knowledge regarding its statistics can be used to derive respective pooling schemes, and accordingly tailor the inductive bias.

With regards to depth efficiency, we show that separation ranks under favored input partitions are exponentially high for all but a negligible set of the functions realizable by a deep network. Shallow networks on the other hand treat all partitions equally, and support only linear (in network size) separation ranks. Therefore, almost all functions that may be realized by a deep network require a replicating shallow network to have exponential size. By this we return to the complete depth efficiency result of Cohen et al. (2016b), but with an added important insight into the benefit of functions brought forth by depth - they are able to efficiently model strong correlation under favored partitions of the input.

The remainder of the paper is organized as follows. Sec. 2 provides a brief presentation of necessary background material from the field of tensor analysis. Sec. 3 describes the convolutional arithmetic circuits we analyze, and their relation to tensor decompositions. In sec. 4 we convey the concept of separation rank, on which we base our analyses in sec. 5 and 6. The conclusions from our analyses are empirically validated in sec. 7. Finally, sec. 8 concludes.

"}, {"section_index": "2", "section_name": "PRELIMINARIES", "section_text": "The analyses carried out in this paper rely on concepts and results from the field of tensor analysis. In this section we establish the minimal background required in order to follow our arguments[1], referring the interested reader to Hackbusch (2012) for a broad and comprehensive introduction to the field.

The core concept in tensor analysis is a tensor, which for our purposes may simply be thought of as a multi-dimensional array. The order of a tensor is defined to be the number of indexing entries in the array, which are referred to as modes. The dimension of a tensor in a particular mode is defined as the number of values that may be taken by the index in that mode. For example, a 4-by-3 matrix is a tensor of order 2, i.e.
it has two modes, with dimension 4 in mode 1 and dimension 3 in mode 2. If A is a tensor of order N and dimension M_i in each mode i ∈ [N] := {1,...,N}, the space of all configurations it can take is denoted, quite naturally, by R^{M_1×···×M_N}.

A fundamental operator in tensor analysis is the tensor product, which we denote by ⊗. It is an operator that intakes two tensors A ∈ R^{M_1×···×M_P} and B ∈ R^{M_{P+1}×···×M_{P+Q}} (orders P and Q respectively), and returns a tensor A ⊗ B ∈ R^{M_1×···×M_{P+Q}} (order P + Q) defined by: (A ⊗ B)_{d_1...d_{P+Q}} = A_{d_1...d_P} · B_{d_{P+1}...d_{P+Q}}. Notice that in the case P = Q = 1, the tensor product reduces to the standard outer product between vectors, i.e. if u ∈ R^{M_1} and v ∈ R^{M_2}, then u ⊗ v is no other than the rank-1 matrix uv^T ∈ R^{M_1×M_2}.

We now introduce the important concept of matricization, which is essentially the rearrangement of a tensor as a matrix. Suppose A is a tensor of order N and dimension M_i in each mode i ∈ [N], and let (I, J) be a partition of [N], i.e. I and J are disjoint subsets of [N] whose union gives [N]. We may write I = {i_1,...,i_{|I|}} where i_1 < ··· < i_{|I|}, and similarly J = {j_1,...,j_{|J|}} where j_1 < ··· < j_{|J|}. The matricization of A w.r.t. the partition (I, J), denoted [A]_{I,J}, is the Π_{t=1}^{|I|} M_{i_t}-by-Π_{t=1}^{|J|} M_{j_t} matrix holding the entries of A such that A_{d_1...d_N} is placed in row index 1 + Σ_{t=1}^{|I|} (d_{i_t} − 1) Π_{t'=t+1}^{|I|} M_{i_{t'}} and column index 1 + Σ_{t=1}^{|J|} (d_{j_t} − 1) Π_{t'=t+1}^{|J|} M_{j_{t'}}. If I = ∅ or J = ∅, then by definition [A]_{I,J} is a row or column (respectively) vector of dimension Π_{t=1}^{N} M_t holding A_{d_1...d_N} in entry 1 + Σ_{t=1}^{N} (d_t − 1) Π_{t'=t+1}^{N} M_{t'}.

A well known matrix operator is the Kronecker product, which we denote by ⊙. For two matrices A ∈ R^{M_1×M_2} and B ∈ R^{N_1×N_2}, A ⊙ B is the matrix in R^{M_1 N_1 × M_2 N_2} holding A_{ij} B_{kl} in row index (i − 1)N_1 + k and column index (j − 1)N_2 + l. Let A and B be tensors of orders P and Q respectively, and let (I, J) be a partition of [P + Q]. The basic relation that binds together the tensor product, the matricization operator, and the Kronecker product is:

[A ⊗ B]_{I,J} = [A]_{I∩[P], J∩[P]} ⊙ [B]_{(I−P)∩[Q], (J−P)∩[Q]}    (1)

where I − P and J − P are simply the sets obtained by subtracting P from each of the elements in I and J respectively. In words, eq. 1 implies that the matricization of the tensor product between A and B w.r.t. the partition (I, J) of [P + Q] is equal to the Kronecker product between two matricizations: that of A w.r.t. the partition of [P] induced by the lower values of (I, J), and that of B w.r.t. the partition of [Q] induced by the higher values of (I, J).
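Since the matricization arithmetic is easy to get wrong, here is a short NumPy sketch (ours) that implements the definition above for tensors with equal mode dimension M, and numerically confirms the relation in eq. 1.

```python
import numpy as np

def matricize(T, I, J, M):
    """[T]_{I,J}: sort I and J, permute modes so I comes first, then flatten each
    group lexicographically (row-major), exactly as in the definition above."""
    I, J = sorted(I), sorted(J)
    return np.transpose(T, axes=I + J).reshape(M ** len(I), M ** len(J))

M, P, Q = 2, 2, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((M,) * P)
B = rng.standard_normal((M,) * Q)
T = np.multiply.outer(A, B)                 # the tensor product A (x) B, order P + Q

I, J = [0, 2], [1, 3]                       # a partition of the P + Q modes (0-indexed)
lhs = matricize(T, I, J, M)
rhs = np.kron(matricize(A, [0], [1], M),    # partition of [P] induced by lower values
              matricize(B, [0], [1], M))    # partition of [Q] induced by higher values
print(np.allclose(lhs, rhs))                # True: eq. (1)
```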
"}, {"section_index": "3", "section_name": "3 CONVOLUTIONAL ARITHMETIC CIRCUITS", "section_text": "The convolutional arithmetic circuit architecture on which we focus in this paper is the one considered in Cohen et al. (2016b), portrayed in fig. 1(a). Instances processed by a network are represented as N-tuples of s-dimensional vectors. They are generally thought of as images, with the s-dimensional vectors corresponding to local patches. For example, instances could be 32-by-32 RGB images, with local patches being 5×5 regions crossing the three color bands. In this case, assuming a patch is taken around every pixel in an image (boundaries padded), we have N = 1024 and s = 75. Throughout the paper, we denote a general instance by X = (x_1,...,x_N), with x_1...x_N ∈ R^s standing for its patches.

1 The definitions we give are actually concrete special cases of more abstract algebraic definitions as given in Hackbusch (2012). We limit the discussion to these special cases since they suffice for our needs and are easier to grasp.

[Figure 1 diagrams: (a) the deep network, with rep(i,d) = f_{θ_d}(x_i), conv_l(j,γ) = ⟨a^{l,γ}, pool_{l−1}(j,:)⟩ (conv_0 operating on rep), pool_l(j,γ) = Π_{j'∈window j} conv_l(j',γ), pool_{L−1}(γ) = Π_{j' covers space} conv_{L−1}(j',γ), and out(y) = ⟨a^{L,y}, pool_{L−1}(:)⟩; (b) the shallow network, with conv(i,γ) = ⟨a^{0,γ}, rep(i,:)⟩, global pooling pool(γ) = Π_{i covers space} conv(i,γ), and out(y) = ⟨a^{1,y}, pool(:)⟩; (c) an 8×8 grid of patch indexes with the two partition patterns highlighted.]

Figure 1: Best viewed in color. (a) Convolutional arithmetic circuit architecture analyzed in this paper (see description in sec. 3). (b) Shallow network with global pooling in its single hidden layer. (c) Illustration of input patch ordering for a deep network with 2×2 pooling windows, along with patterns induced by the partitions (I_odd, J_even) and (I_low, J_high) (eq. 8 and 9 respectively).

The first layer in a network is referred to as representation. It consists of applying M representation functions f_{θ_1}...f_{θ_M} : R^s → R to all patches, thereby creating M feature maps. In the case where representation functions are chosen as f_{θ_d}(x) = σ(w_d^T x + b_d), with parameters θ_d = (w_d, b_d) ∈ R^s × R and some point-wise activation σ(·), the representation layer reduces to a standard convolutional layer. More elaborate settings are also possible, for example modeling the representation as a cascade of convolutional layers with pooling in-between. Following the representation, a network includes L hidden layers indexed by l = 0...L−1. Each hidden layer l begins with a 1×1 conv operator, which is simply a three-dimensional convolution with r_l channels and filters of spatial dimensions 1-by-1.[2] This is followed by spatial pooling, that decimates feature maps by taking products of non-overlapping two-dimensional windows that cover the spatial extent. The last of the L hidden layers (l = L−1) reduces feature maps to singletons (its pooling operator is global), creating a vector of dimension r_{L−1}. This vector is mapped into Y network outputs through a final dense linear layer.

Altogether, the architectural parameters of a network are the type of representation functions (f_{θ_d}), the pooling window shapes and sizes (which in turn determine the number of hidden layers L), and the number of channels in each layer (M for representation, r_0...r_{L−1} for hidden layers, Y for output). Given these architectural parameters, the learnable parameters of a network are the representation weights (θ_d for channel d), the conv weights (a^{l,γ} for channel γ of hidden layer l), and the output weights (a^{L,y} for output node y).

For a particular setting of weights, every node (neuron) in a given network realizes a function from (R^s)^N to R. The receptive field of a node refers to the indexes of input patches on which its function may depend. For example, the receptive field of node j in channel γ of the conv operator at hidden layer 0 is {j}, and that of an output node is [N], corresponding to the entire input.

2 Cohen et al. (2016b) consider two settings for the 1×1 conv operator. The first, referred to as weight sharing, is the one described above, and corresponds to standard convolution. The second is more general, allowing filters that slide across the previous layer to have different weights at different spatial locations. It is shown in Cohen et al. (2016b) that without weight sharing, a convolutional arithmetic circuit with one hidden layer (or more) is universal, i.e. can realize any function if its size (width) is unbounded. This property is imperative for the study of depth efficiency, as that requires shallow networks to ultimately be able to replicate any function realized by a deep network. In this paper we limit the presentation to networks with weight sharing, which are not universal. We do so because they are more conventional, and since our entire analysis is oblivious to whether or not weights are shared (applies as is to both settings). The only exception is where we reproduce the depth efficiency result of Cohen et al. (2016b). There, we momentarily consider networks without weight sharing.
Denote by h^{(l,γ,j)} the function realized by node j of channel γ in the conv operator at hidden layer l, and let I^{(l,γ,j)} ⊆ [N] be its receptive field. By the structure of the network it is evident that I^{(l,γ,j)} does not depend on γ, so we may write I^{(l,j)} instead. Moreover, assuming pooling windows are uniform across channels (as customary with convolutional networks), and taking into account the fact that they do not overlap, we conclude that I^{(l,j_1)} and I^{(l,j_2)} are necessarily disjoint if j_1 ≠ j_2. A simple induction over l = 0...L−1 then shows that h^{(l,γ,j)} may be expressed as h^{(l,γ,j)}(x_{i_1},...,x_{i_T}) = Σ_{d_1...d_T=1}^{M} A^{(l,γ,j)}_{d_1,...,d_T} Π_{t=1}^{T} f_{θ_{d_t}}(x_{i_t}), where {i_1,...,i_T} is the receptive field I^{(l,j)}, and A^{(l,γ,j)} is a tensor of order T = |I^{(l,j)}| and dimension M in each mode, with entries given by polynomials in the network's conv weights {a^{l',γ'}}_{l',γ'}. Taking the induction one step further (from last hidden layer to network output), we obtain the following expression for functions realized by network outputs:

h_y(x_1,...,x_N) = Σ_{d_1...d_N=1}^{M} A^y_{d_1,...,d_N} Π_{i=1}^{N} f_{θ_{d_i}}(x_i)    (2)

y ∈ [Y] here is an output node index, and h_y is the function realized by that node. A^y is a tensor of order N and dimension M in each mode, with entries given by polynomials in the network's conv weights {a^{l,γ}}_{l,γ} and output weights a^{L,y}. Hereafter, terms such as function realized by a network or coefficient tensor realized by a network are to be understood as referring to h_y or A^y respectively. Next, we present explicit expressions for A^y under two canonical networks - deep and shallow.

Deep network. Consider a network as in fig. 1(a), with pooling windows set to cover four entries each, resulting in L = log_4 N hidden layers. The linear weights of such a network are {a^{0,γ} ∈ R^M}_{γ∈[r_0]} for the conv operator in hidden layer 0, {a^{l,γ} ∈ R^{r_{l−1}}}_{γ∈[r_l]} for the conv operator in hidden layer l = 1...L−1, and {a^{L,y} ∈ R^{r_{L−1}}}_{y∈[Y]} for the dense output operator. They determine the coefficient tensor A^y (eq. 2) through the following recursive decomposition:

φ^{1,j,γ} = Σ_{α=1}^{r_0} a^{1,γ}_α ⊗^4 a^{0,α}    (order 4, γ ∈ [r_1])
φ^{l,j,γ} = Σ_{α=1}^{r_{l−1}} a^{l,γ}_α ⊗_{t=1}^{4} φ^{l−1,4(j−1)+t,α}    (order 4^l, l ∈ {2...L−1}, γ ∈ [r_l])
A^y = Σ_{α=1}^{r_{L−1}} a^{L,y}_α ⊗_{t=1}^{4} φ^{L−1,t,α}    (order 4^L = N)    (3)

a^{l,γ}_α and a^{L,y}_α here are scalars representing entry α in the vectors a^{l,γ} and a^{L,y} respectively, and the ⊗ symbol with a superscript stands for a repeated tensor product, e.g. ⊗^4 a^{0,α} := a^{0,α} ⊗ a^{0,α} ⊗ a^{0,α} ⊗ a^{0,α}. To verify that under pooling windows of size four A^y is indeed given by eq. 3, simply plug the rows of the decomposition into eq. 2, starting from the bottom and continuing upwards. For context, eq. 3 describes what is known as a hierarchical tensor decomposition (see chapter 11 in Hackbusch (2012)), with the underlying tree over modes being a full quad-tree (corresponding to the fact that the network's pooling windows cover four entries each).

Shallow network. The second network we pay special attention to is shallow, comprising a single hidden layer with global pooling - see illustration in fig. 1(b). The linear weights of such a network are {a^{0,γ} ∈ R^M}_{γ∈[r_0]} for the hidden conv operator and {a^{1,y} ∈ R^{r_0}}_{y∈[Y]} for the dense output operator. They determine the coefficient tensor A^y (eq. 2) as follows:

A^y = Σ_{γ=1}^{r_0} a^{1,y}_γ ⊗^N a^{0,γ}    (4)

where a^{1,y}_γ stands for entry γ of a^{1,y}, and again, the ⊗ symbol with a superscript represents a repeated tensor product.
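The equality between the network's forward pass and the tensor expression of eq. 2 with A^y from eq. 4 can be checked directly. Below is a minimal sketch (ours) for the shallow network of fig. 1(b) with N = 3 abstract patches; representation values are drawn at random rather than computed from actual f_{θ_d}.

```python
import numpy as np

def tpow(v, n):                              # repeated tensor product (x)^n v
    T = v
    for _ in range(n - 1):
        T = np.multiply.outer(T, v)
    return T

N, M, r0, Y = 3, 2, 4, 2
rng = np.random.default_rng(0)
a0  = rng.standard_normal((r0, M))           # hidden 1x1 conv weights a^{0,gamma}
a1  = rng.standard_normal((Y, r0))           # output weights a^{1,y}
rep = rng.standard_normal((N, M))            # rep[i, d] = f_{theta_d}(x_i)

out = a1 @ np.prod(rep @ a0.T, axis=0)       # forward pass: conv, product pool, dense

for y in range(Y):                           # eq. 4 tensor, contracted as in eq. 2
    Ay = sum(a1[y, g] * tpow(a0[g], N) for g in range(r0))
    hy = np.einsum('abc,a,b,c->', Ay, rep[0], rep[1], rep[2])
    print(np.isclose(hy, out[y]))            # True for every output y
```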
The tensor decomposition in eq. 4 is an instance of the classic CP decomposition, also known as rank-1 decomposition (see Kolda and Bader (2009) for a historic survey).

To conclude this section, we relate the background material above, as well as our contribution described in the upcoming sections, to the work of Cohen et al. (2016b). The latter shows that with arbitrary coefficient tensors A^y, functions h_y as in eq. 2 form a universal hypotheses space. It is then shown that convolutional arithmetic circuits as in fig. 1(a) realize such functions by applying tensor decompositions to A^y, with the type of decomposition determined by the structure of a network (number of layers, number of channels in each layer etc.). The deep network (fig. 1(a) with size-4 pooling windows and L = log_4 N hidden layers) and the shallow network (fig. 1(b)) presented hereinabove are two special cases, whose corresponding tensor decompositions are given in eq. 3 and 4 respectively. The central result in Cohen et al. (2016b) relates to inductive bias through the notion of depth efficiency - it is shown that in the parameter space of a deep network, all weight settings but a set of (Lebesgue) measure zero give rise to functions that can only be realized (or approximated) by a shallow network if the latter has exponential size. This result does not relate to the characteristics of instances X = (x_1,...,x_N); it only treats the ability of shallow networks to replicate functions realized by deep networks.

In this paper we draw a line connecting the inductive bias to the nature of X, by studying the relation between a network's architecture and its ability to model correlation among patches x_i. Specifically, in sec. 4 we consider partitions (I, J) of [N] (I ∪ J = [N], where ∪ here stands for disjoint union), and present the notion of separation rank as a measure of the correlation modeled between the patches indexed by I and those indexed by J. In sec. 5.1 the separation rank of a network's function h_y w.r.t. a partition (I, J) is proven to be equal to the rank of [A^y]_{I,J} - the matricization of the coefficient tensor A^y w.r.t. (I, J). Sec. 5.2 derives lower and upper bounds on this rank for a deep network, showing that it supports exponential separation ranks with polynomial size for certain partitions, whereas for others it is required to be exponentially large. Subsequently, sec. 5.3 establishes an upper bound on rank[A^y]_{I,J} for shallow networks, implying that these must be exponentially large in order to model exponential separation rank under any partition, and thus cannot efficiently replicate a deep network's correlations. Our analysis concludes in sec. 6, where we discuss the pooling geometry of a deep network as a means for controlling the inductive bias by determining a correspondence between partitions (I, J) and spatial partitions of the input. Finally, we demonstrate experimentally in sec. 7 how different pooling geometries lead to superior performance in different tasks. Our experiments include not only convolutional arithmetic circuits, but also convolutional rectifier networks, i.e. convolutional networks with ReLU activation and max or average pooling.

"}, {"section_index": "4", "section_name": "4 SEPARATION RANK", "section_text": "In this section we define the concept of separation rank for functions realized by convolutional arithmetic circuits (sec. 3), i.e. real functions that take as input X = (x_1,...,x_N) ∈ (R^s)^N.
The separation rank serves as a measure of the correlations such functions induce between different sets of input patches, i.e. different subsets of the variable set {x_1,...,x_N}.

Let (I, J) be a partition of input indexes, i.e. I and J are disjoint subsets of [N] whose union gives [N]. We may write I = {i_1,...,i_{|I|}} where i_1 < ··· < i_{|I|}, and similarly J = {j_1,...,j_{|J|}} where j_1 < ··· < j_{|J|}. For a function h : (R^s)^N → R, the separation rank w.r.t. the partition (I, J) is defined as follows:

sep(h; I, J) := min{ R ∈ N ∪ {0} : ∃ g_1...g_R : (R^s)^{|I|} → R, g'_1...g'_R : (R^s)^{|J|} → R s.t. h(x_1,...,x_N) = Σ_{ν=1}^{R} g_ν(x_{i_1},...,x_{i_{|I|}}) g'_ν(x_{j_1},...,x_{j_{|J|}}) }    (5)

In words, it is the minimal number of summands that together give h, where each summand is separable w.r.t. (I, J), i.e. is equal to a product of two functions - one that intakes only patches indexed by I, and another that intakes only patches indexed by J. One may wonder if it is at all possible to express h through such summands, i.e. if the separation rank of h is finite. From the theory of tensor products between L2 spaces (see Hackbusch (2012) for a comprehensive coverage), we know that any h ∈ L2((R^s)^N), i.e. any h that is measurable and square-integrable, may be approximated arbitrarily well by sums of the form Σ_{ν=1}^{R} g_ν(x_{i_1},...,x_{i_{|I|}}) g'_ν(x_{j_1},...,x_{j_{|J|}}). Exact realization however is only guaranteed at the limit R → ∞, thus in general the separation rank of h need not be finite. Nonetheless, as we show in sec. 5, for the class of functions we are interested in, namely functions realizable by convolutional arithmetic circuits, separation ranks are always finite.

The concept of separation rank was introduced in Beylkin and Mohlenkamp (2002) for numerical treatment of high-dimensional functions, and has since been employed for various applications, e.g. quantum chemistry (Harrison et al. (2003)), particle engineering (Hackbusch (2006)) and machine learning (Beylkin et al. (2009)). If the separation rank of a function w.r.t. a partition of its input is equal to 1, the function is separable, meaning it does not model any interaction between the sets of variables. Specifically, if sep(h; I, J) = 1 then there exist g : (R^s)^{|I|} → R and g' : (R^s)^{|J|} → R such that h(x_1,...,x_N) = g(x_{i_1},...,x_{i_{|I|}}) g'(x_{j_1},...,x_{j_{|J|}}), and the function h cannot take into account consistency between the values of {x_{i_1},...,x_{i_{|I|}}} and those of {x_{j_1},...,x_{j_{|J|}}}. In a statistical setting, if h is a probability density function, this would mean that {x_{i_1},...,x_{i_{|I|}}} and {x_{j_1},...,x_{j_{|J|}}} are statistically independent. The higher sep(h; I, J) is, the farther h is from this situation, i.e. the more it models dependency between {x_{i_1},...,x_{i_{|I|}}} and {x_{j_1},...,x_{j_{|J|}}}, or equivalently, the stronger the correlation it induces between the patches indexed by I and those indexed by J.
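A simple numerical surrogate helps build intuition for eq. 5. In the scalar-patch case s = 1 with N = 2 and the partition I = {1}, J = {2}, sampling h on a grid yields a matrix whose rank lower-bounds the separation rank (and recovers it exactly for functions that are finite sums of separable products). A minimal sketch (ours):

```python
import numpy as np

xs = np.linspace(0.0, 3.0, 50)
H_sep = np.exp(xs)[:, None] * np.sin(xs)[None, :]  # h = g(x1) g'(x2): separable
H_two = np.sin(xs[:, None] + xs[None, :])          # sin(x1+x2) = sin x1 cos x2 + cos x1 sin x2

print(np.linalg.matrix_rank(H_sep),                # 1: separation rank 1
      np.linalg.matrix_rank(H_two))                # 2: separation rank 2
```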
The interpretation of separation rank as a measure of deviation from separability is formalized in app. B, where it is shown that sep(h; I, J) is closely related to the L2 distance of h from the set of separable functions w.r.t. (I, J). Specifically, we define D(h; I, J) as the latter distance divided by the L2 norm of h, and show that sep(h; I, J) provides an upper bound on D(h; I, J). While it is not possible to lay out a general lower bound on D(h; I, J) in terms of sep(h; I, J), we show that the specific lower bounds on sep(h; I, J) underlying our analyses can be translated into lower bounds on D(h; I, J). This implies that our results, facilitated by upper and lower bounds on separation ranks of convolutional arithmetic circuits, may equivalently be framed in terms of L2 distances from separable functions.

"}, {"section_index": "5", "section_name": "5 CORRELATION ANALYSIS", "section_text": "In this section we analyze convolutional arithmetic circuits (sec. 3) in terms of the correlations they can model between sides of different input partitions, i.e. in terms of the separation ranks (sec. 4) they support under different partitions (I, J) of [N]. We begin in sec. 5.1, establishing a correspondence between separation ranks and coefficient tensor matricization ranks. This correspondence is then used in sec. 5.2 and 5.3 to analyze the deep and shallow networks (respectively) presented in sec. 3. We note that we focus on these particular networks merely for simplicity of presentation - the analysis can easily be adapted to account for alternative networks with different depths and pooling schemes.

"}, {"section_index": "6", "section_name": "5.1 FROM SEPARATION RANK TO MATRICIZATION RANK", "section_text": "Let h_y be a function realized by a convolutional arithmetic circuit, with corresponding coefficient tensor A^y (eq. 2). Denote by (I, J) an arbitrary partition of [N], i.e. I ∪ J = [N]. We are interested in studying sep(h_y; I, J) - the separation rank of h_y w.r.t. (I, J) (eq. 5). As claim 1 below states, assuming representation functions {f_{θ_d}}_{d∈[M]} are linearly independent (if they are not, we drop dependent functions and modify A^y accordingly), this separation rank is equal to the rank of [A^y]_{I,J} - the matricization of the coefficient tensor A^y w.r.t. the partition (I, J). Our problem thus translates to studying ranks of matricized coefficient tensors.

Claim 1. Let h_y be a function realized by a convolutional arithmetic circuit (fig. 1(a)), with corresponding coefficient tensor A^y (eq. 2). Assume that the network's representation functions f_{θ_d} are linearly independent, and that they, as well as the functions g_ν, g'_ν in the definition of separation rank (eq. 5), are measurable and square-integrable. Then, sep(h_y; I, J) = rank[A^y]_{I,J}.

As the linear weights of a network vary, so do the coefficient tensors (A^y) it gives rise to. Accordingly, for a particular partition (I, J), a network does not correspond to a single value of rank[A^y]_{I,J}, but rather supports a range of values. We analyze this range by quantifying its maximum, which reflects the strongest correlation that the network can model between the input patches indexed by I and those indexed by J. One may wonder if the maximal value of rank[A^y]_{I,J} is the appropriate statistic to measure, as a-priori it may be that rank[A^y]_{I,J} is maximal for very few of the network's weight settings, and much lower for all the rest. Apparently, as claim 2 below states, this is not the case, and in fact rank[A^y]_{I,J} is maximal under almost all of the network's weight settings.

Claim 2. Consider a convolutional arithmetic circuit (fig. 1(a)) with corresponding coefficient tensor A^y (eq. 2). A^y depends on the network's linear weights {a^{l,γ}}_{l,γ} and a^{L,y}, thus for a given partition (I, J) of [N], rank[A^y]_{I,J} is a function of these weights. This function obtains its maximum almost everywhere (w.r.t. Lebesgue measure).
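Claim 2 is easy to probe numerically. The sketch below (ours) draws the shallow network's weights at random several times, builds A^y per eq. 4, and observes the matricization rank hitting the same maximal value min{r_0, M^{min{|I|,|J|}}} on every draw.

```python
import numpy as np

def tpow(v, n):
    T = v
    for _ in range(n - 1):
        T = np.multiply.outer(T, v)
    return T

M, N, r0 = 2, 4, 3
I = [0, 1]                                    # |I| = |J| = 2, so M^2 = 4 > r0
for seed in range(5):
    rng = np.random.default_rng(seed)
    a0, a1 = rng.standard_normal((r0, M)), rng.standard_normal(r0)
    Ay = sum(a1[g] * tpow(a0[g], N) for g in range(r0))
    mat = np.transpose(Ay, I + [2, 3]).reshape(M ** 2, M ** 2)
    print(np.linalg.matrix_rank(mat))         # 3 on every draw: min{r0, M^2}
```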
"}, {"section_index": "7", "section_name": "5.2 DEEP NETWORK", "section_text": "In this subsection we study correlations modeled by the deep network presented in sec. 3 (fig. 1(a) with size-4 pooling windows and L = log_4 N hidden layers). In accordance with sec. 5.1, we do so by characterizing the maximal ranks of coefficient tensor matricizations under different partitions.

Recall from eq. 3 the hierarchical decomposition expressing a coefficient tensor A^y realized by the deep network. We are interested in matricizations of this tensor under different partitions of [N]. Let (I, J) be an arbitrary partition, i.e. I ∪ J = [N]. Matricizing the last level of eq. 3 w.r.t. (I, J), while applying the relation in eq. 1, gives:

[A^y]_{I,J} = Σ_{α=1}^{r_{L−1}} a^{L,y}_α · [φ^{L−1,1,α} ⊗ φ^{L−1,2,α}]_{I∩[2·4^{L−1}], J∩[2·4^{L−1}]} ⊙ [φ^{L−1,3,α} ⊗ φ^{L−1,4,α}]_{(I−2·4^{L−1})∩[2·4^{L−1}], (J−2·4^{L−1})∩[2·4^{L−1}]}

Applying eq. 1 again, this time to matricizations of the tensor products φ^{L−1,1,α} ⊗ φ^{L−1,2,α} and φ^{L−1,3,α} ⊗ φ^{L−1,4,α}, we obtain:

[A^y]_{I,J} = Σ_{α=1}^{r_{L−1}} a^{L,y}_α ⊙_{t=1}^{4} [φ^{L−1,t,α}]_{(I−(t−1)·4^{L−1})∩[4^{L−1}], (J−(t−1)·4^{L−1})∩[4^{L−1}]}

Continuing this process recursively down the levels of eq. 3, and defining for every l ∈ {0...L−1} and k ∈ [N/4^l]:

I_{l,k} := (I − (k−1)·4^l) ∩ [4^l],    J_{l,k} := (J − (k−1)·4^l) ∩ [4^l]    (6)

we arrive at the following expression for the matricized coefficient tensor:

[φ^{1,k,γ}]_{I_{1,k},J_{1,k}} = Σ_{α=1}^{r_0} a^{1,γ}_α ⊙_{t=1}^{4} [a^{0,α}]_{I_{0,4(k−1)+t}, J_{0,4(k−1)+t}}    (M^{|I_{1,k}|}-by-M^{|J_{1,k}|}, γ ∈ [r_1])
[φ^{l,k,γ}]_{I_{l,k},J_{l,k}} = Σ_{α=1}^{r_{l−1}} a^{l,γ}_α ⊙_{t=1}^{4} [φ^{l−1,4(k−1)+t,α}]_{I_{l−1,4(k−1)+t}, J_{l−1,4(k−1)+t}}    (M^{|I_{l,k}|}-by-M^{|J_{l,k}|}, l ∈ {2...L−1}, γ ∈ [r_l])
[A^y]_{I,J} = Σ_{α=1}^{r_{L−1}} a^{L,y}_α ⊙_{t=1}^{4} [φ^{L−1,t,α}]_{I_{L−1,t}, J_{L−1,t}}    (M^{|I|}-by-M^{|J|})    (7)

6 Square-integrability of representation functions f_{θ_d} may seem as a limitation at first glance, as for example neurons f_{θ_d}(x) = σ(w_d^T x + b_d), with parameters θ_d = (w_d, b_d) ∈ R^s × R and sigmoid or ReLU activation σ(·), do not meet this condition. However, since in practice our inputs are bounded (e.g. they represent image pixels by holding intensity values), we may view functions as having compact support, which, as long as they are continuous (holds in all cases of interest), ensures square-integrability.

Eq. 7 expresses [A^y]_{I,J} - the matricization w.r.t. the partition (I, J) of a coefficient tensor A^y realized by the deep network - in terms of the network's conv weights {a^{l,γ}}_{l,γ} and output weights a^{L,y}. As discussed above, our interest lies in the maximal rank that this matricization can take. Theorem 1 below provides lower and upper bounds on this maximal rank, by making use of eq. 7 and of the rank-multiplicative property of the Kronecker product (rank(A ⊙ B) = rank(A) · rank(B)).

Theorem 1. Let (I, J) be a partition of [N], and [A^y]_{I,J} be the matricization w.r.t. (I, J) of a coefficient tensor A^y (eq. 2) realized by the deep network (fig. 1(a) with size-4 pooling windows). For every l ∈ {0...L−1} and k ∈ [N/4^l], define I_{l,k} and J_{l,k} as in eq. 6. Then, the maximal rank that [A^y]_{I,J} can take (when network weights vary) is:

- No smaller than min{r_0, M}^S, where S := |{k ∈ [N/4] : I_{1,k} ≠ ∅ ∧ J_{1,k} ≠ ∅}|.
- No greater than min{M^{min{|I|,|J|}}, r_{L−1} Π_{t=1}^{4} c_{L−1,t}}, where c_{0,k} := 1 for k ∈ [N], and c_{l,k} := min{M^{min{|I_{l,k}|,|J_{l,k}|}}, r_{l−1} Π_{t=1}^{4} c_{l−1,4(k−1)+t}} for l ∈ [L−1], k ∈ [N/4^l].
The lower bound in theorem 1 is exponential in S, the latter defined to be the number of size-4 patch groups that are split by the partition (I, J), i.e. whose indexes are divided between I and J. Partitions that split many of the size-4 patch groups will thus lead to a large lower bound. For example, consider the partition (I_odd, J_even) defined as follows:

I_odd = {1, 3, ..., N−1},    J_even = {2, 4, ..., N}    (8)

Under (I_odd, J_even), all N/4 size-4 patch groups are split, i.e. S = N/4, and the lower bound is exponential in the number of patches.

The upper bound in theorem 1 is expressed via constants c_{l,k}, defined recursively over levels l = 0...L−1, with k ranging over 1...N/4^l for each level l. What prevents c_{l,k} from growing double-exponentially fast (w.r.t. l) is the minimization with M^{min{|I_{l,k}|,|J_{l,k}|}}. Specifically, if min{|I_{l,k}|, |J_{l,k}|} is small, i.e. if the partition induced by (I, J) on the k'th size-4^l group of patches is unbalanced (most of the patches belong to one side of the partition, and only a few belong to the other), c_{l,k} will be of reasonable size. The higher this takes place in the hierarchy (i.e. the larger l is), the lower our eventual upper bound will be. In other words, if partitions induced by (I, J) on size-4^l patch groups are unbalanced for large values of l, the upper bound in theorem 1 will be small. For example, consider the partition (I_low, J_high) defined by:

I_low = {1, ..., N/2},    J_high = {N/2+1, ..., N}    (9)

Under (I_low, J_high), all partitions induced on size-4^{L−1} patch groups (quadrants of [N]) are completely one-sided (min{|I_{L−1,k}|, |J_{L−1,k}|} = 0 for all k ∈ [4]), resulting in the upper bound being no greater than r_{L−1} - linear in network size.

To summarize this discussion, theorem 1 states that with the deep network, the maximal rank of a coefficient tensor matricization w.r.t. (I, J) highly depends on the nature of the partition (I, J) - it is exponentially high for partitions such as (I_odd, J_even) that split many size-4 patch groups, while being only polynomial (or linear) for partitions like (I_low, J_high), under which size-4^l patch groups are unevenly divided for large values of l. Since the rank of a coefficient tensor matricization w.r.t. (I, J) corresponds to the strength of correlation modeled between input patches indexed by I and those indexed by J (sec. 5.1), we conclude that the ability of a polynomially sized deep network to model correlation between sets of input patches highly depends on the nature of these sets.

"}, {"section_index": "8", "section_name": "5.3 SHALLOW NETWORK", "section_text": "We now turn to study correlations modeled by the shallow network presented in sec. 3 (fig. 1(b)). In line with sec. 5.1, this is achieved by characterizing the maximal ranks of coefficient tensor matricizations under different partitions.

Recall from eq. 4 the CP decomposition expressing a coefficient tensor A^y realized by the shallow network. For an arbitrary partition (I, J) of [N], i.e. I ∪ J = [N], matricizing this decomposition with repeated application of the relation in eq. 1 gives the following expression for [A^y]_{I,J} - the matricization w.r.t. (I, J) of a coefficient tensor realized by the shallow network:

[A^y]_{I,J} = Σ_{γ=1}^{r_0} a^{1,y}_γ (⊙^{|I|} a^{0,γ}) (⊙^{|J|} a^{0,γ})^T    (10)

⊙^{|I|} a^{0,γ} and ⊙^{|J|} a^{0,γ} here are column vectors of dimensions M^{|I|} and M^{|J|} respectively, standing for the Kronecker products of a^{0,γ} ∈ R^M with itself |I| and |J| times (respectively). Eq. 10 immediately leads to two observations regarding the ranks that may be taken by [A^y]_{I,J}. First, they depend on the partition (I, J) only through its division size, i.e. through |I| and |J|. Second, they are no greater than min{M^{min{|I|,|J|}}, r_0}, meaning that the maximal rank is linear (or less) in network size. In light of sec. 5.1 and 5.2, these findings imply that in contrast to the deep network, which with polynomial size supports exponential separation ranks under favored partitions, the shallow network treats all partitions (of a given division size) equally, and can only give rise to an exponential separation rank if its size is exponential.

Suppose now that we would like to use the shallow network to replicate a function realized by a polynomially sized deep network. So long as the deep network's function admits an exponential separation rank under at least one of the favored partitions (e.g. (I_odd, J_even) - eq. 8), the shallow network would have to be exponentially large in order to replicate it, i.e. depth efficiency takes place.[7] Since all but a negligible set of the functions realizable by the deep network give rise to maximal separation ranks (sec. 5.1), we obtain the complete depth efficiency result of Cohen et al. (2016b). However, unlike Cohen et al. (2016b), which did not provide any explanation for the usefulness of functions brought forth by depth, we obtain an insight into their utility - they are able to efficiently model strong correlation under favored partitions of the input.
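The partition dependence established by theorem 1 can be observed directly on a small instance. The sketch below (ours) builds A^y per eq. 3 for N = 16 (L = 2, with weight sharing, so the four φ's at each level coincide) and compares matricization ranks under the interleaved partition (I_odd, J_even) and the coarse partition (I_low, J_high); the former comes out far larger, while the latter can never exceed r_1.

```python
import numpy as np

def tpow(T, n):                               # repeated tensor product (x)^n T
    out = T
    for _ in range(n - 1):
        out = np.multiply.outer(out, T)
    return out

def rank_under(T, I, M):
    J = [m for m in range(T.ndim) if m not in I]
    return np.linalg.matrix_rank(np.transpose(T, I + J).reshape(M ** len(I), -1))

M, r0, r1, N = 2, 2, 8, 16
rng = np.random.default_rng(0)
a0 = rng.standard_normal((r0, M))
a1 = rng.standard_normal((r1, r0))
a2 = rng.standard_normal(r1)

phi = [sum(a1[g, a] * tpow(a0[a], 4) for a in range(r0)) for g in range(r1)]  # eq. 3, level 1
Ay  = sum(a2[g] * tpow(phi[g], 4) for g in range(r1))                         # eq. 3, top level

odd, low = list(range(0, N, 2)), list(range(N // 2))
print(rank_under(Ay, odd, M),   # interleaved: exponentially large in split quadruples
      rank_under(Ay, low, M))   # coarse: bounded by r1 = 8
```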
7 Convolutional arithmetic circuits as we have defined them (sec. 3) are not universal. In particular, it may very well be that a function realized by a polynomially sized deep network cannot be replicated by the shallow network, no matter how large (wide) we allow it to be. In such scenarios depth efficiency does not provide insight into the complexity of functions brought forth by depth. To obtain a shallow network that is universal, thus an appropriate gauge for depth efficiency, we may remove the constraint of weight sharing, i.e. allow the filters in the hidden conv operator to hold different weights at different spatial locations (see Cohen et al. (2016b) for proof that this indeed leads to universality). All results we have established for the original shallow network remain valid when weight sharing is removed. In particular, the separation ranks of the network are still linear in its size. This implies that, as suggested, depth efficiency indeed holds.

"}, {"section_index": "9", "section_name": "6 INDUCTIVE BIAS THROUGH POOLING GEOMETRY", "section_text": "The deep network presented in sec. 3, whose correlations we analyzed in sec. 5.2, was defined as having size-4 pooling windows, i.e. pooling windows covering four entries each. We have yet to specify the shapes of these windows, or equivalently, the spatial (two-dimensional) locations of the nodes grouped together in the process of pooling. In compliance with standard convolutional network design, we now assume that the network's (size-4) pooling windows are contiguous square blocks, i.e. have shape 2×2. Under this configuration, the network's functional description (eq. 2 with A^y given by eq. 3) induces a spatial ordering of input patches[8], which may be described by the following recursive process:

- Start by assigning the index 1 to the top-left patch.
- For l = 1,...,L = log_4 N: Replicate the already-assigned top-left 2^{l−1}-by-2^{l−1} block of indexes, and place copies on its right, bottom-right and bottom. Then, add a 4^{l−1} offset to all indexes in the right copy, a 2·4^{l−1} offset to all indexes in the bottom-right copy, and a 3·4^{l−1} offset to all indexes in the bottom copy.

8 The network's functional description assumes a one-dimensional full quad-tree grouping of input patch indexes. That is to say, it assumes that in the first pooling operation (hidden layer 0), the nodes corresponding to patches x_1, x_2, x_3, x_4 are pooled into one group, those corresponding to x_5, x_6, x_7, x_8 are pooled into another, and so forth. Similar assumptions hold for the deeper layers. For example, in the second pooling operation (hidden layer 1), the node with receptive field {1, 2, 3, 4}, i.e. the one corresponding to the quadruple of patches {x_1, x_2, x_3, x_4}, is assumed to be pooled together with the nodes whose receptive fields are {5, 6, 7, 8}, {9, 10, 11, 12} and {13, 14, 15, 16}.

With this spatial ordering (illustrated in fig. 1(c)), partitions (I, J) of [N] convey spatial patterns. For example, the partition (I_odd, J_even) (eq. 8) corresponds to the entangled pattern illustrated in fig. 1(c), whereas (I_low, J_high) (eq. 9) corresponds to the coarse pattern illustrated on the right. Our analysis (sec. 5.2) shows that the deep network is able to model strong correlation under (I_odd, J_even), while being inefficient for modeling correlation under (I_low, J_high). More generally, partitions for which S, defined in theorem 1, is high convey patterns that split many 2×2 patch blocks, i.e. are highly entangled. These partitions enjoy the possibility of strong correlation. On the other hand, partitions for which min{|I_{l,k}|, |J_{l,k}|} is small for large values of l (see eq. 6 for the definition of I_{l,k} and J_{l,k}) convey patterns that divide large 2^l×2^l patch blocks unevenly, i.e. separate the input into distinct contiguous regions. These partitions, as we have seen, suffer from limited low correlations.

We conclude that with 2×2 pooling, the deep network is able to model strong correlation between input regions that are highly entangled, at the expense of being inefficient for modeling correlation between input regions that are far apart. Had we selected a different pooling regime, the preference of input partition patterns in terms of modeled correlation would change. For example, if pooling windows were set to group nodes with their spatial reflections (horizontal, vertical and horizontal-vertical), coarse patterns that divide the input symmetrically, such as the one illustrated on the right of fig. 1(c), would enjoy the possibility of strong correlation, whereas many entangled patterns would now suffer from limited low correlation.
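The recursive ordering is a few lines of code. The sketch below (ours) generates the index grid of fig. 1(c); the parity map at the end shows the entangled pattern carved out by (I_odd, J_even).

```python
import numpy as np

def patch_order(L):
    grid = np.array([[1]])
    for l in range(1, L + 1):
        off = 4 ** (l - 1)
        top = np.hstack([grid, grid + off])                # original | right copy (+off)
        bot = np.hstack([grid + 3 * off, grid + 2 * off])  # bottom (+3 off) | bottom-right (+2 off)
        grid = np.vstack([top, bot])
    return grid

print(patch_order(2))
# [[ 1  2  5  6]
#  [ 4  3  8  7]
#  [13 14  9 10]
#  [16 15 12 11]]
print(patch_order(3) % 2)   # 1s mark I_odd: a highly entangled pattern
```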
The choice of pooling shapes thus serves as a means for controlling the inductive bias in terms of correlations modeled between input regions. Square contiguous windows, as commonly employed in practice, lead to a preference that complies with our intuition regarding the statistics of natural images (nearby pixels more correlated than distant ones). Other pooling schemes lead to different preferences, and this allows tailoring a network to data that departs from the usual domain of natural imagery. We demonstrate this experimentally in the next section, where it is shown how different pooling geometries lead to superior performance in different tasks.

"}, {"section_index": "10", "section_name": "7 EXPERIMENTS", "section_text": "The main conclusion from our analyses (sec. 5 and 6) is that the pooling geometry of a deep convolutional network controls its inductive bias by determining which correlations between input regions can be modeled efficiently. We have also seen that shallow networks cannot model correlations efficiently, regardless of the considered input regions. In this section we validate these assertions empirically, not only with convolutional arithmetic circuits (subject of our analyses), but also with convolutional rectifier networks - convolutional networks with ReLU activation and max or average pooling. For conciseness, we defer to app. C some details regarding our implementation. The latter is fully available online at https://github.com/HUJI-Deep/inductive-pooling.

Our experiments are based on a synthetic classification benchmark inspired by medical imaging tasks. Instances to be classified are 32-by-32 binary images, each displaying a random distorted oval shape (blob) with missing pixels in its interior (holes). For each image, two continuous scores in range [0, 1] are computed. The first, referred to as closedness, reflects how morphologically closed a blob is, and is defined to be the ratio between the number of pixels in the blob and the number of pixels in its closure (see app. D for the exact definition of the latter). The second score, named symmetry, reflects the degree to which a blob is left-right symmetric about its center. It is measured by cropping the bounding box around a blob, applying a left-right flip to the latter, and computing the ratio between the number of pixels in the intersection of the blob and its reflection, and the number of pixels in the blob.
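The two scores are straightforward to compute. The sketch below (ours) uses scipy's binary_closing with a 3×3 structuring element as a stand-in for the exact closure of app. D, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import binary_closing

def scores(blob):
    closure = binary_closing(blob, structure=np.ones((3, 3), dtype=bool))
    closedness = blob.sum() / max(closure.sum(), 1)
    ys, xs = np.nonzero(blob)                       # crop the bounding box
    crop = blob[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    symmetry = (crop & crop[:, ::-1]).sum() / crop.sum()
    return closedness, symmetry

blob = np.zeros((32, 32), dtype=bool)
blob[8:24, 8:24] = True                             # a square blob ...
blob[15:17, 15:17] = False                          # ... with a small interior hole
print(scores(blob))                                 # closedness < 1, symmetry = 1.0
```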
To generate labeled sets for classification (train and test), we render multiple. images, sort them according to their closedness and symmetry, and for each of the two scores, assign. the label \"high'' to the top 40% and the label \"low' to the bottom 40% (the mid 20% are considered. ill-defined). This creates two binary (two-class) classification tasks - one for closedness and one. for symmetry (see fig.2|for a sample of images participating in both tasks). Given that closedness. is a property of a local nature, we expect its classification task to require a predictor to be able to. model strong correlations between neighboring pixels. Symmetry on the other hand is a property. that relates pixels to their reflections, thus we expect its classification task to demand that a predictor. be able to model correlations across distances..\nIn addition to the deep network, we also evaluated the shallow convolutional arithmetic circuit an- alyzed in the paper (fig.1(b)). The architectural choices for this network were the same as those\nclosedness: low closedness: high closedness: low closedness: high symmetry: low symmetry: low symmetry: high symmetry: high\nFigure 2: Sample of images from our synthetic classification benchmark. Each image displays a random blob with holes, whose morphological closure and left-right symmetry about its center are measured. Two classification tasks are defined - one for closedness and one for symmetry. In each task, the objective is to distinguish between blobs whose respective property (closedness/symmetry) is high, and ones for which it is low. The tasks differ in nature - closedness requires modeling correlations between neighboring pixels, whereas symmetry requires modeling correlations between pixels and their reflections.\nWe evaluated the deep convolutional arithmetic circuit considered throughout the paper (fig. 1(a) with size-4 pooling windows) under two different pooling geometries. The first, referred to as. square, comprises standard 2 2 pooling windows. The second, dubbed mirror, pools together. nodes with their horizontal, vertical and horizontal-vertical reflections. In both cases, input patches (x) were set as individual pixels, resulting in N = 1024 patches and L = logs N = 5 hidden layers. M = 2 representation functions (fe) were fixed, the first realizing the identity on binary inputs (fe, (b) = b for b E {0, 1}), and the second realizing negation (fe, (b) = 1 - b for b E {0, 1}).. Classification was realized through Y = 2 network outputs, with prediction following the stronger. activation. The number of channels across all hidden layers was uniform, and varied between 8 and 128. Fig. 3|shows the results of applying the deep network with both square and mirror pool-. ing, to both closedness and symmetry tasks, where each of the latter has 20000 images for training. and 4000 images for testing. As can be seen in the figure, square pooling significantly outperforms mirror pooling in closedness classification, whereas the opposite occurs in symmetry classification. This complies with our discussion in sec.6] according to which square pooling supports modeling. correlations between entangled (neighboring) regions of the input, whereas mirror pooling puts fo- cus on correlations between input regions that are symmetric w.r.t. one another. We thus obtain a demonstration of how prior knowledge regarding a task at hand may be used to tailor the inductive bias of a deen convolutional network by. 
desionino Onetr\nDeep convolutional arithmetic circuit closedness task symmetry task 100 100 95 95 90 90 [%] 85 aunrre 85 80 80 square pool - train square pool - test 75 75 (..x mirror pool - train. :mirror pool - test 70 70 20 40 60 80 100 120 140 20 40 60 80 100 0 0 120 140 breadth (# of channels in each hidden layer) breadth (# of channels in each hidden layer)\nclosedness task symmetry task 100 100 95 95 [%] 90 [%] 90 85 85 80 80 square pool - train square pool - test 75 75 X-x mirror pool - train X mirror pool - test 70 70 0 20 40 60 80 100 120 140 0 20 40 60 80 100 120 140 breadth (# of channels in each hidden layer) breadth (# of channels in each hidden layer)\nFigure 3: Results of applying a deep convolutional arithmetic circuit to closedness and symmetry classification tasks. Two pooling geometries were evaluated - square, which supports modeling cor relations between neighboring input regions, and mirror, which puts focus on correlations betweer regions that are symmetric w.r.t. one another. Each pooling geometry outperforms the other on the task for which its correlations are important, demonstrating how prior knowledge regarding a tasl at hand may be used to tailor the inductive bias through proper pooling design.\ndescribed above for the deep network besides the number of hidden channels, which in this cas applied to the network's single hidden layer, and varied between 64 and 4096. The highest train anc. test accuracies delivered by this network (with 4096 hidden channels) were roughly 62% on closed ness task, and 77% on symmetry task. The fact that these accuracies are inferior to those of th deep network, even when the latter's pooling geometry is not optimal for the task at hand, complie. with our analysis in sec.|5] Namely, it complies with the observation that separation ranks (correla tions) are sometimes exponential and sometimes polynomial with the deep network, whereas witl. the shallow one they are never more than linear in network size..\nFinally, to assess the validity of our findings for convolutional networks in general, not just convolu. tional arithmetic circuits, we repeated the above experiments with convolutional rectifier networks Namely, we placed ReLU activations after every conv operator, switched the pooling operation from. product to average, and re-evaluated the deep (square and mirror pooling geometries) and shallow. networks. We then reiterated this process once more, with pooling operation set to max instead of. average. The results obtained by the deep networks are presented in fig.4] The shallow network. with average pooling reached train/test accuracies of roughly 58% on closedness task, and 55% or. symmetry task. With max pooling, performance of the shallow network did not exceed chance. Al-. together, convolutional rectifier networks exhibit the same phenomena observed with convolutiona. arithmetic circuits, indicating that the conclusions from our analyses likely apply to such networks. as well. Formal adaptation of the analyses to convolutional rectifier networks, similarly to the adap. tation of|Cohen et al.(2016b) carried out in|Cohen and Shashua(2016), is left for future work.."}, {"section_index": "10", "section_name": "8 DISCUSSION", "section_text": "Our analysis shows that a polynomially sized deep convolutional arithmetic circuit supports expo. nentially high separation ranks for certain input partitions, while being limited to polynomial or lin ear (in network size) separation ranks for others. 
The network's pooling window shapes effectively. determine which input partitions are favored in terms of separation rank, i.e. which partitions enjoy. the possibility of exponentially high separation ranks with polynomial network size, and which re quire network to be exponentially large. Pooling geometry thus serves as a means for controlling the. inductive bias. The particular pooling scheme commonly employed in practice - square contiguous. windows, favors interleaved partitions over ones that divide the input to distinct areas, thus orients. the inductive bias towards the statistics of natural images (nearby pixels more correlated than distan\nThrough the notion of separation rank, we studied the relation between the architecture of a convolu. tional network, and its ability to model correlations among input regions. For a given input partition the separation rank quantifies how far a function is from separability, which in a probabilistic setting corresponds to statistical independence between sides of the partition.\nDeep convolutional rectifier network (average pooling) closedness task symmetry task 100 100 95 95 90 90 85 85 80 80 square pool - train square pool - test 75 75 X-x mirror pool - train x mirror pool - test 70 70 0 20 40 60 80 100 120 140 0 20 40 60 80 100 120 140 breadth (# of channels in each hidden layer). breadth (# of channels in each hidden layer). Deep convolutional rectifier network (max pooling). closedness task symmetry task 100 100 95 95 90 90 Aoeancce 85 85 80 80 square pool - train square pool - test 75 75 X-x mirror pool - train x mirror pool - test 70 70 0 20 40 60 80 100 120 140 V. 20 40 60 80 100 120 140 breadth (# of channels in each hidden layer). breadth (# of channels in each hidden layer).\nclosedness task symmetry task 100 100 95 95 90 90 aecnnne 85 aecnnre 85 80 80 square pool -train I square pool - test 75 75 X-x mirror pool - train x mirror pool - test 70 70 0 20 40 60 80 100 120 140 0 20 40 60 80 100 120 140\nFigure 4: Results of applying deep convolutional rectifier networks to closedness and symmetry classification tasks. The same trends observed with the deep convolutional arithmetic circuit (fig.3. are apparent here.\nones). Other pooling schemes lead to different preferences, and this allows tailoring the network t data that departs from the usual domain of natural imagery..\nAs opposed to deep convolutional arithmetic circuits, shallow ones support only linear (in network size) separation ranks. Therefore, in order to replicate a function realized by a deep network (ex- ponential separation rank), a shallow network must be exponentially large. By this we derive the depth efficiency result of|Cohen et al.(2016b), but in addition, provide an insight into the benefit of functions brought forth by depth - they are able to efficiently model strong correlation under favored partitions of the input.\nWe validated our conclusions empirically, with convolutional arithmetic circuits as well as convolu tional rectifier networks - convolutional networks with ReLU activation and max or average pooling. Our experiments demonstrate how different pooling geometries lead to superior performance in dif-. ferent tasks. Specifically, we evaluate deep networks in the measurement of shape continuity, a task. of a local nature, and show that standard square pooling windows outperform ones that join together. nodes with their spatial reflections. In contrast, when measuring shape symmetry, modeling cor-. 
relations across distances is of vital importance, and the latter pooling geometry is superior to the conventional one. Shallow networks are inefficient at modeling correlations of any kind, and indeed. lead to poor performance on both tasks.\nFinally, our analyses and results bring forth the possibility of expanding the coverage of correlations efficiently modeled by a deep convolutional network. Specifically, by blending together multiple pooling geometries in the hidden layers of a network, it is possible to facilitate simultaneous support for a wide variety of correlations suiting data of different types. Investigation of this direction, from both theoretical and empirical perspectives, is viewed as a promising avenue for future research."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work is supported by Intel grant ICRI-CI #9-2012-6133, by ISF Center grant 1790/12 and by the European Research Council (TheoryDL project). Nadav Cohen is supported by a Google Doctoral Fellowship in Machine Learning"}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Richard Bellman. Introduction to matrix analysis, volume 960. SIAM, 1970.\nRichard Caron and Tim Traynor. The zero set of a polynomial. WSMR Report 05-02, 2005\nNadav Cohen and Amnon Shashua. Convolutional rectifier networks as generalized tensor decompositions International Conference on Machine Learning (ICML), 2016..\nNadav Cohen, Or Sharir, and Amnon Shashua. Deep simnets. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016a\nThomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012\nRobert M Haralick, Stanley R Sternberg, and Xinhua Zhuang. Image analysis using mathematical morphology IEEE transactions on pattern analysis and machine intelligence, (4):532-550, 1987.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arX preprint arXiv:1512.03385, 2015.\nFrank Jones. Lebesgue integration on Euclidean space. Jones & Bartlett Learning, 2001.\nAlex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, pages 1106-1114, 2012.\nYann LeCun and Yoshua Bengio. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks. 3361(10). 1995\nNadav Cohen and Amnon Shashua. Simnets: A generalization of convolutional networks. Advances in Neural Information Processing Systems (NIPS), Deep Learning Workshop, 2014..\nNadav Cohen, Or Sharir, and Amnon Shashua. On the expressive power of deep learning: A tensor analysis Conference On Learning Theory (COLT), 2016b\nOlivier Delalleau and Yoshua Bengio. Shallow vs. deep sum-product networks. In Advances in Neural Infor mation Processing Systems, pages 666-674, 2011.\nWolfgang Hackbusch. Tensor Spaces and Numerical Tensor Calculus, volume 42 of Springer Series in Com putational Mathematics. Springer Science & Business Media, Berlin, Heidelberg, February 2012.\nRobert J Harrison, George I Fann, Takeshi Yanai, and Gregory Beylkin. Multiresolution quantum chemistry ir multiwavelet bases. In Computational Science-ICCS 2003, pages 103-110. Springer, 2003\nYangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadar- rama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. 
In Proceedings of the 22nd ACM international conference on Multimedia, pages 675-678. ACM, 2014.\nYann LeCun. Yoshua Bengio. and Geoffrey Hinton. Deep learning. Nature. 521(7553):436-444. May 2015\nHrushikesh Mhaskar, Qianli Liao, and Tomaso Poggio. Learning real and boolean functions: When is deep better than shallow. arXiv preprint arXiv:1603.00988, 2016.\nRazvan Pascanu, Guido Montufar, and Yoshua Bengio. On the number of inference regions of deep fee forward networks with piece-wise linear activations. arXiv preprint arXiv, 1312, 2013..\nTomaso Poggio, Fabio Anselmi, and Lorenzo Rosasco. I-theory on depth vs width: hierarchical function composition. Technical report, Center for Brains, Minds and Machines (CBMM), 2015\nWalter Rudin. Functional analysis. international series in pure and applied mathematics, 1991.\nChristian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan Vincent Vanhoucke, and Andrew Rabinovich. Going Deeper with Convolutions. CVPR, 2015.\nMatus Telgarsky. Representation benefits of deep feedforward networks. arXiv preprint arXiv:1509.08101 2015.\nGuido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions o deep neural networks. In Advances in Neural Information Processing Systems. pages 2924-2932. 2014\n/inod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceed"}, {"section_index": "13", "section_name": "A.1 PROOF OF CLAIM", "section_text": "We prove the equality in two steps, first showing that sep(hy; I, J)<rank[A'1,J, and then establishing the converse. The first step is elementary, and does not make use of the representation functions' (fe) linear independence, or of measurability/square-integrability. The second step does rely on these assumptions, and employs slightly more advanced mathematical machinery. Throughout the proof, we assume without loss of generality that the partition (I, J) of N|is such that I takes on lower values, while J takes on higher ones That is to say, we assume that I = {1, ...,|Il} and J = {|I| + 1,..., N}.9\n[A]I,J '][4],o O [C']o,[|J|] 1n[I],Jn[|I]] O [C'](1-|I])n[|Jl],(J-|I|)n[|J|]\n(x1,...,xn)\n9 To see that this does not limit generality, denote I = {i1,..., jr} and J = {1,...,J|J|}, and define an auxiliary function h's by permuting the entries of hy such that those indexed by I are on the left and those indexed by J on the right, i.e. h'y(x,...,Xiu,X,...,Xj|J|) = hy(x1,...,Xn). Ob- viously sep(hy; I, J) = sep(h'y; I', J'), where the partition (I', J') is defined by I' = {1,...,|I|} and J' = {|I| + 1,..., N}. Analogously to the definition of h'y, let A's be the tensor obtained by permut- ing the modes of A? such that those indexed by I are on the left and those indexed by J on the right i.e. Ad.di dj.j|J! = Au, ..d. It is not difficult to see that matricizing A' w.r.t. (I', J') is equivalent to matricizing A w.r.t. (I, J), i.e. [A']1', J' = [A]1,J, and in particular rank[A']1',J' = rank[A]1,J Moreover, since by definition A' is a coefficient tensor corresponding to hy (eq.2), A's' will be a coefficient tensor that corresponds to hy. Now, our proof will show that sep(hy; I', J') = rank[A']I',J', which, in light of the equalities above, implies sep(hy; I, J) = rank[A]1,J, as required.\nTo prove that sep(hy; I, J)rank[A]1,J, denote by R the rank of [A%]1,J. The latter is an M|I|-by-M!J| matrix, thus there exist vectors u1.. .uR E R M!I!. 
and V1...VR E RM|J| For every v E [R], let B' be the tensor of order |I| and dimension M in each mode whose arrangement as a column vector gives u, i.e. whose matricization w.r.t. the partition ([[I|], 0) is equal to u,. Similarly, let C, v E [R], be the tensor of order [J] = N [I| and dimension M in each mode whose matricization w.r.t. the partition (0, [| J|]) (arrangement as a row vector) is equal to vT . It holds that:\n][|Il],o O [C']o,[IJ|] In[|[l],Jn[[I|] O [C](I-|I])n[|J|],(J-|I])n[|J]\nwhere the third equality relies on the assumption I = {1, ..., [I}, J = {|I] + 1, ..., N}, the fourth equality makes use of the relation in eq.[1 and the last equality is based on the linearity of the matricization operator. Since matricizations are merely rearrangements of tensors, the fact that [A1,J = [-1 B' C']1, J implies Plugging this into eq.2 gives:\nM X1,...,XI M (X1,...,xJ d.\nSubstituting these into eq.11|leads to:\n(x1,...,XN X1,...,X1gxI+1,...,XN\nFor proving the converse inequality, i.e. sep(hy; I, J)rank[A]1,J, we rely on basic concepts and result from functional analysis, or more specifically, from the topic of L2 spaces. While a full introduction to this. topic is beyond our scope (the interested reader is referred to Rudin(1991)), we briefly lay out here the minima. background required in order to follow our proof. For any n E N, L-(Rn) is formally defined as the Hilber space of Lebesgue measurable square-integrable real functions over Rn1o, equipped with standard (point. wise) addition and scalar multiplication, as well as the inner product defined by integration over point-wise multiplication. For our purposes, L2(Rn) may simply be thought of as the (infinite-dimensional) vector space. of functions g : Rn -> R satisfying S g2 < 0o, with inner product defined by (g1, g2) := S g1:g2. Our proo. will make use of the following basic facts related to L? spaces:.\nFact 1. If V is a finite-dimensional subspace of L2(Rn), then any gEL? (Rn) may be expressed as g = p + with pEV and SeV- (i.e. S is orthogonal to all elements in V). Moreover, such a representation is unique, sc in the case where gEV, we necessarily have p = g and 8 = 0.\nFact 2. If gEL2(Rn), q'EL2(Rn'), then the function (x1,x2)+>g(x1).q'(x2) belongs to L2(Rn Rn'\nFact 3. Let V and V' be finite-dimensional subspaces of L2(Rn) and L2(Rn') respectively, and de- fine UcL?(Rn Rn') to be the subspace spanned by {(x1,x2)+>p(x1)p'(x2) : pEV,p'EV'}. Given gEL?(Rn),g'EL?(Rn'), consider the function (x1,x2)+>g(x1)g'(x2) in L?(Rn Rn'). This function be- longs to U if gEV or g'EV'\nFact 4. If g1...gmEL?(Rn) are linearly independent, then for any k E N, the set of functions {(X1,.,Xk)+>-1Jd,(xi)} dz E[m] is linearly independent in L?((Rn)k)\nTo facilitate application of the theory of L2 spaces, we now make use of the assumption that the network's. representation functions fe,, as well as the functions gv, g in the definition of separation rank (eq.|5), are mea surable and square-integrable. Taking into account the expression given in eq.2|for hy, as well as fact|2|above one readily sees that fe1...fe EL?(Rs) implies hyEL?((Rs)). The separation rank sep(hy; I, J) will be. the minimal non-negative integer R such that there exist g1.. .gRE L?((Rs)|I!) and g1.. .g'REL? ((Rs)|J!) for which:\nR x1...,XN)= X1,...,XgvxI+1,...,Xn\nV L2 span X1;.. 
X iE[M] V L2 IM U span - EM\nU = span{(x1,..,xN)+>p(x1,.., X||)p'(x||+1,.., xN) : pEV,p'EV'}\n10 More precisely, elements of the space are equivalence classes of functions, where two functions are con sidered equivalent if the set in R\" on which they differ has measure zero.\nWe would like to show that sep(hy; I, J)>rank[A]1.J. Our strategy for achieving this will be to start from eq.[12] and derive an expression for [A]1,J comprising a sum of R rank-1 matrices. As an initial step along this path, define the following finite-dimensional subspaces:\nhy(X1,...,XN) vx1,..,X1)gxI+1,.,xN) X1)+8(x1,..,X1) xI+1,.,XN)+ (xI+1,.,XN X|1|)p(x|1|+1,.,xN) x1,..,x)x+1,...,xN) O(x1,...,x1)p(x|I|+1,...,xN) (x1,...,X|1|)(x||+1,...,XN)\nR (x1,...,XN X1,...,X)pxI+1,...,XN\nM x1,...,X|I M x1,...,x|\nhy(x1,...,XN) .-, fed.(xi) mpare this expression for hy to that given in eq.2 M\n...dN,Vdi...dN E[M] > Ay= 8 Ci\nMatricizing the tensor equation on the right w.r.t. (I, J) gives\nwhere the second equality is based on the linearity of the matricization operator, the third equality relies on the relation in eq.[1] and the last equality makes use of the assumption I = {1, ..., [I[}, J = {|I] + 1, ..., N}\nGiven that U is the span of products from V and V' (eq.16), and that pv EV, 8,EV, P' EV', 8, EV', one. readily sees that the first term in the latter expression belongs to U, while, according to fact[3] the second, third and fourth terms are orthogonal to U. We thus obtained an orthogonal decomposition of hy w.r.t. U. Since hy is contained in U, the orthogonal component must vanish (fact1), and we amount at:.\nFor every v E [R], let B and C' be coefficient tensors of py and p'l, w.r.t. the functions that span V and V (eq.13|and 14), respectively. Put formally, B and C are tensors of orders [I] and [J] (respectively), with dimension M in each mode, meeting:.\nInj|I],Jn[[]] O [C'](1-|I])n[|Jl],(J-|I)n[|J|] I[|Il],o O [C']o,[IJ|]\nFor every v E [R], [B'|],o is a column vector of dimension M!I! and [C'o,[|J|] is a row vector of dimen sion M!J|. Denoting these by u, and vJ respectively, we may write:.\nR [A']1,J="}, {"section_index": "14", "section_name": "A.2 PROOF OF CLAIM2", "section_text": "The claim is framed in measure theoretical terms, and in accordance, so will its proof be. While a complet introduction to measure theory is beyond our scope (the interested reader is referred to Jones(2001)), we briefl. convey here the intuition behind the concepts we will be using, as well as facts we rely upon. The Lebesgu. measure is defined over sets in a Euclidean space, and may be interpreted as quantifying their \"volume\". Fo. example, the Lebesgue measure of a unit hypercube is one, of the entire space is infinity, and of a finite set o. points is zero. In this context, when a phenomenon is said to occur almost everywhere, it means that the se. of points in which it does not occur has Lebesgue measure zero, i.e. is negligible. An important result we wil. make use of (proven in|Caron and Traynor (2o05) for example) is the following. Given a polynomial define. over n real variables, the set of points in R\" on which it vanishes is either the entire space (when the polynomia. in question is the zero polynomial), or it must have Lebesgue measure zero. In other words, if a polynomial i. not identically zero, it must be different from zero almost everywhere..\nHeading on to the proof, we recall from sec.3|that the entries of the coefficient tensor A (eq.2) are given. 
by polynomials in the network's conv weights{a'~}t, and output weights aL,y. Since [A1,J - the ma- tricization of A? w.r.t. the partition (I, J), is merely a rearrangement of the tensor as a matrix, this matrix too has entries given by polynomials in the network's linear weights. Now, denote by r the maximal rank. taken by [A]1,J as network weights vary, and consider a specific setting of weights for which this rank is. attained. We may assume without loss of generality that under this setting, the top-left r-by-r block of [A']1,J is non-singular. The corresponding minor, i.e. the determinant of the sub-matrix ([A]1,J)1:r,1:r, is thus a. polynomial defined over {a',~ }t, and aL,y which is not identically zero. In light of the above, this polynomial. is different from zero almost everywhere, implying that rank([A]1,J)1:r,1:r = r almost everywhere. Since. rank[A]1,Jrank([A]1,J)1:r,1:r, and since by definition r is the maximal rank that [A]1, J can take, we have that rank[A|1,J is maximal almost everywhere."}, {"section_index": "15", "section_name": "A.3 PROOF OF THEOREM1", "section_text": "0,Y ey ,Ymin{ro,M} a 0 , otherwise 1, Y 1 ,y = 1 a 0 , Otherwise e1 l,Y ,y = 1 a forl = 2...L - 1 0 , otherwise L,y a e1\nLet n E [N/4]. Recalling the definition of I,k and Ji,x from eq.[6] consider the sets I1,n and J1,n, as well as Io,4(n-1)+t and Jo,4(n-1)+t for t E [4]. (I1,n, J1,n) is a partition of [4], i.e. I1,nUJ1,n = [4], and for every t E [4] we have Io,4(n-1)+t = {1} and Jo,4(n-1)+t = 0 if t belongs to I1,n, and otherwise Io,4(n-1)+t = 0\nThis shows that rank[A]1,JR. Since R is a general non-negative integer that admits eq.[12] we may take it to be minimal, i.e. to be equal to sep(hy; I, J) the separation rank of hy w.r.t. (I, J). By this we obtain rank[A]1,Jsep(hy; I, J), which is what we set out to prove.\nThe matrix decomposition in eq.7expresses [A]1,J in terms of the network's linear weights - {ao,~ E RM}E[ro] for conv operator in hidden layer 0, {a'~ E Rri-1}e[r] for conv operator in hidden layer l = 1. . .L--1, and aL,y E RrL-1 for node y of dense output operator. We prove lower and upper bounds on the maximal rank that [A]1,J can take as these weights vary. Our proof relies on the rank-multiplicative property of the Kronecker product (rank(AOB) = rank(A).rank(B) for any real matrices A and B - see|Bellman (1970) for proof), but is otherwise elementary.\nBeginning with the lower bound, consider the following weight setting (e. here stands for a vector holding 1 in entry y and O at all other entries, O stands for a vector holding 0 at all entries, and 1 stands for a vector holding 1 at all entries, with the dimension of a vector to be understood by context):.\nand Jo,4(n-1)+t = {1} if t belongs to J1,n. This implies that for an arbitrary vector v, the matricization v1 is equal to v if teI1.n, and to vT if tEJ1.n. Accordingly, for any y E [ro]:.\nI1,n=4|J1,n=0 |I1,n=3|J1,n=1 Y)(a%,Y O aV I1,n=2 J1,n=2 U,Y O a',Y O I1.n=1J1.n=3 Oa', |I1,n=0|J1,n|=4 a,\nAssume that y min{ro, M}. By our setting ao,~ = ey, so the above matrix holds 1 in a single entry and 0 in all the rest. Moreover, if the matrix is not a row or column vector, i.e. if both I1,n and J1.n are non-empty the column index and row index of the entry holding 1 are both unique w.r.t. , i.e. they do not repeat as ranges over 1... min{ro, M}. 
We thus have:\nmin{ro,M} O[a0,]1o,4(n-1)+tJo,4(n-1)+t min{ro, M} ,I1,n0^ J1,n7 rank I1,n=0 V J1,n=\nmin{ro, M} ,I1,n0 ^ J1,n0 ro 1.1 rank a I0,4(n-1)+t,J0,4(n-1)+t 1 ,I1,n=0 V J1,n=0\nmin{ro, M} ,I1,n0 ^ J1,n rank I1,n=0 V J1,n=0\nN/4 A]1,J=\nN/4 rank[A]1,J =] rank I1,t,J1,t = min{ro, M} t=1\nwhere S := {t E [N/4] : I1,t 0 J1,t 0}]. This equality holds for the specific weight setting we define in eq.19 Maximizing over all weight settings gives the sought after lower bound:.\nro 1.^ 4(k-1)+t0,4k-1)+t\n4k-1\n[9bY]I,k,Jl,k I-1,4(k-1)+tJl-1,4(k-1)+t\nmax rank[A91,J min{ro, M a }l.x,aL,y\nMoving on to the upper bound, we show by induction over l = 1. . .L -- 1 that for any k E [N/4'] and E [ri]\n1,k is defined by the right hand side of this inequality, so our inductive hypotheses holds for l = 1. For l > 1:\nrank[$ 'Ii.k,Jl,k rank -1.4(k-1)+tJl-1,4(k-1)+t an 1,4(k-1)+tJl-1,4(k-1)+t Ii-1,4(k-1)+t,J{-1,4(k-1)+t -1,4k-1+t < -1,4(k-1)+t C1\nwhere we used rank sub-additivity in the second line, the rank-multiplicative property of the Kronecker product in the third line, and our inductive hypotheses for l - 1 in the fourth line. Since the number rows and columns in ['~]1,k, Jr,x is M|I1,s! and M|Ji,s! respectively, we may incorporate these terms into the inequality, ob- taining:\nrank['~]1,h,J1,k min{ Mmin{|It,xh|J1,el}, -1,4k-1)+t\nThe right hand side here is equal to cl,k by definition, so our inductive hypotheses indeed holds for all l =. 1. . .L - 1. To establish the sought after upper bound on the rank of [A?]1,J, we recall that the latter is given\nIL-1 [A]1,J= Ly 1.tJL-1t Q=1\nCarry out a series of steps similar to before, while making use of our inductive hypotheses for l = L - 1:"}, {"section_index": "16", "section_name": "B SEPARATION RANK AND THE L2 DISTANCE FROM SEPARABLE FUNCTIONS", "section_text": "Our analysis of correlations modeled by convolutional networks is based on the concept of separation rank conveyed in sec.[4 When the separation rank of a function w.r.t. a partition of its input is equal to 1, the functior. is separable, meaning it does not model any interaction between sides of the partition. We argued that the highe. the separation rank, the farther the function is from this situation, i.e. the stronger the correlation it induces. between sides of the partition. In the current appendix we formalize this argument, by relating separation rank. to the L2 distance from the set of separable functions. We begin by defining and characterizing a normalizec. (scale invariant) version of this distance (app.[B.1). It is then shown (app.[B.2) that separation rank provides ar. upper bound on the normalized distance. Finally, a lower bound that applies to deep convolutional arithmetic. circuits is derived (app.B.3), based on the lower bound for their separation ranks established in sec.5.2. Together, these steps imply that our entire analysis, facilitated by upper and lower bounds on separation ranks. of convolutional arithmetic circuits, can be interpreted as based on upper and lower bounds on (normalized. L2 distances from separable functions.\nIn the text hereafter, we assume familiarity of the reader with the contents of sec.23l45]and the proofs give. in app.A We also rely on basic knowledge in the topic of L2 spaces (see discussion in app.[A.1|for minima. background required in order to follow our arguments), as well as several results concerning singular value of matrices. 
In line with sec.5 an assumption throughout this appendix is that all functions in question ar measurable and square-integrable (i.e. belong to L2 over the respective Euclidean space), and in app.[B.3] w. also make use of the fact that representation functions (fe,) of a convolutional arithmetic circuit can be regarde as linearly independent (see sec.5.1). Finally, for convenience, we now fix (I, J) an arbitrary partition of [N Specifically, I and J are disjoint subsets of [N] whose union gives [N], denoted by I = {1, ..., 1|} wit i1 <...< ij1], and J ={j1,...,J|J|} with j1 <...< j|J|.\nFor a function hEL?((Rs)N) (which is not identically zero), the normalized L2 distance from the set of sepa rable functions w.r.t. (I, J), is defined as follows:.\n1 D(h; I, J) inf h(x1,...,XN)- g(xi,..,Xi)g'(x1 |h|| gEL2((Rs)|I| g'EL2((Rs)|J|)\nrank 1.tJL-1,t cank tJL-1,t 1.tJL-1t 1 1t rL-\nrank[A]1,J rank 1 n. JL-1 car -1.tJL-1t < 1.t rI\nSince [A%]1,J has M!I! rows and M|J! columns, we may include these terms in the inequality, thus reaching the upper bound we set out to prove..\nnormalization (division by hD) admits scale invariance to D(h; I, J), and is of critical importance - withou it, rescaling h would accordingly rescale the distance measure, rendering the latter uninformative in terms o deviation from separability.\nIt is worthwhile noting the resemblance between D(h; I, J) and the concept of mutual information (see|Cover and Thomas(2012) for a comprehensive introduction). Both measures quantify the interaction that a nor malized function induces between input variables, by measuring distance from separable functions. The difference between the measures is threefold. First, mutual information considers probability density functions (non-negative and in L), while D(h; I, J) applies to functions in L2. Second, the notion of distance in mu- tual information is quantified through the Kullback-Leibler divergence, whereas in D(h; I, J) it is simply the L? metric. Third, while mutual information evaluates the distance from a specific separable function -- product of marginal distributions, D(h; I, J) evaluates the minimal distance across all separable functions.\n.,X)dx1.dx X dxN 2 'Aa.a' $(Xi, Xi$'(Xj1,.. ,Xj|J| 3 Po'(xi1 ..,XjJ)dx1.-dxN (xi1 ...,Xj[J] ..,Xjl. )dx1:..dx Xi1 ...dxi! ..,XilIl )dXi1 5 Xi dXjlJ .u' . otherwise 6 , otherwise. (7) A 24\nEquality (1) here originates from the definition of L2 norm. (2) is obtained by plugging in the expression in eq.22 (3) is merely an arithmetic manipulation. (4) follows from the linearity of integration. (5) makes use\n11 An equivalent definition of D(h; I, J) is the minimal L2 distance between h/ ||h| and a function separable w.r.t. (I, J). Accordingly, we may view D(h; I, J) as operating on normalized functions\nWe now turn to establish a spectral characterization of D(h; I, J), which will be used in app.B.2 and|B.3|fo deriving upper and lower bounds (respectively). Assume we have the following expression for h:\nh(x1,...,XN) = Xi1,...,Xil Xj1,...,Xj.\nwhere m and m' are positive integers, A is an m-by-m' real matrix, and {$} #=1, {$' }1 are orthonormal sets of functions in L2((Rs)|I|), L2((Rs)|J|) respectively. We refer to such expression as an orthonormal separable decomposition of h, with A being its coefficient matrix. We will show that for any orthonormal separable decomposition, D(h; I, J) is given by the following formula:\no?(A) D(h;I, J) ?(A)+...+ o? min{m,m'}(A)\nwhere o1(A) ... min{m,m'}(A) 0 are the singular values of the coefficient matrix A. 
This implies. that if the largest singular value of A accounts for a significant portion of the spectral energy, the normalized L2 distance of h from separable functions is small. On the other hand, if all but a fraction of the spectral energy. is attributed to trailing singular values, h is far from being separable (D(h; I, J) is close to 1)..\na first step in deriving eq. we show that |h||? = 0(A) +...+ 0min{m,m'}(A) h||2 :,XN)dx1.dxN 1 (2) X ..., XjlJl (3 :, XjL1L )dx1:::dxN 4 ,Xj|J dx1:::dxN ,Xi|1|)dXi1dXi|1 5 ; U = ,u = u , otherwise 0 , otherwise (6) (7) 01(A) +...+ 0min{m,m'}(A) (24 (8)\n(6) , Otherwise , otherwise (7) o}(A) +...+ Omin{m,m'} (8)\ng(Xi1,...,Xi1)g'(x1,...,XjJ|) +O(Xi\nh(x1,...,xN)-gxi,...,Xi)gx1,..,xj =A-QQ)+|E(x1.xN)||\nX1,..,XN)-g(xi,.,Xil Xi1:\nh(x1,...,XN)- g(xi,...,Xi! 2(A)+...+0min{m,m )q (xi1: .XiLJ A\n(A)+...+ Omin{m,m' inf h(x1,...,XN) gxiu,.. gEL2((Rs)|I| g'EL2((Rs)|J|\nRecall that we would like to derive the formula in eq.[23|for D(h; I, J), assuming h is given by the orthonormal separable decomposition in eq.22Taking square root of the equalities established in eq.24 and [25] and plugging them into the definition of D(h; I, J) (eq.21), we obtain the sought after result.\nof Fubini's theorem (seeJones(2001). (6) results from the orthonormality of {$} =1 and {' }=1. (7) is a trivial computation. Finally, (8) is an outcome of the fact that the squared Frobenius norm of a matrix, i.e. the sum of squares over its entries, is equal to the sum of squares over its singular values (see[Golub and Van Loan 2013) for proof).\nXj(J)-E(x1,...,XN) (x1,...,Xn\n'). $(Xi,...,Xil. +||&(x1,...,xn)ll Xi1: Xi"}, {"section_index": "17", "section_name": "B.2 UPPER BOUND THROUGH SEPARATION RANK", "section_text": "We now relate D(h; I, J) the normalized L2 distance of hEL?((Rs)N) from the set of separable functions w.r.t. (I, J) (eq.21), to sep(h; I, J) - the separation rank of h w.r.t. (I, J) (eq.[5. Specifically, we make use of the formula in eq.2 23|to derive an upper bound on D(h; I, J) in terms of sep(h; I, J)..\nAssuming h has finite separation rank (otherwise the bound we derive is trivial), we may express it as.\nR h(x1,...,XN) ,...,Xi xj1.,X\nh(x1,...,XN)\nhe latter holds for any R E N that admits eq.[26] so in particular we may take it to be minimal, i.e. to be equ sep(h; I, J) , bringing forth the sought after upper bound:"}, {"section_index": "18", "section_name": "B.3 LOWER BOUND FOR DEEP CONVOLUTIONAL ARITHMETIC CIRCUITS", "section_text": "Let hyEL?((Rs)N) be a function realized by a deep convolutional arithmetic circuit (fig.1[a) with size-4 pooling windows and L = log4 N hidden layers), i.e. hy is given by eq.2 where fe1...fe EL?(Rs) are linearly independent representation functions, and A' is a coefficient tensor of order N and dimension M in each mode, determined by the linear weights of the network ({a'~}t,, aL,) through the hierarchical de- composition in eq.3 Rearrange eq.2|by grouping indexes d1.. .d in accordance with the partition (I, J):\no1(A) 1 ?(A)+...+o? A R min{m,m'\no?(A) 1 D(h; I, J ?(A)+...+ ? R min{m,m'\n1 D(h;I,J) sep(h; I, J)\nBy eq.|27 low separation rank implies proximity (in normalized L2 sense) to a separable function. We may use. the inequality to translate the upper bounds on separation ranks established for deep and shallow convolutional arithmetic circuits (sec.5.2|and 5.3 respectively), into upper bounds on normalized L2 distances from separable. functions. 
To completely frame our analysis in terms of the latter measure, a translation of the lower bound on separation ranks of deep convolutional arithmetic circuits (sec.[5.2) is also required. Eq.27|does not facilitate. such translation, and in fact, it is easy to construct functions h whose separation ranks are high yet are very close (in normalized L2 sense) to separable functions. 3However, as we show in app.B.3|below, the specific lower. bound of interest can indeed be translated, and our analysis may entirely be framed in terms of normalized L2. distance from separable functions.\nhy (X1,...,XN) =\nLet m = M!I!, and define the following mapping.\n: [M][IT> [m] p(di,..., di! =+ 1)MI|-t\nu': [M]|J]> [m'], H'(di,.. :,dil1 1)M|J|-t\nhyX1...,XN)= A9 Pu(Xi1\ndXi|I )dxi1...dxi! (2) (3) (4) (5) , otherwise. 1 ,di+() = dit() Vt E [|I] 0 , otherwise (6) 1 = 0 , Otherwise 7\no?([Ay]1,J) D(hy;I,J) ?([Ay1,J)+...+ o? nin{m,m'}([Ay]1,J)\nIn sec.5.2 we showed that the maximal separation rank realizable by a deep network is greater than or equal to min{ro, M}S, where M, ro are the number of channels in the representation and first hidden layers (respec- tively), and S stands for the number of index quadruplets (sets of the form {4k-3, 4k-2, 4k-1, 4k} for some k E\nWe now direct our attention to the special case where fe, .. .fe EL? (Rs) - the network's representation func. tions, are known to be orthonormal. The general setting, in which only linear independence is known, will be treated thereafter. Orthonormality of representation functions implies that 1 ... $mEL?((Rs)|!) are or- thonormal as well:\n)dXi1::.dXi|1 ; Xi Xi1,..., ; Xi 1 Xi...dXij (2) (3) (4) , dit () = di+(p) (5) 0 , Otherwise 1 ,dit() = dit() Vt E [|I|] (6) 0 , Otherwise 1 , = 0 , Otherwise 7\n(1) and (4) here follow from the definition of inner product in L2 space, (2) replaces $ and $p by. their definitions, (3) makes use of Fubini's theorem (see Jones(2001), (5) relies on the (temporary) as. sumption that representation functions are orthonormal, (6) is a trivial step, and (7) owes to the fact that +> (di1 (), ..., di 1 ()) is an injective mapping. A similar sequence of steps (applied to (', ')) shows. that in addition to 1 ... $m, the functions '1 ... 'm, E L? (IRs)!J!) will also be orthonormal if fe1...fem are.. We conclude that if representation functions are orthonormal, eq.29lindeed provides an orthonormal separable decomposition of hy, and the formula in eq.23 may be applied:.\n[N/4]) that are split by the partition (I, J). To prove this lower bound, we presented in app.A.3[a specific setting for the linear weights of the network ({a'~}t,, aL,) under which rank[A]1,J = min{ro, M}S Careful examination of the proof shows that with this particular weight setting, not only is the rank of [A']1,J equal to min{ro, M}S, but also, all of its non-zero singular values are equal to one another. 14 This implies that o?([A]1,J)/(o([A]1,s) +...+ min{m,m'}([A]r,J)) = min{ro, M}-S, and since we currently assume that fe1...fe are orthonormal, eq.30|applies and we obtain D(hy; I, J) = 1 - min{ro, M}-S Maximizing over all possible weight settings, we arrive at the following lower bound for the normalized L2 distance from separable functions brought forth by a deep convolutional arithmetic circuit:\n1 sup al,Y}t,y,aL,y ;I, min{ro, M}S a l,y,aL,y\nTurning to the general case, we omit the assumption that representation functions fe...feEL?(Rs are orthonormal, and merely rely on their linear independence. 
The latter implies that the dimension oi span{fe...fe} is M, thus there exist orthonormal functions 1...mEL?(Rs) that span it. Let F E pose now that we replace the original representation functions fe1...fe by the orthonormal ones $1.. .M Using the latter, the lower bound in eq.31|applies, and there exists a setting for the linear weights of the network - {a',~}t,, aL,, such that D(hy; I, J)1 - min{ro, M}-S. Recalling the structure of convolu- tional arithmetic circuits (fig.1(a)), one readily sees that if we return to the original representation functions fo1 .. .fe, while multiplying conv weights in hidden layer O by F' (i.e. mapping ao,~+>F' ao,), the overall function hy remains unchanged, and in particular D(hy; I, J)1 min{ro, M}-S still holds. We con clude that the lower bound in eq.31|applies, even if representation functions are not orthonormal.\nTo summarize, we translated the lower bound from sec.5.2|on the maximal separation rank realized by a deep convolutional arithmetic circuit, into a lower bound on the maximal normalized L2 distance from separable functions (eq.31). This, along with the translation of upper bounds facilitated in app.B.2 implies that the analysis carried out in the paper, which studies correlations modeled by convolutional networks through the notion of separation rank, may equivalently be framed in terms of normalized L2 distance from separable functions. We note however that there is one particular aspect in our original analysis that does not carry through the translation. Namely, in sec.5.1|it was shown that separation ranks realized by convolutional arithmetic circuits are maximal almost always, i.e. for all linear weight settings but a set of (Lebesgue) measure zero. Put differently, for a given partition (I, J), the maximal separation rank brought forth by a network characterizes almost all functions realized by it. An equivalent statement does not hold with the continuous measure of normalized L? distance from separable functions. The behavior of this measure across the hypotheses space of a network is non-trivial, and forms a subject for future research.\nWhen training convolutional arithmetic circuits, we followed the hyper-parameter choices made by Sharir et al.. [2016). In particular, our objective function was the cross-entropy loss with no L2 regularization (i.e. with. weight decay set to 0), optimized using Adam (Kingma and Ba (2014)) with step-size = 0.003 and moment decay rates 1 = 2 = 0.9. 15000 iterations with batch size 64 (48 epochs) were run, with the step-size a decreasing by a factor of 10 after 12000 iterations (38.4 epochs). We did not use dropout (Srivastava et al. (2014)), as the limiting factor in terms of accuracies was the difficulty of fitting training data (as opposed to. overfitting) - see fig.3\nFor training the conventional convolutional rectifier networks, we merely switched the hyper-parameters ol Adam to the recommended settings specified inKingma and Ba(2014) ( = 0.001, 1 = 0.9, 2 = 0.999) and set weight decay to the standard value of O.0001..\n14 To see this, note that with the specified weight setting, for every n E [N/4], [1,111,n,J1.n has one of two forms: it is either a non-zero (row/column) vector, or it is a matrix holding 1 in several entries and 0 in all the rest, where any two entries holding 1 reside in different rows and different columns. The first of the two forms admits a single non-zero singular value. 
The second brings forth several singular values equal to 1, possibly accompanied by null singular values. In both cases, all non-zero singular values of [$1,11,n,J1,n are equal to one another. Now, since [A1,J = ON1 [1,111,n, J1,n, and since the Kronecker product multiplies singular values (seeBellman(1970). we have that all non-zero singular values of [Ay1..1 are equal, as required.\nIn this appendix we provide implementation details omitted from the description of our experiments in sec.7 Our implementation, available online at https: //github. com/HuJI-Deep/inductive-pooling is based on the SimNets branch (Cohen et al.(2016a)) of Caffe toolbox (Jia et al.(2014)). The latter realizes convolutional arithmetic circuits in log-space for numerical stability."}, {"section_index": "19", "section_name": "MORPHOLOGICAL CLOSURE", "section_text": "It is not difficult to see that any pixel active in the original image is necessarily active in its closure. Moreover. pixels that are originally inactive yet are surrounded by active ones will also be turned on in the closure, hence the effect of \"gap filling\". Finally, we note that the particular sequence of steps described above represents the most basic form of morphological closure. The interested reader is referred to|Haralick et al.(1987) for a much more comprehensive introduction.\nThe synthetic dataset used in our experiments (sec.7) consists of binary images displaying different shapes. (blobs). One of the tasks facilitated by this dataset is the detection of morphologically closed blobs, i.e. of images that are relatively similar to their morphological closure. The procedure we followed for computing the morphological closure of a binary image is:.\n1. Pad the given image with background (0 value) pixels 2. Morphological dilation: simultaneously turn on (set to 1) all pixels that have a (left, right, top ol bottom) neighbor originally active (holding 1) 3. Morphological erosion: simultaneously turn off (set to O) all pixels that have a (left, right, top ol bottom) neighbor currently inactive (holding 0) Remove nixels introduced in naddino"}] |
Hyq4yhile | [{"section_index": "0", "section_name": "LEARNING INVARIANT FEATURE SPACES TO TRANS FER SKILLS WITH REINFORCEMENT LEARNING", "section_text": "Abhishek Gupta'* Coline Devin**, YuXuan Liu, Pieter Abbeelt*, Sergey Levine"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "People can learn large repertoires of motor skills autonomously from their own experience. How- ever, learning is accelerated substantially when the learner is allowed to observe another person performing the same skill. In fact, human infants learn faster when they observe adults performing a task, even when the adult performs the task differently from the child, and even when the adul performs the task incorrectly (Meltzoff|1999). Clearly, we can accelerate our own skill learning by observing a novel behavior, even when that behavior is performed by an agent with different phys ical capabilities or differences in morphology. Furthermore, evidence in neuroscience suggests that the parts of the brain in monkeys that respond to the pose of the hand can quickly adapt to insteac respond to the pose of the end-effector of a tool held in the hand (Umilta et al.l 2008). This suggests that the brain learns an invariant feature space for the task (e.g., reaching with a tool) that is inde- pendent of the morphology of the limb performing that task. Mirror neurons also fire both when the animal performs a task and when it observes another animal performing it (Rizzolatti & Craighero 2004][Ferrari et al.]2005). Can we enable robots and other autonomous agents to transfer knowledge from other agents with different morphologies by learning such invariant representations?\nIn robotics and reinforcement learning, prior works have considered building direct isomorphisms between state spaces, as discussed in Section2 However, most of these methods require specific domain knowledge to determine how to form the mapping, or operate on simple, low-dimensional environments. For instance, Taylor et al.(2008) find a mapping between state spaces by searching through all possible pairings. Learning state-to-state isomorphisms involves an assumption that the two domains can be brought into correspondence, which may not be the case for morphologically\n*These authors contributed equally to this work\nUC Berkeley, Department of Electrical Engineering and Computer Science + OpenAI"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "People can learn a wide range of tasks from their own experience, but can also learn from observing other creatures. This can accelerate acquisition of new skills even when the observed agent differs substantially from the learning agent in terms of morphology. In this paper, we examine how reinforcement learning algorithms can transfer knowledge between morphologically different agents (e.g., different robots). We introduce a problem formulation where two agents are tasked with learning multiple skills by sharing information. Our method uses the skills that were learned by both agents to train invariant feature spaces that can then be used to transfer other skills from one agent to another. The process of learning these invariant feature spaces can be viewed as a kind of \"analogy making,\" or implicit learning of partial correspondences between two distinct domains. 
We evaluate our transfer learning algorithm in two simulated robotic manipulation skills, and illustrate that we can transfer knowledge between simulated robotic arms with dif- ferent numbers of links, as well as simulated arms with different actuation mech anisms, where one robot is torque-driven while the other is tendon-driven.\ndifferent agents. Some aspects of the skill may not be transferable at all, in which case they must be learned from scratch, but we would like to maximize the information transferred between the agents\nIn this paper, we formulate this multi-agent transfer learning problem in a setting where two agents. are learning multiple skills. Using the skills that have been already acquired by both agents, each agent can construct a mapping from their states into an invariant feature space. Each agent can then transfer a new skill from the other agent by projecting the executions of that skill into the invariant. space, and tracking the corresponding features through its own actions. This provides a well-shaped reward function to the learner that allows it to imitate those aspects of the \"teacher' agent that are invariant to differences in their morphology, while ignoring the parts of the state that cannot be. imitated. Since the mapping from the state spaces of each agent into the invariant feature space might be complex and nonlinear, we use deep neural networks to represent the mappings, and we present an algorithm that can learn these mappings from the shared previously acquired skills..\nThe main contributions of our work are a formulation of the multi-skill transfer problem, a definition of the common feature space, and an algorithm that can be used to learn the maximally informative. feature space for transfer between two agents (e.g., two robots with different morphologies). To. evaluate the efficiency of this transfer process, we use a reinforcement learning algorithm to transfer skills from one agent to another through the invariant feature space. The agents we consider may. differ in state-space, action-space, and dynamics. We evaluate our transfer learning method in two. simulated robotic manipulation tasks, and illustrate that we can transfer knowledge between simu-. lated robotic arms with different numbers of links, as well as simulated arms with different actuation mechanisms. where one robot is torque-driven while the other is tendon-driven."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Transfer learning has long been recognized as an important direction in robotics and reinforcement. learning (Taylor & Stone(2009)). Konidaris & Barto(2006) learned value functions on subsets of the state representation that were shared between tasks, providing a shaping reward in the target. task.Taylor et al.[(2007) manually construct a function to map a Q-function from one Markov decision process (MDP) to another. Ammar & Taylor (2012) manually define a common feature space between the states of two MDPs, and use this feature space to learn a mapping between states..\nLater work by Ammar et al.(2015a) uses unsupervised manifold alignment to assign pairings be tween states for transfer. Like in our method, they aim to transfer skills between robots with different configurations and action spaces by guiding exploration in the target domain. 
The main difference from our work is that[Ammar et al.(2015a) assume the presence of a feature mapping that provides distances between states, and use these (hand designed) features to assign correspondences between states in the different domains. In contrast, we assume that good correspondences in episodic tasks can be extracted through time alignment, and focus on learning the feature mapping itself. Addition- ally, we do not try to learn a direct mapping between state spaces but instead try to learn nonlinear embedding functions into a common feature space, as compared to linear mappings between state spaces learned in|Ammar et al.(2015a). In a similar vein, Raimalwala et al.(2016) consider transfer learning across linear time-invariant (LTI) systems through simple alignment based methods. Al- though this method is quite effective in enabling transfer in these systems, it does not apply to the higher dimensional continuous control tasks we consider which may have non-linear dynamics, and may not be LTI.\nIn machine learning,Pan & Yang(2010) provide an extensive survey on transfer learning which addresses the case of train and test data being drawn from different distributions, as well as learn ing models that succeed on multiple, related tasks. Ben-David & Schuller (2003) derive theoretical guarantees on this sort of multitask learning and provide a formal framework for defining task re latedness. In deep learning, Caruana(1997) show that a multitask network can leverage a shared representation of the input to learn multiple tasks more quickly together than separately.\nMore recent work in deep learning has also looked at transferring policies by reusing policy pa rameters between environments (Rusu et al.2016a b] Braylan et al.] 2015] Daftry et al.]2016) using either regularization or novel neural network architectures, though this work has not looked at transfer between agents with structural differences in state, such as different dimensionalities. Our approach is largely orthogonal to policy transfer methods, since our aim is not to directly transfer a\nskill policy, which is typically impossible in the presence of substantial morphological difference. but rather to learn a shared feature space that can be used to transfer information about a skill tha is shared across robots, while ignoring those aspects that are not shared. Our own recent work ha looked at morphological differences in the context of multi-agent and multi-task learning (Devii. et al.|2016), by reusing neural network components across agent/task combinations. In contrast t. that work, which transferred components of policies, our present work aims to learn common fea ture spaces in situations where we have just two agents. We do not aim to transfer parts of policie themselves, but instead look at shared structure in the states visited by optimal policies, which ca. be viewed as a kind of analogy making across domains..\nLearning feature spaces has also been studied in the domain of computer vision as a mechanism for domain adaptation and metric learning.Xing et al.(2002) finds a linear transformation of the input data to satisfy pairwise similarity contraints, while past work byChopra et al.(2005) used Siamese networks to learn a feature space where paired images are brought close together and unpaired images are pushed apart. This enables a semantically meaningful metric space to be learned with only pairs as labels. 
Later work on domain adaptation by Tzeng et al. (2015) and Ganin et al. (2016) uses an adversarial approach to learn an image embedding that is useful for classification and invariant to the input image's domain. We use the idea of learning a metric space from paired states, though the adversarial approach could also be used with our method as an alternative objective function in future work."
}, {"section_index": "4", "section_name": "PROBLEM FORMULATION AND ASSUMPTIONS", "section_text": "We formalize our transfer problem in a general way by considering a source domain and a target domain, denoted D_S and D_T, which each correspond to Markov decision processes (MDPs) D_S = (S_S, A_S, T_S, R_S) and D_T = (S_T, A_T, T_T, R_T), each with its own state space S, action space A, dynamics or transition function T, and reward function R. In general, the state and action spaces in the two domains might be completely different. Correspondingly, the dynamics T_S and T_T also differ, often dramatically. However, we assume that the reward functions share some structural similarity, in that the state distribution of an optimal policy in the source domain will resemble the state distribution of an optimal policy in the target domain when projected into some common feature space. For example, in one of our experimental tasks, D_S corresponds to a robotic arm with 3 links, while D_T is an arm with 4 links. While the dimensionalities of the states and actions are completely different, the two arms are performing the same task, with a reward that depends on the position of the end-effector. Although this end-effector is a complex nonlinear function of the state, the reward is structurally similar for both agents."
}, {"section_index": "5", "section_name": "3.1 COMMON FEATURE SPACES", "section_text": "We can formalize this common feature space assumption as follows: if µ_S(s_S) denotes the state distribution of the optimal policy in D_S, and µ_T(s_T) denotes the state distribution of the optimal policy in D_T, it is possible to learn two functions, f and g, such that p(f(s_S)) = p(g(s_T)) for s_S ~ µ_S and s_T ~ µ_T. That is, the images of µ_S under f and µ_T under g correspond to the same distribution. This assumption is trivially true if we allow lossy mappings f and g (e.g. if f(s_S) = g(s_T) = 0 for all s_S and s_T). However, the less information we lose in f and g, the more informative the shared feature will be for the purpose of transfer. So while we might not in general be able to fully recover µ_T from the image of µ_S under f, we can attempt to learn f and g to maximize the amount of information contained in the shared space."
}, {"section_index": "6", "section_name": "3.2 LEARNING WITH MULTIPLE SKILLS", "section_text": "In order to learn the common feature space, we need examples from both domains. While both agents could in principle learn a common feature space through direct exploration, in this work we instead assume that the agents have prior knowledge about each other, in the form of other skills that they have both learned. This assumption is reasonable, since many practical use-cases of transfer involve two agents that already have competence in a range of simple settings, and wish to transfer the competence of one agent in a new setting to another one. For example, we might wish to transfer a particular cooking skill from one home robot to another one, in a setting where both robots have already learned some basic manipulation behaviors that can allow us to build a common feature space between the two robots. Humans similarly leverage their extensive prior knowledge to aid in transfer, by recognizing limbs and hands and understanding their function.
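The degeneracy noted in Section 3.1 is easy to see concretely. Below is a small numpy sketch (our own illustration, not code from the paper) that compares the first two moments of embedded source and target states as a crude proxy for the condition p(f(s_S)) = p(g(s_T)); the all-zero mappings drive the discrepancy to zero while carrying no information, which is why the reconstruction terms introduced in Section 4 are needed.

import numpy as np

def moment_distance(zs, zt):
    """Compare the first two moments of two sets of embedded states.

    zs, zt: arrays of shape (num_states, feature_dim) containing f(s_S)
    and g(s_T) for states sampled from the two optimal policies. A small
    value is consistent with the assumption p(f(s_S)) = p(g(s_T))."""
    mean_gap = np.linalg.norm(zs.mean(axis=0) - zt.mean(axis=0))
    cov_gap = np.linalg.norm(np.cov(zs.T) - np.cov(zt.T))
    return mean_gap + cov_gap

# Degenerate mappings f = g = 0 satisfy the condition trivially:
zs = np.zeros((100, 4))
zt = np.zeros((120, 4))
print(moment_distance(zs, zt))  # 0.0, but the embedding carries no information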
To formalize the setting where the two agents can perform multiple tasks, we divide the state space in each of the two domains into an agent-specific state s_r and a task-specific state s_env. A similar partitioning of the state variables was previously discussed by Devin et al. (2016), and is closely related to the agent-space proposed by Konidaris (2006). For simplicity, we will consider a case where there are just two skills: one proxy skill that has been learned by both agents, and one test skill that has been learned by the source agent in the domain D_S and is currently being transferred to the target agent in domain D_T. We will use D_Sp and D_Tp to denote the proxy task domains for the source and target agents. We assume that D_S and D_Sp (and similarly D_T and D_Tp) differ only in their reward functions and task-specific states, with the agent-specific state spaces S_r and action spaces being the same between the proxy and test domains. For example, D_Sp might correspond to a 3-link robot pushing an object, while D_S might correspond to the same robot opening a drawer, and D_Tp and D_T correspond to a completely different robot performing those tasks. Then, we can learn functions f and g on the robot-specific states of the proxy domains, and use them to transfer knowledge from D_S to D_T.

The idea in this setup is that both agents will have already learned the proxy task, and we can compare how they perform this task in order to determine the common feature space. This is a natural problem setup for many robotic transfer learning problems, as well as other domains where multiple distinct agents might need to each learn a large collection of skills, exchanging their experience and learning which information they can and cannot transfer from each other. In a practical scenario, each robot might have already learned a large number of basic skills, some of which were learned by both robots. These skills are candidate proxy tasks that the robots can use to learn their shared space, which one robot can then use to transfer knowledge from the other one and more quickly learn skills that it does not yet possess.
}, {"section_index": "7", "section_name": "3.3 ESTIMATING CORRESPONDENCES FROM PROXY SKILL", "section_text": "The proxy skill is useful for learning which pairs of agent-specific states correspond across both domains. We want to learn a pairing P, which is a list of pairs of states in both domains which are corresponding. This is then used for the contrastive loss as described in Section 4. These correspondences could be obtained through an unsupervised alignment procedure, but in our method we explore two simpler approaches exploiting the fact that the skills we consider are episodic."
}, {"section_index": "8", "section_name": "3.3.1 TIME-BASED ALIGNMENT", "section_text": "The first extremely simple approach we consider is to say that in such episodic skills, a reasonable approximate alignment can be obtained by assuming that the two agents will perform each task at roughly the same rate, and we can therefore simply pair the states that are visited in the same time step in the two proxy domains. A minimal code sketch of this pairing is given below.
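As a concrete sketch, time-based alignment amounts to zipping the two proxy-task episodes together. This is our own illustration under hypothetical naming; in practice the pairs would be aggregated over many episodes and task conditions, and the truncation rule for unequal episode lengths is an assumption on our part.

def time_aligned_pairs(source_traj, target_traj):
    """Pair agent-specific proxy-task states visited at the same time step.

    source_traj, target_traj: lists of per-step agent-specific states from
    one episode of the proxy task in each domain. Returns the pairing P
    used by the contrastive loss in Section 4."""
    horizon = min(len(source_traj), len(target_traj))
    return [(source_traj[t], target_traj[t]) for t in range(horizon)]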
}, {"section_index": "9", "section_name": "3.3.2 ALTERNATING OPTIMIZATION USING DYNAMIC TIME WARPING", "section_text": "However, this simple time-based alignment may not be very robust if the agents are performing the task at somewhat different rates. In order to address this, we formulate an alternating optimization procedure that is more robust than time-based alignment. This optimization alternates between learning a common feature space using the currently estimated correspondences, and re-estimating correspondences using the currently learned feature space. We make use of Dynamic Time Warping (DTW) as described in Muller (2007), a well known method for learning correspondences across sequences which may vary in speed. Dynamic time warping requires a metric space to compare elements in the sequences in order to compute an optimal alignment between the sequences. In this method, we initialize with the weak time-based alignment described in the previous paragraph and use it to learn a common feature space. This feature space serves as a metric space for DTW to re-estimate correspondences across domains. The new correspondences are then used as pairs for learning a better feature space, and so on. This forms an Expectation-Maximization style approach which can help estimate better correspondences than naive time-alignment."
}, {"section_index": "10", "section_name": "LEARNING COMMON FEATURE SPACES FOR SKILL TRANSFER", "section_text": "In this section, we will discuss how the shared space can be learned by means of the proxy task. We will then describe how this shared space can be used for knowledge transfer for a new task, and finally present results that evaluate transfer on a set of simulated robotic control domains.

We wish to find functions f and g such that, for states s_Sp and s_Tp along the optimal policies π*_Sp and π*_Tp, f and g approximately satisfy p(f(s_Sp,r)) = p(g(s_Tp,r)). If we can find the common feature space by learning f and g, we can optimize π_T by directly mimicking the distribution over f(s_Sp,r), where s_Sp,r ~ π*_Sp.

To approximate the requirement that p(f(s_Sp,r)) = p(g(s_Tp,r)), we assume a pairing P of states in the proxy domains as described in Section 3.3. The pairing P is a list of pairs of states (s_Sp, s_Tp) which are corresponding across domains. As f and g are parametrized as neural networks, we can optimize them using the similarity loss metric introduced by Chopra et al. (2005):

L_sim(s_Sp, s_Tp; θ_f, θ_g) = ||f(s_Sp,r; θ_f) − g(s_Tp,r; θ_g)||^2

However, as described in Section 3, if this is the only objective for learning f and g, we can easily end up with uninformative degenerate mappings, such as the one where f(s_Sp,r) = g(s_Tp,r) = 0. Intuitively, a good pair of mappings f and g would be as close as possible to being invertible, so as to preserve as much of the information about the source domain as possible. We therefore train a second pair of decoder networks with the goal of optimizing the quality of the reconstruction of s_Sp,r and s_Tp,r from the shared feature space, which encourages f and g to preserve the maximum amount of domain-invariant information. We define decoders Dec_S(f(s_Sp,r)) and Dec_T(g(s_Tp,r)) that map from the feature space back to their respective states. Note that, compared to conventional Siamese network methods, the weights of the two embedding networks f and g are not tied, since they operate on different domains. The objective combines the two autoencoder losses with the similarity loss:

L_AES(s_Sp,r; θ_f, θ_DecS) = ||s_Sp,r − Dec_S(f(s_Sp,r; θ_f); θ_DecS)||^2

L_AET(s_Tp,r; θ_g, θ_DecT) = ||s_Tp,r − Dec_T(g(s_Tp,r; θ_g); θ_DecT)||^2

min over θ_f, θ_g, θ_DecS, θ_DecT of: L_AES(s_Sp,r; θ_f, θ_DecS) + L_AET(s_Tp,r; θ_g, θ_DecT) + L_sim(s_Sp,r, s_Tp,r; θ_f, θ_g)
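The combined objective above lends itself to a compact implementation. The following is a minimal PyTorch sketch (our own illustration, not code released with the paper); the 3-layer, 60-unit ReLU architecture follows the description in Section 5, while the state and feature dimensions below are hypothetical placeholders.

import torch
import torch.nn as nn

def make_mlp(dim_in, dim_out, hidden=60):
    # 3-layer ReLU networks, matching the architecture described in Section 5.
    return nn.Sequential(nn.Linear(dim_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, dim_out))

dim_s, dim_t, dim_feat = 6, 8, 3          # example dimensions (hypothetical)
f, g = make_mlp(dim_s, dim_feat), make_mlp(dim_t, dim_feat)
dec_s, dec_t = make_mlp(dim_feat, dim_s), make_mlp(dim_feat, dim_t)
params = [p for m in (f, g, dec_s, dec_t) for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

def training_step(s_sp, s_tp):
    """One gradient step on L_AES + L_AET + L_sim for a batch of pairs
    (s_sp, s_tp) drawn from the pairing P (tensors of shape [B, dim])."""
    zs, zt = f(s_sp), g(s_tp)
    l_sim = ((zs - zt) ** 2).sum(dim=1).mean()
    l_ae_s = ((s_sp - dec_s(zs)) ** 2).sum(dim=1).mean()
    l_ae_t = ((s_tp - dec_t(zt)) ** 2).sum(dim=1).mean()
    loss = l_ae_s + l_ae_t + l_sim
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)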
The functions f and g learned using the approach described above establish an invariant space across the two domains. However, because these functions need not be invertible, directly mapping from a state in the source domain to a state in the target domain is not feasible. Instead of attempting direct policy transfer, we match the distributions of optimal trajectories across the domains. Given f and g learned from the network described in Section 4 and the distribution π*_S of optimal trajectories in the source domain, we can incentivize the distribution of trajectories in the target domain to be similar to that of the source domain under the mappings f and g. Ideally, we would like the distributions p(f(s_S,r)) and p(g(s_T,r)) to match as closely as possible. However, it may still be necessary for the target agent to learn some aspects of the skill from scratch, since not all intricacies will transfer in the presence of morphological differences. We therefore use a reinforcement learning algorithm to learn π_T, but with an additional term added to the reward function that provides guidance via f(s_S,r^(t)). This term has the following form:

r_transfer(s_T,r^(t)) = −α ||f(s_S,r^(t); θ_f) − g(s_T,r^(t); θ_g)||^2

where s_S,r^(t) is the agent-specific state along the optimal policy in the source domain at time step t, s_T,r^(t) is the agent-specific state along the current policy that is being learned in the target domain at time step t, and α is a weight on the transfer reward that controls its importance relative to the overall task goal. In essence, this additional reward provides a form of shaping, which gives additional learning guidance in the target domain. In sparse reward environments, where performance is highly dependent on directed exploration, this additional incentive to match trajectory distributions in the embedding space provides strong guidance.

Figure 1: The two embedding functions f and g are trained with a contrastive loss between the domains, along with decoders that optimize autoencoder losses.

In tasks where the pairing P is imperfect, the transfer reward may sometimes interfere with learning when the target domain policy is already very good, though it is usually very helpful in the early stages of learning. We therefore might consider gradually reducing the weight α as learning progresses in the target domain. We use this technique for our second experiment, which learns a policy for a tendon-driven arm.

Figure 2: The 3 and 4 link robots performing the button pressing task, which we use to evaluate the performance of our transfer method. Each task is trained on multiple conditions where the objects start in different locations.

Our experiments aim to evaluate how well common feature space learning can transfer skills between morphologically different agents. The experiments were performed in simulation using the MuJoCo physics simulator (Todorov et al., 2012), in order to explore a variety of different robots and actuation mechanisms. The embedding functions f and g in our experiments are 3 layer neural networks with 60 hidden units each and ReLU non-linearities. They are trained end-to-end with standard backpropagation using the ADAM optimizer (Kingma & Ba, 2015). Videos of our experiment will be available at https://sites.google.com/site/invariantfeaturetransfer/. For details of the reinforcement learning algorithm used, see Section 7.1 in the appendix.
}, {"section_index": "11", "section_name": "5.1 METHODS USED FOR COMPARISON", "section_text": "In the following experiments, we compare our method with several alternatives. The simplest one, referred to as "no transfer", aims to learn the target task from scratch. This method generally cannot succeed in sparse reward environments without a large number of episodes. Table 1 shows that without transfer, the tasks are not learned even with 3-4 times more experience.

We also compare to several linear methods, including random projections, canonical correlation analysis (CCA), and unsupervised manifold alignment (UMA). Random projections of data have been found to provide meaningful dimensionality reduction (Hegde et al., 2008). We assign f and g to be random projections into spaces of the same dimension, and transfer as described in Section 4. CCA (Hotelling, 1936) aims to find a basis for the data in which the source data and target data are maximally correlated. We use the matrices that map from state space to the learned basis as f and g. UMA (Wang & Mahadevan (2009); Ammar et al. (2015b)) uses pairwise distances between states to align the manifolds of the two domains. These methods impose a linearity constraint on f and g, which proves to limit the expressiveness of the embeddings.
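As a concrete illustration of the CCA baseline, the sketch below builds linear f and g with scikit-learn (our own illustration under stated assumptions, not the authors' implementation; the data arrays are placeholders, and the centering follows scikit-learn's convention for its CCA estimator).

import numpy as np
from sklearn.cross_decomposition import CCA

# S, T: time-aligned proxy-task states from the two domains (hypothetical data).
S = np.random.randn(200, 6)
T = np.random.randn(200, 8)

cca = CCA(n_components=3, scale=False)
cca.fit(S, T)

# The fitted linear maps play the role of f and g:
def f(s):
    return (s - S.mean(axis=0)) @ cca.x_rotations_

def g(t):
    return (t - T.mean(axis=0)) @ cca.y_rotations_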
We find that using CCA to learn the embedding allows for transfer between robots, albeit with less performance gain than when f and g are neural networks.

We also compare to kernel-CCA (KCCA), which uses a kernel matrix to perform CCA, allowing the method to use an implied non-linear feature mapping of the data. We test several different kernels, including polynomial (quad), radial basis (rbf), and linear. These methods perform especially well on transfer between different actuation methods, but which kernel to use for best performance is not consistent between experiments. For example, although the quadratic kernel performs competitively with our method for the tendon experiment, it does not work at all for our button pushing experiment.

The last method we compare with is "direct mapping", which learns to directly predict s_T,r from s_S,r instead of mapping both into a common space. This is representative of a number of prior techniques that attempt to put source and target domains into direct correspondence, such as Taylor et al. (2008). In this method, we use the same pairs as we do for our method, estimated from prior experience, but try to map directly from the source domain to the target domain. In order to guide learning using this method, we pass optimal source trajectories through the learned mapping, and then penalize the target robot for deviating from these predicted trajectories. As seen in Figures 5 and 8, this method does not succeed, probably because mapping from one state space to another is more difficult than mapping both state spaces into similar embeddings. The key difference between this method and ours is that we map both domains into a common space, which allows us to put only the common parts of the state spaces in correspondence instead of trying to map between entire states across domains.

We have also included a comparison between using time-based alignment across domains and using the more elaborate EM-style procedure described in Section 3.3.2.

5.2 TRANSFER BETWEEN ROBOTS WITH DIFFERENT NUMBERS OF LINKS

Figure 3: The 4-link robot pushing the button.
Note that the reward function only tells the agent how far the button has been depressed, and provides no information to indicate that the arm should reach for the button.

Figure 4: The 3 and 4 link robots performing each of the three proxy tasks we consider: target reaching, peg insertion, and block moving. Our results indicate that using all three proxy tasks to learn the common feature space improves performance over any single proxy task.

In our first experiment, we evaluate our method on transferring information from a 3-link robot to a 4-link robot. These robots have similar size but different numbers of links and actuators, making the representation needed for transfer non-trivial to learn. In order to evaluate the effectiveness of our method, we consider tasks with sparse or delayed rewards, which are difficult to learn quickly without the use of prior knowledge, large amounts of experience, or a detailed shaping function to guide exploration. For transfer between the 3 link and 4 link robots, we evaluate our method on a button pressing task as shown in Figures 2 and 3. The goal of this task is to reach through a narrow opening and press the white button to the red goal marker indicated in the figure. The caveat is that the reward signal tells the arms nothing about where the button is, but only penalizes distance between the white button and the red goal. Prior work has generally used well-shaped reward functions for tasks of this type, with terms that reward the arm for approaching the object of interest (Lillicrap et al., 2015; Devin et al., 2016). Without the presence of a directed reward shaping guiding the arm towards the button, it is very difficult for the task to be performed at all in the target domain, as seen from the performance of learning from scratch with no transfer ("baseline") in the target domain in Figure 5. This is indicative of how such a task might be learned in the real world, where it is hard to provide anything but very sparse feedback by using a sensor on the button.

For this experiment, we compare the quality of transfer when using different proxy tasks: reaching a target, moving a white block to the red goal, and inserting a peg into a slot near the robot, as shown in Figure 4. These tasks are significantly easier than the sparse reward button pressing task. Collecting successful trajectories from the proxy task, we train the functions f and g as described in Section 4. Note that the state in both robots is just the joint angles and joint velocities. Learning a suitable common feature space therefore requires the networks to understand how to map from joint angles to end-effectors for both robots.

We consider the 3-link robot pressing the button as the source domain and the 4-link robot pressing the button as the target domain. We allow the domain with the 3-link robot to have a well shaped cost function which has 2 terms: one for bringing the arm close to the button, and one for the distance of the button from the red goal position. The performance of our method is shown in Figure 5. The agent trained with our method performs more directed exploration and achieves an almost perfect success rate in 7 iterations. The CCA method requires about 4 times more experience to reach 60% success than our method, indicating the benefit of using deep function approximators for the functions f and g, which allow for a more expressive mapping than CCA. Even with kernel CCA, the task is not able to be performed as well as with our method.
Additionally, the UMA and random projections baselines perform much worse than our method. We also find that using the EM style alignment procedure described in Section 3.3.2 allows us to reach perfect performance, as shown in Figure 5. Investigating this method further will be the subject of future work.

Learning a direct mapping between states in both domains only provides limited transfer because this approach is forced to learn a mapping directly from one state space to the other, even though there is often no complete correspondence between two morphologically different robots. For example, there may be some parts of the state which can be put in correspondence, but others which cannot. Our method of learning a common space between robots allows the embedding functions to only retain transferable information.

Figure 5: Performance of 4-link arm on the sparse reward button pressing task described in Section 5.2. On the left and middle, we compare our method with the methods described in Section 5.1. On the right, the "peg," "push," and "reach" proxy ablations indicate the performance when using embedding functions learned from those proxy tasks. The embedding improves significantly when learned from all three proxy tasks, indicating that our method benefits from additional prior experience.

Table 1: Maximum success rate of the "no transfer" method over 75 iterations of training, shown for the 3 tasks considered in Sections 5.2, 5.3, and 5.4. Because the target environments suffer from sparse rewards, this method is unable to learn the tasks with a tractable amount of data.

5.3 TRANSFER BETWEEN TORQUE-DRIVEN AND TENDON-DRIVEN ARMS

In order to illustrate the ability of our method to transfer across vastly different actuation mechanisms and learn representations that are hard to specify by hand, we consider transfer between a torque driven arm and a tendon driven arm, both with 3 links. These arms are pictured in Figure 6. The torque driven arm has motors at each of its joints that directly control its motion, and the state includes joint angles and joint velocities. The tendon driven arm, illustrated in Figure 6, uses three tendons to actuate the joints. The first tendon spans both the shoulder and the elbow, while the second and third control the elbow and wrist individually. The last tendon has a variable-length lever arm, while the first two have fixed-length lever arms, corresponding to tendons that conform to the arm as it bends. This coupled system uses tendon lengths and tendon velocities as the state representation, without direct access to joint angles or end-effector positions.

The state representations of the two robots are dramatically different, both in terms of units, dimensionality, and semantics. Therefore, learning a suitable common feature space represents a considerable challenge. In our evaluation, the torque driven arm is the source robot, and the tendon driven arm is the target robot.
The task we require both robots to perform is a block pulling task indicated in Figure 7. This involves pulling a block in the direction indicated, which is non-trivial because it requires moving the arm under and around the block, which is restricted to only move in the directions indicated in Figure 6. With random exploration, the target robot is unable to perform directed exploration to get the arm to actually pull the block in the desired direction, as shown in Figure 8.

We use one proxy task in the experiment, which involves both arms reaching to various locations. With embedding functions f and g trained on optimal trajectories from the proxy task, we see that the transfer reward from our method enables the task to actually be performed with a tendon driven arm. The baseline of learning from scratch, which again corresponds to attempting to learn the task with the target tendon-driven arm from scratch, fails completely. The other methods of using CCA and learning a direct mapping are able to achieve better performance than learning from scratch, but learn slower. Kernel CCA with the quadratic kernel does competitively with our method, but in turn performed very poorly on the button task, so it is not very consistent. Additionally, the random projection and UMA baselines perform quite poorly. The performance of the EM style alignment procedure is very similar to the standard time based alignment, as seen in Figure 8, likely because the data is already quite time aligned across the domains. These results indicate that learning the common feature subspace can enable substantially accelerated learning in the target domain, and in fact can allow the target agent to learn a task that it fails to learn without any transfer rewards, and perform better than alternative methods.

Figure 7: The tendon-driven robot pulling the block. Note that the reward function only tells the agent how far the block is from the red goal and provides no information to indicate that the arm should reach around the block in order to pull it. The block is restricted to move only towards the red goal, but the agent needs to move under and around the block to pull it.
}, {"section_index": "12", "section_name": "5.4 TRANSFER THROUGH IMAGE FEATURES", "section_text": "A compelling use-case for learned common embeddings is in learning vision-based policies. In this experimental setup, we evaluate our method on learning embeddings from raw pixels instead of from robot state. Enabling transfer from extra high dimensional inputs like images would allow significantly more natural transfer across a variety of robots, without restrictive assumptions about full state information.

We evaluate our method on transfer across a 3-link and a 4-link robot as in Section 5.2, but use images instead of state. Because images from the source and target domains are the same size and the same "type", we let g = f. We parametrize f as 3 convolutional layers with 5x5 filters and no pooling. A spatial softmax (Levine et al., 2016) is applied to the output of the third layer such that f outputs normalized pixel indices of feature points on the image. These "feature points" form the latent representation that we compare across domains. Intuitively, the common "feature point" embeddings should represent parts of the robots which are common across different robots.
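For concreteness, the following is a minimal PyTorch sketch of the spatial softmax operation described above (our own illustration of the operator from Levine et al. (2016), not the authors' code); the [-1, 1] coordinate normalization is a common convention and an assumption here.

import torch
import torch.nn.functional as F

def spatial_softmax(features):
    """Expected pixel coordinates ("feature points") per channel.

    features: conv activations of shape [B, C, H, W]. Returns [B, C, 2],
    the softmax-weighted expected (x, y) location of each channel's
    activation, with coordinates normalized to [-1, 1]."""
    b, c, h, w = features.shape
    probs = F.softmax(features.reshape(b, c, h * w), dim=-1).reshape(b, c, h, w)
    ys = torch.linspace(-1.0, 1.0, h).reshape(1, 1, h, 1)
    xs = torch.linspace(-1.0, 1.0, w).reshape(1, 1, 1, w)
    expected_y = (probs * ys).sum(dim=(2, 3))
    expected_x = (probs * xs).sum(dim=(2, 3))
    return torch.stack([expected_x, expected_y], dim=-1)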
Embeddings between the domains are built using a proxy task of reaching to a point, similar to the one described in the previous experiments. The test task in this case is to push a white block to a red target, as shown in Figure 9a, which suffers from sparse rewards because the reward only accounts for the distance of the block from the goal. Unless the robot knows that it has to touch the block, it receives no reward and has unguided exploration. As shown in Figure 9b, our method is able to transfer meaningful information from source to target robot directly from raw images and successfully perform the task even in the presence of sparse rewards.

Figure 6: The top images show the source and target domain robots: the robot on the left is torque driven at the joints and the one on the right is tendon driven. The tendons are highlighted in the image; the green tendon has a variable-length lever arm, while the yellow tendons have fixed-length lever arms. Note that the first tendon couples two joints. The bottom images show two variations of the test task.

Figure 8: Performance of tendon-controlled arm on block pulling task. While the environment's reward is too sparse to succeed in a reasonable time without transfer, using our method to match feature space state distributions enables faster learning. Using a linear embedding or mapping directly from source states to target states allows for some transfer. Optimizing over P instead of assuming time-based alignment does not hurt performance. KCCA with quadratic kernel performs very well in this experiment, but not in experiment 1.

(a) The 3-link robot demonstrating the task. The yellow triangles mark the locations of the feature points output by f applied to the image pixels. We then use the feature points to transfer the skill to the 4-link robot.

(b) Performance of 4-link robot on block pushing task for transfer using raw images. We transfer from the 3-link robot by learning a feature space from raw pixels of both domains, enabling effective faster learning. Random projections and linear kernel-CCA have some success in transfer. The baseline is unable to succeed because the reward signal is too sparse without transfer.

We presented a method for transferring skills between morphologically different agents using invariant feature spaces. The formulation of our transfer problem corresponds to a setting where two agents (e.g. two different robots) have each learned a collection of skills, with some skills known to just one of the agents, and some shared by both. A shared skill can be used to learn a space that implicitly brings the agents into correspondence, without assuming that an explicit state space isomorphism can be constructed. By then mapping into this space a skill that is known to only one of the agents, the other agent can substantially accelerate its learning of this skill by transferring the shared structure. We present an algorithm for learning the shared feature spaces using a shared proxy task, and experimentally illustrate that we can use this method to transfer manipulation skills between different simulated robotic arms.
Our experiments include transfer between arms with different numbers of links, as well as transfer from a torque-driven arm to a tendon-driven arm.

A promising direction for future work is to explicitly handle situations where the two (or more) agents must transfer new skills by using a large collection of prior behaviors, with different degrees of similarity between the agents. In this case, constructing a shared feature space involves not only mapping the skills into a single space, but deciding which skills should or should not be combined. For example, a wheeled robot might share manipulation strategies with a legged robot, but should not attempt to share locomotion behaviors.

In a large-scale lifelong learning domain with many agents and many skills, we could also consider using our approach to gradually construct more and more detailed common feature spaces by transferring a skill from one agent to another, using that new skill to build a better common feature space, and then using this improved feature space to transfer more skills. Automatically choosing which skills to transfer, and when, in order to minimize the training time of an entire skill repertoire is an interesting and exciting direction for future work.
}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Haitham Bou Ammar and Matthew E. Taylor. Reinforcement learning transfer via common subspaces. In Adaptive and Learning Agents: International Workshop, 2012.

Haitham Bou Ammar, Eric Eaton, Paul Ruvolo, and Matthew Taylor. Unsupervised cross-domain transfer in policy gradient reinforcement learning via manifold alignment. In AAAI Conference on Artificial Intelligence, 2015a.

Haitham Bou Ammar, Eric Eaton, Paul Ruvolo, and Matthew E. Taylor. Unsupervised cross-domain transfer in policy gradient reinforcement learning via manifold alignment. In Proc. of AAAI, 2015b.

Alexander Braylan, Mark Hollenbeck, Elliot Meyerson, and Risto Miikkulainen. Reuse of neural modules for general video game playing. CoRR, abs/1512.01537, 2015.

Rich Caruana. Multitask learning. Machine Learning, 1997.

Shreyansh Daftry, J. Andrew Bagnell, and Martial Hebert. Learning transferable policies for monocular reactive MAV control. In International Symposium on Experimental Robotics (ISER), 2016.

Harold Hotelling. Relations between two sets of variates. Biometrika, 28, 1936.

George Konidaris and Andrew Barto. Autonomous shaping: knowledge transfer in reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning (ICML), 2006.

Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, 2014.

Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17:1-40, 2016.

Weiwei Li and Emanuel Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), 2004.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.

Andrew Meltzoff. Born to learn: What infants learn from watching us. Skillman, NJ: Pediatric Institute Publication, 1999.

Meinard Muller. Dynamic time warping. Information Retrieval for Music and Motion, pp. 69-84, 2007.

Kaizad V. Raimalwala, Bruce A. Francis, and Angela P. Schoellig. A preliminary study of transfer learning between unicycle robots. In 2016 AAAI Spring Symposium Series, 2016.

Giacomo Rizzolatti and Laila Craighero. The mirror neuron system.
Annual Review of Neuroscience, 27:169-192, 2004.

Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. CoRR, abs/1606.04671, 2016a.

Andrei A. Rusu, Matej Vecerik, Thomas Rothorl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. Sim-to-real robot learning from pixels with progressive nets. arXiv preprint arXiv:1610.04286, 2016b.

Matthew E. Taylor, Nicholas K. Jong, and Peter Stone. Transferring instances for model-based reinforcement learning. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2008.

Matthew Taylor, Peter Stone, and Yaxin Liu. Transfer learning via inter-task mappings for temporal difference learning. Journal of Machine Learning Research, 8(1):2125-2167, 2007.

Chang Wang and Sridhar Mahadevan. Manifold alignment without correspondence. In IJCAI, volume 2, pp. 3, 2009.
}, {"section_index": "14", "section_name": "7.1 REINFORCEMENT LEARNING WITH LOCAL MODELS", "section_text": "Although we can use any suitable reinforcement learning algorithm for learning policies, in this work we use a simple trajectory-centric reinforcement learning method that trains time-varying linear-Gaussian policies (Levine & Abbeel, 2014). While this method produces simple policies, it is very efficient, making it well suited for robotic learning. To obtain robot trajectories for training tasks and source robots, we optimize time-varying linear-Gaussian policies through a trajectory-centric reinforcement learning algorithm that alternates between fitting local time-varying linear dynamics models and updating the time-varying linear-Gaussian policies using the iterative linear quadratic Gaussian regulator algorithm (iLQG) (Li & Todorov, 2004). This approach is simple and efficient, and is typically able to learn complex high-dimensional skills using just tens of trials, making it well suited for rapid transfer. The resulting time-varying linear-Gaussian policies are parametrized as p(u_t | x_t) = N(K_t x_t + k_t, C_t), where K_t, k_t, and C_t are learned parameters. Further details of this method are presented in prior work (Levine & Abbeel, 2014).

We use the same reinforcement learning algorithm to provide solutions in the source domain D_S, though again any suitable reinforcement learning method (or even human demonstrations) could be used instead. To evaluate the ability of our method to provide detailed guidance through the transfer reward r_transfer, we use relatively sparse reward functions in the target domain D_T, as discussed below. To generate the original skills in the source domain D_S and in the proxy domains D_Sp and D_Tp, we manually designed the appropriate shaped costs to enable learning from scratch to succeed, though we note again that our method is agnostic to how the source domain and proxy domain skills are acquired."}]
Bk3F5Y9lx
[{"section_index": "0", "section_name": "EPITOMIC VARIATIONAL AUTOENCODER", "section_text": "Serena Yeung
Stanford University
feifeili}@cs.stanford.edu

Anitha Kannan & Yann Dauphin
{akannan, ynd}@fb.com

*Work done during an internship at Facebook AI Research

In this paper, we propose epitomic variational autoencoder (eVAE), a probabilistic generative model of high dimensional data. eVAE is composed of a number of sparse variational autoencoders called 'epitomes' such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. We show that the proposed model greatly overcomes the common problem in variational autoencoders (VAE) of model over-pruning. We substantiate that eVAE is efficient in using its model capacity and generalizes better than VAE, by presenting qualitative and quantitative results on MNIST and TFD datasets."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Unsupervised learning holds the promise of learning the inherent structure in data so as to enable many future tasks including generation, prediction and visualization. Generative modeling is an approach to unsupervised learning wherein an explicit stochastic generative model of data is defined,
such that independent draws from this model are likely to produce the original data distribution, while the learned latent structure itself is useful in prediction, classification and visualization tasks.

The recently proposed variational autoencoder (VAE) (Kingma & Welling, 2014) is an example of one such generative model. VAE pairs a top down generative model with a bottom up recognition network for amortized probabilistic inference. Both networks are jointly trained to maximize a variational lower bound on the data likelihood. A number of recent works use VAE as a modeling framework, including iterative conditional generation of images (Gregor et al., 2015) and conditional future frame prediction (Xue et al., 2016).

A commonly known problem with the VAE lower bound is that it is known to self-prune or under-utilize the model's capacity (Mackay, 2001). This can lead to poor generalization. A common approach to alleviate this problem is to resort to optimization schedules and regularization techniques (Bowman et al., 2015; Kaae Sonderby et al., 2016) that trade off two competing terms, latent cost and data reconstruction, in the bound. Fig. 1 provides a quick insight into this problem of over-pruning and how commonly used regularization techniques may not be sufficient. A detailed discussion is provided in Sec. 2.1.

Figure 1: Sorted activity level of latent units and corresponding generations on MNIST, for a 50-d VAE with a hidden layer of 500 units. Shown for varying values of the KL weight λ. When λ = 1, only 30 units are active. As λ is decreased, more units are active; however, generation does not improve, since the model uses the capacity to model increasingly well only regions of the posterior manifold near training samples (see reconstructions in Fig. 8).

In this paper, we take a model-based approach to directly address this problem. We present an extension of variational autoencoders called epitomic variational autoencoder (Epitomic VAE, or eVAE, for short) that automatically learns to utilize its model capacity more effectively, leading to better generalization. Consider the task of learning a D-dimensional representation for the examples in a given dataset. The motivation for our model stems from the hypothesis that a single example in the dataset can be sufficiently embedded in a smaller K-dimensional (K < D) subspace of D. However, different data points may need different subspaces, hence the need for D. Sparse coding methods also exploit a similar hypothesis. Epitomic VAE exploits sparsity using an additional categorical latent variable in the encoder-decoder architecture of the VAE. Each value of the variable activates only a contiguous subset of latent stochastic variables to generate an observation. This enables learning multiple shared subspaces such that each subspace specializes, and also increases the use of model capacity (Fig. 4), enabling better representation. The choice of the name Epitomic VAE comes from the fact that multiple miniature models with shared parameters are trained simultaneously.

The rest of the paper is organized as follows. We first describe variational autoencoders and mathematically show the model pruning effect in Sec. 2. We then present our epitomic VAE model in Sec. 3 that overcomes these shortcomings. Experiments showing qualitative and quantitative results are presented in Sec. 4. We finally provide more general context of our work in the related work in Sec. 5, and conclude with discussions.

The generative model (decoder) of a VAE consists of first generating a D-dimensional stochastic variable z drawn from a standard multivariate Gaussian

p(z) = N(z; 0, I)

and then generating the observation x from a Gaussian distribution

p_θ(x|z) = N(x; f1(z), exp(f2(z)))

where f1 and f2 are non-linear deterministic transformations of z modeled using neural networks. Given a dataset X of T i.i.d. samples, the model is learned such that it maximizes the likelihood of the parameters to have generated the data, p(X|θ). This maximization requires marginalizing the unobserved z. However, computing p(z|x) is intractable due to dependencies induced between the z_i when conditioned on x.

Variational autoencoders, as the name suggests, use variational inference to approximate the exact posterior with a surrogate parameterized distribution. However, instead of having separate parameters for the posterior distribution of each observation, VAE amortizes the cost by learning a neural network with parameters φ that outputs the posterior distribution of the form q_φ(z|x) = Π_d q(z_d|x). This results in the lower bound given by

log p_θ(X) ≥ Σ_{t=1}^T E_{q_φ(z|x^(t))}[log p(x^(t)|z)] − KL(q_φ(z|x^(t)) || p(z))

VAE is trained with standard backpropagation using minibatch gradient descent to minimize the negative of the lower bound

C_vae = −Σ_{t=1}^T E_{q_φ(z|x^(t))}[log p(x^(t)|z)] + Σ_{t=1}^T Σ_{i=1}^D KL(q_φ(z_i|x^(t)) || p(z_i))

Of particular interest is the KL term. Since the KL term is the sum of independent contributions from each dimension d of D, it provides undue freedom for the model in how it minimizes this term. In particular, the model needs only to ensure that the overall KL term is minimized on average, and not per component. The easiest way for the model to do this is to have a large number of components that satisfy the KL term effectively, by turning off the units so that the posterior for those units becomes the same as the prior¹. This effect is quite pronounced in the early iterations of training: the model for log p(x|z) is quite impoverished, and hence the easiest way to improve the bound is by turning off the KL terms. However, once the units have become inactive, it is almost impossible for them to resurrect, and hence the full capacity of the model is not utilized.

¹Since log variance is modeled using the neural network, turning it off will lead to a variance of 1.
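The per-dimension decomposition of the KL term is made explicit in the short PyTorch sketch below (our own illustration, not code from the paper). It uses the standard closed-form KL between a diagonal Gaussian posterior and the unit Gaussian prior; the batch and latent sizes are placeholders.

import torch

def vae_kl_per_dimension(mu, logvar):
    """KL(q(z_i|x) || N(0,1)) for each latent dimension separately.

    mu, logvar: outputs of the recognition network, shape [B, D]. The KL
    term in C_vae is the sum of these independent per-unit contributions,
    which is what lets the model zero out ("prune") individual units."""
    return 0.5 * (mu ** 2 + logvar.exp() - logvar - 1.0)

mu = torch.zeros(4, 50)
logvar = torch.zeros(4, 50)
print(vae_kl_per_dimension(mu, logvar).sum(dim=1))  # 0: posterior equals prior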
Figure 2: Only active units contribute to generation, whereas units that have "died" have no effect. Shown for a 50-d VAE with λ = 1.

A quantity that is useful in understanding this effect is the activity level of a unit. Following Burda et al. (2015), we define a unit u to be used, or "active", if A_u = Cov_x(E_{u~q(u|x)}[u]) > 0.02.

A commonly used approach to overcome this problem is to use a trade-off between the two terms using a parameter λ, so that the cost is

C_λ = −E_{q_φ(z|x)}[log p(x|z)] + λ Σ_{i=1}^D KL(q_φ(z_i|x) || p(z_i))

Fig. 1 shows the effect of λ on unit activity and generation, with λ = 1 being the correct objective to optimize. While tuning down λ increases the number of active units, samples generated from the model are still poor. Fig. 2 shows generation using all units, active units only, and dead units only, for λ = 1. The model spends its capacity on ensuring that reconstruction of the training set is optimized (reconstruction visualizations are shown in Sec. 8.1), at the cost of generalization. This has led to more sophisticated schemes, such as using an annealed optimization schedule for λ (Bowman et al., 2015; Kaae Sonderby et al., 2016) or enforcing a minimum KL contribution from subsets of the latent units (Kingma et al., 2016).

In this paper, we present a model-based approach called "epitomic variational autoencoder" to address the problem of over-pruning.
}, {"section_index": "2", "section_name": "3 MODEL", "section_text": "We propose epitomic variational autoencoder (eVAE) to overcome the shortcomings of VAE by enabling more efficient use of model capacity to gain better generalization. We base this on the observation that while we may need a D-dimensional representation to accurately represent every example in a dataset, each individual example can be represented with a smaller K-dimensional subspace. As an example, consider MNIST with its variability in terms of digits, strokes and thickness of ink, to name a few. While the overall D is large, it is likely that only a few K dimensions of D are needed to capture the variability in strokes of some digits (see Fig. 3).
Epitomic VAE can be viewed as a variational autoencoder with latent stochastic dimension D that is composed of a number of sparse variational autoencoders called epitomes, such that each epitome partially shares its encoder-decoder architecture with other epitomes in the composition. In this paper, we assume simple structured sparsity for each epitome: in particular, only K contiguous dimensions of D are active².

The generative process can be described as follows: A D-dimensional stochastic variable z is drawn from a standard multivariate Gaussian p(z) = N(z; 0, I). In tandem, an epitome is implicitly chosen through an epitome selector variable y, which has a uniform prior over possible epitomes. The N-dimensional observation x is then drawn from a Gaussian distribution:

p_θ(x|y,z) = N(x; f1(m_y ⊙ z), exp(f2(m_y ⊙ z)))

m_y enforces the epitome constraint: it is a D-dimensional vector that is zero everywhere except in the active dimensions of the epitome. ⊙ is element-wise multiplication between the two operands. Thus, m_y masks the dimensions of z other than those dictated by the choice of y. Fig. 3 illustrates this for an 8-d z with epitome size K = 2, so that there are four possible epitomes (the model also allows for overlapping epitomes, but this is not shown for illustration purposes). Epitome structure is defined using size K and stride s, where s = 1 corresponds to full overlap in D dimensions³. Our model generalizes the VAE and collapses to a VAE when D = K = s.

Figure 3: Left: Illustration of an epitomic VAE with dimension D=8, epitome size K=2 and stride s=2. In this depiction, the second epitome is active. Right: Learned manifolds on MNIST for 4 different epitomes in a 20-d eVAE with size K = 2 and stride s = 1. We observe that each epitome specializes on a coherent subset of examples.

²The model also allows for incorporating other forms of structured sparsity.

³The strided epitome structure allows for learning O(D) specialized subspaces, that when sampled during generation can each produce good samples. In contrast, if only a simple sparsity prior is introduced over arbitrary subsets (e.g. with Bernoulli latent units to specify if a unit is active for a particular example), it can lead to poor generation results, which we confirmed empirically but do not report. The reason for this is as follows: due to an exponential number of potential combinations of latent units, sampling a subset from the prior during generation cannot be straightforwardly guaranteed to be a good configuration for a subconcept in the data, and often leads to uninterpretable samples.

f1(·) and f2(·) define non-linear deterministic transformations of their input, modeled using neural networks. Note that the model does not snip off the K dimensions corresponding to an epitome, but instead deactivates the D−K dimensions that are not part of the chosen epitome. While the same deterministic functions f1 and f2 are used for any choice of epitome, the functions can still specialize due to the sparsity of their inputs: neighboring epitomes overlap more than distant ones, which manifests itself in the representation space; an intrinsic ordering in the variability is learned.
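The strided mask structure and the generative process above are straightforward to write down. The following PyTorch sketch is our own illustration (not code from the paper); the identity stand-ins for f1 and f2 in the usage example are placeholders for the trained decoder networks.

import torch

def epitome_masks(D, K, s):
    """Binary masks m_y for all contiguous epitomes of size K and stride s."""
    starts = list(range(0, D - K + 1, s))
    masks = torch.zeros(len(starts), D)
    for i, start in enumerate(starts):
        masks[i, start:start + K] = 1.0
    return masks

def generate(f1, f2, masks, n_samples):
    """Sample x ~ p(x|y,z): draw z ~ N(0,I) and a uniform epitome y, then
    decode the masked code m_y * z into a Gaussian mean and log-variance."""
    D = masks.shape[1]
    z = torch.randn(n_samples, D)
    y = torch.randint(masks.shape[0], (n_samples,))
    h = masks[y] * z                      # deactivate dimensions outside y
    mean, logvar = f1(h), f2(h)
    return mean + torch.randn_like(mean) * (0.5 * logvar).exp()

masks = epitome_masks(D=8, K=2, s=2)      # four epitomes, as in Fig. 3
f1 = f2 = lambda h: h                     # identity stand-ins (so N = D here)
x = generate(f1, f2, masks, n_samples=5)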
}, {"section_index": "3", "section_name": "3.1 OVERCOMING OVER-PRUNING", "section_text": "Following Kingma & Welling (2014), we use a recognition network q(z,y|x) for approximate posterior inference, with the functional form

q(z,y|x) = q(y|x) q(z|y,x) = q(y|x) N(z; m_y ⊙ µ, exp(m_y ⊙ σ))

where µ = h1(x) and σ = h2(x) are neural networks that map x to D-dimensional space.

We use a similar masking operation to deactivate units, as decided by the epitome y. Unlike the generative model (eq. 7), the masking operation defined by y operates directly on the outputs of the recognition network that characterize the parameters of q(z|y,x).

As in VAE, we can derive the lower bound on the log probability of a dataset, and hence the cost function (negative bound) is

C_evae = −Σ_{t=1}^T E_{q_φ(z,y|x^(t))}[log p(x^(t)|y,z)] + Σ_{t=1}^T KL(q_φ(y|x^(t)) || p_θ(y)) + Σ_{t=1}^T Σ_y q_φ(y|x^(t)) KL(q_φ(z|y,x^(t)) || p_θ(z))

The epitomic VAE departs from the VAE in how the contribution from the KL term is constrained. Let us consider the third term in eq. 10, substituting in eq. 9:

Σ_{t=1}^T Σ_y q_φ(y|x^(t)) KL(q_φ(z|y,x^(t)) || p_θ(z))
= Σ_{t=1}^T Σ_y q_φ(y|x^(t)) KL(N(z; m_y ⊙ µ^(t), exp(m_y ⊙ σ^(t))) || N(z; 0, I))
= Σ_{t=1}^T Σ_y q_φ(y|x^(t)) Σ_{d=1}^D 1[m_{d,y} = 1] KL(N(µ_d^(t), exp(σ_d^(t))) || N(0, 1))

where 1[·] is an indicator variable that evaluates to 1 if and only if its operand is true.

For a training example x^(t) and for a fixed y (and hence the corresponding epitome), the number of KL terms that will contribute to the bound is exactly K. The dimensions of z that are not part of the corresponding epitome will have zero KL, because their posterior parameters are masked to be the unit Gaussian, the same as the prior.

This is quite in contrast to how VAE optimizes C_vae (Sec. 2.1). For C_vae to have a small contribution from the KL term of a particular z_d, it has to infer that unit to have zero mean and unit variance for many examples in the training set. In practice, this results in VAE completely deactivating units, leading to many dead units. Epitomic VAE chooses the epitome based on x^(t) and ensures that the dimensions that are not useful in explaining x^(t) are ignored in C_evae. This means that the unit is still active, but by design, only a fraction of examples in the training set contributes a possible non-zero value to z_d's KL term in C_evae. This added flexibility gives the model the freedom to use more total units without deactivating them, while optimizing the bound. With these characteristics, during training, the data points will naturally group themselves to different epitomes, leading to a more balanced use of z.

In Fig. 4 we compare the activity levels of VAE, dropout VAE and our model. We see that, compared with VAE, our model is able to better use the model capacity. In the same figure, we also compare with adding dropout to the latent variable z of the VAE (Dropout VAE). While this increases the number of active units, it generalizes poorly, as it uses the dropout layers to merely replicate the representation, in contrast to eVAE. See Fig. 5, along with the explanation in Sec. 4.1, where we compare generation results for all three models.

Figure 4: Adding dropout to a VAE (here, dropout rate 0.5 is shown) can prevent the model from pruning units, shown for MNIST. However, in contrast to eVAE, it uses the additional units to encode redundancy, not additional information, and therefore does not address the problem. Generation results are shown in Fig. 5.
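The masked KL term above reduces to a small matrix product, as the following PyTorch sketch shows (our own illustration, not code from the paper); the tensor shapes are stated in the docstring and the one-hot form of q(y|x) reflects the point estimate used during training (Sec. 3.2).

import torch

def evae_kl(mu, logvar, masks, q_y):
    """Third term of C_evae: sum over y of q(y|x) KL(q(z|y,x) || p(z)).

    mu, logvar: [B, D] recognition-network outputs; masks: [Y, D] epitome
    masks m_y; q_y: [B, Y] posterior over epitomes (one-hot for the point
    estimate y*). Only the K dimensions inside an epitome contribute to
    its KL, so unused dimensions incur no penalty."""
    kl_d = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1.0)   # per-unit KL, [B, D]
    kl_y = kl_d @ masks.t()                                # masked sums, [B, Y]
    return (q_y * kl_y).sum(dim=1).mean()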
}, {"section_index": "4", "section_name": "3.2 TRAINING", "section_text": "The generative model and the recognition network are trained simultaneously, by minimizing C_evae in eq. 10.

For the stochastic continuous variable z, we use the reparameterization trick as in VAE. The trick involves reparametrizing the recognition distribution in terms of auxiliary variables with fixed distributions. This allows efficient sampling from the posterior distribution, as samples are deterministic functions of the inputs and auxiliary variables.

For the discrete variable y, we cannot use the reparameterization trick. We therefore approximate q(y|x) by a point estimate y*, so that q(y|x) = δ(y = y*), where δ evaluates to 1 only if y = y*, and the best y* = arg min C_evae. We also explored modeling q(y|x) = Cat(h(x)) as a discrete distribution, with h being a neural network. In this case, the backward pass requires either using REINFORCE or passing through gradients for the categorical sampler. In our experiments, we found that these approaches did not work well, especially when the number of possible values of y becomes large. We leave this as future work to explore.

Algorithm 1 Learning Epitomic VAE
1: Initialize parameters θ, φ
2: for until convergence of parameters (θ, φ) do
3:   Assign each x to its best y* = arg min C_evae
4:   Randomize and then partition data into minibatches, with each minibatch having a proportionate number of examples for each y
5:   for k in numbatches do
6:     Update model parameters using the kth minibatch, consisting of (x, y) pairs
7:   end for
8: end for

The recognition network first computes µ and σ. It is then combined with the optimal y* for each example to arrive at the final posterior. The model is trained using the simple algorithm outlined in Algo. 1. Backpropagation with minibatch updates is used, with each minibatch constructed to be balanced with respect to epitome assignment.
}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "We present experimental results on two datasets, MNIST (LeCun et al., 1998) and the Toronto Faces Database (TFD) (Susskind et al., 2010). We show generation results that illustrate eVAE's ability to better utilize model capacity for modeling data variability, and then evaluate the effect of epitome choice and model complexity. Finally, we present quantitative comparison with other models and qualitative samples from eVAE. We emphasize that in all experiments, we keep the weight of the KL term λ = 1 to evaluate performance under optimizing the true derived lower bound, without introducing an additional hyperparameter to tune.

We use standard splits for both MNIST and TFD. In our experiments, the encoder and decoder are fully connected networks, and we show results for different depths and numbers of units per layer. ReLU non-linearities are used, and models are trained using the Adam update rule (Kingma & Ba, 2014) for 200 epochs (MNIST) and 250 epochs (TFD), with base learning rate 0.001.
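A schematic sketch of one pass of Algorithm 1 is given below (our own illustration; `model.loss(x, y)` is a hypothetical placeholder returning the per-example bound C_evae under epitome assignment y). Note one simplification relative to the algorithm: uniform shuffling only balances epitome assignments across minibatches in expectation, rather than partitioning them exactly proportionately.

import torch

def train_epoch(model, data, num_epitomes, batch_size, opt):
    """One epoch of Algorithm 1 (schematic)."""
    # Step 1: assign each example to its best epitome y* = argmin_y C_evae.
    with torch.no_grad():
        per_y = torch.stack([
            model.loss(data, torch.full((len(data),), y, dtype=torch.long))
            for y in range(num_epitomes)])                 # [Y, N]
        assign = per_y.argmin(dim=0)                       # y* per example, [N]
    # Step 2: visit shuffled minibatches of (x, y*) pairs.
    perm = torch.randperm(len(data))
    for i in range(0, len(data), batch_size):
        idx = perm[i:i + batch_size]
        loss = model.loss(data[idx], assign[idx]).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()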
"},

{"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "We present experimental results on two datasets, MNIST (LeCun et al., 1998) and the Toronto Faces Database (TFD) (Susskind et al., 2010). We show generation results that illustrate eVAE's ability to better utilize model capacity for modeling data variability, and then evaluate the effect of epitome choice and model complexity. Finally, we present a quantitative comparison with other models and qualitative samples from eVAE. We emphasize that in all experiments we keep the weight of the KL term λ = 1, to evaluate performance under optimization of the true derived lower bound, without introducing an additional hyperparameter to tune.

We use standard splits for both MNIST and TFD. In our experiments, the encoder and decoder are fully connected networks, and we show results for different depths and numbers of units per layer. ReLU nonlinearities are used, and models are trained using the Adam update rule (Kingma & Ba, 2014) for 200 epochs (MNIST) and 250 epochs (TFD), with base learning rate 0.001."},

[Figure 5: grids of digit samples generated by VAE (top row), Dropout VAE (middle row), and eVAE (bottom row), for 2-d, 5-d, 10-d, and 20-d latent dimensions (left to right).]

Figure 5: Generations from VAE, Dropout VAE, and eVAE models for different dimensions of latent variable z. Across each row are 2-d, 5-d, 10-d, and 20-d models. VAE generation quality (1st row) degrades as latent dimension increases, and it is unable to effectively use added capacity to model greater variability. Adding dropout to the VAE (2nd row) fails to solve the problem, since additional units are used to encode redundancy, not additional information. eVAE (3rd row) overcomes the problem by modeling multiple shared subspaces; here 2-d (overlapping) epitomes are maintained as the latent dimension is increased. Learned epitome manifolds from the 20-d model are shown in Fig. 3. Boxed digits highlight the difference in variability that the VAE vs. eVAE model is able to achieve.

{"section_index": "7", "section_name": "4.1 OVERCOMING OVER-PRUNING", "section_text": "We first qualitatively illustrate the ability of eVAE to overcome over-pruning and utilize latent capacity to model greater variability in data. Fig. 5 compares generation results for VAE, Dropout VAE, and eVAE for different dimensions D of the latent variable z. With D = 2, VAE generates realistic digits but suffers from lack of diversity. When D is increased to 5, the generation exhibits somewhat greater variability but also begins to degrade in quality. As D is further increased to 10 and 20, the degradation continues. As explained in Sec. 2.1, this is due to VAE's propensity to use only a portion of its latent units for modeling the training data and the rest to minimize the KL term. The under-utilization of model capacity means that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model the space of possible generations. The effect of this is good reconstruction (examples are shown in Fig. 9) but poor generation samples.

Adding dropout to the latent variable z of the VAE (row 2 of Fig. 5) encourages increased usage of model capacity, as shown in Fig. 4 and the discussion in Sec. 2. However, due to the stochastic nature of dropout, the model is forced to use the additional capacity to encode redundancy in the representation. It therefore does not achieve the desired effect of encoding additional data variability, and furthermore leads to blurred samples due to the redundant encoding. Epitomic VAE addresses the crux of the problem by learning multiple specialized subspaces. Since the effective dimension of any example is still small, eVAE is able to model each subspace well, while encoding variability through multiple, possibly shared, subspaces. This enables the model to overcome the over-pruning from which VAE suffered. Fig. 5 shows that as the dimension D of z is increased while maintaining epitomes of size K = 2, eVAE is able to model greater variability in the data.
Highlighted digits in the 20-d eVAE show multiple styles such as crossed versus un-crossed 7s, and pointed, round, thick, and thin 4s. Additional visualization of the variability in the learned 2-d manifolds is shown in Fig. 3. In contrast, the 2-d VAE generates similar-looking digits, and is unable to increase variability and maintain sample quality as the latent dimension is increased."},

{"section_index": "8", "section_name": "4.2 CHOICE OF EPITOME SIZE", "section_text": "We next investigate how the choice of epitome size, K, affects generation performance. We evaluate the generative models quantitatively through their samples, by measuring the log-density with a Parzen window estimator (Rifai et al., 2012). Fig. 6 shows the Parzen log-density for different choices of epitome size on MNIST, with encoder and decoder consisting of a single deterministic layer of 500 units. Epitomes are non-overlapping, and the results are grouped by total dimension D of the latent variable z. For comparison, we also show the log-density for VAE models with the same dimension D, and for mixture VAE (mVAE), an ablative version of eVAE where parameters are not shared. mVAE can also be seen as a mixture of independent VAEs trained in the same manner as eVAE. The number of deterministic units in each mVAE component is computed so that the total number of parameters is comparable to eVAE.

As we increase D, the performance of VAE drops significantly, due to over-pruning. In fact, the numbers of active units for VAE are 8, 22 and 24, for D values of 8, 24 and 48 respectively. In contrast, eVAE performance increases as we increase D, with an epitome size K that is significantly smaller than D. Table 1 provides more comparisons. eVAE also performs comparably or better than mVAE at all epitome sizes; intuitively, the advantage of parameter sharing in eVAE is that each epitome can also benefit from general features learned across the training set. This confirms the advantage of using eVAE to avoid over-pruning and effectively capture the data distribution.

[Figure 6 plot: Parzen log-density (roughly 200 to 300 nats) versus epitome size (2, 3, 4, 8), in three panels for D = 8, D = 24, and D = 48, with bars for VAE, mVAE, and eVAE.]

Figure 6: Epitome size vs. Parzen log-density (nats) on MNIST, grouped by different dimensions D of latent variable z. VAE performance for equivalent D is shown for comparison, as well as mVAE (ablative version of eVAE without parameter sharing). For each D, the optimal epitome size is significantly smaller than D.
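The Parzen-window evaluation used throughout this section can be reproduced with a short NumPy/SciPy sketch (ours; the kernel width sigma is an input that, in practice, is tuned on a validation set): an isotropic Gaussian kernel is centred on each generated sample, and the mean log-density of held-out points is reported.

```python
import numpy as np
from scipy.special import logsumexp

def parzen_log_density(test_x, samples, sigma):
    """Mean Parzen-window log-density of test_x (n, d) under Gaussian
    kernels of width sigma centred on generated samples (m, d)."""
    n, d = test_x.shape
    m = samples.shape[0]
    # (n, m) matrix of squared distances; fine for a sketch, chunk for large n*m
    sq = ((test_x[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
    log_kernel = -sq / (2.0 * sigma ** 2)
    log_norm = np.log(m) + 0.5 * d * np.log(2.0 * np.pi * sigma ** 2)
    return float(np.mean(logsumexp(log_kernel, axis=1) - log_norm))
```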
"},

{"section_index": "9", "section_name": "4.3 INCREASING COMPLEXITY OF ENCODER AND DECODER", "section_text": "Here we would like to understand the role of the encoder and decoder architectures in over-pruning and in generative performance. We control model complexity through the number of layers L of deterministic hidden units, and the number of hidden units H in each deterministic layer.

Table 1 shows the Parzen log-densities of VAE, mVAE and eVAE models trained on MNIST and TFD with different latent dimensions D. For mVAE and eVAE models on MNIST, the maximum over epitomes of size K = 3 and K = 4 is used, and on TFD epitomes of size K = 5 are used. All epitomes are non-overlapping.

We observe that for VAE, increasing the number of hidden units H (e.g. from 500 to 1000) for a fixed network depth L has a negligible effect on the number of active units and on performance. On the other hand, as the depth L of the encoder and decoder is increased, the number of active units in VAE decreases, though performance is still able to improve. This illustrates that the increased complexity of interactions through the use of multiple layers counteracts the perils of over-pruning. However, this comes at the cost of a substantial increase in the number of model parameters to be learned.

In contrast, for any given model configuration, eVAE is able to avoid the over-pruning effect in the number of active units and to outperform VAE. While both VAE and eVAE approach what appears to be a ceiling in generative performance with large models for MNIST, the difference between VAE and eVAE is significant for all TFD models.

Table 1 also shows results for mVAE, the ablative version of eVAE where parameters are not shared. The number of deterministic units per layer in each mVAE component is computed so that the total number of parameters is comparable to eVAE. While mVAE and eVAE perform comparably on MNIST, especially with larger models (reaching a limit in performance that VAE also nears), eVAE demonstrates an advantage on smaller models and when the data is more complex (TFD). These settings are in line with the intuition that parameter sharing is helpful in more challenging settings, when each epitome can also benefit from general features learned across the training set."},

                          H = 500                        H = 1000
                  L = 1    L = 2     L = 3      L = 1     L = 2     L = 3
MNIST
D = 8    VAE      283(8)   292(8)    325(8)     283(8)    290(8)    322(6)
         mVAE     300(8)   328(8)    337(8)     309(8)    333(8)    335(8)
         eVAE     300(8)   330(8)    337(8)     312(8)    331(8)    334(8)
D = 24   VAE      213(22)  273(11)   305(8)     219(24)   270(12)   311(7)
         mVAE     309(24)  330(24)   336(24)    313(24)   333(24)   338(24)
         eVAE     311(24)  331(24)   336(24)    317(24)   332(24)   336(24)
D = 48   VAE      213(24)  267(13)   308(8)     224(24)   273(12)   309(8)
         mVAE     314(48)  334(48)   336(48)    315(48)   333(48)   337(48)
         eVAE     319(48)  334(48)   337(48)    321(48)   334(48)   332(48)
TFD
D = 15   VAE      -        2173(15)  2180(15)   -         2149(15)  2116(15)
         mVAE     -        2276(15)  2314(15)   -         2298(15)  2343(15)
         eVAE     -        2298(15)  2353(15)   -         2278(15)  2367(15)
D = 25   VAE      -        2067(25)  2085(25)   -         2037(25)  2101(25)
         mVAE     -        2287(25)  2306(25)   -         2332(25)  2351(25)
         eVAE     -        2309(25)  2371(25)   -         2297(25)  2371(25)
D = 50   VAE      -        1920(50)  2062(29)   -         1886(50)  2066(30)
         mVAE     -        2253(50)  2327(50)   -         2280(50)  2358(50)
         eVAE     -        2314(50)  2359(50)   -         2302(50)  2365(50)

Table 1: Parzen log-densities in nats of VAE, mVAE and eVAE for increasing model parameters, trained on MNIST and TFD with different dimensions D of latent variable z. For mVAE and eVAE models on MNIST, the maximum over epitomes of size K = 3 and K = 4 is used, and on TFD epitomes of size K = 5 are used. All epitomes are non-overlapping. Across each row, performance is shown as the number of encoder and decoder layers L increases for a fixed number of hidden units H per layer, and as H increases. Numbers of active units are indicated in parentheses.

{"section_index": "10", "section_name": "4.4 COMPARISON WITH OTHER MODELS", "section_text": "In Table 2 we compare the generative performance of eVAE with other models, using Parzen log-density. VAE-, mVAE-, and eVAE- refer to models trained using the same architecture as Adversarial Autoencoders, for comparison. Encoders and decoders have L = 2 layers of H = 1000 deterministic units; D = 8 for MNIST, and D = 15 for TFD. VAE, mVAE, and eVAE refer to the best performing models over all architectures from Table 1. For MNIST, the VAE model is (L, H, D) = (3, 500, 8), mVAE is (3, 1000, 24), and eVAE is (3, 500, 48). For TFD, the VAE model is (3, 500, 15), mVAE is (3, 1000, 50), and eVAE is (3, 500, 25).

We observe that eVAE significantly improves over VAE and is competitive with several state-of-the-art models, notably Adversarial Autoencoders. Samples from eVAE on MNIST and TFD are shown in Fig. 7.
[Figure 7: grids of eVAE samples, MNIST digits on the left and TFD faces on the right.]

Figure 7: eVAE samples for MNIST (left) and TFD (right)."},

{"section_index": "11", "section_name": "5 RELATED WORK", "section_text": "A number of applications use variational autoencoders as a building block. In Gregor et al. (2015), a generative model for images is proposed in which the generator of the VAE is an attention-based recurrent model that is conditioned on the canvas drawn so far. Eslami et al. (2016) propose a VAE-based recurrent generative model that describes images as formed by sequentially choosing an object to draw and adding it to a canvas that is updated over time. In Kulkarni et al. (2015), VAEs are used for rendering 3D objects. Conditional variants of the VAE are also used for attribute-specific image generation (Yan et al., 2015) and future frame synthesis (Xue et al., 2016). All these applications suffer from the problem of model over-pruning and hence have adopted strategies that take away the clean mathematical formulation of the VAE. We have discussed these in § 2.1.

Method           MNIST (10K)    TFD (10K)
DBN              138 ± 2        1909 ± 66
Deep CAE         121 ± 1        2110 ± 50
Deep GSN         214 ± 1        1890 ± 29
GAN              225 ± 2        2057 ± 26
GMMN + AE        282 ± 2        2204 ± 20
Adversarial AE   340 ± 2        2252 ± 16
VAE-             290 ± 2        2149 ± 23
mVAE-            333 ± 2        2298 ± 23
eVAE-            331 ± 2        2278 ± 26
VAE              325 ± 2        2180 ± 20
mVAE             338 ± 2        2358 ± 20
eVAE             337 ± 2        2371 ± 20

Table 2: Parzen log-densities in nats on MNIST and TFD. VAE-, mVAE-, and eVAE- refer to models trained using the same architecture as Adversarial Autoencoders, for comparison. VAE, mVAE, and eVAE refer to the best performing models over all architectures from Table 1.

A complementary approach to the problem of model pruning in VAE was proposed in Burda et al. (2015); the idea is to improve the variational bound by using multiple weighted posterior samples. Epitomic VAE provides improved latent capacity even when only a single sample is drawn from the posterior.

Methods to increase the flexibility of posterior inference are proposed in (Salimans et al., 2015; Rezende & Mohamed, 2016; Kingma et al., 2016). In Rezende & Mohamed (2016), the posterior approximation is constructed by transforming a simple initial density into a complex one with a sequence of invertible transformations. In a similar vein, Kingma et al. (2016) augment the flexibility of the posterior through autoregression over projections of stochastic latent variables. However, the problem of over-pruning still persists: for instance, Kingma et al. (2016) enforce a minimum information constraint to ensure that all units are used.

Related is the research in unsupervised sparse overcomplete representations, especially with group sparsity constraints, cf. (Gregor et al., 2011; Jenatton et al., 2011). In the epitomic VAE, we have similar motivations that enable learning better generative models of data."},

{"section_index": "12", "section_name": "6 CONCLUSION", "section_text": "This paper introduces Epitomic VAE, an extension of variational autoencoders, to address the problem of model over-pruning, which has limited the generation capability of VAEs in high-dimensional spaces. Based on the intuition that subconcepts can be modeled with fewer dimensions than the full latent space, epitomic VAE
models the latent space as multiple shared subspaces that have learned specializations. We show how this model addresses the over-pruning problem in a principled manner, and present qualitative and quantitative analysis of how eVAE enables increased utilization of the model capacity to model greater data variability. We believe that modeling the latent space as multiple structured subspaces is a promising direction of work, and allows for increased effective capacity that has the potential to be combined with methods for increasing the flexibility of posterior inference."},

{"section_index": "13", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank the reviewers for constructive comments. Thanks to helpful discussions with Marc'Aurelio Ranzato, Joost van Amersfoort and Ross Girshick. We also borrowed the term 'epitome' from an earlier work of Jojic et al. (2003)."},

{"section_index": "14", "section_name": "REFERENCES", "section_text": "Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. ICLR, 2015.

D. P. Kingma and M. Welling. Auto-encoding variational Bayes. ICLR, 2014.

Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. The MNIST database of handwritten digits. 1998.

S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geoffrey E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. CoRR, abs/1603.08575, 2016.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

D. J. C. MacKay. Local minima, symmetry-breaking, and model pruning in variational free energy minimization. 2001.

Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2016.

Salah Rifai, Yoshua Bengio, Yann Dauphin, and Pascal Vincent. A generative process for sampling contractive auto-encoders. arXiv preprint arXiv:1206.6434, 2012.

Josh M. Susskind, Adam K. Anderson, and Geoffrey E. Hinton. The Toronto Face Database. Department of Computer Science, University of Toronto, Toronto, ON, Canada, Tech. Rep. 3, 2010.

Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2Image: Conditional image generation from visual attributes. CoRR, abs/1512.00570, 2015.

Tianfan Xue, Jiajun Wu, Katherine L. Bouman, and William T. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. arXiv preprint arXiv:1607.02586, 2016."},

[Figure 8: three panels of MNIST digits (originals above, reconstructions below) for KL weights λ = 1.0, λ = 0.5, and λ = 0.2.]

Figure 8: Reconstructions for a 50-d VAE with KL weight λ = 1, 0.5, and 0.2. The top half of each figure shows the original digits, and the bottom half the corresponding reconstructions.
We visualize VAE reconstructions as the KL term weight λ is tuned down to keep latent units active. The top half of each figure shows the original digits, and the bottom half the corresponding reconstructions. While reconstruction performance is good, generation is poor (Fig. 1). This illustrates that VAE learns to model well only regions of the posterior manifold near training samples, instead of generalizing to model well the full posterior manifold.

{"section_index": "15", "section_name": "8.2 EFFECT OF INCREASING LATENT DIMENSION ON RECONSTRUCTION", "section_text": "In § 4.1, Fig. 5 shows the effect of increasing the latent dimension on generation for VAE, Dropout VAE, and eVAE models. Here we show the effect of the same factor on reconstruction quality for these models. The top half of each figure shows the original digits, and the bottom half the corresponding reconstructions. As the dimension of the latent variable z increases from 2-d to 20-d, reconstruction becomes very sharp (the best model), but generation degrades (Fig. 5). Dropout VAE has poorer reconstruction but still blurred generation, while eVAE is able to achieve both good reconstruction and generation.

[Figure 9: grids of original digits and reconstructions for VAE, Dropout VAE, and eVAE, for 2-d, 5-d, 10-d, and 20-d latent dimensions (left to right).]

Figure 9: Reconstructions from VAE, Dropout VAE, and eVAE models for different dimensions of latent variable z. Across each row are 2-d, 5-d, 10-d, and 20-d models. The top half of each figure shows the original digits, and the bottom half the corresponding reconstructions. The eVAE models multiple shared subspaces by maintaining 2-d (overlapping) epitomes as the latent dimension is increased. eVAE is the only model that achieves both good reconstruction and generation.
"},

{"section_index": "16", "section_name": "8.3 EVALUATION METRIC FOR GENERATION", "section_text": "There have been multiple approaches for the evaluation of variational autoencoders, in particular the log-likelihood lower bound and the log-density (using the Parzen window estimator, Rifai et al. (2012)). Here we show that for the generation task, log-density is a more appropriate measure than the log-likelihood lower bound. Models are trained on binarized MNIST, to be consistent with the literature reporting likelihood bounds. The encoder and decoder for all models consist of a single deterministic layer with 500 units.

Table 3 shows the log-likelihood bound and log-density for VAE and eVAE models as the dimension D of the latent variable z is increased. For VAE, as D increases, the likelihood bound improves, but the log-density decreases. Referring to the corresponding generation samples in Fig. 11, we see that sample quality in fact decreases, counter to the likelihood bound but consistent with the log-density. The reported VAE bounds and sample quality also match Figs. 2 and 5 in Kingma & Welling (2014). On the other hand, eVAE log-density first decreases and then improves with larger D. This is also consistent with Fig. 11, where eVAE samples for D = 8 are the most interpretable overall, and D = 48 improves over D = 24 but still has some degenerate or washed-out digits. (Note that these models are consistent with Kingma & Welling (2014) but are not the best-performing models reported in our experiments.) Since our work is motivated by the generation task, we therefore use log-density as the evaluation metric in our experiments.

Intuitively, the reason why VAE improves the likelihood bound while generation quality still decreases can be seen in the breakdown of the bound into the reconstruction and KL terms (Table 3 and Fig. 10). The improvement of the bound is due to a large improvement in reconstruction, but the KL becomes significantly worse. This has a negative effect on generation, since the KL term is closely related to generation. On the other hand, eVAE reconstruction improves to a lesser extent, but the KL is also not as strongly affected, so generation ability remains stronger overall. As a result of this, simply tuning the KL weight in the training objective is insufficient to improve VAE generation, as shown in Fig. 1 in the main paper.

                Rec. term   KLD term   Likelihood bound   Log-density
VAE    D = 8    -89.4       -16.6      -106.0             278
       D = 24   -61.1       -29.3      -90.4              152
       D = 48   -59.1       -30.3      -89.4              151
eVAE   D = 8    -110.1      -9.6       -119.7             298
       D = 24   -84.2       -15.7      -99.9              274
       D = 48   -82.8       -14.2      -97.0              284

Table 3: Likelihood bound and log-density for VAE and eVAE as the dimension D of latent variable z is increased. The encoder and decoder for all models consist of a single deterministic layer with 500 units. eVAE models have epitomes of size K = 4 for D = 8, and K = 8 for D = 24 and D = 48. The breakdown of the likelihood bound into the reconstruction term and the KLD term is also shown.

[Figure 10 plot: NLL (0 to 140) for D = 8, 24, 48, with bars for the KL term, reconstruction term, and log-likelihood bound of VAE and eVAE.]

Figure 10: Likelihood bound for VAE and eVAE as D increases (shown as NLL). VAE improvement of the bound is due to a significant reduction of reconstruction error, but at a high cost in KL, which is closely related to generation. eVAE improves reconstruction more moderately, but also maintains a lower KL, and has stronger generation overall.
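The reconstruction/KL breakdown reported in Table 3 and Fig. 10 corresponds to the two terms of the single-sample VAE bound. A minimal sketch (ours; it assumes a Bernoulli decoder over binarized pixels, matching the binarized-MNIST setup above):

```python
import numpy as np

def elbo_terms(x, decoder_logits, mu, log_var):
    """Single-sample estimate of the bound's two terms for one example:
    rec = log p(x | z)  (Bernoulli likelihood from decoder logits),
    kl  = KL( q(z | x) || N(0, I) )  (analytic Gaussian KL),
    and bound = rec - kl."""
    rec = float(np.sum(x * decoder_logits - np.logaddexp(0.0, decoder_logits)))
    kl = float(0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var))
    return rec, kl, rec - kl
```

Averaged over the test set, rec and -kl correspond to the reconstruction and KLD columns of Table 3, and their sum gives the likelihood bound.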
[Figure 11: grids of generated digit samples for VAE (D = 8, 24, 48) and eVAE (D = 8, 24, 48).]

Figure 11: Generation samples for VAE and eVAE as dimension D of latent variable z is increased. VAE sample quality decreases, which is consistent with log-density but not with the likelihood bound."}]
ry2YOrcge
[{"section_index": "0", "section_name": "LEARNING A NATURAL LANGUAGE INTERFACE WITH NEURAL PROGRAMMER", "section_text": "Arvind Neelakantan, University of Massachusetts Amherst (work done at Google Brain); Quoc V. Le, Google Brain (qvl@google.com); Martin Abadi, Google Brain (abadi@google.com); Andrew McCallum, University of Massachusetts Amherst (mccallum@cs.umass.edu); Dario Amodei, OpenAI (damodei@openai.com)"},

{"section_index": "1", "section_name": "BACKGROUND AND INTRODUCTION", "section_text": "Databases are a pervasive way to store and access knowledge. However, it is not straightforward for users to interact with databases, since doing so often requires programming skills and knowledge about database schemas. Overcoming this difficulty by allowing users to communicate with databases via natural language is an active research area. The common approach to this task is semantic parsing, which is the process of mapping natural language to symbolic representations of meaning. In this context, semantic parsing yields logical forms or programs that provide the desired response when executed on the databases (Zelle & Mooney, 1996). Semantic parsing is a challenging problem that involves deep language understanding and reasoning with discrete operations such as counting and row selection (Liang, 2016).

Recently, many neural network models have been developed for program induction (Andreas et al., 2016; Jia & Liang, 2016; Reed & de Freitas, 2016; Zaremba et al., 2016; Yin et al., 2015), despite the notorious difficulty of handling discrete operations in neural networks (Joulin & Mikolov, 2015; Kaiser & Sutskever, 2016)."},

{"section_index": "2", "section_name": "ABSTRACT", "section_text": "Learning a natural language interface for database tables is a challenging task that involves deep language understanding and multi-step reasoning. The task is often approached by mapping natural language queries to logical forms or programs that provide the desired response when executed on the database. To our knowledge, this paper presents the first weakly supervised, end-to-end neural network model to induce such programs on a real-world dataset. We enhance the objective function of Neural Programmer, a neural network with built-in discrete operations, and apply it on WikiTableQuestions, a natural language question-answering dataset. The model is trained end-to-end with weak supervision of question-answer pairs, and does not require domain-specific grammars, rules, or annotations that are key elements in previous approaches to program induction. The main experimental result in this paper is that a single Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with weak supervision. An ensemble of 15 models, with a trivial combination technique, achieves 37.7% accuracy, which is competitive to the current state-of-the-art accuracy of 37.1% obtained by a traditional natural language semantic parser."},

The first learning methods for semantic parsing require expensive annotation of question-program pairs (Zelle & Mooney, 1996; Zettlemoyer & Collins, 2005). This annotation process is no longer necessary in the current state-of-the-art semantic parsers that are trained using only question-answer pairs (Liang et al., 2011; Kwiatkowski et al., 2013; Krishnamurthy & Kollar, 2013; Pasupat & Liang, 2015). However, the performance of these methods still heavily depends on domain-specific grammars or pruning strategies to ease program search. For example, in a recent work on building semantic parsers for various domains, the authors hand-engineer a separate grammar for each domain (Wang et al., 2015).
Most of these neural approaches to program induction rely on complete programs as supervision (Jia & Liang, 2016; Reed & de Freitas, 2016), while others (Zaremba et al., 2016; Yin et al., 2015) have been tried only on synthetic tasks. The work that is most similar to ours is that of Andreas et al. (2016) on the dynamic neural module network. However, in their method, the neural network is employed only to search over a small set of candidate layouts provided by the syntactic parse of the question, and is trained using the REINFORCE algorithm (Williams, 1992). Hence, their method cannot recover from parser errors, and it is not trivial to adapt the parser to the task at hand. Additionally, all their modules or operations are parametrized by a neural network, so it is difficult to apply their method on tasks that require discrete arithmetic operations. Finally, their experiments concern a simpler dataset that requires fewer operations, and therefore a smaller search space, than WikiTableQuestions, which we consider in our work. We discuss other related work in Section 4.

Neural Programmer (Neelakantan et al., 2016) is a neural network augmented with a set of discrete operations. It produces both a program, made up of those operations, and the result of running the program against a given table. The operations make use of three variables: row selector, scalar answer, and lookup answer, which are updated at every timestep. lookup answer and scalar answer store answers, while row selector is used to propagate information across time steps. As input, a model receives a question along with a table (Figure 1). The model runs for a fixed number of time steps, selecting an operation and a column from the table as the argument to the operation at each time step. During training, soft selection (Bahdanau et al., 2014) is performed so that the model can be trained end-to-end using backpropagation. This approach allows Neural Programmer to explore the search space with better sample complexity than hard selection with the REINFORCE algorithm (Williams, 1992) would provide. All the parameters of the model are learned from a weak supervision signal that consists of only the final answer; the underlying program, which consists of a sequence of operations and of selected columns, is latent.

[Figure 1 diagram: at timestep t, a neural network reads the question "What was the total number of goals scored in 2005" and an input table with columns Season, Team, Country, Competition, Matches, and Goals; it performs operation selection (over Count, Select, ArgMax, ArgMin, >, <, Print, ...) and column selection, updating the scalar answer, row selector, and lookup answer variables, with the row selector carried over from timestep t-1.]

Figure 1: Neural Programmer is a neural network augmented with a set of discrete operations. The model runs for a fixed number of time steps, selecting an operation and a column from the table at every time step. The induced program transfers information across timesteps using the row selector variable, while the output of the model is stored in the scalar answer and lookup answer variables.
In this work, we develop an approach to semantic parsing based on Neural Programmer. We show how to learn a natural language interface for answering questions using database tables, thus integrating differentiable operations that are typical of neural networks with the declarative knowledge contained in the tables and with discrete operations on tables and entries. For this purpose, we make several improvements and adjustments to Neural Programmer, in particular adapting its objective function to make it more broadly applicable.

Our main experimental results concern WikiTableQuestions (Pasupat & Liang, 2015), a real-world question-answering dataset on database tables, with only 10,000 examples for weak supervision. This dataset is particularly challenging because of its small size and the lack of strong supervision, and also because the tables provided at test time are never seen during training, so learning requires adaptation at test time to unseen column names. A state-of-the-art, traditional semantic parser that relies on pruning strategies to ease program search achieves 37.1% accuracy. Standard neural network models like sequence-to-sequence and pointer networks do not appear to be promising for this dataset, as confirmed in our experiments below, which yield single-digit accuracies. In comparison, a single Neural Programmer model using minimal text pre-processing, and trained end-to-end, achieves 34.2% accuracy. This surprising result is enabled primarily by the sample efficiency of Neural Programmer, by the enhanced objective function, and by reducing overfitting via strong regularization with dropout (Srivastava et al., 2014; Iyyer et al., 2015; Gal & Ghahramani, 2016) and weight decay. An ensemble of 15 models, even with a trivial combination technique, achieves 37.7% accuracy.

{"section_index": "3", "section_name": "2 NEURAL PROGRAMMER", "section_text": "In this section we describe in greater detail the Neural Programmer model and the modifications we made to it. Neural Programmer is a neural network augmented with a set of discrete operations. The model consists of four modules:

- A question RNN that processes the question and converts its tokens to a distributed representation. We use an LSTM network (Hochreiter & Schmidhuber, 1997) as the question RNN.

- A list of discrete operations, such as counting and entry selection, that are manually defined. Each operation is parameterized by a real-valued vector that is learned during training.

- A selector module that induces two probability distributions at every time step, one over the set of operations and another over the set of columns (a minimal sketch of this step follows the list). The input to the selector is obtained by concatenating the last hidden state of the question RNN, the hidden state of the history RNN from the current timestep, and the attention vector obtained by performing soft attention (Bahdanau et al., 2014) on the question using the history vector. Following Neelakantan et al. (2016), we employ hard selection at test time.

- A history RNN, modeled by a simple RNN (Werbos, 1990) with tanh activations, which remembers the previous operations and columns selected by the model. The input to the history RNN at each timestep is the result of concatenating the weighted representations of operations and columns with their corresponding probability distributions produced by the selector at the previous timestep.
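The following NumPy sketch illustrates the selector's soft-selection step described above. It is our own simplification, not the released implementation: scoring is reduced to dot products between a context vector and learned operation/column embeddings, whereas the actual selector input is the concatenation described in the bullet. At test time, hard selection replaces each softmax by an argmax.

```python
import numpy as np

def softmax(scores):
    scores = scores - scores.max()
    exp = np.exp(scores)
    return exp / exp.sum()

def selector_step(context, op_embeddings, col_embeddings):
    """Soft selection at one timestep: probability distributions over the
    operations and over the table's columns, given a context vector built
    from the question RNN, history RNN, and question attention."""
    alpha_op = softmax(op_embeddings @ context)    # (num_operations,)
    alpha_col = softmax(col_embeddings @ context)  # (num_columns,)
    return alpha_op, alpha_col
```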
A more detailed description of the basic model can be found in Neelakantan et al. (2016). The model runs for a fixed total of T timesteps. The parameters of the operations, selector module, question and history RNNs are all learned with backpropagation, using a weak supervision signal that consists of the final answer. Below, we discuss several modifications to the model that make it more broadly applicable and easier to train.

In earlier work, Neural Programmer is applied only on a synthetic dataset. In that dataset, when the expected answer is an entry in the given table, its position is explicitly marked in the table. However, real-world datasets certainly do not include those markers, and lead to many ambiguities (e.g., (Pasupat & Liang, 2015)). In particular, when the answer is a number that occurs literally in the table, it is not known, a priori, whether the answer should be generated by an operation or selected from the table. Similarly, when the answer is a natural language phrase that occurs in multiple positions in the table, it is not known which entry (or entries) in the table is actually responsible for the answer. We extend Neural Programmer to handle the weaker supervision signal by backpropagating through decisions that concern how the answer is generated when there is an ambiguity."},

{"section_index": "4", "section_name": "2.1 OPERATIONS", "section_text": "We use 15 operations in the model, chosen to closely match the set of operations used in the baseline model (Pasupat & Liang, 2015). All the operations except select and most frequent entry operate only on the set of selected rows, which is given by the row selector variable. Before the first timestep, all the rows in the table are set to be selected. The built-in operations are:

- count returns the number of selected rows in row selector.

- select and most frequent entry are operations which are computed only once for every question and output a boolean tensor with the same size as the input table. An entry in the output of the select operation is set to 1 if the entry matches some phrase in the question. The matched phrases in the question are anonymized to prevent overfitting. Similarly, for most frequent entry, an entry is set to 1 if it is the most frequently occurring one in its column.

- argmax, argmin, greater than, less than, greater than or equal to, and less than or equal to are all operations that output a tensor with the same size as the input table.

- first, last, previous and next modify the row selector.

- print assigns row selector on the selected column of lookup answer.

- reset resets row selector to its initial value. This operation also serves as a no-op when the model needs to induce programs whose complexity is less than T.

All the operations are defined to work with soft selection, so that the model can be trained with backpropagation. The operations, along with their definitions, are discussed in the Appendix."},

{"section_index": "5", "section_name": "2.2 OUTPUT AND ROW SELECTOR", "section_text": "Neural Programmer makes use of three variables: row selector, scalar answer and lookup answer, which are updated at every timestep. The variable lookup answer stores answers that are selected from the table, while scalar answer stores numeric answers that are not provided in the table.¹ The induced program transfers information across timesteps using the row selector variable, which contains the rows that are selected by the model.

Given an input table Π, containing M rows and C columns (M and C can vary across examples), the output variables at timestep t are given by:

scalar\_answer_t = \alpha_t^{op}(\text{count}) \cdot output_t(\text{count}),
lookup\_answer_t[i][j] = \alpha_t^{op}(\text{print}) \cdot \alpha_t^{col}(j) \cdot row\_selector_{t-1}[i], \quad \forall (i, j),\; i = 1, \ldots, M,\; j = 1, \ldots, C,

where \alpha_t^{op}(op) and \alpha_t^{col}(j) are the probabilities assigned by the selector to operation op and column j at timestep t, respectively, and output_t(count) is the output of the count operation at timestep t. The row selector variable at timestep t is obtained by taking the weighted average of the outputs of the remaining operations, and is discussed in the Appendix. lookup\_answer_T[i][j] is the probability that the element (i, j) in the input table is in the final answer predicted by the model.

¹ It is possible to extend the model to generate natural language responses using an RNN decoder, but it is not the focus of this paper and we leave it for future work.
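A minimal sketch of the two output equations above (our own code; the operation-index mapping is illustrative):

```python
import numpy as np

def output_variables(alpha_op, alpha_col, count_output, row_selector_prev, op_index):
    """Soft outputs at timestep t (Section 2.2):
    scalar answer = alpha_op[count] * output_t(count)
    lookup[i, j]  = alpha_op[print] * alpha_col[j] * row_selector_{t-1}[i]"""
    scalar_answer = alpha_op[op_index["count"]] * count_output
    lookup_answer = alpha_op[op_index["print"]] * np.outer(row_selector_prev, alpha_col)
    return scalar_answer, lookup_answer
```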
"},

{"section_index": "6", "section_name": "2.3 TRAINING OBJECTIVE", "section_text": "We modify the training objective of Neural Programmer to handle the supervision signal available in real-world settings. In previous work, the positions of the answers are explicitly marked in the table when the answer is an entry from the table. However, as discussed in Section 1, in real-world datasets (e.g., (Pasupat & Liang, 2015)) the answer is simply written down, introducing two kinds of ambiguities. First, when the answer is a number and the number is in the table, it is not known whether the loss should be computed using the scalar answer variable or the lookup answer variable. Second, when the answer is a natural language phrase and the phrase occurs in multiple positions in the table, we again do not know which entry (or entries) in the table is actually responsible for generating the answer. We extend Neural Programmer to handle this weaker supervision signal during training by computing the loss only on the prediction that is closest to the desired response.

For scalar answers we compute the square loss:

L_{scalar}(scalar\_answer_T, y) = (scalar\_answer_T - y)^2,

where y is the ground truth answer. We divide L_scalar by the number of rows in the input table, and we do not backpropagate on examples for which the loss is greater than a threshold, since doing so leads to instabilities in training.

When the answer is a list of items y = (a_1, a_2, \ldots, a_N), for each element a_i in the list we compute all the entries in the table that match that element, given by S_i = \{(r, c),\; \forall (r, c)\; \Pi[r][c] = a_i\}. We tackle the ambiguity introduced when an answer item occurs at multiple entries in the table by computing the loss only on the entry which is assigned the highest probability. Let g[i][j] indicate whether the element (i, j) in the input table is part of the output. We compute a log-loss for each entry, and the final loss is given by:

L_{lookup}(lookup\_answer_T, y) = \sum_{i=1}^{N} \min_{(r,c) \in S_i} \left( -\log(lookup\_answer_T[r, c]) \right) - \frac{1}{MC} \sum_{i=1}^{M} \sum_{j=1}^{C} \mathbb{1}[g[i, j] = 0] \log(1 - lookup\_answer_T[i, j]),

where \mathbb{1}[cond] is 1 when cond is true, and 0 otherwise.

We deal with the ambiguity that occurs when the ground truth is a number that also occurs in the table by computing the final loss as the soft minimum of L_scalar and L_lookup. Otherwise, the loss for an example is L_scalar when the ground truth is a number and L_lookup when the ground truth matches some entries in the table. The two loss functions L_scalar and L_lookup are on different scales, so we multiply L_lookup by a constant factor, which we set to 50.0 after a small exploration in our experiments.

Since we employ hard selection at test time, only one among scalar answer and lookup answer is modified at the last timestep. We use the variable that is set at the last timestep as the final output of the model.
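The lookup part of the objective can be sketched as follows (our own illustrative code; `answer_cells` lists, for each answer item a_i, its matching cells S_i, and the `eps` guard on the logs is an implementation detail not specified in the text). The min over matches implements the rule of backpropagating only through the closest prediction, and the second term penalizes probability mass on non-answer cells.

```python
import numpy as np

def lookup_loss(lookup_answer, answer_cells, eps=1e-12):
    """L_lookup from Section 2.3. lookup_answer is the (M, C) matrix of
    cell probabilities; answer_cells[i] is the list of (row, col) matches S_i."""
    M, C = lookup_answer.shape
    g = np.zeros((M, C))
    loss = 0.0
    for matches in answer_cells:
        # log-loss only on the highest-probability matching cell
        best = max(matches, key=lambda rc: lookup_answer[rc])
        loss += -np.log(lookup_answer[best] + eps)
        for rc in matches:
            g[rc] = 1.0
    # penalize mass on cells that are not part of the answer
    loss += -np.sum((1.0 - g) * np.log(1.0 - lookup_answer + eps)) / (M * C)
    return loss
```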
"},

We apply Neural Programmer on the WikiTableQuestions dataset (Pasupat & Liang, 2015) and compare it to different non-neural baselines, including a natural language semantic parser developed by Pasupat & Liang (2015). Further, we also report results from training the sequence-to-sequence model (Sutskever et al., 2014) and a modified version of pointer networks (Vinyals et al., 2015). Our model is implemented in TensorFlow (Abadi et al., 2016), and the model takes approximately a day to train on a single Tesla K80 GPU. We use double-precision format to store the model parameters, since the gradients become undefined values in single-precision format. Our code is available at https://github.com/tensorflow/models/tree/master/neural

Table 1: Performance of Neural Programmer compared to baselines from (Pasupat & Liang, 2015). The performance of an ensemble of 15 models is competitive to the current state-of-the-art natural language semantic parser.

{"section_index": "7", "section_name": "3.1 DATA", "section_text": "We use the train, development, and test split given by Pasupat & Liang (2015). The dataset contains 11321, 2831, and 4344 examples for training, development, and testing respectively. We use their tokenization, number and date pre-processing. There are examples with answers that are neither number answers nor phrases selected from the table. We ignore these questions during training, but the model is penalized on them during evaluation, following Pasupat & Liang (2015). The tables provided in the test set are unseen at training, hence requiring the model to adapt to unseen column names at test time. We train only on examples for which the provided table has less than 100 rows, since we run out of GPU memory otherwise, but consider all examples at test time."},

{"section_index": "8", "section_name": "3.2 TRAINING DETAILS", "section_text": "We use T = 4 timesteps in our experiments. Words and operations are represented as 256-dimensional vectors, and the hidden vectors of the question and the history RNN are also 256-dimensional. The parameters are initialized uniformly at random within the range [-0.1, 0.1]. We train the model using the Adam optimizer (Kingma & Ba, 2014) with mini-batches of size 20. The ε hyperparameter in Adam is set to 1e-6, while the others are set to the default values. Since the training set is small compared to other datasets on which neural network models are usually applied, we rely on strong regularization (a sketch of this configuration appears below):

- We clip the gradients to norm 1 and employ early stopping.

- The occurrences of words that appear less than 10 times in the training set are replaced by a single unknown word token.

- We add a weight decay penalty with strength 0.0001.

- We use dropout with a keep probability of 0.8 on the input and output vectors of the RNN, and on the selector, operation and column name representations (Srivastava et al., 2014).

- We use dropout with keep probability of 0.9 on the recurrent connections of the question RNN and history RNN, using the technique from Gal & Ghahramani (2016).

- We use word-dropout (Iyyer et al., 2015) with keep probability of 0.9: words in the question are randomly replaced with the unknown word token while training.

We tune the dropout rates, regularization strength, and the ε hyperparameter using grid search on the development data; we fix the other hyperparameters after a small exploration during initial experiments.
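For concreteness, the configuration above can be summarized in a short framework-agnostic sketch (ours; the dict layout and helper names are illustrative, and only the gradient-clipping and word-dropout pieces are implemented):

```python
import numpy as np

CONFIG = {  # values from Section 3.2
    "timesteps": 4, "embedding_dim": 256, "batch_size": 20,
    "adam_epsilon": 1e-6, "weight_decay": 1e-4, "grad_clip_norm": 1.0,
    "keep_prob_io": 0.8, "keep_prob_recurrent": 0.9, "keep_prob_word": 0.9,
}

def clip_by_global_norm(grads, max_norm):
    """Rescale gradients so their global L2 norm is at most max_norm."""
    total = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-12))
    return [g * scale for g in grads]

def word_dropout(token_ids, unk_id, keep_prob, rng):
    """Randomly replace question tokens by the unknown token (Iyyer et al., 2015)."""
    keep = rng.random(len(token_ids)) < keep_prob
    return [t if k else unk_id for t, k in zip(token_ids, keep)]
```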
"},

Table 1 shows the performance of our model in comparison to baselines from Pasupat & Liang (2015). The best result from Neural Programmer is achieved by an ensemble of 15 models; the only difference among these models is that the parameters of each model are initialized with a different random seed. We combine the models by averaging the predicted softmax distributions of the models at every timestep. While it is generally believed that neural network models require a large number of training examples compared to simpler linear models to get good performance, our model achieves competitive performance using only 10,000 examples with weak supervision.

Table 2: Model ablation studies. We find that dropout and weight decay, along with the boolean feature indicating a matched table entry for column selection, have a significant effect on the performance of the model.

We did not get better results either by using pre-trained word vectors (Mikolov et al., 2013) or by pre-training the question RNN with a language modeling objective (Dai & Le, 2015). A possible explanation is that word vectors obtained from unsupervised learning may not be suitable for the task under consideration. For example, the learned representations of words like maximum and minimum from unsupervised learning are usually close to each other, but for our task this is counter-productive. We also considered replacing soft selection with hard selection and training the model with the REINFORCE algorithm (Williams, 1992). The model fails to learn in this experiment, probably because it has to search over millions of symbolic programs for every input question, making it highly unlikely to find a program that gives a reward; hence, the parameters of the model are not updated frequently enough.

{"section_index": "9", "section_name": "3.3.1 NEURAL NETWORK BASELINES", "section_text": "To understand the difficulty of the task for neural network models, we experiment with two neural network baselines: the sequence-to-sequence model (Sutskever et al., 2014) and a modified version of pointer networks (Vinyals et al., 2015). The input to the sequence-to-sequence model is a concatenation of the table and the question, and the decoder produces the output one token at a time. We consider only examples whose input length is less than 400, to keep the running time reasonable. The resulting dataset has 8,857 and 1,623 training and development examples respectively. The accuracy of the best model on this development set after hyperparameter tuning is only 8.9%. Next, we experiment with pointer networks to select entries in the table as the final answer. We modify pointer networks to have two attention heads: one to select the column and the other to select entries within a column. Additionally, the model performs multiple pondering steps on the table before returning the final answer. We train this model only on lookup questions, since the model does not have a decoder to generate answers. We consider only examples whose tables have less than 100 rows, resulting in a training and development set consisting of 7,534 and 1,829 examples respectively. The accuracy of the best model on this development set after hyperparameter tuning is only 4.0%. These results confirm our intuition that discrete operations are hard to learn for neural networks, particularly with small datasets in real-world settings.
"},

{"section_index": "10", "section_name": "3.4.1 MODEL ABLATION", "section_text": "Table 2 shows the impact of different model design choices on the final performance. While anonymizing phrases in the question that match some table entry seems to have a small positive effect, regularization has a much larger effect on the performance. Column selection is performed in Neelakantan et al. (2016) using only the name of a column; however, this selection procedure is insufficient in real-world settings. For example, the column selected in question 3 in Table 3 does not have a corresponding phrase in the question. Hence, to select a column we additionally use a boolean feature that indicates whether an entry in that column matches some phrase in the question. Table 2 shows that the addition of this boolean feature has a significant effect on performance."},

Table 3: A few examples of programs induced by Neural Programmer that generate the correct answer in the development set. mfe is an abbreviation for the operation most frequent entry. The model runs for 4 timesteps, selecting an operation and a column at every step, and employs hard selection during evaluation. The column name (in parentheses) is shown only when the operation picked at that step takes a column as input, and the operation is shown only when it is other than the reset operation. Programs that choose count as the final operation produce a number as the final answer, while programs that select print as the final operation produce entries selected from the table as the final answer.

ID  Question                                            Step 1           Step 2           Step 3               Step 4
1   what is the total number of teams?                  -                -                -                    count
2   how many games had more than 1,500 in attendance?   -                -                > (attendance)       count
3   what is the total number of runner-ups listed       -                -                select (outcome)     count
    on the chart?
4   which year held the most competitions?              -                -                mfe (year)           print (year)
5   what opponent is listed last on the table?          last             -                last                 print (opponent)
6   which section is longest?                           -                -                argmax (kilometers)  print (name)
7   which engine(s) has the least amount of power?      -                -                argmin (power)       print (engine)
8   what was claudia roll's time?                       -                -                select (swimmer)     print (time)
9   who had more silver medals, cuba or brazil?         argmax (nation)  select (nation)  argmax (silver)      print (nation)
10  who was the next appointed director after           select (name)    next             last                 print (name)
    lee p. brown?
11  what team is listed previous to belgium?            select (team)    previous         first                print (team)

Operation                                                      Program in Table 3   Amount (%)
Scalar Answer
  Only Count                                                   1                    6.5
  Comparison + Count                                           2                    2.1
  Select + Count                                               3                    22.1
  Scalar Answer (total)                                        1, 2, 3              30.7
Lookup Answer
  Most Frequent Entry + Print                                  4                    1.7
  First/Last + Print                                           5                    9.5
  Superlative + Print                                          6, 7                 13.5
  Select + Print                                               8                    17.5
  Select + {first, last, previous, next, superlative} + Print  9-11                 27.1
  Lookup Answer (total)                                        4-11                 69.3

Table 4: Statistics of the different sequences of operations among the examples answered correctly by the model in the development set. For each sequence of operations in the table, we also point to corresponding example programs in Table 3. Superlative operations include argmax and argmin, while comparison operations include greater than, less than, greater than or equal to, and less than or equal to. The model induces a program that results in a scalar answer 30.7% of the time, while the induced program is a table lookup for the remaining questions. print and select are the two most common operations, used 69.3% and 66.7% of the time respectively.
{"section_index": "11", "section_name": "3.4.2 INDUCED PROGRAMS", "section_text": "Table 3 shows a few examples of programs induced by Neural Programmer that yield the correct answer in the development set. The programs given in Table 3 show a few characteristics of the learned model. First, our analysis indicates that the model can adapt to unseen column names at test time. For example, in Question 3, the word outcome occurs only 8 times in the training set and is replaced with the unknown word token. Second, the model does not always induce the most efficient program (with respect to the number of operations other than the reset operation that are picked) to solve a task. The last 3 questions in the table can be solved using simpler programs. Finally, the model does not always induce the correct program to get the ground truth answer. For example, the last 2 programs will not result in the correct response for all input database tables. These programs would produce the correct response only when the select operation matches exactly one entry in the table."},

{"section_index": "12", "section_name": "3.4.3 CONTRIBUTION OF DIFFERENT OPERATIONS", "section_text": "Table 4 shows the contribution of the different operations. The model induces a program that results in a scalar answer 30.7% of the time, while the induced program is a table lookup for the remaining questions. The two most commonly used operations by the model are print and select."},

{"section_index": "13", "section_name": "3.4.4 ERROR ANALYSIS", "section_text": "To conclude this section, we suggest ideas to potentially improve the performance of the model. First, the oracle performance with 15 Neural Programmer models is 50.5% on the development set, while averaging achieves only 37.5%, implying that there is still room for improvement. Next, the accuracy of a single model on the training set is 53%, which is about 20% higher than the accuracy on both the development set and the test set. This difference in performance indicates that the model suffers from significant overfitting even after employing strong regularization. It also suggests that the performance of the model could be greatly improved by obtaining more training data. Nevertheless, there are limits to the performance improvements we may reasonably expect: in particular, as shown in previous work (Pasupat & Liang, 2015), 21% of questions in a random set of 200 examples from the considered dataset are not answerable because of various issues such as annotation errors and tables requiring advanced normalization."},

{"section_index": "14", "section_name": "OTHER RELATED WORK", "section_text": "While we discuss various semantic parsing and neural program induction techniques in detail in Section 1, here we briefly describe other relevant work. Recently, Kocisky et al. (2016) developed a semi-supervised semantic parsing method that uses question-program pairs as supervision. Concurrently with our work, Liang et al. (2016) proposed the neural symbolic machine, a model very similar to Neural Programmer but trained using the REINFORCE algorithm (Williams, 1992). They use only 2 discrete operations and run for a total of 3 timesteps, hence inducing programs that are much simpler than ours.
Neural networks have also been applied to question-answering datasets that do not require much arithmetic reasoning (Bordes et al., 2014; Iyyer et al., 2014; Sukhbaatar et al., 2015; Peng et al., 2015; Hermann et al., 2015; Kumar et al., 2016). Wang & Jiang (2016) use a neural network model to get state-of-the-art results on a reading comprehension task (Rajpurkar et al., 2016)."},

{"section_index": "15", "section_name": "5 CONCLUSION", "section_text": "In this paper, we enhance Neural Programmer to work with weaker supervision signals, making it more broadly applicable. Soft selection during training enables the model to actively explore the space of programs by backpropagation, with superior sample complexity. In our experiments, we show that the model achieves performance comparable to a state-of-the-art traditional semantic parser even though the training set contains only 10,000 examples. To our knowledge, this is the first instance of a weakly supervised, end-to-end neural network model that induces programs on a real-world dataset.

Acknowledgements: We are grateful to Panupong Pasupat for answering numerous questions about the dataset, and for providing a pre-processed version of the dataset and the output of the semantic parser. We thank David Belanger, Samy Bengio, Greg Corrado, Andrew Dai, Jeff Dean, Nando de Freitas, Shixiang Gu, Navdeep Jaitly, Rafal Jozefowicz, Ashish Vaswani, Luke Vilnis, Yuan Yu and Barret Zoph for their suggestions, and the Google Brain team for the support. Arvind Neelakantan is supported by a Google PhD fellowship in machine learning."},

{"section_index": "16", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. ArXiv, 2016.

Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neural networks for question answering. NAACL, 2016.

Antoine Bordes, Sumit Chopra, and Jason Weston. Question answering with subgraph embeddings. EMNLP, 2014.

Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. NIPS, 2015.

Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. NIPS, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. NIPS, 2015.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Mohit Iyyer, Jordan L. Boyd-Graber, Leonardo Max Batista Claudino, Richard Socher, and Hal Daume III. A neural network for factoid question answering over paragraphs. EMNLP, 2014.

Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daume III. Deep unordered composition rivals syntactic methods for text classification. ACL, 2015.

Robin Jia and Percy Liang. Data recombination for neural semantic parsing. ACL, 2016.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. NIPS, 2015.

Lukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. ICLR, 2016.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2014.

Jayant Krishnamurthy and Thomas Kollar. Jointly learning to parse and perceive: Connecting natural language to the physical world. TACL, 2013.

Percy Liang. Learning executable semantic parsers for natural language understanding. ACM, 2016.

Percy Liang, Michael I. Jordan, and Dan Klein. Learning dependency-based compositional semantics. ACL, 2011.

Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. ICLR, 2016.

Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. ACL, 2015.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. ArXiv, 2016.

Scott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR, 2016.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. NIPS, 2015.

Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daume III. Deep unordered composition rivals syntactic methods for text classification. ACL, 2015.

Tomas Kocisky, Gabor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. Semantic parsing with semi-supervised sequential autoencoders. ArXiv, 2016.

Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. Scaling semantic parsers with on-the-fly ontology matching. EMNLP, 2013.

Chen Liang, Jonathan Berant, Quoc Le, Kenneth Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. NAMPI Workshop, NIPS, 2016.

Baolin Peng, Zhengdong Lu, Hang Li, and Kam-Fai Wong. Towards neural network-based reasoning. ArXiv, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. NIPS, 2014.

Yushi Wang, Jonathan Berant, and Percy Liang. Building a semantic parser overnight. ACL, 2015.

Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. Neural enquirer: Learning to query tables with natural language. ArXiv, 2015.

John M. Zelle and Raymond J. Mooney. Learning to parse database queries using inductive logic programming. AAAI/IAAI, 1996.

Luke S. Zettlemoyer and Michael Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. UAI, 2005.

Table 5: List of all operations provided to the model along with their definitions. mfe is an abbreviation for the operation most frequent entry. [cond] is 1 when cond is True, and 0 otherwise. Comparison, select, reset, and mfe operations are independent of the timestep, while all the other operations are computed at every timestep. Superlative operations and most frequent entry are computed within a column. The operations calculate the expected output with respect to the membership probabilities given by the row selector, so that they can work with probabilistic selection.

Type: Aggregate
  count: count_t = \sum_{i=1}^{M} row_select_{t-1}[i]

Type: Superlative
  argmax: max_t[i][j] = max(0.0, row_select_{t-1}[i] - \sum_{k=1}^{M} [\Pi[k][j] > \Pi[i][j]] row_select_{t-1}[k]), i = 1, ..., M, j = 1, ..., C
  argmin: min_t[i][j] = max(0.0, row_select_{t-1}[i] - \sum_{k=1}^{M} [\Pi[i][j] > \Pi[k][j]] row_select_{t-1}[k]), i = 1, ..., M, j = 1, ..., C

Type: Comparison
  >:  g[i][j] = [\Pi[i][j] > pivot_>], \forall (i, j), i = 1, ..., M, j = 1, ..., C
  <:  l[i][j] = [\Pi[i][j] < pivot_<], \forall (i, j), i = 1, ..., M, j = 1, ..., C
  >=: ge[i][j] = [\Pi[i][j] >= pivot_{>=}], \forall (i, j), i = 1, ..., M, j = 1, ..., C
  <=: le[i][j] = [\Pi[i][j] <= pivot_{<=}], \forall (i, j), i = 1, ..., M, j = 1, ..., C

Type: Table Ops
  select: s[i][j] = 1.0 if \Pi[i][j] appears in the question else 0.0, \forall (i, j), i = 1, ..., M, j = 1, ..., C
  mfe: mfe[i][j] = 1.0 if \Pi[i][j] is the most common entry in column j else 0.0, \forall (i, j), i = 1, ..., M, j = 1, ..., C
  first: f_t[i] = max(0.0, row_select_{t-1}[i] - \sum_{j=1}^{i-1} row_select_{t-1}[j]), i = 1, ..., M
  last: la_t[i] = max(0.0, row_select_{t-1}[i] - \sum_{j=i+1}^{M} row_select_{t-1}[j]), i = 1, ..., M
  previous: p_t[i] = row_select_{t-1}[i+1], i = 1, ..., M-1; p_t[M] = 0
  next: n_t[i] = row_select_{t-1}[i-1], i = 2, ..., M; n_t[1] = 0

Type: Print
  print: lookup_answer_t[i][j] = row_select_{t-1}[i], \forall (i, j), i = 1, ..., M, j = 1, ..., C

Type: Reset
  reset: r_t[i] = 1, \forall i = 1, 2, ..., M

Table 5 shows the list of operations built into the model along with their definitions.
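To make the probabilistic semantics of these operations concrete, the following is a minimal numpy sketch (our own illustration, not the authors' code) of how count and argmax act on a toy table under a soft row selector:

```python
import numpy as np

# Toy table with M = 4 rows and C = 2 numeric columns.
table = np.array([[3.0, 10.0],
                  [5.0, 20.0],
                  [2.0, 30.0],
                  [5.0, 40.0]])
M, C = table.shape

# Soft row selector from the previous timestep (memberships in [0, 1]).
row_select = np.array([1.0, 0.5, 0.0, 1.0])

# count: expected number of selected rows under the soft memberships.
count = row_select.sum()  # -> 2.5

# argmax (per column): a row keeps its membership only to the extent that
# no row with a strictly larger entry in that column is also selected.
greater = table[None, :, :] > table[:, None, :]      # greater[i, k, j] = [table[k, j] > table[i, j]]
mass_above = (greater * row_select[None, :, None]).sum(axis=1)
arg_max = np.maximum(0.0, row_select[:, None] - mass_above)

print(count)
print(arg_max)
```

Because the memberships are soft, count returns an expected row count (2.5 here) rather than an integer, and ties at the maximum split their membership mass rather than forcing a hard choice.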
"}, {"section_index": "17", "section_name": "ROW SELECTOR", "section_text": "As discussed in Section 2.3, the output variables scalar answer and lookup answer are calculated using the output of the count operation and print operation respectively. The row selector is computed using the output of the remaining operations and is given by

row\_selector_t[i] = \sum_{j=1}^{C} \big\{ \alpha_t^{col}(j)\,\alpha_t^{op}(>)\,g[i][j] + \alpha_t^{col}(j)\,\alpha_t^{op}(<)\,l[i][j]
  + \alpha_t^{col}(j)\,\alpha_t^{op}(\geq)\,ge[i][j] + \alpha_t^{col}(j)\,\alpha_t^{op}(\leq)\,le[i][j]
  + \alpha_t^{col}(j)\,\alpha_t^{op}(\mathrm{argmax})\,max_t[i][j] + \alpha_t^{col}(j)\,\alpha_t^{op}(\mathrm{argmin})\,min_t[i][j]
  + \alpha_t^{col}(j)\,\alpha_t^{op}(\mathrm{select})\,s[i][j] + \alpha_t^{col}(j)\,\alpha_t^{op}(\mathrm{mfe})\,mfe[i][j] \big\}
  + \alpha_t^{op}(\mathrm{previous})\,p_t[i] + \alpha_t^{op}(\mathrm{next})\,n_t[i] + \alpha_t^{op}(\mathrm{reset})\,r_t[i]
  + \alpha_t^{op}(\mathrm{first})\,f_t[i] + \alpha_t^{op}(\mathrm{last})\,la_t[i], \quad \forall i = 1, 2, ..., M,

where \alpha_t^{op}(op) and \alpha_t^{col}(j) are the probabilities assigned by the selector to operation op and column j at timestep t, respectively."}]
SyWvgP5el
| [{"section_index": "0", "section_name": "EPOPT: LEARNING ROBUST NEURAL NETWORK POLICIES USING MODEL ENSEMBLES", "section_text": "Aravind Rajeswaran1, Sarvjeet Ghotra2, Balaraman Ravindran3, Sergey Levine4"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods, where the real-world target domain is approximated using a simulated source domain, provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning/adaptation."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning with powerful function approximators like deep neural networks (deep RL) has recently demonstrated remarkable success in a wide range of tasks like games (Mnih et al., 2015; Silver et al., 2016), simulated control problems (Lillicrap et al., 2015; Mordatch et al., 2015b), and graphics (Peng et al., 2016). However, high sample complexity is a major barrier for directly applying model-free deep RL methods to physical control tasks. Model-free algorithms like Q-learning, actor-critic, and policy gradients are known to suffer from long learning times (Kakade, 2003), which is compounded when used in conjunction with expressive function approximators like deep neural networks (DNNs). The challenge of gathering samples from the real world is further exacerbated by issues of safety for the agent and environment, since sampling with partially learned policies could be unstable (Garcia & Fernandez, 2015). Thus, model-free deep RL methods often require prohibitively large numbers of potentially dangerous samples for physical control tasks.

Model-based methods, where the real-world target domain is approximated with a simulated source domain, provide an avenue to tackle the above challenges by learning policies using simulated data. The principal challenge with simulated training is the systematic discrepancy between source and target domains, and therefore, methods that compensate for systematic discrepancies (modeling errors) are needed to transfer results from simulations to the real world using RL. We show that the impact of such discrepancies can be mitigated through two key ideas: (1) training on an ensemble of models in an adversarial fashion to learn policies that are robust to parametric model errors, as well as to unmodeled effects; and (2) adaptation of the source domain ensemble using data from the target domain to progressively make it a better approximation. This can be viewed either as an instance of model-based Bayesian RL (Ghavamzadeh et al., 2015), or as transfer learning from a collection of simulated source domains to a real-world target domain (Taylor & Stone, 2009).
While a number of model-free RL algorithms have been proposed (see, e.g., Duan et al. (2016) for a survey), their high sample complexity demands use of a simulator, effectively making them model-based. We show in our experiments that such methods learn policies which are highly optimized for the specific models used in the simulator, but are brittle under model mismatch. This is not surprising, since deep networks are remarkably proficient at exploiting any systematic regularities in a simulator. Addressing robustness of DNN-policies is particularly important to transfer their success from simulated tasks to physical systems.

In this paper, we propose the Ensemble Policy Optimization (EPOpt-ε) algorithm for finding policies that are robust to model mismatch. In line with model-based Bayesian RL, we learn a policy for the target domain by alternating between two phases: (i) given a source (model) distribution (i.e. ensemble of models), find a robust policy that is competent for the whole distribution; (ii) gather data from the target domain using said robust policy, and adapt the source distribution. EPOpt uses an ensemble of models sampled from the source distribution, and a form of adversarial training to learn robust policies that generalize to a broad range of models. By robust, we mean insensitivity to parametric model errors and broadly competent performance for direct-transfer (also referred to as jumpstart, as in Taylor & Stone (2009)). Direct-transfer performance refers to the average initial performance (return) in the target domain, without any direct training on the target domain. By adversarial training, we mean that model instances on which the policy performs poorly in the source distribution are sampled more often, in order to encourage learning of policies that perform well for a wide range of model instances. This is in contrast to methods which learn policies that are highly optimized for specific model instances, but brittle under model perturbations. In our experiments, we did not observe significant loss in performance by requiring the policy to work on multiple models (for example, through adopting a more conservative strategy). Further, we show that policies learned using EPOpt are robust even to effects not modeled in the source domain. Such unmodeled effects are a major issue when transferring from simulation to the real world. For the model adaptation step (ii), we present a simple method using approximate Bayesian updates, which progressively makes the source distribution a better approximation of the target domain. We evaluate the proposed methods on the hopper (12 dimensional state space; 3 dimensional action space) and half-cheetah (18 dimensional state space; 6 dimensional action space) benchmarks in MuJoCo. Our experimental results suggest that adversarial training on model ensembles produces robust policies which generalize better than policies trained on a single, maximum-likelihood model (of the source distribution) alone.

We consider parametrized Markov Decision Processes (MDPs), which are tuples of the form M(p) = \langle S, A, T_p, R_p, \gamma, S_{0,p} \rangle, where S, A are (continuous) states and actions respectively; T_p, R_p, and S_{0,p} are the state transition, reward function, and initial state distribution respectively, all parametrized by p; and \gamma is the discount factor. Thus, we consider a set of MDPs with the same state and action spaces. Each MDP in this set could potentially have different transition functions, rewards, and initial state distributions. We use transition functions of the form s_{t+1} = T_p(s_t, a_t), where T_p is a random process and s_{t+1} is a random variable.

We distinguish between source and target MDPs using M and W respectively. We also refer to M and W as source and target domains respectively, as is common in the transfer learning set-up.
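As an illustration of this set-up, here is a hypothetical Python sketch of a source distribution over model parameters; the class name and the use of clipping as a simple stand-in for truncation are our assumptions, and the example numbers follow the hopper settings of Table 1 below:

```python
import numpy as np

class SourceDistribution:
    """Sketch of a source distribution D over parametrized MDPs M(p):
    each physical parameter is drawn from a (truncated) Gaussian, and the
    sampled vector p is used to instantiate one simulator model."""

    def __init__(self, means, stds, lows, highs, seed=0):
        self.means, self.stds = np.asarray(means), np.asarray(stds)
        self.lows, self.highs = np.asarray(lows), np.asarray(highs)
        self.rng = np.random.default_rng(seed)

    def sample_params(self):
        p = self.rng.normal(self.means, self.stds)
        return np.clip(p, self.lows, self.highs)  # clip as a crude truncation

# Example: torso mass and ground friction for the hopper task.
D = SourceDistribution(means=[6.0, 2.0], stds=[1.5, 0.25],
                       lows=[3.0, 1.5], highs=[9.0, 2.5])
p = D.sample_params()  # plug p into the simulator to obtain M(p)
print(p)
```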
Our objective is to learn the optimal policy for W; and to do so, we have access to M(p). We assume that we have a distribution (D) over the source domains (MDPs), generated by a distribution over the parameters P \equiv P(p) that captures our subjective belief about the parameters of W. Let P be parametrized by \psi (e.g. mean, standard deviation). For example, M could be a hopping task with reward proportional to hopping velocity, where falling down corresponds to a terminal state. For this task, p could correspond to parameters like torso mass, ground friction, and damping in joints, all of which affect the dynamics. Ideally, we would like the target domain to be in the model class, i.e. \exists p such that M(p) = W. However, in practice, there are likely to be unmodeled effects, and we analyze this setting in our experiments. We wish to learn a policy \pi^*(s) that performs well for all M ~ D. Note that this robust policy does not have an explicit dependence on p, and we require it to perform well without knowledge of p."}, {"section_index": "3", "section_name": "3 LEARNING PROTOCOL AND EPOPT ALGORITHM", "section_text": "We follow the round-based learning protocol of Bayesian model-based RL. We use the term round when interacting with the target domain, and episode when performing rollouts with the simulator. In each round, we interact with the target domain after computing the robust policy on the current (i.e. posterior) simulated source distribution. Following this, we update the source distribution using data from the target domain collected by executing the robust policy. Thus, in round i, we update two sets of parameters: \theta, the parameters of the robust policy (neural network); and \psi, the parameters of the source distribution. The two key steps in this procedure are finding a robust policy given a source distribution, and updating the source distribution using data from the target domain. In this section, we present our approach for both of these steps."}, {"section_index": "4", "section_name": "3.1 ROBUST POLICY SEARCH", "section_text": "We introduce the EPOpt algorithm for finding a robust policy using the source distribution. EPOpt is a policy gradient based meta-algorithm which uses batch policy optimization methods as a subroutine. Batch policy optimization algorithms (Williams, 1992; Kakade, 2001; Schulman et al., 2015) collect a batch of trajectories by rolling out the current policy, and use the trajectories to make a policy update. The basic structure of EPOpt is to sample a collection of models from the source distribution, sample trajectories from each of these models, and make a gradient update based on a subset of sampled trajectories. We first define evaluation metrics for the parametrized policy \pi_\theta:

\eta_M(\theta, p) = E_{\hat{\tau}}\Big[ \sum_{t=0}^{T-1} \gamma^t r_t(s_t, a_t) \,\Big|\, p \Big],   (1)

\eta_D(\theta) = E_{p \sim P}[\eta_M(\theta, p)] = E_{p \sim P}\Big[ E_{\hat{\tau}}\Big[ \sum_{t=0}^{T-1} \gamma^t r_t(s_t, a_t) \,\Big|\, p \Big] \Big] = E_{\tau}\Big[ \sum_{t=0}^{T-1} \gamma^t r_t(s_t, a_t) \Big].

In (1), \eta_M(\theta, p) is the evaluation of \pi_\theta on the model M(p), with \hat{\tau} being trajectories generated by M(p) and \pi_\theta: \hat{\tau} = {s_t, a_t, r_t}_{t=0}^{T}, where s_{t+1} ~ T_p(s_t, a_t), s_0 ~ S_{0,p}, r_t ~ R_p(s_t, a_t), and a_t ~ \pi_\theta(s_t). Similarly, \eta_D(\theta) is the evaluation of \pi_\theta over the source domain distribution.
The corresponding expectation is over trajectories \tau generated by D and \pi_\theta: \tau = {s_t, a_t, r_t}_{t=0}^{T}, where s_{t+1} ~ T_{p_t}(s_t, a_t), p_{t+1} = p_t, s_0 ~ S_{0,p_0}, r_t ~ R_{p_t}(s_t, a_t), a_t ~ \pi_\theta(s_t), and p_0 ~ P. With this modified notation of trajectories, batch policy optimization can be invoked for policy search.

Optimizing \eta_D allows us to learn a policy that performs best in expectation over models in the source domain distribution. However, this does not necessarily lead to a robust policy, since there could be high variability in performance for different models in the distribution. To explicitly seek a robust policy, we use a softer version of the max-min objective suggested in robust control, and optimize for the conditional value at risk (CVaR) (Tamar et al., 2015):

\max_{\theta, y} \int_{F(\theta)} \eta_M(\theta, p) P(p) \, dp \quad \text{s.t.} \quad P(\eta_M(\theta, P) \le y) = \epsilon,   (2)

where F(\theta) = {p | \eta_M(\theta, p) \le y} is the set of parameters corresponding to models that produce the worst \epsilon percentile of returns, and provides the limit for the integral; \eta_M(\theta, P) is the random variable of returns, which is induced by the distribution over model parameters; and \epsilon is a hyperparameter which governs the level of relaxation from the max-min objective. The interpretation is that (2) maximizes the expected return for the worst \epsilon-percentile of MDPs in the source domain distribution. We adapt the previous policy gradient formulation to approximately optimize the objective in (2). The resulting algorithm, which we call EPOpt-ε, generalizes learning a policy using an ensemble of source MDPs which are sampled from a source domain distribution.

Algorithm 1: EPOpt-ε for Robust Policy Search
1  Input: \psi, \theta_0, n_iter, N, \epsilon
2  for iteration i = 0, 1, 2, ... n_iter do
3      for k = 1, 2, ... N do
4          sample model parameters p_k ~ P_\psi
5          sample a trajectory \tau_k = {s_t, a_t, r_t, s_{t+1}}_{t=1}^{T} from M(p_k) using policy \pi(\theta_i)
6      end
7      compute Q_\epsilon = \epsilon-percentile of the returns {R(\tau_k)}_{k=1}^{N}
8      select the sub-set T = {\tau_k : R(\tau_k) \le Q_\epsilon}
9      update policy: \theta_{i+1} = BatchPolOpt(\theta_i, T)
10 end

In Algorithm 1, R(\tau_k) = \sum_{t=0}^{T-1} \gamma^t r_{t,k} denotes the discounted return obtained in trajectory sample \tau_k. In line 7, we compute the \epsilon-percentile value of returns from the N trajectories. In line 8, we find the subset of sampled trajectories which have returns lower than Q_\epsilon. Line 9 calls one step of an underlying batch policy optimization subroutine on the subset of trajectories from line 8. For the CVaR objective, it is important to use a good baseline for the value function. Tamar et al. (2015) show that without a baseline, the resulting policy gradient is biased and not consistent. We use a linear function as the baseline with a time-varying feature vector to approximate the value function, similar to Duan et al. (2016). The parameters of the baseline are estimated using only the subset of trajectories with return less than Q_\epsilon. We found that this approach led to empirically good results.

For small values of \epsilon, we observed that using the sub-sampling step from the beginning led to unstable learning. Policy gradient methods adjust the parameters of the policy to increase the probability of trajectories with high returns and reduce the probability of poor trajectories. EPOpt-ε, due to the sub-sampling step, emphasizes penalizing poor trajectories more. This might constrain the initial exploration needed to find good trajectories. Thus, we initially use a setting of \epsilon = 1 for a few iterations before setting \epsilon to the desired value. This corresponds to exploring initially to find promising trajectories and rapidly reducing the probability of trajectories that do not generalize.
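The ε-percentile sub-sampling at the heart of Algorithm 1 (lines 7-8) is straightforward to implement; a minimal numpy sketch, with function and variable names of our own choosing:

```python
import numpy as np

def epopt_subsample(trajectories, returns, epsilon):
    """Keep only the worst epsilon-fraction of sampled trajectories.

    Mirrors lines 7-8 of Algorithm 1: the threshold Q_eps is the
    epsilon-percentile of the N sampled returns, and only trajectories
    with return <= Q_eps are passed to the batch policy optimizer.
    """
    q_eps = np.percentile(returns, 100.0 * epsilon)
    return [traj for traj, ret in zip(trajectories, returns) if ret <= q_eps]

# Toy usage: 10 sampled trajectories with scalar returns.
trajs = [f"tau_{k}" for k in range(10)]
rets = np.array([120., 80., 95., 30., 150., 60., 45., 110., 20., 70.])
worst = epopt_subsample(trajs, rets, epsilon=0.2)
print(worst)  # the two lowest-return trajectories, tau_3 and tau_8
```

Setting epsilon=1 recovers ordinary batch policy optimization over all sampled trajectories, which is why the pre-training phase described above simply starts there.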
3.2 ADAPTING THE SOURCE DOMAIN DISTRIBUTION

In line with model-based Bayesian RL, we can adapt the ensemble distribution after observing trajectory data from the target domain. The Bayesian update can be written as:

P(p \mid \tau_k) \propto P(p) \times \prod_{t=0}^{T-1} P(s_{t+1}^{(k)} \mid s_t^{(k)}, a_t^{(k)}, p),

where the single-step transition probability P(s_{t+1} \mid s_t, a_t, p) is given by the random process T_p(s_t, a_t).

We follow a sampling based approach to calculate the posterior, by sampling a set of model parameters p_i = [p_1, p_2, ..., p_M] from a sampling distribution P_S(p_i). Consequently, using Bayes rule and importance sampling, we have:

P(p_i \mid \tau_k) \propto \frac{P_P(p_i)}{P_S(p_i)} \times L(\tau_k \mid p_i),

where P_P(p_i) is the probability of drawing p_i from the prior distribution, and L(\tau_k \mid p_i) is the likelihood of generating the observed trajectory with model parameters p_i. The weighted samples from the posterior can be used to estimate a parametric model, as we do in this paper. Alternatively, one could approximate the continuous probability distribution using discrete weighted samples, as in particle filters. In cases where the prior has very low probability density in certain parts of the parameter space, it might be advantageous to choose a sampling distribution different from the prior. The likelihood can be factored using the Markov property as L(\tau_k \mid p_i) = \prod_t P(s_{t+1} = s_{t+1}^{(k)} \mid s_t^{(k)}, a_t^{(k)}, p_i). This simple model adaptation rule allows us to illustrate the utility of EPOpt for robust policy search, as well as its integration with model adaptation to learn policies in cases where the target model could be very different from the initially assumed distribution.
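A schematic sketch of this importance-sampled update for a single uncertain parameter (torso mass); the peaked stand-in likelihood is purely illustrative, since the real likelihood scores the observed transitions under T_p:

```python
import numpy as np
from scipy.stats import norm, uniform

# Hypothetical 1-D example: the uncertain parameter is the torso mass.
prior = norm(loc=6.0, scale=1.5)          # current source distribution P_P
sampler = uniform(loc=2.0, scale=10.0)    # sampling distribution P_S over [2, 12]

def log_likelihood(mass, trajectory):
    # Placeholder: in practice this scores the observed next states under the
    # stochastic transition model T_p; here we fake a likelihood peaked around
    # a "true" target mass of 9.0 purely for illustration.
    return norm(loc=9.0, scale=0.5).logpdf(mass)

samples = sampler.rvs(size=1000, random_state=0)
log_w = (prior.logpdf(samples) - sampler.logpdf(samples)
         + np.array([log_likelihood(m, None) for m in samples]))
w = np.exp(log_w - log_w.max())  # stabilize before normalizing
w /= w.sum()

# Re-fit a Gaussian source distribution from the weighted samples.
mu = np.sum(w * samples)
sigma = np.sqrt(np.sum(w * (samples - mu) ** 2))
print(mu, sigma)
```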
"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "Supplementary video: https://youtu.be/w1YJ9vwaoto

Under-actuation, high dimensionality, and contact discontinuities make these tasks challenging reinforcement learning benchmarks. These challenges, when coupled with systematic parameter discrepancies, can quickly degrade the performance of policies and make them unstable, as we show in the experiments. The batch policy optimization sub-routine is implemented using TRPO. We parametrize the stochastic policy using the scheme presented in Schulman et al. (2015). The policy is represented with a Gaussian distribution, the mean of which is parametrized using a neural network with two hidden layers. Each hidden layer has 64 units, with a tanh non-linearity, and the final output layer is made of linear units. Normally distributed independent random variables are added to the output of this neural network, and we also learn the standard deviation of their distributions. Our experiments are aimed at answering the following questions:

1. How does the performance of standard policy search methods (like TRPO) degrade in the presence of systematic physical differences between the training and test domains, as might be the case when training in simulation and testing in the real world?
2. Does training on a distribution of models with EPOpt improve the performance of the policy when tested under various model discrepancies, and how much does ensemble training degrade overall performance (e.g. due to acquiring a more conservative strategy)?
3. How does the robustness of the policy to physical parameter discrepancies change when using the robust EPOpt-ε variant of our method?
4. Can EPOpt learn policies that are robust to unmodeled effects, that is, discrepancies in physical parameters between source and target domains that do not vary in the source domain ensemble?
5. When the initial model ensemble differs substantially from the target domain, can the ensemble be adapted efficiently, and how much data from the target domain is required for this?

In all the comparisons, performance refers to the average undiscounted return per trajectory or episode (we consider finite horizon episodic problems). In addition to the previously defined performance, we also use the 10th percentile of the return distribution as a proxy for the worst-case return."}, {"section_index": "6", "section_name": "4.1 COMPARISON TO STANDARD POLICY SEARCH", "section_text": "In Figure 1, we evaluate the performance of standard TRPO and EPOpt(ε = 0.1) on the hopper task, in the presence of a simple parametric discrepancy in the physics of the system between the training (source) and test (target) domains. The plots show the performance of various policies on test domains with different torso mass. The first three plots show policies that are each trained on a single torso mass in the source domain, while the last plot illustrates the performance of EPOpt, which is trained on a Gaussian mass distribution. The results show that no single torso mass value produces a policy that is successful in all target domains. However, the EPOpt policy succeeds almost uniformly for all tested mass values. Furthermore, the results show that there is almost no degradation in the performance of EPOpt for any mass setting, suggesting that the EPOpt policy does not suffer substantially from adopting a more robust strategy.

Figure 1: Performance of hopper policies when testing on target domains with different torso masses. The first three plots (blue, green, and red) show the performance of policies trained with TRPO on source domains with torso mass 3, 6, and 9, respectively (denoted by m = in the legend). The rightmost plot shows the performance of EPOpt(ε = 0.1) trained on a Gaussian source distribution with mean mass μ = 6 and standard deviation σ = 1.5. The shaded regions show the 10th and 90th percentile of the return distribution. Policies trained using traditional approaches on a single mass value are unstable for even slightly different masses, making the hopper fall over when trying to move forward. In contrast, the EPOpt policy is stable and achieves a high level of performance on the entire range of masses considered. Further, the EPOpt policy does not suffer from degradation in performance as a consequence of adopting a more robust policy.

Figure 2: On the left is an illustration of the simulated 2D hopper task studied in this paper. On the right, we depict the performance of policies for various model instances of the hopper task. The performance is depicted as a heat map for various model configurations, the parameters of which are given on the x and y axes. The adversarially trained policy, EPOpt(ε = 0.1), is observed to generalize to a wider range of models and is more robust.
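For concreteness, the grid-sweep protocol behind such heat maps can be sketched as follows; the synthetic average_return stand-in is ours, since the real evaluation rolls out the fixed, already-trained policy in the simulator for each model instance:

```python
import numpy as np

def average_return(policy, mass, friction, num_rollouts=10):
    """Placeholder for instantiating M(p) with the given (mass, friction)
    and averaging undiscounted episode returns under a fixed policy. A
    synthetic response is substituted so the sketch runs end to end."""
    rng = np.random.default_rng(0)
    base = 3600.0 * np.exp(-((mass - 6.0) / 3.0) ** 2
                           - ((friction - 2.0) / 0.5) ** 2)
    return base + rng.normal(scale=50.0, size=num_rollouts).mean()

masses = np.arange(3.0, 9.01, 0.5)
frictions = np.arange(1.5, 2.51, 0.1)
heatmap = np.array([[average_return(None, m, f) for m in masses]
                    for f in frictions])
print(heatmap.shape)  # one cell per (friction, mass) model instance, as in Figure 2
```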
"}, {"section_index": "7", "section_name": "4.2 ANALYSIS OF ROBUSTNESS", "section_text": "Table 1: Initial source domain distribution.

Hopper              μ      σ      low    high
mass                6.0    1.5    3.0    9.0
ground friction     2.0    0.25   1.5    2.5
joint damping       2.5    1.0    1.0    4.0
armature            1.0    0.25   0.5    1.5

Half-Cheetah        μ      σ      low    high
mass                6.0    1.5    3.0    9.0
ground friction     0.5    0.1    0.3    0.7
joint damping       1.5    0.5    0.5    2.5
armature            0.125  0.04   0.05   0.2

Next, we analyze the robustness of policies trained using EPOpt on the hopper domain. For this analysis, we construct a source distribution which varies four different physical parameters: torso mass, ground friction, foot joint damping, and joint inertia (armature). This distribution is presented in Table 1. Using this source distribution, we compare between three different methods: (1) standard policy search (TRPO) trained on a single model corresponding to the mean parameters in Table 1; (2) EPOpt(ε = 1) trained on the source distribution; (3) EPOpt(ε = 0.1), i.e. the adversarially trained policy, again trained on the previously described source distribution. The aim of the comparison is to study direct-transfer performance, similar to the robustness evaluations common in robust controller design (Zhou et al., 1996). Hence, we learn a policy using each of the methods, and then test the policies on different model instances (i.e. different combinations of physical parameters) without any adaptation. The results of this comparison are summarized in Figure 2, where we present the performance of the policy for testing conditions corresponding to different torso mass and friction values, which we found to have the most pronounced impact on performance. The results indicate that EPOpt(ε = 0.1) produces highly robust policies. A similar analysis for the 10th percentile of the return distribution (a softer version of worst-case performance), the half-cheetah task, and different ε settings is presented in the appendix.

Figure 3: Comparison between policies trained on a fixed maximum-likelihood model with mass (6), and an ensemble where all models have the same mass (6) and other parameters varying as described in Table 1."}, {"section_index": "8", "section_name": "4.3 ROBUSTNESS TO UNMODELED EFFECTS", "section_text": "To analyze the robustness to unmodeled effects, our next experiment considers the setting where the source domain distribution is obtained by varying friction, damping, and armature as in Table 1, but does not consider a distribution over torso mass. Specifically, all models in the source domain distribution have the same torso mass (value of 6), but we will evaluate the policy trained on this distribution on target domains where the torso mass is different. Figure 3 indicates that the EPOpt(ε = 0.1) policy is robust to a broad range of torso masses even when its variation is not considered. However, as expected, this policy is not as robust as the case when mass is also modeled as part of the source domain distribution.
"}, {"section_index": "9", "section_name": "4.4 MODEL ADAPTATION", "section_text": "The preceding experiments show that EPOpt can find robust policies, but the source distribution in these experiments was chosen to be broad enough such that the target domain is not too far from high-density regions of the distribution. However, for real-world problems, we might not have the domain knowledge to identify a good source distribution in advance. In such settings, model (source) adaptation allows us to change the parameters of the source distribution using data gathered from the target domain. Additionally, model adaptation is helpful when the parameters of the target domain could change over time, for example due to wear and tear in a physical system. To illustrate model adaptation, we performed an experiment where the target domain was very far from the high density regions of the initial source distribution, as depicted in Figure 4(a). In this experiment, the source distribution varies the torso mass and ground friction. We observe that, progressively, the source distribution becomes a better approximation of the target domain and consequently the performance improves. In this case, since we followed a sampling based approach, we used a uniform sampling distribution, and weighted each sample with the importance weight as described in Section 3.2. Eventually, after 10 iterations, the source domain distribution is able to accurately match the target domain. Figure 4(b) depicts the learning curve, and we see that a robust policy with return more than 2500, which roughly corresponds to a situation where the hopper is able to move forward without falling down for the duration of the episode, can be discovered with just 5 trajectories from the target domain. Subsequently, the policy improves near monotonically, and EPOpt finds a good policy with just 11 episodes worth of data from the target domain. In contrast, to achieve the same level of performance on the target domain, completely model-free methods like TRPO would require more than 2 × 10^4 trajectories when the neural network parameters are initialized randomly.

Figure 4: (a) Visualizes the source distribution during model adaptation on the hopper task, where mass and friction coefficient are varied in the source domain. The red cross indicates the unknown parameters of the target domain. The contours in the plot indicate the distribution over models (we assume a Gaussian distribution). Lighter colors and more concentrated contour lines indicate regions of higher density. Each iteration corresponds to one round (episode) of interaction with the target domain. The high-density regions gradually move toward the true model, while maintaining probability mass over a range of parameters which can explain the behavior of the target domain. Figure 4(b) presents the corresponding learning curve, where the shaded region describes the 10th and 90th percentiles of the performance distribution, and the solid line is the average performance."}, {"section_index": "10", "section_name": "5 RELATED WORK", "section_text": "Robust control is a branch of control theory which formally studies the development of robust policies (Zhou et al., 1996; Nilim & Ghaoui, 2005; Lim et al., 2013). However, typically no distribution over source or target tasks is assumed, and a worst case analysis is performed.
Most results from this field have been concentrated around linear systems or finite MDPs, which often cannot adequately model the complexities of real-world tasks. The set-up of model-based Bayesian RL maintains a belief over models for decision making under uncertainty (Vlassis et al., 2012; Ghavamzadeh et al., 2015). In Bayesian RL, through interaction with the target domain, the uncertainty is reduced to find the correct or closest model. Application of this idea in its full general form is difficult, and requires either restrictive assumptions like finite MDPs (Poupart et al., 2006), Gaussian dynamics (Ross et al., 2008), or task-specific innovations. Previous methods have also suggested treating uncertain model parameters as unobserved state variables in a continuous POMDP framework, and solving the POMDP to get the optimal exploration-exploitation trade-off (Duff, 2003; Porta et al., 2006). While this approach is general, and allows automatic learning of epistemic actions, extending such methods to large continuous control tasks like those considered in this paper is difficult.

Risk sensitive RL methods (Delage & Mannor, 2010; Tamar et al., 2015) have been proposed to act as a bridge between robust control and Bayesian RL. These approaches allow for using subjective model belief priors, prevent overly conservative policies, and enjoy some strong guarantees typically associated with robust control. However, their application to high dimensional continuous control tasks has not been sufficiently explored. We refer readers to Garcia & Fernandez (2015) for a survey of related risk sensitive RL methods in the context of robustness and safety.

Standard model-based control methods typically operate by finding a maximum-likelihood estimate of the target model (Ljung, 1998; Ross & Bagnell, 2012; Deisenroth et al., 2013), followed by policy optimization. Use of model ensembles to produce robust controllers was explored recently in robotics. Mordatch et al. (2015a) use a trajectory optimization approach and an ensemble with a small finite set of models; whereas we follow a sampling based direct policy search approach over a continuous distribution of uncertain parameters, and also show domain adaptation. Sampling based approaches can be applied to complex models and discrete MDPs which cannot be planned through easily. Similarly, Wang et al. (2010) use an ensemble of models, but their goal is to optimize for average case performance as opposed to transferring to a target MDP. Wang et al. (2010) use a hand-engineered policy class whose parameters are optimized with CMA-ES. EPOpt, on the other hand, can optimize expressive neural network policies directly. In addition, we show model adaptation, effectiveness of the sub-sampling step (ε < 1 case), and robustness to unmodeled effects, all of which are important for transferring to a target MDP.

Learning of parametrized skills (da Silva et al., 2012) is also concerned with finding policies for a distribution of parametrized tasks. However, this is primarily geared towards situations where task parameters are revealed during test time. Our work is motivated by situations where target task parameters (e.g. friction) are unknown. A number of methods have also been suggested to reduce sample complexity when provided with either a baseline policy (Thomas et al., 2015; Kakade & Langford, 2002), expert demonstration (Levine & Koltun, 2013; Argall et al., 2009), or an approximate simulator (Tamar et al., 2012; Abbeel et al., 2006).
These are complementary to our work, in the sense that our policy, which has good direct-transfer performance, can be used to sample from the target domain, and other off-policy methods could be explored for policy improvement."}, {"section_index": "11", "section_name": "6 CONCLUSIONS AND FUTURE WORK", "section_text": "In this paper, we presented the EPOpt-ε algorithm for training robust policies on ensembles of source domains. Our method provides for training of robust policies, and supports an adversarial training regime designed to provide good direct-transfer performance. We also describe how our approach can be combined with Bayesian model adaptation to adapt the source domain ensemble to a target domain using a small amount of target domain experience. Our experimental results demonstrate that the ensemble approach provides for highly robust and generalizable policies in fairly complex simulated robotic tasks. Our experiments also demonstrate that Bayesian model adaptation can produce distributions over models that lead to better policies on the target domain than more standard maximum likelihood estimation, particularly in the presence of unmodeled effects.

Although our method exhibits good generalization performance, the adaptation algorithm we use currently relies on sampling the parameter space, which is computationally intensive as the number of variable physical parameters increases. We observed that (adaptive) sampling from the prior leads to fast and reliable adaptation if the true model does not have very low probability in the prior. However, when this assumption breaks, we require a different sampling distribution which could produce samples from all regions of the parameter space. This is a general drawback of Bayesian adaptation methods. In future work, we plan to explore alternative sampling and parameterization schemes, including non-parametric distributions. An eventual end-goal would be to replace the physics simulator entirely with learned Bayesian neural network models, which could be adapted with limited data from the physical system. These models could be pre-trained using physics-based simulators like MuJoCo to get a practical initialization of neural network parameters. Such representations are likely useful when dealing with high dimensional inputs like simulated vision from rendered images, or tasks with complex dynamics like deformable bodies, which are needed to train highly generalizable policies that can successfully transfer to physical robots acting in the real world."}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Emo Todorov, Sham Kakade, and students of Emo Todorov's research group for insightful comments about the work. The authors would also like to thank Emo Todorov for the MuJoCo simulator. Aravind Rajeswaran and Balaraman Ravindran acknowledge financial support from ILDS, IIT Madras."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Pieter Abbeel, Morgan Quigley, and Andrew Y. Ng. Using inaccurate models in reinforcement learning. In ICML, 2006.

Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469-483, 2009.

Erick Delage and Shie Mannor. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research, 58(1):203-213, 2010.
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In ICML, 2016.

Michael O. Duff. Design for an optimal probe. In ICML, 2003.

Tom Erez, Yuval Tassa, and Emanuel Todorov. Infinite-horizon model predictive control for periodic tasks with contacts. In Proceedings of Robotics: Science and Systems, 2011.

Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, and Aviv Tamar. Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 8(5-6):359-483, 2015.

Sham Kakade. A natural policy gradient. In NIPS, 2001.

Sham Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.

Javier Garcia and Fernando Fernandez. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 2015.

Sergey Levine and Vladlen Koltun. Guided policy search. In ICML, 2013.

Lennart Ljung. System Identification, pp. 163-173. Birkhauser Boston, Boston, MA, 1998.

Igor Mordatch, Kendall Lowrey, Galen Andrew, Zoran Popovic, and Emanuel V. Todorov. Interactive control of diverse complex characters with neural networks. In NIPS, 2015b.

Arnab Nilim and Laurent El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780-798, 2005.

Josep M. Porta, Nikos A. Vlassis, Matthijs T. J. Spaan, and Pascal Poupart. Point-based value iteration for continuous POMDPs. Journal of Machine Learning Research, 7:2329-2367, 2006.

Stephane Ross and Drew Bagnell. Agnostic system identification for model-based reinforcement learning. In ICML, 2012.

John Schulman, Sergey Levine, Philipp Moritz, Michael Jordan, and Pieter Abbeel. Trust region policy optimization. In ICML, 2015.

Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10:1633-1685, December 2009.

Nikos Vlassis, Mohammad Ghavamzadeh, Shie Mannor, and Pascal Poupart. Bayesian Reinforcement Learning, pp. 359-386. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012."}, {"section_index": "14", "section_name": "Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In ICML, 2002.", "section_text": "Volodymyr Mnih et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, Feb 2015.

Pascal Poupart, Nikos A. Vlassis, Jesse Hoey, and Kevin Regan. An analytic solution to discrete Bayesian reinforcement learning. In ICML, 2006.

Pawel Wawrzynski. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural Networks, 22:1484-1497, 2009.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229-256, 1992.

Kemin Zhou, John C. Doyle, and Keith Glover. Robust and Optimal Control. Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1996. ISBN 0-13-456567-3."}, {"section_index": "15", "section_name": "A.1 DESCRIPTION OF SIMULATED ROBOTIC TASKS CONSIDERED IN THIS WORK", "section_text": "Hopper: The hopper task is to make a 2D planar hopper with three joints and 4 body parts hop forward as fast as possible (Erez et al., 2011). This problem has a 12 dimensional state space and a 3 dimensional action space that corresponds to torques at the joints. We construct the source domain by considering a distribution over 4 parameters: torso mass, ground friction, armature (inertia), and damping of foot.

Half-Cheetah: The half-cheetah task (Wawrzynski, 2009) requires us to make a 2D cheetah with two legs run forward as fast as possible.
The simulated robot has 8 body links with an 18 dimensional state space and a 6 dimensional action space that corresponds to joint torques. Again, we construct the source domain using a distribution over the following parameters: torso and head mass, ground friction, damping, and armature (inertia) of foot joints.

Figure 5: Illustrations of the 2D simulated robot models used in the experiments. The hopper (a) and half-cheetah (b) tasks present the challenges of under-actuation and contact discontinuities. These challenges, when coupled with parameter uncertainties, lead to dramatic degradation in the quality of policies when robustness is not explicitly considered.

A video demonstration of the trained policies on these tasks can be viewed here: Supplementary video (https://youtu.be/w1YJ9vwaoto).

Reward functions: For both tasks, we used the standard reward functions implemented with OpenAI gym (Brockman et al., 2016), with minor modifications. The reward structure for the hopper task is:

r(s, a) = v_x - 0.001 \|a\|^2 + b,

where s are the states comprising of joint positions and velocities; a are the actions (controls); and v_x is the forward velocity. b is a bonus for being alive (b = 1). The episode terminates when z_torso < 0.7 or when |\theta_y| >= 0.2 (i.e., when the alive condition is violated), where \theta_y is the forward pitch of the body.

For the cheetah task, we use the reward function:

r(s, a) = v_x - 0.1 \|a\|^2 + b,

where the alive bonus b is 1 if the head of the cheetah is above -0.25 (relative to the torso), and similarly the episode terminates if the alive condition is violated.

Our implementation of the algorithms and environments is public in this repository to facilitate reproduction of results: https://github.com/aravindr93/robustRL.
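A minimal Python sketch of the hopper reward and termination logic described above (the function names are ours):

```python
import numpy as np

def hopper_reward(action, forward_velocity, alive_bonus=1.0):
    # r(s, a) = v_x - 0.001 * ||a||^2 + b
    return forward_velocity - 0.001 * np.sum(np.square(action)) + alive_bonus

def hopper_done(z_torso, pitch):
    # Episode terminates when the alive condition is violated.
    return (z_torso < 0.7) or (abs(pitch) >= 0.2)

def cheetah_reward(action, forward_velocity, head_above_threshold):
    # r(s, a) = v_x - 0.1 * ||a||^2 + b, with b = 1 only while alive.
    alive_bonus = 1.0 if head_above_threshold else 0.0
    return forward_velocity - 0.1 * np.sum(np.square(action)) + alive_bonus
```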
Hyperparameter settings:

1. Neural network architecture: We used a neural network with two hidden layers, each with 64 units and a tanh non-linearity. The policy updates are implemented using TRPO.
2. Trust region size in TRPO: The maximum KL divergence between successive policy updates is constrained to be 0.01.
3. Number and length of trajectory rollouts: In each iteration, we sample N = 240 models from the ensemble, and one rollout is performed on each such model. This was implemented in parallel on multiple (6) CPUs. Each trajectory is of length 1000, the same as the standard implementations of these tasks in gym and rllab.

The results in Fig 1 and Fig 2 were generated after 150 and 200 iterations of TRPO respectively, with each iteration consisting of 240 trajectories as specified in (3) above.

Figure 2 illustrates the performance of the three considered policies: viz. TRPO on mean parameters, EPOpt(ε = 1), and EPOpt(ε = 0.1). We similarly analyze the 10th percentile of the return distribution as a proxy for worst-case analysis, which is important for a robust control policy (here, the distribution of returns for a given model instance is due to variations in initial conditions). The corresponding results are presented below:

Figure 6: 10th percentile of return distribution for the hopper task. EPOpt(ε = 0.1) clearly outperforms the other approaches. The 10th percentile of the return distribution for EPOpt(ε = 0.1) also nearly overlaps with the expected return, indicating that the policies trained using EPOpt(ε = 0.1) are highly robust and reliable.

A.4 ROBUSTNESS ANALYSIS FOR HALF-CHEETAH TASK

Figure 7: Performance of policies for various model instances for the half-cheetah domain, similar to Figure 2. Again, it is observed that the adversarially trained policy is robust and generalizes well to all models in the source distribution."}, {"section_index": "16", "section_name": "A.5 DIFFERENT SETTINGS FOR e", "section_text": "Here, we analyze how different settings for ε influence the robustness of learned policies. The policies in this section have been trained for 200 iterations with 240 trajectory samples per iteration. Similar to the description in Section 3.1, the first 100 iterations use ε = 1, and the final 100 iterations use the desired ε. The source distribution is described in Table 1. We test the performance on a grid over the model parameters. Our results, summarized in Table 2, indicate that decreasing ε decreases the variance in performance, along with a small decrease in average performance, and hence enhances robustness.

Table 2: Performance statistics for different ε settings for the hopper task.

           Performance (Return)        Percentiles
ε          mean    std     5      10     25     50     75     90
0.05       2889    502     1662   2633   2841   2939   2966   3083
0.1        3063    579     1618   2848   3223   3286   3336   3396
0.2        3097    665     1527   1833   3259   3362   3423   3483
0.3        3121    706     1461   1635   3251   3395   3477   3513
0.4        3126    869     1013   1241   3114   3412   3504   3546
0.5        3122    1009    984    1196   1969   3430   3481   3567
0.75       3133    952     1005   1516   2187   3363   3486   3548
1.0        3224    1060    1198   1354   1928   3461   3557   3604
Max-Lik    1710    1140    352    414    646    1323   3088   3272"}, {"section_index": "17", "section_name": "A.6 IMPORTANCE OF BASELINE FOR BATCHPOLOPT", "section_text": "As described in Section 3.1, it is important to use a good baseline estimate of the value function for the batch policy optimization step. When optimizing for the expected return, we can interpret the baseline as a variance reduction technique. Intuitively, policy gradient methods adjust the parameters of the policy to improve the probability of trajectories in proportion to their performance. By using a baseline for the value function, we make updates that increase the probability of trajectories that perform better than average and vice versa. In practice, this variance reduction is essential for getting policy gradients to work. For the CVaR case, Tamar et al. (2015) showed that without using a baseline, the policy gradient is biased. To study the importance of the baseline, we first consider the case where
To study importance of the baseline, we first consider the case wher. we do not employ the adversarial sub-sampling step, and fix e = 1. We use a linear baseline with a. time-varying feature vector as described in Section 3.1. Figure[8(a) depicts the learning curve for the. source distribution in Table[1 The results indicate that use of a baseline is important to make policy. gradients work well in practice..\nNext, we turn to the case of e < 1. As mentioned in section 3.1, setting a low e from the start lead. to unstable learning. The adversarial nature encourages penalizing poor trajectories more, which. constrains the initial exploration needed to find promising trajectories. Thus we will \"pre-train\"' by. using e = 1 for some iterations, before switching to the desired e setting. From Figure 8(a), it i. clear that pre-training without a baseline is unlikely to help, since the performance is poor. Thus, we use the following setup for comparison: for 100 iterations, EPOpt(e = 1) is used with the baseline Subsequently, we switch to EPOpt(e = 0.1) and run for another 100 iterations, totaling 200 iterations The results of this experiment are depicted in Figure[8(b). This result indicates that use of a baseline. is crucial for the CVaR case, without which the performance degrades very quickly. We repeatec the experiment with 100 iterations of pre-training with e = 1 and without baseline, and observed the same effect. These empirical results reinforce the theoretical findings of Tamar et al.(2015).\nAs emphasized previously, EPOpt is a generic policy gradient based meta algorithm for finding robust policies. The BatchPolOpt step (line 9, Algorithm 1) calls one gradient step of a policy gradient method, the choice of which is largely orthogonal to the main contributions of this paper. For the\n3500 3500 3000 3000 Prnnrnmee 2500 2500 Prrnrnnnee 2000 2000 1500 1500 1000 1000 500 500 EPOpt(e= 1) with baseline 0 EPOpt(e= 1) without baseline 0 50 100 150 200 0 Iterations 0 50 100 150 200 EPOpt(e =1) with baseline Iterations EPOpt(e= 0.1) with baseline : EPOpt(e=0.1) without baseline (a) (b)\n3000 2500 Prnnrmnee 2000 1500 1000 500 0\nFigure 8: (a) depicts the learning curve for EPOpt(e = 1) with and without baselines. The learning curves indicate that use of a baseline provides a better ascent direction, thereby enabling faster learning. Figure[8(b) depicts the learning curve when using the average return and CVaR objectives For the comparison, we \"pre-train' for 100 iterations with e = 1 setting and using a baseline. The results indicates that a baseline is very important for the CVaR objective (e < 1), without which the performance drops very quickly. Here, performance is the average return in the source distribution\n3500 3000 2500 nee Prrnmnnn 2000 EPOpt(e =1) with TRPO EPOpt(e =1) with REINFORCE 1500 1000 500 0 0 50 100 150 200 Iterations\n3500 3000 2500 Prnnmnmnee 2000 EPOpt(e =1) with TRPO EPOpt(e =1) with REINFORCE 1500 1000 500 0 50 100 150 200 Iterations\nFigure 9: Learning curves for EPOpt(e = 1) when using the TRPO and REINFORCE methods fo the BatchPolOpt step.\nreported results, we have used TRPO as the policy gradient method. Here, we compare the results t the case when using the classic REINFORCE algorithm. For this comparison, we use the same valu function baseline parametrization for both TRPO and REINFORCE. Figure[9|depicts the learning curve when using the two policy gradient methods. We observe that performance with TRPO i significantly better. 
"}]
B1gtu5ilg
| [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The power of the human mind in inference and generalization rests on our brain's ability to develop models of abstract knowledge of the natural world (Tenenbaum et al., 2011). When shown novel objects, both children and adults can rapidly generalize from just a few examples to classify and group them based on their perceptual similarity. Understanding the processes that give rise to perceptual similarity will provide insight into the development of abstract models in our brain. In this paper, we explored computational models for understanding the neural basis of human perceptual similarity judgment.

Recent deep convolutional neural networks (DCNNs) have produced feature representations in the hidden layers that can match well with neural representations observed in the primate and human visual cortex. It was found that there is a strong correspondence between neural activities (neuronal spikes or fMRI signals) and the activities of the deep layers of deep networks (Agrawal et al., 2014; Khaligh-Razavi & Kriegeskorte, 2014; Yamins et al., 2014), suggesting that deep neural networks have in fact learned meaningful representations that are close to humans', even though the neural"}, {"section_index": "1", "section_name": "TRANSFER OF VIEW-MANIFOLD LEARNING TO SIMILARITY PERCEPTION OF NOVEL OBJECTS", "section_text": "Zhihao Li, Yimeng Zhang
Department of Computer Science, Carnegie Mellon University
{zhihaol, yimengzh}@andrew.cmu.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "networks are trained for object classification in computer vision. Cognitive neuroscientists have started to explore how the representations learned by deep networks can be used to model various aspects of human perception, such as memorability of objects in images (Dubey et al., 2015), object typicality (Lake et al., 2015), and similarity judgment (Peterson et al., 2016; Kubilius et al., 2016). Certain correspondences between deep net representations and human experimental results are found. In particular, Peterson et al. (2016) found that human similarity judgment on a set of natural images might be similar to the feature representations in deep networks after some transformation.

The DCNNs that neuroscientists and cognitive scientists have studied so far, such as AlexNet (Krizhevsky et al., 2012), were trained with static images with the goal of classifying objects in static images into different categories. Perceptual similarity judgment is obviously closely related to the mechanisms used in object classification: we classify objects with similar attributes and appearances into the same class, and thus object classification rests in part on our perceptual similarity judgment and relies on physical, semantic, abstract attributes common to objects in each class. Our perceptual similarity judgment might also be tied to our need for individual object recognition: after all, we might want to recognize an individual person or object, not just a class. It is obviously important to be able to recognize one's own child or the cup one is using. The need to recognize an individual object, independent of view points, requires fine discrimination of details, and might also be a very potent force for shaping our perceptual similarity judgment's machinery.

The development of invariant object recognition has often been attributed to object continuity or persistence in our visual experience. When we see an object, we tend to see it from different angles over time, as we walk by or around it, or directly manipulate it. This temporal persistence of objects allows our visual system to associate one view of an object with another view of the same object experienced in temporal proximity, as was proposed in slow-feature analysis (Wiskott & Sejnowski, 2002) or memory trace models (Perry et al., 2006) in computational neuroscience for learning translation and rotation invariance in object recognition. Object persistence as a term in psychology sometimes refers to people's knowledge or belief in the continual existence of an object even when it is occluded and invisible from view. Here, we use it more generally to denote the temporal persistence of an object in our visual experience. We propose to incorporate the object persistence constraint in the training of DCNNs, and investigate what new abstraction and capability such a network would develop as a consequence. We also evaluate the behaviors of the resulting network to see if they match the data on human perceptual similarity judgment of novel objects in an earlier study (Tenenbaum et al., 2011).

We retrain a DCNN with object persistence constraints, using rendered 3D objects. We call this retrained network Object Persistence Net (OPnet).
During training, we utilize a Siamese network architecture for incorporating object persistence constraints into the network. We demonstrated that multi-view association training with a relatively small set of objects directly affects similarity judgment across many classes of objects, including novel objects that the network has not seen before. Our contribution is to demonstrate the surprising transfer of learning of similarity judgment to untrained classes of objects and a variety of completely artificial novel objects. We analyze the view-manifold fine-tuned with object persistence constraints to understand what changes have taken place in the feature representation of the OPnet that have resulted in the development of this remarkable transfer of perceptual similarity judgment to novel objects.

Creating large sets of human-labeled data on object similarity judgement is expensive. There has been a recent trend in exploring inherent information as supervisory signal, including using cycle consistency for learning dense correspondence (Zhou et al., 2015), camera motion for foreground segmentation (Zeng et al., 2016) and context information (Doersch et al., 2015). Among these, most related to our study is the work of Wang & Gupta (2015) utilizing visual tracking results as supervisory signals, which is an object persistence or continuity assumption, to learn deep networks without explicit object labels. While the tracked patches can be partly regarded as multi-view images, the changes in views tend to be very limited. In comparison, we used graphics-rendered multi-view images as the object persistence constraint. Such a clean setup is necessary for us to study the effect of the object persistence constraint on novel objects, as well as the transferability of view-manifold learning to similarity perception.

The development of invariant object recognition has often been attributed to object continuity or persistence in our visual experience. When we see an object, we tend to see it from different angles over time, as we walk by or around it, or directly manipulate it. This temporal persistence of objects allows our visual system to associate one view of an object with another view of the same object experienced in temporal proximity, as was proposed in slow-feature analysis (Wiskott & Sejnowski, 2002) or memory trace models (Perry et al., 2006) in computational neuroscience for learning translation and rotation invariance in object recognition. Object persistence as a term in psychology sometimes refers to people's knowledge or belief in the continual existence of an object even when it is occluded and invisible from view. Here, we use it more generally to denote the temporal persistence of an object in our visual experience. We propose to incorporate the object continuity or persistence constraint in the training of a DCNN, and investigate what new abstraction and capability such a network would develop as a consequence. We also evaluate the behaviors of the resulting network to see if they match the data on human perceptual similarity judgment of novel objects in an earlier study (Tenenbaum et al., 2011).

Recent approaches in representation learning of 3D shapes are also related to our work. Generative models such as (Wu et al., 2016) and (Tatarchenko et al., 2015) learn a vector representation for generation of 3D shapes.
Other approaches learn an embedding space for multi-view object retrieval (Guo et al., 2016) or for cross-view image and shape retrieval (Li et al., 2015). While these works explored training with multi-view images, they did not constrain the view points in a continuous way and, most importantly, the transferability to judgement of novel objects of novel classes was not studied. We evaluate the performance of the approach of Li et al. (2015) in our tasks for comparison. That approach learned an embedding space of 3D shapes and used a CNN for image embedding for the purpose of image purification.

Figure 1: Framework for training and testing the network utilizing object persistence. For training (upper panel) we first render multiple views for each object and arrange them into triplets containing a similar pair and a dissimilar pair as input to a Siamese network architecture. For testing (lower panel), when given a query image the network computes a similarity score for each of the candidate images. The lower panel shows some example similarity scores given by our OPnet, where different views of the same object are considered the most similar, followed by different objects in the same category, and finally objects belonging to different categories, with the least similarity to the query image.

We take a standard CNN (AlexNet), which has already learned good feature representations for object classification, and retrain the network in a Siamese triplet architecture with object persistence constraints using multi-view images rendered from a set of 3D object models in ShapeNet."}, {"section_index": "3", "section_name": "2.1 OBJECT PERSISTENT NET (OPNET)", "section_text": "To study the impact of the object persistence constraint on the development of perceptual similarity judgment, OPnet utilizes a Siamese triplet architecture. This triplet architecture can be visualized as three baseline CNN towers that share the same parameters (Figure 1). In implementation, it is just one single CNN applied to three images, two of which are considered more \"similar\" than the third \"different\" one. Conceptually, our OPnet tries to bring the feature representations of the two \"similar\" images together, and drive apart the representation corresponding to the third \"different\" image. The architecture and the initial weights of the baseline CNN are the same as those of AlexNet trained on ImageNet (Deng et al., 2009). To train our OPnet with a triplet input (X_i, X_i^+, X_i^-), we present two views of the same 3D object to two base networks as (X_i, X_i^+), and a view of a different object to the third base network as X_i^-. Object persistence means that given (X_i, X_i^+, X_i^-), we try to push the representations for views of the same object (X_i, X_i^+) to be close and move them away from the representation for the different object X_i^-. We minimize the loss function with a hinge loss term:

$$\min_W \frac{\lambda}{2}\|W\|_2^2 + \sum_{i=1}^{N} \max\left\{0,\, D(X_i, X_i^+) - D(X_i, X_i^-) + M\right\}$$

$$D(X_1, X_2) = 1 - \frac{f(X_1) \cdot f(X_2)}{\|f(X_1)\|\, \|f(X_2)\|}$$

where λ is the weight decay and W denotes the weights of the network. f(·) is the CNN representation output as a function of an input image, and M denotes the margin parameter. The margin is a threshold to decide whether the two views are considered similar or not. The higher the margin, the more we are forcing the network to develop a uniform representation for multiple views of the same object relative to views of another object. D is the cosine distance function for a pair of features.
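To make the objective concrete, below is a minimal numpy sketch of the cosine-distance triplet hinge term defined above. The function names are ours, the feature extractor f is abstracted to arbitrary feature vectors, and the weight-decay term (λ/2)||W||² is assumed to be handled by the optimizer.

```python
import numpy as np

def cosine_distance(a, b):
    # D(X1, X2) = 1 - f(X1).f(X2) / (||f(X1)|| ||f(X2)||)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def triplet_hinge(f_anchor, f_pos, f_neg, margin=0.1):
    """Hinge term for one triplet (X_i, X_i^+, X_i^-) of feature vectors.
    The term is zero once the negative is at least `margin` farther
    (in cosine distance) from the anchor than the positive is."""
    return max(0.0, cosine_distance(f_anchor, f_pos)
                    - cosine_distance(f_anchor, f_neg) + margin)

# Toy check: a nearby positive view and a distant negative view.
anchor = np.array([1.0, 0.0, 0.0])
pos = np.array([0.9, 0.1, 0.0])   # another view of the same object
neg = np.array([0.0, 1.0, 0.0])   # a view of a different object
print(triplet_hinge(anchor, pos, neg))  # ~0.0: this triplet is already satisfied
```

In training, M = 0.1 is the cross-validated margin reported in Section 2.5, and the hinge is summed over all triplets in a mini-batch.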
The different objects in principle could be from the same category or from different categories. During training, we constrain the \"different object\" to be another 3D object from the same category, to push apart more forcefully the feature representations of objects from the same category, resulting in view-invariant object discrimination within the same category. We expect the result of this training to create a view-manifold for each individual object---views within the same manifold are considered to be \"similar\" and closer together because they belong to the same object."}, {"section_index": "4", "section_name": "2.2 DISTANCE METRIC LEARNING", "section_text": "DCNNs, such as AlexNet, pre-trained on large datasets, have developed useful feature representations that can be fine-tuned for other specific tasks (Donahue et al., 2014; Qian et al., 2015; Karpathy et al., 2014). However, the pre-training of a DCNN involves class labels as teaching signals. During pretraining, the network learns to throw away much information to extract invariants for classification. On the other hand, DML approaches are able to develop feature representations that preserve more fine-grained features, as well as intra- and inter-class variations.

Our Siamese triplet approach transforms the view-manifold of the original baseline network, so that different views of the same object are considered similar and become closer in the feature representation space. Thus, it can be viewed as a form of distance metric learning (DML), which is a set of methods that learn a transformation from the input space to a feature space. The Siamese network has been a popular distance metric learning method, used in signature verification (Bromley et al., 1993), learning invariant mapping (Hadsell et al., 2006), face verification (Chopra et al., 2005), unsupervised learning (Wang & Gupta, 2015) and image similarity ranking (Wang et al., 2014). In these works, the definition of similarity for DML comes from semantic labeling like class labels. In our work, the similarity is defined by the object persistence constraints, obtained during the rendering of 3D models and providing a continuous trajectory for each single object. Besides, the large variation of the 2D appearance induced by 3D rotation prevents our network from learning trivial global templates, but induces it to learn features that are more generalized and thus transfer more easily to novel objects.

To allow the network to learn features under the object persistence constraints and develop a similarity judgment that can transfer, we create one set of data for training and five sets of novel objects for testing of the transferability. To focus our study on the network's ability to perceive 3D spatial relations and features of individual objects, we grayscale our images during rendering to eliminate the impact of color. For the same reason, we do not add any backgrounds.

We render multi-view images of individual objects from 7K 3D CAD models of objects in ShapeNet (Chang et al., 2015). The 7K models belong to 55 categories, such as cars and chairs. For each model we render 12 different views by rotating the cameras along the equator at a 30° elevation angle and taking photos of the object at 12 equally separated azimuthal angles (see Fig. 1). We use the rendering pipeline in Blender, an open source 3D graphics software, with a spotlight that is static relative to the camera.

For training, we sample 200 object models from 29 categories of ShapeNet. 20 of these object models from each category are saved for cross validation. For testing, we make the assumptions that (1) views of the same object are perceived to be more similar when compared to views of a different object, and (2) views of objects in the same category are perceived to be more similar than views of objects from different categories. These assumptions are consistent with findings in earlier studies on similarity judgment in humans (Quiroga et al., 2005; Erdogan et al., 2014; Goldstone, 2013). Since we render images based on CAD models, we can control the variations to create a large dataset that can approximate ground-truth data for similarity judgment for our experiments without resorting to large-scale human judgment evaluation.
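As a rough illustration of the rendering geometry described above, the 12 camera centers at a 30° elevation and equally spaced azimuths can be computed as follows; the camera radius and all names are our own illustrative choices, not values from the paper.

```python
import numpy as np

def camera_centers(n_views=12, elevation_deg=30.0, radius=2.0):
    """Camera positions on a circle at fixed elevation, equally spaced in
    azimuth, all assumed to look at the object at the origin."""
    elev = np.deg2rad(elevation_deg)
    az = np.deg2rad(np.arange(n_views) * 360.0 / n_views)
    x = radius * np.cos(elev) * np.cos(az)
    y = radius * np.cos(elev) * np.sin(az)
    z = np.full(n_views, radius * np.sin(elev))
    return np.stack([x, y, z], axis=1)  # shape (n_views, 3)

print(camera_centers().round(3))
```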
All the objects in the following five test sets are novel objects in the sense that they are not used in training.

Novel instance: Created by rendering an additional 20 novel objects from each of the 29 categories used in training the OPnet. This is used to test the transfer of view-manifold learning to novel objects of the same category. The task is not trivial due to the large intra-category variation existing in ShapeNet.

Novel category: Created by rendering objects from 26 untrained categories. This is a more challenging test of the transfer of view-manifold learning to novel categories.

Synthesized objects: Created by rendering a set of 3D models we synthesized. These are textureless objects with completely novel shapes. The dataset consists of 5 categories, with 10 instances for each category. Within each category, the objects either have similar local parts, or have the same global configuration, based on human judgment. This is an even more challenging test, as these synthesized objects are in neither ImageNet nor ShapeNet.

Pokemon: Created from 3D models of a Pokemon dataset. Pokemons are cartoon characters with certain evolution relationships with each other, which provides an alternative measurement of similarity. This test evaluates the transfer of learning to novel objects with different styles and more complicated textures. We collected 438 CAD models of Pokemon from an online database. We divide these models into 251 categories according to their evolution relationships, with most of these categories containing only 2 to 4 objects. Pokemons of the same category look more similar on average due to their \"genetic linkage\".

The similarity score between a query image and a candidate image is computed as 1 minus the cosine distance of the feature representations of the query and candidate pair; a higher score means higher similarity. Given a test set containing objects of multiple categories, we evaluate the OPnet via two retrieval tasks: object instance retrieval and categorical retrieval. In the object instance retrieval task, for each image P containing object O of category C in the test set, the network is asked to rank all other images in C, such that images of O should have higher similarity scores than images of other objects in C. In the categorical retrieval task, for each image P of category C, the network is asked to rank all other images, such that images in category C should have higher scores than images not in C.
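The scoring and ranking just described reduce to cosine similarity over feature vectors; a minimal sketch (feature extraction is abstracted away, and the names are ours):

```python
import numpy as np

def similarity(query_feat, candidate_feats):
    """Similarity score = 1 - cosine distance = cosine similarity."""
    q = query_feat / np.linalg.norm(query_feat)
    C = candidate_feats / np.linalg.norm(candidate_feats, axis=1, keepdims=True)
    return C @ q  # one score per candidate; higher means more similar

def retrieve(query_feat, candidate_feats):
    """Indices of candidates from most to least similar to the query."""
    return np.argsort(-similarity(query_feat, candidate_feats))

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4096))   # e.g. fc7 features of 6 candidate images
print(retrieve(feats[0] + 0.01 * rng.normal(size=4096), feats))  # 0 ranks first
```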
Here we are indirectly utilizing human perception information, as categories are defined by human perception based on their similarity in shapes or functions."}, {"section_index": "5", "section_name": "2.5 IMPLEMENTATION DETAILS", "section_text": "We use Caffe (Jia et al., 2014) for training the networks. The base network of the OPnet is modified from the AlexNet architecture, where we drop the last fully connected layer (fc8) and replace the softmax loss with our triplet hinge loss. The network is initialized by weights pre-trained on ImageNet. The objective is optimized using mini-batch stochastic gradient descent (SGD) and we fine-tune the network for all layers. For each positive pair (X_i, X_i^+), we select two hard negative examples X_i^- which give the highest loss (similar to Wang & Gupta (2015)) and another two randomly from within the mini-batch. Starting with a learning rate of 0.01, we decrease it by a factor of 10 every 8K iterations, with a momentum of 0.9. We stop the training at 20K iterations. Weight decay is set to 0.0005. We set the margin parameter M to 0.1 by cross validation.

We compare the HoG feature representation (Dalal & Triggs, 2005) and four deep learning networks: 1) OPnet, 2) AlexNet pre-trained on ImageNet, 3) an AlexNet fine-tuned for classification on ShapeNet data, denoted as \"AlexNetFT\", and 4) the joint embedding model by Li et al. (2015). In AlexNetFT, we replace the original fc8 layer with a fully connected layer with 29 output units and fine-tune the last two fully connected layers (fc7, fc8) with cross-entropy loss. The AlexNetFT model is trained with the same data we used for training the OPnet. The joint embedding model was pre-trained on 6700 shapes in the chair category of ShapeNet. For the first three deep models, we use the fc7 layer as the feature representation and cosine distance to compute the distance between feature representations. We also show results based on the AlexNet feature representation in terms of both Euclidean distance and cosine distance measures, denoted as AlexNet+EucDis and AlexNet+CosDis. A comparison of feature representations from different layers is shown in Appendix B.

Figure 2: The precision-recall curves for the object instance retrieval task on different datasets. (Panels (a)-(e): ShapeNet novel instance, ShapeNet chair category, ShapeNet novel category, synthesized objects, and the Pokemon dataset.)

                 Novel instance  Novel category  Synthesized objects  Pokemon  Chair
HoG              0.316           0.391           0.324                0.332    0.322
AlexNetFT        0.437           0.503           0.356                0.287    0.478
AlexNet+CosDis   0.529           0.623           0.517                0.607    0.686
AlexNet+EucDis   0.524           0.617           0.514                0.591    0.677
OPnet            0.856           0.855           0.574                0.697    0.938
Joint-embedding  0.429           0.513           0.443                0.387    0.814

Table 1: Mean Average Precision for the object instance retrieval task over all test sets.
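The Mean Average Precision reported in Table 1 can be computed, per query, by averaging the precision at the rank of each relevant candidate; a standard implementation sketch (names ours):

```python
import numpy as np

def average_precision(is_relevant, scores):
    """AP for one query. `is_relevant` is a boolean array over candidates
    (e.g. other views of the query object); `scores` are similarity scores."""
    order = np.argsort(-scores)
    rel = np.asarray(is_relevant)[order]
    if rel.sum() == 0:
        return 0.0
    precision_at_hit = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return precision_at_hit[rel].mean()

# Mean Average Precision is the mean of average_precision over all queries.
```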
We show the results for the instance retrieval task in Figure 2 and Table 1. The precision measure reflects the accuracy of the model's similarity judgment, under the two assumptions given in Section 2.3.

On similarity judgment of novel objects from both the trained and untrained categories, OPnet significantly outperforms AlexNet and AlexNetFT, with an increased Mean Average Precision of at least 23%. The improvement is due to OPnet's gain in ability to discriminate different objects inside one category regardless of their viewpoints, while recognizing different views of the same object as similar. For novel shapes in the artificial synthesized objects and Pokemons, OPnet still shows an increased MAP of at least 6% (or a 15% decreased error rate for the Pokemon test). This shows that the similarity judgment resulting from view-manifold learning is valid not only for the trained objects or the objects in the same dataset, but is generalizable to other classes of objects. This suggests that the learned feature representations are more abstract and general, allowing the transfer of the learning to substantially different datasets and novel objects, to a degree that is not well known or well studied in computer vision.

We compare OPnet with the joint embedding approach on the chair category of ShapeNet, shown in Figure 2b. Both networks are trained with the chair category and are tested on novel chairs. OPnet outperforms the joint embedding approach by a large margin, showing that better instance-level discrimination is achieved using object persistence training, compared to using known shapes as anchor points for image embedding. Furthermore, because the joint embedding approach would need to be trained for each specific category, it does not perform well on novel categories.

Figure 3: The precision-recall curves for the category level retrieval task. The three figures show the network's performance on the ShapeNet dataset with novel instances, novel categories and synthesized objects respectively.

When we fine-tuned AlexNet for classification of the 29 trained categories, the resulting AlexNetFT's feature representation actually performs the worst, compared to OPnet and the original AlexNet, on the instance similarity judgment or retrieval tasks. When a network is trained to perform classification, it learns to ignore subtle differences among objects in the same category. The fewer categories a network is trained on, the more the instance-level similarity judgment will be compromised. This loss of generality of its feature representation compromises its transferability to novel objects in other classes.

We notice that the performance gain for the OPnet is most significant on the ShapeNet dataset, and the gap becomes small for the synthesized and Pokemon datasets. This shows OPnet's certain overfitting to the bias in ShapeNet, as the synthesized object dataset contains textureless objects and the Pokemon dataset contains mainly human-like characters that are not in ShapeNet.

Categorical retrieval provides another measure of the network's performance in similarity judgment.
In this test, we randomly sample 20 categories each from the novel instance test and the novel category test, with 20 object instances drawn from each category. For the synthesized object test set, we test all 5 categories, each with 10 instances. For each instance, a single random view is provided. The results are shown in Figure 3. Despite the fact that AlexNet knows more about the semantic features of each category, our OPnet still achieves comparable results. OPnet here shows an improved ability in similarity judgment at the categorical level. On our artificially synthesized object dataset, where all three networks have no prior experience, OPnet performs better than AlexNet. AlexNetFT performs extremely well on trained categories, likely because it is overfitted to the limited trained objects, even though it uses the same amount of data. This overfitting problem shows that training with only class labels might not preserve the essential information needed to develop transferable, general and abstract feature representations, especially with a limited training dataset."}, {"section_index": "6", "section_name": "3.1 CORRELATION WITH HUMAN PERCEPTION", "section_text": "Using the novel objects from Tenenbaum et al. (2011), we are able to compare our networks with human similarity perception. We collect 41 images from the paper, one image per object. A pairwise similarity matrix is calculated based on the cosine distance of their feature representations. We can then perform hierarchical agglomerative clustering to obtain a tree structure, using the Nearest Point Algorithm. That is, for all points i in cluster u and points j in cluster v, the distance of the two clusters is calculated by dist(u, v) = min(D(u[i], v[j])), where D(·) is the cosine distance function. We merge the two clusters with the shortest distance successively to construct the tree. The tree based on human perception is constructed by giving human subjects all the images and asking them to merge the two clusters that are most similar each time, similar to the hierarchical agglomerative clustering algorithm. Results are shown in Figure 4.

Figure 4: Hierarchical clustering of the alien objects, based on (a) human perceptions, (b) AlexNet features, and (c) OPnet features. The dendrograms illustrate how each cluster is composed by drawing a U-shaped link between a cluster and its children. The height of each U-link denotes the distance between its children clusters when they are merged.

In order to quantitatively measure the similarity between the trees output by neural networks and the one based on human perception, we calculate the cophenetic distances on the tree for each pair of objects. For objects i and j, the cophenetic distance t_{i,j} is defined as t_{i,j} = dist(u, v), i ∈ u, j ∈ v, where u, v are clusters connected by a U-link. Finally, we can evaluate the similarity of the two trees by calculating Spearman's rank correlation coefficient. In the experiment, the Spearman correlation is 0.460 between AlexNet and the human perception and 0.659 between OPnet and the human perception, meaning that our OPnet, trained with object persistence constraints on a relatively small set of objects, automatically yielded a higher match to the human perceptual similarity data. This finding provides some support to our conjecture that object persistence might play an important role in shaping human similarity judgment.
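The tree construction and comparison just described map directly onto standard SciPy routines (single linkage is the Nearest Point Algorithm); a sketch with random stand-ins for the two networks' features:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.stats import spearmanr

def tree_correlation(features_a, features_b):
    """Spearman correlation between the cophenetic distances of two
    single-linkage trees built from cosine distances over the same objects."""
    za = linkage(features_a, method='single', metric='cosine')
    zb = linkage(features_b, method='single', metric='cosine')
    return spearmanr(cophenet(za), cophenet(zb)).correlation

rng = np.random.default_rng(0)
net_a = rng.normal(size=(41, 64))               # stand-in features, 41 objects
net_b = net_a + 0.1 * rng.normal(size=(41, 64))
print(tree_correlation(net_a, net_b))           # near 1 for similar feature spaces
```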
We study the feature representations in these networks and their transformation induced by the object persistence constraints to understand how the changes in similarity judgment performance come about. As our network uses cosine distance in the feature space as the similarity measure, we study how this measure changes in the view-manifold of the same object and between views of different objects.

We first visualize the pairwise similarity distance matrices of AlexNet and OPnet in Figure 5. We randomly choose 5 objects from the cabinet category for illustration. Each object has 12 views that the network has never seen before. Images are arranged first by different object instances (in columns), then by views (in rows). Many properties of the view manifolds are revealed. First, for the matrix of OPnet, we can see clearly five dark blocks formed on the diagonal, each standing for the strong similarity (small distance) among the different views of the same cabinet. The dark blocks mean that OPnet is associating different views of the same object together, reducing intra-object distance relative to inter-object distance. In this way, the similarity judgment of the OPnet becomes more viewpoint independent. On the other hand, the similarity matrix of AlexNet shows a variety of patterns across all objects within the same category. A closer look at these patterns suggests that AlexNet first forms groups by certain views (e.g. side-views), and then by objects, resulting in a more viewpoint dependent similarity measure that is poor at discriminating objects within a category. Second, even though OPnet groups different views together, the view-manifold has not degenerated into a single point. Certain patterns can be seen inside each dark block of OPnet's matrix, forming a hierarchical structure: different views of the same object are more similar to each other than to another object, and some rotations in angle are considered more similar than others. To illustrate how the view manifolds have contracted but not completely degenerated, we randomly sample objects from the novel instance test set and use t-SNE (Maaten & Hinton, 2008) to plot them in 2D, as shown in Figure 6. We can see clearly that different views of the same object are considered more similar in the feature space, and objects form tight and distinct clusters.

Figure 5: Distance measures for 5 cabinet objects, each with 12 views. Lighter pixels mean larger distance. On the left are the objects whose pairwise similarity distances we are interested in. In the middle and on the right are the cosine distance matrices of the output features of OPnet and AlexNet respectively. The element in the ith row and jth column stands for the cosine distance between the ith and jth image; the ith image is rendered from the ⌈i/12⌉th object and (i mod 12)th view.

Figure 6: t-SNE visualization of the features produced by AlexNet and OPnet, on four categories. Each point represents a view of an object. Different colors represent different objects.
We borrow a measurement from Linear Discriminant Analysis (LDA) to evaluate how tightly different views of the same object are clustered together, relative to the distance among different objects within the same category. Formally, let S_i be the set of all the objects inside one category i and c be the set of all views for one object, with x̄ the center of all image features and x̄_c the center for object c. We then calculate the score for category i using the following equation:

$$\mathrm{score}_i = \frac{\sigma_{\mathrm{inter\_instance}}}{\sigma_{\mathrm{intra\_instance}}} = \frac{\frac{1}{|S_i|}\sum_{c \in S_i} \|\bar{x}_c - \bar{x}\|}{\frac{1}{|S_i|}\sum_{c \in S_i} \frac{1}{|c|}\sum_{x \in c} \|x - \bar{x}_c\|}$$

We then average over all the categories to get a score for each network. The higher the score is, the larger the inter-object distance is compared to the intra-object distance, and the more closely different views of the same object are grouped together. In the experiment with the novel instance test set, OPnet's score is 0.535 whereas AlexNet's is 0.328, showing that different views of the same object are more similar than those of different objects, due to the object persistence constraint.
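A direct numpy transcription of the score above, with feature vectors grouped per object (names ours):

```python
import numpy as np

def category_score(objects):
    """`objects`: list of (n_views, d) arrays, one per object in the category.
    Returns sigma_inter_instance / sigma_intra_instance as defined above."""
    centers = [views.mean(axis=0) for views in objects]
    global_center = np.concatenate(objects, axis=0).mean(axis=0)
    inter = np.mean([np.linalg.norm(c - global_center) for c in centers])
    intra = np.mean([np.linalg.norm(views - c, axis=1).mean()
                     for views, c in zip(objects, centers)])
    return inter / intra
```

Averaging `category_score` over all categories gives the per-network score quoted above (0.535 for OPnet vs. 0.328 for AlexNet on the novel instance test set).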
In this work, we fine-tune AlexNet with object persistence constraints in the framework of distance metric learning with a Siamese triplet. This fine-tuning modifies the view-manifold of the object representation, bringing closer together the representations of an object in different views, driving apart representations of different objects in the same category, and resulting in better intra-categorical object recognition without compromising inter-categorical discrimination. We investigated whether this view-manifold learning results in an improvement in the network's ability to recognize the similarity of novel objects that have never been seen before, by performing instance and categorical image retrieval on artificial novel objects or novel object classes, including a set tested in human similarity judgment. Interestingly, we find that AlexNet, with its rich feature representations, already performs similarity judgement significantly above chance, in the sense that different views of the same object are considered more similar than the views of another object in the same category, and objects in the same category are considered to be more similar than objects in different categories. Fine-tuning with the object persistence constraint significantly improves this \"similarity judgement\" among a variety of novel objects, suggesting that the view-manifold learning in the OPnet is accompanied by feature embeddings with more general and abstract attributes that are transferable, likely at the level of local object parts.

From a technical point of view, our OPnet performs better than earlier approaches (Li et al., 2015) in instance and categorical retrieval of novel objects. We have tested our approach with a real image database (Geusebroek et al., 2005) and found it only yields a slight improvement over AlexNet. That database contains 1000 objects with different views but without categorical labels. OPnet's superiority over AlexNet lies in its better discrimination of objects within the same category. When objects are not organized in categories, i.e. when each object is essentially treated as a category, OPnet loses its advantages. In addition, there are more complex variations such as lighting and scale in real scene environments that our current OPnet has not considered. We plan to develop this model to discount additional nuisance variables and to develop or find databases to explore the transferability of its view-manifold learning in more general settings.

Our work was motivated by our hypothesis that the object persistence/continuity constraint in our visual experience might play a role in the development of neural representations that shape our similarity judgement of objects that we have not seen before. The fact that fine-tuning AlexNet with this additional constraint automatically yields a new view-manifold that matches human similarity judgment data better than AlexNet lends some support to our hypothesis. However, more extensive testing with human perception ground-truth will be needed to fully confirm our hypothesis."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "Xingyu Lin and Hao Wang were supported by the PKU-CMU summer internship program. This work is supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00007. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.

We thank Kalina Ko for helping us to construct part of the synthesized object database."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 539-546. IEEE, 2005.

R. L. Goldstone and S. B. Day. Similarity. The Encyclopedia of the Mind, pp. 696-699, 2013.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105, 2012.

Jane Bromley, James W. Bentz, Léon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Säckinger, and Roopak Shah. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.

Navneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pp. 886-893. IEEE, 2005.

J. Deng, W. Dong, R. Socher, L. J. Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009 (CVPR 2009), IEEE Conference on, pp. 248-255, June 2009. doi: 10.1109/CVPR.2009.5206848.

Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422-1430, 2015.

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, pp. 647-655, 2014.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675-678.
ACM, 2014.

Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.

Yangyan Li, Hao Su, Charles Ruizhongtai Qi, Noa Fish, Daniel Cohen-Or, and Leonidas J. Guibas. Joint embeddings of shapes and images via CNN image purification. ACM Trans. Graph., 2015.

Francisco Massa, Bryan Russell, and Mathieu Aubry. Deep exemplar 2d-3d detection by adapting from real to rendered views. arXiv preprint arXiv:1512.02497, 2015.

Joshua B. Tenenbaum, Charles Kemp, Thomas L. Griffiths, and Noah D. Goodman. How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022):1279-1285, 2011.

Gavin Perry, Edmund T. Rolls, and Simon M. Stringer. Spatial vs temporal continuity in view invariant visual object recognition learning. Vision Research, November 2006.

Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. Learning fine-grained image similarity with deep ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1386-1393, 2014.

APPENDIX A EXAMPLES OF SOME TOP RANKING RESULTS

Figure 7: Examples of top instance retrieval results for AlexNet and OPnet on the ShapeNet novel category, synthesized objects and Pokemon test sets. Images that are different views of the same object (which are considered more similar) are marked with a red solid rectangle, while views of other objects are marked with a gray dashed rectangle. From the gun example we can see how the retrieval results for AlexNet are highly view-dependent."}, {"section_index": "9", "section_name": "APPENDIX B INSTANCE RETRIEVAL RESULTS USING FEATURES FROM DIFFERENT LAYERS", "section_text": "As shown in the literature (Massa et al., 2015; Aubry & Russell, 2015), features from different layers sometimes perform differently for a given task. For the instance retrieval task on the novel instance dataset of ShapeNet, we compare OPnet and AlexNet using features from different layers, as shown in Figure 8. The accuracy of AlexNet is fairly flat up to conv3, and then keeps increasing until layer fc8, where the feature becomes a categorical probability and is not appropriate for instance-level discrimination. On the other hand, the object persistence training gives a significant increase in accuracy in the fully connected layers.

Figure 8: Instance retrieval results using features from different layers (x-axis: layers from the input data through conv1-conv5 to the fully connected layers; y-axis: mean average precision)."}]
HJDBUF5le | [{"section_index": "0", "section_name": "TOWARDS A NEURAL STATISTICIAN", "section_text": "Harrison Edwards

School of Informatics, University of Edinburgh, Edinburgh, UK

H.L.Edwards@sms.ed.ac.uk

Amos Storkey

School of Informatics, University of Edinburgh, Edinburgh, UK

A.Storkey@ed.ac.uk"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "An efficient learner is one who reuses what they already know to tackle a new problem. For a machine learner, this means understanding the similarities amongst datasets. In order to do this, one must take seriously the idea of working with datasets, rather than datapoints, as the key objects to model. Towards this goal, we demonstrate an extension of a variational autoencoder that can learn a method for computing representations, or statistics, of datasets in an unsupervised fashion. The network is trained to produce statistics that encapsulate a generative model for each dataset. Hence the network enables efficient learning from new datasets for both unsupervised and supervised tasks. We show that we are able to learn statistics that can be used for: clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets and classifying previously unseen classes. We refer to our model as a neural statistician, and by this we mean a neural network that can learn to compute summary statistics of datasets without supervision."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "The machine learning community is well-practised at learning representations of data-points and sequences. A middle-ground between these two is representing, or summarizing, datasets - unordered collections of vectors, such as photos of a particular person, recordings of a given speaker or a document as a bag-of-words. Where these sets take the form of i.i.d. samples from some distribution, such summaries are called statistics. We explore the idea of using neural networks to learn statistics, and we refer to our approach as a neural statistician.

The key result of our approach is a statistic network that takes as input a set of vectors and outputs a vector of summary statistics specifying a generative model of that set - a mean and variance specifying a Gaussian distribution in a latent space we term the context. The advantages of our approach are that it is:

Unsupervised: It provides a principled and unsupervised way to learn summary statistics as the output of a variational encoder of a generative model.

Data efficient: If one has a large number of small but related datasets, modelling the datasets jointly enables us to gain statistical strength.

Parameter efficient: By using summary statistics instead of, say, categorical labellings of each dataset, we decouple the number of parameters of the model from the number of datasets.

Capable of few-shot learning: If the datasets correspond to examples from different classes, class embeddings (summary statistics associated with examples from a class) allow us to handle new classes at test time.

We are given datasets D_i for i ∈ I. Each dataset D_i = {x_1, ..., x_{k_i}} consists of a number of i.i.d. samples from an associated distribution p_i over R^n. The task can be split into learning and inference components. The learning component is to produce a generative model for each dataset D_i. We assume there is a common underlying generative process p such that p_i = p(·|c_i) for c_i ∈ R^l drawn from p(c).
We refer to c as the context. The inference component is to give an approximate posteric over the context q(c|D) for a given dataset produced by a statistic network..\nIn order to exploit the assumption of a hierarchical generative process over datasets we will use a parameter-transfer approach' (seePan & Yang2010) to extend the variational autoencoder model ofKingma & Welling(2013)\nX1 X2 x3 C 0\nFigure 1: Left: basic hierarchical model, where the plate encodes the fact that the context variable c is shared across each item in a given dataset. Center: full neural statistician model with three latent layers 21, 22, 23. Each collection of incoming edges to a node is implemented as a neural. network, the input of which is the concatenation of the edges' sources, the output of which is a parameterization of a distribution over the random variable represented by that node. Right: The statistic network, which combines the data via an exchangeable statistic layer.."}, {"section_index": "3", "section_name": "3.1 VARIATIONAL AUTOENCODER", "section_text": "The variational autoencoder is a latent variable model p(x|z; 0) (often called the decoder) with parameters 0. For each observed x, a corresponding latent variable z is drawn from p(z) so that.\np(x[z;0)p(z) dz\nThe generative parameters 0 are learned by introducing a recognition network (also called an en-. coder) q(z[x; ) with parameters . The recognition network gives an approximate posterior over the latent variables that can then be used to give the standard variational lower bound (Saul & Jordan. 1996) on the single-datum log-likelihood. I.e. log P(x(0) > Lr, where.\nLx =Eq(z|x,g) [logp(x|z;0)]- DkL(q(z|x;$)|p(z))\nLikewise the full-data log likelihood is lower bounded by the sum of the Lr terms over the whole dataset. We can then optimize this lower bound with respect to and 0 using the reparameterization trick introduced byKingma & Welling(2013) and Rezende et al.(2014) to get a Monte-Carlo estimate of the gradient."}, {"section_index": "4", "section_name": "3.2 BASIC MODEL", "section_text": "11 p(c) p(x[z;0)p(z[c;0) dz] dc. xED\nThe prior p(c) is chosen to be a spherical Gaussian with zero mean and unit variance. The condi. tional p(z[c; 0) is Gaussian with diagonal covariance, where all the mean and variance parameters depend on c through a neural network. Similarly the observation model p(x[z; 0) will be a simple likelihood function appropriate to the data modality with dependence on z parameterized by a neural. network. For example, with real valued data, a diagonal Gaussian likelihood could be used where the mean and log variance of x are created from z via a neural network..\nWe extend the variational autoencoder to the model depicted on the left in Figure [1 This includes a latent variable c, the context, that varies between different datasets but is constant, a priori, for items within the same dataset. Now, the likelihood of the parameters 0 for one single particular dataset D is given by\nLp =Eq(c|D;$) Eq(z|c,x;g) [logp(x|z;0)]- DkL(q(z|c,x;$)||p(z|c;0 x E d DkL(q(c|D;$)|lp(c))\nThe full-data variational bound is given by summing the variational bound for each dataset in our collection of datasets. It is by learning the difference of the within-dataset and between-dataset distributions that we are able to discover an appropriate statistic network.."}, {"section_index": "5", "section_name": "3.3 FULL MODEL", "section_text": "The basic model works well for modelling simple datasets, but struggles when the datasets have. 
"}, {"section_index": "5", "section_name": "3.3 FULL MODEL", "section_text": "The basic model works well for modelling simple datasets, but struggles when the datasets have complex internal structure. To increase the sophistication of the model we use multiple stochastic layers z_1, ..., z_L and introduce skip-connections for both the inference and generative networks. The generative model is shown graphically in Figure 1 in the center. The probability of a dataset D is then given by

$$p(D) = \int p(c) \prod_{x \in D} \left[ \int p(x|c, z_{1:L}; \theta)\, p(z_L|c; \theta) \prod_{i=1}^{L-1} p(z_i|z_{i+1}, c; \theta)\, dz_{1:L} \right] dc,$$

where the p(z_i|z_{i+1}, c; θ) are again Gaussian distributions where the mean and log variance are given as the output of neural networks. The generative process for the full model is described in Algorithm 1.

The full approximate posterior factorizes analogously as

$$q(c, z_{1:L}|D; \phi) = q(c|D; \phi) \prod_{x \in D} q(z_L|x, c; \phi) \prod_{i=1}^{L-1} q(z_i|z_{i+1}, x, c; \phi).$$

For convenience we give the variational lower bound as a sum of three parts: a reconstruction term R_D, a context divergence C_D and a latent divergence L_D:

$$\mathcal{L}_D = R_D - C_D - L_D \quad \text{with}$$
$$R_D = \mathbb{E}_{q(c|D; \phi)} \sum_{x \in D} \mathbb{E}_{q(z_{1:L}|c, x; \phi)}\left[\log p(x|z_{1:L}, c; \theta)\right],$$
$$C_D = D_{KL}\left(q(c|D; \phi)\,\|\,p(c)\right),$$
$$L_D = \mathbb{E}_{q(c, z_{1:L}|D; \phi)} \sum_{x \in D} \left[ D_{KL}\left(q(z_L|c, x; \phi)\,\|\,p(z_L|c; \theta)\right) + \sum_{i=1}^{L-1} D_{KL}\left(q(z_i|z_{i+1}, c, x; \phi)\,\|\,p(z_i|z_{i+1}, c; \theta)\right) \right].$$

The skip-connections p(z_i|z_{i+1}, c; θ) and q(z_i|z_{i+1}, x; φ) allow the context to specify a more precise distribution for each latent variable by explaining-away more generic aspects of the dataset at each stochastic layer. This architecture was inspired by recent work on probabilistic ladder networks in Kaae Sønderby et al. (2016). Complementing these are the skip-connections from each latent variable to the observation p(x|z_{1:L}, c; θ); the intuition here is that each stochastic layer can focus on representing a certain level of abstraction, since its information does not need to be copied into the next layer. A similar approach was used in Maaløe et al. (2016).

Once again, note that we are maximizing the lower bound to the log likelihood over many datasets D: we want to maximize the expectation of L_D over all datasets. We do this optimization using stochastic gradient descent. In contrast to a variational autoencoder, where a minibatch would consist of a subsample of datapoints from the dataset, we use minibatches consisting of a subsample of datasets - tensors of shape (batch size, sample size, number of features)."}, {"section_index": "6", "section_name": "3.4 STATISTIC NETWORK", "section_text": "In addition to the standard inference networks we require a statistic network q(c|D; φ) to give an approximate posterior over the context c given a dataset D = {x_1, ..., x_k}. This inference network must capture the exchangeability of the data in D.

We use a feedforward neural network consisting of three main elements, sketched in code after this list:

- An instance encoder E that takes each individual datapoint x_i to a vector e_i = E(x_i).
- An exchangeable instance pooling layer that collapses the matrix (e_1, ..., e_k) to a single pre-statistic vector v. Examples include elementwise means, sums, products, geometric means and maximum. We use the sample mean for all experiments.
- A final post-pooling network that takes v to a parameterization of a diagonal Gaussian.
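The shape of this network is easy to state in code; a self-contained numpy sketch with illustrative layer sizes (all parameters and names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_e, d_c = 2, 128, 3          # input, encoding and context dimensions

W1, b1 = 0.05 * rng.normal(size=(d_in, d_e)), np.zeros(d_e)    # instance encoder
W_mu, b_mu = 0.05 * rng.normal(size=(d_e, d_c)), np.zeros(d_c)
W_lv, b_lv = 0.05 * rng.normal(size=(d_e, d_c)), np.zeros(d_c)

def statistic_network(D):
    """q(c|D): encode each datapoint, mean-pool, map to Gaussian parameters."""
    e = np.maximum(0.0, D @ W1 + b1)   # instance encoder (one ReLU layer here)
    v = e.mean(axis=0)                 # pooling: permutation-invariant by design
    return v @ W_mu + b_mu, v @ W_lv + b_lv   # mu_c, log-variance of c

D = rng.normal(size=(200, d_in))       # one dataset of 200 datapoints
mu_c, logvar_c = statistic_network(D)
# Shuffling the rows of D leaves (mu_c, logvar_c) unchanged: exchangeability.
```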
We note that the humble sample mean already gives the statistic network a great deal of representational power, due to the fact that the instance encoder can learn a representation where averaging makes sense. For example, since the instance encoder can approximate a polynomial on a compact domain, and so can the post-pooling network, a statistic network can approximate any moment of a distribution."}, {"section_index": "7", "section_name": "4 RELATED WORK", "section_text": "Due to the general nature of the problem considered, our work touches on many different topics, which we now attempt to summarize.

Topic models and graphical models: The form of the graphical model in Figure 1 on the left is equivalent to that of a standard topic model. In contrast to traditional topic models we do not use discrete latent variables, or restrict to discrete data. Work such as that by Ranganath et al. (2014) has extended topic models in various directions, but importantly we use flexible conditional distributions and dependency structures parameterized by deep neural networks. Recent work has explored neural networks for document models (see e.g. Miao et al., 2015) but has been limited to modelling datapoints with little internal structure. Along related lines are structured variational autoencoders (see Johnson et al., 2016), where they treat the general problem of integrating graphical models with variational autoencoders.

Transfer learning: There is a considerable literature on transfer learning; for a survey see Pan & Yang (2010). There they discuss 'parameter-transfer' approaches whereby parameters or priors are shared across datasets, and our work fits into that paradigm. For examples see Lawrence & Platt (2004), where they share priors between Gaussian processes, and Evgeniou & Pontil (2004), where they take an SVM-like approach to share kernels.

One-shot learning: Learning quickly from small amounts of data is a topic of great interest. Lake et al. (2015) use Bayesian program induction for one-shot generation and classification, and Koch (2015) trains a Siamese (Chopra et al., 2005) convolutional network for one-shot image classification. We note the relation to the recent work (Rezende et al., 2016) in which the authors use a conditional recurrent variational autoencoder capable of one-shot generalization by taking a conditioning data point as extra input. The important differences here are that we jointly model datasets and datapoints and consider datasets of any size. Recent approaches to one-shot classification are matching networks (Vinyals et al., 2016b) (which was concurrent with the initial preprint of this work), and related previous work (Santoro et al., 2016). The former can be considered a kind of differentiable nearest neighbour classifier, and the latter augments their network with memory to store information about the classification problem. Both are trained end-to-end for the classification problem, whereas the present work is a general approach to learning representations of datasets. Probably the closest previous work is by Salakhutdinov et al. (2012), where the authors learn a topic
Compared with their work we use mod ern architectures and easier to train VAEs, in particular we have fast and amortized feedforwarc inference for test (and training) datasets, avoiding the need for MCMC.\nMultiple-Instance Learning There is previous work on classifying sets in multiple-instance. learning, for a useful survey see [Cheplygina et al. (2015). Typical approaches involve adapting. kernel based methods such as support measure machines (Muandet et al.|2012), support distribu- tion machines (Poczos et al.[2012) and multiple-instance-kernels (Gartner et al.2002). We do not consider applications to multiple-instance learning type problems here, but it may be fruitful to do. so in the future.\nSet2SeqIn very related work, Vinyals et al. (2016a) explore architectures for mapping sets to sequences. There they use an LSTM to repeatedly compute weighted-averages of the datapoints anc use this to tackle problems such as sorting a list of numbers. The main difference between their work and ours is that they primarily consider supervised problems, whereas we present a general unsupervised method for learning representations of sets of i.i.d instances. In future work we may also explore recurrently computing statistics.\nABC There has also been work on learning summary statistics for Approximate Bayesian Com putation by either learning to predict the parameters generating a sample as a supervised problem, or by using kernel embeddings as infinite dimensional summary statistics. See the work by Fukumizu et al.[(2013) for an example of kernel-based approaches. More recently Jiang et al.[(2015) used deep neural networks to predict the parameters generating the data. The crucial differences are that their problem is supervised, they do not leverage any exchangeability properties the data may have, nor can it deal with varying sample sizes.\nGiven an input set x1, ... xk we can use the statistic network to calculate an approximate posterior over contexts q(c|x1, . .. , xk; $). Under the generative model, each context c specifies a conditional model p(x[c; 0). To get samples from the model corresponding to the most likely posterior value of c, we set c to the mean of the approximate posterior and then sample directly from the condi- tional distributions. This is described in Algorithm2 We use this process in our experiments to show samples. In all experiments, we use the Adam optimization algorithm (Kingma & Ba]2014 to optimize the parameters of the generative models and variational approximations. Batch normal ization (Ioffe & Szegedy|2015) is implemented for convolutional layers and we always use a batch size of 16. We primarily use the Theano (Theano Development Team,2016) framework with the Lasagne (Dieleman et al.2015) library, but the final experiments with face data were done using Tensorflow (Abadi et al.2015). In all cases experiments were terminated after a given number of epochs when training appeared to have sufficiently converged (300 epochs for omniglot, youtube and spatial MNIST examples, and 50 epochs for the synthetic experiment)."}, {"section_index": "8", "section_name": "5.1 SIMPLE 1-D DISTRIBUTIONS", "section_text": "In our first experiment we wanted to know if the neural statistician will learn to cluster synthetic 1-D datasets by distribution family. We generated a collection of synthetic 1-D datasets each con- taining 200 samples. Datasets consist of samples from either an Exponential, Gaussian, Uniform or Laplacian distribution with equal probability. 
The architecture for this experiment contains a single stochastic layer with 32 units for z and 3 units for c. The model p(x|z, c; θ) and variational approximation q(z|x, c; φ) are each a diagonal Gaussian distribution with all mean and log variance parameters given by a network composed of three dense layers with ReLU activations and 128 units. The statistic network determining the mean and log variance parameters of the posterior over context variables is composed of three dense layers before and after pooling, each with 128 units with Rectified Linear Unit (ReLU) activations.

Figure 2 shows 3-D scatter plots of the summary statistics learned. Notice that the different families of distribution cluster. It is interesting to observe that the Exponential cluster is differently orientated to the others, perhaps reflecting the fact that it is the only non-symmetric distribution. We also see that between the Gaussian and Laplacian clusters there is an area of ambiguity, which is as one might expect. We also see that within each cluster the mean and variance are mapped to orthogonal directions.

Figure 2: Three different views of the same data. Each point is the mean of the approximate posterior over the context q(c|D; φ) where c ∈ R^3. Each point is a summary statistic for a single dataset with 200 samples. Top plot shows points colored by distribution family, left plot colored by the mean and right plot colored by the variance. The plots have been rotated to illustrative angles."}, {"section_index": "9", "section_name": "5.2 SPATIAL MNIST", "section_text": "Building on the previous experiments we investigate 2-D datasets that have complex structure, but where the datapoints contain little information by themselves, making it a good test of the statistic network. We created a dataset called spatial MNIST. In spatial MNIST each image from MNIST (LeCun et al., 1998) is turned into a dataset by interpreting the normalized pixel intensities as a probability density and sampling coordinate values. An example is shown in Figure 3. This creates two-dimensional spatial datasets. We used a sample size of 50. Note that since the pixel coordinates are discrete, it is necessary to dequantize them by adding uniform noise u ~ U[0, 1] to the coordinates if one models them as real numbers, else one can get arbitrarily high densities (see Theis et al. (2016) for a discussion of this point).

Figure 3: An image from MNIST on the left, transformed to a set of 50 (x, y) coordinates, shown as a scatter plot on the right.

The generative architecture for this experiment contains 3 stochastic z layers, each with 2 units, and a single c layer with 64 units. The means and log variances of the Gaussian likelihood for p(x|z_{1:3}, c; θ), and each subnetwork for z in both the encoder and decoder, contained 3 dense layers with 256 ReLU units each. The statistic network also contained 3 dense layers pre-pooling and 3 dense layers post-pooling with 256 ReLU units.

In addition to being able to sample from the model conditioned on a set of inputs, we can also summarize a dataset by choosing a subset S ⊆ D to minimise the KL divergence of q(c|D; φ) from q(c|S; φ). We do this greedily by iteratively discarding points from the full sample. Pseudocode for this process is given in Algorithm 3. The results are shown in Figure 4. We see that the model is capable of handling complex arrangements of datapoints.
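The greedy summarization can be sketched as follows; `posterior` stands for the statistic network returning the Gaussian parameters of q(c|·), and the direction of the KL follows our reading of the text (a sketch of Algorithm 3, not the authors' code):

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    return 0.5 * np.sum(logvar_p - logvar_q
                        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
                        - 1.0)

def summarize(D, posterior, k=6):
    """Greedily drop points from D (an array of shape (n, d)) so that
    KL(q(c|D) || q(c|S)) stays small, until only k points remain."""
    mu_full, lv_full = posterior(D)
    keep = list(range(len(D)))
    while len(keep) > k:
        kls = []
        for i in keep:
            subset = D[[j for j in keep if j != i]]
            mu_s, lv_s = posterior(subset)
            kls.append(gaussian_kl(mu_full, lv_full, mu_s, lv_s))
        keep.pop(int(np.argmin(kls)))   # discard the least informative point
    return D[keep]
```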
"}, {"section_index": "10", "section_name": "5.3 OMNIGLOT", "section_text": "Next we work with the OMNIGLOT data (Lake et al., 2015). This contains 1628 classes of handwritten characters but with just 20 examples per class. This makes it an excellent test-bed for transfer / few-shot learning. We constructed datasets by splitting each class into datasets of size 5. We train on datasets drawn from 1200 classes and reserve the remaining classes to test few-shot sampling and classification. We created new classes by rotating and reflecting characters. We resized the images to 28 × 28. We sampled a binarization of each image for each epoch. We also randomly applied the dilation operator from computer vision as further data augmentation: since we observed that the stroke widths are quite uniform in the OMNIGLOT data, whereas there is substantial variation in MNIST, this augmentation improved the visual quality of the few-shot MNIST samples considerably and increased the few-shot classification accuracy by about 3 percent. Finally we used 'sample dropout', whereby a random subset of each dataset was removed from the pooling in the statistic network, and then included the number of samples remaining as an extra feature. This was beneficial since it reduced overfitting and also allowed the statistic network to learn to adjust the approximate posterior over c based on the number of samples.
We used a single stochastic layer with 16 units for z, and 512 units for c. We used a shared convolutional encoder between the inference and statistic networks and a deconvolutional decoder network. Full details of the networks are given in Appendix B.1. The decoder used a Bernoulli likelihood.
As a further test we considered few-shot classification of both unseen OMNIGLOT characters and MNIST digits. Given sets of labelled examples of each class D0, ..., D9 (for MNIST say), we computed the approximate posteriors q(C|Di; φ) using the statistic network. Then for each test image x we also computed the posterior q(C|x; φ) and classified it according to the training dataset Di minimizing the KL divergence from the test context to the training context. This process is described in Algorithm 4. We tried this with either 1 or 5 labelled examples per class and either 5 or 20 classes. For each trial we randomly select K classes, randomly select training examples for each class, and test on the remaining examples. This process is repeated 100 times and the results averaged. The results are shown in Table 1. We compare to a number of results reported in Vinyals et al. (2016b), including Santoro et al. (2016) and Koch (2015).
Figure 5: Few-shot learning. Left: Few-shot learning from OMNIGLOT to MNIST. Left rows are input sets, right rows are samples given the inputs. Right: Few-shot learning with OMNIGLOT data to unseen classes. Left rows are input sets, right rows are samples given the inputs. Black-white inversion is applied for ease of viewing.
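The few-shot classification rule of Algorithm 4 reduces to a KL computation between diagonal Gaussians followed by an argmin. A self-contained sketch, with made-up posterior parameters standing in for statistic-network outputs:

```python
# K-way few-shot classification (Algorithm 4): represent each class by
# q(c|D_i) and the query by q(c|x), then pick the class whose posterior has
# the smallest KL divergence D_KL(N_i || N_x) to the query posterior.
import numpy as np

def kl_diag_gauss(mu0, logvar0, mu1, logvar1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for diagonal Gaussians."""
    v0, v1 = np.exp(logvar0), np.exp(logvar1)
    return 0.5 * np.sum(logvar1 - logvar0 + (v0 + (mu0 - mu1) ** 2) / v1 - 1.0)

def classify(query_post, class_posts):
    """query_post / class_posts[i]: (mu, logvar) pairs from a statistic network."""
    mu_x, lv_x = query_post
    kls = [kl_diag_gauss(mu_i, lv_i, mu_x, lv_x) for mu_i, lv_i in class_posts]
    return int(np.argmin(kls))

# Toy usage with made-up posteriors for a 3-way problem:
classes = [(np.array([0.0, 0.0]), np.zeros(2)),
           (np.array([3.0, 3.0]), np.zeros(2)),
           (np.array([-3.0, 2.0]), np.zeros(2))]
print(classify((np.array([2.7, 3.2]), np.zeros(2)), classes))  # -> 1
```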
In Figure 5, we show two examples of few-shot learning by conditioning on samples of unseen characters from OMNIGLOT, and conditioning on samples of digits from MNIST. The samples are mostly of a high quality, and this shows that the neural statistician can generalize even to new datasets.
Overall we see that the neural statistician model can be used as a strong classifier, particularly for the 5-way tasks, but performs worse than matching networks for the 20-way tasks. One important advantage that matching networks have is that, whilst each class is processed independently in our model, the representation in matching networks is conditioned on all of the classes in the few-shot problem. This means that it can exaggerate differences between similar classes, which are more likely to appear in a 20-way problem than a 5-way problem.
Table 1: The table shows the classification accuracies of various few-shot learning tasks. Models are trained on OMNIGLOT data and tested on either unseen OMNIGLOT classes or MNIST with varying numbers of samples per class (K-shot) and varying numbers of classes (K-way). Comparisons are to Vinyals et al. (2016b) (Matching), Santoro et al. (2016) (MANN) and Koch (2015) (Siamese). 5-shot MNIST results are included for completeness.
Figure 6: Few-shot learning for face data. Samples are from the model trained on the Youtube Faces Database. Left: Each row shows an input set of size 5. Center: Each row shows 5 samples from the model corresponding to the input set on the left. Right: Imagined new faces generated by sampling contexts from the prior. Each row consists of 5 samples from the model given a particular sampled context.
Finally, we provide a proof of concept for generating faces of a particular person. We use the Youtube Faces Database from Wolf et al. (2011). It contains 3,245 videos of 1,595 different people. We use the aligned and cropped-to-face version, resized to 64 × 64. The validation and test sets contain 100 unique people each, and there is no overlap of persons between data splits. The sets were created by sampling frames randomly without replacement from each video; we use a set size of 5 frames. We resample the sets for the training data each epoch.
Our architecture for this problem is based on one presented in Lamb et al. (2016). We used a single stochastic layer with a 500-dimensional latent c and a 16-dimensional z variable. The statistic network and the inference network q(z|x, c; φ) share a common convolutional encoder, and the decoder uses deconvolutional layers. For full details see Appendix B.2. The likelihood function is a Gaussian, but with the variance parameters shared across all datapoints; this was found to make training faster and more stable.
The results are shown in Figure 6. Whilst there is room for improvement, we see that it is possible to specify a complex distribution on-the-fly with a set of photos of a previously unseen person. The samples conditioned on an input set have a reasonable likeness of the input faces. We also show the ability of the model to generate new datasets and see that the samples have a consistent identity and varied poses.
We have demonstrated a highly flexible model on a variety of tasks. Going forward our approach will naturally benefit from advances in generative models as we can simply upgrade our base generative model, and so future work will pursue this. Compared with some other approaches in the literature for few-shot learning, our requirement for supervision is weaker: we only ask at training time that we are given datasets, but we do not need labels for the datasets, nor even information on whether two datasets represent the same or different classes. It would be interesting then to explore application areas where only this weaker form of supervision is available.
There are two important limitations to this work: firstly, the method is dataset hungry, so it will likely not learn useful representations of datasets given only a small number of them. Secondly, at test time the few-shot fit of the generative model will not be greatly improved by using larger datasets unless the model was also trained on similarly large datasets. The latter limitation seems like a promising future research direction: bridging the gap between fast adaptation and slow training."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, and Zhifeng Chen et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
Veronika Cheplygina, David M.J. Tax, and Marco Loog. On classification with bags, groups and sets. Pattern Recognition Letters, 59:11-17, 2015.
Kenji Fukumizu, Le Song, and Arthur Gretton. Kernel Bayes' rule: Bayesian inference with positive definite kernels. The Journal of Machine Learning Research, 14(1):3753-3783, 2013.
Thomas Gartner, Peter A. Flach, Adam Kowalczyk, and Alex J. Smola. Multi-instance kernels. In Proc. 19th International Conf. on Machine Learning, pp. 179-186. Morgan Kaufmann, 2002.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pp. 448-456, 2015.
Bai Jiang, Tung-yu Wu, Charles Zheng, and Wing H Wong. Learning summary statistic for approximate Bayesian computation via deep neural network. arXiv preprint arXiv:1510.02175, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2013.
Gregory Koch. Siamese neural networks for one-shot image recognition. Doctoral dissertation, University of Toronto, 2015.
Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
Alex Lamb, Vincent Dumoulin, and Aaron Courville. Discriminative regularization for generative models. arXiv preprint arXiv:1602.03220, 2016.
Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. Knowledge and Data Engineering, IEEE Transactions on, 22(10):1345-1359, 2010.
Barnabas Poczos, Liang Xiong, Dougal J Sutherland, and Jeff Schneider. Support distribution machines. Technical Report, 2012. URL http://arxiv.org/abs/1202.0302.
Rajesh Ranganath, Sean Gerrish, and David M Blei. Black box variational inference. In AISTATS, pp. 814-822, 2014.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278-1286, 2014.
Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. arXiv preprint arXiv:1603.05106, 2016.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. arXiv preprint arXiv:1606.04080, 2016b.
Lior Wolf, Tal Hassner, and Itay Maoz. Face recognition in unconstrained videos with matched background similarity. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 529-534. IEEE, 2011.
Algorithm 1 Sampling a dataset of size k
sample c ~ p(c)
for i = 1 to k do
    sample z_{i,L} ~ p(z_L | c; θ)
    for j = L − 1 to 1 do
        sample z_{i,j} ~ p(z_j | z_{i,j+1}, c; θ)
    end for
    sample x_i ~ p(x | z_{i,1}, ..., z_{i,L}, c; θ)
end for
Algorithm 2 Sampling a dataset of size k conditioned on a dataset of size m
Algorithm 3 Selecting a representative sample of size k
Algorithm 4 K-way few-shot classification
D_1, ..., D_K: sets of labelled examples for each class
x: datapoint to be classified
N_x ← q(c | x; φ)  {approximate posterior over c given query point}
for i = 1 to K do
    N_i ← q(c | D_i; φ)
end for
ŷ ← argmin_i D_KL(N_i ‖ N_x)
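Between the algorithms above and the layer specifications below, it may help to see the statistic network's shape as code. This is a hedged numpy sketch with random stand-in weights and illustrative layer sizes; it mirrors the pre-pool / average-pool / post-pool structure described in the experiments, not the exact trained networks.

```python
# Schematic of the statistic network q(c|D): instance-wise encoder, average
# pooling across the set (making the output exchangeable), then a post-pool
# network emitting (mu_c, logvar_c). Layer sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def dense(d_in, d_out):
    W = rng.standard_normal((d_in, d_out)) * np.sqrt(2.0 / d_in)
    b = np.zeros(d_out)
    return lambda x: np.maximum(x @ W + b, 0.0)   # ReLU layer

pre_pool = [dense(2, 128), dense(128, 128), dense(128, 128)]
post_pool = [dense(128, 128), dense(128, 128)]
W_mu = rng.standard_normal((128, 3)) * 0.1        # heads for mu_c and logvar_c
W_lv = rng.standard_normal((128, 3)) * 0.1

def statistic_network(dataset):                   # dataset: (k, 2) array
    h = dataset
    for layer in pre_pool:
        h = layer(h)
    pooled = h.mean(axis=0)                       # mean over the set
    for layer in post_pool:
        pooled = layer(pooled)
    return pooled @ W_mu, pooled @ W_lv           # (mu_c, logvar_c)

mu_c, logvar_c = statistic_network(rng.standard_normal((200, 2)))
print(mu_c.shape, logvar_c.shape)                 # (3,) (3,)
```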
B.1 Networks for the OMNIGLOT experiments
Shared encoder x → h:
2 × { conv2d 64 feature maps with 3 × 3 kernels and ELU activations }
conv2d 64 feature maps with 3 × 3 kernels, stride 2 and ELU activations
2 × { conv2d 128 feature maps with 3 × 3 kernels and ELU activations }
conv2d 128 feature maps with 3 × 3 kernels, stride 2 and ELU activations
2 × { conv2d 256 feature maps with 3 × 3 kernels and ELU activations }
conv2d 256 feature maps with 3 × 3 kernels, stride 2 and ELU activations
Inference network q(z|x, c; φ): h, c → μz, σz
3 × { fully-connected layer with 256 units and ELU activations }
fully-connected linear layers to μz and log σz².
Observation decoder network p(x|c, z; θ): c, z → μx
concatenate z and c
fully-connected linear layers with 4 · 4 · 256 units.
2 × { conv2d 256 feature maps with 3 × 3 kernels and ELU activations }
deconv2d 256 feature maps with 2 × 2 kernels, stride 2, ELU activations
2 × { conv2d 128 feature maps with 3 × 3 kernels and ELU activations }
deconv2d 128 feature maps with 2 × 2 kernels, stride 2, ELU activations
2 × { conv2d 64 feature maps with 3 × 3 kernels and ELU activations }
deconv2d 64 feature maps with 2 × 2 kernels, stride 2, ELU activations
conv2d 1 feature map with 1 × 1 kernels, sigmoid activations.
B.2 Networks for the Youtube Faces experiments
Shared encoder x → h:
2 × { conv2d 32 feature maps with 3 × 3 kernels and ELU activations }
conv2d 32 feature maps with 3 × 3 kernels, stride 2 and ELU activations
2 × { conv2d 64 feature maps with 3 × 3 kernels and ELU activations }
conv2d 64 feature maps with 3 × 3 kernels, stride 2 and ELU activations
2 × { conv2d 128 feature maps with 3 × 3 kernels and ELU activations }
conv2d 128 feature maps with 3 × 3 kernels, stride 2 and ELU activations
2 × { conv2d 256 feature maps with 3 × 3 kernels and ELU activations }
conv2d 256 feature maps with 3 × 3 kernels, stride 2 and ELU activations
Inference network q(z|x, c; φ): h, c → μz, σz
concatenate h and c
fully-connected layer with 1000 units and ELU activations
fully-connected linear layers to μz and log σz²
Observation decoder network p(x|c, z; θ): c, z → μx
concatenate z and c
fully-connected layer with 1000 units and ELU activations
fully-connected linear layer with 8 · 8 · 256 units
2 × { conv2d 256 feature maps with 3 × 3 kernels and ELU activations }
deconv2d 256 feature maps with 2 × 2 kernels, stride 2, ELU activations
2 × { conv2d 128 feature maps with 3 × 3 kernels and ELU activations }
deconv2d 128 feature maps with 2 × 2 kernels, stride 2, ELU activations
2 × { conv2d 64 feature maps with 3 × 3 kernels and ELU activations }
deconv2d 64 feature maps with 2 × 2 kernels, stride 2, ELU activations
2 × { conv2d 32 feature maps with 3 × 3 kernels and ELU activations }
deconv2d 32 feature maps with 2 × 2 kernels, stride 2, ELU activations
conv2d 3 feature maps with 1 × 1 kernels, sigmoid activations"}]
SJNDWNOlg | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Image retrieval is an important problem both for academic research and for industrial applications Although it has been studied for many years (Sivic & Zisserman 2003} Philbin et al.]2007Tolias et al.] 2015), it is still a challenging task. Generally, image retrieval is divided into two groups. The first one is the category-level image retrieval (Sharma & Schiele|2015), in which an image in the dataset is deemed to be similar to the query image if they share the same class or they are similar ir shape and local structures. The other group is the instance-level image retrieval (Tolias et al.]2015) in which an image is considered to match the query if they contain the same object or the same scene. The instance-level image retrieval is harder in that the retrieval method need to encode th local and detailed information in order to tell two images apart, e.g., the algorithm should be able to detect the differences between the Eiffel Tower and other steel towers although they have similar shapes. In this paper, we focus on the instance-level image retrieval.\nTraditionally, visual instance retrieval is mainly addressed by the BoF (bag of features) based meth ods using the local feature descriptors such as SIFT (Lowe]2004). In order to boost the retrieval performances, post-processing techniques such as query expansion (Chum et al.]2007) and spatial. verification (Philbin et al.|2007) are also employed.\nWith the decisive victory (Krizhevsky et al.]2012) over traditional models in the ImageNet (Rus. sakovsky et al.]2015) image classification challenge, convolutional neural networks (Lecun et al. 1998) continue to achieve remarkable success in diverse fields such as object detection (Liu et al.. 2015 , Shaoqing Ren2015), semantic segmentation (Dai et al.2016) and even image style trans fer (Gatys et al.|2016). Networks trained on the Imagenet classification task can generalize quite well to other tasks, which are either used off-the-shelf (Razavian et al.f2014a) or fine-tuned on the task-specific datasets (Azizpour et al.|2014) Long et al.| 2015). Inspired by all these, researchers in the field of image retrieval also shift their interest to the CNNs. Their experiments have showr promising and surprising results (Babenko et al.| 2014] Razavian et al. 2014c}Tolias et al.]2015] which are on par with or surpass the performances of conventional methods like BoF and VLAL. (vector of locally aggregated descriptors) (Jegou et al.l. 2010 Arandielovic & Zisserman2013"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Despite all these previous advances (Babenko et al.]2014] Babenko & Lempitsky2015) Tolias et al.2015) on using CNNs for image feature representation, the underlying factors that contribute to the success of off-the-shelf CNNs on the image retrieval tasks are still largely unclear and un- explored, e.g., which layer is the best choice for instance retrieval, the convolutional layer or the fully-connected layer? What is the best way to represent the multi-scale information of an image? Clarifying these questions will help us advance a further step towards building a more robust and accurate retrieval system. Also in situations where a large numbers of training samples are not avail- able, instance retrieval using unsupervised method is still preferable and may be the only option.\nIn this paper, we aim to answer these questions and make three novel contributions. Unlike pre.. 
vious papers, we explicitly choose five factors to study the image representations based on CNNs. and conduct extensive experiments to evaluate their impacts on the retrieval performances. We also. give detailed analysis on these factors and give our recommendations for combining them. Dur-. ing experiments, we borrow wisdoms from literatures and evaluate their usefulness, but find that. they are not as effective as some of the simpler design choices. Second, by combining the insights. obtained during the individual experiments, we are able to propose a new multi-scale image rep-. resentation, which is compact yet effective. Finally, we evaluate our method on four challenging. datasets, i.e., Oxford5k, Paris6k, Oxford105k and UKB. Experimental results show that our method is generally applicable and outperforms all previous methods on compact image representations by. a large margin.\nMulti-scale image representation.Lazebnik et al.(2006) propose the spatial pyramid matching approach to encode the spatial information using BoF based methods. They represent an image us- ing a pyramid of several levels or scales. Features from different scales are combined to form the image representation in such a way that coarser levels get less weight while finer levels get more weight. Their argument is that matches found in coarser levels may involve increasingly dissimilar image features. In our paper, we also explore the multi-scale paradigm in the same spirit using the convolutional feature maps as the local descriptors. We find that the deep features from the convolu- tional feature maps are distinct from the traditional descriptors: the weighted sum of different level of features shows no superior performances than a simple summation of them.Kaiming et al.(2014) devise an approach called SPP (spatial pyramid pooling). In SPP, feature maps of the last convo- lutional layer are divided into a 3 or 4 scale pyramid. First the regional features in each scale are concatenated, then the scale-level features are concatenated to a fixed length vector to be forwarded to the next fully-connected layers. We find that this strategy is ineffective for unsupervised instance retrieval, leading to inferior performances compared to other simple combination methods (see the part about multi-scale representation in section |5.2[for more details.).\nImage representation using off-the-shelf CNNs. Gong et al.(2014) propose the MOP (multi- scale orderless pooling) method to represent an image in which VLAD is used to encode the level 2 and level 3 features. Then features from different scales are PCA-compressed and concatenated to form the image features. This method is rather complicated and time-consuming. At the same time,Babenko et al.(2014) use Alexnet (Krizhevsky et al.2012) trained on the Imagenet 1000-class classification task and retrain the network on task-related dataset. The retraining procedure gives a boost to the retrieval performances. Instead of using the output of the fully-connected layers as the image feature representations,Babenko & Lempitsky[(2015) use the output feature maps of last con- volutional layer to compute the image features. Recently, instead of sum-pooling the convolutional features, Tolias et al.[(2015) use max-pooling to aggregate the deep descriptors. Their multi-scale method, called R-MAC (regional maximum activation of convolutions), further improves the pre- vious results on four common instance retrieval datasets. 
Our work differs from these papers in that we explicitly explore the various factors that underpin the success of unsupervised instance retrieval, which have not been fully explored and analysed. By carefully choosing the different setting for each factor and combining them in a complementary way, we show that a large improvement can be achieved without additional cost."}, {"section_index": "2", "section_name": "3.1 CNN FEATURES FOR INSTANCE RETRIEVAL", "section_text": "In this paper, we are mainly interested in extracting compact and discriminative image features using the off-the-shelf CNNs in an efficient way. For a given image I, we simply subtract the mean value of the RGB channels from the original image and do not do other sophisticated preprocessing. Then the image is fed into the convolutional network and goes through a series of convolutions, non-linear activations and pooling operations. The feature activation maps of a certain layer can be interpreted as the raw image features, based on which we build the final image features. These feature maps form a tensor of size K × H × W, where K is the number of feature channels, and H and W are the height and width of a feature map. Each feature map represents a specific pattern which encodes a small part of information about the original image. If we represent the set of feature maps as F = {F_i}, i = 1, 2, ..., K, where F_i is the ith activation feature map, then the most simple image feature is formulated as:

f = [f_1, f_2, \dots, f_i, \dots, f_K]^T    (1)

In the above equation (1), f_i is obtained by applying the feature aggregation method (see section 3.2) over the ith feature map F_i. Throughout this paper, we use feature maps after the non-linear activations (ReLU) so that the elements in each feature map are all non-negative. We also experiment with feature maps prior to ReLU, but find that they lead to inferior performances. After the image feature representation is obtained, post-processing techniques such as PCA and whitening can be further applied."}, {"section_index": "3", "section_name": "3.2 IMPACTING FACTORS ON PERFORMANCE", "section_text": "When we employ off-the-shelf CNNs for the task of instance-level image retrieval, a natural question is: what kind of design choices should we make in order to make full use of the representational power of existing models? In this section, we summarize the five factors that may greatly impact the performance of the final image retrieval system. In section 5.2, we will show our experimental results on each key factor. Before we delve into the impacting factors, first we will give a brief introduction about how to represent an image using the activation feature maps of a certain layer.
Feature aggregation and normalization. After the feature maps of a certain layer are obtained, it is still challenging to aggregate the 3-dimensional feature maps to get compact vector representations for images. Previous papers use either sum-pooling (Babenko & Lempitsky, 2015) or max-pooling (Tolias et al., 2015) followed by l2-normalization. Sum-pooling over a particular feature map F_i is expressed as

f_i = \sum_{m=1}^{H} \sum_{n=1}^{W} F_i(m, n), \quad i \in \{1, 2, \dots, K\},    (2)

while max-pooling is given by

f_i = \max_{m, n} F_i(m, n),    (3)

where m, n range over all the possible values of the spatial coordinate of size H × W. In this paper, for the first time, different combinations of aggregation and normalization methods (l2 and l1 in the manner of RootSIFT (Arandjelovic & Zisserman, 2012)) are evaluated and their results are reported.
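For concreteness, equations (2) and (3) together with the two normalization choices amount to a few lines of numpy. This is a sketch: the RootSIFT-style l1 branch (l1-normalize, then element-wise square root) follows the cited convention and is an interpretation of the text rather than the authors' released code.

```python
# Aggregate a K x H x W activation tensor into a K-dim image feature:
# sum- or max-pool each feature map (Eqs. (2)/(3)), then normalize.
import numpy as np

def aggregate(fmap, pooling="max", norm="l2"):
    if pooling == "sum":                 # Eq. (2): f_i = sum_{m,n} F_i(m, n)
        f = fmap.sum(axis=(1, 2))
    else:                                # Eq. (3): f_i = max_{m,n} F_i(m, n)
        f = fmap.max(axis=(1, 2))
    if norm == "l1":                     # RootSIFT-style: l1-normalize, then sqrt
        f = f / (np.abs(f).sum() + 1e-12)
        return np.sqrt(np.maximum(f, 0.0))
    return f / (np.linalg.norm(f) + 1e-12)

fmap = np.random.rand(512, 37, 50)       # stand-in for conv5_4 maps of one image
print(aggregate(fmap, "max", "l2").shape)  # (512,)
```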
Output layer selection. Zeiler & Fergus (2014) have shown that image features aggregated from the feature activation maps of certain layers have interpretable semantic meanings. Gong et al. (2014) and Babenko et al. (2014) use the output of the first fully-connected layer to obtain the image features, while Babenko & Lempitsky (2015) and Tolias et al. (2015) use the output feature maps of the last convolutional layer. But these choices are somewhat subjective. In this paper, we extract dataset image features from the output feature maps of different layers and compare their retrieval performances. Based on the findings in this experiment, we choose the best-performing layer and also come up with a layer ensemble approach which outperforms state-of-the-art methods (see section 5.3).
Image resizing. Famous models such as Alexnet (Krizhevsky et al., 2012) and VGGnet (Simonyan & Zisserman, 2014) all require that the input images have fixed size. In order to meet this requirement, previous papers (Gong et al., 2014; Babenko & Lempitsky, 2015) usually resize the input images to the fixed size. We postulate that the resizing operation may lead to the distortion of important information about the objects in the natural images. Ultimately, this kind of operation may hurt the discriminative power of image features extracted from the network, thus degrading the retrieval performances. For the task of image retrieval, we think it is best to keep the images their original sizes and feed them directly to the network whenever possible. In this paper, three image resizing strategies are explored:
Both the height and width of the dataset images are set to the same fixed value (denoted as two-fixed).
The minimum of each dataset image's size is set to a fixed value. (The aspect ratio of the original image is kept.) (denoted as one-fixed)
The images are kept their original sizes (denoted as free).
Multi-scale feature representation. Unlike local feature descriptors such as SIFT (Lowe, 2004), the feature vector extracted from the deep convolutional networks for an image is a global descriptor which encodes the holistic information. When used for image retrieval, this kind of features still lacks the detailed and local information desired to accurately match two images. Inspired by spatial pyramid matching (Lazebnik et al., 2006) and SPP (Kaiming et al., 2014), we explore the feasibility of applying this powerful method to obtain discriminative image features. An image is represented by an L-level pyramid, and at each level, the image is divided evenly into several overlapping or non-overlapping regions. The vector representations of these small regions are computed, then the regional vectors are combined to form the image feature vectors. The single-scale representation of an image is just a special case of the multi-scale method in which the number of levels L equals 1.
(a) level 1 (b) level 2 (c) level 3
Figure 1: An illustration of multi-scale representation of an image. The whole image is divided into 3 levels from the coarsest (level 1) to the finest (level 3). At each level, the image is divided into different numbers of equal-sized regions.
Figure 1 shows an example of 3-level representations of an image. The time cost of re-feeding those small regions into the network to compute the regional vectors would be huge, thus unacceptable for instance retrieval tasks. Inspired by the work of Girshick (2015) and Tolias et al. (2015), we assume a linear projection between the original image regions and the regions in the feature maps of a certain layer. Then the regional feature vectors can be efficiently computed without re-feeding the corresponding image regions. In section 5.2, various settings for the multi-scale and scale-level feature combination methods are explored and their retrieval performances are reported and analysed.
PCA and whitening. Principal Component Analysis (PCA) is a simple yet efficient method for reducing the dimensionality of feature vectors and decorrelating the feature elements. Previous work (Babenko et al., 2014; Jegou et al., 2010) has shown evidence that PCA and whitened features can actually boost the performances of image retrieval. In this paper, we further investigate the usefulness of PCA and whitening within our pipeline and give some recommendations.
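As a rough illustration of the multi-scale representation just described, the sketch below max-pools regional vectors directly on a conv5_4-sized feature map, sums the l2-normalized per-level vectors and l2-normalizes again. The grid sizes follow the "4 scale v3" setting evaluated later; overlap between neighboring regions is omitted for brevity, and the feature map is random stand-in data.

```python
# Multi-scale aggregation sketch: regional max-pooling on the feature map
# (the linear-projection shortcut), equal-weight summation of l2-normalized
# level vectors, and a final l2 normalization. Assumes H, W >= max grid size.
import numpy as np

def l2n(v):
    return v / (np.linalg.norm(v) + 1e-12)

def multiscale_feature(fmap, grids=(1, 2, 3, 6)):   # "4 scale v3": 1x1..6x6
    K, H, W = fmap.shape
    feat = np.zeros(K)
    for n in grids:                      # one pyramid level with n x n regions
        level = np.zeros(K)
        hs = np.linspace(0, H, n + 1).astype(int)
        ws = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                region = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                level += region.max(axis=(1, 2))    # max-pool each region
        feat += l2n(level)               # equal weight for every level
    return l2n(feat)

print(multiscale_feature(np.random.rand(512, 37, 50)).shape)  # (512,)
```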
"}, {"section_index": "4", "section_name": "4 IMPLEMENTATION", "section_text": "We use the open source deep learning framework Caffe (Jia et al., 2014) for our whole experiments. The aim of this research is to investigate the most effective ways to exploit the feature activations of existing deep convolutional models. Based on past practices for networks to go deeper (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2015), a consideration for moderate computational cost, and also the results from Tolias et al. (2015) that deeper networks work better than shallower ones, we decide to use the popular VGG-19 model (Simonyan & Zisserman, 2014) trained on ImageNet as our model.
Network transformation. The original VGG-19 network only accepts an image of fixed size (224 × 224), which is not the optimal choice when extracting image features for retrieval tasks. In order for the network to be able to process an image of arbitrary size (of course, the image size cannot exceed the GPU's memory limit) and for us to experiment with different input image resizing strategies, we adapt the original VGG-19 network and change the fully-connected layers to convolutional (Long et al., 2015) layers. For more details about network transformations, see Appendix A.
In this section, we first introduce the datasets used and the evaluation metrics. Then we report our experimental results for different impacting factors and give detailed analysis. In the last part, we show the performance of our method considering all these impacting factors and compare our method with the state-of-the-art methods on four datasets.
The Oxford5k dataset (Philbin et al., 2007) contains 5062 images crawled from Flickr by using 11 Oxford landmarks as queries. A total of 11 groups of queries, each having 5 queries with their ground truth relevant image list, are provided. For each query, a bounding box annotation is also provided to denote the query region. During experiments, we report results using the full query images (denoted as full-query) and image regions within the bounding boxes of the query images (denoted as cropped-query). The performance on this dataset is measured by mAP (mean average precision) over all queries.
The Paris6k dataset (Philbin et al., 2008) includes 6412 images¹ from Flickr which contain 11 landmark buildings and general scenes from Paris. Similar to the Oxford5k dataset, a total of 55 queries belonging to 11 groups and the ground truth bounding boxes for each query are provided. The performance is reported as mAP over the 55 queries.
The Oxford105k² dataset contains the original Oxford5k dataset and an additional 100,000 images (Philbin et al., 2007) from Flickr. The 100,000 images are disjoint from the Oxford5k dataset and are used as distractors to test the retrieval performance when the dataset scales to a larger size. We use the same evaluation protocol as for Oxford5k on this dataset.
The UKB dataset (Nister & Stewenius, 2006) consists of 10200 photographs of 2550 objects, each object having exactly 4 images. The pictures of these objects are all taken indoors with large variation in orientation, scale, lighting and shooting angles. During experiments, each image is used to query the whole dataset. The performance is measured by the average number of same-object images in the top-4 results.
¹ Following conventions, 20 corrupted images from this dataset are removed, leaving 6392 valid images.
² The image named "portrait_000801.jpg" was corrupted and manually removed from this dataset."}, {"section_index": "5", "section_name": "5.2 RESULTS AND DISCUSSION", "section_text": "In this section, we report the results of experiments on the impact of different factors and analyse their particular impact. The experiments in this section are conducted on the Oxford5k dataset.
Feature aggregation and normalization. In this experiment, we compare the different combinations of feature aggregation (sum-pooling and max-pooling) and normalization methods (l2 and l1) in terms of their retrieval performances. We use features from the layer conv5_4 with the free input image size. The results (%) are shown in Table 1. Sum-pooling followed by l1 normalization leads to slightly better results than the other combinations, especially for the cropped-query. However, after preliminary experiments with multi-scale versions of sum-l1 and max-l2, we find that max-l2 is much better than sum-l1. For example, employing a 4-level representation of images in the Oxford5k dataset, for the case of full-query, we find that the mAP for the max-l2 method is 65.1, while the mAP for sum-l1 is only 51.3 (even lower than the single-scale representation). Based on these results, we stick to max-l2 in computing the final image features.
Table 1: Comparison between different combinations of feature aggregation and normalization methods.
Method    full-query    cropped-query
max-l1    52.4          48.0
sum-l2    58.0          52.6
sum-l1    60.3          56.3
max-l2    60.1          53.5
Output layer selection. In order to verify their feasibility for instance retrieval, we extract from the network the output feature maps of different layers and aggregate them to get the image feature vectors. We evaluate the performances using features from layer conv3_3 up to the highest fc7-conv layer (except the pooling layers, i.e. pool3, pool4 and pool5). Single-scale representations of the dataset images are used in this experiment.
Figure 2 shows the retrieval performances of image features corresponding to different layers. The retrieval performances for both the full and cropped queries increase as the layer rises from the lower layer conv3_3 to higher layers and plateau at layers conv5_4 and fc6-conv, then the performances begin to decrease as the layers increase to fc7-conv. The result shows that features from lower layers such as conv3_3 and conv3_4 are too generic and lack the semantic meanings of the object in the image, thus rendering them unsuitable for instance retrieval. On the other hand, features from the highest layer (fc7-conv) contain the semantic meaning of objects but lack the detailed and local information needed to match two similar images.
The best results are obtained at layers conv5_4 (0.601) and fc6-conv (0.618), where the feature vectors combine both the low-level detailed information and the high-level semantic meanings of the image. Based on these observations and the requirement for keeping the image features compact, we mainly focus on image features from the layer conv5_4 (dimensionality = 512, compared to 4096 for layer fc6-conv).
Figure 2: Performance comparison between different layers. This experiment is conducted using the free input image size.
Image resizing. We experiment with 3 kinds of image resizing strategies, which are detailed in section 3.2. We use grid search to find the optimal size for the two-fixed and one-fixed strategies. As is shown in Table 2, the free input strategy outperforms or is close to the other two strategies: it performs especially well in the cropped-query case. This experiment shows that changing the image aspect ratio (two-fixed) distorts the image information, thus reducing the performance dramatically. The one-fixed way is better than the two-fixed method, but information loss still occurs due to the resizing operation. The free method is able to capture more natural and un-distorted information from the images, which explains its superior performance over the other two methods. It is best to keep the images their original sizes for instance retrieval tasks.
Table 2: Comparison between different image resizing strategies. The numbers in the parentheses denote the sizes at which the maximum mAPs are achieved.
Method       full-query    cropped-query
two-fixed    55.5 (864)    38.7 (896)
one-fixed    59.0 (800)    39.3 (737)
free         58.0          52.6
The benefit of multi-scale representation. In our multi-scale approach, the regional vectors from each scale are simply added together and l2-normalized to form the scale-level feature vectors. Then features from different scales are combined and l2-normalized to form the image representations. In fact, we also experimented with two methods which concatenate features from different scales. The first method is in the same vein as spatial pyramid pooling (Kaiming et al., 2014), i.e., region-level as well as scale-level features are all concatenated to form a high-dimensional vector. In the second method, region-level features are added while scale-level features are concatenated. We find that these two methods all lead to inferior results. The performance drop for the first in the case of cropped-query can be as large as 41%. The high dimensionality of the concatenated features (larger than 1.5k) will also lead to longer running times. Considering all these, we do not use concatenation of features in the following experiments.
Table 3: Multi-scale representation: comparison between different methods. "overlap" denotes whether the regions in each level (see Figure 1) have some overlapping areas. "s2", "s3" mean that overlap occurs in level 2 or 3. "weighing" means whether the features from each level are added using the same weight or different weights. "version" means the different choice of the number of regions in each scale.
       scale    overlap    weighing    version    full-query    cropped-query
(a1)   2        ✗          ✗           -          63.5          59.0
(a2)   2        ✗          ✓           -          63.9          61.0
(b1)   3        ✗          ✗           -          64.2          60.9
(b2)   3        ✗          ✓           -          62.6          61.0
(b3)   3        s2         ✗           -          64.8          60.8
(c1)   4        s3         ✗           v1         65.1          61.4
(c2)   4        s3         ✓           v1         64.8          60.7
(c3)   4        s2,s3      ✗           v1         65.5          60.8
(c4)   4        s2,s3      ✗           v2         65.9          61.5
(c5)   4        s2,s3      ✓           v2         65.4          61.2
(c6)   4        ✗          ✗           v3         64.5          61.3
(c7)   4        s3         ✗           v3         65.8          62.2
(c8)   4        s2,s3      ✗           v3         66.3          62.6
We conduct extensive experiments to decide the best configurations for the multi-scale approach and report our results in Table 3. First, we explore the impact of the number of scales on the retrieval performances. For the 2 and 3 scale representations, the region numbers for each level are {1 × 1, 2 × 2} and {1 × 1, 2 × 2, 3 × 3}. For the 4 scale representation, 3 versions are used and they differ in the number of regions in each scale: for "v1", "v2", and "v3", the numbers of regions are {1 × 1, 2 × 2, 3 × 3, 4 × 4}, {1 × 1, 2 × 2, 3 × 3, 5 × 5} and {1 × 1, 2 × 2, 3 × 3, 6 × 6}. Table 3 (a1)(b1)(c6) show the performances of using 2, 3, and 4 scales to represent the dataset images, respectively. Clearly, more scale levels improve the results and, in the case of cropped-query, increase the performance by an absolute 2%.
We also conduct experiments to find whether the weighing of different scales leads to improved performance. The weighing method for features from different scales is similar to the manner of spatial pyramid matching (Lazebnik et al., 2006): features from coarser levels are given less weight while features from the finer levels are given more weight. Suppose the features of different scales for an L-scale representation are f_1, ..., f_L; then the image representation f is expressed as:

f = \frac{1}{2^{L-1}} f_1 + \sum_{i=2}^{L} \frac{1}{2^{L-i+1}} f_i

More details can be found in Lazebnik et al. (2006). Comparing the results of rows (a1) and (a2), it seems that weighing different scales leads to better performance. But after more experiments, we find that the weighing method generally leads to inferior results as the number of scales increases, e.g., compare the results of row pairs (b1)(b2) and (c1)(c2). These results suggest that deep features are different from the traditional local feature descriptors such as SIFT. We should exercise caution when we apply the traditional wisdom found in SIFT to the deep convolutional descriptors, which is also suggested in Babenko & Lempitsky (2015). Based on the results of this experiment, no weighing methods are used in computing our final image feature representations.
Figure 3: The number of principal components reserved vs mAP. We show the results of full and cropped query using the PCA and whitening matrix learned from Oxford5k itself and from Paris6k, denoted as "full-self", "full-paris" and "crop-self", "crop-paris".
Next, we look into the issue of overlapping between different scales and try to verify its usefulness. For each scale and its different versions, we set some overlapping areas between the neighboring regions in either one or two scales of the pyramid (for the exact configurations of overlap in all cases in Table 3, see Appendix B for the complete descriptions). From the row pairs (b1)(b3) and (c1)(c3), we can see that overlap increases the performance for full-query but slightly decreases the performance for cropped-query. But for 4 scale v3 (note the pair (c7)(c8)), we see a consistent improvement for both the full and cropped queries. So we decided to use overlap in levels 2 and 3 in computing our final features.
PCA and whitening. We perform PCA and whitening for the features extracted from the Oxford5k dataset using the PCA and whitening matrix learned from the Oxford5k or the Paris6k dataset, and l2-normalize these features to get the final image representations.
The retrieval results for 3 groups of features (from Table 3 (b3)(c1)(c8)) are shown in Table 4. Clearly, PCA and whitening lead to better performances.
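The PCA and whitening step described above is standard; the following sketch learns the whitened projection on one dataset's features and applies it to another's, followed by l2-normalization. The matrices here are random stand-ins for real CNN features.

```python
# PCA-whitening post-processing: learn mean and whitened projection on one
# dataset (e.g. Paris6k), apply to another (e.g. Oxford5k), then l2-normalize.
import numpy as np

def fit_pca_whiten(X, d):
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, _ = np.linalg.svd(Xc.T @ Xc / len(X))   # eigendecomposition of covariance
    P = U[:, :d] / np.sqrt(S[:d] + 1e-9)          # top-d whitened projection
    return mean, P

def apply_pca_whiten(X, mean, P):
    Y = (X - mean) @ P
    return Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)

X_paris = np.random.rand(6392, 512)    # stand-in for Paris6k features
X_oxford = np.random.rand(5062, 512)   # stand-in for Oxford5k features
mean, P = fit_pca_whiten(X_paris, d=512)
print(apply_pca_whiten(X_oxford, mean, P).shape)  # (5062, 512)
```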
For all 3 groups of features, PCA and whitening on the same dataset lead to insignificant improvements both in the case of full and cropped query. But after doing PCA and whitening on the Paris6k dataset, the results for both the full and cropped queries improve greatly. In fact, the improvement for the case of cropped-query is even more surprising. For example, for the third feature group, the improvements are 10.4% and 13.4% for the full and cropped queries. It should also be noted that as the number of principal components reserved increases, the performances for "PCA on self" and "PCA on Paris" differ greatly. As is shown in Figure 3, the performance for the former peaks at a relatively low dimension (around 100) and begins to decrease, while for the latter, the performance increases as the number of principal components gets larger and then plateaus.
Table 4: The impact of PCA and whitening. "PCA on self" and "PCA on Paris" mean that the corresponding features are post-processed by the PCA and whitening matrices learned on the Oxford5k and Paris6k datasets, respectively. The numbers in the parentheses indicate the dimensionality of features used for obtaining the corresponding results.
Table 5: Comparison with state-of-the-art methods. "single" means multi-scale features from a single layer (conv5_4) are used. "single, compression" uses the same features but compresses them to get the best performances. "layer ensemble" combines the similarity scores from layers conv5_4 and fc6-conv. The dimensionality of the combined feature is set to 1024 for compactness considerations. All our methods use PCA and whitening.
method                             D       Oxford5k         Paris6k          Oxford105k       UKB
                                           full   cropped   full   cropped   full   cropped
Jegou & Zisserman (2014)           128     43.3   -         -      -         35.3   -         3.40
Arandjelovic & Zisserman (2012)    128     44.8   -         -      -         37.4   -         -
Jegou & Zisserman (2014)           1024    56.0   -         -      -         50.2   -         3.51
Razavian et al. (2014b)            256     53.3   -         67.0   -         48.9   -         3.38
Babenko et al. (2014)              512     55.7   -         -      -         52.2   -         3.56
Babenko & Lempitsky (2015)         256     58.9   53.1      -      -         57.8   50.1      3.65
Arandjelovic et al. (2016)         256     62.5   63.5      72.0   73.5      -      -         -
Tolias et al. (2015)               512     -      66.8      -      83.0      -      61.6      -
ours (single)                      512     73.0   70.6      82.0   83.3      68.9   65.3      3.75
ours (single, compression)         -       73.2   71.2      83.0   84.0      68.9   65.8      3.76
ours (layer ensemble)              1024    75.6   73.7      85.7   85.9      71.6   69.2      3.81
Do the above results mean that we should always compute the PCA and whitening matrix from a dataset other than the query dataset itself? The short answer is no. We find that for UKB, learning the PCA and whitening matrix on the Oxford5k dataset gives inferior results compared to learning the PCA and whitening matrix on UKB itself (about a 2% drop in accuracy). This may be due to the large differences between the images of the two datasets, as the Oxford5k dataset mainly contains images of buildings while the images in UKB are mainly of small indoor objects.
We therefore recommend learning the PCA and whitening matrix on a similar dataset to achieve good performances."}, {"section_index": "6", "section_name": "5.3 COMPARISON WITH OTHER METHODS", "section_text": "Based on the previous experimental results and our analysis of the different impacting factors on retrieval performance, we propose a new multi-scale image feature representation. For a given image in the dataset, the whole process of image feature representation is divided into two steps. First, the input image is fed into the network without the resizing operation (the free way) and a 4-scale feature representation is built on top of the feature maps of layer conv5_4. During the multi-scale representation step, max-pooling of feature maps is used, and regional vectors from the same scale are added together and l2-normalized. After that, features from different scales are summed and l2-normalized again. The second step involves applying the PCA and whitening operations on features from the first step. The PCA and whitening matrix used is either learned from a different or the same dataset: specifically, for Oxford5k and Oxford105k, it is learned on Paris6k, while for Paris6k and UKB, it is learned on Oxford5k and UKB respectively. The final PCA-and-whitened image features are used for reporting our method's performances.
Layer ensemble. Inspired by previous work on model ensembles to boost classification performances (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), we consider fusing the similarity scores from different layers to improve the retrieval performances. Specifically, for two images, their similarity score is computed as the weighted sum of the scores from different layers (these weights sum to 1 so that the overall similarity score between two images is still in the range [0, 1]). We have evaluated various combinations of layers to see their performances and find that the best performance is achieved by combining the scores from conv5_4 and fc6-conv. For the fc6-conv features of an image, we use a 3-scale representation as the size of the output feature maps is already very small.
The fc6-conv features are compressed to low-dimensional vectors for faster computation. Our layer ensemble achieves 75.6% and 73.7% on Oxford5k for the full and cropped queries respectively, showing a large improvement over previous methods. This suggests that features from fc6-conv and conv5_4 are complementary. See Table 5 for the complete results on all four datasets.
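A sketch of the layer-ensemble scoring: with l2-normalized, non-negative features, each per-layer cosine score lies in [0, 1], and a convex combination keeps the fused score in that range. The equal 0.5/0.5 weights below are placeholders, not the tuned values.

```python
# Fuse per-layer similarity scores (e.g. conv5_4 and fc6-conv) as a weighted
# sum with weights summing to one; features are assumed l2-normalized.
import numpy as np

def cosine(a, b):
    return float(a @ b)                    # dot product of l2-normalized vectors

def ensemble_score(q_feats, d_feats, weights=(0.5, 0.5)):
    """q_feats/d_feats: per-layer feature vectors, e.g. (conv5_4, fc6_conv)."""
    return sum(w * cosine(q, d) for w, q, d in zip(weights, q_feats, d_feats))

q = [np.ones(512) / np.sqrt(512), np.ones(512) / np.sqrt(512)]
d = [np.ones(512) / np.sqrt(512), np.ones(512) / np.sqrt(512)]
print(ensemble_score(q, d))                # 1.0 for identical features
```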
Comparison. We compare the performance of our method with several state-of-the-art methods which use small-footprint representations and do not employ complicated post-processing techniques such as geometric re-ranking (Philbin et al., 2007) and query expansion (Arandjelovic & Zisserman, 2012). The results are shown in Table 5. On all the datasets and in the different scenarios (full or cropped), our method achieves the best performance with comparable cost. For the Oxford5k (cropped) and UKB datasets, the relative improvements of our best results over previous methods (from Tolias et al. (2015) and Babenko & Lempitsky (2015)) are 10.3% and 4.4%.
In this paper, we focus on instance retrieval based on features extracted from CNNs. We have conducted extensive experiments to evaluate the impact of five factors on the performances of image retrieval and analysed their particular impacts. Based on the insights gained from these experiments, we have proposed a new multi-scale image representation which shows superior performances over previous methods on four datasets. When combined with the technique "layer ensemble", our method can achieve further improvements. Overall, we have provided a viable and efficient solution for applying CNNs in an unsupervised way to datasets with a relatively small number of images.
R. Arandjelovic and A. Zisserman. Three things everyone should know to improve object retrieval. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2911-2918, June 2012.
R. Arandjelovic and A. Zisserman. All about VLAD. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 1578-1585, June 2013.
R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic. NetVLAD: CNN architecture for weakly supervised place recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. From generic to specific deep representations for visual recognition. CoRR, abs/1406.5774, 2014. URL http://arxiv.org/abs/1406.5774.
Artem Babenko and Victor Lempitsky. Aggregating local deep features for image retrieval. In The IEEE International Conference on Computer Vision (ICCV), December 2015.
Jifeng Dai, Kaiming He, and Jian Sun. Instance-aware semantic segmentation via multi-task network cascades. In CVPR, 2016.
Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
Ross Girshick. Fast R-CNN. In International Conference on Computer Vision (ICCV), 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Yunchao Gong, Liwei Wang, Ruiqi Guo, and Svetlana Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In ECCV, pp. 392-407. Springer International Publishing, Cham, 2014.
H. Jegou and A. Zisserman. Triangulation embedding and democratic aggregation for image search. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3310-3317, June 2014.
H. Jegou, M. Douze, C. Schmid, and P. Perez. Aggregating local descriptors into a compact image representation. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 3304-3311, June 2010.
He Kaiming, Zhang Xiangyu, Ren Shaoqing, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In European Conference on Computer Vision, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single shot multibox detector. arXiv preprint arXiv:1512.02325, 2015.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015.
David G. Lowe.
Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004. ISSN 1573-1405. doi: 10.1023/B:VISI.0000029664.99615.94. URLhttp: //dx.d0i.0rg/10.1023/b:v1s1.0000029664.99615.94\nD. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pp. 2161-2168, June 2006. J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Lost in quantization: Improving particular object retrieval in large scale image databases. In Computer Vision and Pattern Recognition, 2008. CVPR 2008 IEEE Conference on, pp. 1-8, June 2008. doi: 10.1109/CVPR.2008.4587635.\nAli Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. Visual instance retrieval with deep convolutional networks. CoRR, abs/1412.6574, 2014b. URLhttp://arxiv.0rg/abs/1412.6574\nAli Sharif Razavian. Josephine Sullivan. Atsuto Maki. and Stefan Carlsson. Visual instance retrieval with dee convolutional networks. CoRR, abs/1412.6574, 2014c. URLhttp://arxiv.org/abs/1412. 6574\nOlga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andre Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.\nRoss Girshick Jian Sun Shaoqing Ren, Kaiming He. Faster R-CNN: Towards real-time object detection witl region proposal networks. arXiv preprint arXiv:1506.01497. 2015.\nAli Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. Cnn features off-the-shelf: An. astounding baseline for recognition. In Proceedings of the 2014 IEEE Conference on Computer Vision and. Pattern Recognition Workshops, CVPRW '14, pp. 512-519, Washington, DC, USA, 2014a. IEEE Computer Society. ISBN 978-1-4799-4308-1. doi: 10.1109/CVPRW.2014.131. URLhttp://dx. doi.0rg/10. 1109/CVPRW.2014.131\nGaurav Sharma and Bernt Schiele. Scalable nonlinear embeddings for semantic category-based image retrieval In ICCV, 2015.\nJosef Sivic and Andrew Zisserman. Video google: A text retrieval approach to object matching in videos. I Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on, pp. 1470-1477. IEEE, 2003\nC. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Ra- binovich. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, June 2015. doi: 10.1109/CVPR.2015.7298594. G. Tolias, R. Sicre, and H. Jegou. Particular object retrieval with integral max-pooling of CNN activations ArXiv e-prints, November 2015.\nMatthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Compute vision-ECCV 2014, pp. 818-833. Springer, 2014."}, {"section_index": "7", "section_name": "APPENDIX A THE NETWORK TRANSFORMATIONS", "section_text": "In order for the network to process images of varying sizes, We change the layer fc6, fc7 and fc8. from the original model to fc6-conv, fc7-conv and fc8-conv. It should be noted there are certair. constraints on the input image size due to the network's inherent design. The original network. accepts an image of fixed size (224 224), so the output feature maps of the last convolutional laye. conv5_4 is of size 512 7 7. As a result, when we change the operation between layer conv5_. 
and fc6 from inner product to convolution, each filter bank kernel between conv5_4 and fc6-con. has size 7 7. This in turn means that if we are to extract features from layer fc6-conv and above the minimum size of an input image must equal to or be greater than 224. For output feature maps. of layer conv5_4 and below, there are no restrictions on the input image size. During the experiment. when we are extracting features from layer fc6-conv and above, the minimum size of an image is se. to be 224 if it is less than 224.\nIn this paper, the overlaps between different regions occur in the 3 and 4 scale pyramid. A single region in each scale can be specified as the combination of a slice from the the width and heigh of the feature map. If a scale has N N regions, then the number of slices in width and heigh of the feature map are both N. We use the same set of slices for both the width and height in this experiment.\nIn 3 scale (see Table3](b3)), overlap occurs only in scale 2, and the slice (in the proportion to the length of feature map width or height: {(0, ?), (3, 1)}. In 4 scale v1 (Table|3|(c1)-(c3)), the slices for scale 2 and 3 are {(0, 3), (, 1)} and {(0, 3), (4, 3), (3, 1)}. In 4 scale v2 (Table[3|(c4)(c5)) the slices for scale 2 and 3 are {(0, 5), (3, 1)} and {(0, ), (5, 5), (?, 1)}. In 4 scale v3 (Table3 (c6)-(c8)), the slices are {(0, ), (?, 1)} and {(0, ), (, ), (3, 1)}, for scale 2 and 3, respectively."}] |
rJJRDvcex

LAYER RECURRENT NEURAL NETWORKS

Weidi Xie, Alison Noble & Andrew Zisserman
Department of Engineering Science, University of Oxford, UK

ABSTRACT

In this paper, we propose a Layer-RNN (L-RNN) module that is able to learn contextual information adaptively using within-layer recurrence. Our contributions are three-fold: (i) we propose a hybrid neural network architecture that interleaves traditional convolutional layers with L-RNN modules for learning long-range dependencies at multiple levels; (ii) we show that an L-RNN module can be seamlessly inserted into any convolutional layer of a pre-trained CNN, and the entire network then fine-tuned, leading to a boost in performance; (iii) we report experiments on the CIFAR-10 classification task, showing that a network with interleaved convolutional layers and L-RNN modules achieves results (5.39% top1 error) comparable to ResNet-164 (5.46%) using only 15 layers and fewer parameters; and on the PASCAL VOC2012 semantic segmentation task, we show that the performance of a pre-trained FCN network can be boosted by 5% (mean IOU) by simply inserting Layer-RNNs.

1 INTRODUCTION

In computer vision tasks, such as image classification or pixel-level prediction, multi-scale contextual information plays a very important role in achieving high performance. The original architectures for these tasks (e.g. He et al. (2016a); Krizhevsky et al. (2012); Long et al. (2015); Ronneberger et al. (2015); Simonyan & Zisserman (2015); Szegedy et al. (2015)) were able to obtain multi-scale context with a large spatial footprint by the combination of filters through the layers of the network,
so that a large receptive field was effectively built up. Indeed, the final layers of these networks use average pooling or fully connected layers (convolution with a large kernel) so that the effective receptive field covers the entire input image patch. More recent pixel prediction architectures have used dilated convolutions (Yu & Koltun, 2016; Chen et al., 2016), which are able to aggregate multi-scale contextual information without losing resolution (due to the spatial pooling and strides in the original architectures), and without incurring the penalty of having to learn many parameters for convolutions with very large kernels.

It is worth noting that (broadly) recurrence can be used in feed-forward multi-layer convolutional neural network architectures in two ways: between layers, and within layers. For example, between-layer recurrence was used for scene labelling in (Liang et al., 2015; Pinheiro & Collobert, 2014), with convolutions applied recursively on top of feature maps from different layers or raw input images. And in (Zheng et al., 2015), spatial dependencies are modelled explicitly for semantic segmentation with densely connected Gaussian CRFs, by iterated application of bilateral filtering using between-layer recurrence.

By contrast, our Layer-RNN architecture falls into the second category, where within-layer recurrence is used to capture dependencies. Others have learnt contextual information from within-layer recurrence for tasks such as object detection (Bell et al., 2016), and low-level vision problems, such as de-noising, colourization and smoothing (Liu et al., 2016). We postpone discussing in detail the relationships of the proposed Layer-RNN modules to these architectures, and to that of ReNet (Visin
et al., 2015) and ReSeg (Visin et al., 2016), until we have introduced the L-RNN in Section 2.

The architecture of the network (Figure 1) is composed of two parts. Local features are calculated by the low-level CNN module; the Layer-RNN (L-RNN) module, consisting of several 1D spatial RNNs, is applied to capture the spatial dependencies. By scanning across the feature maps in different directions, the complete L-RNN is able to learn the receptive field in an adaptive way, up to the size of the entire image. These two modules can be combined to build networks in various ways; for example, an L-RNN module can be stacked on top of several CNN modules at the final layer, or CNN and L-RNN modules can be interleaved at multiple levels.

[Figure 1: (A) the hybrid architecture: the input is passed through a CNN module, and the resulting CNN features are fed to a Layer-RNN module built from two spatial recurrent modules, whose outputs are fused by concatenation or summation. In (B), two 1D spatial RNNs are applied to scan along each row independently from different directions; hidden states are calculated at every spatial step, and the output feature maps can either be concatenated or summed up. The receptive field for the black pixel in (B) is labelled in orange. In (C), two 1D spatial RNNs are applied to scan along each column from two directions. The combination of (B) and (C) defines the L-RNN module that is able to propagate information over the entire image.]

As shown in Figure 1, the Layer-RNN (L-RNN) module is a combination of the 1D spatial recurrent modules (B) and (C). In each module, there are two 1D RNNs scanning across the feature maps horizontally or vertically from two directions (bidirectional spatial RNNs), and their hidden states are updated at every spatial step. Consequently, for each of the horizontal and vertical directions, two output feature maps are obtained with the same width and height as the input feature maps. In our implementation, we simply sum up these output feature maps (an alternative is to concatenate the output feature maps, but that would increase the number of parameters).

More formally, assume the feature maps (layer L) coming into the L-RNN module are X^L ∈ R^{m×n×d} and the output is X^{L+1} (layer L+1), where m, n, d refer to the width, height, and the number of feature maps respectively for the input layer. For simplicity, assume the input to the 1D spatial RNNs from X^L is a feature vector at each spatial location; each row or column of the feature maps is treated as one sequence. When scanning from left to right, the feature responses for location (i, j) can be calculated:

x^{L+1}_{i,j} = f(U x^L_{i,j} + V x^{L+1}_{i,j-1} + b)    (left to right)    (1)

where x^L_{i,j} ∈ R^{d×1} and x^{L+1}_{i,j} ∈ R^{D×1}; D denotes the number of nodes used in the 1D spatial RNN, and f refers to the non-linearity function. 1D spatial RNNs scanning in the other directions can be calculated similarly. Notice that the first term of equation (1) encodes local information independently, resembling the normal convolutional layer, and the second term characterizes the within-layer recurrence (U is a convolution matrix, V a recurrence matrix). We make use of this observation in Section 4.
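To make Eq. (1) concrete, the following NumPy sketch (ours; tanh stands in for the non-linearity f, and only the left-to-right direction is shown — the other three scans differ only in the axis and order) computes the response of one 1D spatial RNN over every row of a feature map.

    import numpy as np

    def scan_left_to_right(X, U, V, b, f=np.tanh):
        """X: (m, n, d) feature map; U: (D, d); V: (D, D); b: (D,).
        Returns the (m, n, D) response of Eq. (1), scanning each row
        independently from left to right (rows can be batched on a GPU)."""
        m, n, d = X.shape
        D = U.shape[0]
        out = np.zeros((m, n, D))
        for j in range(n):                       # spatial step along the row
            prev = out[:, j - 1, :] if j > 0 else np.zeros((m, D))
            # first term: local (convolution-like) projection; second: recurrence
            out[:, j, :] = f(X[:, j, :] @ U.T + prev @ V.T + b)
        return out

    X = np.random.randn(8, 8, 16)
    U, V, b = np.random.randn(32, 16), np.zeros((32, 32)), np.zeros(32)
    H = scan_left_to_right(X, U, V, b)           # (8, 8, 32)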
2.2 DISCUSSION AND RELATION TO OTHER WORK

As can be seen in Figure 1C, the effective receptive field can cover the entire image. However, the actual receptive field depends on the parameters of the RNNs, and can be learnt adaptively. As an insight into what is learnt, consider a separable filter, such as an axis-aligned 2D Gaussian. Such filters can be applied exactly by a composition of 1D Gaussian convolutions in the horizontal and vertical directions. The 1D spatial RNNs can approximate finite 1D convolutions of this type.

We next discuss the relation of the L-RNN to prior work. First, ReNets (Visin et al., 2015), an architecture completely made of 1D RNNs (i.e. no CNNs). In ReNets, the input images are first split into non-overlapping patches of size m × n × d, where m, n, d refer to width, height and feature channels respectively. The 1D RNNs take the flattened patch (mn × d) as input, and output a feature vector of size D × 1, where D refers to the number of nodes used in the RNNs. In contrast, we interleave the L-RNN and CNN modules. There are two benefits of this: first, CNNs are more efficient at capturing local features than RNNs, and the L-RNN stacked upon them is able to learn dependencies between local features (rather than between reformatted input channels); second, we are able to introduce more non-linearities between the hierarchical layers (through the convolution+ReLU and pooling layers), and an RNN provides non-linearities within the same layer.

The 2D-RNN, proposed in (Graves & Schmidhuber, 2009; Theis & Bethge, 2015), is able to scan across the image or feature maps row-by-row, or column-by-column sequentially, with each RNN node accepting input from three sources, namely projections of the current input and feedbacks from the two neighbouring nodes. By contrast, we use unidirectional 1D spatial RNNs, with each hidden node only accepting feedback from its previous node. Another advantage of our model is that rows or columns can be processed in parallel on GPUs, and training time is thereby shortened.

Bell et al. (2016) (Inside-Outside Net) and Visin et al. (2016) (ReSeg) describe similar ideas for object detection and semantic segmentation. Both architectures follow a pipeline that consists of a CNN feature extractor (VGG Net) followed by spatial RNNs at the final prediction stage. In contrast, we treat the L-RNN module as a general computational layer that can be inserted into any layer of modern architectures, and interleaved with CNN modules. This enables a network to learn contextual information in a flexible way at multiple levels, rather than with hand-crafted kernel sizes and receptive fields.

Note that the vanilla RNN unit consists of two terms, a local term and a recurrence term, where the local term is exactly the convolution operation. Therefore, the spatial RNN can be seen as a generalisation of the convolutional layer, and in the worst case, when the RNN learns no context, the layer simply becomes a convolutional one. For tasks with limited data (semantic segmentation in our case), we propose a regime for inserting the L-RNN into a pre-trained FCN and fine-tuning the entire network end-to-end. This means that we directly increase the representational power of the model, and set the pre-trained model free to learn contextual information if it is needed.

In this section, we describe the architecture for incorporating 1D spatial RNNs into the computational block of a Residual Network (He et al., 2016b), and also discuss fusion methods for such blocks.

We start with the standard residual block of He et al. (2016b) (Figure 2(a)), and then replace the included CNN layer with bidirectional spatial RNNs, to include an L-RNN module instead.

[Figure 2: (a) the CNN module and (b) the L-RNN module. Each block applies BN → ReLU → Conv (or Linear) twice, and its output is fused with the input by forward, sum, or concatenation.]

We consider three fusion options for combining the features from such blocks with the input to subsequent layers, namely forward, sum and concatenation.
Forward refers to the traditional feed-forward architectures:

X^{L+1} = F(X^L, W)    (2)

i.e. the block simply becomes a new layer. Sum denotes the method of the original residual networks:

X^{L+1} = X^L + F(X^L, W)    (3)

so that the L-RNN module acts as a residual block; whilst, in concatenation, features from multiple layers (of the same spatial size) are concatenated:

X^{L+1} = [X^L; F(X^L, W)]    ((;) refers to concatenation)    (4)

Therefore, the number of channels of the output feature maps will be the sum of the channels of the two concatenated layers (the number of parameters will be increased for the next layers). In the experimental evaluation of Section 5.1 we compare these options.

ADDING A LAYER-RNN TO A PRE-TRAINED CNN

In this section, we describe how a Layer-RNN module can be seamlessly inserted into a pre-trained CNN. In a typical scenario, the CNN would be trained for classification on ImageNet (where there are copious annotations). After inserting the L-RNN modules, the hybrid L-RNN network can then be fine-tuned for a new task such as pixel-level prediction, e.g. semantic segmentation (where the annotated data is usually more limited). This trick naturally allows multi-level contextual information to be effortlessly incorporated. Avoiding training the network from scratch means the entire network can be re-purposed with the available annotations and trained end-to-end for the new task, whilst benefiting from the earlier classification training.

We illustrate the idea using 1D convolution, but the same principles hold for the entire L-RNN module. As shown in Figure 3, the canonical CNN architecture for a 1D convolution can be denoted as:

X^{L+1} = f(W * X^L + b)    (5)

and the corresponding 1D spatial RNN as:

X^{L+1} = f(U * X^L + V X^{L+1} + b)    (6)

where U, V, b refer to the parameters that are shared across the whole scan-line, and the recurrence term is applied sequentially along the scan-line. Notice that the 1D spatial RNN is designed to incorporate two terms, projections from the local region (input-to-hidden) and a recurrence term from the previous hidden unit (hidden-to-hidden). In fact, it is the presence of a non-zero recurrence matrix V that characterizes the 1D spatial RNN, and the responses can be calculated in a two-step way:

X_inter = U * X^L + b    (convolution)
x^{L+1}_i = f(x_inter,i)    (i = 1, zero initial state)
x^{L+1}_i = f(x_inter,i + V x^{L+1}_{i-1})    (i > 1)

[Figure 3: computation graphs of a convolutional layer (left, CNNs): X_inter = W * X^L + b, X^{L+1} = f(X_inter); and of a spatial RNN (right, spatial RNNs): X_inter = U * X^L + b, X^{L+1} = f(X_inter + V X^{L+1}).]

By interpreting the recurrence in this way, 1D spatial RNNs can be constructed by inserting recurrence directly into any convolutional layer right after the convolution. If the recurrence matrix V is initialized as zero, and ReLU is the activation function, then the 1D spatial RNN will be initialized exactly as the pre-trained CNN. The complete L-RNN can be constructed by inserting two bidirectional spatial RNNs into subsequent layers of the pre-trained CNN. We derive the expression of the within-layer gradient for use in back-prop fine-tuning in Appendix B.
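The two-step decomposition suggests a direct way to retrofit recurrence onto a trained convolutional layer. A minimal NumPy sketch (ours; a fully connected "convolution" over one scan-line, with ReLU) computes X_inter first and then injects the recurrence, so that with V initialized to zero the layer reproduces the pre-trained CNN output exactly:

    import numpy as np

    relu = lambda x: np.maximum(x, 0.0)

    def spatial_rnn_two_step(X, U, b, V):
        """X: (T, d) one scan-line of features; U: (D, d); b: (D,); V: (D, D).
        Step 1: X_inter = U * X + b (the pre-trained convolution).
        Step 2: inject the recurrence V x_{i-1}^{L+1} before the ReLU."""
        X_inter = X @ U.T + b                 # step 1: pure convolution term
        out = np.zeros((X.shape[0], U.shape[0]))
        out[0] = relu(X_inter[0])             # i = 1: zero initial state
        for i in range(1, X.shape[0]):
            out[i] = relu(X_inter[i] + out[i - 1] @ V.T)   # i > 1
        return out

    X = np.random.randn(10, 16)
    U, b = np.random.randn(32, 16), np.zeros(32)
    cnn_out = relu(X @ U.T + b)                        # pre-trained CNN response
    rnn_out = spatial_rnn_two_step(X, U, b, np.zeros((32, 32)))
    assert np.allclose(cnn_out, rnn_out)               # V = 0: identical initialization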
We test the proposed Layer-RNN on two supervised learning tasks: CIFAR-10 classification in Section 5.1, and PASCAL VOC 2012 segmentation in Section 5.2.

In this section, we investigate classification performance under variations in an architecture containing L-RNN modules. We vary the depth of the network, the number and position of the L-RNN modules, the type of recurrent units in the RNNs, the pooling mechanism for the last pooling layer, and the method of fusing the block outputs.

There are two principal architectural variations. The first variation is that from Network A to D, we gradually increase the network depth by adding CNN modules, with the L-RNN module always stacked at the final stage to capture global information over the entire image, in a similar manner to the fully connected layers or average pooling in other networks. Network A has 5 convolutional layers.

The second principal variation, in Networks E and F, is to interleave CNN and L-RNN modules. This means that the network is capable of learning representations across large spatial footprints at any stage in the network. To show the effectiveness of adding L-RNN modules, we include a Baseline-CNN composed of only convolutional layers (7 layers, with concatenation used at every skip layer). Network E is built upon the Baseline-CNN by inserting L-RNN modules before CNN modules at multiple stages. To make sure the performance gain is not from the increased number of parameters, we cut down the number of filters in the last CNN module to 128 (this number is 256 in the Baseline-CNN). Network F uses more convolutional layers interleaved with L-RNN modules.

Table 1: Network architectures for CIFAR-10 experiments. In Network A, a variety of selections are tested (coded as blue): in Feature Fusion, we may choose Forward, Sum, or Concatenation; in the L-RNN module, GRU and vanilla RNNs are tested; max pooling or average pooling can be used as global pooling. From Network A to D, the depth of the networks is gradually increased by adding CNN modules (coded as red); for example, comparing C to B, two more CNN modules are added. Comparing Networks E and F with the Baseline-CNN, L-RNN modules (green) are interleaved with CNN modules.

[Table 1 lists the layer-by-layer configurations of the Baseline-CNN and Networks A-F: an input of 32 × 32 × 3, stacks of 3 × 3 CNN modules (64 to 256 filters) fused by Forward or Concatenate, L-RNN modules (128 or 256 units) stacked at the end (A-D) or interleaved (E, F), intermediate 2 × 2 max pooling, an 8 × 8 global pooling, dropout (0.5), and a 10-way softmax.]

Other variations of the architectures include: firstly, we may use Forward, Sum, or Concatenation to fuse features; secondly, GRU and vanilla RNN units are compared for the L-RNN modules, with ReLU used as the non-linear activation in both cases; thirdly, both max pooling and average pooling are tested as the global pooling. For clarity, we name the networks by these variations in Table 2: when Forward is selected to fuse features, Network A-Forward simply follows the traditional CNN with pure feed-forward layers; A-Concat uses concatenation as an alternative; and A-Sum follows the idea of residual networks proposed in (He et al., 2016b), with the number of filters gradually increased as the networks get deeper. To match dimensions for summation, 1 × 1 convolution is used in A-Sum. In our experiments, we found that concatenation works better than sum (Table 2). Therefore, in all
other architectures (B, C, D), as we gradually increase the network depth by adding CNN modules, we fuse the skip layers by alternating only between concatenation and forward.

Following the VGG-net (Simonyan & Zisserman, 2015), in all architectures the convolutional kernels in the CNN modules are of size 3 × 3. Max pooling (2 × 2) is used as intermediate pooling, and 8 × 8 global pooling (average or max) is applied at the end. To avoid overfitting, we use dropout (0.5). Training details and recurrent units are described in Appendix A. Implementations are mostly based on Theano (Theano Development Team, 2016) with a single NVIDIA Titan X.

Dataset & Evaluation. We conducted experiments on the CIFAR-10 dataset, which consists of 40k training images, 10k validation and 10k testing images in 10 classes; each image is of 32 × 32 pixels with RGB channels. We augment the training data with simple transformations (rotation, flipping, scaling) on the fly. The mean image over the whole training set is subtracted from each image during training. Following the standard evaluation protocol, we report the top1 error on the testing set.

Results & Discussion. We present detailed comparisons with other published methods in Table 2.

Table 2: Comparison with previously published methods on CIFAR-10. The networks are named by the chosen operation at every step; for instance, A-Forward-GRU-Max refers to architecture A with Forward feature fusion, GRU in the L-RNN module, and max pooling as the final global pooling.

Model | # Params | # Conv Layers | Approx. Time / Epoch (s) | Top1 Error (%)
ReNet (Visin et al., 2015) | - | - | - | 12.35
NIN (Lin et al., 2013) | - | - | - | 8.81
FitNet (Romero et al., 2014) | 2.5M | 19 | - | 8.39
Highway (Srivastava et al., 2015) | 2.3M | 19 | - | 7.54
ResNet-110 (He et al., 2016a) | 1.7M | 110 | - | 6.61
ResNet-164 (He et al., 2016b) | 1.7M | 164 | - | 5.46
DenseNet (Huang et al., 2016) | 27.2M | 100 | - | 3.74
Baseline-CNN-Avg | 1.56M | 7 | 331 | 9.07
Baseline-CNN-Max | 1.56M | 7 | 331 | 8.48
A-Concat-RNN-Avg | 0.9M | 5 | 293 | 7.65
A-Concat-RNN-Max | 0.9M | 5 | 293 | 7.43
A-Forward-GRU-Max | 1.68M | 5 | 315 | 7.57
A-Concat-GRU-Max | 1.95M | 5 | 377 | 7.35
A-Sum-GRU-Max | 1.99M | 5 | 383 | 7.69
B-GRU-Max | 2.3M | 9 | 542 | 6.62
B-RNN-Max | 1.27M | 9 | 483 | 6.78
C (GRU-Max) | 2.5M | 13 | 726 | 6.21
D (GRU-Max) | 3M | 19 | 1321 | 5.73
E (RNN-Max) | 0.97M | 7 | 462 | 5.96
F (RNN-Max) | 1.55M | 15 | 394 (TensorFlow on 2 GPUs) | 5.39

From the experimental results, we can draw the following conclusions:

In our experiments with shallow networks, the summing of residual connections shows no benefit compared to feed-forward or concatenation. This observation is made from the results of A-Forward-GRU-Max (7.57%), A-Concat-GRU-Max (7.35%) and A-Sum-GRU-Max (7.69%). Thus, as also employed in U-Net or DenseNet (Ronneberger et al., 2015; Huang et al., 2016), concatenation can be used as an alternative to summation in building deeper networks.

Comparison of basic choices. Max pooling consistently performs better when used as the global pooling in our case; this is seen in the results of Baseline-CNN-Avg (9.07%) vs. Baseline-CNN-Max (8.48%), and A-Concat-RNN-Avg (7.65%) vs. A-Concat-RNN-Max (7.43%). One possible explanation would be that for classification tasks, decisions are based on the most salient features.
It can be seen that vanilla RNN units trained with Layer Normalization (Ba et al., 2016) can perform almost as well as GRU, while saving a large number of parameters (compare the results of A-Concat-RNN-Max with 0.9M parameters (7.43%) with those of A-Concat-GRU-Max with 1.95M parameters (7.35%), and B-RNN-Max with 1.27M parameters (6.78%) vs. B-GRU-Max with 2.3M parameters (6.62%)).

Networks with the L-RNN module stacked at the final stage. Even shallow networks with L-RNN modules (architectures A) can achieve comparable or superior performance to deep architectures with 19 layers that require more parameters (e.g. Network A-Concat-RNN-Max (0.9M) vs. Highway (2.3M)). This confirms that when an L-RNN module is stacked on top of CNNs, it is able to capture global information, avoiding the multiple-layer route to increasing receptive fields in standard architectures, e.g. in (Romero et al., 2014; Srivastava et al., 2015).

As expected, networks can always improve classification performance by adding more CNN modules (going from architecture A to D). Network D with 19 convolutional layers performs better than ResNet-110 (by 0.3% top1 error, though Network D has more parameters than ResNet-110) and is slightly worse than ResNet-164 (by 0.25% top1 error). Thus, following this trend, it is reasonable to expect a benefit if L-RNN modules are combined with very deep networks, like the residual variants.

Networks with L-RNN modules interleaved with CNN modules. Comparing the performance of Baseline-CNN-Max (8.48%) with that of Network E (5.96%), there is a significant performance boost (2.5%), brought about by simply inserting L-RNN modules. Network E also has other advantages over networks A to D: the number of parameters, the network depth, and the running time. Furthermore, when we continue increasing the network depth and interleaving L-RNN modules, Network F achieves comparable results (5.39%) to ResNet-164 (5.46%) with fewer parameters (1.55M vs. 1.7M). This confirms that, firstly, L-RNN modules can be combined with very deep networks, and secondly, rather than hand-crafting the kernel size, we should set the model free and let it learn contextual information at any stage.

5.2 SEMANTIC SEGMENTATION

In this section, we insert L-RNN modules into the VGG-16 network (pre-trained on ImageNet (Deng et al., 2009)), and fine-tune the entire network for the PASCAL VOC 2012 segmentation task. The objective is to boost the segmentation performance by providing contextual information via the L-RNNs. In particular, we consider the two FCN segmentation architectures originally introduced by Long et al. (2015), FCN-32s and FCN-8s; these are described below.

We proceed in three steps: first, we establish baselines by training our own FCN-32s and FCN-8s (Appendix C), and comparing their performance to those of (Long et al., 2015). We also investigate the loss in performance as the fully connected (FC) layer is gradually reduced from 4096 to 512 channels. The reason for doing this is that when we insert the L-RNN module, its complexity (dimension of the hidden units) depends on this number of channels, and so the overall complexity can be varied. In the second step, we insert L-RNNs into the FCN-32s architecture and evaluate the change in performance. Finally, we insert L-RNNs into the FCN-8s architecture and compare with previously published methods.

Dataset & Evaluation. We used a training set consisting of the VOC2012 training data (1464 images
provided by the challenge organizers), augmented with training and validation data from Hariharan et al. (2014), which further extends the training set to a total of 11,685 images with pixel-level annotation. After removing the images overlapping between the VOC2012 validation data and this dataset, we are left with 346 images from the original VOC2012 validation set to validate our model. In all the following experiments, we use a single scale for the input images (384 × 384), and only horizontal flipping is used for data augmentation. The performance is measured in terms of pixel intersection-over-union (IOU) averaged across the 21 classes.

Architecture & Training. In the FCN-32s, input images are passed through the whole network and end up with predictions of size 12 × 12 × 21; up-sampling layers are then directly used to map the predictions back to 384 × 384 (32 times). In the FCN-16s, instead of directly up-sampling 32 times, the predictions are first up-sampled by 2 and summed with the stream predictions from pool4 (named following VGG16), then up-sampled by 16 times. In the FCN-8s, the stream predictions from pool3 are further added to the results from the FCN-16s; thus, up-sampling layers with only a factor of 8 are needed (Appendix C).

For all the architectures, the base net (VGG16) is pre-trained on ImageNet (Deng et al., 2009); we further train on PASCAL VOC2012 for 50 epochs. Similarly to the CIFAR-10 experiment, we iteratively increase or decrease the learning rate between 10^-3 and 10^-5 after every 10 epochs. The 4096-channel architectures are trained first, and then the number of channels in the FC layer is gradually reduced by randomly cutting them (e.g. from 4096 to 2048), and re-training the networks.

Results & Discussion. Table 3 shows the performance of the six baselines: FCN-32s and FCN-8s with the number of channels varying from 512 to 4096. We observe that reducing the nodes in the FC layers does produce a performance drop (from 4096 to 1024 nodes, 1% mean IOU) in both FCN-32s and FCN-8s. Although from 1024 to 4096 nodes the improvement is tiny, the difference in the number of parameters is over 64 million. Consequently, in the following experiments we choose to perform experiments based on networks with 512, 1024 or 2048 channels only (i.e. no 4096). In comparison to the original performance for the FCN-8s architecture in (Long et al., 2015), we exceed this (64.4 vs. 61.3 mean IOU) in our training. Thus, we use our trained networks as a baseline.

5.2.2 FCN-32s WITH L-RNN MODULES

Architecture & Training. The architecture FCN-32s (L-RNN) is shown in Figure 4; the convolutional part of the architecture is initialized with the pre-trained FCN-32s (2048 channels in the FC layer) baseline. Then, two 1D spatial RNNs are inserted into the fc1 layer in the horizontal direction, and two 1D spatial RNNs are inserted into the fc2 layer in the vertical direction. The convolution activations of fc1 are shared for both left-right and right-left scanning. Similarly for fc2, the convolution activations are shared for up-down and down-up scanning. Thus the fc1 and fc2 layers, together with the added 1D spatial RNNs, form a complete L-RNN module.

During training, as described in Section 4, the 1D spatial RNNs are initialized with a zero recurrence matrix. The entire network is then fine-tuned end-to-end with the PASCAL VOC2012 data.
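For concreteness, here is a schematic NumPy sketch (ours) of the FCN stream fusion described under Architecture & Training above; nearest-neighbour upsampling stands in for the learned up-sampling layers, and pool3_pred/pool4_pred denote hypothetical 21-channel stream predictions from pool3 and pool4.

    import numpy as np

    def upsample(p, factor):
        """Nearest-neighbour upsampling of an (h, w, c) prediction map."""
        return np.repeat(np.repeat(p, factor, axis=0), factor, axis=1)

    def fcn8s_fuse(final_pred, pool4_pred, pool3_pred):
        """final_pred: (12, 12, 21); pool4_pred: (24, 24, 21); pool3_pred: (48, 48, 21).
        FCN-16s: upsample the final stream by 2 and sum with the pool4 stream;
        FCN-8s: upsample by 2 again, sum with pool3, then upsample by 8."""
        s16 = upsample(final_pred, 2) + pool4_pred       # 24 x 24 x 21
        s8 = upsample(s16, 2) + pool3_pred               # 48 x 48 x 21
        return upsample(s8, 8)                           # 384 x 384 x 21

    out = fcn8s_fuse(np.zeros((12, 12, 21)), np.zeros((24, 24, 21)),
                     np.zeros((48, 48, 21)))
    print(out.shape)    # (384, 384, 21)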
We adopt RMS-prop (Tieleman & Hinton, 2012) for 30 epochs with hyper-parameters lr = 10^-4, ρ = 0.9, ε = 10^-8, then decrease the learning rate to lr = 10^-5 for 10 epochs.

Results & Discussion. The results are shown in Table 3. Compare the 32s rows with and without the L-RNN for the FC layers with 512, 1024, and 2048 channels. As can be seen, the addition of the L-RNN always improves the segmentation performance over the pre-trained FCN-32s baselines. However, the improvement is not large - about 1-1.5% mean IOU. This is because the receptive field in the fully connected layers of FCN-32s is sufficiently large to cover 224 × 224 pixels of the input patch, and consequently the networks are not able to benefit much from the context provided by the L-RNN. The benefit is greater when L-RNNs are added to the lower layers (where the receptive fields of the convolutions are much smaller), and we turn to that case next.

5.2.3 FCN-8s WITH L-RNN MODULES

Architecture & Training. The architecture FCN-8s (L-RNN) is shown in Figure 4; as with the FCN-32s architecture, 1D spatial RNNs are inserted into the fc1 and fc2 layers to form an L-RNN module. L-RNNs are also inserted into the lower layers, namely after the pool3 and pool4 layers. Unlike the FC layers in the FCN-32s, where the prediction for each central pixel comes from image patches of size 224 × 224, the predictions from pool3 and pool4 are based on receptive fields on the image of much smaller size (around 44 × 44 and 100 × 100 pixels respectively). Thus, the inserted L-RNN modules must be able to model relatively long-range dependencies.

Figure 4: FCN-32s (above the blue dashed line) and FCN-8s with L-RNN modules. Spatial RNNs are inserted into the fully connected (FC) layers in all FCNs; every two FC layers construct a complete L-RNN module (L-RNN module 1 over {fc1, fc2} followed by fc3 and up32x; module 2 over {fc4, fc5} followed by fc6, up2x and up16x; module 3 over {fc7, fc8} followed by fc9, up2x and up8x). {384, 192, 96} indicate the spatial sizes of the feature maps. Kernel sizes for the fully connected layers (n is an experimental variable, the number of channels): fc1: 7 × 7 × 512 × n, fc2: 1 × 1 × n × n, fc3: 1 × 1 × n × 21, fc4: 1 × 1 × 512 × 1024, fc5: 1 × 1 × 1024 × 1024, fc6: 1 × 1 × 1024 × 21, fc7: 1 × 1 × 256 × 1024, fc8: 1 × 1 × 1024 × 1024, fc9: 1 × 1 × 1024 × 21.

During training, the network is initialized from the FCN-8s baseline and then fine-tuned using segmentation data. Again the PASCAL VOC dataset is used. Furthermore, when comparing to the other previously published methods, the network is further trained on the COCO trainval dataset, and we use a densely connected CRF as post-processing (Krähenbühl & Koltun, 2012).

Results on the PASCAL VOC Validation set. The experimental results are shown in Table 3.

Table 3: Comparison of FCN networks on the PASCAL VOC2012 segmentation validation set.

Type | # of channels in FC | L-RNNs added | Pixel Acc % | Mean IOU %
32s | 512 | NO | 90.4 | 61.5
32s | 1024 | NO | 90.5 | 62.1
32s | 2048 | NO | 90.7 | 62.7
32s | 4096 | NO | 90.7 | 62.9
8s | 1024 | NO | 91.3 | 63.8
8s | 2048 | NO | 91.2 | 64.1
8s | 4096 | NO | 91.3 | 64.4
8s (original, Long et al., 2015) | 4096 | NO | - | 61.3
32s | 512 | YES | 90.8 | 62.7
32s | 1024 | YES | 90.9 | 63.4
32s | 2048 | YES | 91.1 | 64.2
8s | 2048 | YES | 92.6 | 69.1

Comparing the rows for 32s with and without the L-RNN to those for 8s with and without the L-RNN, we can draw the following conclusions:

Improvement due to the skip layers. It can be seen (for IOU) that going from FCN-32s (2048) to FCN-8s (2048), where there are additional skip layers, the performance is boosted from 62.7 to 64.1. The skip layers in the FCN-8s architecture introduce more parameters, but this is not the reason for the performance boost, since FCN-8s (2048) and FCN-32s (4096) have a similar number of parameters though they perform very differently (64.1 vs. 62.9). This observation confirms that the performance gain is brought by the skip layers, rather than by the increased number of parameters.
Improvement due to the L-RNN module. Inserting an L-RNN into the FC layers of FCN-32s (2048) only improves the performance from 62.7 to 64.2. However, as noted earlier, since the nodes in the FC layers of the FCN-32s already have receptive fields covering the entire input patch, the additional context the L-RNN can provide there is limited.

In contrast, adding L-RNNs to FCN-8s brings a substantial improvement, from 64.1 (FCN-8s) to 69.1 (FCN-8s-LRNN). This process will introduce more parameters due to the recurrence term in the RNNs, but it is clear that the improvement comes mainly from the L-RNN modules inserted after pool3 and pool4 in FCN-8s, rather than from the increased number of parameters. The reason is that, when comparing FCN-8s (2048 channels, without L-RNN) to FCN-8s (4096 channels, without L-RNN), although the number of parameters is increased dramatically, the performance is only increased from 64.1 to 64.4; while FCN-8s (4096 channels, without L-RNN) has roughly the same number of parameters as FCN-8s (2048 channels, with L-RNN), the performance gain of the latter is from 64.4 to 69.1. In conclusion, the L-RNN is able to learn contextual information over a much larger range than the receptive field of pure local convolutions.

Results on the PASCAL VOC Test set. Table 4 shows the results of the FCN-8s with L-RNNs on the PASCAL VOC test data, and also compares to others who have published on this dataset. The performance is far superior to the original result (Long et al., 2015) using an FCN-8s with 4096 channels (whereas only 2048 channels are used here). We also compare to the dilated convolution network of (Yu & Koltun, 2016), obtaining comparable, though slightly better, performance. Note that in (Yu & Koltun, 2016), multi-scale contextual information is captured by explicitly designed dilated convolution kernels, while the L-RNN is able to learn contextual information implicitly. Finally, we compare to (Zheng et al., 2015), who add a densely connected CRF to FCN-8s. If we also add a dense CRF as post-processing, we boost the performance by 1% in IOU (the same boost as obtained by (Yu & Koltun, 2016)).

Table 4: Mean IOU % on the PASCAL VOC test set (P: trained on PASCAL VOC; +COCO: further trained on COCO; +CRF: dense-CRF post-processing).

Methods | P | P+CRF | P+COCO | P+COCO+CRF
FCN-8s (Long et al., 2015) | 62.2 | n/a | n/a | n/a
CRF-RNN (Zheng et al., 2015) | n/a | 72.0 | n/a | 74.7
Dilated Conv. (Yu & Koltun, 2016) | n/a | n/a | 73.5 | 74.7
FCN-8s-LRNN (2048) | 71.9 | 72.7 | 74.2 | 75.7

In Figure 5, we show samples of semantic segmentations on the PASCAL VOC2012 validation set. In each figure, we show our predictions and the results after CRF post-processing. Compared with the end-to-end trainable CRF-RNN (Zheng et al., 2015), our predictions miss small details, like the wheel of the bicycle, but show much better performance in determining the class of the segmented regions - something that context can really contribute to.

This paper has shown that the proposed L-RNN module is an alternative way of adding multi-level spatial context to a network. In fact, L-RNNs can be interleaved with convolutional layers to learn context at any stage. When the L-RNN is only used at the final stage after the CNNs, it gives shallow networks the receptive fields of far deeper networks.
Furthermore, we have demonstrated that inserting L-RNNs can boost the performance of pre-trained networks, and we have given an initialization procedure that makes this training a simple matter of end-to-end fine-tuning.

There is much left to investigate using L-RNNs as a new building block, and we suggest some avenues here: (i) training the hybrid architectures on larger datasets, such as ImageNet (Deng et al., 2009), and learning representations that can be transferred to other vision tasks; (ii) a similar investigation for deep residual networks where the residual blocks are either convolutional or L-RNNs; and (iii) including a CRF final layer in end-to-end training.

Figure 5: Qualitative results. First column: input image. Second column: prediction from Zheng et al. (2015). Third column: prediction from our networks. Fourth column: CRF post-processing. Fifth column: ground-truth annotation. (The colour legend covers background and the 20 PASCAL object classes, from aeroplane through to TV/monitor.)

REFERENCES

Bell, Sean, Zitnick, C. Lawrence, Bala, Kavita, and Girshick, Ross. Inside-Outside Net: Detecting objects in context with skip pooling and recurrent neural networks. CVPR, 2016.

Hariharan, Bharath, Arbelaez, Pablo, Girshick, Ross, and Malik, Jitendra. Simultaneous detection and segmentation. ECCV, 2014.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. CVPR, 2016a.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Identity mappings in deep residual networks. ECCV, 2016b.

Huang, Gao, Liu, Zhuang, and Weinberger, Kilian Q. Densely connected convolutional networks. https://arxiv.org/abs/1608.06993, 2016.

Krähenbühl, Philipp and Koltun, Vladlen. Efficient inference in fully connected CRFs with Gaussian edge potentials. NIPS, 2012.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. NIPS, 2012.

Liang, Ming, Hu, Xiaolin, and Zhang, Bo. Convolutional neural networks with intra-layer recurrent connections for scene labeling. NIPS, 2015.

Liu, Sifei, Pan, Jinshan, and Yang, Ming-Hsuan. Learning recursive filters for low-level vision via a hybrid neural network. ECCV, 2016.

Long, Jonathan, Shelhamer, Evan, and Darrell, Trevor. Fully convolutional networks for semantic segmentation. CVPR, 2015.

Pinheiro, Pedro H. O. and Collobert, Ronan. Recurrent convolutional neural networks for scene labeling. ICML, 2014.

Romero, Adriana, Ballas, Nicolas, Kahou, Samira Ebrahimi, Chassang, Antoine, Gatta, Carlo, and Bengio, Yoshua. FitNets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.

Ronneberger, Olaf, Fischer, Philipp, and Brox, Thomas. U-Net: Convolutional networks for biomedical image segmentation. MICCAI, 2015.

Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.

Theis, Lucas and Bethge, Matthias. Generative image modeling using spatial LSTMs. NIPS, 2015.

Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.
URL http://arxiv.org/abs/1605.02688.

Visin, Francesco, Ciccone, Marco, Romero, Adriana, Kastner, Kyle, Cho, Kyunghyun, Bengio, Yoshua, Matteucci, Matteo, and Courville, Aaron. ReSeg: A recurrent neural network-based model for semantic segmentation. CVPR, 2016.

Zheng, Shuai, Jayasumana, Sadeep, Romera-Paredes, Bernardino, Vineet, Vibhav, Su, Zhizhong, Du, Dalong, Huang, Chang, and Torr, Philip H. S. Conditional random fields as recurrent neural networks. ICCV, 2015.

Appendices

In the Layer-RNN, we test gated recurrent units (GRU) for the RNN blocks (Chung et al., 2015). The GRU has two gates, namely a reset gate r and an update gate z. Intuitively, the reset gate determines how to combine the new input with the previous memory, and the update gate defines how much of the previous memory to use. Thus, the hidden state s_t of the GRU at time t can be computed as:

z = σ(x_t U^z + s_{t-1} W^z)
r = σ(x_t U^r + s_{t-1} W^r)
h = f(x_t U^h + (s_{t-1} ⊙ r) W^h)
s_t = (1 - z) ⊙ h + z ⊙ s_{t-1}

To simplify the training process and reduce the number of parameters, we also test vanilla RNNs for the RNN blocks, with Layer Normalization (Ba et al., 2016). In a standard RNN, the outputs of the recurrent layer are calculated from the current input x_t and the previous hidden state h_{t-1}, denoted a_t = U x_t + V h_{t-1}. The layer-normalized layer is computed as:

h_t = f[(g / σ_t) ⊙ (a_t - μ_t) + b],    μ_t = (1/H) Σ_{i=1}^{H} a_{t,i},    σ_t = sqrt((1/H) Σ_{i=1}^{H} (a_{t,i} - μ_t)^2)

where U is the current input-to-hidden term and V is the hidden-to-hidden recurrence term; b and g are defined as the bias and gain parameters, of the same dimension as h_t, and H is the number of hidden units.

During training, we iteratively increase and decrease the learning rate (learning-rate restart) between 10^-3 and 10^-5, based on the conjecture (Figure 6) that networks tend to get trapped in regions with small derivatives, such as saddle points or bad local minima (Dauphin et al., 2014). Traditionally, the learning rate is decreased every several epochs, and the gradients used to update parameters depend on both the learning rate and the derivatives w.r.t. the loss function. At the end of training, both of these terms tend to be very small, so it becomes difficult for the networks to escape from these regions. During our training, we restart the learning rate every 60 or 80 epochs, and decrease it gradually in between.

Figure 6: Intuitive loss surfaces (side view, showing a saddle point and a bad local minimum). Deep neural networks may easily be trapped in saddle points or bad local minima.
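A minimal NumPy sketch (ours) of one layer-normalized vanilla RNN step corresponding to the equations above, with ReLU as f and a small ε added to σ_t for numerical stability:

    import numpy as np

    def ln_rnn_step(x_t, h_prev, U, V, g, b,
                    f=lambda a: np.maximum(a, 0.0), eps=1e-5):
        """One step of a vanilla RNN with layer normalization:
        a_t = U x_t + V h_{t-1};  h_t = f(g/sigma_t * (a_t - mu_t) + b)."""
        a = U @ x_t + V @ h_prev
        mu = a.mean()
        sigma = np.sqrt(((a - mu) ** 2).mean() + eps)
        return f(g / sigma * (a - mu) + b)

    H, d = 512, 64
    h = np.zeros(H)
    U, V = np.random.randn(H, d) * 0.01, np.eye(H)
    g, b = np.ones(H), np.zeros(H)
    for x_t in np.random.randn(10, d):        # scan a length-10 sequence
        h = ln_rnn_step(x_t, h, U, V, g, b)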
B FINE-TUNING LAYER-RNNS WITH ZERO RECURRENCE MATRIX

In this section, we derive the procedure for fine-tuning the recurrence matrix when it is initialized as zeros. We will only consider 1D scan-lines of the spatial RNN, and therefore simplify the derivation to a 1D sequence. Consider the fully connected layer for simplicity; L, L+1 denote layers, t refers to the index of the input, f refers to ReLU, and U, V refer to the input-to-hidden matrix and recurrence matrix respectively:

s_t = U x^L_t + V x^{L+1}_{t-1},    x^{L+1}_t = f(s_t)

Assume E denotes the loss function for a specific task. Since V is shared over the whole 1D sequence (of length T), the back-propagation within layer L+1 can be derived as:

∂E/∂V = Σ_{t≤T} (∂E/∂x^{L+1}_t) (∂x^{L+1}_t/∂s_t) (∂s_t/∂V)

where ∂x^{L+1}_t/∂s_t = diag(f'(s_t)) and ∂s_t/∂V contributes x^{L+1}_{t-1}; the error signal at step t also accumulates contributions from later steps through ∂x^{L+1}_{t+1}/∂x^{L+1}_t = V^T · diag(f'(s_{t+1})). In particular, for the last step, ∂E/∂V = (∂E/∂x^{L+1}_T)(∂x^{L+1}_T/∂s_T)(∂s_T/∂V). At the first iteration of gradient descent,

V = V_0 - α ∂E/∂V    (gradient descent at the first iteration)

with V_0 = 0. Rather than initializing the recurrence matrix V randomly or as an identity matrix, we therefore effectively initialize it based on the features in a local neighbourhood (see the update above). During the back-propagation of the spatial RNNs, gradients flow within layers; the between-layer gradient is calculated in the same way as in normal convolutional layers.

The complete FCN architectures used in the paper:

[Figure 7: the complete FCN architectures used in the paper (FCN-32s, FCN-16s and FCN-8s), with 384 × 384 inputs, 4096-channel FC layers (fc2 kernel size 7 × 7 × 512 × 4096, fc3 kernel size 1 × 1 × 4096 × 4096), and up2x/up8x/up16x/up32x up-sampling layers.]
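To check the Appendix B derivation numerically, here is a NumPy sketch (ours) that accumulates ∂E/∂V by back-propagation through time for the 1D sequence above, using the surrogate loss E = 0.5 Σ_t ||x^{L+1}_t||² (our choice, for illustration), and compares one entry against a finite-difference estimate:

    import numpy as np

    f = lambda s: np.maximum(s, 0.0)                 # ReLU, f'(s) = 1[s > 0]

    def forward(X, U, V, b):
        T, D = X.shape[0], U.shape[0]
        S, out = np.zeros((T, D)), np.zeros((T, D))
        prev = np.zeros(D)
        for t in range(T):
            S[t] = U @ X[t] + V @ prev + b           # s_t = U x_t + V x_{t-1}^{L+1}
            out[t] = f(S[t])
            prev = out[t]
        return S, out

    def loss(out):
        return 0.5 * (out ** 2).sum()                # surrogate loss E

    def grad_V(X, U, V, b):
        """Accumulate dE/dV by back-propagation through time."""
        S, out = forward(X, U, V, b)
        dV, ds_next = np.zeros_like(V), np.zeros(U.shape[0])
        for t in reversed(range(X.shape[0])):
            dout = out[t] + V.T @ ds_next            # direct loss term + term from s_{t+1}
            ds = dout * (S[t] > 0)                   # through diag(f'(s_t))
            if t > 0:
                dV += np.outer(ds, out[t - 1])       # ds_t/dV contributes x_{t-1}^{L+1}
            ds_next = ds
        return dV

    X, U = np.random.randn(6, 4), np.random.randn(5, 4)
    V, b = 0.1 * np.random.randn(5, 5), np.zeros(5)
    dV, (i, j), eps = grad_V(X, U, V, b), (1, 2), 1e-6
    Vp, Vm = V.copy(), V.copy()
    Vp[i, j] += eps
    Vm[i, j] -= eps
    num = (loss(forward(X, U, Vp, b)[1]) - loss(forward(X, U, Vm, b)[1])) / (2 * eps)
    print(num, dV[i, j])                             # the two values should agree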
rJq_YBqxx

DEEP CHARACTER-LEVEL NEURAL MACHINE TRANSLATION BY LEARNING MORPHOLOGY

Shenjian Zhao
Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China

ABSTRACT

Neural machine translation aims at building a single large neural network that can be trained to maximize translation performance. The encoder-decoder architecture with an attention mechanism achieves a translation performance comparable to the existing state-of-the-art phrase-based systems. However, the use of a large vocabulary becomes the bottleneck in both training and improving the performance. In this paper, we propose a novel architecture which learns morphology by using two recurrent networks and a hierarchical decoder which translates at character level. This gives rise to a deep character-level model consisting of six recurrent networks. Such a deep model has two major advantages. It avoids the large vocabulary issue radically; at the same time, it is more efficient in training than word-based models. Our model obtains a higher BLEU score than the bpe-based model after training for one epoch on En-Fr and En-Cs translation tasks. Further analyses show that our model is able to learn morphology.

1 INTRODUCTION

Neural machine translation (NMT) attempts to build a single large neural network that reads a sentence and outputs a translation (Sutskever et al., 2014). Most of the extant neural machine translation models belong to a family of word-level encoder-decoders (Sutskever et al., 2014; Cho et al., 2014). Recently, Bahdanau et al. (2015) proposed a model with an attention mechanism which automatically searches the alignments and greatly improves the performance. However, the use of a large vocabulary seems necessary for the word-level neural machine translation models to improve performance (Sutskever et al., 2014; Cho et al., 2015).

Chung et al. (2016a) listed three reasons behind the wide adoption of word-level modeling: (i) a word is a basic unit of a language, (ii) data sparsity, (iii) vanishing gradients in character-level modeling. Consider that a language itself is an evolving system, so it is impossible to cover all words in the language. The problem of rare words that are out of vocabulary (OOV) is a critical issue which can affect the performance of neural machine translation. In particular, using a larger vocabulary does improve performance (Sutskever et al., 2014; Cho et al., 2015). However, the training becomes much harder and the vocabulary is often filled with many similar words that share a lexeme but have different morphology.

There are many approaches to dealing with the out-of-vocabulary issue. For example, Gulcehre et al. (2016); Luong et al. (2015); Cho et al. (2015) proposed to obtain the alignment information of target unknown words, after which a simple word-dictionary lookup or identity copy can be performed to replace the unknown words in the translation. However, these approaches ignore several important properties of languages such as monolinguality and crosslinguality, as pointed out by Luong and Manning (2016).

Intuitively, it is elegant to directly model pure characters. However, as the length of the sequence grows significantly, character-level translation models have failed to produce competitive results compared with word-based models.
In addition, they require more memory and computational resources; especially, it is much more difficult to train the attention component. For example, Ling et al. (2015a) proposed a compositional character-to-word (C2W) model and applied it to machine translation (Ling et al., 2015b). They also used a hierarchical decoder, which has been explored before in other contexts (Serban et al., 2015). However, they found it slow and difficult to train the character-level models, and one has to resort to layer-wise training of the neural network and applying supervision to the attention component. In fact, such RNNs often struggle with separating words that have similar morphologies but very different meanings.

In order to address the issues mentioned earlier, we introduce a novel architecture that exploits the structure of words. It is built on two recurrent neural networks: one for learning the representation of preceding characters and another for learning the weight of this representation over the whole word. Unlike subword-level models based on the byte pair encoding (BPE) algorithm (Sennrich et al., 2016), we learn the subword units automatically. Compared with a CNN word encoder (Kim et al., 2016; Lee et al., 2016), our model is able to generate a meaningful representation of the word. To decode at character level, we devise a hierarchical decoder which sets the state of the second-level RNN (character-level decoder) to the output of the first-level RNN (word-level decoder), which then generates a character sequence until it produces a delimiter. In this way, our model keeps almost the same encoding length for the encoder as word-based models, but eliminates the use of a large vocabulary. Furthermore, we are able to efficiently train the deep model, which consists of six recurrent networks, achieving higher performance.

In summary, we propose a hierarchical architecture (character -> subword -> word -> source sentence -> target word -> target character) to train a deep character-level neural machine translator. We show that the model achieves a high translation performance, comparable to the state-of-the-art neural machine translation models, on the tasks of En-Fr, En-Cs and Cs-En translation. The experiments and analyses further support the statement that our model is able to learn the morphology.

Neural machine translation is often implemented as an encoder-decoder architecture. The encoder usually uses a recurrent neural network (RNN) or a bidirectional recurrent neural network (BiRNN) (Schuster and Paliwal, 1997) to encode the input sentence x = {x_1, ..., x_{T_x}} into a sequence of hidden states h = {h_1, ..., h_{T_x}}:

h_t = f_1(e(x_t), h_{t-1})

where e(x_t) ∈ R^m is an m-dimensional embedding of x_t. The decoder, another RNN, is often trained to predict the next word y_t given the previously predicted words {y_1, ..., y_{t-1}} and the context vector
c_t; that is,

p(y_t | {y_1, ..., y_{t-1}}) = g(e(y_{t-1}), s_t, c_t)

where

s_t = f_2(e(y_{t-1}), s_{t-1}, c_t)    (1)

and g is a nonlinear and potentially multi-layered function that computes the probability of y_t. The context c_t depends on the sequence {h_1, ..., h_{T_x}}. Sutskever et al. (2014) encoded all information in the source sentence into a fixed-length vector, i.e., c_t = h_{T_x}; Bahdanau et al. (2015) computed c_t by the alignment model, which handles the bottleneck that the former approach meets.

The whole model is jointly trained by maximizing the conditional log-probability of the correct translation given a source sentence with respect to the parameters of the model θ:

θ* = argmax_θ Σ_{t=1}^{T_y} log p(y_t | {y_1, ..., y_{t-1}}, x, θ)

DEEP CHARACTER-LEVEL NEURAL MACHINE TRANSLATION

We consider two problems in the word-level neural machine translation models. First, how can we map a word to a vector? It is usually done by a lookup table (embedding matrix), where the size of the vocabulary is limited. Second, how do we map a vector to a word when predicting? It is usually done via a softmax function; however, a large vocabulary will make the softmax intractable computationally.

We correspondingly devise two novel architectures: a word encoder which utilizes the morphology, and a hierarchical decoder which decodes at character level.
Accordingly, we propose a deep character-level neural machine translation model (DCNMT).

3.1 LEARNING MORPHOLOGY IN A WORD ENCODER

Many words can be subdivided into smaller meaningful units called morphemes, such as "any-one", "any-thing" and "every-one". At the basic level, words are made of morphemes which are recognized as grammatically significant or meaningful. Different combinations of morphemes lead to different meanings. Based on these facts, we introduce a word encoder to learn the morphemes and the rules by which they are combined. Even if the word encoder had never seen "everything" before, with an understanding of English morphology, the word encoder could gather the meaning easily. Thus, learning morphology in a word encoder may speed up training.

The word encoder is based on two recurrent neural networks, as illustrated in Figure 1. We compute the representation of the word 'anyone' as

r_anyone = tanh(Σ_{t=1}^{6} w_t r_t)

where r_t is an RNN hidden state at time t, computed by

r_t = f(e(x_t), r_{t-1})

Each r_t contains information about the preceding characters. The weight w_t of each representation r_t is computed by

w_t = exp(aff(h_t))

where h_t is another RNN hidden state at time t and aff() is an affine function which maps h_t to a scalar. Here, we use a BiRNN to compute h_t, as shown in Figure 1. Instead of normalizing by Σ_t exp(aff(h_t)), we use the activation function tanh, as it performs best in experiments.

[Figure 1: The representation of the word 'anyone': a forward RNN over the characters 'a n y o n e' produces r_1, ..., r_6, a BiRNN produces h_1, ..., h_6 and thence the weights w_1, ..., w_6, and the word representation is the tanh of the weighted sum Σ_t w_t r_t.]

We can regard the weight w_i as the energy that determines whether r_i is a representation of a morpheme and how it contributes to the representation of the word. Compared with an embedding lookup table, the decoupled RNNs learn the representation of morphemes and the rules of how they are combined respectively, which may be viewed as learning distributed representations of words explicitly. For example, we are able to translate "convenienter" correctly, which validates our idea.

After obtaining the representation of the word, we can encode the sentence using a bidirectional RNN, as in RNNsearch (Bahdanau et al., 2015). The detailed architecture is shown in Figure 2.

3.2 HIERARCHICAL DECODER

To decode at the character level, we introduce a hierarchical decoder. The first-level decoder is similar to RNNsearch, and contains the information of the target word. Specifically, s_t in Eqn. (1) contains the information of the target word at time t. Instead of using a multi-layer network following a softmax function to compute the probability of each target word from s_t, we employ a second-level decoder which generates a character sequence based on s_t.

The second-level decoder uses a variant of the GRU, which we call HGRU (it is possible to use LSTM (Hochreiter and Schmidhuber, 1997) units instead of the GRU described here). HGRU has a settable state and generates a character sequence based on the given state until generating a delimiter. In our model, the state is initialized by the output of the first-level decoder. Once HGRU generates a delimiter, it will set the state to the next output of the first-level decoder. Given the previous output character sequence {y_0, y_1, ..., y_{t-1}}, where y_0 is a token representing the start of sentence, and the auxiliary sequence {a_0, a_1, ..., a_{t-1}} which only contains 0 and 1 to indicate whether y_i is a delimiter (a_0 is set to 1), HGRU updates the state as follows:

g'_{t-1} = (1 - a_{t-1}) g_{t-1} + a_{t-1} s_{i_t}    (2)
q_t = σ(W^q e(y_{t-1}) + U^q g'_{t-1})    (3)
z_t = σ(W^z e(y_{t-1}) + U^z g'_{t-1})    (4)
g~_t = φ(W e(y_{t-1}) + U (q_t ⊙ g'_{t-1}))    (5)
g_t = z_t ⊙ g'_{t-1} + (1 - z_t) ⊙ g~_t    (6)

where s_{i_t} is the output of the first-level decoder, calculated as in Eqn. (8) below. We can compute the probability of each target character y_t based on g_t with a softmax function:

p(y_t | {y_1, ..., y_{t-1}}, x) = softmax(g_t)    (7)

The current problem is that the number of outputs of the first-level decoder is much smaller than the length of the target character sequence. It would be intractable to conditionally pick outputs from the first-level decoder when training in a batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks to build symbolic expressions). Luong and Manning (2016) use two forward passes (one at word level and another at character level) in batch training, which is less efficient. In our model, however, we use a matrix to unfold the outputs of the first-level decoder, which makes the batch training process more efficient. It is a T_y × T matrix R, where T_y is the number of delimiters (the number of words) in the target character sequence and T is the length of the target character sequence. R[i, j_1 + 1] to R[i, j_2] are set to 1 if j_1 is the index of the (i-1)-th delimiter and j_2 is the index of the i-th delimiter in the target character sequence; the index of the 0-th delimiter is set to 0. For example, when the target output is "g o _ ! _" (where _ denotes a delimiter) and the outputs of the first-level decoder are s_1 and s_2, the unfolding step gives

R = [[1, 1, 1, 0, 0], [0, 0, 0, 1, 1]]

therefore {s_{i_1}, s_{i_2}, s_{i_3}, s_{i_4}, s_{i_5}} is correspondingly set to {s_1, s_1, s_1, s_2, s_2} in the HGRU iterations. After this procedure, we can compute the probability of each target character with the second-level decoder according to Eqns. (2) to (7).
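A small Python sketch (ours) that builds the unfolding matrix R from the delimiter indicators a_t and uses it to expand the first-level outputs for batch training:

    import numpy as np

    def unfolding_matrix(a):
        """a: length-T 0/1 list marking delimiters in the target character
        sequence. Returns the (T_y, T) matrix R with R[i, j] = 1 when
        character j belongs to the i-th target word (delimiter included)."""
        T = len(a)
        Ty = int(sum(a))                 # number of words = number of delimiters
        R = np.zeros((Ty, T))
        word = 0
        for j in range(T):
            R[word, j] = 1.0
            if a[j] == 1:                # a delimiter closes the current word
                word += 1
        return R

    # Target "g o _ ! _": delimiters after 'o' and after '!':
    a = [0, 0, 1, 0, 1]
    R = unfolding_matrix(a)              # [[1, 1, 1, 0, 0], [0, 0, 0, 1, 1]]
    s = np.random.randn(2, 512)          # two first-level outputs s_1, s_2
    s_unfolded = R.T @ s                 # (5, 512): s_1, s_1, s_1, s_2, s_2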
"}, {"section_index": "7", "section_name": "3.3 MODEL ARCHITECTURES", "section_text": "There are six recurrent neural networks in total in our model, which can be divided into four layers, as shown in Figure 2. Figure 2 illustrates the training procedure of a basic deep character-level neural machine translation model. It is possible to use multi-layer recurrent neural networks to make the model deeper. The first layer is the source word encoder, which contains two RNNs as shown in Figure 1. The second layer is a bidirectional RNN sentence encoder, which is identical to that of Bahdanau et al. (2015). The third layer is the first-level decoder. It takes the representation of the previous target word as feedback, which is produced by the target word encoder in our model. As the feedback is less important, we use an ordinary RNN to encode the target word. The feedback r_{y_{t-1}} is then combined with the previous hidden state u_{t-1} and the context c_t from the sentence encoder to generate the vector s_t:

s_t = W_1 c_t + W_2 r_{y_{t-1}} + W_3 u_{t-1} + b.   (8)

With the state of the HGRU in the second-level decoder set to s_t and given the previously generated character, the second-level decoder generates the next character until it produces an end-of-sentence token (denoted as </s> in Figure 2). With such a hierarchical architecture, we can train our character-level neural translation model end-to-end.

Figure 2: Deep character-level neural machine translation. The HGRUs with red border indicate that the state should be set to the output of the first-level decoder.

"}, {"section_index": "8", "section_name": "3.4 GENERATION PROCEDURE", "section_text": "We first encode the source sequence as in the training procedure, then generate the target sequence character by character based on the output s_t of the first-level decoder. Once we generate a delimiter, we compute the next vector s_{t+1} according to Eqn. (8) by combining the feedback r_{y_t} from the target word encoder, the context c_{t+1} from the sentence encoder and the hidden state u_t. The generation procedure terminates once an end-of-sentence (EOS) token is produced.

We implement the model using Theano (Bergstra et al., 2010; Bastien et al., 2012) and Blocks (van Merrienboer et al., 2015); the source code and the trained models are available on GitHub. We train our model on a single GTX Titan X with 12GB RAM. First we evaluate our model on the English-to-French translation task, where the languages are morphologically poor. For fair comparison, we use the same dataset as RNNsearch, namely the bilingual parallel corpora provided by ACL WMT'14. To show the strengths of our model, we also conduct experiments on the English-to-Czech and Czech-to-English translation tasks, where Czech is a morphologically rich language. We use the same dataset as Chung et al. (2016a) and Lee et al. (2016), which is provided by ACL WMT'15.

"}, {"section_index": "9", "section_name": "4.1 DATASET", "section_text": "We use the parallel corpora for two language pairs from WMT: En-Cs and En-Fr. They consist of 15.8M and 12.1M sentence pairs, respectively. In terms of preprocessing, we only apply the usual tokenization. We choose a list of the 120 most frequent characters for each language, which covers nearly 100% of the training data. Characters not included in the list are mapped to a special token (<unk>). We use newstest2013 (Dev) as the development set and evaluate the models on newstest2015 (Test). We do not use any monolingual corpus.
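As a small illustration of the preprocessing just described, the sketch below builds a character vocabulary of the most frequent characters and maps the rest to <unk>. The toy corpus and the special-token name are placeholders, not the actual pipeline.

# Keep the 120 most frequent characters and map everything else to <unk>.
from collections import Counter

corpus = ["Hello world !", "Bonjour monde !"]        # placeholder sentences
VOCAB_SIZE = 120

counts = Counter(ch for line in corpus for ch in line)
vocab = {ch for ch, _ in counts.most_common(VOCAB_SIZE)}

def to_chars(line, unk="<unk>"):
    return [ch if ch in vocab else unk for ch in line]

print(to_chars("Hyvää päivää"))   # unseen characters become <unk>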
"}, {"section_index": "10", "section_name": "4.2 TRAINING DETAILS", "section_text": "We follow Bahdanau et al. (2015) and use similar hyperparameters. The bidirectional RNN sentence encoder and the hierarchical decoder both consist of two-layer RNNs, each with 1024 hidden units. We choose the 120 most frequent characters for DCNMT, and the character embedding dimensionality is 64. The source word is encoded into a 600-dimensional vector. The other GRUs in our model have 512 hidden units.

We use the ADAM optimizer (Kingma and Ba, 2015) with a minibatch of 56 sentences to train each model (for En-Fr we use a minibatch of 72 examples). The learning rate is first set to 10^-3 and then annealed to 10^-4.

We use beam search to find a translation that approximately maximizes the conditional log-probability, which is a commonly used approach in neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2015). In our DCNMT model, it is reasonable to search directly at the character level to generate a translation.

We conduct a comparison of quantitative results on the En-Fr, En-Cs and Cs-En translation tasks in Section 5.1. Apart from measuring translation quality, we analyze the efficiency of our model and the effects of character-level modeling in more detail.

We illustrate the efficiency of deep character-level neural machine translation by comparing with the bpe-based subword model (Sennrich et al., 2016) and other character-level models. We measure the performance by BLEU score (Papineni et al., 2002).

Table 1: BLEU scores of different models on three language pairs.

       Model         Size    Src     Trgt    Length    Epochs  Days   Dev    Test
En-Fr  bpe2bpe(1)    -       bpe     bpe     50/50     -       -      26.91  29.70
       C2W(2)        ~54M    char    char    300/300   ~2.8    ~27    25.89  27.04
       CNMT          ~52M    char    char    300/300   ~3.8    ~21    28.19  29.38
       DCNMT         ~54M    char    char    300/300   1       ~7     27.02  28.13
                                                       ~2.8    ~19    29.31  30.56
En-Cs  bpe2bpe(1)    -       bpe     bpe     50/50     -       -      15.90  13.84
       bpe2char(3)   -       bpe     char    50/500    -       -      -      16.86
       char(5)       -       char    char    600/600   >4      ~90    -      17.5
       hybrid(5)     ~250M   hybrid  hybrid  50/50     >4      ~21    -      19.6
       DCNMT         ~54M    char    char    450/450   1       ~5     15.50  14.87
                                                       ~2.9    ~15    17.89  16.96
Cs-En  bpe2bpe(1)    -       bpe     bpe     50/50     -       -      21.24  20.32
       bpe2char(3)   ~76M    bpe     char    50/500    ~6.1    ~14    23.27  22.42
       char2char(4)  ~69M    char    char    450/450   ~7.9    ~30    23.38  22.46
       DCNMT         ~54M    char    char    450/450   1       ~5     20.50  19.75
                                                       ~4.6    ~22    23.24  22.48

In Table 1, "Length" indicates the maximum sentence length in training (based on the number of words or characters) and "Size" is the total number of parameters in the models. We report the BLEU scores of DCNMT when trained for one epoch in the upper line and the final scores in the lower line. The results of the other models are taken from (1) Firat et al. (2016), (3) Chung et al. (2016a), (4) Lee et al. (2016) and (5) Luong and Manning (2016) respectively, except that (2) is trained according to Ling et al. (2015b). The only difference between CNMT and DCNMT is that CNMT uses an ordinary RNN to encode source words (taking the last hidden state). The training time for (3) and (4) is calculated based on the training speed reported in Lee et al. (2016). For each test set, the best scores among the models per language pair are bold-faced. Clearly, character-level models are better than subword-level models, and our model is comparable to the state-of-the-art character-level models. Note that the purely character-level model of (5) (Luong and Manning, 2016) took 3 months to train and yielded +0.5 BLEU points compared to our result.
We have analyzed the efficiency of our decoder in Section 3.2. Besides, our model is the simplest and the smallest one in terms of model size.

In this section, we investigate whether our model can learn morphology. First we want to figure out the difference between an ordinary RNN word encoder and our word encoder. We choose some words with similar meanings but different morphology, as shown in Figure 3. We find in Figure 3(a) that the words ending with "ability", which are encoded by the ordinary RNN word encoder, are jammed together. In contrast, the representations produced by our encoder are more reasonable, and words with similar meanings are closer.

Figure 3: Two-dimensional PCA projection of the 600-dimensional representations of the words. (a) ordinary RNN word encoder; (b) our word encoder.

Then we analyze how our word encoder learns morphemes and the rules of how they are combined. We demonstrate the encoding details on "any*" and "every*". Figure 4(a) shows the energy of each character, more precisely, the energy of the preceding characters. We can see that the last character of a morpheme results in a relatively large energy (weight), like "any" and "every" in these words. Moreover, even when the preceding characters are different, a similar weight is produced for the same morpheme, like "way" in "anyway" and "everyway". The two-dimensional PCA projection in Figure 4(b) further validates our idea. The word encoder may be able to guess the meaning of "everything" even if it had never seen "everything" before, thus speeding up learning. More interestingly, we find that not only the ending letter has high energy, but the beginning letter is also important. This matches the behavior of human perception (White et al., 2008).

Figure 4: (a) energy of each character; (b) two-dimensional PCA projection.

Moreover, we apply our trained word encoder to the first line of the Penn Treebank. Unlike Chung et al. (2016b), we are able to detect the boundaries of the subword units. As shown in Figure 5, "consumers", "monday", "football" and "greatest" are segmented into "consum-er-s", "mon-day", "foot-ball" and "great-est" respectively. Since there are no explicit delimiters, it may be more difficult to detect the subword units.

Figure 5: Subword-level boundaries detected by our word encoder.

As analyzed in Section 5.2, learning morphology can speed up learning. This is also shown in Table 1 (En-Fr and En-Cs tasks), from which we see that when we train our model for just one epoch, the obtained result already outperforms the final result of the bpe baseline.

Another advantage of our model is the ability to translate misspelled words or nonce words. A character-level model has a much better chance of recovering the original word or sentence. In Table 2 we list some examples where the source sentences are taken from newstest2013, but we change some words into misspelled or nonce words. We also list the translations from Google translate and the online neural machine translation demo by LISA.

Table 2: Sample translations.
(a) Misspelled words
Source: For the time being howeve their research is unconclusive .
Reference: Leurs recherches ne sont toutefois pas concluantes pour l'instant .
Google translate: Pour le moment, leurs recherches ne sont pas concluantes.
LISA: Pour le moment UNK leur recherche est UNK.
DCNMT: Pour le moment, cependant, leur recherche n'est pas concluante.

(b) Nonce words (morphological change)
Source: Then we will be able to supplement the real world with virtual objects in a much convenienter form .
Reference: Ainsi , nous pourrons completer le monde reel par des objets virtuels dans une forme plus pratique .
Google translate: Ensuite, nous serons en mesure de completer le monde reel avec des objets virtuels dans une forme beaucoup plus pratique.
LISA: Ensuite, nous serons en mesure de completer le vrai monde avec des objets virtuels sous une forme bien UNK.
DCNMT: Ensuite, nous serons en mesure de completer le monde reel avec des objets virtuels dans une forme beaucoup plus pratique.

As listed in Table 2(a), DCNMT is able to translate the misspelled words correctly. For a word-based translator this is impossible, because the misspelled words are mapped to the <unk> token before translating; it will thus produce an <unk> token or simply copy the word from the source sentence (Gulcehre et al., 2016; Luong et al., 2015). More interestingly, DCNMT could translate "convenienter" correctly, as shown in Table 2(b). By concatenating "convenient" and "er", we get the comparative adjective form of "convenient", which never appears in the training set; however, our model guessed it correctly based on the morphemes and the rules.

³The translations by Google translate were made on Nov 4, 2016.

In this paper we have proposed a hierarchical architecture to train a deep character-level neural machine translation model, by introducing a novel word encoder and a multi-leveled decoder. We have demonstrated the efficiency of the training process and the effectiveness of the model in comparison with word-level and other character-level models. The BLEU scores imply that our deep character-level neural machine translation model likely outperforms the word-level models and is competitive with the state-of-the-art character-based models. It is possible to further improve performance by using deeper recurrent networks (Wu et al., 2016), training for more epochs, and training with longer sentence pairs.

As a result of the character-level modeling, we have solved the out-of-vocabulary (OOV) issue that word-level models suffer from, and we have obtained a new ability to translate misspelled or nonce words. More importantly, the deep character-level model is able to learn similar embeddings for words with similar meanings, like the word-level models. Finally, it would potentially be possible to apply the idea behind our approach to many other tasks, such as speech recognition and text summarization.
"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 2014.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations, 2015.

Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. A character-level decoder without explicit segmentation for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016a.

Wang Ling, Tiago Luis, Luis Marujo, Ramon Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. Finding function in form: Compositional character models for open vocabulary word representation. In Empirical Methods in Natural Language Processing, 2015a.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112, 2014.

Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017, 2016.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Frederic Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.

James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.

"}, {"section_index": "12", "section_name": "A DETAILED DESCRIPTION OF THE MODEL", "section_text": "Here we describe the implementation using Theano; it should be applicable to other symbolic deep learning frameworks. We use f to denote the transition of a recurrent network."}, {"section_index": "13", "section_name": "A.1 SOURCE WORD ENCODER", "section_text": "As illustrated in Section 3.1, the word encoder is based on two recurrent neural networks. We compute the representation of the word "anyone" as

r_anyone = tanh( Σ_{t=1}^{6} w_t r_t ),

where

r_t = f(e(x_t), r_{t-1}),
w_t = exp(W_w h_t + b_w),

and h_t is the state of a BiRNN whose forward state is computed by

h_t = f(e(x_t), h_{t-1}).   (9)

The backward state h_t ∈ R^l is computed similarly, but in reverse order (Eqn. (10)); each r_t contains information about the preceding characters, and the weight w_t of each representation r_t is computed from the concatenated forward and backward states.

After encoding the words with the source word encoder, we feed the representations to the source sentence encoder. For example, the source "Hello world </s>" is encoded into a vector [r_Hello, r_world, r_</s>]; the BiRNN sentence encoder then encodes this vector into [v_1, v_2, v_3]. The computation is the same as in Eqn. (9) and Eqn. (10), but the input now changes to the representations of the words.
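Tying A.1 and A.2 together, the sketch below encodes "Hello world </s>" by first producing a vector per word and then running a BiRNN over those vectors. The encode_word stand-in plays the role of the word encoder sketched earlier; cells, dimensions and weights are illustrative assumptions, not the trained model.

# Illustrative A.1/A.2 pipeline: word vectors, then a BiRNN over them.
import numpy as np

rng = np.random.RandomState(1)
d_word, d_sent = 16, 24

def encode_word(word):                       # stand-in for the A.1 word encoder
    return np.tanh(rng.randn(d_word))

W, U = rng.randn(d_word, d_sent) * 0.1, rng.randn(d_sent, d_sent) * 0.1

def rnn(xs):                                 # s_t = tanh(W x_t + U s_{t-1})
    s, out = np.zeros(d_sent), []
    for x in xs:
        s = np.tanh(x @ W + s @ U)
        out.append(s)
    return out

words = ["Hello", "world", "</s>"]
r = [encode_word(w) for w in words]          # [r_Hello, r_world, r_</s>]
v = [np.concatenate([f, b])                  # v_1..v_3: BiRNN annotations
     for f, b in zip(rnn(r), rnn(r[::-1])[::-1])]
print(len(v), v[0].shape)                    # 3 (48,)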
The first-level decoder is similar to that of Bahdanau et al. (2015), which utilizes the attention mechanism. Given the context vector c_t from the encoder, the hidden state u_t ∈ R^m of the GRU is computed by

u_t = (1 − z_t) ⊙ u_{t-1} + z_t ⊙ ũ_t,

where

ũ_t = φ(W r_{y_{t-1}} + U (q_t ⊙ u_{t-1}) + C c_t),
z_t = σ(W_z r_{y_{t-1}} + U_z u_{t-1} + C_z c_t),
q_t = σ(W_q r_{y_{t-1}} + U_q u_{t-1} + C_q c_t),

and r_{y_{t-1}} is the representation of the previous target word, produced by an ordinary RNN (taking the last state). The context vector c_t is computed by the attention mechanism at each step:

c_t = Σ_{j=1}^{T_x} α_{tj} h_j,   α_{tj} = exp(e_tj) / Σ_{k=1}^{T_x} exp(e_tk),   e_tj = E tanh(W_e u_{t-1} + H_e h_j),

where E ∈ R^{1×m} maps the vector into a scalar. The hidden state u_t is then further processed as in Eqn. (8) before being fed to the second-level decoder:

s_{t+1} = W_1 c_{t+1} + W_2 r_{y_t} + W_3 u_t + b.

As described in Section 3.2, the number of outputs of the first-level decoder is much smaller than the length of the target character sequence. It would be intractable to conditionally pick outputs from the first-level decoder when training in batch manner (at least intractable for Theano (Bastien et al., 2012) and other symbolic deep learning frameworks to build symbolic expressions). We use a matrix R ∈ R^{T_y × T} to unfold the outputs [s_1, ..., s_{T_y}] of the first-level decoder (T_y is the number of words in the target sentence and T is the number of characters). R is a symbolic matrix in the final loss; it is constructed according to the delimiters in the target sentences during training (see Section 3.2 for the detailed construction; note that R is a tensor in batch training). After unfolding, the input of the HGRU becomes [s_{i_1}, ..., s_{i_T}], that is,

[s_{i_1}, ..., s_{i_T}] = [s_1, ..., s_{T_y}] R.

Finally, we compute the cross-entropy loss and train with the SGD algorithm.
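For concreteness, the following NumPy sketch runs one attention step of the first-level decoder and the Eqn. (8) combination. All weights, dimensions and inputs are random illustrative stand-ins for the trained parameters.

# One attention step (e_tj, alpha_tj, c_t) plus the Eqn. (8) combination.
import numpy as np

rng = np.random.RandomState(2)
m, d_enc, d_r, Tx = 12, 10, 16, 3

u_prev = rng.randn(m)                    # u_{t-1}: first-level decoder state
h = rng.randn(Tx, d_enc)                 # encoder annotations h_j
W_e, H_e = rng.randn(m, m) * 0.1, rng.randn(d_enc, m) * 0.1
E = rng.randn(m) * 0.1                   # E in R^{1 x m}, maps to a scalar

e = np.array([E @ np.tanh(W_e @ u_prev + H_e.T @ h_j) for h_j in h])
alpha = np.exp(e) / np.exp(e).sum()      # alpha_tj = exp(e_tj)/sum_k exp(e_tk)
c = alpha @ h                            # c_t = sum_j alpha_tj h_j

r_y = rng.randn(d_r)                     # feedback from the target word encoder
W1, W2, W3 = (rng.randn(d_enc, m) * 0.1, rng.randn(d_r, m) * 0.1,
              rng.randn(m, m) * 0.1)
b = np.zeros(m)
s_next = W1.T @ c + W2.T @ r_y + W3.T @ u_prev + b   # Eqn. (8)
print(c.shape, s_next.shape)             # (10,) (12,)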
We show additional sample translations in the following tables.

Table 3: Sample translations of En-Fr.

Source: This " disturbance " produces an electromagnetic wave ( of light , infrared , ultraviolet etc . ) , and this wave is nothing other than a photon - and thus one of the " force carrier " bosons .
Reference: Quand , en effet , une particule ayant une charge electrique accelere ou change de direction , cela " derange " le champ electromagnetique en cet endroit precis , un peu comme un caillou lance dans un etang .
DCNMT: Lorsque , en fait , une particule ayant une charge electrique accelere ou change de direction , cela " perturbe " le champ electromagnetique dans cet endroit specifique , plutot comme un galet jete dans un etang .

Source: Since October , a manifesto , signed by palliative care luminaries including Dr Balfour Mount and Dr Bernard Lapointe , has been circulating to demonstrate their opposition to such an initiative .
Reference: Depuis le mois d' octobre , un manifeste , signe de sommites des soins palliatifs dont le Dr Balfour Mount et le Dr Bernard Lapointe , circule pour temoigner de leur opposition a une telle initiative .
DCNMT: Depuis octobre , un manifeste , signe par des liminaires de soins palliatifs , dont le Dr Balfour Mount et le Dr Bernard Lapointe , a circule pour demontrer leur opposition a une telle initiative .

Table 4: Sample translations of En-Cs.

Source: French troops have left their area of responsibility in Afghanistan ( Kapisa and Surobi ) .
Reference: Francouzske jednotky opustily svou oblast odpovednosti v Afghanistanu ( Kapisa a Surobi ) .
DCNMT: Francouzske jednotky opustily svou oblast odpovednosti v Afghanistanu ( Kapisa a Surois ) .

Source: " All the guests were made to feel important and loved " recalls the top model , who started working with him during Haute Couture Week Paris , in 1995 .
Reference: Vsichni pozvani se diky nemu mohli citit duleziti a milovani , " vzpomina top modelka , ktera s nim zacala pracovat v prubehu Parizskeho tydne vrcholne mody v roce 1995 .
DCNMT: " Vsichni hoste byli provedeni , aby se citili duleziti a milovani " pripomina nejvyssi model , ktery s nim zacal pracovat v prubehu tydeniku Haute Coutupe v Parizi v roce 1995 .

Source: " There are so many private weapons factories now , which do not endure competition on the international market and throw weapons from under the counter to the black market , including in Moscow , " says the expert .
Reference: " V soucasnosti vznikaji soukrome zbrojarske podniky , ktere nejsou konkurenceschopne na mezinarodnim trhu , a vyrazuji zbrane , ktere dodavaji na cerny trh vcetne Moskvy , " rika tento odbornik .
DCNMT: " V soucasnosti existuje tolik soukromych zbrani , ktere nevydrzi hospodarskou soutez na mezinarodnim trhu a hodi zbrane pod pultem k cernemu trhu , vcetne Moskvy , " rika odbornik .

Table 5: Sample translations of Cs-En.
Source: Prezident Karzai nechce zahranicni kontroly , zejmena ne pri prilezitosti voleb planovanych na duben 2014 .
Reference: President Karzai does not want any foreign controls , particularly on the occasion of the elections in April 2014 .
DCNMT: President Karzai does not want foreign controls , particularly in the opportunity of elections planned on April 2014 .

Source: Manzelsky par mel dve deti , Prestona a Heidi , a dlouhou dobu zili v kalifornskem meste Malibu , kde pobyva mnoho celebrit .
Reference: The couple had two sons , Preston and Heidi , and lived for a long time in the Californian city Malibu , home to many celebrities .
DCNMT: The married couple had two children , Preston and Heidi , and long lived in the California city of Malibu , where many celebrities resided .

Source: Trestny cin rouhani je zachovan a urazka je nadale zakazana , coz by mohlo mit vazne dusledky pro svobodu vyjadrovani , zejmena pak pro tisk .
Reference: The offence of blasphemy is maintained and insults are now prohibited , which could have serious consequences on freedom of expression , particularly for the press .
DCNMT: The criminal action of blasphemy is maintained and insult is still prohibited , which could have serious consequences for freedom of expression , especially for the press ."}]
BybtVK9lg | [{"section_index": "0", "section_name": "AUTOENCODING VARIATIONAL INFERENCE FOR TOPIC MODELS", "section_text": "Akash Srivastava
Informatics Forum, University of Edinburgh, 10 Crichton St, Edinburgh EH8 9AB, UK
Topic models are one of the most popular methods for learning representations of text, but a major challenge is that any change to the topic model requires mathematically deriving a new inference algorithm. A promising approach to address this problem is autoencoding variational Bayes (AEVB), but it has proven difficult to apply to topic models in practice. We present what is to our knowledge the first effective AEVB based inference method for latent Dirichlet allocation (LDA), which we call Autoencoded Variational Inference For Topic Models (AVITM). This model tackles the problems caused for AEVB by the Dirichlet prior and by component collapsing. We find that AVITM matches traditional methods in accuracy with much better inference time. Indeed, because of the inference network, we find that it is unnecessary to pay the computational cost of running variational optimization on test data. Because AVITM is black box, it is readily applied to new topic models. As a dramatic illustration of this, we present a new topic model called ProdLDA, that replaces the mixture model in LDA with a product of experts. By changing only one line of code from LDA, we find that ProdLDA yields much more interpretable topics, even if LDA is trained via collapsed Gibbs sampling."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Topic models (Blei, 2012) are among the most widely used models for learning unsupervised representations of text, with hundreds of different model variants in the literature, and have found applications ranging from the exploration of the scientific literature (Blei & Lafferty, 2007) to computer vision (Fei-Fei & Perona, 2005), bioinformatics (Rogers et al., 2005), and archaeology (Mimno, 2009). A major challenge in applying topic models and developing new models is the computational cost of computing the posterior distribution.
Therefore, a large body of work has considered approximate inference methods, the most popular being variational methods, especially mean field methods, and Markov chain Monte Carlo, particularly methods based on collapsed Gibbs sampling.

Both mean-field and collapsed Gibbs have the drawback that applying them to new topic models, even if there is only a small change to the modeling assumptions, requires re-deriving the inference methods, which can be mathematically arduous and time consuming, and limits the ability of practitioners to freely explore the space of different modeling assumptions. This has motivated the development of black-box inference methods (Ranganath et al., 2014; Mnih & Gregor, 2014; Kucukelbir et al., 2016; Kingma & Welling, 2014) which require only very limited and easy to compute information from the model, and hence can be applied automatically to new models given a simple declarative specification of the generative process.

Autoencoding variational Bayes (AEVB) (Kingma & Welling, 2014; Rezende et al., 2014) is a particularly natural choice for topic models, because it trains an inference network (Dayan et al., 1995), a neural network that directly maps a document to an approximate posterior distribution,

*Additional affiliation: Alan Turing Institute, British Library, 96 Euston Road, London NW1 2DB, UK.
Charles Sutton
csutton@inf.ed.ac.uk

without the need to run further variational updates. This is intuitively appealing because in topic models, we expect the mapping from documents to posterior distributions to be well behaved, that is, that a small change in the document will produce only a small change in topics. This is exactly the type of mapping that a universal function approximator like a neural network should be good at representing. Essentially, the inference network learns to mimic the effect of probabilistic inference, so that on test data, we can enjoy the benefits of probabilistic modeling without paying a further cost for inference.

However, despite some notable successes for latent Gaussian models, black box inference methods are significantly more challenging to apply to topic models. For example, in initial experiments we tried to apply ADVI (Kucukelbir et al., 2016), a recent black-box variational method, but it was difficult to obtain any meaningful topics. Two main challenges are: first, the Dirichlet prior is not a location-scale family, which hinders reparameterisation, and second, the well known problem of component collapsing (Dinh & Dumoulin, 2016), in which the inference network becomes stuck in a bad local optimum in which all topics are identical.

In this paper, we present what is, to our knowledge, the first effective AEVB inference method for topic models, which we call Autoencoded Variational Inference for Topic Models, or AVITM. On several data sets, we find that AVITM yields topics of equivalent quality to standard mean-field inference, with a large decrease in training time. We also find that the inference network learns to mimic the process of approximate inference highly accurately, so that it is not necessary to run variational optimization at all on test data.

But perhaps more important is that AVITM is a black-box method that is easy to apply to new models. To illustrate this, we present a new topic model, called ProdLDA, in which the distribution over individual words is a product of experts rather than the mixture model used in LDA. We find that ProdLDA consistently produces better topics than standard LDA, whether measured by automatically determined topic coherence or qualitative examination. Furthermore, because we perform probabilistic inference using a neural network, we can fit a topic model on roughly one million documents in under 80 minutes on a single GPU, and because we are using a black box inference method, implementing ProdLDA requires a change of only one line of code from our implementation of standard LDA.
To summarize, the main advantages of our methods are:

1. Topic coherence: ProdLDA returns consistently better topics than LDA, even when LDA is trained using Gibbs sampling.
2. Computational efficiency: Training AVITM is fast and efficient like standard mean-field. On new data, AVITM is much faster than standard mean field, because it requires only one forward pass through a neural network.
3. Black box: AVITM does not require rigorous mathematical derivations to handle changes in the model, and can be easily applied to a wide range of topic models.

Overall, our results suggest that AVITM is ready to take its place alongside mean field and collapsed Gibbs as one of the workhorse inference methods for topic models.

To fix notation, we begin by describing topic modelling and AVITM.

We describe the most popular topic model, latent Dirichlet allocation (LDA). In LDA, each document of the collection is represented as a mixture of topics, where each topic β_k is a probability distribution over the vocabulary. We also use β to denote the matrix β = (β_1 ... β_k). The generative process is then as described in Algorithm 1. Under this generative model, the marginal likelihood of a document w is

p(w | α, β) = ∫ ( Π_{n=1}^{N} Σ_{z_n=1}^{k} p(w_n | z_n, β) p(z_n | θ) ) p(θ | α) dθ.   (1)

Posterior inference over the hidden variables θ and z is intractable due to the coupling between θ and β under the multinomial assumption (Dickey, 1983)."}, {"section_index": "3", "section_name": "2.2 MEAN FIELD AND AEVB", "section_text": "A popular approximation for efficient inference in topic models is mean field variational inference, which breaks the coupling between θ and z by introducing free variational parameters γ over θ and φ over z and dropping the edges between them. This results in an approximate variational posterior q(θ, z | γ, φ) = q_γ(θ) Π_n q_φ(z_n), which is optimized to best approximate the true posterior p(θ, z | w, α, β). The optimization problem is to minimize

L(γ, φ | α, β) = D_KL[ q(θ, z | γ, φ) ‖ p(θ, z | w, α, β) ] − log p(w | α, β).   (2)

In fact the above equation is a lower bound to the marginal log likelihood, sometimes called an evidence lower bound (ELBO), a fact which can be easily verified by multiplying and dividing (1) by the variational posterior and then applying Jensen's inequality on its logarithm. Note that the mean field method optimizes over an independent set of variational parameters for each document. To emphasize this, we will refer to this standard method by the non-standard name of Decoupled Mean-Field Variational Inference (DMFVI).

For LDA, this optimization has closed form coordinate descent equations due to the conjugacy between the Dirichlet and multinomial distributions. Although this is a computationally convenient aspect of DMFVI, it also limits its flexibility.
Applying DMFVI to new models relies on the practitioner's ability to derive the closed form updates, which can be impractical and sometimes impossible.

AEVB (Kingma & Welling, 2014; Rezende et al., 2014) is one of several recent methods that aim at "black box" inference to sidestep this issue. First, rewrite the ELBO as

L(γ, φ | α, β) = −D_KL[ q(θ, z | γ, φ) ‖ p(θ, z | α) ] + E_{q(θ,z|γ,φ)}[ log p(w | z, θ, α, β) ].   (3)

This form is intuitive. The first term attempts to match the variational posterior over latent variables to the prior on the latent variables, while the second term ensures that the variational posterior favors values of the latent variables that are good at explaining the data. By analogy to autoencoders, this second term is referred to as a reconstruction term.

What makes this method "autoencoding", and in fact the main difference from DMFVI, is the parameterization of the variational distribution. In AEVB, the variational parameters are computed by using a neural network called an inference network that takes the observed data as input. For example, if the model prior p(θ) were Gaussian, we might define the inference network as a feed-forward neural network (μ(w), v(w)) = f(w, γ), where μ(w) and v(w) are both vectors of length k, and γ are the network's parameters. Then we might choose a Gaussian variational distribution q(θ) = N(θ; μ(w), diag(v(w))), where diag(·) produces a diagonal matrix from a column vector. The variational parameters γ can then be chosen by optimizing the ELBO (3). Note that we have now, unlike DMFVI, coupled the variational parameters for different documents, because they are all computed from the same neural network. To compute the expectations with respect to q in (3), Kingma & Welling (2014) and Rezende et al. (2014) use a Monte Carlo estimator which they call the "reparameterization trick" (RT; it appears also in Williams (1992)). In the RT, we define a variate U with a simple distribution that is independent of all variational parameters, like a uniform or standard normal, and a reparameterization function F such that F(U, γ) has distribution q_γ. This is always possible, as we could choose F to be the inverse cumulative distribution function of q_γ, although we will additionally want F to be easy to compute and differentiable. If we can determine a suitable F, then we can approximate (3) by taking Monte Carlo samples of U, and optimize using stochastic gradient descent.

Although simple conceptually, applying AEVB to topic models raises several practical challenges. The first is the need to determine a reparameterization function for q(θ) and q(z_n) to use the RT. The z_n are easily dealt with, but θ is more difficult; if we choose q(θ) to be Dirichlet, it is difficult to apply the RT, whereas if we choose q to be Gaussian or logistic normal, then the KL divergence in (3) becomes more problematic. The second issue is the well known problem of component collapsing (Dinh & Dumoulin, 2016), which is a type of bad local optimum that is particularly endemic to AEVB and similar methods. We describe our solutions to each of these problems in the next few subsections.

Dealing with discrete variables like z using reparameterization can be problematic, but fortunately in LDA the variable z can be conveniently summed out. By collapsing z, we are left with having to sample from θ only, reducing (1) to

p(w | α, β) = ∫ ( Π_{n=1}^{N} p(w_n | β, θ) ) p(θ | α) dθ,

where the distribution of w_n | β, θ is Multinomial(1, βθ), recalling that β denotes the matrix of all topic-word probability vectors.

LDA gets its name from the Dirichlet prior on the topic proportions θ, and the choice of Dirichlet prior is important to obtaining interpretable topics (Wallach et al., 2009). But it is difficult to handle the Dirichlet within AEVB because it is difficult to develop an effective reparameterization function for the RT. Fortunately, an RT does exist for the Gaussian distribution and has been shown to perform quite well in the context of the variational autoencoder (VAE) (Kingma & Welling, 2014).

We resolve this issue by constructing a Laplace approximation to the Dirichlet prior. Following MacKay (1998), we do so in the softmax basis instead of the simplex. There are two benefits of this choice. First, Dirichlet distributions are unimodal in the softmax basis, with their modes coinciding with the means of the transformed densities. Second, the softmax basis also allows for carrying out unconstrained optimization of the cost function without the simplex constraints. The Dirichlet probability density function in this basis over the softmax variable h is given by

P(θ(h) | α) = ( Γ(Σ_k α_k) / Π_k Γ(α_k) ) ( Π_k θ_k^{α_k} ) g(1ᵀh).

Here θ = σ(h), where σ(·) represents the softmax function. Recall that the Jacobian of σ is proportional to Π_k θ_k, and g(·) is an arbitrary density that ensures integrability by constraining the redundant degree of freedom.
We use the Laplace approximation of Hennig et al. (2012), which has the property that the covariance matrix becomes diagonal for large k (number of topics). This approximation to the Dirichlet prior p(θ | α) results in a distribution over the softmax variables h that is a multivariate normal with mean μ_1 and covariance matrix Σ_1, where

μ_{1k} = log α_k − (1/K) Σ_i log α_i,
Σ_{1kk} = (1/α_k) (1 − 2/K) + (1/K²) Σ_i (1/α_i).

Finally, we approximate p(θ | α) in the simplex basis with p̂(θ | μ_1, Σ_1) = LN(θ | μ_1, Σ_1), where LN is a logistic normal distribution with parameters μ_1, Σ_1. Although we approximate the Dirichlet prior in LDA with a logistic normal, this is not the same idea as a correlated topic model (Blei & Lafferty, 2006), because we use a diagonal covariance matrix. Rather, it is an approximation to standard LDA.

"}, {"section_index": "4", "section_name": "3.3 VARIATIONAL OBJECTIVE", "section_text": "Now we can write the modified variational objective function. We use a logistic normal variational distribution over θ with diagonal covariance. More precisely, we define two inference networks as feed-forward neural networks f_μ and f_Σ with parameters δ; the output of each network is a vector in R^K. Then for a document w, we define q(θ) to be logistic normal with mean μ_0 = f_μ(w, δ) and diagonal covariance Σ_0 = diag(f_Σ(w, δ)), where diag converts a column vector to a diagonal matrix. Note that we can generate samples from q(θ) by sampling ε ~ N(0, I) and computing θ = σ(μ_0 + Σ_0^{1/2} ε). This yields the objective

L(Θ) = Σ_{d=1}^{D} [ (1/2) ( tr(Σ_1^{-1} Σ_0) + (μ_1 − μ_0)ᵀ Σ_1^{-1} (μ_1 − μ_0) − K + log (|Σ_1| / |Σ_0|) ) − E_{ε~N(0,I)} [ w_dᵀ log ( σ(β) σ(μ_0 + Σ_0^{1/2} ε) ) ] ],   (7)

where Θ represents the set of all the model and variational parameters and w_1, ..., w_D are the documents in the corpus. The first line in this equation arises from the KL divergence between the two logistic normal distributions q and p̂, while the second line is the reconstruction error.

In order to impose the simplex constraint on the β matrix during the optimization, we apply the softmax transformation. That is, each topic β_k ∈ R^V is unconstrained, and the notation σ(β) means applying the softmax function separately to each column of the matrix β. Note that the mixture of multinomials for each word w_n can then be written as p(w_n | β, θ) = (σ(β)θ)_{w_n}, which explains the dot product in (7). To optimize (7), we use stochastic gradient descent using Monte Carlo samples from ε, following the Law of the Unconscious Statistician.
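The sketch below evaluates the two terms of the objective in Eqn. (7) for a single document with one Monte Carlo sample. The random vectors standing in for the inference network outputs f_μ, f_Σ and for β are assumptions for illustration; only the formulas are taken from the text above.

# One-document, one-sample evaluation of the Eqn. (7) objective:
# Laplace prior parameters from alpha, closed-form KL, MC reconstruction.
import numpy as np

rng = np.random.RandomState(0)
K, V = 10, 50                      # topics, vocabulary size
alpha = np.full(K, 0.02)

# Laplace approximation to Dirichlet(alpha) in the softmax basis
mu1 = np.log(alpha) - np.log(alpha).mean()
sigma1 = (1.0 / alpha) * (1 - 2.0 / K) + (1.0 / K**2) * (1.0 / alpha).sum()

# Stand-ins for the inference network outputs f_mu(w), f_Sigma(w)
mu0 = rng.randn(K)
sigma0 = np.exp(rng.randn(K))      # diagonal of Sigma_0 (positive)

kl = 0.5 * ((sigma0 / sigma1).sum()
            + ((mu1 - mu0) ** 2 / sigma1).sum()
            - K + np.log(sigma1).sum() - np.log(sigma0).sum())

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

beta = rng.randn(K, V)             # unnormalized topic matrix
w = rng.multinomial(40, np.ones(V) / V).astype(float)   # bag-of-words w_d

eps = rng.randn(K)                 # RT sample: theta = softmax(mu0 + Sigma0^(1/2) eps)
theta = softmax(mu0 + np.sqrt(sigma0) * eps)
recon = w @ np.log(theta @ np.apply_along_axis(softmax, 1, beta) + 1e-10)

elbo = -kl + recon                 # Eqn. (7) sums the negative of this over documents
print(kl, recon, elbo)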
"}, {"section_index": "5", "section_name": "3.4 TRAINING AND PRACTICAL CONSIDERATIONS: DEALING WITH COMPONENT COLLAPSING", "section_text": "AEVB is prone to component collapsing (Dinh & Dumoulin, 2016), which is a particular type of local optimum, very close to the prior belief, reached early in training. As the latent dimensionality of the model is increased, the KL regularization in the variational objective dominates, so that the outgoing decoder weights collapse for the components of the latent variable that stay close to the prior and do not show any posterior divergence. In our case, the collapsing specifically occurs because of the inclusion of the softmax transformation to produce θ. The result is that the k inferred topics are identical, as shown in Table 7.

We were able to resolve this issue by tweaking the optimization. Specifically, we train the network with the ADAM optimizer (Kingma & Ba, 2015) using a high moment weight (β_1) and learning rate (η). Through training at higher rates, early peaks in the functional space can be easily avoided. The problem is that momentum based training coupled with a higher learning rate causes the optimizer to diverge. While explicit gradient clipping helps to a certain extent, we found that batch normalization (Ioffe & Szegedy, 2015) does even better by smoothing out the functional space and hence curbing sudden divergence.

Finally, we also found an increase in performance with dropout units when applied to θ to force the network to use more of its capacity.

While more prominent in the AEVB framework, the collapsing can also occur in DMFVI if the learning offset (referred to as the τ parameter (Hofmann, 1999)) is not set properly. Interestingly, a similar learning offset or annealing based approach can also be used to down-weight the KL term in early iterations of training to avoid local optima.

"}, {"section_index": "6", "section_name": "4.1 MODEL", "section_text": "In LDA, the distribution p(w | θ, β) is a mixture of multinomials. A problem with this assumption is that it can never make any predictions that are sharper than the components that are being mixed (Hinton & Salakhutdinov, 2009). This can result in some topics appearing that are of poor quality and do not correspond well with human judgment. One way to resolve this issue is to replace this word-level mixture with a weighted product of experts, which by definition is capable of making sharper predictions than any of the constituent experts (Hinton, 2002). In this section we present a novel topic model, ProdLDA, that replaces the mixture assumption at the word level in LDA with a weighted product of experts, resulting in a drastic improvement in topic coherence. This is a good illustration of the benefits of a black box inference method, like AVITM, to allow exploration of new models.

The ProdLDA model can be simply described as latent Dirichlet allocation where the word-level mixture over topics is carried out in natural parameter space, i.e. the topic matrix is not constrained to exist in a multinomial simplex prior to mixing. In other words, the only changes from LDA are that β is unnormalized, and that the conditional distribution of w_n is defined as w_n | β, θ ~ Multinomial(1, σ(βθ)).

The connection to a product of experts is straightforward, as for the multinomial, a mixture of natural parameters corresponds to a weighted geometric average of the mean parameters. That is, consider two N dimensional multinomials parametrized by mean vectors p and q. Define the corresponding natural parameters as p = σ(r) and q = σ(s), and let δ ∈ [0, 1]. It is then easy to show that

p(x | δr + (1 − δ)s) ∝ Π_{i=1}^{N} σ(δ r_i + (1 − δ) s_i)^{x_i} ∝ Π_{i=1}^{N} ( p_i^δ q_i^{1−δ} )^{x_i}.

So the ProdLDA model can be simply described as a product of experts, that is, p(w_n | θ, β) ∝ Π_k p(w_n | z_n = k, β)^{θ_k}. ProdLDA is an instance of the exponential-family PCA (Collins et al., 2001) class, and relates to the exponential-family harmoniums (Welling et al., 2004) but with non-Gaussian priors.
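The "one line of code" difference between the two decoders can be made explicit. In the sketch below, which uses random stand-ins for θ and β, the only change is whether the softmax is applied before or after mixing over topics; this illustrates the model definition above, not the authors' code.

# LDA vs. ProdLDA decoders: mixture of mean parameters vs. mixture of
# natural parameters (illustrative NumPy).
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

rng = np.random.RandomState(0)
K, V = 10, 50
theta = softmax(rng.randn(K))          # topic proportions, on the simplex
beta = rng.randn(K, V)                 # unnormalized topic-word matrix

p_lda = theta @ softmax(beta, axis=1)  # LDA: mixture of normalized topics
p_prod = softmax(theta @ beta)         # ProdLDA: mix in natural parameters

print(p_lda.sum(), p_prod.sum())       # both are distributions over V words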
"}, {"section_index": "7", "section_name": "5 RELATED WORK", "section_text": "For an overview of topic modeling, see Blei (2012). There are several examples of topic models based on neural networks and neural variational inference (Hinton & Salakhutdinov, 2009; Larochelle & Lauly, 2012; Mnih & Gregor, 2014; Miao et al., 2016), but we are unaware of methods that apply AEVB generically to a topic model specified by an analyst, or even of a successful application of AEVB to the most widely used topic model, latent Dirichlet allocation.

Recently, Miao et al. (2016) introduced a closely related model called the Neural Variational Document Model (NVDM). This method uses a latent Gaussian distribution over topics, like probabilistic latent semantic indexing, and averages over topic-word distributions in the logit space. However, they do not use either of the two key aspects of our work: explicitly approximating the Dirichlet prior using a Gaussian, or high-momentum training. In the experiments we show that these aspects lead to much improved training and much better topics.

Qualitative evaluation of topic models is a challenging task, and consequently a large body of work has developed automatic evaluation metrics that attempt to match human judgment of topic quality. Traditionally, perplexity has been used to measure the goodness-of-fit of the model, but it has been repeatedly shown that perplexity is not a good metric for qualitative evaluation of topics (Newman et al., 2010). Several new metrics for topic coherence evaluation have thus been proposed; see Lau et al. (2014) for a comparative review. Lau et al. (2014) showed that among all the competing metrics, normalized pointwise mutual information (NPMI) between all the pairs of words in a set of topics matches human judgment most closely, so we adopt it in this work. We also report perplexity, primarily as a way of evaluating the capability of different optimizers. Following standard practice (Blei et al., 2003), for variational methods we use the ELBO to calculate perplexity. For AEVB methods, we calculate the ELBO using the same Monte Carlo approximation as for training.

We run experiments on both the 20 Newsgroups (11,000 training instances with a 2000-word vocabulary) and RCV1 Volume 2 (~800K training instances with a 10000-word vocabulary) datasets. Our preprocessing involves tokenization, removal of some non-UTF-8 characters for 20 Newsgroups, and English stop word removal. We first compare our AVITM inference method with standard online mean-field variational inference (Hoffman et al., 2010) and collapsed Gibbs sampling (Griffiths & Steyvers, 2004) on the LDA model. We use standard implementations of both methods, scikit-learn for DMFVI and mallet (McCallum, 2002) for collapsed Gibbs. Then we compare two autoencoding inference methods on three different topic models: standard LDA, ProdLDA using our inference method, and the Neural Variational Document Model (NVDM) (Miao et al., 2016), using the inference described in that paper.

Table 1: Average topic coherence on the 20 Newsgroups dataset. Higher is better.

Tables 1 and 2 show the average topic coherence values for all the models for two different settings of k, the number of topics. Comparing the different inference methods for LDA, we find that, consistent with previous work, collapsed Gibbs sampling yields better topics than mean-field methods. Among the variational methods, we find that the VAE-LDA model (AVITM) yields similar topic coherence and perplexity to the standard DMFVI (although in some cases, VAE-LDA yields significantly better topics). However, AVITM is significantly faster to train than DMFVI. It takes 46 seconds on 20 Newsgroups compared to 18 minutes for DMFVI, whereas for the million-document corpus of RCV1 it takes only under 1.5 hours, while scikit-learn's implementation of DMFVI failed to return any results even after running for 24 hours.
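For reference, a minimal sketch of the NPMI coherence metric adopted above: average normalized PMI over all pairs of a topic's top words, with document-level co-occurrence probabilities. The toy corpus, the smoothing constant, and the convention for never co-occurring pairs are assumptions; real evaluations use a large reference corpus.

# Pairwise NPMI coherence for one topic (illustrative).
import itertools
import numpy as np

docs = [{"game", "team", "play", "win"},
        {"team", "player", "win"},
        {"god", "church", "christian"}]

def npmi(topic_words, docs, eps=1e-12):
    n = len(docs)
    p = lambda *ws: sum(all(w in d for w in ws) for d in docs) / n
    scores = []
    for wi, wj in itertools.combinations(topic_words, 2):
        pij = p(wi, wj)
        if pij == 0:
            scores.append(-1.0)        # never co-occurring: lowest score
            continue
        pmi = np.log(pij / (p(wi) * p(wj) + eps))
        scores.append(pmi / (-np.log(pij + eps)))
    return float(np.mean(scores))

print(npmi(["game", "team", "win"], docs))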
Comparing the new topic models with LDA, it is clear that ProdLDA finds significantly better topics than LDA, even when LDA is trained by collapsed Gibbs sampling. To verify this qualitatively, we display examples of topics from all the models in Table 6. The topics from ProdLDA appear visually more coherent than those of NVDM or LDA. Unfortunately, NVDM does not perform comparably to LDA for any value of k. To avoid any training dissimilarities, we train all the competing models until we reach the perplexities that were reported in previous work. These are reported in Table 3.

Table 2: Average topic coherence on the RCV1 dataset. Higher is better. Results not reported for LDA DMFVI, as inference failed to converge in 24 hours.

Table 3: Perplexity scores for 20 Newsgroups. Lower is better.

A major benefit of AVITM inference is that it does not require running variational optimization, which can be costly, for new data. Rather, the inference network can be used to obtain topic proportions for new documents without running any optimization. We evaluate whether this approximation is accurate, i.e. whether the neural network effectively learns to mimic probabilistic inference. We verify this by training the model on the training set, then on the test set, holding the topics (the β matrix) fixed, and comparing the test perplexity if we obtain topic proportions by running the inference neural network directly, or by the standard method of variational optimization of the inference network on the test set. As shown in Table 4, the perplexity remains practically unchanged. The computational benefits of this are remarkable: on both datasets, computing perplexity using the neural network takes well under a minute, while running the standard variational approximation takes ~3 minutes even on the smaller 20 Newsgroups data.

Finally, we investigate the reasons behind the improved topic coherence in ProdLDA. First, Table 5 explores the effects of each of our two main ideas separately. In this table, "Dirichlet" means that the prior is the Laplace approximation to Dirichlet(α = 0.02), while "Gaussian" indicates that we use a standard Gaussian as prior. "High Learning Rate" training means we use β_1 > 0.8 and 0.1 > η > 0.001 with batch normalization, whereas "Low Learning Rate" means β_1 > 0.8 and 0.0009 > η > 0.00009 without batch normalization. (For both parameters, the precise value was chosen by Bayesian optimization. We found that these values in the "with BN" cases were close to the default settings in the Adam optimizer.) We find that the high topic coherence that we achieve in this work is only possible if we use both tricks together. In fact, high learning rates with momentum are required to avoid local minima at the beginning of training, and batch normalization is required to be able to train the network at these values without diverging. If trained at a lower momentum value or at a lower learning rate, ProdLDA shows component collapsing. Interestingly, if we choose a Gaussian prior, rather than the logistic normal approximation used in ProdLDA or NVLDA, the model is easier to train even with a low learning rate, without any momentum or batch normalization.

The main advantage of AVITM topic models as opposed to NVDM is that the Laplace approximation allows us to match a specific Dirichlet prior of interest. As pointed out by Wallach et al. (2009), the choice of Dirichlet hyperparameter is important to the topic quality of LDA. Following this reasoning, we hypothesize that AVITM topics are higher quality than those of NVDM because they
are much more focused, i.e., apply to a more specific subset of documents of interest. We provide support for this hypothesis in Figure 1, by evaluating the sparsity of the posterior proportions over topics, that is, how many of the model's topics are typically used to explain each document. In order to estimate the sparsity in topic proportions, we project samples from the Gaussian latent spaces of ProdLDA and NVDM onto the simplex and average them across documents. We compare the topic sparsity for the standard Gaussian prior used by NVDM to the Laplace approximation of Dirichlet priors with different hyperparameters. Clearly the Laplace approximation to the Dirichlet prior significantly promotes sparsity, providing support for our hypothesis that preserving the Dirichlet prior explains the increased topic coherence in our method.

⁵We note that much recent work follows Hinton & Salakhutdinov (2009) in reporting perplexity for the LDA Gibbs sampler on only a small subset of the test data. Our results are different because we use the entire test dataset.
⁶β_1 is the weight on the average of the gradients from the previous time step and η refers to the learning rate.

Figure 1: Effect of prior assumptions on θ on the sparsity of θ in neural topic models.

Table 4: Evaluation of the inference network of VAE-LDA on the 20 Newsgroups test set. "Inference network only" shows the test perplexity when the inference network is trained on the training set, but no variational optimization is performed on the test set. "Inference Network + Optimization" shows the standard approach of optimizing the ELBO on the test set. The neural network effectively learns to approximate probabilistic inference.

Table 5: Average topic coherence for different choices of prior and optimization strategies of ProdLDA on 20 Newsgroups for k = 50.

The inference network architecture can be found in Figure 2 in the appendix.

Table 6: Five randomly selected topics from all the models.

Model: ProdLDA
- motherboard meg printer quadra hd windows processor vga mhz connector
- armenian genocide turks turkish muslim massacre turkey armenians armenia greek
- voltage nec outlet circuit cable wiring wire panel motor install
- season nhl team hockey playoff puck league flyers defensive player
- israel israeli lebanese arab lebanon arabs civilian territory palestinian militia

Model: LDA NVLDA
- db file output program line entry write bit int return
- drive disk get card scsi use hard ide controller one
- game team play win year player get think good make
- use law state health file gun public issue control firearm
- people say one think life make know god man see

Model: LDA DMFVI
- write article dod ride right go get night dealer like
- gun law use drug crime government court criminal firearm control
- lunar flyers hitter spacecraft power us existence god go mean
- stephanopoulos encrypt spacecraft ripem rsa cipher saturn violate lunar crypto
- file program available server version include software entry ftp use

Model: LDA Collapsed Gibbs
- get right back light side like see take time one
- list mail send post anonymous internet file information user message
- thanks please know anyone help look appreciate get need email
- jesus church god law say christian one christ day come
- bike dod ride dog motorcycle write article bmw helmet get

Model: NVDM
- light die burn body life inside mother tear kill christian
- insurance drug different sport friend bank owner vancouver buy prayer
- input package interface output tape offer component channel level model
- price quadra hockey slot san playoff jose deal market dealer
- christian church gateway catholic christianity homosexual resurrection modem mouse sunday

Table 7: VAE-LDA fails to learn any meaningful topics when component collapsing occurs. The table shows five randomly sampled topics (which are essentially slight variants of each other) from when the VAE-LDA model is trained without BN and high-momentum training.

We present what is to our knowledge the first effective AEVB inference algorithm for latent Dirichlet allocation. Although this combination may seem simple in principle, in practice this method is difficult to train because of the Dirichlet prior and because of the component collapsing problem. By addressing both of these problems, we presented a black-box inference method for topic models, with the notable advantage that the neural network allows computing topic proportions for new documents without the need to run any variational optimization. As an illustration of the advantages of black box inference techniques, we presented a new topic model, ProdLDA, which achieves significantly better topics than LDA, while requiring a change of only one line of code from AVITM for LDA. Our results suggest that AVITM inference is ready to take its place alongside mean field and collapsed Gibbs as one of the workhorse inference methods for topic models.
Future work could include extending our inference methods to handle dynamic and correlated topic models.

We thank Andriy Mnih, Chris Dyer, Chris Russell, David Blei, Hannah Wallach, Max Welling, Mirella Lapata and Yishu Miao for helpful comments, discussions and feedback.

"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "David Blei. Probabilistic topic models. Communications of the ACM, 55(4):77-84, 2012.

David M. Blei and John D. Lafferty. A correlated topic model of science. Annals of Applied Statistics, 1(1):17-35, 2007.

Michael Collins, Sanjoy Dasgupta, and Robert E Schapire. A generalization of principal component analysis to the exponential family. In Advances in Neural Information Processing Systems, volume 13, pp. 23, 2001.

Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The Helmholtz machine. Neural Computation, 7(5):889-904, 1995.

James M Dickey. Multiple hypergeometric functions: Probabilistic interpretations and statistical uses. Journal of the American Statistical Association, 78(383):628-637, 1983.

Li Fei-Fei and Pietro Perona. A Bayesian hierarchical model for learning natural scene categories. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 2, pp. 524-531. IEEE, 2005.

Thomas L Griffiths and Mark Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228-5235, 2004.

Philipp Hennig, David H Stern, Ralf Herbrich, and Thore Graepel. Kernel topic models. In AISTATS, pp. 511-519, 2012.

Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.

Geoffrey E Hinton and Ruslan R Salakhutdinov. Replicated softmax: an undirected topic model. In Advances in Neural Information Processing Systems, pp. 1607-1614, 2009.

Thomas Hofmann. Probabilistic latent semantic indexing.
In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, pp. 50–57. ACM, 1999.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. pp. 448–456, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 3rd International Conference on Learning Representations (ICLR), 2015.

Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and David M. Blei. Automatic differentiation variational inference. arXiv preprint arXiv:1603.00788, 2016.

Jey Han Lau, David Newman, and Timothy Baldwin. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In EACL, pp. 530–539, 2014.

David J. C. MacKay. Choice of basis for Laplace approximation. Machine Learning, 33(1):77–86, 1998.

Yishu Miao, Lei Yu, and Phil Blunsom. Neural variational inference for text processing. pp. 1727–1736, 2016.

Laurent Dinh and Vincent Dumoulin. Training neural Bayesian nets. http://www.iro.umontreal.ca/~bengioy/cifar/NcAp2014-summerschool/slides/Laurent_dinh_cifar_presentation.pdf, August 2016.

Matthew Hoffman, Francis R. Bach, and David M. Blei. Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems, pp. 856–864, 2010.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. The International Conference on Learning Representations (ICLR), Banff, 2014.

Hugo Larochelle and Stanislas Lauly. A neural autoregressive topic model. In Advances in Neural Information Processing Systems, pp. 2708–2716, 2012.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. pp. 1791–1799, 2014.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. pp. 1278–1286, 2014.

Simon Rogers, Mark Girolami, Colin Campbell, and Rainer Breitling. The latent process decomposition of cDNA microarray data sets. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 2(2):143–156, 2005.

Hanna Wallach, David Mimno, and Andrew McCallum. Rethinking LDA: Why priors matter. In NIPS, 2009.

Max Welling, Michal Rosen-Zvi, and Geoffrey E. Hinton. Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems, volume 4, pp. 1481–1488, 2004.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

[Figure 2 (diagram): inference network architecture. Input → FC layer (×100) → softplus → FC layer (100×100) → softplus → 100×k mean and sigma heads, each followed by a batch-normalization layer.]

Figure 2: Architecture of the inference network used in the experiments"}]
HJKkY35le | [{"section_index": "0", "section_name": "ABSTRACT", "section_text": "We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data gener- ating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Generative adversarial networks (GAN) (Goodfellow et al.|2014) have demonstrated their potentia on various tasks, such as image generation, image super-resolution, 3D object generation, and vide. prediction (Radford et al.2015f Ledig et al.]2016] Sonderby et al.2016f Nguyen et al.2016] W et al.[2016f Mathieu et al.2015). The objective is to train a parametrized function (the generator which maps noise samples (e.g., uniform or Gaussian) to samples whose distribution is close to tha. of the data generating distribution. The basic scheme of the GAN training procedure is to trai. a discriminator which assigns higher probabilities to real data samples and lower probabilities t generated data samples, while simultaneously trying to move the generated samples towards the rea. data manifold using the gradient information provided by the discriminator. In a typical setting, th. generator and the discriminator are represented by deep neural networks..\nDespite their success, GANs are generally considered as very hard to train due to training instabilit and sensitivity to hyper-parameters. On the other hand, a common failure pattern observed while training GANs is the collapsing of large volumes of probability mass onto a few modes. Namely although the generators produce meaningful samples, these samples are often from just a few mode (small regions of high probability under the data distribution). Behind this phenomenon is the miss ing modes problem, which is widely conceived as a major problem for training GANs: many mode of the data generating distribution are not at all represented in the generated samples, yielding a much lower entropy distribution, with less variety than the data generating distribution.\nThis issue has been the subject of several recent papers proposing several tricks and new archi-. tectures to stabilize GAN's training and encourage its samples' diversity. However, we argue that a. general cause behind these problems is the lack of control on the discriminator during GAN training. We would like to encourage the manifold of the samples produced by the generator to move towards. that of real data, using the discriminator as a metric. However, even if we train the discriminator. to distinguish between these two manifolds, we have no control over the shape of the discriminator. function in between these manifolds. In fact, the shape of the discriminator function in the data\nAuthors contributed equally"}, {"section_index": "2", "section_name": "MODE REGULARIZED GENERATIVE ADVERSARIAL NETWORKS", "section_text": "Tong Che* Yanran Li* t,sAthul Paul Jacob, Yoshua Bengio, Wenjie Li. 'Montreal Institute for Learning Algorithms, Universite de Montreal, Montreal, QC H3T 1J4, Canada #Department of Computing, The Hong Kong Polytechnic University, Hong Kong. sDavid R. Cheriton School of Computer Science, University Of Waterloo, Waterloo, ON N2L 3G1, Canada. { tong.che,ap.jacob,yoshua.bengio} @umontreal.ca. 
{csyli,cswili}@comp.polyu.edu.hk.

Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution.

spaces can be very non-linear with bad plateaus and wrong maxima and this can therefore hurt the training of GANs (Figure 1).

Figure 1: Samples with very high discrimination values (D=1.0) in DCGAN model trained on CelebA dataset.

To remedy this problem, we propose a novel regularizer for the GAN training target. The basic idea is simple yet powerful: in addition to the gradient information provided by the discriminator, we want the generator to take advantage of other similarity metrics with much more predictable behavior, such as the L2 norm. Differentiating these similarity metrics will provide us with more stable gradients to train our generator. Combining this idea with an approach meant to penalize the missing modes, we propose a family of additional regularizers for the GAN objective. We then design a set of metrics to evaluate the generated samples in terms of both the diversity of modes and the distribution fairness of the probability mass. These metrics are shown to be more robust in judging complex generative models, including those which are well-trained and collapsed ones.

Regularizers usually bring a trade-off between model variance and bias. Our results have shown that, when correctly applied, our regularizers can dramatically reduce model variance, stabilize the training, and fix the missing modes problem all at once, with positive or at the least no negative effects on the generated samples. We also discuss a variant of the regularized GAN algorithm, which can even improve sample quality as compared to the DCGAN baseline.

"}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "In Goodfellow et al. (2014), the GAN is able to generate interesting local structure but globally incoherent images on various datasets. Mirza & Osindero (2014) enlarge GAN's representation capacity by introducing an extra vector to allow the generator to produce samples conditioned on other beneficial information. Motivated by this, several conditional variants of GAN have been applied to a wide range of tasks, including image prediction from a normal map Wang & Gupta (2016), image synthesis from text Reed et al. (2016) and edge maps Isola et al. (2016), real-time image manipulation Zhu et al. (2016), temporal image generation Zhou & Berg (2016); Saito & Matsumoto (2016); Vondrick et al. (2016), texture synthesis, style transfer, and video stylization Li & Wand (2016).

Researchers also aim at stretching GAN's limit to generate higher-resolution, photo-realistic images. Denton et al. (2015) initially apply a Laplacian pyramid framework on GAN to generate images of high resolution. At each level of their LAPGAN, both the generator and the discriminator are convolutional networks. As an alternative to LAPGAN, Radford et al. (2015) successfully design a class of deep convolutional generative adversarial networks which has led to significant improvements on unsupervised image representation learning. Another line of work aimed at improving GANs is through feature learning, including features from the latent space and image space. The motivation is that features from different spaces are complementary for generating perceptual and natural-looking images. With this perspective, some researchers use distances between learned features as losses for training objectives for generative models. Larsen et al. (2015) combine a variational autoencoder objective with a GAN and utilize the learned features from the discriminator in the GANs for better image similarity metrics. It is shown that the learned distance from the discriminator is of great help for the sample visual fidelity.
Recent literature has also shown impressive results on image super-resolution, inferring photo-realistic natural images for 4x upscaling factors Ledig et al. (2016); Sonderby et al. (2016); Nguyen et al. (2016).

Despite these promising successes, GANs are notably hard to train. Although Radford et al. (2015) provide a class of empirical architectural choices that are critical to stabilize GAN's training, it would be even better to train GANs more robustly and systematically. Salimans et al. (2016) propose a feature matching technique to stabilize GAN's training. The generator is required to match the statistics of intermediate features of the discriminator. A similar idea is adopted by Zhao et al. (2016).

In addition to feature distances, Dosovitskiy & Brox (2016) found that the counterpart loss in image space further improves GAN's training stability. Furthermore, some researchers make use of information in both spaces in a unified learning procedure (Dumoulin et al., 2016; Donahue et al., 2016). In Dumoulin et al. (2016), one trains not just a generator but also an encoder, and the discriminator is trained to distinguish between two joint distributions over image and latent spaces produced either by the application of the encoder on the training data or by the application of the generator (decoder) to the latent prior. This is in contrast with the regular GAN training, in which the discriminator only attempts to separate the distributions in the image space. In parallel, Metz et al. (2016) stabilize GANs by unrolling the optimization of the discriminator, which can be considered orthogonal work to ours.

Our work is related to VAEGAN (Larsen et al., 2015) in terms of training an autoencoder or VAE jointly with the GAN model.
However, the variational autoencoder (VAE) in VAEGAN is used to generate samples whereas our autoencoder based losses serves as a regularizer to penalize missing modes and thus improving GAN's training stability and sample qualities. We demonstrate detailed differences from various aspects in Appendix|D"}, {"section_index": "4", "section_name": "S MODE REGULARIZERS FOR GANS", "section_text": "The GAN training procedure can be viewed as a non-cooperative two player game, in which the discriminator D tries to distinguish real and generated examples, while the generator G tries to fool the discriminator by pushing the generated samples towards the direction of higher discrimination values. Training the discriminator D can be viewed as training an evaluation metric on the sample space. Then the generator G has to take advantage of the local gradient log D(G) provided by the discriminator to improve itself, namely to move towards the data manifold.\nWe now take a closer look at the root cause of the instabilities while training GANs. The discrim inator is trained on both generated and real examples. As pointed out by[Goodfellow et al.[(2014) Denton et al.(2015);Radford et al.(2015), when the data manifold and the generation manifold are. disjoint (which is true in almost all practical situations), it is equivalent to training a characteristic function to be very close to 1 on the data manifold, and O on the generation manifold. In order tc pass good gradient information to the generator, it is important that the trained discriminator pro. duces stable and smooth gradients. However, since the discriminator objective does not directly depend on the behavior of the discriminator in other parts of the space, training can easily fail if the shape of the discriminator function is not as expected. As an example|Denton et al.[(2015) notec a common failure pattern for training GANs which is the vanishing gradient problem, in which the discriminator D perfectly classifies real and fake examples, such that around the fake examples, L is nearly zero. In such cases, the generator will receive no gradient to improve itself\nAnother important problem while training GANs is mode missing. In theory, if the generated dat. and the real data come from the same low dimensional manifold, the discriminator can help th generator distribute its probability mass, because the missing modes will not have near-O probability. under the generator and so the samples in these areas can be appropriately concentrated towards. regions where D is closer to 1. However, in practice since the two manifolds are disjoint, D tends. to be near 1 on all the real data samples, so large modes usually have a much higher chance o. attracting the gradient of discriminator. For a typical GAN model, since all modes have similar L. values, there is no reason why the generator cannot collapse to just a few major modes. In othe. words, since the discriminator's output is nearly O and 1 on fake and real data respectively, th generator is not penalized for missing modes."}, {"section_index": "5", "section_name": "3.1 GEOMETRIC METRICS REGULARIZER", "section_text": "Compared with the objective for the GAN generator, the optimization targets for supervised learning. are more stable from an optimization point of view. The difference is clear: the optimization targel for the GAN generator is a learned discriminator. While in supervised models, the optimization targets are distance functions with nice geometric properties. 
The latter usually provides much easier training gradients than the former, especially at the early stages of training.

1 This problem exists even when we use log D(G(z)) as the target for the generator, as noted by Denton et al. (2015) and in our experiments.

Inspired by this observation, we propose to incorporate a supervised training signal as a regularizer on top of the discriminator target. Assume the generator G(z): Z → X generates samples by sampling first from a fixed prior distribution in space Z followed by a deterministic trainable transformation G into the sample space X. Together with G, we also jointly train an encoder E(x): X → Z. Assume d is some similarity metric in the data space; we add E_{x∼p_d}[d(x, G∘E(x))] as a regularizer, where p_d is the data generating distribution. The encoder itself is trained by minimizing the same reconstruction error.

In practice, there are many options for the distance measure d. For instance, the pixel-wise L2 distance, or the distance of learned features by the discriminator (Dumoulin et al., 2016) or by other networks, such as a VGG classifier (Ledig et al., 2016).

The geometric intuition for this regularizer is straightforward. We are trying to move the generated manifold to the real data manifold using gradient descent. In addition to the gradient provided by the discriminator, we can also try to match the two manifolds by other geometric distances, say, the L2 metric. The idea of adding an encoder is equivalent to first training a point-to-point mapping G(E(x)) between the two manifolds and then trying to minimize the expected distance between the points on these two manifolds.

In addition to the metric regularizer, we propose a mode regularizer to further penalize missing modes. In traditional GANs, the optimization target for the generator is the empirical sum \(\sum_i \nabla_\theta \log D(G_\theta(z_i))\). The missing modes problem is caused by the conjunction of two facts: (1) the areas near missing modes are rarely visited by the generator, by definition, thus providing very few examples to improve the generator around those areas, and (2) both missing modes and non-missing modes tend to correspond to a high value of D, because the generator is not perfect so that the discriminator can take strong decisions locally and obtain a high value of D even near non-missing modes.

[Figure 2 (diagram): the generation manifold with arrows "towards M1" and "towards M2" pointing to the major mode M1 and minor mode M2.]

Figure 2: Illustration of the missing modes problem.

In short, our regularized optimization targets for the generator and the encoder become:

\[ T_G = -\mathbb{E}_z[\log D(G(z))] + \mathbb{E}_{x \sim p_d}\left[\lambda_1\, d(x, G \circ E(x)) + \lambda_2 \log D(G \circ E(x))\right] \]
\[ T_E = \mathbb{E}_{x \sim p_d}\left[\lambda_1\, d(x, G \circ E(x)) + \lambda_2 \log D(G \circ E(x))\right] \]

As an example, consider the situation in Figure 2. For most z, the gradient of the generator \(\nabla_\theta \log D(G_\theta(z))\) pushes the generator towards the major mode M1. Only when G(z) is very close to the mode M2 can the generator get gradients to push itself towards the minor mode M2. However, it is possible that such z is of low or zero probability in the prior distribution p_0.
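Written as code, the two objectives above take a simple form. The following is a minimal PyTorch-style sketch, not the authors' implementation: the function name is ours, D is assumed to output probabilities in (0, 1), d is the pixel-wise L2 distance, and the objectives are written as losses to minimize under the reading that the generator wants both a small reconstruction distance and a high discriminator value on reconstructions G∘E(x). The default λ values are the ones used in the grid-search section below.

```python
import torch

def mode_regularized_losses(D, G, E, x, z, lam1=0.2, lam2=0.4):
    """Sketch of the regularized objectives T_G and T_E as minimized losses.

    D, G, E: discriminator, generator, encoder modules (our assumption);
    x: a batch of real samples; z: a batch of prior noise.
    """
    x_rec = G(E(x))                                    # G o E(x)
    recon = ((x - x_rec) ** 2).flatten(1).sum(1).mean()  # d(x, G o E(x))
    mode_reg = -torch.log(D(x_rec) + 1e-8).mean()      # pushes log D(G o E(x)) up
    gan = -torch.log(D(G(z)) + 1e-8).mean()            # pushes log D(G(z)) up

    loss_G = gan + lam1 * recon + lam2 * mode_reg      # T_G
    loss_E = lam1 * recon + lam2 * mode_reg            # T_E
    return loss_G, loss_E
```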
Given this observation, consider a regularized GAN model with the metric regularizer. Assume M0 is a minor mode of the data generating distribution. For x ∈ M0, we know that if G∘E is a good autoencoder, G(E(x)) will be located very close to mode M0. Since there are sufficient training examples of mode M0 in the training data, we add the mode regularizer E_{x∼p_d}[log D(G∘E(x))] to our optimization target for the generator, to encourage G(E(x)) to move towards a nearby mode of the data generating distribution. In this way, we can achieve fair probability mass distribution across different modes.

"}, {"section_index": "5", "section_name": "3.3 MANIFOLD-DIFFUSION TRAINING FOR REGULARIZED GANS", "section_text": "On some large scale datasets, CelebA for example, the regularizers we have discussed do improve the diversity of generated samples, but the quality of samples may not be as good without carefully tuning the hyperparameters. Here we propose a new algorithm for training metric-regularized GANs, which is very stable and much easier to tune for producing good samples.

The proposed algorithm divides the training procedure of GANs into two steps: a manifold step and a diffusion step. In the manifold step, we try to match the generation manifold and the real data manifold with the help of an encoder and the geometric metric loss. In the diffusion step, we try to distribute the probability mass on the generation manifold fairly according to the real data distribution.

An example of manifold-diffusion training of GAN (MDGAN for short) is as follows: we train a discriminator D1 which separates between the samples x and G∘E(x), for x from the data, and we optimize G with respect to the regularized GAN loss E[log D1(G∘E(x)) + λ d(x, G∘E(x))] in order to match the two manifolds. In the diffusion step we train a discriminator D2 between the distributions G(z) and G∘E(x), and we train G to maximize log D2(G(z)). Since these two distributions are now nearly on the same low-dimensional manifold, the discriminator D2 provides much smoother and more stable gradients. The detailed training procedure is given in Appendix A. See Figure 6 for the quality of generated samples.

"}, {"section_index": "6", "section_name": "3.4 EVALUATION METRICS FOR MODE MISSING", "section_text": "In order to estimate both the missing modes and the sample qualities in our experiments, we use several different metrics for different experiments instead of human annotators.

The inception score (Salimans et al., 2016) was considered as a good assessment for sample quality from a labelled dataset:

\[ \exp\left(\mathbb{E}_x\, \mathrm{KL}\!\left(p(y|x)\,\|\,p^*(y)\right)\right) \]

where x denotes one sample, p(y|x) is the softmax output of a trained classifier of the labels, and p*(y) is the overall label distribution of generated samples. The intuition behind this score is that a strong classifier usually has a high confidence for good samples. However, the inception score is sometimes not a good metric for our purpose. Assume a generative model that collapses to a very bad image. Although the model is very bad, it can have a perfect inception score, because p(y|x) can have a high entropy and p*(y) can have a low entropy. So instead, for labelled datasets, we propose another assessment for both visual quality and variety of samples, the MODE score:

\[ \exp\left(\mathbb{E}_x\, \mathrm{KL}\!\left(p(y|x)\,\|\,p(y)\right) - \mathrm{KL}\!\left(p^*(y)\,\|\,p(y)\right)\right) \]

where p(y) is the distribution of labels in the training data. According to our human evaluation experiences, the MODE score successfully measures two important aspects of generative models, i.e., variety and visual quality, in one metric.
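Both scores follow directly from the two formulas above. Here is a minimal sketch, with function names of our own choosing, that computes them from a classifier's softmax outputs on generated samples:

```python
import numpy as np

def inception_score(p_y_given_x):
    """exp(E_x KL(p(y|x) || p*(y))); p_y_given_x: array (N, C) of softmax outputs."""
    eps = 1e-12
    p = np.clip(p_y_given_x, eps, 1.0)
    p_star = p.mean(axis=0)                               # p*(y)
    kl = (p * (np.log(p) - np.log(p_star))).sum(axis=1).mean()
    return float(np.exp(kl))

def mode_score(p_y_given_x, p_y_train):
    """exp(E_x KL(p(y|x) || p(y)) - KL(p*(y) || p(y))); p_y_train: array (C,)
    label distribution of the training data."""
    eps = 1e-12
    p = np.clip(p_y_given_x, eps, 1.0)
    p_star = p.mean(axis=0)
    log_py = np.log(p_y_train + eps)
    kl_cond = (p * (np.log(p) - log_py)).sum(axis=1).mean()      # E_x KL(p(y|x) || p(y))
    kl_marg = (p_star * (np.log(p_star) - log_py)).sum()         # KL(p*(y) || p(y))
    return float(np.exp(kl_cond - kl_marg))
```

Unlike the inception score, the MODE score penalizes a mismatch between the marginal label distribution of the samples and that of the training data, which is exactly what a mode-collapsed model produces.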
However, in datasets without labels (LSUN) or where the labels are not sufficient to characterize every data mode (CelebA), the above metric does not work well. We instead train a third-party discriminator between the real data and the generated data from the model. It is similar to the GAN discriminator but is not used to train the generator. We can view the output of the discriminator as an estimator for the quantity (see Goodfellow et al. (2014) for proof):

\[ D^*(s) \approx \frac{p_g(s)}{p_g(s) + p_d(s)} \]

where p_g is the probability density of the generator and p_d is the density of the data distribution. To prevent D* from learning a perfect 0–1 separation of p_g and p_d, we inject zero-mean Gaussian noise into the inputs when training D*. After training, we test D* on the test set T of the real dataset. If for any test sample t ∈ T, the discrimination value D(t) is close to 1, we can conclude that the mode corresponding to t is missing. In this way, although we cannot measure exactly the number of modes that are missing, we have a good estimator of the total probability mass of all the missing modes.

"}, {"section_index": "8", "section_name": "4.1 MNIST", "section_text": "We perform two classes of experiments on MNIST. For the MNIST dataset, we can assume that the data generating distribution can be approximated with ten dominant modes, if we define the term "mode" here as a connected component of the data manifold.

"}, {"section_index": "9", "section_name": "4.1.1 GRID SEARCH FOR MNIST GAN MODELS", "section_text": "In order to systematically explore the effect of our proposed regularizers on GAN models in terms of improving stability and sample quality, we use a large-scale grid search of different GAN hyper-parameters on the MNIST dataset. The grid search is based on a pair of randomly selected loss weights: λ1 = 0.2 and λ2 = 0.4. We use the same hyper-parameter settings for both GAN and Regularized GAN, and list the search ranges in Table 1. Our grid search is similar to those proposed in Zhao et al. (2016). Please refer to it for detailed explanations regarding these hyper-parameters.

For evaluation, we first train a 4-layer CNN classifier on the MNIST digits, and then apply it to compute the MODE scores for the generated samples from all these models. The resulting distribution of MODE scores is shown in Figure 3. Clearly, our proposed regularizer significantly improves the MODE scores and thus demonstrates its benefits for stabilizing GANs and improving sample quality.

[Figure 3 (bar chart): percentage of models in each MODE-score bin (0–0.5, 0.5–1, 1–2, ..., 8–9) for GAN versus Regularized GAN; 69.97% of the GAN models fall in the lowest bin.]

Figure 3: The distributions of MODE scores for GAN and regularized GAN.

To illustrate the effect of regularizers with different coefficients, we randomly pick an architecture and train it with different λ1 = λ2. The results are shown in Figure 4.

[Figure 4 (image grids): MNIST samples for λ1 = λ2 ∈ {0.000, 0.0005, 0.0009, 0.002, 0.01}, plus the best samples for architectures 3-3-800-256-T-SGD-Adam-0.001 and 3-3-1600-512-T-Adam-SGD-0.001.]

Figure 4: (Left 1–5) Different hyperparameters for MNIST generation. The values of λ1 and λ2 in our Regularized GAN are listed below the corresponding samples. (Right 6–7) Best samples through grid search for GAN and Regularized GAN.
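For reference, the search space of Table 1 can be enumerated with a few lines of Python; this is our own illustration of the grid's size, not the authors' training harness:

```python
from itertools import product

# Search ranges from Table 1; each combination defines one architecture to train.
grid = {
    "nLayerG": [2, 3, 4], "nLayerD": [2, 3, 4],
    "sizeG": [400, 800, 1600, 3200], "sizeD": [256, 512, 1024],
    "dropoutD": [True, False],
    "optimG": ["SGD", "Adam"], "optimD": ["SGD", "Adam"],
    "lr": [1e-2, 1e-3, 1e-4],
}
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 3*3*4*3*2*2*2*3 = 2592 configurations per model family
```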
In order to quantitatively study the effect of our regularizers on the missing modes, we concatenate three MNIST digits to a number in [0, 999] in a single 64×64 image, and then train DCGAN as a baseline model on this 1000-modes dataset. The digits on the image are sampled with different probabilities, in order to test the model's capability to preserve small modes in generation. We again use a pre-trained classifier for MNIST instead of a human to evaluate the models.

Table 1: Grid Search for Hyperparameters

nLayerG   [2, 3, 4]
nLayerD   [2, 3, 4]
sizeG     [400, 800, 1600, 3200]
sizeD     [256, 512, 1024]
dropoutD  [True, False]
optimG    [SGD, Adam]
optimD    [SGD, Adam]
lr        [1e-2, 1e-3, 1e-4]

The performances on the compositional experiment are measured by two metrics. #Miss represents the classifier-reported number of missing modes, which is the size of the set of numbers that the model never generates. KL stands for the KL divergence between the classifier-reported distribution of generated numbers and the distribution of numbers in the training data (as for the Inception score). The results are shown in Table 2. With the help of our proposed regularizer, both the number of missing modes and the KL divergence drop dramatically among all the sets of the compositional MNIST dataset, which again proves the effectiveness of our regularizer for preventing the missing modes problem.

"}, {"section_index": "10", "section_name": "4.2.1 MISSING MODES ESTIMATION ON CELEBA", "section_text": "We also employ a third-party discriminator trained with injected noise as a metric for missing mode estimation. To implement this, we add noise in the input layer of the discriminator network. For each GAN model to be estimated, we independently train this noisy discriminator, as a mode estimator, with the same architecture and hyper-parameters on the generated data and the training data. We then apply the mode estimator to the test data. The images which have high mode estimator outputs can be viewed as lying on the missing modes.

The comparison result is shown in Table 3. Both our proposed Regularized-GAN and MDGAN outperform baseline DCGAN models on all settings. Especially, MDGAN surpasses the other models, showing its superiority at mode preserving. We also find that, although sharing the same architecture, the DCGAN with 200-dimensional noise performs much worse than that with 100-dimensional noise as input. On the contrary, our regularized GAN performs more consistently.

To get a better understanding of the models' performance, we want to figure out when and where these models miss the modes. Visualizing the test images associated with missed modes is instructive. In Figure 5, the left three images are missed by all models. It is rare to see in the training data the cap in the second image and the type of background in the third, which thus can be viewed as small modes under this situation. These three images should be considered as the hardest test data for GAN to learn.
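The mode-estimation procedure just described reduces to a few lines once the noisy discriminator is trained. A minimal sketch, with a hypothetical `estimator` callable and a threshold of our own choosing (the paper reports raw counts, not a specific cutoff):

```python
def count_missing_mode_images(estimator, test_images, threshold=0.95):
    """Count real test images that lie on modes the generator misses.

    `estimator` is the third-party discriminator described above, trained on
    generated vs. training data with zero-mean Gaussian input noise (std sigma
    in Tables 3-4) as a regularizer; here it scores clean real test images.
    Images it still scores as confidently real are taken to be on missing modes.
    """
    scores = estimator(test_images)          # per-image discrimination values
    return int((scores > threshold).sum())
```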
Table 2: Results for Compositional MNIST with 1000 modes. The proposed regularization (Reg-DCGAN) allows to substantially reduce the number of missed modes as well as the KL divergence that measures the plausibility of the generated samples (like in the Inception score).

              Set 1          Set 2          Set 3          Set 4
            #Miss    KL    #Miss    KL    #Miss    KL    #Miss    KL
DCGAN       204.7  77.9    204.3  60.2    103.4  75.9     89.3  77.8
Reg-DCGAN    32.1  62.3     71.5  58.9     42.7  68.4     31.6  67.8

To test the effectiveness of our proposal on harder problems, we implement an encoder for the DCGAN algorithm and train our model with different hyper-parameters together with the DCGAN baseline on the CelebA dataset. We provide the detailed architecture of our regularized DCGAN in Appendix B.

Table 3: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of the prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to other methods.

σ     DCGAN (100)  DCGAN (200)  Reg-GAN (100)  Reg-GAN (200)  MDGAN (200)
3.5   5463         17089        754            3644           74
4.0   590          15832        42             391            13

Figure 5: Test set images that are on missing modes. Left: both MDGAN and DCGAN missing. Right: only DCGAN missing.

Nonetheless, our best model, MDGAN, still captures certain small modes. The seven images on the right in Figure 5 are only missed by DCGAN. The sideface, paleface, black and the berets are special attributes among these images, but our proposed MDGAN performs well on all of them.

After quantitative evaluation, we manually examine the generated samples from our regularized GAN to see whether the proposed regularizer has side-effects on sample quality. We compare our model with ALI (Dumoulin et al., 2016), VAEGAN (Larsen et al., 2015), and DCGAN (Radford et al., 2015) in terms of sample visual quality and mode diversity. Samples generated from these models are shown in Figure 6.

[Figure 6 rows, top to bottom: MDGAN, Regularized-GAN, ALI, VAEGAN, DCGAN.]

Figure 6: Samples generated from different generative models. For each compared model, we directly take ten decent samples reported in their corresponding papers and code repositories. Note how MDGAN samples are both globally more coherent and locally have sharp textures.

Both MDGAN and Regularized-GAN generate clear and natural-looking face images. Although ALI's samples are plausible, they are slightly deformed in comparison with those from MDGAN. The samples from VAEGAN and DCGAN seem globally less coherent and locally less sharp.

As to sample quality, it is worth noting that the samples from MDGAN enjoy fewer distortions. With all four other models, the majority of generated samples suffer from some sort of distortion. However, for the samples generated by MDGAN, the level of distortion is lower compared with the other four compared models. We attribute it to the help of the autoencoder as the regularizer to alter the generation manifolds. In this way, the generator is able to learn fine-grained details such as face edges. As a result, MDGAN is able to reduce distortions.

Figure 7: Sideface samples generated by Regularized-GAN and MDGAN.

In terms of the missing modes problem, we instructed five individuals to conduct human evaluation on the generated samples. They achieve consensus that MDGAN wins in terms of mode diversity. Two people pointed out that MDGAN generates a larger amount of samples with side faces than other models. We select several of these side face samples in Figure 7. Clearly, our samples maintain
acceptable visual fidelity meanwhile share diverse modes. Combined with the above quantitative results, it is convincing that our regularizers bring benefits for both training stability and mode. variety without the loss of sample quality."}, {"section_index": "11", "section_name": "5 CONCLUSIONS", "section_text": "Although GANs achieve state-of-the-art results on a large variety of unsupervised learning tasks training them is considered highly unstable, very difficult and sensitive to hyper-parameters, all the while, missing modes from the data distribution or even collapsing large amounts of probability mass on some modes. Successful GAN training usually requires large amounts of human and com puting efforts to fine tune the hyper-parameters, in order to stabilize training and avoid collapsing Researchers usually rely on their own experience and published tricks and hyper-parameters instead of systematic methods for training GANs.\nWe provide systematic ways to measure and avoid the missing modes problem and stabilize training. with the proposed autoencoder-based regularizers. The key idea is that some geometric metrics car provide more stable gradients than trained discriminators, and when combined with the encoder they can be used as regularizers for training. These regularizers can also penalize missing mode. and encourage a fair distribution of probability mass on the generation manifold.."}, {"section_index": "12", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank Naiyan Wang, Jianbo Ye, Yuchen Ding, Saboya Yang for their GPU support. We also want to thank Huiling Zhen for helpful discussions, Junbo Zhao for providing the details of grid search experiments on the EBGAN model, as well as Anders Boesen Lindbo Larsen for kindly helping us on running VAEGAN experiments. We appreciate for the valuable suggestions and comments from the anonymous reviewers. The work described in this paper was partially supported by NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs, CIFAR, National Natural Science Foundation of China (61672445 and 61272291), Research Grants Council of Hong Kong (PolyU 152094/14E), and The Hong Kong Polytechnic University (G-YBP6).\nMDGAN Regularized -GAN\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor mation Processing Systems, pp. 2672-2680, 2014.\nPhillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arxiv. 2016\nAlec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with dee convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.\nScott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016\nMasaki Saito and Eiichi Matsumoto. Temporal generative adversarial nets. arXiv preprini arXiv:1611.06624, 2016\nTim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Cher Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.\nCasper Kaae Sonderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszar. Amortise map inference for image super-resolution. arXiv preprint arXiv:1610.04490. 2016\nXiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversar ial networks. 
In ECCV, 2016.

Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, and Joshua B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Neural Information Processing Systems (NIPS), 2016.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.

Yipin Zhou and Tamara L. Berg. Learning temporal transformations from time-lapse videos. In European Conference on Computer Vision, pp. 262–277. Springer, 2016.

Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016.

Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.

"}, {"section_index": "13", "section_name": "APPENDIX: PSEUDO CODE FOR MDGAN", "section_text": "In this Appendix, we give the detailed training procedure of the MDGAN example we discuss in Section 3.3.

Figure 8: The detailed training procedure of an MDGAN example. The gradient updates in the algorithm box, per mini-batch of m samples, are:

Manifold step — update discriminator D1 using SGD with gradient ascent:
\[ \nabla \frac{1}{m}\sum_{i=1}^{m}\left[\log D_1(x_i) + \log\!\left(1 - D_1(G(E(x_i)))\right)\right] \]
Manifold step — update G and E on the regularized loss of Section 3.3:
\[ \nabla \frac{1}{m}\sum_{i=1}^{m}\left[\log D_1(G(E(x_i))) + \lambda\, d\!\left(x_i, G(E(x_i))\right)\right] \]
Diffusion step — update discriminator D2 using SGD with gradient ascent:
\[ \nabla \frac{1}{m}\sum_{i=1}^{m}\left[\log D_2(G(E(x_i))) + \log\!\left(1 - D_2(G(z_i))\right)\right] \]
7. Update generator G using SGD with gradient ascent:
\[ \nabla \frac{1}{m}\sum_{i=1}^{m}\left[\log D_2(G(z_i))\right] \]

One has to pay particular attention to batch normalization layers. In DCGAN, there are batch normalization layers both in the generator and the discriminator. However, two classes of data go through the batch normalization layers in the generator. One comes from the sampled noise z, the other comes from the encoder. In our implementation, we separate the batch statistics for these two classes of data in the generator, while keeping the parameters of the BN layers shared. In this way, the batch statistics of these two kinds of batches cannot interfere with each other.

We use similar architectures for the Compositional MNIST and CelebA experiments. The architecture is based on that found in DCGAN Radford et al. (2015). Apart from the discriminator and generator, which are the same as DCGAN, we add an encoder which is the "inverse" of the generator, by reversing the order of layers and replacing the de-convolutional layers with convolutional layers.

The data is sampled from a mixture of 6 Gaussians, with standard deviation of 0.1. The means of the Gaussians are placed around a circle with radius 5. The generator network has two ReLU hidden layers with 128 neurons. It generates 2D output samples from 3D uniform noise from [0, 1]. The discriminator consists of only one fully connected layer of ReLU neurons, mapping the 2D input to a real 1D number. Both networks are optimized with the Adam optimizer with a learning rate of 1e-4. In the regularized version, we choose λ1 = λ2 = 0.005. The comparison between the generator distribution from standard GAN and our proposed regularized GAN is shown in Figure 9.

[Figure 9 (heatmap grid): rows GAN and Reg-GAN; columns Epoch 1, 200, 400, 600, 800, 1000, and Target.]

Figure 9: Comparison results on a toy 2D mixture of Gaussians dataset. The columns on the left show heatmaps of the generator distributions as the number of training epochs increases, whereas the rightmost column presents the target, the original data distribution. The top row shows the standard GAN result. The generator has a hard time oscillating among the modes of the data distribution, and is only able to "recover" a single data mode at once. In contrast, the bottom row shows results of our regularized GAN. Its generator quickly captures the underlying multiple modes and fits the target distribution.
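The toy target distribution described above is easy to reproduce; the following is a minimal sketch (the function name and seeding are our own):

```python
import numpy as np

def sample_ring_of_gaussians(n, n_modes=6, radius=5.0, std=0.1, seed=0):
    """Target distribution of the toy experiment: a mixture of 6 Gaussians
    (std 0.1) whose means sit evenly spaced on a circle of radius 5."""
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * np.arange(n_modes) / n_modes
    means = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    ks = rng.integers(0, n_modes, size=n)       # pick a mode uniformly
    return means[ks] + std * rng.standard_normal((n, 2))
```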
"}, {"section_index": "14", "section_name": "D APPENDIX: COMPARISON WITH VAEGAN", "section_text": "In this appendix section, we demonstrate the effectiveness and uniqueness of the mode-regularized GANs proposed in this paper as compared to Larsen et al. (2015) in terms of theoretical difference, sample quality, and number of missing modes.

With regard to the theoretical difference, the optimization of VAEGAN relies on the probabilistic variational bound, namely

\[ \log p(x) \geq \mathbb{E}_{q(z|x)}\left[\log p(x|z)\right] - \mathrm{KL}\!\left(q(z|x)\,\|\,p(z)\right). \]

This variational bound together with a GAN loss is optimized with several assumptions imposed in VAEGAN:

1. In general, VAE is based on the assumption that the true posterior p(z|x) can be well approximated by a factorized Gaussian distribution q.
2. As to VAEGAN, it is also assumed that the maximum likelihood objective does not conflict with the GAN objective in terms of the probabilistic framework.

The first assumption does not necessarily hold for GANs. We have found that in some trained models of DCGANs, the real posterior p(z|x) is not even guaranteed to have only one mode, not to mention being anything close to a factorized Gaussian. We believe that this difference in probabilistic framework is an essential obstacle when one tries to use the objective of VAEGAN as a regularizer. In our algorithm, however, we use a plain auto-encoder instead of the VAE objective. Plain auto-encoders work better than VAE for our purposes because, as long as the model G(z) is able to generate the training samples, there always exists a function E*(x) such that G(E*(x)) = x. Our encoder can therefore be viewed as being trained to approximate this real encoder E*. There are no conflicts between a good GAN generator and our regularization objective. Hence, our objectives can be used as regularizers for encoding the prior knowledge that good models should be able to generate the training samples. This is why our work is essentially different from VAEGAN. In our experiments, we also believe that this is the reason why VAEGAN generates worse samples than a carefully tuned regularized GAN.

In terms of sample quality and missing modes, we run the official code of VAEGAN with their default setting. We train VAEGAN for 30 epochs and our models for only 20 epochs, for fairness.

The generated samples are shown in Figure 10. The most obvious difference between our samples and VAEGAN's samples is the face distortion, which is consistent with our experimental results in Section 4.2.2. We conjecture that the distortions of VAEGAN's samples are due to the conflicts between the two objectives, as we presented above. In other words, the way we introduce auto-encoders as regularizers for GAN models is different from VAEGAN's. The difference is that the second assumption mentioned above is not required in our approaches. In our framework, the auto-encoder helps alter the generation manifolds, leading to fewer distortions in fine-grained details in our generated samples.
In terms of the missing modes problem, we use the same method described in Section 4.2.1 for computing the number of images with missing modes. The results are shown below.

In conclusion, we demonstrate that our proposed mode-regularized GANs, i.e., Reg-GAN and MDGAN, are different from VAEGAN theoretically as discussed above. Such differences empirically result in better sample quality and mode-preserving ability, which are our main contributions.

[Figure 10 rows, top to bottom: MDGAN, Regularized-GAN, VAEGAN (trained by us), VAEGAN (reported).]

Figure 10: Samples generated by our models and VAEGAN. The third row shows samples generated by our self-trained VAEGAN model, with default settings. The last row shows generated samples reported in the original VAEGAN paper. We depict both of them here for a fair comparison.

Table 4: Number of images on the missing modes on CelebA estimated by a third-party discriminator. The numbers in the brackets indicate the dimension of the prior z. σ denotes the standard deviation of the added Gaussian noise applied at the input of the discriminator to regularize it. MDGAN achieves a very high reduction in the number of missing modes, in comparison to VAEGAN.

σ     VAEGAN (100)  Reg-GAN (100)  Reg-GAN (200)  MDGAN (200)
3.5   9720          754            3644           74
4.0   5862          42             391            13

We see that using our proposed regularizers results in a huge drop in the number of missing modes. We conjecture that the reason why VAEGAN performs very badly on our metric for missing modes is that the samples it generates are of low quality, so the discriminator classifies the samples as "not on mode". Namely, the generated data is too far away from many real data modes. Essentially, if a model generates very bad samples, we can say that the model misses all or most modes.

To conduct a fairer evaluation between VAEGAN and our methods, we also performed a blind human evaluation. Again we instructed five individuals to conduct this evaluation of sample variability. Without telling them which samples were generated by VAEGAN and which by our methods, four people agreed that our method wins in terms of sample diversity. One person thinks the samples are equally diverse."}]
Hy-lMNqex | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "It is only recently that commodity computing hardware in the form of graphics processors delivered. the performance necessary for practical, large scale Deep Neural Network applications Krizhevsky et al.(2012). At the same time, the end of Dennard Scaling in semiconductor technology Es-. maeilzadeh et al.(2011) makes it difficult to deliver further advances in hardware performance using existing general purpose designs. It seems that further advances in DNN sophistication would have to rely mostly on algorithmic and in general innovations at the software level which can be. helped by innovations in hardware design. Accordingly, hardware DNN accelerators have emerged.. The DianNao accelerator family was the first to use a wide single-instruction single-data (SIsD) architecture to process up to 4K operations in parallel on a single chip Chen et al.[(2014a b) out-. performing graphics processors by two orders of magnitude. Development in hardware accelerators has since proceeded in two directions: either toward more general purpose accelerators that can. support more machine learning algorithms while keeping performance mostly on par with DaDian-. Nao (DaDN) Chen et al.(2014b), or toward further specialization of specific layers or classes of. DNNs with the goal of outperforming DaDN in execution time and/or energy efficiency, e.g., Han. et al.(2016); |Albericio et al.(2016a); Judd et al.(2016a); Chen, Yu-Hsin and Krishna, Tushar and Emer, Joel and Sze, Vivienne[(2016); Reagen et al.(2016). This work is along the second direction. Section5[reviews several other accelerator designs.\nWhile DaDN's functional units process 16-bit fixed-point values, DNNs exhibit varying precision requirements across and within layers, e.g.,Judd et al.(2015). Accordingly, it is possible to use"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Tartan TRT a hardware accelerator for inference with Deep Neural Networks (DNNs) is presented and evaluated on Convolutional Neural Networks. TRT ex- ploits the variable per layer precision requirements of DNNs to deliver execution time that is proportional to the precision p in bits used per layer for convolutional and fully-connected layers. Prior art has demonstrated an accelerator with the same execution performance only for convolutional layersJudd et al.[(2016a c) Experiments on image classification CNNs show that on average across all net- works studied, TRT outperforms a state-of-the-art bit-parallel accelerator |Chen et al.[(2014b) by 1.90 without any loss in accuracy while it is 1.17 more en- ergy efficient. TRT requires no network retraining while it enables trading off accuracy for additional improvements in execution performance and energy effi- ciency. For example, if a 1% relative loss in accuracy is acceptable, TRT is on average 2.04 faster and 1.25 more energy efficient than a conventional bit- parallel accelerator. A Tartan configuration that processes 2-bits at time, requires less area than the 1-bit configuration, improves efficiency to 1.24 over the bit- parallel baseline while being 73% faster for convolutional layers and 60% faster for fully-connected layers is also presented.\nshorter, per layer representations for activations and/or weights. 
However, with existing bit-parallel functional units doing so does not translate into a performance nor an energy advantage as the values are expanded into the native hardware precision inside the unit.

This work presents Tartan (TRT), a massively parallel hardware accelerator whose execution time for fully-connected and convolutional layers scales with the precision p used to represent the input values. TRT uses hybrid bit-serial/bit-parallel functional units and exploits the abundant parallelism of typical DNN layers with the goal of exceeding DaDN's execution time performance and energy efficiency. Ideally Tartan can improve execution time by 16/p, where p is the precision used for the activations in convolutional layers, and for the activations and weights in fully-connected layers. Every bit of precision that can be eliminated ideally reduces execution time and increases energy efficiency. TRT builds upon the Stripes (STR) accelerator Judd et al. (2016c;a) which improves execution time and energy efficiency only for convolutional layers.

This work evaluates TRT on a set of convolutional neural networks (CNNs) for image classification. On average TRT reduces inference time by 1.61x, 1.91x and 1.90x over DaDN for the fully-connected, the convolutional, and all layers respectively. Energy efficiency compared to DaDN with TRT is 0.92x, 1.18x and 1.17x respectively. TRT enables trading off accuracy for improving execution time and energy efficiency. For example, on average for the fully-connected layers, accepting a 1% loss in accuracy improves performance to 1.73x and energy efficiency to 1.00x compared to DaDN.

The rest of this document is organized as follows: Section 2 illustrates the key concepts behind TRT via an example. Section 3 reviews the DaDN architecture and presents an equivalent Tartan configuration. Section 4 presents the experimental results. Section 5 reviews related work and discusses the limitations of this study and the potential challenges with TRT. Section 6 concludes.

This section illustrates at a high level the TRT design by showing how it would process two purposely trivial cases: 1) a fully-connected layer (FCL) with a single input activation producing two output activations, and 2) a convolutional layer (CVL) with two input activations and one single weight filter producing two output activations. The per layer calculations are:

\[ \text{Fully-Connected:}\quad f_1 = w_1 \times a,\quad f_2 = w_2 \times a \qquad\qquad \text{Convolutional:}\quad c_1 = w \times a_1,\quad c_2 = w \times a_2 \]

where f1, f2, c1 and c2 are output activations, w1, w2, and w are weights, and a1, a2 and a are input activations. For clarity all values are assumed to be represented in 2 bits of precision.

"}, {"section_index": "2", "section_name": "2.1 CONVENTIONAL BIT-PARALLEL PROCESSING", "section_text": "Figure 1a shows a bit-parallel processing engine representative of DaDN. Every cycle, the engine can calculate the product of two 2-bit inputs, i (weight) and v (activation) and accumulate or store it into the output register OR. Parts (b) and (c) of the figure show how this unit can calculate the example CVL over two cycles. In part (b) and during cycle 0, the unit accepts along the v input bits 0 and 1 of a1 (noted as a1/0 and a1/1 respectively on the figure), and along i bits 0 and 1 of w and produces both bits of output c1. Similarly, during cycle 1 (part (c)), the unit processes a2 and w to produce c2. In total, over two cycles, the engine produced two 2b x 2b products. Processing the example FCL also takes two cycles: in the first cycle w1 and a produce f1, and in the second cycle w2 and a produce f2. This process is not shown in the interest of space.
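The baseline behavior is trivial but worth pinning down, since the bit-serial engine below must match it. A one-line Python model of the bit-parallel CVL (our own illustration, with arbitrary 2-bit example values):

```python
# 2-bit example values for the CVL of Section 2: c1 = w*a1, c2 = w*a2.
w, a1, a2 = 0b10, 0b01, 0b11

def bit_parallel_cvl(w, activations):
    """Baseline engine of Figure 1: one full 2b x 2b product per cycle,
    so the example layer takes one cycle per output activation."""
    return [w * a for a in activations]   # cycle 0 -> c1, cycle 1 -> c2

assert bit_parallel_cvl(w, [a1, a2]) == [2, 6]
```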
"}, {"section_index": "3", "section_name": "2.2 TARTAN'S APPROACH", "section_text": "Figure 2 shows how a TRT-like engine would process the example CVL. Figure 2a shows the engine's structure, which comprises two subunits. The two subunits each accept one bit of an activation per cycle through inputs v0 and v1 respectively and, as before, there is a common 2-bit weight input (i1, i0). In total, the number of input bits is 4, identical to the bit-parallel engine.

[Figure 1 (diagram): a 2b x 2b multiplier with weight input (i1, i0) and activation input v feeding output register OR; panels (a)–(c).]

Figure 1: Bit-Parallel Engine processing the convolutional layer over two cycles: a) Structure, b) Cycle 0, and c) Cycle 1.

[Figure 2 (diagram): two subunits, each with registers AR, BR, OR and a shift-and-add path; panels (a) Engine Structure, (b) Cycle 1: Parallel Load w on BRs, (c) Cycle 2: Multiply w with bits 0 of the activations, (d) Cycle 3: Multiply w with bits 1 of the activations.]

Figure 2: Processing the example Convolutional Layer Using TRT's Approach.

[Figure 3 (diagram): panels (a) Cycle 1: Shift in bits 1 of weights into the ARs, (b) Cycle 2: Shift in bits 0 of weights into the ARs, (c) Cycle 3: Copy AR into BR, (d) Cycle 4: Multiply weights with first bit of a, (e) Cycle 5: Multiply weights with second bit of a.]

Figure 3: Processing the example Fully-Connected Layer using TRT's Approach.

Each subunit contains three 2-bit registers: a shift-register AR, a parallel load register BR, and a parallel load output register OR. Each cycle each subunit can calculate the product of its single-bit v input with BR, which it can write or accumulate into its OR. There is no bit-parallel multiplier since the subunits process a single activation bit per cycle. Instead, two AND gates, a shift-and-add functional unit, and OR form a shift-and-add multiplier/accumulator. Each AR can load a single bit per cycle from one of the i wires, and BR can be parallel loaded from AR or from the i wires.

Convolutional Layer: Figure 2b through Figure 2d show how the CVL is processed. The figures abstract away the unit details showing only the register contents. As Figure 2b shows, during cycle 1, the w synapse is loaded in parallel to the BRs of both subunits via the i1 and i0 inputs. During cycle 2, bits 0 of a1 and of a2 are sent via the v0 and v1 inputs respectively to the first and second subunit. The subunits calculate concurrently a1/0 x w and a2/0 x w and accumulate these results into their ORs. Finally, in cycle 3, bit 1 of a1 and a2 appear respectively on v0 and v1. The subunits calculate respectively a1/1 x w and a2/1 x w, accumulating the final output activations c1 and c2 into their ORs.

In total it took 3 cycles to process the layer. However, at the end of the third cycle, another w could have been loaded into the BRs (the i inputs are idle), allowing a new set of outputs to commence computation during cycle 4.
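The shift-and-add accumulation just walked through can be modeled behaviorally in a few lines. This is our own Python model of the datapath, not the authors' RTL; it shows why each subunit needs only an AND gate and a shifter rather than a multiplier:

```python
def bit_serial_cvl(w, activations, p):
    """TRT-style processing of Figure 2: w sits in every subunit's BR, and each
    subunit consumes one activation bit per cycle, accumulating
    (bit << cycle) * w into its OR over p cycles (p = activation precision)."""
    outputs = [0] * len(activations)          # one OR per subunit/window
    for cycle in range(p):
        for i, a in enumerate(activations):
            bit = (a >> cycle) & 1            # serial activation bit, LSB first
            outputs[i] += (bit << cycle) * w  # shift-and-add multiply-accumulate
    return outputs

# Matches the bit-parallel result, but finishes in p cycles for all windows at once.
assert bit_serial_cvl(0b10, [0b01, 0b11], p=2) == [0b10 * 0b01, 0b10 * 0b11]
```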
That is, loading a new weight can be hidden during the processing of the current output activation for all but the first time. In the steady state, when the input activations are represented in two bits, this engine will be producing two 2b x 2b terms every two cycles, thus matching the bandwidth of the bit-parallel engine.

If the activations a1 and a2 could be represented in just one bit, then this engine would be producing two output activations per cycle, twice the bandwidth of the bit-parallel engine. The latter is incapable of exploiting the reduced precision. In general, if the bit-parallel hardware was using P_base bits to represent the activations while only P_a bits were enough, TRT would outperform the bit-parallel engine by P_base/P_a.

Fully-Connected Layer: Figure 3 shows how a TRT-like unit would process the example FCL. As Figure 3a shows, in cycle 1, bit 1 of w1 and of w2 appear respectively on lines i1 and i0. The left subunit's AR is connected to i1 while the right subunit's AR is connected to i0. The ARs shift in the corresponding bits into their least significant bit, sign-extending to the vacant position (shown as a 0 bit on the example). During cycle 2, as Figure 3b shows, bits 0 of w1 and of w2 appear on the respective i lines and the respective ARs shift them in. At the end of the cycle, the left subunit's AR contains the full 2-bit w1 and the right subunit's AR the full 2-bit w2. In cycle 3, Figure 3c shows that the contents of AR are copied to BR in each subunit. From the next cycle, calculating the products can now proceed similarly to what was done for the CVL. In this case, however, each BR contains a different weight, whereas in the CVL all BRs held the same w value. The shift capability of the ARs coupled with the different i wire per subunit connection allowed us to load a different weight bit-serially over two cycles. Figure 3d and Figure 3e show cycles 4 and 5 respectively. During cycle 4, bit 0 of a appears on both v inputs and is multiplied with the BR in each subunit. In cycle 5, bit 1 of a appears on both v inputs and the subunits complete the calculation of f1 and f2. It takes two cycles to produce the two 2b x 2b products once the correct inputs appear into the BRs.

While in our example no additional inputs nor outputs are shown, it would have been possible to overlap the loading of a new set of w inputs into the ARs while processing the current weights stored into the BRs. That is, the loading into ARs, the copying into BRs, and the bit-serial multiplication of the BRs with the activations form a 3-stage pipeline where each stage can take multiple cycles. In general, assuming that both activations and weights are represented using 2 bits, this engine would match the performance of the bit-parallel engine in the steady state. When both sets of inputs i and v can be represented with fewer bits, 1 in this case, the engine would produce two terms per cycle, twice the bandwidth of the bit-parallel engine of the previous section.

Summary: In general, if P_base is the precision of the bit-parallel engine, and P_a^L and P_w^L the precisions that can be used respectively for activations and weights for layer L, a TRT engine can ideally outperform an equivalent bit-parallel engine by P_base/P_a^L for CVLs, and by P_base/max(P_a^L, P_w^L) for FCLs. This example used the simplest TRT engine configuration. Since typical layers exhibit massive parallelism, TRT can be configured with many more subunits while exploiting weight reuse for CVLs and activation reuse for FCLs.
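As a worked instance of the summary formulas, take the baseline's P_base = 16 bits and a hypothetical layer that needs only 8-bit activations and 10-bit weights (illustrative numbers of our own, not measured results):

\[ \text{CVL: } \frac{P_{base}}{P_a^L} = \frac{16}{8} = 2\times, \qquad \text{FCL: } \frac{P_{base}}{\max(P_a^L, P_w^L)} = \frac{16}{\max(8, 10)} = 1.6\times. \]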
The next section describes the baseline state-of-the-art DNNs accel erator and presents an equivalent TRT configuration..\nFigure 5: Overview of the system components and their communication. a) DaDN. b) Tartan\nThis work presents TRT as a modification of the state-of-the-art DaDianNao accelerator. Accord ingly, Section|3.1|reviews DaDN's design and how it can process FCLs and CVLs. For clarity, in what follows the term brick refers to a set of 16 elements of a 3D activation or weight array'|input which are contiguous along the i dimension, e.g., a(x, y, i)...a(x, y, i + 15). Bricks will be denoted by their origin element with a B subscript, e.g., ab(x, y, i). The size of a brick is a design parameter."}, {"section_index": "4", "section_name": "3.1 BASELINE SYSTEM: DADIANNAC", "section_text": "TRT is demonstrated as a modification of the DaDianNao accelerator (DaDN) proposed byCher et al.(2014b). Figure4a shows a DaDN tile which processes 16 filters concurrently calculating 16 activation and weight products per filter for a total of 256 products per cycle. Each cycle the tile accepts 16 weights per filter for total of 256 synapses and 16 input activations. The tile multiplie each weight with only one activation whereas each activation is multiplied with 16 weights, one pe filter. The tile reduces the 16 products into a single partial output activation per filter, for a total o 16 partial output activations for the tile. Each DaDN chip comprises 16 such tiles, each processing a different set of 16 filters per cycle. Accordingly, each cycle, the whole chip processes 16 activations and 256 16 = 4K weights producing 16 16 = 256 partial output activations, 16 per tile.\nInternally, each tile has: 1) a synapse buffer (SB) that provides 256 weights per cycle one per weight lane, 2) an input neuron buffer (NBin) which provides 16 activations per cycle through 16 neuron lanes, and 3) a neuron output buffer (NBout) which accepts 16 partia1 output activations per cycle In the tile's datapath each activation lane is paired with 16 weight lanes one from each filter. Each synapse and neuron lane pair feeds a multiplier, and an adder tree per filter lane reduces the 16 per filter products into a partial sum. In all, the filter lanes produce each a partial sum per cycle, for a\n1 An FCL can be thought of as a CVL where the input activation array has unit x and y dimensions, and there. are as many filters as output activations, and where the filter dimenions are identical to the input activation array\nNBin WindowActivation NBin Lane 0 Bit Lane 0 Activation 16 from central Activation Lane 0 from central Bit Lane 15 eDRAM eDRAM Activation Activation Bit Lane 240 Lane 15 Window Activation Lane 15Bit Lane 25 Weight 16 SIP(0,0) SIP(15,0) Lane 0 IPO Weight Filter Lane 0 Filter Lane 0 SWR SW Weight Lane 0 NBout Weight to central Lane 15 Lane 15 b16 WR WR eDRAM 16 .. : to central ... eDRAM 16 Weight Weight NBout IP15 Lane 0 Lane 0 Filter Filter SWR SWR Lane 15 Lane 15 Weight Weight Lane 15 16 Lane 15 WR 16 SIP(0,15) SIP(15,15) SB (eDRAM) SB (eDRAM) (a) DaDianNao (b) Tartan Figure 4: Processing Titles Tile 0 Tile 15 Tile 0 Tile 15 (Reducer) (Reducer 256 bits 256 bits 4 Dispatcher NM NM (a) (b)\nTile 0 Tile 15 Tile 0 Tile 15 Reducer Reducer 256 bits 256 bits Dispatcher NM NM (a) (b)\nFigure 5a shows an overview of the DaDN chip. There are 16 processing tiles connected via ar. interconnect to a shared central eDRAM Neuron Memory (NM). DaDN's main goal was minimizing. 
off-chip bandwidth while maximizing on-chip compute utilization. To avoid fetching weights from off-chip, DaDN uses a 2MB eDRAM Synapse Buffer (SB) for weights per tile for a total of 32MB eDRAM. All inter-layer activation outputs except for the initial input and the final output are storec in NM which is connected via a broadcast interconnect to the 16 Input Neuron Buffers (NBin buffers. All values are 16-bit fixed-point, hence a 256-bit wide interconnect can broadcast a ful. activation brick in one step. Off-chip accesses are needed only for reading: 1) the input image. 2) the weight once per layer, and 3) for writing the final output..\nProcessing starts by reading from external memory the first layer's filter weights, and the input image. The weights are distributed over the SBs and the input is stored into NM. Each cycle an input activation brick is broadcast to all units. Each units reads 16 weight bricks from its SB and produces a partial output activation brick which it stores in its NBout. Once computed, the output activations are stored through NBout to NM and then fed back through the NBins when processing the next layer. Loading the next set of weights from external memory can be overlapped with the processing of the current layer as necessary.\nAs Section|2lexplained, TRT processes activations bit-serially multiplying a single activation bit with a full weight per cycle. Each DaDN tile multiplies 16 16-bit activations with 256 weights each cycle. To match DaDN's computation bandwidth, TRT needs to multiply 256 1-bit activations with 256 weights per cycle. Figure|4b|shows the TRT tile. It comprises 256 Serial Inner-Product Units (SIPs) organized in a 16 16 grid. Similar to DaDN each SIP multiplies 16 weights with 16 activations and reduces these products into a partial output activation. Unlike DaDN, each SIP accepts 16 single-bit activation inputs. Each SIP has two registers, each a vector of 16 16-bit subregisters: 1) the Serial Weight Register (SWR), and 2) the Weight Register (WR). These correspond to AR and BR of the example of Section [2] NBout remains as in DaDN, however, it is distributed along the SIPs as shown.\nConvolutional Layers: Processing starts by reading in parallel 256 weights from the SB as in. DaDN, and loading the 16 per SIP row weights in parallel to all SWRs in the row. Over the next. PI cycles, the weights are multiplied by the bits of an input activation brick per column. TRT. exploits weight reuse across 16 windows sending a different input activation brick to each column.. For example, for a CVL with a stride of 4 a TRT tile will processes 16 activation bricks aB(x, y, i),. ab(x + 4, y, i) through a(x + 63, y, i) in parallel a bit per cycle. Assuming that the tile processes. filters fi though fi+15, after P cycles it would produce the following partial output activations:. Ob(x/4, y/4, fi), through oB(x/4 + 15, y/4, fi), that is 16 contiguous on the x dimension output. activation bricks. Whereas DaDN would process 16 activations bricks over 16 cycles, TRT processes them concurrently but bit-serially over PI cycles. If PL is less than 16, TRT will outperform DaDN. by 16/PL, and when PL is 16, TRT will match DaDN's performance.\nFully-Connected Layers: Processing starts by loading bit-serially and in parallel over Ph, cycles, 4K weights into the SWRs. Each SWR per row gets a different set of 16 weights as each subregister is connected to one out of the 256 wires of the SB output bus for the SIP row. 
Once the weights have been loaded, the SwRs are copied to the SWs and multiplication with the input activations can then proceed bit-serially over PL cycles. Assuming that there are enough output activations so that a different output activation can be assigned to each SIP, the same input activation brick can be broadcast to all SIP columns. For example, for an FCL a TRT tile will process one activation brick aB(i) bit-serially to produce 16 output activation bricks ob(i) through ob(i 16) one per SIP column. Loading the next set of weights can be done in parallel with processing the current set, thus -max(PL, Ph). Thus, a TRT tile produces 256 partial execution time is constrained by PL.\ntotal of 16 partial output activations per Once a full window is processed, the 16 resulting sums are fed through a non-linear activation function, f, to produce the 16 final output activations. The multiplications and reductions needed per cycle are implemented via 256 multipliers one per weight lane and sixteen 17-input (16 products plus the partial sum from NBout) adder trees one per filter lane.\nSWR CONV 1(a0)|MSB WR nbout 16 16 1(a0) weight1 16 - x16 : o_nbout 1(a15) 16 imax 16 16 prec weight 1(a15) _nbout 16 activation 16 MSB Figure 6: TRT's SIP\nSWR CONV 1(a0)|MSB WR i nbout 16 16 1(a0) 8e weight 16 6 x16 : o nbout 1(a15) Bau 16 max 16 16 prec weight <<1 16 1(a15) i nbout activation16 MSB\noutput activations every PL Pmax cycles, a speedup of 16/Pmax over DaDN since a DaDN tile alway needs 16 cycles to do the same..\nFor TRT to be fully utilized an FCL must have at least 4K output activations. Some of the network studied have a layer with as little as 2K output activations. To avoid underutilization, the SIPs alon each row are cascaded into a daisy-chain, where the output of one can feed into an input of the nex via a multiplexer. This way, the computation of an output activation can be sliced over the SIPs alon the same row. In this case, each SIP processes only a portion of the input activations resulting int several partial output activations along the SIPs on the same row. Over the next np cycles, wher np the number of slices used, the np partial outputs can be reduced into the final output activation The user can chose any number of slices up to 16, so that TRT can be fully utilized even with fully connected layers of just 256 outputs. For example, in NeuralTalk Karpathy & Li|(2014) the smalles layers can have 600 outputs or fewer.\nSIP: Bit-Serial Inner-Product Units: Figure 6 shows TRT's Bit-Serial Inner-Product Unit (SIP) Each SIP multiplies 16 activations by 16 weights to produce an output activation. Each SIP has two registers, a Serial Weight Register (SwR) and a Weight Registers (WR), each containing 1 16-bit subregisters. Each SwR subregister is a shift register with a single bit connection to one o the weight bus wires that is used to read weights bit-serially for FCLs. Each WR subregister can b parallel loaded from either the weight bus or the corresponding SwR subregister, to process CVL or FCLs respectively. Each SIP includes 256 2-input AND gates that multiply the weights in the WR with the incoming activation bits, and a 16 16b adder tree that sums the partial products. A final adder plus a shifter accumulate the adder tree results into an output register. In each SIP, a multiplexer at the first input of the adder tree implements the cascade mode supporting slicing th output activation computation along the SIPs of a single row. 
To support signed 2's complemen neurons, the SIP can subtract the weight corresponding to the most significant bit (MSB) from the partial sum when the MSB is 1. This is done with negation blocks for each weight before the adde tree. Each SIP also includes a comparator (max) to support max pooling layers.\nDispatcher and Reducers: Figure5b shows an overview of the full TRT system. As in DaDN ther is a central NM and 16 tiles. A Dispatcher unit is tasked with reading input activations from NN always performing eDRAM-friendly wide accesses. It transposes each activation and communicate each a bit a time over the global interconnect. For CVLs the dispatcher has to maintain a pool o multiple activation bricks, each from different window, which may require fetching multiple row from NM. However, since a new set of windows is only needed every PL cycles, the dispatcher cai keep up for the layers studied. For FCLs one activation brick is sufficient. A Reducer per title i tasked with collecting the output activations and writing them to NM. Since output activations tak multiple cycles to produce, there is sufficient bandwidth to sustain all 16 tiles.\nOther Layers: TRT like DaDN can process the additional layers needed by the studied networks For this purpose the tile includes additional hardware support for max pooling similar to DaDN. An activation function unit is present at the output of NBout in order to apply nonlinear activations before the output neurons are written back to NM."}, {"section_index": "5", "section_name": "3.4 PROCESSING SEVERAL BITS AT ONCE", "section_text": "In order to improve TRT's area and power efficiency, the number of bits processed at once can be parameterized. In this case, the weights are multiplied with several activation bits at once, and th multiplication results are partially shifted before they are inserted into their corresponding adde tree.\nIn order to load the weights on time, the SwR subregister has to be modified so it can load sev. eral bits in parallel, and shift that number of positions every cycle. The negation block (for 2's complement support) will operate only over the most significant product result.\nThe chief advantage of such a design is that less SIPs are needed in order to achieve the same throughput - for example, processing 2 bits at once allows reducing the number of columns from 16 to 8. Although the total number of bus wires is similar, the distance they have to cover is significantly reduced. Likewise, the total number of adders required stays similar, but they are clustered closer together.\nA drawback of this design is the limitation to precisions that are exact multiples of the number of bits processed at once.\nThis section evaluates TRT's performance, energy and area and explores the trade-off between ac curacy and performance comparing to DaDN"}, {"section_index": "6", "section_name": "4.1 METHODOLOGY", "section_text": "Numerical Representation Requirements Analysis: The per layer precision profiles are found via the methodology of Judd et al. Judd et al.(2015). Caffe Jia et al.(2014) was used to measure hov reducing the precision of each FCL affects the network's overall top-1 prediction accuracy over 5000 images. The network definitions and pre-trained synaptic weights are taken from the Caffe Mode Zoo [Jia(2015). Since TRT's performance for FCLs is bound by the maximum of the weight an activation precisions, our exploration was limited to the cases where both are the same. 
The searcl procedure is a gradient descent where a given layer's precision is iteratively decremented one bit a a time, until the network's accuracy drops. For weights, the fixed point numbers are set to represen values between -1 and 1. For activations, the number of fractional bits is fixed to a previously determined value known not to hurt accuracy, as per Judd et al.(2015). While both activations anc weights use the same number of bits, their precisions and ranges differ.\nPerformance, Area and Energy: DaDN, STR and TRT were modeled using the same methodol. ogy for consistency. A custom cycle-accurate simulator models execution time. Computation was. scheduled as described by Judd et al.(2016a) to maximize energy efficiency for DaDN. The logic components of the both systems were synthesized with the Synopsys Design Compiler Synopsys. for a TSMC 65nm library to report power and area. The circuit is clocked at 980 MHz. The NBin and NBout SRAM buffers were modelled using CACTI Muralimanohar & Balasubramonian The eDRAM area and energy were modelled with Destiny|Poremba et al.(2015).\nFully-Connected Layer Precisions: Table 1reports the per layer precisions for the CVLs and. FCLs of the networks studied along with the speedup over DaDN that would be ideally possible.. The discussion in this section focuses solely on FCLs. The precisions that can be used vary from 8 up to 10 bits vs. the 16 bits DaDN uses. The ideal speedup ranges from 63% to 66% with. no accuracy loss. Additional exploration of the precision space may yield even shorter precisions without sacrificing accuracy. Modest additional improvements are possible with a loss of 1% in. accuracy.\nExecution Time: Table2reports TRT's performance and energy efficiency relative to DaDN for the precision profiles in Table 1 separately for the fully-connected layers, for the convolutional layers\nConvolutional layers Fully connected layers. Per Layer Activation. Ideal Per Layer Activation and. Ideal Network Precision in Bits. Speedup Weight Precision in Bits. Speedup 100% Accuracy AlexNet 9-8-5-5-7 2.38 10-9-9 1.66 VGG_S 7-8-9-7-9 2.04 10-9-9 1.64 VGG_M 7-7-7-8-7 2.23 10-8-8 1.64 VGG_19 12-12-12-11-12-10-11-11- 1.35 10-9-9 1.63 13-12-13-13-13-13-13-13 99% Accuracy AlexNet 9-7-4-5-7 2.58 9-8-8 1.85 VGG_S 7-8-9-7-9 2.04 9-9-8 1.79 VGG_M 6-8-7-7-7 2.34 9-8-8 1.80 VGG_19 9-9-9-8-12-10-10-12-13- 1.57 10-9-8 1.63 11-12-13-13-13-13-13\nTable 1: Per layer synapse precision profiles needed to maintain the same accuracy as in the base. line. Ideal: Potential speedup with TRT over a 16-bit bit-parallel baseline\nFully Connected Layers Convolutional Layers Accuracy 100% 99% 100% 99% Perf Eff Perf Eff Perf Eff Perf Eff AlexNet 1.61 0.92 1.80 1.04 2.32 1.43 2.52 1.55 VGG_S 1.61 0.92 1.76 1.01 1.97 1.21 1.97 1.21 VGG_M 1.61 0.93 1.77 1.02 2.18 1.34 2.29 1.40 VGG_19 1.60 0.92 1.61 0.93 1.35 0.83 1.56 0.96 1.61 0.92 geomean 1.73 1.00 1.91 1.18 2.05 1.26\nTable 2: Execution time and energy efficiency improvement with TRT compared to DaDN\nand the whole network. For the 1o0% profile, where no accuracy is lost, TRT yields, on average, a speedup of 1.61 over DaDN on FCLs. With the 99% profile, it improves to 1.73.\nThere are two main reasons the ideal speedup can't be reached in practice: dispatch overhead and. underutilization. Dispatch overhead occurs on the initial PL cycles of execution, where the serial. weight loading process prevents any useful products to be performed. 
In practice, this overhead is less than 2% for any given network, although it can be as high as 6% for the smallest layers. Underutilization can happen when the number of output neurons is not a power of two, or lower than. 256. The last classifier layers of networks designed towards recognition of ImageNet (Russakovsky et al.(2014)) categories all provide 1000 output neurons, which leads to 2.3% of the SIPs being idle\nWe have also performed an evaluation of NeuralTalk LSTM Karpathy & Li|(2014) which uses long short-term memory to automatically generate image captions. Precision can be reduced down to 11 bits withouth affecting the accuracy of the predictions (measured as the BLEU score when comparec to the ground truth) resulting in a ideal performance improvement of 1.45 translating into a 1.38 speedup with TRT.\nEnergy Efficiency: This section compares the energy efficiency or simply efficiency of TRT and. DaDN. Energy Efficiency is the inverse of the relative energy consumption of the two designs. The. average efficiency improvement with TRT across all networks and layers for the 100% profile is 1.17. In the FCLs, TRT is not as efficient as DaDN, however, the energy efficiency for CVLs. more than compensates when whole networks are considered except for VGG_19. Regardless, per- formance would not scale linearly if DaDN was to include more tiles in an attempt to match TRT's performance: under-utilization for most layers in these networks would severely reduce any perfor- mance improvements delivered via additional tiles under DaDN. Overall, efficiency primarily comes from the reduction in effective computation following the use of reduced precision arithmetic for the inner product operations. Furthermore, the amount of data that has to be transmitted from the SB. and the traffic between the central eDRAM and the SIPs is decreased proportionally to the chosen\nTable 3: Area Breakdown for TRT and DaDN\nTable 4: Relative performance of 2-bit TRT variation compared to DaDN and the 1-bit TR7\nArea Overhead: Table [3|reports the area breakdown of TRT and DaDN. Over the full chip, TRT. needs 1.49 the area compared to DaDN while delivering on average a 1.90 improvement in. speed. Generally, performance would scale sublinearly with area for DaDN due to underutilization The 2-bit variant, which has a lower area overhead, is described in detail in the next section.."}, {"section_index": "7", "section_name": "4.3 TWO-BIT AT ONCE PERFORMANCE EVALUATION", "section_text": "We evaluate the performance for a multi-bit design as described in section 3.4] where 2 bits are processed every cycle in as half as many total SIPs. The precisions used are the same as indicated. in Table 1 for 100% accuracy, rounded up to the next multiple of two. The results are shown in. Table 4 The 2-bit TRT always improves performance compared to DaDN as the \"vs.DaDN'. columns show. Compared to the 1-bit TRT performance is slightly lower however given that the. area of the 2-bit TRT is much lower, this can be a good trade-off. Overall, there are two forces. at work that shape performance relative to the 1-bit TRT. There is performance potential lost due. to rounding all precisions to an even number, and there is performance benefit by requiring less parallelism. The time needed to serially load the first bundle of weights is also reduced. In VGG_19. the performance benefit due to the lower parallelism requirement outweights the performance loss. due to precision rounding. 
In all other cases, the reverse is true..\nA hardware synthesis and layout of both DaDN and TRT's 2-bit variant using TSMC 65nm typica case libraries shows that the total area overhead can be as low as 24.9%, with an improved energy. efficiency in fully connected layers of 1.24 on average.."}, {"section_index": "8", "section_name": "RELATED WORK AND LIMITATIONS OF THIS WORK", "section_text": "The recent success of Deep Learning has led to several proposals for hardware acceleration of DNNs. This section reviews some of these recent efforts. However, specialized hardware designs for neura. networks is a field with a relatively long history. Relevant to TRT, bit-serial processing hardware for neural networks has been proposed several decades ago, e.g., Svensson & Nordstrom(1990);Murray et al.(1988). While the performance of these designs scales with precision it would be lower than that of an equivalently configured bit-parallel engine. For example, Svensson & Nordstrom (1990 uses an interesting bit-serial multiplier which requires O(4 p) cycles, where p the precision ir bits. Furthermore, as semiconductor technology has progressed the number of resources that can be\nTRT area (mm2) TRT 2-bit area (mm2) DaDN area (mm2 Inner-Product Units 57.27 (47.71%) 37.66 (37.50%) 17.85 (22.20%) Synapse Buffer 48.11 (40.08%) 48.11 (47.90%) 48.11 (59.83%) Input Neuron Buffer 3.66 (3.05%) 3.66 (3.64%) 3.66 (4.55%) Output Neuron Buffer 3.66 (3.05%) 3.66 (3.64%) 3.66 (4.55%) Neuron Memory 7.13 (5.94%) 7.13 (7.10%) 7.13 (8.87%) Dispatcher 0.21 (0.17%) 0.21 (0.21%) Total 120.04 (100%) 100.43 (100%) 80.41 (100%) Normalized Total 1.49 1.25 1.00\nput on chip and the trade offs (e.g., relative speed of memory vs. transistors vs. wires) are today vastly different facilitating different designs. However, truly bit-serial processing such as that use. in the aforementioned proposals needs to be revisited with today's technology constraints due to it. potentially high compute density (compute bandwidth delivered per area).\nIn general, hardware acceleration for DNNs has recently progressed in two directions: 1) consider-. ing more general purpose accelerators that can support additional machine learing algorithms, and. 2) considering further improvements primarily for convolutional neural networks and the two most dominant in terms of execution time layer types: convolutional and fully-connected. In the first. category there are accelerators such as Cambricon Liu et al.[(2016) and Cambricon-X Zhang et al. (2016). While targeting support for more machine learning algorithms is desirable, work on further. optimizing performance for specific algorithms such as TRT is valuable and needs to be pursued as. it will affect such general purpose accelerators..\nTRT is closely related to Stripes Judd et al.(2016c a) whose execution time scales with precisioi but only for CVLs. STR does not improve performance for FCLs. TRT improves upon STR by enabling: 1) performance improvements for FCLs, and 2) slicing the activation computation across multiple SIPs thus preventing underutilization for layers with fewer than 4K outputs. Pragmatic use a similar in spirit organization to STR but its performance on CVLs depends only on the number o activation bits that are 1|Albericio et al.(2016b). It should be possible to apply the TRT extension to Pragmatic, however, performance in FCLs will still be dictated by weight precision. 
The area an energy overheads would need to be amortized by a commensurate performance improvement.\nThe Efficient Inference Engine (EIE) uses synapse pruning, weight compression, zero activation. elimination, and network retraining to drastically reduce the amount of computation and data com-. munication when processing fully-connected layers|Han et al.(2016). An appropriately configured EIE will outperform TRT for FCLs, provided that the network is pruned and retrained. However. the two approaches attack a different component of FCL processing and there should be synergy be. tween them. Specifically, EIE currently does not exploit the per layer precision variability of DNNs. and relies on retraining the network. It would be interesting to study how EIE would benefit from a TRT-like compute engine where EIE's data compression and pruning is used to create vectors of weights and activations to be processed in parallel. EIE uses single-lane units whereas TRT uses a. coarser-grain lane arrangement and thus would be prone to more imbalance. A middle ground may. be able to offer some performance improvement while compensating for cross-lane imbalance..\nEyeriss uses a systolic array like organization and gates off computations for zero activations Chen Yu-Hsin and Krishna, Tushar and Emer, Joel and Sze, Vivienne(2016) and targets primarily high. energy efficiency. An actual prototype has been built and is in full operation. Cnvlutin is a SIMD accelerator that skips on-the-fly ineffectual activations such as those that are zero or close to zero A1 bericio et al.(2016a). Minerva is a DNN hardware generator which also takes advantage of zero. activations and that targets high-energy efficiencyReagen et al.(2016). Layer fusion can furthe. reduce off-chip communication and create additional parallelism|Alwani et al.(2016). As multipl layers are processed concurrently, a straightforward combination with TRT would use the maximum. of the precisions when layers are fused..\nGoogle's Tensor Processing Unit uses quantization to represent values using 8 bits Jouppi(2016) tc support TensorFlow|Abadi et al.(2015). As Table[1shows, some layers can use lower than 8 bits of precision which suggests that even with quantization it may be possible to use fewer levels and tc potentially benefit from an engine such as TRT..\nLimitations: As in DaDN this work assumed that each layer fits on-chip. However, as networks evolve it is likely that they will increase in size thus requiring multiple TRT nodes as was suggested in DaDN. However, some newer networks tend to use more but smaller layers. Regardless, it would be desirable to reduce the area cost of TRT most of which is due to the eDRAM buffers. We have noi explored this possibility in this work. Proteus Judd et al.(2016b) is directly compatible with TR7 and can reduce memory footprint by about 60% for both convolutional and fully-connected layers Ideally, compression, quantization and pruning similar in spirit to EIE Han et al.(2016) would be used to reduce computation, communication and footprint. General memory compresion Mittal & Vetter(2016) techniques offer additional opportunities for reducing footprint and communication.\nWe evaluated TRT only on CNNs for image classification. Other network architectures are impor. tant and the layer configurations and their relative importance varies. TRT enables performance\nimprovements for two of the most dominant layer types. 
We have also provided some preliminary evidence that TRT works well for NeuralTalk LSTM|Karpathy & Li|(2014). Moreover, by enabling output activation computation slicing it can accommodate relatively small layers as well.\nWe have evaluated TRT only for inference only. Using an engine whose performance scales with. precision would provide another degree of freedom for network training as well. However, TRT. needs to be modified accordingly to support all the operations necessary during training and the training algorithms need to be modified to take advantage of precision adjustments..\nThis section commented only on related work on digital hardware accelerators for DNNs. Advances. at the algorithmic level would impact TRT as well or may even render it obsolete. For example, work on using binary weights Courbariaux et al.(2015) would obviate the need for an accelerator whose. performance scales with weight precision. Investigating TRT's interaction with other network types. and architectures and other machine learning algorithms is left for future work..\nThis work presented Tartan an accelerator for inference with Deep Learning Networks whose perfor. mance scales inversely linearly with the number of bits used to represent values in fully-connected and convolutional layers. TRT also enables on-the-fly accuracy vs. performance and energy ef-. ficiency trade offs and its benefits were demonstrated over a set of popular image classification. networks. The new key ideas in TRT are: 1) Supporting both the bit-parallel and the bit-serial. loading of weights into processing units to facilitate the processing of either convolutional or fully-. connected layers, and 2) cascading the adder trees of various subunits (SIPs) to enable slicing the output computation thus reducing or eliminating cross-lane imbalance for relatively small layers.\nTRT opens up a new direction for research in inference and training by enabling precision adjust- ments to translate into performance and energy savings. These precisions adjustments can be done statically prior to execution or dynamically during execution. While we demonstrated TRT for in- ference only, we believe that TRT, especially if combined with Pragmatic, opens up a new direction for research in training as well. For systems level research and development, TRT with its ability to trade off accuracy for performance and energy efficiency enables a new degree of adaptivity for operating systems and applications."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Jorge Albericio, Patrick Judd, Alberto Delmas Lascorz, Sayeh Sharify, and Andreas Moshovos Bit-pragmatic deep neural network computing. Arxiv, arXiv:1610.06920 [cs.LG], 2016b.\nApplying some of the concepts that underlie the TRT design to other more general purpose acceler ators such as Cambricon Liu et al. (2016) or graphics processors would certainly be more preferable than a dedicated accelerator in most application scenarios. However, these techniques are best first nvestigated into specific designs and then can be generalized appropriately.\nHadi Esmaeilzadeh, Emily Blem, Renee St. Amant, Karthikeyan Sankaralingam, and Doug Burger Dark silicon and the end of multicore scaling. In Proceedings of the 38th Annual Internationa Symposium on Computer Architecture, ISCA '11, pp. 365-376, New York, NY, USA, 2011. ACM ISBN 978-1-4503-0472-6. doi: 10.1145/2000064.2000108\nYangqing Jia. Caffe model zoo. 
https://github.com/BVLC/caffe/wiki/Model-Zoo, 2015\nYangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Ser gio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embed. ding. arXiv preprint arXiv:1408.5093. 2014.\nPatrick Judd, Jorge Albericio, Tayler Hetherington, Tor Aamodt, Natalie Enright Jerger, Raque Urtasun, and Andreas Moshovos. Reduced-Precision Strategies for Bounded Memory in Deep. Neural Nets, arXiv:1511.05236v4 [cs.LG] . arXiv.org, 2015.\nPatrick Judd, Jorge Albericio, and Andreas Moshovos. Stripes: Bit-serial Deep Neural Network Computing . Computer Architecture Letters, 2016c.\nAndrej Karpathy and Fei-Fei Li. Deep visual-semantic alignments for generating image descrip tions. CoRR. abs/1412.2306.2014. URLhttp://arxiv.0rg/abs/1412.2306\nPatrick Judd, Jorge Albericio, Tayler Hetherington, Tor M. Aamodt, Natalie Enright Jerger, and. Andreas Moshovos. Proteus: Exploiting numerical precision variability in deep neural networks.. In Proceedings of the 2016 International Conference on Supercomputing, ICS '16, pp. 23:1- 23:12, New York, NY, USA, 2016b. ACM. ISBN 978-1-4503-4361-9. doi: 10.1145/2925426 2926294. URLhttp://doi.acm.0rg/10.1145/2925426.2926294\nNaveen Muralimanohar and Rajeev Balasubramonian. Cacti 6.0: A tool to understand large cache\nAlan F Murray, Anthony Vw Smith, and Zoe F Butler. Bit-serial neural networks. In Neura Information Processing Systems, pp. 573-583, 1988.\nM. Poremba, S. Mittal, Dong Li, J.S. Vetter, and Yuan Xie. Destiny: A tool for modeling emerging. 3d nvm and edram caches. In Design, Automation Test in Europe Conference Exhibition (DATE). 2015, pp. 1543-1546, March 2015. Brandon Reagen, Paul Whatmough, Robert Adolf, Saketh Rama, Hyunkwang Lee, Sae Kyu Lee. Jose Miguel Hernandez-Lobato, Gu-Yeon Wei, David Brooks, undefined, undefined, undefined and undefined. Minerva: Enabling low-power, highly-accurate deep neural network accelerators.. 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), 00 (undefined):267-278, 2016. ISSN 1063-6897. doi: doi.ieeecomputersociety.org/10.1109/ISCA 2016.32. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng. Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575 [cs], September 2014 arXiv: 1409.0575. Bertil Svensson and T Nordstrom. Execution of neural network algorithms on an array of bit. serial processors. In Pattern Recognition, 1990. Proceedings., 1Oth International Conference on. volume 2, pp. 501-505. IEEE, 1990. Synopsys. Design Compiler. http://www.synopsys.com/Tools/.\nSynopsys. Design Compiler. http://www.synopsys.com/Tools Implementation/RTLSynthesis/DesignCompiler/Pages\nShijin Zhang, Zidong Du, Lei Zhang, Huiying Lan, Shaoli Liu, Ling Li, Qi Guo, Tianshi Chen, and Yunji Chen. Cambricon-x: An accelerator for sparse neural networks. In Proceedings of the 49th International Symposium on Microarchitecture, 2016."}] |
HJTXaw9gx | [{"section_index": "0", "section_name": "RECURSIVE REGRESSION WITH NEURAL NETWORKS: APPROXIMATING THE HJI PDE SOLUTION", "section_text": "Vicenc Rubies Royo, Claire Tomlin\nDepartment of Electrical Engineering and Computer Science. UC Berkeley Rorlzol. IISA\nMost machine learning applications using neural networks seek to approximate some function g(x) by minimizing some cost criterion. In the simplest case, if one has access to pairs of the form (x, y) where y = g(x), the problem can be framed as a regression problem. Beyond this family of problems, we find many cases where the unavailability of data pairs makes this approach unfeasible. However, similar to what we find in the reinforcement learning literature, if we have some known properties of the function we are seeking to approximate, there is still hope to frame the problem as a regression problem. In this context, we present an algorithm that approximates the solution to a partial differential equation known as the Hamilton-Jacobi-Isaacs partial differential equation (HJI PDE) and compare it to current state of the art tools. This PDE, which is found in the fields of control theory and robotics, is of particular importance in safety critical systems where guarantees of performance are a must."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Artificial neural networks are remarkable function approximators used in a myriad of applications. ranging from complex controllers for robotic actuation (Levine et al.]2016) (Schulman et al.]2015) to simple image classifiers for digit recognition (LeCun et al.||1989) . They even find uses in physics. to find approximations to solutions of PDEs and systems of coupled ordinary differential equations. (ODEs) (Lagaris et al.||1998). Their success is in part achieved by their property of being universal. function approximators (Hornik et al.1989). In order to train a neural network one usually defines. a cost function which captures the 'goodness\"' of the choice of parameters in our model, and uses. gradient descent/ascent algorithms to improve them. In supervised learning, for example, input out- put data pairs are used to define a cost function such as the mean squared error or the mean absolute. error; unfortunately, in many cases the function we want to approximate is unkown. For instance,. in many reinforcement learning settings one wants to find the optimal policy, a function from state variables to actions'I which maximizes the expected sum of discounted rewards of an agent in some. environment. This function is usually unkown a priori, so this problem can't readily be framed. as a regression problem using input-output pairs. This assertion becomes blurred, however, when. looking at the work of[Mnih et al.(2013), where a deep Q-network learns by generating targets and. minimizing a cost of the form."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Li(0i) = Es,a~e[(yi Q(s,a;0))2]\nHere, the targets yi are generated from the same Q-network that is being used to approximate the Q-function, hence the neural network has two purposes: approximation and data generation. In this work, we show that this same idea can be extended to the domain of approximating solutions to partial differential equations, and in particular the solution to the Hamiton-Jacobi-Isaacs PDE.\nIn control theory and robotics we often want to know how a system evolves in time given some input signal. 
In particular, one would like to know whether there exists an (optimal) input signal tha drives our system to a particular region of interest in our state space and what that input is. For deterministic system with continuous states and inputs, this problem can be succinctly expressed as a partial differential equation known as the Hamilton-Jacobi-Isaacs (HJI) PDE.\nLet V : Rn R- -> R. Then, given a time invariant system of the form dxt = f(x, a,b) and. boundary condition V(x,O) = l(x), where x E Rn is the state vector and a E A C Rma and b E B C Rms are inputs to the system] we wish to find the solution to the minimum-payoff HJI PDE, associated to the reachability problem:\naV(x,t) -min{0,H(x,VxV)} dt\nis known as the Hamiltonian. The boundary condition V(x, 0) = l(x) encodes in its zero sub-level set (i.e. l(x) < O) the region of interest in our state space known as the target set T. Lastly, the so. lution V(x, t) to (2) encodes the information about all the starting states whose induced trajectories will enter (and possibly leave) T within t], given the dynamics and input signals. More precisely. for some starting state xo and t < 0, V(xo,t) < O if and only if the trajectory starting from xo. enters T withint.\nTo give some intuition as to why V(x,t) encodes the starting states whose trajectories enter 7 within t, let us consider the simpler problem where dx = f(x) is an autonomous system without any inputs. Further, let us write (2) as a finite difference in t. With some rearranging, and absorbing the gradient into V (i.e. VxVT f(x)t+V(x, t) ~ V(x+ f(x)t, t)), one can obtain the following approximation\nV(x,t- t)~min{ V(x,t), V(x+ f(x)t,t) }\nFor the case of one input trying to drive our system into T, the approximation becomes\nV(x,t- t) ~ min{V(x,t), min V(x+ f(x,b)t,t) }\nV(x,t- t) ~ min{ V(x,t) , max min V(x + f(x, a,b)t,t) }\na is usually taken to be the input and b is taken to be some bounded input disturbance\nH(x, VxV) := max min VxVTf(x, a, b aEA bEB\nIt is straightforward to see from (4) that at time t = 0 all the states outside of T (i.e. V(x, 0) > 0) but near its boundary, whose induced trajectories enter the target (i.e. V(x + f(x)t, O) < O) within t, will become negative in V(x, -t). Thinking of this update recursively one can intuitively see. how the zero sub-level set of V grows backward in time to include more and more states..\nUsing the previous analogy of the autonomous system, one can see how (5) and (6) are essentially. different ways to expand the zero sub-level set backward in time: (5) can be seen as an input trying. to expand the set as fast as possible; (6) can be seen as two inputs with competing goals, where one input tries to expand the set and the other seeks to prevent its growth. Moreover, this last setting shows the relevance of the HJI PDE in safety critical systems. By treating input b as a bounded worse case disturbance and T as some unsafe region, one can establish safety guarantees about the. system and claim which states won't be driven into T within some time horizon..\nb* = argmin xV(xo,t) f(xo,b bEB\nyields the instantaneous optimal input for state xo at time t to guide the trajectory into T as fast as possible. Using this fact one can generate an optimal control policy based on the gradient of V. This idea can then be easily extended to the case of two competing inputs to obtain competing control policies. 
Finally, even though (7) need not be a convex problem, in this work we will only deal with simple dynamical systems, making the optimization problem easy to solve..\nThe problem presented in section[2(as in many other cases with PDEs) is general not straightforward to solve. For this reason, trying to find a good approximation instead of the actual solution can be a reasonable approach. Many current state-of-the-art tools used to approximate solutions of PDEs, including (2), use gridding techniques (Mitchell]2007) whereby finite differences are used to iteratively update values on a grid. Another approach (Lagaris et al.]|1998) is to train a feedforward neural network by minimizing the following loss\nN Le := G(xi,Ye(xi),Vyo(xi),V2ye(xi)) i=1\nwhere G(x,(x),Vy(x), V2y(x)) = 0 is the PDE whose solution (x) we are trying to ap proximate and x; are points taken from the discretization of our domain. In (8), the function. Ve(x) := A(x) + F(x, Ne(x)) is a candidate approximation which by construction satisfies the boundary condition, where Ne(x) is a feedforward neural network. In order to ensure that the con ditions at the boundary are satisfied, F(x, Ne(x)) = 0 at the boundary and A(x) is a fixed function. which satisfies them.\nN OV(xi,ti Le := + min{0, H(xi,VxV)})2 dt i=1\nIn this work, we try to tackle the problem of finding an approximate solution to (2) from a different perspective. We show that a poor approximation to our solution is enough to generate \"good enough new data for regression, which can in turn be used to improve our model.\nIn this section we present a simple method for approximating the solution to (2) by utilizing a. feedforward neural network in two ways: as a function approximator and a data generator. We. believe that this parametric approach is better suited for finding good approximations by avoiding. some of the limitations found in gridding/tabular techniques due to the curse of dimesionality. To\nLastly, it is important to note that V(x, t) contains useful information in its gradient x V(x, t). In problem\nAlthough this approach is well suited for some problems, special care must be taken when com puting the gradient of the loss with respect to the parameters. For instance, following the previous procedure, the loss for HJI PDE would be written as\nthat end, we start by defining our candidate approximation Ve(x) to be of the same form as ir (Lagaris et al.||1998); that is, a sum of two terms which help satisfy our boundary condition V(x, 0)\nwhere Ne(x, t) is a neural network mapping from our states and time variables to the real numbers.. Next, we sample N points in the state variable x chosen uniformly at random over some set S which. includes T (the target set), and similarly, sample N points in the time variable t uniformly at random. over the set -T, O], where T > 0 is the desired time horizon. By sampling from these distributions we seek to find a good approximation to V(x, t) over the set S [-T, 0]. Once these points have. been gathered, we make use of the update (4), (5) or (6) (depending on our problem) and use Ve(x, t). the approximation itself, to generate the new regression points. The complete algorithm|4.1jis shown. using update equation (6), but it should be clear how to modify it for the other cases..\nAlgorithm 1 Recursive Regression via SGD with Momentum"}, {"section_index": "3", "section_name": "4.2 COMMENTS", "section_text": "Algorithm|4.1|is a type of bootstrapping method in that lines 12 and 13 make use of Ve(x, t) to. 
generate points for regression to train Ne(x, t) which in turn modify Ve(x, t) itself. At first glance,. it is unclear whether the generated pairs ((xj,tj), yj) will result in a good approximation to the. solution of our PDE after regression; however, given the form of our candidate function (10) we. expect that points sampled near t = 0 will in fact be reasonable approximations of V(x, t) for small t. Given this assumption, we hypothesize that despite the presence of misleading data, our network. will be able to do a good job at regressing over all points, thus improving our initial model and. allowing the generation of improved data. By repeating this procedure, we expect the accuracy of the boundary condition to '\"propagate\"' backward in time (possibly with some minor error) in the. form of better and better points for regression..\nAnother important aspect from line 13 is that we are simulating our dynamics forward in time using. the Euler approximation step x; + f(x, a*, b*)t. In practice, depending on the variability and. complexity of the dynamics, one might use a Runge-Kutta method or a more involved integration procedure. For the experiments in the next sections a Runge-Kutta method with 4 stages (RK4) was. used.\nVe(x,t) = V(x,O)+tNe(x,t)"}, {"section_index": "4", "section_name": "5 EXPERIMENTS", "section_text": "In this section we present a few 2-dimensional experiments to demonstrate the validity of our claim and the effectiveness of the algorithm. To measure the performance of the algorithm, we compare the difference between our computed approximation and the true analytical solution. In case it is not straightforward to obtain the solution, a very accurate approximation taken from state-of-the-art tools is used instead. In particular, we make use of the LevelSet Toolbox from [Mitchell (2007), a powerful computational tool for obtaining good approximations to Hamilton-Jacobi (HJ) PDEs.\nThe first error metric to be used will be\nwhere M are the number of points chosen from our domain to compute the average absolute erro. and V(x, t) can denote either the true solution or an accurate approximation. In the case where the analytical solution is known, the points are taken uniformly at random over S; otherwise, they are taken over some grid in S and [-T. 0]. Lastly, we also use a second error metric\nM 1 OV(xi,ti) E2(Ve(x,t)) : + min{0,H(xi,VxV)} M dt i=1"}, {"section_index": "5", "section_name": "5.1 A LINEAR SYSTEM", "section_text": "In this experiment we study the performance of the algorithm on an autonomous system of the form\n-1 -2 x=fx= x 2\nwith V(x, 0) = ||x||2 1 and T = 1.0. For this simple system, the solution to the HJI PDE can be found analytically to be V(x, t) = e-t[x2 - 1. One can easily verify this by checking it satisfies the boundary condition and (2). For this experiment, a feedforward neural network with a single hidden layer of 10 units and sigmoid activation functions was used. The number of points sampled. was chosen to be N = 500, uniformly picked over the set S := {(x1, x2)[x1, x2 E [-5, 5]} and. over t E [-T, 0]. The batches were picked to be of size K = 10, momentum decay y = 0.95 and. learning rate n = 0.1. The interval to renew the regression points was chosen to be 1000 iterations. 
and the program was halted at 500,000 iterations..\nFigure 1: From left to right: the first figure shows the mean absolute error E1, the second figure shows the mean absolute PDE error E2 and the third figure shows the loss Le as defined in algorithm 4.1 over all the data. The horizontal axis represents the iteration number\nM 1 |V(xi,ti)-Vo(xi,ti)] E1(Ve(x,t)) := M i=1\nsimilar to the one defined in (9), which denotes the extent by which (on average) the approximation is violating the PDE equality. For all experiments M = 30o0, all chosen uniformly at random over S T, 0]. In section[5.4 we also show a visual representation of the approximations.\nE 1 E 2 Loss 1.6 4.0 25 1.4 3.5 20 1.2 3.0 1.0 2.5 15 0.8 2.0 0.6 1.5 10 0.4 1.0 5 0.2 0.5 0.0 0.0 0 0 250000 O 250000 0 250000\nThe results shown in Fig. 1where taken over 10 runs of the algorithm concurrently executed over multiple threads. The overall time to run the 500,O00 iterations for all threads was 1521 seconds The average E1 error at halting time was in the order of 7 10-2, whereas the E2 error was in the order of 3 10-1. The sharp jumps appearing in the loss figure in the majority of cases correspond to the error after new points are generated and used for regression."}, {"section_index": "6", "section_name": "5.2 PURSUIT-EVASION GAME: SINGLE INPUT", "section_text": "In this experiment we explore a pursuit-evasion game where a pursuer has to intercept an evader. In. a first simplified approach, we assume the evader has a fixed heading and speed, whereas the pursuer has the same speed as the evader but has the liberty to change the direction of its heading. Fixing. the evader at the origin with its heading aligned with the x-axis we frame the problem in relative coordinates between the evader and pursuer, that is x = [xr yrlT, where xr and yr represent the x. and y position of the pursuer relative to the evader. This system's dynamics are readily encoded in the following equation\nwhere vp = ve = 2.0 represent the speed of the pursuer and evader respectively, b E [0, 2 represents the input available to the pursuer, which is the angle with respect to the x-axis. In this simplified pursuit-evasion game we say the pursuer has captured the evader if they are within 1 unit of distance from each other. Thus, we define our capture condition by defining V(x, O) = x2 - 1 which will ensure that our approximation captures all the states from which the pursuer can capture the evader in within T = 1.0. As in the previous example, we choose the same network architecture and the same values for the halting time, renewal interval, N,K,y and n.\nE_1 E 2 Loss 1.2 2.5 25 1.0 2.0 20 0.8 1.5 15 0.6 1.0 10 0.4 0.5 5 0.2 0.0 0.0 0 250000 0 250000 0 250000\n1.0 0.8 0.6 0.4 0.2 0.0 0\nFigure 2: From left to right: the first figure shows the mean absolute error E1, the second figur. shows the mean absolute PDE error E, and the third figure shows the loss Le as defined in algorithn 4.1 over all the data. The horizontal axis denotes iteration number..\nThe results shown in Fig. 2|where also taken over 10 runs of the algorithm like in section 5.2] The overall time to run the 500,000 iterations was 1952 seconds. The average E1 error at halting time was also in the order of 7 10-2, whereas the E2 error was in the order of 1.5 10-1. The points used to compute E1 were taken from a 51 51 grid at t = 0.5 (half of the time horizon), using a previously computed approximation from the LevelSet Toolbox. 
The reason why a single time instance was used to compute E1 was purely to reduce the amount of computation of the error at run-time."}, {"section_index": "7", "section_name": "5.3 PURSUIT-EVASION GAME: TWO INPUTS", "section_text": "The last experimental example also consists of a pursuit-evasion game, but in this case the evader has access to a range of speeds through an input a E [-2, 2]. The system dynamics thus become\nx Vpcos(b) - a fx,a,b= yr Vpsin(b)\nxr Vpcos(b) - Ve fx,b= yr Vpsin(b)\n1 E E 2 Loss 0.40 0.9 25 0.35 0.8 20 0.30 0.7 0.25 0.6 15 0.20 0.5 0.15 0.4 10 0.10 0.3 5 0.05 0.2 0.00 0.1 0 O 150000 0 150000 150000\nFigure 3: From left to right: the first figure shows the mean absolute error E1, the second figure shows the mean absolute PDE error E, and the third figure shows the loss Le as defined in algorithn 4.1over all the data.\nThe results shown in Fig. 3|where also taken over 10 runs of the algorithm. The overall time to rur the 300,000 iterations over the all threads was 1028 seconds. The average Ej error at halting time was in the order of 6 10-2, whereas the E2 error was in the order of 1.5 10-1. Like in the single input case, the points used to compute E1 were taken from a 51 51 grid at t = 0.5 of a pre-computed approximation."}, {"section_index": "8", "section_name": "5.4 CONTOUR VISUALIZATION", "section_text": "In this section we briefly display some of the contours for a neural network picked at random. from those computed in the experimental section. Each line corresponds to the set of states where. Ve(x, t) = 0 for t = 0, 0.25, -0.5, -0.75, -1.0. These contours enclose within them the states from which our system can reach the target set T within the absolute value of its associated time\n4 4 4 2 2 2 0 0 -2 -2 -2 -4 -4 -4 -4 -2 0 2 4 -4 -2 0 2 4 -4 -2 0 2 4\nFigure 4: From left to right: contours for experiment one, experiment two and experiment three. As one can appreciate, the contours grow according to the specified dynamical model.\nAs expected, the linear system's contours expand radially in all directions since the origin is a stable equilibrium poinl|where all trajectories converge. For the pursuit-evasion game of one input, we also see that the contours grow toward the right, which is a sensible outcome given that the pursue. can't catch up with the evader if it starts somewhere where xr < -1.0. Finally, the last set o1 contours associated with the pursuer-evader game of two competing inputs also make sense, since starting states xr < 1.0 or xr > 1.0 should not permit the pursuer to intercept the evader, and sc\nwith the same negative real part for the eigenvalues\nand, similarly, V(x, 0) = x2 1 and T = 1.0. As before, vp = 2.0. The interesting behavior we expect to see from this experiment, in comparison to the single input counterpart, is that this new available action to the evader will make it more difficult for the pursuer to intercept. This should then be evident by looking at our approximation Ve and its zero sub-level sets at different times. For this experiment we also chose the same architecture for the network as in the previous experiments and the same parameters, except for the halting time which was 300,o00 iterations.\nthe contours should not expand in those directions. As a last comparison, in Fig. 
5|we display the actual contours that would be obtained using the LevelSet Toolbox.\n5 5 5 4 4 3 3 3 2 2 2 1 1 0 0 0 -1 -1 1 -2 -2 2 -3 -3 -3 -4 -4 4 -5 -5 -5 -5 0 5 5 0 5 -5 0\nFigure 5: Contours obtained from the LevelSet Toolbox in Matlab\nBy comparing Fig.5and4|one can qualitatively see that the neural network has learned an accurate approximation of V (x, t)\nThe first advantage of using this method over gridding techniques is a dramatic improvement in memory requirements. For instance, using a standard grid with 51, 51, 10] discretization points per axis (i.e. 51 in xr, 51 in yr and 10 in t) each of the three previous experiments requires the storage of 26, 010 numbers, as opposed to 51 weights for our neural network. For the gridding approach this memory requirement must increase exponentially with the number of dimensions, whereas this need not be the case for our method. Furthermore, points that do not fall exactly on the grid have to be interpolated, whereas the neural network is an approximation that assigns values to all points in the domain. To this we can also add that fact that the neural network can yield the gradient at any point directly with backpropagation, whereas the gradient must once again be approximated for gridding techniques.\nThe main disadvantage of this method, for small dimensional systems in particular, is the time requirement. Computing values over a grid with the LevelSet Toolbox for the previous systems tool less than 10 seconds. This advantage of gridding/tabular procedures, however, quickly disappears ir higher dimensions (4D, 5D...) due to the curse of dimensionality. Finally, another disadvantage o using this method is the necessity to tune hyper parameters.\nIn this work we focus our attention on the idea that recursive/bootstrapped regression can be used in some problems where the function we wish to approximate has some known characteristics. In particular, we show that accurate approximations to the HJI PDE solution can be found by assigning a neural network two roles, one of them being function approximation, and the other data gener ation.To validate our hypothesis three different experiments with three distinct dynamical systems were performed with satisfactory results.\nIn this work we did not focus on the architecture of the neural network, but rather on its ability to. perform well on three distinct tasks using the same algorithm. In future work we will try to find. whether one can construct wider or deeper neural networks and obtain better results. We also want to investigate how well this method scales with the number of state and input dimensions. Positive results in that front could suppose an important step to further alleviate the effects of the curse of. dimensionality, which are pervasive in griding methods.."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "Special thanks to Carlos Florensa for his implementation tips and to Jaime F. Fisac for helping ii the process of writing this work"}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "John Schulman, Sergey Levine, Michael Jordan, and Pieter Abbeel. Trust Region Policy Optimiza tion. Icml-2015, page 16, 2015. ISSN 2158-3226. doi: 10.1063/1.4927398.\nIan Mitchell. A toolbox of level set methods. Technical report, 2007.\nY. LeCun, B. Boser. J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel Backpropagation Applied to Handwritten Zip Code Recognition, 1989. ISSN 0899-7667.\nBadis Djeridane and John Lygeros. 
In this work we focus our attention on the idea that recursive/bootstrapped regression can be used in some problems where the function we wish to approximate has some known characteristics. In particular, we show that accurate approximations to the HJI PDE solution can be found by assigning a neural network two roles, one of them being function approximation and the other data generation. To validate our hypothesis, three different experiments with three distinct dynamical systems were performed, with satisfactory results.

In this work we did not focus on the architecture of the neural network, but rather on its ability to perform well on three distinct tasks using the same algorithm. In future work we will try to find out whether one can construct wider or deeper neural networks and obtain better results. We also want to investigate how well this method scales with the number of state and input dimensions. Positive results on that front could represent an important step toward further alleviating the effects of the curse of dimensionality, which are pervasive in gridding methods."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "Special thanks to Carlos Florensa for his implementation tips and to Jaime F. Fisac for helping in the process of writing this work."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "John Schulman, Sergey Levine, Michael Jordan, and Pieter Abbeel. Trust Region Policy Optimization. ICML 2015, page 16, 2015. ISSN 2158-3226. doi: 10.1063/1.4927398.

Ian Mitchell. A toolbox of level set methods. Technical report, 2007.

Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation Applied to Handwritten Zip Code Recognition, 1989. ISSN 0899-7667.

Badis Djeridane and John Lygeros. Neural approximation of PDE solutions: An application to reachability computations. Proceedings of the 45th IEEE Conference on Decision and Control, pages 3034-3039, 2006. ISSN 01912216. doi: 10.1109/CDC.2006.377184.

This experiment was designed to test the applicability of the method to problems beyond those presented in the previous sections. In particular, we show that with small changes we can also compute an accurate approximation to a pursuit-evasion problem in 3 dimensions. Similar to the previous examples, we frame the problem in relative coordinates with the x-axis aligned with the evader's heading, and give the pursuer and evader control over the rate of rotation. This can be written as follows:

f(x, a, b) = (-v_e + v_p cos(theta_r) + a y_r, v_p sin(theta_r) - a x_r, b - a)^T.

For this problem the capture condition is encoded in the boundary condition V(x, 0) = ||[x_r y_r]^T||_2 - 1 (where we ignore theta_r since the capture condition only depends on the distance), and we consider the time horizon T = 1.0s. For this problem we give both pursuer and evader the same speed v_p = v_e = 1.0 and the same turning rates a, b in [-1, 1]. Unlike the previous experiments, we used a neural network with two hidden layers with 10 and 5 units respectively and sigmoid activations. The number of points sampled was chosen to be N = 2000, uniformly picked over the set S := {(x_r, y_r, theta_r) | x_r, y_r in [-5, 5], theta_r in [-pi, pi]} and over t in [-T, 0]. The batches were picked to be of size K = 25, with momentum decay gamma = 0.999 and learning rate eta = 0.001. The interval to renew the regression points was chosen to be 1000 iterations and the program was halted at 500,000 iterations.

Figure 6: From left to right: the first figure shows the mean absolute error E1, the second figure shows the mean absolute PDE error E2 and the third figure shows the loss L_theta, as defined in Algorithm 4.1, over all the data.

As shown in Fig. 6, both error metrics decrease as the algorithm progresses, reaching an average error for E1 on the order of 5.0 x 10^-2 and an average error for E2 on the order of 1.0 x 10^-1. The points used to compute E1 were taken from a 51 x 51 x 50 approximation grid at t = -0.5s. This set of experiments was run on a different machine4 using 8 threads, and the total time for all threads to finish was 1000 seconds. Finally, Fig. 7 shows the zero level set contour at t = -0.5, which is now a 3D surface, from side and top perspectives. The first row shows the output of the LevelSet Toolbox from each perspective, and the second row shows a 3D scatter plot of points on the zero level set obtained from one of the 8 neural networks that were trained.

4due to heavy usage of the first machine we had to switch to a different one

Figure 7: The first column shows the first side view, perpendicular to the x-z plane. The second column shows the second side view, perpendicular to the y-z plane. Finally, the third column shows the top view, which is perpendicular to the x-y plane.

For this experiment, only 111 numbers were needed to store the approximation, as opposed to 51 x 51 x 50 x 10 = 1,300,500 numbers (i.e. 51 in x_r, 51 in y_r, 50 in theta_r and 10 in t) for a 51 x 51 x 50 x 10 grid approximation."}]
S13wCE9xx | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Alexander Fonarev123, Alexey Grinchuk12, Gleb Gusev2, Pavel Serdyukov2, Ivan Oseledets14. 1Skolkovo Institute of Science and Technology, Moscow, Russia. 2Yandex LLC, Moscow, Russia. 3SBDA Group, Dublin, Ireland. 4Institute of Numerical Mathematics, Russian Academy of Sciences, Moscow, Russia. newo@newo.su, oleksii.hrinchuk@skolkovotech.ru, gleb57@yandex-team.ru, pavser@yandex-team.ru, ioseledets@skoltech.ru"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "The Skip-Gram Negative Sampling (SGNS) word embedding model, well known through its implementation in the "word2vec" software, is usually optimized by stochastic gradient descent. It can be shown that optimizing the SGNS objective can be viewed as an optimization problem of searching for a good matrix under a low-rank constraint. The standard way to solve this type of problem is to apply the Riemannian optimization framework and optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes the SGNS objective using Riemannian optimization and demonstrate its superiority over popular competitors, such as the original method used to train SGNS and SVD over the SPPMI matrix.

In this paper, we consider the problem of embedding words into a low-dimensional space in order to measure the semantic similarity between them. As an example, how do we find out whether the word "table" is semantically more similar to the word "stool" than to the word "sky"? This is achieved by constructing a low-dimensional vector representation for each word and measuring the similarity between words as the similarity between the corresponding vectors.

One of the most popular word embedding models, introduced by Mikolov et al. (2013), is a discriminative neural network that optimizes the Skip-Gram Negative Sampling (SGNS) objective (see Equation 3). It aims at predicting whether two words can be found close to each other within a text. As shown in Section 2, the process of training word embeddings with SGNS can be divided into two general steps with clear objectives:

Step 1. Search for a low-rank matrix X that provides a good SGNS objective value;
Step 2. Compute embeddings W and C from the low-rank solution X = WC^T that perform well in terms of linguistic metrics.

Unfortunately, most previous approaches mix these two steps into a single one, which entails a formulation of the optimization problem that is not completely correct. For example, popular approaches to training embeddings (including the original "word2vec" implementation) do not take into account that the objective from Step 1 depends only on the product X = WC^T: instead of straightforwardly computing the derivative w.r.t. X, these methods are explicitly based on the derivatives w.r.t. W and C, which complicates the optimization procedure. Moreover, such approaches do not take into account that the parametrization WC^T of matrix X is non-unique and Step 2 is required.
Indeed, for any invertible matrix S, we have X = W1 C1^T = W1 S S^{-1} C1^T = W2 C2^T with W2 = W1 S and C2 = C1 S^{-T}; therefore, the solutions W1 C1^T and W2 C2^T are equally good in terms of the SGNS objective but entail different cosine similarities between embeddings and, as a result, different performance in terms of linguistic metrics (see Section 4.2 for details).

A successful attempt to follow the above described steps, which outperforms the original SGNS optimization approach in terms of various linguistic tasks, was proposed by Levy & Goldberg (2014). In order to obtain a low-rank matrix X on Step 1, the method reduces the dimensionality of the Shifted Positive Pointwise Mutual Information (SPPMI) matrix via Singular Value Decomposition (SVD). On Step 2, it computes embeddings W and C via a simple formula that depends on the factors obtained by SVD. However, this method has one important limitation: SVD provides a solution to a surrogate optimization problem, which has no direct relation to the SGNS objective. In fact, SVD minimizes the Mean Squared Error (MSE) between X and the SPPMI matrix, which does not lead to minimization of the SGNS objective in general (see Section 6.1 and Section 4.2 in Levy & Goldberg (2014) for details).

These issues bring us to the main idea of our paper: while keeping the low-rank matrix search setup on Step 1, optimize the original SGNS objective directly. This leads to an optimization problem over matrix X with a low-rank constraint, which is often (Mishra et al. (2014)) solved by applying the Riemannian optimization framework (Udriste (1994)). In our paper, we use the projector-splitting algorithm (Lubich & Oseledets (2014)), which is easy to implement and has low computational complexity. Of course, Step 2 may be improved as well, but we regard this as a direction of future work.

As a result, our approach achieves a significant improvement in terms of SGNS optimization on Step 1 and, moreover, the improvement on Step 1 entails an improvement on Step 2 in terms of linguistic metrics. That is why the proposed two-step decomposition of the problem makes sense, and, most importantly, it opens the way to applying even more advanced approaches based on it (e.g., more advanced Riemannian optimization techniques for Step 1 or a more sophisticated treatment of Step 2).

To summarize, the main contributions of our paper are:

We reformulated the problem of SGNS word embedding learning as a two-step procedure with clear objectives;
For Step 1, we developed an algorithm based on the Riemannian optimization framework that optimizes the SGNS objective over the low-rank matrix X directly;
Our algorithm outperforms state-of-the-art competitors in terms of the SGNS objective and the semantic similarity linguistic metric (Levy & Goldberg (2014); Mikolov et al. (2013); Schnabel et al. (2015)).

In this paper, we consider the Skip-Gram Negative Sampling (SGNS) word embedding model (Mikolov et al. (2013)), which is a probabilistic discriminative model. Assume we have a text corpus given as a sequence of words w1, ..., wn, where n may be larger than 10^12 and wi in V_W belongs to a vocabulary of words V_W. A context c in V_C of the word wi is a word from the set {w_{i-L}, ..., w_{i-1}, w_{i+1}, ..., w_{i+L}} for some fixed window size L. Let w, c in R^d be the word embeddings of word w and context c, respectively. Assume they are specified by the following mappings:

W : V_W -> R^d,  C : V_C -> R^d.

The ultimate goal of SGNS word embedding training is to fit good mappings W and C.

In the SGNS model, the probability that the pair (w, c) is observed in the corpus is modeled as the following function:

P((w, c) in D) = sigma(<w, c>) = 1 / (1 + exp(-<w, c>)),   (1)
where D is the multiset of all word-context pairs (w, c) observed in the corpus and <x, y> is the scalar product of vectors x and y. The number d is a hyperparameter that adjusts the flexibility of the model. It usually takes values from tens to hundreds.

In order to collect a training set, we take all pairs (w, c) from D as positive examples and k randomly generated pairs (w, c) as negative ones. Let #(w, c) be the number of times the pair (w, c) appears in D. The training then maximizes the total logarithmic likelihood:

l = sum_{w in V_W} sum_{c in V_C} #(w, c) (log sigma(<w, c>) + k E_{c' ~ P_D} log sigma(-<w, c'>)) -> max over W, C.   (3)

Usually, this optimization is done via a stochastic gradient descent procedure that is performed while passing through the corpus (Mikolov et al. (2013); Rong (2014))."}, {"section_index": "2", "section_name": "2.2 OPTIMIZATION OVER LOW-RANK MATRICES", "section_text": "Relying on the prospect proposed by Levy & Goldberg (2014), let us show that the optimization problem given by (3) can be considered as a problem of searching for a matrix that maximizes a certain objective function and has the rank-d constraint (Step 1 in the scheme described in Section 1)."}, {"section_index": "3", "section_name": "2.2.1 SGNS LOSS FUNCTION", "section_text": "As shown by Levy & Goldberg (2014), the logarithmic likelihood (3) can be represented as the sum of l_{w,c}(w, c) over all pairs (w, c), where l_{w,c}(w, c) has the following form:

l_{w,c}(w, c) = #(w, c) log sigma(<w, c>) + k (#(w)#(c) / |D|) log sigma(-<w, c>).   (4)

A crucial observation is that this loss function depends only on the scalar product <w, c>, but not on the embeddings w and c separately."}, {"section_index": "4", "section_name": "2.2.2 MATRIX NOTATION", "section_text": "Denote |V_W| as n and |V_C| as m. Let W in R^{n x d} and C in R^{m x d} be matrices, where each row w in R^d of matrix W is the word embedding of the corresponding word w and each row c in R^d of matrix C is the context embedding of the corresponding context c. Then the elements of the product of these matrices, X = WC^T, are the scalar products x_{w,c} of all pairs (w, c):

X = (x_{w,c}),  w in V_W, c in V_C.

In this way, the SGNS objective can be written as a function of X:

F(X) = sum_{w in V_W} sum_{c in V_C} l_{w,c}(x_{w,c}).   (5)

Proposition 1. The SGNS optimization problem given by (3) can be rewritten in the following constrained form:

maximize F(X), X in R^{n x m}, subject to X in M_d,   (6)

where

M_d = {X in R^{n x m} : rank(X) = d}.   (7)

The key idea of this paper is to solve the optimization problem given by (6) via the framework of Riemannian optimization, which we introduce in Section 3.

It is important to note that this reformulation does not suppose optimization over the parameters W and C directly. This entails optimization in a space with ((n + m - d) * d) degrees of freedom (Mukherjee et al. (2015)) instead of ((n + m) * d), which simplifies the optimization process (see Section 5 for the experimental results).
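To make Eq. (5) concrete, the following minimal sketch evaluates the SGNS objective for a given score matrix X and co-occurrence counts. The function and argument names (`sgns_objective`, `counts`) are illustrative, not part of the paper's released code:

```python
# Sketch: computing F(X) from Eq. (5), given the co-occurrence counts #(w, c).
import numpy as np

def log_sigmoid(x):
    # log(sigma(x)), computed in a numerically stable way
    return -np.logaddexp(0.0, -x)

def sgns_objective(X, counts, k):
    """X: n x m matrix of scores x_{w,c}; counts: n x m matrix of #(w, c)."""
    w_counts = counts.sum(axis=1, keepdims=True)   # #(w)
    c_counts = counts.sum(axis=0, keepdims=True)   # #(c)
    neg = k * w_counts * c_counts / counts.sum()   # k * #(w)#(c) / |D|
    return float(np.sum(counts * log_sigmoid(X) + neg * log_sigmoid(-X)))
```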
"}, {"section_index": "5", "section_name": "2.3 COMPUTING EMBEDDINGS FROM A LOW-RANK SOLUTION", "section_text": "Once X is found, we need to recover W and C such that X = WC^T (Step 2 in the scheme described in Section 1). This problem does not have a unique solution, since if (W, C) satisfy this equation, then WS^{-1} and CS^T satisfy it as well for any non-singular matrix S. Moreover, different solutions may achieve different values of the linguistic metrics (see Section 4.2 for details). While our paper focuses on Step 1, we use, for Step 2, a heuristic approach that was proposed by Levy et al. (2015) and shows good results in practice. We compute the SVD of X in the form X = U Sigma V^T, where U and V have orthonormal columns and Sigma is the diagonal matrix of singular values, and use

W = U sqrt(Sigma),  C = V sqrt(Sigma)

as the matrices of embeddings.

A simple justification of this solution is the following: we need to map words into vectors in a way such that similar words have similar embeddings in terms of cosine similarity:

cos(w1, w2) = <w1, w2> / (||w1|| ||w2||).

It is reasonable to assume that two words are similar if they share contexts. Therefore, we can estimate the similarity of two words w1, w2 as s(w1, w2) = sum_{c in V_C} x_{w1,c} x_{w2,c}, which is the element of the matrix XX^T with indices (w1, w2). Note that XX^T = U Sigma V^T V Sigma U^T = U Sigma^2 U^T. If we choose W = U Sigma, we exactly obtain <w1, w2> = s(w1, w2), since WW^T = XX^T in this case. That is, the cosine similarity of the embeddings w1, w2 coincides with the intuitive similarity s(w1, w2). However, scaling by sqrt(Sigma) instead of Sigma was shown by Levy et al. (2015) to be a better solution in experiments."}, {"section_index": "6", "section_name": "3 RIEMANNIAN OPTIMIZATION", "section_text": "The main idea of Riemannian optimization (Udriste (1994)) is to consider (6) as a constrained optimization problem. Assume we have an approximate solution X_i on the current step of the optimization process, where i is the step number. In order to improve X_i, the next step of standard gradient ascent outputs X_i + grad F(X_i), where grad F(X_i) is the gradient of the objective F at the point X_i. Note that the gradient grad F(X_i) can be naturally considered as a matrix in R^{n x m}. The point X_i + grad F(X_i) leaves the manifold M_d, because its rank is generally greater than d. That is why Riemannian optimization methods map the point X_i + grad F(X_i) back to the manifold M_d. The standard Riemannian gradient method first projects the gradient step onto the tangent space at the current point X_i and then retracts it back to the manifold:

X_{i+1} = R(P_{T_M}(X_i + grad F(X_i))),

where R is the retraction operator and P_{T_M} is the projection onto the tangent space.

In our paper, we use a much simpler version of this approach that retracts the point X_i + grad F(X_i) directly to the manifold, as illustrated in Figure 1: X_{i+1} = R(X_i + grad F(X_i)).

Intuitively, the retraction R finds a rank-d matrix on the manifold M_d that is similar to the high-rank matrix X_i + grad F(X_i) in terms of the Frobenius norm. How can we do it? The most straightforward way to reduce the rank of X_i + grad F(X_i) is to perform the SVD, which keeps the d largest singular values:

U_{i+1}, S_{i+1}, V_{i+1}^T <- SVD_d(X_i + grad F(X_i)).

However, this is computationally expensive. Instead, we use the projector-splitting method (Lubich & Oseledets (2014)), which is a second-order retraction onto the manifold (for details, see the review by Absil & Oseledets (2015)). Its practical implementation is also quite intuitive: instead of computing the full SVD of X_i + grad F(X_i), as in the gradient projection method, we use just one step of the block power numerical method (Bentbib & Kanber (2015)) that computes the SVD, which reduces the computational complexity.
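A minimal NumPy sketch of one projector-splitting retraction step (the two QR decompositions of lines 5-7 in Algorithm 1) is given below; it assumes the current point is kept in factorized form X_i = U S V^T, with names chosen for illustration:

```python
# Sketch: one retraction step of the projector-splitting scheme.
import numpy as np

def projector_splitting_step(U, S, Vt, grad, lr):
    """Current point X_i = U @ S @ Vt; `grad` is the full gradient matrix."""
    Y = U @ S @ Vt + lr * grad           # gradient step, leaves the manifold
    U_new, _ = np.linalg.qr(Y @ Vt.T)    # power-method step for the U factor
    V_new, St = np.linalg.qr(Y.T @ U_new)
    # X_{i+1} = U_new @ St.T @ V_new.T stays rank d on the manifold
    return U_new, St.T, V_new.T
```

Note that only QR factorizations of tall n x d and m x d matrices are needed, which is the source of the complexity reduction compared with a full SVD of an n x m matrix.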
Figure 1: Geometric interpretation of one step of the projector-splitting optimization procedure: the gradient step and the retraction of the high-rank matrix X_i + grad F(X_i) to the manifold of low-rank matrices M_d.

In practice, one step of the block power method with two QR decompositions reads:

1: U_{i+1}, S' <- QR((X_i + grad F(X_i)) V_i)
2: V_{i+1}, S_{i+1}^T <- QR((X_i + grad F(X_i))^T U_{i+1})
3: X_{i+1} <- U_{i+1} S_{i+1} V_{i+1}^T

In this way, we always keep the solution X_{i+1} = U_{i+1} S_{i+1} V_{i+1}^T on the manifold M_d and in the factorized form (8).

What is important, we only need to compute grad F(X_i), so the gradients with respect to U, S and V are never computed explicitly, thus avoiding the subtle case where S is close to singular (a so-called singular, or critical, point on the manifold). Indeed, the gradient with respect to U (while keeping the orthogonality constraints) can be written (Koch & Lubich (2007)) as:

dF/dU = (dF/dX) V S^{-1},

which means that the gradient will be large if S is close to singular. The projector-splitting scheme is free from this problem.

In the case of the SGNS objective given by (5), an element of the gradient grad F has the form:

(grad F(X))_{w,c} = dF/dx_{w,c} = #(w, c) sigma(-x_{w,c}) - k (#(w)#(c) / |D|) sigma(x_{w,c}).

The whole optimization procedure is summarized in Algorithm 1.

Algorithm 1:
Require: Dimensionality d, initialization W_0 and C_0, step size lambda, gradient function grad F : R^{n x m} -> R^{n x m}, number of iterations K
Ensure: Factor W in R^{n x d}
1: X_0 <- W_0 C_0^T  # get an initial point on the manifold
2: U_0, S_0, V_0^T <- SVD(X_0)  # compute the first point satisfying the low-rank constraint
3: i <- 0
4: while i < K do
5:   U_{i+1}, S' <- QR((X_i + lambda grad F(X_i)) V_i)  # one step of the block power method
6:   V_{i+1}, S_{i+1}^T <- QR((X_i + lambda grad F(X_i))^T U_{i+1})  # with two QR decompositions
7:   X_{i+1} <- U_{i+1} S_{i+1} V_{i+1}^T  # update the point on the manifold
8:   i <- i + 1
9: end while
10: U, Sigma, V^T <- SVD(X_K)
11: W <- U sqrt(Sigma)  # compute word embeddings
12: return W
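The entry-wise gradient above vectorizes directly, which is how `grad F` in Algorithm 1 can be supplied. A minimal sketch, with `counts` standing in for the matrix of #(w, c) values:

```python
# Sketch: vectorized SGNS gradient, dF/dx_{w,c} over the whole matrix X.
import numpy as np
from scipy.special import expit  # sigma(x) = 1 / (1 + exp(-x))

def sgns_grad(X, counts, k):
    w_counts = counts.sum(axis=1, keepdims=True)   # #(w)
    c_counts = counts.sum(axis=0, keepdims=True)   # #(c)
    neg = k * w_counts * c_counts / counts.sum()   # k * #(w)#(c) / |D|
    return counts * expit(-X) - neg * expit(X)
```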
Original \"wordsim-353' dataset is a mixture of the word pairs for both word similarity and word relatedness tasks. This dataset was split (Agirre et al. (2009)) into two intersecting parts: \"wordsim-sim'' (\"ws-sim' in the tables) and \"wordsim-rel' (\"ws-rel'' in the tables) to separate the words from different tasks. In our experiments, we use both of them or a par with the full version of \"wordsim-353' (\"ws-full'' in the tables). Each dataset contains worc pairs together with assessor-assigned similarity scores for each pair. As a quality measure, we use Spearman's correlation between these human ratings and cosine similarities for each pair. We call this quality metric linguistic in our paper.\nTable 1: Comparison of SGNS values obtained by the models. The larger is better\nDim. d Algorithm ws-sim ws-rel ws-full simlex men SGD-SGNS 0.719 0.570 0.662 0.288 0.645 d = 100 SVD-SPPMI 0.722 0.585 0.669 0.317 0.686 RO-SGNS 0.729 0.597 0.677 0.322 0.683 SGD-SGNS 0.733 0.584 0.677 0.317 0.664 d = 200 SVD-SPPMI 0.747 0.625 0.694 0.347 0.710 RO-SGNS 0.757 0.647 0.709 0.353 0.701\nTable 2: Comparison of the methods in terms of the semantic similarity task. Each entry represents the Spearman's correlation between predicted similarities and the manually assessed ones.\nWe see that SGD-SGNS and SVD-SPPMI methods provide quite similar results, however, the pro posed method obtains significantly better SGNS values, what proves the feasibility of using Rie mannian optimization framework in SGNS optimization problem. It is interesting to note that SVD SPPMI method, which does not optimize SGNS objective directly, obtains better results than SGD SGNS method, which aims at optimizing SGNS. This fact additionally confirms the idea describec in Section 2.2.2 that the independent optimization over parameters W and C may decrease the per formance.\nHowever, the target performance measure of embedding models is the correlation between semantic similarity and human assessment (Section 4.2). Table 2 presents the comparison of the methods in terms of it. We see that our method outperforms the competitors on all datasets except for \"men' dataset where it obtains slightly worse results. Moreover, it is important that the higher dimensior entails higher performance gain of our method in comparison to the competitors.\nIn order to understand how exactly our model improves or degrades the performance in comparison. to the baseline, we found several words, whose neighbors in terms of cosine distance change signif-. icantly. Table 3 demonstrates neighbors of words \"five\", \"he\"' and \"main\"' in terms of our model and. its nearest competitor according to the similarity task - SVD-SPPMI. These words were chosen as representative examples whose neighborhoods in terms of SVD-SPPMI and RO-SGNS models are strikingly different. A neighbour of a source word is bold if we suppose that it has a similar. semantic meaning to the source word. First of all, we notice that our model produces much better. neighbors of the words describing digits or numbers (see word \"five\"' as an example). The similar. situation happens for many other words, e.g. in case of word \"main' - the nearest neighbors con-. tain 4 similar words in case of our model instead of 2 in case of SVD-SPPMI. The neighbourhood. of word \"he\"' contains less semantically similar words in case of our model. 
In order to understand how exactly our model improves or degrades the performance in comparison to the baseline, we found several words whose neighbors in terms of cosine distance change significantly. Table 3 demonstrates the neighbors of the words "five", "he" and "main" in terms of our model and its nearest competitor according to the similarity task, SVD-SPPMI. These words were chosen as representative examples whose neighborhoods in terms of the SVD-SPPMI and RO-SGNS models are strikingly different. A neighbor of a source word is bold if we suppose that it has a similar semantic meaning to the source word. First of all, we notice that our model produces much better neighbors for words describing digits or numbers (see the word "five" as an example). A similar situation happens for many other words; e.g., in the case of the word "main", the nearest neighbors contain 4 similar words for our model instead of 2 for SVD-SPPMI. The neighborhood of the word "he" contains fewer semantically similar words in the case of our model. However, it filters out completely irrelevant words, such as "promptly" and "dumbledore".

Regarding the optimal number K of iterations in the optimization procedure and the step size lambda, we found that they depend on the particular value of the dimensionality d. For d = 100, we have K = 25 and lambda ~ 5e-5, and for d = 200, we have K = 13 and lambda = 1e-4. Moreover, it is interesting that the best results were obtained when the SVD-SPPMI embeddings were used as the initialization of the Riemannian optimization process.

Table 3: Examples of the semantic neighbors obtained for the words "five", "he" and "main" by our method and SVD-SPPMI.
Word "five":  SVD-SPPMI: lb 0.748, kg 0.731, mm 0.670, mk 0.651, lbf 0.650, per 0.644
              RO-SGNS:   four 0.999, three 0.999, six 0.997, seven 0.997, eight 0.996, and 0.985
Word "he":    SVD-SPPMI: she 0.918, was 0.797, promptly 0.742, having 0.731, dumbledore 0.731, him 0.730
              RO-SGNS:   when 0.904, had 0.903, was 0.901, who 0.892, she 0.884, by 0.880
Word "main":  SVD-SPPMI: major 0.631, busiest 0.621, principal 0.607, nearest 0.607, connecting 0.591, linking 0.588
              RO-SGNS:   major 0.689, important 0.661, line 0.631, external 0.624, principal 0.618, primary 0.612

Skip-Gram Negative Sampling was introduced by Mikolov et al. (2013). The "negative sampling" approach was thoroughly described by Goldberg & Levy (2014), and the learning method is explained by Rong (2014). There are several open-source implementations of the SGNS neural network, which is widely known as "word2vec"34.

3Original Google word2vec: https://code.google.com/archive/p/word2vec/
4Gensim word2vec: https://radimrehurek.com/gensim/models/word2vec.html

As shown in Section 2.2, Skip-Gram Negative Sampling optimization can be reformulated as a problem of searching for a low-rank matrix. In order to be able to use out-of-the-box SVD for this task, Levy & Goldberg (2014) used a surrogate version of SGNS as the objective function. There are two general assumptions made in their algorithm that distinguish it from the SGNS optimization:

1. SVD optimizes the Mean Squared Error (MSE) objective instead of the SGNS loss function.
2. In order to avoid infinite elements in the SPMI matrix, it is transformed in an ad-hoc manner (into the SPPMI matrix) before applying SVD.

This makes the objective not interpretable in terms of the original task (3). As mentioned by Levy & Goldberg (2014), the SGNS objective weighs different (w, c) pairs differently, unlike SVD, which works with the same weight for all pairs, and this may entail a performance drop. A comprehensive explanation of the relation between the SGNS, SPPMI and SVD-over-SPPMI methods is provided by Keerthi et al. (2015). Lai et al. (2015) and Levy et al. (2015) give a good overview of highly practical methods to improve these word embedding models.

An introduction to optimization over Riemannian manifolds can be found in the paper of Udriste (1994). An overview of retractions of high-rank matrices to low-rank manifolds is provided by Absil & Oseledets (2015). The projector-splitting algorithm was introduced by Lubich & Oseledets (2014), and was also mentioned by Absil & Oseledets (2015) as the "Lie-Trotter retraction".

Riemannian optimization is successfully applied to various data science problems: for example, matrix completion (Vandereycken (2013)), large-scale recommender systems (Tan et al. (2014)), and tensor completion (Kressner et al. (2014)).

It seems to be an interesting direction of future work to apply more advanced optimization techniques to Step 1 of the scheme proposed in Section 1 and to explore Step 2: obtaining embeddings from a given low-rank matrix.

In our paper, we proposed a general two-step scheme of training the SGNS word embedding model and introduced an algorithm that performs the search for a solution in low-rank form via the Riemannian optimization framework. We also demonstrated the superiority of the proposed method by providing an experimental comparison to existing state-of-the-art approaches."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. Multimodal distributional semantics. J. Artif. Intell. Res. (JAIR), 49(1-47), 2014.

Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. Placing search in context: The concept revisited. In WWW, pp. 406-414, 2001.

Othmar Koch and Christian Lubich. Dynamical low-rank approximation. SIAM J. Matrix Anal. Appl., 29(2):434-454, 2007.

Daniel Kressner, Michael Steinlechner, and Bart Vandereycken. Low-rank tensor completion by Riemannian optimization. BIT Numerical Mathematics, 54(2):447-468, 2014.

Siwei Lai, Kang Liu, Shi He, and Jun Zhao. How to generate a good word embedding? arXiv preprint arXiv:1507.05523, 2015.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In NIPS, pp. 3111-3119, 2013.

Xin Rong. word2vec parameter learning explained. arXiv preprint arXiv:1411.2738, 2014.

Mingkui Tan, Ivor W Tsang, Li Wang, Bart Vandereycken, and Sinno Jialin Pan. Riemannian pursuit for big matrix recovery. In ICML, volume 32, pp. 1539-1547, 2014.

Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. Evaluation methods for unsupervised word embeddings. In EMNLP, 2015."}]
SywUHFcge | [{"section_index": "0", "section_name": "A THEORETICAL FRAMEWORK FOR ROBUSTNESS OF (DEEP) CLASSIFIERS AGAINST ADVERSARIAL EXAMPLES", "section_text": "Beilun Wang, Ji Gao, Yanjun Qi. Department of Computer Science, University of Virginia, Charlottesville, VA 22901, USA. {bw4mw, jg6yd, yanjun}@virginia.edu

Rainer Dahlhaus. Fitting Time Series Models to Nonstationary Processes. The Annals of Statistics, 25(1):1-37, 1997.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248-255. IEEE, 2009."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Most machine learning classifiers, including deep neural networks, are vulnerable to adversarial examples. Such inputs are typically generated by adding small but purposeful modifications that lead to incorrect outputs while being imperceptible to human eyes. The goal of this paper is not to introduce a single method, but to make theoretical steps towards fully understanding adversarial examples. By using concepts from topology, our theoretical analysis brings forth the key reasons why an adversarial example can fool a classifier (f1) and adds its oracle (f2, like human eyes) into such analysis. By investigating the topological relationship between two (pseudo)metric spaces corresponding to predictor f1 and oracle f2, we develop necessary and sufficient conditions that can determine if f1 is always robust (strong-robust) against adversarial examples according to f2. Interestingly, our theorems indicate that just one unnecessary feature can make f1 not strong-robust, and that the right feature representation learning is the key to getting a classifier that is both accurate and strong-robust.

James J DiCarlo and David D Cox. Untangling invariant object recognition. Trends in Cognitive Sciences, 11(8):333-341, 2007.

James J DiCarlo, Davide Zoccolan, and Nicole C Rust. How does the brain solve visual object recognition? Neuron, 73(3):415-434, 2012.

Richard O Duda, Peter E Hart, and David G Stork. Pattern classification. John Wiley & Sons, 2012.

Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Fundamental limits on adversarial robustness. In Proceedings of ICML, Workshop on Deep Learning, number EPFL-CONF-214923, 2015."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep Neural Networks (DNNs) can efficiently learn highly accurate models and have been demonstrated to perform exceptionally well (Krizhevsky et al. 2012; Hannun et al. 2014). However, recent studies show that intelligent attackers can force many machine learning models, including DNNs, to misclassify examples by adding small and hardly visible modifications to a regular test sample.

The maliciously generated inputs are called "adversarial examples" (Goodfellow et al. 2014; Szegedy et al. 2013) and are commonly crafted by carefully searching for small perturbations through an optimization procedure. Several recent studies have proposed algorithms for solving such optimizations to fool DNN classifiers. Szegedy et al. (2013) first observed that convolutional DNNs are vulnerable to small artificial perturbations. They used box-constrained Limited-memory BFGS (L-BFGS) to create adversarial examples and found that adversarial perturbations generated from one DNN network can also force other networks to produce wrong outputs. Then, Goodfellow et al. (2014) try to
clarify that the primary cause of such vulnerabilities may be the linear nature of DNNs. They then propose the fast gradient sign method for generating adversarial examples quickly. Subsequent papers (Fawzi et al. 2015; Papernot et al. 2015a; Nguyen et al. 2015) have explored other ways to generate adversarial examples for DNNs (details in Section 2.1). The goal of this paper is to analyze the robustness of machine learning models in the face of adversarial examples.

Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick McDaniel. Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.

John L Kelley. General topology. Springer Science & Business Media, 1975.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.

Taehoon Lee, Minsuk Choi, and Sungroh Yoon. Manifold regularized deep neural networks using adversarial examples. arXiv preprint arXiv:1511.06381, 2015.

Bo Li and Yevgeniy Vorobeychik. Feature cross-substitution in adversarial classification. In Advances in Neural Information Processing Systems, pp. 2087-2095, 2014.

Wei Liu and Sanjay Chawla. Mining adversarial patterns via regularized loss minimization. Machine Learning, 81(1):69-83, 2010.

In response to progress in generating adversarial examples, researchers have attempted to design strategies for making machine-learning systems robust to various noise, in the worst case adversarial examples. For instance, denoising NN architectures (Vincent et al. 2008; Gu & Rigazio 2014; Jin et al. 2015) can discover more robust features by using a noise-corrupted version of inputs as training samples. A modified distillation strategy (Papernot et al. 2015b) was proposed to improve the robustness of DNNs against adversarial examples, though it has been shown to be unsuccessful recently (Carlini & Wagner 2016a). The most generally successful strategy to date is adversarial training (Goodfellow et al. 2014; Szegedy et al. 2013), which injects adversarial examples into training to improve the generalization of DNN models. More recent techniques incorporate a smoothness penalty (Miyato et al. 2016; Zheng et al. 2016) or a layer-wise penalty (Carlini & Wagner 2016b) as a regularization term in the loss function to promote the smoothness of the DNN model distributions.

Recent studies (reviewed by Papernot et al. (2016b)) are mostly empirical and provide little understanding of why an adversary can fool machine learning models with adversarial examples. Several important questions have not been answered yet:

What makes a classifier always robust to adversarial examples?
Which parts of a classifier influence its robustness against adversarial examples more, compared with the rest?
What is the relationship between a classifier's generalization accuracy and its robustness against adversarial examples?
Why are (many) DNN classifiers not robust against adversarial examples? How can they be improved?

This paper tries to answer the above questions and makes the following contributions:

Section 2 points out that previous definitions of adversarial examples for a classifier (f1) have overlooked the importance of an oracle function (f2) of the same task.
Section 3 formally defines when a classifier f1 is always robust ("strong-robust") against adversarial examples. It proves four theorems about sufficient and necessary conditions that make f1 always robust against adversarial examples according to f2. Our theorems lead to a number of interesting insights, like that the feature representation learning controls whether a DNN is strong-robust or not.
Section 12 is dedicated to providing practical and theoretically grounded directions for understanding and hardening DNN models against adversarial examples.

Shike Mei and Xiaojin Zhu. The security of latent dirichlet allocation. 2015a.

Shike Mei and Xiaojin Zhu. Some submodular data-poisoning attacks on machine learners. 2015b.

Takeru Miyato, Shin-ichi Maeda, and Masanori Koyama. Distributional smoothing with virtual adversarial training. ICLR'16, 2016.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. arXiv preprint arXiv:1511.04599, 2015.

Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR. IEEE, 2015.

Table 1: A list of important notations used in the paper. (Table 1 provides a list of important notations we use in the paper.)

Figure 1: Example of a machine-learning classifier (predictor) and a human annotator (oracle) for classifying images of hand-written "0". Both include two steps: feature extraction and classification. The upper half is about the learned machine classifier f1 and the lower half is about the oracle f2. f1 transforms samples from the original space X to an embedded metric space (X1, d1) using its feature extraction step. Here, d1 is the similarity measure on the feature space X1. Classification models like DNNs cover the feature extraction step within the model, though many other models, like decision trees, need hand-crafted or domain-specific feature extraction. Then f1 can use a linear function to decide the classification prediction y in Y. Similarly, the human oracle f2 transforms data samples from the original space X into an embedded metric space (X2, d2) by its feature extraction. Here, d2 is the corresponding similarity measure.

Various definitions of "adversarial examples" exist in the recent literature, with most following Eq. (2.1); see more detailed reviews in Section 8. The basic idea is to generate a misclassified sample
Then the oracle gets the classification result y in Y using the feature representation of samples in (X2, d2).

Arunesh Sinha, Debarun Kar, and Milind Tambe. Learning adversary behavior in security games: A PAC model perspective. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pp. 214-222. International Foundation for Autonomous Agents and Multiagent Systems, 2016.

Ben Stoddard, Yan Chen, and Ashwin Machanavajjhala. Differentially private algorithms for empirical machine learning. arXiv preprint arXiv:1411.5428, 2014.

William Uther and Manuela Veloso. Adversarial reinforcement learning. Technical report, Carnegie Mellon University, 1997. Unpublished.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Eric P. Xing, Michael I. Jordan, Stuart J Russell, and Andrew Y. Ng. Distance metric learning with application to clustering with side-information. In S. Becker, S. Thrun, and K. Obermayer (eds.), Advances in Neural Information Processing Systems 15, pp. 521-528. MIT Press, 2003.

x' by "slightly" perturbing a correctly classified sample x with an adversarial perturbation Delta(x, x'). Formally, when given x in X:

argmin_{x'} Delta(x, x')   s.t.   f1(x) != f1(x').   (2.1)

Here x, x' in X, and Delta(x, x') represents the difference between x and x', which depends on the specific data type that x and x' belong to1. Table 2 summarizes different choices of f1 and Delta(x, x') used in the recent literature, in which norm functions on the original space X are mostly used to calculate Delta(x, x'). Multiple algorithms have been implemented to solve Eq. (2.1) as a constrained optimization (summarized in the last column of Table 2). More details are included for three such studies in Section 8.2.

When searching for adversarial examples, one important property has not been fully captured by Eq. (2.1). That is, an adversarial example has been modified very slightly from its seed, and these modifications can be so subtle that, for example in image classification, a human observer does not even notice the modification at all. We define the role of the "human observer" more formally as follows.

Definition 2.1. An "Oracle" represents a decision process generating ground-truth labels for a task of interest. Each oracle is task-specific, with finite knowledge, and noise-free2.

1For example, in the case of strings, Delta(x, x') represents the difference between two strings.
2We leave all detailed analysis of when an oracle contains noise as future work.

Figure 2: An example showing that f1 with one unnecessary feature (according to f2) is prone to adversarial examples. The red circle denotes an adversarial example (e.g., generated by some attack similar to JSMA (Papernot et al. 2015a); details in Section 8.2).
Each adversarial example is very close to its seed sample in the oracle feature space (according to d2), but it is comparatively far from its seed sample in the feature space (according to d1) of the trained classifier and is on the other side of the decision boundary of f1. Essentially, "adversarial examples" can easily be found for all seed samples in this figure; we only draw cases for two seeds. Besides, for each seed sample we can generate a series of "adversarial examples" (by varying the attacking power) after the attacking line crosses the decision boundary of f1. We only show one such adversarial example for each seed sample.

Table 2: Summary of the previous studies defining adversarial examples.

Previous studies | f1 | Delta(x, x') | Formulation of f1(x) != f1(x')
(Goodfellow et al. 2014) | Convolutional neural networks | l_inf | argmax Loss(f1(x'), f1(x))
(Szegedy et al. 2013) | Convolutional neural networks | l_2 | argmin Loss(f1(x'), l), subject to: l != f1(x')
(Biggio et al. 2013) | Support vector machine (SVM) | l_2 | argmin Loss(f1(x'), -1), subject to: f1(x) = 1
(Kantchelian et al. 2015) | Decision tree and Random forest | l_2, l_1, l_inf | argmin Loss(f1(x'), -1), subject to: f1(x) = 1
(Papernot et al. 2016a) | Convolutional neural networks | l_0 | argmax Loss(f1(x'), f1(x))
(Grosse et al. 2016) | Convolutional neural networks | l_0 | argmax Loss(f1(x'), f1(x))
(Xu et al. 2016) | Random forest and SVM | l_1, l_inf | argmin Loss(f1(x'), -1), subject to: f1(x) = 1

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096-1103. ACM, 2008.

Pengtao Xie, Misha Bilenko, Tom Finley, Ran Gilad-Bachrach, Kristin Lauter, and Michael Naehrig. Crypto-nets: Neural networks over encrypted data. arXiv preprint arXiv:1412.6181, 2014.

Fei Zhang, Patrick PK Chan, Battista Biggio, Daniel S. Yeung, and Fabio Roli. Adversarial Feature Selection against Evasion Attacks. IEEE Transactions on Cybernetics, PP(1), 2015.

Stephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. Improving the robustness of deep neural networks via stability training. arXiv preprint arXiv:1604.04326, 2016.

The goal of machine learning is to train a learning-based predictor function f1 : X -> Y to approximate an oracle classifier f2 : X -> Y for the same classification task. For example, in image classification tasks, the oracle f2 is often a group of human annotators. Adding the notion of the oracle, we revise Eq. (2.1) into:

argmin_{x'} Delta2(x, x')   s.t.   f1(x) != f1(x'),  Delta2(x, x') < epsilon,  f2(x) = f2(x').   (2.2)

Delta2(x, x') < epsilon reflects that adversarial examples add "small modifications" that are almost imperceptible to the oracle of the task. Clearly, calculating Delta2(x, x') needs to accord with the oracle f2. For most classification tasks, an oracle does not measure the sample difference in the original input space X. We want to emphasize that sample difference is with regard to the classification purpose. For instance, when labeling images for hand-written digit recognition, human annotators do not need to consider the background pixels to decide whether an image is a "0" or not.

In Section 3 our theoretical analysis uses (X2, d2) to bring forth the fundamental causes of adversarial examples and leads to a set of novel insights for understanding such examples. To the best of the authors' knowledge, this theoretical analysis has not been uncovered by the literature.
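As a minimal sketch of the argmax-Loss formulations listed in Table 2 under an l_inf budget, the snippet below follows the spirit of the fast gradient sign method; `model` and `loss_fn` are hypothetical stand-ins for a trained f1 and its training loss:

```python
# Sketch: one-step adversarial search under an l_inf constraint (FGSM-style).
import torch

def fgsm_example(model, loss_fn, x, y, eps):
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # one ascent step of size eps per coordinate keeps ||x' - x||_inf <= eps
    return (x + eps * x.grad.sign()).detach()
```

Note that the constraint here is a norm on the original space X, which, as argued below, is only a crude proxy for the oracle's distance d2.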
Modeling Oracle f2: One may argue that it is hard to model f2 and (X2, d2) for real applications, since if such oracles could be easily modeled, a machine-learning based f1 would seem unnecessary. In Section 8.3, we provide examples of modeling oracles for real applications. For many security-sensitive applications about machines, oracles f2 do exist3. For artificial intelligence tasks like image classification, humans are f2. As illustrated by cognitive neuroscience papers (DiCarlo & Cox 2007; DiCarlo et al. 2012), human brains perform visual object recognition using the ventral visual stream, and this stream is considered to be a progressive series of visual re-representations, from V1 to V2 to V4 to the IT cortex (DiCarlo & Cox 2007). Experimental results support that the human visual system makes its classification decision at the final IT cortex layer. This process is captured exactly by our decomposition f2 = c2 o g2.

3Oracles f2 do exist in many security-sensitive applications about machines. But machine-learning classifiers f1 are popularly used due to speed or efficiency.

Illustrated in Figure 1, we denote the feature space an oracle uses to consider differences among samples for the purpose of classification decisions as X2. The sample difference uses a distance function d2 in this space. An oracle function f2 : X -> Y can be decomposed as f2 = c2 o g2, where g2 : X -> X2 represents the operations for feature extraction from X to X2 and c2 : X2 -> Y denotes the simple operation of classification in X2. Essentially, g2 includes the operations that (progressively) transform the input representation into an informative form of representations X2, and c2 applies relatively simple functions (like linear ones) in X2 for the purpose of classification. d2 is the metric function (details in Section 3) an oracle uses to measure the similarity among samples (by relying on the representations learned in the space X2). We illustrate the modeling and decomposition in Figure 1.

Definition 2.2. Adversarial example: Suppose we have two functions f1 and f2. f1 : X -> Y is the classification function learned from a training set and f2 : X -> Y is the classification function of the oracle that generates ground-truth labels for the same task. Given a sample x in X, an adversarial example x' in X satisfies Eq. (2.3):

argmin_{x'} Delta2(x, x')   s.t.   f1(x) != f1(x'),  d2(g2(x), g2(x')) < delta2,  f2(x) = f2(x').   (2.3)

Most previous studies (Table 2) have made an important and implicit assumption about f2 (through using Delta(x, x') < epsilon): f2 is almost everywhere (a.e.) continuous. We explain the a.e. continuity assumption and its implications in Section 9. Basically, when f2 is assumed continuous a.e.,

P(f2(x) = f2(x') | d2(g2(x), g2(x')) < delta2) = 1,

and Eq. (2.3) simplifies to

argmin_{x'} Delta2(x, x')   s.t.   f1(x) != f1(x'),  d2(g2(x), g2(x')) < delta2.   (2.4)

3.1 MODELING AND DECOMPOSING f1

As shown in Figure 1, we decompose f1 in a similar way as the decomposition of f2. This is to answer another key question: "which parts of a learned classifier influence its robustness against adversarial examples more, compared with the rest?" A machine-learning classifier f1 = c1 o g1, where g1 : X -> X1 represents the feature extraction operations and c1 : X1 -> Y performs a simple operation (e.g., linear) of classification. Section 8.4 provides multiple examples of decomposing state-of-the-art f1. d1 denotes the distance function f1 uses to measure the difference among samples in X1.

Almost all popular machine learning classifiers satisfy the a.e. continuity assumption. It means:

P(f1(x) = f1(x') | d1(g1(x), g1(x')) < delta1) = 1.

For the rare cases in which f1 is not continuous a.e., Section 11 discusses "boundary points" of f1. Roughly speaking, when f1 is not continuous a.e., the probability

P(f1(x) != f1(x') | f2(x) = f2(x'), d1(g1(x), g1(x')) < delta1, d2(g2(x), g2(x')) < delta2)   (3.1)

of boundary-point attacks is not negligible.

3.2 {delta2, eta}-STRONG-ROBUST AGAINST ADVERSARIAL EXAMPLES

We then apply reverse thinking to Definition (2.2) and derive the following definition of strong-robustness for a machine-learning classifier against adversarial examples:

Definition 3.1. {delta2, eta}-Strong-robustness of a machine-learning classifier: A machine-learning classifier f1(.) is {delta2, eta}-strong-robust against adversarial examples if, for all x, x' in X a.e., (x, x') satisfies Eq. (3.2):

P(f1(x) = f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < delta2) > 1 - eta.   (3.2)
Eq. (3.2) defines "{delta2, eta}-strong-robustness" as a high-probability claim. To simplify notation, in the rest of this paper we use "strong-robust" to mean "{delta2, eta}-strong-robust". Also, in the rest of this paper we propose and prove theorems and corollaries in the more general form of Eq. (3.2). For all cases, if f2 is continuous a.e., all proofs and equations can be simplified by using only the term d2(g2(x), g2(x')) < delta2 (i.e., removing the term f2(x) = f2(x')), according to Eq. (3.3):

P(f2(x) = f2(x') | d2(g2(x), g2(x')) < delta2) = 1.   (3.3)

The "strong-robustness" definition leads to four important theorems in the next two subsections.

3.3 TOPOLOGICAL EQUIVALENCE OF TWO METRIC SPACES (X1, d1) AND (X2, d2) IS SUFFICIENT IN DETERMINING STRONG-ROBUSTNESS

In the appendix, Section 10.1 briefly introduces the concept of a metric space and the definition of topological equivalence between two metric spaces. As shown in Figure 1, f1 defines a metric space (X1, d1) on X1 with the metric function d1. Similarly, f2 defines a metric space (X2, d2) on X2 with the metric function d2.

If topological equivalence (Eq. (10.1)) exists between (X1, d1) and (X2, d2), it means that for all pairs of samples from X we have the following relationship:

d1(g1(x), g1(x')) < delta1  <=>  d2(g2(x), g2(x')) < delta2.   (3.4)

When f1 is continuous a.e., this gives us the following important theorem, indicating that topological equivalence between (X1, d1) and (X2, d2) is a sufficient condition for determining whether or not f1 is strong-robust against adversarial examples:

Theorem 3.2. When f1 is continuous a.e., if (X1, d1) and (X2, d2) are topologically equivalent, then the learned classifier f1(.) is strong-robust against adversarial examples.

For more general cases, including when f1 might not be continuous a.e., we need to consider the probability of boundary-point attacks (Eq. (3.1)). Therefore, we get a more general theorem as follows:

Theorem 3.3. When f1 is not continuous a.e., if (X1, d1) and (X2, d2) are topologically equivalent and the boundary-point probability in Eq. (3.1) is smaller than eta, then P(f1(x) = f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < delta2) > 1 - eta, i.e., f1 is strong-robust against adversarial examples.

3.4 FINER TOPOLOGY OF (X, d1') THAN (X, d2') IS SUFFICIENT AND NECESSARY IN DETERMINING STRONG-ROBUSTNESS

Now we extend the discussion from two metric spaces to two pseudometric spaces. This extension finds the sufficient and necessary condition that determines the strong-robustness of f1. The related two pseudometrics are d1' (for f1) and d2' (for f2), both defined directly on X. Appendix Section 10.2 includes detailed descriptions of pseudometrics, pseudometric spaces, topologies, and the finer-topology relationship between two pseudometric spaces. When the topology in (X, d1') is finer than the topology in (X, d2'), we have:

For all x, x' in X:  d2(g2(x), g2(x')) < delta2  =>  d1(g1(x), g1(x')) < delta1.   (3.7)

Using Eq. (3.7) and the continuity a.e. assumption, we can derive the following theorem about the sufficient and necessary condition for f1 being strong-robust:

Theorem 3.4. When f1 is continuous a.e., f1 is strong-robust against adversarial examples if and only if the topology in (X, d1') is a finer topology than the topology in (X, d2').
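The condition in Theorem 3.4 suggests a simple empirical surrogate: over sampled pairs that the oracle considers close (d2 small), measure how often the classifier's feature distance d1 is also small. The sketch below is only an illustration of this estimate; `g1`, `g2` and the thresholds are stand-ins, since d2 is usually not available in analytic form:

```python
# Sketch: empirical rate at which d2-closeness implies d1-closeness (Eq. 3.7).
import numpy as np

def finer_topology_rate(g1, g2, xs, xs_prime, delta1, delta2):
    """xs, xs_prime: batches of paired samples; g1, g2: feature maps."""
    d1 = np.linalg.norm(g1(xs) - g1(xs_prime), axis=1)
    d2 = np.linalg.norm(g2(xs) - g2(xs_prime), axis=1)
    close_for_oracle = d2 < delta2
    if not close_for_oracle.any():
        return float("nan")
    return float(np.mean(d1[close_for_oracle] < delta1))
```

A rate well below 1 indicates pairs that the oracle treats as near-identical while f1's features separate them, which is exactly the opening exploited by adversarial examples.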
When f1 is not continuous a.e., we need to consider the probability of boundary-point-based adversarial examples (Eq. (3.1)). For such a case, we get a sufficient condition8 for strong-robustness:

Theorem 3.5. When f1 is not continuous a.e., if the topology in (X, d1') is a finer topology than the topology in (X, d2') and P(f1(x) != f1(x') | f2(x) = f2(x'), d1(g1(x), g1(x')) < delta1, d2(g2(x), g2(x')) < delta2) < eta, then f1 is strong-robust against adversarial examples.

When f1 is not continuous a.e., its strong-robustness is significantly influenced by its boundary points and therefore relates to the c1 function. Section 11.2 provides some discussion, and we omit covering such cases in the rest of this paper.

8When f1 is not continuous a.e., it is difficult to find the necessary and sufficient condition for the strong-robustness of f1. We leave this to future research."}, {"section_index": "5", "section_name": "TOWARDS PRINCIPLED UNDERSTANDING", "section_text": "4.1 UNNECESSARY FEATURES RUIN STRONG-ROBUSTNESS

Theorem (3.2) and Theorem (3.4) indicate that when f1 is continuous a.e., the two feature spaces (X1, d1) and (X2, d2), or the functions g1 and g2, determine the strong-robustness of f1. Based on Theorem (3.4), we can derive a corollary as follows (proof in Section 10.3.1):

Corollary 4.1. When f1 is continuous a.e., if X1 = R^{n1}, X2 = R^{n2}, n1 > n2, X2 is a subset of X1, and d1, d2 are norm functions, then f1(.) is not strong-robust against adversarial examples.

This corollary shows that if unnecessary features (with regard to X2) are selected in the feature selection step, then no matter how accurately the model is trained, it is not strong-robust to adversarial examples.

Figure 2 shows a situation in which the oracle for the current task only needs one feature to classify samples correctly. A machine-learning classifier extracts two features, with one used by the oracle and the other an extra, unnecessary feature9. In X1, f1 (actually c1) successfully classifies all the test inputs. However, it is very easy to find adversarial examples satisfying Eq. (2.4) by only adding a small perturbation along the unnecessary feature dimension. In Figure 2, red circles show a few such adversarial examples. The adversarial examples are very close to the seed samples in the oracle space, but they are predicted into a different class by f1.

9Two features of X1 actually positively correlate in Figure 2. However, the oracle does not need to use the second feature for making its classification decision.

For many security-sensitive applications, previous studies using state-of-the-art learning-based classifiers normally believe that adding more features is always helpful. Apparently, our corollary indicates that this thinking is wrong and can lead to classifiers vulnerable to adversarial examples (Xu et al. 2016).

Table 3: Summary of theoretical conclusions that we can derive. Here X1 = R^{n1} and X2 = R^{n2}. The strong-robustness is determined by the feature extraction function g1. The accuracy is determined by both the classification function c1 and the feature extraction function g1.

Cases (d1 & d2 are norms) | Strong-robust? | Can be accurate? | Based on | Illustration
(I) X1\(X1 n X2) != 0, X2\(X1 n X2) != 0 | Not strong-robust | may not be accurate | Theorem (3.4) | Figure 2
(II) n1 > n2, X2 subset of X1 | Not strong-robust | may be accurate | Corollary (4.1) | Figure 2
(III) n1 = n2, X1 = X2 | Strong-robust | may be accurate | Corollary (4.2) | Figure 4
(IV) n1 < n2, X1 subset of X2 | Strong-robust | may not be accurate | Theorem (3.4) | Figure 5

The four theorems proposed above lead to a set of key insights about why and how an adversary can fool a machine-learning classifier using adversarial examples.
One of the most valuable insights is: the feature learning step decides whether a predictor is strong-robust or not in an adversarial test setting. All the discussions in this subsection assume f1 is continuous a.e.

Using Theorem (3.3), we obtain another corollary as follows (proof in Section 10.3.1):

Corollary 4.2. When f1 is continuous a.e., if X1 = X2 and d1, d2 are norm functions, then f1(.) is strong-robust against adversarial examples.

This corollary shows that if a learned classifier and its oracle share the same derived feature space (X1 = X2), the learned classifier is strong-robust when the two metrics are both norm functions (even if not the same norm). We can call this corollary "norm doesn't matter".

Many interesting phenomena can be explained by Corollary (4.2). For instance, for a norm-regularized classifier, this corollary answers an important question: whether a different norm function will influence its robustness against adversarial examples. The corollary indicates that changing to a different norm function may not improve the robustness of the model under adversarial perturbation.

Summarizing Theorem (3.2), Theorem (3.4), Corollary (4.2) and Corollary (4.1), the robustness of a learned classifier is decided by two factors: (1) the difference between the two derived feature spaces, and (2) the difference between the metric functions. The two corollaries show that the difference between the feature spaces is more important than the difference between the two metric functions.

4.3 ROBUSTNESS AND GENERALIZATION

In Table 3 we provide four situations in which the proposed theorems can be used to determine whether a classifier f1 is strong-robust against adversarial examples or not:

Case (I): If f1 uses some unnecessary features, it will not be strong-robust to adversarial examples. It may not be an accurate predictor if f1 misses some necessary features used by f2.
Case (II): If f1 uses some unnecessary features, it will not be strong-robust to adversarial examples. It may be an accurate predictor if f1 uses all the features used by f2.
Case (III): If f1 and f2 use the same set of features and nothing else, f1 is strong-robust and may be accurate.
Case (IV): If f1 misses some necessary features and does not extract unnecessary features, f1 is strong-robust (even though its accuracy may not be good).

Table 3 provides a much better understanding of the relationship between robustness and accuracy. Two interesting cases from Table 3 are worth emphasizing again: (1) If f1 misses features used by f2 and does not include unnecessary features (according to X2), f1 is strong-robust (even though it may not be accurate). (2) If f1 extracts some extra unnecessary features, it will not be strong-robust (though it may be a very accurate predictor).

We want to emphasize that "f1 is strong-robust" does not mean it is a good classifier. For example, a trivially strong-robust model is f1(x) = 1 for all x in X. However, it is a useless model, since it doesn't have any prediction power. In an adversarial setting, we should aim for a classifier that is both strong-robust and precise. A better feature learning function g1 is exactly the solution that may achieve both goals.

Table 3 indicates that c1 and c2 do not influence the strong-robustness of f1 when f1 is continuous a.e.10. Figure 4 and Figure 5 further show two concrete example cases in which f1 is strong-robust according to f2. However, in both figures, f1 is not accurate according to f2.

10When f1 is not continuous a.e., c1 matters for the strong-robustness. See Section 11 for details.

As another example, multiple DNN studies about adversarial examples claim that adversarial examples are transferable among different DNN models. This can be explained by Figure 2 (when X1 is a much higher-dimensional space). Since different DNN models learn over-complete feature spaces {X1}, there is a high chance that these different X1 involve a similar set of unnecessary features (e.g., the different learned features are correlated with each other). Therefore the adversarial examples are generated along similar gradient directions. That is why many such samples can evade multiple DNN models.
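A toy rendering of the Figure 2 / Corollary 4.1 situation makes the mechanism concrete: the oracle depends on feature 0 only, while f1 additionally weighs an unnecessary feature 1, so a small step along that second dimension flips f1 without changing the oracle's label. The weights below are arbitrary illustration values:

```python
# Toy sketch: one unnecessary feature breaks strong-robustness.
import numpy as np

oracle = lambda x: np.sign(x[0])            # f2: depends on feature 0 only
f1 = lambda x: np.sign(x[0] + 4.0 * x[1])   # f1: also uses feature 1

x = np.array([0.5, 0.0])
x_adv = x + np.array([0.0, -0.2])           # small step in the unnecessary direction

print(oracle(x) == oracle(x_adv))  # True: the oracle's label is unchanged
print(f1(x), f1(x_adv))            # 1.0 -1.0: f1 is fooled
```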
Summarizing Theorem (3.2), Theorem (3.4), Corollary (4.1) and Corollary (4.2), the robustness of a learned classifier is decided by two factors: (1) the difference between the two derived feature spaces; and (2) the difference between the metric functions. The two corollaries show that the difference between the feature spaces is more important than the difference between the two metric functions.

- Case (I): If f1 uses some unnecessary features, it will not be strong-robust to adversarial examples. It may not be an accurate predictor if f1 misses some necessary features used by f2.
- Case (II): If f1 uses some unnecessary features, it will not be strong-robust to adversarial examples. It may be an accurate predictor if f1 uses all the features used by f2.
- Case (III): If f1 and f2 use the same set of features and nothing else, f1 is strong-robust and may be accurate.
- Case (IV): If f1 misses some necessary features and does not extract unnecessary features, f1 is strong-robust (even though its accuracy may not be good).

Table 4: Connecting to relevant DNN hardening solutions. The experimental results comparing different hardening solutions are shown in Figure 9, Figure 10, Table 10 and Table 11.

Method | x' | Loss Lf1(x, x') | On layer
Stability training (Zheng et al., 2016) | random perturbation | KL(f1(x), f1(x')) | classification layer
(Miyato et al., 2016) | adversarial perturbation | KL(f1(x), f1(x')) | classification layer
Adversarial training (Goodfellow et al., 2014) | adversarial perturbation | L(f1(x'), f2(x)) | loss function
Large adversarial training (Kurakin et al., 2016) | adversarial perturbation | L(f1(x'), f2(x)) | loss function
(Lee et al., 2015) | adversarial perturbation | ‖g1(x) − g1(x')‖2 | layer before classification layer
Siamese training | random perturbation | ‖g1(x) − g1(x')‖2 | layer before classification layer

For a DNN, it is difficult to derive a precise analytic form of d1 (or d'1). But we can observe some properties of d1 (and d'1) through experimental results. Table 5, Table 6, Table 7 and Table 8 show properties of d1 (and d'1) resulting from testing experiments on four state-of-the-art DNN networks (details in Section 12.1). All four tables indicate that the accuracy of DNN models in the adversarial setting is quite bad. The performance on randomly perturbed inputs is much better than the performance on maliciously perturbed adversarial examples.

5.2 TOWARDS PRINCIPLED SOLUTIONS

Our theorems suggest a list of possible solutions that may improve the robustness of DNN classifiers against adversarial samples, including the following options.

By learning a better g1: Methods like DNNs directly learn the feature extraction function g1. Table 4 summarizes multiple hardening solutions (Zheng et al., 2016; Miyato et al., 2016; Lee et al., 2015) in the DNN literature. They mostly aim to learn a better g1 by minimizing different loss functions Lf1(x, x') so that when d2(g2(x), g2(x')) < ε (approximated by (X, ‖·‖)), this loss Lf1(x, x') is small. Two major variations exist among related methods: the choice of Lf1(x, x') and the way pairs (x, x') are generated. For instance, to reach the strong-robustness we can force the learning of a g1 that helps (X, d'1) to be a finer topology than (X, d'2). Section 12.4 explores this option ("Siamese training" in Table 4) through the Siamese architecture. Experimentally, Section 12.5 compares adversarial training, stability training and Siamese training on two state-of-the-art DNN image-classification tasks through performance against adversarial samples (details in Section 12.5). The hardening effects of these strategies vary from task to task; however, they all improve the base DNN models' performance in the adversarial setting.
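To make the Lf1(x, x') column of Table 4 concrete, here is a minimal PyTorch sketch of the two loss families it lists: a KL term on the classification layer (the stability-training rows) and an ℓ2 term on the layer before classification (the Siamese rows). The model objects and the noise scale are hypothetical placeholders, and in training either term would be added to the usual classification loss with a weighting coefficient.

```python
import torch
import torch.nn.functional as F

def stability_loss(f1, x, sigma=0.04):
    """KL(f1(x) || f1(x')) with x' a random perturbation of x
    (the Zheng et al., 2016 row of Table 4, as a sketch)."""
    x_prime = x + sigma * torch.randn_like(x)
    log_p = F.log_softmax(f1(x), dim=1)        # classification layer outputs
    log_q = F.log_softmax(f1(x_prime), dim=1)
    return F.kl_div(log_q, log_p.exp(), reduction="batchmean")

def siamese_loss(g1, x, sigma=0.04):
    """|| g1(x) - g1(x') ||_2 on the layer before classification
    (the Siamese-training row of Table 4, as a sketch)."""
    x_prime = x + sigma * torch.randn_like(x)
    return (g1(x) - g1(x_prime)).pow(2).sum(dim=1).sqrt().mean()
```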
Our theoretical analysis uncovers fundamental properties that explain adversarial examples. In this section, we apply them to analyze DNN classifiers. More specifically, (i) we find that DNNs are not strong-robust against adversarial examples; and (ii) we connect to possible hardening solutions and introduce a principled understanding of these solutions.

The phenomenon we observed can be explained by Figure 3. Comparing the second column and the third column in the four tables, we can conclude that d1 (and d'1) in a random direction is larger than d1 (and d'1) in the adversarial direction. This indicates that a round sphere in (X1, d1) (and (X, d'1)) corresponds to a very thin high-dimensional ellipsoid in (X, ‖·‖) (illustrated by the left half of Figure 3). Figure 3 (I) shows a sphere in (X, d'1) and Figure 3 (III) shows a sphere in (X1, d1); they correspond to the very thin high-dimensional ellipsoid in (X, ‖·‖) in Figure 3 (V). The norm function ‖·‖ is defined in the space X and is application-dependent. All four tables use ‖·‖ = ‖·‖∞.

Differently, for human oracles, a sphere in (X, d'2) (shown in Figure 3 (II)) or in (X2, d2) (shown in Figure 3 (IV)) corresponds to an ellipsoid in (X, ‖·‖) that does not include very thin directions (shown in Figure 3 (VI)). When the attackers try to minimize the perturbation size using the approximated distance function d2 = ‖·‖, the thin direction of the ellipsoid in Figure 3 (V) is exactly the adversarial direction.

[Figure 3 graphic: panels (I) (X, d'1) and (II) (X, d'2), "not a finer topology"; panels (III) (X1, d1) and (IV) (X2, d2), "not topologically equivalent"; panels (V) and (VI), ellipsoids in (X, ‖·‖) with the adversarial direction marked.]

Figure 3: This figure shows one situation in which (X, d'1) is not a finer topology than (X, d'2) (therefore (X1, d1) and (X2, d2) are not topologically equivalent). According to Theorem (3.4), in this case the DNN is vulnerable to adversarial examples. The two sample points a and b are close with regards to (w.r.t.) a norm ‖·‖ in X. They are also close w.r.t. d2 in the (X2, d2) space and close w.r.t. d'2 in the (X, d'2) space. But they are far from each other in the space (X, d'1) and in the space (X1, d1). In other words, while d2(a, b), d'2(a, b) and ‖a − b‖ are small, d1(a, b) and d'1(a, b) are large. Clearly, the DNN can be easily evaded by adding a small perturbation a − b to sample a or sample b. NOTE: it is normally difficult to get the analytic form of (X2, d2) for most applications. Most previous studies (reviewed in Section 2.2) assume (X2, d2) equals (X, ‖·‖), where ‖·‖ is a norm function.

By modifying unnecessary features: As shown by Table 3, unnecessary features ruin the strong-robustness of learning-based classifiers. A simple way to remove the unrelated features is to identify which features are unnecessary. In (Gao et al., 2017), the authors compare the difference between g1(x') and g1(x) from a DNN. They hypothesize that those learned DNN feature dimensions (in X1) that change rapidly are utilized by an adversary, and thus can be removed to improve the robustness of the DNN model. Another efficient method is to group different values of a feature into several equivalence classes; in this way, adversarial perturbations along the unnecessary feature dimensions can be squeezed away by projecting them into the same equivalence class. A recent study (Li & Vorobeychik, 2014) explored a similar strategy, using an equivalent-feature-group to replace each word feature in a group, in order to improve the robustness of spam-email classifiers against evasion attacks.
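The equivalence-class idea in the paragraph above can be sketched as a projection applied before the classifier: any two inputs whose difference stays inside a class collapse to the same point, so a perturbation along such a dimension is squeezed away. The bit depth below is a hypothetical choice of ours, not a value from the cited studies.

```python
import numpy as np

def squeeze(x, bits=3):
    """Project every feature onto one of 2**bits equivalence classes
    (inputs assumed rescaled to [0, 1])."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

x     = np.array([0.50, 0.72, 0.13])
x_adv = x + np.array([0.03, -0.04, 0.05])      # small adversarial tweak
# Both land in the same equivalence classes, so a classifier applied
# after squeeze() sees identical inputs.
print(np.allclose(squeeze(x), squeeze(x_adv)))  # True
```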
Adversarial examples are maliciously created inputs that lead a learning-based classifier to produce incorrect output labels. An adversarial example is often generated by adding small perturbations that appear unmodified to human observers. Recent studies that have tried to analyze classifiers under adversarial examples are mostly empirical and provide little understanding of why such examples exist. To fill the gap, we propose a theoretical framework for analyzing machine learning classifiers, especially deep neural networks (DNNs), against such examples. This paper is divided into three parts. The first section provides a revised definition of adversarial examples by taking into account the oracle of the task. The second section defines strong-robustness and provides a principled understanding of what makes a classifier strong-robust. The third section examines practical and theoretically grounded directions for understanding and hardening DNN models against adversarial examples. Future steps will include an empirical comparison to analyze recent literature using our theorems.

RELATED WORKS IN A BROADER CONTEXT

In the broader secure machine learning field, researchers have also made attempts at hardening learning systems. For instance: (1) (Barreno et al., 2010) and (Biggio et al., 2008) propose methods to introduce some randomness into the selection of classification boundaries. (2) A few recent studies (Xiao et al., 2015; Zhang et al., 2015) consider the impact of using reduced feature sets on classifiers under adversarial attacks. (Xiao et al., 2015) proposes an adversary-aware feature selection model that can improve a classifier's robustness against adversarial attacks by incorporating specific assumptions about the adversary's data manipulation strategy. (3) Another line of work, named adversarial training (Goodfellow et al., 2014), designs a new loss function for training neural networks, which is a linear interpolation of the loss function of the original sample and the loss function of the adversarial example generated from the original sample. A scalable version of adversarial training (Kurakin et al., 2016) was recently proposed; by applying several tricks, the authors can apply adversarial training to deeper networks trained on the ImageNet dataset. (4) Multiple studies model adversarial scenarios with formal frameworks representing the interaction between the classifier and the adversary.
Related efforts include perfect-information assumptions (Dalvi et al., 2004), assuming a polynomial number of membership queries (Lowd & Meek, 2005), formalizing the attack process as a two-person sequential Stackelberg game (Bruckner & Scheffer, 2011; Liu & Chawla, 2010), a min-max strategy (training a classifier with the best performance under the worst perturbation) (Dekel et al., 2010; Globerson & Roweis, 2006), exploring online and non-stationary learning (Dahlhaus, 1997; Cesa-Bianchi & Lugosi, 2006), and formalizing the problem as adversarial reinforcement learning (Uther & Veloso, 1997). (5) A PAC-model study about learning adversary behavior in security games also investigated the solution of computing the best defender strategy against the learned adversary behavior.

Investigating the behavior of machine learning systems in adversarial environments is an emerging topic (Huang et al., 2011; Barreno et al., 2006; 2010; Globerson & Roweis, 2006; Biggio et al., 2013; Kantchelian et al., 2015; Zhang et al., 2015). Recent studies can be roughly categorized into three types. (1) Poisoning attacks, in which specially crafted attack points are injected into the training data. Multiple recent papers (Alfeld et al., 2016; Mei & Zhu, 2015b; Biggio et al., 2014; 2012; Mei & Zhu, 2015a) have considered the problem of an adversary being able to pollute the training data with the goal of influencing learning systems, including support vector machines (SVMs), autoregressive models and topic models. (2) Evasion attacks, in which the adversary's goal is to create inputs that are misclassified by a deployed target classifier. Related studies (Szegedy et al., 2013; Goodfellow et al., 2014; Xu et al., 2016; Kantchelian et al., 2015; Rndic & Laskov, 2014; Biggio et al., 2013; Papernot et al., 2016b; Sinha et al., 2016) assume the adversary does not have an opportunity to influence the training data, but instead finds "adversarial examples" to evade a trained classifier such as a DNN, an SVM or a random forest. (3) Privacy-aware machine learning (Duchi et al., 2014) is another important category relevant to data security in machine learning systems. Recent studies have proposed various strategies (Xie et al., 2014; Bojarski et al., 2014; Stoddard et al., 2014; Li & Zhou, 2015; Rajkumar & Agarwal, 2012; Dwork, 2011; Nock et al., 2015) to preserve the privacy of data, such as differential privacy. This paper focuses on evasion attacks, which are mostly used to attack classifiers that try to distinguish malicious behaviors from benign behaviors. Here we extend the notion to a broader meaning: adversarial manipulation of test samples. Evasion attacks may be encountered during system deployment of machine learning methods in adversarial settings.

[Figure 4 graphic: three test-sample cases, (a) accurate prediction with f1(x) = f1(x') when d2(x, x') < ε, (b) inaccurate prediction with f1(x) = f1(x'), (c) f1(x) ≠ f1(x') with P(f1(x) ≠ f1(x') | d2(x, x') < ε) = 0; the machine classifier f1 in X1 versus the oracle f2 in X2.]

Figure 4: An example figure illustrating Table 3 Case (III) when f1 is strong-robust. We assume c1 and c2 are linear classification functions. We show one case of X1 = X2 = R2 where f1, f2 are continuous a.e. In terms of classification, f1 (green boundary line) is not accurate according to f2 (red boundary line). All pairs of test samples (x, x') can be categorized into the three cases shown in this figure.
Test-case (a): f1 and f2 assign the same classification label (yellow circle) to x and x'; x and x' are predicted as the same class by both. Test-case (b): f1 assigns the class "blue square" to both x and x'; f2 assigns the class "yellow circle" to both x and x'. Test-case (c): f2 assigns the class "yellow circle" to both x and x'. However, f1 assigns the class "blue square" to x and a different class, "yellow circle", to x'. This case has been explained in Section 11.

[Figure 5 graphic: three test-sample cases, (a) accurate prediction, (b) inaccurate prediction, (c) f1(x) ≠ f1(x') with P(f1(x) ≠ f1(x') | d2(x, x') < ε) = 0; the machine classifier f1 in X1 versus the oracle f2 in X2.]

Figure 5: An example figure illustrating Table 3 Case (IV) when f1 is strong-robust. We assume c1 and c2 are linear classification functions. We show one case of 1 = n1 < n2 = 2, X1 ⊂ X2, where f1, f2 are continuous a.e. In terms of classification, f1 (green boundary line) is not accurate according to f2 (red boundary line). All pairs of test samples (x, x') can be categorized into the three cases shown in this figure. Test-case (a): f1 and f2 assign the same classification label (yellow circle) to x and x'; x and x' are predicted as the same class by both. Test-case (b): f1 assigns the class "yellow circle" to both x and x'; f2 assigns the class "blue square" to both x and x'. Test-case (c): f2 assigns the class "yellow circle" to both x and x'. However, f1 assigns the class "blue square" to x and a different class, "yellow circle", to x'. This case is explained in Section 11.

For the purpose of "fooling" a classifier, the attacker naturally wants to control the size of the perturbation Δ(x, x') to ensure the perturbed sample x' still stays close enough to the original sample x to satisfy the intended "fooling" purpose. For example, in the image classification case, Eq. (2.1) can use the gradient information to find a (x, x') such that human annotators still recognize x' as almost the same as x, though the classifier will predict x' into a different class. In another example with more obvious security implications, about PDF malware (Xu et al., 2016), x' in Eq. (2.1) is found by genetic programming. A PDF file modified from a malicious PDF seed will still be recognized as malicious by an oracle machine (i.e., a virtual machine that decides whether a PDF file is malicious by actually running it), but is classified as benign by state-of-the-art machine learning classifiers (Xu et al., 2016).

x' = argmin_{x' ∈ X} Δ(x, x')
Subject to: f1(x) ≠ f1(x')    (8.1)

Besides, in the field of computer security, machine learning has been popular for classifying malicious (y = 1) behavior versus benign (y = −1) behavior. For such a context, two different definitions of adversarial examples exist in the literature. For instance, (Biggio et al., 2013) uses a formula as follows:

argmin_{x'} f1(x')  s.t. Δ(x, x') < dmax, f1(x) > 0

The other definition minimizes the perturbation size instead:

argmin_{x'} Δ(x, x')  s.t. f1(x') < 0, f1(x) > 0

Here dmax is a small positive constant. These definitions of "adversarial examples" are special cases of Eq. (8.1) and Eq. (2.1).

To fool classifiers at test time, several approaches have been implemented to generate "adversarial perturbations" by solving Eq. (2.2). According to Eq. (2.2), an adversarial example should be able to change the classification result f1(x), which is a discrete value. To solve Eq. (2.2), we need to transform the constraint f1(x) ≠ f1(x') into an optimizable formulation. Then we can easily use a Lagrangian multiplier to solve Eq. (2.2), e.g.,

min_r ( c · d2(x, x + r) + Loss(f1(x + r), l) ),  x + r ∈ [0, 1]^p

Here p is the total number of features (for an image classification task, it is 3 times the total number of pixels of an RGB image), c is a term added for the Lagrange multiplier, and l is a target label, which is different from the original label. The constraint x + r ∈ [0, 1]^p means that the adversarial example still lies in the range of the sample space.
All the previous studies define a loss function Loss(·, ·) to quantify the constraint f1(x) ≠ f1(x'). This loss function can be the same as the training loss, or it can be chosen differently, such as a hinge loss or a cross-entropy loss.

We summarize four common attacking studies as follows:

Gradient ascent method (Biggio et al., 2013). Machine learning has been popular for classifying malicious (y = 1) versus benign (y = −1) behavior in computer security tasks. For such contexts, a simple way to solve Eq. (2.2) is through gradient ascent. To minimize the size of the perturbation and maximize the adversarial effect, the perturbation should follow the gradient direction (i.e., the direction providing the largest increase of the function value, here from y = −1 towards 1). Therefore, the perturbation r in each iteration is calculated as a step along the gradient ∇x f1(x).

Eq. (8.1) tries to find the x' minimizing Δ(x, x') under some constraints. Eq. (2.1) is a more general formulation than Eq. (8.1) and can summarize most relevant studies. For example, in (Xu et al., 2016), "adversarial examples" are those generated PDFs that can fool PDFRate (a learning-based classifier for detecting malicious PDFs) into classifying them as benign. The distances of these variant PDFs to the seed PDF are not necessarily minimal. For such cases, Eq. (2.1) still fits, while Eq. (8.1) does not.

Box-constrained formulation (Szegedy et al., 2013). (Szegedy et al., 2013) searches for a minimal distortion by minimizing the Lagrangian-relaxed objective given above, c · d2(x, x + r) + Loss(f1(x + r), l), under the box constraint x + r ∈ [0, 1]^p.

Fast gradient sign method (Goodfellow et al., 2014). The fast gradient sign method proposed by (Goodfellow et al., 2014) views d2 as the ℓ∞-norm. In this case, a natural choice is to make the attack strength at every feature dimension the same. The perturbation is obtained by solving the following equation:

x' = x + ε · sign(∇x Loss(f1(x), y))

where y is the label of x. Here the loss function is the one used to train the neural network. A recent paper (Kurakin et al., 2016) shows that adversarial examples generated by the fast gradient sign method are misclassified even after these images have been recaptured by cameras.

Jacobian-based saliency map approach (Papernot et al., 2015a). (Papernot et al., 2015a) proposed the Jacobian-based saliency map approach (JSMA) to search for adversarial samples while limiting the number of pixels to modify in the image. As a targeted attack, JSMA iteratively perturbs pixels in an input that have large adversarial saliency scores. The adversarial saliency map is calculated from the Jacobian (gradient) matrix ∇x f1(x) of the DNN model at the current input x. The (i, j)-th component of the Jacobian matrix ∇x f1(x) describes the derivative of output class j with respect to input feature i. For each pixel i, its adversarial saliency score is calculated to reflect how much this pixel will increase the output score of class j versus changing the scores of the other possible output classes. The process is repeated until misclassification into the target class is achieved or the maximum number of perturbed pixels has been reached. Essentially, JSMA optimizes Equation (2.2) by measuring the perturbation Δ(x, x') through the ℓ0-norm.
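A minimal numpy sketch of the fast gradient sign method on a softmax-regression f1 (so the gradient can be written in closed form); the toy weights and ε are hypothetical, and for a real DNN the gradient would come from backpropagation instead.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, eps):
    """x' = x + eps * sign(grad_x Loss(f1(x), y)) for softmax regression;
    the cross-entropy gradient w.r.t. x is W^T (p - onehot(y))."""
    p = softmax(W @ x + b)
    grad_x = W.T @ (p - np.eye(len(b))[y])
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(1)
W, b = rng.normal(size=(3, 8)), np.zeros(3)
x = rng.uniform(size=8)
y = int(np.argmax(W @ x + b))                  # f1's current prediction
x_adv = fgsm(x, y, W, b, eps=0.1)
print(y, "->", int(np.argmax(W @ x_adv + b)))  # larger eps flips more reliably
```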
Though difficult, we want to argue that it is possible to theoretically model "oracles" for some state-of-the-art applications. For instance, as illustrated by the seminal cognitive neuroscience paper "Untangling invariant object recognition" (DiCarlo & Cox, 2007) and its follow-up study (DiCarlo et al., 2012), one can view the information processing of visual object recognition by human brains as the process of finding operations that progressively transform retinal representations into a new form of representation (X2 in this paper), followed by the application of relatively simple decision functions (e.g., linear classifiers (Duda et al., 2012)). More specifically, in humans and other primates, such visual recognition takes place along the ventral visual stream, and this stream is considered to be a progressive series of visual re-representations, from V1 to V2 to V4 to IT cortex (DiCarlo & Cox, 2007). Multiple relevant studies (e.g., (DiCarlo & Cox, 2007; Johnson, 1980; Hung et al., 2005)) have argued that this viewpoint of representation learning plus a simple decision function is more productive than hypothesizing that brains directly learn very complex decision functions (highly non-linear) that operate on the retinal image representation. This is because the experimental evidence suggests that this view takes the problem apart in a way that is consistent with the architecture and response properties of the ventral visual stream. Besides, simple decision functions can be easily implemented in a single step of biologically plausible neuronal processing (i.e., a thresholded sum over weighted synapses).

As another example, the authors of (Xu et al., 2016) used genetic programming to find "adversarial examples" (by solving Eq. (2.2)) for a learning-based malicious-PDF classifier. This search needs an oracle to determine whether a variant x' preserves the malicious behavior of a seed PDF x (i.e., f2(x) = f2(x')). The authors of (Xu et al., 2016) therefore used the Cuckoo sandbox (a malware analysis system based on actual execution) to run a variant PDF sample in a virtual machine with a PDF reader installed, and reported the behavior of the sample, including network API calls. By comparing the behavioral signature of the original PDF malware and the manipulated variant, this oracle successfully determines whether the malicious behavior is preserved from x to x'. One may argue: "since the Cuckoo sandbox works well for PDF-malware identification, why is a machine-learning based detection system even necessary?" This is because the Cuckoo sandbox is computationally expensive and runs slowly. For many security-sensitive applications about machines, oracles f2 do exist, but machine-learning classifiers f1 are used widely due to speed and efficiency.

It is difficult to decompose an arbitrary f1 into g1 ∘ c1. However, since in our context f1 is a machine learning classifier, we can enumerate many possible g1 functions covering classic machine learning classifiers:

- Various feature selection methods are potential g1.
- For a DNN, g1 includes all the layers from the input layer to the layer before the classification layer.
- For an SVM, (X1, d1) is decided by the chosen reproducing kernel Hilbert space.
- Regularization is another popular implicit feature extraction method. For example, ℓ1 regularization can automatically perform feature extraction by pushing some parameters to 0.
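The decomposition f1 = c1 ∘ g1 just described is easy to make explicit for a DNN. The sketch below splits a small, hypothetical PyTorch network into g1 (everything before the classification layer) and c1 (the final linear layer), and derives from it the induced distance d'1 of Eq. (10.2).

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 32),  nn.ReLU(),   # <- up to here: g1 (feature extraction)
    nn.Linear(32, 10),                # <- c1: the classification layer
)
g1, c1 = net[:4], net[4:]             # f1(x) = argmax of c1(g1(x))

def d1(z, z2):                        # a metric on the learned space X1
    return torch.norm(z - z2, dim=-1)

def d1_prime(x, x2):                  # induced pseudometric on X, Eq. (10.2)
    return d1(g1(x), g1(x2))

x, x2 = torch.rand(1, 784), torch.rand(1, 784)
print(float(d1_prime(x, x2)))
```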
Most previous studies (Table 2) have made an important and implicit assumption about f1 and f2: fi is almost everywhere (a.e.) continuous, i ∈ {1, 2}.

Definition 9.1. Suppose fi is the classification function. fi is continuous a.e., i ∈ {1, 2}, if ∀x ∈ X a.e., ∃δi > 0 such that ∀x' ∈ X with di(gi(x), gi(x')) < δi, fi(x) = fi(x').

Illustrated in Figure 1, di is the metric function (details in Section 3) that fi uses to measure the similarity among samples in the space Xi. For notational simplicity, we use the term "continuous a.e." for "continuous almost everywhere"[11] in the rest of the paper. The above definition is a special case of almost-everywhere continuity as defined in (Folland, 2013) (see Definition (9.2) in Section 9.1), since we decompose fi in a certain way (see Figure 1).

f2 is assumed continuous a.e. previously: Most previous studies find "adversarial examples" by solving Eq. (2.1) instead of Eq. (2.2). This makes an implicit assumption: if the adversarial example x' is similar to the seed sample x, they belong to the same class according to f2. This assumption essentially is: f2 is almost everywhere (a.e.) continuous.

f1 is continuous a.e.: Almost all popular machine learning classifiers satisfy the a.e. continuity assumption. For instance, a deep neural network is certainly continuous a.e. Similarly to the results shown by (Szegedy et al., 2013), DNNs satisfy |f1(x) − f1(x')| ≤ W ‖x − x'‖2, where W ≤ ∏i Wi and Wi ≥ ‖(wi, bi)‖∞. Here i ∈ {1, 2, ..., L} indexes the i-th linear layer of the network. Therefore, ∀ε > 0, let δ = ε/W; then |f1(x) − f1(x')| < ε when d1(x, x') = ‖x − x'‖2 < δ. This shows that a deep neural network is almost everywhere continuous when d1 is chosen as ‖·‖2.

For the rare cases when f1 is not continuous a.e., see Section 11, which discusses the "boundary points" that matter for analyzing adversarial perturbations.

The a.e. continuity has a few indications:

- X is not a finite space, and ∀x, x' ∈ X, P(fi(x) = fi(x') | di(gi(x), gi(x')) < δi) = 1.
- It does not mean the function f1 is continuous at every point in its feature space Xi.

[11] The measure (e.g., the Lebesgue measure) of the discontinuous set is 0.

In Section 9.1, we show that if f1 is not continuous a.e., it is not robust to any type of noise. Considering the generalization assumption of machine learning, machine learning classifiers should satisfy the continuity a.e. assumption. Section 9.2 provides two examples of how popular machine learning classifiers satisfy this assumption.

- If a probability distribution admits a density, then the probability of every one-point set {a} is zero; the same holds for finite and countable sets, and the same conclusion holds for zero-measure sets[12], for instance, straight lines or circles in Rn.
- The a.e. continuity follows the same property as a density function: the probability of picking a one-point set {x} from the whole feature space is zero, and the same holds for zero-measure sets. This means the probability of picking the discontinuous points (e.g., points on the decision boundary) is zero, because they form null sets.
- Most machine learning methods focus on X = R^p or a space equivalent to R^p (e.g., [0, 1]^p) (see Appendix Section 11.1).
- Most machine learning methods assume f1 is continuous a.e. (see Appendix Section 9.2).
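The layer-product Lipschitz bound discussed above, |f1(x) − f1(x')| ≤ W ‖x − x'‖2, can be checked numerically. The sketch below uses the product of spectral norms for a small random ReLU network, one concrete choice of the per-layer constants Wi (ReLU itself is 1-Lipschitz); the sizes are our own toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(32, 64)), rng.normal(size=(16, 32)), rng.normal(size=(1, 16))]

def f1(x):
    for W in Ws[:-1]:
        x = np.maximum(W @ x, 0.0)          # ReLU layers are 1-Lipschitz
    return float(Ws[-1] @ x)

W_total = np.prod([np.linalg.norm(W, 2) for W in Ws])   # product of layer norms

x = rng.normal(size=64)
x2 = x + 1e-3 * rng.normal(size=64)
lhs = abs(f1(x) - f1(x2))
rhs = W_total * np.linalg.norm(x - x2)      # delta = eps / W gives a.e. continuity
print(lhs <= rhs, lhs, rhs)                 # the bound holds
```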
Definition 9.2. Suppose (X, F, P) is a probability space (for the general definition, (X, Σ, μ) is a measure space), where X is the sample space, the σ-algebra F is a collection of all the events, and P is a probability measure defined on X and F. A property holds "almost everywhere" (a.e.) in X if and only if the probability measure of the set for which the property holds equals 1.

Lemma 9.3. If the a.e. continuity assumption does not hold, there exists a non-zero-measure set D such that

∀x ∈ D, ∃x' s.t. f1(x) ≠ f1(x'), d1(x, x') < δ1

Proof. Without the assumption, for any test sample x one can easily find a very similar sample x' (i.e., for any small δ1, d1(x, x') < δ1) such that |f1(x) − f1(x')| > ε. In classification problems, this means that f1(x) ≠ f1(x') (i.e., there exist very similar pairs of samples x and x' that have different labels for most x ∈ X1).

The Lemma (9.3) shows that f1 is not robust to a random noise if we do not assume that f1 is continuous a.e.

9.2 MOST MACHINE-LEARNING CLASSIFIERS SATISFY THE A.E. CONTINUITY ASSUMPTION

Almost all popular machine learning classifiers satisfy the a.e. continuity assumption. For example:

- Logistic regression for text categorization with a bag-of-words representation. A classifier on a multivariate feature representation in which each feature represents (modified) counts of a word is naturally a.e. continuous, since {x' | d1(x, x') < δ1, x ≠ x'} = ∅ when δ1 is small and x, x' are mostly sparse vectors. Logistic regression with a bag-of-words representation is therefore a continuous a.e. predictor.
- Support vector machines with continuous feature representations. Suppose we define (X1, d1) by d1²(x, x') = k(x, x) + k(x', x') − 2k(x, x'). Then the support vector machine is a linear classifier on (X1, d1). Thus, the SVM prediction function is continuous a.e. with respect to d1.

Most machine learning methods focus on the R^n space or a space equivalent to R^n (e.g., [0, 1]^n). For example, the sample space of an image classification task is intuitively {0, ..., 255}^p, where p is the number of features (e.g., 3 × 224 × 224). However, people mostly rescale the raw image samples into X = [0, 1]^p. Therefore, the sample space X of f1 for this case is [0, 1]^p.

10 APPENDIX: USING METRIC SPACES AND PSEUDOMETRIC SPACES TO UNDERSTAND CLASSIFIERS' ROBUSTNESS AGAINST ADVERSARIAL EXAMPLES

10.1 METRIC SPACES AND TOPOLOGICAL EQUIVALENCE OF TWO METRIC SPACES

This subsection briefly introduces the concepts of metric space and topological equivalence. A metric on a set/space X is a function d : X × X → [0, ∞) satisfying four properties: (1) non-negativity, (2) identity of indiscernibles, (3) symmetry and (4) the triangle inequality. In machine learning, for example, the most widely used metric is the Euclidean distance. Kernel-based methods, such as SVMs, kernel regression and Gaussian processes, consider samples in a reproducing kernel Hilbert space (RKHS). The metric in an RKHS is naturally defined as d²(x, y) = K(x, x) + K(y, y) − 2K(x, y), in which K(·, ·) is a kernel function.

Now we present an important definition, namely that of "topological equivalence", which can represent a special relationship between two metric spaces.

A function or mapping h(·) from one topological space to another is continuous if the inverse image of any open set is open.
If this continuous function is one-to-one and onto, and the inverse of the function is also continuous, then the function is called a homeomorphism, and the domain of the function, in our case (X1, d1), is said to be homeomorphic to the output range, here (X2, d2). In other words, the metric space (X1, d1) is topologically equivalent to the metric space (X2, d2). We can state this definition as the following condition:

h(x1) = x2, h(x'1) = x'2 :  d1(x1, x'1) < δ1 ⟺ d2(x2, x'2) < δ2

10.2 PSEUDOMETRIC SPACES AND FINER TOPOLOGY AMONG PSEUDOMETRIC SPACES

We briefly reviewed the concept of metric space in Section 10.1 and proposed the related Theorem (3.2) in Section 3.3. This is partly because the concept of metric space has been widely used in many machine learning models, such as metric learning (Xing et al., 2003). Theorem (3.2) and the related analysis indicate that the feature spaces X1 and X2 (see Figure 1) are the key determining factors for deciding a learning model's strong-robustness.

However, it is difficult to get the analytic form of X2 in most applications (e.g., when the oracle f2 is a human annotator). In fact, most previous studies (reviewed in Section 2.2) assume (X2, d2) equals (X, ‖·‖), where ‖·‖ is a norm function. Therefore, we want to extend our analysis and results from the implicit feature space X2 to the original feature space X.

When we extend the analysis to the original space X, it is important to point out that the distance function measuring sample similarity for a learned predictor f1 in the original space X may not be a metric; the distance function in the original feature space X for the oracle f2 may not be a metric either. This is because the distance between two different samples in the original space X may equal 0, since two different samples may be projected onto the same point in X1 or X2. For example, a change in one background pixel of an image does not affect the prediction of f1 or f2, since g1 and g2 have already eliminated that (irrelevant) feature. This property contradicts the identity-of-indiscernibles assumption for a metric function. Therefore we need a more general concept of distance function for performing theoretical analysis in the original space X. Using the concept of pseudometric spaces[13], we derive another important theorem about strong-robustness.

Pseudometric: If a distance function d' : X × X → [0, ∞) has the following three properties: (1) non-negativity, (2) symmetry and (3) the triangle inequality, we call d' a pseudometric or generalized metric. The space (X, d') is a pseudometric space or generalized metric space. It is worth pointing out that a generalized metric space is a special case of a topological space, and a metric space is a special case of a pseudometric space.

Why pseudometric spaces: As shown in Figure 1, we can decompose a common machine learning classifier f1 = c1 ∘ g1, where g1 : X → X1 represents the feature extraction and c1 : X1 → Y performs the operation of classification. Assume there exists a pseudometric d'1(·, ·) on X and a metric d1(·, ·) defined on X1[14], so that ∀x, x' ∈ X:

d'1(x, x') = d1(g1(x), g1(x'))    (10.2)

Since d1 is a metric on X1, d'1 fulfills the (1) non-negativity, (2) symmetry and (3) triangle-inequality properties. However, d'1 may not satisfy the identity-of-indiscernibles property (making it not a metric). For example, suppose g1 only selects the first three features of X, and two samples x and x' have the same values in the first three features but different values in the remaining features. Clearly, x ≠ x' but d'1(x, x') = d1(g1(x), g1(x')) = 0. This shows that d'1(·, ·) is a pseudometric but not a metric on X.
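The three-feature example above, in code: two distinct points in X collapse to the same point in X1, so the induced d'1 of Eq. (10.2) assigns them distance 0 and is only a pseudometric.

```python
import numpy as np

g1 = lambda x: x[:3]                        # g1 keeps only the first 3 features
d1 = lambda z, z2: np.linalg.norm(z - z2)   # a genuine metric on X1
d1_prime = lambda x, x2: d1(g1(x), g1(x2))  # induced distance on X, Eq. (10.2)

x  = np.array([0.2, 0.5, 0.9, 1.0, 7.0])
x2 = np.array([0.2, 0.5, 0.9, 3.0, 0.0])    # differs only in ignored features

print(bool(np.array_equal(x, x2)))  # False: x != x2 in X
print(d1_prime(x, x2))              # 0.0: identity of indiscernibles fails
```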
Similarly, a pseudometric d'2 for the oracle can be defined as follows:

d'2(x, x') = d2(g2(x), g2(x'))    (10.3)

To analyze the strong-robustness problem in the original feature space X, we assume it to be a generalized metric (pseudometric) space (X, d'1) for f1 and a generalized metric (pseudometric) space (X, d'2) for f2. Now we can analyze f1 and f2 on the same feature space X, related through two different pseudometrics. This makes it possible to define a sufficient and necessary condition for determining the strong-robustness of f1 against adversarial perturbation.

Before introducing this condition, we need to briefly introduce the definitions of topology and finer/coarser topology:

Definition 10.2. A topology τ is a collection of open sets in a space X.

A topology τ is generated by a collection of open balls {B(x, δ1)}, where x ∈ X and B(x, δ1) = {z | d(x, z) < δ1}. The collection contains {B(x, δ1)}, infinite/finite unions of the balls, and finite intersections of them.

Definition 10.3. Suppose τ1 and τ2 are two topologies in a space X. If τ2 ⊆ τ1, the topology τ2 is called a coarser (weaker or smaller) topology than the topology τ1, and τ1 is called a finer (stronger or larger) topology than τ2.

Proof. Let S1 = {B1(x, δ1)} and S2 = {B2(x, δ2)}; S1 and S2 are bases of (X, τ1) and (X, τ2).

- First, we want to prove that given δ2 > 0, ∃δ1 > 0 such that if d'2(x, x') < δ2, then d'1(x, x') < δ1. Consider a pair of samples (x, x') with d'2(x, x') < δ2, so x, x' ∈ B2(x, δ2). Of course, B2(x, δ2) ∈ τ2. Suppose (X, d'1) is a finer topology than (X, d'2). Then B2(x, δ2) ∈ τ1, and one can find B1(x0, δ1/2) ∈ τ1 such that the closure of B2(x, δ2) is contained in B1(x0, δ1/2). Therefore d'1(x, x') ≤ δ1. Based on the a.e. continuity assumption of f1, since d'1(x, x') < δ1, f1(x) = f1(x') a.e. This means that P(f1(x) = f1(x') | d'2(x, x') < δ2) = 1, which is our definition of strong-robustness.
- Next, we want to show that if f1 is strong-robust, then τ1 is a finer topology than τ2. Suppose f1 is strong-robust; we need to prove that ∀δ2 > 0, ∃δ1 > 0 such that if d'2(x, x') < δ2, then d'1(x, x') < δ1. Assume τ1 is not a finer topology than τ2. This means there exists a B2(x, δ2) such that B2(x, δ2) ∉ τ1. Therefore ∀δ1 > 0, there exists x' ∈ B2(x, δ2) such that d'2(x, x') < δ2 and d'1(x, x') > δ1. Based on the a.e. continuity assumption of f1, d'1(x, x') > δ1 indicates that f1(x) ≠ f1(x'). This contradicts the strong-robustness assumption. Thus, τ1 is a finer topology than τ2.

Proof. Let S1 = {B1(x, δ1)} and S2 = {B2(x, δ2)}; S1 and S2 are bases of (X, τ1) and (X, τ2).

- First, we want to prove that given δ2 > 0, ∃δ1 > 0 such that if d'2(x, x') < δ2, then d'1(x, x') < δ1. Consider a pair of samples (x, x') with d'2(x, x') < δ2, so x, x' ∈ B2(x, δ2). Of course, B2(x, δ2) ∈ τ2. Suppose (X, d'1) is a finer topology than (X, d'2). Then B2(x, δ2) ∈ τ1, and one can find B1(x0, δ1/2) ∈ τ1 such that the closure of B2(x, δ2) is contained in B1(x0, δ1/2). Therefore d'1(x, x') ≤ δ1. Based on the a.e. continuity assumption of f1, since d'1(x, x') < δ1, f1(x) = f1(x') a.e. This means that P(f1(x) = f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2) = 1, which is our definition of strong-robustness.
- Next, we want to show that if f1 is strong-robust, then τ1 is a finer topology than τ2. Suppose f1 is strong-robust; we need to prove that ∀δ2 > 0, ∃δ1 > 0 such that if d'2(x, x') < δ2, then d'1(x, x') < δ1. Assume τ1 is not a finer topology than τ2. This means there exists a B2(x, δ2) such that B2(x, δ2) ∉ τ1.
Therefore ∀δ1 > 0, there exists x' ∈ B2(x, δ2) such that d'2(x, x') < δ2 and d'1(x, x') > δ1. Based on the a.e. continuity assumption of f1, d'1(x, x') > δ1 indicates that f1(x) ≠ f1(x'). This contradicts the strong-robustness assumption. Thus, τ1 is a finer topology than τ2.

P(f1(x) = f1(x') | f2(x) = f2(x'), d'2(x, x') < δ2)
= 1 − P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d'2(x, x') < δ2)
= 1 − P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d'1(x, x') < δ1, d'2(x, x') < δ2)
≥ 1 − η

P(f1(x) = f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2)
= 1 − P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2)
= 1 − P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d1(g1(x), g1(x')) < δ1, d2(g2(x), g2(x')) < δ2)
≥ 1 − η

Proof (of Corollary (4.1)). Suppose n1 > n2 and X2 ⊂ X1. Then (X, d'2) is a finer topology than (X, d'1). Therefore (X, d'1) is not a finer topology than (X, d'2), which indicates that f1 is not strong-robust against adversarial examples.

Figure 4 uses an example to illustrate Table 3 Case (III) when f1 is strong-robust. We show one case of X1 = X2 = R2 where f1, f2 are continuous a.e. In terms of classification, f1 (green boundary line) is not accurate according to f2 (red boundary line).

Figure 5 uses an example figure to illustrate Table 3 Case (IV) when f1 is strong-robust. We show one case of 1 = n1 < n2 = 2, X1 ⊂ X2, where f1, f2 are continuous a.e. In terms of classification, f1 (green boundary line) is not accurate according to f2 (red boundary line).

All pairs of test samples (x, x') can be categorized into the three cases shown in both figures:

- Test-case (a) is when x and x' are predicted as the same class by both, and f1's predictions are correct according to f2. There exist no adversarial examples.
- Test-case (b) is when x and x' are predicted as the same class by both, but f1's predictions are incorrect according to f2. There exist no adversarial examples.
- Test-case (c) shows when f1(x) ≠ f1(x'), d2(x, x') < δ2 and f2(x) = f2(x'). This case is explained in Section 11. Essentially, this is about "boundary-based adversarial examples", which can only attack points whose distance to the boundary of f1 is smaller than δ2 (f1(x) ≠ f1(x'), d2(x, x') < δ2 and f2(x) = f2(x')). When f1 is continuous a.e., the probability of this set is 0.

Clearly, from the two figures, c1 does not determine the strong-robustness of f1.

10.4.2 MORE ABOUT EXTRA UNNECESSARY FEATURES RUIN STRONG-ROBUSTNESS

In real-world applications, such attacks can be, for example, adding words with a very tiny font size to a spam e-mail that are invisible to a human annotator. When a learning-based classifier tries to utilize such extra words (unnecessary for a human), it can lead to many easily generated adversarial emails.

As another example, one previous study (Xu et al., 2016) shows that a genetic-programming based adversarial example strategy can always evade two state-of-the-art learning-based PDF-malware classifiers (with "100%" evasion rates). The reason behind such good evasion rates is Condition (4.1). Both state-of-the-art PDF-malware classifiers use many superficial features (e.g., a feature representing "is there a long comment section") that are not relevant to "the malicious property" of a PDF sample at all!

Table 3 indicates that training a strong-robust and accurate classifier is extremely difficult in practice. For instance, Figure 2 shows that only one extra irrelevant feature, which does not hurt accuracy, makes the classifier not robust to adversarial perturbation at all (i.e., for samples a.e. in X, it is easy to find adversarial examples).

When f1 is not continuous a.e., the analysis of adversarial examples needs to consider "boundary points" of f1, pairs of samples x ∈ X, x' ∈ X across f1's classification boundary, with certain properties. This section tries to clarify the definition and the related scope.
When f1 is continuous a.e., P(f1(x) = f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2) equals either 1 or 0. This means f1 is either strong-robust or not robust at all under adversarial noise, a.e. One case with this probability equal to 0 is illustrated by Figure 2. Case (III) and Case (IV) from Table 3 have this probability equal to 1.

11 BOUNDARY POINTS OF f1 MATTER FOR ADVERSARIAL EXAMPLES WHEN f1 IS NOT CONTINUOUS A.E.

Our definition of boundary points describes such points as pairs of samples that lie across the classification boundary. This format of definition makes the following analysis (notation-wise) easy and concise.

This lemma shows that a case with a probability of boundary points larger than 0 is exactly the situation in which f1 is not continuous a.e.

The third column of Figure 6 describes "boundary-based adversarial examples", which can only attack seed samples whose distance to the boundary of f1 is smaller than δ2. Essentially, this attack is about those boundary points of f1 that are treated as similar and belonging to the same class by f2; that is, pairs (x, x') with f1(x) ≠ f1(x'), f2(x) = f2(x') and d2(g2(x), g2(x')) < δ2.

[Figure 6 graphic: cases (a), (b), (c) contrasting boundary points of f2 (first two columns, not considered in the strong-robustness analysis) with boundary points of f1 (third column, boundary-based attack).]

Figure 6: An example showing boundary points of f1 and boundary points of f2. We assume f1 and f2 are continuous a.e., and c1 and c2 are linear classification functions. The first two columns show boundary points of f2, which are not considered in this paper. The third column describes "boundary-based adversarial attacks", which can only attack seed samples whose distance to the boundary of f1 is smaller than ε. Essentially this attack is about those boundary points of f1 that are treated as similar and belonging to the same class by f2.

In addition, we want to point out that all boundary pairs of f2 (satisfying f2(x) ≠ f2(x') and d2(g2(x), g2(x')) < δ2) are not considered in our analysis of adversarial examples. Figure 6 illustrates three types of boundary points, with the first two columns showing boundary points of f2.

P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d1(g1(x), g1(x')) < δ1, d2(g2(x), g2(x')) < δ2)

The value of this probability is critical for our analysis in Theorem (3.3) and in Theorem (3.5). Again, we want to emphasize that most machine learning methods assume f1 is continuous a.e., and therefore "boundary-based adversarial attacks" are not crucial.

[Figure 7 graphic: a finite sample space X with |X| = 10, g1 = g2, c1 ≠ c2; P(adversarial examples) = (2 × 3)/(5 × 2) = 60%.]

Figure 7: When f1 is not continuous a.e., the strong-robustness of f1 is determined by both g1 and c1. We assume c1 and c2 are linear classification functions. This figure shows that when (1) the sample space X is finite, (2) f1 learns a wrong decision boundary, and (3) the probability of test samples around f1's decision boundary is large, f1 is not strong-robust against adversarial examples. However, we want to emphasize that this situation is very rare for a well-trained classifier f1.

P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d2(g2(x), g2(x')) < δ2)
= #{(x, x') | f2(x) = f2(x') & d2(g2(x), g2(x')) < δ2 & f1(x) ≠ f1(x')} / #{(x, x') | f2(x) = f2(x') & d2(g2(x), g2(x')) < δ2}    (11.4)

This is exactly the proportion of those pairs of points that f1 classifies into different classes while f2 treats them as similar, "same-class" samples. For this case, both g1 and c1 matter for the strong-robustness of f1.
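Eq. (11.4) suggests a direct Monte Carlo estimate: sample nearby pairs that the oracle treats as the same class, and count how often f1 disagrees across the pair. The stand-in f1, f2 and δ2 below are hypothetical toys of ours, not models from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
f2 = lambda x: x[0] > 0.0                  # stand-in oracle: uses feature 0 only
f1 = lambda x: x[0] + 0.3 * x[1] > 0.0     # stand-in learned classifier
delta2 = 0.1

num = den = 0
for _ in range(100000):
    x = rng.uniform(-1, 1, size=2)
    x2 = x + rng.uniform(-delta2, delta2, size=2)   # d2(x, x') kept small
    if f2(x) == f2(x2):                             # condition in Eq. (11.4)
        den += 1
        num += f1(x) != f1(x2)
print("estimate of Eq. (11.4):", num / den)         # > 0: boundary pairs exist
```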
See Appendix Section 11.2 for an example showing how c1 makes f1 not strong-robust.

Based on Eq. (11.4), when f1 is not continuous a.e., the strong-robustness of f1 is determined by both g1 and c1. Figure 7 shows an exemplar case in which X has only ten samples (i.e., |X| = 10). We assume the learned f1 and the oracle f2 derive the same feature space, i.e., X1 = X2, and we also assume f1 performs the classification very badly because the decision boundary (by c1) on X1 is largely different from the decision boundary on X2. The probability of "adversarial examples" in this case can be calculated using Eq. (11.4): P(f1(x) ≠ f1(x') | f2(x) = f2(x'), d1(g1(x), g1(x')) < δ1) = (2 × 3)/(5 × 2) = 0.6.

Clearly, in this case c1 matters for the strong-robustness (when f1 is not a.e. continuous). This figure indicates that when (1) the sample space X is finite, (2) f1 learns a wrong decision boundary, and (3) the probability of test samples around f1's decision boundary is large, f1 is not strong-robust against adversarial examples. However, we want to point out that this situation is very rare for a well-trained classifier f1.

Table 5: Accuracy of the deep residual network (He et al., 2015) obtained from two noise-perturbed testing cases. The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.

Attack power (Eq. (12.6)) | Test accuracy on randomly perturbed samples | Test accuracy on adversarially perturbed samples
0 | 0.9411 | 0.9411
1 | 0.9409 | 0.5833
5 | 0.9369 | 0.3943
10 | 0.9288 | 0.3853

For cases when f1 is not continuous a.e., obtaining more samples is clearly a good way to learn a better decision boundary, which might improve the adversarial robustness of the classifier at the same time.

12 MORE ABOUT DNNS' ROBUSTNESS AGAINST ADVERSARIAL SAMPLES

- f1(·): f1(·) is a DNN classifier with multiple layers, including linear perceptron layers, activation layers, convolutional layers and a softmax decision layer.
- (X1, d1): X1 denotes the feature space discovered by the layer right before the last fully-connected layer. This feature space is automatically extracted from the original image space (e.g., the RGB representation) by the DNN. (X, d'1) is defined from d1 using Eq. (10.2).
- (X2, d2): X2 denotes the feature space that the oracle (e.g., human annotators) uses to decide the ground-truth labels of training images. For example, a human annotator who needs to recognize a hand-written digit "0" uses X2 to encompass whatever patterns he or she needs for such a decision. (X, d'2) is defined from d2 using Eq. (10.3).

12.1 MORE ABOUT: ARE STATE-OF-THE-ART DEEP NEURAL NETS STRONG-ROBUST?

We can observe some properties of d1 through experimental results. Table 5, Table 6, Table 7 and Table 8 show properties of d1 (and d'1) resulting from testing experiments on four state-of-the-art DNN networks.

In Table 5, the model we use is a 200-layer residual network (He et al., 2015) trained on the ImageNet dataset (Deng et al., 2009) by Facebook.[15] We generate two types of test samples from the 50000 images in the validation set of ImageNet. (1) 50000 randomly perturbed images. The random perturbations on each image are generated by first fixing the perturbation value in every dimension to be the same, and then randomly assigning the sign in every dimension as + or − (with probability 1/2). In this way, the size of the perturbation can be described by ‖x − x'‖∞, which we name the level of attacking power (defined in Eq. (12.6)). (2) 50000 adversarially perturbed images. We use the fast gradient sign method (introduced in Section 8.2) to generate such adversarial perturbations for each seed image. The "attacking power" of such adversarial perturbations uses the same formula, Eq. (12.6). The first column of Table 5 shows the different attack powers (Eq. (12.6)) we use in the experiment. The second column shows the accuracy of running the DNN model on the first group of image samples, and the third column shows the accuracy of running the DNN model on the second group of image samples.
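The "randomly perturbed" columns of Tables 5 to 9 follow the recipe just described: a fixed ℓ∞ magnitude with an independent random sign per dimension. A sketch of that test-set construction follows; the pixel range is taken from the text, while the predict stub and test arrays are placeholders.

```python
import numpy as np

def random_sign_perturb(x, power, rng):
    """Fix |x - x'|_inf = power; choose the sign of each dimension
    uniformly at random (the random columns of Tables 5-9)."""
    signs = rng.choice([-1.0, 1.0], size=x.shape)
    return np.clip(x + power * signs, 0, 255)

def accuracy_under(perturb, predict, X, y, power, rng):
    X_p = np.stack([perturb(x, power, rng) for x in X])
    return float(np.mean(predict(X_p) == y))

# usage sketch -- predict, fgsm_perturb, X_test, y_test are placeholders:
# rng = np.random.default_rng(0)
# for power in (0, 1, 5, 10):
#     print(accuracy_under(random_sign_perturb, predict, X_test, y_test, power, rng))
```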
Table 6, Table 7, Table 8 and Table 9 repeat similar experiments on four other DNN models: the overfeat network (Sermanet et al., 2013), the residual network (He et al., 2015) and the wide residual network (Zagoruyko & Komodakis, 2016) on CIFAR-10, and the VGG model (Simonyan & Zisserman, 2014). The conclusion is consistent across all models.

[15] https://github.com/facebook/fb.resnet.torch

Table 6: Accuracy of the overfeat network (Sermanet et al., 2013) obtained from two noise-perturbed testing cases. The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.

Attack power (Eq. (12.6)) | Test accuracy on randomly perturbed samples | Test accuracy on adversarially perturbed samples
0 | 0.7944 | 0.7944
1 | 0.7923 | 0.5922
5 | 0.7844 | 0.4270
10 | 0.7762 | 0.3485

Table 7: Accuracy of the residual network (He et al., 2015) obtained from two noise-perturbed testing cases on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.

Attack power (Eq. (12.6)) | Test accuracy on randomly perturbed samples | Test accuracy on adversarially perturbed samples
0 | 0.9431 | 0.9431
1 | 0.9431 | 0.9294
5 | 0.9429 | 0.6815
10 | 0.943 | 0.2961

12.2 CONNECTING PREVIOUS STUDIES HARDENING DNNS

Multiple hardening solutions (Zheng et al., 2016; Miyato et al., 2016; Lee et al., 2015) exist in the DNN literature. They mostly aim to learn a better g1 by minimizing different loss functions Lf1(x, x') so that when d2(g2(x), g2(x')) < ε, this loss Lf1(x, x') is small. This might improve the topological equivalence (or the finer-topology relationship). Two major variations exist among related methods: the choice of Lf1(x, x') and the way pairs (x, x') are generated.

Besides, (Zheng et al., 2016) uses Lf1(x, x') = KL(f1(x), f1(x')) as a regularization term added onto the original training loss function; its samples x' are generated from original samples x by adding small Gaussian noise. (Miyato et al., 2016) uses a similar loss function to (Zheng et al., 2016), but (Miyato et al., 2016) uses adversarially perturbed x' from x. (Lee et al., 2015) uses Lf1(x, x') = d1(g1(x), g1(x')), with x' generated from x by adding small Gaussian noise. Recently proposed adversarial training (Goodfellow et al., 2014; Kurakin et al., 2016) uses Lf1(x, x') = L(f1(x'), f2(x)) with adversarially perturbed x' from x. These studies are summarized and compared in Table 4.

Choice of the loss function Lf1(x, x'): Siamese training (G) (Section 12.4) and (Lee et al., 2015) use Lf1(x, x') = d1(g1(x), g1(x')). Siamese training (F) chooses Lf1(x, x') = dist(f1(x), f1(x')), where dist(·, ·) is a distance function measuring the difference between f1(x) and f1(x'). If f1 is continuous a.e., then when d1(g1(x), g1(x')) is small, dist(f1(x), f1(x')) is also small. However, the reverse direction may not hold. Therefore, Lf1(x, x') = dist(f1(x), f1(x')) may not work for some cases.
Generating pairs of (x, x'): Another variation is the way of generating pairs (x, x') such that d2(g2(x), g2(x')) is small. There exist two common ways: one generates x' by adding a random (e.g., Gaussian) perturbation to x; the other generates x' from x through an adversarial perturbation.

Table 8: Accuracy of the wide residual network (Zagoruyko & Komodakis, 2016) obtained from two noise-perturbed testing cases on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.

Attack power (Eq. (12.6)) | Test accuracy on randomly perturbed samples | Test accuracy on adversarially perturbed samples
0 | 0.953 | 0.953
1 | 0.953 | 0.8527
5 | 0.953 | 0.4718
10 | 0.953 | 0.2529

Table 9: Accuracy of the VGG model (Simonyan & Zisserman, 2014) obtained from two noise-perturbed testing cases on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). The second column shows the result on randomly perturbed samples, and the third column shows the result on adversarially perturbed samples.

Attack power (Eq. (12.6)) | Test accuracy on randomly perturbed samples | Test accuracy on adversarially perturbed samples
0 | 0.9395 | 0.9395
1 | 0.938 | 0.7807
5 | 0.938 | 0.3767
10 | 0.9377 | 0.2092

Our theoretical analysis indicates that strong-robustness is a strong condition for machine learning classifiers and requires a thorough understanding of the oracle. Since many state-of-the-art learning models, including many DNNs, are not strong-robust, it is important to understand and quantify how far away they are from strong-robustness.

This section proposes a new evaluation measure, "Adversarial Robustness of Classifiers (ARC)", to quantify how far a classifier is from strong-robustness. This quantitative measure considers both the predictor f1 and the oracle f2. By design, a classifier f1's ARC achieves the maximum (1, since ARC is rescaled to [0, 1]) if and only if f1 is strong-robust (see Theorem (12.3)).

We name such situations "weak-robustness", and propose a quantitative measure describing how robust a classification model is against adversarial examples. The proposed measure, "Adversarial Robustness of Classifiers (ARC)", considers both the predictor f1 and the oracle f2 (introduced in Section 2.2). By design, a classifier f1's ARC achieves the maximum (1, since ARC is rescaled to [0, 1]) if and only if f1 is strong-robust against adversarial examples, and it is based on the expectation of how difficult it is to generate adversarial examples.

Definition 12.1. Adversarial Robustness of Classifiers (ARC). By adding the constraint d2(x, x') < δ2 into Eq. (2.2) (our general definition of adversarial examples) and taking the expectation of d2 between the adversarial example and the seed sample, we define the measure:

ARC(f1, f2) = E[d2(x, x')],  where
x' = argmin_{t ∈ X} d2(x, t)
Subject to: f1(x) ≠ f1(t), d2(x, t) < δ2    (12.1)

This motivates us to design a computable criterion to estimate Definition (12.1). For instance, for image classification tasks, we can choose d2 = ‖·‖∞ as an example. Then, in Eq. (12.1), to estimate E[‖x − x'‖∞] we need to make some assumptions. Assume that there exists a threshold δ2 such that any perturbation larger than δ2 changes the classification of the oracle f2; that is, if ‖x − x'‖∞ ≥ δ2 then f2(x) ≠ f2(x').
More concretely, for image classification tasks, as the input space is discrete (with every dimension ranging over integers from 0 to 255), ARC can be estimated by the following Eq. (12.2):

ARC∞(f1, f2) = E[‖x − x'‖∞] = Σ_{i=1}^{δ2 − 1} i · P(‖x − x'‖∞ = i) + δ2 · P(f1(x) = f1(t), ∀ ‖x − t‖∞ < δ2),
where x' = argmin_{t ∈ X} d2(x, t) subject to f1(x) ≠ f1(t), d2(x, t) < δ2    (12.2)

Theorem 12.3. f1 is strong-robust against adversarial examples if and only if ARC(f1, f2)/δ2 = 1.

Proof. If ARC(f1, f2)/δ2 = 1, then, based on Definition (12.1),

x' = argmin_{t ∈ X} d2(x, t)
Subject to: f1(x) ≠ f1(t), d2(x, t) < δ2

Two recent studies (Moosavi-Dezfooli et al., 2015; Papernot et al., 2015b) propose two similar measures, both assuming d2 is a norm function, but they do not consider the importance of an oracle. More importantly, (Papernot et al., 2015b) does not provide any computable way to calculate the measure. In (Moosavi-Dezfooli et al., 2015), the measure is normalized by the size of the test samples, while no evidence exists to show that the size of the perturbation is related to the size of the test samples.

The fact that previous measures neglect the oracle f2 leads to a severe problem: the generated adversarial examples are not necessarily valid. This is because, if the size of the perturbation is too large, the oracle f2 may classify the perturbed sample into a different class (different from the class of the seed sample).

As we discussed in Section 4, both accuracy and robustness are important properties in determining whether a classification model is preferable. Therefore we combine accuracy and ARC into the following unified measure, ARCA:

ARCA(f1) = Accuracy(f1) × ARC(f1, f2) / δ2    (12.3)

12.4 USING A "SIAMESE ARCHITECTURE" TO IMPROVE DNNS' ADVERSARIAL ROBUSTNESS

One intuitive formulation that we can use to improve a DNN's adversarial robustness is to solve the following:

argmin_w d1(g1(x; w), g1(x'; w)),  ∀x, x' ∈ X with d2(g2(x), g2(x')) < ε    (12.5)

This essentially forces the DNN to exhibit the finer-topology relationship between (X1, d1) and (X2, d2) by learning a better g1. We name the strategy minimizing the loss defined in Eq. (12.5) "Siamese training", because this formulation uses the Siamese architecture (Bromley et al., 1993), a classical deep learning approach proposed for learning embeddings. We feed a slightly perturbed input x', together with its original seed x, to the Siamese network, which contains two copies (sharing the same weights) of the DNN model we want to improve. By penalizing the difference between the middle-layer (g1(·)) outputs of (x, x'), "Siamese training" can push the two spaces (X, d'1) and (X, d'2) towards the finer-topology relationship, and thus increase the robustness of the model. This can be concluded from Figure 8. By assuming d2(g2(x), g2(x')) equals (approximately) ‖x − x'‖, previous studies (summarized in Table 2) normally assume d2 is a norm function ‖·‖. Because for a pair of inputs (x, x') that are close to each other (i.e., ‖x − x'‖ is small) in (X, ‖·‖), Siamese training pushes them to be close also in (X1, d1). As a result, this means that a sphere in (X1, d1) maps to a not-too-thin high-dimensional ellipsoid in (X, ‖·‖). Therefore the adversarial robustness of the DNN model may improve after Siamese training. In experiments, we choose the Euclidean distance ‖·‖2 for d1(·, ·) (however, many other choices are possible).
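One Siamese-training step of Eq. (12.5) in PyTorch: the pair (x, x') flows through the same weights (Figure 8), and the penalty shrinks d1(g1(x), g1(x')) = ‖g1(x) − g1(x')‖2. The architecture, noise scale and learning rate are hypothetical, and in practice this term would be combined with the ordinary classification loss.

```python
import torch
import torch.nn as nn

g1 = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
                   nn.Linear(256, 64))                # shared feature extractor
opt = torch.optim.SGD(g1.parameters(), lr=1e-2)

x = torch.rand(8, 3, 32, 32)                          # batch of seed images
x_prime = (x + 0.02 * torch.randn_like(x)).clamp(0, 1)  # small-d2 partner

z, z_prime = g1(x), g1(x_prime)                       # SAME weights for both
loss = (z - z_prime).pow(2).sum(dim=1).sqrt().mean()  # d1 = Euclidean distance
loss.backward()
opt.step(); opt.zero_grad()
print(float(loss))
```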
Datasets: Currently we use the following two image datasets to evaluate our model:

MNIST: MNIST, released in (LeCun et al., 1998), includes a task to classify handwritten digits. It has a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28-pixel grayscale image of a handwritten digit.

CIFAR-10: CIFAR-10 is an image classification dataset released by (Krizhevsky & Hinton, 2009). The training set contains 50,000 32x32 color images in 10 classes, and the test set contains 10,000 32x32 color images.

VGG model: We choose a VGG model (Simonyan & Zisserman, 2014) as a base DNN model. The VGG model in our experiment has 16 weight layers (55 layers in total).

Baseline: Four training strategies are compared through testing on adversarial examples (details in Section 12.2): (1) the original model; (2) stability training (Zheng et al., 2016); (3) Siamese training (alone); (4) adversarial training (Goodfellow et al., 2014; Kurakin et al., 2016), which uses adversarially perturbed samples x' together with the original samples x to train a DNN model. Note that stability training was shown to improve model robustness against Gaussian noise in (Zheng et al., 2016); differently, our experiments focus on testing a learning model's robustness against adversarial perturbation, and the sole purpose of including this baseline is to show where state-of-the-art hardening strategies stand in our experimental setting.

The first column of Table 10 and Table 11 shows different levels of attack power (defined in Eq. (12.6)). The test accuracy reported in Figure 9(a), Figure 10(a), Table 10 and Table 11 shows how much the different hardening approaches reduce the effectiveness of the adversarial attacks. Details of our experimental set-up and datasets are included in Section 12.2.

[Figure 8 diagram: two weight-sharing copies of the network map the pair (x, x') to g1(x) and g1(x'); the loss ||g1(x) − g1(x')||2 is penalized, pushing (X1, d1) towards a finer topology with respect to (X2, d2).]

Figure 8: Sketch of Siamese training. Inputs are pairs of a seed sample and its randomly perturbed version, where we suppose the d2 distance within the pair is small. By forwarding a pair into the Siamese network and penalizing the distance between the pair's outputs, this training intuitively limits the d1 distance between two similar samples. Backpropagation is used to update the weights of the network.
"}, {"section_index": "14", "section_name": "Evaluation Metrics:", "section_text": "Test accuracy: We use top-1 test accuracy as the performance metric. It is defined as the number of successfully classified samples divided by the number of all test samples. The base model achieves this accuracy when there is no adversarial attack.

ARC (Eq. (12.2)): We use ARC to measure the adversarial robustness of each model; n is chosen to be 10.

ARCA (Eq. (12.3)): We use ARCA to measure the total performance of each model.

We generate adversarial examples using the fast gradient sign method, in which the power of the adversarial attack can be easily controlled. By controlling the power of fast-sign attacks, we can obtain a complete view of how the accuracy changes according to different attack powers. In the following analysis, the attack power is defined as the step size ε of the fast gradient sign perturbation,

x' = x + ε · sign(∇x J(x, y)).   (12.6)

For image classification tasks, we control the perturbed sample to remain in the valid input space, so that every dimension of the perturbed sample is in the range of integers between 0 and 255.
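As an illustration, the following is a minimal sketch (not the exact code used for these experiments) of generating a fast-gradient-sign adversarial example with attack power eps and clipping back to the valid integer range:

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # Minimal FGSM sketch: x holds raw pixel values in [0, 255];
    # eps is the attack power of Eq. (12.6).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # one signed-gradient step
    x_adv = x_adv.round().clamp(0, 255)      # stay in the valid input space
    return x_adv.detach()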
Table 10: Test accuracy for different training strategies on the CIFAR-10 dataset.

Attack power (Eq. (12.6)) | Original model | Stability Training | Siamese Training
0 | 93.95% | 93.81% | 93.96%
1 | 78.07% | 78.01% | 93.88%
2 | 61.38% | 60.34% | 90.13%
3 | 50.07% | 49.21% | 86.73%
4 | 42.86% | 41.51% | 83.85%
5 | 37.67% | 36.33% | 81.21%
6 | 33.60% | 32.08% | 78.61%
7 | 29.70% | 28.09% | 76.09%
8 | 26.23% | 25.11% | 73.21%
9 | 23.53% | 22.43% | 69.67%
10 | 20.92% | 20.25% | 65.98%
ARC | 4.9798 | 4.8717 | 8.9332
ARCA | 0.4253 | 0.4155 | 0.7631

Table 11: Test accuracy for different training strategies on the MNIST dataset.

Attack power (Eq. (12.6)) | Original model | Adversarial Training | Stability Training | Siamese Training
0 | 98.98% | 98.96% | 99.06% | 99.03%
1 | 98.75% | 98.84% | 98.94% | 98.84%
2 | 98.44% | 98.63% | 98.60% | 98.47%
3 | 98.10% | 98.41% | 98.29% | 98.16%
4 | 97.56% | 98.12% | 97.80% | 97.78%
5 | 97.09% | 97.80% | 97.47% | 97.26%
6 | 96.23% | 97.38% | 97.01% | 96.56%
7 | 95.43% | 96.96% | 96.23% | 95.81%
8 | 94.22% | 96.47% | 95.37% | 95.01%
9 | 92.95% | 96.06% | 94.49% | 93.89%
10 | 91.53% | 95.57% | 93.30% | 92.76%
ARC | 10.5928 | 10.732 | 10.6656 | 10.6357
ARCA | 0.953159 | 0.96549 | 0.960486 | 0.957503

[Figure 9 plots: (a) attack power vs. test accuracy on CIFAR-10; (b) ARC and ARCA values for the three methods.]

Figure 9: Results on CIFAR-10: (a) Test accuracy under adversarial example attacks; three different colors for three different training strategies (details in Section 12.2). We do not include the result of adversarial training because previous adversarial training cannot be used on networks with batch normalization; some tricks for training such networks were released in a recent paper (Kurakin et al., 2016). (b) ARC and ARCA for three different training strategies under adversarial example attacks.
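The ARC and ARCA rows above can be reproduced from the accuracy-versus-attack-power curves. The reported numbers are consistent with ARC being the sum of test accuracies over attack powers 0..10 and ARCA = accuracy × ARC / 11; this reading of Eqs. (12.2)-(12.3) is an assumption, but it matches the tables numerically, as the sketch below shows:

def arc_arca(accuracies):
    # accuracies[i] = test accuracy at attack power i (i = 0..n)
    arc = sum(accuracies)
    arca = accuracies[0] * arc / len(accuracies)
    return arc, arca

cifar_original = [0.9395, 0.7807, 0.6138, 0.5007, 0.4286, 0.3767,
                  0.3360, 0.2970, 0.2623, 0.2353, 0.2092]
print(arc_arca(cifar_original))   # -> (4.9798, 0.4253), matching Table 10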
[Figure 10 plots: (a) attack power vs. test accuracy on MNIST; (b) ARC and ARCA values for the four training strategies.]

Figure 10: (a) Test accuracy under adversarial example attacks on the MNIST dataset; four different colors for four different training strategies (details in Section 12.2). (b) ARC and ARCA for four different training strategies under adversarial example attacks.
"}, {"section_index": "15", "section_name": "REFERENCES", "section_text": "Battista Biggio, Giorgio Fumera, and Fabio Roli. Adversarial pattern classification using multiple classifiers and randomisation. In Structural, Syntactic, and Statistical Pattern Recognition, pp. 500-509. Springer, 2008. URL http://link.springer.com/chapter/10.1007/978-3-540-89689-0_54.

Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases, pp. 387-402. Springer, 2013.

Battista Biggio, Samuel Rota Bulo, Ignazio Pillai, Michele Mura, Eyasu Zemene Mequanint, Marcello Pelillo, and Fabio Roli. Poisoning complete-linkage hierarchical clustering. In Structural, Syntactic, and Statistical Pattern Recognition, pp. 42-52. Springer Berlin Heidelberg, 2014.

Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, and Yann LeCun. Differentially- and non-differentially-private random decision trees. arXiv preprint arXiv:1410.6973, 2014.

Jane Bromley, James W Bentz, Leon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Sackinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.

Nicholas Carlini and David Wagner. Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311, 2016a."}]
H12GRgcxg
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The presence of class label noise inherent to training samples has been reported to deteriorate the performance of even the best classifiers in a broad range of classification problems (Nettleton et al. (2010), Pechenizkiy et al. (2006), Zhu & Wu (2004)). Noisy labels also tend to be more harmful than noisy attributes (Zhu & Wu (2004)). Noisy data are usually related to the data collection process. Typically, the labels used to train a classifier are assumed to be unambiguous and accurate. However, this assumption often does not hold, since labels that are provided by human judgments are subjective. Many of the largest image datasets have been extracted from social networks. These images are labeled by non-expert users, and building a consistent model based on a precisely labeled training set is very tedious. Mislabeled examples have been reported even in critical applications such as biomedical datasets where the available data are restricted (Alon et al. (1999)). A very common approach to noisy datasets is to remove the suspect samples in a preprocessing stage or have them relabeled by a data expert (Brodley & Friedl (1999)). However, these methods are not scalable and may run the risk of removing crucial examples that can impact small datasets considerably.
"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Variants that are noise robust have been proposed for the most common classifiers such as logistic regression and SVM (Frenay & Verleysen (2014), Jakramate & Kaban (2012), Beigman & Klebanov (2009)). However, classifiers based on label-noise robust algorithms are still affected by label noise. From a theoretical point of view, Bartlett et al. (2006) showed that most loss functions are not completely robust to label noise. Natarajan et al. (2013) proposed a generic unbiased estimator for binary classification with noisy labels. They developed a surrogate cost function that can be expressed by a weighted sum of the original cost functions, and provided asymptotic bounds for performance. Grandvalet & Bengio (2005) addressed the problem of missing labels, which can be viewed as an extreme case of noisy-label data. They suggested a semi-supervised algorithm that encourages the classifier to predict the non-labeled data with high confidence by adding a regularization term to the cost function. The problem of classification with label noise is an active research area. Comprehensive up-to-date reviews of both the theoretical and applied aspects of classification with label noise can be found in Frenay & Kaban (2014) and Frenay & Verleysen (2014).

In spite of the huge success of deep learning there are not many studies that have explicitly attempted to address the problem of Neural Net (NN) training using data with unreliable labels. Larsen et al. (1998) introduced a single noise parameter that can be calculated by adding a new regularization term and cross-validation. Mnih & Hinton (2012) proposed a more realistic noise model that depends on the true label; however, they only considered the binary classification case. Sukhbaatar & Fergus (2014) recently proposed adding a constrained linear layer at the top of the softmax layer, and showed that only under some strong assumptions can the linear layer be interpreted as the transition matrix between the true and noisy (observed) labels and the softmax output layer as the true probabilities of the labels. Reed et al. (2014) suggested handling the unreliability of the training data labels by maximizing the likelihood function with an additional classification entropy regularization term.

The correct unknown label can be viewed as a hidden random variable. Hence, it is natural to apply the EM algorithm, where in the E-step we estimate the true label and in the M-step we retrain the network. Several variations of this paradigm have been proposed (e.g. Mnih & Hinton (2012), Bekker & Goldberger (2016)). However, iterating between EM-steps and neural network training does not scale well. In this study we use latent variable probabilistic modeling but we optimize the likelihood score function within the framework of neural networks. Current noisy-label approaches assume either implicitly or explicitly that, given the correct label, the noisy label is independent of the feature vector. This assumption is probably needed to simplify the modeling and derive applicable learning algorithms. However, in many cases this assumption is not realistic, since a wrong annotation is more likely to occur in cases where the features are misleading. By contrast, our framework makes it easy to extend the proposed learning algorithm to the case where the noise is dependent on both the correct label and the input features. In the next section we describe a model formulation and review the EM-based approach. In Section 3 we describe our method, which is based on adding another softmax layer to the network, and in Section 4 we present our results.
"}, {"section_index": "2", "section_name": "A PROBABILISTIC FRAMEWORK FOR NOISY LABELS", "section_text": "Assume we want to train a multi-class neural-network soft-classifier p(y = i|x; w), where x is the feature vector, w is the network parameter-set and i is a member of the class-set {1, ..., k}. We further assume that in the training process we cannot directly observe the correct label y. Instead, we only have access to a noisy version of it denoted by z. Here we follow the probabilistic modeling and the EM learning approach described in Bekker & Goldberger (2016). In this approach noise generation is assumed to be independent of the features and is modeled by a parameter θ(i, j) = p(z = j|y = i). The noise distribution is unknown and we want to learn it as part of the training phase.
The probability of observing a noisy label z given the feature vector x is:

p(z = j|x; w, θ) = Σ_{i=1}^{k} p(z = j|y = i; θ) · p(y = i|x; w),   (1)

where k is the number of classes. The model is illustrated in the following diagram:

x → y → z

(the true label y is predicted from the features x, and the observed noisy label z is generated from y through the noise channel θ).

In the training phase we are given n feature vectors x1, ..., xn with corresponding noisy labels z1, ..., zn, which are viewed as noisy versions of the correct hidden labels y1, ..., yn. The log-likelihood of the model parameters is:

L(w, θ) = Σ_{t=1}^{n} log ( Σ_{i=1}^{k} p(zt|yt = i; θ) · p(yt = i|xt; w) ).   (2)

Based on the training data, the goal is to find both the noise distribution θ and the neural network parameters w that maximize the likelihood function. Since the random variables y1, ..., yn are hidden, we can apply the EM algorithm to find the maximum-likelihood parameter set.
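For intuition, Eq. (1) is simply a matrix product between the softmax output and the noise matrix; a small sketch (illustrative only, with assumed shapes):

import numpy as np

def noisy_label_dist(class_probs, theta):
    # Eq. (1): p(z|x) from p(y|x) and the k x k noise matrix
    # theta[i, j] = p(z = j | y = i).
    return class_probs @ theta            # shape (batch, k)

k = 3
theta = np.full((k, k), 0.1) + 0.7 * np.eye(k)   # each row sums to 1
p_y = np.array([[0.9, 0.05, 0.05]])
print(noisy_label_dist(p_y, theta))              # p(z|x), still sums to 1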
As part of the EM M-step, to find the updated NN parameter w we need to maximize the following function:.\nas n (P(yt =i|xt,Zt;w0,0o) -p(Yt =i|xt;w))h(xt) dui t=1\nsuch that h is the final hidden layer and u1 tr are the parameters of the soft-max output layer\nThe method reviewed here is closely related to the work of Minh & Hinton (2012). They addresse. the problem of mislabeled data points in a particular type of dataset (aerial images). The maii. difference is that in their approach they assumed that they do not learn the noise parameter. Instea. they assume that the noise model can be separately tuned using a validation set or set by hand. Note that even if the true noise parameters are given, we still need the apply the EM iterative procedure. However, this assumption makes the interaction between the E-step and the NN learning mucl. easier since each time a data-point xt is visited we can compute the p(yt = i[xt, Zt) based on th current network parameters and the pre-defined noise parameters. Motivated by the need for mode compression, Hinton et al.(2014) introduced an approach to learn a \"distilled\" model by training. a more compact neural network to reproduce the output of a larger network. Using the notatio defined above, in the second training stage they actually optimized the cost function: S(w) = network that was trained using the labels z1, ..., 2n, w is the parameter of the smaller network anc. 0o(i, j) in this case is a non-informative distribution (i.e. 0o(i, j) = 1/k)..\nThere are several drawbacks to the EM-based approach described above. The EM algorithm is a greedy optimization procedure that is notoriously known to get stuck in local optima. Another potential issue with combining neural networks and EM direction is scalability. The framework requires training a neural network in each iteration of the EM algorithm. For real-world, large-scale networks, even a single training iteration is a non-trivial challenge. Moreover, in many domains (e.g. object recognition in images) the number of labels is very large, so many EM iterations are likely to be needed for convergence. Another drawback of the probabilistic models is that they are based on the simplistic assumption that the noise error is only based on the true labels but not on the input features. In this study we propose a method for training neural networks with noisy labels that successfully addresses all these problems.\nIn the previous section we utilized the EM algorithm to optimize the noisy-label likelihood functior (2). In this section we describe an algorithm that optimizes the same function within the framework of neural networks. Assume the neural network classifier we are using is based on non-linear inter mediate layers followed by a soft-max output layer used for soft classification. Denote the non-linea\nwhere wo and 0o are the current parameter estimations. In the M-step we update both the NN and the noisy channel parameters. The updated noise distribution has a closed-form solution\nn k S(w) = Cti logp(yt = i|xt; W t=1 i=1\nwhich is a soft-version of the likelihood function of the fully observed case, based on the current estimate of the true labels. The back-propagation derivatives of the function (5) that we maximize in the M-step are:\nfunction applied on an input x by h = h(x) and denote the soft-max layer that predicts the true y label by:\nexp(u[h+ bi) p(y = i(x;w) = =1. 
.K =1 exp(ufh + bl\nexp(uT;h+ bij p(z =j[y=i,x) , exp(uh+ bil\np(z =j|x) =)`p(z=j|y=i,x)p(y=i|x) ~\nexp(bij) .\np(z =j|x) =)`p(z=j|y=i)p(y=i|x)\nWe denote the two noise modeling variants as the complex model (c-model) (8) and the simple model (s-model) q10h. Hereafter we use the notation wnoise for all the parameters of the second softmax layer which can be viewed as a noise adaptation layer.\nIn the training phase we are given n feature vectors x1,..., xn with corresponding noisy labels Z1, ..., ~n which are viewed as noisy versions of the correct hidden labels y1,..., yn. The log likelihood of the model parameters is:\nS(w, Wnoise) l0g p(zt[xt) log P(Zt|Yt = i,Xt;Wnoise)P(Yt = i|Xt;W))\nSince the noise is modeled by adding another layer to the network, the score S(w, wnoise) can be optimized using standard techniques for neural network training. By setting.\nexp(bij) p(z=j|y=i)=0(i,j)= ,exp(bil)\nit can easily verified that, by using either the EM algorithm (2) or the s-model neural network scheme (12), we are actually optimizing exactly the same function. Thus the neural network with the s-model noise adaptation layer provides an alternative optimization strategy to the EM algorithm. Instead of alternating between optimizing the noisy model and the network classifier, we consider them as components of the same network and optimize them simultaneously..\nWnoise W W h, y X h Z non-linear function soft-max soft-max W W x h y non-linear function soft-max\nFigure 1: An illustration of the noisy-label neural network architecture for the training phase (above and test phase (below).\nwhere w is the network parameter-set (including the softmax layer). We next add another softmax output layer to predict the noisy label z based on both the true label and the input features:\nWe can also define a simplified version where the noisy label only depends on the true label; i.e. we assume that labels flips are independent of x:\nThere are degrees of freedom in the two softmax layer model. Hence, a careful initialization of the. parameters of the noise adaptation layer is crucial for successful convergence of the network into. a good classifier of the correct labels at test time. We used the parameters of the original network. to initialize the parameters of the s-model network that contains the noise adaptation level. We can initialize the softmax parameters of the s-model by assuming a small uniform noise:.\nsuch that k is the number of different classes. A better procedure is to first train the original NN. without the noise-adaptation layer, ignoring the fact that the labels are noisy. We can then treat the. labels produced by the NN as the true labels and compute the confusion matrix on the train set and used it as an initial value for the bias parameters:.\n1{zt=j}P(Yt=i|xt) tP(Yt =i|xt)\nThe computational complexity of the proposed method is quadratic in the size of the class-set. Sup pose there are k classes to predict, in this case the proposed methods require k+1 sets of softmax operations with a size of k each. Hence there are scalability problems when the class set is large. As we explained in the previous paragraph, we initialized the second soft-max layer using the confusion matrix of the baseline system. The confusion matrix is a good estimation of the label noise. Assume the rows of the matrix correspond to the true labels and the matrix columns correspond to the noisy labels. The l largest elements in the i-th row are the most frequent noisy class values when the true class value is i. 
The computational complexity of the proposed method is quadratic in the size of the class set. Suppose there are k classes to predict; in this case the proposed method requires k + 1 sets of softmax operations, each of size k. Hence there are scalability problems when the class set is large. As explained above, we initialize the second softmax layer using the confusion matrix of the baseline system. The confusion matrix is a good estimate of the label noise. Assume the rows of the matrix correspond to the true labels and the columns correspond to the noisy labels. The l largest elements in the i-th row are then the most frequent noisy class values when the true class value is i. We can thus connect the i-th element in the first softmax layer only to its l most probable noisy class candidates. Note that if we connect the i-th label in the first softmax only to the i-th label in the second softmax layer, the second softmax layer collapses to the identity and we obtain the standard baseline model. Taking the l most likely connections to the second softmax layer, we allow an additional l − 1 possible noisy labels for each correct label. We thus obtain a data-driven sparsification of the second softmax layer, which solves the scalability problem since the complexity becomes linear in the number of classes instead of quadratic. In the experiments section we show that using this approach makes little difference to performance.

Our architecture, which is based on a concatenation of softmax layers, resembles the hierarchical softmax approach (Morin & Bengio, 2005) that replaces the flat softmax layer with a hierarchical layer that has the classes as leaves. This allowed them to decompose calculating the probability of a class into a sequence of probability calculations, which saves having to calculate the expensive normalization over all classes. The main difference between our approach and theirs (apart from the motivation) is that in our approach the true-label softmax layer is fully connected to the noisy-label layer. Sukhbaatar & Fergus (2014) suggested adding a linear layer to handle noisy labels. Their approach is similar to our s-model; in their approach, however, they proposed a different learning procedure.
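The data-driven sparsification described above can be sketched as follows (a toy illustration; conf is the confusion matrix of Eq. (15), and l is the number of allowed noisy labels per class):

import numpy as np

def sparse_noise_mask(conf, l):
    # Keep, for every true class i, only the l most probable noisy labels
    # (always including i itself); all other connections are pruned.
    k = conf.shape[0]
    mask = np.zeros_like(conf, dtype=bool)
    for i in range(k):
        top = np.argsort(conf[i])[-l:]      # l most frequent noisy labels
        mask[i, top] = True
        mask[i, i] = True                   # keep the identity connection
    return mask  # used to zero out the pruned logits of the second softmax

With l = 1 the layer collapses to the identity and the baseline model is recovered; the CIFAR-100 experiments below use l = 5.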
"}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "In this section we evaluate the robustness of deep learning to training data with noisy labels, with and without explicit noise modeling. We first show results on the MNIST dataset with injected label noise.

[Figure 2 plots: test accuracy vs. noise fraction for the (a) 20%, (b) 50% and (c) 100% training subsets; curves for the c-model, s-model, Reed hard, Reed soft and the baseline.]

Figure 2: Test classification accuracy results on the MNIST dataset as a function of the noise level. The results are shown for several training data sizes (20%, 50%, 100%) of the training subset.

MNIST is a database of handwritten digits consisting of 28x28 images; it has 60k images for training and 10k images for testing. We used a two-hidden-layer NN comprised of 500 and 300 neurons. The non-linear activation we used was ReLU, and we used dropout with parameter 0.5. We trained the network using the Adam optimizer (Kingma & Ba, 2014) with default parameters, which we found to converge more quickly and effectively than SGD. We used a mini-batch size of 256. These settings were kept fixed for all the experiments described below. In addition to a network based on fully connected layers, we also applied a network based on a CNN architecture; the results we obtained with the two architectures were similar. The network we implemented is publicly available.

We generated noisy data from clean data by stochastically changing some of the labels. We converted each label with probability p to a different label according to a predefined permutation. We used the same permutation as in Reed et al. (2014). The labels of the test data remained, of course, unperturbed to validate and compare our method to the regular approach.

We compared the proposed noise-robust models to other model training strategies. The first network was the baseline approach that ignores the fact that the labels of the training data are unreliable. Denote the observed noisy label by z and the softmax decision by q1, ..., qk. The baseline log-likelihood score (for a single input) is:

S = log(q_z).

[Figure 3 plots: test accuracy vs. noise fraction on CIFAR-100 for the 20%, 50% and 100% training subsets; curves for the c-model, s-model, Reed hard and the baseline CNN.]

Figure 3: Test classification accuracy results on the CIFAR-100 dataset as a function of the noise level. The results are shown for several training data sizes (20%, 50%, 100%) of the training subset (for a CNN network architecture).

We also implemented two variants of the noise-robust approach proposed by Reed et al. (2014). They suggested a soft version,

βS − (1 − β)H(q) = β Σ_i 1{z = i} log(q_i) + (1 − β) Σ_i q_i log(q_i),

and a hard version,

βS + (1 − β) max_i log(q_i).

In their experiments they took β = 0.8 for the hard version and β = 0.95 for the soft version, and observed that the hard version provided better results. Finally, we implemented the two variants of our approach, namely the noise modeling based only on the labels (s-model) and the noise modeling that is also based on the features (c-model).

Figure 2 depicts the comparative test error results as a function of the fraction of noise. The results are shown for three different sizes of training data, i.e. (20%, 50%, 100%) of the MNIST training subset. Bootstrapping was used to compute confidence intervals around the mean: for 1000 repetitions, N samples were randomly drawn with replacement from the N available samples and the mean was computed. The confidence interval was taken to be the 2.5% and 97.5% percentiles of this process.
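This bootstrap procedure is straightforward; a minimal sketch (illustrative, with assumed array shapes):

import numpy as np

def bootstrap_ci(correct, reps=1000, lo=2.5, hi=97.5, seed=0):
    # correct: boolean array with one entry per test example.
    # Returns (low, high) percentile bounds on the mean accuracy.
    rng = np.random.default_rng(seed)
    n = len(correct)
    means = [correct[rng.integers(0, n, n)].mean() for _ in range(reps)]
    return np.percentile(means, [lo, hi])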
The results show that all the methods that are explicitly aware of the noise in the labels are better than the baseline, which is the standard training approach. We revalidated the results reported in Reed et al. (2014) and showed that the hard version of their method performs better than the soft version. In all cases our models performed better than the alternatives. In most cases the c-model was better than the s-model. In the case where the entire dataset was used for training, we can see from the results that there is a phase transition phenomenon: we obtain almost perfect classification results until the noise level is high, and then there is a sudden strong performance drop. Analyzing why this effect occurs is left for future research.

We next show the results on the CIFAR-100 image dataset (Krizhevsky & Hinton, 2009), which consists of 32x32 color images arranged in 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. We used raw images directly without any preprocessing or augmentation. We generated noisy data from clean data by stochastically changing some of the labels: we converted each one of the 100 labels with probability p to a different label according to a predefined permutation. The labels of the test data remained, of course, unperturbed to validate and compare our method to the regular approach. We used a CNN network with two convolutional layers combined with ReLU activation and max-pooling, followed by two fully connected layers. Figure 3 depicts the comparative test error results as a function of the fraction of noise for three different sizes of training data, i.e. (20%, 50%, 100%) of the CIFAR-100 training subset. Bootstrapping was used to compute confidence intervals around the mean in the same way as for the MNIST experiment. The results show that the proposed method works better than the alternatives: the simple model consistently provided the best results, but when the noise level was very high the complex method tended to perform better.

We next report experimental results for the sparse variant of our method, which remains efficient even when the class set is large. We demonstrate this on the CIFAR-100 dataset, which consists of 100 possible classes. For each class we only took the five most probable classes in the confusion matrix that is used to initialize the model parameters (see Section 3).

[Figure 4 plots: test accuracy vs. noise fraction on CIFAR-100 for regular and sparse (l = 5) second softmax layers, for the 20%, 50% and 100% training subsets.]

Figure 4: Test classification accuracy results on the CIFAR-100 dataset as a function of the noise level. The results of regular and sparse second softmax layers are shown for several training data sizes (20%, 50%, 100%) of the training subset.
As can be seen in Figure 4, sparsifying the second softmax layer did not result in a drop in performance.
"}, {"section_index": "5", "section_name": "5 CONCLUSION", "section_text": "In this paper we investigated the problem of training neural networks that are robust to label noise. We proposed an algorithm for training neural networks based solely on noisy data, where the noise distribution is unknown. We showed that we can reliably learn the noise distribution from the noisy data without using any clean data, which in many cases are not available. The algorithm can be easily combined with any existing deep learning implementation by simply adding another softmax output layer. Our results encourage collecting more data at a cheaper price, since mistaken data labels can be less harmful to performance. One possible future research direction would be to generalize our learning scheme to cases where both the features and the labels are noisy. We showed results on datasets with small and medium-sized class sets; another future research direction would be to evaluate the performance and efficiency of the proposed method on tasks with large class sets.
"}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "U. Alon, N. Barkai, D. Notterman, K. Gish, S. Ybarra, D. Mack, and A. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proceedings of the National Academy of Sciences, 96(12):6745-6750, 1999.

P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, pp. 138-156, 2006.

E. Beigman and B. B. Klebanov. Learning with annotation noise. In ACL-IJCNLP, 2009.

D. Nettleton, A. Orriols-Puig, and A. Fornells. A study of the effect of different types of noise on the precision of supervised learning techniques. Artificial Intelligence Review, 2010.

M. Pechenizkiy, A. Tsymbal, S. Puuronen, and O. Pechenizkiy. Class noise and supervised learning in medical domains: The effect of feature extraction. In Computer-Based Medical Systems (CBMS), 2006.

S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596, 2014.

S. Sukhbaatar and R. Fergus. Learning from noisy labels with deep neural networks. arXiv preprint arXiv:1406.2080, 2014.

X. Zhu and X. Wu. Class noise vs. attribute noise: A quantitative study. Artificial Intelligence Review, 22(3):177-210, 2004."}]
HJStZKqel
[{"section_index": "0", "section_name": "LIFELONG PERCEPTUAL PROGRAMMING BY EXAMPLE", "section_text": "Alexander L. Gaunt, Marc Brockschmidt, Nate Kushman, Daniel Tarlow"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A goal of artificial intelligence is to build a single large neural network model that can be trained in a lifelong learning setting; i.e., on a sequence of diverse tasks over a long period of time, and gain cumulative knowledge about different domains as it is presented with new tasks. The hope is that such systems will learn more accurately and from less data than existing systems, and that they will exhibit more flexible intelligence. However, despite some work showing promise towards multitask learning (training on many tasks at once) and transfer learning (using source tasks to improve learning in a later target task) (Caruana, 1997; Luong et al., 2015; Parisotto et al., 2015; Rusu et al., 2016), most successes of neural networks today come from training a single network on a single task, indicating that this goal is highly challenging to achieve.

We argue for two properties that such systems should have in addition to the ability to learn from a sequence of diverse tasks. First is the ability to learn from weak supervision. Gathering high-quality labeled datasets is expensive, and this effort is multiplied if all tasks require strong labelling. In this work, we focus on weak supervision in the form of pairs of input-output examples that come from executing simple programs with no labelling of intermediate states. Second is the ability to distill knowledge into subcomponents that can be shared across tasks. If we can learn models where the knowledge about shared subcomponents is disentangled from task-specific knowledge, then the sharing of knowledge across tasks will likely be more effective. Further, by isolating shared subcomponents, we expect that we could develop systems that exhibit reverse transfer (i.e., performance on earlier tasks automatically improves by improving the shared components in later tasks).

A key challenge in achieving these goals with neural models is the difficulty in interpreting weights inside a trained network. Most notably, with a purely neural model, subcomponents of knowledge gained after training on one task cannot be easily transferred to related tasks. Conversely, traditional computer programs naturally structure solutions to diverse problems in an interpretable, modular form allowing (1) re-use of subroutines in solutions to new tasks and (2) modification or error correction by humans. Inspired by this fact, we develop end-to-end trainable models that structure their solutions as a library of functions, some of which are represented as source code, and some of which are neural networks.

Methodologically, we start from recent work on programming by example (PBE) with differentiable interpreters, which shows that it is possible to use gradient descent to induce source code operating on basic data types (e.g. integers) from input-output examples (Gaunt et al., 2016; Riedel et al., 2016; Bunel et al., 2016). In this work we combine these differentiable interpreters with neural network classifiers in an end-to-end trainable system that learns programs that manipulate perceptual data.
"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We introduce and develop solutions for the problem of Lifelong Perceptual Programming By Example (LPPBE). The problem is to induce a series of programs that require understanding perceptual data like images or text.
LPPBE systems learn from weak supervision (input-output examples) and incrementally construct a shared library of components that grows and improves as more tasks are solved. Methodologically, we extend differentiable interpreters to operate on perceptual data and to share components across tasks. Empirically we show that this leads to a lifelong learning system that transfers knowledge to new tasks more effectively than baselines, and the performance on earlier tasks continues to improve even as the system learns on new, different tasks.

In addition, we make our interpreter modular, which allows lifelong learning on a sequence of related tasks: rather than inducing one fresh program per task, the system is able to incrementally build a library of (neural) functions that are shared across task-specific programs. To encapsulate the challenges embodied in this problem formulation, we name the problem Lifelong Perceptual Programming By Example (LPPBE). Our extension of differentiable interpreters that allows perceptual data types, neural network function definitions, and lifelong learning is called NEURAL TERPRET (NTPT).

Empirically, we show that an NTPT-based model learns to perform a sequence of tasks based on images of digits and mathematical operators. In early tasks, the model learns the concepts of digits and mathematical operators from a variety of weak supervision, then in a later task it learns to compute the results of variable-length mathematical expressions. The approach is resilient to catastrophic forgetting (McCloskey & Cohen, 1989; Ratcliff, 1990); on the contrary, results show that performance continues to improve on earlier tasks even when only training on later tasks. In total, the result is a method that can gather knowledge from a variety of weak supervision, distill it into a cumulative re-usable library, and use the library within induced algorithms to exhibit strong generalization.

We briefly review the TERPRET language (Gaunt et al., 2016) for constructing differentiable interpreters. To address LPPBE, we develop NEURAL TERPRET, an extension to support lifelong learning, perceptual data types, and neural network classifiers. We also define our tasks.
"}, {"section_index": "3", "section_name": "2.1 TERPRET", "section_text": "TERPRET programs describe differentiable interpreters by defining the relationship between Inputs and Outputs via a set of inferrable Params that define an executable program and Vars that store intermediate results. TERPRET requires all of these variables to be finite integers. To learn using gradient descent, the model is made differentiable by a compilation step that lifts the relationships between integers specified by the TERPRET code to relationships between marginal distributions over integers in finite ranges.

Figure 1: (NEURAL) TERPRET programs for counting symbols on a tape, with input-output examples. Both programs describe an interpreter with instructions to MOVE on the tape and READ the tape according to source code parametrized by instr. (left) A TERPRET program that counts '1's. (right) A NEURAL TERPRET program that additionally learns a classifier is_dinosaur.

[Figure 2 illustration: example 2x2 grids and expressions for the three scenarios.]

Figure 2: Overview of tasks in the (a) ADD2x2, (b) APPLY2x2 and (c) MATH scenarios. 'A' denotes the APPLY operator, which replaces the ? tiles with the selected operators and executes the sum. We show two MATH examples of different length.
There are two key operations in this compilation process:

Function application. The statement z.set_to(foo(x, y)) is translated into μz[i] = Σ_{jk} I_{ijk} μx[j] μy[k], where μa represents the marginal distribution for the variable a and I is an indicator tensor 1[i = foo(j, k)]. This approach extends to all functions mapping any number of integer arguments to an integer output.

Conditional statements. The statements if x == 0: z.set_to(a); elif x == 1: z.set_to(b) are translated to μz = μx[0] μa + μx[1] μb. More complex statements follow a similar pattern, with details given in Gaunt et al. (2016).

This compilation process yields a TensorFlow (Abadi et al., 2016) computation graph containing many of these two operations, which can then be trained using standard methods.
"}, {"section_index": "4", "section_name": "2.2 NEURAL TERPRET", "section_text": "To handle perceptual data, we relax the restriction that all variables need to be finite integers. We introduce a new tensor type whose dimensions are fixed at declaration, and which is suitable to store perceptual data. Additionally, we introduce learnable functions that can process vector variables. A learnable function is declared using @Learn([d1, ..., dD], d_out, hid_sizes=[l1, ..., lL]), where the first component specifies the dimensions d1, ..., dD of the inputs (which can be finite integers or tensors) and the second the dimension of the output. NTPT compiles such functions into a fully-connected feed-forward neural network whose layout can be controlled by the hid_sizes component, which specifies the number of layers and neurons in each layer. The inputs of the function are simply concatenated. Vector output is generated by learning a mapping from the last hidden layer, and finite integer output is generated by a softmax layer producing a distribution over integers up to the declared bound. Learnable parameters for the generated network are shared across every use in the NTPT program, and as they naturally fit into the computation graph for the remaining TERPRET program, the whole system is trained end-to-end.

A simple TERPRET program counting bits on a tape, and a related NTPT program that counts up images of a particular class on a tape, are displayed in Fig. 1.
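To illustrate the lifting of Section 2.1, here is a minimal NumPy sketch (a toy rendering of the idea, not the TERPRET compiler) of how a discrete function application and a conditional become differentiable operations on marginals:

import numpy as np

def lift_apply(foo, mu_x, mu_y, out_dim):
    # Lift z.set_to(foo(x, y)):
    # mu_z[i] = sum_{j,k} 1[i == foo(j, k)] * mu_x[j] * mu_y[k]
    I = np.zeros((out_dim, len(mu_x), len(mu_y)))
    for j in range(len(mu_x)):
        for k in range(len(mu_y)):
            I[foo(j, k), j, k] = 1.0
    return np.einsum('ijk,j,k->i', I, mu_x, mu_y)

def lift_if(mu_x, mu_a, mu_b):
    # Lift: if x == 0: z.set_to(a); elif x == 1: z.set_to(b)
    return mu_x[0] * mu_a + mu_x[1] * mu_b

mu_x = np.array([0.2, 0.8]); mu_y = np.array([0.5, 0.5])
print(lift_apply(lambda a, b: (a + b) % 2, mu_x, mu_y, 2))  # -> [0.5, 0.5]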
To demonstrate the benefits of our approach for combining neural networks with program-like architecture, we consider three toy scenarios consisting of several related tasks, depicted in Fig. 2.

ADD2x2 scenario: The first scenario in Fig. 2(a) uses a 2x2 grid of MNIST digits. We set 4 tasks based on this grid: compute the sum of the digits in the (1) top row, (2) left column, (3) bottom row, (4) right column. All tasks require classification of MNIST digits, but need different programs to compute the result. As training examples, we supply only a grid and the resulting sum. Thus, we never directly label an MNIST digit with its class.

APPLY2x2 scenario: The second scenario in Fig. 2(b) presents a 2x2 grid of handwritten arithmetic operators. Providing three auxiliary random integers d1, d2, d3, we again set 4 tasks based on this grid, namely to evaluate the expression d1 op1 d2 op2 d3, where (op1, op2) are the operators represented in the (1) top row, (2) left column, (3) bottom row, (4) right column. In comparison to the first scenario, the dataset of operators is relatively small and consistent, making the perceptual task of classifying operators considerably easier. However, the algorithmic part is more difficult, requiring non-linear operations on the supplied integers.

MATH scenario: The final task in Fig. 2(c) requires combination of the knowledge gained from the weakly labeled data in the first two scenarios to execute a handwritten arithmetic expression.

[Figure 3 illustration: two learned straight-line programs.]

Figure 3: Example solutions for the tasks on the right columns of the (a) ADD2x2 and (b) APPLY2x2 scenarios. The read head is initialized READing the top left cell and any auxiliary InputInts are loaded into memory. Instructions and arguments shown in black must be learned.
"}, {"section_index": "5", "section_name": "3 MODELS", "section_text": "We design one NTPT model for each of the three scenarios outlined above. Knowledge transfer is achieved by defining a library of 2 neural networks shared across all tasks and scenarios. Training on each task should produce a task-specific source code solution (from scratch) and improve the overall usefulness of the shared networks. Below we outline the details of the specific models for each scenario along with baseline models.

We refer to the 2 networks in the shared library as net_0 and net_1. Both networks have similar architectures: they take a 28x28 monochrome image as input and pass this sequentially through two fully connected layers, each with 256 neurons and ReLU activations. The last hidden vector is passed through a fully connected layer and a softmax to produce a 10-dimensional output (net_0) or 4-dimensional output (net_1) to feed to the differentiable interpreter. Note that the output sizes are chosen to match the number of classes of MNIST digits and arithmetic operators respectively.

If we create an interpreter model which is allowed to make calls to N untrained networks, and part of the interpreter uses a parameter net_choice = Param(N) to decide which network to apply, then the system effectively sees one large untrained network, which cannot usefully be split apart into the N components after training. To avoid this, we enforce that no more than one untrained network is introduced at a time (i.e. the first task has access to only net_0, and all other tasks have access to both nets). We find that this breaks the symmetry sufficiently to learn separate, useful classifiers.
"}, {"section_index": "6", "section_name": "3.2 ADD2x2 MODEL", "section_text": "For the ADD2x2 scenario we build a model capable of writing short straight-line algorithms with up to 4 instructions. The model consists of a read head containing net_0 and net_1 (with the exception of the very first task, which only has access to net_0, as discussed above) which is connected to a set of registers, each capable of holding integers in the range 0, ..., M, where M = 18. The head is initialized reading the top left cell of the 2x2 grid, and at each step in the program, one instruction can be executed from the following instruction set:

NOOP: a trivial no-operation instruction.

MOVE_NORTH, MOVE_EAST, MOVE_SOUTH, MOVE_WEST: translate the head (if possible) and return the result of applying the neural network chosen by net_choice to the image in the new cell, where the parameter net_choice is to be learned and decides which of net_0 and net_1 to apply.

ADD(·, ·): accepts two register addresses and returns the sum of their contents.

To construct each line of code requires choosing an instruction and (in the case of ADD) addresses of arguments for that instruction. We follow Feser et al. (2016) and allow each line to store its result in a separate immutable register. Finally, we learn a parameter specifying which register to return after execution of the program. An example program in this model is shown in Fig. 3(a). Even this simple model permits ~10^7 syntactically distinct programs for the differentiable interpreter to search over.

[Figure 4 illustration: block structure of the MATH interpreter and a loopy solution program.]

Figure 4: Overview of the MATH model. (a) The general form of a block in the model; blue elements are learnable. (b) A loop-based solution to the task in the MATH scenario.
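Read concretely, the ADD2x2 model above behaves like the following toy, fully discrete interpreter (an illustration only; the real system operates on marginal distributions over instructions and reads cells through the learned net_0/net_1 rather than looking labels up):

M = 18
MOVES = {'MOVE_NORTH': (-1, 0), 'MOVE_EAST': (0, 1),
         'MOVE_SOUTH': (1, 0), 'MOVE_WEST': (0, -1)}

def run(program, grid):
    pos = (0, 0)
    regs = [grid[pos]]                      # head starts READing the top-left cell
    for instr, args in program:
        if instr == 'NOOP':
            regs.append(0)
        elif instr in MOVES:
            dr, dc = MOVES[instr]
            pos = (min(max(pos[0] + dr, 0), 1), min(max(pos[1] + dc, 0), 1))
            regs.append(grid[pos])          # READ via the chosen network
        elif instr == 'ADD':
            a, b = args
            regs.append((regs[a] + regs[b]) % (M + 1))
    return regs

grid = {(0, 0): 4, (0, 1): 7, (1, 0): 2, (1, 1): 5}
print(run([('MOVE_EAST', None), ('ADD', (0, 1))], grid)[-1])  # top-row sum: 11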
"}, {"section_index": "7", "section_name": "3.3 APPLY2x2 MODEL", "section_text": "We adapt the ADD2x2 model to the APPLY2x2 scenario by initializing three immutable registers with the auxiliary integers supplied with each 2x2 operator grid [see Fig. 2(b)]. In addition, we swap the ADD(·, ·) instruction for APPLY(·, ·, ·). The action of APPLY(a, b, op) is to interpret the integer stored at op as an arithmetic operator and to compute a op b. All operations are performed modulo (M + 1) and division by zero returns M. In total, this model exposes a program space of ~10^12 syntactically distinct programs.
"}, {"section_index": "8", "section_name": "3.4 MATH MODEL", "section_text": "We design the final scenario to investigate the synthesis of more complex control flow than straight-line code. A natural solution to execute the expression on the tape is to build a loop with a body that alternates between moving the head and applying the operators [see Fig. 4(b)]. This loopy solution has the advantage that it generalizes to handle arbitrary-length arithmetic expressions.

Fig. 4(a) shows the basic architecture of the interpreter used in this scenario. We provide a set of blocks, each containing the instruction MOVE or APPLY. A MOVE instruction increments the position of the head and loads the new symbol into a block-specific immutable register using either net_0 or net_1, as determined by a block-specific net_choice. After executing the instruction, the interpreter executes a GOTO_IF statement which checks whether the head is over the end of the tape; if not, it passes control to the block specified by goto_addr, otherwise control passes to a halt block which returns a chosen register value and exits the program. This model describes a space of ~10^6 syntactically distinct programs.
"}, {"section_index": "9", "section_name": "4 BASELINES", "section_text": "NTPT aims to combine neural networks and differentiable interpreters for handling the perceptual and algorithmic parts of a task respectively. A natural baseline is to replace the differentiable interpreter with a neural network to create a purely neural solution. In this spirit we define a column as the following architecture for handling the 2x2 tasks (see Fig. 5(a)):
[Figure 5 illustration: panels (a) indep., (b) PNN, (c) MTNN and (d) NTPT, showing how the embedding networks and the task-specific parts are wired in each model.]

Figure 5: Cartoon illustration of all models used in the experiments. See text for details.

- Each of the images in the 2x2 grid is passed through an embedding network with 2 layers of 256 neurons (cf. net_0/1) to produce a 10-dimensional embedding. The weights of the embedding network are shared across all 4 images.
- These 4 embeddings are concatenated into a 40-dimensional vector, and for APPLY2x2 the auxiliary integers are represented as one-hot vectors and concatenated with this 40-dimensional vector. This is then passed through a network consisting of 3 hidden layers of 128 neurons to produce a 19-dimensional output.

We construct 3 different neural baselines derived from this column architecture (see Fig. 5):

1. Indep.: Each task is handled by an independent column with no mechanism for transfer.
2. Progressive Neural Network (PNN): We follow Rusu et al. (2016) and build lateral connections linking each task-specific column to columns from tasks appearing earlier in the learning lifetime. Weights in all columns except the active task's column are frozen during a training update. Note that the number of layers in each column must be identical to allow lateral connections, meaning we cannot tune the architecture separately for each task.
3. Multitask neural network (MTNN): We split the column into a shared perceptual part and a task-specific part. The perceptual part consists of net_0 and net_1 embedding networks. In an ideal case the symmetry between these embedding networks will be broken and one will become specialized to handle handwritten digits while the other will handle handwritten operators. In order to encourage this symmetry breaking, we zero out one of the networks when training on the first task (cf. the symmetry breaking technique mentioned in Sec. 3.1). The task-specific part consists of a neural network that maps the perceptual embeddings to a 19-dimensional output. Note that unlike PNNs, the precise architecture of the task-specific part of the MTNN can be tuned for each individual task. We consider two MTNN architectures:
   (a) MTNN-1: All task-specific parts are 3-layer networks comparable to the PNN case.
   (b) MTNN-2: We manually tune the number of layers for each task and find best performance when the task-specific part contains 1 hidden layer for the ADD2x2 tasks and 3 hidden layers for the APPLY2x2 tasks.

For the MATH task, we build a purely neural baseline by replacing the task-specific part of the MTNN network with an LSTM. At each step, this network takes in the shared embeddings of the current symbol, updates an LSTM hidden state, and then proceeds to the next symbol. We make a classification of the final answer using the last hidden state of the LSTM. We find that we achieve best performance with a 3-layer LSTM with 1024 elements in each hidden state and dropout between layers. In addition, we investigate a Neural GPU baseline based on Kaiser & Sutskever (2016).
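For concreteness, a minimal sketch of one such column (my own rendering of the layer sizes stated above, not the authors' code):

import torch
import torch.nn as nn

class Column(nn.Module):
    # Purely neural baseline column for a 2x2 task (sketch).
    def __init__(self, n_aux=0, n_out=19):
        super().__init__()
        self.embed = nn.Sequential(          # shared across the 4 grid cells
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 10))
        body, in_dim = [], 4 * 10 + n_aux
        for _ in range(3):                   # 3 hidden layers of 128 neurons
            body += [nn.Linear(in_dim, 128), nn.ReLU()]
            in_dim = 128
        self.task = nn.Sequential(*body, nn.Linear(128, n_out))

    def forward(self, cells, aux=None):
        embs = [self.embed(c.flatten(1)) for c in cells]   # 4 images
        h = torch.cat(embs + ([aux] if aux is not None else []), dim=1)
        return self.task(h)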
0.5 PNN 0.5 MTNN-1 MTNN-2 NTPT 0.0 0 128 256 384 512 0 128 256 384 512 0.0 training example (1000s) training example (1000s) 0 128 256 384 512 training example (1000s) (a) (b) c"}, {"section_index": "9", "section_name": "5.1 LIFELONG LEARNING", "section_text": "Reverse transfer: Fig.6[a) focuses on the performance of NTPT on the first task (ADD2x2:top) The red bars indicate times where the the system was presented with an example from this task. Note that even when we have stopped presenting examples, the performance on this task continues. to increase as we train on later tasks - an example of reverse transfer. We verify that this is due tc continuous improvement of net_0 in later tasks by observing that the accuracy on the ADD2x2:top. task closely tracks measurements of the accuracy of net_0 directly on the digit classification task\nAvoidance of catastrophic forgetting: Fig. 6(b) shows the performance of the NTPT on the remaining ADD2x2 tasks. Both Fig.6(a) and (b) include results for the MTNN- 2 baseline (the best baseline for the ADD2x2 tasks). Note that whenever the dominant training task swaps from an ADD2x2 task to an ApPLY2x2 task the baseline's perfor mance on ADD2x2 tasks drops. This is because the shared perceptual network becomes corrupted by the change in task - an example of catastrophic forgetting. To try to limit\nFigure 6: Lifelong learning with NTPT. (a) top: the sequential learning schedule for all 8 tasks bottom: performance of NTPT (solid) and the MTNN-2 baseline (dashed) on the first ADD2x2 task.. (b) performance on the remaining ADD2x2 tasks. (c) Performance of all the baselines on the *:left. tasks.\nFirst we create a data set in a regime which best demonstrates the LPPBE problem. The most convincing demonstration of LPPBE requires a series of tasks for which there is insufficient data to learn independent solutions to all tasks and instead, success requires transferring knowledge from one task to the next. Empirically, we find that training on any individual ADD2x2 task with only 1k distinct 2 2 examples produces low accuracies of around 40 20% (measured on a held-out test set of 1Ok examples) for both the purely neural baselines and NTPT methods. Since none of our models can satisfactorily solve an ADD2x2 task independently in this regime, we work with this limited data set and argue that any success on these tasks during a lifetime of learning can be attributed to successful knowledge transfer. In addition, we check that in a data rich regime (e.g >4k examples) all of the baseline models and NTPT can independently solve each task with >80% accuracy. This indicates that the models all have sufficient capacity to represent satisfactory solutions and the challenge is to find these solutions during training.\nTo test knowledge transfer between tasks we train on batches of data drawn from a time-evolving prob. ability distribution over all 8 tasks in the ADD2x2 and ApPLy2x2 scenarios (see the top of Fig.|6(a)) During training, we observe the following key properties of the knowledge transfer achieved by NTPT:\ntask indep PNN MTNN-1 MTNN-2 NTPT top 35% 35% 26% 24% 87% left 32% 36% 38% 47% 87% bottom 34% 33% 40% 56% 86% right 32% 35% 44% 60% 86% top 38% 39% 40% 38% 98% left 39% 51% 41% 39% 100% bottom 39% 48% 41% 40% 100% right 39% 51% 42% 37% 100%\nFigure 7: Final accuracies on all 2 2 tasks for all models at the end of lifelong learning\nFinal performance: Fig.6(b) focuses on the ADD2x2:left and ADD2x2:left tasks to illustrate the. 
Final performance: Fig. 6(c) focuses on the ADD2x2:left and APPLY2x2:left tasks to illustrate the relative performance of the baselines described in Sec. 4. Note that although PNNs avoid catastrophic forgetting, there is no clear overall winner between the MTNN and PNN baselines. NTPT learns faster and to a higher accuracy than all baselines for all the tasks considered here. For clarity we only plot results for the *:left tasks: the other tasks show similar behavior, and the accuracies for all tasks at the end of the lifetime of learning are presented in Fig. 7."}, {"section_index": "10", "section_name": "5.2 GENERALIZATION", "section_text": "In the final experiment we take net_0/1 from the end of the NTPT 2 × 2 training and start training on the MATH scenario. For the NTPT model we train on arithmetic expressions containing only 2 digits (a sketch of how such expressions can be generated appears at the end of this subsection). The loopy structure of the MATH model introduces many local optima into the optimization landscape and only 2/100 random restarts converge on a correct program. We detect convergence to the correct program by a rapid increase in the accuracy on a validation set (typically occurring after around 30k training examples). Once the correct program is found, continuing to train the model mainly leads to further improvement in the accuracy of net_0, which saturates at 97.5% on the digit classification task. The learned source code generalizes perfectly to expressions containing any number of digits, and the remaining error on long expressions comes from the imperfect accuracy of net_0.

To pick a strong baseline for the MATH problem, we first perform a preliminary experiment with two simplifications from the case above: (1) rather than expecting strong generalization from just 2-digit training examples, we train candidate baselines with supervision on examples up to 5 digits in length, and (2) we remove the perceptual component of the task, presenting the digits and operators as one-hot vectors rather than images. Fig. 8(a) shows the generalization performance of the LSTM and Neural GPU (512-filter) baselines in this simpler setting after training to convergence (see footnote 4). Based on these results, we restrict attention to the LSTM baseline and return to the full task including the perceptual component. In the full MATH task, we initialize the embedding networks of each model using net_0/1 from the end of the NTPT 2 × 2 training. Fig. 8(b) shows generalization of the NTPT and LSTM models on expressions of up to 16 digits after training to convergence. We find that even though the LSTM shows surprisingly effective generalization when supplied supervision up to 5 digits, NTPT trained on only 2-digit expressions still offers better results.

[Figure 8 plots: accuracy (%) vs. digits in expression; (a) Neural GPU (43.8M), LSTM (21.1M) and TerpreT (32) on the non-perceptual task (92.8% and 25.0% marked); (b) LSTM-2digit, LSTM-5digit and NTPT-2digit on the full task (87.1% and 82.8% marked)]

Figure 8: Generalization behavior on MATH expressions. Solid dots indicate expression lengths used in training. We show results on (a) a simpler non-perceptual MATH task (numbers in parentheses indicate parameter count in each model) and (b) the MATH task including perception.

4 Note that Price et al. (2016) find similarly poor generalization performance for a Neural GPU applied to the similar task of evaluating arithmetic expressions involving binary numbers.
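To make the MATH training regime concrete, below is a hedged Python sketch of how 2-digit training expressions and longer test expressions might be generated. The choice of operators (+ and *), the left-to-right evaluation order and the mod-10 reduction of the answer are assumptions (the text does not specify them), and symbol strings stand in for the images used in the real task.

```python
import random

def random_expression(n_digits, rng=random):
    """Alternate digit and operator symbols: d op d op ... d."""
    symbols = [str(rng.randint(0, 9))]
    for _ in range(n_digits - 1):
        symbols.append(rng.choice(["+", "*"]))   # operator set is an assumption
        symbols.append(str(rng.randint(0, 9)))
    return symbols

def evaluate(symbols, modulus=10):
    """Left-to-right evaluation (no precedence), reduced mod 10 so the
    answer stays a single-digit class -- an assumption, not the paper's spec."""
    acc = int(symbols[0])
    for op, d in zip(symbols[1::2], symbols[2::2]):
        acc = acc + int(d) if op == "+" else acc * int(d)
        acc %= modulus
    return acc

train = [(e, evaluate(e)) for e in (random_expression(2) for _ in range(5))]
test = [(e, evaluate(e)) for e in (random_expression(16) for _ in range(2))]
print(train[0], test[0])
```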
Lifelong Machine Learning. We operate in the paradigm of Lifelong Machine Learning (LML) (Thrun 1994; 1995; Thrun & O'Sullivan 1996; Silver et al. 2013; Chen et al. 2015), where a learner is presented a sequence of different tasks and the aim is to retain and re-use knowledge from earlier tasks to more efficiently and effectively learn new tasks. This is distinct from related paradigms of multitask learning (presentation of a finite set of tasks simultaneously rather than in sequence (Caruana 1997; Kumar & Daume III 2012; Luong et al. 2015; Rusu et al. 2016)), transfer learning (transfer of knowledge from a source to a target domain without a notion of knowledge retention (Pan & Yang 2010)), and curriculum learning (training a single model for a single task of varying difficulty (Bengio et al. 2009)).

The challenge for LML with neural networks is the problem of catastrophic forgetting: if the distribution of examples changes during training, then neural networks are prone to forget knowledge gathered from early examples. Solutions to this problem involve instantiating a knowledge repository (KR) either directly storing data from earlier tasks or storing (sub)networks trained on the earlier tasks with their weights frozen. This knowledge base allows either (1) rehearsal on historical examples (Robins 1995), (2) rehearsal on virtual examples generated by the frozen networks (Silver & Mercer 2002; Silver & Poirier 2006) or (3) creation of new networks containing frozen sub-networks from the historical tasks (Rusu et al. 2016; Shultz & Rivest 2001).

To frame our approach in these terms, our KR contains partially-trained neural network classifiers which we call from learned source code. Crucially, we never freeze the weights of the networks in the KR: all parts of the KR can be updated during the training of all tasks - this allows us to improve performance on earlier tasks by continuing training on later tasks (so-called reverse transfer). Reverse transfer has been demonstrated previously in systems which assume that each task can be solved by a model parametrized by an (uninterpretable) task-specific linear combination of shared basis weights (Ruvolo & Eaton 2013). The representation of task-specific knowledge as source code, learning from weak supervision, and shared knowledge as deep neural networks distinguishes this work from the linear model used in Ruvolo & Eaton (2013).

Neural Networks Learning Algorithms. Recently, extensions of neural networks with primitives such as memory and discrete computation units have been studied to learn algorithms from input-output data (Graves et al. 2014; Weston et al. 2014; Joulin & Mikolov 2015; Grefenstette et al. 2015; Kurach et al. 2015; Kaiser & Sutskever 2016; Reed & de Freitas 2016; Bunel et al. 2016; Andrychowicz & Kurach 2016; Zaremba et al.
2016; Graves et al. 2016; Riedel et al. 2016; Gaunt et al. 2016; Feser et al. 2016). Whereas many of these works use a neural network controller managing a differentiable computer architecture, we flip this relationship: in our approach, a differentiable interpreter that is expressible as source code makes calls to neural network components.

The methods above, with the exception of Reed & de Freitas (2016) and Graves et al. (2016), operate on inputs of (arrays of) integers. However, Reed & de Freitas (2016) requires extremely strong supervision, where the learner is shown all intermediate steps to solving a problem; our learner only observes input-output examples. Reed & de Freitas (2016) also show the performance of their system in a multitask setting. In some cases, additional tasks harm performance of their model and they freeze parts of their model when adding to their library of functions. Only Bunel et al. (2016), Riedel et al. (2016) and Gaunt et al. (2016) aim to consume and produce source code that can be provided by a human (e.g. as a sketch of a solution) or returned to a human (to potentially provide feedback).

DISCUSSION

We have presented NEURAL TERPRET, a framework for building end-to-end trainable models that structure their solution as a library of functions represented as source code or neural networks. Experimental results show that these models can successfully be trained in a lifelong learning context, and they are resistant to catastrophic forgetting; in fact, they show that even after instances of earlier tasks are no longer presented to the model, performance still continues to improve.

Learning neural network models within differentiable interpreters has several benefits. First, learning programs imposes a bias that favors learning models that exhibit strong generalization, as illustrated by many works on program-like neural networks. Second, the source code components are interpretable by humans, allowing incorporation of domain knowledge describing the shape of the problem through the source code structure. Third, source code components can be inspected, and the neural network components can be queried with specific instances to inspect whether the shared classifiers have learned the expected mappings. A final benefit is that the differentiable interpreter can be seen as focusing the supervision. If a component is un-needed for a given task, then the differentiable interpreter can choose not to use the component, which shuts off any gradients from flowing to the component. We speculate that this could be a reason for the models being resistant to catastrophic forgetting, as the model either chooses to use a classifier, or ignores it (which leaves the component unchanged).

It is known that differentiable interpreters are difficult to train (Kurach et al. 2015; Neelakantan et al. 2016; Gaunt et al. 2016), and being dependent on differentiable interpreters is the primary limitation of this work. However, if progress can be made on more robust training of differentiable interpreters (perhaps extending ideas in Neelakantan et al. (2016); Feser et al. (2016)), then we believe there to be great promise in using the models we have presented here to build large lifelong neural networks."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Rich Caruana. Multitask learning. Machine Learning, 28:41-75, 1997.

John K. Feser, Marc Brockschmidt, Alexander L. Gaunt, and Daniel Tarlow. Neural functional programming. 2016.
Submitted to ICLR 2017.\nAlexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathar Taylor, and Daniel Tarlow. Terpret: A probabilistic programming language for program induction. CoRR.abs/1608.04428.2016. URLhttp://arxiv.0rg/abs/1608.04428\nEdward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning tc. transduce with unbounded memory. In Proceedings of the 28th Conference on Advances in Neura Information Processing Systems (NIPS), pp. 1828-1836, 2015\nMichael McCloskey and Neal J Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psvchology. of learning and motivation. 24:109-165. 1989\nAlex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska- Barwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.\nAbhishek Kumar and Hal Daume III. Learning task grouping and overlap in multi-task learning arXiv preprint arXiv:1206.6417. 2012\nMinh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task se quence to sequence learning. In International Conference on Learning Representations (ICLR) 2015.\nArvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent pro grams with gradient descent. In Proceedings of the 4th International Conference on Learning. Representations 2016, 2016.\nEric Price, Wojciech Zaremba, and Ilya Sutskever. Extensions and limitations of the neural gpu 2016. Submitted to ICLR 2017.\nRoger Ratcliff. Connectionist models of recognition memory: constraints imposed by learning anc forgetting functions. Psychological review, 97(2):285, 1990\nScott E. Reed and Nando de Freitas. Neural programmer-interpreters. 2016\nSebastian Riedel, Matko Bosnjak. and Tim Rocktaschel. Programming with a differentiable fort interpreter. CoRR, abs/1605.06640, 2016. URLhttp://arxiv.0rg/abs/1605.06640\nDaniel L Silver and Ryan Poirier. Machine life-long learning with csmtl networks. In AAAI, 2006\nDaniel L Silver, Qiang Yang, and Lianghao Li. Lifelong machine learning systems: Beyond learning algorithms. In AAAI Spring Symposium: Lifelong Machine Learning, pp. 49-55, 2013.\nSebastian Thrun. Is learning the n-th thing any easier than learning the first? In Advances in Neura Information Processing Systems 8 (NIPS), pp. 640-646, 1995.\nSinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345-1359, 2010.\nAnthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2) 123. 146.1995\nThomas R Shultz and Francois Rivest. Knowledge-based cascade-correlation: Using knowledge to speed learning. Connection Science, 13(1):43-72, 2001.\nDaniel L Silver and Robert E Mercer. The task rehearsal method of life-long learning: Overcom ing impoverished data. In Conference of the Canadian Society for Computational Studies of Intelligence, pp. 90-101. Springer, 2002."}] |
HkYhZDqxg | [{"section_index": "0", "section_name": "TREE-STRUCTURED DECODING RECURRENT NEURAL NETWORKS", "section_text": "Dayid AIyarez-Melis & Tommi S. Iaakkola\nComputer Science and Artificial Intelligence Lab MIT\ndavidam,tommi}@csail.mit.edu\nWe propose a neural network architecture for generating tree-structured objects. from encoded representations. The core of the method is a doubly recurrent neu-. ral network model comprised of separate width and depth recurrences that are. combined inside each cell (node) to generate an output. The topology of the tree. is modeled explicitly together with the content. That is, in response to an encoded. vector representation, co-evolving recurrences are used to realize the associated. tree and the labels for the nodes in the tree. We test this architecture in an encoder. decoder framework, where we train a network to encode a sentence as a vector. and then generate a tree structure from it. The experimental results show the ef-. fectiveness of this architecture at recovering latent tree structure in sequences and. at mapping sentences to simple functional programs.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recurrent neural networks have become extremely popular for modeling structured data. Key tc. their success is their ability to learn long-range temporal dependencies, their flexibility, and ease o. customization. These architectures are naturally suited for modeling sequences since the underlying. state evolution resulting from successive operations follows an inherently linear order (Williams & Zipser1995 Hochreiter & Schmidhuber1997). Indeed, they have been successfully adapted tc language modeling (Zaremba et al.[2015), machine translation (Sutskever et al.[2014) and conver sational agents (Vinyals & Le2015), among other applications..\nAlthough sequences arise frequently in practice, other structures such as trees or graphs do no1. naturally conform to a linear ordering. For example, natural language sentences or associated parse trees, programs, hierarchical structures in biology, or molecules are not inherently linear structures.. While sentences in natural language can be modeled as if they were linear sequences, the underlying. process is compositional (Frege 1892). Models that construct sentences compositionally should. derive an advantage from adopting a more appropriate inductive bias..\nThe flexibility and success of recurrent neural networks in modeling and generating sequential data has prompted efforts to adapt them to non-sequential data too. Recent work has focused on the application of neural architectures to hierarchical structures, albeit in limited ways. Much of this work has assumed that either the full tree structure is given (Socher et al.[|2012)[Tai et al.[[2015) or at least the nodes are (Socher & Lin]2011Chen & Manning]2014] Kiperwasser & Goldberg2016) In the former scenario, the network aggregates the node information in a manner that is coherent with a given tree structure while, in the latter, generation is reduced to an attachment problem, i.e., sequentially deciding which pairs of nodes to join with an edge until a tree is formed.\nThe full problem of decoding with structure, i.e., generating a tree-structured object with node labels from a given vector representation, has remained largely unexplored until recently. Recent efforts to adapt RNNs to this context have so far remained relatively close to their sequential counterparts. 
For example, in order to capture depth and branching in the tree, one can introduce special tokens (Dong & Lapata!2016) or use alternating RNNs coupled with external classifiers to predict branching (Zhang et al.|2016)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this work, we propose a novel architecture tailored specifically to tree-structured decoding. At th. heart of our approach is a doubly-recurrent (breadth and depth-wise recurrent) neural network whicl. separately models the flow of information between parent and children nodes, and between siblings. Each of these relationships is modeled with a recurrent module whose hidden states are update. upon observing node labels. Every node in the tree receives two hidden states, which are ther. combined and used to predict a label for that node. Besides maintaining separate but simultaneous. fraternal and paternal recurrences, the proposed architecture departs from previous methods in tha it explicitly models tree topology. Each node in the network has modules that predict, based or. the cell state, whether the node is terminal, both in terms of depth and width. Decoupling thes. decisions from the label prediction allows for a more concise formulation, which does not require. artificial tokens to be added to the tree to simulate branching.\nTo summarize, the main contributions of this paper are as follows.\nRecursive Neural Networks.Recursive neural networks (Socher & Lin]2011) Socher et al.]|2012 were proposed to model data with hierarchical structures, such as parsed scenes and natural language. sentences. Though they have been most successfully applied to encoding objects when their tree structured representation is given (Socher et al.2013), the original formulation by Socher & Lin. [2011) also considered using them to predict the structure (edges), albeit for the case where nodes. are given. Thus, besides their limited applicability due to their assumption of binary trees, recursive. neural networks are not useful for fully generating trees from scratch..\nTree-structured encoders. The Tree-LSTM of Tai et al.[(2015) is a generalization of long short. term memory networks (Hochreiter & Schmidhuber1997) to tree-structured inputs. Their mode). constructs a sentence representation bottom-up, obtaining at every step the representation of a node. in the tree from those of its children. In this sense, this model can be seen as a generalization of recursive neural networks to trees with degree potentially greater than two, with the additional long. range dependency modeling provided by LSTMs. They propose two methods for aggregating the. states of the children, depending on the type of underlying tree: N-ary trees or trees with unknowr. and potentially unbounded branching factor. TreeLSTMs have shown promising results for compo. sitional encoding of structured data, though by construction they cannot be used for decoding, since. they operate on a given tree structure.\nTree-structured decoders. Proposed only very recently, most tree-structured decoders rely on stacked on intertwined RNNs, and use heuristic methods for topological decisions during genera- tion. Closest to our method is the Top-down Tree LSTM ofZhang et al.(2016), which generates a tree from an encoded representation. Their method relies on 4 independent LSTMs, which act in alternation-as opposed to simultaneously in our approach--yielding essentially a standard LSTM that changes the weights it uses based on the position of the current node. 
In addition, their method\nWe test this novel architecture in various encoder-decoder frameworks, coupling it with sequential. encoders to predict tree structure from encoded vector representations of sequences. The experimen- tal results show the effectiveness of this approach at recovering latent structure in flattened string. representations of trees (Section|4.1) and at mapping from natural language descriptions of simple programs to abstract syntax trees (Section |4.2). In addition, we show that even for sequence-to sequence tasks such as machine translation, the proposed architecture exhibits desirable properties. such as invariance to structural changes and coarse-to-fine generation (Section4.3).\nWe propose a novel neural network architecture specifically tailored to tree-structured de coding, which maintains separate depth and width recurrent states and combines them t obtain hidden states for every node in the tree.. We equip this novel architecture with a mechanism to predict tree topology explicitly (a. opposed to implicitly by adding nodes with special tokens).. We show experimentally that the proposed method is capable of recovering trees fron. encoded representations and that it outperforms state-of-the-art methods in a task consisting. of mapping sentences to simple functional programs.\nprovides children with asymmetric parent input: \"younger' children receive information from the parent state only through the previous sibling's state. Though most of their experiments focus on the case where the nodes are given, they mention how to use their method for full prediction by in troducing additional binary classifiers which predict which of the four LSTMs is to be used. These classifiers are trained in isolation after the main architecture has been trained. Contrary to this approach, our method can be trained end-to-end in only one pass, has a simpler formulation and explicitly incorporates topological prediction as part of the functioning of each neuron.\nA similar approach is proposed byDong & Lapata(2016). They propose sEQ2TREE, an encoder. decoder architecture that maps sentences to tree structures. For the decoder, they rely on hierarchical. use of an LSTM, similar to Tai et al.(2015), but in the opposite direction: working top-down from. the root of the tree. To decide when to change levels in the hierarchy, they augment the training trees with nonterminal nodes labeled with a special token <n>, which when generated during decoding. trigger the branching out into a lower level in the tree. Similar to our method, they feed nodes with. hidden representations of their parent and sibling, but they do so by concatenating both states and. running them through a single recurrent unit, as opposed to our method, where these two sources. of information are handled separately. A further difference is that our approach does not require. artificial nodes with special tokens to be added to the tree, resulting in smaller trees..\nHierarchical Neural Networks for Parsing. Neural networks have also been recently introduced to the problem of natural language parsing (Chen & Manning2014) Kiperwasser & Goldberg 2016). In this problem, the task is to predict a parse tree over a given sentence. For this,Kiperwasser & Goldberg[(2016) use recurrent neural networks as a building block, and compose them recursively to obtain a tree-structured encoder. 
Starting from the leaves (words) they predict a parse tree with a projective bottom-up strategy, which sequentially updates the encoded vector representation of the tree and uses it to guide edge-attaching decisions. Though conceptually similar to our approach, their method relies on having access to the nodes of the tree (words) and only predicts its topology, so, similar to recursive neural networks, it cannot be used for a fully generative decoding."}, {"section_index": "3", "section_name": "3 DOUBLY RECURRENT NEURAL NETWORKS", "section_text": "Generating a tree-structured object from scratch using only an encoded representation poses several design challenges. First, one must decide in which order to generate the tree. If the nodes on the decoder side were given (such as in parsing), it would be possible to generate a tree bottom-up from these nodes (e.g. as Kiperwasser & Goldberg (2016) do). In the setting we are interested in, however, not even the nodes are known when decoding, so the natural choice is a top-down decoder, which starting from an encoded representation generates the root of the tree and then recursively generates the children (if any) of every node.

The second challenge arises from the asymmetric hierarchical nature of trees. Unlike the sequence-to-sequence setting where encoding and decoding can be achieved with analogous procedures, when dealing with tree-structured data these two involve significantly different operations. For example, an encoder that processes a tree bottom-up using information of a node's children to obtain its representation cannot be simply reversed and used as a decoder, since when generating the tree top-down, nodes have to be generated before their children are.

An additional design constraint comes from deciding what information to feed to each node. For sequences, the choice is obvious: a node should receive information from the node preceding or succeeding it (or both), i.e. there is a one-dimensional flow of information. In trees, there is an evident flow of information from parent to children (or vice-versa), but when generating nodes in a top-down order it seems unnatural to generate children in isolation: the label of one of them will likely influence what the states of the other children might be. For example, in the case of parse trees, generating a verb will reduce the chances of other verbs occurring in that branch.

With these considerations in mind, we propose an architecture tailored to tree decoding from scratch: top-down, recursive and doubly-recurrent, i.e. where both the ancestral (parent-to-children) and fraternal (sibling-to-sibling) flows of information are modeled with recurrent modules. Thus, the building block of a doubly recurrent neural network (DRNN) is a cell with two types of input states: one coming from its parent, updated and passed on to its descendants, and another one received from its previous sibling (see footnote 1), updated and passed on to the next one. We model the flow of information in the two directions with separate recurrent modules.

Formally, let $\mathcal{T} = \{V, E, \mathcal{X}\}$ be a connected labeled tree, where $V$ is the set of nodes, $E$ the set of edges and $\mathcal{X}$ are node labels (see footnote 2). Let $g^a$ and $g^f$ be functions which apply one step of the two separate RNNs. For a node $i \in V$ with parent $p(i)$ and previous sibling $s(i)$, the ancestral and fraternal hidden states are updated via

$$h_i^a = g^a(h_{p(i)}^a, x_{p(i)}) \qquad (1)$$

$$h_i^f = g^f(h_{s(i)}^f, x_{s(i)}) \qquad (2)$$

where $x_{s(i)}, x_{p(i)}$ are the vectors representing the previous sibling's and parent's values, respectively. Once the hidden depth and width states have been updated with these observed labels, they are combined to obtain a predictive hidden state:

$$h_i^{(pred)} = \tanh\left(U^f h_i^f + U^a h_i^a\right) \qquad (3)$$

where $U^f \in \mathbb{R}^{n \times D_f}$ and $U^a \in \mathbb{R}^{n \times D_a}$ are learnable parameters. This state contains combined information of the node's neighborhood in the tree, and is used to predict a label for it. In its simplest form, the network could compute the output of node $i$ by sampling from distribution

$$o_i = \operatorname{softmax}\left(W h_i^{(pred)}\right) \qquad (4)$$

In the next section, we propose a slight modification to (4) whereby topological information is included in the computation of cell outputs. After the node's output symbol $x_i$ has been obtained by sampling from $o_i$, the cell passes $h_i^a$ to all its children and $h_i^f$ to the next sibling (if any), enabling them to apply Eqs (1) and (2) to realize their states. This procedure continues recursively, until termination conditions (explained in the next section) cause it to halt."}, {"section_index": "4", "section_name": "3.1 TOPOLOGICAL PREDICTION", "section_text": "As mentioned before, the central issue with free-form tree construction is to predict the topology of the tree. When constructing the tree top-down, for each node we need to decide: (i) whether it is a leaf node (and thus it should not produce offspring) and (ii) whether there should be additional siblings produced after it.
Answering these two questions for every node allows us to construct a tree from scratch and eventually stop growing it.

Sequence decoders typically rely on special tokens to terminate generation (Sutskever et al. 2014). The token is added to the vocabulary and treated as a regular word. During training, the examples are padded with this token at the end of the sequence, and during testing, generation of this token signals termination. These ideas have been adopted by most tree decoders (Dong & Lapata 2016). There are two important downsides of using a padding strategy for topology prediction in trees. First, the size of the tree can grow considerably. While in the sequence framework only one stopping token is needed, a tree with $n$ nodes might need up to $O(n)$ padding nodes to be added. This can have important effects on training speed. The second reason is that a single stopping token selected competitively with other tokens requires one to continually update the associated parameters in response to any changes in the distribution over ordinary tokens so as to maintain topological control.

Based on these observations, we propose an alternative approach to stopping, in which topological decisions are made explicitly (as opposed to implicitly, with stopping tokens). For this, we use the predictive hidden state of the node $h_i^{(pred)}$ with a projection and sigmoid activation:

$$p_i^a = \sigma\left(u^a \cdot h_i^{(pred)}\right) \qquad (5)$$

$$p_i^f = \sigma\left(u^f \cdot h_i^{(pred)}\right) \qquad (6)$$

1 Unlike the "ancestral" line, the order within sibling nodes is ambiguous. While in abstract trees it is assumed that there is no such ordering, we assume that for the structures we are interested in learning there is always one: either chronological (the temporal order in which the nodes were generated) or latent (e.g. the grammatical order of the words in a parse tree with respect to their sentence representation).
2 We assume throughout that these values are given as class indicators $x_i \in \{1, \ldots, N\}$.

[Figure 1 diagram: left, a DRNN cell for node i receiving $(h_p^a, x_p)$ from its parent and $(h_s^f, x_s)$ from its previous sibling and emitting $o_i$, $p_i^a$, $p_i^f$; right, an encoder feeding a structure-unrolled DRNN over nodes 1-8]

Figure 1: Left: A cell of the doubly-recurrent neural network corresponding to node i with parent p and sibling s. Right: Structure-unrolled DRNN network in an encoder-decoder setting. The nodes are labeled in the order in which they are generated. Solid (dashed) lines indicate ancestral (fraternal) connections. Crossed arrows indicate production halted by the topology modules.
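Eqs. (1)-(7) can be traced in a few lines of numpy. The sketch below uses plain tanh updates as stand-ins for $g^a$ and $g^f$ (the experiments use LSTM/GRU modules), random placeholder weights, and greedy thresholding of the topology probabilities (the text also mentions sampling and beam search).

```python
import numpy as np

rng = np.random.default_rng(0)
V, Da, Df, n = 26, 32, 32, 32          # vocabulary and hidden sizes (illustrative)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Random placeholders for trained parameters.
Wa, Ea = rng.normal(0, .1, (Da, Da)), rng.normal(0, .1, (V, Da))  # ancestral RNN g^a
Wf, Ef = rng.normal(0, .1, (Df, Df)), rng.normal(0, .1, (V, Df))  # fraternal RNN g^f
Ua, Uf = rng.normal(0, .1, (n, Da)), rng.normal(0, .1, (n, Df))   # Eq. (3)
W = rng.normal(0, .1, (V, n))                                     # Eqs. (4)/(7)
ua, uf = rng.normal(0, .1, n), rng.normal(0, .1, n)               # Eqs. (5)-(6)
va, vf = rng.normal(0, .1, V), rng.normal(0, .1, V)               # Eq. (7) offsets

def drnn_cell(h_parent, x_parent, h_sib, x_sib):
    # Eqs. (1)-(2): vanilla-tanh stand-ins for g^a and g^f.
    ha = np.tanh(Wa @ h_parent + Ea[x_parent])
    hf = np.tanh(Wf @ h_sib + Ef[x_sib])
    # Eq. (3): predictive state combining both recurrences.
    h_pred = np.tanh(Uf @ hf + Ua @ ha)
    # Eqs. (5)-(6): explicit topology decisions, thresholded greedily here.
    alpha = sigmoid(ua @ h_pred) > 0.5      # has children?
    phi = sigmoid(uf @ h_pred) > 0.5        # has a next sibling?
    # Eq. (7): label distribution conditioned on the topology decisions.
    o = softmax(W @ h_pred + alpha * va + phi * vf)
    return ha, hf, o, alpha, phi

ha, hf, o, has_child, has_sibling = drnn_cell(np.zeros(Da), 0, np.zeros(Df), 0)
print(o.argmax(), has_child, has_sibling)
```

A full decoder applies this cell along a breadth-first traversal: $h^a$ is passed to every child, $h^f$ to the next sibling, and a branch stops growing whenever the corresponding topology decision is false.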
Note that these stopping strategies depart from the usual padding methods in a fundamental property: the decision to stop is made before instead of in conjunction with the label prediction. The rationale behind this is that the label of a node will likely be influenced not only by its context, but also by the type of node (terminal or non-terminal) where it is to be assigned. This is the case in language, for example, where syntactic constraints restrict the type of words that can be found in terminal nodes. For this purpose, we include the topological information as inputs to the label prediction layer. Thus, (4) takes the form

$$o_i = \operatorname{softmax}\left(W h_i^{(pred)} + \alpha_i v^a + \varphi_i v^f\right) \qquad (7)$$

where $\alpha_i, \varphi_i \in \{0, 1\}$ are binary variables indicating the topological decisions and $v^a, v^f$ are learnable offset parameters. During training, we use gold-truth values in (7), i.e. $\alpha_i = 1$ if node $i$ has children and $\varphi_i = 1$ if it has a succeeding sibling. During testing, these values are obtained from $p_i^a, p_i^f$ by sampling or beam-search. A schematic representation of the internal structure of a DRNN cell and the flow of information in a tree are shown in Figure 1."}, {"section_index": "5", "section_name": "3.2 TRAINING DRNNS", "section_text": "We train DRNNs with (reverse) back-propagation through structure (BPTS) (Goller & Kuechler 1996). In the forward pass, node outputs are computed in a top-down fashion on the structure-unrolled version of the network, following the natural (footnote 3) dependencies of the tree. We obtain error signal at the node level from the two types of prediction: label and topology. For the former, we compute cross-entropy loss of $o_i$ with respect to the true label of the node $x_i$. For the topological values $p_i^a$ and $p_i^f$, we compute binary cross-entropy loss with respect to gold topological indicators $\alpha_i, \varphi_i \in \{0, 1\}$. In the backward pass, we proceed in the reverse (bottom-up) direction, feeding into every node the gradients received from child and sibling nodes and computing internally gradients with respect to both topology and label prediction. Further details on the backpropagation flow are provided in the Appendix.

Note that the way BPTS is computed implies an underlying decoupled loss function

$$\mathcal{L}(\hat{x}) = \sum_{i \in V} \mathcal{L}^{label}(\hat{x}_i, x_i) + \mathcal{L}^{topo}(\hat{p}_i, p_i) \qquad (8)$$

The decoupled nature of this loss allows us to weigh these two objectives differently, to emphasize either topology or label prediction accuracy. Investigating the effect of this is left for future work.

3 The traversal is always breadth-first starting from the root, but the order in which sibling nodes are visited might depend on the specific problem. If the nodes of the tree have an underlying order (such as in dependency parse trees), it is usually desirable to preserve this order.

[Figure 2 diagram: trees generated for training-set sizes N=500, 1000, 3500 and 4000 next to the gold tree, each rooted at ROOT]

Figure 2: Trees generated by the DRNN decoder trained on subsets of size N of the synthetic dataset, for a test example with description "ROOT B W F J V".

As is common with sequence generation, during training we perform teacher forcing: after predicting the label of a node and its corresponding loss, we replace it with its gold value, so that children and siblings receive the correct label for that node.
Analogously, we obtain the probabilities $p_i^a$ and $p_i^f$, compute their loss, and replace them with the ground-truth variables $\alpha_i, \varphi_i$ for all downstream computations. Addressing this exposure bias by mixing ground truth labels with model predictions during training (Venkatraman et al. 2015) or by incremental hybrid losses (Ranzato et al. 2016) is left as an avenue for future work."}, {"section_index": "6", "section_name": "4.1 SYNTHETIC TREE RECOVERY", "section_text": "In our first set of experiments we evaluate the effectiveness of the proposed architecture to recover trees from flattened string representations. For this, we first generate a toy dataset consisting of simple labeled trees. To isolate the effect of label content from topological prediction, we take a small vocabulary consisting of the 26 letters of the English alphabet. We generate trees in a top-down fashion, conditioning the label and topology of every node on the state of its ancestors and siblings. For simplicity, we use a Markovian assumption on these dependencies, modeling the probability of a node's label as depending only on the label of its parent and the last sibling generated before it (if any). Conditioned on these two inputs, we model the label of the node as coming from a multinomial distribution over the alphabet with a Dirichlet prior. To generate the topology of the tree, we model the probability of a node having children and a next-sibling as depending only on its label and the depth of the tree. For each tree we generate a string representation by traversing it in breadth-first preorder, starting from the root. The labels of the nodes are concatenated into a string in the order in which they were visited, resulting in a string of $|T|$ symbols. We create a dataset of 5,000 trees with this procedure, and split it randomly into train, validation and test sets (with an 80%/10%/10% split). Further details on the construction of this dataset are provided in the Appendix.

The task consists of learning a mapping from strings to trees, and using this learned mapping to recover the tree structure of the test set examples, given only their flattened representation. To do so, we use an encoder-decoder framework, where the strings are mapped to a fixed-size vector representation using a recurrent neural network. For the decoder, we use a DRNN with LSTM modules, which given the encoded representation generates a tree. We choose hyper-parameters with cross-validation. Full training details are provided in the Appendix.

Measuring performance only in terms of exact recovery would likely yield near-zero accuracies for most trees. Instead, we opt for a finer-grained metric of tree similarity that gives partial credit for correctly predicted subtrees. Treating tree generation as a retrieval problem, we evaluate the quality of the predicted tree in terms of the precision and recall of recovering nodes and edges present in the gold tree (a sketch of this metric follows below). Thus, we penalize both missing and superfluous components. As baseline, we induce a probabilistic context-free grammar (PCFG) on the full training data and use it to parse the test sentences. Note that unlike the DRNN, this parser has direct access to the sentence representation and thus its task is only to infer the tree structure on top of it, so this is indeed a strong baseline.
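The node/edge retrieval metric described above can be implemented directly; the sketch below is one reading of it, matching labels as multisets (whether the paper matches sets or multisets is an assumption).

```python
from collections import Counter

def nodes_and_edges(tree):
    """tree = (label, [children]); multisets of node labels and of
    (parent_label, child_label) edge pairs."""
    label, children = tree
    nodes, edges = Counter([label]), Counter()
    for child in children:
        edges[(label, child[0])] += 1
        child_nodes, child_edges = nodes_and_edges(child)
        nodes += child_nodes
        edges += child_edges
    return nodes, edges

def precision_recall(pred, gold):
    """Per-component retrieval scores; multiset intersection penalizes both
    missing and superfluous nodes/edges."""
    scores = {}
    for name, p, g in zip(("node", "edge"), nodes_and_edges(pred), nodes_and_edges(gold)):
        tp = sum((p & g).values())
        scores[name] = (tp / max(sum(p.values()), 1), tp / max(sum(g.values()), 1))
    return scores

gold = ("A", [("B", []), ("C", [("D", [])])])
pred = ("A", [("B", []), ("D", [])])
print(precision_recall(pred, gold))  # node: (1.0, 0.75); edge: (0.5, 0.33)
```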
Figure 3 shows the results on the test set. Training on the full data yields node and edge retrieval F1-scores of 75% and 71%, respectively, the latter considerably above the baseline (see footnote 4). This 4% gap can be explained by correct nodes being generated in the wrong part of the tree, as in the example in Figure 2. The second plot in Figure 3 shows that although small trees are recovered more accurately, precision decays slowly with tree size, with depth accounting for the largest effect (Figure 4).

4 Since the PCFG parser has access to the nodes by construction, node accuracy for the baseline method is irrelevant and thus omitted from the analysis.

[Figure 3 plots: left, F1-score vs. number of training examples (500-4000) for node, edge and baseline-edge retrieval; right, node and edge precision vs. tree size (# nodes)]

Figure 3: Left: F1-score for models trained on randomly sampled subsets of varying size, averaged over 5 repetitions. Right: Node (first column) and edge (second) precision as a function of tree size.

[Figure 4 plots: node and edge precision vs. tree depth (2-8 nodes) and tree width (2-6 nodes)]

Figure 4: Node and edge precision as a function of tree depth (left figure) and width (right).

Tree structures arise naturally in the context of programs. A typical compiler takes human-readable source code (expressed as sequences of characters) and transforms it into an executable abstract syntax tree (AST). Source code, however, is already semi-structured. Mapping natural language sentences directly into executable programs is an open problem, which has received considerable interest in the natural language processing community (Kate et al. 2005; Branavan et al. 2009).

The IFTTT dataset (Quirk et al. 2015) is a simple testbed for language-to-program mapping. It consists of if-this-then-that programs (called recipes) crawled from the IFTTT website, paired with natural language descriptions of their purpose. The recipes consist of a trigger and an action, each defined in terms of a channel (e.g. "Facebook"), a function (e.g. "Post a status update") and potentially arguments and parameters. An example of a recipe and its description are shown in Figure 5 (a code sketch of this tree structure follows Table 1 below). The data is user-generated and extremely noisy, which makes the task significantly challenging.

[Figure 5 layout: Recipe "Save photos you're tagged in on Facebook to Dropbox"; Root -> IF (TRIGGER), THEN (ACTION); (a) Channels: Facebook, Dropbox; (b) Functions: You_are_tagged_in_a_photo, Add_file_from_URL; (c) Arguments: File URL, File name, Dropbox Folder Path; (d) Parameters: {{ImageSource}}, "{{CreatedAt}} - {{From}} - {{Caption}}", {{Facebook}}]

Figure 5: Example recipe from the IFTTT dataset. The description (above) is a user-generated natural language explanation of the if-this-then-that program (below).

Table 1: Results on the IFTTT task. Left: non-English and unintelligible examples removed (2,262 recipes). Right: examples for which at least 3+ humans agree with gold (758 recipes).

              non-English removed            3+ human agreement
Method       Channel   +Func     F1        Channel   +Func     F1
retrieval      36.8     25.4    49.0         43.3     32.3    56.2
phrasal        27.8     16.4    39.9         37.2     23.5    45.5
sync           26.7     15.4    37.6         36.5     23.5    45.5
classifier     64.8     47.2    56.5         79.3     66.2    65.0
posclass       67.2     50.4    57.7         81.4     71.0    66.5
SEQ2SEQ        68.8     50.5    60.3         87.8     75.2    73.7
SEQ2TREE       69.6     51.4    60.4         89.7     78.4    74.2
GRU-DRNN       70.1     51.2    62.7         89.9     77.6    74.1
LSTM-DRNN      74.9     54.3    65.2         90.1     78.2    77.4
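For concreteness, the following sketch builds the Figure 5 recipe as a labeled tree of the kind the DRNN decoder must produce. The Node class and the exact argument-to-parameter pairing are illustrative assumptions based on the figure.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

    def add(self, label):
        child = Node(label)
        self.children.append(child)
        return child

# The Figure 5 recipe as a tree: root -> branch -> channel -> function ->
# arguments -> parameters. The argument/parameter pairing is a plausible
# reading of the figure, not ground truth.
root = Node("ROOT")
root.add("IF").add("Facebook").add("You_are_tagged_in_a_photo")
action_fn = root.add("THEN").add("Dropbox").add("Add_file_from_URL")
action_fn.add("File_URL").add("{{ImageSource}}")
action_fn.add("File_name").add("{{CreatedAt}} - {{From}} - {{Caption}}")
action_fn.add("Dropbox_Folder_Path").add("{{Facebook}}")

def depth(node):
    """Longest root-to-leaf path, counting nodes."""
    return 1 + max((depth(c) for c in node.children), default=0)

print(depth(root))  # 6: root, branch, channel, function, argument, parameter
```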
We approach this task using an encoder-decoder framework. We use a standard RNN encoder, either an LSTM or a GRU (Cho et al. 2014), to map the sentence to a vector representation, and we use a DRNN decoder to generate the AST representation of the recipe. We use the original data split, which consists of 77,495 training, 5,171 development and 4,294 test examples. For evaluation, we use the same metrics as Quirk et al. (2015), who note that computing exact accuracy on such a noisy dataset is problematic, and instead propose to evaluate the generated AST in terms of F1-score on the set of recovered productions (a sketch of this metric appears at the end of this subsection). In addition, they compute accuracy at the channel level (i.e. when both channels are predicted correctly) and at the function level (both channels and both functions predicted correctly).

We compare our methods against the various extraction and phrase-based machine translation baselines of Quirk et al. (2015) and the methods of Dong & Lapata (2016): SEQ2SEQ, a sequence-to-sequence model trained on flattened representations of the AST, and SEQ2TREE, a token-driven hierarchical RNN. Following these two works, we report results on two noise-filtered subsets of the data: one with all non-English and unintelligible recipes removed and the other one with recipes for which at least three humans agreed with the gold AST. The results are shown in Table 1. In both subsets, DRNNs perform on par or above previous approaches, with LSTM-DRNN achieving significantly better results. The improvement is particularly evident in terms of F1-score, which is the only metric used by previous approaches that measures global tree reconstruction accuracy. To better understand the quality of the predicted trees beyond the function level (i.e. (b) in Figure 5) we computed node accuracy on the arguments level. Our best performing model, LSTM-DRNN, achieves a Macro F1 score of 51% (0.71 precision, 0.40 recall) over argument nodes, which shows that the model is reasonably successful at predicting structure even beyond depth three. The best performing alternative model, SEQ2TREE, achieves a corresponding F1 score of 46%.
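A minimal implementation of the production-level F1 metric mentioned above might look as follows; trees are nested (label, children) tuples, and treating the productions as a set (rather than a multiset) is an assumption about the metric of Quirk et al. (2015).

```python
def productions(tree):
    """tree = (label, [children]); yields (parent_label, child_label) pairs."""
    label, children = tree
    for child in children:
        yield (label, child[0])
        yield from productions(child)

def production_f1(pred, gold):
    """F1 between the sets of productions of a predicted and a gold tree."""
    p, g = set(productions(pred)), set(productions(gold))
    tp = len(p & g)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(p), tp / len(g)
    return 2 * prec * rec / (prec + rec)

gold = ("ROOT", [("IF", [("Facebook", [("You_are_tagged_in_a_photo", [])])]),
                 ("THEN", [("Dropbox", [("Add_file_from_URL", [])])])])
pred = ("ROOT", [("IF", [("Facebook", [("You_are_tagged_in_a_photo", [])])]),
                 ("THEN", [("Dropbox", [("Add_file_from_link", [])])])])
print(round(production_f1(pred, gold), 3))  # 0.833: five of six productions recovered
```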
"}, {"section_index": "7", "section_name": "4.3 MACHINE TRANSLATION", "section_text": "In our last set of experiments, we offer a qualitative evaluation of DRNNs in the context of machine translation. Obtaining state-of-the-art results in machine translation requires highly-optimized architectures and large parallel corpora. This is not our goal. Instead, we investigate whether decoding with structure can bring benefits to a task traditionally approached as a sequence-to-sequence problem. For this reason, we consider a setting with limited data: a subset of the WMT14 dataset consisting of about 50K English-French sentence pairs (see the Appendix for details) along with dependency parses of the target (English) side.

We train a sequence-to-tree model using an LSTM encoder and a DRNN decoder as in the previous experiments. A slight modification here is that we distinguish left and right children in the tree, using two symmetric width-modules $g^L$, $g^R$ that produce children from the parent outwards. With this, children are lexically ordered, and therefore trees can be easily and unambiguously projected back into sentences. We compare our model against a sequence-to-sequence architecture of similar complexity (in terms of number of parameters) trained on the same data using the optimized OpenNMT library (Klein et al. 2017). For decoding, we use a simple best-of-k sampling scheme for our model, and beam search for the SEQ2SEQ models.

First, we analyze the quality of translations as a function of the maximum allowed target sentence "size". The notion of size for a sequence decoder is simply the length, while for DRNN we use depth instead, so as to tap into the inherent granularity at which sentences can be generated from this architecture. Two such examples are shown in Table 2. Since DRNN topology has been trained to mimic dependency parses top-down, the decoder tends to first generate the fundamental aspects of the sentence (verb, nouns), leaving less important refinements for deeper structures down in the tree. The sequence decoder, in contrast, is trained for left-to-right sequential generation, and thus produces less informative translations under max-length restrictions.

Source          "produit differentes reponses qui changent avec      "je ne sais jamais quoi
                le temps selon nos experiences et nos relations"      dire dans ces cas la"
SEQ2SEQ: l = 1  a                                                     I
         l = 4  with the different actions                            I do
         l = 8  with the different actions who change with            I do not know what to say.
DRNN:    d = 1  answers                                               know
         d = 2  different answers change                              but i do not know.
         d = 3  product the different answers change .                but i do not know to say.

Table 2: Translations at different resolutions (size constraints imposed during decoding) for two example sentences.

In our second experiment we investigate the decoders' ability to entertain natural paraphrases of sentences. If we keep the semantic content of a sentence fixed and only change its grammatical structure, it is desirable that the decoder would assign nearly the same likelihood to the new sentence. One way to assess this invariance is to compare the relative likelihood that the model assigns to the gold sentence in comparison to its paraphrase. To test this, we take 50 examples from the WMT test split and manually generate paraphrases with various types of structural alterations (see details in the Appendix). For each type of decoder, we measure the relative change (in absolute value) of the log-likelihood resulting from the perturbation. All the models we compare have similar standard deviation (40 ± 20) of log-likelihood scores over these examples, so the relative changes in the log-likelihood remain directly comparable. For each architecture we train two versions of different sizes, where the sizes are balanced in terms of the number of parameters across the architectures. The results in Figure 6 show that DRNNs exhibit significantly lower log-likelihood change, suggesting that, as language models, they are more robust to natural structural variation than their SEQ2SEQ counterparts.

[Figure 6: horizontal bar chart of log-likelihood relative change (%), from 0 to 100, for DRNN (Small), DRNN (Large), Seq2Seq (Large) and Seq2Seq (Small)]

Figure 6: Relative change in log-likelihood under perturbation."}, {"section_index": "8", "section_name": "5 DISCUSSION AND FUTURE WORK", "section_text": "We have presented doubly recurrent neural networks, a natural extension of (sequential) recurrent architectures to tree-structured objects. This architecture models the information flow in a tree with two separate recurrent modules: one carrying ancestral information (received from parent and passed on to offspring) and the other carrying fraternal information (passed from sibling to sibling).
The topology of the tree is modeled explicitly and separately from the label prediction, with modules that given the state of a node predict whether it has children and siblings.\nThe experimental results show that the proposed method is able to predict reasonable tree structure. from encoded vector representations. Despite the simple structure of the IFTTT trees, the result. on that task suggest a promising direction of using DrNNs for generating programs or executable. queries from natural language. On the other hand, the results on the toy machine translation tasl. show that even when used to generate sequences, DRNN's exhibit desirable properties, such as in variance over structural modifications and the ability to perform coarse-to-fine decoding. In orde. to truly use this architecture for machine translation, the approach must be scaled by resorting t batch processing in GPU. This is possible since forward and backward propagation are computec. sequentially along tree traversal paths so that inputs and hidden states of parents and siblings can be. grouped into tensors and operated in batch. We leave this as an avenue for future work..\nproduit differentes reponses qui. 'je ne sais jamais quoi. Source changent avec le temps selon nos. dire dans ces cas la'. experiences et nos relations \". SEQ2SEQ: l = 1 a 1 l = 4 with the different actions. I do l = 8 with the different actions who change with I do not know what to say. DRNN: d = 1 answers know d = 2 different answers change but i do not know. d = 3 product the different answers change . but i do not know to say.\nnge un- Table 2: Translations at different resolutions (size constraints im- rbation. posed during decoding) for two example sentences.."}, {"section_index": "9", "section_name": "ACKNOWLEDGEMENTS", "section_text": "DA-M acknowledges support from a CONACYT fellowship. The authors would like to thank the anonymous reviewers for their constructive comments..\nGottlob Frege. Uber Sinn und Bedeutung. Zeitschrift fur Philos. und Philos. Krit., (1):25-50, 1892\nSepp Hochreiter and Jurgen Jurgen Schmidhuber. Long short-term memory. Neural Comput., 9(8): 1-32, 1997. 1SSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735\nDiederik Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. Int. Conf. Learn Represent., pp. 1-13, 2014. URLhttp://arxiv.0rg/abs/1412.6980\nMarc' Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence Level Train ing with Recurrent Neural Networks. In ICLR, pp. 1-15, 2016. URL http: / /arxiv. org/\nKyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the Proper- ties of Neural Machine Translation: Encoder-Decoder Approaches. Proc. SssT-8, Eighth Work. Syntax. Semant. Struct. Stat. Transl., pp. 103-111, 2014. URL http://arxiv.org/pdf/. 1409.1259v2.pdf\nRj Kate, Yw Wong, and Rj Mooney. Learning to transform natural to formal languages. In Proc. Natl. Conf. Artif. Intell., volume 20, pp. 1062-1068, 2005. ISBN 1-57735-236-x. URLhttp: //www.aaai.org/Librarv/AAAI/2005/aaai05-168.php\nR Socher and Cc Lin. Parsing natural scenes and natural language with recursive neural network In EMNLP, pp. 129-136, 2011. ISBN 9781450306195. doi: 10.1007/978-3-540-87479-9\nKai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved Semantic Representa. tions From Tree-Structured Long Short-Term Memory Networks. In Proc. 53rd Annu. Meet Assoc. Comput. Linguist. 7th Int. Jt. Conf. Nat. Lang. Process., pp. 1556-1566, 2015. ISBN. 9781941643723. 
URLhttp://arxiv.0rg/abs/1503.0075\nArun Venkatraman, Martial Hebert, and J Andrew Bagnell. Improving Multi-step Prediction o. Learned Time Series Models. Twenty-Ninth AAAI Conf. Artif. Intell.. pp. 3024-3030. 2015\nOrioi Vinyals and Quoc V. Le. A Neural Conversational Model. arXiv, 37, 2015\nRonald J. Williams and David Zipser. Gradient-based learning algorithms for recurrent network. and their computational complexity. Back-propagation Theory, Archit. Appl., pp. 433-486, 1995 doi: 10.1080/02673039508720837.\nXingxing Zhang, Liang Lu, and Mirella Lapata. Top-down Tree Long Short-Term Memory Net works. In NAACL-HLT-2016, pp. 310-320, 2016"}, {"section_index": "10", "section_name": "A VARIATIONS ON TOPOLOGY PREDICTION", "section_text": "Besides the topology prediction approach presented in Section|3.1 we experimented with two addi tional variations of the proposed doubly-recurrent neuron: (i) using tokens to trigger both depth an width termination (i.e. implicit topology prediction) and (ii) using tokens for width-stopping deci sion, but predict explicitly depth termination (single topology prediction). Recall that in the mode proposed in Section3.1 both decisions are explicit (double topology prediction). The neurons i each of these alternative formulations are depicted in Figure[7 In order to train these two alternativ models, we add special stopping tokens to the vocabulary, and we pad the training with additiona nodes labeled with this token. Besides requiring larger trees and resulting in slower training, w empirically observed alternatives (i) and (ii) to result in worse performance. We hypothesize tha this has to do with the fact that when using token-based stopping, topological and label predictio decisions are confounded, which results in less efficient learning.\nha Xp ha X ha Xp hf h Pi O ha ha Pi ha Pi"}, {"section_index": "11", "section_name": "B.1 BACKPROPAGATION WITH DRNN'S", "section_text": "The gradients of the input ancestral and fraternal hidden states are then passed on to the previous. sibling and parent. When nodes have more than one child, we combine gradients from multiple children by averaging them. This procedure is repeated until the root note is reached, after which a. single (ancestral state) gradient is passed to the encoder.\nFigure 7: A single unit in each of the three alternative versions of the doubly-recurrent neural net. work, for node i with parent p and sibling s. Left: No explicit topology prediction, Middle: single (ancestral) topology prediction, Right: double (ancestral and fraternal) topology prediction. The top. (left) incoming arrows represent the input and state received from the parent node (previous node,. respectively).\nDuring training, we do the forward pass over the trees in breadth-first preorder, feeding into every node an ancestral and a fraternal state. For computational efficiency, before passing on the ancestral state to the offspring, we update it through the RNN using the current node's label, so as to avoid repeating this step for every child node. After the forward pass is complete, we compute label (cross-entropy) and topological (binary cross-entropy) loss for every node. In the backward pass. we compute in this order:\n1. Gradient of the current node's label prediction loss with respect to softmax layer parameters W, va, vf: VeL(xi,Xi). 2. Gradients of topological prediction variable loss with respect to sigmoid layer parameters: VeL(p,t) and VL(pf,t) 3. Gradient of predictive state layer parameters with respect to h(pred) 4. 
Gradient of predicted ancestral and fraternal hidden states with respect to the parameters of $g^f$ and $g^a$."}, {"section_index": "12", "section_name": "B.2 MODEL SPECIFICATION AND TRAINING PARAMETERS", "section_text": "The best parameters for all tasks are chosen by performance on the validation sets. We perform early stopping based on the validation loss. For the IFTTT task, we initialize word embeddings with pretrained GloVe vectors (Pennington et al. 2014). For both tasks we clip gradients when the absolute value of any element exceeds 5. We regularize with a small penalty $\rho$ on the $l_2$ norm of the parameters. We train all methods with ADAM (Kingma & Ba 2014), with initial learning rate chosen by cross-validation. The parameter configurations that yielded the best results and were used for the final models are shown in Table 3. Details about the four models used for the machine translation task are shown in Table 4.

Table 3: Hyperparameter choice for DRNNs in the synthetic and IFTTT tasks.

Task        Encoder   Dim   Batch   Learning Rate   Regularization ρ
synthetic   LSTM      50    20      0.05            1x10-5
IFTTT       GRU       150   35      0.06            1x10-4
IFTTT       LSTM      150   35      0.05            5x10-4

Table 4: Models used in the machine translation task.

Model             Encoder   Decoder                 Dim   RNN Layers   Batch
SEQ2SEQ (Small)   LSTM      LSTM                    150   1            64
SEQ2SEQ (Large)   LSTM      LSTM                    300   3            64
DRNN (Small)      LSTM      DRNN-GRU (Left-Right)   150   1            32
DRNN (Large)      LSTM      DRNN-GRU (Left-Right)   300   1            32

We generate trees in a top-down fashion, conditioning the label and topology of every node on the state of its ancestors and siblings. For simplicity, we use a Markovian assumption on these dependencies, modeling the probability of a node's label as depending only on the label of its parent $p(i)$ and the last sibling $s(i)$ generated before it (if any). Conditioned on these two inputs, we model the label of the node as coming from a multinomial distribution over the alphabet:

$$w_i \sim P(w \mid w_{p(i)}, w_{s(i)}) = \text{Multi}(\theta_{w_{p(i)}, w_{s(i)}})$$

where $\theta_{w_{p(i)}, w_{s(i)}}$ are class probabilities drawn from a Dirichlet prior with parameter $\alpha_V$. On the other hand, we denote by $b_i^a$ the binary variable indicating whether node $i$ has descendants, and by $b_i^f$ that indicating whether it has an ensuing sibling. We model these variables as depending only on the label of the current node and its position in the tree:

$$P(b_i^a \mid T) = P(b_i^a \mid w_i, D_i) = \text{Bernoulli}(p_{w_i}^a \cdot g^a(D_i))$$

$$P(b_i^f \mid T) = P(b_i^f \mid w_i, W_i) = \text{Bernoulli}(p_{w_i}^f \cdot g^f(W_i))$$

where $D_i$ is the depth of node $i$ and $W_i$ its width, defined as its position among the children of its parent $p(i)$. Intuitively, we want to make $P(b = 1 \mid T)$ decrease as we go deeper and further along the branches of the tree, so as to control its growth. Thus, we model $g^a$ and $g^f$ as decreasing functions with geometric decay, namely $g^a(D) = (\gamma_a)^D$ and $g^f(W) = (\gamma_f)^W$, with $\gamma_a, \gamma_f \in (0, 1)$. For the label-conditioned branching probabilities $P(b^a \mid w_i)$ and $P(b^f \mid w_i)$, we use Bernoulli distributions with probabilities drawn from beta priors with parameters $(\alpha_a, \beta_a)$ and $(\alpha_f, \beta_f)$, respectively.

In summary, we use the following generative procedure to grow the trees (a runnable sketch follows the list):

1. For each $w_i \in V$, draw $p_{w_i}^a \sim \text{Beta}(\alpha_a, \beta_a)$ and $p_{w_i}^f \sim \text{Beta}(\alpha_f, \beta_f)$.
2. For each pair $(w_i, w_j)$, draw $\theta_{w_i, w_j} \sim \text{Dir}(\alpha_V)$.
3. While there is an unlabeled non-terminal node $i$, do:
   - Sample a label for $i$ from $w^* \sim P(w \mid w_{p(i)}, w_{s(i)}) = \text{Multi}(\theta_{w_{p(i)}, w_{s(i)}})$.
   - Draw $b^a \sim P(b^a \mid w^*, D) = \text{Bernoulli}(p_{w^*}^a \cdot g^a(D))$, where $D$ is the current depth. If $b^a = 1$, generate a node $k$, set $p(k) = i$, and add it to the queue.
   - Draw $b^f \sim P(b^f \mid w^*, W) = \text{Bernoulli}(p_{w^*}^f \cdot g^f(W))$, where $W$ is the current width. If $b^f = 1$, generate a node $k$, set $s(k) = i$, and add it to the queue.

Note that this generative process does create a dependence between the topology and content of the trees (since the variables $b^a$ and $b^f$ depend on the content of the tree via their dependence on the label of their corresponding node). However, the actual process by which labels and topological decisions are generated relies on separate mechanisms. This is a natural assumption which is reasonable to expect in practice.
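The generative procedure above can be written compactly as a recursive sampler. The sketch below follows the priors and decay rates given in the surrounding text; the Dirichlet concentration (0.1) and the max_depth cap are assumptions added to keep the example short and terminating.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHABET = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

# Per-label branching probabilities from the Beta priors, and the decay
# rates, using the parameter values given in the text below.
p_child = {w: rng.beta(0.25, 1.0) for w in ALPHABET}
p_sib = {w: rng.beta(7.0, 2.0) for w in ALPHABET}
gamma_a, gamma_f = 0.6, 0.9
theta = {}  # label distributions theta_{parent,sibling}, drawn lazily

def label_dist(parent, sibling):
    if (parent, sibling) not in theta:
        theta[(parent, sibling)] = rng.dirichlet(0.1 * np.ones(len(ALPHABET)))
    return theta[(parent, sibling)]

def grow(parent="ROOT", sibling=None, depth=1, max_depth=6):
    """Top-down Markovian generation: label ~ Multi(theta); children and
    siblings ~ Bernoulli with geometric decay in depth and width."""
    label = str(rng.choice(ALPHABET, p=label_dist(parent, sibling)))
    node = {"label": label, "children": []}
    if depth < max_depth and rng.random() < p_child[label] * gamma_a ** depth:
        prev, width, more = None, 1, True
        while more:  # keep producing siblings while b^f fires
            child = grow(label, prev, depth + 1)
            node["children"].append(child)
            prev = child["label"]
            more = rng.random() < p_sib[prev] * gamma_f ** width
            width += 1
    return node

print(grow())
```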
The choice of prior parameters is done drawing inspiration from natural language parse trees. We want nodes to have low but diverse probabilities of generating children, so we seek a slow-decaying distribution with most mass allocated in values close to 0. For this, we use $(\alpha_a, \beta_a) = (0.25, 1)$. For sibling generation, we use $(\alpha_f, \beta_f) = (7, 2)$, which yields a distribution concentrated in values close to 1, so that nodes have on average a high and similar probability of producing siblings. Since we seek trees that are wider than they are deep, we use decay parameters $\gamma_a = 0.6$, $\gamma_f = 0.9$. Finally, we use a 10:1 ratio for the parent-sibling label prior $\alpha_V$, favoring non-uniform interactions. Using this configuration, we generate 5000 sentence-tree pairs, which we split into training (4000 examples), validation (500) and test (500) sets. The characteristics of the trees in the dataset are summarized in Table 5.

Table 5: Synthetic tree dataset statistics. Tree size is measured in number of nodes, depth is the largest path from the root node to a leaf and width is the maximum number of children for any node in the tree. The values reported correspond to means with one standard deviation in parentheses.

Fold    Examples   Size          Depth         Width
train   4000       3.94 (3.38)   1.42 (0.66)   2.89 (1.71)
dev     500        4.13 (3.21)   1.46 (0.67)   2.91 (1.76)
test    500        3.64 (3.21)   1.32 (0.61)   2.80 (1.71)

The IFTTT dataset comes with a script to generate the data by crawling and parsing the recipes. Unfortunately, by the time we ran the script many recipes had been removed or changed. We therefore resorted to the original dataset used by Quirk et al. (2015). We converted these recipes into our tree format, assigning a node to each element in the first three levels (channels, functions and arguments, see Figure 5). For the parameters level, many recipes have sentences instead of single tokens, so we broke these up, creating one node per word. The last two layers are therefore the most topologically diverse, whereas the structure of the first two layers is constant (all trees have channels and functions). A very small fraction (< 1%) of trees that could not be parsed into our format was excluded from the dataset.

Table 6 shows various statistics about the topological characteristics of the recipes in the IFTTT dataset. The middle columns show the percentage of trees that contain nonempty arguments and parameters in trigger (IF) and action (THEN) branches. Almost all recipes have nonempty arguments and parameters (and thus depth 4, excluding the root), and a lower percentage (but still a majority) has arguments and parameters on the trigger side too. The last two columns show tree statistics pertaining to the complexity of trees after conversion to our format. The distribution of tree sizes is mostly concentrated between 4 and 30 nodes, with a slow-decaying tail of examples above this range (see Figure 8).

Table 6: IFTTT dataset statistics.
The middle columns show the percentage of trees that contain nonempty arguments and parameters in trigger (IF) and action (THEN) branches. The last columns show average (with standard deviation) tree size and depth.

                   Has args. (%)       Has params. (%)      Tree Size
Fold    Examples   Trigger   Action    Trigger   Action     # Nodes         Depth
train   67,444     69.10     98.46     65.47     96.77      16.93 (31.71)   3.99 (.13)
dev     4,038      69.44     98.46     66.42     96.31      16.55 (8.75)    3.99 (.11)
test    3,725      68.38     98.66     65.64     97.50      16.43 (8.18)    3.99 (.12)

Figure 8: Tree size distribution in the IFTTT dataset (histogram of tree sizes in number of nodes, split by train/dev/test fold).

Regarding the content of the trees, the labels of the nodes in the first two levels (channels and functions) come from somewhat reduced vocabularies: 111 and 434 unique symbols for the trigger branch, respectively, and 157 and 85 for the action branch. The lower layers of the tree have a much more diverse vocabulary, with about 60K unique tokens in total. On the source side, the vocabulary over the sentence descriptions is large too, with about 30K unique tokens. The average sentence size is 6.07 tokens, with 80% of the sentences having at most 12 tokens.

For the perturbation experiments, we randomly selected 50 sentences from among those in the test set that could be easily restructured without significantly altering their meaning. The types of alterations we perform are: subordinate clause swapping, alternative construction substitution, and passive/active voice change. In doing this, we try to keep the number of added/deleted words to a minimum, to minimize vocabulary-induced likelihood variations. When inserting new words, we verify that they are contained in the original vocabulary of 20K words. In Table 7 we show a few examples of the source, original target and perturbed target sentences.

Starting from a preprocessed 2% sub-selection of the English-French section of the WMT14 dataset, we further prune down the data by keeping only sentences of length between 5 and 20 words, and for which every word is within the 20K most frequent. The reason for this is to simplify the task by keeping only common words and avoiding out-of-vocabulary tokens. After this filtering, we are left with 53,607, 918 and 371 sentences for the train, validation and test sets. After tokenizing, we obtain dependency parses for the target (English) sentences using the Stanford CoreNLP toolkit (Manning et al., 2014).

Table 7: Example structural perturbations for likelihood robustness experiments

source: "apres un accord de paix signe en 1992 elle est devenue un parti d opposition."
target: "after a 1992 peace deal it became an opposition party"
perturbation: "it became an opposition party after a 1992 peace deal."

source: "cela represente environ 9 milliards de grains de mais."
target: "that's about 9 billion individual kernels of corn"
perturbation: "this amounts to about 9 billion kernels of corn"

source: "l'exercice de fonctions publiques est une question de service public."
target: "public office is about public service."
perturbation: "the exercise of public functions is a matter of public service."

source: "nous avons ainsi effectue depuis la fin de l'hiver dernier 64 interventions."
target: "hence we have carried out 64 operations since last winter"
perturbation: "we have therefore carried out 64 operations since last winter."

source: "on estime qu'un enfant sur 2000 nes chaque annee n'est ni un garcon ni une fille."
target \"an estimated one in 2000 children born each year is neither boy nor girl.\". perturbation \"it is estimated that one in every 2000 children born every year is neither a boy nor a girl.\"\n(a) Encoder sentence input: \"ROOT P R C\n(b) Encoder sentence input: \"ROOT Z T Y Q\nFigure 9: Selected trees generated by the DRNN decoder from vector-encoded descriptions for test. examples of the synthetic tree dataset. Trees in the same row correspond to predictions by models trained on randomly sampled subsets of size N of the training split. We present cases for which the. prediction is accurate (a,c) and cases for which it is not (b,d). Note how in (d) the model predicts many of the labels correctly, but confuses some of the dependencies (edges) in the tree..\nN=500 N=1000 N=1500 N=3500 gold ROOT ROOT ROO ROOT ROOT\n(a) Encoder sentence input: \"ROOT P R C' N=500 N=1000 N=1500 N=3500 gold ROOT ROO (b) Encoder sentence input: \"ROOT Z T Y Q' N=500 N=1000 N=1500 N=3500 gold ROOT ROOT ROOT ROOT ROOT (c) Encoder sentence input: \"ROOT K T V\" N=500 N=1500 N=2500 N=4000 gold (d) Encoder sentence input: \"ROOT Q F V R G D A'\nN=500 N=1000 N=1500 N=3500 gold ROOT ROOT ROOT ROOT ROOT D"}] |
HkzuKpLgg | [{"section_index": "0", "section_name": "EFFICIENT COMMUNICATIONS IN TRAINING LARGE SCALE NEURAL NETWORKS", "section_text": "Linnan Wang
School of Computer Science, Georgia Institute of Technology
School of Computational Science & Engineering, Georgia Institute of Technology

We consider the problem of how to reduce the cost of communication that is required for the parallel training of a neural network. The state-of-the-art method, Bulk Synchronous Parallel Stochastic Gradient Descent (BSP-SGD), requires many collective communication operations, like broadcasts of parameters or reductions for partial gradient aggregations, which for large messages quickly dominate overall execution time and limit parallel scalability. To address this problem, we develop a new technique for collective operations, referred to as Linear Pipelining (LP). It is tuned to the message sizes that arise in BSP-SGD, and works effectively on multi-GPU systems. Theoretically, the cost of LP is invariant to P, where P is the number of GPUs, while the cost of the more conventional Minimum Spanning Tree (MST) scales like O(log P). LP also demonstrates up to 2x higher bandwidth than Bidirectional Exchange (BE) techniques that are widely adopted by current MPI implementations. We apply these collectives to BSP-SGD, showing that the proposed implementations reduce communication bottlenecks in practice while preserving the attractive convergence properties of BSP-SGD."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Scaling up neural networks with respect to parameter sizes, training sets, or both has drastically improved the state-of-the-art performance in several domains ranging from scene understanding, speech recognition, even to playing Go against professional players. Although training a large network saturated with nonlinearities is extremely time-consuming, the benefits brought forth by large-scale models have sparked a surge of interest in parallelizing training on multi-GPUs. The parallelization of SGD demands synchronizations to exchange gradients and parameters per iteration, and this introduces significant communication overhead. Previous studies have focused on trading the SGD convergence rate for fast gradient updates, such as stale or asynchronous SGD, 1-bit compressed gradients, etc. However, these methods are rarely adopted by deep learning frameworks as they depend on the balance between the enhanced iteration throughput and the decelerated convergence rate. Since BSP retains the convergence properties of SGD, its optimization should be of interest.

The gradient aggregations and parameter exchanges in BSP SGD are typical operations of communication collectives (Chan et al., 2007). Messages in large-scale neural network training are dense, long, and fixed-length, while the performance of collective algorithms is drastically sensitive to these attributes. Besides, the processing speed is several orders of magnitude faster than the network unidirectional transmission rate. These prioritize the utilization of network bandwidth in the collective design. However, we have seen sub-optimal collective algorithms, e.g. MST and BE, widely adopted by the deep learning community (Agarwal et al., 2014) (Jia et al., 2014) (Duchi et al., 2011). MST is only suitable for the latency dominant case, such as frequent short message exchanges, while the bandwidth term of BE can be further improved (Thakur et al., 2005).

Wei Wu & George Bosilca

Big Data Research Center, Univ. of Electr. Sci.
& Tech. of Chin zlxu@uestc.edu.cn"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "GPUO GPUO COMMUNICATION COMPUTE J SYNC GPU1 GPU1 (a) legend (b) Reference SGD (c) ASGD GPUO GPUO GPUO GPU1 staleness GPU1 GPU1 (d) Stale SGD (e) CUDNN (f) ours\nFigure 1: Illustrations of various methods to accelerate the training. Black blocks stands for computa tions, and white blocks stands for communications. CUDNN reduces the computation cost, while we. reduce the communication cost.\nIn this paper, we introduce new Linear Pipeline based collectives for multiGPU training. Th. collectives demonstrate O(log(P)) speedups over MST collectives and up to 2x speedups over BI based ones; the bounds only hold in training large neural networks. In particular, the theoretica analysis and the implementation yield an interesting insight that the cost of our design is invarian to GPU numbers, i.e., the cost of collective operations on 2 GPUs is similar to 20 GPUs. The desigr explores message granularity to maximize simultaneous bidirectional data exchanges. In specific. it divides a message into fine-grained blocks as the basic communication element. A GPU send a block (via DMA 1) while receiving (via DMA 2) a new block from a neighbor. The copies are. asynchronously launched on two GPU streams, and numerical operations further overlap data copie. As a result, our method yields a highly efficient pipeline over which messages for neural networl training may be exchanged.\nThe proposed collective design achieves 2.3x to 360.55x speedups over Open MPI alternatives o1 6 GPUs. In training GoogLeNet, we set up the same BSP SGD implementation with differen underlying collectives. Our design demonstrates up to 1.7x convergence speedup over MST base Caffe.\nThe first group of approaches relaxes synchronous models of SGD to increase the iteration throughpu. (Dean et al.(2012),Zinkevich et al.(2010)). In this case, the relaxed SGD enables computations on a GPU to partially overlap with communications on others as demonstrated in Fig|1c|and Fig|1d. Recht et al.(2011) proposed a lock free Asynchronous SGD (ASGD) that entirely gets rid of the. synchronization requirement by allowing free concurrent parameter updates. But the relaxation only. works well on sparse learning problems. In response, Ho et al.[(2013) introduced the concept o1. staleness by bounding the fastest and the slowest machine within a few iterations of each other t ensure correctness. These relaxations claim to be effective as the enhanced iteration throughpu. offsets the disadvantages of degraded convergence rate. However, recent advances in deep learning. frameworks (Cui et al.[(2016) have reestablished the advantages of BSP over relaxed ones in training. neural networks. This reiterates the importance of studying BSP SGD..\nThe second group of approaches tries to reduce the overall communication volume. Seide et al. (2014) quantized gradients from 32 bits to 1 bit to reduce the message length, but the lost gradient. information decelerates the convergence rate. Another approach is to accelerate the convergence with a large batch.Dekel et al.[(2012) shows the convergence rate of mini-batch SGD is O(1/Tb + 1/T) with b being the batch size. This result indicates a large batch needs fewer iterations to find a solution and thereby fewer overall synchronizations. However, unwieldy increasing the batch size is also unfavorable under limited computing resources demonstrated by|Wang et al.(2016b). 
Please note these methods still need synchronizations, and our work will further improve their performance.

The communication overhead has been widely identified as the major bottleneck in data-parallel SGD (Shamir (2014), Li et al. (2014)). Data parallelism linearly adds processing power through concurrent gradient computations with multiple GPUs. But it also requires synchronizations to collect partial gradients or to broadcast parameters. In practice, the communication rate is several orders of magnitude slower than the computation (Coates et al., 2013). Various approaches have been proposed to reduce the overhead.

(a) broadcast (b) reduce (c) allreduce

Figure 2: The data flow of broadcast, reduce and allreduce on 3 GPUs

The third group of approaches conducts system optimizations to minimize the communication cost (Wang et al., 2016a). Agarwal & Duchi (2011) and Agarwal et al. (2014) presented partial gradient aggregations guided with an MST that takes log(P) steps to fully synchronize the model. Deep learning frameworks such as Caffe (Jia et al., 2014) also adopt this approach. Unfortunately, MST is only suitable for latency dominant scenarios (i.e. highly frequent short messages). Although collective algorithms have been thoroughly discussed in the HPC community (Almasi et al. (2005), Gabriel et al. (2004), Shipman et al. (2006)), few have studied their performance for deep learning. The performance of collectives varies significantly with different message lengths and network topologies, while messages in deep network training are dense, long and fixed-length. Therefore, it is imperative to address such peculiarities in the collectives. Worringen (2003) proposed a pipeline collective model in the shared memory environment for CPU data, but communications of different MPI processes share the same CPU memory bus within the same CPU socket. This causes bandwidth competition among different processes, and thereby poor performance for collective communication in the shared memory environment for CPU data. In contrast, PCI-E is bi-directional. The latest GPUs also feature two independent DMA engines for simultaneous independent in/out communications. These hardware updates pave the way for LP based GPU communications.

This section presents a new LP based MultiGPU collective design, followed by a concrete proof of its performance in training neural networks. The general idea of LP is as follows: a) we dissect a long message into fine-grained blocks. b) a GPU receives a block from the prior GPU via DMA1 while sending a block to the next one via DMA2. Please note each block exchange utilizes an independent physical link, and the entire network is fully utilized once the pipeline is filled.

Broadcast tackles the synchronizations of parameters among multiple GPUs. It copies the source vector to every GPU. Fig 2a illustrates the data flow of the broadcast collective on 3 GPUs. GPU0 is the source, and the rest are destinations. Broadcast starts with filling the pipe by copying block a on GPU0 to GPU1 at step 1. Let's focus on GPU1. At each step, GPU1 receives a block from GPU0 via DMA1, while GPU1 is also sending a block to GPU2 via DMA2. The data exchange in either direction utilizes an independent link and DMA engine to achieve the maximal unidirectional rate. Hence the bandwidth is fully exploited.
Reduce aggregates the partial gradients to reconstruct the global one. It combines the elements provided in the vector of each GPU, and returns the combined value in the receive vector to a specific GPU. It supports basic arithmetic operations such as summations and multiplications. Fig 2b illustrates the data flow of the reduce collective. GPU2 is the root that aggregates the vectors across all GPUs. Reduce starts with filling the pipe by writing block a0 to a buffer on GPU1. Then GPU1 reduces the received block a0 with a1 to yield a' (within the rectangle of Fig 2b). Please note the computation is much faster than the communication, so we assume no latency on it. In practice, computations are further overlapped with communications. In the next step, GPU1 retrieves b0 from GPU0 to reduce to b' via DMA 1, while GPU1 is also sending a' to GPU2 to reduce to a'' via DMA 2. b'', c'', d'' are reduced at steps 3, 4, 5 in a similar fashion.

AllReduce enables us to collect partial gradients and broadcast the latest parameters with only one synchronization point per SGD iteration. It combines vectors from all GPUs and distributes the result back to them. Mathematically, it is equivalent to a reduce followed by a broadcast. However, allreduce is more efficient than two separate calls as it only needs to fill the pipeline once. For example, it takes 9 timesteps to allreduce 4 message blocks, while broadcast + reduce will cost 10. Fig 2c illustrates the data flow of the allreduce collective. It starts with reducing a'', after which a'' is broadcast to GPU1 and GPU2 at steps 5, 6 respectively. Please note d0 utilizes the outbound DMA at step 4, therefore a'' has to wait until step 5. b'', c'', d'' are processed in a similar fashion.

Our collective is also specifically designed to accommodate GPU features such as asynchronous kernel launches and multi-stream processing. The rectangle of Fig 2a demonstrates that the data transfers are asynchronously launched on two separate streams. The copies happening in the red steps are scheduled on one stream while copies in the black steps are scheduled on another stream. This overlaps the overhead of GPU kernel launches, further improving the pipeline. We illustrate the data flow of the collectives on 3 GPUs. If there are k GPUs, GPU n, 0 < n < k − 1, duplicates the same communication pattern as GPU 1.

Table 1: The estimated costs of 3 collective communications

broadcast:   BE: $(\log p + p - 1)\alpha + 2\frac{p-1}{p}n\beta$   MST: $\log p\,(\alpha + n\beta)$   LP: $(p - 1 + \frac{n}{b})\alpha + (b(p-1) + n)\beta$
reduce:      BE: $2\log p\,\alpha + 2\frac{p-1}{p}n\beta + \frac{p-1}{p}n\gamma$   MST: $\log p\,(\alpha + n\beta + n\gamma)$   LP: $(p - 1 + \frac{n}{b})\alpha + (b(p-1) + n)(\beta + \gamma)$
allreduce:   BE: $2\log p\,\alpha + 2\frac{p-1}{p}n\beta + \frac{p-1}{p}n\gamma$   MST: $\log p\,(2\alpha + 2n\beta + n\gamma)$   LP: $2(p - 1 + \frac{n}{b})\alpha + (b(p-1) + n)(2\beta + \gamma)$
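To make the pipelined schedule concrete, here is a small, self-contained Python simulation of the LP broadcast schedule (a sketch of our own, not the paper's implementation). It tracks which block every GPU sends and receives at each step and reproduces the step count visible in Fig 2a.

```python
def lp_broadcast_schedule(num_gpus, num_blocks):
    """Simulate Linear Pipeline broadcast over a GPU chain 0 -> 1 -> ... -> p-1.

    A transfer (g, k) means GPU g pushes block k to GPU g+1 over their link;
    each link moves at most one block per step, and every link can be active
    in the same step, which is exactly the pipelining described above.
    """
    received = [[g == 0] * num_blocks for g in range(num_gpus)]
    schedule = []
    while not all(all(r) for r in received):
        transfers = []
        for g in range(num_gpus - 1):
            for k in range(num_blocks):
                if received[g][k] and not received[g + 1][k]:
                    transfers.append((g, k))
                    break  # one block per link per step
        for g, k in transfers:
            received[g + 1][k] = True
        schedule.append(transfers)
    return schedule

# 4 blocks broadcast over a 3-GPU chain: the pipeline fills and drains in
# 5 steps, matching the step count in Fig 2a.
print(len(lp_broadcast_schedule(3, 4)))  # -> 5
```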
"}, {"section_index": "3", "section_name": "3.1 ARCHITECTURE ANALYSIS", "section_text": "LP is the optimal collective algorithm to fully exploit the network bandwidth of a multiGPU system. Even though PCI-E supports full-duplex communication between any two endpoints, each PCI-E endpoint device only has one input and one output port. This results in bandwidth competition if a GPU is receiving from multiple GPUs. Similarly, each PCI-E switch only contains one input and output port used for inter-switch communication, and inter-switch communications of the same direction also compete for the PCI-E bus. It is known that any delay in data movement between two GPUs interrupts the pipelining in the collectives. In such an architecture, the communication from parents to children in MST based collective algorithms will compete for the same PCI-E bus, therefore breaking pipelining. The data exchange of BE also suffers from inter-switch communication congestion in one direction. In contrast, LP connects all GPUs into a chain, and data always flow in one direction. Hence, data movements between two GPUs exclusively occupy the entire PCI-E bus, ensuring uninterrupted pipelining.

We model the cost of transferring a message of n bytes along one link as

$$T = \alpha + \beta n + \gamma n$$

where $\alpha$ is the latency or startup time of sending a message, $\beta$ and $\gamma$ are the transmission rate and reduce rate measured in time per byte, and n is the message size in bytes. We also denote p as the node count, and b as the block size (in bytes) in the pipeline.

Proposition 1 If the network latency $\alpha \to 0$, Linear Pipeline collectives provide an O(log p) speedup over Minimal Spanning Tree collectives and up to a 2 times speedup over Bidirectional Exchange collectives as the message size $n \to \infty$.

Proof. First, we derive the costs of the three Linear Pipeline collectives. According to Fig 2, the length of the pipeline is $p - 1 + \frac{n}{b}$ blocks, assuming each block is b bytes. A block exchange takes $\alpha + b\beta + b\gamma$ (with reduce) or $\alpha + b\beta$ (without reduce). Consequently, broadcast essentially costs $(\alpha + b\beta)(p - 1 + \frac{n}{b}) = (p - 1 + \frac{n}{b})\alpha + (b(p-1) + n)\beta$, and reduce costs $(\alpha + b\beta + b\gamma)(p - 1 + \frac{n}{b}) = (p - 1 + \frac{n}{b})\alpha + (b(p-1) + n)(\beta + \gamma)$. allreduce is approximately equivalent to a reduce followed by a broadcast. Therefore, allreduce's cost is broadcast's cost plus reduce's cost, i.e. $2(p - 1 + \frac{n}{b})\alpha + (bp - b + n)(2\beta + \gamma)$.

Secondly, we derive the costs of the three Minimal Spanning Tree collectives. MPI adopts MST to broadcast or reduce short messages (Thakur et al. (2005)), the length of which is less than 12 KB. The core concept of MST is to organize p GPUs into a balanced tree of height $\lceil \log p \rceil$. Then, it takes log p steps to traverse all GPUs in the tree. Each step carries the message of length n, resulting in the cost of broadcast being the tree height times the cost per step, i.e. $\log p\,(\alpha + n\beta)$ (we omit the ceiling for simplicity). Similarly, MST reduce is $\log p\,(\alpha + n\beta + n\gamma)$, and MST allreduce is again a combination of broadcast and reduce, i.e. $\log p\,(2\alpha + 2n\beta + n\gamma)$. Please note the latency term, $\log p\,\alpha$, is the smallest among the algorithms in Table 1, while the bandwidth term, $\log p\,n\beta$, is the slowest as $\log p\,n\beta > n\beta$. Therefore, MST is widely used for highly frequent exchanges of short messages.

Finally, we present the costs of the three Bidirectional Exchange collectives. MPI handles long-message broadcast with an MST scatter followed by a BE allgather; please refer to Chan et al. (2007). The scatter costs $\log p\,\alpha + \frac{p-1}{p}n\beta$, while allgather costs $(p - 1)\alpha + \frac{p-1}{p}n\beta$. The cost of broadcast is the sum of these two. The MPI long message reduce consists of a reducescatter plus a gather, while allreduce consists of a reducescatter and an allgather. The cost of reducescatter is $\log p\,\alpha + \frac{p-1}{p}n\beta + \frac{p-1}{p}n\gamma$, and both the costs of gather and allgather are $\log p\,\alpha + \frac{p-1}{p}n\beta$ (also in Chan et al. (2007)). Table 1 summarizes the costs of broadcast, reduce and allreduce for the three different underlying algorithms.
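As a quick sanity check on these formulas, the short script below (our own illustrative sketch; the hardware constants are representative assumptions, not measurements) evaluates the three broadcast cost models and prints the BE/LP and MST/LP ratios, which approach 2 and log p as the analysis predicts.

```python
import math

def cost_lp_broadcast(p, n, b, a, beta):
    """Linear Pipeline broadcast cost from Table 1."""
    return (p - 1 + n / b) * a + (b * (p - 1) + n) * beta

def cost_mst_broadcast(p, n, a, beta):
    """Minimal Spanning Tree broadcast cost from Table 1."""
    return math.log2(p) * (a + n * beta)

def cost_be_broadcast(p, n, a, beta):
    """Bidirectional Exchange broadcast cost from Table 1."""
    return (math.log2(p) + p - 1) * a + 2 * ((p - 1) / p) * n * beta

# alpha ~ 1e-7 s (PCI-E latency), beta ~ 1e-9 s/B (~1 GB/s): assumed values.
a, beta = 1e-7, 1e-9
p, n, b = 4, 256 * 2**20, 64 * 2**10      # 4 GPUs, 256 MB message, 64 KB blocks

lp = cost_lp_broadcast(p, n, b, a, beta)
print(cost_be_broadcast(p, n, a, beta) / lp)    # ~1.50, i.e. 2(1 - 1/p)
print(cost_mst_broadcast(p, n, a, beta) / lp)   # ~2.00, i.e. log2(p)
```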
The proposition holds under the assumptions of $\alpha \to 0$ and $n \to \infty$, and these assumptions are legitimate for the training of large scale neural networks on multiGPUs. Nowadays, PCI Express x16 effectively reduces the latency down to $10^{-7}$ s. The current two-socket shared memory machine supports up to 8 GPUs, indicating limited p in practice. Let's take an appropriate block size b to ensure $p \ll n$ and $\alpha \approx 0$. This enables us to safely ignore the latency term, e.g. $\log p\,\alpha$ in MST broadcast. On the other hand, current deep convolutional neural networks use a tremendous number of parameters. For example, AlexNet uses 250 MB of parameters. The transmission rate $\beta \approx 10^{-9}$ seconds/byte. Compared to the trivial latency term, the bandwidth term dominates the entire cost T. This result leads us to simplify the costs of BE, MST, and LP based broadcast (Table 1) to be $2\frac{p-1}{p}n\beta$, $n \log p\,\beta$ and $(b(p-1) + n)\beta$, obtaining the following equations:

$$\frac{T_{broadcast\_BE}}{T_{broadcast\_LP}} = \frac{2\frac{p-1}{p}n\beta}{(b(p-1)+n)\beta} \le 2\left(1 - \frac{1}{p}\right) < 2$$

$$\frac{T_{broadcast\_MST}}{T_{broadcast\_LP}} = \frac{n\log p\,\beta}{(b(p-1)+n)\beta} \to \log p \quad \text{as} \quad \frac{b(p-1)}{n} \to 0$$

Compared with broadcast, reduce has the additional $\gamma$ term. Please note the processing speed of GPUs exceeds TFLOPS, implying the term $\gamma n \to 0$. Therefore, it is also legitimate to ignore the $\gamma$ term, and it yields the same result: $T_{reduce\_BE}/T_{reduce\_LP} \le 2$ and $T_{reduce\_MST}/T_{reduce\_LP} \le \log p$. This completes our proof of Proposition 1.

Another interesting point is that the cost of Linear Pipeline is invariant to GPU count p regardless of message length n. This implies broadcasting a vector to 8 GPUs should cost the same as broadcasting to 2 GPUs. In practice, we set the block size b around 64 KB, and p is within $10^1$. This suggests that the bandwidth term, e.g. the cost of LP broadcast, $(bp - b + n)\beta \approx n\beta$. Hence, the cost of LP collectives is less likely to be affected by GPU count p."}, {"section_index": "4", "section_name": "3.3 DEEP LEARNING WITH EFFICIENT BSP SGD", "section_text": "We formulate the neural network training as the following optimization problem. Let $\psi$ be a loss function with weight vector w as function parameters that takes randomly sampled images $d_t$ as the input. The objective of training is to find an approximate solution to the following problem:

$$\min_w \mathbb{E}\{\psi_w(d_t)\} = \int \psi_w(d_t)\,dP$$

Algorithm 1: BSP SGD with communications/computations overlapping

1   while not converge do
2       broadcast(w_t)
3       for i in [0, 1, ..., max_layers] do
4           nonblocking_broadcast(w_{i+1})
5           Forward(i)
6           sync_broadcast()
7       Backward(max_layers)
8       for i in [max_layers - 1, ..., 1, 0] do
9           nonblocking_reduce(grad_{i+1})
10          Backward(i)
11          sync_reduce()
12      w_{t+1} = GradientUpdate()

Algorithm 2: BSP SGD uses broadcast + reduce

1   while not converge do
2       grad_sub = ForwardBackward(d_t^i)
3       grad = reduce(grad_sub)
4       if root then
5           w_{t+1} = GradientUpdate()
6       broadcast(w_{t+1})
7       barrier    /* sync new w */

Algorithm 3: BSP SGD uses allreduce

1   while not converge do
2       grad_sub = ForwardBackward(d_t^i)
3       grad = allreduce(grad_sub)
4       barrier    /* collect grad_sub */
5       w_{t+1} = GradientUpdate()
6       if iter % 5 == 0 then
7           broadcast(w_{t+1})    /* sync new w */

A typical neural network training iteration consists of a forward and backward pass. The forward pass yields a loss that measures the discrepancy between the current predictions and the target; the backward pass calculates the gradient, the negative of which points to the steepest descent direction. Gradient descent updates the parameters, w, as follows:

$$w_{t+1} = w_t - \eta_t \nabla \psi_w(d_t)$$

Guided by data parallelism, BSP SGD evenly divides $d_t$ into p slices $d_t^1, d_t^2, \ldots, d_t^p$ so that every GPU computes a partial gradient from $d_t^i$ in parallel. The global gradient is equivalent to the average of partial gradients. After finishing the gradient update, $w^t$ is synchronized to all GPUs. We integrate the proposed collectives into this process to harness the parallel processing capabilities of a multiGPU system. In this paper, we discuss two approaches to BSP SGD implementations.

fork and join: This approach forks the gradient computations, and joins partial gradients with communications. In this case, communications do not overlap with computations. Alg 2 and Alg 3 demonstrate two collective based implementations using 2 and 1 synchronization points, respectively.

In Alg 2, synchronizations rely on broadcast and reduce. Each GPU calculates a partial gradient, referred to as $\nabla\psi_{sub}$. The master GPU reconstructs $\nabla\psi$ by reducing all $\nabla\psi_{sub}$. Then, the GPUs synchronize the latest weight, w, by broadcasting.

In Alg 3, synchronizations only rely on allreduce. The differences between this and Alg 2 are that 1) there is only 1 synchronization point; 2) every GPU computes the gradient update. However, the parameters are not consistent after several iterations due to the precision issues of float multiplications in GradientUpdate. We synchronize w every 5 iterations to enforce consistency while still retaining the benefit of efficient pipelining in allreduce (lines 6-7 of Alg 3).

overlapping communications with computations: Another approach is to overlap communications and computations for each network layer. In the forward pass, GPUs broadcast network parameters of layer t+1 during forward computations at layer t. In the backward pass, GPUs reduce partial gradients of layer t+1 during backward computations at layer t. As a result, layer-wise computations partially overlap with communications, further improving the SGD efficiency. Alg 1 outlines the general idea of overlapping communications and computations during network training. We use nonblocking collectives to achieve the overlap.

pros and cons of both approaches: The cost of Alg 2 or Alg 3 is comm + compt, while the cost of Alg 1 is max(comm, compt). If the network has over a few hundred MB of parameters, the overlapping will be significantly better than the fork and join approach. However, Alg 2 and Alg 3 are relatively easy to implement, and the performance on networks < 100 MB is similar to that of Alg 1.

Figure 3: The performance of different collective algorithms at different message sizes on 4 K40m. (a) Broadcast, (b) Reduce, (c) AllReduce.

Figure 4: The scalability experiment: it measures performance variations with increasing GPUs. (a) Broadcast, (b) Reduce, (c) AllReduce."}, {"section_index": "5", "section_name": "4.1 COLLECTIVES EVALUATION", "section_text": "The MST and BE implementations used in benchmarks are Caffe² and OpenMPI. Caffe optimizes the GPU placement in an MST to fully utilize inter-GPU peer to peer (P2P) access. OpenMPI and our implementation, similar to Caffe, also take advantage of P2P. We set up AlexNet and GoogLeNet training using the three BSP SGD algorithms proposed in Section 3.3.

²Caffe implements an MST based broadcast and reduce for the multiGPU training.

Fig 3 presents the performance of LP, MST, and BE based collectives at different message sizes on 4 K40m.
The LP broadcast demonstrates an average of 29.2x and 2.3x speedup over the BE and MST based alternatives in OpenMPI and Caffe; the LP reduce demonstrates an average of 360.55x and 8.7x speedup over BE and MST reduce, and the LP allreduce demonstrates an average of 109.2x and 7.9x speedup over BE and MST allreduce. In theory, LP is approximately 2x faster than both the MST (p = 4, so log p = 2) and BE approaches. An extraordinary speedup against Open MPI is observable due to inefficient data movement in Open MPI, which moves data to host RAM to perform reduce operations on the CPU before being copied to the target GPU. Instead, we perform reduce on the GPUs, and data blocks directly flow to the target GPU via P2P access. The overlap of reduce computations with communications enables our reduce and allreduce to be 8x faster than that of MST. At each step of MST, GPUs reduce the incoming data only after all the data is available. In contrast, our fine-grained block design enables communications and computations to overlap by reducing a block while receiving a new one in the pipeline. broadcast only involves data copies, and both we and Caffe use P2P to transmit the data. Therefore, the speedup of MST broadcast (2.3x) conforms to the 2.0x theoretical prediction.

The theoretical analysis indicates that the costs of both LP and BE collectives are invariant to the GPU count p, while the cost of MST increases with p by a factor of log p. This is also noticeable in the scalability experiment demonstrated in Fig 4. Please note there is a cost jump between 4 and 5 GPUs: communications have to go through QPI after 4 GPUs, incurring the additional cost of copying through the host RAM. The cost of the Linear Pipeline method robustly stays the same for GPU counts in [2, 3, 4] or [5, 6], and QPI explains the inconsistency. The communication steps of MST for 2, 3, 4, 5, 6 GPUs are 1, 2, 2, 3, 3, respectively. The MST experiments verify the log p cost increase w.r.t GPU counts by evident cost jumps at 3 and 5 GPUs. The data flow of OpenMPI between two GPUs follows GPU RAM -> host RAM -> GPU RAM. The inefficient data flow inside Open MPI contributes to the near linear cost increase with GPU count p.

Figure 5: The training losses in fixed iterations on 4 K40m. We set GoogLeNet lr = 0.01. AlexNet starts at lr = 0.015, set to 0.0015 after the average loss < 2. The solver is SGD + momentum, and the dataset is ImageNet. (a) AlexNet: 256 MB, iters = 30000, batch size = 1000; (b) GoogLeNet: 51 MB, iters = 67000, batch size = 80."}, {"section_index": "6", "section_name": "4.2 IMPACT ON THE NEURAL NETWORK TRAINING", "section_text": "Fig 5 demonstrates LP collectives effectively reduce the total training time without affecting SGD's convergence properties in training large scale neural networks. We use inspurCaffe, Caffe and cuhk's Caffe branch to benchmark the performance of BE-Alg.1, MST-Alg.1 and BE-Overlap-Alg.3. We also implement Alg.1, 2, 3, integrated with LP collectives, in Caffe to ensure consistency. Please note the model size affects the communication time, while the batch size affects the computation time. We carefully set these parameters to cover as many cases as possible. Please refer to the captions of Table 2 and Fig 5 for experiment details. We assume these algorithms have similar convergence
speeds in iterations, as losses of AlexNet are approximately 1 after 30000 iterations and losses of GoogLeNet are approximately 2 after 67000 iterations. However, the time taken to reach the target loss varies dramatically. For example, the speedups of LP-Overlap-Alg.3 over BE-Alg.1 in training AlexNet and GoogLeNet are 2.12x and 2.19x, respectively.

The experiments demonstrate that the speed of the three proposed BSP SGD algorithms is Alg.3 > Alg.2 > Alg.1. The result conforms to our expectations as the cost of Alg.3 is max(comm, compt) while the cost of Alg.1 and Alg.2 is comm + compt. However, the performance gain is quite limited from Alg.2 to Alg.3 as there is little room left for reducing communications from LP Alg.2 to Alg.3, as demonstrated in Table 2. If the model parameters keep increasing, we expect Alg.3 to be more efficient than Alg.2.

Table 2: The iteration profile. comm stands for communications, and compt stands for computations. % represents the percentage of communications in an iteration. The statistics are the average of 30000 AlexNet iterations and 67000 GoogLeNet iterations. We set the batch size of AlexNet to 1000, and of GoogLeNet to 80. AlexNet and GoogLeNet are 256 MB and 51 MB, respectively.

Under Alg.1, but using different underlying collective algorithms, LP-Alg.1 presents 1.91x and 1.74x speedup over BE-Alg.1 and MST-Alg.1 in AlexNet, and 1.6x and 1.1x speedup over BE-Alg.1 and MST-Alg.1 in GoogLeNet. The iteration profiles of these 3 algorithms in Table 2 indicate that the communication cost of LP-Alg.1 is only 10% of BE-Alg.1 and 11% of MST-Alg.1 in AlexNet, and 6% of BE-Alg.1 and 43% of MST-Alg.1 in GoogLeNet."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Edgar Gabriel, Graham E. Fagg, George Bosilca, Thara Angskun, Jack J. Dongarra, Jeffrey M. Squyres, Vishal Sahay, Prabhanjan Kambadur, Brian Barrett, Andrew Lumsdaine, et al. Open MPI: Goals, concept, and design of a next generation MPI implementation. In European Parallel Virtual Machine/Message Passing Interface Users' Group Meeting, pp. 97-104. Springer, 2004.

Alekh Agarwal, Olivier Chapelle, Miroslav Dudik, and John Langford. A reliable effective terascale linear learning system. Journal of Machine Learning Research, pp. 1111-1133, 2014.

Galen M. Shipman, Timothy S. Woodall, Richard L. Graham, Arthur B. Maccabe, and Patrick G. Bridges. Infiniband scalability in Open MPI. In Proceedings 20th IEEE International Parallel & Distributed Processing Symposium. IEEE, 2006.

Martin Zinkevich, Markus Weimer, Lihong Li, and Alex J. Smola. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 2595-2603, 2010."}]
rJzaDdYxx | [{"section_index": "0", "section_name": "GRADIENTS OF COUNTERFACTUALS", "section_text": "Mukund Sundararajan, Ankur Taly & Qiqi Yan
{mukunds, ataly, qiqiyan}@google.com

Gradients have been used to quantify feature importance in machine learning models. Unfortunately, in nonlinear deep networks, not only individual neurons but also the whole network can saturate, and as a result an important input feature can have a tiny gradient. We study various networks, and observe that this phenomenon is indeed widespread, across many inputs.

We propose to examine interior gradients, which are gradients of counterfactual inputs constructed by scaling down the original input. We apply our method to the GoogleNet architecture for object recognition in images, as well as a ligand-based virtual screening network with categorical features and an LSTM based language model for the Penn Treebank dataset. We visualize how interior gradients better capture feature importance. Furthermore, interior gradients are applicable to a wide variety of deep networks, and have the attribution property that the feature importance scores sum to the prediction score.

While there has been other work (see Section 2.10) to address this problem, those techniques involve instrumenting the network. This instrumentation currently involves significant developer effort because these are not primitive operations in standard machine learning libraries. Besides, these techniques are not simple to understand; they invert the operation of the network in different ways, and have their own peculiarities. For instance, the feature importances are not invariant over networks that compute the exact same function (see Figure 14).

In contrast, the method we propose builds on the very familiar, primitive concept of the gradient; all it involves is inspecting the gradients of a few carefully chosen counterfactual inputs that are scaled versions of the initial input. This allows anyone who knows how to extract gradients, presumably even novice practitioners that are not very familiar with the network's implementation, to debug the network. Ultimately, this seems essential to ensuring that deep networks perform predictably when deployed.

Best of all, interior gradients can be computed just as easily as gradients. In contrast, previous methods are complex to implement, which hinders practical adoption.

Practitioners of machine learning regularly inspect the coefficients of linear models as a measure of feature importance. This process allows them to understand and debug these models. The natural analogs of these coefficients for deep models are the gradients of the prediction score with respect to the input. For linear models, the gradient of an input feature is equal to its coefficient. For deep nonlinear models, the gradient can be thought of as a local linear approximation (Simonyan et al. (2013)). Unfortunately (see the next section), the network can saturate and as a result an important input feature can have a tiny gradient.

(a) Original image, top label "reflex camera", score 0.993755; (b) Ablated image, top label "reflex camera", score 0.996577.

Figure 1: Pixel importance using gradients at the image."}, {"section_index": "1", "section_name": "2 OUR TECHNIQUE", "section_text": "Let us start by investigating the performance of gradients as a measure of feature importance.
We use an object recognition network built using the GoogleNet architecture (Szegedy et al. (2014)) as a running example; we refer to this network by its codename Inception. (We present applications of our techniques to other networks in Section 3.) The network has been trained on the ImageNet object recognition dataset (Russakovsky et al. (2015)). It is 22 layers deep with a softmax layer on top for classifying images into one of the 1000 ImageNet object classes. The input to the network is a 224 x 224 sized RGB image.

We represent a 224 x 224 sized RGB image as a vector in $\mathbb{R}^{224\times224\times3}$. Let $\text{Incp}^L : \mathbb{R}^{224\times224\times3} \to [0, 1]$ be the function represented by the Inception network that computes the softmax score for the object class labeled L. Let $\nabla \text{Incp}^L(\text{img})$ be the gradients of $\text{Incp}^L$ at the input image img. Thus, the vector $\nabla \text{Incp}^L(\text{img})$ is the same size as the image and lies in $\mathbb{R}^{224\times224\times3}$. As a shorthand, we write $\nabla \text{Incp}^L_{i,j,c}(\text{img})$ for the gradient of a specific pixel (i, j) and color channel $c \in \{R, G, B\}$.

We compute the gradients of $\text{Incp}^L$ (with respect to the image) for the highest-scoring object class, and then aggregate the gradients $\nabla \text{Incp}^L(\text{img})$ along the color dimension to obtain pixel importance scores¹:

$$\forall i,j:\quad P^L_{ij}(\text{img}) ::= \textstyle\sum_{c \in \{R,G,B\}} \left|\nabla \text{Incp}^L_{i,j,c}(\text{img})\right| \tag{1}$$

Next, we visualize pixel importance scores by scaling the intensities of the pixels in the original image in proportion to their respective scores; thus, the higher the score, the brighter the pixel. Figure 1a shows a visualization for an image for which the highest scoring object class is "reflex camera" with a softmax score of 0.9938.

¹These pixel importance scores are similar to the gradient-based saliency map defined by Simonyan et al. (2013), with the difference being in how the gradients are aggregated along the color channel.

Intuitively, one would expect the high gradient pixels for this classification to be ones falling on the camera or those providing useful context for the classification (e.g., the lens cap). However, most of the highlighted pixels seem to be on the left of or above the camera, which to a human seem not essential to the prediction. This could either mean that (1) the highlighted pixels are somehow important for the internal computation performed by the Inception network, or (2) gradients of the image fail to appropriately quantify pixel importance.

Let us consider hypothesis (1). In order to test it we ablate parts of the image on the left and above the camera (by zeroing out the pixel intensities) and run the ablated image through the Inception network. See Figure 1b. The top predicted category still remains "reflex camera" with a softmax score of 0.9966, slightly higher than before. This indicates that the ablated portions are indeed irrelevant to the classification. On computing gradients of the ablated image, we still find that most of the high gradient pixels lie outside of the camera. This suggests that for this image, it is in fact hypothesis (2) that holds true. Upon studying more images (see Figure 4), we find that the gradients often fail to highlight the relevant pixels for the predicted object label."}, {"section_index": "2", "section_name": "2.2 SATURATION", "section_text": "In theory, it is easy to see that the gradients may not reflect feature importance if the prediction function flattens in the vicinity of the input, or, equivalently, if the gradient of the prediction function with respect to the input is tiny in the vicinity of the input vector. This is what we call saturation,
which has also been reported in previous work (Shrikumar et al. (2016), Glorot & Bengio (2010)).

We analyze how widespread saturation is in the Inception network by inspecting the behavior of the network on counterfactual images obtained by uniformly scaling pixel intensities from zero to their values in the actual image. Formally, the set of counterfactual images is

$$\{\alpha\,\text{img} \mid 0 \le \alpha \le 1\}$$

where img is the actual input image and $\alpha$ is the scaling parameter.

Figure 2a shows the trend in the softmax output of the highest scoring class, for thirty randomly chosen images from the ImageNet dataset. More specifically, for each image img, it shows the trend in $\text{Incp}^L(\alpha\,\text{img})$ as $\alpha$ varies from zero to one, with L being the label of the highest scoring object class for img. It is easy to see that the trend flattens (saturates) for all images as $\alpha$ increases. Notice that saturation is present even for images whose final score is significantly below 1.0. Moreover, for a majority of images, saturation happens quite soon, by $\alpha = 0.2$.

One may argue that since the output of the Inception network is the result of applying the softmax function to a vector of activation values, the saturation is expected due to the squashing property of the softmax function. However, as shown in Figure 2b, we find that even the pre-softmax activation scores for the highest scoring class saturate.

In fact, to our surprise, we found that the saturation is inherently present in the Inception network, and that the outputs of the intermediate layers also saturate. We plot the distance between the intermediate layer neuron activations for a scaled down input image and the actual input image with respect to the scaling parameter, and find that the trend flattens. Due to lack of space, we provide these plots in Figure 12 in the appendix.

Note that it is well known that the saturation of gradients prevents the model from converging to a good quality minimum (Glorot & Bengio (2010)). So one may expect good quality models to not have saturation, and hence for the (final) gradients to convey feature importance. Clearly, our observations on the Inception model show that this is not the case. It has good prediction accuracy, but also exhibits saturation (see Figure 2). Our hypothesis is that the gradients of important features are not saturated early in the training process. The gradients only saturate after the features have been learned adequately, i.e., the input is far away from the decision boundary.

It is quite clear from these plots that saturation is widespread across images in the Inception network, and there is a lot more activity in the network for counterfactual images at relatively low values of the scaling parameter $\alpha$. This observation forms the basis of our technique for quantifying feature importance.

We study the importance of input features in a prediction made for an input by examining the gradients of the counterfactuals obtained by scaling the input; we call this set of gradients interior gradients.

These interior gradients explore the behavior of the network along the entire scaling curve depicted in Figure 2a, rather than at a specific point. We can aggregate the interior gradients along the color dimension to obtain interior pixel importance scores using Equation 1.

We individually visualize the pixel importance scores for each scaling parameter $\alpha$ by scaling the intensities of the pixels in the actual image in proportion to their scores; thus, the higher the score, the brighter the pixel. The visualizations show how the importance of each pixel evolves as we scale the image, with the last visualization being identical to one generated by gradients at the actual image. In this regard, the interior gradients offer strictly more insight into pixel importance than just the gradients at the actual image.
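The interior-gradient computation is just the familiar gradient call in a loop over scaled inputs. The following is a minimal sketch of our own; `grad_fn` is an assumed callback standing in for backprop through the network for the fixed top label, and the toy stand-in at the bottom is a placeholder, not a real model.

```python
import numpy as np

def interior_gradients(img, grad_fn, alphas=np.linspace(0.02, 1.0, 50)):
    """Gradients of counterfactual inputs obtained by scaling the image.

    img:     (H, W, 3) float array.
    grad_fn: callback returning the (H, W, 3) gradient of the class score
             at a given input (e.g., via backprop for the fixed top label).
    Returns {alpha: pixel importance scores}, aggregating channels as in Eq. 1.
    """
    scores = {}
    for alpha in alphas:
        grads = grad_fn(alpha * img)                 # gradient at a scaled input
        scores[alpha] = np.abs(grads).sum(axis=-1)   # per-pixel importance
    return scores

# Toy stand-in for a real network gradient; replace with actual backprop.
fake_grad_fn = lambda x: np.tanh(x)                  # placeholder only
img = np.random.rand(224, 224, 3)
per_alpha = interior_gradients(img, fake_grad_fn)
print(len(per_alpha), next(iter(per_alpha.values())).shape)  # 50 (224, 224)
```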
Figure 3 shows the visualizations for the "reflex camera" image from Figure 1a for various values of the scaling parameter $\alpha$. The plot in the top right corner shows the trend in the absolute magnitude of the average pixel importance score. The magnitude is significantly larger at lower values of $\alpha$ and nearly zero at higher values; the latter is a consequence of saturation. Note that each visualization is only indicative of the relative distribution of the importance scores across pixels and not the absolute magnitude of the scores, i.e., the later snapshots are responsible for tiny increases in the scores, as the chart in the top right depicts.

The visualizations show that at lower values of $\alpha$, the pixels that lie on the camera are most important, and as $\alpha$ increases, the region above the camera gains importance. Given the high magnitude of gradients at lower values of $\alpha$, we consider those gradients to be the primary drivers of the final prediction score. They are more indicative of feature importance in the prediction compared to the gradients at the actual image (i.e., when $\alpha = 1$).

The visualizations of the interior pixel gradients can also be viewed together as a single animation that chains the visualizations in sequence of the scaling parameter. This animation offers a concise yet complete summary of how pixel importance moves around the image as the scaling parameter increases from zero to one.

Rationale. While measuring saturation via counterfactuals seems natural, using them for quantifying feature importance deserves some discussion. The first thing one may try in order to identify feature importance is to examine the deep network like one would examine human authored code. This seems hard; just as deep networks employ distributed representations (such as embeddings), they perform convoluted (pun intended) distributed reasoning. So instead, we choose to probe the network with several counterfactual inputs (related to the input at hand), hoping to trigger all the internal workings of the network. This process would help summarize the effect of the network on the protagonist input; the assumption being that the input is human understandable. Naturally, it helps to work with gradients in this process as, via back propagation, they induce an aggregate view over the function computed by the neurons.

Interior gradients use counterfactual inputs to artificially induce a procedure on how the network's attention moves across the image as it computes the final prediction score. From the animation, we gather that the network focuses on strong and distinctive patterns in the image at lower values of the scaling parameter, and subtle and weak patterns in the image at higher values. Thus, we speculate that the network's computation can be loosely abstracted by a procedure that first recognizes distinctive features of the image to make an initial prediction, and then fine tunes (these are small score jumps, as the chart in Figure 3 shows) the prediction using weaker patterns in the image.

While the method of examining gradients of counterfactual inputs is broadly applicable to a wide range of networks, we first explain it in the context of Inception.
Here, the counterfactual image inputs we consider are obtained by uniformly scaling pixel intensities from zero to their values in the actual image (this is the same set of counterfactuals that was used to study saturation). The interior gradients are the gradients of these images.

(a) Softmax score for top label; (b) Pre-softmax score for top label.

Figure 2: Saturation in Inception.

Input image and trend of the pixel importance scores obtained from interior gradients, with visualizations at $\alpha$ = 0.02, 0.04, 0.06, 0.08, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0.

Figure 3: Visualization of interior gradients. Notice that the visualizations at lower values of the scaling parameter ($\alpha$) are sharper and much better at surfacing important features of the input image.

A different summarization of the interior gradients can be obtained by cumulating them. While there are a few ways of cumulating counterfactual gradients, the approach we take has the nice attribution property (Proposition 1) that the feature importance scores approximately add up to the prediction score. The feature importance scores are thus also referred to as attributions.

Notice that the set of counterfactual images $\{\alpha\,\text{img} \mid 0 \le \alpha \le 1\}$ falls on a straight line path in $\mathbb{R}^{224\times224\times3}$; the interior gradients can be cumulated by integrating them along this line. We call the resulting gradients integrated gradients. In what follows, we formalize integrated gradients for an arbitrary function $F : \mathbb{R}^n \to [0, 1]$ (representing a deep network), and an arbitrary set of counterfactual inputs falling on a path in $\mathbb{R}^n$.

Let $x \in \mathbb{R}^n$ be the input at hand, and $\gamma = (\gamma_1, \ldots, \gamma_n) : [0, 1] \to \mathbb{R}^n$ be a smooth function specifying the set of counterfactuals; here, $\gamma(0)$ is the baseline input (for Inception, a black image), and $\gamma(1)$ is the actual input (for Inception, the image being studied). Specifically, $\{\gamma(\alpha) \mid 0 \le \alpha \le 1\}$ is the set of counterfactuals (for Inception, a series of images that interpolate between the black image and the actual input).

The integrated gradient along the i-th dimension for an input $x \in \mathbb{R}^n$ is defined as follows:

$$\text{IntegratedGrads}_i(x) ::= \int_{\alpha=0}^{1} \frac{\partial F(\gamma(\alpha))}{\partial \gamma_i(\alpha)}\,\frac{d\gamma_i(\alpha)}{d\alpha}\, d\alpha$$

where $\frac{\partial F(x)}{\partial x_i}$ is the gradient of F along the i-th dimension at x.

A nice technical property of the integrated gradients is that they add up to the difference between the output of F at the final counterfactual $\gamma(1)$ and the baseline counterfactual $\gamma(0)$. This is formalized by the proposition below, which is an instantiation of the fundamental theorem of calculus for path integrals.

Proposition 1: $\sum_{i=1}^{n} \text{IntegratedGrads}_i(x) = F(\gamma(1)) - F(\gamma(0))$.

For most deep networks, it is possible to choose counterfactuals such that the prediction at the baseline counterfactual is near zero ($F(\gamma(0)) \approx 0$).³ For instance, for the Inception network, the counterfactual defined by the scaling path satisfies this property, as $\text{Incp}^L(0^{224\times224\times3}) \approx 0$. In such cases, it follows from the Proposition that the integrated gradients form an attribution of the prediction output F(x), i.e., they almost exactly distribute the output to the individual input features.
The additivity property provides a form of sanity checking for the integrated gradients and ensures that we do not under or over attribute to features. This is a common pitfall for attribution schemes based on feature ablations, wherein an ablation may lead to a small or a large change in the prediction score depending on whether the ablated feature interacts disjunctively or conjunctively with the rest of the features. This additivity is even more desirable when the network's score is numerically critical, i.e., the score is not used purely in an ordinal sense. In this case, the attributions (together with additivity) guarantee that the attributions are in the units of the score, and account for all of the score.

We note that these path integrals of gradients have been used to perform attribution in the context of small non-linear polynomials (Sun & Sundararajan (2011)), and also within the cost-sharing literature in economics, where the function at hand is a cost function that models the cost of a project as a function of the demands of various participants, and the attributions correspond to cost-shares. The specific path we use corresponds to a cost-sharing method called Aumann-Shapley (Aumann & Shapley (1974)).

Computing integrated gradients. The integrated gradients can be efficiently approximated by a Riemann sum, wherein we simply sum the gradients at points occurring at sufficiently small intervals along the path of counterfactuals:

$$\text{IntegratedGrads}_i^{approx}(x) ::= \textstyle\sum_{k=1}^{m} \frac{\partial F(\gamma(k/m))}{\partial \gamma_i(k/m)}\,\big(\gamma_i(k/m) - \gamma_i((k-1)/m)\big)$$

Here m is the number of steps in the Riemann approximation of the integral. Notice that the approximation simply involves computing the gradient in a for loop; computing the gradient is central to deep learning and is a pretty efficient operation. The implementation should therefore be straightforward in most deep learning frameworks. For instance, in TensorFlow (ten), it essentially amounts to calling tf.gradients in a loop over the set of counterfactual inputs (i.e., $\gamma(k/m)$ for $k = 1, \ldots, m$), which could also be batched. Going forward, we abuse the term "integrated gradients" to refer to the approximation described above.

³We did have trouble finding a baseline counterfactual for an RNN model that simulated the workings of a traffic light intersection between a main road and a side street; the naive benchmark counterfactual was one of no traffic at either intersection. But this did not have the lack of semantics that a black image or pure noise has for the Inception network. While no interesting labels are activated for the black image supplied to the Inception network, the same is not true for the "no traffic" benchmark supplied to the RNN model.

Formally, this means that the partial derivative of F along each input dimension satisfies Lebesgue's integrability condition, i.e., the set of discontinuous points has measure zero. Deep networks built out of Sigmoids, ReLUs, and pooling operators should satisfy this condition.

Integrated gradients for Inception. We compute the integrated gradients for the Inception network using the counterfactuals obtained by scaling the input image; $\gamma(\alpha) = \alpha\,\text{img}$, where img is the input image. Similar to the interior gradients, the integrated gradients can also be aggregated along the color channel to obtain pixel importance scores, which can then be visualized as discussed earlier. Figure 4 shows these visualizations for a set of images. For comparison, it also presents the corresponding visualization obtained from the gradients at the actual image. From the visualizations, it seems quite evident that the integrated gradients are better at capturing important features.
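Concretely, the Riemann-sum approximation is a few lines in any framework that exposes gradients. The sketch below is our own illustration in NumPy; `grad_fn` is an assumed callback standing in for a framework gradient call such as tf.gradients, and the straight-line path follows the text.

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, m=50):
    """Riemann-sum approximation of integrated gradients.

    x, baseline: arrays of the same shape; gamma(a) interpolates between them
                 along a straight line (the scaling path when baseline == 0).
    grad_fn:     returns dF/dx at a given input, same shape as x.
    """
    total = np.zeros_like(x, dtype=np.float64)
    for k in range(1, m + 1):
        point = baseline + (k / m) * (x - baseline)   # gamma(k/m)
        total += grad_fn(point)
    # gamma_i(k/m) - gamma_i((k-1)/m) = (x_i - baseline_i) / m for every k.
    return total * (x - baseline) / m

# Sanity check on F(x) = 1 - ReLU(1 - x): attributions sum to F(x) - F(0).
F = lambda x: 1.0 - np.maximum(1.0 - x, 0.0)
grad_F = lambda x: (x < 1.0).astype(np.float64)       # dF/dx
x, base = np.array([2.0]), np.array([0.0])
attr = integrated_gradients(x, base, grad_F, m=1000)
print(attr.sum(), F(x)[0] - F(base)[0])               # both close to 1.0
```

Note that this is exactly the one-variable ReLU example used in the Sensitivity discussion below: the plain gradient at x = 2 is zero, yet the integrated-gradients attribution recovers the full change in the function value.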
We discuss two desirable axioms for feature attribution methods, and show that our integrated gradients method satisfies both. On the other hand, the other feature attribution methods in the literature break one of the two axioms. These methods include DeepLift (Shrikumar et al. (2016)), Layer-wise relevance propagation (LRP) (Binder et al. (2016)), Deconvolutional networks (Zeiler & Fergus (2014)) and Guided back-propagation (Springenberg et al. (2014))."}, {"section_index": "3", "section_name": "Sensitivity.", "section_text": "A highly desirable property for feature attributions is Sensitivity. If a non-zero change in a single input variable (holding all other variables fixed) changes the output by a non-zero amount, then this variable should be given a non-zero attribution. In other words, attribution should be sensitive to change.

Integrated Gradients (ignoring the approximation in computing integrals) satisfies Sensitivity: the attribution to the variable is in fact equal to the change in function value (this is a one-variable instance of Proposition 1).

Gradients break Sensitivity due to saturation (see Section 2.2), i.e., the prediction function may flatten at the input and thus have zero gradient despite the function value at the input being different from that at the benchmark. For a concrete example, consider a one-variable, one-ReLU network f(x) = 1 - ReLU(1 - x). Suppose we change the input from x = 0 to x = 2. The function changes from 0 to 1, but because f is flat for x ≥ 1 (and in particular at the input x = 2), the gradient method gives an attribution of 0 to x, violating Sensitivity. We defer the counterexamples for other methods to Appendix B.

Implementation Invariance. Two networks can be functionally equivalent, i.e., their outputs are equal for all inputs, despite having very different implementations. We would like our attribution method to satisfy Implementation Invariance, i.e., the attributions are always identical for two functionally equivalent networks. To motivate this, notice that attribution can be colloquially defined as distributing the blame (or credit) for the output to the input features. Such a definition does not refer to implementation details. Moreover, the common practice of machine learning tends to evaluate models from an input-output point of view, where implementations are purely means to an end.

Attributions generated by integrated gradients (or gradients, or any function of the interior gradients) satisfy Implementation Invariance, since they are based only on the gradients of the function represented by the network. On the other hand, this fundamental property is unfortunately broken for the DeepLift and LRP methods. Below, we describe intuition for why Implementation Invariance is broken by these methods; a concrete example is provided in Appendix B.

First, notice that gradients are invariant to implementation. In fact, the chain rule for gradients, ∂f/∂g = ∂f/∂h · ∂h/∂g, is essentially about implementation invariance: think of g and f as the input and output of a system, and h as an intermediate implementation detail. The gradient of the output f to the input g can be computed either directly by ∂f/∂g, ignoring the intermediate function h, or by invoking the chain rule via h; both give the same answer.

As previously discussed, gradients don't satisfy Sensitivity, and are therefore unsuitable for attribution. Methods like DeepLift tackle this issue by introducing a benchmark, and in some sense try to compute "discrete gradients" instead of gradients. They use a backpropagation procedure for composing discrete gradients. Unfortunately, such approaches are problematic because the chain rule does not hold for discrete gradients in general:
    \frac{f(x_1)-f(x_0)}{g(x_1)-g(x_0)} \neq \frac{f(x_1)-f(x_0)}{h(x_1)-h(x_0)} \cdot \frac{h(x_1)-h(x_0)}{g(x_1)-g(x_0)}

in general, and therefore these methods fail to satisfy implementation invariance.

If an attribution method fails to satisfy Implementation Invariance, the attributions are potentially sensitive to unimportant aspects of the models. For instance, in the example in Appendix B, the network architecture has more degrees of freedom than needed for representing the function, and as a result there are two sets of values for the network parameters that lead to the same function. The training procedure can converge at either set of values depending on the initialization or for other reasons, but the underlying network function would remain the same. It is undesirable that attributions differ for such reasons.

There are many methods that satisfy Implementation Invariance and Sensitivity. In this section we show that Integrated Gradients is not just one of them: it is in fact also the only method that satisfies an extended set of axioms. The additional axioms are reasonably natural, but perhaps not as fundamental to attribution. As we shall see in the next section, there does not seem to be a perfect empirical evaluation for attribution methods. We hope that these axioms provide a theoretical framework for evaluating attribution methods, which is a good complement to empirical evaluations.

As discussed earlier, Integrated Gradients corresponds to a method called Aumann-Shapley studied by economists in the context of cost-sharing. (The function at hand is a cost function whose input variables are demands of different participants, and attributions correspond to cost-shares.) Here is the list of axioms, borrowed from the cost-sharing literature (Billera & Heath (1982)); a longer discussion of the desirability of these axioms in the context of attribution can be found in Sun & Sundararajan (2011):

Dummy: If the function implemented by the deep network does not depend on a variable, then the attribution to it is always zero.
Additivity: For all inputs, the attributions for a function f1 + f2 are the sum of the attributions for the function f1 and the function f2.
Completeness: The attributions add up to the difference between the function values at the input and the benchmark.
Scale Invariance: Informally, if the inputs to two networks differ in the scale of one of the variables (say Fahrenheit and Celsius), but have the same output for corresponding (rescaled) inputs, then the attributions should be identical.
Proportional Attributions for Homogeneous Variables: If a function can be represented by the sum of two variables, then the two variables should receive attributions proportional to their input values.

We now discuss an empirical evaluation of integrated gradients as a measure of feature importance, using gradients as a benchmark.

Pixel ablations. The first evaluation is based on a method by Samek et al. (2015). Here we ablate⁴ the top 5000 pixels (10% of the image) by importance score, and compute the score drop for the highest scoring object class. The ablation is performed 100 pixels at a time, in a sequence of 50 steps. At each perturbation step k we measure the average drop in score up to step k. This quantity is referred to as area over the perturbation curve (AOPC) by Samek et al. (2015).
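The following is a minimal sketch of this pixel-ablation evaluation under stated assumptions: score_fn is a hypothetical hook returning the class score for an image of shape (H, W, 3), and attributions holds one importance score per pixel:

import numpy as np

def aopc_curve(image, attributions, score_fn, pixels_per_step=100, steps=50):
    # Rank pixels by importance, ablate them 100 at a time by zeroing the
    # R, G, B intensities, and record the average score drop up to each step.
    order = np.argsort(attributions.reshape(-1))[::-1]   # most important first
    flat = image.copy().reshape(-1, image.shape[-1])
    base_score = score_fn(image)
    drops = []
    for k in range(steps):
        idx = order[k * pixels_per_step:(k + 1) * pixels_per_step]
        flat[idx] = 0.0                                  # black-image baseline
        drops.append(base_score - score_fn(flat.reshape(image.shape)))
    return np.cumsum(drops) / (np.arange(steps) + 1)     # AOPC at each step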
Figure 5 shows the AOPC curve with respect to the number of perturbation steps for integrated gradients and gradients at the image. AOPC values at each step represent the average over a dataset of 150 randomly chosen images. It is clear that ablating the top pixels identified by integrated gradients leads to a larger score drop than those identified by gradients at the image.

Having said that, we note an important issue with the technique. The images resulting from pixel perturbation are often unnatural, and it could be that the scores drop simply because the network has never seen anything like them in training.

⁴Ablation in our setting amounts to zeroing out (or blacking out) the intensity for the R, G, B channels. We view this as a natural mechanism for removing the information carried by the pixel (rather than, say, randomizing the pixel's intensity as proposed by Samek et al. (2015)), especially since the black image is a natural baseline for vision tasks.

Localization. The second evaluation is to consider images with human-drawn bounding boxes around objects, and compute the percentage of pixel attribution inside the bounding box. We use the 2012 ImageNet object localization challenge dataset to get a set of human-drawn bounding boxes. We run our evaluation on 100 randomly chosen images satisfying the following properties: (1) the total size of the bounding box(es) is less than two thirds of the image size, and (2) ablating the bounding box significantly drops the prediction score for the object class. (1) is for ensuring that the boxes are not so large that the bulk of the attribution falls inside them by definition, and (2) is for ensuring that the boxed part of the image is indeed responsible for the prediction score for the image. We find that on 82 images the integrated gradients technique leads to a higher fraction of the pixel attribution inside the box than gradients at the actual image. The average difference in the percentage pixel attribution inside the box for the two techniques is 8.4%.

While these results are promising, we note the following caveat. Integrated gradients are meant to capture pixel importance with respect to the prediction task. While for most objects one would expect the pixels located on the object to be most important for the prediction, in some cases the context in which the object occurs may also contribute to the prediction. The cabbage butterfly image from Figure 4 is a good example of this, where the pixels on the leaf are also surfaced by the integrated gradients.

Eyeballing. Ultimately, it was hard to come up with a perfect evaluation technique. So we did spend a large amount of time applying and eyeballing the results of our technique on various networks - the ones presented in this paper, as well as some networks used within products. For the Inception network, we welcome you to eyeball more visualizations in Figure 11 in the appendix and also at https://github.com/ankurtaly/Attributions. While we found our method to be better than gradients at the image for the most part, this is clearly a subjective process prone to interpretation and cherry-picking, but it is also ultimately the measure of the utility of the approach - debugging inherently involves the human.

Finally, also note that we did not compare against other whitebox attribution techniques (e.g., DeepLift (Shrikumar et al. (2016))), because our focus was on black-box techniques that are easy to implement, so comparing against gradients seems like a fair comparison."}, {"section_index": "4", "section_name": "2.8 DEBUGGING NETWORKS", "section_text": "Despite the widespread application of deep neural networks to problems in science and technology,
their internal workings largely remain a black box. As a result, humans have a limited ability to understand the predictions made by these networks. This is viewed as a hindrance in scenarios where the bar for precision is high, e.g., medical diagnosis, obstacle detection for robots, etc. (dar (2016)). Quantifying feature importance for individual predictions is a first step towards understanding the behavior of the network; at the very least, it helps debug misclassified inputs, and sanity check the internal workings. We present evidence to support this below.

We use feature importance to debug misclassifications made by the Inception network. In particular, we consider images from the ImageNet dataset where the groundtruth label for the image is not in the top five labels predicted by the Inception network. We use interior gradients to compute pixel importance scores for both the Inception label and the groundtruth label, and visualize them to gain insight into the cause for misclassification.

Figure 6 shows the visualizations for two misclassified images. The top image genuinely has two objects, one corresponding to the groundtruth label and the other corresponding to the Inception label. We find that the interior gradients for each label are able to emphasize the corresponding objects. Therefore, we suspect that the misclassification is in the ranking logic for the labels rather than the recognition logic for each label. For the bottom image, we observe that the interior gradients are largely similar. Moreover, the cricket gets emphasized by the interior gradients for the mantis (Inception label). Thus, we suspect this to be a more serious misclassification, stemming from the recognition logic for the mantis.

Faithfulness. A natural question is to ask why gradients of counterfactuals obtained by scaling the input capture feature importance for the original image. First, from studying the visualizations in Figure 4, the results look reasonable in that the highlighted pixels capture features representative of the predicted class as a human would perceive them. Second, we confirmed that the network too seems to find these features representative by performing ablations. It is somewhat natural to expect that the Inception network is robust to changes in input intensity; presumably there are some low-brightness images in the training set.

However, these counterfactuals seem reasonable even for networks where such scaling does not correspond to a natural concept like intensity, and when the counterfactuals fall outside the training set, for instance in the case of the ligand-based virtual screening network (see Section 3.1). We speculate that the reason why these counterfactuals make sense is that the network is built by composing ReLUs. As one scales the input starting from a suitable baseline, various neurons activate, and the scaling process does a somewhat thorough job of exploring all the events that contribute to the prediction for the input. There is an analogous argument for other operators such as max pool, average pool, and softmax - here the triggering events aren't discrete, but the argument is analogous.

Limitations of Approach.
We discuss some limitations of our technique; in a sense these are limitations of the problem statement, and they apply equally to other techniques that attribute to base input features.

Inability to capture feature interactions: The models could perform logic that effectively combines features via conjunction or implication-like operations; for instance, it could be that a molecule binds to a site if it has a certain structure that is essentially a conjunction of certain atoms and certain bonds between them. Attributions or importance scores have no way to represent these interactions.

Feature correlations: Feature correlations are a bane to the understandability of all machine learning models. If there are two features that frequently co-occur, the model is free to assign weight to either or both features. The attributions would then respect this weight assignment. But it could be that the specific weight assignment chosen by the model is not human-intelligible. Though there have been approaches to feature selection that reduce feature correlations (Yu & Liu (2003)), it is unclear how they apply to deep models on dense input."}, {"section_index": "5", "section_name": "2.10 RELATED WORK", "section_text": "Over the last few years, there has been a vast amount of work on demystifying the inner workings of deep networks. Most of this work has been on networks trained on computer vision tasks, and deals with understanding what a specific neuron computes (Erhan et al. (2009); Le (2013)) and interpreting the representations captured by neurons during a prediction (Mahendran & Vedaldi (2015); Dosovitskiy & Brox (2015); Yosinski et al. (2015)).

Our work instead focuses on understanding the network's behavior on a specific input in terms of the base-level input features. Our technique quantifies the importance of each feature in the prediction. Known approaches for accomplishing this can be divided into three categories.

Gradient based methods. The first approach is to use gradients of the input features to quantify feature importance (Baehrens et al. (2010); Simonyan et al. (2013)). This approach is the easiest to implement. However, as discussed earlier, naively using the gradients at the actual input does not accurately quantify feature importance, as gradients suffer from saturation.

Score back-propagation based methods. The second set of approaches involves back-propagating the final prediction score through each layer of the network down to the individual features. These include DeepLift (Shrikumar et al. (2016)), Layer-wise relevance propagation (LRP) (Binder et al. (2016)), Deconvolutional networks (DeConvNets) (Zeiler & Fergus (2014)), and Guided back-propagation (Springenberg et al. (2014)). These methods largely differ in the backpropagation logic for various non-linear activation functions. While DeConvNets, Guided back-propagation and LRP rely on the local gradients at each non-linear activation function, DeepLift relies on the deviation in the neuron's activation from a certain baseline input.

Similar to integrated gradients, DeepLift and LRP also result in an exact distribution of the prediction score to the input features. However, as shown by Figure 14, the attributions are not invariant across functionally equivalent networks. Besides, the primary advantage of our method over all these methods is its ease of implementation. The aforesaid methods require knowledge of the network architecture and the internal neuron activations for the input, and involve implementing
a somewhat complicated back-propagation logic. On the other hand, our method is agnostic to the network architecture and relies only on computing gradients, which can be done efficiently in most deep learning frameworks.

Model approximation based methods. The third approach, proposed first by Ribeiro et al. (2016a;b), is to locally approximate the behavior of the network in the vicinity of the input being explained with a simpler, more interpretable model. An appealing aspect of this approach is that it is completely agnostic to the structure of the network and only deals with its input-output behavior. The approximation is learned by sampling the network's output in the vicinity of the input at hand. In this sense, it is similar to our approach of using counterfactuals. Since the counterfactuals are chosen somewhat arbitrarily, and the approximation is based purely on the network's output at the counterfactuals, the faithfulness question is far more crucial in this setting. The method is also expensive to implement, as it requires training a new model locally around the input being explained."}, {"section_index": "6", "section_name": "APPLICATIONS TO OTHER NETWORKS", "section_text": "The technique of quantifying feature importance by inspecting gradients of counterfactual inputs is generally applicable across deep networks. While for networks performing vision tasks the counterfactual inputs are obtained by scaling pixel intensities, for other networks they may be obtained by scaling an embedding representation of the input.

As a proof of concept, we apply the technique to the molecular graph convolutions network of Kearnes et al. (2016) for ligand-based virtual screening, and to an LSTM model (Zaremba et al. (2014)) for language modeling of the Penn Treebank dataset (Marcus et al. (1993)).

The Ligand-Based Virtual Screening problem is to predict whether an input molecule is active against a certain target (e.g., protein or enzyme). The process is meant to aid the discovery of new drug molecules. Deep networks built using molecular graph convolutions have recently been proposed by Kearnes et al. (2016) for solving this problem.

Once a molecule has been identified as active against a target, the next step for medicinal chemists is to identify the molecular features - formally, pharmacophores⁵ - that are responsible for the activity.

⁵A pharmacophore is the ensemble of steric and electronic features that is necessary to ensure that a molecule is active against a specific biological target, to trigger (or to block) its biological response.

Figure 4: Comparing integrated gradients with gradients at the image (example top labels and scores: reflex camera 0.993755, fireboat 0.999961, school bus 0.997033, mosque 0.999127, viaduct 0.999994, cabbage butterfly 0.996838, starfish 0.999992). Left-to-right: original input image, label and softmax score for the highest scoring class, visualization of integrated gradients, visualization of gradients at the image. Notice that the visualizations obtained from integrated gradients are better at reflecting distinctive features of the image.
This is akin to quantifying feature importance, and can be achieved using the method of integrated gradients. The attributions obtained from the method help with identifying the dominant molecular features, and also help sanity-check the behavior of the network by shedding light on its inner workings. With regard to the latter, we discuss an anecdote later in this section on how attributions surfaced an anomaly in the W1N2 network architecture proposed by Kearnes et al. (2016).

Figure 5: AOPC (Samek et al. (2015)) for integrated gradients and gradients at the image, as a function of the number of perturbation steps.

Figure 6: Interior gradients of misclassified images. Left-to-right: original image, softmax scores for the top label assigned by the Inception network and the groundtruth label provided by ImageNet (top image: Inception label strainer, score 0.594582; groundtruth label cabbage butterfly, score 0.00256323; bottom image: Inception label mantis, score 0.0908096; groundtruth label cricket, score 0.0118476), visualization of integrated gradients w.r.t. the Inception label, visualization of integrated gradients w.r.t. the groundtruth label.

Defining the counterfactual inputs. The first step in computing integrated gradients is to define the set of counterfactual inputs. The network requires an input molecule to be encoded by hand as a set of atom and atom-pair features describing the molecule as an undirected graph. Atoms are featurized using a one-hot encoding specifying the atom type (e.g., C, O, S, etc.), and atom-pairs are featurized by specifying either the type of bond (e.g., single, double, triple, etc.) between the atoms, or the graph distance between them.⁶

The counterfactual inputs are obtained by scaling the molecule features down to zero vectors, i.e., the set {α·Features(mol) | 0 ≤ α ≤ 1}, where Features(mol) is an encoding of the molecule into atom and atom-pair features.

The careful reader might notice that these counterfactual inputs are not valid featurizations of molecules. However, we argue that they are still valid inputs for the network. First, all operators in the network (e.g., ReLUs, linear filters, etc.) treat their inputs as continuous real numbers rather than discrete zeros and ones. Second, all fields of the counterfactual inputs are bounded between zero and one; therefore, we don't expect them to appear spurious to the network. We discuss this further in Section 2.9.

⁶This featurization is referred to as "simple" input featurization in Kearnes et al. (2016).
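A minimal sketch of the scaling counterfactuals just described, assuming the featurization is given as dense arrays atom_feats and pair_feats (hypothetical names for the hand-encoded atom and atom-pair feature matrices):

import numpy as np

def molecule_counterfactuals(atom_feats, pair_feats, m=50):
    # Uniformly scale the feature matrices from the all-zero baseline
    # (alpha = 0) up to the actual featurization (alpha = 1).
    return [((k / m) * atom_feats, (k / m) * pair_feats)
            for k in range(m + 1)]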
Figure 7: Attribution for a molecule (CID1562745) under the W2N2 network (Kearnes et al. (2016)); the molecule is active on task PCBA-588342. Attribution summary: softmax score for the task 0.98; atom attribution 0.62 (63%); bond attribution 0.45 (46%); D2-pair attribution -0.03 (-3%). Heatmaps show the per-atom attributions and the bond and D2-pair attributions.

In what follows, we discuss the behavior of a network based on the W2N2-simple architecture proposed by Kearnes et al. (2016). On inspecting the behavior of the network over counterfactual inputs, we observe saturation here as well. Figure 13a shows the trend in the softmax score for the task PCBA-588342 for twenty-five active molecules as we vary the scaling parameter α from zero to one. While the overall saturated region is small, saturation does exist in the vicinity of the input (0.9 ≤ α ≤ 1). Figure 13b in the appendix shows that the total feature gradient varies significantly along the scaling path; thus, the gradients at the molecule alone are not fully indicative of the behavior of the network.

Visualizing integrated gradients. We cumulate the gradients of these counterfactual inputs to obtain an attribution of the prediction score to each atom and atom-pair feature. Unlike image inputs, which have dense features, the set of input features for molecules is sparse. Consequently, the attributions are sparse and can be inspected directly. Figure 7 shows heatmaps for the atom and atom-pair attributions for a specific molecule.

Using the attributions, one can easily identify the atoms and atom-pairs that have a strongly positive or strongly negative contribution. Since the attributions add up to the final prediction score (see Proposition 1), the attribution magnitudes can be used for accounting the contributions of each feature. For instance, the atom-pairs that have a bond between them cumulatively contribute 46% of the prediction score, while all other atom-pairs cumulatively contribute -3%.

We presented the attributions for 100 molecules active against a specific task to a few chemists. The chemists were able to immediately spot dominant functional groups (e.g., aromatic rings) being surfaced by the attributions. A next step could be to cluster or aggregate the attributions across a large set of molecules active against a specific task, to identify a common denominator of features shared by all active molecules.

Identifying Dead Features. We now discuss how attributions helped us spot an anomaly in the W1N2 architecture. On applying the integrated gradients method to the W1N2 network, we found that several atoms in the same molecule received the exact same attribution. For instance, for the molecule in Figure 7, we found that several carbon atoms at positions 2, 3, 14, 15, and 16 received
the same attribution of 0.0043 despite being bonded to different atoms; e.g., the carbon at position 3 is bonded to an oxygen whereas the carbon at position 2 is not. This is surprising, as one would expect two atoms with different neighborhoods to be treated differently by the network.

On investigating the problem further, we found that since the W1N2 network had only one convolution layer, the atom and atom-pair features were not fully convolved. This caused all atoms that have the same atom type and the same number of bonds of each type to contribute identically to the network. This is not the case for networks that have two or more convolutional layers.

Despite the aforementioned problem, the W1N2 network had good predictive accuracy. One hypothesis for this is that atom types and their neighborhoods are tightly correlated; for instance, an outgoing double bond from a carbon is always to another carbon or oxygen atom. As a result, given the atom type, an explicit encoding of the neighborhood is not needed by the network. This also suggests that equivalent predictive accuracy can be achieved using a simpler "bag of atoms"-type model."}, {"section_index": "7", "section_name": "3.2 LANGUAGE MODELING", "section_text": "To apply our technique to language modeling, we study word-level language modeling of the Penn Treebank dataset (Marcus et al. (1993)), and apply an LSTM-based sequence model based on Zaremba et al. (2014). For such a network, given a sequence of input words and the softmax prediction for the next word, we want to identify the importance of the preceding words for the score.

As in the case of the Inception model, we observe saturation in this LSTM network. To describe the setup, we choose 20 randomly chosen sections of the test data, and for each of them inspect the prediction score of the next word using the first 10 words. Then we give each of the 10 input words a weight of α ∈ [0, 1], which is applied to scale their embedding vectors. In Figure 8, we plot the prediction score as a function of α. For all but one of the curves, the curve starts near zero at α = 0, moves around in the middle, stabilizes, and turns flat around α = 1. For the interesting special case where the softmax score is non-zero at α = 0, it turns out that the word being predicted represents out-of-vocabulary words.

Figure 8: Softmax score of the predicted word as a function of the scaling parameter α.

In Figures 9 and 10, we show two comparisons of gradients to integrated gradients. Due to saturation, the magnitudes of gradients are so small compared to the prediction scores that it is difficult to make sense of them. In comparison, (approximate) integrated gradients have a total amount close to the prediction, and seem to make sense. For example, in the first example, the integrated gradients attribute the prediction score of "than" to the preceding word "more". This makes sense, as "than" often follows right after "more" in English. On the other hand, the standard gradient gives a slightly negative attribution that betrays our intuition. In the second example, in predicting the second "ual", integrated gradients are clearly the highest for the first occurrence of "ual", which is the only word that is highly predictive of the second "ual". On the other hand, standard gradients are not only tiny, but also similar in magnitude for multiple words.

Figure 9: Prediction for "than": 0.5307, total integrated gradient: 0.5322.

Figure 10: Prediction for "ual": 0.0062, total integrated gradient: 0.0063. The per-word attributions are:

Sentence:                     and      N        minutes  after    the      ual     trading
Integrated gradients (×1e-3): 0.0707   0.1286   0.3619   1.9796   -0.0063  4.1565  0.2213
Gradients (×1e-3):            0.0066   0.0009   0.0075   0.0678   0.0033   0.0474  0.0184
Sentence (cont.):             halt     came     news     that     the      ual group
Integrated gradients (×1e-3): -0.8501  -0.4271  0.4401   -0.0919  0.3042
Gradients (×1e-3):            -0.0590  -0.0059  0.0511   0.0041   0.0349
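A sketch of the embedding-scaling counterfactuals used for the sequence model; token_embeddings is a hypothetical (sequence_length, embedding_dim) array holding the input words' embedding vectors:

import numpy as np

def embedding_counterfactuals(token_embeddings, m=50):
    # Scale every word's embedding vector by alpha in [0, 1];
    # alpha = 0 is the all-zero baseline, alpha = 1 the actual input.
    return [(k / m) * token_embeddings for k in range(m + 1)]

Word-level attributions can then be obtained by accumulating the embedding gradients over these inputs and taking, per word, the dot product with the actual embedding vector.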
"}, {"section_index": "8", "section_name": "4 CONCLUSION", "section_text": "We present Interior Gradients, a method for quantifying feature importance. The method can be applied to a variety of deep networks without instrumenting the network; in fact, the amount of code required is fairly tiny. We demonstrate that it is possible to have some understanding of the performance of the network without a detailed understanding of its implementation, opening up the possibility of easy and wide application, and lowering the bar on the effort needed to debug deep networks.

We also wonder if Interior Gradients are useful within training as a measure against saturation, or indeed in other places that gradients are used."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Patrick Riley and Christian Szegedy for their helpful feedback on the technique and on drafts of this paper.

Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. Technical Report 1341, University of Montreal, 2009.

Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, pp. 313-330, 1993.

Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In 13th European Conference on Computer Vision (ECCV), pp. 818-833, 2014.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of AISTATS, pp. 249-256, 2010.

Figure 11: More visualizations comparing integrated gradients with gradients at the image (example top labels and scores: spiny lobster 0.999827, Rottweiler 0.999882, American coot 0.999229, traffic light 0.999968, head cabbage 0.99999, manhole cover 1.0 and 0.999974, golfcart 0.999726). Left-to-right: original input image, label and softmax score for the highest scoring class, visualization of integrated gradients, visualization of gradients at the image.

Figure 13: Saturation in the W2N2 network (Kearnes et al. (2016)). Plots of the softmax score for task PCBA-588342, and of the sum of the feature gradients w.r.t. the same task, for twenty-five molecules. All molecules are active against the task.

Figure 12: Saturation in intermediate layers of Inception (layers mixed3b, mixed4b, mixed4d, mixed5b). For each layer we plot the L2 and Cosine distance between the activation vector for a scaled-down image and the actual input image, with respect to the scaling parameter. Each plot shows the trend for 30 randomly chosen images from the ImageNet dataset.
Notice that trends in all plots flatten as the scaling parameter increases. For the deepest Inception layer mixed5b, the Cosine distance to the activation vector at the image is less than 0.01 when α > 0.6, which is really tiny given that this layer has 50176 neurons.

Figure 14: Attributions for two functionally equivalent networks. The networks are f(x1, x2) = ReLU(z1 - 1 - z2) with z1 = ReLU(x1), z2 = ReLU(x2), and g(x1, x2) = ReLU(z1 - z2) with z1 = ReLU(x1 - 1), z2 = ReLU(x2). The figure shows attributions at the input x1 = 3, x2 = 1 using integrated gradients, DeepLift (Shrikumar et al. (2016)), and Layer-wise relevance propagation (LRP) (Binder et al. (2016)): integrated gradients give x1 = 2, x2 = -1 for both networks, whereas DeepLift and LRP give x1 = 1.5, x2 = -0.5 for f but x1 = 2, x2 = -1 for g. The reference input for integrated gradients and DeepLift is x1 = 0, x2 = 0. All methods except integrated gradients provide different attributions for the two networks."}, {"section_index": "10", "section_name": "3 ATTRIBUTION COUNTER-EXAMPLES", "section_text": "We show that the methods DeepLift and Layer-wise relevance propagation (LRP) break the implementation invariance axiom, and that the Deconvolution and Guided back-propagation methods break the sensitivity axiom.

Figure 14 provides an example of two equivalent networks f(x1, x2) and g(x1, x2) for which DeepLift and LRP yield different attributions. The two networks can be written as f = ReLU(h) and g = ReLU(k), where

    h(x_1, x_2) = \mathrm{ReLU}(x_1) - 1 - \mathrm{ReLU}(x_2)
    k(x_1, x_2) = \mathrm{ReLU}(x_1 - 1) - \mathrm{ReLU}(x_2)

Note that h and k are not equivalent: they have different values whenever x1 < 1. But f and g are equivalent. To prove this, suppose for contradiction that f and g are different for some x1, x2. Then it must be the case that ReLU(x1) - 1 ≠ ReLU(x1 - 1). This happens only when x1 < 1, which implies that f(x1, x2) = g(x1, x2) = 0.

Now we leverage the above example to show that Deconvolution and Guided back-propagation break sensitivity. Consider the network f(x1, x2) from Figure 14. For a fixed value of x1 greater than 1, the output decreases linearly as x2 increases from 0 to x1 - 1. Yet, for all inputs, Deconvolutional networks and Guided back-propagation result in zero attribution for x2. This happens because for all inputs the back-propagated signal received at the node ReLU(x2) is negative and is therefore not back-propagated through the ReLU operation (per the rules of deconvolution and guided back-propagation; see Springenberg et al. (2014) for details). As a result, the feature x2 receives zero attribution despite the network's output being sensitive to it."}]
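A quick numerical spot-check of the claimed equivalence (a sketch; any grid of test points would do):

import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def f(x1, x2):
    return relu(relu(x1) - 1 - relu(x2))

def g(x1, x2):
    return relu(relu(x1 - 1) - relu(x2))

# f and g agree everywhere, even though their inner functions h and k
# differ for x1 < 1.
for x1 in np.linspace(-2.0, 4.0, 25):
    for x2 in np.linspace(-2.0, 4.0, 25):
        assert abs(f(x1, x2) - g(x1, x2)) < 1e-12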
r1G4z8cge
[{"section_index": "0", "section_name": "MOLLIFYING NETWORKS", "section_text": "Caglar Gulcehre¹, Marcin Moczulski²٭, Francesco Visin³٭, Yoshua Bengio¹ - ¹University of Montreal, ²University of Oxford, ³Politecnico di Milano

The optimization of deep neural networks can be more challenging than traditional convex optimization problems due to the highly non-convex nature of the loss function, which can, e.g., involve pathological landscapes such as saddle surfaces that are difficult to escape from for algorithms based on simple gradient descent. In this paper, we attack the problem of optimizing highly non-convex neural network objectives by starting with a smoothed - or mollified - objective function which becomes more complex as the training proceeds. Our proposition is inspired by recent studies in continuation methods: similarly to curriculum methods, we begin by learning an easier (possibly convex) objective function and let it evolve during training until it eventually becomes the original, difficult-to-optimize objective function. The complexity of the mollified networks is controlled by a single hyperparameter that is annealed during training. We show improvements on various difficult optimization tasks and establish a relationship between recent works on continuation methods for neural networks and mollifiers."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In the last few years, deep neural networks - i.e. convolutional networks (LeCun et al., 1989), LSTMs (Hochreiter & Schmidhuber, 1997a) or GRUs (Cho et al., 2014) - set the state of the art on a range of challenging tasks (Szegedy et al., 2014; Visin et al., 2015; Hinton et al., 2012; Sutskever et al., 2014; Bahdanau et al., 2014; Mnih et al., 2013; Silver et al., 2016). However, when trained with variants of SGD (Bottou, 1998), deep networks can be difficult to optimize due to their highly non-linear and non-convex nature (Choromanska et al., 2014; Dauphin et al., 2014).

A number of approaches were proposed to alleviate the difficulty of optimization: addressing the problem of internal covariate shift with Batch Normalization (Ioffe & Szegedy, 2015), learning with a curriculum (Bengio et al., 2009), recently an approach to train RNNs with a diffusion process (Mobahi, 2016), and graduated optimization (Hazan et al., 2015). The impact of noise injection on the behavior of modern deep learning methods has been explored by Neelakantan et al. (2015a). Hazan et al. (2015) have shown that injecting a particular noise and scheduling it carefully can guarantee convergence in O(1/σ²ε²) steps for ε-optimal and σ-nice functions. Similarly to our work, graduated optimization optimizes a smoothed objective function without performing expensive convolutions. Injecting noise into the activation functions and scheduling it have recently been shown to improve performance on a wide variety of tasks (Gulcehre et al., 2016).

We connect the ideas of curriculum learning and continuation methods with those arising from models with skip connections and layers that compute near-identity transformations. Skip connections make it possible to train very deep residual and highway architectures (He et al., 2015; Srivastava et al., 2015) by skipping layers or blocks of layers. Similarly, it has been shown that stochastically changing the depth of a network during training (Huang et al., 2016b) does not prevent convergence
and enables better generalization performance.

We discuss the idea of mollification for neural networks - a form of differentiable smoothing of the loss function connected to noisy activations - which in our case can be interpreted as a form of adaptive noise injection controlled by a single hyperparameter. Inspired by Huang et al. (2016b), we use a hyperparameter to stochastically control the depth of our network. This allows us to start the optimization from a convex objective function (as long as the optimized criterion is convex, e.g. linear or logistic regression) and to slowly introduce more complexity into the model by annealing the hyperparameter, thus making the network deeper and increasingly non-linear.

٭This work was done while these students were interning at the MILA lab at the University of Montreal."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "An important difference of our work compared to injecting noise into the gradients, as explored in (Hazan et al., 2015; Neelakantan et al., 2015b), is that we inject the noise in the forward computation of the graph and thus shape the cost function directly. As a result, the cost function of the mollified network is consistent between training and test time, and this makes early stopping much easier.

Continuation methods and simulated annealing provide a general strategy to reduce the impact of local minima and deal with non-convex, continuous, but not necessarily everywhere differentiable objective functions by smoothing the original objective function and gradually reducing the amount of smoothing during training (Allgower & Georg, 1980) (see Fig. 1).

In machine learning, approaches based on curriculum learning (Bengio et al., 2009) are inspired by this principle and define a sequence of gradually more difficult training tasks (or training distributions) that eventually converge to the task of interest.

In the context of stochastic gradient descent, we use a stochastic estimation of the gradient for the smoothed objective function. This is convenient because it may not be analytically feasible to compute the smoothed function, but a Monte-Carlo estimate can often be obtained easily.

¹We plan to release the source code of the models and experiments at http://github.com/caglar/molly_nets/.

Figure 1: A sequence of optimization problems of increasing complexity, where the first ones are easy to solve but only the last one corresponds to the actual problem of interest. It is possible to tackle the problems in order, starting each time at the solution of the previous one and tracking the local minima along the way.

In this paper we construct a sequence of smoothed objective functions obtained with a form of mollification, and we progressively optimize them. The training procedure iterates over the sequence of objective functions, starting from the simpler ones - i.e. with a smoother loss surface - and moving towards more complex ones until the last, original, objective function is reached.¹

We smooth the loss function L, which is parametrized by θ ∈ R^n, by convolving it with another function K(τ) with stride τ ∈ R^n:

    L_K(\theta) = (L * K)(\theta) = \int L(\theta - \tau) K(\tau) \, d\tau    (1)

Although there are many choices for the function K(·), we focus on those that satisfy the definition of a mollifier. A mollifier is an infinitely differentiable function that behaves like an approximate identity in the group of convolutions of integrable functions.
If K(·) is an infinitely differentiable function that converges to the Dirac delta function when appropriately rescaled, then for any integrable function L it is a mollifier:

    L(\theta) = \lim_{\epsilon \to 0} \int \epsilon^{-n} K(\tau / \epsilon) \, L(\theta - \tau) \, d\tau    (2)

If we choose K(·) to be a mollifier and obtain the smoothed loss function L_K as in Eqn. 1, we can take its gradient with respect to θ using directly the result from Evans (1998):

    \nabla_\theta L_K(\theta) = \nabla_\theta (L * K)(\theta) = (L * \nabla K)(\theta)    (3)

To relate the resulting gradient ∇_θ L_K to the gradient of the original function L, we introduce the notion of weak gradient, i.e. an extension of the idea of weak/distributional derivatives to functions with multidimensional arguments, such as loss functions of neural networks. For an integrable function L ∈ L¹([a, b]^n), g ∈ L¹([a, b]^n) is an n-dimensional weak gradient of L if it satisfies:

    \int_C g(\tau) K(\tau) \, d\tau = - \int_C L(\tau) \nabla K(\tau) \, d\tau    (4)

where K(τ) is an infinitely differentiable function vanishing at infinity, C ∈ [a, b]^n and τ ∈ R^n. Combining these results, we obtain:²

    \nabla_\theta L_K(\theta) = (L * \nabla K)(\theta)    (by Eqn. 3)
                             = \int_C L(\theta - \tau) \nabla K(\tau) \, d\tau
                             = \int_C g(\theta - \tau) K(\tau) \, d\tau    (by Eqn. 4)

For a function L that is differentiable almost everywhere, the weak gradient g(θ) is equal to ∇_θ L almost everywhere. With a slight abuse of notation we can therefore write:

    \nabla_\theta L_K(\theta) = \int_C \nabla_\theta L(\theta - \tau) K(\tau) \, d\tau

²We omit for brevity the algebraic details involved with a translation of the argument.

It is possible to use the standard Gaussian distribution N(0, I) as a mollifier K(τ), as it satisfies the desired properties: it is infinitely differentiable, a sequence of properly rescaled Gaussian distributions converges to the Dirac delta function, and it vanishes at infinity. With such a K(τ) the gradient becomes:

    \nabla_\theta L_{K=N}(\theta) = \int \nabla_\theta L(\theta - \tau) \phi(\tau) \, d\tau = E_\tau[\nabla_\theta L(\theta - \tau)], \quad \tau \sim N(0, I)

Exploiting the fact that a Gaussian distribution is a mollifier, we can focus on a sequence of mollifications indexed by the scaling parameter ε introduced in Eqn. 2. A single element of this sequence takes the following form:

    \nabla_\theta L_{N,\epsilon}(\theta) = \int \nabla_\theta L(\theta - \tau) \, \epsilon^{-n} \phi(\tau / \epsilon) \, d\tau = E_\tau[\nabla_\theta L(\theta - \tau)], \quad \tau \sim N(0, \epsilon^2 I)

Writing σ for the scale of the noise, we have:

    \nabla_\theta L_{N,\sigma}(\theta) = E_\tau[\nabla_\theta L(\theta - \tau)], \quad \tau \sim N(0, \sigma^2 I)

    \lim_{\sigma \to 0} \nabla_\theta L_{N,\sigma}(\theta) = \nabla_\theta L(\theta)

An intuitive interpretation of the result is that σ determines the standard deviation of a mollifying Gaussian and is annealed in order to construct a sequence of gradually less "blurred" cost functions, closer and closer to the original one.

So far we obtained the mollified version L_K(θ) of the cost function L(θ) by convolving it with a mollifier K(θ). The kernel K(θ) corresponds to the average effect of injecting noise sampled from a standard Normal distribution. The amount of noise controls the amount of smoothing. Gradually reducing the noise during training is related to a form of simulated annealing (Kirkpatrick et al., 1983). Similarly to the analysis in Mobahi (2016), we can write a Monte-Carlo estimate of this equation, as in Appendix A.

The Monte-Carlo estimators of the mollifiers can be easily implemented with neural networks, where the layers typically have the form:

    h^l = f(W^l h^{l-1})

with h^{l-1} a vector of activations from the previous layer in the hierarchy, W^l a matrix representing a linear transformation, and f an element-wise non-linearity of choice. A mollification of such a layer can be formulated as:

    h^l = f((W^l + \xi^l) h^{l-1}), \quad \text{where } \xi^l \sim N(\mu, \sigma^2)    (16)
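A one-sample Monte-Carlo sketch of the mollified layer in Eqn. 16 (assuming zero-mean weight noise; f defaults to tanh purely for illustration):

import numpy as np

def mollified_layer(h_prev, W, sigma, f=np.tanh):
    # Perturb the weights with Gaussian noise of standard deviation sigma,
    # then apply the usual affine transformation and non-linearity.
    xi = sigma * np.random.randn(*W.shape)
    return f((W + xi) @ h_prev)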
From Eqn. 16, it is easy to see that both weight noise methods proposed by Hinton & van Camp (1993) and Graves (2011) can be seen as variations of the Monte-Carlo estimate of mollifiers.

We introduce a generalization of the concept of mollifiers that encompasses the approach we explore here and that is targeted during optimization via a continuation method using stochastic gradient descent.

Definition 2.1. (Generalized Mollifier). A generalized mollifier is an operator T_σ(f) defining a mapping between two functions, such that:

    \lim_{\sigma \to 0} T_\sigma f = f    (17)

    \lim_{\sigma \to \infty} (T_\sigma f)(x) = x, \ \text{i.e. the identity function}    (18)

    \frac{\partial (T_\sigma f)(x)}{\partial x} \ \text{exists} \quad \forall x, \, \sigma > 0    (19)

In addition, we consider noisy mollifiers, which can be defined as the expected value of a stochastic function φ(x, ξ) under some noise source ξ with variance σ:

    (T_\sigma f)(x) = E_\xi[\phi(x, \xi_\sigma)]    (20)

Definition 2.2. (Noisy Mollifier). We call a stochastic function φ(x, ξ_σ) with input x and noise ξ_σ a noisy mollifier if its expected value corresponds to the application of a generalized mollifier T_σ, as per Eqn. 20.

The composition of two noisy mollifiers sharing the same σ is also a noisy mollifier, since the three properties in the definition (Eqns. 17, 18, 19) are still satisfied. When σ = 0, no noise is injected and therefore the original function will be optimized. If σ → ∞ instead, the function will become an identity function. Thus, for instance, if we mollify each layer of a feed-forward network except the output layer, when σ → ∞ all the mollified layers will become identity functions and the objective function of the network with respect to its inputs will be convex.

Consequently, corrupting separately the activation function of each level of a deep neural network (but with a shared noise level σ) and annealing σ yields a noisy mollifier for the objective function. This is related to the work of Mobahi (2016), who recently introduced a way of analytically smoothing the non-linearities to help the training of recurrent networks. That approach differs from our algorithm in two ways: we use a noisy mollifier (rather than an analytic smoothing of the network's non-linearities), and we introduce (in the next section) a particular form of the noisy mollifier that empirically proved to work well.

Shaping the cost function to define a sequence of costs progressing from easier to more difficult ones can be related to reward shaping (Ng et al., 1999; Ng, 2003) algorithms. In our algorithm, we shape the cost and the model architecture itself, rather than rewards or targets, in order to make the optimization easier. In that sense, reward shaping can be considered closer to curriculum learning."}, {"section_index": "3", "section_name": "3 METHOD", "section_text": "We define the desired behavior of the network in the limit cases where the noise is very large or very small, and modify the model architecture accordingly. Specifically, during training we minimize a sequence of increasingly complex noisy objectives L = (L_1(θ; σ_1), L_2(θ; σ_2), ..., L_k(θ; σ_k)) that we obtain by annealing the scale (variance) of the noise σ. Let us note that our algorithm satisfies the fundamental properties of the generalized and noisy mollifiers that we introduced earlier.

We use a noisy mollifier based on our definition in Section 2.4.
Instead of convolving the objective function with a kernel:

1. We start the training by optimizing a convex objective function that is obtained by configuring all the layers between the input and the last cost layer to compute an identity function, i.e., by skipping both the affine transformations and the blocks followed by nonlinearities.

2. During training, the magnitude of the noise, which is proportional to p, is annealed, allowing us to gradually evolve from identity transformations to linear transformations between the layers.

3. Simultaneously, as we decrease p, the noisy mollification procedure allows the element-wise activation functions to gradually change from linear to nonlinear.

We propose an algorithm to mollify the cost of a neural network which also addresses an important drawback of previously proposed noisy training procedures: as the noise gets larger, it can dominate the learning process and lead the algorithm to perform a random walk on the energy landscape of the objective function. Conversely, in our algorithm, as the noise gets larger, gradient descent minimizes a simpler (e.g. convex) but still meaningful objective function."}, {"section_index": "4", "section_name": "4 SIMPLIFYING THE OBJECTIVE FUNCTION FOR FEEDFORWARD NETWORKS", "section_text": "For every unit of each layer, we either copy the activation (output) of the corresponding unit of the previous layer (the identity path in Figure 2) or output a noisy activation h̃^l of a non-linear transformation of it, ψ(h^{l-1}, ξ; W^l), where ξ is noise, W^l is a weight matrix applied on h^{l-1}, and π^l is a vector of binary decisions for each unit (the convolutional path in Figure 2):

    \tilde{h}^l = \psi(h^{l-1}, \xi^l; W^l)

    h^l = \pi^l \odot h^{l-1} + (1 - \pi^l) \odot \tilde{h}^l

To decide which path to take, for each unit in the network, a binary stochastic decision is taken by drawing from a Bernoulli distribution with probability dependent on the decaying value of p^l:

    \pi^l \sim \mathrm{Bernoulli}(p^l)

For p^l = 1, the layer computes the identity function, leading to a convex objective. If p^l = 0, the layer computes the original non-linear transformation, unfolding the full capacity of the model. We call the connections introduced this way unitwise stochastic skip connections (USSC).

If the number of hidden units of layer l - 1 and layer l + 1 is not the same, we can either zero-pad layer l - 1 before feeding it into the next layer or apply a linear projection to obtain the right dimensionality.

In DropIn layers (Smith et al., 2016), a binary random vector is sampled from a Bernoulli distribution to decide whether to introduce a skip connection from the layer below l - 1 for each layer l, and this is used as a regularizer. As opposed to USSC, DropIn layers do not necessarily achieve a convex objective function as the DropIn ratio (p^l) increases.

The pseudo-code for the mollified activations is reported in Algorithm 1.

Figure 2: Top: stochastic depth. Bottom: mollifying network. The dashed line represents the optional residual connection. In the top path, the input is processed with a convolutional block followed by a noisy activation function, while in the bottom path the original activation of layer l - 1 is propagated untouched. For each unit, one of the two paths is picked according to a binary stochastic decision π.
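A minimal sketch of a unitwise stochastic skip connection, assuming equal layer widths (square W) and zero-mean weight noise as in Eqn. 16:

import numpy as np

def ussc_layer(h_prev, W, p, sigma, f=np.tanh):
    # Per-unit binary decision: with probability p copy the previous
    # layer's activation (identity path), otherwise take the noisy
    # non-linear path psi(h_prev, xi; W).
    pi = (np.random.rand(h_prev.shape[0]) < p).astype(h_prev.dtype)
    xi = sigma * np.random.randn(*W.shape)
    h_tilde = f((W + xi) @ h_prev)
    return pi * h_prev + (1.0 - pi) * h_tilde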
Algorithm 1 Activation of a unit i at layer l."}, {"section_index": "5", "section_name": "5 LINEARIZING THE NETWORK", "section_text": "In Section 2, we show that convolving the objective function with a particular kernel can be approximated by adding noise to the activation function. This method may suffer from excessive random exploration when the noise is very large.

We address this issue by bounding the element-wise activation function f(·) with its linear approximation when the variance of the noise is very large, after centering it at the origin. The resulting function f*(·) is bounded and centered around the origin.

Note that centering the sigmoid or hard-sigmoid will make them symmetric with respect to the origin. With a proper choice of the standard deviation σ(h), the noisy activation function becomes a linear function of the input when p is large, as illustrated by Figure 10.

Figure 3: The figures show how to evolve the model to make it closer to a linear network. Arrows denote the direction of the noise pushing the activation function towards the linear function. a) The quasi-convex envelope established by |sigmoid(x) - 0.5| around |0.25x|. b) A depiction of how the noise pushes the sigmoid to become a linear function.

Let u*(x) = u(x) - u(0), where u(0) is the offset of the function from the origin, and x_i the i-th dimension of an affine transformation of the output of the previous layer h^{l-1}: x_i = w_i^T h^{l-1} + b_i. Then:

    \psi(x_i, \xi_i; w_i) = \mathrm{sgn}(u^*(x_i)) \min(|u^*(x_i)|, \, |f^*(x_i) + \mathrm{sgn}(u^*(x_i)) |s_i||) + u(0)    (23)

    s_i \sim N(0, \, p \, c \, \sigma(x_i))

We have a simpler form of the equations to linearize the ReLU (Nair & Hinton, 2010) activation function when p^l → ∞. Instead of the complicated Eqn. 23, we can use a simpler equation, as in Eqn. 26, to achieve the linearization of the activation function when we have very large noise in the activation function:

    s_i = \min(|x_i|, \, p \, \sigma(x_i) |\xi_i|)

    \psi(x_i, \xi_i; w_i) = f(x_i) - s_i    (26)

In a similar vein, it is possible to smooth the objective functions of LSTM and GRU networks by starting the optimization procedure with a simpler objective function, such as optimizing a word2vec, BoW-LM or CRF objective function at the beginning of training, and gradually increasing the difficulty of the optimization by increasing the capacity of the network.

For GRUs, we set the update gate to 1/t - where t is the time-step index - and the reset gate to 1 if the noise is very large, using Algorithm 1. Similarly for LSTMs, we can set the output gate to 1, the input gate to 1/t, and the forget gate to 1 - 1/t when the noise is very large. The output gate is 1 or close to 1 when the noise is very large. This way the LSTM will behave like a BoW model. In order to achieve this behavior, the activations ψ(x_t, ξ) of the gates can be formulated as:

    \psi(x_t, \xi) = f(x_t + p \, \sigma(x_t) |\xi|)    (28)

By using a particular formulation of σ(x_t) that constrains the expectation of ψ over ξ when p^l = 1, we can obtain a function whose expectation is any target y ∈ R within the range of f(·) - i.e., a function that is discrete in expectation, but still differentiable per sample.
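A one-sample sketch of the noisy gate activation in Eqn. 28 (sigma_fn is a stand-in for the paper's σ(x), which we leave abstract here):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noisy_gate(x, p, sigma_fn=np.abs, f=sigmoid):
    # psi(x, xi) = f(x + p * sigma(x) * |xi|): for large p the half-normal
    # noise term dominates and pushes the gate towards saturation.
    xi = np.random.randn(*np.shape(x))
    return f(x + p * sigma_fn(x) * np.abs(xi))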
We used a different annealing schedule for each layer of the network, such that the noise in the lower layers anneals faster. This is similar to the linearly decaying probability of layers in Huang et al. (2016b).

Exponential Decay  In our experiments, we focused on using an annealing schedule similar to the inverse sigmoid rule in Bengio et al. (2015) with p_t^l:

    p_t^l = 1 - exp(-(k v_t l) / (t L)),

with hyper-parameter k ≥ 0 at the t-th update for the l-th layer, where L is the number of layers of the model. We stop annealing when the expected depth p̄_t = Σ_{i=1}^{L} p_t^i reaches some threshold δ. In our experiments we set v_t to be a moving average of the loss¹ of the network, but for some of our experiments that resulted in unstable behavior during training and thus we had to fix v_t to 1. An advantage of using a running average of the loss for v_t is that the behavior of the loss/optimization can directly influence the annealing behavior of the network, because we will have:

    lim_{v_t → ∞} p_t^l = 1    and    lim_{v_t → 0} p_t^l = 0.

This has a desirable property: when the training loss is high, the noise injected into the system will be large as well. As a result, the model is encouraged to do more exploration, while if the model converges, the noise injected into the system by the mollification procedure will be zero.

We compare the different annealing methods described in this paper in Figure 4.

Figure 4: We compare different annealing schedules (sqrt, linear, and exponential decay with k = 100, 50, 10) with respect to time (iterations).

¹Depending on whether the model overfits or not, this can be a moving average of the training or validation loss.

In this section we mainly focus on the training of difficult-to-optimize models, in particular deep MLPs with sigmoid or tanh activation functions. The details of the experimental procedure are provided in Appendix C.

Furthermore, in our experiments we observe that training with noisy mollifiers can potentially be helpful for generalization. This can be due to the noise induced into backpropagation through the noisy mollification, which makes SGD more likely to converge to a flatter minimum (Hochreiter & Schmidhuber, 1997b), because the noise helps it escape from sharper local minima.

We train a thin deep neural network on the MNIST (LeCun & Cortes, 1998) dataset with 72 hidden layers and 100 hidden units. We train our model with the Adam (Kingma & Ba, 2014) optimizer and fix the learning rate of all the models to 3e-4. We have used the same learning rate for all the models in order to factor out the possibility that a model converges faster due to using a larger learning rate.

Firstly, in Figure 5, we investigate the effect of using different annealing schedules. Exponential decay converges faster compared to linear decay and square-root decay of p. We find it very unstable to train our model with linear and square-root decay, in particular for large c values; thus we had to use a smaller c value (20 instead of 100) to be able to train the model without causing it to diverge.
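A small sketch of the per-layer schedule, assuming the reconstruction p_t^l = 1 - exp(-(k v_t l)/(t L)) above; note that lower layers (small l) get smaller p and therefore anneal faster, and a larger running loss v_t keeps the injected noise high. The loop starts at t = 1 to avoid division by zero.

```python
import numpy as np

def p_schedule(t, layer, n_layers, k=50.0, v_t=1.0):
    """Assumed reconstruction of the exponential-decay schedule p_t^l."""
    return 1.0 - np.exp(-k * v_t * layer / (t * n_layers))

for t in (1, 100, 500):
    # per-layer p for a 6-layer network; all values decay towards 0 as t grows
    print(t, [round(p_schedule(t, l, 6), 3) for l in range(1, 7)])
```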
Figure 5: We compare the training performance of the different annealing methods used with the mollification procedure to anneal the mollification parameter p. Decaying p exponentially achieves both better training and validation performance.

In Figure 6, we show the effect of using the noisy training procedure that we have introduced, sampling a mask from Bernoulli and Gaussian distributions, versus using the deterministic approximation of this noisy procedure (which we also use at test time) during training as well.

Figure 6: We show the learning curves of the model where we do not inject noise during training and instead use the deterministic approximation of the mollification during training as well. The difference in terms of speed of learning is very small.

In Figure 7, we compare the results obtained for the model using mollification with or without batch normalization and feed-forward residual networks. The mollified model performs very closely to the MLP trained with residual connections and batch norm. However, using residual connections and batch norm does not seem to improve the results.

We have tried to run experiments with the Monte-Carlo approximation of the mollification, which is derived in Appendix A; however, when we start with large noise and anneal the noise during training, the model is very unstable and training diverges. If we start with small noise and anneal the magnitude of the noise during training, we could not observe any effect on training.

Deep Pentomino  Pentomino is a toy-image dataset where each image has 3 Pentomino blocks. The task is to predict whether there is a different shape in the image or not (Gulcehre & Bengio, 2013). The best reported result on this task with MLPs is 68.15% accuracy (Gulcehre et al., 2014). The same model as ours trained without the noisy activation function and with vanilla residual connections scored 69.5% accuracy, while our mollified version scored 75.15% accuracy after 100 epochs of training on the 80k dataset.

CIFAR10  We experimented with deep convolutional neural networks of 110 layers with residual blocks and residual connections, comparing our model against ResNet and stochastic depth. We
adapted the hyperparameters of the stochastic depth network from Huang et al. (2016a) and we used the same hyperparameters for our algorithm. We report the training and validation curves of the three models in Figure 10 and the best test accuracy, obtained by early stopping on validation accuracy over 500 epochs, in Table 1. Our model achieves better generalization than ResNet. Stochastic depth achieves better generalization than ours, but it might be possible to combine both and obtain better results.

Figure 7: We investigate the effect of using batch norm and residual connections for mollification and compare against the network with residual connections and batch norm. The effect of batch norm on this task for mollification seems to be very small, and the training convergence performance of all the approaches is very close.

Table 1: CIFAR10 deep convolutional neural network results.

Deep Parity Experiments  Training neural networks on a high-dimensional parity problem can be challenging (Graves, 2016; Kalchbrenner et al., 2015). We experiment on the forty-dimensional (bits) parity problem with a 6-layer MLP using the sigmoid activation function. All the models are initialized with Glorot initialization (Glorot et al., 2011) and trained with SGD with momentum. We compare an MLP with residual connections using batch normalization and a mollified network with sigmoid activation function. As can be seen in Figure 8, the mollified network converges faster.

Figure 8: The learning curves of a 6-layer MLP with sigmoid activation function on the 40-bit parity task.

Predicting the Character Embeddings from Characters  Learning the mapping from sequences of characters to word embeddings is a difficult problem. Thus one needs to use a highly non-linear function. We trained a word2vec model on Wikipedia with embeddings of size 500 (Mikolov et al., 2014) with a vocabulary of size 374,557.

Figure 9: The training curve of a bidirectional RNN that predicts the embedding corresponding to a sequence of characters.

LSTM Language Modeling  We evaluate our model on LSTM language modeling. Our baseline model is a 3-layer stacked LSTM without any regularization. We observed that the mollified model converges faster and achieves better results. We provide the results for PTB language modeling in Table 2.

Table 2: 3-layered LSTM network on word-level language modeling for PTB.

10 CONCLUSION

We propose a novel method for training neural networks, inspired by an idea of continuation and smoothing techniques and recent advances in non-convex optimization algorithms. The method makes learning easier by starting from a simpler model, solving a well-behaved problem, and gradually transitioning to a more complicated setting. We show improvements on very deep models and difficult-to-optimize tasks, and compare with powerful techniques such as batch normalization and residual connections. We also show that the mollification procedure improves the generalization performance of the model on two tasks.

Our future work includes testing this method on large-scale language tasks that require long training times, e.g., machine translation and language modeling. Moreover, Kaiser & Sutskever (2015) observed that the training of the Neural GPU model can be improved significantly by using gradient noise, which can be related to smoothing of the loss surface; it would be interesting to try mollification on this model to see whether the training of the Neural GPU can be made easier.

ACKNOWLEDGEMENTS

We thank Nicholas Ballas and Misha Denil for the valuable discussions and their feedback. We would also like to thank the developers of Theano⁴ for developing such a powerful tool for scientific computing (Theano Development Team, 2016).
We acknowledge the support of the following organizations for research funding and computing support: NSERC, Samsung, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR.

REFERENCES

⁴http://deeplearning.net/software/theano/

Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 1171-1179, 2015.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41-48. ACM, 2009.

Léon Bottou. Online algorithms and stochastic approximations. In David Saad (ed.), Online Learning in Neural Networks. Cambridge University Press, Cambridge, UK, 1998.

Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surface of multilayer networks, 2014.

Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS'2014, 2014.

Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pp. 2348-2356, 2011.

Elad Hazan, Kfir Y. Levy, and Shai Shalev-Shwartz. On graduated optimization for stochastic non-convex problems. arXiv preprint arXiv:1503.03712, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition. Signal Processing Magazine, 2012.

Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1-42, 1997b.

Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016a.

Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. arXiv preprint arXiv:1507.01526, 2015.

Scott Kirkpatrick, C. Daniel Gelatt, and Mario P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671-680, 1983.

Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, December 1989.

Yann LeCun and Corinna Cortes.
The MNIST database of handwritten digits, 1998.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. word2vec, 2014.

Andrew Y. Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278-287, 1999.

Rupesh K. Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In Advances in Neural Information Processing Systems, pp. 2368-2376, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. Technical report, Google, 2014.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, and Daan Wierstra. Playing Atari with deep reinforcement learning. Technical report, arXiv:1312.5602, 2013.

Hossein Mobahi. Training recurrent neural networks by diffusion. arXiv preprint arXiv:1601.04114, 2016.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

MONTE-CARLO ESTIMATE OF MOLLIFICATION

The mollified loss L_K(θ) = E_ξ[L(θ - ξ)] can be estimated by a Monte-Carlo average,

    L_K(θ) ≈ (1/N) Σ_{i=1}^{N} L(θ - ξ^{(i)}),

where ξ^{(i)} is a realization of the noise random variable, yielding

    ∂L_K(θ)/∂θ ≈ (1/N) Σ_{i=1}^{N} ∂L(θ - ξ^{(i)})/∂θ.

Therefore introducing additive noise to the input of L(θ) is equivalent to mollification. A small numerical sketch of this estimator is given below, after the experimental details.

DERIVATION OF THE NOISY ACTIVATIONS FOR THE GATING

Assume that z_t = x_t + p σ(x_t)|ξ| and that we require E_ξ[ψ(x_t, ξ)] = 1. Thus for all z_t:

    E_ξ[ψ(x_t, ξ_t)] = E_ξ[f(z_t)]
    1 = E_ξ[f(z_t)], and assuming f(·) behaves similarly to a linear function,
    E_ξ[f(z_t)] ≈ f(E_ξ[z_t]) (since we use the hard-sigmoid for f(·), this will hold), so
    f^{-1}(1) ≈ E_ξ[z_t]
    f^{-1}(1) ≈ x_t + p σ(x_t) E_ξ[|ξ|].

As a corollary, the value that σ(x_t) should take in expectation for p = 1 would be:

    σ(x_t) = (f^{-1}(1) - x_t) / E_ξ[|ξ|].

In our experiments for f(·) we used the hard-sigmoid activation function. We used this piecewise activation function in order to use its inverse, f^{-1}(x) = 4(x - 0.5). During inference we use the expected value of the random variables π and ξ.

C EXPERIMENTAL DETAILS

The weights of the models are initialized with Glorot & Bengio initialization (Glorot et al., 2011). We use a learning rate of 4e-4 along with RMSProp. We initialize the a_i parameters of the mollified activation function by sampling from a uniform distribution, U[-2, 2]. We used 100 hidden units at each layer with minibatches of size 500.

We train a 6-layer MLP with sigmoid activation function using SGD and momentum. We used 200 units per layer with sigmoid activation functions. We use a learning rate of 1e-3.
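Below is a minimal numerical sketch of the Monte-Carlo estimator from Appendix A, using a toy quadratic loss as a stand-in for a network's objective; since the noise kernel is zero-mean, the averaged gradient stays close to the exact gradient for this loss.

```python
import numpy as np

def grad_L(theta):
    # gradient of a toy loss L(theta) = ||theta||^2 / 2, used only for illustration
    return theta

def mollified_grad(theta, noise_std=0.5, n_samples=100, rng=None):
    """dL_K/dtheta ~ (1/N) sum_i dL(theta - xi^(i))/dtheta."""
    rng = rng or np.random.default_rng(0)
    xi = noise_std * rng.standard_normal((n_samples,) + theta.shape)
    return np.mean([grad_L(theta - x) for x in xi], axis=0)

theta = np.array([1.0, -2.0])
print(mollified_grad(theta))   # close to grad_L(theta) because the noise is zero-mean
```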
Figure 10: Training and validation losses over 500 epochs of a mollified convolutional network composed of 110 layers. We compare against ResNet and stochastic depth.

C.4 PARITY

The n-dimensional parity task is the task of figuring out whether the sum of the n bits in a binary vector is even or odd. We use SGD with Nesterov momentum and initialize the weight matrices using Glorot & Bengio initialization (Glorot et al., 2011). For all models, we use a learning rate of 1e-3 and momentum of 0.92. The a_i parameters of the mollified activation function are initialized by sampling from a uniform distribution, U[-2, 2].

C.5 LSTM LANGUAGE MODELING

We trained 2-layer LSTM language models on PTB word-level data. We used models with the same hyperparameters as in Zaremba & Sutskever (2014). We used the same hyperparameters for both the mollified LSTM language model and the LSTM. We use the hard-sigmoid activation function for the gates of both the LSTM and the mollified LSTM language model.

We use 10k of these words as a validation set and another 10k word embeddings as a test set. We train a bidirectional LSTM on top of each sequence of characters for each word, and on top of the representation of the bidirectional LSTM we use a 5-layer tanh-MLP to predict the word embedding. We train our models using RMSProp and momentum with a learning rate of 6e-4 and momentum 0.92. The size of the minibatches we used is 64. As seen in Figure 9, the mollified LSTM network converges faster.

We use the same model with the same hyperparameters for ResNet, the mollified network and stochastic depth. We borrowed the hyperparameters of the model from Huang et al. (2016a). Our mollified convnet model has residual connections coming from the layer below.
HyNxRZ9xg

CAT2VEC: LEARNING DISTRIBUTED REPRESENTATION OF MULTI-FIELD CATEGORICAL DATA

Ying Wen, Jun Wang
{ying.wen, jun.wang}@cs.ucl.ac.uk

This paper presents a method of learning distributed representation for multi-field categorical data, which is a common data format with various applications such as recommender systems, social link prediction, and computational advertising. The success of non-linear models, e.g., factorisation machines and boosted trees, has proved the potential of exploring the interactions among inter-field categories. Inspired by Word2Vec, the distributed representation for natural language, we propose the Cat2Vec (categories to vectors) model. In Cat2Vec, a low-dimensional continuous vector is automatically learned for each category in each field. The interactions among inter-field categories are further explored by different neural gates and the most informative ones are selected by pooling layers. In our experiments, with the exploration of the interactions between pairwise categories over layers, the model attains great improvement over state-of-the-art models in a supervised learning task, e.g., click prediction, while capturing the most significant interactions from the data.

1 INTRODUCTION

There are different abstraction levels within data. For low-abstraction continuous sensory data (such as images, videos, and audio) directly acquired from the physical world, quite often the strong correlations (local patterns) are known a priori within the data. As such, one can directly embed the prior knowledge into a learning model such as a neural network to automatically distil such patterns and perform predictions (Krizhevsky et al., 2012; Graves et al., 2013). However, for high-abstraction data from our social and business activities, such as natural language and transactional log data, the data is commonly discrete and contains atomic symbols, whose meaning and correlation are unknown a priori. A typical solution is to employ embedding techniques (Bengio et al., 2003; Mikolov et al., 2013) to map the discrete tokens into a (low-dimensional) continuous space and further build neural networks to learn the latent patterns.

Multi-field categorical data is a type of high-abstraction data where the categories in each field are heterogeneous with those in other fields. Such a type of data is very widely used in data mining tasks based on transaction logs from many social or commercial applications, such as recommender systems, social link prediction, and computational advertising. Table 1 gives an example of multi-field categorical data in user behaviour targeting, where we observe user browsing patterns and, given those multi-field categorical features, a common task is to predict user actions such as clicks and conversions (Zhang et al., 2014; Liao et al., 2014; Yuan et al., 2013).

As there is no explicit dependency among these inter-field categories, two solutions are mainly used for building machine learning models that extract the local patterns of the data and make good predictions. The first solution is to create combining features across fields, such as CITY:SHANGHAI&WEEKDAY:FRIDAY (Chapelle et al., 2015). Such feature engineering is expensive in human effort and feature/parameter space.
The second solution is to build functions (Rendle, 2012) or neural networks based on the feature embeddings (Zhang et al., 2016). These solutions are of low efficiency because of the brute-force feature engineering or aimless embedding interactions.

Tianyao Chen, Weinan Zhang
{tychen, wnzhang}@apex.sjtu.edu.cn

Table 1: A simple example of multi-field categorical data from the iPinYou dataset (Liao et al., 2014).

    TARGET              GENDER   WEEKDAY   CITY       BROWSER
    1                   MALE     TUESDAY   BEIJING    CHROME
    0                   FEMALE   MONDAY    SHANGHAI   IE
    1                   FEMALE   TUESDAY   HONGKONG   IE
    0                   MALE     TUESDAY   BEIJING    CHROME
    NUMBER OF CATEGORY  2        7         351        6

In this paper, we propose an unsupervised pairwise interaction model to learn the distributed representation of multi-field categorical data. The interactions among inter-field categories are explored by different neural gates and the informative ones are selected by K-max pooling layers. Note that the K-max pooling process acts like the classic Apriori algorithm in frequent itemset mining and association rule learning (Agrawal et al., 1994). Repeating this pairwise interaction with K-max pooling, our Cat2Vec model automatically extracts salient feature interactions and further explores higher-order interactions.

To train the pairwise interaction Cat2Vec model effectively, we present a discriminant training method to estimate the category vectors. Furthermore, with the exploration of the pairwise and higher-order category interactions, our Cat2Vec model attains great performance improvement over state-of-the-art models in supervised learning tasks, such as user response rate prediction, while successfully capturing the most significant interactions in unsupervised learning tasks.

2 PRELIMINARIES

In this section, we outline the major data representation methods that are used for representing discrete categorical data. These methods serve as the preliminaries of our Cat2Vec model.

It is common to use one-hot representation for discrete data in natural language processing or computational advertising tasks. Taking the first data sample in Table 1 as an example, the data is vectorised by one-hot encoding as

    [0, 1]        [0, 1, 0, 0, 0, 0, 0]   [0, ..., 0, 1, 0, ..., 0] (length 351)   [1, 0, 0, 0, 0, 0]
    GENDER:MALE   WEEKDAY:TUESDAY         CITY:BEIJING                             BROWSER:CHROME

With each category as a dimension, one-hot representation preserves the full information of the original data. Two main problems of one-hot representation are that (i) it may suffer from the curse of dimensionality, especially in deep learning-related applications; and (ii) it cannot capture the similarity of each word/category pair, and we cannot even find any relationships among the synonyms or categories in the same field. A minimal sketch of this encoding is given below.
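A minimal sketch of the multi-field one-hot encoding, assuming per-field vocabularies built lazily from the data; the field sizes follow Table 1 (2, 7, 351, 6).

```python
import numpy as np

FIELD_SIZES = {"GENDER": 2, "WEEKDAY": 7, "CITY": 351, "BROWSER": 6}
FIELD_VOCAB = {f: {} for f in FIELD_SIZES}   # category -> index, filled lazily

def one_hot(sample):
    """Concatenate one indicator vector per field into a single sparse vector."""
    parts = []
    for field, category in sample.items():
        vocab = FIELD_VOCAB[field]
        idx = vocab.setdefault(category, len(vocab))
        vec = np.zeros(FIELD_SIZES[field])
        vec[idx] = 1.0
        parts.append(vec)
    return np.concatenate(parts)

x = {"GENDER": "MALE", "WEEKDAY": "TUESDAY", "CITY": "BEIJING", "BROWSER": "CHROME"}
print(one_hot(x).shape)   # (366,) = 2 + 7 + 351 + 6
```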
2.2 DISTRIBUTED REPRESENTATION

Distributed representation was first proposed by Hinton (1986). The basic idea of distributed representation is to train the model to map each word into a d-dimensional vector (generally, d is a hyperparameter of the model, and d is far smaller than the whole vocabulary size N of words/categories), and the semantic similarity between words/categories can be measured through the distance (such as cosine similarity or Euclidean distance) of their corresponding low-dimensional vectors. Word2Vec (Mikolov et al., 2013) is one of the most common methods to train such distributed word vector representations. Compared with text, which has local patterns among neighbouring words, multi-field categorical data has no explicit order relationships among inter-field categories. Also, the text vocabulary size (around 10^5) is often much smaller than the category size (10^6 ~ 10^8), making our problem more difficult. Another difference between our Cat2Vec and Word2Vec is that Cat2Vec does not take order into account or use any sliding window for context; in other words, we take all categories in the same training sample as the neighbours of a category.

3 PAIRWISE INTERACTION CAT2VEC MODEL

Figure 1: The proposed sample encoding module (the pairwise interaction sample encoding module, with embedding, gate, interaction, K-max pooling and fully connected layers). At first, each category pair is fed into a gate to get the interaction between the two categories. Next, K-max pooling is used to capture the important interactions. Repeating the above two steps captures higher-level category interactions. Finally, we use a fully connected layer to transform the final interaction vectors into the prediction.

In this section, we introduce the pairwise interaction Cat2Vec model and its training method in detail. We design neural gates in the model to capture the interactions between each pair of categories, followed by K-max pooling layers to select the most important interactions. We then repeat these processes to explore higher-level interactions. Figure 1 illustrates the overview of the proposed architecture.

3.1 INTERACTION AND POOLING LAYERS

Interaction Layer  To evaluate the interaction between each pair of categories, we use a gate to obtain the interaction result. Mathematically, a gate is a function f : R^d × R^d → R^d that takes any pair of category vectors c_i and c_j in the same sample c as input, and outputs the interaction result vector c'_{i,j} = f(c_i, c_j). The interaction output vector c'_{i,j} acts as a certain combining feature of c_i and c_j. Note that c'_{i,j} keeps the same dimension as the category embedding vectors c_i and c_j, so that it can be further used to interact with other categories.

We provide several options for the gate f:

    f_sum(c_i, c_j) = c_i + c_j,        f_mul(c_i, c_j) = c_i ⊙ c_j,

where ⊙ is the element-wise multiplication operator. We can also employ more complex gates, such as the highway gate (Srivastava et al., 2015), which is formulated as

    f_highway(c_i, c_j) = T ⊙ g(W_H(c_i + c_j) + b_H) + (1 - T) ⊙ (c_i + c_j).

Applying the gate to every pair of category vectors in a sample yields the interaction layer output

    c' = [c'_{1,2}, c'_{1,3}, ..., c'_{n-1,n}].

After the interaction, an activation function is applied to implement the non-linear transformation. A sketch of these gates is given below.
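A sketch of the three gates above; W_H, b_H and the transform gate T in the highway gate are hypothetical stand-ins for the learned parameters, and tanh is used as the non-linearity g(·).

```python
import numpy as np

def f_sum(ci, cj):
    return ci + cj

def f_mul(ci, cj):
    return ci * cj                             # element-wise product

def f_highway(ci, cj, W_H, b_H, T):
    g = np.tanh(W_H @ (ci + cj) + b_H)         # non-linear transform g(.)
    return T * g + (1.0 - T) * (ci + cj)       # gated mix with the carry path

d = 4
rng = np.random.default_rng(0)
ci, cj = rng.standard_normal(d), rng.standard_normal(d)
print(f_highway(ci, cj, rng.standard_normal((d, d)), np.zeros(d), T=0.5))
```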
(5) that has top-K normal length.\nBefore producing an output for the interaction results, the interaction and K -max pooling operation will be repeated for several times in order to capture high-level interactions among the different field category vectors. After that, we output a prediction from the final interaction vector representation. by a fully connected layer. Note that the above network structure can be used to build an auto. encoder to conduct unsupervised learning (Vincent et al.]2008). We leave this for future work. while staying with the label output network for both supervised (containing both negative and pos itive examples) and unsupervised (only containing positive examples where negative examples are. generated randomly) learning tasks.\nAn interesting discussion is to compare our Cat2Vec model with association rules mining, which. aims to identify the most frequently appeared joint category instances (items), with or without a condition. Apriori (Agrawal et al.1994) is a popular algorithm for association rules mining by. exploiting dependencies between candidate frequent itemsets of length K and frequent itemsets of. length K - 1. In our pairwise interaction Cat2Vec model, with neural networks, we provide an alternative way of generating such high-order interactions (thus itemsets) among category instances.. Via the pooling operation, our model can also find the most frequent category set automatically,. which will be demonstrated and tested from our experiments in the following Sections4|and|5.\nTo train the pairwise interaction Cat2Vec model, we design a training scheme called discriminan Cat2Vec, which would train the model in a supervised way for unsupervised learning of the data\nIn the discriminant Cat2Vec, we feed the Sample Encoding Module showed in Figure|1|with a tru or fake sample, the encoded sample vector will be followed by an MLP to predict the probability p o a true sample. As such, the generation of a fake sample would influence the learned category vector In this paper, we generate a fake sample following this way: first, randomly choose a sample fron the training set; second, randomly choose several categories in this sample and replace them wit randomly chosen categories that belong to the same field. For example, we get a user behaviour in stance x = [WEEKDAY:WEDNESDAY, IP:1.1.*.*, GENDER:MALE, CITY:BEIJING], and we ran domly choose the category CiTy:BeIJING and replace it with CiTy:SHANGHAI, then we buil a fake sample x' = [WEEKDAY:WEDNESDAY, IP:1.1.*.*, GENDER:MALE, CITY:SHANGHAI] The discriminant network is then trained to predict whether the new sample should be a true sam ple. The loss function of discriminant network is average cross entropy, which would maximise th likelihood of correct prediction:\nM 1 L = -yi log(pi) - (1- yi) log(1-pi) M i=1\nwhere M is the number of training samples. 
3.2 DISCRIMINANT CAT2VEC TRAINING

To train the pairwise interaction Cat2Vec model, we design a training scheme called discriminant Cat2Vec, which trains the model in a supervised way for unsupervised learning of the data.

Figure 2: The discriminant Cat2Vec model, which learns the category embedding by training a discriminator to distinguish the true samples from the fake ones.

In the discriminant Cat2Vec, we feed the sample encoding module shown in Figure 1 with a true or fake sample; the encoded sample vector is then followed by an MLP to predict the probability p of a true sample. As such, the generation of a fake sample influences the learned category vectors. In this paper, we generate a fake sample in the following way: first, randomly choose a sample from the training set; second, randomly choose several categories in this sample and replace them with randomly chosen categories that belong to the same field. For example, given a user behaviour instance x = [WEEKDAY:WEDNESDAY, IP:1.1.*.*, GENDER:MALE, CITY:BEIJING], we randomly choose the category CITY:BEIJING and replace it with CITY:SHANGHAI, building a fake sample x' = [WEEKDAY:WEDNESDAY, IP:1.1.*.*, GENDER:MALE, CITY:SHANGHAI]. The discriminant network is then trained to predict whether the new sample is a true sample. The loss function of the discriminant network is the average cross entropy, which maximises the likelihood of correct prediction:

    L = -(1/M) Σ_{i=1}^{M} [ y_i log(p_i) + (1 - y_i) log(1 - p_i) ],

where M is the number of training samples. The i-th sample is labelled with y_i ∈ {1, 0}, meaning a true or a fake sample, and p_i is the predicted probability that the given training sample is true. A sketch of the fake-sample construction is given below.

4 SYNTHETIC DATA EXPERIMENTS

To explore and add to our understanding of the pairwise interaction Cat2Vec model, we conduct a simulation test with synthetic data. In particular, we are interested in understanding how the learned vectors are able to capture and leverage the most significant patterns embedded in the data.

4.1 SYNTHETIC DATASET AND EVALUATION METRICS

To simulate real-world multi-field categorical data, we use multivariate normal sampling to generate the true data distribution for the following experiments. Suppose the data has 4 fields {A, B, C, D} and each field contains 10 categories, so a sample can be represented as x = (a_i, b_i, c_i, d_i). We then randomly generate the means and covariance matrix for 4-dimensional truncated multivariate normal sampling with two-sided truncation. This sampling method generates 4 float numbers between 0 and 10, which we convert to integers representing the categories of the 4 fields. In this way, we can generate data with a specific joint distribution, meaning that certain categorical pairs or 3-tuples, like p(a_4, b_4) or p(a_3, c_5, d_6), may have a higher joint probability. Recall that our pairwise interaction Cat2Vec model has a K-max pooling layer, which selects the most popular category pairs in the dataset. Repeating the pairwise interaction layers and K-max pooling layers, we can also explore higher-order categorical 3-tuples, 4-tuples, etc. Therefore, our task here is to evaluate whether our model is able to capture these frequently occurring patterns from a given dataset; in other words, to test whether our model keeps the category pairs with the highest joint probabilities in the K-max pooling results. This process is in line with association rule mining (Agrawal et al., 1994), exploring frequent categorical n-tuples from frequent categorical (n - 1)-tuples.

We generate the positive data according to the above truncated multivariate normal sampling and use uniform sampling to generate the fake (negative) data. We then apply discriminant Cat2Vec to train the model. Because we know the true distribution of the generated real data, the most frequent category pairs/triples are known. We use precision and Spearman's rank correlation coefficient to evaluate the results of the 1st/2nd K-max pooling layers (the category pair/triple pooling results), to see whether the model can learn the true joint distribution of the real data. The details of the evaluation metrics are described in the following section.
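A sketch of the fake-sample construction of Section 3.2, with illustrative placeholder category pools; a fake sample corrupts a true one by re-drawing a few fields from the same field's category set.

```python
import random

# hypothetical per-field category pools, standing in for the dataset vocabularies
CATEGORY_POOL = {
    "WEEKDAY": ["MONDAY", "TUESDAY", "WEDNESDAY", "FRIDAY"],
    "GENDER": ["MALE", "FEMALE"],
    "CITY": ["BEIJING", "SHANGHAI", "HONGKONG"],
}

def make_fake(sample, n_corrupt=1, rng=random):
    """Replace n_corrupt randomly chosen fields with another category of the same field."""
    fake = dict(sample)
    for field in rng.sample(list(fake), n_corrupt):
        choices = [c for c in CATEGORY_POOL[field] if c != fake[field]]
        fake[field] = rng.choice(choices)
    return fake

x = {"WEEKDAY": "WEDNESDAY", "GENDER": "MALE", "CITY": "BEIJING"}
print(make_fake(x))   # e.g. CITY:BEIJING replaced by CITY:SHANGHAI
```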
To evaluate how our network structure and K-max pooling help identify the significant n-tuples, we feed 1,000 samples to the trained model and record the 1st and 2nd K-max pooling layers' results. We then count the frequency of the category pairs/3-tuples in the real samples and select the top-20 ranked category pairs/3-tuples as the target. Next, we count the frequency of the max-pooled category pairs/triples in the results and compare the top-20 frequent category pairs/3-tuples in the results against the target to calculate precision and Spearman's rank correlation coefficient. Precision measures the fraction of category pairs/triples in the results that are also in the target. Spearman's rank correlation coefficient measures the correlation between the two ranked lists.

4.2 RESULT AND DISCUSSION

Figure 3: Precision and rank correlation on synthetic data for embedding sizes 2, 4 and 8 and varying dropout rates; a bigger embedding size and an appropriate dropout rate lead to better performance.

Figure 3 summarises the results for precision and rank correlation on the synthetic data. We can see that our model easily finds over 80% of the category pairs with high joint probabilities under the best parameter settings. In terms of rank correlation, our model achieves a correlation over 0.6 for category pairs, which means that category pairs with higher joint probability are more likely to appear in the K-max pooling results. For the category-triples case, the precision and rank correlation are lower than for category pairs, because finding 3-order combinations is harder and relies on the accuracy of the 2-order results. We also vary the dropout rate against these measures. It shows that dropout tends to help improve the accuracy of the captured patterns. This can be explained by the fact that dropout brings randomness into the selection and allows exploration. But the best dropout rate seems rather arbitrary and highly dependent on the other parameter settings.

5 REAL-WORLD DATA EXPERIMENTS

In this section, we continue our experiments using a real-world advertising dataset for click-through rate estimation. The iPinYou dataset (Liao et al., 2014) is a public real-world display ad dataset with each ad display event and the corresponding user click feedback (Zhang et al., 2014). This dataset contains around 19.5M ad display instances with 14.8k positive user feedback (click) records. Each instance has 23 fields, and we choose the 18 fields that have categories with occurrence larger than 10.³

5.1 UNSUPERVISED LEARNING EXPERIMENT

We continue our study of the model's ability to capture the most significant patterns, as described in Section 3.2. Because the iPinYou dataset contains unencrypted fields and categories, e.g. city, region and tag, we choose the iPinYou dataset introduced above as the real (positive) data. For the fake (negative) data, we randomly choose a sample in the iPinYou dataset and randomly replace some of its categories with other categories in the same field, similar to what was introduced in Section 3.2. We also set up two baseline models to compare model accuracy: (i) a DNN Concat model, which concatenates the category embedding vectors to make the prediction, and (ii) a DNN Sum model, which sums up the category embedding vectors to make the prediction.

We have tried different parameter settings, and the performance is measured by the accuracy of our model in predicting real samples. We also calculate the rank correlation coefficient and the precision to evaluate our model, in the same way as described in Section 4.1.
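A sketch of the precision and rank-correlation protocol of Section 4.1, assuming the pooled and true pairs are given as flat lists of (category, category) tuples; scipy's spearmanr is used for the rank correlation.

```python
from collections import Counter
from scipy.stats import spearmanr

def top_k(pairs, k=20):
    """Top-k most frequent items, in rank order."""
    return [item for item, _ in Counter(pairs).most_common(k)]

def evaluate(pooled_pairs, true_pairs, k=20):
    pooled_top, true_top = top_k(pooled_pairs, k), top_k(true_pairs, k)
    precision = len(set(pooled_top) & set(true_top)) / k
    common = [p for p in true_top if p in pooled_top]
    if len(common) < 2:
        return precision, 0.0
    true_ranks = [true_top.index(p) for p in common]
    pooled_ranks = [pooled_top.index(p) for p in common]
    rho, _ = spearmanr(true_ranks, pooled_ranks)
    return precision, rho
```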
5.1.1 RESULT AND DISCUSSION

From Table 2, we see that on the iPinYou dataset our pairwise interaction models achieve an accuracy of 85%, which is about a 1.7% improvement over the simple DNN models. Even the worst case of our model is better than the DNN models' best case. This means our model finds extra information through the interactions and the K-max pooling processes. In addition, the model with 3 interaction steps usually yields better performance than the model with 2, which may be due to the fact that more interaction steps capture higher-order interactions and help make more accurate predictions. The gate type, however, does not lead to significant differences.

Table 2: Accuracy of distinguishing true impressions from fake impressions; "embedding" means the embedding vector size and "interaction" is the number of interaction steps in our model.

    PARAMETERS                         GATE TYPE                   DNN      DNN
                                       SUM     MUL     HIGHWAY    CONCAT   SUM
    embedding = 8,  interaction = 2    0.836   0.827   0.830      0.807    0.806
    embedding = 8,  interaction = 3    0.831   0.834   0.830
    embedding = 16, interaction = 2    0.838   0.836   0.837      0.828    0.817
    embedding = 16, interaction = 3    0.843   0.845   0.838
    embedding = 32, interaction = 2    0.844   0.842   0.843      0.831    0.833
    embedding = 32, interaction = 3    0.848   0.850   0.843

We next use the same evaluation metrics as described in Section 4.1 to test the ability to capture data patterns. We find that on the real-world dataset our model still keeps high precision and rank correlation, and can achieve even better performance. The precision and rank correlation for category pairs are over 0.8, which is about a 30% improvement compared to the performance on the synthetic dataset. For the category-triples case, we also have similar performance compared with the synthetic dataset.

Figure 4: Precision and rank correlation on iPinYou data for embedding sizes 2, 4 and 8 and varying dropout rates; a bigger embedding size and an appropriate dropout rate lead to better performance.

³The selected fields are WEEKDAY, HOUR, USER AGENT, IP, REGION, CITY, AD EXCHANGE, DOMAIN, URL, AD SLOT ID, AD SLOT WIDTH, AD SLOT HEIGHT, AD SLOT VISIBILITY, AD SLOT FORMAT, AD SLOT FLOOR PRICE, CREATIVE ID, KEY PAGE URL, and USER TAGS.

5.2 CLICK-THROUGH RATE PREDICTION EXPERIMENT

We now move to evaluation on a supervised learning task. We consider click-through rate (CTR) prediction, which is important for many personalised web services such as e-commerce, social recommendation and computational advertising (Yuan et al., 2013). The most widely used CTR estimation model is logistic regression based on the one-hot data representation. Many deep learning models have been further investigated for CTR prediction. Zhang et al. (2016) proposed Factorisation Machine supported Neural Network (FNN) models for user response prediction. The Convolutional Click Prediction Model (CCPM) (Liu et al., 2015) has been used for CTR prediction and gained some improvement on this task. To our knowledge, all of the above previous work focuses on directly improving the prediction performance in supervised learning tasks, and none of it investigates the learned representation of multi-field categorical data or how to learn better representations.

In order to investigate our pairwise interaction model on the CTR task, we use the pairwise interaction sample encoding module to encode a training sample, concatenated with the embedding vectors, which is followed by an MLP (multi-layer perceptron) to predict the click-through probability.
We choose the following models as strong baselines:

- Logistic Regression (LR): LR is a widely used linear model (Richardson et al., 2007).
- Factorisation Machine (FM): we simply apply the factorisation machine on the one-hot encoded sparse features of the training sample (Rendle, 2010).
- CCPM: CCPM (Liu et al., 2015) is a convolutional model for click prediction.
- FNN: a DNN model based on concatenated category vectors followed by MLPs, able to capture high-order latent patterns of multi-field categorical data (Zhang et al., 2016).
- Cat2Vec-FNN-1: our proposed architecture that only concatenates the pairwise interaction output vectors among the K-max pooling results to form the final vector representation and make the prediction.
- Cat2Vec-FNN-2: our proposed architecture that explores the pairwise interaction results between the K-max pooling results and the category embeddings to form the final vector representation and make the prediction.

We use the Area Under the ROC Curve (AUC) as the evaluation metric to measure the performance of a prediction. We also conduct a grid search for each model to make sure that each model achieves its best performance. Specifically, the empirically optimal hyperparameters are set as follows: the category embedding size is 16, the SGD batch size is 64, Nadam (Sutskever et al., 2013) is used as the SGD optimiser with default settings, the gate type is MUL, the norm type for K-max pooling is the L2 norm, and the activation function is tanh. The model is then followed by three fully connected layers of width [128, 32, 1]. We also try different numbers of interaction steps and finally set it to two (3-tuples), suggesting that a higher order of interactions helps improve the performance, but more than two would overfit the data and thus harm the performance.

Figure 5: Performance comparison over different parameter settings (AUC against dropout rate, number of interaction steps, and activation function).

5.2.1 RESULT AND DISCUSSION

Table 3 gives the results of our CTR experiment, compared with the various baselines. We see that there is about a 3% improvement over LR. The AUC performance of the proposed discriminant Cat2Vec models also outperforms the FM/CCPM/FNN models, as our model is able to take higher-order information into consideration, which helps make better decisions.

In our pairwise interaction model, we also test different hyperparameters and settings; the results are given in Figure 5. First, we evaluate the performance over different dropout rates, and find that setting dropout to 0.1 is best, as shown in Figure 5. We also explore the impact of the number of interactions. From the results, the model with 2 interaction steps has better generalisation on the test set. Finally, we compare three different activation functions (sigmoid, tanh, relu) and set identity mapping as the baseline. The results show that tanh yields the best performance, which has the advantage of non-linear transformation between (-1, 1), and it may help gain more benefits on multi-field categorical data.
6 CONCLUSION

In this paper we have proposed a novel Cat2Vec model working on multi-field categorical data. Different from other models, Cat2Vec repetitively computes and selects inter-field category pairwise interactions to explore high-level interactions, which is analogous to the Apriori algorithm in association rule mining. Moreover, we present an efficient discriminant training method to estimate the category vectors. We also apply our pairwise interaction model to CTR prediction, where we observe a significant performance gain over several strong baselines. For future work, we plan to design more sophisticated gates to explore different interaction patterns among inter-field categories; leveraging Cat2Vec in various data mining problems is also of great interest to us.

REFERENCES

Rakesh Agrawal, Ramakrishnan Srikant, et al. Fast algorithms for mining association rules. In Proc. 20th Int. Conf. Very Large Data Bases, VLDB, volume 1215, pp. 487-499, 1994.

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137-1155, 2003.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, pp. 6645-6649. IEEE, 2013.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

Ilya Sutskever, James Martens, George E. Dahl, and Geoffrey E. Hinton. On the importance of initialization and momentum in deep learning. ICML (3), 28:1139-1147, 2013.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096-1103. ACM, 2008.

Weinan Zhang, Tianming Du, and Jun Wang. Deep learning over multi-field categorical data. In European Conference on Information Retrieval, pp. 45-57. Springer, 2016.
HJ5PIaseg

TOWARDS AN AUTOMATIC TURING TEST: LEARNING TO EVALUATE DIALOGUE RESPONSES

Ryan Lowe
Nicolas Angelard-Gontier
Reasoning and Learning Lab, School of Computer Science, McGill University
Montreal Institute for Learning Algorithms, Université de Montréal
CIFAR Senior Fellow

Automatically evaluating the quality of dialogue responses for unstructured domains is a challenging problem. Unfortunately, existing automatic evaluation metrics are biased and correlate very poorly with human judgements of response quality (Liu et al., 2016). Yet having an accurate automatic evaluation procedure is crucial for dialogue research, as it allows rapid prototyping and testing of new models with fewer expensive human evaluations. In response to this challenge, we formulate automatic dialogue evaluation as a learning problem. We present an evaluation model (ADEM) that learns to predict human-like scores for input responses, using a new dataset of human response scores. We show that the ADEM model's predictions correlate significantly, and at a level much higher than word-overlap metrics such as BLEU, with human judgements at both the utterance and system level. We also show that ADEM can generalize to evaluating dialogue models unseen during training, an important step for automatic dialogue evaluation.

1 INTRODUCTION

Learning to communicate with humans is a crucial ability for intelligent agents. Among the primary forms of communication between humans is natural language dialogue. As such, building systems that can naturally and meaningfully converse with humans has been a central goal of artificial intelligence since the formulation of the Turing test (Turing, 1950). Research on one type of such systems, sometimes referred to as non-task-oriented dialogue systems, goes back to the mid-60s with Weizenbaum's famous program ELIZA: a rule-based system mimicking a Rogerian psychotherapist by persistently either rephrasing statements or asking questions (Weizenbaum, 1966). Recently, there has been a surge of interest in the research community towards building large-scale non-task-oriented dialogue systems using neural networks (Sordoni et al., 2015b; Shang et al., 2015; Vinyals & Le, 2015; Serban et al., 2016a; Li et al., 2015). These models are trained in an end-to-end manner to optimize a single objective, usually the likelihood of generating the responses from a fixed corpus. Such models have already had a substantial impact in industry, including Google's Smart Reply system (Kannan et al., 2016), and Microsoft's Xiaoice chatbot (Markoff & Mozur, 2015), which has over 20 million users. More recently, Amazon has announced the Alexa Prize Challenge: a research competition with the goal of developing a natural and engaging chatbot system (Farber, 2016).

One of the challenges when developing such systems is to have a good way of measuring progress, in this case the performance of the chatbot. The Turing test provides one solution to the evaluation of dialogue systems, but there are limitations with its original formulation. The test requires live human interactions, which is expensive and difficult to scale up. Furthermore, the test requires carefully designing the instructions to the human interlocutors, in order to balance their behaviour and expectations so that different systems may be ranked accurately by performance.
Although unavoidable, these instructions introduce bias into the evaluation measure. The more common approach of having humans evaluate the quality of dialogue system responses, rather than distinguish them from human responses, induces similar drawbacks in terms of time, expense, and lack of scalability. In the case of chatbots designed for specific conversation domains, it may also be difficult to find sufficient human evaluators with appropriate background in the topic (e.g. Lowe et al. (2015)).

Michael Noseworthy
*The second and third authors contributed equally.

Despite advances in neural network-based models, evaluating the quality of dialogue responses automatically remains a challenging and under-studied problem in the non-task-oriented setting. The most widely used metric for evaluating such dialogue systems is BLEU (Papineni et al., 2002), a metric measuring word overlaps originally developed for machine translation. However, it has been shown that BLEU and other word-overlap metrics are biased and correlate poorly with human judgements of response quality (Liu et al., 2016). There are many obvious cases where these metrics fail, as they are often incapable of considering the semantic similarity between responses (see Figure 1). Despite this, many researchers still use BLEU to evaluate their dialogue models (Ritter et al., 2011; Sordoni et al., 2015b; Li et al., 2015; Galley et al., 2015; Li et al., 2016), as there are few alternatives that correlate with human judgements. While human evaluation remains the most reliable option, an accurate automatic evaluation procedure would allow rapid prototyping and testing of new models with far fewer expensive human evaluations.

Figure 1: Example where word-overlap scores (e.g. BLEU) fail for dialogue evaluation; although the model response is completely reasonable, it has no words in common with the reference response, and thus would be given low scores by metrics such as BLEU.

To make progress towards this goal, we first collect a dataset of human scores for various dialogue responses, and we use this dataset to train an automatic dialogue evaluation model, which we call ADEM. The model is trained in a semi-supervised manner using a hierarchical recurrent neural network (RNN) to predict human scores. We show that ADEM scores correlate significantly, and at a level much higher than BLEU, with human judgement at both the utterance level and the system level. Crucially, we also show that ADEM can generalize to evaluating new models, whose responses were unseen during training, without a drop in performance, making ADEM a strong first step towards effective automatic dialogue response evaluation.

2 A DATASET FOR DIALOGUE RESPONSE EVALUATION

To train a model to predict human scores for dialogue responses, we first collect a dataset of human judgements (scores) of Twitter responses using the crowdsourcing platform Amazon Mechanical Turk (AMT).² The aim is to have accurate human scores for a variety of conversational responses, conditioned on dialogue contexts, which span the full range of response qualities. For example, the responses should include both relevant and irrelevant responses, both coherent and non-coherent responses, and so on. To achieve this variety, we use candidate responses from several different models. Following Liu et al. (2016), we use the following 4 sources of candidate responses: (1) a response selected by a TF-IDF retrieval-based model, (2) a response selected by the Dual Encoder (DE) (Lowe et al., 2015), (3) a response generated using the hierarchical recurrent encoder-decoder (HRED) model (Serban et al., 2016a), and (4) human-generated responses. It should be noted that the human-generated candidate responses are not the reference responses from a fixed corpus, but novel human responses that differ from the reference.
In addition to increasing response variety, this is necessary for the evaluation model to learn to compare the reference responses to the candidate responses.

¹We will provide open-source implementations of the model upon publication.
²All data collection was conducted in accordance with the policies of the host institutions' ethics board.

Table 1: Statistics of the dialogue response evaluation dataset. Each example is in the form (context, model response, reference response, human score).

    # Examples                             4,104
    # Contexts                             1,026
    # Training examples                    2,872
    # Validation examples                  616
    # Test examples                        616
    κ score (inter-annotator correlation)  0.63

We conducted two rounds of AMT experiments. We first asked AMT workers to provide a reasonable continuation of a Twitter dialogue (i.e. generate the next response given the context of a conversation). Each survey contained 20 questions, including an attention-check question. Workers were instructed to generate longer responses, in order to avoid simple one-word responses. In total, we obtained approximately 2,000 human responses.

Second, we filtered these human-generated responses for potentially offensive language, and combined them with approximately 1,000 responses from each of the above models into a single set of responses. We then asked AMT workers to rate the overall quality of each response on a scale of 1 (low quality) to 5 (high quality). Each user was asked to evaluate 4 responses from 50 different contexts. We included four additional attention-check questions, and a set of five contexts was given to each participant for assessment of inter-annotator agreement. We removed all users who either failed an attention-check question or achieved a κ inter-annotator agreement score lower than 0.2 (Cohen, 1968). The remaining evaluators had a median κ score of 0.63, indicating moderate agreement. This is consistent with results from Liu et al. (2016). Dataset statistics are provided in Table 1.

In initial experiments, we also asked humans to provide scores for topicality, informativeness, and whether the context required background information to be understandable. Note that we did not ask for fluency scores, as 3/4 of the responses were produced by humans (including the retrieval models). We found that scores for informativeness and background had low inter-annotator agreement (Table 2), and scores for topicality were highly correlated with the overall score (Pearson correlation of 0.72). Results on these auxiliary questions varied depending on the wording of the question. Thus, we continued our experiments by only asking for the overall score. We provide more details concerning the data collection in the Appendix, as it may aid others in developing effective crowdsourcing experiments.

Table 2: Median κ inter-annotator agreement scores for various questions asked in the survey.

    Measurement       κ score
    Overall           0.63
    Topicality        0.57
    Informativeness   0.31
    Background        0.05

To train evaluation models on human judgements, it is crucial that we obtain scores for responses that lie near the distribution produced by state-of-the-art models. This is why we use the Twitter Corpus (Ritter et al., 2011), as such models are pre-trained and readily available. Further, the set of topics discussed is quite broad, as opposed to the very specific Ubuntu Dialogue Corpus, and therefore the model should generalize better to other domains involving chit-chat. Finally, since it does not require domain-specific knowledge (e.g. technical knowledge), it should be easy for AMT workers to annotate.

3 TECHNICAL BACKGROUND

Recurrent neural networks (RNNs) are a type of neural network with time-delayed connections between their internal units. This leads to the formation of a hidden state h_t, which is updated for every input: h_t = f(W_hh h_{t-1} + W_ih x_t), where W_hh and W_ih are parameter matrices, f is a smooth non-linear activation function such as tanh, and x_t is the input at time t. The hidden state allows RNNs to better model sequential data, such as natural language.

In this paper, we consider RNNs augmented with long short-term memory (LSTM) units (Hochreiter & Schmidhuber, 1997). LSTMs add a set of gates to the RNN that allow it to learn how much to update the hidden state. LSTMs are one of the most well-established methods for dealing with the vanishing gradient problem in recurrent networks (Hochreiter, 1991; Bengio et al., 1994).
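A minimal sketch of the recurrent update h_t = f(W_hh h_{t-1} + W_ih x_t) described above, with tanh as the non-linearity and randomly initialized parameter matrices.

```python
import numpy as np

def rnn_step(h_prev, x_t, W_hh, W_ih):
    # one step of the vanilla RNN update with a smooth non-linearity
    return np.tanh(W_hh @ h_prev + W_ih @ x_t)

rng = np.random.default_rng(0)
h = np.zeros(8)
W_hh, W_ih = 0.1 * rng.standard_normal((8, 8)), 0.1 * rng.standard_normal((8, 3))
for x_t in rng.standard_normal((5, 3)):    # unroll over a length-5 input sequence
    h = rnn_step(h, x_t, W_hh, W_ih)
print(h)                                   # final hidden state summarising the sequence
```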
In initial experiments, we also asked humans to provide scores for topicality, informativeness, and whether the context required background information to be understandable. Note that we did not ask for fluency scores, as 3/4 of the responses were produced by humans (including the retrieval models). We found that scores for informativeness and background had low inter-annotator agreement (Table 2), and scores for topicality were highly correlated with the overall score (Pearson correlation of 0.72). Results on these auxiliary questions varied depending on the wording of the question. Thus, we continued our experiments by only asking for the overall score. We provide more details concerning the data collection in the Appendix, as it may aid others in developing effective crowdsourcing experiments.
To train evaluation models on human judgements, it is crucial that we obtain scores of responses that lie near the distribution produced by state-of-the-art models. This is why we use the Twitter Corpus (Ritter et al., 2011), as such models are pre-trained and readily available. Further, the set of topics discussed is quite broad, as opposed to the very specific Ubuntu Dialogue Corpus, and therefore the model should generalize better to other domains involving chit-chat. Finally, since it does not require domain-specific knowledge (e.g. technical knowledge), it should be easy for AMT workers to annotate.
Recurrent neural networks (RNNs) are a type of neural network with time-delayed connections between the internal units. This leads to the formation of a hidden state h_t, which is updated for every input: h_t = f(W_hh h_{t-1} + W_ih x_t), where W_hh and W_ih are parameter matrices, f is a smooth non-linear activation function such as tanh, and x_t is the input at time t. The hidden state allows RNNs to better model sequential data, such as natural language.
In this paper, we consider RNNs augmented with long-short term memory (LSTM) units (Hochreiter & Schmidhuber, 1997). LSTMs add a set of gates to the RNN that allow it to learn how much to update the hidden state. LSTMs are one of the most well-established methods for dealing with the vanishing gradient problem in recurrent networks (Hochreiter, 1991; Bengio et al., 1994).
One of the most popular approaches for automatically evaluating the quality of dialogue responses is by computing their word overlap with the reference response. In particular, the most popular metrics are the BLEU and METEOR scores used for machine translation, and the ROUGE score used for automatic summarization. While these metrics tend to correlate with human judgements in their target domains, they have recently been shown to be highly biased and correlate very poorly with human judgements for dialogue response evaluation (Liu et al., 2016). We briefly describe BLEU here, and provide a more detailed summary of word-overlap metrics in the Appendix.
BLEU  BLEU (Papineni et al., 2002) analyzes the co-occurrences of n-grams in the ground truth and the proposed responses. It computes the n-gram precision for the whole dataset, which is then multiplied by a brevity penalty to penalize short translations. For BLEU-N, N denotes the largest value of n-grams considered (usually N = 4).
[Figure 2 diagram appears here; the original drawing of the hierarchical encoder over the context c, true response r, and model response r̂ is not recoverable from the extraction.]
Figure 2: The ADEM model, which uses a hierarchical encoder to produce the context embedding c.
Drawbacks  One of the major drawbacks of word-overlap metrics is their failure in capturing the semantic similarity between the model and reference responses when there are few or no common words. This problem is less critical for machine translation: since the set of reasonable translations of a given sentence or document is rather small, one can reasonably infer the quality of a translated sentence by only measuring the word-overlap between it and one (or a few) reference translations. However, in dialogue, the set of appropriate responses given a context is much larger (Artstein et al., 2009); in other words, there is a very high response diversity that is unlikely to be captured by word-overlap comparison to a single response.
Further, word-overlap scores are computed directly between the model and reference responses. As such, they do not consider the context of the conversation. While this may be a reasonable assumption in machine translation, it is not the case for dialogue; whether a model response is an adequate substitute for the reference response is clearly context-dependent. For example, the two responses in Figure 1 are equally appropriate given the context. However, if we simply change the context to "Have you heard of any good movies recently?", the model response is no longer relevant while the reference response remains valid.
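To make the first drawback above concrete, the toy example below computes clipped bigram precision (the core quantity behind BLEU-2, without the brevity penalty) for two responses we made up that are equally appropriate yet share no bigrams; the sentences are illustrative and do not come from the dataset:

```python
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_precision(reference, candidate, n=2):
    """Clipped n-gram precision of `candidate` against `reference`."""
    ref, cand = ngrams(reference.split(), n), ngrams(candidate.split(), n)
    if not cand:
        return 0.0
    matches = sum(min(cand.count(g), ref.count(g)) for g in set(cand))
    return matches / len(cand)

# Two equally appropriate replies to "Want to grab dinner tonight?":
reference = "sure , what time works for you ?"
model_response = "sounds great , when should we meet ?"
print(ngram_precision(reference, model_response))  # 0.0 -- no bigram overlap
```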
{"section_index": "4", "section_name": "AN AUTOMATIC DIALOGUE EVALUATION MODEL (ADEM)", "section_text":
To overcome the problems of evaluation with word-overlap metrics, we aim to construct a dialogue evaluation model that: (1) captures semantic similarity beyond word overlap statistics, and (2) exploits both the context of the conversation and the reference response to calculate its score for the model response. We call this evaluation model ADEM.
ADEM learns distributed representations of the context, model response, and reference response using a hierarchical RNN encoder. Given the dialogue context c, reference response r, and model response r̂, ADEM first encodes each of them into vectors (c, r, and r̂, respectively) using the RNN encoder. Then, ADEM computes the score using a dot-product between the vector representations of c, r, and r̂ in a linearly transformed space:
score(c, r, r̂) = (cᵀMr̂ + rᵀNr̂ − α) / β     (1)
where M, N ∈ ℝ^{n×n} are learned matrices initialized to the identity, and α, β are scalar constants used to initialize the model's predictions in the range [0, 5]. The model is shown in Figure 2.
The matrices M and N can be interpreted as linear projections that map the model response r̂ into the space of contexts and reference responses, respectively. The model gives high scores to responses that have similar vector representations to the context and reference response after this projection. The model is end-to-end differentiable; all the parameters can be learned by backpropagation. In our implementation, the parameters θ = {M, N} of the model are trained to minimize the squared error between the model predictions and the human score, with L1-regularization:
L = Σ_{i=1:K} [score(c_i, r_i, r̂_i) − human_score_i]² + γ‖θ‖₁
where γ is a scalar constant. The simplicity of our model leads to both accurate predictions and fast evaluation time (see Appendix), which is important to allow rapid prototyping of dialogue systems.
Pre-training with VHRED  We would like an evaluation model that can make accurate predictions from few labeled examples, since these examples are expensive to obtain. We therefore employ semi-supervised learning, and use a pre-training procedure to learn the parameters of the encoder. In particular, we train the encoder as part of a neural dialogue model; we attach a third decoder RNN that takes the output of the encoder as input, and train it to predict the next utterance of a dialogue conditioned on the context.
The dialogue model we employ for pre-training is the latent variable hierarchical recurrent encoder-decoder (VHRED) model (Serban et al., 2016b). The VHRED model is an extension of the original hierarchical recurrent encoder-decoder (HRED) model (Serban et al., 2016a) with a turn-level stochastic latent variable. The dialogue context is encoded into a vector using our hierarchical encoder, and the VHRED then samples a Gaussian variable that is used to condition the decoder (see Appendix for further details).
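As a concrete illustration of equation 1 and the training loss, here is a minimal NumPy sketch. It assumes the vectors c, r, and r̂ have already been produced by the pre-trained encoder and reduced to dimension n; the γ = 0.02 default mirrors the value reported in the Appendix, while α and β are left as free parameters (the paper sets them so that initial predictions fall in [0, 5]). Backpropagation and the PCA step are omitted; this only shows the forward computation:

```python
import numpy as np

def adem_score(c, r, r_hat, M, N, alpha, beta):
    """Equation (1): compare the model response with the context and the
    reference response in linearly transformed spaces."""
    return (c @ M @ r_hat + r @ N @ r_hat - alpha) / beta

def adem_loss(batch, M, N, alpha, beta, gamma=0.02):
    """Squared error against human scores plus L1 regularization on the
    learned matrices theta = {M, N}."""
    data_term = sum(
        (adem_score(c, r, r_hat, M, N, alpha, beta) - y) ** 2
        for c, r, r_hat, y in batch
    )
    l1_term = gamma * (np.abs(M).sum() + np.abs(N).sum())
    return data_term + l1_term

# Toy usage with n = 7, the PCA-reduced embedding size used in the paper.
n = 7
M, N = np.eye(n), np.eye(n)          # initialized to the identity
rng = np.random.default_rng(0)
batch = [(rng.standard_normal(n), rng.standard_normal(n),
          rng.standard_normal(n), 3.0)]
print(adem_loss(batch, M, N, alpha=0.0, beta=1.0))
```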
After training VHRED, we use the last hidden state of the context-level encoder, when c, r, and r̂ are fed as input, as the vector representations for c, r, and r̂, respectively. We use representations from the VHRED model as it produces more diverse and coherent responses compared to its HRED counterpart.
Maximizing the likelihood of generating the next utterance in a dialogue is not only a convenient way of training the encoder parameters; it is also an objective that is consistent with learning useful representations of the dialogue utterances. Two context vectors produced by the VHRED encoder are similar if the contexts induce a similar distribution over subsequent responses; this is consistent with the formulation of the evaluation model, which assigns high scores to responses that have similar vector representations to the context. VHRED is also closely related to the skip-thought-vector model (Kiros et al., 2015), which has been shown to learn useful representations of sentences for many tasks, including semantic relatedness and paraphrase detection. The skip-thought-vector model takes as input a single sentence and predicts the previous sentence and next sentence. On the other hand, VHRED takes as input several consecutive sentences and predicts the next sentence. This makes it particularly suitable for learning long-term context representations.
In order to reduce the effective vocabulary size, we use byte pair encoding (BPE) (Gage, 1994; Sennrich et al., 2015), which splits each word into sub-words or characters. We also use layer normalization (Ba et al., 2016) for the hierarchical encoder, which we found worked better at the task of dialogue generation than the related recurrent batch normalization (Ioffe & Szegedy, 2015; Cooijmans et al., 2016). To train the VHRED model, we employed several of the same techniques found in Serban et al. (2016b) and Bowman et al. (2016): we drop words in the decoder with a fixed rate of 25%, and we anneal the KL-divergence term linearly from 0 to 1 over the first 60,000 batches. We use Adam as our optimizer (Kingma & Ba, 2014).
The hierarchical RNN encoder in our model consists of two layers of RNNs (El Hihi & Bengio, 1995; Sordoni et al., 2015a). The lower-level RNN, the utterance-level encoder, takes as input words from the dialogue, and produces a vector output at the end of each utterance. The context-level encoder takes the representation of each utterance as input and outputs a vector representation of the context. This hierarchical structure is useful for incorporating information from early utterances in the context (Serban et al., 2016a). Following previous work, we take the last hidden state of the context-level encoder as the vector representation of the input utterance or context.
An important point is that the ADEM procedure above is not a dialogue retrieval model. The fundamental difference between ADEM and a dialogue model is that ADEM has access to the reference response. Thus, ADEM can compare a model's response to a known good response, which is significantly easier than inferring response quality from solely the context.
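The two-level encoder described above can be sketched as follows. This is a simplified vanilla-RNN version (the actual model uses LSTM units with layer normalization); the matrices and the toy dialogue below are randomly generated stand-ins:

```python
import numpy as np

def rnn_step(h, x, W_hh, W_ih):
    """One vanilla-RNN update, h_t = tanh(W_hh h_{t-1} + W_ih x_t)."""
    return np.tanh(W_hh @ h + W_ih @ x)

def encode(vectors, W_hh, W_ih):
    h = np.zeros(W_hh.shape[0])
    for x in vectors:
        h = rnn_step(h, x, W_hh, W_ih)
    return h  # last hidden state = sequence representation

def hierarchical_encode(utterances, utt_params, ctx_params):
    """utterances: list of utterances, each a list of word embeddings.
    Encode each utterance word-by-word, then run the context-level
    encoder over the per-utterance vectors."""
    utt_vecs = [encode(u, *utt_params) for u in utterances]
    return encode(utt_vecs, *ctx_params)

# Toy sizes: 50-d word embeddings, 64-d utterance states, 32-d context.
rng = np.random.default_rng(0)
utt_params = (0.1 * rng.standard_normal((64, 64)),
              0.1 * rng.standard_normal((64, 50)))
ctx_params = (0.1 * rng.standard_normal((32, 32)),
              0.1 * rng.standard_normal((32, 64)))
dialogue = [[rng.standard_normal(50) for _ in range(5)] for _ in range(3)]
c = hierarchical_encode(dialogue, utt_params, ctx_params)  # context vector
```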
Full dataset / Test set
Metric | Spearman | Pearson | Spearman | Pearson
BLEU-1 0.026 (0.102) 0.055 (<0.001) 0.036 (0.413) 0.074 (0.097)
BLEU-2 0.039 (0.013) 0.081 (<0.001) 0.051 (0.254) 0.120 (<0.001)
BLEU-3 0.045 (0.004) 0.043 (0.005) 0.051 (0.248) 0.073 (0.104)
BLEU-4 0.051 (0.001) 0.025 (0.113) 0.063 (0.156) 0.073 (0.103)
ROUGE 0.062 (<0.001) 0.114 (<0.001) 0.096 (0.031) 0.147 (<0.001)
METEOR 0.021 (0.189) 0.022 (0.165) 0.013 (0.745) 0.021 (0.601)
T2V 0.140 (<0.001) 0.141 (<0.001) 0.140 (<0.001) 0.141 (<0.001)
VHRED -0.035 (0.062) -0.030 (0.106) -0.091 (0.023) -0.010 (0.805)
Validation set / Test set
C-ADEM 0.272 (<0.001) 0.238 (<0.001) 0.293 (<0.001) 0.303 (<0.001)
R-ADEM 0.428 (<0.001) 0.383 (<0.001) 0.409 (<0.001) 0.392 (<0.001)
ADEM (T2V) 0.395 (<0.001) 0.392 (<0.001) 0.408 (<0.001) 0.411 (<0.001)
ADEM 0.436 (<0.001) 0.389 (<0.001) 0.414 (<0.001) 0.395 (<0.001)
Table 3: Correlation between metrics and human judgements, with p-values shown in brackets. 'ADEM (T2V)' indicates ADEM with tweet2vec embeddings (Dhingra et al., 2016), and 'VHRED' indicates the dot product of VHRED embeddings (i.e. ADEM at initialization). C- and R-ADEM represent the ADEM model trained to only compare the model response to the context or reference response, respectively.
[Figure 3 appears here: scatter plots of (a) BLEU-2, (b) ROUGE, and (c) ADEM scores against human scores; the plotted points are not recoverable from the extraction.]
Figure 3: Scatter plot showing model against human scores, for BLEU-2 and ROUGE on the full dataset, and ADEM on the test set. We add Gaussian noise drawn from N(0, 0.3) to the integer human scores to better visualize the density of points, at the expense of appearing less correlated.
For training VHRED, we use a context embedding size of 2000. However, we found the ADEM model learned more effectively when this embedding size was reduced. Thus, after training VHRED, we use principal component analysis (PCA) (Pearson, 1901) to reduce the dimensionality of the context, model response, and reference response embeddings to n. While our results are robust to n, we found experimentally that n = 7 provided slightly improved performance. We provide other hyperparameter values in the Appendix.
When evaluating our models, we conduct early stopping on a separate validation set to obtain the best parameter setting. For the evaluation dataset, we split the train/validation/test sets such that there is no context overlap (i.e. the contexts in the test set are unseen during training).
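A minimal sketch of the PCA reduction step described above, assuming scikit-learn and using a random placeholder array in place of the real 2000-dimensional VHRED embeddings:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stack the 2000-dimensional VHRED embeddings of every context, model
# response, and reference response in the training set (placeholder data).
embeddings = np.random.randn(3 * 2872, 2000)

pca = PCA(n_components=7)   # n = 7, the value reported above
pca.fit(embeddings)

def reduce(vec):
    """Project a single embedding into the 7-d space used by ADEM."""
    return pca.transform(vec.reshape(1, -1))[0]
```

Fitting on training-set embeddings only (and reusing the same projection at test time) is our assumption; the paper does not spell out this detail.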
Utterance-level correlations  We first present new utterance-level correlation results for existing word-overlap metrics, in addition to results with embedding baselines and ADEM, in Table 3. The baseline metrics are evaluated on the entire dataset of 4,104 responses.4 We measure the correlation for ADEM on the validation and test sets (616 responses each).
3 We present both the Spearman correlation (computed on ranks; depicts monotonic relationships) and Pearson correlation (computed on true values; depicts linear relationships) scores.
[Figure 4 appears here: four scatter plots (BLEU-2, BLEU-4, ROUGE, ADEM) of normalized metric score against average human score per dialogue model; the plotted points are not recoverable from the extraction.]
Figure 4: Scatterplots depicting the system-level correlation results for BLEU-2, BLEU-4, ROUGE and ADEM on the test set. Each point represents the average scores for the responses from a dialogue model (TFIDF, DE, HRED, human). Human scores are shown on the horizontal axis, with normalized metric scores on the vertical axis. The ideal metric has a perfectly linear relationship.
We also conduct an additional analysis of the response data from Liu et al. (2016), where the pre-processing is standardized by removing '<first_speaker>' tokens at the beginning of each utterance. The results are detailed in Table 10 of Appendix D. We can observe from both this data, and the new data in Table 3, that the correlations for the word-overlap metrics are even lower than estimated in previous studies (Liu et al., 2016; Galley et al., 2015). In particular, this is the case for BLEU-4, which has frequently been used for dialogue response evaluation (Ritter et al., 2011; Sordoni et al., 2015b; Li et al., 2015; Galley et al., 2015; Li et al., 2016a).
We can see from Table 3 that ADEM correlates far better with human judgement than the word-overlap baselines. This is further illustrated by the scatterplots in Figure 3. We also compare with ADEM using tweet2vec embeddings for c, r, and r̂, which are computed at the character-level with a bidirectional GRU (Dhingra et al., 2016), and obtain comparable but slightly inferior performance compared to using VHRED embeddings.
System-level correlations  We show the system-level correlations for various metrics in Table 4, and present them visually in Figure 4. Each point in the scatterplots represents a dialogue model; humans give low scores to TFIDF and DE responses, higher scores to HRED, and the highest scores to other human responses. It is clear that existing word-overlap metrics are incapable of capturing this relationship for even 4 models. This renders them completely deficient for dialogue evaluation. However, ADEM produces the exact same model ranking as humans, achieving a significant Pearson correlation of 0.98.5 Thus, ADEM correlates well with humans both at the response and system level.
Generalization to previously unseen models  When ADEM is used in practice, it will take as input responses from a new model that it has not seen during training. Thus, it is crucial that ADEM correlates with human judgements for new models. We test ADEM's generalization ability by performing a leave-one-out evaluation.
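Both the utterance-level results above and the leave-one-out experiments described next reduce to the same computation: correlating predicted scores with human scores. A minimal sketch using SciPy (the system-level variant first averages scores per dialogue model):

```python
from scipy.stats import pearsonr, spearmanr

def correlate(metric_scores, human_scores):
    """Utterance-level correlations as reported in Table 3."""
    s, s_p = spearmanr(metric_scores, human_scores)  # monotonic relationship
    p, p_p = pearsonr(metric_scores, human_scores)   # linear relationship
    return {"spearman": (s, s_p), "pearson": (p, p_p)}

def system_level(per_model_scores):
    """System-level correlation (Table 4): correlate the *average* metric
    and human score of each dialogue model (TF-IDF, DE, HRED, human)."""
    metric_means = [sum(m) / len(m) for m, _ in per_model_scores]
    human_means = [sum(h) / len(h) for _, h in per_model_scores]
    return pearsonr(metric_means, human_means)
```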
For each dialogue model that was the source of response data for training ADEM (TF-IDF, Dual Encoder, HRED, humans), we conduct an experiment where we train on all model responses except those from the chosen model, and test only on the model that was unseen during training.
The results are given in Table 5. Overall, we observe that the ADEM model is very robust, and is capable of generalizing to new models in all cases. When testing the correlation on the entire test set, the model achieves comparable correlations to the ADEM model that was trained on 25% less data selected at random. This is particularly surprising for the HRED model; in this case, ADEM was trained only on responses that were written by humans (from retrieval models or human-generated), but is able to generalize to responses produced by a generative neural network model. This demonstrates ADEM's ability to accurately score new neural network-based dialogue models.
Metric | Pearson
BLEU-1 -0.079 (0.921)
BLEU-2 0.308 (0.692)
BLEU-3 -0.537 (0.463)
BLEU-4 -0.536 (0.464)
ROUGE 0.268 (0.732)
ADEM 0.981 (0.019)
Table 4: System-level correlation, with the p-value in brackets.
4 Note that our word-overlap correlation results in Table 3 are also lower than those presented in Galley et al. (2015). This is because Galley et al. measure corpus-level correlation, i.e. correlation averaged across different subsets (of size 100) of the data, and pre-filter for high-quality reference responses.
5 For comparison, BLEU achieves a system-level correlation of 0.99 on 5 models in the translation domain (Papineni et al., 2002).
Test on full dataset / Test on removed model responses
Data Removed | Spearman | Pearson | Spearman | Pearson
TF-IDF 0.4097 (<0.001) 0.3975 (<0.001) 0.3931 (<0.001) 0.3645 (<0.001)
Dual Encoder 0.4000 (<0.001) 0.3907 (<0.001) 0.4256 (<0.001) 0.4098 (<0.001)
HRED 0.4128 (<0.001) 0.3961 (<0.001) 0.3998 (<0.001) 0.3956 (<0.001)
Human 0.4052 (<0.001) 0.3910 (<0.001) 0.4472 (<0.001) 0.4230 (<0.001)
Average 0.4069 (<0.001) 0.3938 (<0.001) 0.4164 (<0.001) 0.3982 (<0.001)
25% at random 0.4077 (<0.001) 0.3932 (<0.001)
Table 5: Correlation for ADEM when various model responses are removed from the training set. The left two columns show performance on the entire test set, and the right two columns show performance on responses only from the dialogue model not seen during training. The last row (25% at random) corresponds to the ADEM model trained on all model responses, but with the same amount of training data as the model above (i.e. 25% less data than the full training set).
Context Reference re- Model responses Human ADEM sponse score score photo to see my television de-. yeah it was me 1) i'm not sure. i just don't know what to do with it 3 1.602 but go to - some. some on haha i 'd kinda for- 2) you heard the horsepower productions remix of lee. 1 1.513 <url> - hehe <url> - it gotten about it it scratch perry's 'exercising' off his 'mighty upsetter'. really was you? i thought. was filmed a while album? ppl were recognizing someone ago 3) you wont chug a fuzzy peach navel. 1 1.744 who looked like you! were the 4) they were! 5 3.274 oysters worth the wait? just beat call of duty!! -> want im in kenmore at 1) i'm gonna get a new phone some moro. 1 1.848 a cookie? > yes!! -> come the moment 2) no way man. 5 4.265 get it 3) wow i just got a free pizza coupon! get yours. 1 0.921 before theres no more! <url>. 4) i'm going to go to the mall. 1 2.634 am i out of twitter jail yet? test-. any news on meet- 1) i'm not sure if i'm going to be able to get it.. 3 1.912 ing -> yeah. i posted bail -- ing our user ? i 2) good to see another mac user in the leadership. 4 1.417 thanks. i am a right chatter go to the us on fri- ranks tweetbox on sundays. same day and i don 't 3) awww poor baby hope u get to feeling better soon. 2 1.123 happened last sunday lol want to miss any- maybe some many work days at piedmont. thing arranged 4) did you tweet too much?. 5 2.539
Table 7: Examples of scores given by the ADEM model.
Qualitative Analysis  To illustrate some strengths and weaknesses of ADEM, we show human and ADEM scores for each of the responses to various contexts in Table 7. There are several instances where ADEM predicts accurately: in particular, ADEM is often very good at assigning low scores to poor responses.
This is seen in the first two contexts, where most of the responses given a score of 1 from humans are given scores less than 2 by ADEM. The single exception in response (4) for the second context seems somewhat appropriate and should perhaps have been scored higher by the human evaluator. There are also several instances where the model assigns high scores to suitable responses, as in the first two contexts.
However, we also observe that ADEM is sometimes overly conservative when predicting response scores. This is the case in the third context, where the model assigns low scores to most of the responses that a human rated highly (although response (2) is arguably not relevant to the context). This behaviour is likely due to the squared error loss used to train ADEM; since the model receives a large penalty for incorrectly predicting an extreme value, it learns to predict scores closer to the average human score.
Table 6: In 60/146 cases, ADEM scores good responses (human score > 4) highly when word-overlap metrics fail. The bars around |metric| indicate that the metric scores have been normalized.
Metric scores | # Examples
Human > 4 | 237 out of 616
and (|BLEU-2| < 2, |ROUGE| < 2) | 146 out of 237
and |ADEM| > 4 | 60 out of 146
and |ADEM| < 2 | 42 out of 237
and (|BLEU-2| > 4 or |ROUGE| > 4) | 14 out of 42
Table 9: Examples where both human and ADEM score the model response highly, while BLEU-2 and ROUGE do not. These examples are drawn randomly (i.e. no cherry-picking) from the examples where ADEM outperforms BLEU-2 and ROUGE (as defined in the text). ADEM is able to correctly assign high scores to short responses that have no word-overlap with the reference response. The bars around |metric| indicate that the metric scores have been normalized.
Correlation with response length  One implicit assumption in the ADEM model is that the human evaluations of model responses are absolutely correct, including the biases that humans exhibit when evaluating dialogues. For example, it has been shown that humans have a tendency to give a higher rating to shorter responses than to longer responses (Serban et al., 2016b), as shorter responses are often more generic and thus are more likely to be suitable to the context. This affects dialogue response evaluation: we calculated the test set correlation between response length and the human score, and obtained a significant Pearson correlation of 0.27, and a Spearman correlation of 0.32. If the assumption that human evaluators are absolutely correct is not accurate, it may be desirable to remove human biases in an automatic evaluation model to improve the model's generalization capabilities. This is an important direction for future work.
Improvement over word-overlap metrics  Next, we analyze more precisely how ADEM outperforms traditional word-overlap metrics such as BLEU-2 and ROUGE. We first normalize the metric scores to have the same mean and variance as the human scores, clipping the resulting scores to the range [1, 5] (we assign raw scores of 0 a normalized score of 1). We indicate normalization with vertical bars around the metric; a sketch of this normalization is given after Table 8 below. We then select all of the good responses that were given low scores by word-overlap metrics (i.e. responses which humans scored as 4 or higher, and which |BLEU-2| and |ROUGE| scored as 2 or lower). The results are summarized in Table 6: of the 237 responses that humans scored 4 or higher, most of them (146/237) were ranked very poorly by both |BLEU-2| and |ROUGE|. This quantitatively demonstrates what we argued qualitatively in Figure 1: a major failure of word-overlap metrics is the inability to consider reasonable responses that have no word-overlap with the reference response. We can also see that, in almost half (60/146) of the cases where both |BLEU-2| and |ROUGE| fail, |ADEM| is able to correctly assign a score greater than 4. For comparison, there are only 42 responses where humans give a score of 4 and |ADEM| gives a score less than 2, and only 14 of these are assigned a score greater than 4 by either |BLEU-2| or |ROUGE|.
To provide further insight, we give specific examples of responses that are scored highly (> 4) by both humans and |ADEM|, and poorly (< 2) by both |BLEU-2| and |ROUGE|, in Table 9. We draw 3 responses randomly (i.e. no cherry-picking) from the 60 test set responses that meet this criterion. We can observe that ADEM is able to recognize short responses that are appropriate to the context, without word-overlap with the reference response. This is even the case when the model and reference responses have very little semantic similarity, as in the first and third examples in Table 9.
Finally, we show the behaviour of ADEM when there is a discrepancy between the lengths of the reference and model responses. In Liu et al. (2016), the authors show that word-overlap metrics such as BLEU-1, BLEU-2 and METEOR exhibit a bias in this scenario: they tend to favour responses that are closer in length to the reference response.6 However, humans do not exhibit this bias; in other words, the quality of a response as judged by a human is roughly independent of its length. In Table 8, we show that ADEM also does not exhibit this bias towards similar-length responses.
Context Reference response Model re- sponse Human |BLEU-2| |ROUGE| |ADEM| score score score score i'd recommend <url> - or build buy an an htpc with xmbc is what i because 5 1.0 1.0 4.726 htpc and put <url> on it. -> you're the run . but i 've decked out my it's bril- some nd person this week that's recom- setup . i've got <number> tb liant mended roku to me. of data on my home server imma be an auntie this weekend. i guess lol you sometiming haha, 5 1.0 1.0 4.201 i have to go albany. herewego -> u sup- anyway, posed to been here -> i come off nd on. how're -> never tell me smh you? my son thinks she is plain. and the girl you are too kind for words . i will do 5 1.0 1.0 5.0 that plays her sister. seekhelp4him? send him this. he'll thank you. <url>
Mean score: w < 6 (n=312) | w > 6 (n=304) | p-value
ROUGE 0.042 | 0.031 | < 0.01
BLEU-2 0.0022 | 0.0007 | 0.23
ADEM 2.072 | 2.015 | 0.23
Human 2.671 | 2.698 | 0.83
Table 8: Effect of differences in response length on the score; w = absolute difference in #words between the reference response and proposed response. BLEU-1, BLEU-2, and METEOR have previously been shown to exhibit bias towards similar-length responses (Liu et al., 2016).
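As promised above, a sketch of the |metric| normalization: match the human score mean and variance, clip to [1, 5], and map raw zeros to 1. This is our reading of the procedure described in the text, not the authors' exact code:

```python
import numpy as np

def normalize_metric(metric_scores, human_scores):
    """Rescale metric scores to the human mean/variance, clip to [1, 5];
    raw scores of 0 are assigned a normalized score of 1."""
    m = np.asarray(metric_scores, dtype=float)
    h = np.asarray(human_scores, dtype=float)
    z = (m - m.mean()) / (m.std() + 1e-8)   # standardize the metric
    rescaled = z * h.std() + h.mean()       # match human mean and variance
    rescaled = np.clip(rescaled, 1.0, 5.0)
    rescaled[m == 0] = 1.0                  # raw zeros get a score of 1
    return rescaled
```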
Related to our approach is the literature on novel methods for the evaluation of machine translation systems, especially through the WMT evaluation task (Callison-Burch et al., 2011; Machacek & Bojar, 2014; Stanojevic et al., 2015). In particular, Gupta et al. (2015) have recently proposed to evaluate machine translation systems using Tree-LSTMs. Their approach differs from ours as, in the dialogue domain, we must additionally condition our score on the context of the conversation, which is not necessary in translation.
Several recent approaches use hand-crafted reward features to train dialogue models using reinforcement learning (RL). For example, Li et al. (2016b) use features related to ease of answering and information flow, and Yu et al. (2016) use metrics related to turn-level appropriateness and conversational depth. These metrics are based on hand-crafted features, which only capture a small set of relevant aspects; this inevitably leads to sub-optimal performance, and it is unclear whether such objectives are preferable over retrieval-based cross-entropy or word-level maximum log-likelihood objectives. Furthermore, many of these metrics are computed at the conversation-level, and are not available for evaluating single dialogue responses. The metrics that can be computed at the response-level could be incorporated into our framework, for example by adding a term to equation 1 consisting of a dot product between these features and a vector of learned parameters.
There has been significant work on evaluation methods for task-oriented dialogue systems, which attempt to solve a user's task such as finding a restaurant. These methods include the PARADISE framework (Walker et al., 1997) and MeMo (Moller et al., 2006), which consider a task completion signal. Our models do not attempt to model task completion, and thus fall outside this domain.
{"section_index": "5", "section_name": "7 DISCUSSION", "section_text":
The evaluation model proposed in this paper favours dialogue models that generate responses that are rated as highly appropriate by humans. It is likely that this property does not fully capture the desired end-goal of chatbot systems. For example, one issue with building models to approximate human judgements of response quality is the problem of generic responses. Since humans often provide high scores to generic responses due to their appropriateness for many given contexts, a model trained to predict these scores will exhibit the same behaviour. An important direction for future work is modifying ADEM such that it is not subject to this bias. This could be done, for example, by censoring ADEM's representations (Edwards & Storkey, 2016) such that they do not contain any information about length. Alternatively, one could build a second evaluation model that assigns a score based on how easy it is to distinguish the dialogue model responses from human responses. In this case, a model that generates generic responses will easily be distinguishable and obtain a low score.
An important direction of future research is building models that can evaluate the capability of a dialogue system to have an engaging and meaningful interaction with a human.
Compared to evaluating a single response, this evaluation is arguably closer to the end-goal of chatbots. However, such an evaluation is extremely challenging to do in a completely automatic way. We view the evaluation procedure presented in this paper as an important step towards this goal; current dialogue systems are incapable of generating responses that are rated as highly appropriate by humans, and we believe our evaluation model will be useful for measuring and facilitating progress in this direction.
6 Note that, for our dataset, BLEU-2 almost exclusively assigns scores near 0 for both w < 6 and w > 6, resulting in a p-value > 0.05.
We use the Twitter Corpus to train our models as it contains a broad range of non-task-oriented conversations and has been used to train many state-of-the-art models. However, our model could easily be extended to other general-purpose datasets, such as Reddit, once similar pre-trained models become publicly available. Such models are necessary even for creating a test set in a new domain, which will help us determine if ADEM generalizes to related dialogue domains. We leave investigating the domain transfer ability of ADEM for future work.
We'd like to thank Casper Liu for his help with the correlation code, Laurent Charlin for helpful discussions on the data collection, Jason Weston for suggesting improvements in the experiments, and Jean Harb and Emmanuel Bengio for their debugging skills. We gratefully acknowledge support from the Samsung Institute of Advanced Technology, the National Science and Engineering Research Council, and Calcul Quebec. We'd also like to thank the developers of Theano (Team et al., 2016).
{"section_index": "6", "section_name": "REFERENCES", "section_text":
J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
S. Banerjee and A. Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, volume 29, pp. 65-72, 2005.
Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.
S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio. Generating sentences from a continuous space. COLING, 2016.
C. Callison-Burch, P. Koehn, C. Monz, and O. F. Zaidan. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pp. 22-64. Association for Computational Linguistics, 2011.
B. Chen and C. Cherry. A systematic comparison of smoothing techniques for sentence-level BLEU. ACL 2014, pp. 362, 2014.
J. Cohen. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213, 1968.
T. Cooijmans, N. Ballas, C. Laurent, and A. Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
B. Dhingra, Z. Zhou, D. Fitzpatrick, M. Muehl, and W. W. Cohen. Tweet2vec: Character-based distributed representations for social media. arXiv preprint arXiv:1605.03481, 2016.
H. Edwards and A. Storkey. Censoring representations with an adversary. ICLR, 2016.
P. Gage. A new algorithm for data compression. The C Users Journal, 12(2):23-38, 1994.
M. Galley, C. Brockett, A. Sordoni, Y. Ji, M. Auli, C. Quirk, M. Mitchell, J. Gao, and B. Dolan. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. arXiv preprint arXiv:1506.06863, 2015.
R. Gupta, C. Orasan, and J. van Genabith. ReVal: A simple and effective machine translation evaluation metric based on recurrent neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2015.
S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, pp. 91, 1991.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
A. Kannan, K. Kurach, S. Ravi, T. Kaufmann, A. Tomkins, B. Miklos, G. Corrado, L. Lukacs, M. Ganea, P. Young, et al. Smart reply: Automated response suggestion for email. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), volume 36, pp. 495-503, 2016.
D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
R. Kiros, Y. Zhu, R. R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems, pp. 3276-3284, 2015.
J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055, 2015.
J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155, 2016a.
J. Li, W. Monroe, A. Ritter, and D. Jurafsky. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541, 2016b.
C.-Y. Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, volume 8, Barcelona, Spain, 2004.
C.-W. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023, 2016.
R. Lowe, N. Pow, I. Serban, and J. Pineau. The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909, 2015.
M. Machacek and O. Bojar. Results of the WMT14 metrics shared task. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pp. 293-301. Citeseer, 2014.
J. Markoff and P. Mozur. For sympathetic ear, more Chinese turn to smartphone program. NY Times, 2015.
S. Moller, R. Englert, K.-P. Engelbrecht, V. V. Hafner, A. Jameson, A. Oulasvirta, A. Raake, and N. Reithinger. MeMo: towards automatic usability evaluation of spoken dialogue services by user error simulations. In INTERSPEECH, 2006.
K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311-318. Association for Computational Linguistics, 2002.
K. Pearson. Principal components analysis. The London, Edinburgh and Dublin Philosophical Magazine and Journal, 6(2):566, 1901.
A. Ritter, C. Cherry, and W. B. Dolan. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 583-593. Association for Computational Linguistics, 2011.
R. Sennrich, B. Haddow, and A. Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, pp. 3776-3784, 2016a.
I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069, 2016b.
L. Shang, Z. Lu, and H. Li. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364, 2015.
A. Sordoni, Y. Bengio, H. Vahabi, C. Lioma, J. Grue Simonsen, and J.-Y. Nie. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, pp. 553-562. ACM, 2015a.
A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.-Y. Nie, J. Gao, and B. Dolan. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714, 2015b.
O. Vinyals and Q. Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
J. Weizenbaum. ELIZA: a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36-45, 1966.
Z. Yu, Z. Xu, A. W. Black, and A. I. Rudnicky. Strategy and policy learning for non-task-oriented conversational systems. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 404, 2016.
{"section_index": "7", "section_name": "APPENDIX A: FURTHER NOTES ON CROWDSOURCING DATA COLLECTION", "section_text":
Before conducting the primary crowdsourcing experiments to collect the dataset in this paper, we ran a series of preliminary experiments to see how AMT workers responded to different questions. Unlike the primary study, where we asked a small number of overlapping questions to determine the κ score and filtered users based on the results, we conducted a study where all responses (40 in total, from 10 contexts) were overlapping. We did this for 18 users in two trials, resulting in 153 pair-wise correlation scores per trial.
In the first trial, we asked the following questions to the users, for each response:
1. How appropriate is the response overall? (overall, scale of 1-5)
2. How on-topic is the response? (topicality, scale of 1-5)
3. How specific is the response to some context? (specificity, scale of 1-5)
4. How much background information is required to understand the context? (background, scale of 1-5)
We observed that both the overall scores and topicality had fairly high inter-annotator agreement (as shown in Table 2), but were strongly correlated with each other (i.e. participants would often put the same scores for topicality and overall score). Conversely, specificity (κ = 0.12) and background (κ = 0.05) had very low inter-annotator agreements.
To better visualize the data, we produce scatterplots showing the distribution of scores for different responses, for each of the four questions in our survey (Figure 5). We can see that the overall and topicality scores are clustered for each question, indicating high agreement. However, these clusters are most often in the same positions for each response, which indicates that they are highly correlated with each other. Specificity and background information, on the other hand, show far fewer clusters, indicating lower inter-annotator agreement. We conjectured that this was partially because the terms 'specificity' and 'background information', along with our descriptions of them, had a high cognitive load, and were difficult to understand in the context of our survey.
To test this hypothesis, we conducted a new survey where we tried to ask the questions for specificity and background in a more intuitive manner. We also changed the formulation of the background question to be a binary 0-1 decision of whether users understood the context. We asked the following questions:
1. How appropriate is the response overall? (overall, scale of 1-5)
2. How on-topic is the response? (topicality, scale of 1-5)
3. How common is the response? (informativeness, scale of 1-5)
4. Does the context make sense? (context, scale of 0-1)
We also clarified our description for the third question, including providing more intuitive examples. Interestingly, the inter-annotator agreement on informativeness (κ = 0.31) was much higher than that for specificity in the original survey. Thus, the formulation of questions in a crowdsourcing survey has a large impact on inter-annotator agreement. For the context, we found that users either agreed highly (κ > 0.9 for 45 participants), or not at all (κ < 0.1 for 113 participants).
We also experimented with asking the overall score on a separate page, before asking questions 2-4, and found that this increased the κ agreement slightly. Similarly, excluding all scores where participants indicated they did not understand the context improved inter-annotator agreement slightly.
Due to these observations, we decided to only ask users for their overall quality score for each response, as it is unclear how much additional information is provided by the other questions in the context of dialogue. We hope this information is useful for future crowdsourcing experiments in the dialogue domain.
Note that we do not ask for fluency, as 3/4 of the responses for each context were written by a human (including retrieval models). We also provided the AMT workers with examples that have high topicality and low specificity, and examples with high specificity and low topicality. The background question was only asked once for each context.
[Figure 5 appears here: scatter plots of normalized score distributions per response for the four survey questions (Overall, Topicality, Specificity, Background); the plotted points are not recoverable from the extraction.]
Figure 5: Scatter plots showing the distribution of scores (vertical axis) for different responses (horizontal axis), for each of the four questions in our survey. It can be seen that the overall and topicality scores are clustered for each question, indicating high agreement, while this is not the case for specificity or background information. Note that all scores are normalized on a per-user basis, based on the average score given by each user.
{"section_index": "8", "section_name": "APPENDIX B: METRIC DESCRIPTION", "section_text":
BLEU  BLEU-N is computed from the clipped n-gram precision between the ground truth response r and the proposed response r̂:
P_n(r, r̂) = Σ_k min(h(k, r), h(k, r̂)) / Σ_k h(k, r̂)
where k indexes all possible n-grams of length n and h(k, r) is the number of n-grams k in r. Note that the min in this equation is calculating the number of co-occurrences of n-gram k between the ground truth response r and the proposed response r̂, as it computes the fewest appearances of k in either response. To avoid the drawbacks of using a precision score, namely that it favours shorter (candidate) sentences, the authors introduce a brevity penalty. BLEU-N, where N is the maximum length of n-grams considered, is defined as:
BLEU-N := b(r, r̂) exp( Σ_{n=1}^{N} β_n log P_n(r, r̂) )
β_n is a weighting that is usually uniform, and b(·) is the brevity penalty. The most commonly used version of BLEU assigns N = 4. Modern versions of BLEU also use sentence-level smoothing, as the geometric mean often results in scores of 0 if there is no 4-gram overlap (Chen & Cherry, 2014). Note that BLEU is usually calculated at the corpus-level, and was originally designed for use with multiple reference sentences.
METEOR  The METEOR metric (Banerjee & Lavie, 2005) was introduced to address several weaknesses in BLEU. It creates an explicit alignment between the candidate and target responses. The alignment is based on exact token matching, followed by WordNet synonyms, stemmed tokens, and then paraphrases. Given a set of alignments m, the METEOR score is the harmonic mean of precision P_m and recall R_m between the candidate and target sentence:
METEOR := Pen · (P_m · R_m) / (α P_m + (1 − α) R_m),   P_m = |m| / Σ_k h_k(c_i),   R_m = |m| / Σ_k h_k(s_ij)
The penalty term Pen is based on the 'chunkiness' of the resolved matches. We use the default values for the hyperparameters α, β, and θ.
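A minimal sentence-level implementation of the BLEU-N equations above, with a small epsilon standing in for the smoothing that Chen & Cherry (2014) discuss; this is an illustrative sketch rather than a reference implementation:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu_n(reference, candidate, N=4, eps=1e-9):
    """Sentence-level BLEU-N: clipped n-gram precisions P_n combined with
    uniform weights beta_n = 1/N and a brevity penalty b."""
    r, c = reference.split(), candidate.split()
    log_p = 0.0
    for n in range(1, N + 1):
        ref_c, cand_c = ngram_counts(r, n), ngram_counts(c, n)
        overlap = sum(min(cnt, ref_c[g]) for g, cnt in cand_c.items())
        total = max(sum(cand_c.values()), 1)
        log_p += (1.0 / N) * math.log(max(overlap / total, eps))  # smoothing
    brevity = 1.0 if len(c) >= len(r) else math.exp(1 - len(r) / max(len(c), 1))
    return brevity * math.exp(log_p)
```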
ROUGE  ROUGE (Lin, 2004) is a set of evaluation metrics used for automatic summarization. We consider ROUGE-L, which is an F-measure based on the Longest Common Subsequence (LCS) between a candidate and target sentence. The LCS is a set of words which occur in two sentences in the same order; however, unlike n-grams the words do not have to be contiguous, i.e. there can be other words in between the words of the LCS. ROUGE-L is computed using an F-measure between the reference response and the proposed response:
R = max_j l(c_i, s_ij) / |s_ij|,   P = max_j l(c_i, s_ij) / |c_i|,   ROUGE-L(c_i, S_i) = (1 + β²) R P / (R + β² P)
where l(c_i, s_ij) is the length of the LCS between the sentences. β is usually set to favour recall (β = 1.2).
{"section_index": "9", "section_name": "APPENDIX C: LATENT VARIABLE HIERARCHICAL RECURRENT ENCODER-DECODER (VHRED)", "section_text":
The VHRED model is an extension of the original hierarchical recurrent encoder-decoder (HRED) model (Serban et al., 2016a) with an additional component: a high-dimensional stochastic latent variable at every dialogue turn. The dialogue context is encoded into a vector representation using the utterance-level and context-level RNNs from our encoder. Conditioned on the summary vector at each dialogue turn, VHRED samples a multivariate Gaussian variable that is provided, along with the context summary vector, as input to the decoder RNN, which in turn generates the response word-by-word. We use representations from the VHRED model as it produces more diverse and coherent responses compared to its HRED counterpart.
The VHRED model is trained to maximize a lower-bound on the log-likelihood of generating the next response:
log P(w_1, ..., w_N) ≥ Σ_{n=1}^{N} −KL[ Q_ψ(z_n | w_1, ..., w_n) ‖ P_θ(z_n | w_{<n}) ] + E_{Q_ψ(z_n | w_1, ..., w_n)}[ log P_θ(w_n | z_n, w_{<n}) ]
where KL[Q‖P] is the Kullback-Leibler (KL) divergence between distributions Q and P. The distribution Q_ψ(z_n | w_1, ..., w_n) = N(μ_posterior(w_1, ..., w_n), Σ_posterior(w_1, ..., w_n)) is the approximate posterior distribution (or recognition model), which approximates the intractable true posterior distribution P_θ(z_n | w_1, ..., w_n). The posterior mean μ_posterior and covariance Σ_posterior (as well as those of the prior) are computed using a feed-forward neural network, which takes as input the concatenation of the vector representations of the past utterances and that of the current utterance.
The multivariate Gaussian latent variable in the VHRED model allows modelling ambiguity and uncertainty in the dialogue through the latent variable distribution parameters (mean and variance). This provides a useful inductive bias, which helps VHRED encode the dialogue context into a real-valued embedding space even when the dialogue context is ambiguous or uncertain, and it helps VHRED generate more diverse responses.
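The KL term of the VHRED lower bound above has a closed form for Gaussians. A sketch for the diagonal-covariance case (the diagonal assumption and the kl_weight annealing knob are ours, mirroring the 0-to-1 annealing described in Section 4):

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL[ N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ], the per-turn
    KL term in the VHRED lower bound (diagonal covariances assumed)."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def elbo_term(mu_q, var_q, mu_p, var_p, log_p_w_given_z, kl_weight=1.0):
    """One term of the bound: reconstruction minus (annealed) KL.
    kl_weight is annealed linearly from 0 to 1 during training."""
    return log_p_w_given_z - kl_weight * gaussian_kl(mu_q, var_q, mu_p, var_p)
```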
Figure 6: The VHRED model used for pre-training. The hierarchical structure of the RNN encoder is shown in the red box around the bottom half of the figure. After training using the VHRED procedure, the last hidden state of the context-level encoder is used as a vector representation of the input text.
{"section_index": "10", "section_name": "HYPERPARAMETERS", "section_text":
When evaluating our model, we conduct early stopping on an external validation set to obtain the best parameter setting. We similarly choose our hyperparameters (PCA dimension n, L1 regularization penalty γ, learning rate α, and batch size b) based on validation set results. Our best ADEM model used γ = 0.02, α = 0.01, and b = 16. For ADEM with tweet2vec embeddings, we did a similar hyperparameter search, and used n = 150, γ = 0.01, α = 0.01, and b = 16.
New results on Liu et al. (2016) data  In order to ensure that the correlations between word-overlap metrics and human judgements were comparable across datasets, we standardized the processing of the evaluation dataset from Liu et al. (2016). In particular, the original data from Liu et al. (2016) has a token (either '<first_speaker>', '<second_speaker>', or '<third_speaker>') at the beginning of each utterance. This is an artifact left over by the processing used as input to the hierarchical recurrent encoder-decoder (HRED) model (Serban et al., 2016a). Removing these tokens makes sense for establishing the ability of word-overlap models, as they are unrelated to the content of the tweets.
We perform this processing, and report the updated results for word-overlap metrics in Table 10. Surprisingly, almost all significant correlation disappears, particularly for all forms of the BLEU score. Thus, we can conclude that the word-overlap metrics were heavily relying on these tokens when comparing model responses and reference responses.
Evaluation speed  An important property of evaluation models is speed. We show the evaluation time on the test set for ADEM on both CPU and a Titan X GPU (using Theano, without cuDNN) in Table 11. When run on GPU, ADEM is able to evaluate responses in a reasonable amount of time (approximately 2.5 minutes). This includes the time for encoding the contexts, model responses, and reference responses into vectors with the hierarchical RNN, in addition to computing the PCA projection, but does not include pre-training with VHRED. For comparison, if run on a test set of 10,000 responses, ADEM would take approximately 45 minutes.
[Table 11: evaluation time of ADEM on CPU and GPU; the numeric entries are not recoverable from the extraction.]
This is significantly less time consuming than setting up human experiments. Note that we have not yet made any effort to optimize the speed of the ADEM model.
Learning curves  To show that our learning procedure for ADEM really is necessary, and that the embeddings produced by VHRED are not sufficient to evaluate dialogue systems, we plot the Spearman and Pearson correlations on the test set as a function of the number of epochs in Figure 7. It is clear that, at the beginning of training, when the matrices M and N have been initialized to the identity, the model is incapable of accurately predicting human scores, and its correlation is approximately 0.
Metric | Spearman | Pearson
BLEU-1 -0.026 (0.80) | 0.016 (0.87)
BLEU-2 0.065 (0.52) | 0.080 (0.43)
BLEU-3 0.139 (0.17) | 0.088 (0.39)
BLEU-4 0.139 (0.17) | 0.092 (0.36)
ROUGE -0.083 (0.41) | -0.010 (0.92)
Table 10: Correlations between word-overlap metrics and human judgements on the dataset from Liu et al. (2016), after removing the speaker tokens at the beginning of each utterance. The correlations are even worse than estimated in the original paper, and none are significant.
Context Reference Model response Human |BLEU-2| |ROUGE| |ADEM| response score score score score what theme do you guys want next on tumblr? maybe you need i'm really im- 4 2.53 5.0 1.0 we've had mariskamommymoments what do a bit more sleep pressed. first you want to see next? -> im sorry. hope you guy to said that feel better soon! -> it will wear off. just hate pwhat'stime? feeling like this -> im sure it will! just relax sleep late its not and take your time -> i'm okay. just overly good. i'm worried tired 1 some pm syria - the editor of syrian govern- msm is very simi- i'm not sure if i'm 4 2.53 4.75 1.22 ment daily tishrin said she had been sacked lar the world over going to be able over remarks to al <url> -> replaced by ! tied to the gov- to see the <unk>. business editor! sounds like the states lol ernment . i'm not sure if i should be able to see it. wonder how long it will take wayne rooney thanks . i won- thanks let me see 5 2.53 4.24 1.53 to get himself into trouble on twitter. odds? der why it didn't if this one works -> probably. a long time. because i used work! :p the address in the paper to find the page and it doesn't exist! -> here you go
Table 12: Examples where a human and either |BLEU-2| or |ROUGE| (after normalization) score the model response highly (> 4/5), while the ADEM model scored it poorly (< 2/5). These examples are drawn randomly (i.e. no cherry-picking). The bars around |metric| indicate that the metric scores have been normalized.
Failure analysis  We now conduct a failure analysis of the ADEM model. In particular, we look at two different cases: responses where both humans and (normalized) ROUGE or BLEU-2 score highly (a score of 4 out of 5 or greater) while ADEM scores poorly (2 out of 5 or lower), and the converse, where ADEM scores the response highly while humans and either ROUGE or BLEU-2 score it poorly. We randomly sample (i.e. without cherry-picking) three examples of each case, which are shown in Tables 12 and 13.
From Table 12, the cases where ADEM misses a good response, we can see that there are a variety of reasons for this cause of failure. In the first example, ADEM is not able to match the fact that the model response talks about sleep to the reference response or context. This is possibly because the utterance contains a significant amount of irrelevant information: indeed, the first two sentences are not related to either the context or reference response. In the second example, the model response does not seem particularly relevant to the context; despite this, the human scoring this example gave it 4/5. This illustrates one drawback of human evaluations: they are quite subjective, and often have some noise. This makes it difficult to learn an effective ADEM model. Finally, ADEM is unable to score the third response highly, even though it is very closely related to the reference response.
We can observe from the first two examples in Table 13, where the ADEM model erroneously ranks the model responses highly, that ADEM is occasionally fooled into giving high scores for responses that are completely unrelated to the context. This may be because both of the utterances are short, and short utterances are ranked higher by humans in general since they are often more generic (as detailed in Section 5).
In the third example, the response actually seems to be somewhat reasonable given the context; this may be an instance where the human evaluator provided a score that was too low.
This problem is important because it will help us understand the intrinsic mechanism of deep neural networks and explore possible novel applications based on this understanding. Ballester & de Araujo (2016) show how CNNs, trained to identify objects primarily in photos, could be used for abstract sketch recognition. Gatys et al. (2015a;b) utilize
dataset, could we still utilize their deep representation in order to judge the relationship among different species? We conjecture that they show high cosine similarity of the activation vectors in high-level layers. By applying the inner product similarity of the activation vectors at the last fully connected layer for different species, we present empirical evidence that through transfer learning we could roughly construct their tree of life.
As the cross confidence is close to zero, we use the logarithm function to enlarge the value. Then we add "−" to assign lower values to closer species and to keep the values nonnegative.
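A minimal sketch of this probability-method distance computation, assuming the per-image softmax outputs have already been collected into an array (the helper name and its arguments are illustrative):

```python
import numpy as np

def distance_matrix(probs, labels):
    """Probability-method distance matrix (a sketch).

    probs:  (n_images, n_classes) softmax outputs from the reference CNN
    labels: (n_images,) index of the true class of each image
    Returns D with D[A, B] = -log(0.5 * P_A2B + 0.5 * P_B2A) and D[A, A] = 0.
    """
    classes = np.unique(labels)
    K = len(classes)
    # P[A, B]: average predicted probability of class B over images of class A
    P = np.stack([probs[labels == c].mean(axis=0)[classes] for c in classes])
    D = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            if i != j:
                D[i, j] = -np.log(0.5 * P[i, j] + 0.5 * P[j, i])
    return D
```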
tion, to evaluate the ability of constructing hierarchical trees based on the visual similarity of images outside biology, we choose some vehicle categories from the ImageNet dataset (Russakovsky et al., 2015) and build a vehicle tree.
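For concreteness, the MDS-based construction described in Section 2.3 can be sketched as follows, assuming a symmetric distance matrix and using scikit-learn's MDS for the two-dimensional projection (the helper name is illustrative):

```python
import numpy as np
from sklearn.manifold import MDS

def mds_tree(D, names):
    """Bottom-up tree construction from a distance matrix (a sketch of the
    MDS-based method): project species into 2-D, then repeatedly merge the
    closest pair, replacing it by its midpoint."""
    pts = MDS(n_components=2, dissimilarity='precomputed').fit_transform(D)
    nodes = [(n, p) for n, p in zip(names, pts)]
    while len(nodes) > 1:
        # find the closest pair of current representative points
        i, j = min(((a, b) for a in range(len(nodes))
                    for b in range(a + 1, len(nodes))),
                   key=lambda ab: np.linalg.norm(nodes[ab[0]][1] - nodes[ab[1]][1]))
        (na, pa), (nb, pb) = nodes[i], nodes[j]
        merged = ((na, nb), (pa + pb) / 2.0)  # midpoint represents the new node
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [merged]
    return nodes[0][0]  # nested tuples encode the binary tree
```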
The first three trees are constructed by our methods, and the fourth tree is the ground truth from WordNet. The hierarchies of MST and MDS coincide with that of WordNet.
Prokaryotic evolution and the tree of life are two different things. Biology Direct, 4(1):1, 2009.
To accelerate the inference while preserving accuracy, we designed a structured encoding function to mimic the exact inference. By generalizing the penalty method to distribution space, we are able to train the model and the encoding function simultaneously. We also demonstrate that the R-HSMM significantly outperforms the previous state-of-the-art on both synthetic and real-world datasets.
our goal is to divide the sequence into meaningful segments. Thus, each observation xt will have a corresponding label zt ∈ Z, where Z = {1, 2, ..., K} is a finite discrete label set and K is predefined. The label sequence z = [z1, z2, ..., z|x|] should have the same length as x.
The corresponding graphical model is shown in Figure 2b.
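For concreteness, a sketch of the per-segment emission log-likelihood implied by Eqs. (4)-(5) is given below; the output weights `U_mu` and `U_logvar` and the zero vector used in place of the (nonexistent) observation before the segment start are assumptions made for illustration:

```python
import numpy as np

def rnn_emission_loglik(x_seg, params):
    """Log-likelihood of one segment under the recurrent emission model
    (a sketch; each label z owns its own parameter set `params`)."""
    W, V, b, U_mu, U_logvar = params
    h = np.zeros(V.shape[0])
    x_prev = np.zeros_like(x_seg[0])  # no previous observation at segment start
    ll = 0.0
    for x_t in x_seg:
        h = np.tanh(W @ x_prev + V @ h + b)   # recurrence of Eq. (4)
        mu = U_mu @ h                          # mean of the diagonal Gaussian
        logvar = U_logvar @ h                  # log of the diagonal variances
        ll += -0.5 * np.sum(logvar + (x_t - mu) ** 2 / np.exp(logvar)
                            + np.log(2 * np.pi))
        x_prev = x_t
    return ll
```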
method (Bertsekas, 1999) to distribution space.
This makes the training easy to parallelize.
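A sketch of the count-and-normalize updates for pi, A and B implied by the Lagrangian derivation (the helper, its input format and the small smoothing constant are illustrative assumptions):

```python
import numpy as np

def update_multinomials(segmentations, K, D):
    """Closed-form updates for pi, A, B from sampled segmentations (a sketch).
    `segmentations` is a list per sequence of (z, d) label/duration pairs."""
    pi = np.zeros(K)
    A = np.zeros((K, K))
    B = np.zeros((K, D))
    for segs in segmentations:            # segs: [(z_1, d_1), (z_2, d_2), ...]
        pi[segs[0][0]] += 1               # initial-state count
        for (z, d) in segs:
            B[z, d - 1] += 1              # duration count per label
        for (z_prev, _), (z_next, _) in zip(segs[:-1], segs[1:]):
            A[z_prev, z_next] += 1        # transitions on segment boundaries
    # normalize rows onto the probability simplex (with a small smoother)
    pi = (pi + 1e-6) / (pi + 1e-6).sum()
    A = (A + 1e-6) / (A + 1e-6).sum(axis=1, keepdims=True)
    B = (B + 1e-6) / (B + 1e-6).sum(axis=1, keepdims=True)
    return pi, A, B
```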
The duration is modeled by the Poisson distribution. We tune the concentration parameters α, γ ∈ {0.1, 1, 3, 6, 10}. The hyperparameters are learned automatically. For subHSMM, we tune the truncation threshold of the second-level infinite HMM from {2, ..., 15}.
This demonstrates the ability of R-HSMM to capture both long-term and short-term dependencies.
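For reference, the three GP kernels used for this synthetic data can be written out directly; the indicator in k3 is interpreted here as a nonnegativity clamp, and the jittered sampler is a minimal sketch rather than the exact data-generation code:

```python
import numpy as np

# The three GP kernels used for the synthetic segments.
k1 = lambda x, y: np.exp(-np.minimum(np.abs(x - y), np.abs(x + y)) ** 2 / 10.0)
k2 = lambda x, y: np.exp(-(x - y) ** 2 / 10.0)
k3 = lambda x, y: np.maximum(5.0 - np.abs(x - y), 0.0)  # triangular kernel

def sample_gp_segment(length, kernel, jitter=1e-6):
    """Draw one segment from a zero-mean GP with the given kernel (a sketch)."""
    t = np.arange(length, dtype=float)
    K = kernel(t[:, None], t[None, :]) + jitter * np.eye(length)
    return np.random.multivariate_normal(np.zeros(length), K)
```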
The visualization of the segmentation results is shown in Appendix B.4. As shown in Table 1, our algorithm still outperforms the baselines significantly. Also, for such long raw signal sequences, the speed advantage of the bi-RNN encoder over Viterbi is even more pronounced: Viterbi takes 8 minutes for one inference, while the bi-RNN takes only several seconds. Our framework is also flexible enough to incorporate prior knowledge, such as the regularity of heart state transitions, into the HSMM.
Muprop: Unbiased backpropagation for stochastic neural networks. arXiv preprint arXiv:1511.05176, 2015.

Lingpeng Kong, Chris Dyer, and Noah A Smith. Segmental recurrent neural networks. arXiv preprint arXiv:1511.06018, 2015.

Scott W Linderman, Andrew C Miller, Ryan P Adams, David M Blei, Liam Paninski, and Matthew Johnson. Recurrent switching linear dynamical systems. arXiv preprint arXiv:1610.08466, 2016.

Kevin P Murphy. Hidden semi-markov models (HSMMs). 2002.

Kevin P. Murphy. Machine learning: a probabilistic perspective. MIT Press, 2012.

Lawrence R Rabiner. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989.

Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.

Jorge-L Reyes-Ortiz, Luca Oneto, Albert Sama, Xavier Parra, and Davide Anguita. Transition-aware human activity recognition using smartphones. Neurocomputing, 171:754-767, 2016.

P. M. Williams. Bayesian conditionalisation and the principle of minimum information. British Journal for the Philosophy of Science, 31(2):131-144, 1980.

Shun-Zheng Yu. Hidden semi-markov models. Artificial Intelligence, 174(2):215-243, 2010.

Shun-Zheng Yu and Hisashi Kobayashi. An efficient forward-backward algorithm for an explicit-duration hidden markov model. Signal Processing Letters, IEEE, 10(1):11-14, 2003.

Arnold Zellner. Optimal Information Processing and Bayes's Theorem. The American Statistician, 42(4), November 1988.

Andriy Mnih and Danilo J Rezende. Variational inference for monte carlo objectives. arXiv preprint arXiv:1602.06725, 2016."}, {"section_index": "6", "section_name": "OPTIMIZING DYNAMIC PROGRAMMING", "section_text": "In this section, we show that Eq. 13 can be computed in a memory-efficient way. Specifically, the dynamic programming procedure can be done with an $O(|x|K)$ memory requirement, and caching the precomputed emission probabilities requires $O(D^2 K)$ memory space.

Caching emission probabilities. At each time step $t$, we compute $P(x_{t+r} \mid x_{t:t+r-1}, z = j)$ for each $j \in Z$ and $r \in D$; that is, we compute all the emission probabilities of observations starting from time $t$, within the maximum possible duration $D$. This can be done by performing a feed-forward pass of the $K$ generative RNNs. Storing these results requires $O(KD)$ space. For simplicity, we let $e^t_{j,r} = P(x_{t+r} \mid x_{t:t+r-1}, z = j)$, where $e^t \in \mathbb{R}^{K \times D}$. Note that at a certain time step $t$, the dynamic program requires the emission probabilities $P(x_t \mid x_{t-r+1:t-1}, z = j)$ for some $j \in Z$ and $r \in D$; the first observation of such a segment is at time $t - r + 1$, so the caches $e^{t-D+1}, \ldots, e^t$ must be kept at time step $t$. This makes the memory consumption of the cache $O(KD^2)$.

Updating the forward variable $\alpha$. Note that in Eq. 13, when $r > 1$, $\alpha_t(j, r)$ can be updated deterministically, so it is not necessary to keep records for $r > 1$. Specifically, we only record $\alpha_t(j, 1)$ and do the updates in a similar way as in Eq. 13. The only difference is that, when constructing the answer, i.e., the last segment of the solution, we need to loop over all possible $z$ and $d$ in order to find the best overall segmentation. It is then easy to see that the memory consumption of the dynamic program itself is $O(|x|K)$."}, {"section_index": "7", "section_name": "A.2 SQUEEZE THE TIME COMPLEXITY", "section_text": "In Eq. 13, the most expensive part is the case $r = 1$ and $t > 1$. If we solve it naively, this step requires $O(|x|K^2 D)$ time, which is quite expensive. Here we adopt a technique similar to Yu & Kobayashi (2003). Let $\gamma_t(i) = \max_{r' \in D} \alpha_{t-1}(i, r')$; then we can get

$$\alpha_t(j, 1) = \max_{i \in Z} \max_{r' \in D} \left[ \alpha_{t-1}(i, r') + \log\big(A_{i,j} B_{j,1} P(x_t \mid z = j)\big) \right] = \max_{i \in Z} \left[ \gamma_t(i) + \log\big(A_{i,j} B_{j,1} P(x_t \mid z = j)\big) \right].$$

This reduces the complexity to $O(|x|K^2)$.
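To make the two optimizations above concrete, here is a minimal NumPy sketch of the resulting MAP recursion. The names seg_logp, log_A, log_B and log_pi are our own placeholders, not code from the paper; the segment log-likelihoods are assumed to be precomputed from the cached $e^t$ matrices, and backtracking over `back` recovers the labels and durations.

```python
import numpy as np

def hsmm_map_decode(seg_logp, log_A, log_B, log_pi):
    """MAP decoding with the gamma trick; O(|x|(K^2 + KD)) time, O(|x|K) memory.

    seg_logp[t, j, r]: total emission log-prob of a segment of duration r + 1
                       ending at t under generative RNN j (cumulative sums of
                       the cached e^t matrices from the caching step above).
    log_A, log_B, log_pi: log transition (K x K), duration (K x D) and
                       initial-state (K,) probabilities.
    """
    T, K, D = seg_logp.shape
    best = np.full((T, K), -np.inf)        # best score of a segmentation ending a segment at t
    back = np.zeros((T, K, 2), dtype=int)  # (previous state, or -1; duration index)
    gamma = np.empty((T, K))               # gamma[t, j] = max_i best[t, i] + log_A[i, j]
    gamma_arg = np.zeros((T, K), dtype=int)
    for t in range(T):
        for j in range(K):
            for r in range(min(D, t + 1)):
                start = t - r              # segment covers positions start..t
                prev = log_pi[j] if start == 0 else gamma[start - 1, j]
                score = prev + log_B[j, r] + seg_logp[t, j, r]
                if score > best[t, j]:
                    best[t, j] = score
                    back[t, j] = (-1 if start == 0 else gamma_arg[start - 1, j], r)
        # the "gamma trick": collapse the max over predecessor states once per t
        scores = best[t][:, None] + log_A
        gamma[t] = scores.max(axis=0)
        gamma_arg[t] = scores.argmax(axis=0)
    return best, back                      # backtracking over `back` yields (z, d)
```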
Figure 5: More reconstruction illustrations on the Sine dataset.

Figure 6: More reconstruction illustrations on the Gaussian Process dataset.

The reconstructed signals are shown in Fig. 5 and Fig. 6 for the Sine dataset and the Gaussian Process dataset, respectively. We can see that the reconstructed signals almost recover the original ones. The RNN captured the key differences between states, such as frequency and scale, while on the Gaussian Process dataset it also recovered the complicated patterns involving long-term dependencies.

We show the confusion matrices of all methods on the synthetic Sine and Gaussian Process datasets in Figure 7 and Figure 8, respectively."}, {"section_index": "8", "section_name": "B.2 HUMAN ACTIVITY", "section_text": "The confusion matrices of our method and two baseline algorithms are shown in Figure 9.

In Figure 10, we also show several other segmentation results on different testing sequences."}, {"section_index": "9", "section_name": "B.3 DROSOPHILA", "section_text": "The confusion matrices of our method and two baseline algorithms are shown in Figure 11.

Since each sequence is too long to be clearly shown in one figure, we split the segmentation results of one sequence into four parts, and show them in Figure 12.

The confusion matrices of our method and two baseline algorithms on the heart sound data are shown in Figure 13.

Figure 7: Confusion matrix on the Synthetic Sine dataset; panels (a) rHSMM-dp, (b) rHSMM-fw, (c) subHSMM, (d) HSMM, (e) HDP-HSMM, (f) CRF-AE.
Figure 8: Confusion matrix on the Synthetic Gaussian Process dataset; panels (a)-(f) as in Figure 7.

Figure 9: Confusion matrix on the Human Activity dataset.

Figure 10: More segmentation results on the Human Activity dataset.

Figure 11: Confusion matrix on the Drosophila dataset; panels (a)-(f) as in Figure 7.

Figure 12: More segmentation results on the Drosophila dataset.

Also, we split the segmentation results of one sequence into four parts, and show them in Figure 14.

Figure 13: Confusion matrix on the Heart Sound dataset; panels (a)-(f) as in Figure 7.

Figure 14: More segmentation results on the Heart Sound dataset."}]
rky3QW9le | [{"section_index": "0", "section_name": "TRANSFORMATIONAL SPARSE CODING", "section_text": "Dimitrios C. Gklezakos & Rajesh P. N. Rao

Department of Computer Science and Center for Sensorimotor Neural Engineering, University of Washington, Seattle, WA 98105, USA

{gklezd, rao}@cs.washington.edu"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "A fundamental problem faced by object recognition systems is that objects and their features can appear in different locations, scales and orientations. Current deep learning methods attempt to achieve invariance to local translations via pooling, discarding the locations of features in the process. Other approaches explicitly learn transformed versions of the same feature, leading to representations that quickly explode in size. Instead of discarding the rich and useful information about feature transformations to achieve invariance, we argue that models should learn object features conjointly with their transformations to achieve equivariance. We propose a new model of unsupervised learning based on sparse coding that can learn object features jointly with their affine transformations directly from images. Results based on learning from natural images indicate that our approach matches the reconstruction quality of traditional sparse coding but with significantly fewer degrees of freedom, while simultaneously learning transformations from data. These results open the door to scaling up unsupervised learning to allow deep feature+transformation learning in a manner consistent with the ventral+dorsal stream architecture of the primate visual cortex."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "A challenging problem in computer vision is the reliable recognition of objects under a wide range of transformations. Approaches such as deep learning that have achieved success in recent years usually require large amounts of labeled data, whereas the human brain has evolved to solve the problem using an almost unsupervised approach to learning object representations. During early development, the brain builds an internal representation of objects from unlabeled images that can be used in a wide range of tasks.

Much of the complexity in learning efficient and general-purpose representations comes from the fact that objects can appear in different poses, at different scales, locations, orientations and lighting conditions. Models have to account for these transformed versions of objects and their features. Current successful approaches to recognition use pooling to allow limited invariance to two-dimensional translations (Ranzato et al., 2007). At the same time, pooling discards information about the location of the detected features. This can be problematic because scaling to large numbers of objects requires modeling objects in terms of parts and their relative pose, requiring the pose information to be retained.

Previous unsupervised learning techniques such as sparse coding (Olshausen & Field, 1997) can learn features similar to the ones in the visual cortex, but these models have to explicitly learn large numbers of transformed versions of the same feature and, as such, quickly succumb to combinatorial explosion, preventing hierarchical learning.
Other approaches focus on computing invariant object signatures (Anselmi et al., 2013; 2016), but are completely oblivious to pose information.

Ideally, we want a model that allows object features and their relative transformations to be simultaneously learned, endowing itself with a combinatorial explanatory capacity by being able to apply learned object features with object-specific transformations across large numbers of objects. The goal of modeling transformations in images is two-fold: (a) to facilitate the learning of pose-invariant sparse feature representations, and (b) to retain the pose information of object features for use in representation and recognition.

We propose a new model of sparse coding called transformational sparse coding that exploits a tree structure to account for large affine transformations. We apply our model to natural images. We show that our model can extract pose information from the data while matching the reconstruction quality of traditional sparse coding with significantly fewer degrees of freedom. Our approach to unsupervised learning is consistent with the concept of "capsules" first introduced by Hinton et al. (2011), and more generally, with the dorsal-ventral (features+transformations) architecture observed in the primate visual cortex."}, {"section_index": "3", "section_name": "2 TRANSFORMATIONAL SPARSE CODING", "section_text": "In traditional sparse coding, a vectorized input image $I$ is modeled as a sparse combination of features:

$$I \approx Fw \quad \text{s.t. } w \text{ is sparse}$$

Sparsity is usually enforced by an appropriate penalty; a typical choice is $S_1(w) = \|w\|_1$. We can enhance sparse coding with affine transformations by transforming features before combining them. The vectorized input image $I$ is then modeled as:

$$I = \sum_{k=1}^{K} w_k T(x_k) F_k$$

where $w_k$ and $F_k$ denote the $k$-th weight specific to the image and the $k$-th feature respectively, and $T(x_k)$ is a feature- and image-specific transformation.

In modeling image transformations we follow the approach of Rao & Ruderman (1999) and Miao & Rao (2007). We consider the 2D general affine transformations. These include rigid motions such as vertical and horizontal translations and rotations, as well as scaling, parallel hyperbolic deformations along the X/Y axis and hyperbolic deformations along the diagonals. A discussion of why these are good candidates for inclusion in a model of visual perception can be found in Dodwell (1983). Figure 5 in Appendix A shows the effects of each transformation.

Any subset of these transformations forms a Lie group with the corresponding number of dimensions (6 for the full set). Any transformation in this group can be expressed as the matrix exponential of a weighted combination of matrices (the group generators) that describe the behaviour of infinitesimal transformations around the identity:

$$T(x) = e^{\sum_j x_j G_j}$$

For images of $M$ pixels, $T(x)$ is a matrix of size $M \times M$. Note that the generator matrices and the features used are common across all images.
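To illustrate the mechanics of $T(x)$, the sketch below assembles the matrix exponential of a weighted sum of generators and applies it to a vectorized 1-D "image". The circular-shift generator is a toy stand-in (the paper's generators are built with sinc interpolation), and all names here are ours, not the paper's.

```python
import numpy as np
from scipy.linalg import expm

def shift_generator(n):
    """Toy generator of horizontal translation on a 1-D pixel grid."""
    S = np.roll(np.eye(n), 1, axis=1)   # one-pixel circular shift
    return S - np.eye(n)                # infinitesimal version of the shift

def make_transform(generators, x):
    """T(x) = expm(sum_j x_j G_j), acting on vectorized images."""
    A = sum(xj * Gj for xj, Gj in zip(x, generators))
    return expm(A)

# usage: smoothly translate a delta "image" by 2.5 pixels
generators = [shift_generator(8)]
I = np.zeros(8); I[3] = 1.0
I_shifted = make_transform(generators, [2.5]) @ I   # a smoothed translation
```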
The feature weights and transformation parameters can be inferred (and the features learned) by gradient descent on the regularized MSE objective:

$$L(w, x, F) = \frac{1}{N} \sum_{i=1}^{N} \Big\| I_i - \sum_{k=1}^{K} w_{ik} T(x_{ik}) F_k \Big\|_2^2 + \lambda_w S_1(w) + \lambda_F \|F\|_2^2$$

Although this model ties sparse coding with transformations elegantly, learning large transformations with it is intractable. The error surface of the loss function is highly non-convex, with many shallow local minima. Figures 1(a), 1(b), 1(c) show the surface of $L$ as a function of horizontal and vertical translation, horizontal translation and rotation, and vertical translation and rotation parameters. The model tends to settle for small transformations around the identity. Due to the size of the parameters that we need to maintain, a random restart approach would be infeasible.

Figure 1: Normalized reconstruction error for individual vs. batch 8x8 natural image patches. (a), (b), (c) show the surface of the reconstruction error over horizontal and vertical translations, horizontal translations and rotation, and vertical translations and rotations, for an individual data point and feature. (d), (e), (f) show the same, averaged over a batch of 2000 data points. The error is normalized between 0 and 1 for comparison. The global minimum in the range is marked in red. In the batch case, averaging makes the error surface smoother and learning easier.

We introduce Transformational Sparse Coding Trees to circumvent this problem using hierarchies of transformed features. The main idea is to gradually marginalize over an increasing range of transformations. Each node in the tree represents a feature derived as a transformed version of its parent, with the root being the template of the feature. The leaves are equivalent to a set of sparse basis features and are combined to reconstruct the input as described above. A version of the model using a forest of trees of depth one (flat trees) is given by:

$$I \approx \sum_{v=1}^{V} \sum_{b \in ch(v)} w_{vb} U_{vb}$$

where $U_{vb} = T(x_{v \to b}) F_v$ and $ch(v)$ denotes the children of root $v$. The feature $U_{vb}$ is a leaf, derived from the root feature $F_v$ via the fixed (across all data points) transformation $T(x_{v \to b})$. Deeper trees can be built accordingly (Section 3.3). A small example of a tree learned from natural image patches is shown in Figure 2.

There are multiple advantages to such a hierarchical organization of sparse features. Some transformations are more common in data than others. Each path in the tree corresponds to a transformation that is common across images; such a path can be viewed as a "transformation feature" learned from the data. Each additional node in the tree "costs" a fixed set of new parameters equal in size to the dimensions of the underlying Lie group (six in our case). At the same time, the node contributes a whole new feature to the sparse code. Averaging over many data points smoothens the surface of the error function and makes larger transformations more accessible to optimization. Figures 1(d), 1(e), 1(f) show the error surface averaged over a batch of 2000 patches.

For every leaf that is activated, the root template represents the identity of the feature, and the transformation associated with the path to the root represents the pose. In other words, the tree is an equivariant representation of the feature over the parameter region defined by the set of paths to the leaves, very similar to the concept of a capsule introduced by Hinton et al. (2011). In fact, every increasing subtree corresponds to a capsule of increasing size.

Figure 2: Example of a tree learned from natural image patches. The leaves correspond to rigid transformations of the root.
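As a concrete illustration of the flat-tree model above, the sketch below builds the leaf dictionary and the weighted reconstruction. It reuses the make_transform helper from the previous sketch, and all names (forest_leaves, reconstruct, branch_params) are our own placeholders rather than the paper's code.

```python
import numpy as np

def forest_leaves(roots, branch_params, generators, make_transform):
    """Build the leaf dictionary U_{v,b} = T(x_{v->b}) F_v of a flat forest.

    roots:         list of V root features, each a length-M vector F_v.
    branch_params: branch_params[v][b] is the parameter vector x_{v->b}
                   (length 6 for the full affine group) of leaf b of root v.
    """
    return [[make_transform(generators, x) @ F for x in branch_params[v]]
            for v, F in enumerate(roots)]

def reconstruct(leaves, weights):
    """I_hat = sum_v sum_b w_{v,b} U_{v,b}; `weights` mirrors the shape of `leaves`."""
    return sum(w * U for row_w, row_U in zip(weights, leaves)
                     for w, U in zip(row_w, row_U))
```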
The reconstruction mean squared-error (MSE) for a forest of flat trees is given by:

$$L_{MSE}(w, x, F) = \frac{1}{N} \sum_{i=1}^{N} \Big\| I_i - \sum_{v=1}^{V} \sum_{b \in ch(v)} w_{ivb} T(x_{v \to b}) F_v \Big\|_2^2$$

Increasing the feature magnitudes and decreasing the weights will result in a decrease in loss, so we constrain the root feature magnitudes to be of unit $\ell_2$ norm. Consider different, transformed versions of the same root template: for every such version there is a set of tree parameters that compensates for the intrinsic transformation of the root and results in the same leaves. To make the solution unique, we directly penalize the transformation parameter magnitudes. Since scaling and parallel deformation can also change the magnitude of the filter, we penalize them more, to keep features/leaves close to unit norm. The full loss function of the model is:

$$L(w, x, F) = L_{MSE}(w, x, F) + \lambda_w S_1(w) + \sum_{j=1}^{6} \lambda_j \|x_j\|_2^2 \quad \text{s.t. } \forall v, \ \|F_v\|_2 = 1$$

where $x_j$ is the vector of the collective parameters for generator $G_j$.

Lee et al. (2007) use an alternating optimization approach to sparse coding: first the weights are inferred using the feature-sign algorithm, and then the features are learned using a Lagrange dual approach. We use the same approach for the weights. We then optimize the transformation parameters using gradient descent. The root features can be optimized using the analytical solution, projecting to unit norm afterwards.

The matrix exponential gradient $\frac{\partial L}{\partial x}$ can be computed using the following formula (Ortiz et al., 2001):

$$\frac{\partial e^{A(t)}}{\partial t} = \int_0^1 D(\alpha)\, d\alpha, \quad \text{where } D(\alpha) = e^{\alpha A(t)} \frac{\partial A(t)}{\partial t} e^{(1-\alpha) A(t)}$$

For our experiments we approximated the gradient by drawing a few samples $\{\alpha_s\}$ and computing $\mathbb{E}_{\alpha \sim U(0,1)}[D(\alpha)]$.(1) This can be regarded as a stochastic version of the approach used by Culpepper & Olshausen (2009).

(1) In practice even a single sample works well. The computation over samples is easily parallelizable.
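A direct transcription of this stochastic estimator is given below (our code, with hypothetical names); chaining the result with $\partial L / \partial T$ via the chain rule yields $\partial L / \partial x_j$.

```python
import numpy as np
from scipy.linalg import expm

def expm_grad_mc(generators, x, n_samples=1, rng=np.random):
    """Monte Carlo estimate of dT/dx_j for T(x) = expm(A), A = sum_j x_j G_j.

    Uses D(a) = expm(a A) G_j expm((1 - a) A) with a ~ U(0, 1); with
    n_samples=1 this matches the single-sample variant in the footnote.
    """
    A = sum(xj * Gj for xj, Gj in zip(x, generators))
    grads = []
    for Gj in generators:
        samples = [expm(a * A) @ Gj @ expm((1.0 - a) * A)
                   for a in rng.uniform(0.0, 1.0, size=n_samples)]
        grads.append(sum(samples) / n_samples)
    return grads   # grads[j] approximates the Jacobian of T w.r.t. x_j
```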
Some features might get initialized near shallow local optima (i.e. close to the borders or outside the receptive field). These features eventually become under-used by the model.(2) We periodically check for under-used features and re-initialize their transformation parameters. For re-initialization, we select another feature in the same tree at random, with probability proportional to the fraction of data points that used it in that batch. We then reset the transformation parameters at random, with small variance, centered around the chosen filter's parameters.

(2) A feature is under-used when the total number of data points using it in a batch drops close to zero."}, {"section_index": "4", "section_name": "3.1 LEARNING REPRESENTATIONS", "section_text": "We apply transformational sparse coding (TSC) with forests of flat trees to natural image patches. Our approach allows us to learn features resembling those of traditional sparse coding. Apart from reconstructing the input, the model also extracts transformation parameters from the data. Figure 3 shows a reconstruction example. Figure 4 shows the root features learned from 10x10 natural image patches using a forest of size 8 with branching factor 8, equipped with the full six-dimensional group. The forest has a total of 64 features. Figure 4(a) shows the features corresponding to the roots; Figure 4(b) shows the corresponding leaves. Each row contains features derived from the same root. More examples of learned features are shown in Figures 7, 8, 9 and 10 in the Appendix.

Figure 3: Reconstruction example. The root features are transformed and combined with different weights to reconstruct (bottom right) the 8x8 natural image patch in the top right corner."}, {"section_index": "5", "section_name": "3.2 COMPARISON WITH SPARSE CODING", "section_text": "We compare transformational sparse coding forests of various layouts and choices for $\lambda_w$ with traditional sparse coding on 10x10 natural image patches. Some transformations change the feature magnitudes and therefore the sparsity pattern of the weights. To make the comparison clearer, for each choice of layout and penalty coefficient, we run sparse coding, constraining the feature magnitudes to be equal to the average feature magnitude of our model. The results are shown in Table 1. The reconstruction error of our model is close to that of sparse coding, albeit with slightly less sparse solutions, even though it has significantly fewer degrees of freedom. Our model extracts pose information in the form of group parameters.

Even though derivative features have to be explicitly constructed for inference, the degrees of freedom of our model are significantly lower than those of traditional sparse coding. Specifically:

$$df_{TSC} = (\#\text{ of roots}) \times (\#\text{ of pixels} - 1 + \text{branching factor} \times \text{group dimension})$$

Note that the group dimension is equal to 3 for rigid motions and 6 for general 2D affine transformations.

Figure 4: Learned features for 8 trees with a branching factor of 8. (a) Features corresponding to the roots. (b) Features/leaves: each row corresponds to leaves/transformations of the same root.

Table 1: Comparison of transformational sparse coding (TSC) with sparse coding (SC) for 10x10 natural image patches. We compare the error (MSE) and the degrees of freedom (df) over 40000 data points. "Sparsity" is the average number of non-zero weights. $\lambda_w$ is the penalty coefficient for the weights and controls the sparseness of the solution.

lambda_w | Layout  | TSC MSE | TSC Sparsity | df_TSC | SC MSE | SC Sparsity | df_SC | # of features | df_SC / df_TSC
0.4      | 1 x 64  | 2.13    | 13.3         | 447    | 1.71   | 12.3        | 6336  | 64            | 14.17
0.5      | 1 x 128 | 2.28    | 12.1         | 867    | 1.96   | 10.3        | 12672 | 128           | 14.62
0.4      | 8 x 8   | 1.89    | 13.3         | 1176   | 1.72   | 12.5        | 6336  | 64            | 5.38
0.4      | 4 x 16  | 1.91    | 13.3         | 780    | 1.69   | 12.3        | 6336  | 64            | 8.12
0.5      | 8 x 8   | 2.36    | 10.4         | 1176   | 2.15   | 9.9         | 6336  | 64            | 5.38
0.5      | 4 x 16  | 2.38    | 11.0         | 780    | 2.12   | 10.0        | 6336  | 64            | 8.12
0.4      | 16 x 16 | 1.66    | 14.3         | 3120   | 1.56   | 13.2        | 25344 | 256           | 8.12
0.4      | 8 x 32  | 1.67    | 14.6         | 2328   | 1.56   | 13.2        | 25344 | 256           | 10.88

We can define deeper trees by associating a set of transformation parameters with each branch. These correspond to additive contributions to the complete transformation that yields the leaf when applied to the root:

$$I \approx \sum_{v=1}^{V} \sum_{b \in ch(v)} w_{vb} T(x_P) F_v, \quad \text{where } x_P = \sum_{e \in path(v \to b)} x_e$$

Optimizing deeper trees is more demanding due to the increased number of parameters. Their advantage is that they lend structure to the model: the parameters corresponding to the subtree of an internal node tend to explore the parameter subspace close to the transformation defined by that internal node. In tasks where it is disadvantageous to marginalize completely over transformations, equivariant representations corresponding to intermediate tree layers can be used. An example of such structure is shown in Figure 6 in Appendix B.
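For deeper trees, the only change on the transformation side is that a leaf's parameters are the sum of the per-edge parameters along its root-to-leaf path. A minimal sketch, again reusing make_transform and with hypothetical names:

```python
def leaf_transform(edge_params, path, generators, make_transform):
    """Deeper trees: the leaf transformation is T(x_P) with
    x_P = sum of the per-edge parameter vectors along the root-to-leaf path."""
    x_P = sum(edge_params[e] for e in path)   # each edge_params[e] is a 6-vector
    return make_transform(generators, x_P)
```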
Sohl-Dickstein et al. (2010) present a model for fitting Lie groups to video data. Their approach only works for estimating a global transformation between consecutive video frames, and only supports transformations of a single kind (i.e. only rotations). Different such single-parameter transformations have to be chained together to produce the global one. The corresponding transformation parameters also have to be inferred and stored in memory, and cannot be directly converted to parameters of a single transformation. Kokiopoulou & Frossard (2009) present an approach to optimally estimating transformations between pairs of images; they support rigid motions and isotropic scaling. Culpepper & Olshausen (2009) focus on learning the group operators and transformation parameters from pairs of images, but do not learn features from data. In contrast with those approaches, our model learns object parts jointly with their transformations within the same image. Our model utilizes the full, six-dimensional, general affine Lie group and captures the pose of each object part in the form of a single set of six transformation parameters.

Grimes & Rao (2005) propose a bilinear model that combines sparse coding with transformations. The model accounts for global transformations that apply to the entire image region; our model accounts for individual transformations of image parts. Rao & Ballard (1998) propose a model that captures small image transformations with Lie groups using a first-order Taylor approximation; our model estimates larger transformations of image parts using the full exponential model. Rao & Ruderman (1999) and Miao & Rao (2007) use a first-order Taylor approximation to learn the group operators and the transformation parameters for small transformations.

The work closest to ours is that of Hinton et al. (2011) on capsules. A capsule learns to recognize its template (feature) over a wide range of poses. The pose is computed by a neural network (encoder). The decoder, resembling a computer graphics engine, combines the capsule templates in different poses to reconstruct the image. Each transformational sparse coding tree can be thought of as a capsule: the template corresponds to the root, and the tree learns to "recognize" transformed versions of that template. Our work arrives at the concept of a capsule from a sparse coding perspective. A major difference is that our approach allows us to reuse each feature multiple times, in different transformed versions, for each data point.

Gens & Domingos (2014) propose a convolutional network that captures symmetries in the data by modeling symmetry groups; experiments with rigid motions or various affine transformations show reduced sample complexity. Cohen & Welling (2016) propose a convolutional network that can handle translations, reflections and rotations of 90 degrees. Dieleman et al. (2016) propose a network that handles translations and rotations. All of the above are supervised learning models and, apart from the first, can handle only a limited set of transformations. Our model is completely unsupervised, extends sparse coding, and can handle all transformations given by the first-order differential equation:

$$\frac{\partial I(\theta)}{\partial \theta} = A\, I(\theta)$$

"}, {"section_index": "6", "section_name": "5 CONCLUSION", "section_text": "In this paper, we proposed a sparse coding based model that learns object features jointly with their transformations, from data. Naively extending sparse coding to data-point-specific transformations makes inference intractable. We introduce a new technique that circumvents this issue by using a tree structure that represents common transformations in data.
We show that our approach can learn interesting features from natural image patches, with performance comparable to that of traditional sparse coding.

Investigating the properties of deeper trees, learning the tree structure dynamically from the data, and extending our model into a hierarchy are subjects of ongoing research."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, and Tomaso A. Poggio. Unsupervised learning of invariant representations in hierarchical architectures. CoRR, abs/1311.4158, 2013. URL http://arxiv.org/abs/1311.4158.

Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, and Tomaso Poggio. Unsupervised learning of invariant representations. Theor. Comput. Sci., 633(C):112-121, June 2016. ISSN 0304-3975. doi: 10.1016/j.tcs.2015.06.048. URL http://dx.doi.org/10.1016/j.tcs.2015.06.048.

David B. Grimes and Rajesh P. N. Rao. Bilinear sparse coding for invariant vision. Neural Computation, 17(1):47-73, January 2005. ISSN 0899-7667. doi: 10.1162/0899766052530893. URL http://dx.doi.org/10.1162/0899766052530893.

Bruno A Olshausen and David J Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311-3325, 1997.

M. Ortiz, R. A. Radovitzky, and E. A. Repetto. The computation of the exponential and logarithmic mappings and their first and second linearizations. International Journal for Numerical Methods in Engineering, 52:1431, December 2001. doi: 10.1002/nme.263.

Robert Gens and Pedro Domingos. Deep symmetry networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems, NIPS'14, pp. 2537-2545, Cambridge, MA, USA, 2014. MIT Press. URL http://dl.acm.org/citation.cfm?id=2969033.2969110.

Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. In B. Scholkopf, J. C. Platt, and T. Hoffman (eds.), Advances in Neural Information Processing Systems 19, pp. 801-808. MIT Press, 2007. URL http://papers.nips.cc/paper/2979-efficient-sparse-coding-algorithms.pdf.

Marc'Aurelio Ranzato, Fu-Jie Huang, Y-Lan Boureau, and Yann LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Proc. Computer Vision and Pattern Recognition Conference (CVPR'07). IEEE Press, 2007.

Rajesh P. N. Rao and Daniel L. Ruderman. Learning Lie groups for invariant visual perception. In Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II, pp. 810-816, Cambridge, MA, USA, 1999. MIT Press. ISBN 0-262-11245-0. URL http://dl.acm.org/citation.cfm?id=340534.340807.

Jascha Sohl-Dickstein, Jimmy C. Wang, and Bruno A. Olshausen. An unsupervised algorithm for learning Lie group transformations. CoRR, abs/1001.1027, 2010. URL http://arxiv.
org/abs/1001.1027."}, {"section_index": "8", "section_name": "B DEEPER TREES AND STRUCTURE", "section_text": "Figure 6 presents an example of structure learned by deeper trees. This example consists of vertical and horizontal lines: each image patch is either blank, contains one vertical or one horizontal line, or contains both, with fixed probabilities. Each line is generated at one of eight positions at random. Fitting two binary trees results in some continuity in the features, whereas flat trees provide no such structure.

Figure 5: Effects of each individual transformation on the template (a): (b) horizontal translation, (c) vertical translation, (d) rotation, (e) scaling, (f) parallel hyperbolic deformation along the X/Y axis, (g) hyperbolic deformation along the diagonals. To compute the generators, we used the sinc interpolation function.

Figure 6: Features learned for the double-line example: (a) input, (b) features learned by a forest of two flat trees of size eight, (c) features learned by two binary trees of the same size. For (c) the leaves have been reordered with subtree permutations to reveal the order. Each subtree learns features corresponding to an area of the input.

Figure 7: Learned features for 16 trees with branching factor 16. Each row corresponds to leaves/transformations of the same root.

Figure 8: Learned features for 8 trees with branching factor 32. Each row corresponds to leaves/transformations of the same root.

Figure 9: Learned features for 4 trees with branching factor 16. Each row corresponds to leaves/transformations of the same root.

Figure 10: Learned features for 1 tree with branching factor 64. All features are transformations of the same root."}]
SJIMPr9eg | [{"section_index": "0", "section_name": "BOOSTED RESIDUAL NETWORKS", "section_text": "Alan Mosca & George D. Magoulas

Department of Computer Science and Information Systems, Birkbeck, University of London, Malet Street, WC1E 7HX, London, UK

{a.mosca, gmagoulas}@dcs.bbk.ac.uk"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "In this paper we present a new ensemble method, called Boosted Residual Networks, which builds an ensemble of Residual Networks by growing the member network at each round of boosting. The proposed approach combines recent developments in Residual Networks, a method for creating very deep networks by including a shortcut layer between different groups of layers, with Deep Incremental Boosting, which has been proposed as a methodology to train fast ensembles of networks of increasing depth through the use of boosting. We demonstrate that the synergy of Residual Networks and Deep Incremental Boosting has better potential than simply boosting a Residual Network of fixed structure or using the equivalent Deep Incremental Boosting without the shortcut layers.

Residual Networks, a type of deep network recently introduced in He et al. (2015a), are characterized by the use of shortcut connections (sometimes also called skip connections), which connect the input of a layer of a deep network to the output of another layer positioned a number of levels 'above' it. The result of these shortcuts is that networks can be built in blocks, each of which relies on both the output of the previous layer and the previous block."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Residual Networks have been developed with many more layers than traditional Deep Networks, in some cases with over 1000 blocks, such as the networks in He et al. (2016). A recent study in Veit et al. (2016) compares Residual Networks to an ensemble of smaller networks. This is done by unfolding the shortcut connections into the equivalent tree structure, which closely resembles an ensemble. An example of this is shown in Figure 1.

Figure 1: A Residual Network of N blocks can be unfolded into an ensemble of 2^N - 1 smaller networks.

Dense Convolutional Neural Networks (Huang et al., 2016) are another type of network that makes use of shortcuts, with the difference that each layer is connected to all its ancestor layers directly by a shortcut. Similarly, these could also be unfolded into an equivalent ensemble.

True ensemble methods are often left as an afterthought in Deep Learning models: it is generally considered sufficient to treat the Deep Learning method as a "black-box" and use a well-known generic Ensemble method to obtain marginal improvements on the original results. Whilst this is an effective way of improving on existing results without much additional effort, we find that it can amount to a waste of computations. Instead, it would be much better to apply an Ensemble method that is aware, and makes use of, the underlying Deep Learning algorithm's architecture.

We define such methods as "white-box" Ensembles, which allow us to improve on the generalisation and training speed compared to traditional Ensembles, by making use of particular properties of the base classifier's learning algorithm and architecture. We propose a new such method, which we call Boosted Residual Networks, which makes use of developments in Deep Learning and previous white-box Ensembles, and combines several ideas to achieve improved results on benchmark datasets. Using a white-box ensemble allows us to improve on the generalisation and training speed by making use of the knowledge of the base classifier's structure and architecture; experimental results show that Boosted Residual Networks achieves improved results on benchmark datasets.

The next section presents the background on Deep Incremental Boosting.
Experiments and results are discussed next, and the paper ends with conclusions.

Deep Incremental Boosting, introduced in Mosca & Magoulas (2016a), is an example of such a white-box ensemble method, developed for building ensembles of Convolutional Networks. The method makes use of principles from transfer of learning, like for example those used in Yosinski et al. (2014), applying them to conventional AdaBoost (Schapire, 1990). Deep Incremental Boosting increases the size of the network at each round by adding new layers at the end of the network, allowing subsequent rounds of boosting to run much faster. In the original paper on Deep Incremental Boosting (Mosca & Magoulas, 2016a), this has been shown to be an effective way to learn the corrections introduced by the emphasisation of learning mistakes of the boosting process. The argument as to why this works effectively is based on the fact that the datasets at rounds t and t + 1 will be mostly similar, and therefore a classifier h_t that performs better than randomly on the resampled dataset X_t will also perform better than randomly on the resampled dataset X_{t+1}. This is under the assumption that both datasets are sampled from a common ancestor set X_a. It is subsequently shown that such a classifier can be re-trained on the differences between X_t and X_{t+1}.

This practically enables the ensemble algorithm to train the subsequent rounds for a considerably smaller number of epochs, consequently reducing the overall training time by a large factor. The original paper also provides a conjecture-based justification for why it makes sense to extend the previously trained network to learn the "corrections" taught by the boosting algorithm. A high-level description of the method is shown in Algorithm 1, and the structure of the network at each round is illustrated in Figure 2.

Algorithm 1 Deep Incremental Boosting

Figure 2: Illustration of subsequent rounds of DIB
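The per-example re-weighting that DIB inherits from conventional boosting can be written down exactly; below is a self-contained NumPy version of that bookkeeping (a discrete, SAMME-style AdaBoost update, our generic stand-in rather than code from the paper).

```python
import numpy as np

def adaboost_update(sample_w, y_true, y_pred, n_classes):
    """One round of discrete AdaBoost (SAMME) re-weighting.

    sample_w: current per-example weights (sums to 1).
    Returns the member weight alpha and the renormalized sample weights.
    """
    miss = (y_pred != y_true)
    err = np.clip(np.average(miss, weights=sample_w), 1e-12, 1 - 1e-12)
    alpha = np.log((1.0 - err) / err) + np.log(n_classes - 1)
    sample_w = sample_w * np.exp(alpha * miss)   # emphasize the mistakes
    return alpha, sample_w / sample_w.sum()
```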
In this section we propose a method for generating Boosted Residual Networks. This works by increasing the size of an original residual network by one residual block at each round of boosting. The method achieves this by selecting an injection point index p_t at which the new block is to be added, which is not necessarily the last block in the network, and by transferring the weights from the layers below p_t in the network trained at the previous round of boosting.

Because the boosting method performs iterative re-weighting of the training set to skew the resample at each round to emphasize the training examples that are harder to train, it becomes necessary to utilise the entire ensemble at test time, rather than just use the network trained in the last round. This has the effect that Boosted Residual Networks cannot be used as a way to train a single Residual Network incrementally. However, as we will discuss later, it is possible to alleviate this situation by deriving an approach that uses bagging instead of boosting, therefore removing the necessity to use the entire ensemble at test time. It is also possible to delete individual blocks from a Residual Network at training and/or testing time, as presented in He et al. (2015a); however, this issue is considered out of the scope of this paper.

The iterative algorithm used in the paper is shown in Algorithm 2 (a schematic sketch of the training loop is given after the optional variations below). At the first round, the entire training set is used to train a network of the original base architecture, for a number of epochs n_0. After the first round, the following steps are taken at each subsequent round t:

The ensemble constructed so far is evaluated on the training set to obtain the set errors e, so that a new training set can be sampled from the original training set. This is a step common to all boosting algorithms.

A new network is created, with the addition of a new block of layers B_new immediately after position p_t, which is determined as an initial pre-determined position p_0 plus an offset i * d for all the blocks added at previous rounds. This puts the new block of layers immediately after the block of layers added at the previous round, so that all new blocks are effectively added sequentially.

The weights from the layers below p_t are copied from the network trained at round t - 1 to the new network. This step allows the training to be considerably shortened, thanks to the transfer of learning shown in Yosinski et al. (2014).

The newly created network is subsequently trained for a reduced number of epochs n_{t>0}.

The new network is added to the ensemble following the traditional rules and weight alpha_t used in AdaBoost.

Figure 3 shows a diagram of how the Ensemble is constructed by deriving the next network at each round of boosting from the network used in the previous round.

Algorithm 2 Boosted Residual Networks

Figure 3: Illustration of subsequent rounds of BRN

We identified a number of optional variations to the algorithm that may be implemented in practice, which we have empirically established as not having an impact on the overall performance of the network. We report them here for completeness:

Freezing the layers that have been copied from the previous round.

Only utilising the weights distribution for the examples in the training set, instead of resampling, as an input to the training algorithm.

Inserting the new block always at the same position, rather than after the previously inserted block (we found this to affect performance negatively).
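The following sketch shows how the steps above fit together. The deep-learning specifics (build_base, grow_at, train, predict) are caller-supplied placeholders, since the paper does not prescribe a framework; the boosting bookkeeping reuses the adaboost_update function from the previous sketch.

```python
import numpy as np

def boosted_residual_networks(X, y, n_classes, build_base, grow_at, train,
                              n_rounds=10, p0=0, d=1):
    """Schematic of Algorithm 2 (names and helper signatures are ours).

    build_base()            -> initial residual network.
    grow_at(net, position)  -> copy of `net` with a fresh residual block
                               injected after `position`, old weights kept.
    train(net, X, y, w, t)  -> trained net (full schedule for t == 0,
                               short schedule otherwise); must expose .predict.
    """
    sample_w = np.full(len(X), 1.0 / len(X))
    net, ensemble = build_base(), []
    for t in range(n_rounds):
        if t > 0:
            net = grow_at(net, position=p0 + t * d)   # inject after the last added block
        net = train(net, X, y, sample_w, t)
        alpha, sample_w = adaboost_update(sample_w, y, net.predict(X), n_classes)
        ensemble.append((alpha, net))                 # weighted vote at test time
    return ensemble
```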
"}, {"section_index": "3", "section_name": "3.1 COMPARISON TO APPROXIMATE ENSEMBLES", "section_text": "While both Residual Networks and Densely Connected Convolutional Networks may be unfolded into an equivalent ensemble, we note that there is a differentiation between an actual ensemble method and an ensemble "approximation". During the creation of an ensemble, one of the principal factors is the creation of diversity: each base learner is trained independently, on variations (resamples in the case of boosting algorithms) of the training set, so that each classifier is guaranteed to learn a different function that represents an approximation of the training data. This is the enabling factor for the ensemble to perform better in aggregate.

In the case of Densely Connected Convolutional Networks (DCCN) specifically, one may argue that a partial unfolding of the network could be, from a schematic point of view, very similar to an ensemble of incrementally constructed Residual Networks. We make the observation that, although this would be correct, on top of the benefit of diversity, our method also provides a much faster training methodology: the only network that is trained for a full schedule is the network created at the first round, which is also the smallest one. All subsequent networks are trained for a much shorter schedule, saving a considerable amount of time. Additionally, while the schematic may seem identical, there is a subtle difference: each member network outputs a classification of its own, which is then aggregated by weighted averaging, whilst in a DCCN the input of the final aggregation layer is the output of each underlying set of layers. We conjecture that this aggressive dimensionality reduction before the aggregation will have a regularising effect on the ensemble.

Table 1: Test accuracy in the three benchmarks for the methods compared.

          | Single Net | AdaBoost | DIB     | BRN
MNIST     | 99.41 %    | 99.41 %  | 99.47 % | 99.53 %
CIFAR-10  | 89.12 %    | 89.74 %  | 90.83 % | 90.85 %
CIFAR-100 | 67.25 %    | 68.18 %  | 68.56 % | 69.04 %

In the experiments we used the MNIST, CIFAR-10 and CIFAR-100 datasets, and compared Boosted Residual Networks (BRN) with an equivalent Deep Incremental Boosting without the skip-connections (DIB), AdaBoost with the equivalent Residual Network as its base classifier (AdaBoost), and the single Residual Network (Single Net). In order to reduce noise, we aligned the random initialisation of all networks across experiments by fixing the seeds for the random number generators, and no dataset augmentation was used, either online or offline. Results are reported in Table 1, while Figure 4 shows a side-by-side comparison of accuracy levels at each round of boosting for both DIB and BRN on the MNIST and CIFAR-100 test sets. This figure illustrates how BRNs are able to consistently outperform DIB regardless of ensemble size, and although such differences still fall within a Bernoulli confidence interval of 95%, we note that this does not take account of the fact that all the random initialisations were aligned, so both methods started with the exact same network.

Table 2 shows that this is achieved without significant changes in the training time.(1) The main speed increase is due to the fact that the only network being trained with a full schedule is the first network, which is also the smallest, whilst all other derived networks are trained for a much shorter schedule (in this case only 10% of the original training schedule).

The initial network architectures for the first round of boosting are shown in Table 3a for MNIST and Table 3b for CIFAR-10 and CIFAR-100. It is worth mentioning that we used relatively simple network architectures that were fast to train, which still perform well on the datasets at hand, with accuracy close to, but not comparable to, the state of the art. This enabled us to test larger Ensembles within an acceptable training time.

Training used the WAME method (Mosca & Magoulas, 2016b), which has been shown to be faster than Adam and RMSprop, whilst still achieving comparable generalisation. This is thanks to a specific weight-wise learning rate acceleration factor that is determined based only on the signs of the current and previous partial derivatives dE/dw_ij.

(1) In some cases BRN is actually faster than DIB, but we believe this to be just noise due to external factors such as system load.
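As an illustration of a sign-driven, weight-wise learning-rate factor, here is a generic Rprop-style update in NumPy. This is our stand-in only; the actual WAME rule is specified in Mosca & Magoulas (2016b).

```python
import numpy as np

def signed_step_update(lr, grad, prev_grad, accel=1.2, decel=0.9,
                       lr_min=1e-6, lr_max=1.0):
    """Per-weight learning-rate adaptation driven only by derivative signs:
    accelerate where the gradient sign is stable, decelerate where it flips."""
    same_sign = np.sign(grad) == np.sign(prev_grad)
    lr = np.where(same_sign, lr * accel, lr * decel)
    return np.clip(lr, lr_min, lr_max)
```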
For the networks in AdaBoost, we trained each member for 100 epochs. For Deep Incremental Boosting and Boosted Residual Networks, we trained the first round for 50 epochs, and every subsequent round for 10 epochs, and ran all the algorithms for 10 rounds of boosting, except for the single network. The structure of each incremental block added to Deep Incremental Boosting and Boosted Residual Networks at each round is shown in Table 4a for MNIST, and in Table 4b for CIFAR-10 and CIFAR-100. All layers were initialised following the recommendations in He et al. (2015b).

Table 3: Network structures used in experiments. The layers marked with "*" indicate the location after which we added the residual blocks.

(a) MNIST: 64 conv 5x5 -> 2x2 max-pooling -> 128 conv 5x5 -> 2x2 max-pooling * -> Dense, 1024 nodes -> 50% dropout
(b) CIFAR-10 and CIFAR-100: 2x 96 conv 3x3 -> 96 conv 3x3, 2x2 strides -> 2x2 max-pooling -> 2x 192 conv 3x3 -> 192 conv 3x3, 2x2 strides -> 2x2 max-pooling * -> 192 conv 3x3 -> 192 conv 1x1 -> 10 conv 1x1 -> global average pooling -> 10-way softmax

Table 4: Structure of blocks added at each round of DIB and BRN.

(a) MNIST: 64 conv 3x3 -> Batch Normalization -> ReLU activation
(b) CIFAR-10 and CIFAR-100: 192 conv 3x3 -> Batch Normalization -> ReLU activation -> 192 conv 3x3 -> Batch Normalization -> ReLU activation

Table 2: Training times comparison.

          | ResNet  | AdaBoost | DIB     | BRN
MNIST     | 115 min | 442 min  | 202 min | 199 min
CIFAR-10  | 289 min | 1212 min | 461 min | 449 min
CIFAR-100 | 303 min | 1473 min | 407 min | 448 min

Figure 4: Round-by-round comparison of DIB vs BRN on the test set; panels (a) MNIST and (b) CIFAR-100.

Distilled Boosted Residual Network (DBRN). In another set of experiments we tested the performance of a Distilled Boosted Residual Network (DBRN). Distillation has been shown to be an effective process for regularising large Ensembles of Convolutional Networks in Mosca & Magoulas (2016c), and we have applied the same methodology to the proposed Boosted Residual Network. For the distilled network structure we used the same architecture as that of the Residual Network from the final round of boosting. Accuracy results in testing are presented in Table 5, and for completeness of comparison we also report the results for the distillation of DIB, following the same procedure, as DDIB.

Table 5: Comparative results in terms of testing accuracy.

          | DBRN    | DDIB
MNIST     | 99.49 % | 99.44 %
CIFAR-10  | 91.11 % | 90.66 %
CIFAR-100 | 66.63 % | 65.91 %

Bagged Residual Networks (BARN). We experimented with substituting the boosting algorithm with a simpler bagging algorithm (Breiman, 1996) to evaluate whether it would be possible to only use the network from the final round of bagging as an approximation of the Ensemble. We called this the Bagged Approximate Residual Networks (BARN) method. We then also tested the performance of the distilled version of the whole Bagging Ensemble for comparison. These results are reported as "DBARN".
The results are reported in Table 6. It is clear that trying to use the last round of bagging is not comparable to using the entire Bagging ensemble at test time, or to deriving a new distilled network from it.

Table 6: Test accuracy for BARN.

          | BRN     | Bagging | BARN    | DBARN
MNIST     | 99.50 % | 99.55 % | 99.29 % | 99.36 %
CIFAR-10  | 90.56 % | 91.43 % | 88.47 % | 90.63 %
CIFAR-100 | 69.04 % | 68.15 % | 69.42 % | 66.16 %

In this paper we have derived a new ensemble algorithm specifically tailored to Convolutional Networks to generate Boosted Residual Networks. We have shown that this surpasses the performance of a single Residual Network equivalent to the one trained at the last round of boosting, of an ensemble of such networks trained with AdaBoost, and of Deep Incremental Boosting, on the MNIST and CIFAR datasets, without using augmentation techniques.

We then derived and looked at a distilled version of the method, and how this can serve as an effective way to reduce the test-time cost of running the Ensemble. We used Bagging as a proxy to test generating the approximate Residual Network, which, with the parameters tested, does not perform as well as the original Residual Network, BRN or DBRN.

Further experimentation with the distilled methods presented in the paper, namely DBRN and DBARN, is necessary to fully investigate their behaviour; this is indeed part of our work in the near future. Additionally, the Residual Networks built in our experiments were comparatively smaller than those that achieve state-of-the-art performance. Reaching state-of-the-art performance on specific benchmark datasets was not our goal; instead we intended to show that we developed a methodology that makes it feasible to create ensembles of Residual Networks following a "white-box" approach to significantly improve the training times and accuracy levels. Nevertheless, it might be appealing in the future to evaluate the performance improvements obtained when creating ensembles of larger, state-of-the-art networks. Further investigation could also be conducted on the creation of Boosted Densely Connected Convolutional Networks, by applying the same principle to DCCN instead of Residual Networks."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "L. Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015a.

Alan Mosca and George Magoulas. Deep incremental boosting. In Christoph Benzmuller, Geoff Sutcliffe, and Raul Rojas (eds.), GCAI 2016. 2nd Global Conference on Artificial Intelligence, volume 41 of EPiC Series in Computing, pp. 293-302. EasyChair, 2016a.

Alan Mosca and George D. Magoulas. Training convolutional networks with weight-wise adaptive learning rates. In Under Review, 2016b.

R. E. Schapire. The strength of weak learnability. Machine Learning, 5:197-227, 1990.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016."}]
HyWWpw5ex | [{"section_index": "0", "section_name": "RECURRENT COEVOLUTIONARY FEATURE EMBEDDING PROCESSES FOR RECOMMENDATION", "section_text": "Hanjun Dai, Yichen Wang, Rakshit Trivedi & Le Song

Authors have equal contributions.

{hanjundai, yichen.wang, rstrivedi}@gatech.edu, lsong@cc.gatech.edu"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Recommender systems often use latent features to explain the behaviors of users and capture the properties of items. As users interact with different items over time, user and item features can influence each other, evolve and co-evolve over time. To accurately capture the fine-grained nonlinear coevolution of these features, we propose a recurrent coevolutionary feature embedding process model, which combines a recurrent neural network (RNN) with a multi-dimensional point process model. The RNN learns a nonlinear representation of user and item embeddings which takes into account the mutual influence between user and item features, and the feature evolution over time. We also develop an efficient stochastic gradient algorithm for learning the parameters. Experiments on diverse real-world datasets demonstrate significant improvements in user behavior prediction compared to state-of-the-art methods."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "E-commerce platforms and social service websites, such as Reddit, Amazon, and Netflix, attract thousands of users every second. Effectively recommending the appropriate service items to users is a fundamentally important task for these online services: it can significantly boost user activity on these sites, leading to increased product purchases and advertisement clicks.

The interactions between users and items play a critical role in driving the evolution of user interests and item features. For example, in music streaming services, a long-time fan of Rock music listens to an interesting Blues song one day, and starts to listen to more Blues instead of Rock music. Similarly, a single piece of music may also serve different audiences at different times; e.g., a song initially targeted at an older generation may become popular among the young, and the features of this song then need to be updated. Furthermore, as users interact with different items, users' interests and items' features can also co-evolve over time, i.e., their features are intertwined and can influence each other:

User -> item. In online discussion forums such as Reddit, although a group (item) is initially created for statistics topics, users with very different interest profiles can join this group. Hence the participants can shape the features of the group through their postings. It is likely that this group will finally become one about deep learning, because most of its users care about deep learning.

Item -> user. As the group evolves towards topics on deep learning, some users may become more interested in deep learning topics, and they may participate in other specialized groups on deep learning. On the opposite side, some users may gradually gain interest in pure math groups, lose interest in statistics, and become inactive in this group.

The co-evolutionary nature of user-item interactions raises very important questions on how to learn them from the increasingly available data. However, existing methods either treat the temporal user-item interaction data as a static graph or use epoch-based methods such as tensor factorization to learn the latent features (Chi & Kolda, 2012; Koren, 2009; Yang et al., 2011). These methods are not able to capture the fine-grained temporal dynamics of user-item interactions.
process based models treat time as a random variable and improve over the traditional methods significantly (Du et al., 2015; Wang et al., 2016b). However, these works make strong assumptions about the functional form of the generative processes, which may not reflect reality or be accurate enough to capture the complex and nonlinear user-item influences of the real world.

* Authors have equal contributions

{hanjundai, yichen.wang, rstrivedi}@gatech.edu, lsong@cc.gatech.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "[Figure 1 diagram: user and item feature embedding updates, each composed of a drift, self-evolution, co-evolution, and interaction-context term.]

Figure 1: Model illustration. (a) User-item interaction events data. Each edge stands for a tuple and contains the information of user, item, interaction time, and interaction feature. (b) The latent features of the user and item are updated at each event time by a nonlinear activation function σ(·), and contain four terms: self evolution, co-evolution, context (interaction feature), and self drift.

In this paper, we propose a recurrent coevolutionary feature embedding process framework. It combines recurrent neural networks (RNN) with point process models, and efficiently captures the co-evolution of user-item features. Our model can automatically find an efficient representation of the underlying user and item latent features without assuming fixed parametric forms in advance. Figure 1 summarizes our framework. In particular, our work makes the following contributions:

• Novel model. We propose a novel model that captures the nonlinear co-evolution nature of users' and items' embeddings. It assigns an evolving feature embedding process to each user and item, and the co-evolution of these latent feature processes is modeled with two parallel components: (i) an item -> user component, where a user's latent feature is determined by the nonlinear embedding of the latent features of the items he interacted with; and (ii) a user -> item component, where an item's latent features are likewise determined by the latent features of the users who interact with the item.
• Technical challenges. We use an RNN to parametrize the interdependent and intertwined user and item embeddings. The increased flexibility and generality further introduces technical challenges for training an RNN on co-evolving graphs. The co-evolution nature of the model makes the samples inter-dependent and not identically distributed, which is contrary to the assumptions of the traditional setting and significantly more challenging. We are the first to propose an efficient stochastic training algorithm that makes BPTT tractable on the co-evolving graph.
• Strong performance. We evaluate our method on multiple datasets, verifying that it can lead to significant improvements in user behavior prediction compared to previous state-of-the-art methods. Precise time prediction is especially novel, and not possible for most prior work.

Recent work predominantly fixes the latent features assigned to each user and item (Salakhutdinov & Mnih, 2008; Chen et al., 2009; Agarwal & Chen, 2009; Ekstrand et al., 2011; Koren & Sill, 2011; Yang et al., 2011; Yi et al., 2014; Wang & Pal, 2015). In more sophisticated methods, the time is divided into epochs, and static latent feature models are applied to each epoch to capture some temporal aspects of the data (Koren, 2009; Karatzoglou et al., 2010; Xiong et al., 2010; Chi & Kolda, 2012; Gultekin & Paisley, 2014; Charlin et al., 2015; Preeti Bhargava et al., 2015; Gopalan et al., 2015; Hidasi & Tikk, 2015; Wang et al., 2016a). For such methods, it is not clear how to choose the epoch length parameter. First, different users may have very different timescales when they interact with service items, making it difficult to choose a unified epoch length. Second, it is not easy for these methods to answer time-sensitive queries such as when a user will return to the service item: the predictions are only at the resolution of the chosen epoch length. Recently, Du et al. (2015) proposed a low-rank point process based model for time-sensitive
recommendations from recurrent user activities. However, it fails to capture the heterogeneous coevolutionary properties of user-item interactions. Wang et al. (2016b) model the co-evolutionary property, but use a simple linear representation of the users' and items' latent features, which might not be expressive enough to capture real-world patterns. As demonstrated in Du et al. (2016), the nonlinear RNN is quite flexible and can approximate many point process models. We will also show that our model has only O(#users + #items) parameters apart from the RNN-related parameters, and can potentially be applied in the online setting.

In the deep learning community, Wang et al. (2015a) proposed a hierarchical Bayesian model that jointly performs learning of the content features and collaborative filtering of the ratings matrix. Hidasi et al. (2016) applied RNNs and adopted an item-to-item recommendation approach with session-based data.
Tan et al. (2016) improved this model with techniques like data augmentation and temporal change adaptation. Ko et al. (2016) proposed a collaborative RNN that extends the collaborative filtering method to capture the history of user behavior; specifically, they use static global latent factors for items and assign separate latent factors for users that depend on their past history. Song et al. (2016) extended the deep semantic structured model to capture the multi-granularity temporal preferences of users: they use a separate RNN for each temporal granularity and combine them with a feed-forward network that models users' and items' long-term static features. However, none of these works model the coevolution of users' and items' latent features; they are still extensions of epoch-based methods. Our work is unique since we explicitly treat time as a random variable and capture the coevolution of users' and items' latent features using temporal point processes. Finally, our work is inspired by the recurrent marked temporal point process model (Du et al., 2016). However, that work only focuses on learning a one-dimensional point process; our work is significantly different, since we focus on the recommendation-system setting with the novel idea of feature coevolution, and we use multi-dimensional point processes to capture user-item interactions."}, {"section_index": "3", "section_name": "3 BACKGROUND ON TEMPORAL POINT PROCESSES", "section_text": "A temporal point process (Cox & Isham, 1980; Cox & Lewis, 2006; Aalen et al., 2008) is a random process whose realization consists of a list of discrete events localized in time, {t_i} with t_i ∈ R+. Equivalently, a given temporal point process can be represented as a counting process, N(t), which records the number of events before time t. An important way to characterize temporal point processes is via the conditional intensity function λ(t), a stochastic model for the time of the next event given all the previous events. Formally, λ(t)dt is the conditional probability of observing an event in a small window [t, t + dt) given the history H(t) up to t and that the event has not happened before t, i.e.,

λ(t)dt := P{event in [t, t + dt) | H(t)} = E[dN(t) | H(t)].    (1)

The functional form of the intensity λ(t) is often designed to capture the phenomena of interest. Some commonly used forms include:

• Hawkes processes (Hawkes, 1971; Wang et al., 2016c), whose intensity models the mutual excitation between events: λ(t) = η + α Σ_{t_i < t} κ_ω(t − t_i), where κ_ω(t) = exp(−ωt) is an exponential triggering kernel and η > 0 is a baseline intensity. Here, the occurrence of each historical event increases the intensity by a certain amount determined by the kernel κ_ω and the weight α > 0, making the intensity history dependent and a stochastic process by itself.
• Rayleigh processes, whose intensity function is λ(t) = αt, where α > 0 is a weight parameter.

4 RECURRENT COEVOLUTIONARY FEATURE EMBEDDING PROCESSES

In this section, we present the generative framework for modeling the temporal dynamics of user-item interactions. We first use an RNN to explicitly capture the co-evolving nature of users' and items' latent features. Then, based on the compatibility between the users' and items' latent features, we model the user-item interactions by a multi-dimensional temporal point process, and we parametrize the intensity function by the compatibility between the users' and items' latent features.

EVENT REPRESENTATION

Given m users and n items, we denote the ordered list of N observed events as O = {e_j = (u_j, i_j, t_j, q_j)}, with 0 ≤ t_1 ≤ t_2 ≤ ... ≤ T. Each event represents the interaction between user u_j and item i_j at time t_j, with interaction context q_j ∈ R^d. Here q_j can be a high-dimensional vector such as a text review, or simply the embedding of static user/item features such as the user's profile and the item's categorical features. For notational simplicity, we define O^u = {e^u_k = (i^u_k, t^u_k, q^u_k)} as the ordered list of all events related to user u, and O^i = {e^i_k = (u^i_k, t^i_k, q^i_k)} as the ordered list of all events related to item i. We also set t^u_0 = t^i_0 = 0 for all users and items, and t^− denotes the time point just before time t.

We associate feature embeddings u_u(t) ∈ R^k with each user u and i_i(t) ∈ R^k with each item i. These features represent the subtle properties which cannot be directly observed, such as the interests of a user and the semantic topics of an item. Specifically, we model the drift, evolution, and co-evolution of u_u(t) and i_i(t) as piecewise constant functions of time that have jumps only at event times. In particular, we define:

User latent feature embedding process. For each user u, the corresponding embedding after user u's k-th event e^u_k = (i^u_k, t^u_k, q^u_k) can be formulated as:
u_u(t^u_k) = σ( W_1 (t^u_k − t^u_{k−1}) + W_2 u_u(t^u_{k−1}) + W_3 i_{i^u_k}(t^{u−}_k) + W_4 q^u_k ),
              [temporal drift]      [self evolution]      [co-evolution: item feature]   [interaction feature]

Item latent feature embedding process. Similarly, for each item i, the embedding after the item's k-th event e^i_k = (u^i_k, t^i_k, q^i_k) is:

i_i(t^i_k) = σ( V_1 (t^i_k − t^i_{k−1}) + V_2 i_i(t^i_{k−1}) + V_3 u_{u^i_k}(t^{i−}_k) + V_4 q^i_k ),
              [temporal drift]      [self evolution]      [co-evolution: user feature]   [interaction feature]

where t^− means the time point just before time t, W_4, V_4 ∈ R^{k×d} are the embedding matrices mapping from the explicit high-dimensional feature space into the low-rank latent feature space, and W_1, V_1 ∈ R^k, W_2, V_2, W_3, V_3 ∈ R^{k×k} are weight parameters. σ(·) is a nonlinear activation function, such as the Tanh or Sigmoid commonly used for RNNs. For simplicity, we use a basic recurrent neural network to formulate the recurrence, but it is also straightforward to extend it with GRUs or LSTMs to gain more expressive power. Figure 1 summarizes the basic setting of our model.

Here both the user's and the item's feature embedding processes are piecewise constant functions of time and are only updated when an interaction event happens. A user's attributes change only when he has a new interaction with some item; for example, a user's taste for music changes only when he listens to some new or old music. Likewise, an item's attributes change only when some user interacts with it. Chen et al. (2013) also model temporal change with piecewise constant functions, but their work has no coevolution modeling and is not capable of predicting future time points.

Next we discuss the rationale of each term in detail:

• Temporal drift. The first term is defined based on the time difference between consecutive events of a specific user or item. It allows the basic features of users (e.g., a user's self-crafted interests) and items (e.g., textual categories and descriptions) to smoothly drift through time. Such changes of basic features are normally caused by external influences.
• Self evolution. The current user feature should also be influenced by its feature at an earlier time. This captures the intrinsic evolution of user/item features; for example, a user's current taste should be more or less similar to his/her taste two days ago.
• User-item coevolution. Users' and items' latent features can mutually influence each other. This term captures two parallel processes. First, a user's embedding is determined by the latent features of the items he interacted with: at each time t_k, the latent item feature is i_{i_k}(t^−_k), and we capture both the temporal influence and the feature of each history item as a latent embedding. Conversely, an item's embedding is determined by the feature embedding of the user who just interacted with it.
• Evolution with interaction features. Users' and items' features can evolve and be influenced by the characteristics of their interactions. For instance, the genre changes of movies indicate the changing tastes of users, and the theme of a chat group can easily be shifted toward the topics of the involved discussions. This term captures the influence of the current interaction features on the changes of the latent user (item) features.
• Interaction feature. This is the additional information produced in user-item interactions. For example, in online discussion forums such as Reddit, the interaction features are the posts and comments; in online review sites such as Yelp, they are the reviews of the businesses.

To summarize, each feature embedding process evolves according to the respective base temporal user (item) features, and the processes are also mutually dependent on each other due to the endogenous influences from the interaction features and the entangled latent features. A small illustrative sketch of these updates is given below.
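The following minimal numpy sketch paraphrases one user/item update step. It is our own illustration, not the released code; the function names and the choice of a Sigmoid nonlinearity are assumptions consistent with the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_user(u_prev, i_now, q, dt, W1, W2, W3, W4):
    # temporal drift + self evolution + co-evolution (item) + interaction feature
    return sigmoid(W1 * dt + W2 @ u_prev + W3 @ i_now + W4 @ q)

def update_item(i_prev, u_now, q, dt, V1, V2, V3, V4):
    # the same four terms, with the roles of user and item swapped
    return sigmoid(V1 * dt + V2 @ i_prev + V3 @ u_now + V4 @ q)
```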
For each user-item pair (u, i), we parametrize the intensity function of the interaction point process by the compatibility of the instantaneous user and item embeddings:

λ^{u,i}(t | t') = exp( u_u(t')^T i_i(t') ) · (t − t'),    (4)

where the first factor measures the user-item compatibility, the second factor is the time lapse, t > t', and t' is the last time point at which either user u's embedding or item i's embedding changed before time t. The rationale behind this formulation is three-fold:

• Time as a random variable. Instead of discretizing time into epochs as traditional methods do (Charlin et al., 2015; Preeti Bhargava et al., 2015; Gopalan et al., 2015; Hidasi & Tikk, 2015; Wang et al., 2016a), we explicitly model the timing of each interaction event as a random variable, which naturally captures the heterogeneity of the temporal interactions between users and items.
• Short term preference. The probability for user u to interact with item i depends on the compatibility of their instantaneous embeddings, evaluated through the inner product at the last event time t'. Because u_u(t) and i_i(t) co-evolve through time, their inner product measures a general representation of the cumulative influence from past interactions on the occurrence of the current event. The exp(·) function ensures the intensity is positive and well defined.
• Rayleigh time distribution. The user and item embeddings are piecewise constant, and we use the time-lapse term to make the intensity piecewise linear. This form leads to a Rayleigh distribution for the time intervals between consecutive events in each dimension. It is well adapted to modeling fads, where the event-happening likelihood f(·) in (1) rises to a peak and then drops extremely rapidly. Furthermore, it is computationally easy to obtain an analytic form of f(·); one can then use f(·) to make item recommendations by finding the dimension in which f(·) reaches its peak.

With the parameterized intensity function, we can further estimate the parameters using maximum likelihood estimation over all events. The joint negative log-likelihood is (Daley & Vere-Jones, 2007):

ℓ = − Σ_{j=1}^{N} log( λ^{u_j,i_j}(t_j | t'_j) ) + Σ_{u=1}^{m} Σ_{i=1}^{n} ∫_0^T λ^{u,i}(τ | τ') dτ.    (5)

The rationale of the objective is two-fold: (i) the negative intensity summation term ensures that the probability of all interaction events is maximized; (ii) the second, survival-probability term penalizes the non-presence of an interaction between all possible user-item pairs over the observation window. Hence, our framework not only explains why an event happened, but also why an event did not happen."}, {"section_index": "4", "section_name": "5 PARAMETER LEARNING", "section_text": "In this section, we propose an efficient algorithm to learn the parameters {V_i}_{i=1}^4 and {W_i}_{i=1}^4. The batch objective function is presented in (5). Back Propagation Through Time (BPTT) is the standard way to train an RNN. To make back-propagation tractable, one typically needs to truncate during training. However, due to the novel co-evolutionary nature of our model, all the events are related to each other through the user-item bipartite graph (Figure 2), which makes the computation hard to decompose.

Hence, in sharp contrast to works on sequential data (Hidasi et al., 2016; Du et al., 2016), where one can easily break the sequences into multiple segments to make BPTT tractable, it is challenging to design BPTT in our case. To solve this problem efficiently, we first order all the events globally and then do mini-batch training in a sliding-window fashion. Each time we conduct feed-forward and back-propagation, we take the consecutive events within the current sliding window to build the computational graph. Thus, in our case the truncation is over the global timeline, instead of over individual independent sequences as in prior works.

Next, we explain our procedure in detail. Given a mini-batch of M ordered events O = {e_j}_{j=1}^M, we set the time span to be [T_0 = t_1, T = t_M]. Below we show how to compute the intensity and survival-probability terms of the objective function (5), respectively.
Figure 2: Intensity computation. (a) Each arrow means the flow of feature embedding computation; e.g., Jacob interacts with basketball at 10:15am. Then the embeddings are updated: his feature at 10:15am is influenced by his feature and the basketball feature at 9:45am (arrows 1 and 2); the basketball's feature is influenced by Jacob's feature and its own feature (arrows 3 and 4). (b) The event dependency for two users and two forums (items). It shows how an event in one dimension influences other dimensions. Each orange arrow represents the dependency within a dimension, and each black arrow denotes a cross-dimension dependency; e.g., Sophie interacts with volleyball at 2:30pm, and this event changes the volleyball embedding, thus affecting Jacob's visit at 3:30pm.

Figure 3: Survival probability computation. (a) A user or item's feature embedding is piecewise constant and changes only after an interaction event happens; only one dimension of the feature embedding is shown. (b) Survival probability for a user-item pair (u, i). The integral ∫_{T_0}^{T} λ^{u,i}(τ | τ') dτ is decomposed into 4 inter-event intervals separated by {t_0, ..., t_3}, with a closed form on each interval.

Computing the intensity function. Each time a new event e_j happens between u_j and i_j, their corresponding feature embeddings evolve according to a computational graph, as illustrated in Figure 2a. Due to the change of feature embedding, all dimensions related to u_j or i_j are influenced, and the intensity functions for those dimensions change consequently. This cross-dimension dependency is shown in Figure 2b. In our implementation, we first compute the corresponding intensity λ^{u_j,i_j}(t_j | t'_j) according to (4), and then update the embeddings of u_j and i_j. This operation takes O(M) complexity and is independent of the number of users or items.

Computing the survival function. To compute the survival probability ∫_{T_0}^{T} λ^{u,i}(τ | τ') dτ for each pair (u, i), we first collect all the time stamps {t_k} that have events related to either u or i. For notational simplicity, let |{t_k}| = n_{u,i}, t_1 = T_0 and t_{n_{u,i}} = T. Since the embeddings are piecewise constant, the corresponding intensity function is piecewise linear according to (4). Thus, the integral decomposes over the time intervals on which the intensity is linear, i.e.,

∫_{T_0}^{T} λ^{u,i}(τ | τ') dτ = Σ_{k=1}^{n_{u,i}−1} ∫_{t_k}^{t_{k+1}} λ^{u,i}(τ | t_k) dτ = Σ_{k=1}^{n_{u,i}−1} (1/2) (t_{k+1} − t_k)^2 exp( u_u(t_k)^T i_i(t_k) ).    (6)

Figure 3 visualizes the computation.
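The closed-form survival integral in (6) is simple to implement. A minimal numpy sketch follows (ours, not the released code); `event_times`, `user_emb` and `item_emb` are hypothetical inputs holding the change points and the piecewise-constant embeddings on each interval.

```python
import numpy as np

def survival_integral(event_times, user_emb, item_emb):
    """event_times: sorted times [T0, ..., T] at which u's or i's embedding changes.
    user_emb[k], item_emb[k]: embeddings on the interval [t_k, t_{k+1})."""
    total = 0.0
    for k in range(len(event_times) - 1):
        dt = event_times[k + 1] - event_times[k]
        compat = np.exp(user_emb[k] @ item_emb[k])
        # Integral of compat * (tau - t_k) over [t_k, t_{k+1}] has a closed form.
        total += 0.5 * compat * dt ** 2
    return total
```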
Although the survival probability term exists in closed form, we still need to address two challenges. First, it is still expensive to compute it for each user-item pair. Moreover, since the user-item interaction bipartite graph is very sparse, it is not necessary to monitor every dimension in the stochastic training setting. To speed up the computation, we propose a novel random-sampling scheme as follows.

Note that the intensity term in the objective function (5) tries to maximize the inner product between each user and item that have an interaction event, while the survival term penalizes all other pairs of inner products. We observe that this is similar to the softmax computation in classification problems. Hence, inspired by the noise-contrastive estimation method (Gutmann & Hyvarinen, 2012) that is widely used in language models (Mnih & Kavukcuoglu, 2013), we keep the dimensions that have events on them, while randomly sampling dimensions without events in the current mini-batch.

The second challenge lies in the fact that the user-item interactions vary a lot across mini-batches, hence the corresponding computational graph also changes greatly. To make the learning efficient, we use the graph embedding framework (Dai et al., 2016), which allows training deep learning models where each term in the objective has a different computational graph but with shared parameters. The Adam optimizer (Kingma & Ba, 2014) together with gradient clipping is used in our experiments."}, {"section_index": "5", "section_name": "6 EXPERIMENTS", "section_text": "We evaluate our model on real-world datasets. For each sequence of user activities, we use all the events up to time T · p as the training data, and the remaining events as the testing data, where T is the length of the observation window. We tune the latent rank of the baselines using 5-fold cross validation with grid search. We vary the proportion p ∈ {0.7, 0.72, 0.74, 0.76, 0.78} and report the averaged results over five runs on two tasks (we will release code and data once published):

• Item prediction. At each test time t, we predict the item that the user u will interact with. We rank all items in descending order of the conditional density f^{u,i}(t) = λ^{u,i}(t) S^{u,i}(t). We report the Mean Average Rank (MAR) of each test item at the test time. Ideally, the item associated with the test time t should rank first, hence a smaller value indicates better predictive performance.
• Time prediction. We predict the expected time at which a testing event will occur between a given user-item pair. We report the Mean Absolute Error (MAE) between the predicted and true times (see the sketch after the dataset descriptions below).

We compare with the following methods, summarized in Table 1.

Table 1: Comparison with different methods

                  DeepCoevolve  LowRankHawkes  Coevolving     PoissonTensor  TimeSVD++      FIP            STIC
Continuous time   ✓             ✓              ✓                                                           ✓
Predict item      ✓             ✓              ✓              ✓              ✓              ✓
Predict time      ✓             ✓              ✓                                                           ✓
Computation       RNN           Factorization  Factorization  Factorization  Factorization  Factorization  HMM

• LowRankHawkes (Du et al., 2015): a low-rank Hawkes process model which assumes user-item interactions to be independent of each other and does not capture the co-evolution of user and item features.
• Coevolving (Wang et al., 2016b): a multi-dimensional point process model which uses a simple linear embedding to model the co-evolution of user and item features.
• PoissonTensor (Chi & Kolda, 2012): Poisson tensor factorization has been shown to perform better than factorization methods based on squared loss (Karatzoglou et al., 2010; Xiong et al., 2010; Wang et al., 2015b) on recommendation tasks. The performance of this baseline is reported using the average of the parameters fitted over all time intervals.
• TimeSVD++ (Koren, 2009) and FIP (Yang et al., 2011): these two methods are designed only for explicit ratings; the implicit user feedback (a series of interaction events) is converted into explicit ratings via the respective frequencies of interactions.
• STIC (Kapoor et al., 2015): fits a semi-hidden Markov model (HMM) to each observed user-item pair and is designed only for time prediction.

We use three real-world datasets, as follows:

• IPTV. It contains 7,100 users' watching history of 385 TV programs over 11 months (Jan 1 - Nov 30, 2012), with around 2M events and 1,420 movie features (including 1,073 actors, 312 directors, 22 genres, 8 countries and 5 years).
• Yelp. This data was made available in the Yelp Dataset Challenge, Round 7. It contains reviews for various businesses from October 2004 to December 2015. The dataset we use here contains 1,005 users and 47,924 businesses, with 291,716 reviews in total.
• Reddit. We collected discussion-related data on different subreddits (groups) for the month of January 2014. We filtered all bot users and their posts from this dataset. Furthermore, we randomly selected 1,000 users, 1,403 groups, and 10,000 discussion events.
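For the Rayleigh-type intensity in (4), both test tasks reduce to simple closed-form computations: the conditional density f = λ · S used to rank items, and the expected next-event time (the mean of a Rayleigh distribution with scale 1/sqrt(α)). The numpy sketch below is our own illustration, with hypothetical function names.

```python
import numpy as np

def conditional_density(t, t_prev, u_emb, i_emb):
    alpha = np.exp(u_emb @ i_emb)                        # user-item compatibility
    dt = t - t_prev
    return alpha * dt * np.exp(-0.5 * alpha * dt ** 2)   # Rayleigh density f = lambda * S

def expected_next_time(t_prev, u_emb, i_emb):
    alpha = np.exp(u_emb @ i_emb)
    # Mean of a Rayleigh distribution with scale sigma = 1 / sqrt(alpha).
    return t_prev + np.sqrt(np.pi / (2.0 * alpha))
```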
Figure 4: Prediction results on the three real-world datasets. Top row: MAR for item prediction; bottom row: MAE for time prediction; panels (a) IPTV, (b) Reddit, (c) Yelp.

Figure 4 shows that DeepCoevolve significantly outperforms both the epoch-based baselines and the state-of-the-art point-process-based methods. LowRankHawkes has good performance on item prediction but not on time prediction, while Coevolving has good performance on time prediction but not on item prediction. We discuss the performance with respect to the two metrics below.

Item prediction. Note that the best possible MAR one can achieve is 1, and our method obtains quite accurate results, with values of 1.7 on IPTV and 1.9 on Reddit. LowRankHawkes achieves comparable item prediction performance, but not on the time prediction task. We think the reason is as follows: since one only needs the rank of the conditional density f(·) in (1) to conduct item prediction, LowRankHawkes may still be good at differentiating the conditional density functions while not learning their actual values accurately, as shown in the time prediction task, where the value of the conditional density function is needed for precise prediction.

Time prediction. The second row of Figure 4 shows that DeepCoevolve outperforms the other methods. Compared with LowRankHawkes, which achieves comparable item prediction performance, it obtains a 6x improvement on Reddit, a 10x improvement on Yelp, and a 30x improvement on IPTV. The time unit is the hour, so this corresponds to an accuracy improvement of around 2 weeks on IPTV and 2 days on Reddit. This is important for online merchants making time-sensitive recommendations.
An intuitive explanation is that our method accurately captures the nonlinear patterns of user-item interactions. The competitor LowRankHawkes assumes specific parametric forms of the user-item interaction process, hence may not be accurate or expressive enough to capture real-world temporal patterns. Furthermore, it models each user-item interaction dimension independently, which may lose important influence from a user's interactions with other items when predicting the current item's reoccurrence time. Our work also outperforms Coevolving, e.g., with around a 3x MAE improvement on IPTV. Moreover, the item prediction performance is also much better than Coevolving. This shows the importance of using an RNN to capture the nonlinear embedding of user and item latent features instead of the simple parametrized linear embedding in Coevolving."}, {"section_index": "6", "section_name": "6.4 INSIGHT OF RESULTS", "section_text": "We will look deeper and provide the rationale behind the prediction results in the following two subsections. First, to understand the difficulty of conducting prediction tasks on each dataset, we study their different sparsity properties. For multidimensional point process models, the fewer events we observe in each dimension, the more sparse the dataset is. Our approach alleviates the sparsity problem via the modeling of dependencies among dimensions, and is thus consistently better than the other baseline algorithms.

Next, we fix one dataset and evaluate how different levels of sparsity in the training data influence each algorithm's performance.

Figure 5: Visualization of the sparsity property of each dataset. The first row shows the distribution of the number of events per user. The second row shows the user-item interaction graph, generated as follows: for each dataset, we randomly pick 10 users with 100 history events each and collect all items they have interacted with. The interaction graph is bipartite, with users on the left side and items on the right side. Panels: (a) IPTV, 385 items; (b) Reddit, 1,403 groups; (c) Yelp, 47,924 businesses.

Sparsity in terms of the number of events per user. Typically, the more user history data we have, the better results we obtain in the prediction tasks. We can see that in the IPTV dataset, users typically have a longer history than the users in the Reddit and Yelp datasets. Thus our algorithm and all the baseline methods have their best performance on this dataset. However, for the Reddit and Yelp datasets it is hard to judge the expected performance based only on the distribution of history lengths, so we do a more detailed visualization.

Sparsity in terms of diversity of items to recommend. From the bipartite graph, it is easy to see that the Yelp dataset has a higher density than the other two datasets. The density of the interaction graph reflects the variety of history per user. For example, users in IPTV only have 385 programs to watch, but they can choose among 47,924 businesses in the Yelp dataset. Also, the Yelp dataset has 9 times more items than the IPTV and Reddit datasets in the bipartite graph. This means the users in the Yelp dataset have more diverse tastes than users in the other two datasets, because if users had similar tastes, the number of distinct items in the union of their histories would be small.

Based on the above two facts, we can see that the Yelp dataset is the most sparse: since it has a shorter history per user and much more diversity of items, it is not surprising that this dataset is much harder than the IPTV and Reddit datasets.
"}, {"section_index": "7", "section_name": "6.4.2 ROBUSTNESS OF THE ALGORITHM", "section_text": "With a case study on the most challenging Yelp dataset, we further evaluate how each algorithm performs with a lower level of sparsity compared to the one used in Figure 4(c). We use this to demonstrate that our work is the most robust and performs well across different levels of sparsity.

We first create Yelp100, a denser dataset, by filtering the original Yelp dataset to keep the top 100 users; each of these users has at least 200 events. Figure 6(a) shows the statistics of this dataset: on average, users have more history events than in the original Yelp dataset in Figure 5(c).

On this dense dataset, Figures 6(b) and (c) show that all the algorithms' performance improves with more history events, compared to their performance on the original Yelp dataset. For example, LowRankHawkes has similar rank prediction results to our DeepCoevolve on this dense dataset. However, as the dataset becomes sparse, the performance of LowRankHawkes drops significantly, as shown in Figure 4(c): the rank prediction error goes from 90 to 2128, and the time error goes from 724 to 11043.5. We think this is because that model relies more on the history information of each individual user-item pair.

Figure 6: Comparison of performance with different amounts of history. (a) Distribution of the number of events per user in Yelp100; (b) MAR; (c) MAE.

On the contrary, our DeepCoevolve still has superior performance at such a high level of sparsity: the rank error only changes from 87 to 107, and the time error changes from 72 to 884 as the data becomes sparse. This shows that our work is the most robust to sparsity in the data. We think this is because it accurately captures the nonlinear multidimensional dependencies between users' and items' latent features."}, {"section_index": "8", "section_name": "7 CONCLUSION", "section_text": "We have proposed an efficient framework for modeling the nonlinear co-evolution of users' and items' latent features, in which the users' and items' evolving and co-evolving processes are captured by an RNN. It is based on temporal point processes and models time as a random variable, in sharp contrast to prior epoch-based works. We demonstrate the superior performance of our method on both the time and the item prediction tasks, which is not possible for most prior work. Future work includes extending the framework to other social applications, such as group dynamics in messaging services.

D.R. Cox and V. Isham. Point processes, volume 12. Chapman & Hall/CRC, 1980.

Hanjun Dai, Bo Dai, and Le Song. Discriminative embeddings of latent variable models for structured data. In ICML, 2016.

D.J. Daley and D. Vere-Jones. An introduction to the theory of point processes: volume II: general theory and structure, volume 2.
Springer, 2007.

Nan Du, Yichen Wang, Niao He, and Le Song. Time sensitive recommendation from recurrent user activities. In NIPS, 2015.

Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: Embedding event history to vector. In KDD, 2016.

Prem Gopalan, Jake M Hofman, and David M Blei. Scalable recommendation with hierarchical poisson factorization. In UAI, 2015.

Balazs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. Session-based recommendations with recurrent neural networks. In ICLR, 2016.

Komal Kapoor, Karthik Subbian, Jaideep Srivastava, and Paul Schrater. Just in time recommendations: Modeling the dynamics of boredom in activity streams. In WSDM, 2015.

Alexandros Karatzoglou, Xavier Amatriain, Linas Baltrunas, and Nuria Oliver. Multiverse recommendation: N-dimensional tensor factorization for context-aware collaborative filtering. In RecSys, 2010.

Eric C Chi and Tamara G Kolda. On tensors, sparsity, and nonnegative factorizations. SIAM Journal on Matrix Analysis and Applications, 33(4):1272-1299, 2012.

Michael D Ekstrand, John T Riedl, and Joseph A Konstan. Collaborative filtering recommender systems. Foundations and Trends in Human-Computer Interaction, 2011.

Y. Koren. Collaborative filtering with temporal dynamics. In KDD, 2009.

Yong K Tan, Xinxing Xu, and Yong Liu. Improved recurrent neural networks for session-based recommendations. arXiv:1606.08117v2, 2016.

Yichen Wang and Aditya Pal. Detecting emotions in social media: A constrained optimization approach. In IJCAI, 2015.

Yichen Wang, Bo Xie, Nan Du, and Le Song. Isotonic Hawkes processes. In ICML, 2016c.

Liang Xiong, Xi Chen, Tzu-Kuo Huang, Jeff G. Schneider, and Jaime G. Carbonell. Temporal collaborative filtering with bayesian probabilistic tensor factorization. In SDM, 2010.

Shuang-Hong Yang, Bo Long, Alex Smola, Narayanan Sadagopan, Zhaohui Zheng, and Hongyuan Zha. Like like alike: Joint friendship and interest propagation in social networks. In WWW, 2011.

Xing Yi, Liangjie Hong, Erheng Zhong, Nathan Nan Liu, and Suju Rajan. Beyond clicks: Dwell time for personalization. In RecSys, 2014.

Young-Jun Ko, Lucas Maystre, and Matthias Grossglauser. Collaborative recurrent neural networks for dynamic recommender systems. Journal of Machine Learning Research, pp. 1-16, 2016.

Yehuda Koren and Joe Sill. OrdRec: An ordinal model for predicting personalized item rating distributions. In RecSys, 2011.

Andriy Mnih and Koray Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. In Advances in Neural Information Processing Systems, pp. 2265-2273, 2013.

Preeti Bhargava, Thomas Phan, Jiayu Zhou, and Juhan Lee. Who, what, when, and where: Multi-dimensional collaborative recommendations using tensor factorization on sparse user-generated data. In WWW, 2015.

R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using markov chain monte carlo. In ICML, 2008.

Yang Song, Ali Mamdouh Elkahky, and Xiaodong He. Multi-rate deep learning for temporal recommendation. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 909-912, 2016.

Yichen Wang, Nan Du, Rakshit Trivedi, and Le Song. Coevolutionary latent feature processes for continuous-time user-item interactions. In NIPS, 2016b."}, {"section_index": "9", "section_name": "DETAILS ON GRADIENT COMPUTATION", "section_text": "Computing gradient. For illustration purposes, we here use the Sigmoid nonlinear activation function σ.
In order to obtain the gradients with respect to the parameters W, we first compute the gradients with respect to each change point of the embeddings. For user u's embedding after his k-th event, the corresponding partial derivative collects the gradient back-propagated through user u's own next update,

∂ℓ/∂u_u(t^u_k) = W_2^T [ ∂ℓ/∂u_u(t^u_{k+1}) ⊙ u_u(t^u_{k+1}) ⊙ (1 − u_u(t^u_{k+1})) ] + ∂/∂u_u(t^u_k) [ intensity and ∫ λ^{u,i}(τ | τ') dτ terms of (5) ],    (7)

together with the gradient back-propagated through the next updates of the item embeddings that depend on u_u(t^u_k) via the co-evolution term,

+ Σ V_3^T [ ∂ℓ/∂i_i(t^i_{k'}) ⊙ i_i(t^i_{k'}) ⊙ (1 − i_i(t^i_{k'})) ],    (8)

where ⊙ denotes element-wise multiplication.

The gradient coming from the second term (i.e., the survival term) is also easy to compute, since the Rayleigh distribution has a closed-form survival function. For a given item i, if its features did not change within the time interval [t^u_k, t^u_{k+1}], then we have

∂/∂u_u(t^u_k) ∫_{t^u_k}^{t^u_{k+1}} λ^{u,i}(τ | τ') dτ = (1/2) (t^u_{k+1} − t^u_k)^2 exp( u_u(t^u_k)^T i_i(t^u_k) ) i_i(t^u_k).    (9)

On the other hand, if the embedding of item i changes during this time interval, we break the interval into segments and compute the sum of the gradients over each segment, in a way similar to (7). Thus, we are able to compute the gradients with respect to W_i, i ∈ {1, 2, 3, 4}, as follows:

∂ℓ/∂W_1 = Σ_{u=1}^m Σ_k [ ∂ℓ/∂u_u(t^u_k) ⊙ (1 − u_u(t^u_k)) ⊙ u_u(t^u_k) ] (t^u_k − t^u_{k−1})
∂ℓ/∂W_2 = Σ_{u=1}^m Σ_k [ ∂ℓ/∂u_u(t^u_k) ⊙ (1 − u_u(t^u_k)) ⊙ u_u(t^u_k) ] u_u(t^u_{k−1})^T
∂ℓ/∂W_3 = Σ_{u=1}^m Σ_k [ ∂ℓ/∂u_u(t^u_k) ⊙ (1 − u_u(t^u_k)) ⊙ u_u(t^u_k) ] i_{i^u_k}(t^{u−}_k)^T
∂ℓ/∂W_4 = Σ_{u=1}^m Σ_k [ ∂ℓ/∂u_u(t^u_k) ⊙ (1 − u_u(t^u_k)) ⊙ u_u(t^u_k) ] (q^u_k)^T

Since the items are treated symmetrically with the users, the corresponding derivatives with respect to the V_i can be obtained in a similar way."}]
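As a small sanity check of (6) and (9), the numpy script below (our own illustration, with placeholder values) numerically integrates the intensity of Eq. (4) and compares a finite-difference gradient against the analytic closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
u, i_emb = rng.normal(size=4), rng.normal(size=4)
t0, t1 = 1.0, 1.7   # one inter-event interval [t_k, t_{k+1}]

def survival_numeric(u_vec, n=200000):
    taus = np.linspace(t0, t1, n)
    lam = np.exp(u_vec @ i_emb) * (taus - t0)        # intensity of Eq. (4)
    return float(np.sum(lam) * (taus[1] - taus[0]))  # simple Riemann sum

# Analytic gradient from Eq. (9): 0.5 (t1 - t0)^2 exp(u^T i) i.
analytic = 0.5 * (t1 - t0) ** 2 * np.exp(u @ i_emb) * i_emb
eps = 1e-5
numeric = np.array([
    (survival_numeric(u + eps * e) - survival_numeric(u - eps * e)) / (2 * eps)
    for e in np.eye(4)])
assert np.allclose(analytic, numeric, atol=1e-3)
```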
BJrFC6ceg | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The PixelCNN, introduced by van den Oord et al. (2016b), is a generative model of images with a tractable likelihood. The model fully factorizes the probability density function of an image x over all its sub-pixels (color channels in a pixel) as p(x) = ∏_i p(x_i | x_{<i}). The conditional distributions p(x_i | x_{<i}) are parameterized by convolutional neural networks and all share parameters. The PixelCNN is a powerful model, as the functional form of these conditionals is very flexible. In addition, it is computationally efficient, as all conditionals can be evaluated in parallel on a GPU for an observed image x. Thanks to these properties, the PixelCNN represents the current state-of-the-art in generative modeling when evaluated in terms of log-likelihood. Besides being used for modeling images, the PixelCNN model was recently extended to model audio (van den Oord et al., 2016a), video (Kalchbrenner et al., 2016b) and text (Kalchbrenner et al., 2016a).

For use in our research, we developed our own internal implementation of PixelCNN and made a number of modifications to the base model to simplify its structure and improve its performance. We now release our implementation at https://github.com/openai/pixel-cnn, hoping that it will be useful to the broader community. Our modifications are discussed in Section 2 and evaluated experimentally in Section 3. State-of-the-art log-likelihood results confirm their usefulness."}, {"section_index": "1", "section_name": "MODIFICATIONS TO PIXELCNN", "section_text": "We now describe the most important modifications we have made to the PixelCNN model architecture as described by van den Oord et al. (2016c). For complete details, see our code release at https://github.com/openai/pixel-cnn.

The standard PixelCNN model specifies the conditional distribution of a sub-pixel, or color channel of a pixel, as a full 256-way softmax. This gives the model a lot of flexibility, but it is also very costly in terms of memory. Moreover, it can make the gradients with respect to the network parameters very sparse, especially early in training. With the standard parameterization, the model does not know that a value of 128 is close to a value of 127 or 129, and this relationship first has to be learned before the model can move on to higher-level structures. In the extreme case where a particular sub-pixel value is never observed, the model will learn to assign it zero probability. This would be especially problematic for data with higher accuracy on the observed pixels than the usual 8 bits: in the extreme case where very high precision values are observed, the PixelCNN, in its current form, would require a prohibitive amount of memory and computation, while learning very slowly. We therefore propose a different mechanism for computing the conditional probability of the observed discretized pixel values. In our model, like in the VAE of Kingma et al. (2016), we assume there is a latent color intensity ν with a continuous distribution, which is then rounded to its nearest 8-bit representation to give the observed sub-pixel value x. By choosing a simple continuous distribution for modeling ν (like the logistic distribution, as done by Kingma et al. (2016)) we obtain a smooth and memory-efficient predictive distribution for x.
Here, we take this continuous univariate distribution to be a mixture of logistic distributions, which allows us to easily calculate the probability of the observed discretized value x, as shown in equation (2). For all sub-pixel values x except the edge cases 0 and 255, we have:

ν ~ Σ_{i=1}^{K} π_i logistic(μ_i, s_i)    (1)

P(x | π, μ, s) = Σ_{i=1}^{K} π_i [ σ((x + 0.5 − μ_i)/s_i) − σ((x − 0.5 − μ_i)/s_i) ]    (2)

where σ(·) is the logistic sigmoid function. For the edge case of 0, replace x − 0.5 by −∞, and for 255 replace x + 0.5 by +∞. Our provided code contains a numerically stable implementation for calculating the log of the probability in equation (2).

Our approach follows earlier work using continuous mixture models (Domke et al., 2008; Theis et al., 2012; Uria et al., 2013; Theis & Bethge, 2015), but avoids allocating probability mass to values outside the valid range of [0, 255] by explicitly modeling the rounding of ν to x. In addition, we naturally assign higher probability to the edge values 0 and 255 than to their neighboring values, which corresponds well with the observed data distribution, as shown in Figure 1. Experimentally, we find that only a relatively small number of mixture components, say 5, is needed to accurately model the conditional distributions of the pixels. The output of our network is thus of much lower dimension, yielding much denser gradients of the loss with respect to our parameters. In our experiments this greatly sped up convergence during optimization, especially early on in training. However, due to the other changes in our architecture compared to that of van den Oord et al. (2016c), we cannot say with certainty that this would also apply to the original PixelCNN model.

Figure 1: Marginal distribution of all sub-pixel values in CIFAR-10. The edge value of 255 is much more frequent than its neighbouring values: this is easy to model using our rounding-based approach, but harder using continuous or truncated distributions.
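For exposition, a minimal numpy sketch of the probability in equation (2) is given below; the released code instead uses a numerically stable log-space implementation, and the function names here are our own.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic_mixture_prob(x, pi, mu, s):
    """x: integer sub-pixel value in [0, 255]; pi, mu, s: length-K arrays."""
    # Edge cases 0 and 255 absorb all mass below -0.5 / above 254.5.
    upper = np.ones_like(mu) if x == 255 else sigmoid((x + 0.5 - mu) / s)
    lower = np.zeros_like(mu) if x == 0 else sigmoid((x - 0.5 - mu) / s)
    return float(np.sum(pi * (upper - lower)))
```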
The pixels in a color image consist of three real numbers, giving the intensities of the red, blue and green colors. The original PixelCNN factorizes the generative model over these 3 sub-pixels. This allows for a very general dependency structure, but it also complicates the model: besides keeping track of the spatial location of feature maps, we now have to separate all feature maps into 3 groups depending on whether or not they can see the R/G/B sub-pixel of the current location. This added complexity seems unnecessary, as the dependencies between the color channels of a pixel are likely to be relatively simple and do not require a deep network to model. Therefore, we instead condition only on whole pixels up and to the left in an image, and output joint predictive distributions over all 3 channels of the predicted pixel. The predictive distribution on a pixel itself can be interpreted as a simple factorized model: we first predict the red channel using a discretized mixture of logistics, as described above. Next, we predict the green channel using a predictive distribution of the same form; here we allow the means of the mixture components to depend linearly on the value of the red sub-pixel. Finally, we model the blue channel in the same way, where we again only allow linear dependency on the red and green channels. For the pixel (r_{i,j}, g_{i,j}, b_{i,j}) at location (i, j) in our image, the distribution conditional on the context C_{i,j}, consisting of the mixture indicator and the previous pixels, is thus

p(r_{i,j}, g_{i,j}, b_{i,j} | C_{i,j}) = P(r_{i,j} | μ_r(C_{i,j}), s_r(C_{i,j})) × P(g_{i,j} | μ_g(C_{i,j}, r_{i,j}), s_g(C_{i,j})) × P(b_{i,j} | μ_b(C_{i,j}, r_{i,j}, g_{i,j}), s_b(C_{i,j})),    (3)

with μ_g(C_{i,j}, r_{i,j}) = μ_g(C_{i,j}) + α(C_{i,j}) r_{i,j} and μ_b(C_{i,j}, r_{i,j}, g_{i,j}) = μ_b(C_{i,j}) + β(C_{i,j}) r_{i,j} + γ(C_{i,j}) g_{i,j}, where α, β, γ are scalar coefficients depending on the mixture component and the previous pixels.

The mixture indicator is shared across all 3 channels; i.e., our generative model first samples a mixture indicator for a pixel, and then samples the color channels one by one from the corresponding mixture component. Had we used a discretized mixture of univariate Gaussians for the sub-pixels instead of logistics, this would have been exactly equivalent to predicting the complete pixel using a (discretized) mixture of 3-dimensional Gaussians with full covariance. The logistic and Gaussian distributions are very similar, so this is indeed very close to what we end up doing. For full implementation details we refer to our code at https://github.com/openai/pixel-cnn.

The original PixelCNN only uses convolutions with small receptive field. Such convolutions are good at capturing local dependencies, but not necessarily at modeling long-range structure. Although we find that capturing these short-range dependencies is often enough for obtaining very good log-likelihood scores (see Table 2), explicitly encouraging the model to capture long-range dependencies can improve the perceptual quality of generated images (compare Figure 3 and Figure 5). One way of allowing the network to model structure at multiple resolutions is to introduce dilated convolutions into the model, as proposed by van den Oord et al. (2016a) and Kalchbrenner et al. (2016b). Here, we instead propose to use downsampling by using convolutions of stride 2. Downsampling accomplishes the same multi-resolution processing afforded by dilated convolutions, but at a reduced computational cost: where dilated convolutions operate on input of ever increasing size (due to zero padding), downsampling reduces the input size by a factor of 4 (for a stride of 2 in 2 dimensions) at every downsampling. The downside of using downsampling is that it loses information, but we can compensate for this by introducing additional short-cut connections into the network, as explained in the next section. With these additional short-cut connections, we found the performance of downsampling to be the same as for dilated convolution."}, {"section_index": "4", "section_name": "2.4 ADDING SHORT-CUT CONNECTIONS", "section_text": "For input of size 32 × 32, our suggested model consists of 6 blocks of 5 ResNet layers. In between the first and second block, as well as the second and third block, we perform subsampling by strided convolution. In between the fourth and fifth block, as well as the fifth and sixth block, we perform upsampling by transposed strided convolution. This subsampling and upsampling process loses information, and we therefore introduce additional short-cut connections into the model to recover this information from lower layers in the model. The short-cut connections run from the ResNet layers in the first block to the corresponding layers in the sixth block, and similarly between blocks two and five, and blocks three and four. This structure resembles the VAE model with top-down inference used by Kingma et al. (2016), as well as the U-net used by Ronneberger et al. (2015) for image segmentation. Figure 2 shows our model structure graphically.
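The following PyTorch-style sketch (our own schematic, not the released TensorFlow code) illustrates the downsampling/upsampling structure with long-range skip connections; it ignores the causal masking of the real model, and all block sizes and channel counts are placeholders.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
    def forward(self, x):
        return x + self.conv2(torch.relu(self.conv1(torch.relu(x))))

class UNetPixel(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = ResBlock(ch), ResBlock(ch), ResBlock(ch)
        self.down1 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)          # 32x32 -> 16x16
        self.down2 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)          # 16x16 -> 8x8
        self.up1 = nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1)   # 8x8 -> 16x16
        self.up2 = nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1)   # 16x16 -> 32x32
        self.dec3, self.dec2, self.dec1 = ResBlock(ch), ResBlock(ch), ResBlock(ch)
    def forward(self, x):
        h1 = self.enc1(x)                  # 32x32 stream
        h2 = self.enc2(self.down1(h1))     # 16x16 stream
        h3 = self.enc3(self.down2(h2))     # 8x8 stream
        d3 = self.dec3(h3) + h3            # long-range skip: block 3 -> block 4
        d2 = self.dec2(self.up1(d3) + h2)  # skip: block 2 -> block 5
        d1 = self.dec1(self.up2(d2) + h1)  # skip: block 1 -> block 6
        return d1
```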
[Figure 2 diagram: six sequences of layers at resolutions 32x32, 16x16, 8x8, 8x8, 16x16, 32x32; legend: downward stream, downward and rightward stream, identity (skip) connection, convolutional connection.]

Figure 2: Like van den Oord et al. (2016c), our model follows a two-stream (downward, and downward+rightward) convolutional architecture with residual connections; however, there are two significant differences in connectivity. First, our architecture incorporates downsampling and upsampling, such that the inner parts of the network operate over a larger spatial scale, increasing computational efficiency. Second, we employ long-range skip-connections, such that each k-th layer provides a direct input to the (K − k)-th layer, where K is the total number of layers in the network. The network is grouped into sequences of six layers, where most sequences are separated by downsampling or upsampling."}, {"section_index": "5", "section_name": "2.5 REGULARIZATION USING DROPOUT", "section_text": "The PixelCNN model is powerful enough to overfit on training data. Moreover, rather than just reproducing the training images, we find that overfitted models generate images of low perceptual quality, as shown in Figure 8. One effective way of regularizing neural networks is dropout (Srivastava et al., 2014). For our model, we apply standard binary dropout on the residual path after the first convolution. This is similar to how dropout is applied in the wide residual networks of Zagoruyko & Komodakis (2016). Using dropout allows us to successfully train high-capacity models while avoiding overfitting and producing high-quality generations (compare Figure 8 and Figure 3).

We apply our model to modeling natural images in the CIFAR-10 data set. We achieve state-of-the-art results in terms of log-likelihood, and generate images with coherent global structure."}, {"section_index": "6", "section_name": "3.1 UNCONDITIONAL GENERATION ON CIFAR-10", "section_text": "We apply our PixelCNN model, with the modifications described above, to generative modeling of the images in the CIFAR-10 data set. For the encoding part of the PixelCNN, the model uses 3 ResNet blocks consisting of 5 residual layers, with 2 × 2 downsampling in between. The same architecture is used for the decoding part of the model, but with upsampling instead of downsampling in between blocks. All residual layers use 192 feature maps and a dropout rate of 0.5. Table 1 shows the state-of-the-art test log-likelihood obtained by our model. Figure 3 shows some samples generated by the model.

Figure 3: Samples from our PixelCNN model trained on CIFAR-10

Table 1: Negative log-likelihood for generative models on CIFAR-10, expressed as bits per sub-pixel

Next, we follow van den Oord et al. (2016c) in making our generative model conditional on the class label of the CIFAR-10 images. This is done by linearly projecting a one-hot encoding of the class label into a separate class-dependent bias vector for each convolutional unit in our network. We find that making the model class-conditional makes it harder to avoid overfitting on the training data: our best test log-likelihood is 2.94 in this case. Figure 4 shows samples from the class-conditional model, with columns 1-10 corresponding to the 10 classes in CIFAR-10. The images clearly look qualitatively different across the columns, and for a number of them we can clearly identify their class label.

Figure 4: Class-conditional samples from our PixelCNN for CIFAR-10 (left) and real CIFAR-10 images for comparison (right).
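A small sketch (ours, not the released code) of the class-conditioning mechanism just described: a one-hot label is linearly projected into a class-dependent bias added to the output of each convolutional unit.

```python
import torch
import torch.nn as nn

class ConditionalConv(nn.Module):
    def __init__(self, ch, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.class_bias = nn.Linear(num_classes, ch, bias=False)
    def forward(self, x, onehot_label):
        b = self.class_bias(onehot_label)           # (batch, ch)
        return self.conv(x) + b[:, :, None, None]   # broadcast over H, W
```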
It is hypothesized that the size of the receptive field, and additionally the removal of blind spots in the receptive field, are important for PixelCNN's performance (van den Oord et al., 2016b). Indeed, van den Oord et al. (2016c) specifically introduced an improvement over their previous PixelCNN model to remove the blind spot in the receptive field that was present in their earlier model.

Here we present the surprising finding that in fact a PixelCNN with a rather small receptive field can attain competitive generative modeling performance on CIFAR-10, as long as it has enough capacity. Specifically, we experimented with our proposed PixelCNN++ model without downsampling blocks and reduced the number of layers to limit the receptive field size. We investigate two receptive field sizes, 11x5 and 15x8; a receptive field size of 11x5, for example, means that the conditional distribution of a pixel can depend on a rectangle above the pixel of size 11x5, as well as a (11 − 1)/2 = 5x1 block to the left of the pixel.

As we limit the size of the receptive field, the capacity of the network also drops significantly, since it contains many fewer layers than a normal PixelCNN. We call the type of PixelCNN that is simply limited in depth the "Plain" Small PixelCNN. Interestingly, this model already has better performance than the original PixelCNN of van den Oord et al. (2016b), which had a blind spot. To increase capacity, we introduced two simple variants that make the Small PixelCNN more expressive without growing the receptive field:

• NIN (Network in Network): insert additional gated ResNet blocks with 1x1 convolutions between the regular convolutions that grow the receptive field. In this experiment, we inserted 3 NIN blocks between every other layer.
• Autoregressive Channel: skip connections between sets of channels via 1x1 convolution gated ResNet blocks.

Both modifications increase the capacity of the network, resulting in improved log-likelihood, as shown in Table 2. Although the model with small receptive field already achieves an impressive likelihood score, its samples do lack global structure, as seen in Figure 5.

Table 2: CIFAR-10 bits per sub-pixel for Small PixelCNN

Model                                      Bits per sub-pixel
Field=11x5, Plain                          3.11
Field=11x5, NIN                            3.09
Field=11x5, Autoregressive Channel         3.07
Field=15x8, Plain                          3.07
Field=15x8, NIN                            3.04
Field=15x8, Autoregressive Channel         3.03

Figure 5: Samples from 3.03 bits/dim Small PixelCNN

In order to test the effect of our modifications to PixelCNN, we run a number of ablation experiments, where for each experiment we remove a specific modification.

In order to test the contribution of our logistic mixture likelihood, we re-run our CIFAR-10 experiment with the 256-way softmax as the output distribution instead. We allow the 256 logits for each sub-pixel to depend linearly on the observed values of previous sub-pixels, with coefficients given as output by the model. Our model with softmax likelihood is thus strictly more flexible than our model with logistic mixture likelihood, although the parameterization is quite different from that used by van den Oord et al. (2016c). The model now outputs 1536 numbers per pixel, describing the logits on the 256 potential values for each sub-pixel, as well as the coefficients for the dependencies between the sub-pixels.
Figure 6 shows that this model trains more slowly than our original model. In addition, the running time per epoch is significantly longer for our TensorFlow implementation. For our architecture, the logistic mixture model thus clearly performs better. Since our architecture differs from that of van den Oord et al. (2016c) in other ways as well, we cannot say whether this would also apply to their model.

Figure 6: Training curves (bits per dimension versus training epochs) for our model with logistic mixture likelihood versus our model with softmax likelihood."}, {"section_index": "7", "section_name": "3.4.2 CONTINUOUS MIXTURE LIKELIHOOD INSTEAD OF DISCRETIZATION", "section_text": "Instead of directly modeling the discrete pixel values in an image, it is also possible to de-quantize them by adding noise from the standard uniform distribution, as used by Uria et al. (2013) and others, and to model the data as being continuous. The resulting model can be interpreted as a variational autoencoder (Kingma & Welling, 2013; Rezende et al., 2014), where the dequantized pixels z form a latent code whose prior distribution is captured by our model. Since the original discrete pixels x can be perfectly reconstructed from z under this model, the usual reconstruction term vanishes from the variational lower bound. The entropy of the standard uniform distribution is zero, so the term that remains is the log-likelihood of the dequantized pixels, which thus gives us a variational lower bound on the log-likelihood of our original data.

We re-run our model for CIFAR-10 using the same model settings as those used for the 2.92 bits per dimension result in Table 1, but now we remove the discretization in our likelihood model and instead add standard uniform noise to the image data. The resulting model is a continuous mixture model in the same class as that used by Theis et al. (2012), Uria et al. (2013), Theis & Bethge (2015) and others. After optimization, this model gives a variational lower bound on the data log-likelihood of 3.11 bits per dimension. The difference with the reported 2.92 bits per dimension shows the benefit of using discretization in the likelihood model."}, {"section_index": "8", "section_name": "3.4.3 NO SHORT-CUT CONNECTIONS", "section_text": "Next, we test the importance of the additional parallel short-cut connections in our model, indicated by the dotted lines in Figure 2. We re-run our unconditional CIFAR-10 experiment, but remove the short-cut connections from the model. As seen in Figure 7, the model fails to train without these connections. The reason for needing these extra short-cuts is likely to be our use of sub-sampling, which discards information that otherwise cannot easily be recovered.

Figure 7: Training curves for our model with and without short-cut connections."}, {"section_index": "9", "section_name": "3.4.4 NO DROPOUT", "section_text": "We re-run our CIFAR-10 model without dropout regularization. The log-likelihood we achieve on the training set is below 2.0 bits per sub-pixel, but the final test log-likelihood is above 6.0 bits per sub-pixel. At no point during training does the unregularized model get a test-set log-likelihood below 3.0 bits per sub-pixel. Contrary to what we might naively expect, the perceptual quality of the images generated by the overfitted model is not great, as shown in Figure 8.

Figure 8: Samples from an intentionally overfitted PixelCNN model trained on CIFAR-10, with train log-likelihood of 2.0 bits per dimension: overfitting does not result in great perceptual quality.
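For reference, the regularization ablated here is the residual-path dropout of Section 2.5. A minimal PyTorch-style sketch of that placement (ours, with placeholder sizes) follows.

```python
import torch
import torch.nn as nn

class ResLayer(nn.Module):
    def __init__(self, ch, p_drop=0.5):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.drop = nn.Dropout(p_drop)   # binary dropout on the residual path only
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
    def forward(self, x):
        h = self.drop(torch.relu(self.conv1(x)))   # after the first convolution
        return x + self.conv2(h)
```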
"}, {"section_index": "8", "section_name": "3.4.4 NO DROPOUT", "section_text": "We re-run our CIFAR-10 model without dropout regularization. The log-likelihood we achieve on the training set is below 2.0 bits per sub-pixel, but the final test log-likelihood is above 6.0 bits per sub-pixel. At no point during training does the unregularized model get a test-set log-likelihood below 3.0 bits per sub-pixel. Contrary to what we might naively expect, the perceptual quality of the generated images by the overfitted model is not great, as shown in Figure 8.
Figure 8: Samples from intentionally overfitted PixelCNN model trained on CIFAR-10, with train log-likelihood of 2.0 bits per dimension: Overfitting does not result in great perceptual quality.
"}, {"section_index": "9", "section_name": "4 CONCLUSION", "section_text": "We presented PixelCNN++, a modification of PixelCNN using a discretized logistic mixture likelihood on the pixels among other modifications. We demonstrated the usefulness of these modifications with state-of-the-art results on CIFAR-10. Our code is made available at https://github.com/openai/pixel-cnn and can easily be adapted for use on other data sets.
"}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099, 2016a.
Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016b.
Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. Proceedings of the 2nd International Conference on Learning Representations, 2014.
Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, 2016.
Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pp. 1278-1286, 2014.
Lucas Theis and Matthias Bethge. Generative image modeling using spatial lstms. In Advances in Neural Information Processing Systems, pp. 1927-1935, 2015.
Lucas Theis, Reshad Hosseini, and Matthias Bethge. Mixtures of conditional gaussian scale mixtures applied to multiscale image representations. PloS one, 7(7):e39857, 2012.
Benigno Uria, Iain Murray, and Hugo Larochelle. Rnade: The real-valued neural autoregressive density-estimator. In Advances in Neural Information Processing Systems, pp. 2175-2183, 2013.
Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning (ICML), 2016b.
Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with pixelcnn decoders. arXiv preprint arXiv:1606.05328, 2016c."}]
rJqFGTslg | [{"section_index": "0", "section_name": "PRUNING FILTERS FOR EFFICIENT CONVNETS", "section_text": "Asim Kadav
University of Maryland
Igor Durdanovic
Hans Peter Graf
NEC Labs America
The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.
"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The ImageNet challenge has led to significant advancements in exploring various architectural choices in CNNs (Russakovsky et al. (2015); Krizhevsky et al. (2012); Simonyan & Zisserman (2015); Szegedy et al. (2015a); He et al. (2016)). The general trend since the past few years has been that the networks have grown deeper, with an overall increase in the number of parameters and convolution operations. These high capacity networks have significant inference costs, especially when used with embedded sensors or mobile devices where computational and power resources may be limited. For these applications, in addition to accuracy, computational efficiency and small network sizes are crucial enabling factors (Szegedy et al. (2015b)). In addition, web services that provide image search and image classification APIs that operate on a time budget, often serving hundreds of thousands of images per second, benefit significantly from lower inference times.
There has been a significant amount of work on reducing the storage and computation costs by model compression (Le Cun et al. (1989); Hassibi & Stork (1993); Srinivas & Babu (2015); Han et al. (2015); Mariet & Sra (2016)). Recently Han et al. (2015; 2016b) report impressive compression rates on AlexNet (Krizhevsky et al. (2012)) and VGGNet (Simonyan & Zisserman (2015)) by pruning weights with small magnitudes and then retraining without hurting the overall accuracy. However, pruning parameters does not necessarily reduce the computation time since the majority of the parameters removed are from the fully connected layers where the computation cost is low, e.g., the fully connected layers of VGG-16 occupy 90% of the total parameters but only contribute less than 1% of the overall floating point operations (FLOP).
They also demonstrate that the convolutional layers can be compressed and accelerated (Iandola et al. (2016)), but additionally require sparse BLAS libraries or even specialized hardware (Han et al. (2016a)).
Recent work on CNNs has yielded deep architectures with more efficient design (Szegedy et al. (2015a;b); He & Sun (2015); He et al. (2016)), in which the fully connected layers are replaced with average pooling layers (Lin et al. (2013); He et al. (2016)), which reduces the number of parameters significantly. The computation cost is also reduced by downsampling the image at an early stage to reduce the size of feature maps (He & Sun (2015)). Nevertheless, as the networks continue to become deeper, the computation costs of convolutional layers continue to dominate.
CNNs with large capacity usually have significant redundancy among different filters and feature channels. In this work, we focus on reducing the computation cost of well-trained CNNs by pruning filters. Compared to pruning weights across the network, filter pruning is a naturally structured way of pruning without introducing sparsity and therefore does not require using sparse libraries or any specialized hardware. The number of pruned filters correlates directly with acceleration by reducing the number of matrix multiplications, which is easy to tune for a target speedup. In addition, instead of layer-wise iterative fine-tuning (retraining), we adopt a one-shot pruning and retraining strategy to save retraining time for pruning filters across multiple layers, which is critical for pruning very deep networks. Finally, we observe that even ResNets, which have significantly fewer parameters and inference costs than AlexNet or VGGNet, still allow about 30% FLOP reduction without sacrificing too much accuracy. We conduct sensitivity analysis for convolutional layers in ResNets that improves the understanding of ResNets.
The early work by Le Cun et al. (1989) introduces Optimal Brain Damage, which prunes weights with a theoretically justified saliency measure. Later, Hassibi & Stork (1993) propose Optimal Brain Surgeon to remove unimportant weights determined by the second-order derivative information. Mariet & Sra (2016) reduce the network redundancy by identifying a subset of diverse neurons that does not require retraining. However, this method only operates on the fully-connected layers and introduces sparse connections.
To reduce the computation costs of the convolutional layers, past work has proposed to approximate convolutional operations by representing the weight matrix as a low rank product of two smaller matrices without changing the original number of filters (Denil et al. (2013); Jaderberg et al. (2014); Zhang et al. (2015b;a); Tai et al. (2016); Ioannou et al. (2016)). Other approaches to reduce the convolutional overheads include using FFT based convolutions (Mathieu et al. (2013)) and fast convolution using the Winograd algorithm (Lavin & Gray (2016)). Additionally, quantization (Han et al. (2016b)) and binarization (Rastegari et al. (2016); Courbariaux & Bengio (2016)) can be used to reduce the model size and lower the computation overheads.
Our method can be used in addition to these techniques to reduce computation costs without incurring additional overheads.
Several works have studied removing redundant feature maps from a well-trained network (Anwar et al. (2015); Polyak & Wolf (2015)). Anwar et al. (2015) introduce a three-level pruning of the weights and locate the pruning candidates using particle filtering, which selects the best combination from a number of randomly generated masks. Polyak & Wolf (2015) detect the less frequently activated feature maps with sample input data for face detection applications. We choose to analyze the filter weights and prune filters with their corresponding feature maps using a simple magnitude based measure, without examining possible combinations. We also introduce network-wide holistic approaches to prune filters for simple and complex convolutional network architectures.
Concurrently with our work, there is a growing interest in training compact CNNs with sparse constraints (Lebedev & Lempitsky (2016); Zhou et al. (2016); Wen et al. (2016)). Lebedev & Lempitsky (2016) leverage group-sparsity on the convolutional filters to achieve structured brain damage, i.e., prune the entries of the convolution kernel in a group-wise fashion. Zhou et al. (2016) add group-sparse regularization on neurons during training to learn compact CNNs with reduced filters. Wen et al. (2016) add a structured sparsity regularizer on each layer to reduce trivial filters, channels or even layers. In the filter-level pruning, all the above works use the l2,1-norm as a regularizer.
Similar to the above work, we use the l1-norm to select unimportant filters and physically prune them. Our fine-tuning process is the same as the conventional training procedure, without introducing additional regularization. Our approach does not introduce extra layer-wise meta-parameters for the regularizer except for the percentage of filters to be pruned, which is directly related to the desired speedup. By employing stage-wise pruning, we can set a single pruning rate for all layers in one stage.
"}, {"section_index": "3", "section_name": "3 PRUNING FILTERS AND FEATURE MAPS", "section_text": "Let n_i denote the number of input channels for the i-th convolutional layer and h_i/w_i be the height/width of the input feature maps. The convolutional layer transforms the input feature maps x_i ∈ R^{n_i × h_i × w_i} into the output feature maps x_{i+1} ∈ R^{n_{i+1} × h_{i+1} × w_{i+1}}, which are used as input feature maps for the next convolutional layer. This is achieved by applying n_{i+1} 3D filters F_{i,j} ∈ R^{n_i × k × k} on the n_i input channels, in which one filter generates one feature map. Each filter is composed by n_i 2D kernels K ∈ R^{k × k} (e.g., 3 × 3). All the filters, together, constitute the kernel matrix F_i ∈ R^{n_i × n_{i+1} × k × k}. The number of operations of the convolutional layer is n_{i+1} n_i k^2 h_{i+1} w_{i+1}. As shown in Figure 1, when a filter F_{i,j} is pruned, its corresponding feature map x_{i+1,j} is removed, which reduces n_i k^2 h_{i+1} w_{i+1} operations. The kernels that apply on the removed feature maps from the filters of the next convolutional layer are also removed, which saves an additional n_{i+2} k^2 h_{i+2} w_{i+2} operations. Pruning m filters of layer i will reduce m/n_{i+1} of the computation cost for both layers i and i+1.
Figure 1: Pruning a filter results in removal of its corresponding feature map and related kernels in the next layer.
Our method prunes the less useful filters from a well-trained model for computational efficiency while minimizing the accuracy drop. We measure the relative importance of a filter in each layer by calculating the sum of its absolute weights ∑|F_{i,j}|, i.e., its l1-norm ||F_{i,j}||_1.
Since the number of input channels, n_i, is the same across filters, ∑|F_{i,j}| also represents the average magnitude of its kernel weights. This value gives an expectation of the magnitude of the output feature map. Filters with smaller kernel weights tend to produce feature maps with weak activations as compared to the other filters in that layer. Figure 2(a) illustrates the distribution of filters' absolute weights sum for each convolutional layer in a VGG-16 network trained on the CIFAR-10 dataset, where the distribution varies significantly across layers. We find that pruning the smallest filters works better in comparison with pruning the same number of random or largest filters (Section 4.4). Compared to other criteria for activation-based feature map pruning (Section 4.5), we find the l1-norm is a good criterion for data-free filter selection.
The procedure of pruning m filters from the i-th convolutional layer is as follows (a minimal code sketch of the first steps is given at the end of this subsection):
1. For each filter F_{i,j}, calculate the sum of its absolute kernel weights s_j = ∑_{l=1}^{n_i} ∑|K_l|.
2. Sort the filters by s_j.
3. Prune m filters with the smallest sum values and their corresponding feature maps. The kernels in the next convolutional layer corresponding to the pruned feature maps are also removed.
4. A new kernel matrix is created for both the i-th and (i+1)-th layers, and the remaining kernel weights are copied to the new model.
Figure 2: (a) Sorting filters by absolute weights sum for each layer of VGG-16 on CIFAR-10. The x-axis is the filter index divided by the total number of filters. The y-axis is the filter weight sum divided by the max sum value among filters in that layer. (b) Pruning filters with the lowest absolute weights sum and their corresponding test accuracies on CIFAR-10. (c) Prune and retrain for each single layer of VGG-16 on CIFAR-10. Some layers are sensitive and it can be harder to recover accuracy after pruning them.
Relationship to pruning weights. Pruning filters with low absolute weights sum is similar to pruning low magnitude weights (Han et al. (2015)). Magnitude-based weight pruning may prune away whole filters when all the kernel weights of a filter are lower than a given threshold. However, it requires a careful tuning of the threshold and it is difficult to predict the exact number of filters that will eventually be pruned. Furthermore, it generates sparse convolutional kernels which can be hard to accelerate given the lack of efficient sparse libraries, especially for the case of low-sparsity.
Relationship to group-sparse regularization on filters. Recent work (Zhou et al. (2016); Wen et al. (2016)) applies group-sparse regularization (∑_{j=1}^{n_{i+1}} ||F_{i,j}||_2, or l2,1-norm) on convolutional filters, which also favors zeroing-out filters with small l2-norms, i.e., F_{i,j} = 0. In practice, we do not observe a noticeable difference between the l2-norm and the l1-norm for filter selection, as the important filters tend to have large values for both measures (Appendix 6.1). Zeroing out weights of multiple filters during training has a similar effect to pruning filters with the strategy of iterative pruning and retraining as introduced in Section 3.4.
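The following is a minimal PyTorch sketch of steps 1-3 above; tensor shapes and the toy layer sizes are illustrative assumptions, not the paper's implementation.
import torch

def smallest_filter_indices(conv_weight, m):
    # Step 1-2: score each filter by the l1-norm of its weights and
    # return indices of the m filters with the smallest scores.
    # conv_weight has shape (n_out, n_in, k, k).
    s = conv_weight.abs().sum(dim=(1, 2, 3))   # one score s_j per filter
    order = torch.argsort(s)                   # ascending by l1-norm
    return order[:m]

# Step 3: drop the selected filters of layer i and the matching input
# kernels of layer i+1.
w1 = torch.randn(64, 3, 3, 3)                  # layer i
w2 = torch.randn(128, 64, 3, 3)                # layer i+1
prune = set(smallest_filter_indices(w1, m=32).tolist())
keep = torch.tensor([j for j in range(w1.shape[0]) if j not in prune])
w1_new = w1[keep]                              # (32, 3, 3, 3)
w2_new = w2[:, keep]                           # (128, 32, 3, 3)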
"}, {"section_index": "4", "section_name": "3.2 DETERMINING SINGLE LAYER'S SENSITIVITY TO PRUNING", "section_text": "To understand the sensitivity of each layer, we prune each layer independently and evaluate the resulting pruned network's accuracy on the validation set. Figure 2(b) shows that layers that maintain their accuracy as filters are pruned away correspond to layers with larger slopes in Figure 2(a). On the contrary, layers with relatively flat slopes are more sensitive to pruning. We empirically determine the number of filters to prune for each layer based on their sensitivity to pruning. For deep networks such as VGG-16 or ResNets, we observe that layers in the same stage (with the same feature map size) have a similar sensitivity to pruning. To avoid introducing layer-wise meta-parameters, we use the same pruning ratio for all layers in the same stage. For layers that are sensitive to pruning, we prune a smaller percentage of these layers or completely skip pruning them.
"}, {"section_index": "5", "section_name": "3.3 PRUNING FILTERS ACROSS MULTIPLE LAYERS", "section_text": "We now discuss how to prune filters across the network. Previous work prunes the weights on a layer by layer basis, followed by iteratively retraining and compensating for any loss of accuracy (Han et al. (2015)). However, understanding how to prune filters of multiple layers at once can be useful: 1) For deep networks, pruning and retraining on a layer by layer basis can be extremely time-consuming. 2) Pruning layers across the network gives a holistic view of the robustness of the network, resulting in a smaller network. 3) For complex networks, a holistic approach may be necessary. For example, for the ResNet, pruning the identity feature maps or the second layer of each residual block results in additional pruning of other layers.
To prune filters across multiple layers, we consider two strategies for layer-wise filter selection (a minimal sketch contrasting the two follows after Figure 4):
Independent pruning determines which filters should be pruned at each layer independent of other layers.
Greedy pruning accounts for the filters that have been removed in the previous layers. This strategy does not consider the kernels for the previously pruned feature maps while calculating the sum of absolute weights.
Figure 3 illustrates the difference between the two approaches in calculating the sum of absolute weights. The greedy approach, though not globally optimal, is holistic and results in pruned networks with higher accuracy, especially when many filters are pruned.
Figure 3: Pruning filters across consecutive layers. The independent pruning strategy calculates the filter sum (columns marked in green) without considering feature maps removed in the previous layer (shown in blue), so the kernel weights marked in yellow are still included. The greedy pruning strategy does not count kernels for the already pruned feature maps. Both approaches result in a (n_{i+1} - 1) × (n_{i+2} - 1) kernel matrix.
Figure 4: Pruning residual blocks with the projection shortcut. The filters to be pruned for the second layer of the residual block (marked as green) are determined by the pruning result of the shortcut projection. The first layer of the residual block can be pruned without restrictions.
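A minimal PyTorch sketch of the two selection strategies (shapes and layer sizes are illustrative assumptions): under the greedy strategy, kernels that act on feature maps already pruned in the previous layer are excluded from the l1 score.
import torch

def filter_scores(weight, pruned_in=None):
    # l1 score per filter; greedy masking drops kernels of pruned inputs.
    w = weight.abs()
    if pruned_in is not None:
        keep = [c for c in range(w.shape[1]) if c not in set(pruned_in)]
        w = w[:, keep]
    return w.sum(dim=(1, 2, 3))

w1 = torch.randn(64, 3, 3, 3)
w2 = torch.randn(128, 64, 3, 3)
pruned_layer1 = torch.argsort(filter_scores(w1))[:32].tolist()
independent = torch.argsort(filter_scores(w2))[:64]
greedy = torch.argsort(filter_scores(w2, pruned_in=pruned_layer1))[:64]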
For simpler CNNs like VGGNet or AlexNet, we can easily prune any of the filters in any convolutional layer. However, for complex network architectures such as Residual networks (He et al. (2016)), pruning filters may not be straightforward. The architecture of ResNet imposes restrictions and the filters need to be pruned carefully. We show the filter pruning for residual blocks with projection mapping in Figure 4. Here, the filters of the first layer in the residual block can be arbitrarily pruned, as it does not change the number of output feature maps of the block. However, the correspondence between the output feature maps of the second convolutional layer and the identity feature maps makes it difficult to prune. Hence, to prune the second convolutional layer of the residual block, the corresponding projected feature maps must also be pruned. Since the identical feature maps are more important than the added residual maps, the feature maps to be pruned should be determined by the pruning results of the shortcut layer. To determine which identity feature maps are to be pruned, we use the same selection criterion based on the filters of the shortcut convolutional layers (with 1 × 1 kernels). The second layer of the residual block is pruned with the same filter index as selected by the pruning of the shortcut layer.
After pruning the filters, the performance degradation should be compensated by retraining the network. There are two strategies to prune the filters across multiple layers: prune once and retrain, in which filters of multiple layers are pruned at once and the network is retrained until the original accuracy is restored, and prune and retrain iteratively, in which filters are pruned layer by layer or filter by filter and the model is retrained before pruning further, so that the weights can adapt to the changes from the pruning process.
We find that for the layers that are resilient to pruning, the prune and retrain once strategy can be used to prune away significant portions of the network and any loss in accuracy can be regained by retraining for a short period of time (less than the original training time). However, when some filters from the sensitive layers are pruned away or large portions of the networks are pruned away, it may not be possible to recover the original accuracy. Iterative pruning and retraining may yield better results, but the iterative process requires many more epochs, especially for very deep networks.
"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "We prune two types of networks: simple CNNs (VGG-16 on CIFAR-10) and Residual networks (ResNet-56/110 on CIFAR-10 and ResNet-34 on ImageNet).
Unlike AlexNet or VGG (on ImageNet) that are often used to demonstrate model compression, both VGG (on CIFAR-10) and Residual networks have fewer parameters in the fully connected layers. Hence, pruning a large percentage of parameters from these networks is challenging. We implement our filter pruning method in Torch7 (Collobert et al. (2011)). When filters are pruned, a new model with fewer filters is created and the remaining parameters of the modified layers as well as the unaffected layers are copied into the new model. Furthermore, if a convolutional layer is pruned, the weights of the subsequent batch normalization layer are also removed. To get the baseline accuracies for each network, we train each model from scratch and follow the same pre-processing and hyper-parameters as ResNet (He et al. (2016)). For retraining, we use a constant learning rate 0.001 and retrain 40 epochs for CIFAR-10 and 20 epochs for ImageNet, which represents one-fourth of the original training epochs. Past work has reported up to 3x original training times to retrain pruned networks (Han et al. (2015)).
Table 1: Overall results. The best test/validation accuracy during the retraining process is reported. Training a pruned model from scratch performs worse than retraining a pruned model, which may indicate the difficulty of training a network with a small capacity.
Model | Error(%) | FLOP | Pruned % | Parameters | Pruned %
VGG-16 | 6.75 | 3.13x10^8 | | 1.5x10^7 |
VGG-16-pruned-A | 6.60 | 2.06x10^8 | 34.2% | 5.4x10^6 | 64.0%
VGG-16-pruned-A scratch-train | 6.88 | | | |
ResNet-56 | 6.96 | 1.25x10^8 | | 8.5x10^5 |
ResNet-56-pruned-A | 6.90 | 1.12x10^8 | 10.4% | 7.7x10^5 | 9.4%
ResNet-56-pruned-B | 6.94 | 9.09x10^7 | 27.6% | 7.3x10^5 | 13.7%
ResNet-56-pruned-B scratch-train | 8.69 | | | |
ResNet-110 | 6.47 | 2.53x10^8 | | 1.72x10^6 |
ResNet-110-pruned-A | 6.45 | 2.13x10^8 | 15.9% | 1.68x10^6 | 2.3%
ResNet-110-pruned-B | 6.70 | 1.55x10^8 | 38.6% | 1.16x10^6 | 32.4%
ResNet-110-pruned-B scratch-train | 7.06 | | | |
ResNet-34 | 26.77 | 3.64x10^9 | | 2.16x10^7 |
ResNet-34-pruned-A | 27.44 | 3.08x10^9 | 15.5% | 1.99x10^7 | 7.6%
ResNet-34-pruned-B | 27.83 | 2.76x10^9 | 24.2% | 1.93x10^7 | 10.8%
ResNet-34-pruned-C | 27.52 | 3.37x10^9 | 7.5% | 2.01x10^7 | 7.2%
VGG-16 is a high-capacity network originally designed for the ImageNet dataset (Simonyan & Zisserman (2015)). Recently, Zagoruyko (2015) applies a slightly modified version of the model on CIFAR-10 and achieves state of the art results. As shown in Table 2, VGG-16 on CIFAR-10 consists of 13 convolutional layers and 2 fully connected layers, in which the fully connected layers do not occupy large portions of parameters due to the small input size and fewer hidden units. We use the model described in Zagoruyko (2015) but add Batch Normalization (Ioffe & Szegedy (2015)).
Table 2: VGG-16 on CIFAR-10 and the pruned model. The last two columns show the number of feature maps and the reduced percentage of FLOP from the pruned model.
layer type | w_i x h_i | #Maps | FLOP | #Params | #Maps (pruned) | FLOP%
Conv_1 | 32x32 | 64 | 1.8E+06 | 1.7E+03 | 32 | 50%
Conv_2 | 32x32 | 64 | 3.8E+07 | 3.7E+04 | 64 | 50%
Conv_3 | 16x16 | 128 | 1.9E+07 | 7.4E+04 | 128 | 0%
Conv_4 | 16x16 | 128 | 3.8E+07 | 1.5E+05 | 128 | 0%
Conv_5 | 8x8 | 256 | 1.9E+07 | 2.9E+05 | 256 | 0%
Conv_6 | 8x8 | 256 | 3.8E+07 | 5.9E+05 | 256 | 0%
Conv_7 | 8x8 | 256 | 3.8E+07 | 5.9E+05 | 256 | 0%
Conv_8 | 4x4 | 512 | 1.9E+07 | 1.2E+06 | 256 | 50%
Conv_9 | 4x4 | 512 | 3.8E+07 | 2.4E+06 | 256 | 75%
Conv_10 | 4x4 | 512 | 3.8E+07 | 2.4E+06 | 256 | 75%
Conv_11 | 2x2 | 512 | 9.4E+06 | 2.4E+06 | 256 | 75%
Conv_12 | 2x2 | 512 | 9.4E+06 | 2.4E+06 | 256 | 75%
Conv_13 | 2x2 | 512 | 9.4E+06 | 2.4E+06 | 256 | 75%
Linear | 1 | 512 | 2.6E+05 | 2.6E+05 | 512 | 50%
Linear | 1 | 10 | 5.1E+03 | 5.1E+03 | 10 | 0%
Total | | | 3.1E+08 | 1.5E+07 | | 34%
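To make the FLOP accounting concrete, here is a minimal sketch of the per-layer operation count from Section 3 and the effect of pruning a fraction p of a layer's filters; the layer sizes are illustrative, and the paper's totals additionally include the FC layers.
def conv_flop(n_in, n_out, k, h_out, w_out):
    # Operation count of one conv layer: n_out * n_in * k^2 * h_out * w_out.
    return n_out * n_in * k * k * h_out * w_out

# Pruning a fraction p of layer i's filters removes roughly a fraction p
# of the operations of both layer i and layer i+1.
base = conv_flop(64, 64, 3, 32, 32) + conv_flop(64, 128, 3, 16, 16)
p = 0.5
pruned = conv_flop(64, int(64 * (1 - p)), 3, 32, 32) \
       + conv_flop(int(64 * (1 - p)), 128, 3, 16, 16)
print(1 - pruned / base)   # -> 0.5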
As shown in Figure 2(b), each of the convolutional layers with 512 feature maps can drop at least 60% of filters without affecting the accuracy. Figure 2(c) shows that with retraining, almost 90% of the filters of these layers can be safely removed. One possible explanation is that these filters operate on 4x4 or 2x2 feature maps, which may have no meaningful spatial connections in such small dimensions. For instance, ResNets for CIFAR-10 do not perform any convolutions for feature maps below 8x8 dimensions. Unlike previous work (Zeiler & Fergus (2014); Han et al. (2015)), we observe that the first layer is robust to pruning as compared to the next few layers. This is possible for a simple dataset like CIFAR-10, on which the model does not learn as many useful filters as on ImageNet (as shown in Figure 5). Even when 80% of the filters from the first layer are pruned, the number of remaining filters (12) is still larger than the number of raw input channels. However, when removing 80% of the filters from the second layer, the layer corresponds to a 64 to 12 mapping, which may lose significant information from previous layers, thereby hurting the accuracy. With 50% of the filters being pruned in layer 1 and layers 8 to 13, we achieve 34% FLOP reduction for the same accuracy.
Figure 5: Visualization of filters in the first convolutional layer of VGG-16 trained on CIFAR-10. Filters are ranked by l1-norm.
"}, {"section_index": "7", "section_name": "4.2 RESNET-56/110 ON CIFAR-10", "section_text": "ResNets for CIFAR-10 have three stages of residual blocks for feature maps with sizes of 32x32, 16x16 and 8x8. Each stage has the same number of residual blocks. When the number of feature maps increases, the shortcut layer provides an identity mapping with an additional zero padding for the increased dimensions. Since there is no projection mapping for choosing the identity feature maps, we only consider pruning the first layer of the residual block. As shown in Figure 6, most of the layers are robust to pruning. For ResNet-110, pruning some single layers without retraining even improves the performance. In addition, we find that layers that are sensitive to pruning (layers 20, 38 and 54 for ResNet-56, layers 36, 38 and 74 for ResNet-110) lie at the residual blocks close to the layers where the number of feature maps changes, e.g., the first and the last residual blocks for each stage. We believe this happens because the precise residual errors are necessary for the newly added empty feature maps.
Figure 6: Sensitivity to pruning for the first layer of each residual block of ResNet-56/110.
The retraining performance can be improved by skipping these sensitive layers. As shown in Table 1, ResNet-56-pruned-A improves the performance by pruning 10% filters while skipping the sensitive layers 16, 20, 38 and 54. In addition, we find that deeper layers are more sensitive to pruning than layers in the earlier stages of the network. Hence, we use a different pruning rate for each stage. We use p_i to denote the pruning rate for layers in the i-th stage. ResNet-56-pruned-B skips more layers (16, 18, 20, 34, 38, 54) and prunes layers with p1=60%, p2=30% and p3=10%. For ResNet-110, the first pruned model gets a slightly better result with p1=50% and layer 36 skipped. ResNet-110-pruned-B skips layers 36, 38, 74 and prunes with p1=50%, p2=40% and p3=30%. When there are more than two residual blocks at each stage, the middle residual blocks may be redundant and can be easily pruned. This might explain why ResNet-110 is easier to prune than ResNet-56.
"}, {"section_index": "8", "section_name": "4.3 RESNET-34 ON ILSVRC2012", "section_text": "ResNets for ImageNet have four stages of residual blocks for feature maps with sizes of 56x56, 28x28, 14x14 and 7x7. ResNet-34 uses the projection shortcut when the feature maps are down-sampled. We first prune the first layer of each residual block. Figure 7 shows the sensitivity of the first layer of each residual block.
Similar to ResNet-56/110, the first and the last residual blocks of each stage are more sensitive to pruning than the intermediate blocks (i.e., layers 2, 8, 14, 16, 26, 28, 30, 32). We skip those layers and prune the remaining layers at each stage equally. In Table 1 we compare two configurations of pruning percentages for the first three stages: (A) p1=30%, p2=30%, p3=30%; (B) p1=50%, p2=60%, p3=40%. Option-B provides 24% FLOP reduction with about 1% loss in accuracy. As seen in the pruning results for ResNet-56/110, we can predict that ResNet-34 is relatively more difficult to prune as compared to the deeper ResNets.
We also prune the identity shortcuts and the second convolutional layer of the residual blocks. As these layers have the same number of filters, they are pruned equally. As shown in Figure 7(b), these layers are more sensitive to pruning than the first layers. With retraining, ResNet-34-pruned-C prunes the third stage with p3=20% and results in 7.5% FLOP reduction with 0.75% loss in accuracy. Therefore, pruning the first layer of the residual block is more effective at reducing the overall FLOP than pruning the second layer. This finding also correlates with the bottleneck block design for deeper ResNets, which first reduces the dimension of input feature maps for the residual layer and then increases the dimension to match the identity mapping.
Figure 7: Sensitivity to pruning for the residual blocks of ResNet-34. (a) Pruning the first layer of residual blocks. (b) Pruning the second layer of residual blocks.
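A small hypothetical helper illustrating the stage-wise rates with skipped sensitive layers described above; the rates correspond to option (B) and the skip set to the sensitive layers listed in the text, but the function itself is an illustration, not the experiment code.
STAGE_RATES = {1: 0.50, 2: 0.60, 3: 0.40}      # p1, p2, p3 (option B)
SKIP_LAYERS = {2, 8, 14, 16, 26, 28, 30, 32}   # sensitive first/last blocks

def filters_to_prune(layer_id, stage, n_filters):
    # Skip sensitive layers entirely; otherwise apply the stage's rate.
    if layer_id in SKIP_LAYERS:
        return 0
    return int(n_filters * STAGE_RATES[stage])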
We compare our approach with pruning random filters and largest filters. As shown in Figure 8, pruning the smallest filters outperforms pruning random filters for most of the layers at different pruning ratios. For example, smallest filter pruning has better accuracy than random filter pruning for all layers with the pruning ratio of 90%. The accuracy of pruning filters with the largest l1-norms drops quickly as the pruning ratio increases, which indicates the importance of filters with larger l1-norms.
Figure 8: Comparison of three pruning methods for VGG-16 on CIFAR-10: pruning the smallest filters, pruning random filters and pruning the largest filters. In random filter pruning, the order of filters to be pruned is randomly permuted.
The activation-based feature map pruning method removes the feature maps with weak activation patterns and their corresponding filters and kernels (Polyak & Wolf (2015)), which needs sample data: the feature map x_{i+1,j} is generated by applying filter F_{i,j} ∈ R^{n_i × k × k} to the feature maps of the previous layer x_i ∈ R^{n_i × w_i × h_i}, i.e., x_{i+1,j} = F_{i,j} * x_i. Given N randomly selected images {x^n_1}_{n=1}^{N} from the training set, the statistics of each feature map can be estimated with one epoch forward pass of the N sampled data. Note that we calculate statistics on the feature maps generated from the convolution operations before batch normalization or non-linear activation. We compare our l1-norm based filter pruning with feature map pruning using the following criteria: σ_mean-mean(x_{i,j}) = (1/N) ∑_{n=1}^{N} mean(x^n_{i,j}), σ_mean-std(x_{i,j}) = (1/N) ∑_{n=1}^{N} std(x^n_{i,j}), σ_mean-l1(x_{i,j}) = (1/N) ∑_{n=1}^{N} ||x^n_{i,j}||_1, σ_mean-l2(x_{i,j}) = (1/N) ∑_{n=1}^{N} ||x^n_{i,j}||_2, and
ResNets, which first reduces the dimension of input feature maps for the residual layer and then increases the dimension to match the identity mapping..\n100 CIFAR10,VGG-16, prune filters with smallest -norm CIFAR10,VGG-16, prune random filters CIFAR10,VGG-16, prune filters with largest -norm 100 100 80 conv_1 64 80 BC conv_2 64 conv_3 128 conv_4 128 60 conv_5 256 60 ACeenrey 0 conv_6 256 conv_7 256 40 conv_8 512 conv_9 512 conv_10 512 20 conv_11 512 20 20 0conv_12 512 conv_13 512 20 40 60 80 100 20 40 60 80 100 20 40 60 80 100 Filters Pruned Away(%) Filters Pruned Away(%) Filters Pruned Away(%)\nCIFAR10,VGG-16, prune filters with smallest l-norm CIFAR10, VGG-16, prune random filters 100 CIFAR10,VGG-16, prune filters with largest -norm 100 100 conv_1 64 30 conv 264 conv_3128 conv_4 128 50 conv_5 256 conv_6256 oconv_7 256 40 conv_8 512 conv_9 512 conv_10 512 20 conv 11 512 20 20 conv_12 512 conv_13 512 20 40 60 80 100 20 40 60 80 100 20 40 60 80 100 Filters Pruned Away(%) Filters Pruned Away(%) Filters Pruned Away(%)\nFigure 9: Comparison of activation-based feature ma oruning for VGG-16 on CIFAR-10\nOvar-e,(xi,j) = var({||x, ||2}N-1), where mean, std and var are standard statistics (average. standard deviation and variance) of the input. Here, Ovar-l, is the contribution variance of channel. criterion proposed in Polyak & Wolf|(2015), which is motivated by the intuition that an unimportant feature map has almost similar outputs for the whole training data and acts like an additional bias."}, {"section_index": "9", "section_name": "5 CONCLUSIONS", "section_text": "Modern CNNs often have high capacity with large training and inference costs. In this paper we present a method to prune filters with relatively low weight magnitudes to produce CNNs with reduced computation costs without introducing irregular sparsity. It achieves about 30% reduction in FLOP for VGGNet (on CIFAR-1O) and deep ResNets without significant loss in the original accuracy Instead of pruning with specific layer-wise hayperparameters and time-consuming iterative retraining we use the one-shot pruning and retraining strategy for simplicity and ease of implementation. 
The estimation of the criteria becomes more accurate when more sample data is used. Here we use the whole training set (N = 50,000 for CIFAR-10) to compute the statistics. The performance of feature map pruning with the above criteria for each layer is shown in Figure 9. Smallest filter pruning outperforms feature map pruning with the criteria σ_mean-mean, σ_mean-l1, σ_mean-l2 and σ_var-l2. The σ_mean-std criterion has better or similar performance to the l1-norm up to a pruning ratio of 60%. However, its performance drops quickly after that, especially for layers conv_1, conv_2 and conv_3. We find the l1-norm is a good heuristic for filter selection considering that it is data free.
"}, {"section_index": "9", "section_name": "5 CONCLUSIONS", "section_text": "Modern CNNs often have high capacity with large training and inference costs. In this paper we present a method to prune filters with relatively low weight magnitudes to produce CNNs with reduced computation costs without introducing irregular sparsity. It achieves about 30% reduction in FLOP for VGGNet (on CIFAR-10) and deep ResNets without significant loss in the original accuracy. Instead of pruning with specific layer-wise hyperparameters and time-consuming iterative retraining, we use the one-shot pruning and retraining strategy for simplicity and ease of implementation. By performing lesion studies on very deep CNNs, we identify layers that are robust or sensitive to pruning, which can be useful for further understanding and improving the architectures.
"}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank the anonymous reviewers for their valuable feedback.
"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Matthieu Courbariaux and Yoshua Bengio. Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In NIPS, 2013.
Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. EIE: Efficient Inference Engine on Compressed Deep Neural Network. In ISCA, 2016a.
Song Han, Huizi Mao, and William J Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In ICLR, 2016b.
Babak Hassibi and David G Stork. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon. In NIPS, 1993.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.
Forrest Iandola, Matthew Moskewicz, Khalid Ashraf, Song Han, William Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360, 2016.
Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training CNNs with Low-Rank Filters for Efficient Image Classification. In ICLR, 2016.
Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. 2015.
Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet Classification with Deep Convolutional Neural Networks. In NIPS, 2012.
Andrew Lavin and Scott Gray. Fast Algorithms for Convolutional Neural Networks. In CVPR, 2016.
Yann Le Cun, John S Denker, and Sara A Solla. Optimal Brain Damage. In NIPS, 1989.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in Network. arXiv preprint arXiv:1312.4400, 2013.
Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse Convolutional Neural Networks. In CVPR, 2015.
Zelda Mariet and Suvrit Sra. Diversity Networks. In ICLR, 2016.
Adam Polyak and Lior Wolf. Channel-Level Acceleration of Deep Face Representations. IEEE Access, 2015.
Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR, 2015.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 2014.
Cheng Tai, Tong Xiao, Xiaogang Wang, and Weinan E. Convolutional neural networks with low-rank regularization. In ICLR, 2016.
Sergey Zagoruyko. 92.45% on CIFAR-10 in Torch. http://torch.ch/blog/2015/07/30/cifar.html, 2015.
Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating Very Deep Convolutional Networks for Classification and Detection. IEEE T-PAMI, 2015a.
Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. In CVPR, 2015b.
Hao Zhou, Jose Alvarez, and Fatih Porikli. Less Is More: Towards Compact CNNs. In ECCV, 2016.
We compare the l1-norm with the l2-norm for filter pruning. As shown in Figure 10, the l1-norm works slightly better than the l2-norm for layer conv_2. There is no significant difference between the two norms for other layers.
Figure 10: Comparison of l1-norm and l2-norm based filter pruning for VGG-16 on CIFAR-10. (a) ||F_{i,j}||_1 (b) ||F_{i,j}||_2.
"}, {"section_index": "12", "section_name": "6.2 FLOP AND WALL-CLOCK TIME", "section_text": "FLOP is a commonly used measure to compare the computation complexities of CNNs. It is easy to compute and can be done statically, which is independent of the underlying hardware and software implementations. Since we physically prune the filters by creating a smaller model and then copy the weights, there are no masks or sparsity introduced to the original dense BLAS operations. Therefore, the FLOP and wall-clock time of the pruned model are the same as creating a model with a smaller number of filters from scratch.
We report the inference time of the original model and the pruned model on the test set of CIFAR-10 and the validation set of ILSVRC 2012, which contain 10,000 32x32 images and 50,000 224x224 images respectively. The ILSVRC 2012 dataset is used only for ResNet-34. The evaluation is conducted in Torch7 with a Titan X (Pascal) GPU and cuDNN v5.1, using a mini-batch size of 128. As shown in Table 3, the saved inference time is close to the FLOP reduction. Note that the FLOP number only considers the operations in the Conv and FC layers, while some calculations such as Batch Normalization and other overheads are not accounted for.
Table 3: The reduction of FLOP and wall-clock time for inference.
Model | FLOP | Pruned % | Time (s) | Saved %
VGG-16 | 3.13x10^8 | | 1.23 |
VGG-16-pruned-A | 2.06x10^8 | 34.2% | 0.73 | 40.7%
ResNet-56 | 1.25x10^8 | | 1.31 |
ResNet-56-pruned-B | 9.09x10^7 | 27.6% | 0.99 | 24.4%
ResNet-110 | 2.53x10^8 | | 2.38 |
ResNet-110-pruned-B | 1.55x10^8 | 38.6% | 1.86 | 21.8%
ResNet-34 | 3.64x10^9 | | 36.02 |
ResNet-34-pruned-B | 2.76x10^9 | 24.2% | 22.93 | 28.0%"}]
H1GEvHcee | [{"section_index": "0", "section_name": "ANNEALING GAUSSIAN INTO RELU: A NEW SAMPLING STRATEGY FOR LEAKY-RELU RBM", "section_text": "Chun-Liang Li, Siamak Ravanbakhsh, Barnabas Poczos
{chunlial,mravanba,bapoczos}@cs.cmu.edu
"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Restricted Boltzmann Machine (RBM) is a bipartite graphical model that is used as the building block in energy-based deep generative models. Due to its numerical stability and quantifiability of its likelihood, RBM is commonly used with Bernoulli units. Here, we consider an alternative member of the exponential family RBM with leaky rectified linear units - called leaky RBM. We first study the joint and marginal distributions of the leaky RBM under different leakiness, which leads to an interesting interpretation of the leaky RBM model as a truncated Gaussian distribution. We then propose a simple yet efficient method for sampling from this model, where the basic idea is to anneal the leakiness rather than the energy - i.e., start from a fully Gaussian/linear unit and gradually decrease the leakiness over iterations. This serves as an alternative to the annealing of the temperature parameter and enables numerical estimation of the likelihood that is more efficient and far more accurate than the commonly used annealed importance sampling (AIS). We further demonstrate that the proposed sampling algorithm enjoys relatively faster mixing than the contrastive divergence algorithm, which improves the training procedure without any additional computational cost.
"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "In this paper, we are interested in deep generative models. One may naively classify these models into a family of directed deep generative models trainable by back-propagation (e.g., Kingma & Welling, 2013; Goodfellow et al., 2014), and deep energy-based models, such as deep belief network (Hinton et al., 2006) and deep Boltzmann machine (Salakhutdinov & Hinton, 2009). The building block of deep energy-based models is a bipartite graphical model called restricted Boltzmann machine (RBM). The RBM model consists of two layers, visible and hidden. The resulting graphical model can account for higher-order interactions of the visible units (visible layer) using the hidden units (hidden layer). It also makes the inference easier in that there are no interactions between the variables in each layer.
The conventional RBM uses Bernoulli units for both the hidden and visible units (Smolensky, 1986). One extension is using Gaussian visible units to model general natural images (Freund & Haussler, 1994). For hidden units, we can also generalize Bernoulli units to the exponential family (Welling et al., 2004; Ravanbakhsh et al., 2016).
Nair & Hinton (2010) propose a variation using Rectified Linear Units (ReLU) for the hidden layer with a heuristic sampling procedure, which has promising performance in terms of reconstruction error and classification accuracy. Unfortunately, due to its lack of strict monotonicity, the ReLU RBM does not fit within the framework of exponential family RBMs (Ravanbakhsh et al., 2016). Instead we study the leaky-ReLU RBM (leaky RBM) in this work and address two important issues: i) a better training (sampling) algorithm for ReLU RBM and ii) a better quantification of leaky RBM - i.e., evaluation of its performance in terms of likelihood.
We study some of the fundamental properties of leaky RBM, including its joint and marginal distributions (Section 2).
By analyzing these distributions, we show that the leaky RBM is a union of truncated Gaussian distributions. In this paper, we show that training leaky RBM involves underlying positive definite constraints. Because of this, the training can diverge if these constraints are not satisfied. This is an issue that was previously ignored in ReLU RBM, as it was mainly used for pre-training rather than generative modeling.
Our contribution in this paper is three-fold: I) we systematically identify and address model constraints in leaky RBM (Section 3); II) for the training of leaky RBM, we propose a meta algorithm for sampling, which anneals leakiness during the Gibbs sampling procedure (Section 3), and empirically show that it can boost contrastive divergence with faster mixing (Section 5); III) we demonstrate the power of the proposed sampling algorithm on estimating the partition function. In particular, comparison on several benchmark datasets shows that the proposed method outperforms the conventional AIS (Salakhutdinov & Murray, 2008) in terms of efficiency and accuracy (Section 4). Moreover, we provide an incentive for using leaky RBM by showing that the leaky ReLU hidden units perform better than the Bernoulli units in terms of the model log-likelihood (Section 4).
The Boltzmann distribution is defined as p(x) = e^{-E(x)}/Z where Z = ∑_x e^{-E(x)} is the partition function. Restricted Boltzmann Machine (RBM) is a Boltzmann distribution with a bipartite structure. It is also the building block for many deep models (e.g., Hinton et al., 2006; Salakhutdinov & Hinton, 2009; Lee et al., 2009), which are widely used in numerous applications (Bengio, 2009). The conventional Bernoulli RBM models the joint probability p(v, h) for the visible units v ∈ [0,1]^I and the hidden units h ∈ [0,1]^J as p(v, h) ∝ exp(-E(v, h)), where E(v, h) = -a^T v - v^T W h - b^T h. The parameters are a ∈ R^I, b ∈ R^J and W ∈ R^{I×J}. We can derive the conditional probabilities as
p(v_i = 1 | h) = σ(∑_{j=1}^{J} W_{ij} h_j + a_i)  and  p(h_j = 1 | v) = σ(∑_{i=1}^{I} W_{ij} v_i + b_j),   (1)
where σ(x) = 1/(1 + e^{-x}) is the sigmoid function.
One extension of Bernoulli RBM is replacing the binary visible units by linear units v ∈ R^I with independent Gaussian noise. The energy function in this case is given by
E(v, h) = ∑_{i=1}^{I} (v_i - a_i)^2 / (2σ_i^2) - ∑_{i=1}^{I} ∑_{j=1}^{J} (v_i / σ_i) W_{ij} h_j - ∑_{j=1}^{J} b_j h_j.
To simplify the notation, we assume normalized data so that a_i and σ_i are no longer required. (The elimination does not influence the discussion and one can easily extend all the results in this paper to the model that includes a_i and σ_i.) The conditional distributions are as follows:
p(v_i | h) = N(∑_{j=1}^{J} W_{ij} h_j, 1)  and  p(h_j = 1 | v) = σ(∑_{i=1}^{I} W_{ij} v_i + b_j),   (2)
where N(μ, V) is a Gaussian distribution with mean μ and variance V. To simplify the notation, in the following we define n_j = ∑_{i=1}^{I} W_{ij} v_i + b_j - that is, n_j is the input to the j-th hidden layer neuron - and similarly define ν_i = ∑_{j=1}^{J} W_{ij} h_j + a_i. Using this notation the conditionals in (2) are p(v_i | ν_i) = N(ν_i, 1) and p(h_j = 1 | n_j) = σ(n_j).
From (1) and (2), we can see that the mean of p(h_j | v) is the nonlinearity of the hidden unit at n_j = ∑_{i=1}^{I} W_{ij} v_i + b_j - e.g., the mean of the Bernoulli unit is the sigmoid function. From this perspective, we can extend the sigmoid function to other functions and thus allow RBM to have more expressive power (Welling et al., 2004; Ravanbakhsh et al., 2016). In particular, it would be interesting to use rectified linear unit (ReLU) nonlinearity, f(η) = max(0, η), for generative modeling.
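For concreteness, here is a minimal NumPy sketch of one Gibbs sweep of the normalized Gaussian-Bernoulli RBM using the conditionals in (2); the toy dimensions and initialization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def gibbs_step(v, W, b):
    # h_j ~ Bernoulli(sigmoid(n_j)), then v_i ~ N(sum_j W_ij h_j, 1).
    n = v @ W + b                                  # inputs to hidden units
    h = (rng.random(n.shape) < 1.0 / (1.0 + np.exp(-n))).astype(float)
    v_new = h @ W.T + rng.standard_normal(v.shape)
    return v_new, h

I, J = 5, 3
W, b = 0.1 * rng.standard_normal((I, J)), np.zeros(J)
v = rng.standard_normal(I)
for _ in range(100):
    v, h = gibbs_step(v, W, b)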
By Ravanbakhsh et al. (2016), the conditional probability of the activation, assuming the nonlinearity f(n_j), is generally defined as p(h_j | v) = exp(-D_f(n_j || h_j) + g(h_j)), where D_f(n_j || h_j) is the Bregman divergence associated with f, and g(h_j) is the base (or carrier) measure in the exponential family which ensures the distribution is well-defined. For strictly monotonic f, the Bregman divergence is D_f(n_j || h_j) = F(n_j) + F*(h_j) - n_j h_j, where F is the anti-derivative (integral) of f and F* is the anti-derivative of f^{-1} (i.e., f^{-1}(f(η)) = η). Note that due to the strict monotonicity of f, f^{-1} is well-defined, and F and F* are commonly referred to as conjugate duals.
Nair & Hinton (2010) use an RBM with visible Gaussian units and ReLU hidden activation functions for pretraining. They suggest sampling from max(0, η + N(0, σ(η))) for conditional sampling from the hidden units (compare to (2)). However, this sampling heuristic does not suggest the parametric form of the joint ReLU-Gaussian distribution. This also means we cannot evaluate it using methods such as Annealed Importance Sampling that require access to this parametric form. In fact, only strictly monotonic activation functions can derive feasible joint and conditional distributions in the exponential family RBM, and ReLU is not strictly monotonic (Ravanbakhsh et al., 2016). Similar activation functions that are monotonic are Softplus, f(η) = log(1 + e^η), and leaky ReLU (Maas et al., 2013), defined as f(η) = max(cη, η), where c ∈ (0, 1) is the leakiness parameter. In contrast to the ReLU RBM, the joint parametric forms of these two distributions are available. However, the energy (logarithm of the joint probability) in the case of the Softplus activation function contains a polylogarithmic term that requires evaluation of an infinite series; see Table 1 in Ravanbakhsh et al. (2016). For this reason, here we focus on the leaky ReLU activation function.
Considering the leaky ReLU activation function f(η) = max(cη, η), using this formalism, the conditional distributions of hidden units in the leaky RBM simplify to (see Appendix A.1 for details)
p(h_j | v) = N(n_j, 1) if n_j > 0, and p(h_j | v) = N(c n_j, c) if n_j ≤ 0.   (3)
Since the visible units use the identity function, the corresponding conditional distribution is a Gaussian
p(v_i | h) = N(∑_{j=1}^{J} W_{ij} h_j, 1).   (4)
Having these two conditional distributions is enough for training a leaky RBM model using contrastive divergence (Hinton, 2002) or some other alternatives (e.g., Tieleman, 2008; Tieleman & Hinton, 2009).
Given the conditional distributions p(v | h) and p(h | v), the joint distribution p(v, h) from the general treatment for MRF models is (Yang et al., 2012; Ravanbakhsh et al., 2016)
p(v, h) ∝ exp(∑_{i=1}^{I} ∑_{j=1}^{J} v_i W_{ij} h_j - ∑_i (F*(v_i) + g(v_i)) - ∑_j (F*(h_j) + g(h_j))),   (5)
which for the leaky RBM becomes
p(v, h) ∝ exp(v^T W h + b^T h - ||v||^2/2 - ∑_{n_j>0} h_j^2/2 - ∑_{n_j≤0} h_j^2/(2c)),
and the corresponding visible marginal distribution is
p(v) ∝ exp(-(1/2) v^T (I - ∑_{n_j>0} W_j W_j^T - c ∑_{n_j≤0} W_j W_j^T) v + ∑_{n_j>0} b_j W_j^T v + c ∑_{n_j≤0} b_j W_j^T v).   (6)
Figure 1: A two dimensional example with 3 hidden units.
Figure 2: A one dimensional example of truncated Gaussian distributions with different variances.
Figure 3: A three dimensional example with 3 hidden units, where the W_j are orthogonal to each other.
From (6) we see that the marginal probability is determined by the affine constraints n_j > 0 or n_j ≤ 0 for all hidden units j. By combinatorics, these constraints divide R^I (the visible domain) into at most M = ∑_{i=0}^{I} (J choose i) convex regions R_1, ..., R_M. An example with I = 2 and J = 3 is shown in Figure 1. If I > J, then we have at most 2^J regions.
We discuss the two types of these regions. For bounded regions, such as R_1 in Figure 1, the integration of (6) is also bounded, which results in a valid distribution. Before we discuss the unbounded cases, we define Σ = I - ∑_{j=1}^{J} α_j W_j W_j^T, where α_j = 1_{n_j>0} + c 1_{n_j≤0}. For the unbounded regions, if Σ ∈ R^{I×I} is a positive definite (PD) matrix, then the probability density is proportional to a multivariate Gaussian distribution with mean μ = Σ^{-1}(∑_j α_j b_j W_j) and precision matrix Σ (covariance matrix Σ^{-1}), but over an affine-constrained region.
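To make the region-wise precision matrix concrete, here is a minimal NumPy sketch computing Σ for one sign pattern of the hidden inputs and checking positive definiteness; the toy dimensions, leakiness value and sign pattern are illustrative assumptions.
import numpy as np

def region_precision(W, alpha):
    # Sigma = I - sum_j alpha_j w_j w_j^T for one region, where alpha_j
    # equals 1 if n_j > 0 in that region and c otherwise.
    I_dim = W.shape[0]
    Sigma = np.eye(I_dim)
    for j in range(W.shape[1]):
        w = W[:, j:j+1]
        Sigma -= alpha[j] * (w @ w.T)
    return Sigma

rng = np.random.default_rng(0)
W = 0.3 * rng.standard_normal((5, 3))
c = 0.1
alpha = np.array([1.0, c, c])                  # one sign pattern of (n_1, n_2, n_3)
Sigma = region_precision(W, alpha)
print(np.all(np.linalg.eigvalsh(Sigma) > 0))   # positive definite?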
Given the conditional distributions p(v|h) and p(h|v), the joint distribution p(v, h) from the general treatment for MRF models is (Yang et al., 2012; Ravanbakhsh et al., 2016)

p(v, h) ∝ exp( v^T W h − Σ_{i=1}^{I} (F*(v_i) + g(v_i)) − Σ_{j=1}^{J} (F*(h_j) + g(h_j)) ),   (5)

which for the leaky RBM becomes

p(v, h) ∝ exp( v^T W h + b^T h − ||v||^2/2 − Σ_{η_j>0} h_j^2/2 − Σ_{η_j≤0} h_j^2/(2c) ),

and the corresponding visible marginal distribution is

p(v) ∝ exp( −(1/2) v^T ( I − Σ_{η_j>0} W_j W_j^T − c Σ_{η_j≤0} W_j W_j^T ) v + Σ_{η_j>0} b_j W_j^T v + c Σ_{η_j≤0} b_j W_j^T v ).   (6)

From (6) we see that the marginal probability is determined by the affine constraints η_j > 0 or η_j ≤ 0 for all hidden units j. By combinatorics, these constraints divide R^I (the visible domain) into at most M = Σ_{i=0}^{I} (J choose i) convex regions R_1, ..., R_M. An example with I = 2 and J = 3 is shown in Figure 1. If I ≥ J, then we have at most 2^J regions.

Figure 1: A two dimensional example with 3 hidden units. Figure 2: A one dimensional example of truncated Gaussian distributions with different variances. Figure 3: A three dimensional example with 3 hidden units, where the W_j are orthogonal to each other.

We discuss the two types of these regions. For bounded regions, such as R_1 in Figure 1, the integration of (6) is also bounded, which results in a valid distribution. Before we discuss the unbounded cases, we define Σ = I − Σ_{j=1}^{J} α_j W_j W_j^T, where α_j = 1_{η_j>0} + c·1_{η_j≤0}. For an unbounded region, if Σ ∈ R^{I×I} is a positive definite (PD) matrix, then the probability density is proportional to a multivariate Gaussian distribution with some mean μ and precision matrix Σ (covariance matrix Σ^{−1}), but over an affine-constrained region. Therefore, the distribution of each unbounded region can be treated as a truncated Gaussian distribution, and the marginal distribution can be treated as a union of truncated Gaussian distributions. Note that leaky RBM is different from Su et al. (2017), which uses a single truncated Gaussian distribution to model the joint (conditional) distributions and requires approximate and more complicated sampling algorithms for the truncated Gaussian distribution, while leaky RBM only requires sampling from Gaussian distributions.

On the other hand, if Σ is not PD, and the region R_i contains the eigenvectors with negative eigenvalues of Σ, the integration of (6) over R_i is divergent (infinite), which cannot result in a valid probability distribution. In practice, with this type of parameter, when we do Gibbs sampling on the conditional distributions, the sampling will diverge. However, it is infeasible to check exponentially many regions for each gradient update.

Theorem 1. If I − WW^T is positive definite, then I − Σ_j α_j W_j W_j^T is also positive definite, for all α_j ∈ [0, 1].

The proof is shown in Appendix B. From Theorem 1 we can see that if the constraint I − WW^T ≻ 0 holds, then one can guarantee that the distribution of every region is a valid truncated Gaussian distribution. Therefore, we introduce the following projection step for each W after the gradient update:

W̃ = argmin_{W̃} ||W − W̃||_F^2   s.t.   I − W̃W̃^T ≻ 0.   (7)

Theorem 2. The above projection step (7) can be done by shrinking the singular values to be less than 1.

The proof is shown in Appendix C. The training algorithm of the leaky RBM is shown in Algorithm 1. By using the projection step (7), we can treat the leaky RBM as a union of truncated Gaussian distributions, which uses the weight vectors to divide the space of visible units into several regions and uses a truncated Gaussian distribution to model each region. Note that the leaky RBM model is different from Su et al. (2016), which uses a truncated Gaussian distribution to model the conditional distribution p(h|v) instead of the marginal distribution.

The empirical study of the divergent values and the necessity of the projection step is shown in Appendix D. Without the projection step, when we run Gibbs sampling for several iterations from the model, the sampled values will diverge because the model does not have a valid marginal distribution p(v). It also implies that we cannot train leaky RBM with larger CD steps without projection, which would result in divergent gradients. The detailed discussion is shown in Appendix D.
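To make the projection step (7) concrete, the following is a minimal NumPy sketch based on Theorem 2, i.e., shrinking the singular values of W below 1. The small margin parameter is our own addition to keep the constraint strictly feasible; it is not specified in the paper.

    import numpy as np

    def project_weights(W, margin=1e-3):
        # Theorem 2: enforce I - W W^T > 0 by shrinking singular values below 1.
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        s = np.minimum(s, 1.0 - margin)   # margin keeps the constraint strictly feasible
        return U @ np.diag(s) @ Vt

In a training loop this would be applied once after every gradient update on W.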
If we set the leakiness c to be 1, then (6) becomes a simple multivariate Gaussian distribution N((I − WW^T)^{−1}Wb, (I − WW^T)^{−1}), which can be easily sampled without Gibbs sampling. Also, the projection step (7) guarantees it is a valid Gaussian distribution. Then we decrease the leakiness by a small ε, and use samples from the multivariate Gaussian distribution when c = 1 as the initialization to do Gibbs sampling. Note that the distribution of each region is a truncated Gaussian distribution. When we only decrease the leakiness by a small amount, the resulting distribution is a "similar" truncated Gaussian distribution with more concentrated density. From this observation, we can expect the original multivariate Gaussian distribution to serve as a good initialization. A one-dimensional example is shown in Figure 2. We then repeat this procedure until we reach the target leakiness. The algorithm can be seen as annealing the leakiness during the Gibbs sampling procedure. The meta algorithm is shown in Algorithm 2. Next, we show the proposed sampling algorithm can help both the partition function estimation and the training of leaky RBM.

Algorithm 2: Sampling from leaky RBM
  Sample v from N((I − WW^T)^{−1}Wb, (I − WW^T)^{−1})
  ε = (1 − c)/T and c′ = 1
  for t = 1, ..., T do
    Decrease c′ = c′ − ε and perform Gibbs sampling by using (3) and (4) with leakiness c′
  end for

"}, {"section_index": "3", "section_name": "PARTITION FUNCTION ESTIMATION", "section_text": "Gibbs sampling is the core procedure for RBM, including training, inference, and estimating the partition function (Fischer & Igel, 2012; Tieleman, 2008; Salakhutdinov & Murray, 2008). For every task, we start from randomly initializing v by an arbitrary distribution q, and iteratively sample from the conditional distributions. Gibbs sampling guarantees the procedure results in the stationary distribution in the long run for any initial distribution q. However, if q is close to the target distribution p, it can significantly shorten the number of iterations needed to achieve the stationary distribution.

It is known that estimating the partition function of RBM is intractable (Salakhutdinov & Murray, 2008). Existing approaches, including Salakhutdinov & Murray (2008); Grosse et al. (2013); Liu et al. (2015); Carlson et al. (2016), focus on using sampling to approximate the partition function of the conventional Bernoulli RBM instead of the RBM with Gaussian visible units and non-Bernoulli hidden units. In this paper, we focus on extending the classic annealed importance sampling (AIS) algorithm (Salakhutdinov & Murray, 2008) to leaky RBM.

Assuming that we want to estimate the partition function Z of p(v), with p(v) = p*(v)/Z and p*(v) ∝ Σ_h exp(−E(v, h)), Salakhutdinov & Murray (2008) start from an initial distribution p_0(v) ∝ Σ_h exp(−E_0(v, h)), where computing the partition function Z_0 of p_0(v) is tractable and we can draw samples from p_0(v). They then use the "geometric path" to anneal the intermediate distributions as p_k(v) ∝ Σ_h exp(−β_k E_0(v, h) − (1 − β_k)E(v, h)), where they grid β_k from 1 to 0. If we let β_0 = 1, we can draw samples v_k from p_k(v) by using samples v_{k−1} from p_{k−1}(v) for k ≥ 1. Salakhutdinov & Murray (2008) use an initial distribution with independent visible units and without hidden units, which results in a multivariate Gaussian distribution p_0(v). Compared with the meta algorithm shown in Algorithm 2, which anneals between leakinesses, AIS anneals between energy functions.
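Before using Algorithm 2 for estimation, here is a minimal NumPy sketch of the annealed-leakiness sampler it describes, reusing gibbs_sweep from the sketch in Section 3; the schedule and variable names are our own illustration.

    import numpy as np

    def sample_leaky_rbm_annealed(W, b, T, c_target, n, rng):
        # Algorithm 2: start from the exact Gaussian at c = 1, then anneal the
        # leakiness down to c_target, doing one Gibbs sweep per intermediate value.
        I = W.shape[0]
        prec = np.eye(I) - W @ W.T                  # precision matrix at c = 1
        cov = np.linalg.inv(prec)
        mu = cov @ (W @ b)
        v = rng.multivariate_normal(mu, cov, size=n)
        for c in np.linspace(1.0, c_target, T + 1)[1:]:
            v, h = gibbs_sweep(v, W, b, c, rng)     # gibbs_sweep from the earlier sketch
        return v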
Table 1: The true partition function for leaky-ReLU RBM with different numbers of hidden units and b = 0.

"}, {"section_index": "4", "section_name": "4.1 STUDY ON TOY EXAMPLES", "section_text": "As we discussed in Section 3.1, leaky RBM with J hidden units is a union of 2^J truncated Gaussian distributions. Here we perform a study on the leaky RBM with a small number of hidden units. Since in this example the number of hidden units is small, we can integrate out all possible configurations of h. However, integrating a truncated Gaussian distribution with general affine constraints does not have an analytical solution, and several approximations have been developed (e.g., Pakman & Paninski, 2014). To compare our results with the exact partition function, we consider a special case that has the following form:

p(v) ∝ exp( −(1/2) v^T ( I − Σ_{η_j>0} W_j W_j^T − c Σ_{η_j≤0} W_j W_j^T ) v ).

Compared to (6), it is equivalent to the setting where b = 0. Geometrically, every W_j passes through the origin. We further put the additional constraint W_j ⊥ W_{j′}, ∀j ≠ j′. Therefore, we divide the whole space into 2^J equally-sized regions. A three dimensional example is shown in Figure 3. Then the partition function of this special case has the analytical form

Z = Σ_{α_j ∈ {1,c}, ∀j} (1/2^J) (2π)^{I/2} Π_{j=1}^{J} (1 − α_j ||W_j||^2)^{−1/2}.   (8)

We randomly initialize W and use SVD to make the columns orthogonal. Also, we scale ||W_j|| to satisfy I − WW^T ≻ 0. The leakiness parameter is set to 0.01. For Salakhutdinov & Murray (2008) (AIS-Energy), we use 10^5 particles with 10^5 intermediate distributions. For the proposed method (AIS-Leaky), we use only 10^4 particles with 10^3 intermediate distributions. In this small problem we study the cases when the model has 5, 10, 20 and 30 hidden units and 3072 visible units. The true log partition function log Z is shown in Table 1, and the differences between log Z and the estimates given by the two algorithms are shown in Table 2.

From Table 2, we observe that AIS-Leaky has significantly better and more stable estimates than AIS-Energy, and this gap increases as we increase the number of hidden units. AIS-Leaky achieves this with orders-of-magnitude reduced computation — e.g., here it uses ~0.1% of the resources used by conventional AIS. For example, when we increase J from 5 to 30, the bias (difference) of AIS-Leaky only increases from 0.02 to 0.13; however, the bias of AIS-Energy increases from 1.76 to 9.6. We further study the implicit connection between the proposed AIS-Leaky and AIS-Energy in Appendix E, which shows AIS-Leaky is a special case of AIS-Energy under certain conditions.
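For reference, the exact log Z of the orthogonal toy construction in (8) can be computed by direct enumeration; a minimal NumPy sketch follows (feasible only for modest J, since it enumerates 2^J terms).

    import numpy as np
    from itertools import product

    def exact_log_partition(W, c):
        # Exact Z for the toy case (8): orthogonal columns of W and b = 0.
        I, J = W.shape
        norms2 = np.sum(W**2, axis=0)
        logZs = []
        for alpha in product([1.0, c], repeat=J):
            # each orthant contributes 2^-J of a full Gaussian normalizer
            logdet = np.sum(np.log(1.0 - np.array(alpha) * norms2))
            logZs.append(-J * np.log(2) + 0.5 * I * np.log(2 * np.pi) - 0.5 * logdet)
        m = max(logZs)  # log-sum-exp for numerical stability
        return m + np.log(np.sum(np.exp(np.array(logZs) - m)))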
Table 2: The difference between the true partition function and the estimations of the two algorithms, with standard deviation.

Table 3: The log-likelihood performance of Bernoulli-Gaussian RBM and leaky RBM.

It is known that the reconstruction error is not a proper approximation of the likelihood (Hinton, 2012). One commonly adopted way to compare generative models is to sample from the model and visualize the images to check the quality. However, Theis et al. (2016) show that better visualization does not imply better likelihood. Also, a single layer model cannot adequately model complicated natural images (the result for Bernoulli-Gaussian RBM has been shown in Ranzato & Hinton (2010)), which makes the visualization comparison difficult (Appendix F has a few visualization results).

Fortunately, our accurate estimate of the partition function for leaky RBM can produce a reliable quantitative estimate of the representation power of leaky RBM. We compare with the Bernoulli-Gaussian RBM, which has Bernoulli hidden units and Gaussian visible units. We trained both models with CD-20 and momentum. For both models, we used 500 hidden units. We initialized W by sampling from Unif(0, 0.01), a = 0, b = 0 and σ = 1. The momentum parameter was 0.9 and the batch size was set to 100. We tuned the learning rate between 10^{−1} and 10^{−6}. We studied two benchmark data sets, CIFAR10 and SVHN. The data was normalized to have zero mean and standard deviation of 1 for each pixel. The results of the log-likelihood are reported in Table 3.

From Table 3, leaky RBM outperforms Bernoulli-Gaussian RBM significantly. The unsatisfactory performance of Bernoulli-Gaussian RBM may be in part due to the optimization procedure. If we tune the decay schedule of the learning rate for each dataset in an ad-hoc way, we observe that the performance of Bernoulli-Gaussian RBM can be improved by ~300 nats for both datasets. Also, increasing CD steps brings slight improvement. The other possibility is bad mixing during the CD iterations; the advanced algorithms of Tieleman (2008); Tieleman & Hinton (2009) may help. Although Nair & Hinton (2010) demonstrate the power of ReLU in terms of reconstruction error and classification accuracy, it does not imply its superior generative capability. Our study confirms leaky RBM can have much better generative performance compared to Bernoulli-Gaussian RBM.

In this section, we show that the idea of annealing between leakinesses benefits the mixing in Gibbs sampling in other settings. A common procedure for comparison of sampling methods for RBM is through visualization. Here, we are interested in more quantitative metrics and the practical benefits of improved sampling. For this, we consider optimization performance as the evaluation metric.

The gradient of the log-likelihood function L(θ|v_data) of general RBM models is

∂L(θ|v_data)/∂θ = −E_{h|v_data}[ ∂E(v, h)/∂θ ] + E_{v,h}[ ∂E(v, h)/∂θ ].   (9)

Since the second expectation in (9) is usually intractable, different approximation algorithms are used (Fischer & Igel, 2012).

We compare two gradient approximation procedures. The baselines are the conventional contrastive divergence (CD) (Hinton, 2002) and persistent contrastive divergence (PCD) (Tieleman, 2008). The second method uses Algorithm 2 (Leaky) with the same number of mixing steps as CD. The experiment setup is the same as that of Section 4.

Figure 4: Training leaky RBM with different sampling algorithms ((a) SVHN, (b) CIFAR10; log-likelihood versus Gibbs sampling iterations (×10^4) for CD, Mix, Leaky, and PCD).

The results are shown in Figure 4. The proposed sampling procedure is slightly better than typical CD steps. The reason is that we only anneal the leakiness for 20 steps; getting an accurate estimate requires thousands of steps, as shown in Section 4 when we estimate the partition function. Therefore, the estimated gradient is still inaccurate. However, it still outperforms the conventional CD algorithm. On the other hand, unlike the binary RBM case shown in Tieleman (2008), PCD does not outperform CD with 20 mixing steps for leaky RBM.
The drawback of Algorithm 2 is that sampling v from N((I − WW^T)^{−1}Wb, (I − WW^T)^{−1}) requires computing the mean, the covariance, and the Cholesky decomposition of the covariance matrix in every iteration, which is computationally expensive. We study a mixture algorithm that combines CD and the idea of annealing the leakiness. The mixture algorithm replaces the sampling from N((I − WW^T)^{−1}Wb, (I − WW^T)^{−1}) with sampling from the empirical data distribution. The resulting mix algorithm is almost the same as the CD algorithm, except that it anneals the leakiness over the iterations as in Algorithm 2. The results of the mix algorithm are also shown in Figure 4.

The mix algorithm is slightly worse than the original leaky algorithm, but it also outperforms the conventional CD algorithm without additional computation cost. The comparison in terms of CPU time is shown in Appendix F. Annealing the leakiness helps the mix algorithm explore different modes of the distribution, thereby improving the training. The idea could also be combined with more advanced algorithms (Tieleman, 2008; Tieleman & Hinton, 2009). (We studied the PCD extension of the proposed sampling algorithm; however, the performance is not as stable as CD.)
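A minimal sketch of the mix algorithm, assuming the gibbs_sweep helper from Section 3: it is a CD-style negative phase whose chains start at the data, with the leakiness annealed from 1 down to the target value over the k sweeps.

    import numpy as np

    def mix_negative_samples(v_data, W, b, k, c_target, rng):
        # Mix: CD-style chains initialized at the data, with the leakiness
        # annealed from 1 to c_target over the k Gibbs sweeps.
        v = v_data.copy()
        for c in np.linspace(1.0, c_target, k + 1)[1:]:
            v, h = gibbs_sweep(v, W, b, c, rng)
        return v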
In this paper, we study the properties of the exponential family distribution produced by leaky RBM. This study relates the leaky RBM model to truncated Gaussian distributions and reveals an underlying positive definite constraint of training leaky RBM. We further proposed a meta sampling algorithm, which anneals between leakinesses during the Gibbs sampling procedure. We first demonstrate that the proposed sampling algorithm is significantly more effective and efficient in estimating the partition function than the conventional AIS algorithm. Second, we show that the proposed sampling algorithm has comparatively better mixing properties (compared to CD). A few directions are worth further study; in particular, we are investigating speeding up the naive projection step, either using the barrier function as shown in Hsieh et al. (2011) or by eliminating the need for projection by artificially bounding the domain via additional constraints.

"}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Jorg Bornschein and Yoshua Bengio. Reweighted wake-sleep. In ICLR, 2015.

D. E. Carlson, P. Stinson, A. Pakman, and L. Paninski. Partition functions from rao-blackwellized tempered sampling. In ICML, 2016.

KyungHyun Cho, Tapani Raiko, and Alexander Ilin. Enhanced gradient for training restricted boltzmann machines. Neural Computation, 2013.

A. Fischer and C. Igel. An introduction to restricted boltzmann machines. In CIARP, 2012.

Y. Freund and D. Haussler. Unsupervised learning of distributions on binary vectors using two layer networks. Technical report, 1994.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.

R. B. Grosse, C. J. Maddison, and R. Salakhutdinov. Annealing between distributions by averaging moments. In NIPS, 2013.

G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 2002.

G. E. Hinton. A practical guide to training restricted boltzmann machines. In Neural Networks: Tricks of the Trade (2nd ed.), 2012.

G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.

C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. In NIPS, 2011.

D. P. Kingma and M. Welling. Auto-encoding variational bayes. CoRR, 2013.

H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.

Q. Liu, J. Peng, A. Ihler, and J. Fisher III. Estimating the partition function by discriminance sampling. In UAI, 2015.

A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML Workshop on Deep Learning for Audio, Speech, and Language Processing, 2013.

V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010.

A. Pakman and L. Paninski. Exact hamiltonian monte carlo for truncated multivariate gaussians. Journal of Computational and Graphical Statistics, 2014.

N. Parikh and S. Boyd. Proximal algorithms. Found. Trends Optim., 2014.

M. Ranzato and G. E. Hinton. Modeling pixel means and covariances using factorized third-order boltzmann machines. In CVPR, 2010.

S. Ravanbakhsh, B. Poczos, J. G. Schneider, D. Schuurmans, and R. Greiner. Stochastic neural networks with monotonic activation functions. In AISTATS, 2016.

R. Salakhutdinov and G. Hinton. Deep Boltzmann machines. In AISTATS, 2009.

R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In ICML, 2008.

T. Tieleman. Training restricted boltzmann machines using approximations to the likelihood gradient. In ICML, 2008.

T. Tieleman and G. Hinton. Using fast weights to improve persistent contrastive divergence. In ICML, 2009.

For leaky RBM, the activation function of the hidden units is defined as f(η_j) = max(cη_j, η_j), where c ∈ (0, 1) and η_j = Σ_{i=1}^{I} W_{ij} v_i + b_j. The inverse function of f is f^{−1}(h_j) = min(h_j, h_j/c). Therefore, the anti-derivatives are

F(η_j) = η_j^2/2 if η_j > 0, and F(η_j) = cη_j^2/2 otherwise;   F*(h_j) = h_j^2/2 if h_j > 0, and F*(h_j) = h_j^2/(2c) otherwise.

The activation function of the Gaussian visible units can be treated as the linear unit f(ν_i) = ν_i, where ν_i = Σ_{j=1}^{J} W_{ij} h_j. Following similar steps to those for deriving F and F*, we get the anti-derivatives F(ν_i) = ν_i^2/2 and F*(v_i) = v_i^2/2.

From Ravanbakhsh et al. (2016), the conditional distribution is defined as

p(h_j | η_j) = exp( η_j h_j − F(η_j) − F*(h_j) + g(h_j) ).   (12)

By plugging F and F* into (12), we get the conditional distribution for leaky RBM:

p(h_j | η_j) = N(η_j, 1) with g(h_j) = −(1/2) log(2π) if η_j > 0, and p(h_j | η_j) = N(cη_j, c) with g(h_j) = −(1/2) log(2πc) if η_j ≤ 0.

"}, {"section_index": "6", "section_name": "A.2 JOINT AND MARGINAL DISTRIBUTIONS", "section_text": "Given the conditional distributions p(v|h) and p(h|v), the joint distribution p(v, h) from the general treatment for MRF models given by Yang et al. (2012) is

p(v, h) ∝ exp( v^T W h − Σ_{i=1}^{I} (F*(v_i) + g(v_i)) − Σ_{j=1}^{J} (F*(h_j) + g(h_j)) ),

which for leaky RBM is

p(v, h) ∝ exp( v^T W h + b^T h − ||v||^2/2 − Σ_{η_j>0} h_j^2/2 − Σ_{η_j≤0} h_j^2/(2c) ).

Integrating out h gives the marginal distribution:

p(v) = ∫ p(v, h) dh ∝ exp(−||v||^2/2) Π_{η_j>0} ∫ exp(η_j h_j − h_j^2/2) dh_j · Π_{η_j≤0} ∫ exp(η_j h_j − h_j^2/(2c)) dh_j
     ∝ exp( −||v||^2/2 + Σ_{η_j>0} η_j^2/2 + Σ_{η_j≤0} cη_j^2/2 )
     ∝ exp( −(1/2) v^T ( I − Σ_{η_j>0} W_j W_j^T − c Σ_{η_j≤0} W_j W_j^T ) v + Σ_{η_j>0} b_j W_j^T v + c Σ_{η_j≤0} b_j W_j^T v ).

Proof of Theorem 1. Since WW^T − Σ_j α_j W_j W_j^T = Σ_j (1 − α_j) W_j W_j^T ⪰ 0, we have WW^T ⪰ Σ_j α_j W_j W_j^T. Therefore, I − Σ_j α_j W_j W_j^T ⪰ I − WW^T ≻ 0.

Proof of Theorem 2. Writing the SVD W = UΣV^T and the projected matrix as W̃ = UΣ̃V^T,

||W − W̃||_F^2 = ||UΣV^T − UΣ̃V^T||_F^2 = Σ_i (Σ_{ii} − Σ̃_{ii})^2,

so the objective in (7) is minimized, subject to I − W̃W̃^T ≻ 0, by shrinking each singular value to be less than 1.
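The conjugate-dual relations above are easy to check numerically; the following short NumPy snippet verifies f^{−1}(f(η)) = η and the Fenchel–Young equality F(η) + F*(f(η)) = η·f(η) used in the derivation.

    import numpy as np

    def f(eta, c):      return np.maximum(c * eta, eta)     # leaky ReLU
    def f_inv(h, c):    return np.minimum(h, h / c)
    def F(eta, c):      return np.where(eta > 0, eta**2 / 2, c * eta**2 / 2)
    def F_star(h, c):   return np.where(h > 0, h**2 / 2, h**2 / (2 * c))

    c = 0.01
    eta = np.linspace(-3, 3, 7)
    h = f(eta, c)
    assert np.allclose(f_inv(h, c), eta)                    # f^{-1}(f(eta)) = eta
    assert np.allclose(F(eta, c) + F_star(h, c), eta * h)   # Fenchel-Young equality at h = f(eta)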
"}, {"section_index": "7", "section_name": "D NECESSITY OF THE PROJECTION STEP", "section_text": "We conduct a short comparison to demonstrate that the projection step is necessary for the leaky RBM on generative tasks. We train two leaky RBMs as follows. The first model is trained with the same setting as in Section 4, using the convergence of the log-likelihood as the stopping criterion. The second model is trained by CD-1 with weight decay and without the projection step; we stop the training when the reconstruction error is less than 10^{−2}. After we train these two models, we run Gibbs sampling with 1000 independent chains for several steps and output the average value of the visible units. Note that the visible units are normalized to zero mean. The results on SVHN and CIFAR10 are shown in Figure 5.

Figure 5: Divergence results on two datasets ((a) SVHN, (b) CIFAR10; average visible-unit value versus Gibbs sampling iterations for the weight-decay and projection models).

From Figure 5, the model trained by weight decay without the projection step suffers from the problem of diverged values. It confirms the study shown in Section 3.1. It also implies that we cannot train leaky RBM with larger CD steps when we do not do the projection; otherwise, we would have divergent gradients. Therefore, the projection is necessary for training leaky RBM for the generative purpose. However, we also observe that the projection step is not necessary for the classification and reconstruction tasks. The reason may be the independence of different evaluation criteria (Hinton, 2012; Theis et al., 2016) or other implicit reasons to be studied.

We analyze the performance gap between AIS-Leaky and AIS-Energy. One major difference is the initial distribution. The intermediate marginal distribution of AIS-Energy has the following form:

p_k(v) ∝ exp( −(1/2) v^T ( I − (1 − β_k) Σ_{η_j>0} W_j W_j^T − (1 − β_k) c Σ_{η_j≤0} W_j W_j^T ) v ).

To address the higher bias problem of AIS-Energy, we replace the initial distribution with the one used in Algorithm 2. By elementary calculation, the marginal distribution becomes

p_k(v) ∝ exp( −(1/2) v^T ( I − Σ_{η_j>0} W_j W_j^T − (β_k + (1 − β_k)c) Σ_{η_j≤0} W_j W_j^T ) v ),

which recovers the proposed Algorithm 2. From this analysis, we understand that AIS-Leaky is a special case of conventional AIS-Energy with a better initialization inspired by the study in Section 3. Also, by this connection between AIS-Energy and AIS-Leaky, we note that AIS-Leaky can be combined with other extensions of AIS (Grosse et al., 2013; Burda et al., 2015) as well.

We show the sampled images from leaky RBM trained on the CIFAR10 and SVHN datasets. We randomly initialize 20 chains and run Gibbs sampling for 1000 iterations. The sampled results are shown in Figure 6. The results show that a single layer RBM does not adequately model CIFAR10 and SVHN when compared to multilayer models. The similar results for a single layer Bernoulli-Gaussian RBM from Ranzato & Hinton (2010) (in gray scale) are shown in Figure 7. Therefore, we instead focused on the quantitative evaluation of the log-likelihood in Table 3.

Figure 6: Sampled images from leaky RBM.

Figure 7: Sampled images in gray-scale from Bernoulli-Gaussian RBM trained on CIFAR10 (Ranzato & Hinton, 2010).

"}, {"section_index": "8", "section_name": "F.2 COMPUTATIONAL TIME BETWEEN DIFFERENT SAMPLING STRATEGIES", "section_text": "The comparison in terms of CPU time of the different sampling algorithms discussed in Section 5 is shown in Figure 8. Please note that the complexities of CD and Mix are almost the same: Mix only needs a few more constant-time steps, which can be ignored compared with the sampling steps. Leaky is more time-consuming because of computing and decomposing the covariance matrix, as we discussed in Section 5.
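A crude way to reproduce this kind of comparison is to wall-clock each sampler's update loop; a minimal, hypothetical timing harness (our own illustration, not the paper's measurement code) is sketched below.

    import time
    import numpy as np

    def time_sampler(step_fn, v0, n_steps=1000):
        # Crude wall-clock comparison of one sampling strategy (e.g., a CD sweep,
        # a Mix sweep, or a Leaky draw with the Cholesky-based initialization).
        v = v0.copy()
        start = time.perf_counter()
        for _ in range(n_steps):
            v = step_fn(v)
        return time.perf_counter() - start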
We also report the execution time of each step of the algorithms in Table 4.

Table 4: The execution time (s) of each step of the algorithms (1000 iterations).

Figure 8: Training leaky RBM with different sampling algorithms ((a) SVHN, (b) CIFAR10; log-likelihood versus running time (s) for CD, Mix, and Leaky).

"}, {"section_index": "9", "section_name": "F.3 STUDY ON RELU-BERNOULLI RBM", "section_text": "We study the idea of annealing the leakiness on the RBM model with leaky ReLU hidden units and Bernoulli visible units. We create toy datasets with 20, 25 and 30 visible units, as shown in Figure 9. The small datasets allow exact computation of the partition function. For each dataset, we sample 60,000 images for training and 10,000 images for testing. We use 100 hidden units and PCD to train the model. The log-likelihood results are shown in Table 5.

Figure 9: Toy datasets with different numbers of visible units ((a) I = 20, (b) I = 25, (c) I = 30).

Table 5: The log-likelihood and true partition function for ReLU-Bernoulli RBM with different numbers of visible units.

Compared to the Gaussian visible units case we study in Section 3, where p(v) is a multivariate Gaussian distribution when c = 1, the partition function of p(v) in ReLU-Bernoulli when c = 1 does not have an analytical form. Therefore, we use the following two-stage alternative. We first run the standard AIS algorithm, which anneals the energy, to the distribution with leakiness c = 1. We then change to annealing the leakiness from 1 to the target value. For the typical AIS algorithm (AIS-Energy), we use 10^4 chains with 2 × 10^4 intermediate distributions. For the proposed two-stage algorithm (AIS-Leaky), we use 10^4 chains with 10^4 intermediate distributions for annealing to c = 1 and the other 10^4 distributions for annealing the leakiness. The results are shown in Table 6.

Table 6: The difference between the true partition function and the estimations of the two algorithms, with standard deviation.

In Table 6, the standard AIS algorithm (AIS-Energy) has unsatisfactory performance. We show the performance of AIS for estimating the partition function of models with different leakiness on Toy20. We use 10^4 independent chains and 2 × 10^4 intermediate distributions. The results are shown in Table 7. From Table 7, we observe that AIS performs worse when the leakiness is closer to 0. Although we observed that increasing the numbers of chains and intermediate distributions could improve the performance, the improvements are limited. The study demonstrates that when the non-linearity of the distribution increases (the leakiness value c decreases), the standard AIS cannot effectively estimate the partition function within feasible computational time. On the other hand, it also confirms that the proposed idea, annealing the leakiness, can serve as an effective building block for algorithms without increasing the algorithm complexity. Note that the unsatisfactory performance of AIS may be addressed by Grosse et al. (2013). From Appendix E, the two-stage algorithm used here can also be improved by applying Grosse et al. (2013).
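A minimal sketch of the two-stage schedule described above, with hypothetical names: stage one anneals the energy interpolation weight toward the c = 1 model, and stage two anneals the leakiness down to the target value.

    import numpy as np

    def two_stage_ais_schedule(n_energy, n_leaky, c_target):
        # Stage 1 (standard AIS): interpolate the energy from the tractable
        # initial distribution to the leaky model at leakiness c = 1.
        # Stage 2 (annealed leakiness): anneal c from 1 down to c_target.
        stage1 = [("energy_weight", b) for b in np.linspace(0.0, 1.0, n_energy)]
        stage2 = [("leakiness", c) for c in np.linspace(1.0, c_target, n_leaky)]
        return stage1 + stage2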
Table 7: The difference (with standard deviation) between the true partition function and the estimations of AIS-Energy under different leakiness.

"}, {"section_index": "10", "section_name": "F.3.1 MNIST AND CALTECH DATASETS", "section_text": "We study the MNIST and Caltech 101 Silhouettes datasets with 500 hidden units and train the model with CD-25. The results are shown in Table 8 and Table 9. The leaky RBM is better than the conventional Bernoulli RBM and some deep models on MNIST data. Although leaky RBM does not outperform Su et al. (2017), it enjoys the advantage of a simpler sampling procedure (Gaussian distribution vs. truncated Gaussian distribution) in the binary visible unit case.

Table 8: The testing log-likelihood result on MNIST.

Table 9: The testing log-likelihood result on Caltech 101 Silhouettes."}]
HyenWc5gx

[{"section_index": "0", "section_name": "REPRESENTATION STABILITY AS A REGULARIZER FOR IMPROVED TEXT ANALYTICS TRANSFER LEARNING", "section_text": "Matthew Riemer, Elham Khabiri, and Richard Goodwin

Although neural networks are well suited for sequential transfer learning tasks, the catastrophic forgetting problem hinders proper integration of prior knowledge. In this work, we propose a solution to this problem by using a multi-task objective based on the idea of distillation and a mechanism that directly penalizes forgetting at the shared representation layer during the knowledge integration phase of training. We demonstrate our approach on a Twitter domain sentiment analysis task with sequential knowledge transfer from four related tasks. We show that our technique outperforms networks fine-tuned to the target task. Additionally, we show both through empirical evidence and examples that it does not forget useful knowledge from the source task that is forgotten during standard fine-tuning. Surprisingly, we find that first distilling a human made rule based sentiment engine into a recurrent neural network and then integrating the knowledge with the target task data leads to a substantial gain in generalization performance. Our experiments demonstrate the power of multi-source transfer techniques in practical text analytics problems when paired with distillation. In particular, for the SemEval 2016 Task 4 Subtask A (Nakov et al., 2016) dataset we surpass the state of the art established during the competition with a comparatively simple model architecture that is not even competitive when trained on only the labeled task specific data."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Sequential transfer learning methodologies leverage knowledge representations from a source task in order to improve performance for a target task. A significant challenge faced when transferring neural network representations across tasks is that of catastrophic forgetting (or catastrophic interference). This is where a neural network experiences the elimination of important old information when learning new information. The very popular strategy of fine-tuning a neural network involves first training a neural network on a source task and then using the model to simply initialize the weights of a target task network up to the highest allowable common representation layer. However, it is highly susceptible to catastrophic forgetting, because in training for the target task it has no explicit incentive to retain what it learned from the source task. While one can argue that forgetting the source task should not matter if only the target task is of interest, our paper adds to the recent empirical evidence across problem domains (Li & Hoiem, 2016), (Rusu et al., 2016) that shows additional network stability can lead to empirical benefits over the fine-tuning algorithm. It seems as though for many Deep Learning problems we can benefit from an algorithm that promotes more stability to tackle the well known stability-plasticity dilemma. One popular approach for addressing this problem is rehearsals (Murre, 1992), (Robins, 1995). Rehearsals refers to a neural network training strategy where old examples are relearned as new examples are learned. In the transfer setting it can be seen as related to multi-task learning (Caruana, 1997) where two tasks are trained at the same time, rather than sequentially, while sharing a common input encoder to a shared hidden representation.
However, in rehearsals the representation is biased in favor of the source task representation through initialization. This technique is very sensible because while fine-tuning is susceptible to catastrophic forgetting, multi-task learning is not (Caruana, 1997).

One of the biggest issues with the standard rehearsals paradigm is that it requires a cached memory of training examples that have been seen in the past. This can be a massive requirement as the number of source tasks and training data sizes scale. One compelling technique for addressing this problem is the concept of pseudorehearsals (Robins, 1995), (Robins, 1996), where relearning is performed on an artificially constructed population of pseudoitems instead of the actual old examples. Unfortunately, current automatic techniques in the text analytics domain have not yet mastered producing linguistically plausible data. As such, the pseudorehearsals paradigm is likely to waste computational time that could be spent on learning realistic patterns that may occur during testing. In our work, we extend the Learning without Forgetting (LwF) paradigm of (Li & Hoiem, 2016) to the text analytics domain using Recurrent Neural Networks. In this approach, the target task data is used both for learning the target task and for rehearsing information learned from the source task by leveraging synthetic examples generated for the target task input by the model that only experienced training on the source task data. As argued by Li & Hoiem (2016), this setup strikes an important balance between classification performance, computational efficiency, and simplicity in deployment.

Regardless of whether they are applied to real source task examples, real target task examples or synthetic examples, paradigms in the style of rehearsals all address the shortcomings of neural network forgetting by casting target task integration as a multi-task learning problem. However, this is not quite the purpose of the multi-task learning architecture, which was designed for joint learning of tasks from scratch at the same time. The key disconnect is that in multi-task learning, the transformations from the shared hidden layer to the outputs for each task are all learned and updated with the changing hidden representation. This would imply that, in the framework of rehearsals, it is possible for there to be significant changes during learning of the network's representation, and thus its abilities on the source task itself. While it would be desirable to claim we were allowing our source task network to become even better based on the target task than it was before, this motivation seems idealistic in practice. One reason this is idealistic is because multi-task learning generally only works well when tasks are sampled at different rates or alternatively given different priority in the neural network loss function (Caruana, 1997). As a result, it is most likely that auxiliary source tasks will receive less priority from the network for optimization than the target task. Additionally, we observe in our experiments, and it has been observed by others in (Rusu et al., 2015), that it is generally not possible to distill multiple complex tasks into a student network at full teacher performance for all tasks.
This seems to imply that the degradation of the source task performance during training is somewhat inevitable in a multi-task learning paradigm.

We address this issue with our proposed forgetting cost technique. We demonstrate that it, in fact, can be valuable to keep the hidden to output transformation of the source tasks fixed during knowledge integration with the target task. This way, we impose a stronger regularization on the hidden representation during target task integration by not allowing it to change aspects that were important to the source task's performance without direct penalization in the neural network's loss function. We demonstrate empirically both that freezing the source task specific weights leads to less deterioration in the accuracy on the source task after integration, and that it achieves better generalization performance in our setting. The forgetting cost is practical and easy to implement in training any kind of neural network. In our experiments, we explore application of the forgetting cost in a recurrent neural network to the three way Twitter sentiment analysis task of SemEval 2016 Task 4 Subtask A and find it to achieve consistently superior performance to reasonable baseline transfer learning approaches in four examples of knowledge transfer for this task.

We also demonstrate how powerful distillation can be in the domain of text analytics when paired with the idea of the forgetting cost. Significantly, we show that a high quality gazetteer based logical rule engine can be distilled using unlabeled data into a neural network and used to significantly improve performance of the neural network on the target task. This is achieved with a novel extension of the LwF paradigm by Li & Hoiem (2016) to the scenario of a source task with the same output space as the target task. This can be a very promising direction for improving the ability of humans to directly convey knowledge to deep learning algorithms. Indeed, a human defined rule can contain far more information than a single training example, as that rule can be projected on to many unlabeled examples that the neural network can learn from. This is the reason human teachers generally begin teaching human students tasks by going over core rules at the onset of learning. Moreover, we showcase that multiple expert networks trained on the target task with prior knowledge from different source tasks can be effectively combined in an ensemble and then distilled into a single GRU model (Cho et al., 2014), (Chung et al., 2014). Leveraging this combination of distillation
optimization algorithm alongside the neural network to regularize the subspace of the network. In our work we demonstrate that, by guarding against catastrophic forgetting, it is possible to efficiently. leverage rules for transfer by utilizing a generic sequential knowledge transfer framework. We do\nSince the work of (Bucilu et al.|2006) and (Hinton et al.| 2015) showed that an ensemble of neural network classifier can be distilled into a single model, knowledge distillation from a teacher network to a student network has become a growing topic of neural network research. In (Ba & Caruana 2014) it was shown that a deep teacher neural network can be learned by a shallow student network This idea was extended in (Romero et al.[2014), where it was demonstrated that a deep and nar- row neural network can learn a representation that surpasses its teacher. The use of distillation as a means of sharing biases from multiple tasks was explored in (Lopez-Paz et al.]2016), where the teacher network is trained with the output of the other tasks as input. It is not obvious how to extend a recurrent neural network to best use this kind of capability over a sequence. The idea of distill- ing from multiple source task teachers into a student network was highlighted in the reinforcement learning setting in (Rusu et al.]2015). Additionally, the concept of using distillation for knowledge transfer was also explored in (Chen et al.|2015), where function preserving transformations from smaller to bigger neural network architectures were outlined. This technique could also provide value in some instances for our approach where wider or deeper neural networks are needed for the task being transferred to than was needed for the original task. Distillation over target task data was first proposed as a means of elevating catastrophic forgetting in sequential knowledge transfer as ap- plied to image classification in (Li & Hoiem2016). We extend this approach for its first application to our knowledge for text analytics problems, with a recurrent neural network architecture, and in the setting where the source task and target task have the same output. The chief distinction of our proposed forgetting cost is that source task specific parameters are held fixed during integration with the target task as opposed to the joint training of all parameters used byLi & Hoiem(2016). Our ex- periments empirically support the intuition that freezing these parameters leads to greater retention of source task performance after target task integration and better generalization to the target task.\nAn ensemble over multiple diverse models trained for the same sentiment analysis task was also considered in (Mesnil et al.]2014) for the IMDB binary movie reviews sentiment dataset (Maas et al.|2011). We tried this ensemble model in our work and found that it gave very limited improve- nent. Our ensemble technique learns a more powerful weighted average based on the soft targets of each task and a multi-step greedy binary fusion approach that works better for the Twitter senti- nent analysis task in our experiments. Knowledge transfer from multiple tasks was considered tc estimate the age of Twitter users based on the content of their tweets in (Riemer et al.]2015). We xperimented with the hidden layer sharing approach outlined in that work and found that even wher using just a single softmax combining layer, it would overfit on our limited training and validation data. 
Progressive neural networks (Rusu et al.|2016) is a recently proposed method very similar in motivation to our forgetting cost as it is directly trying to solve the catastrophic forgetting problem The idea is that learned weight matrices relate the fixed representations learned on the source task to the construction of representations for the target task. In our experiments, the progressive neural networks approach consistently fails to even match the results achieved with fine-tuning. We hy- oothesize that although using fixed representations to aid learning addresses catastrophic forgetting it suffers from the curse of dimensionality. As such, when training data is relatively small given the complexity of the task, it is prone to overfitting as it effectively increases the input dimension size hrough shared fixed representations.\nnot need to make any modification to the architecture of the neural network during testing and d not need iterative convex optimization during training..\nIn the sequential knowledge transfer problem setting explored in this paper, training is first con- ducted solely on the source task examples S, including Ks training examples (xsi, ysi) E S where x si is the input representation and ysi is the output representation. After training is complete on S we would like to now use prior knowledge obtained in the model trained on S to improve general- ization on a new target task with examples T, which includes KT training examples (xTi, yTi) E T. Here we assume that the input representations x s; and xTi are semantically aligned in the same rep- resentation space. As such, if there is useful knowledge in S that applies in some direct or indirect way to the target task that is not present in T, we would expect a good knowledge integration ap- proach to generalize better to the target task than it is possible to using the training data in T alone Strong performance for the sequential knowledge transfer problem is a first step towards the greater goal of a mechanism for effective lifelong learning (Thrun][1996)."}, {"section_index": "2", "section_name": "3.2 FORGETTING COST FOR TUNING A TARGET TASK MODEL", "section_text": "where L is some loss function (we use mean squared error in our experiments) and yinit is the sofi label generated for the target task input xT; based on the model after training just on S. The model trained just on S is also used to initialize the weights of the target task model before integratior with T as we do in the standard fine-tuning paradigm. Q f is a hyperparameter that can be utilized to control the extent of allowed forgetting. Of course, a very similar way to express this idea would be to mix synthetic training examples T' with the same input as T and output generated by the model trained just on S' with the true target task training examples T. In this case, the mixing rate of the teacher generated training examples is analogous to our forgetting parameter a f determining the prioritization. These techniques perform quite similarly in our experiments, but we actually find that the formulation in equations|1|and 3|perform slightly better on the test set. 
For example, this formulation is superior by 0.4% accuracy in tuning a distilled representation of a logical rule engine. We conjecture that learning tasks in the same gradient step when they are related to the same input data results in slightly less noisy gradients.

"}, {"section_index": "3", "section_name": "3.3 FORGETTING COST FOR KNOWLEDGE TRANSFER FROM A RELATED TASK", "section_text": "The assumption in section 3.2 that the output of the source task data S should be in the same representation space as the output for the target task data T is quite a big one. It rules out the vast majority of knowledge sources that we can potentially leverage. As such, we propose an extension that does not make this restriction, for application in sequential knowledge transfer of tasks that are not directly semantically aligned. We update our model to include another predicted output separate from ŷ:

ŷ_init = f_init(W_fixed h_shared + b_fixed),   (2)

where ŷ_init is a predicted output attempting to recreate the soft labels of the original model trained just on S. f_init is the non-linearity used in the final layer of the source task model. Weight matrix W_fixed and bias b_fixed are taken from the final layer of the source task model and are not updated during integration with the target task data T. As a result, the loss function is updated from section 3.2:

Loss = L(ŷ, y) + α_f L(y_init, ŷ_init),   (3)

where the hidden state is shared between both terms in the objective function. Up to the shared hidden layer, we initialize the model for the target task with the weights learned just using S. Random matrices and bias vectors are now used to initialize the prediction of ŷ based on the shared hidden representation. This can be seen as a weak form of restricting the model parameters that can be useful for regularization. The hidden representation is in effect constrained so that it is promoted not to change in key areas that have a large effect on the output vector of the source task model. On the other hand, there is little regularization for parameters that have little effect on the output vector for the source task model.
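A minimal NumPy sketch of the forgetting cost objectives (1) and (3), under the paper's choice of mean squared error for L; the function signature is our own illustration.

    import numpy as np

    def mse(a, b):
        return np.mean((a - b) ** 2)

    def forgetting_cost(y_hat, y, alpha_f, y_init, y_hat_init=None):
        # Equation (1): target task loss plus a penalty for drifting away from
        # the source model's soft labels y_init on the same target task inputs.
        # Equation (3): when the source task has a different output space,
        # y_hat_init is the prediction made through the frozen source output
        # layer (2), and the penalty compares it to y_init instead.
        penalty = mse(y_init, y_hat if y_hat_init is None else y_hat_init)
        return mse(y_hat, y) + alpha_f * penalty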
"}, {"section_index": "4", "section_name": "RECURRENT NEURAL NETWORK MODEL", "section_text": "In recent years, recurrent neural network models have become a tool of choice for many NLP tasks. In particular, the LSTM variant (Hochreiter & Schmidhuber, 1997) has become popular as it alleviates the vanishing gradients problem (Bengio et al., 1994) known to stop recurrent neural networks from learning long term dependencies over the input sequence. In our experiments we use the simpler GRU network (Cho et al., 2014), (Chung et al., 2014) that generally achieves the same accuracy despite a less complex architecture. Each time step t is associated with an input x_t and a hidden state h_t. The mechanics of the GRU are defined with the following equations:

z_t = σ(W_xz x_t + W_hz h_{t−1})
r_t = σ(W_xr x_t + W_hr h_{t−1})
h̃_t = tanh(W_xh x_t + r_t ∘ W_hh h_{t−1})
h_t = z_t ∘ h_{t−1} + (1 − z_t) ∘ h̃_t,

where ∘ denotes an element-wise product. W_xz, W_xr, and W_xh represent learned matrices that project from the input size to the hidden size. W_hz, W_hr, and W_hh represent learned matrices that project from the hidden size to the hidden size. In our work we evaluate the GRU in the categorical prediction setting. For each document, the hidden state after the last word h_L is used for the prediction ŷ of the label y. As such, we treat h_L as the shared hidden representation h_shared from section 3.3 for our experiments:

ŷ = f(W_yh h_L + b_y).

The prediction goes through one other non-linear function f after the final hidden state is derived. In our experiments we use the softmax function, but others are useful in different settings. A model that builds on top of GRUs with an external memory storage paradigm (Kumar et al., 2015) currently holds the state of the art on movie review sentiment analysis. However, we focus just on the straightforward single layer GRU model in our experiments so that we can more easily disentangle factors of influence on performance. Our GRU model was fed a sequence of fixed 300 dimensional Glove vectors (Pennington et al., 2014), representing words based on analysis of 840 billion words from a common crawl of the internet, as the input x_t for all tasks. It has been shown in a number of papers that tuning the word embeddings during training could increase performance, and it is possible our approach could have performed better had we done so.
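For concreteness, a minimal NumPy sketch of one GRU step following the equations above; the parameter-dictionary layout is our own convention.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_step(x_t, h_prev, P):
        # One GRU step; P holds the six weight matrices named as in the text.
        z = sigmoid(x_t @ P["Wxz"] + h_prev @ P["Whz"])
        r = sigmoid(x_t @ P["Wxr"] + h_prev @ P["Whr"])
        h_tilde = np.tanh(x_t @ P["Wxh"] + r * (h_prev @ P["Whh"]))
        return z * h_prev + (1.0 - z) * h_tilde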
a count of positive and negative sentiment occurrences. For this task, we simply count the total. number of positive and negative indicators to give an overall positive, negative or neutral score. We. provide addition details on how we mapped rules to soft targets for the student network to recreate in. Appendix|A] We utilized a GRU model with 50 hidden units and 50,000 unlabeled examples for our. source task model. We distill off the soft labels as in (Hinton et al.]2015), but set our temperature. fixed at 1.0. It is possible that our performance could have improved by tuning this parameter.. Additional details about the selection of the network and data size are included in Appendix B. The logical rule model itself achieves 57.8% accuracy on the SemEval testing data and the rules. distilled into a GRU as explained in section4 achieves 58.9% accuracy before any integration with. the SemEval target task data. We leverage this task for comparison of knowledge transfer techniques. when the source task and target task share an output space as discussed in section|3.2.\nBinary Movie Reviews: For knowledge transfer from related tasks as discussed in section|3.3|we first consider the Stanford Sentiment Treebank (Socher et al.]2013), which is a popular sentiment dataset based on the movie review domain. We consider one source task to be the binary (positive. and negative) sentence level sentiment subtask which contains 6,920 training examples, 872 valida- tion examples, and 1,821 testing examples. Our GRU model with 40 hidden units achieves 85.5% accuracy on this task.\nFive Class Movie Reviews: We also consider another source task leveraging the Stanford Sentimen. Treebank data from the fine grained (very positive, positive, neutral, negative, and very negative. sentence level sentiment substask which contains 8,544 training examples, 1,101 validation exam ples, and 2,210 testing examples. We use a GRU model with 200 hidden units to accommodate for. the increased task complexity and achieve 45.9% accuracy. This fine grained model can actually be assessed directly on the SemEval task by projecting from five classes to three classes, but it only. achieves 44.2% accuracy with no tuning on the target task data. Our performance on these twc. movie review source tasks is quite similar to what was reported in (Tai et al.]2015) when using a. similar setup, but with LSTMs for both subtasks..\nEmoticon Heuristic: Finally, we consider a semi-supervised task based on emoticon prediction mo. tivated by the successful work in (Go et al.]2009), leveraging it in the twitter sentiment domain anc. its use as a vital component of the SemEval competition winning system (Bethard et al.|2016). We. find unlabelled tweets that contain smileys, frowns, or laughing emoticons. We remove emoticon. from the tweet before prediction and compile a dataset of 250,000 training examples, 50,000 vali. dation examples, and 100,O00 testing examples for each of the three classes. This is multiple order of magnitude smaller than the 90 million tweets used in (Bethard et al.]2016) to allow for quic experimentation. Our GRU model with 50 hidden units achieves 63.4% accuracy on the emoticoi. prediction test set.\nFine-Tuning: The representation is simply initialized with the representation found after training on the source task and then trained as usual on the target task. 
We consider multiple sequential knowledge transfer algorithms for experimental comparison. Each uses only the source task data for learning the source task and only the target task data for integrating with the target task. This way integration is fast and simple, because it does not incorporate storage and replay of examples from the potentially very large source task as argued in (Li & Hoiem, 2016).

Fine-Tuning: The representation is simply initialized with the representation found after training on the source task and then trained as usual on the target task. This approach was pioneered in (Hinton & Salakhutdinov, 2006), in application to unsupervised source tasks, and applied to transfer learning in (Bengio et al., 2012) and (Mesnil et al.). The learning rate is tuned by a grid search based on the validation set performance.

Progressive Networks: We also compare with our implementation of a progressive neural network (Rusu et al., 2016), where the representation learned for the source task is held fixed and integrated with a target task specific model via lateral connections trained using the target task data. The learning rate is also tuned based on a grid search using the validation set.

Learning without Forgetting (LwF): In the LwF paradigm, joint training is performed after parameter initialization. This is achieved by treating the target task data and the output generated by the source task model based on the target task input data as two jointly learned tasks as in (Caruana, 1997). As opposed to our proposed forgetting cost, the source task specific parameters are not held fixed while training on the target task data. The learning rate and mixing rate between the tasks are tuned by a grid search based on validation set performance. We first consider a version of the LwF model that leverages a random initialization of the target task specific parameters and initialization of all parameters learned on the source task with the learned values. We also consider another formulation that we call Greedy LwF. This is actually more closely aligned with the original paper (Li & Hoiem, 2016). All source task parameters are first held fixed, and the target task specific parameters are learned alone before joint training with all of the parameters unfrozen as a second step. For the case of source tasks with output in the space of the target task output, there are no source task specific parameters, so the forgetting cost can be viewed as a viable interpretation of the LwF paradigm appropriate in that setting.

Forgetting Cost: Finally, we compare each baseline model with our proposed forgetting cost described in section 3. The learning rate, as well as α_f from equations 1 and 3, were tuned by a grid search based on the validation set performance.

Our experimental results on the SemEval data validate our intuition that the forgetting cost should lead to stronger regularization and better generalization performance. One thing to note about our progressive neural networks implementation is that it effectively has only one hidden layer, because we hold our embeddings fixed during model training and the same embeddings are shared among the models used for all of the tasks. It is possible that having multiple layers of lateral connections is important to achieving good performance. However, this setting was not applicable in our experiments. Our results for sequential knowledge transfer on the SemEval benchmark are quite encouraging, as the forgetting cost outperforms baselines significantly in all cases.

We additionally have validated the intuition that equation 1 should perform stronger regularization than equation 3 when equation 1 is applicable. In fact, for our distilled logical rule model tuning experiments, we found that equation 1 performs 3% better on the test set. In an attempt to understand more about what caused this performance difference, we monitored testing set performance at each epoch and noticed that equation 3 is actually prone to overfitting away from a good solution on the test set.
Our experimental results on the SemEval data validate our intuition that the forgetting cost should lead to stronger regularization and better generalization performance. One thing to note about our progressive neural networks implementation is that it effectively has only one hidden layer, because we hold our embeddings fixed during model training and the same embeddings are shared among the models used for all of the tasks. It is possible that having multiple layers of lateral connections is important to achieving good performance. However, this setting was not applicable in our experiments. Our results for sequential knowledge transfer on the SemEval benchmark are quite encouraging, as the forgetting cost outperforms baselines significantly in all cases.

We consider multiple sequential knowledge transfer algorithms for experimental comparison. Each uses only the source task data for learning the source task and only the target task data for integrating with the target task. This way integration is fast and simple, because it does not incorporate storage and replay of examples from the potentially very large source task, as argued in (Li & Hoiem, 2016).

We empirically evaluate the generalization performance of the forgetting cost for sequential knowledge transfer from four different source tasks in Table 1 and Table 2. The source task considered in Table 1 is distilling a logical rule model, leveraging the technique outlined in equation 1. In Table 2 we leverage the forgetting cost for related task knowledge transfer as outlined in equation 3.

We additionally have validated the intuition that equation 1 should perform stronger regularization than equation 3 when equation 1 is applicable. In fact, for our distilled logical rule model tuning experiments, we found that equation 1 performs 3% better on the test set. In an attempt to understand more about what caused this performance difference, we monitored testing set performance at each epoch and noticed that equation 3 is actually prone to overfitting away from a good solution on the test set. However, it often finds a pretty good one, comparable to equation 1, early in training. When equation 1 can be applied, it seems to be a useful regularization to constrain both the hidden layer and the output layer to align with the model learned on the source task. In equation 3, the hidden to output transformation learned for the target task can, in contrast, learn to deviate from the transformation learned for the source task.

5.4 SOURCE TASK PERFORMANCE AFTER TARGET TASK INTEGRATION

In Table 3 we explore the retention of empirical performance on the source task for knowledge transfer algorithms after integration with the target task is complete. Apparently, in these cases, allowing relearning of the source task model during integration with the target task data is indeed destructive to source task performance. LwF outperforms Fine-Tuning significantly in knowledge retention for movie reviews, but interestingly does not for the emoticon heuristic. The effect of the greedy target task initialization strategy also appears inconsistent. It seems possible that this greedy initialization could improve our proposed forgetting cost paradigm in some cases as well. However, a rigorous analysis of the tradeoffs of this initialization approach is beyond the scope of this paper.

As the source task representation is literally stored fixed as part of the target task representation in progressive neural networks, it is not clear how to assess any effective forgetting of the source task during target task integration. As a result, we omit them from our source task forgetting experiments.

5.5 INSPECTION OF LEARNED REPRESENTATIONS

Now that we have established the empirical benefits of our proposed forgetting cost, we will demonstrate what it achieves qualitatively through examples. In Table 4 we include a sample of examples that are predicted correctly by transferring the knowledge source with the forgetting cost paradigm and not with fine-tuning based integration. The effect is, perhaps, easiest to understand for the rule based and movie review based transfer scenarios. For the rule based transfer setting you can literally map insights that are not forgotten to their respective logical rule in the model, as is the case in these examples. Moreover, we can see that movie domain specific terminology such as 'May the force be with' is seemingly forgotten with standard fine-tuning, but not when the forgetting cost regularization is applied.
Table 3: Evaluation of accuracy on the source task after integration with the target task data on SemEval 2016 Task 4 Subtask A. The accuracy after only source task training, prior to integration with the target task, is included for reference as a baseline.

Table 1: Evaluation of target task tuning methodologies for a distilled rule model to the task of SemEval 2016 Task 4 Subtask A.

Table 2: Evaluation of knowledge transfer from three source tasks to the task of SemEval 2016 Task 4 Subtask A.

Source Task | Fine-Tuning | Progressive Networks | LwF | Greedy LwF | Forgetting Cost
Binary Movie Reviews | 57.3% | 54.5% | 58.1% | 58.8% | 59.7%
Five Class Movie Reviews | 57.4% | 54.6% | 57.1% | 56.6% | 58.2%
Emoticon Heuristic | 55.8% | 53.2% | 57.7% | 56.7% | 58.6%

Considering that we have shown a neural network can distill and improve a representation learned by a logical rule engine, how the final representation differs from the logic of the original engine is of practical interest. We thus compare the agreement of our fine-tuned rule based GRU with the original rule model on the SemEval testing set. We find that the transferred model achieves 78.7% agreement with the rule model when the rule model is right. This clearly indicates that our final model is not deterministic based on the rule engine, and has a probability of adding errors even when the original rule model works well. However, our model actually has 44.7% accuracy on the examples the rule model got wrong. Our approach yields significant gains in comparison to the original rule classifier, improving from 57.8% to 64.4% test set accuracy before even incorporating auxiliary knowledge sources.

In our experiments we tried to find a balance between an ensemble model that is powerful enough to have an adaptive weighted average decision function and not so powerful that it overfits on our limited training and validation data. Our model is quite similar in architecture to the gating network component of a hierarchical mixture of experts model (Jacobs et al., 1991; Jordan & Jacobs, 1994). We tried our model over all four representations at once and found that it overfits. Our experiments showed it is more effective to adopt a greedy ensembling strategy where all models are combined with the best performing model on the validation set at each phase until only two models are left. Finally, these two models are combined with the same mechanism. (Riemer et al., 2016) suggests that a many element gating network can be improved with a sparsity constraint, but this did not work as well as the greedy strategy for our model and experiments.

More formally, for any two models A and B combined in an ensemble, we train the following mechanism using Stochastic Gradient Descent:

$m_A = \sigma(W_A y_A + b_A)$

$m_B = \sigma(W_B y_B + b_B)$

$a_A = \frac{m_A}{m_A + m_B}$

$a_B = \frac{m_B}{m_A + m_B}$

$y_{ensemble} = a_A y_A + a_B y_B$

where $y_{ensemble}$ is the prediction vector of the combined ensemble, and $y_A$ and $y_B$ are the output vectors of the individual models.
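A minimal forward-pass sketch of this gating mechanism follows. It implements the equations above directly; the use of scalar gates (one confidence value per model) is an assumption here, since the equations are also compatible with elementwise gates, and all parameter names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TwoModelGate:
    """Gating network for combining two models' prediction vectors with
    normalized, learned confidence weights a_A and a_B."""
    def __init__(self, num_classes, rng=np.random):
        self.W_A = rng.randn(num_classes) * 0.01
        self.W_B = rng.randn(num_classes) * 0.01
        self.b_A = 0.0
        self.b_B = 0.0

    def forward(self, y_A, y_B):
        # Confidence of each model, computed from its own prediction vector.
        m_A = sigmoid(self.W_A @ y_A + self.b_A)
        m_B = sigmoid(self.W_B @ y_B + self.b_B)
        # Normalize confidences into mixing weights and blend predictions.
        a_A = m_A / (m_A + m_B)
        a_B = m_B / (m_A + m_B)
        return a_A * y_A + a_B * y_B
```

In the greedy strategy described above, this two-model gate would be applied repeatedly, combining the current best validation-set model with each remaining model one phase at a time.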
Table 4: Some transfer learning examples from each knowledge source to SemEval 2016 where the GRU model successfully predicts sentiment when using the forgetting cost paradigm, but not with fine-tuning based integration.

Source | Tweet | Label | Fine-Tuning | Forgetting Cost
Logical Rules | John Kasich should feel proud of his performance at the #GOPDebate Thursday night. He looked more presidential than the rest of the field. | Positive | Neutral | Positive
Logical Rules | @BrunoMars I'm so tired of you dressing like you ain't got no money. You went from wearing Gucci loafers to 6th grade boy Sketchers. | Negative | Neutral | Negative
Logical Rules | @DavidVonderhaar loving the beta Vahn, even playing it on PC with a PS4 controller without aim assist, can't wait for November 6 | Positive | Neutral | Positive
Movie Reviews | Selena Gomez presented Amy Schumer with an award and a heap of praise at the Hollywood Film Awards on November 1. | Positive | Negative | Positive
Movie Reviews | mailjet: It's Fri...we mean Star Wars Day. May the force be with all of your emails! https://t.co/FbDdjiJVUT | Positive | Neutral | Positive
Movie Reviews | Straight Outta Compton's success hopefully convinces New Line Cinema to give Ice Cube the right budget for the last Friday movie. | Positive | Neutral | Positive
Emoticons | That ball Kris Bryant just hit is the 2nd farthest ball I've ever seen hit. He is officially ridiculous. | Positive | Neutral | Positive
Emoticons | This fandom's a mess omg, I wouldn't be surprise if tomorrow there's a trend who says Niall's going to marry his cousin #WeKnowTheTruth | Negative | Positive | Negative
Emoticons | Christians snapchat story makes me want to kill myself..like I feel like a depressed 8th grader going through that emo phase | Negative | Neutral | Negative

Model Description | Accuracy on SemEval Test Set
Distilled GRU Trained on Full Ensemble | 66.0%
Full Ensemble | 65.9%
Ensemble with Logical Rules and Both Movie Review Tasks | 65.7%
Ensemble with Logical Rules and Binary Movie Reviews | 65.4%
Ensemble with Logical Rules and Five Class Movie Reviews | 65.1%
Ensemble with Logical Rules and Emoticon Prediction | 65.0%
Ensemble with Both Movie Review Tasks | 62.1%
GRU Trained on Only SemEval Data | 53.6%
SwissCheese (Bethard et al., 2016) | 64.6%
NTNUSentEval (Jahren et al., 2016) | 64.3%
UniPI (Attardi & Sartiano, 2016) | 63.9%
CUFE (Nabil et al., 2016) | 63.7%
INSIGHT-1 (Ruder et al., 2016) | 63.5%

Table 5: Empirical three way sentiment classification results on the SemEval 2016 Task 4 Subtask A test set.

6.2 ENSEMBLE RESULTS

Our ensemble model was trained on what was set aside as the validation data during the initial training, with early stopping. In the first phase of combining, the model transferred from the logical rule source task was combined with each model. In the second phase, the model based on transfer from the binary movie review sentiment model was combined with each model. In the third phase, the two remaining models were combined. The results of our ensemble in Table 5 suggest that it is possible to further improve the performance of a single sequential transfer model by intelligently combining its predictions with models that have other perspectives. This is because they are modeled using different source tasks for prior knowledge. Impressively, our final distilled model surpasses results from all prior models on the SemEval 2016 benchmark using the same final architecture of a 50 hidden unit GRU model, which is clearly not even competitive when trained simply on the task specific labeled data. The prior best model, SwissCheese (Bethard et al., 2016), consists of a random forest ensemble built utilizing multiple convolutional neural network models and distant supervision. In fact, we achieve superior results despite using over an order of magnitude less total data for training our model.

We would also like to underscore that our total improvement of 1.5% as a result of creating an ensemble with our best transferred model from the logical rule source task can be viewed as quite disappointing, despite achieving state of the art results. In fact, in the theoretical limit of having a
decision model that switches to the best already learned model at each point, our four transferre representations would achieve 85.1% accuracy together. For the combination of the movie reviev. based models and logical rule based model we can get to 81.4% accuracy. Moreover, we can ge. 76.5% accuracy with just the logical rule based transfer model and the emoticon prediction base. transfer model. Unfortunately, we achieve nowhere near these theoretical results despite represen. tations that are apparently quite diverse. This seems indicative that there are significant gains yet t. be uncovered in integrating these representations..\nmA =o(WAyA+bA mB = o(WByB + bB) mA aA= mA+mB m B aB mA+mB\nmA aA= mA+mB\nmB a B mA+mB\nYensemble = aAyA + aBYB"}, {"section_index": "8", "section_name": "7 CONCLUSION", "section_text": "We consider a new methodology called the forgetting cost for preventing the catastrophic forgetting. problem of neural network sequential transfer learning. The forgetting cost is practical and easy to. implement. We have demonstrated for the challenging task of Twitter sentiment analysis that it can. uncover significant gains in generalization performance and that it seems to not forget knowledge. traditionally forgotten from the source task during fine-tuning. Our strong empirical results still mo tivate multiple avenues with high potential for continued exploration in text analytics. Using logical. rules to improve neural network models is a promising direction for humans to efficiently contribute. to increased model performance. Additionally, the large diversity of representations learned from multiple classifiers with the same target task but different source tasks seems to indicate there is. potential to see even much greater gains when integrating multiple sources of knowledge transfer.."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Giuseppe Attardi and Daniele Sartiano. Unipi at semeval-2016 task 4: Convolutional neural net works for sen-timent classification. Proceedings of SemEval, pp. 220-224, 2016.\nYoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157-166, 1994.\nTianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641, 2015\nJunyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of. gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014\nArtur S d'Avila Garcez, Krysia Broda, and Dov M Gabbay. Neural-symbolic learning system. foundations and applications, 2012\nAlec Go, Richa Bhayani, and Lei Huang. Twitter sentiment classification using distant supervision 2009.\nGeoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.\nSteven Bethard, Daniel M. Cer, Marine Carpuat, David Jurgens, Preslav Nakov, and Torsten Zesch (eds.). Proceedings of the 1Oth International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016, 2016. The Associa- tion for Computer Linguistics. ISBN 978-1-941643-95-2. URL http://aclweb.org/ anthology/s/s16/\nBrage Ekroll Jahren, Valerij Fredriksen, Bjorn Gamback, and Lars Bungum. Ntnusenteval at semeval-2016 task 4: Combining general classifiers for fast twitter sentiment analysis. Proceed- ings of SemEval, pp. 
103-108, 2016.

Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. Harnessing deep neural networks with logic rules. arXiv preprint arXiv:1603.06318, 2016.

Michael I Jordan and Robert A Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181-214, 1994.

Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285, 2015.

Zhizhong Li and Derek Hoiem. Learning without forgetting. In European Conference on Computer Vision, pp. 614-629. Springer, 2016.

David Lopez-Paz, Leon Bottou, Bernhard Scholkopf, and Vladimir Vapnik. Unifying distillation and privileged information. stat, 1050:26, 2016.

Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pp. 142-150. Association for Computational Linguistics, 2011.

Jacob MJ Murre. Learning and categorization in modular neural networks. 1992.

Mahmoud Nabil, Mohamed Aly, and Amir F Atiya. Cufe at semeval-2016 task 4: A gated recurrent model for sentiment classification. Proceedings of SemEval, pp. 52-57, 2016.

Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-1543, 2014.

Anthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2):123-146, 1995.

Anthony Robins. Consolidation in neural networks and in the sleeping brain. Connection Science, 8(2):259-276, 1996.

Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.

Sebastian Ruder, Parsa Ghaffari, and John G Breslin. Insight-1 at semeval-2016 task 5: Deep learning for multilingual aspect-based sentiment analysis. arXiv preprint arXiv:1609.02748, 2016.

Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.

Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.

Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), volume 1631, pp. 1642. Citeseer, 2013.

Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.

Sebastian Thrun. Is learning the n-th thing any easier than learning the first? Advances in Neural Information Processing Systems, pp. 640-646, 1996.

Geoffrey G Towell, Jude W Shavlik, and Michiel O Noordewier. Refinement of approximate domain theories by knowledge-based neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence. Citeseer, 1990.

A MAPPING SENTIMENT RULES TO SOFT TARGETS

The gazetteer based logical rule engine separates sentences and phrases in the text. It then applies dictionaries of positive and negative sentiment words and phrases to the corresponding text. For each positive or negative phrase found, it checks to see if negation or double negation are applied, and modifies the polarity of the sentiment accordingly. The result for any piece of text is a count of positive and negative sentiment occurrences. For this task, we simply count the total number of positive and negative indicators to give an overall positive, negative or neutral score. To be concrete, we have a simple procedure for mapping positive and negative word counts to soft labels that can be used for distillation. If there are no positive or negative words, the output vector is a one hot vector corresponding to a neutral label. If there are an unequal number of positive and negative sentiment words, the neutral label is zero and the raw counts are sent to the softmax function to create a soft label over the positive and negative word occurrences. Finally, if there are an equal amount of positive and negative words, we consider the added total of sentiment words plus one in the neutral label, as well as the number of positive words and negative words, before sending these totals through a softmax function.
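A minimal sketch of this count-to-soft-label mapping follows. It implements the three cases described above directly; the label ordering [positive, negative, neutral] is an assumed convention.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def counts_to_soft_label(n_pos, n_neg):
    """Map rule-engine counts to a [positive, negative, neutral] soft label."""
    if n_pos == 0 and n_neg == 0:
        return np.array([0.0, 0.0, 1.0])          # one-hot neutral label
    if n_pos != n_neg:
        p = softmax(np.array([n_pos, n_neg], dtype=float))
        return np.array([p[0], p[1], 0.0])        # neutral mass is zero
    # Equal, nonzero counts: the neutral slot gets total sentiment words + 1.
    return softmax(np.array([n_pos, n_neg, n_pos + n_neg + 1.0]))
```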
B SIZE SELECTION FOR THE RULE DISTILLATION TASK

In Table 6 we detail the performance of distilling a logical rule engine into a GRU based recurrent neural network by imposing soft labels over unlabeled tweets. The fact that we keep our word representations fixed with general purpose unsupervised data makes it difficult for the GRU to distill the entire model without a large number of examples. Additionally, as there were a large number of examples in our distillation experiments, we did not experience high run to run variation, and only trained a single GRU model for each distillation experiment (as opposed to picking the best validation error of 10 parallel training routines as in our transfer experiments). Our distilled GRU is better on the testing set than the original classifier, likely because this input representation prevents the model from overfitting to the idiosyncrasies of the rule engine. This actually underscores an important point for the distillation of abstract knowledge. If the target task is known during distillation, it may be beneficial to stop short of totally distilling the original knowledge, as it may hurt downstream performance past a certain point. We impose a simple policy where the best hidden unit and training example combination is selected based on performance on the training data of the target task. As a result, we use the model with 50 hidden units based on 50,000 training examples in our experiments integrating with other knowledge. This model is a pretty good one to choose, and achieves high transfer performance relative to models that overfit on the teacher network.

Table 6: Logical rule engine distillation performance and SemEval 2016 Task 4 Subtask A accuracy as a function of the number of hidden units in the GRU and the number of training examples. The 50 hidden unit and 50,000 training example model performs the best on the SemEval training set.

Hidden Units | Examples | Alignment with Teacher | Accuracy on SemEval Test Set
25 | 50,000 | 88.3% | 59.1%
25 | 300,000 | 91.9% | 58.6%
50 | 50,000 | 88.6% | 58.9%
50 | 300,000 | 93.0% | 58.5%
75 | 50,000 | 88.7% | 58.9%
75 | 300,000 | 93.6% | 58.3%
100 | 50,000 | 88.6% | 58.7%
100 | 300,000 | 93.8% | 58.1%
125 | 50,000 | 88.5% | 58.7%
125 | 300,000 | 93.7% | 58.3%
150 | 50,000 | 88.5% | 59.0%
150 | 300,000 | 94.0% | 58.5%"}]
HJjiFK5gx | [{"section_index": "0", "section_name": "NEURAL PROGRAM LATTICES", "section_text": "Chengtao Li\nMassachusetts Institute of Technology Cambridge. MA 02139. USA\n{dtarlow, algaunt, mabrocks, nkushman}@microsoft.com\nWe propose the Neural Program Lattice (NPL), a neural network that learns to per form complex tasks by composing low-level programs to express high-level pro grams. Our starting point is the recent work on Neural Programmer-Interpreters (NPI), which can only learn from strong supervision that contains the whole hi- erarchy of low-level and high-level programs. NPLs remove this limitation by providing the ability to learn from weak supervision consisting only of sequences of low-level operations. We demonstrate the capability of NPL to learn to perform long-hand addition and arrange blocks in a grid-world environment. Experiments show that it performs on par with NPI while using weak supervision in place of most of the strong supervision, thus indicating its ability to infer the high-level program structure from examples containing only the low-level operations."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A critical component of learning to act in a changing and varied world is learning higher-level. abstractions of sequences of elementary tasks. Without such abstractions we would be forced to reason at the level of individual muscle contractions, making everyday tasks such as getting ready. for work and making dinner almost impossible. Instead, as humans, we learn a hierarchy of skills. starting with basic limb movements and eventually getting to the level of tasks such as get ready. for work or drive to the airport. These abstractions have many different names. For example, in. computer programming they are called functions or subroutines and in reinforcement learning they. are called options or temporally extended actions. They facilitate learning in two important ways.. First, they enable us to learn faster, i.e. with lower sample complexity. Second, they enable us to. strongly generalize from our prior experience so that we can, for example, drive to a new location. once we have learned how to drive to a few other locations..\nA primary mechanism used for learning is watching others perform a task. During such demon-. strations, one typically observes the elementary operations performed, such as the movements of. individual limbs or the mouse clicks in a computer interface. In some cases, the demonstrations can. also provide supervision of the abstract operations (i.e., the abstraction hierarchy) that generated the elementary operations, either through a formal annotation process or through informal natural. language descriptions. Recent work on Neural Programmer-Interpreters, NPI (Reed & de Freitas.. 2016), has shown that when the training data includes both elementary and abstract operations,. learning the abstractions results in strong generalization capabilities. This enables, for example, the. ability to add very large numbers when trained only on the addition of relatively small numbers..\n*Work done primarily while author was an intern at Microsoft Research."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Providing supervision of the abstract operations during a demonstration requires significant addi. tional effort, however, and so in typical real-world scenarios we will observe only the elementary. operations. 
For example, we can see a person's limbs move (elementary operations), but we can- not see the mental states that led to these movements (abstract operations). In the same vein, we\ncan easily capture a user's clicks in an online application or their real-world movements using a skeletal tracking depth camera (Microsoft Corp. Redmond WA). NPI cannot directly be applied on data like this, however, because the data does not contain the abstraction hierarchy. This motivates the desire for a model which can learn an abstraction hierarchy from only sequences of elementary operations, but this is an ill-posed problem that requires either additional modeling assumptions or some strongly supervised data. In this work, we take a first step by assuming access to a small number of strongly supervised samples that provide the components of the abstraction hierarchy and disambiguate which of infinitely many abstraction hierarchies are preferred. While we currently only consider domains without noise, we believe our work provides a starting point for future re- search on adding additional modeling assumptions that could remove the need for strong supervision altogether.\nOur key contributions can be summarized as follows:"}, {"section_index": "3", "section_name": "2 MODEL BACKGROUND", "section_text": "The NPI model is based on a Recurrent Neural Network (RNN) which, at each step, either calls ar abstract program, performs an elementary operation, or returns from the current program. To make this decision, each step of the RNN takes as input: (1) a learnable embedding of program to execute (2) embedded arguments for this program, and (3) an embedding of the current world state. Calling an abstract program resets the LSTM hidden state to zero and updates the program and argument: provided as input to the following steps. Returning from an abstract program inverts this process restoring the hidden state and input program and arguments to those from before the program was called. Performing an elementary operation updates the world state, but leaves the current progran and arguments in place, and performs the standard LSTM update of the hidden state.\nRather than present the details of the NPI model as in Reed & de Freitas (2016), we will cast it in. the formulation that we will use throughout the paper. The main difference is that our presentation will explicitly maintain a call stack, which we will refer to as Stack-based NPI. Morally this does not change the model, but it will enable the extension to weaker supervision described in section|3.\nThe basic structure of the reformulated model can be seen in Figure[1] The model learns a library of programs, G, and arguments, R, to these programs, where each program g E Rn and each argument\nThere are several technical issues that arise in developing NPL, which are addressed in this paper In section 2 we reformulate the NPI model to explicitly include a program call stack, which is. necessary for the later modeling developments. Next we need to formulate a training objective for weakly supervised data instances. Ideally we could treat the abstract operations as latent quantities and optimize the marginalized log probability that arises from summing out the abstract operations. However, there are exponentially many such abstraction hierarchies, and so this is computationally. intractable. To overcome this challenge, we compute an approximate dynamic program by building. on two ideas from the literature. 
First, we draw inspiration from Connectionist Temporal Classification, CTC (Graves et al., 2006), observing that it provides a method for learning with latent alignments. In section 3.1 we reformulate the CTC objective into a feedforward process that executes a dynamic program. Applying this to our problem, however, requires handling the program call stack. In section 3.2 we do this through an approximation analogous to that of Stack-Augmented Recurrent Nets, StackRNNs (Joulin & Mikolov, 2015), resulting in a fully-differentiable feedforward process that executes a dynamic program to approximately compute the marginalized log probability that we desire. Finally, we observe in section 3.3 that there are alternative dynamic programs for approximating the desired marginalized log probability, and present one that uses more computation to more closely resemble the exact (exponentially expensive) dynamic program while remaining tractable.

Our key contributions can be summarized as follows:

- We show how ideas from CTC and StackRNNs can be adapted and extended to enable the training of NPI-like models from only flat sequences of elementary operations and world states.
- We introduce a method to compute a more accurate approximation of marginalized log probabilities in such models.
- On the long-hand addition task from Reed & de Freitas (2016) and a new task involving arranging blocks in a grid-world, we demonstrate empirically that using NPL to train with elementary operation sequences combined with only a few training samples with full program traces can achieve similar performance to NPI but with weaker supervision.

2 MODEL BACKGROUND

The NPI model is based on a Recurrent Neural Network (RNN) which, at each step, either calls an abstract program, performs an elementary operation, or returns from the current program. To make this decision, each step of the RNN takes as input: (1) a learnable embedding of the program to execute, (2) embedded arguments for this program, and (3) an embedding of the current world state. Calling an abstract program resets the LSTM hidden state to zero and updates the program and arguments provided as input to the following steps. Returning from an abstract program inverts this process, restoring the hidden state and input program and arguments to those from before the program was called. Performing an elementary operation updates the world state, but leaves the current program and arguments in place, and performs the standard LSTM update of the hidden state.

Rather than present the details of the NPI model as in Reed & de Freitas (2016), we will cast it in the formulation that we will use throughout the paper. The main difference is that our presentation will explicitly maintain a call stack, which we will refer to as Stack-based NPI. Morally this does not change the model, but it will enable the extension to weaker supervision described in section 3.

Figure 1: Stack-based NPI: Four time steps from the execution of the stack-based NPI model. Each color/hash pattern represents a unique set of unchanging data values which, over time, move up and down (and in and out of) the stack. Operations below the dotted line to calculate the new world state are executed only at test time, since we do not have access to $f_{world}$ at training time, and the training data contains the correct sequence of world states.

The basic structure of the reformulated model can be seen in Figure 1. The model learns a library of programs, G, and arguments, R, to these programs, where each program $g \in \mathbb{R}^n$ and each argument $r \in \mathbb{R}^m$ is represented as an embedding, with n and m as the embedding dimensions. When a program is called with a list of arguments it performs a sequence of actions, where each action is one of: OP, PUSH, or POP. OP performs an elementary operation, e.g. move one step. PUSH calls another program. POP returns from the current program back to the parent program.

An LSTM-based controller, shown in Figure 2, is used to generate the sequence of actions, deciding the action at timestep t based on the currently running program and arguments, $g^t_{in}$, the LSTM's internal state $h^t_{in}$, and an observation of the current world state, $w^t$. To support calls to and returns from subprograms, the controller state contains two call stacks, one for the internal RNN state, which we denote as M (green in Figure 1), and one for the program and arguments, which we denote as S (red in Figure 1). $M^t_d$ and $S^t_d$ refer to the elements at depth d of the stacks at timestep t.

Figure 2: RNN Cell: A zoomed in view of the internals of an RNN cell from Figure 1.

The training data for NPI requires full execution traces. We use $\tau$ to denote all the observations recorded in a single full execution trace. Specifically, for timestep t in the execution we define $\tau^t_w$ to be the input world state, and $\tau^t_a$ to be the decision of which of the following actions to take: OP, PUSH, or POP, as defined above. Note that, as with the original NPI model, we also include arguments for both the operation and program calls, but for notational simplicity we subsume those into $\tau^t_o$ and $\tau^t_g$ respectively.

The stack updates are formally defined as

$M^{t+1}_d = [\tau^t_a = POP]\, M^t_{d+1} + [\tau^t_a = OP]\, h^t_{out} + [\tau^t_a = PUSH]\, 0, \quad d = 0$

$M^{t+1}_d = [\tau^t_a = POP]\, M^t_{d+1} + [\tau^t_a = OP]\, M^t_d + [\tau^t_a = PUSH]\, h^t_{out}, \quad d = 1 \qquad (2.1)$

$M^{t+1}_d = [\tau^t_a = POP]\, M^t_{d+1} + [\tau^t_a = OP]\, M^t_d + [\tau^t_a = PUSH]\, M^t_{d-1}, \quad d > 1$

$S^{t+1}_d = [\tau^t_a = POP]\, S^t_{d+1} + [\tau^t_a = OP]\, S^t_d + [\tau^t_a = PUSH]\, g^t_{out}, \quad d = 0$

$S^{t+1}_d = [\tau^t_a = POP]\, S^t_{d+1} + [\tau^t_a = OP]\, S^t_d + [\tau^t_a = PUSH]\, S^t_{d-1}, \quad d > 0$

The conditions in the Iverson brackets choose which type of update should be performed based on the action type. POPping from the stack moves all items up one location in the stack. Performing an elementary OP updates the top element of stack M to contain the new RNN hidden state, but otherwise leaves the stacks unchanged. PUSHing onto the stack pushes the new program and arguments, $g^t_{out}$, onto stack S, pushes a default (zero) hidden state onto stack M, and moves all of the other elements in the stacks down one location.
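To make the discrete semantics of these updates concrete, the following minimal Python sketch applies one action to explicit list-based stacks. It is illustrative only (scalars stand in for the embedding vectors of the real model), not the paper's implementation.

```python
def stack_step(M, S, action, h_out=None, g_out=None):
    """One discrete update of the two call stacks (top of stack = index 0),
    mirroring equation 2.1. M holds LSTM states, S holds program/argument
    embeddings; both are Python lists used as stacks."""
    if action == "PUSH":       # call g_out with a fresh (zero) hidden state
        M.insert(0, 0.0)       # a zero vector in the real model
        S.insert(0, g_out)
    elif action == "POP":      # return to the parent program
        M.pop(0)
        S.pop(0)
    elif action == "OP":       # elementary op: only the top of M changes
        M[0] = h_out
    return M, S
```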
At each timestep, the RNN cell computes its output from the current observation and the tops of the stacks:

$u^t = f_{enc}(w^t, g^t_{in}), \qquad h^t_{out} = f_{lstm}(u^t, h^t_{in})$

where $h^t_{in} = M^t_0$ is the current LSTM internal state, $g^t_{in} = S^t_0$ is the current program and arguments, and $w^t = \tau^t_w$ is the current world state.

The LSTM output is passed in parallel through four different decoder networks to generate the following probability distributions:

- $p^t_a$: over the action to take (OP, PUSH, or POP),
- $p^t_r$: over the arguments for the program or operation,
- $p^t_g$: over the program to be called,
- $p^t_o$: over the elementary operation to be performed.

At test time, the embedding pushed onto the stack corresponds to the most probable program, $\hat{g}^t_{out} = \mathrm{argmax}_{y \in G}\, p^t_g(y)$.

At training time our objective is to find neural network parameters $\theta$ which maximize the following (log) likelihood function:

$p(\tau^t) = [\tau^t_a = OP]\, p^t_a(OP)\, p^t_o(\tau^t_o) + [\tau^t_a = PUSH]\, p^t_a(PUSH)\, p^t_g(\tau^t_g) + [\tau^t_a = POP]\, p^t_a(POP) \qquad (2.2)$

$p(\tau) = \prod_{t=1}^{T} p(\tau^t), \qquad \mathcal{L}(\theta \mid \tau) = \log p(\tau) \qquad (2.3)$
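The sketch below accumulates $\log p(\tau)$ for one strongly supervised trace following equations 2.2-2.3. The per-step distributions would come from the decoders described above; all names here are illustrative.

```python
import numpy as np

def trace_log_likelihood(steps):
    """log p(tau) for a fully observed execution trace. Each step supplies the
    controller's distributions (p_a over actions, p_o over elementary ops,
    p_g over programs, as dicts) plus the observed decisions."""
    logp = 0.0
    for p_a, p_o, p_g, action, op, prog in steps:
        if action == "OP":
            logp += np.log(p_a["OP"]) + np.log(p_o[op])
        elif action == "PUSH":
            logp += np.log(p_a["PUSH"]) + np.log(p_g[prog])
        else:  # POP carries no operation or program argument
            logp += np.log(p_a["POP"])
    return logp
```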
In this section we introduce our core contribution, a new framework for training NPI-like models when the training data contains only sequences of elementary actions instead of full program abstractions. The basis of our framework is the Neural Program Lattice, which approximately computes marginal probabilities using an end-to-end differentiable neural network.

In this section, the training data is an elementary operation trace $\lambda$, which includes a sequence of elementary steps, $\lambda_o$, and a corresponding sequence of world states, $\lambda_w$. For each elementary step i, the elementary operation performed is $\lambda^i_o$ and the input world state is $\lambda^i_w$. We define O as a many-to-one map from a full execution trace $\tau$ to its elementary operation trace $\lambda$. With these definitions, and $p(\tau)$ as defined in equation 2.3, our desired (log) marginal likelihood for a single example becomes

$\mathcal{L}(\theta \mid \lambda) = \log \sum_{\tau \in O^{-1}(\lambda)} p(\tau) \qquad (3.1)$

Computing this quantity is intractable because the number of possible executions $|O^{-1}(\lambda)|$ is exponential in the maximum length of $\tau$, and each execution may have unique stack states. In the following sections, we describe how to approximately compute this quantity so as to enable learning from weak supervision. To also learn from strong supervision, we simply add $\log p(\tau)$ terms to the objective for each strongly supervised example $\tau$.

3.1 CTC AS A FEED-FORWARD NETWORK

In formulating a loss function which approximates the exponential sum in equation 3.1, the first challenge is aligning the elementary steps, $\lambda$, in the training data to the timesteps, t, of the model. Specifically, when the model calls into a program or returns from a program in a given timestep, it does not perform any elementary operation in that timestep. As a result, the alignment between elementary steps in the data and the timesteps of the model depends crucially on the choice of high-level abstraction. To overcome this challenge, we draw inspiration from CTC (Graves et al., 2006).

CTC is an RNN-based neural network architecture used in speech recognition to handle the analogous problem of aligning audio sequence inputs to word sequence outputs. It can be seen as a combination of an RNN and a graphical model. The RNN computes a distribution over possible outputs for each timestep, while the graphical model consumes those distributions and uses a dynamic program to compute the marginal distribution over possible label sequences. A crucial assumption is that the RNN outputs at each timestep are conditionally independent, i.e. no feedback connections exist from the output layer back into the rest of the network. This assumption is incompatible with the NPI model because action decisions from timestep t determine the world state, hidden state, and program input for the next timestep. In section 3.2 we will adapt the CTC idea to work in the NPI setting. In this section we prepare by reformulating CTC into a feed-forward neural network that can be trained with standard back propagation.

The main challenge solved by CTC is finding the alignment between the elementary steps, i, observed in the training data and the timesteps, t, of the model. To facilitate alignment discovery, the output layer in a CTC network is a softmax layer with a unit for each elementary operation in O, the set of elementary operations, as well as one additional unit for a BLANK output where no elementary operation is performed because (in our case) the model calls into a new program or returns from the current program. Define $\beta \in O'^T$ as an output sequence over the alphabet $O' = O \cup \{BLANK\}$. Additionally, define the many-to-one map B from an output sequence $\beta$ to $\lambda_o$, the sequence of elementary operations created by removing all of the BLANK outputs from $\beta$. As discussed above, the CTC model assumes that the RNN inputs at time t are independent of the decisions made by the model, i.e. that $w = (w^1, ..., w^T)$ and $g_{in} = (g^1_{in}, ..., g^T_{in})$ are provided as inputs and are thus independent of the output decisions. We can then formally define

$p^t(\beta^t \mid w, g_{in}) = \begin{cases} p^t_a(POP \mid w, g_{in}) + p^t_a(PUSH \mid w, g_{in}), & \beta^t = BLANK \\ p^t_a(OP \mid w, g_{in})\, p^t_o(\beta^t \mid w, g_{in}), & \text{otherwise} \end{cases}$

$p(\beta \mid w, g_{in}) = \prod_{t=1}^{|w|} p^t(\beta^t \mid w, g_{in})$

$\mathcal{L}(\theta \mid \lambda_o, w, g_{in}) = \log p(\lambda_o \mid w, g_{in}) = \log \sum_{\beta \in B^{-1}(\lambda_o)} p(\beta \mid w, g_{in})$

The dynamic program used by CTC to compute this likelihood is based on $y^t_i$, the total probability that as of timestep t in the model we have generated $\lambda_{1:i}$, the first i elementary actions in $\lambda_o$. $y^t_i$ is calculated from $w_{1:t}$ and $g^{1:t}_{in}$, the first t elements of w and $g_{in}$ respectively. Formally,

$y^t_i = \sum_{\beta_{1:t} \in B^{-1}(\lambda_{1:i})} p(\beta_{1:t} \mid w_{1:t}, g^{1:t}_{in})$

$y^t_i = p^t(\lambda^i_o \mid w_{1:t}, g^{1:t}_{in})\, y^{t-1}_{i-1} + p^t(BLANK \mid w_{1:t}, g^{1:t}_{in})\, y^{t-1}_i$

This formulation allows the likelihood to be computed in a feed-forward manner and the gradients of $\theta$ to be computed using standard back propagation through time. Note that if there were feedback connections in the model, then it would not be sufficient to only use $y^t_i$ as the dynamic programming state; we would need to keep track of all the different possible stack states after having produced the sequence prefix, which is what leads to the intractability.
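The dynamic program above can be implemented directly. The sketch below assumes that step_probs[t] already contains the (conditionally independent) per-step output distributions, consistent with the no-feedback assumption of this section; all names are illustrative.

```python
import numpy as np

def ctc_forward(step_probs, lam):
    """Dynamic program for y[t][i]: the probability of having emitted the
    first i elementary ops of lam after t timesteps. step_probs[t] maps each
    symbol (including "BLANK") to its probability at timestep t+1."""
    T, I = len(step_probs), len(lam)
    y = np.zeros((T + 1, I + 1))
    y[0, 0] = 1.0  # before the first timestep, nothing has been emitted
    for t in range(1, T + 1):
        p = step_probs[t - 1]
        for i in range(I + 1):
            y[t, i] = p["BLANK"] * y[t - 1, i]          # PUSH/POP: no emission
            if i > 0:
                y[t, i] += p[lam[i - 1]] * y[t - 1, i - 1]  # emit next op
    return y  # the log-likelihood of lam is log(y[T, I])
```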
Formally\nyt =pt -+ p(BLANK|w1:t\nIn the last section we assumed that the RNN inputs w, and gin were defined independently of th assumptions to handle the full Stack-based NPI model described in section[2] The key idea is tha rather than propagating forward all possible stack states, which leads to a combinatorial explosion we will propagate forward a single stack state which is a weighted average of all possible stacl states, where the weights are computed based on local probabilities of actions at each timestep This operation is analogous to that used in StackRNNs (Joulin & Mikolov2015). The result is a. tractable and differentiable forward execution process that no longer exactly computes the desirec. marginal likelihood. However, we will show experimentally that learning with this model for weakl. supervised examples leads to the behavior that we would hope for if we were learning from the tru marginal log likelihood. That is, we can share model parameters while training on strongly an weakly labeled examples, and adding the weakly labeled data improves generalization performance.\nIn more detail, we estimate all quantities specified in r but not in X using a soft-argmax function that computes deterministic functions of the previously observed or estimated quantities. These estimated quantities are a, g, and implicitly w. Both w and g can be directly replaced with a. soft-argmax as follows:\nReplacing decision t with a soft-argmax changes the stack updates from equation|2.1|into differ entiable stack updates similar to those used in|Joulin & Mikolov(2015). Formally,.\n1t 1:t ,Jin EB-1(1:i)\nThis formulation allows the likelihood to be computed in a feed-forward manner and the gradients of 0 to be computed using standard back propagation through time. Note that if there were feedback connections in the model, then it would not be sufficient to only use y, as the dynamic programming state; we would need to keep track of all the different possible stack states after having produced the sequence prefix, which is what leads to the intractability.\nwt = y iEI g)=>pg(y) Jout = YEG\niEI (ie1(yt/yt+1)pt(a)pt(), a=OP (yt/yt+1)pa(a), a F OP at(POP)M + a(OP)htut + at(PUSH)O, d = 0 t(POP)M+ at(OP)M + (PUSH)hout, Mt+1 d = 1 at(POP)Ma+1+ at(OP)Mt+ at(PUSH)Mt-1, d > 1 at(POP)St + a(OP)St + at(PUSH)gout, d = 0 lat(POP)Sd+1+ at(OP)St + at(PUSH)St-1 d > 0\nvith a introduced for notational simplicity. This change enables htn and gin to now depend on the listribution over output decisions at time t -- 1 via the stack, as gtn = St and htn = Mt, where St\nFigure 3: NPL lattice: Eac responds to one timestep, an in a timestep corresponds to. depth, l, and elementary op dex. i. A subset of the lattice are shown with blue arrows. transitions, green for OP anc. POP. Blurred Blurred All Paths Computational Gra Stack World Return Cost Acc Execute All Paths. False False True Highest E NPL True False Trued Medium Me CTC+StackRNN True True False Lowest Lo\nTable 1: Outlines the tradeoff between representational accuracy and computational cost for twc extreme solutions and NPL\n(0) = log pa(POP)yi t<T\nThis gives a fully differentiable model for approximately maximizing the marginal probability of X\nAlthough the model we have defined so far is fully differentiable, the difficultly in training smoothe models of this form has been highlighted in the original Neural Turing Machine work (Graves et al 2014) as well as much of the follow on work (Gaunt et al.] 2016]Kurach et al.]2016Grave et al. 
Although the model we have defined so far is fully differentiable, the difficulty in training smoothed models of this form has been highlighted in the original Neural Turing Machine work (Graves et al., 2014) as well as much of the follow-on work (Gaunt et al., 2016; Kurach et al., 2016; Graves et al., 2016; Neelakantan et al., 2016; Joulin & Mikolov, 2015). To help alleviate this difficulty, we introduce in this section the neural lattice structure after which Neural Program Lattices are named.

To motivate the need for this lattice, consider the set of possible program execution paths as a tree with a branch point for each timestep in the execution and a probability assigned to each path. Exact gradients could be computed by executing every path in the tree, calculating the gradient for each path, and then taking an average of the gradients weighted by the path probabilities. This solution is impractical, however, since it requires computation and memory that scale exponentially with the number of timesteps. To avoid this problem, the NTM and related techniques perform a single forward execution which is meant to approximately represent the simultaneous execution of all of the paths in the tree. To avoid the exponential explosion, the state at each timestep, i.e. tree depth, is approximated using a fixed-size representation. The approximate representation chosen by both the NTM and Joulin & Mikolov (2015) is a soft-argmax of the states generated by performing each of the possible actions on the previous approximate state.

We observe that these two choices are really extreme points on what is a continuous spectrum of options. Instead of choosing to maintain a separate state representation for every path, or to group together all paths into a single representation, we can group together subsets of the paths and maintain an approximate state representation for each subset. This allows us to move along this spectrum by trading higher memory and computational requirements for a hopefully closer approximation of the marginal probability.

Figure 3: NPL lattice: Each slice corresponds to one timestep, and each node in a timestep corresponds to a given call depth, l, and elementary operation index, i. A subset of the lattice transitions are shown with blue arrows for PUSH transitions, green for OP and orange for POP.

In our implementation we group together execution paths at each timestep by call depth, $l \in L$, and number of elementary operations performed so far, $i \in I$, and maintain at each timestep a separate embedded state representation for each group of execution paths. Thus the unrolled linear architecture shown in Figure 1 becomes instead a lattice, as shown in Figure 3, with a grid of approximate program states at each timestep. Each node in this lattice represents the state of all paths that are at depth l and elementary operation i when they reach timestep t. Each node contains a soft-argmax of the stack states in M and S and an RNN cell identical to that in Figure 2, with additional indexes for i and l on all of the inputs and outputs. For each node we must also compute $y^{t,l}_i$, the probability that at timestep t the execution is at depth l and at elementary operation i and has output the elementary operation sequence $\lambda_{1:i}$. As before, we can compute this recursively as:

$y^{t+1,l}_i = p^{t,l+1}_{a,i}(POP)\, y^{t,l+1}_i + p^{t,l}_{a,i-1}(OP)\, p^{t,l}_{o,i-1}(\lambda^i_o)\, y^{t,l}_{i-1} + p^{t,l-1}_{a,i}(PUSH)\, y^{t,l-1}_i$

Similarly, the averaged call stack values are computed recursively, taking the same form as the differentiable updates of equation 3.3, with each $\bar{a}$ and stack entry carrying the additional indexes i and l, and with the POP, OP, and PUSH contributions drawn from the neighboring lattice nodes (i, l+1), (i-1, l), and (i, l-1) respectively.

We have left out the boundary conditions from the above updates for readability; the details of these are discussed in Appendix A.4.

Finally, the likelihood function approximately maximizes the probability of paths which at any timestep have correctly generated all elementary operations in $\lambda$, are currently at depth 0, and are returning from the current program. Formally,

$\mathcal{L}(\theta \mid \lambda) = \log \sum_{t \leq T} p^{t,0}_{a,I}(POP)\, y^{t,0}_I$

Remark: The specific choice to group by elementary operation index and call depth was motivated by the representational advantages each provides. Specifically:

- Grouping by elementary operation index: allows the model to represent the input world state exactly, instead of resorting to the fuzzy world state representation from equation 3.2.
- Grouping by call depth: allows the representation to place probability only on execution paths that return from all subprograms they execute, and return only once from the top level program, as specified in equation 3.4.

Table 1 summarizes these advantages and the computational trade-offs discussed earlier.

Table 1: Outlines the tradeoff between representational accuracy and computational cost for two extreme solutions and NPL.

 | Blurred Stack | Blurred World | All Paths Return | Computational Cost | Gradient Accuracy
Execute All Paths | False | False | True | Highest | Exact
NPL | True | False | True | Medium | Medium
CTC+StackRNN | True | True | False | Lowest | Lowest

Finally, in practice we find that the values of the y's quickly underflow, and so we renormalize them at each timestep, as discussed in Appendix A.3.

4 EXPERIMENTS

In this section, we demonstrate the capability of NPL to learn on both the long-hand addition task (ADDITION) from Reed & de Freitas (2016) and a newly introduced task involving arranging blocks in a grid-world (NANOCRAFT). We show that using the NPL to train with mostly the weak supervision of elementary operation traces, and very few full program traces, our technique significantly outperforms traditional sequence-to-sequence models, and performs comparably to NPI models trained entirely with the strong supervision provided by full program traces. Details of the experimental settings are discussed in Appendix A.5.

Figure 4: NANOCRAFT: An illustrative example program, where the agent (denoted "*") is required to build a rectangular red wooden building at a certain location in a 6x6 grid world. We can see that some of the blocks are already in place in the initial world-state. To build the building, the agent (program) first makes two calls to MOVE_MANY to move into place in the X and Y dimensions, and then calls BUILD_WALL four times to build the four walls of the building. The accompanying program trace is:

NANOCRAFT, PUSH
  MOVE_MANY(right), PUSH
    ACT_MOVE(right), STAY
    <END>, POP
  MOVE_MANY(down), PUSH
    ACT_MOVE(right), STAY
    <END>, POP
  BUILD_WALL(right), PUSH
    PLACE_AND_MOVE(right), PUSH
      ACT_MOVE(right), STAY
      ACT_PLACE_BLOCK(wood, red), STAY
      <END>, POP
    PLACE_AND_MOVE(right), PUSH
      ACT_MOVE(right), STAY
      <END>, POP
    <END>, POP
  BUILD_WALL(down), PUSH
    <END>, POP
  <END>, POP

[Figure 5 plot: NANOCRAFT with full world observation; accuracy (y-axis, 0.2-0.8) versus number of fully supervised samples (x-axis: 0, 16, 32, 64, 128, 256) for NPI, NPL-64/128/256, and Seq-64/128/256.]

4.1 SAMPLE COMPLEXITY

Task: We study the sample complexity using a task we call NANOCRAFT. In this task we consider an environment similar to those utilized in the reinforcement learning literature.
The perceptual input comes from a 2-D grid world where each grid cell can be either empty or contain a block with both color and material attributes. The task is to move around the grid world and place blocks in the appropriate grid cells to form a rectangular building. The resulting building must have a set of provided attributes: (1) color, (2) material, (3) location, and sizes in the (4) X and (5) Y dimensions As shown in the example in Figure4 at each step the agent can take one of two primitive actions, place a block at the current grid cell with a specific color and material, or move in one of the four\nwith additional indexes for i and l on all of the inputs and outputs\nF1gure 4: ANORAEL: An illustrative example program,. where the agent (denoted as. \"*\") is required to build 34. rectangular red wooden build-. ing at a certain location in. a 6x6 grid world. Wecan see that some of the blocks. are already in place in the. initial world-state.. Tobuild the building, the agent (pro-. gram) first makes two calls to. MOVE_MANY to move into place in the X and Y dimensions. and. then calls BUILD WALL four times to build the four walls of the building..\nFigure 5: NANoCRAFT Sample Complexity: The x-axis varies the number of samples containing. full program abstractions, while the y-axis shows the accuracy. NPL-{64,128,256} shows the accu- racy of our model when trained with 64/128/256 training samples. NPI shows the accuracy of NPI. which can utilize only the samples containing full program abstractions. Finally, Seq-{64,128,256} shows the accuracy of a seq2seq baseline when trained on 64/128/256 samples. It's performance does not change as we vary the number of samples with full program abstractions since it cannot. utilize the additional supervision they provide..\n4 8* ADD, PUSH 0 ADD1, PUSH ACT WRITE(3),STAY 0 2 5 CARRY, PUSH ACT PTR MOVE(1, 1eft),STAY 0 0 0 0 4 8* Act_wRIte(1),sTAy *s ACT_PTR_MOVE(1, right),STAY .... 0 0 0 0 2 <END>,POP LSHIFT, PUSH 1 0 0 ACT PTR MOVE(0, 1eft),STAY ACT_PTR_MOVE(1, 1eft),STAY *3 0 0 ACT PTR MOVE(2, 1eft),STAY ACT_PTR_MOVE(3, 1eft),STAY <END>,POP 0 4* 8 <END>,POP ADD1, PUSH 0 2* 5 AcT_WRITe(7),STAy LSHIFT, PUSH 0 1' 0 0* 4 8 ACT_PTR_MOVE(0, 1eft),STAY ACT PTR MOVE(1, 1eft),STAY 0 0 3 0* 2 5 ACT PTR MOVE(2, 1eft),STAY .... ACT PTR MOVE(3, 1eft),STAY 0* 1 0 <END>,POP <END>,POP 0* 7 3 <END>,POP\ncardinal directions. We explored both a fully observable setting, and a partially observable setting In the fully observable setting, the world is presented as a stack of 3 grids, one indicating the material of the block at each location (or empty), a similar one for color and a final one-hot grid indicating the agent's location. In the partially observable setting, the agent is provided only two integers. indicating the color and material of the block (if any) at the current location. Finally, in both settings the world input state contains an auxiliary vector specifying the five attributes of the building to be built. In each sample, a random subset of the necessary blocks have already been placed in the world, and the agent must walk right over these locations without placing a block.\nExperiment Setup: We assume that data with full programmatic abstractions is much more diffi cult to obtain than data containing only flat operation sequences!2|so we study the sample complexity in terms of the number of such samples. 
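A minimal sketch of how the fully observable NANOCRAFT world state described above could be encoded follows. The array layout, the integer coding of materials and colors, and all names are assumptions for illustration, not the paper's exact input format.

```python
import numpy as np

def encode_full_world(materials, colors, agent_xy, spec):
    """Fully observable NANOCRAFT state: a stack of three HxW grids
    (material id per cell, color id per cell, one-hot agent position) plus
    the auxiliary vector of the five target-building attributes
    (color, material, location, and sizes in X and Y)."""
    H, W = materials.shape
    agent = np.zeros((H, W))
    agent[agent_xy] = 1.0
    grids = np.stack([materials, colors, agent])  # shape (3, H, W)
    aux = np.array(spec, dtype=float)             # five building attributes
    return grids, aux
```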
All experiments were run with 10 different random seeds, and the best model was chosen using a separate validation set which is one-quarter the size of the training set.\nResults: Figure5|shows the sample complexity for the NANoCRAFT task in the fully observable setting. We can see that NPL significantly outperforms the NPI baseline (NPI) when only a subset the total training samples have full abstractions. NPL similarly outperforms a sequence-to-sequence baseline (Seq-*) trained on all of the available data. We also performed preliminary experiments for the partially observable setting, and obtained similar results.."}, {"section_index": "7", "section_name": "4.2 GENERALIZATION ABILITY", "section_text": "Task:We study generalization ability using the ADDiTioN task fromReed & de Freitas(2016) The objective of this task is to read in two numbers represented as digit sequences and compute the. digit sequence resulting from the summation of these two numbers. The goal is to let the mode. learn the basic procedure of long-hand addition: repeatedly add two one-digit numbers, write dowr . the result (and the carry bit if necessary) and move to the left until the beginning of the numbers. is reached. The whole procedure is represented using a four-row scratch pad, where the first anc. second rows are input digit sequences, the third row is the carry digit and the forth row the result The model is provided a world-state observation which only provides a partial view into the ful. scratchpad state. Specifically, it is provided the integers at the location of four different pointers. each in one row of the scratchpad. The model has two possible elementary operations, either move. a pointer left or right, or write a single digit into one of the four pointer locations. All four pointer.. start at the rightmost location (the least significant digit), and are gradually moved to the left by the.\n2Operation sequences can be obtained by observing a human demonstrating a task, whereas full abstractions require additional effort to annotate such traces.\nFigure 6: ADDITION: An il lustrative example program of the addition of 25 to 48. We have four pointers (denoted \"*'), one for each row as of the scratch pad. We re- peatedly call ADD1 until we hit the left most entry in the scratch pad. Each call to ADD1 we call ACT_WRITE tc write the result, CARRY to write the carry digit (if nec essary) and LSHIFT to shifi all four pointers to the left tc work on the next digit. The digit sequence on the fourth row of scratch pad is the result of the addition.\nGENERALIZATION ON ADDITION 0.8 0.6 0.4 0.2 # DIGITS 5 50 500 0=S2S-Easy-16 =0=S2S-Easy-32 ==NPI-1 =0= NPI-16 =NPL-16-1\nFigure 7: ADDiTion Generalization Performance: The x-axis varies the number of input digits for the samples in the test set, while the y-axis shows the accuracy. All models are trained on addition programs with inputs of 1 to 10 digits. NPL-16-1 shows the accuracy of our model when trained with 16 total samples (per number of digits), of which 1 sample (per number of digits) includes full program abstractions. NPI-1 and NPI-16 show the accuracy of the NPI model when trained with 1 total samples and 16 total samples respectively (per number of digits), all containing full program abstractions. S2s-Easy-16 and S2S-Easy-32 show the performance of the S2s-Easy baseline when trained with 16 and 32 samples respectively (per number of digits).\nprogram throughout the execution. 
Figure 6 gives an example of a full program trace as well as stat of the scratch pad at a particular timestep..\nExperiment Setup: A primary advantage of learning programmatic abstractions over sequences. is an increased generalization capability. To evaluate this, we train our model on samples ranging. from 1 to 1O input digits . The training data contains an equal number of samples of each length (number of digits), and includes full program abstractions for only one randomly chosen sample for each length such that [FULL = 10. We then test NPL using samples containing a much large. number of digits, ranging up to 1,o0o. On this task we found that both our model and the original. NPI model were somewhat sensitive to the choice of initial seed, so we sample many different seeds. and report both the mean and standard deviation, using a bootstrapping setup (Efron & Tibshirani. (1994)) which is detailed in Appendix[A.6.2\nCompared Models: We originally compared to a standard flat LSTM sequence model. However, we found that even with 32 samples per digit such a model was not able to fit even the training data for samples with more than 4 or 5 digits, so we did not present these results3 Instead, we compare to a model called S2s-Easy, which is the strongest baseline for this task from (Reed & de Freitas [2016). This model is custom-designed for learning addition and so it represents a very strong baseline. We discuss the model details in Appendix[A.6.1 For completeness we also compare to a reimplementation of NPI in two different training regimes.\nResults: Figure7|shows the generalization capabilities of our model on the ADDiTion task. Our model with \"one-shot\"' strong supervision (NPL-16-1) significantly outperforms the S2S-Easy base-. line even when the baseline is provided twice as many training samples (S2s-Easy-32). This is particularly notable given that the S2s-Easy model is specifically designed for the addition task. This result highlights the generalization capabilities our model brings by learning the latent struc-. tures which generate the observed sequences of elementary operations. Furthermore, we can see that\nthese latent structures are learned mostly from the unlabeled sequences, since the vanilla NPI mode trained with only 1 sample per digit (NPI-1) cannot generalize beyond the 10-digit data on which it was trained. Finally, we can see that just a single fully supervised sample is sufficient since it enables our model to perform comparably with a vanilla NPI model trained with FULL supervisior for all samples (NPI-16)."}, {"section_index": "8", "section_name": "We have already discussed the most relevant past work upon which we directly build: CTC (Graves et al.]2006), StackRNNs (Joulin & Mikolov]2015) and NPI (Reed & de Freitas] 2016)", "section_text": "Neural Programs Training neural networks to perform algorithmic tasks has been the focus of. much recent research. This work falls into two main categories: weakly supervised methods that learn from input-output examples, and strongly supervised methods that additionally have access to. the sequence of elementary actions performed to generate the output..\nThe work on learning neural programs from input-output data was sparked by the surprising effec-. tiveness of the Neural Turing Machine (NTM) (Graves et al.|2014). Similar to NTMs, many of the. proposed architectures have used differentiable memory (Kurach et al.2016 Graves et al.. 2016 Weston et al.f2014] Sukhbaatar et al.]2015bf Neelakantan et al.2016f Gaunt et al. 
2016; Weston et al., 2014; Sukhbaatar et al., 2015b; Neelakantan et al., 2016; Gaunt et al., 2016; Feser et al., 2016), while others have used REINFORCE (Williams, 1992) to train neural networks that use sampling-based components to model memory access (Andrychowicz & Kurach, 2016; Zaremba & Sutskever, 2015). Some of this work has considered learning addition from input-output samples, a similar, but more challenging setup than our ADDITION domain. Zaremba & Sutskever (2014) make use of a few training tricks to enable a standard LSTM to learn to add numbers up to length 9 when training on numbers of the same length. Kalchbrenner et al. (2015) propose an architecture that is able to learn to add 15-digit numbers when trained on numbers of the same length. The Neural GPU model from (Kaiser & Sutskever, 2015) learns to add binary numbers 100 times longer than those seen during training, but requires tens of thousands of training samples and extensive hyperparameter searches. Additionally, using a decimal instead of binary representation with the Neural GPU model (as in our ADDITION task) is also reported to have a significant negative impact on performance.\nThe work on learning algorithms from sequence data has utilized techniques related to ours as well as tackled related tasks. The most related techniques have augmented RNNs with various attention and memory architectures. In addition to those we have discussed earlier (Reed & de Freitas, 2016; Joulin & Mikolov, 2015), Grefenstette et al. (2015) propose an alternative method for augmenting RNNs with a stack. From a task perspective, the most related work has considered variants of the scratchpad model for long-hand addition, similar to our ADDITION domain. This work has focused largely on more standard RNN architectures, starting with Cottrell & Tsung (1993), which showed that the standard RNN architectures at the time (Jordan, 1997; Elman, 1990) could successfully generalize to test samples approximately 5 times as long as those seen during training, if a few longer samples were included in the training set. More recently, Zaremba et al. (2015) showed that an RNN architecture using modern LSTM or GRU controllers can perfectly generalize to inputs 20 times as long as those seen in the training data when trained in either a supervised or reinforcement learning setting. However this work was focused on trainability rather than data efficiency and so they utilized hundreds of thousands of samples for training.\nNPI (Reed & de Freitas, 2016) and NPL distinguish themselves from the above work with the explicit modeling of functional abstractions. These abstractions enable our model, with only 16 samples, to perfectly generalize to data sequences about 100 times as long as those in the training data. Furthermore, concurrent work (Cai, 2016) has shown that an unmodified NPI model can be trained to perform more complex algorithms such as BubbleSort, QuickSort and topological sorting by learning recursive procedures, and we expect that our method can be directly applied to reduce the amount of needed supervision for these tasks as well.\nReinforcement Learning: In the reinforcement learning domain the most related work to ours is the options framework for building abstractions over elementary actions (Sutton et al., 1999). This framework bears many similarities to both our model and to NPI. Specifically, at each time step the agent can choose either a one-step primitive action or a multi-step action policy called an option. As with our procedures, each option defines a policy over actions (either primitive or other options)
and terminates according to some function. Much of the work on options has focused on the tabular setting where the set of possible states is small enough to consider them independently. More recent work has developed option discovery algorithms where the agent is encouraged to explore regions that were previously out of reach (Machado & Bowling, 2016), while other work has shown the benefits of manually chosen abstractions in large state spaces (Kulkarni et al., 2016). However, option discovery in large state spaces where non-linear state approximations are required is still an open problem, and our work can be viewed as a method for learning such options from expert trajectories.\nMuch work in reinforcement learning has also considered domains similar to ours. Specifically, grid-world domains similar to NANOCRAFT are quite standard environments in the reinforcement learning literature. One recent example is Sukhbaatar et al. (2015a), which showed that even the strongest technique they considered struggled to successfully perform many of the tasks. Their results highlight the difficulty of learning complex tasks in a pure reinforcement learning setup. In future work we would like to explore the use of our model in setups which mix supervised learning with reinforcement learning.\nIn this paper, we proposed the Neural Program Lattice, a neural network framework that learns a hierarchical program structure based mostly on elementary operation sequences. On the NANOCRAFT and ADDITION tasks, we show that when training with mostly flat operation sequences, NPL is able to extract the latent programmatic structure in the sequences, and achieve state-of-the-art performance with much less supervision than existing models."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Jonathon Cai. Making neural programming architectures generalize via recursion. 2016. Under submission to ICLR 2017.\nBradley Efron and Robert J Tibshirani. An introduction to the bootstrap. CRC Press, 1994.\nJeffrey L Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990.\nJohn K Feser, Marc Brockschmidt, Alexander L Gaunt, and Daniel Tarlow. Neural functional programming. arXiv preprint arXiv:1611.01988, 2016.\nAlexander L Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, and Daniel Tarlow. TerpreT: A probabilistic programming language for program induction. arXiv preprint arXiv:1608.04428, 2016.\nAlex Graves, Santiago Fernandez, Faustino Gomez, and Jurgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pp. 369-376. ACM, 2006.\nAlex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476, 2016.\nMichael I Jordan. Serial order: A parallel distributed processing approach. Advances in Psychology, 121:471-495, 1997.\nArmand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pp. 190-198, 2015.\nDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.\nMarlos C Machado and Michael Bowling.
Learning purposeful behaviour in the absence of rewards. arXiv preprint arXiv:1605.07700, 2016.\nMicrosoft Corp., Redmond, WA. Kinect for Xbox 360.\nScott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR, 2016.\nSainbayar Sukhbaatar, Arthur Szlam, Gabriel Synnaeve, Soumith Chintala, and Rob Fergus. MazeBase: A sandbox for learning from games. arXiv preprint arXiv:1511.07401, 2015a.\nSainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440-2448, 2015b.\nRichard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1):181-211, 1999.\nRonald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.\nWojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014."}, {"section_index": "10", "section_name": "A.1 DATASET DETAILS", "section_text": "Table 2 lists the set of programs and elementary operations we used to generate the data for ADDITION and NANOCRAFT. The programs and elementary operations for ADDITION are identical to those in Reed & de Freitas (2016). Note that when training with weak supervision the training data contains only the elementary operations and does not contain the programs or arguments.\nTable 2: Programs, arguments and elementary operations used for generating training data of ADDITION and NANOCRAFT tasks.\nProgram | Description | Calls\nADD | Multi-digit addition. | ADD1\nADD1 | Single-digit addition. | ACT_WRITE/CARRY/LSHIFT\nCARRY | Write carry digit. | ACT_PTR_MOVE/ACT_WRITE\nLSHIFT | Shift four pointers left. | ACT_PTR_MOVE\nACT_WRITE | Write result to environment. | Elementary Operation\nACT_PTR_MOVE | Move pointer to left/right. | Elementary Operation\nNANOCRAFT | Build a rectangular fence. | MOVE_MANY/BUILD_WALL\nMOVE_MANY | Move multiple steps in one direction. | ACT_MOVE\nBUILD_WALL | Build a wall along one direction. | PLACE_AND_MOVE\nPLACE_AND_MOVE | Move one step and build a block. | ACT_MOVE/ACT_PLACE_BLOCK\nACT_MOVE | Move one step in a direction. | Elementary Operation\nACT_PLACE_BLOCK | Build a block at current location. | Elementary Operation"}, {"section_index": "11", "section_name": "A.2 IMPLEMENTATION DETAILS", "section_text": "Here we describe the implementation details of the various component neural networks inside our implementation of the NPL. Note that the mappings are all the same for both ADDITION and NANOCRAFT except for f_enc, which is task dependent.\nf_enc for ADDITION: We represent the environment observation, (latent) programs and arguments as one-hot vectors of discrete states. We feed the concatenation of one-hot vectors for environment observation and argument through a linear decoder (with bias) to get a unified arg-env representation. We then embed the programs (via f_embed) into an embedding space. Finally we feed the concatenation of the arg-env vector and program vector through a 2-layer MLP with rectified linear (ReLU) hidden activation and linear decoder.\nf_enc for NANOCRAFT: We represent the environment observation as a grid of discrete states. Here we first embed each entry into an embedding space, and then feed this embedding through two convolutional layers and two MLP layers with ReLU hidden activation and linear decoder. We represent arguments again as one-hot vectors and embed programs into an embedding space.
Finally we feed the concatenation of argument vectors, convolutional vectors of environment observation and program vector through a 2-layer MLP with ReLU hidden activation and linear decoder.\nf_lstm: We employ a two-layer LSTM cell for the mapping. The size of the hidden states is set to 128 for both ADDITION and NANOCRAFT.\nf_prog: This mapping maps the LSTM hidden state to a probability distribution over programs. The hidden state output of f_lstm is mapped through a linear projection to an 8-dimensional space, and then another linear projection (with bias) with softmax generates the distribution.\nf_action and f_op: Each of these encoders outputs a probability distribution. We feed the top hidden states of f_lstm first through a linear projection (with bias) and then a softmax function to produce the two distributions, respectively.\nWhen the operation sequence is too long, y_i^{t,l} will become vanishingly small as t grows. To prevent our implementation from underflowing, we follow Graves et al. (2006) by renormalizing y_i^{t,l} at each timestep and storing the normalized values and normalization constant separately. The new update rule becomes \hat{y}_i^{t,l} = (1/Y^{t,l}) \, p_i^t(\mathrm{PUSH}) \, y_i^{t,l}, where Y^{t,l} is the per-timestep normalization constant, and we normalize the values and maintain a log-summation of the normalization constants: \log(\bar{y}^{t+1}) = \mathrm{log\_sum\_exp}(\log(\bar{y}^{t}), \; \log(p^{t}(\mathrm{POP})) + \log(\hat{y}^{t,0}) + \bar{Y}^{t}).\nIn Section 3.3 we did not include the boundary conditions in our discussion to improve the readability. Our implementation, however, must account for the bounds on l and i, enforced with Iverson brackets in the full update equations: each POP, OP and PUSH term of the Section 3.3 updates is multiplied by indicators such as [l < L] and [0 < i] (with separate cases for d = 0, d = 1 and d > 1), zeroing out any transition that would leave the valid index range.\nAs mentioned before, NPL can be trained jointly with full program abstractions (referred to as FULL) as well as elementary operation sequences (referred to as OP). When training with FULL samples, the training procedure is similar to that for NPI and we use this setting as one of our baselines. For each dataset on which we test NPL, we include mostly OP samples with only a small number of FULL samples. We pre-train the model solely on FULL samples for a few iterations to get a good initialization. After that, in each step we train with a batch of data purely from FULL or OP based on their proportions in the dataset and generate the parameter update in that step using the corresponding objective. For all tasks, we train the NPL using ADAM (Kingma & Ba, 2015) with base learning rate of 10^-4 and batch size of 1. We decay the learning rate by a factor of 0.95 every 10,000 iterations. These settings were chosen using a manual search based on performance on the validation data."}, {"section_index": "12", "section_name": "A.6.1 S2S-EASY BASELINE", "section_text": "In our initial seq2seq baseline tests for ADDITION we represented the data for 90 + 160 = 250 as the sequence: 90X160X250. However, we found that such a model was not able to fit the training data even when trained with 32 samples per number of digits. So we instead compared to the much stronger S2S-Easy baseline presented in Reed & de Freitas (2016).
This baseline makes it much easier to learn addition through the following two modifications to the model: 1) it reverses the input digits, and 2) it generates reversed output digits immediately at each time step, such that the data sequence looks like: output: 052, input 1: 090, input 2: 061. This model is quite specific to the ADDITION task (and would not work on the NANOCRAFT task, for instance) and results in a very strong baseline. Nonetheless, as we showed in Figure 7, our model still significantly outperforms this baseline."}, {"section_index": "13", "section_name": "A.6.2 BOOTSTRAPPING", "section_text": "On the ADDITION task we found that both our model and the original NPI model were somewhat sensitive to the choice of initial seed. To test this sensitivity we ran our experiments for this task using a bootstrapping process (Efron & Tibshirani, 1994). We ran all models using 100 different seeds for each model. We then sampled 25 seed subsets, with replacement. For each subset, we chose the best seed using a validation set which was one-quarter the size of the original dataset but consisted only of 10-digit samples. We performed this resampling procedure 100 times, and in Figure 7 we report the mean and standard deviation across the resampled seed sets.
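As a concrete illustration of this resampling protocol, here is a minimal sketch (our own; the score arrays are hypothetical stand-ins for the validation/test accuracies of 100 independently seeded training runs):

    import numpy as np

    # Hypothetical per-seed validation and test accuracies for 100 runs.
    rng = np.random.default_rng(0)
    val_scores = rng.uniform(0.5, 1.0, size=100)
    test_scores = val_scores + rng.normal(0, 0.02, size=100)

    resampled = []
    for _ in range(100):                                   # 100 resampling rounds
        subset = rng.choice(100, size=25, replace=True)    # 25 seeds, with replacement
        best = subset[np.argmax(val_scores[subset])]       # best seed by validation
        resampled.append(test_scores[best])                # record its test accuracy

    print(np.mean(resampled), np.std(resampled))           # reported mean and std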
HJOZBvcel | [{"section_index": "0", "section_name": "LEARNING TO DISCOVER SPARSE GRAPHICAL MODELS", "section_text": "Eugene Belilovsky\nUniversity of Paris-Saclay, France"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Probabilistic graphical models provide a powerful framework for describing the dependencies betweer a set of variables. Many applications infer the structure of a probabilistic graphical model from data to elucidate the relationships between variables. These relationships are often represented by ar undirected graphical model also known as a Markov Random Field (MRF). We focus on a commor MRF model, Gaussian graphical models (GGMs). GGMs are used in structure-discovery settings fo rich data such as neuroimaging, genetics, or finance (Friedman et al.]2008] Ryali et al]2012)Mohar et al.[2012f|Belilovsky et al.[2016). Although multivariate Gaussian distributions are well-behaved determining likely structures from few examples is a complex task when the data is high dimensiona It requires strong priors, typically a sparsity assumption, or other restrictions on the structure of the graph, which now make the distribution difficult to express analytically and use.\nA standard approach to estimating structure with GGMs in high dimensions is based on the classic. result that the zeros of a precision matrix correspond to zero partial correlation, a necessary and sufficient condition for conditional independence (Lauritzen|1996). Assuming only a few conditional. dependencies corresponds to a sparsity constraint on the entries of the precision matrix, leading to a. combinatorial problem. Many popular approaches to learning GGMs can be seen as leveraging the.\nUniversity of Montreal, Canada\nmatthew.blaschko@esat.kuleuven.be"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task often requiring the formulation of priors and sophisticated inference procedures. In the setting of Gaussian Graphical Models (GGMs) a popular estimator is a maximum likelihood objective with a penalization on the precision matrix. Adapting this estimator to capture domain-specific knowledge as priors or a new data likelihood requires great effort. In addition, structure recovery is an indirect consequence of the data-fit term. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structure properties. We propose here to leverage this latter source of information as training data to learn a function mapping from empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery, rather than maximizing data likelihood. We apply this framework to several real-world problems in structure discovery and show that it can be competitive to standard approaches such as graphical lasso, at a fraction of the execution speed. We use convolutional neural networks to parametrize our estimators due to the compositional structure of the problem. Experimentally our learnable graph-discovery method trained on synthetic data generalizes well: identifying relevant edges in real data, completely unknown at training time. 
We find that on genetics, brain imaging, and simulation data we obtain competitive (and generally superior) performance, compared with analytical methods.\n\hat{\Theta} = \arg\min_{\Theta \in \mathcal{S}_{++}^{p}} \; \mathrm{tr}(\hat{\Sigma}\Theta) - \log\det(\Theta) + \lambda \|\Theta\|_1, \qquad (1)\nwhich can be seen as a penalized maximum-likelihood estimator. Here \Theta and \hat{\Sigma} are the precision and sample covariance matrices, respectively. A large variety of alternative regularization penalties extend the priors of the graphical lasso (Danaher et al., 2014; Ryali et al., 2012; Varoquaux et al., 2010). However, several problems arise in this approach. Constructing novel surrogates for structured-sparsity assumptions on MRF structures is challenging, as a prior needs to be formulated and incorporated into a penalized maximum likelihood objective which then needs an efficient optimization algorithm to be developed, often within a separate research effort. Furthermore, model selection in a penalized maximum likelihood setting is difficult as regularization parameters are often unintuitive.\nWe propose to learn the estimator. Rather than manually designing a specific graph-estimation procedure, we frame this estimator-engineering problem as a learning problem, selecting a function from a large flexible function class by risk minimization. This allows us to construct a loss function that explicitly aims to recover the edge structure. Indeed, sampling from a distribution of graphs and empirical covariances with desired properties is often possible, even when this distribution is not analytically tractable. As such we can perform empirical risk minimization to select an appropriate function for edge estimation. Such a framework gives easier control of the assumed level of sparsity (as opposed to graph lasso) and can impose structure on the sampling to shape the expected distribution, while optimizing a desired performance metric.\nFor particular cases we show that the problem of interest can be solved with a polynomial function, which is learnable with a neural network (Andoni et al., 2014). Motivated by this fact, as well as theoretical and empirical results on learning smooth functions approximating solutions to combinatorial problems (Cohen et al., 2016; Vinyals et al., 2015), we propose to use a particular convolutional neural network as the function class. We train it by sampling small datasets, generated from graphs with the prescribed properties, with a primary focus on sparse graphical models. We estimate from this data small-sample covariance matrices (n < p), where n is the number of samples and p is the dimensionality of the data. Then we use them as training data for the neural network (Figure 2), where target labels are indicators of present and absent edges in the underlying GGM. The learned network can then be employed in various real-world structure discovery problems.\nIn Section 1.1 we review the related work. In Section 2 we formulate the risk minimization view of graph-structure inference and describe how it applies to sparse GGMs. Section 2.3 describes and motivates the deep-learning architecture we chose to use for the sparse GGM problem in this work. In Section 3 we describe the details of how we train an edge estimator for sparse GGMs. We then evaluate its properties extensively on simulation data. Finally, we show that this edge estimator trained only on synthetic data can obtain state of the art performance at inference time on real neuroimaging and genetics problems, while being much faster to execute than other methods.\nLopez-Paz et al.
(2015) analyze learning functions to identify the structure of directed graphical models in causal inference using estimates of kernel-mean embeddings. As in our work, they demonstrate the use of simulations for training while testing on real data. Unlike our work, they primarily focus on finding the causal direction in two-node graphs with many observations.\nOur learning architecture is motivated by the recent literature on deep networks. Vinyals et al. (2015) have shown that neural networks can learn approximate solutions to NP-hard combinatorial problems, and the problem of optimal edge recovery in MRFs can be seen as a combinatorial optimization problem. Several recent works have been proposed which show neural architectures for graph input data (Henaff et al., 2015; Duvenaud et al., 2015; Li et al., 2016). These are based on multi-layer convolutional networks, as in our work, or multi-step recurrent neural networks. The input in our approach can be viewed as a complete graph, while the output a sparse graph, thus none of these are directly applicable. A related use of deep networks to approximate a posterior distribution can be found in Balan et al. (2015). Finally, Gregor & LeCun (2010); Xin et al. (2016) use deep networks to approximate steps of a known sparse recovery algorithm.\nBayesian approaches to structure learning rely on priors on the graph combined with sampling techniques to estimate the posterior of the graph structure. Some approaches make assumptions on the decomposability of the graph (Moghaddam et al., 2009). The G-Wishart distribution is a popular distribution which forms part of a framework for structure inference, and advances have been recently made in efficient sampling (Mohammadi & Wit, 2015). These methods can still be rather slow compared to competing methods, and in the setting of p > n we find they are less powerful.\nIn a GGM, the absence of an edge (i, j) corresponds to the conditional independence x_i \perp x_j \mid X_{V \setminus \{i,j\}}, so edge estimation amounts to predicting, for each pair (i, j), whether this independence holds. Given a distribution P over pairs of an empirical covariance \hat{\Sigma} and edge-indicator labels Y, we seek the function minimizing the risk\nR(f) = \mathbb{E}_{(\hat{\Sigma}, Y) \sim P}\left[ l(f(\hat{\Sigma}), Y) \right].\nHere l : \mathcal{L}^{N_e} \times \mathcal{L}^{N_e} \to \mathbb{R}^{+} is the loss function.\nThe design of the estimator in Equation (1) is not explicitly minimizing this risk functional. Thus modifying the estimator to fit a different class of graphs (e.g. small-world networks) while minimizing R(f) is not obvious. Furthermore, in practical settings the optimal \lambda is unknown and precision matrix entries can be very small. We would prefer to directly minimize the risk functional. Desired structural assumptions on samples from P on the underlying graph, such as sparsity, may imply that the distribution is not tractable for analytic solutions. Meanwhile, we can often devise a sampling procedure for P allowing us to select an appropriate function via empirical risk minimization. Thus it is sufficient to define a rich enough function class F over which we can minimize the empirical risk over the weights W, using a cross-entropy surrogate, l : \mathbb{R}^{N_e} \times \mathcal{L}^{N_e} \to \mathbb{R}^{+}, given by\nl(f_w(\hat{\Sigma}), Y) = \sum_{i \neq j} \left( Y_{ij} \log f_w^{ij}(\hat{\Sigma}) + (1 - Y_{ij}) \log(1 - f_w^{ij}(\hat{\Sigma})) \right).\nFor graphical model selection the 0/1 loss function is the natural error metric to consider (Wang et al., 2010). The estimator with minimum risk is generally not possible to compute as a closed-form expression for most interesting choices of P, such as those arising from sparse graphs. In this setting, Eq. (1) achieves the information theoretic optimal recovery rate up to a constant for certain P corresponding to uniformly sparse graphs with a maximum degree, but only when the optimal \lambda is used and the non-zero precision matrix values are bounded away from
zero (Wang et al., 2010; Ravikumar et al., 2011).\nWe discuss how the described approach can be applied to recover sparse Gaussian graphical models. A typical assumption in many modalities is that the number of edges is sparse. A convenient property of these GGMs is that the precision matrix has a zero value in the (i, j)th entry precisely when variables i and j are independent conditioned on all others. Additionally, the precision matrix and partial correlation matrix have the same sparsity pattern, while the partial correlation matrix has normalized entries.\nAlgorithm 1: Training a GGM edge estimator.\nfor i in {1, ..., N} do: Sample G_i ~ P(G); Sample \Theta_i ~ P(\Theta | G_i); Sample X_i as n i.i.d. draws from N(0, \Theta_i^{-1}); Construct (\hat{\Sigma}_i, Y_i); end for.\nSelect a function class F.\nOptimize: min_{f \in F} (1/N) \sum_{i=1}^{N} l(f(\hat{\Sigma}_i), Y_i).\nWe propose to simulate our a priori assumptions of sparsity and Gaussianity to learn f_w(\hat{\Sigma}), which can then produce predictions of edges from the input data. We model P(X|G) as arising from a sparse prior on the graph G and correspondingly the entries of the precision matrix \Theta. Obtaining a single sample of X corresponds to n i.i.d. samples from N(0, \Theta^{-1}). We can now train f_w(\hat{\Sigma}) by generating sample pairs (\hat{\Sigma}, Y). At execution time we standardize the input data and compute the covariance matrix before evaluating f_w(\hat{\Sigma}). The process of learning f_w for the sparse GGM is given in Algorithm 1. A weakly-informative sparsity prior is one where each edge is equally likely with small probability, versus structured sparsity where edges have specific configurations. For obtaining the training samples (\hat{\Sigma}, Y) in this case we would like to create a sparse precision matrix, \Theta, with the desired number of zero entries distributed uniformly. One strategy to do this and assure the precision matrices lie in the positive definite cone is to first construct an upper triangular sparse matrix and then multiply it by its transpose. This process is described in detail in the experimental section. Alternatively, an MCMC based G-Wishart distribution sampler can be employed if specific structures of the graph are desired (Lenkoski, 2013).\nThe sparsity patterns in real data are often not uniformly distributed. Many real world networks have a small-world structure: graphs that are sparse and yet have a comparatively short average distance between nodes. These transport properties often hinge on a small number of high-degree nodes called hubs. Normally, such structural patterns require sophisticated adaptation when applying estimators like Eq. (1). Indeed, high-degree nodes break the small-sample, sparse-recovery properties of l1-penalized estimators (Ravikumar et al., 2011). In our framework such structural assumptions appear as a prior that can be learned offline during training of the prediction function. Similarly, priors on other distributions such as general exponential families can be more easily integrated.
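A minimal sketch of this training-pair generation (our own illustration of the sampling scheme above, not the authors' exact sampler; the unit diagonal and the exact sparsity/value ranges are assumptions):

    import numpy as np

    def sample_training_pair(p=39, n=35, sparsity=0.95, c=0.3, rng=np.random):
        # Sparse triangular factor with off-diagonal values in [-c, c];
        # Theta = L L^T is then guaranteed to lie in the positive definite cone.
        offdiag = np.triu(rng.uniform(-c, c, (p, p)), k=1)
        offdiag *= (rng.rand(p, p) < 1.0 - sparsity)       # keep ~5% of entries
        L = offdiag + np.eye(p)                            # unit diagonal (assumption)
        theta = L @ L.T
        # n i.i.d. observations from N(0, Theta^{-1}), standardized before
        # forming the empirical covariance, as done at execution time.
        X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(theta), size=n)
        X = (X - X.mean(0)) / X.std(0)
        sigma_hat = X.T @ X / n
        # Edge labels: off-diagonal support of the precision matrix.
        Y = np.abs(theta - np.diag(np.diag(theta))) > 1e-8
        return sigma_hat, Y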
As the structure discovery model can be trained offline, even a slow sampling procedure may suffice."}, {"section_index": "3", "section_name": "2.3 NEURAL NETWORK GRAPH ESTIMATOR", "section_text": "In this work we propose to use a neural network as our function f_w. To motivate this, let us consider the extreme case when n >> p. In this case \hat{\Sigma} \approx \Sigma, and thus the entries of \hat{\Sigma}^{-1}, or the partial correlations that are almost equal to zero, can give the edge structure. The partial correlations obey the recursion\n\rho_{i,j|Z} = \left( \rho_{i,j|Z \setminus z_0} - \rho_{i,z_0|Z \setminus z_0} \, \rho_{j,z_0|Z \setminus z_0} \right) / D, \qquad (5)\nwhere D = \sqrt{(1 - \rho^2_{i,z_0|Z \setminus z_0})(1 - \rho^2_{j,z_0|Z \setminus z_0})}. We may ignore the denominator, D, as we are interested in I(\rho_{i,j|Z} = 0). Thus we are left with a recursive formula that yields a high degree polynomial. From Andoni et al. (2014, Theorem 3.1), using gradient descent, a neural network with only two layers can learn a polynomial function of degree d to arbitrary precision given sufficient hidden units.\nRemark 1. Naively the polynomial from the recursive definition of partial correlation is of degree bounded by 2^{p-2}. In the worst case, this would seem to imply that we would need an exponentially growing number of hidden nodes to approximate it. However, this problem has a great deal of structure that can allow efficient approximation. Firstly, higher order monomials will go to zero quickly with a uniform prior on \rho_{i,j}, which takes values between 0 and 1, suggesting that in many cases a concentration bound exists that guarantees non-exponential growth. Furthermore, the existence result is shown already for a shallow network, and we expect a logarithmic decrease in the number of parameters to perform function estimation with a deep network (Cohen et al., 2016).\nMoreover, there are a great deal of redundant computations in Eq. (5) and an efficient dynamic programming implementation can yield polynomial computation time and require only low order polynomial computations with appropriate storage of previous computation. Similarly we would like to design a network that has the capacity to re-use computations across edges and approximate low order polynomials. We also observe that the conditional independence of nodes i, j given Z can be computed equivalently in many ways by considering many paths through the nodes Z. Thus we can choose any valid ordering for traversing the nodes starting from a given edge.\nWe propose a series of shared operations at each edge. We consider a feedforward network where each edge i, j is associated with a fixed sized vector, o^k_{i,j}, of dimensionality d at each layer, k > 0; o^0_{i,j} is initialized to the covariance entries at k = 0. For each edge we start with a neighborhood of the 6 adjacent nodes, i, j, i-1, i+1, j-1, j+1, for which we take all corresponding edge values from the covariance matrix, illustrated in Figure 1. We proceed at each layer to increase the nodes considered for each edge, the output at each layer progressively increasing the receptive field, making sure all values associated with the considered nodes are present. The receptive field here refers to the original covariance entries which are accessible by a given o^k_{i,j} (Luo et al., 2010). The equations defining the process take the form o^{k}_{i,j} = f_{w_k}(o^{k-1}_{i,j}, o^{k-1}_{i-d_k,j}, o^{k-1}_{i,j-d_k}, o^{k-1}_{i+d_k,j-d_k}, \ldots), with \hat{y}_{i,j} = \sigma(w^{\top} o^{l}_{i,j}) at the final layer, as shown in Figure 1. Here a neural network f_{w_k} is applied at each edge at each layer and a dilation sequence d_k is used.\nFigure 1: (a) Illustration of nodes and edges "seen" at edge 4,13 in layer 1 and (b) receptive field at layer 1. All entries in grey show the \sigma_{i,j} in the covariance matrix used to compute o_{4,13}. (c) shows the dilation process and receptive field (red) at higher layers.
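To make this layer structure concrete, here is a minimal PyTorch-style sketch of a stack of dilated 2-D convolutions over the covariance matrix ending in a 1x1 sigmoid edge classifier (our own illustration; the feature-map count, kernel size, depth, and dilation sequence follow the experimental section, everything else is an assumption):

    import torch
    import torch.nn as nn

    class DNet(nn.Module):
        """Sketch of a D-Net-style edge estimator: input is the p x p empirical
        covariance (one channel), output a p x p matrix of edge probabilities."""
        def __init__(self, depth=6, channels=50):
            super().__init__()
            layers, in_ch = [], 1
            for k in range(depth):
                d = k + 1                                  # dilation sequence d_k = k + 1
                layers += [nn.Conv2d(in_ch, channels, kernel_size=3,
                                     dilation=d, padding=d),
                           nn.BatchNorm2d(channels), nn.ReLU()]
                in_ch = channels
            layers += [nn.Conv2d(channels, 1, kernel_size=1)]  # 1x1 edge classifier
            self.net = nn.Sequential(*layers)

        def forward(self, sigma_hat):                      # sigma_hat: (batch, 1, p, p)
            return torch.sigmoid(self.net(sigma_hat))

    probs = DNet()(torch.randn(2, 1, 39, 39))              # per-edge scores in [0, 1]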
We call a network of this topology a D-Net of depth l. We use dilation here to allow the receptive field to grow fast, so the network does not need a great deal of layers. We make the following observations:\nProposition 2. For general P it is a necessary condition for P-consistency that the receptive field of the D-Net covers all entries of the covariance at any edge it is applied to.\nProof. Consider nodes i and j and a chain graph such that i and j are adjacent to each other in the matrix but are at the terminal nodes of the chain graph. One would need to consider all other variables to be able to explain away the correlation. Alternatively we can see this directly from expanding Eq. (5).\nIntuitively, adjacent edges have a high overlap in their receptive fields and can easily share information about the non-overlapping components. This is analogous to a parametrized message passing. For example if edge (i, j) is explained by node k, as k enters the receptive field of edge (i, j-1), the path through (i, j) can already be discounted. In terms of Eq. (5) this can correspond to storing computations that can be used by neighbor edges from lower levels in the recursion.\nHere f_{w_k} is shared amongst all nodes and thus we can implement this as a special kind of convolutional network. We make sure to have considered all edges relevant to the current set of nodes in the receptive field, which requires us to add values from filters applied at the diagonal to all edges. In Figure 1 we illustrate the nodes and receptive field considered with respect to the covariance matrix. This also motivates a straightforward implementation using 2D convolutions (adding separate convolutions at i,i and j,j to each i,j at each layer to achieve the specific input pattern described), shown in Figure 2.\nConsidering the general n > p case is illustrative. However, the main advantage of making the computations differentiable and learned from data is that we can take advantage of the sparsity and structure assumptions on the target function to obtain more efficient results than naive computation of partial correlation or matrix inversion. As n decreases, our estimate of \rho_{i,j} becomes inexact, and here a data-driven model which can take advantage of the assumptions on the underlying distribution can more accurately recover the graph structure.\nThe convolution structure is dependent on the order of the variables used to build the covariance matrix, which is arbitrary. Permuting the input data we can obtain another estimate of the output. In the experiments, we leverage these various estimates in an ensembling approach, averaging the results of several permutations of input. We observe that this generally yields a modest increase in accuracy, but that even a single node ordering can show substantially improved performance over competing methods in the literature."}, {"section_index": "4", "section_name": "3 EXPERIMENTS", "section_text": "Our experimental evaluations focus on the challenging high dimensional settings in which p > n and consider both synthetic data and real data from genetics and neuroimaging. In our experiments
we explore how well networks trained on parametric samples generalize, both to unseen synthetic data and to several real world problems. In order to highlight the generality of the learned networks, we apply the same network to multiple domains. We train networks taking in 39, 50, and 500 node graphs. The former sizes are chosen based on the real data we consider in subsequent sections. We refer to these networks as DeepGraph-39, 50, and 500. In all cases we have 50 feature maps of 3x3 kernels. The 39 and 50 node networks have 6 convolutional layers and d_k = k + 1. The 500 node network has 8 convolutional layers and d_k = 2k + 1. We use ReLU activations. The last layer has a 1x1 convolution and a sigmoid outputting a value of 0 to 1 for each edge.\nWe sample P(X|G) with a sparse prior on P(G) as follows. We first construct a lower diagonal matrix, L, where each entry has a probability a of being zero. Non-zero entries are set uniformly between -c and c. Multiplying LL^T gives a sparse positive definite precision matrix, \Theta. This gives us our P(\Theta|G) with a sparse prior on P(G). We sample from the Gaussian N(0, \Theta^{-1}) to obtain samples of X. Here a corresponds approximately to a specific sparsity level in the final precision matrix, which we set to produce matrices 92-96% sparse, and c is chosen so that partial correlations range from 0 to 1.\nFigure 2: Diagram of the DeepGraph structure discovery architecture used in this work. The input is first standardized and then the sample covariance matrix is estimated. A neural network consisting of multiple dilated convolutions and a final 1x1 convolution layer is used to predict edges corresponding to non-zero entries in the precision matrix.\nUltimately our choice of architecture that has shared computations and multiple layers is highly scalable as compared with a naive fully connected approach and allows leveraging existing optimized 2-D convolutions. In preliminary work we have also considered fully connected layers but this proved to be much less efficient in terms of storage and scalability than using deep convolutional networks.\nSynthetic Data Evaluation: To understand the properties of our learned networks, we evaluated them on different synthetic data than the ones they were trained on.
More specifically, we used a completely different third party sampler so as to avoid any contamination. We use DeepGraph-39 in a variety of settings. The same trained network is utilized in the subsequent neuroimaging evaluations as well. DeepGraph-500 is also used to evaluate larger graphs.\nWe used the BDGraph R-package to produce sparse precision matrices based on the G-Wishart distribution (Mohammadi & Wit, 2015) as well as the R-package rags2ridges (Peeters et al., 2015) to generate data from small-world networks corresponding to the Watts-Strogatz model (Watts & Strogatz, 1998). We compared our learned estimator against the scikit-learn (Pedregosa et al., 2011) implementation of Graphical Lasso with regularizer chosen by cross-validation, as well as the Birth-Death Rate MCMC (BDMCMC) method from Mohammadi & Wit (2015).\nFor each scenario we repeat the experiment for 100 different graphs and small sample observations, showing the average area under the ROC curve (AUC), precision@k corresponding to 5% of possible edges, and calibration error (CE) (Mohammadi & Wit, 2015).\nFor graphical lasso we use the partial correlations to indicate confidence in edges; BDGraph automatically returns posterior probabilities, as does our method. Finally, to understand the effect of the regularization parameter, we additionally report the result of graphical lasso under the optimal regularizer setting on the testing data.\nOur method dominates all other approaches in all cases with p > n (which also corresponds to the training regime). For the case of random Gaussian graphs with n=35 (as in our training data) and graph sparsity of 95%, we have superior performance and can further improve on this by averaging permutations. Next we apply the method to less straightforward synthetic data, with distributions typical of many applications. We found that, compared to baseline methods, our network performs particularly well with high-degree nodes and when the distribution becomes non-normal. In particular our method performs well on the relevant metrics with small-world networks, a very common family of graphs in real-world data, obtaining superior precision at the primary levels of interest. Figure 3 shows examples of random and Watts-Strogatz small-world graphs used in these experiments.\nTraining a new network for each number of samples can pose difficulties with our proposed method. Thus we evaluated how robust the network DeepGraph-39 is to input covariances obtained from fewer or more samples. We find that overall the performance is quite good even when lowering the number of samples to n = 15; we obtain superior performance to the other approaches (Table 1). We also applied DeepGraph-39 on data from a multivariate generalization of the Laplace distribution (Gomez et al., 1998). As in other experiments, precision matrices were sampled from the G-Wishart at a sparsity of 95%. Gomez et al. (1998, Proposition 3.1) was applied to produce samples. We find that DeepGraph-39 performs competitively, despite the discrepancy between train and test distributions. Experiments with variable sparsity are considered in the supplementary material, which find that for very sparse graphs the networks remain robust in performance, while for increased density performance degrades but remains competitive.\nUsing the small-world network data generator (Peeters et al., 2015), we demonstrate that we can update the generic sparse prior to a structured one. We re-train DeepGraph-39 using only 1000 examples of small-world graphs mixed with 1000 examples from the original uniform sparsity model. We perform just one epoch of training and observe markedly improved performance on this test case, as seen in the last row of Table 1.\nEach network is trained continuously with new samples generated until the validation error saturates. For a given precision matrix we generate 5 possible X samples to be used as training data, with a total of approximately 100K training samples used for each network.
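The edge-recovery metrics used throughout this section (AUC and precision at 5% of possible edges) can be computed along the following lines (a sketch under our own conventions, not the authors' evaluation code):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def edge_recovery_metrics(prob, Y, k_frac=0.05):
        """AUC and precision@k over off-diagonal edge predictions.
        `prob` is a p x p matrix of predicted edge probabilities,
        `Y` the true adjacency matrix."""
        iu = np.triu_indices_from(prob, k=1)       # each undirected edge once
        scores, labels = prob[iu], Y[iu].astype(int)
        k = max(1, int(k_frac * len(scores)))      # top 5% of possible edges
        topk = np.argsort(scores)[::-1][:k]
        return roc_auc_score(labels, scores), labels[topk].mean()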
The networks are optimized using ADAM (Kingma & Ba, 2015) coupled with cross-entropy loss as the objective function (cf. Sec. 2.1). We use batch normalization at each layer. Additionally, we found that using the absolute value of the true partial correlations as labels, instead of hard binary labels, improves results.\nFor our final scenario we consider the very challenging setting with 500 nodes and only n = 50 samples. We note that the MCMC based method fails to converge at this scale, while graphical lasso is very slow as seen in the timing performance and barely performs better than chance. Our method convincingly outperforms graphical lasso in this scenario. Here we additionally report precision at just the first 0.05% of edges, since competitors perform nearly at chance at the 5% level.\nTable 1: For each case we generate 100 sparse graphs with 39 nodes and data matrices sampled (with n samples) from distributions with those underlying graphs. DeepGraph outperforms other methods in terms of AP, AUC and precision at 5% (the approximate true sparsity). In terms of precision and AUC DeepGraph has better performance in all cases except n > p.\nExperimental Setup | Method | Prec@5% | AUC | CE\nGaussian Random Graphs (n=35, p=39) | Glasso | 0.361 ± 0.011 | 0.624 ± 0.006 | 0.07\n | Glasso (optimal) | 0.384 ± 0.011 | 0.639 ± 0.007 | 0.07\n | BDGraph | 0.441 ± 0.011 | 0.715 ± 0.007 | 0.28\n | DeepGraph-39 | 0.463 ± 0.009 | 0.738 ± 0.006 | 0.07\n | DeepGraph-39+Perm | 0.487 ± 0.010 | 0.740 ± 0.007 | 0.07\nGaussian Random Graphs (n=100, p=39) | Glasso | 0.539 ± 0.014 | 0.696 ± 0.006 | 0.07\n | Glasso (optimal) | 0.571 ± 0.011 | 0.704 ± 0.006 | 0.07\n | BDGraph | 0.648 ± 0.012 | 0.776 ± 0.007 | 0.16\n | DeepGraph-39 | 0.567 ± 0.009 | 0.759 ± 0.006 | 0.07\n | DeepGraph-39+Perm | 0.581 ± 0.008 | 0.771 ± 0.006 | 0.07\nGaussian Random Graphs (n=15, p=39) | Glasso | 0.233 ± 0.010 | 0.566 ± 0.004 | 0.07\n | Glasso (optimal) | 0.263 ± 0.010 | 0.578 ± 0.004 | 0.07\n | BDGraph | 0.261 ± 0.009 | 0.630 ± 0.007 | 0.41\n | DeepGraph-39 | 0.326 ± 0.009 | 0.664 ± 0.008 | 0.08\n | DeepGraph-39+Perm | 0.360 ± 0.010 | 0.672 ± 0.008 | 0.08\nLaplacian Random Graphs (n=35, p=39) | Glasso | 0.312 ± 0.012 | 0.605 ± 0.006 | 0.07\n | Glasso (optimal) | 0.337 ± 0.011 | 0.622 ± 0.006 | 0.07\n | BDGraph | 0.298 ± 0.009 | 0.687 ± 0.007 | 0.36\n | DeepGraph-39 | 0.415 ± 0.010 | 0.711 ± 0.007 | 0.07\n | DeepGraph-39+Perm | 0.445 ± 0.011 | 0.717 ± 0.007 | 0.07\nGaussian Small-World Graphs (n=35, p=39) | Glasso | 0.387 ± 0.012 | 0.588 ± 0.004 | 0.11\n | Glasso (optimal) | 0.453 ± 0.008 | 0.640 ± 0.004 | 0.11\n | BDGraph | 0.428 ± 0.007 | 0.691 ± 0.003 | 0.17\n | DeepGraph-39 | 0.479 ± 0.007 | 0.709 ± 0.003 | 0.11\n | DeepGraph-39+Perm | 0.453 ± 0.007 | 0.712 ± 0.003 | 0.11\n | DeepGraph-39+update | 0.560 ± 0.008 | 0.821 ± 0.002 | 0.11\n | DeepGraph-39+update+Perm | 0.555 ± 0.007 | 0.805 ± 0.003 | 0.11
We compute the average execution time of our method compared to Graph Lasso and BDGraph on a CPU in Table 4. We note that we use a production quality version of graph lasso (Pedregosa et al., 2011), whereas we have not optimized the network execution, for which known strategies may be applied (Denton et al., 2014).\nTable 2: Experiment on 500 node graphs with only 50 samples, repeated 100 times. Improved performance in all metrics.\nExperimental Setup | Method | Prec@0.05% | Prec@5% | AUC | CE\nGaussian Random Graphs (n=50, p=500) | random | 0.052 ± 0.002 | 0.053 ± 0.000 | 0.500 ± 0.000 | 0.05\n | Glasso | 0.156 ± 0.010 | 0.055 ± 0.001 | 0.501 ± 0.000 | 0.05\n | Glasso (optimal) | 0.162 ± 0.010 | 0.055 ± 0.001 | 0.501 ± 0.000 | 0.05\n | DeepGraph-500 | 0.449 ± 0.018 | 0.109 ± 0.002 | 0.543 ± 0.002 | 0.06\n | DeepGraph-500+Perm | 0.583 ± 0.018 | 0.116 ± 0.002 | 0.547 ± 0.002 | 0.06\nFigure 3: Example of (a) random and (b) small-world graphs used in these experiments.\nCancer Genome Data: We perform experiments on a gene expression dataset described in Honorio et al. (2012). The data come from a cancer genome atlas from 2360 subjects for various types of cancer. We used the first 50 genes from Honorio et al. (2012, Appendix C.2) of commonly regulated genes in cancer. We evaluated on two groups of subjects, one with breast invasive carcinoma (BRCA) consisting of 590 subjects and the other colon adenocarcinoma (COAD) consisting of 174 subjects.\nEvaluating edge selection in real-world data is challenging. We use the following methodology: for each method we select the top-k ranked edges, recomputing the maximum likelihood precision matrix with support given by the corresponding edge selection method. We then evaluate the likelihood on a held-out set of data. We repeat this procedure for a range of k. We rely on Algorithm 0 in Hara & Takemura (2010) to compute the maximum likelihood precision given a support. The experiment is repeated for each of the COAD and BRCA subject groups 150 times. Results are shown in Figure 4. In all cases we use 40 samples for edge selection and precision estimation. We compare with graphical lasso as well as the Ledoit-Wolf shrinkage estimator (Ledoit & Wolf, 2004). We additionally consider the MCMC based approach described in the previous section. For graphical lasso and Ledoit-Wolf, edge selection is based on thresholding partial correlation (Balmand & Dalalyan, 2016).\nAdditionally, we evaluate the stability of the solutions provided by the various methods. In several applications a low variance on the estimate of the edge set is important. In Table 3 we report
Spearman correlations between pairs of solutions, as it is a measure of a monotone link between two variables. DeepGraph has far better stability in the genome experiments and is competitive in the fMRI data.\nFigure 4: Average test likelihood for COAD and BRCA subject groups in gene data and neuroimaging data using different numbers of selected edges. Each experiment is repeated 50 times for genetics data. It is repeated approximately 1500 times in the fMRI to obtain significant results due to high variance in the data. DeepGraph with averaged permutation dominates in all cases for genetics data, while DeepGraph+Permutation is superior or equal to competing methods in the fMRI data.\nTable 3: Average Spearman correlation results for real data showing stability of solution amongst 50 trials.\nMethod | Gene BRCA | Gene COAD | ABIDE Control | ABIDE Autistic\nGraph Lasso | 0.25 ± .003 | 0.34 ± 0.004 | 0.21 ± .003 | 0.21 ± .003\nLedoit-Wolfe | 0.12 ± 0.002 | 0.15 ± 0.003 | 0.13 ± .003 | 0.13 ± .003\nBdgraph | 0.07 ± 0.002 | 0.08 ± 0.002 | N/A | N/A\nTable 4: Avg. execution time over 10 trials for the 50 and 500 node problems on a CPU for Graph Lasso, BDMCMC, and DeepGraph.\nMethod | 50 nodes (s) | 500 nodes (s)\nsklearn GraphLassoCV | 4.81 | 554.7\nBDgraph | 42.13 | N/A\nDeepGraph | 0.27 | 5.6\nResting State Functional Connectivity: We evaluate our graph discovery method to study brain functional connectivity in resting-state fMRI data. Correlations in brain activity measured via fMRI reveal functional interactions between remote brain regions. These are an important measure to study psychiatric diseases that have no known anatomical support. Typical connectome analysis describes each subject or group by a GGM measuring functional connectivity between a set of regions (Varoquaux & Craddock, 2013). We use the ABIDE dataset (Di Martino et al., 2014), a large scale resting-state fMRI dataset. It gathers brain scans from 539 individuals suffering from autism spectrum disorder and 573 controls over 16 sites. For our experiments we use an atlas with 39 regions of interest derived in Varoquaux et al. (2011).\nWe use the network DeepGraph-39, the same network and parameters from the synthetic experiments, using the same evaluation protocol as used in the genomic data. For both control and autism patients we use time series from 35 random subjects to estimate edges and corresponding precision matrices. We find that for both the Autism and Control group we can obtain edge selection comparable to graph lasso for very few selected edges. When the number of selected edges is in the range above 25 we begin to perform significantly better in edge selection as seen in Fig. 4. We evaluated stability of the results as shown in Tab. 3. DeepGraph outperformed the other methods across the board.\nABIDE has high variability across sites and subjects. As a result, to resolve differences between approaches, we needed to perform 1000 folds to obtain well-separated error bars. We found that the birth-death MCMC method took very long to converge on this data; moreover the need for many folds to obtain significant results amongst the methods made this approach prohibitively slow to evaluate.\nhttp://preprocessed-connectomes-project.github.io/abide\nFigure 5: Example solution from DeepGraph and Graph Lasso in the small sample regime on the same 35 samples, along with a larger sample solution of Graph Lasso for reference. DeepGraph is able to extract similar key edges as graphical lasso.\nWe show the edges returned by Graph Lasso and DeepGraph for a sample from 35 subjects (Fig. 5) in the control group. We also show the result of a large-sample estimate based on 368 subjects from graphical lasso. In visual evaluation of the edges returned by DeepGraph we find that they closely align with results from a large-sample estimation procedure. Furthermore we can see several edges in the subsample which were particularly strongly activated in both methods."}, {"section_index": "5", "section_name": "DISCUSSION AND CONCLUSIONS", "section_text": "Our method was competitive with strong baselines.
Even in cases that deviate from standard GGM sparsity assumptions (e.g. Laplacians, small-world) it performed substantially better. When fine-tuning on the target distribution, performance further improves. Most importantly, the learned estimators generalize well to real data, finding relevant stable edges. We also observed that the learned estimators generalize to variations not seen at training time (e.g. different n or sparsity), which points to this potentially learning generic computations. This also shows potential to more easily scale the method to different graph sizes. One could consider transfer learning, where a network for one size of data is used as a starting point to learn a network working on larger dimension data.\nPenalized maximum likelihood can provide performance guarantees under restrictive assumptions on the form of the distribution and not considering the regularization path. In the proposed method one could obtain empirical bounds under the prescribed data distribution. Additionally, at execution time the speed of the approach can allow for re-sampling based uncertainty estimates and efficient model selection (e.g. cross-validation) amongst several trained estimators.\nWe have introduced the concept of learning an estimator for determining the structure of an undirected graphical model. A network architecture and sampling procedure for learning such an estimator for the case of sparse GGMs was proposed. We obtained competitive results on synthetic data with various underlying distributions, as well as on challenging real-world data. Empirical results show that our method works particularly well compared to other approaches for small-world networks, an important class of graphs common in real-world domains. We have shown that neural networks can obtain improved results over various statistical methods on real datasets, despite being trained with samples from parametric distributions. Our approach enables straightforward specifications of new priors and opens new directions in efficient graphical structure discovery from few examples."}, {"section_index": "6", "section_name": "ACKNOWLEDGEMENTS", "section_text": "This work is partially funded by Internal Funds KU Leuven, FP7-MC-CIG 334380, DIGITEO 2013-0788D - SOPRANO, and ANR-11-BINF-0004 NiConnect. We thank Jean Honorio for providing pre-processed Cancer Genome Data.\nAlexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang.
Learning polynomials with neural networks. In International Conference on Machine Learning, 2014.\nTable 5: Covariance prediction of ABIDE data. Averaged over 50 trials of 35 samples from the ABIDE Control group.\nMethod | mean l2 error | mean l-inf error\nEmpirical | 0.0267 | 0.543\nGraph Lasso | 0.0223 | 0.680\nDeepGraph | 0.0232 | 0.673\nTable 6: For each scenario we generate 100 graphs with 39 nodes, and corresponding data matrices sampled from distributions with those underlying graphs. The number of samples is indicated by n.\nExperimental Setup | Method | Prec@5% | AUC | CE\nGaussian Random Graphs (n=35, p=39, sparsity=2%) | Glasso | 0.464 ± 0.038 | 0.726 ± 0.021 | 0.02\n | Glasso (optimal) | 0.519 ± 0.035 | 0.754 ± 0.019 | 0.02\n | BDGraph | 0.587 ± 0.033 | 0.811 ± 0.017 | 0.15\n | DeepGraph-39 | 0.590 ± 0.026 | 0.810 ± 0.019 | 0.03\n | DeepGraph-39+Perm | 0.598 ± 0.026 | 0.831 ± 0.017 | 0.03\nGaussian Random Graphs (n=35, p=39, sparsity=15%) | Glasso | 0.732 ± 0.046 | 0.562 ± 0.013 | 0.32\n | Glasso (optimal) | 0.847 ± 0.029 | 0.595 ± 0.011 | 0.33\n | BDGraph | 0.861 ± 0.015 | 0.654 ± 0.013 | 0.33\n | DeepGraph-39 | 0.678 ± 0.032 | 0.643 ± 0.012 | 0.33\n | DeepGraph-39+Perm | 0.792 ± 0.023 | 0.660 ± 0.011 | 0.33\nUsing our framework it is possible to attempt to directly predict an accurate covariance matrix given a noisy one constructed from few observations. This is a more challenging task than predicting the edges. In this section we show preliminary experiments which, given an empirical covariance matrix from few observations, attempt to predict a more accurate covariance matrix that takes into account the underlying sparse data dependency structure.\nOne challenge is that outputs of our covariance predictor must be on the positive semidefinite cone; thus we choose to instead predict on the Cholesky decompositions, which allows us to always produce positive definite covariances. We train a structure similar to DeepGraph-39, modifying the last layer to be a fully connected linear layer that predicts on the Cholesky decomposition of the true covariance matrices generated by our model, with a squared loss.\nWe evaluate this network using the ABIDE dataset described in Section 3. The ABIDE data has a large number of samples, allowing us to obtain a large sample estimate of the covariance and compare it to our estimator as well as graphical lasso and empirical covariance estimators. Using the large sample ABIDE empirical covariance matrix, we find that we can obtain competitive l2 and l-infinity norms using few samples. We use 403 subjects from the ABIDE Control group, each with a recording of 150-200 samples, to construct the covariance matrix, totaling 77,330 samples (some correlated). This acts as our very approximate estimate of the population \Sigma. We then evaluate covariance estimation on 35 samples using the empirical covariance estimator, graphical lasso, and DeepGraph trained to output covariance matrices. We repeat the experiment for 50 different subsamples of the data. We see in Table 5 that the prediction approach can obtain competitive results. In terms of l2, graphical lasso performs better; however our estimate is better than empirical covariance estimation and much faster than graphical lasso. In some applications such as robust estimation, a fast estimate of the covariance matrix (automatically embedding sparsity assumptions) can be of great use. For l-infinity error we see the empirical covariance estimation outperforms graphical lasso and DeepGraph for this dataset, while DeepGraph performs better than graphical lasso in terms of this metric.\nWe note these results are preliminary, as the covariance predicting networks were not heavily optimized; moreover the ABIDE dataset is very noisy even when pre-processed, and thus even the large sample covariance estimate may not be accurate. We believe this is an interesting alternate application of our paper."}, {"section_index": "7", "section_name": "A.2 ADDITIONAL SYNTHETIC RESULTS ON SPARSITY", "section_text": "We investigate the effect of sparsity on DeepGraph-39, which has been trained with input that is 92-96% sparse. We find that DeepGraph performs well at the 2% sparsity level despite not seeing this at training time. At the same time performance begins to degrade for 15%, but it is still competitive in several categories.
The results are shown in Table 6. Future investigation can consider how alternate variation of sparsity at training time will affect these results."}, {"section_index": "8", "section_name": "A.3 APPLICATION OF LARGER NETWORK ON SMALLER INPUT", "section_text": "We perform a preliminary investigation of applying a network trained for a larger number of nodes to a smaller set of nodes. Specifically, we consider the breast invasive carcinoma gene data. We now take all 175 valid genes from Appendix C.2 of Honorio et al. (2012). We take the network trained on 500 nodes in the synthetic experiments section. We use the same experimental setup as in the gene experiments. The 175 x 175 covariance matrix is constructed from 40 samples and padded to the appropriate size. We observe that DeepGraph has similar performance to graphical lasso, while permuting the input and ensembling the result gives substantial improvement.

Figure 6: Average test likelihood over 50 trials of applying a network trained for 500 nodes, used on a 175-node problem. (Edge selection on breast invasive carcinoma subjects; curves compare DeepGraph, DeepGraph+Permute, graphical lasso, Ledoit-Wolf, and a random baseline as a function of the number of edges in the support.)
"}, {"section_index": "9", "section_name": "A.4 PERMUTATION AS ENSEMBLE METHOD", "section_text": "As discussed in Section 2.3, permuting the input and averaging over several permutations can produce an improved result empirically. We interpret this as a typical ensembling method. This can be an advantage of the proposed architecture, as we are able to easily use standard ensemble techniques. We perform an experiment to further verify that the permutation of the input (and subsequent inverse permutation) indeed allows us to produce separate classifiers that have uncorrelated errors.

We use the setup from the synthetic experiments with DeepGraph-39 in Section 3, with n = 35 and p = 39. We construct 20 permutation matrices as in the experimental section. Treating each as a separate classifier, we compute the correlation coefficient of the errors on 50 synthetic input examples. We find that the average correlation coefficient of the errors of two classifiers is 0.028 +/- 0.002, suggesting they are uncorrelated. Finally, we note the individual errors are relatively small, as can already be inferred from our extensive experimental results in Section 3. We further compute the average absolute error of all the outputs across each permutation for this set of inputs as 0.03; notably the range of outputs is 0 to 1. Thus, since the prediction errors differ at each
permutation but are accurate, we can average them and yield a lower total prediction error.

Finally, we note that our method is extremely efficient computationally; thus averaging the results of several permutations is practical even as the graph becomes large."}]
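The permutation ensembling of Appendix A.4 is straightforward to implement: relabel the nodes, run the estimator, undo the relabeling, and average. Below is a minimal NumPy sketch under stated assumptions: `estimator` stands in for a trained DeepGraph-style network (here a crude placeholder based on the pseudo-inverse), not the authors' actual model.

import numpy as np

def permutation_ensemble(sigma_hat, estimator, n_perms=20, seed=0):
    """Average edge predictions over random node permutations.

    sigma_hat : (p, p) empirical covariance (or correlation) matrix.
    estimator : callable mapping a (p, p) matrix to a (p, p) matrix of
                edge scores; stands in for the trained network.
    """
    rng = np.random.default_rng(seed)
    p = sigma_hat.shape[0]
    acc = np.zeros_like(sigma_hat, dtype=float)
    for _ in range(n_perms):
        perm = rng.permutation(p)                # random relabeling of nodes
        inv = np.argsort(perm)                   # inverse permutation
        permuted = sigma_hat[np.ix_(perm, perm)]
        pred = estimator(permuted)               # predict on relabeled input
        acc += pred[np.ix_(inv, inv)]            # undo the relabeling
    return acc / n_perms                         # ensemble average

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((35, 39))            # n=35 samples, p=39 nodes
    S = np.cov(X, rowvar=False)
    fake_net = lambda C: np.abs(np.linalg.pinv(C))   # placeholder estimator
    print(permutation_ensemble(S, fake_net).shape)

Because the errors of the permuted classifiers are (empirically) uncorrelated, this averaging behaves like a standard ensemble and reduces the total prediction error.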
BJVEEF9lx | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Recent progress in artificial intelligence is driven by the ability to learn representations from data Yet not all kinds of representations are equal, and many of the fundamental properties of representa tions (both as theoretical constructs and as observed experimentally in humans) are missing. Perhaps the most critical property of a system of representations is compositionality, which as described suc cinctly in (Fodor & Lepore|2002), is when (i) it contains both primitive symbols and symbols tha are complex; and (ii) the latter inherit their syntactic/semantic properties from the former. Compo sitionality is powerful because it enables a system of representation to support an infinite number oi semantically distinct representations by means of combination. This argument has been supportec experimentally; a growing body of evidence (Spelke & Kinzler| 2007) has shown that humans pos- sess a small number of primitive systems of mental representation - of objects, agents, number anc geometry - and new representations are built upon these core foundations.\nRepresentations learned with modern machine learning methods possess few or none of these prop-. erties, which is a severe impediment. For illustration consider that navigation depends upon some representation of geometry, and yet recent advances such as end-to-end autonomous driving (Bo-. jarski et al.|2016) side-step building explicit geometric representations of the world by learning to map directly from image inputs to motor commands. Any representation of geometry is implicit. and has the advantage that it is economical in only possessing information necessary for the task However, this form of representation lacks (i) the ability to reuse these representations for other related tasks such as predicting object stability or performing mental rotation, (ii) the ability to com-. pose these representations with others, for instance to represent a set or count of geometric objects. and (iii) the ability to perform explicit inference using representations, for instance to infer why a. particular route would be faster or slower.\nThis contribution provides a computational model of mental representation which inherits the com. positional and productivity advantages of symbolic representations, and the data-driven and eco-. nomical advantages of representations learned using deep learning methods. To this end, we model. mental representations as a form of data-structure, which by design possess various forms of com-. positionality. In addition, in step with deep learning methods we refrain from imposing a particular. representations on a system and allow it instead be learned. That is, rather than specify a concrete. data type (for example polygons or voxels for geometry), we instead define a class of representations. 
as abstract data types, and impose invariants, or axioms, that any representation must adhere to.

Mathematicians have sought an axiomatic account of our mental representations since the end of the nineteenth century, but both as an account of human mental representations, and as a means of specifying representations for intelligent systems, the axiomatic specifications suffer from a number"}, {"section_index": "1", "section_name": "LEARNING APPROXIMATE DISTRIBUTION-SENSITIVE DATA STRUCTURES", "section_text": "Armando Solar-Lezama

asolar@csail.mit.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We model representations as data-structures which are distribution-sensitive, i.e., which exploit regularities in their usage patterns to reduce time or space complexity. We introduce probabilistic axiomatic specifications to extend abstract data structures - which specify a class of representations with equivalent logical behavior - to distribution-sensitive data structures. We reformulate the synthesis of distribution-sensitive data structures as a continuous function approximation problem, such that the functions of a data-structure are realized as deep neural networks; we learn a stack, queue, natural number, set, and binary tree.

of problems. Axioms are universally quantified - for all numbers, sets, points, etc. - while humans, in contrast, are not uniformly good at manipulating numbers of different magnitude (Hyde, 2011; Nuerk & Willmes, 2005; Dehaene, 1997), rotating geometry of different shapes (Izard et al., 2011), or sets of different cardinality. Second, axioms have no algorithmic content: they are declarative rules which do not suggest how to construct concrete representations that satisfy them. Third, only simple systems have reasonable axioms, whereas many representations are complex and cannot in practice be fully axiomatized; conventional axiomatic specifications do not readily accommodate partial specification. A fourth, potentially fatal threat is offered by Dehaene (1997), where he shows that there are infinitely many systems, most easily dismissed by even a child as clearly not number-like, which satisfy Peano's axioms of arithmetic. Moreover these "nonstandard models of arithmetic" can never be eliminated by adding more axioms, leading Dehaene to conclude "Hence, our brain does not rely on axioms."

We extend, rather than abandon, the axiomatic approach to specifying mental representations, and employ it purely as a mechanism to embed domain-specific knowledge. We model a mental representation as an implementation of an abstract data type which adheres approximately to a probabilistic axiomatic specification. We refer to this implementation as a distribution-sensitive data-structure.

In summary, in this paper:

We introduce probabilistic axiomatic specifications as a quantifier-free relaxation of a conventional specification, which replaces universally quantified variables with random variables.
Synthesis of a representation is formulated as synthesis of functions which collectively satisfy the axioms. When the axioms are probabilistic, this amounts to maximizing the probability that the axioms are true.

We present a number of methods to approximate a probabilistic specification, reducing it to a continuous loss function.

We employ neural networks as function approximators, and through gradient-based optimization learn representations for a number of fundamental data structures.

Abstract data types model representations as a set of types and functions which act on values of those types. They can also be regarded as a generalized approach to algebraic structures, such as lattices, groups, and rings. The prototypical example of an abstract data type is the Stack, which models an ordered, first-in, last-out container of items. We can abstractly define a Stack of Items, in part, by defining the interface:

empty : Stack
push : Stack × Item -> Stack
pop : Stack -> Stack × Item
isempty : Stack -> {0, 1}

The interface lists the function names and types (domains and range). Note that this is a functional (rather than imperative) abstract data type, and each function in the interface has no internal state. For example, push is a function that takes an instance of a Stack and an Item and returns a Stack. empty : Stack denotes a constant of type Stack, the empty stack of no items.

The meaning of the constants and functions is not specified in the interface. To give meaning to these names, we supplement the abstract data type with a specification as a set of axioms. The specification as a whole is the logical conjunction of this set of axioms. Continuing our example, for all s ∈ Stack, i ∈ Item:

pop(push(s, i)) = (s, i)    (1)
isempty(empty) = 1          (2)
isempty(push(s, i)) = 0     (3)
pop(empty) = ⊥              (4)

A concrete representation of a stack is a data structure which assigns constants and functions to the names empty, push, pop and isempty. The data structure is a stack if and only if it satisfies the specification.

There are a number of distinct forms of compositionality with respect to data structures. One example is algorithmic compositionality, by which we can compose algorithms which use as primitive operations the interfaces to these representations. These algorithms can in turn form the interfaces to other representations, and so on.

An important property of an abstract data type which supports algorithmic compositionality is encapsulation. Encapsulation means that the particular details of how the functions are implemented should not matter to the user of the data type, only that it behaves as specified. Many languages enforce that the internals are unobservable, and that the data type can only be interacted with through its interface. Encapsulation means that data-structures can be composed without reasoning about their internal behavior.

In this paper however, we focus on parametric compositionality. Some data structures, in particular containers such as a stack, set, or tree, are parametric with respect to some other type, e.g. the type of item. Parametric compositionality means, for example, that if we have a representation of a set, and a representation of a number, we get a set of numbers for free. Or, given a representation for a tree and representations for Boolean logic, we acquire the ability to form logical expressions for free."}, {"section_index": "3", "section_name": "2.2 DISTRIBUTION SENSITIVE DATA STRUCTURES", "section_text": "Axiomatic specifications almost always contain universal quantifiers. The stack axioms are quantified over all possible stacks and all possible items. Real-world use of a data structure is however never exhaustive, and rarely uniform. Continuing our stack example, we will never store an infinite number of items, and the distribution over how many items are stored, and in which order relative to each other, will be highly non-uniform in typical use cases. Conventional data structures are agnostic to these distributional properties.

Data structures that exploit non-uniform query distributions are typically termed distribution-sensitive (Bose et al., 2013), and are often motivated by practical concerns, since queries observed in real-world applications are not uniformly random. An example is the optimum binary search tree on n keys, introduced by Knuth (Bose et al., 2013), which given a probability for each key has an average search cost no larger than any other tree.
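Before making distribution-sensitivity precise, it helps to have the deterministic specification in executable form. The following is a minimal sketch (in Python, which stands in for the formalism) of one concrete data type satisfying the stack interface and axioms (1)-(3); axiom (4) leaves pop(empty) unspecified, which we model here by raising an exception.

# A minimal concrete stack: tuples play the role of the Stack type.
empty = ()

def push(s, i):
    return s + (i,)

def pop(s):
    if not s:
        raise ValueError("pop(empty) is unspecified (bottom)")  # axiom 4
    return s[:-1], s[-1]

def isempty(s):
    return 1 if s == () else 0

def check_axioms(samples):
    """Check axioms 1-3 on sampled (stack, item) pairs."""
    for s, i in samples:
        assert pop(push(s, i)) == (s, i)   # axiom 1
        assert isempty(push(s, i)) == 0    # axiom 3
    assert isempty(empty) == 1             # axiom 2

check_axioms([((), "a"), (("a",), "b"), (("a", "b"), "c")])

A conventional implementation like this satisfies the axioms for every stack and item; the distribution-sensitive variants developed next are only required to satisfy them with high probability under a usage distribution.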
More generally, distribution-sensitive data structures exploit underlying patterns in a sequence of operations in order to reduce time and space complexity.

To make the concept of a distribution-sensitive data-structure precise, we first develop the concept of a probabilistically axiomatized abstract data type (T, O, F), which replaces universally quantified variables in its specification with random variables. T and O are respectively sets of type and interface names. F is a set of type specifications, each taking the form m : τ for a constant of type τ, or o : τ1 -> τ2 denoting a function from τ1 to τ2. Here τ ∈ T or is a Cartesian product τ1 × ... × τn.

A concrete data type σ implements an abstract data type by assigning a value (function or constant) to each name in O. A concrete data type is deemed a valid implementation only with respect to an algebraic specification A. A is a set of equational axioms of the form p = q, where p and q are constants, random variables, or transformations of random variables by functions in O.

Since a transformation of a random variable yields a random variable, and an axiom is simply a predicate of its left- and right-hand-side arguments, random variables present in an axiom imply that the axiom itself is a Boolean-valued random variable. For example, if we have a distribution over items i of the stack, axiom (1) itself is a random variable which is true or false depending on i, push, and pop, and can only be satisfied with some probability. We let P[A(σ)] denote the probability that a concrete data type σ satisfies the axioms:

P[A(σ)] := P[∧ᵢ pᵢ = qᵢ]

When P[A(σ)] = 1, σ can be said to fully satisfy the axioms. More generally, with respect to a space of concrete data types, we denote the maximum likelihood σ* as one which maximizes the probability that the axioms hold:

σ* = argmax_σ P[A(σ)]

Probabilistic axioms do not imply that the concrete data-structure itself is probabilistic. On the contrary, we are concerned with specifying and synthesizing deterministic concrete data structures which exploit uncertainty stemming only from the patterns in which the data-structure is used.

Each type t ∈ T will correspond to a finite-dimensional real-valued multidimensional array ℝⁿ. Interface functions are continuous mappings between these arrays.

A probabilistic specification is not easier to satisfy than a universally quantified one, but it can lend itself more naturally to a number of approximations. In what follows we outline a number of relaxations we apply to a probabilistic abstract data type to make synthesis tractable."}, {"section_index": "4", "section_name": "UNROLL AXIOMS", "section_text": "Axiom (1) of the stack is intensional in the sense that it refers to the underlying stack s. This provides an inductive property allowing us to fully describe the behavior of an unbounded number of push and pop operations with a single equational axiom. However, from an extensional perspective, we do not care about the internal properties of the stack; only that it behaves in the desired way. Put plainly, we only care that if we push an item i to the stack, then pop, that we get back i.
We do not care that the stack is returned to its initial state, only that it is returned to some state that will continue to obey this desired behavior.

An extensional view leads more readily to approximation, since we cannot expect to implement a stack which satisfies the inductive property of axiom (1) if it is internally a finite-dimensional vector. Instead we can unroll the axiom to be able to stack some finite number n of items."}, {"section_index": "5", "section_name": "APPROXIMATE DISTRIBUTIONS WITH DATA", "section_text": "We approximate random variables by a finite data distribution, assumed to be a representative set of samples from that distribution. Given an axiom p = q, we denote p̂ and q̂ as the values (arrays) computed by evaluating p and q respectively with concrete data from the data distributions of the random variables and the interface functions.

We relax equality constraints in axioms to a distance function, in particular the L2 norm. This transforms the equational axioms into a loss function. Given axioms indexed by i, the approximate maximum likelihood concrete data type σ* is then:

σ* = argmin_σ Σᵢ ‖p̂ᵢ − q̂ᵢ‖₂

Constants and parameterized functions (e.g. neural networks) which minimize this loss function then compose a distribution-sensitive concrete data type."}, {"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "We learned approximate distribution-sensitive data structures for the following abstract data types:

Natural number (from Peano's axioms)
Stack
Queue
Set
Binary tree

With the exception of natural number (for which we used Peano's axioms), we use axiomatizations from (Dale & Walker, 1996). As described in section 4, since we use finite-dimensional representations we unroll the axioms some finite number of times (e.g., to learn a stack of three items rather than an unbounded one) and "extensionalize" them.

In each example we used single-layer convolutional neural networks with 24 3-by-3 filters and rectifier non-linearities. In container examples such as Stack and Queue, the Item type was sampled from the MNIST dataset, and the internal stack representation was chosen (for visualization) to also be a 28-by-28 matrix. We minimized the equational distance loss function described in section 3 using the Adam optimization algorithm, with a learning rate of 0.0001. In figures 1 and 2 we visualize the properties of the learned stack.

To explore compositionality, we also learned a Stack, Queue and Set of Number, where Number was itself a data type learned from Peano's axioms.

Figure 1: Validation of stack trained on MNIST digits, and introspection of the internal representation. (Columns show steps 1-7; rows are labeled push, stack, and pop, starting from the empty stack.) Row push shows images pushed onto the stack from data in sequence. Row pop shows images taken from the stack using the pop function. Their equivalence demonstrates that the stack is operating correctly. Row stack shows the internal representation after push and pop operations. The stack is represented as an image of the same dimension as MNIST (28 by 28) arbitrarily.
The stack learns to compress three images into the space of one, while maintaining the order. It deploys an interesting interlacing strategy, which appears to exploit some derivative information.

The learned internal representations depend on three things: (i) the axioms themselves, (ii) the architecture of the networks for each function in the interface, and (iii) the optimization procedure. In the stack example, we observed that if we decreased the size of the internal representation of a stack, we would need to increase the size and complexity of the neural network to compensate. This implies that statistical information about images must be stored somewhere, but there is some flexibility over where.

Figure 2: Generalization of the stack. Top left to top right, 10 images stacked in sequence using push. Bottom right to bottom left: result from calling pop on the stack 10 times. This stack was trained to stack three digits. It appears to generalize partially to four digits but quickly degrades after that. Since the stack is finite-dimensional, it is not possible for it to generalize to arbitrarily long sequences of push operations.

Figure 3: Left: Stack versus queue encoding (rows: item, queue, stack). Three MNIST images (top row) were enqueued onto the empty queue (middle row left), and pushed onto the empty stack (bottom row left). The middle row shows the internal queue representation after each enqueue operation, while the bottom is the internal stack representation after each push. In this case, the learned stack representation compresses pixel intensities into different striated sections of the real line, putting data about the first stacked items at lower values and then shifting these to higher values as more items are stacked. This strategy appears different from that in figure 1, which notably was trained to a lower error value. The internal queue representation is less clear; the hexagonal dot pattern may be an artifact of optimization or critical to its encoding. Both enqueue and push had the same convolutional architecture. Right: Internal representations of natural numbers from 0 (top) to 19 (bottom). Natural numbers are internally represented as a vector of 10 elements. Number representations on the left are found by repeating the successor function, e.g. (succ(zero), succ(succ(zero)), ...). Numbers on the right are found by encoding machine integers into this internal representation.

Given the same architecture, the system learned different representations depending on the axioms and optimization. The stack representation learned in figure 1 differs from that in figure 3, indicating that there is not a unique solution to the problem, and different initialization strategies will yield different results. The queue internal representation is also different from them both, and its encoding is less clear. The queue and stack representations could have been the same (with only the interface functions push, pop, enqueue and dequeue taking different forms).

As shown in figure 2, data-structures exhibit some generalization beyond the data distributions on which they are trained. In this case, a stack trained to store three items is able to store four with some error, but degrades rapidly beyond that. A minimal sketch of the axiom-based training loop appears below.
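The sketch is a minimal PyTorch version of the training recipe under simplifying assumptions: small MLPs stand in for the paper's convolutional networks, random vectors stand in for MNIST items, and the dimensions are arbitrary. It minimizes the unrolled, extensional stack axioms with the equational L2 loss.

import torch
import torch.nn as nn

D = 16                      # dimension of both Item and Stack codes
push = nn.Sequential(nn.Linear(2 * D, 64), nn.ReLU(), nn.Linear(64, D))
pop  = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, 2 * D))
empty = torch.zeros(1, D)   # fixed code for the empty stack

opt = torch.optim.Adam(list(push.parameters()) + list(pop.parameters()),
                       lr=1e-4)

def unrolled_axiom_loss(items):
    """Unrolled, extensional stack axioms for a (B, T, D) item batch:
    after T pushes, T pops must return the items in reverse (LIFO) order."""
    s = empty.expand(items.shape[0], D)
    for t in range(items.shape[1]):                 # push phase
        s = push(torch.cat([s, items[:, t]], dim=1))
    loss = 0.0
    for t in reversed(range(items.shape[1])):       # pop phase
        out = pop(s)
        s, i_hat = out[:, :D], out[:, D:]           # next state, popped item
        loss = loss + ((i_hat - items[:, t]) ** 2).mean()
    return loss

for step in range(2000):
    batch = torch.rand(64, 3, D)    # random "items" stand in for MNIST digits
    opt.zero_grad()
    unrolled_axiom_loss(batch).backward()
    opt.step()

Note that only the popped items are constrained, not the intermediate stack codes, which is exactly the extensional reading of axiom (1) described above.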
Of course we cannot expect a finite capacity represen tation to store an unbounded number of items; lack of generalization is the cost of having optimized performance on the distribution of interest.\nOur contribution builds upon the foundations of distribution-sensitive data structures (Bose et al.. 2013), but departs from conventional work on distribution-sensitive data structures in that: (i) we\npush pop\nsynthesize data structures automatically from specification, and (ii) the distributions of interest ar complex data distributions, which prevents closed form solutions as in the optimum binary tree\nOur approach to learning representation can be viewed as a form of data-type synthesis from speci-. fication. From the very introduction of abstract data types, verification that a given implementation. satisfies its specification was a motivating concern (Guttag et al.[1978] Guttag1978] Spitzen & Wegbreit]1975). Modern forms of function synthesis (Solar-Lezama2009 Polikarpova & Solar- Lezama2016) use verification as an oracle to assist with synthesis. Our approach in a broad sense is. similar, in that derivatives from loss function which is derived from relaxing the specification, guide. the optimization through the paramterized function spaces.\nProbabilistic assertions appear in first-order lifting (Poole 2003), and Sampson (Sampson et al. 2014) introduce probabilistic assertions. Implementation of data type is a program. Main difference is that we synthesize data type from probabilistic assertion. Sumit's work (Sankaranarayanan2014 seeks upper and lower bounds for the probability of the assertion for the programs which operate on uncertain data.\nRecent work in deep learning has sought to embed discrete data structures into continuous form. Examples are the push down automata (Sun et al.||1993), networks containing stacks (Grefenstette et al.2015), and memory networks (Sukhbaatar et al.[2015). Our approach can be used to synthe- size arbitrary data-structure, purely from its specification, but is parameterized by the neural network structure. This permits it more generality, with a loss of efficiency."}, {"section_index": "7", "section_name": "8 DISCUSSION", "section_text": "In this contribution we presented a model of mental representations as distribution sensitive data. structures. and a method which employs neural networks (or any parameterized function) to syn thesize concrete data types from a relaxed specification. We demonstrated this on a number ol examples, and visualized the results from the stack and queue.\nOne of the important properties of conventional data structures is that they compose; they can be. combined to form more complex data structures. In this paper we explored a simple form of para- metric composition by synthesizing containers of numbers. This extends naturally to containers of containers, .e.g sets of sets, or sets of sets of numbers. Future work is to extend this to richer forms. of composition. In conventional programming languages, trees and sets are often made by compos-. ing arrays, which are indexed with numbers. This kind of composition ls fundamental to building. complex software from simple parts.\nIn this work we learned representations from axioms. Humans, in contrast, learn representations mostly from experience in the world. 
One rich area of future work is to extend data-structure learning to the unsupervised setting, such that for example an agent operating in the real world would learn a geometric data-structures purely from observation."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Yoshua Bengio. Learning Deep Architectures for AI, volume 2. 2009. ISBN 22000o0006. doi: 10.1561/2200000006.\nVarious forms of machine learning and inference learn representations of data. Our approach bears. resemblance to the auto-encoder (Bengio]2009), which exploits statistics of a data distribution to. learn a compressed representation as a hidden layer of a neural network. As in our approach, an. auto-encoder is distribution sensitive by the constraints of the architecture and the training proce-. dure (the hidden layer is of smaller capacity than the data and which forces the exploitation of regularities). However, an auto-encoder permits just two operations: encode and decode, and has no notion explicit notion of compositionality.\nA step closer to our approach than the auto-encoder are distributed representations of words as developed in (Mikolov et al.]2ooo). These representations have a form of compositionality such that vector arithmetic on the representation results in plausible combinations (Air + Canada = Air- Canada).\nStanislas Dehaene. The Number sense, volume 53. 1997. ISBN 9780199753871. doi: 10.1017. CB09781107415324.004.\nJerry A Fodor and Ernest Lepore. The Compositionality Papers. Oxford University Press, 2002\nJohn Guttag. Algebraic Specification of Abstract Data Types. Software Pioneers, 52:442-452, 197 ISSN 0001-5903. doi: 10.1007/BF00260922\nDaniel C. Hyde. Two Systems of Non-Symbolic Numerical Cognition. Frontiers in Human Neuro science, 5(November):1-8, 2011. 1SSN 1662-5161. doi: 10.3389/fnhum.2011.00150.\nDavid Poole. First-order probabilistic inference. In IJCAI International Joint Conference on Artif cial Intelligence. pp. 985-991. 2003. URL[ht t p /www.cs.ubc.ca/spider/poole/\nSriram Sankaranarayanan. Static Analysis for Probabilistic Programs : Inferring Whole Program Properties from Finitely Many Paths. pp. 447-458, 2014. 1SSN 15232867. doi: 10.1145/2462156. 2462179.\nProsenjit Bose, John Howat, and Pat Morin. Space-Efficient Data Structures, Streams, and Al gorithms: Papers in Honor of J. Ian Munro on the Occasion of His 66th Birthday. chapter A History, pp. 133-149. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013. ISBN 978-3-642- 40273-9. doi: 10.1007/978-3-642-40273-9{\\-}10. URLhttp://dx.doi.0rg/10.1007/ 978-3-642-40273-9{}10\nTomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations ofWords and Phrases and their Compositionality. Arvix, 1:1-9, 2000. ISSN 0003-6951. doi: 10.1162/jmlr. 2003.3.4-5.951. URLhttp://www.crossref.org/deleted{_}D0I.htm1\nHc Nuerk and K Willmes. On the magnitude representations of two-digit numbers. Psy- chology Science, 47(1):52-72, 2005. URL http://www.pabst-publishers.de/ psychology-science/1-2005/ps{_}1{_}2005{_}52-72.pdf\nNadia Polikarpova and Armando Solar-Lezama. Program Synthesis from Polymorphic Refinement Types. PLDI: Programming Languages Design and Implementation, 2016. URL http:// arxiv.0rg/abs/1510.08419\nArmando Solar-Lezama. The sketching approach to program synthesis. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioin formatics), 5904 LNCS:4-13, 2009. ISSN 03029743. doi: 10.1007/978-3-642-10672-9{\\-}3.\nElizabeth S. 
Spelke and Katherine D. Kinzler. Core knowledge, 2007. ISSN 1363755X.

Jay Spitzen and Ben Wegbreit. The verification and synthesis of data structures. Acta Informatica, 4(2):127-144, 1975. ISSN 00015903. doi: 10.1007/BF00288745.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. pp. 1-11, 2015. URL http://arxiv.org/abs/1503.08895"}]
S1Jhfftgx | [{"section_index": "0", "section_name": "ENFORCING CONSTRAINTS ON OUTPUTS WITH UNCONSTRAINED INFERENCE", "section_text": "Jay Yoon Lee\nCarnegie Mellon University Pittsburgh, PA.\nIncreasingly, practitioners apply neural networks to complex problems in natu ral language processing (NLP), such as syntactic parsing, that have rich output structures. Many such applications require deterministic constraints on the output values; for example, requiring that the sequential outputs encode a valid tree. While hidden units might capture such properties, the network is not always able to learn them from the training data alone, and practitioners must then resort to post processing. In this paper, we present an inference method for neural networks that enforces deterministic constraints on outputs without performing post-processing or expensive discrete search over the feasible space. Instead, for each input, we nudge the continuous weights until the network's unconstrained inference proce dure generates an output that satisfies the constraints. We find that our method reduces the number of violating outputs by up to 81%, while improving accuracy"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Many neural networks have discrete-valued output units that correspond to an inference or predictior about an input. Often, a problem might involve multiple discrete outputs. Unlike multiclass classi fication, which associates a single discrete output with each input, so called structured predictior problems associate multiple outputs with each input. For example, in multi-label classification instead of predicting a single relevant class pertaining to the image or sentence, we must predict all relevant classes: the image contains a dog, a tree, and a sky. In sequence prediction problems, the discrete outputs might be a sequence of words or symbols that must form a coherent translation of a source language sentence (Cho et al.|2014) Sutskever et al.2014), description of an image (Vinyals et al. 2015b), answer to a question (Kumar et al.|2016), or a parse-tree for an input sentence (Vinyals et al. 2015a). Crucially, in structured prediction, the output values are interdependent. Even though neural networks usually predict outputs independently or sequentially (one output at a time), the hidden units allow them to successfully capture many dependencies.\nAs a motivating example, consider a sequence-to-sequence network that inputs a sentence and outputs a sequence of \"shift-reduce'\"' commands that describe the sentence's parse tree. Briefly, the shift-.\nMichael Wick. Jean-Baptiste Tristan\n{michael.wick, jean.baptiste.tristan}@oracle.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Sometimes, the outputs must obey hard constraints. For example, in sequence labeling with BILOU encoding, a 'begin' marker B cannot immediately follow an 'inside' marker I (Ratinov & Roth 2009). In clustering, pairwise binary decisions must obey transitivity so that they yield a valid equivalence class relation over the data points (McCallum & Wellner2005][Wick et al.]2006}2008] n syntactic/dependency parsing, the output sequence must encode a valid parse tree (McDonald & Pereira]2006][Vinyals et al.]2015a]Dyer et al.2016). In formal language generation or neural compilers the output must belong to a context free language or compile (Reed & de Freitas2016). 
In dual decomposition approaches to joint inference, copies of variables must satisfy equality constraints (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). Finally, in some ensemble methods, the outputs of multiple conditionally independent classifiers must reach a consensus on the output class. Indeed, there are a tremendous number of problems that require hard constraints on the outputs. Unlike softer dependencies, violating a hard constraint is often unacceptable because the output of the network would not "type-check," causing problems for downstream components. Unfortunately, in practice, networks are not always able to exactly learn constraints from the training data alone.

reduce commands control a parsing algorithm by indicating how and when to use its stack. Each command controls whether to shift (s) a token onto the stack, reduce (r) the top of the stack into a parent tree node, or push (!) the current reduction back onto the stack.

To be successful, the network must generate commands that imply a valid tree over the entire input sentence. However, the decoder outputs just a single command at a time, producing some outputs that are not globally-consistent, valid shift-reduce programs. Indeed, the output may not have enough shifts to include every input token in the tree or may attempt to reduce when the stack is empty. For example, the following input sentence " So it 's a very mixed bag . " comprises ten space-delimited tokens (the quotations are part of the input), but our unconstrained sequence-to-sequence network outputs an invalid sequence with only nine shifts: ssr!sr!ssssrrr!rr!ssrrrrrr!. We must introduce another shift so the last token is pushed onto the stack and issue another reduce so it is inserted into the tree.

We could attempt to fix the output with post-processing, but where is the right place to insert these commands in the sequence? There are 406 = choose(29, 2) candidate locations. Further complicating our post-processing dilemma is the fact that the output contains several other errors that are seemingly unrelated to the constraint. Instead, we could attempt to fix the problem with a more sophisticated decoder, but this is difficult because the decoder outputs a single character at each time-step and our constraints are global, limiting corrections to the end of the sequence when it is too late to rectify an earlier decision. A beam search is less myopic, but in practice most of the network output mass is peaked on the best output token, resulting in little improvement.

In this paper, we propose an inference method for neural networks that enforces output constraints without employing combinatorial discrete search. The idea is to modify some (or all) of the weights for each instance at test-time, iteratively nudging them, until the network's unconstrained inference procedure generates an output that satisfies the constraints. We achieve this by expressing the hard constraints as an optimization problem over the continuous weights and employ back-propagation to change them. Prima facie, back-propagation is doomed because the constraint loss is necessarily a function of the argmax that produced the discrete values. However, we circumvent this problem by optimizing over the energy of the violating outputs instead. Since the weights directly determine the output through the energy, we are able to manipulate the unconstrained inference procedure to produce the desired result. Much like scoped-learning, the algorithm customizes the weights for each example at test-time
(Blei et al., 2002), but does so in a way that satisfies the constraints.

When applied to the above example, our method removes enough energy mass from the invalid output space in only twelve steps, allowing unconstrained decoding to produce a valid output sequence. Interestingly, the network generates an additional s command at the beginning of the sequence while also producing a cascade of error corrections in later time steps: the new output now satisfies the constraints and is a perfectly correct parse. Of course, enforcing constraints does not always lead to an improvement in accuracy, but we find that often it does in practice, especially for a well-trained network. We find that our method is able to completely satisfy constraints in up to 81% of the outputs."}, {"section_index": "3", "section_name": "2 BACKGROUND", "section_text": "Consider a neural network that generates a variable-length output vector y = {yᵢ} from a variable-length input vector x = {xᵢ}. For example, in image classification, the input vector encodes a fixed multi-dimensional tensor of pixel intensities and the output vector comprises just a single element corresponding to the discrete class label. In sequence-to-sequence, the input might be a variable-length vector of French tokens, and the output would be a variable-length vector of its English translation. It is sometimes convenient to think of the network as a function from input to output:

f(x; W) ↦ y    (1)

However, for the purpose of exposition, we separate the neural network into a real-valued model (negative energy function) that scores the compatibility of the outputs (given the weights and input) and an inference procedure that searches for high-scoring outputs.

For the model, let yᵢ be a discrete output from an output unit and let ψ(yᵢ; x, W) be its corresponding real-valued log-space activation score (e.g., the log of the softmax for locally normalized models or simply a linear activation value for globally normalized models). Define the negative energy Ψ over a collection of output values y as an exponentiated sum of log-space activation scores:

Ψ(y; x, W) = exp(Σᵢ ψ(yᵢ; x, W))    (2)

Then, inference is the problem of finding the values of the outputs y that maximize the negative energy given fixed inputs x and weights W. Thus, we can rewrite the neural network as the function:

f(x; W) ↦ argmax_y Ψ(y; x, W)    (3)

The purpose of separating the model from the inference procedure is so we can later formalize our optimization problem. We emphasize that this formulation is consistent with existing neural networks. Indeed, inference in feed-forward networks is a single feed-forward pass from inputs to outputs. When the outputs only depend on each other through hidden states that only depend on earlier layers of the network, feed-forward inference is exact in the sense that it finds the optimum of Equation 3. For recurrent neural networks (RNNs), each output depends on hidden states that are functions of previous output values. However, we can still think of the usual procedure that produces the highest scoring output at each time step as a local greedy approximation to global inference; of course, the procedure can optionally be improved with a beam.

A major advantage of neural networks is that once trained, inference is extremely efficient. However, constraints can render inference intractable due to discrete search. Our goal is to take advantage of the fact that unconstrained inference is inexpensive and design a constrained inference algorithm that exploits such a procedure as a black box. Our method iteratively adjusts the weights for each test-time input, concentrating the probability mass on the feasible region so that unconstrained inference becomes increasingly likely to generate an output that satisfies the constraints.

In this work, we focus on constraints that require the outputs to belong to an input-dependent context-free language L*ₓ (CFL). The idea is to treat the output space of the neural network as the terminal symbols, and devise the appropriate production rules and non-terminals to express constraints on them. An advantage of employing CFLs over other formalisms such as first-order logic (FOL) is that CFLs are intuitive for expressing constraints on the outputs, especially for language models and sequence-to-sequence networks. For example, when modeling Python or Java code, it is easy to express many of the desired programming language's constraints using a CFL, but cumbersome in FOL. Indeed, CFLs are an expressive class of languages.

To motivate our algorithm, we begin with the ideal optimization problem and argue that, unlike for linear models with local constraints, the resulting Lagrangian is not well suited for globally constrained inference in neural networks. We ultimately settle on an alternative objective function that reasonably models our constrained inference problem. Although our algorithm lacks the theoretical guarantees enjoyed by classic relaxation algorithms, we nevertheless find it works well in practice.

Consider the following constrained inference problem for neural networks:

max_y Ψ(x, y, W)
s.t.
y ∈ L*ₓ    (4)

Naively enforcing the constraint requires combinatorial discrete search, which is intractable in general. Instead, we prefer a smooth optimization problem with meaningful gradients to guide the search. With this in mind, let g(y, L) ↦ r for r ∈ ℝ⁺ be a function that measures a loss between a sentence y and a grammar L, such that g(y, L) = 0 if and only if there are no grammatical errors in y. That is, g(y, L) = 0 for the feasible region and is strictly positive everywhere else. For a large class of CFLs, g could be the least-errors-count function (Lyon, 1974) or a weighted version thereof. We could then express CFL membership as an equality constraint and minimize the Lagrangian:

min_λ max_y Ψ(x, y, W) + λ g(y, L*ₓ)    (5)

However, this dual optimization problem has a major flaw. Our constraints are global and do not necessarily factorize over the individual outputs. Consequently, there is just a single dual variable λ. Optimizing λ does little more than eliminate a single contour of output configurations at a time, resulting in a brute-force trial-and-error search.

Instead, observe that the network's weights control the negative energy of the output configurations. By properly adjusting the weights, we can affect the outcome of inference by removing mass from invalid outputs. The weights are likely to generalize much better than the single dual variable because in most neural networks, the weights are tied across space (e.g., CNNs) or time (e.g., RNNs). As a result, lowering the negative energy for a single invalid output has the effect of lowering the negative energy for an entire family of invalid outputs, enabling faster search. With this in mind, we introduce an independent copy WΛ of the network's weights W and minimize with respect to these "dual weights" instead of the dual variable.
This is powerful because we have effectively introduced an exponential number of "dual variables" (via the energy, which scores each output) that we can easily control via the weights; although similar, the new optimization is no longer equivalent to the original:

min_{WΛ} max_y Ψ(x, y, W) + Ψ(x, y, WΛ) g(y, L*ₓ)    (6)

While a step in the right direction, the objective still requires combinatorial search because (1) the maximization involves two non-linear neural networks and (2) a greedy decoding algorithm is unable to cope with the global loss g() because the constraints do not factorize over the individual outputs. In contrast, the functions involved in classic Lagrangian relaxation methods for NLP have multipliers for each output variable that can be combined with linear models to form a single unified decoding problem for which efficient inference exists (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). Since our non-linear functions and global constraints do not afford us the same ability, we must modify the optimization problem a final time so that we can employ the network's efficient inference procedure as a black box. In particular, we (1) remove the negative-energy term that involves the original weights W and compensate with a regularizer that attempts to keep the dual weights WΛ as close to these weights as possible and (2) maximize exclusively over the network parameterized by WΛ. The result is a different optimization problem on which our algorithm is based:

min_{WΛ} Ψ(x, y, WΛ) g(y, L*ₓ) + α‖W − WΛ‖²
where y = argmax_y Ψ(x, y, WΛ)    (7)

Informally, our algorithm alternates the maximization (by running efficient unconstrained inference) and minimization (by performing SGD) until it produces a feasible output or it exceeds a maximum number of iterations. For each test example, we re-initialize the dual weights to the trained weights to ensure the network does not deviate too far from the trained network. More precisely, see Algorithm 1.

Algorithm 1 Constrained inference for neural nets
Inputs: test instance x, input-specific CFL L*ₓ, pretrained weights W
WΛ ← W                                # reset instance-specific weights
while not converged do
    y ← f(x; WΛ)                       # perform inference using weights WΛ
    WΛ ← WΛ − η ∇WΛ [Ψ(x, y, WΛ) g(y, L*ₓ)]   # update instance-specific weights with SGD or a variant thereof
end while

Consider the structured prediction problem of syntactic parsing, in which the goal is to input a sentence comprising a sequence of tokens and output a tree describing the grammatical parse of the sentence. One way to model the problem with neural networks is to linearize the representation of the parse tree and then employ the familiar sequence-to-sequence model (Vinyals et al., 2015a).

Let us suppose we linearize the tree using a sequence of shift (s) and reduce (r, r!) commands that control an implicit shift-reduce parser. Intuitively, these commands describe the exact instructions for converting the input sentence into a complete parse tree: the interpretation of the symbol s is that we
encodes a type-free version of the parsetree (S (NP the ball) (VP is (NP red))) forthe input sentence\"the ballisred' It is easy to recover the tree structure from the input sentence and the output commands by simulating a shift reduce parser, performing one command at a time as prescribed by the classic algorithm.\nNote that for output sequences to form a valid tree over the input, the sequence must satisfy a number. of constraints. First, the number of shifts must equal the number of input tokens mx, otherwise either. the tree would not cover the entire input sentence or the tree would contain spurious terminal symbols. Second, the parser cannot issue a reduce command if there are no items left on the stack. Third, the. number of reduces must be sufficient to leave just a single item, the root node, on the stack.\nWe can express most of these constraints with a CFI\nG -> sRr! R >sRr L = R > Rr! R >RR R ->e\nIntuitively, Rule 1 states that a valid shift-reduce command set must begin with a shift (since stack is initially empty, there is nothing to reduce) and end with a reduce that places the final result on the stack. Rule 2 states that if we do a shift, then we need to reduce the shifted token at some point in the future. Rule 3 states that if we do not shift then we are allowed to reduce only if we also push the result on the stack. Rule 4 allows for multiple subtrees. Rule 5 is the base case.\nNote, however, that this grammar is for a general purpose shift-reduce language, but we need to. constrain the number of shifts to equal the number of input tokens mx. Since the constraint is a bit verbose to express with production rules, we can instead write the regular language (s(r!)*)mx (r!)*. where m is the number of elements in x and intersect it with our CFL..\nLx = Ln(s(r!)*)mx(r!)"}, {"section_index": "4", "section_name": "5 RELATED WORK", "section_text": "There has been recent work in applying neural networks to structured prediction problems. For. example, the recent structured prediction energy networks (SPENS) combines graphical models and. neural networks via an energy function defined over the output variables (Belanger & McCallum. 2016). SPENS focuses on soft constraints (via the energy function) and performs inference by. relaxing the binary output variables to be continuous and then backpropagating into them. In contrast.. our method focuses on hard constraints and we backpropagate into the weights rather than into the. outputs directly. We could combine our method with SPENs to handle soft constraints; for example. by back-propagating the output energy into the weights instead of the relaxed outputs themselves\nThere has been recent work on applying neural networks to parsing problems that require the ability to handle hard constraints. For example, by employing a sequence-to-sequence network (Vinyals et al. 2015a) or a custom network designed for shift reduce parsing (Dyer et al.2016). The former requires\nRather than relying on a general purpose algorithm to compute g(y, L) that measures the number. of grammatical errors, we instead implement it specifically for our language. Let ctt-1(b(i)) be the. function that counts the number of times proposition b(i) is true. Now. 
define the following loss:

g(y, L*ₓ) = (mₓ − ct(yᵢ = s))² + Σᵢ [ct_{j≤i}(yⱼ = r) − ct_{j≤i}(yⱼ ∈ {s, !})]₊ + (ct(yᵢ = r) − ct(yᵢ ∈ {s, !}))²    (10)

The first term measures the amount of violation due to the regular language, and the second and third terms measure the amount of violation according to the CFL."}, {"section_index": "4", "section_name": "5 RELATED WORK", "section_text": "There has been recent work in applying neural networks to structured prediction problems. For example, the recent structured prediction energy networks (SPENs) combine graphical models and neural networks via an energy function defined over the output variables (Belanger & McCallum, 2016). SPENs focus on soft constraints (via the energy function) and perform inference by relaxing the binary output variables to be continuous and then backpropagating into them. In contrast, our method focuses on hard constraints and we backpropagate into the weights rather than into the outputs directly. We could combine our method with SPENs to handle soft constraints; for example, by back-propagating the output energy into the weights instead of the relaxed outputs themselves.

There has been recent work on applying neural networks to parsing problems that require the ability to handle hard constraints, for example by employing a sequence-to-sequence network (Vinyals et al., 2015a) or a custom network designed for shift-reduce parsing (Dyer et al., 2016). The former requires

Task: azbz
inference       weights changed (WΛ)           conversion rate   accuracy
unconstrained   none                           0.0%              75.6%
constrained     all                            65.2%             82.4%
constrained     output only                    20.9%             77.8%
constrained     encoder only                   58.2%             82.5%
constrained     decoder only                   57.4%             82.3%

Task: sr no types
inference       weights changed (WΛ)           conversion rate   accuracy
unconstrained   none                           0.0%              84.0%
constrained     all                            81.8%             84.4%

Task: sr with types
inference       weights changed (WΛ)           conversion rate   accuracy
unconstrained   none                           0.0%              87.8%
constrained     all                            79.2%             88.3%
constrained     output only                    5.0%              88.1%
constrained     decoder (top layer)            36.2%             88.2%
constrained     decoder (all layers)           54.7%             88.3%
constrained     decoder (top) + attention      38.0%             88.1%
constrained     decoder (all) + attention      56.5%             88.2%

Table 1: Conversion rates on all three tasks with 100 steps of SGD. Note that satisfying the constraints has no negative effect on accuracy and often has a positive effect.

Input/target: bzazbzazbzazazbzbzbzbzbz → zbaaazbaaazbaaaaaazbzbzbzbzb

iteration   output                             loss    accuracy
0           zbaaazbaaazbaaaaaazbzbzbaaazbzb    0.260   75.0
39          zbaaazbaaazbaaaaaazbzbzbaaazbzb    0.259   75.0
40          zbaaazbaaazbaaaaaazbzbzbaaazb     0.250   80.0
72          zbaaazbaaazbaaaaaazbzbzbaaazb     0.249   80.0
73          zbaaazbaaazbaaaaaazbzbzbzbzb      0.0     100.0

Table 2: An example for which enforcing the constraints improves accuracy. Red indicates errors. The output changes more than once before the constraints are finally enforced. Greedy decoding with constraints might correct this example because the spurious a's are at the end of the sequence.

Another intriguing approach is to distill the hard constraints into the weights at training time using a teacher network (Hu et al., 2016). The method is appealing because it does not require constrained inference or combinatorial search. However, the method must achieve a difficult balance between the loss due to the training data and the loss due to the constraint violations. Further, it would crucially rely on the network's ability to generalize the constraints learned on the training data to the testing data.

Finally, our method highly resembles dual decomposition and more generally Lagrangian relaxation for structured prediction (Koo et al., 2010; Rush et al., 2010; Rush & Collins, 2012). In such techniques, it is assumed that a computationally efficient inference algorithm can maximize over a superset of the feasible region (indeed this assumption parallels our exploitation of the fact that unconstrained inference in the neural network is efficient). Then, the method employs gradient descent to gradually concentrate this superset onto the feasible region until the constraints are satisfied. However, for computational reasons, these techniques assume that the constraints factorize over the output and that the functions are linear so that they can be combined into a single model. In contrast, we have a single dual variable, so we instead minimize with respect to the weights, which generalize better over the output.
Further, we are unable to combine the dual into a single model over which we can do inference because the network is highly non-linear.

the output to form a valid parse tree and hence they employ post-processing to ensure this property. The latter satisfies constraints as part of the decoding process by sampling over a combinatorial space. Our approach does not rely on post-processing or discrete search."}, {"section_index": "5", "section_name": "6 EXPERIMENTS", "section_text": "In this section we empirically evaluate our constrained inference procedure on two sequence-to-sequence tasks. The first is a transduction task between two simple languages, which we describe next. The second is the sequence-to-sequence shift-reduce parsing task described in Section 4.

Input/target: azazbzazbzbzazbzbzbzbzbz → aaaaaazbaaazbzbaaazbzbzbzbzb

iteration   output                           loss     accuracy
0           aaaaaazbaaazbaaazbzbzbzbaaazb    0.2472   66.7
1           aaaaaazbaaazbaaazbzbzbzbaaazb    0.2467   66.7
2           aaaaaazbaaazbaaazbzbzbzbaaazb    0.2462   66.7
3           aaaaaazbaaazbzbaaazbzbzbzbzb     0.0      100.0

Table 3: An example for which enforcing the constraints improves accuracy. Red indicates errors. Note that greedy decoding with constraints would not fix the errors in the middle, since the errors are made before the constraints are violated. In contrast, the proposed method takes the constraints into account in a global manner, allowing earlier errors to be corrected by future constraint violations.

Input/target: bzbzbzbzazbzbzazazazazbz → zbzbzbzbaaazbzbaaaaaaaaaaaazb

iteration   output                           loss     accuracy
0           zbzbzbzbaaazbaaaaaaaaaaaazbaaa   0.2954   74.2
4           zbzbzbzbzbaaaaaaaaazbzbaaaaaa    0.0      60.0

Table 4: An example for which enforcing the constraints degrades accuracy. Errors in red.

A transducer T : L₀ → L₁ is a function from a source language to a target language. For the purpose of the experiments, T is known and our goal is to learn it from data. We choose a transducer similar to those studied in recent work (Grefenstette et al., 2015). The source language L₀ is (az|bz)* and the target language L₁ is (aaa|zb)*. The transducer is defined to map az to aaa and bz to zb. For example, T(bzazbz) ↦ zbaaazb. The training set comprises 1934 sequences of length 2-20 and the test set contains sentences of lengths 21-24. As is common practice, we employ shorter sentences for training to require generalization to longer sentences at test time.

We employ a thirty-two hidden unit, single-layered, attentionless, sequence-to-sequence long short-term memory (LSTM) in which the decoder LSTM inputs the final encoder state at each time-step. The encoder and decoder LSTMs each have their own set of weights. We train the network for 1000 epochs using RMSProp to maximize the likelihood of the output (decoder) sequences in the training set. The network achieves perfect train accuracy while learning the rules of the output grammar nearly perfectly, even on the test set. However, despite learning the train set perfectly, the network fails to learn the input-specific constraint that the number of a's in the output should be three times the number in the input. We implement a loss for this constraint and evaluate how well our method enforces it at test time:

g(y, L) = (n + m)⁻¹ |Σᵢ 𝕀(yᵢ = a) − 3 Σᵢ 𝕀(xᵢ = a)|

Here n + m, the combined input/output length, normalizes the loss between 0 and 1. For constrained inference we run Algorithm 1 and employ vanilla stochastic gradient descent with a learning rate of 0.05 and no weight decay. We cap the number of iterations at a maximum of 100.

The top section of Table 1 contains the results for this azbz task. We use the term converted to refer to a sentence that initially had a constraint violation, but was later fixed by the constrained-inference procedure. The conversion rate is the percentage of such sentences that we convert: on this task, up to two-thirds.
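Algorithm 1 is straightforward to realize on top of any differentiable decoder. The following PyTorch sketch is a minimal version under stated assumptions: TinyDecoder is a hypothetical stand-in for a pretrained model (not the LSTM used in these experiments), the constraint is the azbz count constraint above with the (n + m)⁻¹ normalizer folded away, and the regularizer α‖W − WΛ‖² is omitted, with initialization at W playing its role as noted in the text.

import copy
import torch
import torch.nn as nn

VOCAB = ["a", "z", "b"]

class TinyDecoder(nn.Module):
    """Hypothetical stand-in for a trained decoder: maps an input encoding
    to per-step logits over the output alphabet."""
    def __init__(self, steps=12, dim=8):
        super().__init__()
        self.net = nn.Linear(dim, steps * len(VOCAB))
        self.steps = steps

    def forward(self, x):
        return self.net(x).view(self.steps, len(VOCAB))  # (T, |V|) logits

def constraint_loss(y, n_input_a):
    """g(y): mismatch between a's produced and 3x the a's in the input."""
    return float(abs(sum(tok == 0 for tok in y) - 3 * n_input_a))

def constrained_inference(model, x, n_input_a, max_iters=100, lr=0.05):
    dual = copy.deepcopy(model)                    # W_lambda <- W
    opt = torch.optim.SGD(dual.parameters(), lr=lr)
    for _ in range(max_iters):
        logits = dual(x)
        y = logits.argmax(dim=1).tolist()          # unconstrained decode
        g = constraint_loss(y, n_input_a)
        if g == 0:
            return y                               # constraints satisfied
        logp = torch.log_softmax(logits, dim=1)
        # Psi(x, y, W_lambda): exponentiated sum of log-space scores.
        energy = logp[torch.arange(len(y)), y].sum().exp()
        (g * energy).backward()                    # push mass off violation
        opt.step(); opt.zero_grad()
    return y

model = TinyDecoder()
print(constrained_inference(model, torch.randn(8), n_input_a=2))

Note that the gradient flows only through the energy term, exactly as in Equation 7: g is constant with respect to the weights, so each step lowers the energy of the currently decoded, violating output.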
We experiment with which subset of the weights is best for satisfying the constraints, finding that it is best to modify them all. We also report accuracy to study an initial concern. Specifically, we had to omit the negative energy of the original weights W from our optimization problem, Equation 7, potentially allowing the network to find a set of dual weights W_λ that happen to satisfy the constraints, but that have poor performance. However, we found this not to be the case. In fact, we report the token-wise accuracy over the examples for which the unconstrained neural network violated constraints and find that, on the contrary, accuracy improves. Further, we find the regularizer is unnecessary since the initialization W_λ = W ensures the network never drifts too far.
erring on the side of a smaller learning rate.
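To make the test-time procedure concrete, the following is a minimal PyTorch sketch of the loop described above, using the vanilla SGD setting reported earlier (learning rate 0.05, no weight decay, at most 100 iterations). It is a sketch under stated assumptions rather than the authors' implementation: model.decode and constraint_loss are hypothetical stand-ins for unconstrained greedy inference (returning the output together with its differentiable score) and for the normalized violation g(y, L).

```python
# Hedged sketch of constrained inference via SGD on a copy of the weights.
# Assumptions: model.decode(x) -> (y, score) with a differentiable score,
# and constraint_loss(y) -> g(y, L) in [0, 1]; both names are hypothetical.
import copy
import torch

def constrained_inference(model, x, constraint_loss, lr=0.05, max_iters=100):
    model = copy.deepcopy(model)  # optimize a copy initialized at W; W stays intact
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # vanilla SGD, no weight decay
    y = None
    for _ in range(max_iters):
        y, score = model.decode(x)   # unconstrained greedy inference (black box)
        g = constraint_loss(y)       # normalized amount of constraint violation
        if g == 0.0:                 # constraints satisfied: stop early
            break
        # Weight the violation by the output's score (energy), as described above,
        # so the update lowers the network's preference for this violating decoding.
        (score * g).backward()
        opt.step()
        opt.zero_grad()
    return y
```

Because only the copied weights are updated, the original network is left untouched for the next test sentence.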
Conditional models of identity uncertainty with application to noun coreference. In Neural Information Processing Systems (NIPS), 2005.
BJ46w6Ule | [{"section_index": "0", "section_name": "DYNAMIC PARTITION MODELS", "section_text": "Marc Goessling"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "We consider the task of learning a compact binary representation (e.g. Goessling & Amit, 2015) That means we are seeking a parsimonious set of experts, which can explain a given collection o multivariate data points. In contrast to most existing approaches the emphasis here is on finding experts that are individually meaningful and that have disjoint responsibilities. Ideally, each exper explains only one factor of variation in the data and for each factor of variation there is exactly on expert that focuses on it.\nWe start by describing a simple model family, which forms the basis of our work. A partition model (Hartigan, 1990) makes use of a manually specified partitioning of the D variables into subsets\nL {1,...,D}= Se l=1\nFor each subset of variables x(Se) = (x(d))des, there exists a separate model Pe. It is then typicall assumed that variables in different subsets are conditionally independent, i.e.,\nL P(x|h)=II Pe(x(Se)[h(l)) l=1\nThe model is completed by specifying a prior distribution P(h) for the latent state h. One advantag. of partition models is that estimating Pe from observations is straightforward, while learning exper. models in general requires computationally involved procedures (Bengio et al., 2013). However, i. order to be able to define a satisfactory partitioning of the variables some prior knowledge aboi. the dependence structure is needed. For image data a common choice is to use a regular grid tha. divides the image into patches (e.g. Pal et al, 2002). In general, a good partitioning is characterize. by providing weakly dependent subsets of variables so that the conditional independence assumptio. () is reasonable and the distribution of the latent variables is easy to model. Unfortunately, ofte. there simply is no single fixed partitioning that works well for the whole dataset because the st\namit@galton.uchicago.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We present a new approach for learning compact and intuitive distributed rep resentations with binary encoding. Rather than summing up expert votes as in products of experts, we employ for each variable the opinion of the most reliable expert. Data points are hence explained through a partitioning of the variables into expert supports. The partitions are dynamically adapted based on which ex- perts are active. During the learning phase we adopt a smoothed version of this model that uses separate mixtures for each data dimension. In our experiments we achieve accurate reconstructions of high-dimensional data points with at most a dozen experts.\nFormally, the experts P, k = 1, ..., K, are probability distributions that depend on binary latent variables h(k). The latent state h specifies which experts are active and has to be inferred for each D-dimensional data point x. The active experts then define a probability distribution P. The goal of representation learning is to train experts such that the conditional likelihood P(x | h) of the data given the latent activations is maximized.\nof variables, which are affected by different factors of variation, might overlap. This restricts th scenarios in which partition models are useful.\nThat means, each variable x(d) is explained by only a single expert k*(d). 
The partitioning into expert supports S_k(h) = {d ∈ {1, ..., D} : k*(d) = k} is determined dynamically based on the latent configuration h. We hence call our model a dynamic partition model.
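As an illustration of the composition rule, a minimal numpy sketch (assuming mu and e are K × D arrays holding the expert opinions and levels of expertise, h is the binary latent state, and at least one expert is active):

```python
import numpy as np

def compose(mu, e, h):
    """Maximum-expertise composition: each dimension d is explained by the
    active expert with the highest level of expertise e_k(d)."""
    active = np.flatnonzero(h)                     # indices of active experts
    k_star = active[np.argmax(e[active], axis=0)]  # most reliable active expert per d
    return mu[k_star, np.arange(mu.shape[1])]      # composed template, one entry per d
```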
degree of responsibility if experts with a higher level of expertise are present.
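A small sketch of these per-dimension mixture weights, continuing the array conventions of the previous sketch (rows of inactive experts receive zero responsibility):

```python
import numpy as np

def responsibilities(e, h):
    """Degree of responsibility r_k(d): the expertise of each active expert,
    normalized per dimension over all active experts."""
    r = np.zeros_like(e, dtype=float)
    active = np.flatnonzero(h)
    r[active] = e[active] / e[active].sum(axis=0, keepdims=True)
    return r  # K x D matrix of mixture weights
```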
However, the greatest difference to our work is that Herbster & Warmuth (1998) do not learn the individual experts but only focus on training the levels of expertise.
is determined but the whole posterior distribution on latent states given the data is explored through Monte Carlo methods. For learning in products of experts, simple update rules like the ones given above cannot be used because for each expert the effects of all other experts have to be factored out. Dynamic partition models essentially decompose the expert votes w_k into expert opinions μ_k and levels of expertise e_k. Apart from the computational advantages for learning, this introduces an additional degree of flexibility because the expert supports are adjusted depending on which other experts are present (cf. Figure 5). Moreover, the decomposition into opinions and levels of expertise avoids ambiguities. For example, a vote w_k(d) ≈ 0 could mean that μ_k(d) ≈ 1/2 or that e_k(d) ≈ 0.
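For contrast, a hedged sketch of the sum-of-log-odds composition defined above for restricted Boltzmann machines (same array conventions as the earlier sketches, and no bias terms, matching the formula given in the text):

```python
import numpy as np

def compose_rbm(mu, h):
    """Product-of-experts composition: pool the log-odds of all active experts
    and squash with the logistic function."""
    w = np.log(mu) - np.log(1.0 - mu)                 # expert log-odds w_k(d)
    return 1.0 / (1.0 + np.exp(-w[h == 1].sum(axis=0)))
```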
The learning algorithm again yielded highly dependent experts (Figure 2, 2nd panel). Finally, we trained a restricted Boltzmann machine through batch persistent contrastive divergence (Tieleman, 2008) using a tuned learning rate. Note that a
The same dataset was used to evaluate the shape Boltzmann machine (Eslami et al., 2014), where 2,000 experts were learned. For those experiments the images were downsampled to 32 × 32 pixels. This is a factor of 50 smaller than the full resolution of 48,000 dimensions that we use.
Extracting and composing robust features with denoising autoencoders. In International Conference on Machine Learning, pp. 1096-1103, 2008.
HJ9rLLcxg | [{"section_index": "0", "section_name": "DATASET AUGMENTATION IN FEATURE E SPACE", "section_text": "Terrance DeVries and Graham W. Taylor\nDataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in su- pervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a sim pler, domain-agnostic approach to dataset augmentation. We start with existing data points and apply simple transformations such as adding noise, interpolating. or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A re-kindling of interest in unsu- pervised representation learning makes this technique timely and more effective. It is a simple proposal, but to-date one that has not been tested empirically. Work- ing in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "One of the major catalysts for the resurgence of neural networks as \"deep learning\"' was the influx. of the availability of data. Labeled data is crucial for any supervised machine learning algorithm to. work, even moreso for deep architectures which are easily susceptible to overfitting. Deep learning. has flourished in a few domains (e.g. images, speech, text) where labeled data has been relatively. simple to acquire. Unfortunately most of the data that is readily available is unstructured and un- labeled and this has prevented recent successes from propagating to other domains. In order to. leverage the power of supervised learning, data must be manually labeled, a process which requires. investment of human effort. An alternative to labeling unlabeled data is to generate new data with. known labels. One variant of this approach is to create synthetic data from a simulation such as a. computer graphics engine (Shotton et al.]2013] Richter et al.2016), however, this may not work if. the simulation is not a good representation of the real world domain. Another option is dataset aug-. mentation, wherein the existing data is transformed in some way to create new data that appears to. come from the same (conditional) data generating distribution (Bengio et al.2011). The main chal-. lenge with such an approach is that domain expertise is required to ensure that the newly generated. data respects valid transformations (i.e. those that would occur naturally in that domain)..\nIn this work, we consider augmentation not by a domain-specific transformation, but by perturb. ing, interpolating, or extrapolating between existing examples. However, we choose to operate nc in input space, but in a learned feature space.Bengio et al.(2013) and Ozair & Bengio (2014 claimed that higher level representations expand the relative volume of plausible data points withi the feature space, conversely shrinking the space allocated for unlikely data points. As such, whe traversing along the manifold it is more likely to encounter realistic samples in feature space tha compared to input space. Unsupervised representation learning models offer a convenient way o learning useful feature spaces for exploring such transformations. 
Recently, there has been a resurgence of interest in such techniques, leading to, e.g., variational autoencoders (Kingma & Welling, 2014), generative adversarial networks (Goodfellow et al., 2014), and generative stochastic networks (Alain et al., 2016), each of which could be used to generate useful feature spaces for augmentation.
These include adding Gaussian noise to the input, shifting the pitch of the audio signal, time stretching, varying the loudness of the audio signal, applying random frequency filters, and interpolating between samples in input space. They found that only pitch shifting and random frequency filtering appeared to improve model performance. While performing well on audio data, these augmentation techniques cannot be applied to other domains. As such, the process of designing, implementing, and evaluating new data augmentation techniques would need to be repeated for each new problem.
representations.\nA sequence autoencoder works in a similar fashion as the standard autoencoder except that the encoder and decoder use one or more recurrent layers so that they can encode and decode variable- length sequences. In all of our experiments, we use a stacked LSTM (Li & Wu]2015) with two layers for both the encoder and decoder (Figure[1a). During the forward pass, the hidden states of the recurrent layers are propagated through the layer stack. The encoder's hidden state at the final time step, called the context vector, is used to seed the hidden state of the decoder at its first time step.\nThe main difference between our implementation of the SA and that of[Dai & Le(2015) is how the context vector is used in the decoder. Dai and Le follow the original seq2seq approach of|Sutskever. et al.[(2014) and use the context vector as input to the decoder only on the first time step, then use. the output of the previous times step as inputs for all subsequent time steps as follows..\nwhere f is the LSTM function. s is the state of the LSTM (both hidden and cell state). c is the context vector, and y is the output of the decoder. We instead modify the above equation so that the decoder is conditioned on the context vector at each time step as was done in (Cho et al.[2014):\nyo = fso,c) Yt =f(St-1,Yt-1,C)\nWe found that conditioning the decoder on the context vector each time step resulted in improved reconstructions, which we found to be critical to the success of the data augmentation process..\nyo = fso,c) Yt=fSt-1,Yt-1\nIn order to augment a dataset, each example is projected into feature space by feeding it through the sequence encoder, extracting the resulting context vector, and then applying a transformation in feature space (Figure|1b). The simplest transform is to simply add noise to the context vectors. however, there is a possibility with this method that the resulting vector may not resemble the same class as the original, or even any of the known classes. In our experiments, we generate noise by drawing from a Gaussian distribution with zero mean and per-element standard deviation calculated across all context vectors in the dataset. We include a y parameter to globally scale the noise:\nc, =ci+yX,X ~N{0,o?}\nwhere i indexes the elements of a context vector which corresponds to data points from the training set. A more directed approach for data augmentation follows the techniques introduced by Chawla et al. (2002). For each sample in the dataset, we find its K nearest neighbours in feature space which share its class label. For each pair of neighbouring context vectors, a new context vector can then be generated using interpolation:\nc'=(CkCj) +Cj\nC=CCk)+Cj\nIn the case of extrapolation, X is a value in the range {0, oo} which controls the degree of extrapola tion. While X could be drawn from a random distribution for each new sample we found that setting X = 0.5 worked well as a default value in most cases, so we use this setting in all of our tests"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "In all experiments, we trained a LSTM-based sequence autoencoder in order to learn a feature space from the available training examples. Each hidden layer, including the context vector, had the same number of hidden units and a dropout probability of p = O.2. 
The autoencoders were trained using Adam (Kingma & Ba, 2015) with an initial learning rate of 0.001, which was reduced by half whenever no improvement was observed in the validation set for 10 epochs. Finally, we reversed the order of the input sequences as suggested by Sutskever et al. (2014). We found that reversing the order of input sequences caused the model to train faster and achieve better final solutions.
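For reference, a small numpy sketch of the three context-vector transforms in Equations 1-3 (sigma is assumed to be the vector of per-element standard deviations computed across all context vectors in the dataset, with gamma and lam set to the 0.5 defaults used in the experiments):

```python
import numpy as np

def add_noise(c, sigma, gamma=0.5):
    return c + gamma * np.random.normal(0.0, sigma)  # Eq. 1: perturb with scaled noise

def interpolate(c_j, c_k, lam=0.5):
    return (c_k - c_j) * lam + c_j                   # Eq. 2: move towards a neighbour

def extrapolate(c_j, c_k, lam=0.5):
    return (c_j - c_k) * lam + c_j                   # Eq. 3: move away from a neighbour
```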
Unlike the results obtained by Bengio et al. (2013), where the transition between classes occurs very suddenly, we find that the samples generated by our model smoothly transition between the two parent sinusoids. This is an exciting observation as it suggests that we can control characteristics of the generated samples by combining two samples which contain the desired properties.
displays a wider variety compared to samples created by interpolation. We hypothesize that it is this added variability that is necessary in order for data augmentation to be useful.
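The complete augmentation loop used in the quantitative experiments can be sketched as follows (a sketch, not the authors' code: sklearn's NearestNeighbors stands in for the nearest-neighbour search, with K = 10 in-class neighbours and lam = 0.5 as described at the start of Section 4):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def augment_by_extrapolation(C, labels, K=10, lam=0.5):
    """For every context vector, extrapolate against each of its K nearest
    in-class neighbours (Eq. 3) and keep the original label."""
    new_C, new_labels = [], []
    for cls in np.unique(labels):
        C_cls = C[labels == cls]
        nn = NearestNeighbors(n_neighbors=K + 1).fit(C_cls)
        _, idx = nn.kneighbors(C_cls)          # idx[i, 0] is the sample itself
        for i, neighbours in enumerate(idx):
            for j in neighbours[1:]:           # skip self, keep the K neighbours
                new_C.append((C_cls[i] - C_cls[j]) * lam + C_cls[i])
                new_labels.append(cls)
    return np.asarray(new_C), np.asarray(new_labels)
```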
Our second quantitative test was conducted on the Australian Sign Language Signs dataset (AUSLAN). AUSLAN was produced by Kadous (2002) and contains 2,565 samples of a native signer signing 95 different words or phrases while wearing high quality position tracking gloves. Each time series sample is, on average, 57 frames in length and includes 22 features: roll, pitch, yaw, finger bend, and the 3D coordinates of each hand. To preprocess the raw data we first locally centre each sample and then apply global normalization. For evaluation, we perform cross validation with 5 folds, as is common practice for the AUSLAN dataset.

The baseline model for these tests was a two layer MLP with 512 hidden units in each layer, with dropout (p = 0.5) applied on each. Similar to the Arabic Digits dataset, we find that the simple MLP can achieve competitive results when trained on the context vectors extracted from the sequence autoencoder (see Table 2). In this case, however, we observe that adding random noise to the context vectors did not improve performance. One possible explanation for this outcome is that the AUSLAN dataset has many more classes than the Arabic Digits dataset (95 versus 10), so there is a higher probability of a randomly augmented context vector jumping from one class manifold to another. Traversing instead along the representational manifold in a directed manner, by extrapolating between neighbouring samples, results in improved performance over that of the baseline model. Our results also match the performance of Rodriguez et al. (2005), which to our knowledge is the best 5-fold cross validation result for the AUSLAN dataset.

Table 2: CV error on AUSLAN dataset averaged over 5 folds

The final time series dataset we considered was the UCF Kinect action recognition dataset (Ellis et al., 2013). It contains motion capture data of participants performing 16 different actions such as run, kick, punch, and hop. The motion capture data consists of 3-dimensional coordinates for 15 skeleton joints, for a total of 45 attributes per frame. In total there are 1,280 samples within the dataset. To preprocess the dataset we first shift the coordinates of each sample so that the central shoulder joint of the first frame is located at the origin. Global normalization is also applied.

With the UCFKinect dataset our main goal was to determine the effectiveness of interpolation in feature space for generating new sequences that combine the characteristics and actions of the two "seed" examples. We found that in order to produce natural looking results, the two actions to be combined must already share some properties. For example, Figures 4a and 4b show motion capture sequences of a person stepping forward and a person stepping to the left, respectively. Both of these actions take approximately the same amount of time to perform, and each skeleton moves their left leg first, then their right leg. Due to these preexisting similarities the action sequences can be interpolated in feature space to produce a natural looking sequence of a skeleton stepping diagonally forward and to the left (Figure 4c). These results emulate what was previously observed in Section 4.2, which indicated that similar properties are necessary for successful blending of examples.

Our secondary goal with the UCFKinect dataset was to quantitatively evaluate the performance of extrapolation-based data augmentation. To compare to previous results, we used 4-fold cross validation (see Table 3 for a summary of results). We found that extrapolating between samples in representational space improved the performance of our untuned model by more than 1%, which is quite significant. Our results are 2.5 percentage points below the current state-of-the-art result produced by Beh et al. (2014), but further tuning of the model could improve results.

Table 3: CV error on UCFKinect dataset averaged over 4 folds

(a) "Step front" action from validation set.
(b) "Step left" action from validation set.
(c) Generated sequence combining "step front" and "step left".

Figure 4: A new motion capture sequence can be generated by interpolating between samples. By combining the "step front" action (a) with the "step left" action (b) we can generate a new sequence of a character stepping diagonally forward and to the left (c).
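As a rough sketch of how two encoded sequences can be blended, the snippet below interpolates the context vectors of two seed actions and decodes the midpoint. The encoder and decoder handles are hypothetical stand-ins for the trained sequence autoencoder.

```python
def blend_sequences(seq_a, seq_b, encoder, decoder, lam=0.5):
    """Interpolate two motion capture sequences in feature space.
    encoder/decoder are assumed callables from a trained sequence autoencoder."""
    c_a = encoder(seq_a)              # context vector for, e.g., "step front"
    c_b = encoder(seq_b)              # context vector for, e.g., "step left"
    c_mix = (c_b - c_a) * lam + c_a   # lam = 0.5 gives an even blend
    return decoder(c_mix)             # e.g. a skeleton stepping diagonally
```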
Having successfully applied dataset augmentation in feature space to improve the accuracy of sequence classification tasks, we now experiment with applying our technique to static data. For these experiments we concentrate on the image domain, where manual data augmentation is already prevalent. We find that augmenting datasets by extrapolating within a learned feature space improves classification accuracy compared to no data augmentation, and in some cases surpasses traditional (manual) augmentation in input space.

In our experiments we consider two commonly used small-scale image datasets: MNIST and CIFAR-10. MNIST consists of 28×28 greyscale images containing handwritten digits from 0 to 9. There are 60,000 training images and 10,000 test images in the official split. CIFAR-10 consists of 32×32 colour images containing objects in ten generic object categories. This dataset is typically split into 50,000 training and 10,000 test images.

In all of our image experiments, we apply the same sequence autoencoder (SA) architecture as shown in Figure 1a to learn a representation. No pre-processing beyond a global scaling is applied to the MNIST dataset. For CIFAR-10 we apply global normalization and the same crop and flip operations that Krizhevsky et al. (2012) used for input space data augmentation when training AlexNet (we crop to 24×24). To simulate sequence input, the images are fed into the network one row of pixels per time step, similar to the SA setup in (Dai & Le, 2015).

For each dataset we train a 2-layer MLP on the context vectors produced by the sequence encoder. Both MLP and SA use the same number of hidden units in each layer: 256 per layer for MNIST and 1024 per layer for CIFAR-10. We conduct four different test scenarios on the MNIST dataset. To control for the representation, as a baseline we trained the classifier only on context vectors from the original images (i.e. SA with no augmentation). We then compare this to training with various kinds of dataset augmentation: traditional affine image transformations in input space (shifting, rotation, scaling), extrapolation between nearest neighbours in input space, and extrapolation between nearest neighbours in representational space. For both extrapolation experiments we use three nearest neighbours per sample and λ = 0.5 when generating new data.
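The row-per-time-step encoding used for the sequence autoencoder above amounts to a simple reshape; a minimal illustration, with shapes assumed from the dataset descriptions:

```python
import numpy as np

# MNIST images are often stored as flat 784-vectors; the sequence
# autoencoder instead consumes 28 time steps of 28 pixels each.
flat_batch = np.random.rand(64, 784)          # stand-in for a mini-batch
seq_batch = flat_batch.reshape(64, 28, 28)    # (batch, time, features)

# A cropped CIFAR-10 image (24x24 pixels, 3 channels) becomes a
# 24-step sequence with 72 features per step.
cifar_batch = np.random.rand(64, 24, 24, 3)
cifar_seq = cifar_batch.reshape(64, 24, 24 * 3)
```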
For CIFAR-10, our baseline is trained using context vectors extracted from cropped and flipped images. Against this baseline we test the addition of extrapolation between nearest neighbours in representational space, using the same setup as the MNIST test. Due to the size of the datasets we apply an approximate nearest neighbour algorithm (Wan et al., 2016).

Results are reported in Table 4. For MNIST, we find that extrapolating in feature space not only performs better than the baseline, but also achieves a lower error rate compared to domain-specific data augmentation in input space. A similar outcome is observed on CIFAR-10, where feature space extrapolation reduces the error rate by 0.3%. Interestingly, we note that the baseline test for this dataset already leveraged image transformations to improve performance, so the additional reduction in error rate could indicate that both kinds of augmentation, extrapolation in feature space and manual transformation in pixel space, could complement each other.

Table 4: Test error (%) on MNIST and CIFAR-10. Averages over 10 and 5 runs, respectively

In this paper, we demonstrate a new domain-independent data augmentation technique that can be used to improve performance when training supervised learning models. We train a sequence autoencoder to construct a learned feature space in which we extrapolate between samples. This technique allows us to increase the amount of variability within the dataset, ultimately resulting in a more robust model. We demonstrate our technique quantitatively on five datasets from different domains (speech, sensor processing, motion capture, and images) using the same simple architecture, and achieve near state-of-the-art results on two of them. Moreover, we show that data augmentation in feature space may complement domain-specific augmentation.

An important finding is that the extrapolation operator, when used in feature space, generated useful synthetic examples while noise and interpolation did not. Additional synthetic data experiments, where we could control the complexity of the decision boundary, revealed that extrapolation only improved model performance in cases where there were complex class boundaries. In cases with simple class boundaries, such as linear separability or one class encircling another, extrapolation hindered model performance, while interpolation helped. Our current hypothesis is that interpolation tends to tighten class boundaries and unnecessarily increase confidence, leading to overfitting. This behaviour may cause the model to ignore informative extremities that can describe a complex decision boundary, and as a result produce an unnecessarily smooth decision boundary. As most high-dimensional, real datasets will typically have complex decision boundaries, we find extrapolation to be well suited for feature space dataset augmentation."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Gregoire Mesnil, Yann Dauphin, and Salah Rifai. Better mixing via deep representations. In ICML (1), pp. 552-560, 2013.

Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321-357, 2002.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724-1734, 2014.

Chris Ellis, Syed Zain Masood, Marshall F Tappen, Joseph J Laviola Jr, and Rahul Sukthankar. Exploring the trade-off between accuracy and observational latency in action recognition. International Journal of Computer Vision, 101(3):420-436, 2013.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Nacereddine Hammami, Mouldi Bedda, and Nadir Farah. Spoken Arabic digits recognition using MFCC based on GMM. In Sustainable Utilization and Development in Engineering and Technology (STUDENT), 2012 IEEE Conference on, pp. 160-163. IEEE, 2012.

Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pp. 1-10, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In The International Conference on Learning Representations (ICLR), 2015.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Xiangang Li and Xihong Wu. Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4520-4524. IEEE, 2015.

Moshe Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml

David Llorens, Federico Prat, Andres Marzal, Juan Miguel Vilar, Maria Jose Castro, Juan-Carlos Amengual, Sergio Barrachina, Antonio Castellanos, Salvador Espana Boquera, JA Gomez, et al. The UJIpenchars database: a pen-based database of isolated handwritten characters. In LREC, 2008.

Juan Jose Rodriguez, Carlos J Alonso, and Jose A Maestro. Support vector machines of interval-based features for time series classification. Knowledge-Based Systems, 18(4):171-178, 2005.

Jamie Shotton, Toby Sharp, Alex Kipman, Andrew Fitzgibbon, Mark Finocchio, Andrew Blake, Mat Cook, and Richard Moore. Real-time human pose recognition in parts from single depth images. Communications of the ACM, 56(1):116-124, 2013.

Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 843-852, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Oriol Vinyals and Quoc Le. A neural conversational model. In International Conference on Machine Learning: Deep Learning Workshop, 2015."}]
BJC_jUqxe | [{"section_index": "0", "section_name": "A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING", "section_text": "*Montreal Institute for Learning Algorithms (MILA), Universite de Montreal t CIFAR Senior Fellow\nThis paper proposes a new model for extracting an interpretable sentence embed- ding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, senti- ment classification and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Much progress has been made in learning semantically meaningful distributed representations of. individual words, also known as word embeddings (Bengio et al.. 2001 Mikolov et al.]2013) On the other hand, much remains to be done to obtain satisfying representations of phrases and. sentences. Those methods generally fall into two categories. The first consists of universal sentence embeddings usually trained by unsupervised learning (Hill et al.|2016). This includes SkipThought. vectors (Kiros et al.|2015), ParagraphVector (Le & Mikolov|[2014), recursive auto-encoders (Socher et al.|2011f|2013), Sequential Denoising Autoencoders (SDAE), FastSent (Hill et al.]2016), etc.\nThe other category consists of models trained specifically for a certain task. They are usually. combined with downstream applications and trained by supervised learning. One generally finds. that specifically trained sentence embeddings perform better than generic ones, although generic. ones can be used in a semi-supervised setting, exploiting large unlabeled corpora. Several models. have been proposed along this line, by using recurrent networks (Hochreiter & Schmidhuber|1997 Chung et al.[[2014), recursive networks (Socher et al.[|2013) and convolutional networks (Kalchbren-. ner et al.2014f dos Santos & Gatti]2014Kim2014) as an intermediate step in creating sentence representations to solve a wide variety of tasks including classification and ranking (Yin & Schutze 2015} Palangi et al.]2016] Tan et al.]2016] Feng et al.2015).A common approach in previous methods consists in creating a simple vector representation by using the final hidden state of the. RNN or the max (or average) pooling from either RNNs hidden states or convolved n-grams. Ad-. ditional works have also been done in exploiting linguistic structures such as parse and dependence. trees to improve sentence representations (Ma et al.]2015] Mou et al.2015b] Tai et al.]2015).\nFor some tasks people propose to use attention mechanism on top of the CNN or LSTM model to. introduce extra source of information to guide the extraction of sentence embedding (dos Santos et al.]2016). However, for some other tasks like sentiment classification, this is not directly appli-. cable since there is no such extra information: the model is only given one single sentence as input. 
In those cases, the most common way is to add a max pooling or averaging step across all time steps (Lee & Dernoncourt, 2016), or to just pick up the hidden representation at the last time step as the encoded embedding (Margarit & Subramaniam, 2016).

Figure 1: A sample model structure showing the sentence embedding model combined with a fully connected and softmax layer for sentiment analysis (a). The sentence embedding M is computed as multiple weighted sums of hidden states from a bidirectional LSTM (h_1, ..., h_n), where the summation weights (A_i1, ..., A_in) are computed in a way illustrated in (b). Blue colored shapes stand for hidden representations, and red colored shapes stand for weights, annotations, or input/output.

*This work has been done during the 1st author's internship with IBM Watson.

A common approach in many of the aforementioned methods consists of creating a simple vector representation by using the final hidden state of the RNN or the max (or average) pooling from either RNNs' hidden states or convolved n-grams. We hypothesize that carrying the semantics along all time steps of a recurrent model is relatively hard and not necessary. We propose a self-attention mechanism for these sequential models to replace the max pooling or averaging step. Different from previous approaches, the proposed self-attention mechanism allows extracting different aspects of the sentence into multiple vector representations. It is performed on top of an LSTM in our sentence embedding model. This enables attention to be used in those cases when there are no extra inputs. In addition, due to its direct access to hidden representations from previous time steps, it relieves some long-term memorization burden from the LSTM. As a side effect coming together with our proposed self-attentive sentence embedding, interpreting the extracted embedding becomes very easy and explicit.

Section 2 gives details on our proposed self-attentive sentence embedding model, as well as a regularization term we propose for this model, which is described in Section 2.2. We also provide a visualization method for this sentence embedding in Section 2.3. We then evaluate our model on author profiling, sentiment classification and textual entailment tasks in Section 4."}, {"section_index": "4", "section_name": "2.1 MODEL", "section_text": "The proposed sentence embedding model consists of two parts. The first part is a bidirectional LSTM, and the second part is the self-attention mechanism, which provides a set of summation weight vectors for the LSTM hidden states. These summation weight vectors are dotted with the LSTM hidden states, and the resulting weighted LSTM hidden states are considered as an embedding for the sentence. It can be combined with, for example, a multilayer perceptron to be applied on a downstream application. Figure 1 shows an example where the proposed sentence embedding model is applied to sentiment analysis, combined with a fully connected layer and a softmax layer. Besides using a fully connected layer, we also propose an approach that prunes weight connections by utilizing the 2-D structure of the matrix sentence embedding, which is detailed in Appendix A. For this section, we will use Figure 1 to describe our model.
Suppose we have a sentence, which has n tokens, represented in a sequence of word embeddings:

$S = (w_1, w_2, \cdots, w_n)$  (1)

Here $w_i$ is a vector standing for a d-dimensional word embedding for the i-th word in the sentence. S is thus a sequence represented as a 2-D matrix, which concatenates all the word embeddings together. S should have the shape n-by-d.

Now each entry in the sequence S is independent of the others. To gain some dependency between adjacent words within a single sentence, we use a bidirectional LSTM to process the sentence:

$\overrightarrow{h_t} = \overrightarrow{LSTM}(w_t, \overrightarrow{h_{t-1}})$  (2)

$\overleftarrow{h_t} = \overleftarrow{LSTM}(w_t, \overleftarrow{h_{t+1}})$  (3)

And we concatenate each $\overrightarrow{h_t}$ with $\overleftarrow{h_t}$ to obtain a hidden state $h_t$. Let the hidden unit number for each unidirectional LSTM be u. For simplicity, we note all the n $h_t$'s as H, which has the size n-by-2u:

$H = (h_1, h_2, \cdots, h_n)$  (4)

Our aim is to encode a variable-length sentence into a fixed-size embedding. We achieve that by choosing a linear combination of the n LSTM hidden vectors in H. Computing the linear combination requires the self-attention mechanism. The attention mechanism takes the whole LSTM hidden states H as input, and outputs a vector of weights a:

$a = \mathrm{softmax}(w_{s2} \tanh(W_{s1} H^T))$  (5)

Here $W_{s1}$ is a weight matrix with a shape of $d_a$-by-2u, and $w_{s2}$ is a vector of parameters with size $d_a$, where $d_a$ is a hyperparameter we can set arbitrarily. Since H is sized n-by-2u, the annotation vector a will have size n. The softmax() ensures all the computed weights sum up to 1. Then we sum up the LSTM hidden states H according to the weights provided by a to get a vector representation m of the input sentence.

This vector representation usually focuses on a specific component of the sentence, like a special set of related words or phrases. So it is expected to reflect an aspect, or component, of the semantics in a sentence. However, there can be multiple components in a sentence that together form the overall semantics of the whole sentence, especially for long sentences. (For example, two clauses linked together by an "and.") Thus, to represent the overall semantics of the sentence, we need multiple m's that focus on different parts of the sentence, and hence we need to perform multiple hops of attention. Say we want r different parts to be extracted from the sentence; with regard to this, we extend $w_{s2}$ into an r-by-$d_a$ matrix, note it as $W_{s2}$, and the resulting annotation vector a becomes an annotation matrix A. Formally,

$A = \mathrm{softmax}(W_{s2} \tanh(W_{s1} H^T))$  (6)

Here the softmax() is performed along the second dimension of its input. We can deem Equation 6 as a 2-layer MLP without bias, whose hidden unit number is $d_a$ and whose parameters are $\{W_{s2}, W_{s1}\}$.

The embedding vector m then becomes an r-by-2u embedding matrix M. We compute the r weighted sums by multiplying the annotation matrix A and the LSTM hidden states H; the resulting matrix is the sentence embedding:

$M = AH$  (7)
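A minimal NumPy sketch of Equations 5-7, with illustrative shapes (n = 20 tokens; u = 300, d_a = 350 and r = 30 match values used later in the experiments); the softmax helper is a stand-in:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

n, u, d_a, r = 20, 300, 350, 30
H = np.random.randn(n, 2 * u)        # biLSTM hidden states, shape n-by-2u
W_s1 = np.random.randn(d_a, 2 * u)   # attention MLP, first layer
W_s2 = np.random.randn(r, d_a)       # one row of weights per attention hop

A = softmax(W_s2 @ np.tanh(W_s1 @ H.T), axis=1)  # Eq. 6, shape r-by-n
M = A @ H                                        # Eq. 7, shape r-by-2u
```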
"}, {"section_index": "5", "section_name": "2.2 PENALIZATION TERM", "section_text": "The embedding matrix M can suffer from redundancy problems if the attention mechanism always provides similar summation weights for all the r hops. Thus we need a penalization term to encourage the diversity of summation weight vectors across different hops of attention.

The best way to evaluate the diversity is definitely the Kullback-Leibler divergence between any two of the summation weight vectors. However, we found that not very stable in our case. We conjecture it is because we are maximizing a set of KL divergences (instead of minimizing only one, which is the usual case): we are optimizing the annotation matrix A to have a lot of sufficiently small or even zero values at different softmax output units, and this vast amount of zeros makes the training unstable. There is another feature that KL doesn't provide but we want, which is that we want each individual row to focus on a single aspect of semantics, so we want the probability mass in the annotation softmax output to be more focused; but with the KL penalty we cannot encourage that.

We hereby introduce a new penalization term which overcomes the aforementioned shortcomings. Compared to the KL divergence penalization, this term consumes only one third of the computation. We use the dot product of A and its transpose, subtracted by an identity matrix, as a measure of redundancy:

$P = \left\| AA^T - I \right\|_F^2$  (8)

Here $\|\cdot\|_F$ stands for the Frobenius norm of a matrix. Similar to adding an L2 regularization term, this penalization term P will be multiplied by a coefficient, and we minimize it together with the original loss, which is dependent on the downstream application.

Let's consider two different summation vectors $a^i$ and $a^j$ in A. Because of the softmax, all entries within any summation vector in A sum up to 1. Thus they can be deemed as probability masses in a discrete probability distribution. For any non-diagonal element $a_{ij}$ ($i \neq j$) in the $AA^T$ matrix, it corresponds to a summation over an elementwise product of two distributions:

$0 < a_{ij} = \sum_{k=1}^{n} a^i_k a^j_k < 1$

where $a^i_k$ and $a^j_k$ are the k-th elements in the $a^i$ and $a^j$ vectors, respectively. In the most extreme case, where there is no overlap between the two probability distributions $a^i$ and $a^j$, the corresponding $a_{ij}$ will be 0. Otherwise, it will have a positive value. At the other extreme, if the two distributions are identical and both concentrate on one single word, it will have a maximum value of 1. We subtract an identity matrix from $AA^T$, which forces the elements on the diagonal of $AA^T$ to approximate 1. This encourages each summation vector $a^i$ to focus on as few words as possible, forcing each vector to be focused on a single aspect, and forces all other elements to 0, which punishes redundancy between different summation vectors.
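Continuing the NumPy sketch above, the penalization term of Equation 8 is a few lines; `coef` is the task-dependent coefficient mentioned in the text (e.g. 1.0 on the Age dataset, 0.3 on SNLI):

```python
import numpy as np

def penalization(A, coef=1.0):
    """Frobenius-norm redundancy penalty of Equation 8 for one sentence.
    A: annotation matrix of shape (r, n), rows summing to 1."""
    r = A.shape[0]
    gram = A @ A.T                       # r-by-r; diagonal ~ focus, off-diagonal ~ overlap
    residual = gram - np.eye(r)
    return coef * np.sum(residual ** 2)  # squared Frobenius norm
```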
"}, {"section_index": "6", "section_name": "2.3 VISUALIZATION", "section_text": "The interpretation of the sentence embedding is quite straightforward because of the existence of the annotation matrix A. For each row in the sentence embedding matrix M, we have its corresponding annotation vector $a^i$. Each element in this vector indicates how much the LSTM hidden state of the token at that position contributes to the row. We can thus draw a heat map for each row of the embedding matrix M. This way of visualization gives hints on what is encoded in each part of the embedding, adding an extra layer of interpretation. (See Figures 3a and 3b.)

The second way of visualization can be achieved by summing up over all the annotation vectors, and then normalizing the resulting weight vector to sum up to 1. Since it sums up all aspects of the semantics of a sentence, it yields a general view of what the embedding mostly focuses on. We can figure out which words the embedding takes into account a lot, and which ones are skipped by the embedding. See Figures 3c and 3d.
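Both visualizations reduce to simple operations on A; a sketch, with matplotlib assumed available for the plotting part:

```python
import numpy as np
import matplotlib.pyplot as plt

def overall_attention(A):
    """General view of the embedding: sum all r annotation vectors
    and renormalize to a distribution over the n tokens."""
    w = A.sum(axis=0)
    return w / w.sum()

def plot_heatmaps(A, tokens):
    """Per-hop heat maps (one row of A per subplot)."""
    fig, axes = plt.subplots(len(A), 1, figsize=(8, len(A)))
    for ax, row in zip(np.atleast_1d(axes), A):
        ax.imshow(row[np.newaxis, :], aspect="auto", cmap="Reds")
        ax.set_yticks([])
        ax.set_xticks(range(len(tokens)))
        ax.set_xticklabels(tokens, rotation=90)
    plt.tight_layout()
```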
Various supervised and unsupervised sentence embedding models have been mentioned in Section 1. Different from those models, our proposed method uses a new self-attention mechanism that allows it to extract different aspects of the sentence into multiple vector representations. The matrix structure together with the penalization term gives our model a greater capacity to disentangle the latent information from the input sentence. We also do not use linguistic structures to guide our sentence representation model. Additionally, using our method we can easily create visualizations that can help in the interpretation of the learned representations.

Some recent works have also proposed supervised methods that use intra/self-sentence attention. Ling et al. (2015) proposed an attention-based model for word embedding, which calculates an attention weight for each word at each possible position in the context window. However, this method cannot be extended to sentence-level embeddings since one cannot exhaustively enumerate all possible sentences. Liu et al. (2016a) propose a sentence-level attention which has a similar motivation but is done differently. They utilize the mean pooling over LSTM states as the attention source, and use that to re-weight the pooled vector representation of the sentence.

Apart from the previous two variants, we want to note that Li et al. (2016) proposed the same self-attention mechanism for question encoding in their factoid QA model, which is concurrent to our work. The difference lies in that their encoding is still presented as a vector, but our attention produces a matrix representation instead, with a specially designed penalty term. We applied the model to sentiment analysis and entailment, while their model is for factoid QA.

The LSTMN model (Cheng et al., 2016) also proposed a very successful intra-sentence level attention mechanism, which is later used by Parikh et al. (2016). We see our attention and theirs as having different granularities. LSTMN produces an attention vector for each of its hidden states during the recurrent iteration, which is a sort of "online updating" attention. It is more fine-grained, targeting at discovering lexical correlations between a certain word and its previous words. On the contrary, our attention mechanism is performed only once, and focuses directly on the semantics that make sense for discriminating the targets. It is less focused on relations between words, but more on the semantics of the whole sentence that each word contributes to. Computationally, our method also scales better with the sentence length, since it doesn't require the LSTM to compute an annotation vector over all of its previous words each time the LSTMN computes its next step.

"}, {"section_index": "7", "section_name": "EXPERIMENTAL RESULTS", "section_text": "We first evaluate our sentence embedding model by applying it to 3 different datasets: the Age dataset, the Yelp dataset, and the Stanford Natural Language Inference (SNLI) Corpus. These 3 datasets fall into 3 different tasks, corresponding to author profiling, sentiment analysis, and textual entailment, respectively. Then we also perform a set of exploratory experiments to validate properties of various aspects of our sentence embedding model."}, {"section_index": "8", "section_name": "4.1 AUTHOR PROFILING", "section_text": "The Author Profiling dataset1 consists of Twitter tweets in English, Spanish, and Dutch. For some of the tweets, it also provides an age and gender of the user when writing the tweet. The age range is split into 5 classes: 18-24, 25-34, 35-49, 50-64, 65+. We use English tweets as input, and use those tweets to predict the age range of the user. Since we are predicting the age of users, we refer to it as the Age dataset in the rest of our paper. We randomly selected 68,485 tweets as training set, 4,000 for development set, and 4,000 for test set. Performance is measured by classification accuracy.

1 http://pan.webis.de/clef16/pan16-web/author-profiling.html

We compare our model with two baseline models: biLSTM and CNN. For the two baseline models: the biLSTM model uses a bidirectional LSTM with 300 dimensions in each direction, and uses max pooling across all LSTM hidden states to get the sentence embedding vector, then uses a 2-layer ReLU output MLP with 3000 hidden states to output the classification result. The CNN model uses the same scheme, but substitutes the biLSTM with 1 layer of 1-D convolutional network. During training we use 0.5 dropout on the MLP and 0.0001 L2 regularization. We use stochastic gradient descent as the optimizer, with a learning rate of 0.06 and batch size 16. For biLSTM, we also clip the norm of gradients to be between -0.5 and 0.5. We searched hyperparameters in a wide range and found that the aforementioned set of hyperparameters yields the highest accuracy.

For our model, we use the same settings as we did for biLSTM. We also use a 2-layer ReLU output MLP, but with 2000 hidden units. In addition, our self-attention MLP has a hidden layer with 350 units (the d_a in Section 2), we choose the matrix embedding to have 30 rows (the r), and a coefficient of 1 for the penalization term.

We train all three models until convergence and select the corresponding test set performance according to the best development set performance. Our results show that the model outperforms both of the biLSTM and CNN baselines by a significant margin.

Table 1: Performance Comparison of Different Models on Yelp and Age Dataset
"}, {"section_index": "9", "section_name": "4.2 SENTIMENT ANALYSIS", "section_text": "We choose the Yelp dataset2 for the sentiment analysis task. It consists of 2.7M Yelp reviews; we take the review as input and predict the number of stars the user who wrote that review assigned to the corresponding business store. We randomly select 500K review-star pairs as training set, 2,000 for development set, and 2,000 for test set. We tokenize the review texts with the Stanford tokenizer. We use 100-dimensional word2vec embeddings as initialization for word embeddings, and tune the embeddings during training across all of our experiments. The target number of stars is an integer in the range [1, 5], inclusive. We treat the task as a classification task, i.e., classify a review text into one of the 5 classes. We use classification accuracy as a measurement.

2 https://www.yelp.com/dataset_challenge

For the two baseline models, we use the same setting as what we used for the Author Profiling dataset, except that we are using a batch size of 32 instead. For our model, we are also using the same setting, except that we choose the hidden unit number in the output MLP to be 3000 instead. We also observe a significant performance gain compared to the two baselines (Table 1).

As an interpretation of the learned sentence embedding, we use the second way of visualization described in Section 2.3 to plot heat maps for some of the reviews in the dataset. We randomly select 5 examples of negative (1 star) and positive (5 stars) reviews from the test set, where the model has a high confidence (> 0.8) in predicting the label. As shown in Figure 2, we find that the model mainly learns to capture some key factors in the review that indicate strongly the sentiment behind the sentence. For most of the short reviews, the model manages to capture all the key factors that contribute to an extreme score, but for longer reviews, the model is still not able to capture all related factors. For example, in the 3rd review in Figure 2b, it seems that a lot of focus is spent on one single factor, i.e., the "so much fun", and the model puts a small amount of attention on other key points like "highly recommend", "amazing food", etc.

Figure 2: Heatmap of Yelp reviews with the two extreme scores

"}, {"section_index": "10", "section_name": "4.3 TEXTUAL ENTAILMENT", "section_text": "We use the biggest dataset in textual entailment, the SNLI corpus (Bowman et al., 2015), for our evaluation on this task. SNLI is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral. The model will be given a pair of sentences, called hypothesis and premise respectively, and asked to tell if the semantics in the two sentences are contradicting with each other or not. It is also a classification task, so we measure the performance by accuracy.

We process the hypothesis and premise independently, and then extract the relation between the two sentence embeddings by using multiplicative interactions proposed in Memisevic (2013) (see Appendix B for details), and use a 2-layer ReLU output MLP with 4000 hidden units to map the hidden representation into classification results. Parameters of the biLSTM and attention MLP are shared across hypothesis and premise. The biLSTM is 300 dimensions in each direction, the attention MLP has 150 hidden units instead, and both sentence embeddings for hypothesis and premise have 30 rows (the r). The penalization term coefficient is set to 0.3. We use 300-dimensional GloVe (Pennington et al., 2014) word embeddings to initialize word embeddings. We use AdaGrad as the optimizer, with a learning rate of 0.01. We don't use any extra regularization methods, like dropout or L2 normalization. Training converges after 4 epochs, which is relatively fast.

Table 2: Test Set Performance Compared to other Sentence Encoding Based Methods on the SNLI Dataset

This task is a bit different from the previous two tasks, in that it has 2 sentences as input. There are a number of ways to add inter-sentence level attention, and those attentions bring a lot of benefits. To make the comparison focused and fair, we only compare methods that fall into the sentence encoding-based models, i.e., there is no information exchanged between the hypothesis and premise before they are encoded into some distributed encoding.
We find that compared to other published approaches, our method shows a significant gain (≈1%) over them, except for the 300D NSE encoders, which is the state-of-the-art in this category. However, the 0.2% difference is relatively small compared to the differences between other methods.

In this subsection we are going to do a set of exploratory experiments to study the relative effect of each component in our model."}, {"section_index": "11", "section_name": "4.4.1 EFFECT OF PENALIZATION TERM", "section_text": "Since the purpose of introducing the penalization term P is mainly to discourage redundancy in the embedding, we first directly visualize the heat maps of each row when the model is presented with a sentence. We compare two identical models with the same size as detailed in Section 4.1, trained separately on the Age dataset: one with this penalization term (where the penalization coefficient is set to 1.0) and the other with no penalty. We randomly select one tweet from the test set and compare the two models by plotting a heat map for each hop of attention on that single tweet. Since there are 30 hops of attention for each model, which makes plotting all of them quite redundant, we only plot 6 of them. These 6 hops already reflect the situation in all of the 30 hops.

The example tweet used in Figure 3 reads: "it's an interesting phenomena. Not sure what the spammers get from it. If you comment on Fastco you will get a lot of mail-replies spam."

Figure 3: Heat maps for 2 models trained on the Age dataset. The left column is trained without the penalization term, and the right column is trained with 1.0 penalization. (a) and (b) show detailed attentions taken by 6 out of 30 rows of the matrix embedding, while (c) and (d) show the overall attention obtained by summing up all 30 attention weight vectors.
From the figure we can tell that the model trained without the penalization term has lots of redundancies between different hops of attention (Figure 3a), resulting in putting lots of focus on the word "it" (Figure 3c), which is not so relevant to the age of the author. However in the right column, the model shows more variation between different hops, and as a result, the overall embedding focuses on "mail-replies spam" instead (Figure 3d).

For the Yelp dataset, we also observe a similar phenomenon. To make the experiments more explorative, we choose to plot overall attention heat maps for more samples, instead of plotting detailed heat maps for a single sample again. Figure 4 shows the overall focus of the sentence embedding on three different reviews. We observe that with the penalization term, the model tends to be more focused on important parts of the review. We think this is because we are encouraging it to be focused, via the diagonal of the matrix $AA^T$ (Equation 8).

The three (tokenized) reviews shown in Figure 4 are: "we have a great work dinner here there be about 20 us and the staff do a great job time the course the food be nothing extraordinary I order the New York strip the meat can have use a little more marbling the cornbread we get before the salad be the good thing I eat the whole night 1 annoying thing at this place be the butter be so hard / cold you can not use it on the soft bread get with it"; "this place be great for lunch / dinner happy hour too the staff be very nice and helpful my new spot"; and "price reasonable staff - helpful attentive portion huge enough for 2 you get the chimichanga plate food too salty as u know when you cook or add anything with cheese it have it own salt no need add more to the meat ... pls kill the salt and then you can taste the goodness of the food ... ty".

Figure 4: Attention of sentence embedding on 3 different Yelp reviews. The left one is trained without penalization, and the right one is trained with 1.0 penalization.

To validate whether these differences result in performance differences, we evaluate four models trained on the Yelp and Age datasets, both with and without the penalization term. Results are shown in Table 3. Consistent with what we expected, models trained with the penalization term outperform their counterparts trained without it.

Table 3: Performance comparison regarding the penalization term

On the SNLI dataset, although we observe that introducing the penalization term still contributes to encouraging the diversity of different rows in the matrix sentence embedding, and forces the network to be more focused on the sentences, the quantitative effect of this penalization term is not so obvious. Both models yield similar test set accuracies.

Having multiple rows in the sentence embedding is expected to provide more abundant information about the encoded content. It makes sense to evaluate how significant the improvement brought by r can be. Taking the models we used for the Age and SNLI datasets as an example, we vary r from 1 to 30 for each task and train the resulting 10 models independently (Figure 5). Note that when r = 1, the sentence embedding reduces to a normal vector form.

Figure 5: Effect of the number of rows (r) in matrix sentence embedding. The vertical axes indicate test set accuracy and the horizontal axes indicate training epochs. Numbers in the legends (1, 5, 10, 20, 30) stand for the corresponding values of r. (a) is conducted on the Age dataset and (b) is conducted on the SNLI dataset.
From this figure we can find that, without having multiple rows, the model performs on par with its competitors which use other forms of vector sentence embeddings. But there is a significant difference between having only one vector for the sentence embedding and having multiple vectors. The models are also quite invariant with respect to r, since in the two figures a wide range of values between 10 and 30 all generate comparable curves.

Introducing the attention mechanism allows the final sentence embedding to directly access previous LSTM hidden states via the attention summation. Thus the LSTM doesn't need to carry every piece of information towards its last hidden state. Instead, each LSTM hidden state is only expected to provide shorter-term context information around each word, while the higher-level semantics, which requires longer-term dependency, can be picked up directly by the attention mechanism. This setting relieves the burden on the LSTM to carry long-term dependencies. Our experiments also support this, as we observed that our model has a bigger advantage when the contents are longer. Furthermore, the notion of summing up elements in the attention mechanism is very primitive; it could be something more complex than that, which would allow more operations on the hidden states of the LSTM.

The model is able to encode any sequence of variable length into a fixed-size representation, without suffering from long-term dependency problems. This brings a lot of scalability to the model: without any modification, it can be applied directly to longer contents like paragraphs, articles, etc. Though this is beyond the focus of this paper, it remains an interesting direction to explore as future work.

As a downside of our proposed model, the current training method heavily relies on downstream applications, and thus we are not able to train it in an unsupervised way. The major obstacle towards enabling unsupervised learning in this model is that during decoding, we don't know a priori how the different rows in the embedding should be divided and reorganized. Exploring all those possible divisions by using a neural network could easily end up with overfitting. Although we could still do unsupervised learning on the proposed model by using a sequential decoder on top of the sentence embedding, it merits more to find some other structure as a decoder.

In this paper, we introduced a fixed-size, matrix sentence embedding with a self-attention mechanism. Because of this attention mechanism, there is a way to interpret the sentence embedding in depth in our model. Experimental results over 3 different tasks show that the model outperforms other sentence embedding models by a significant margin."}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to acknowledge the developers of Theano (Theano Development Team, 2016) and Lasagne. The first author would also like to thank IBM Watson for providing resources, funding, and valuable discussions to make this project possible, and Caglar Gulcehre for helpful discussions."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Rejean Ducharme, and Pascal Vincent. A neural probabilistic language model. In Advances in Neural Information Processing Systems, pp. 932-938, 2001.

Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015.

Samuel R Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D Manning, and Christopher Potts. A fast unified model for parsing and sentence understanding. arXiv preprint arXiv:1603.06021, 2016.
Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. In Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2016.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. Applying deep learning to answer selection: a study and an open task. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2015, Scottsdale, AZ, USA, December 13-17, 2015, pp. 813-820, 2015.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188, 2014.

Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, volume 14, pp. 1188-1196, 2014.

Ji Young Lee and Franck Dernoncourt. Sequential short-text classification with recurrent and convolutional neural networks. arXiv preprint arXiv:1603.03827, 2016.

Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. Learning natural language inference using bidirectional LSTM model and inner-attention. CoRR, abs/1605.09090, 2016a.

Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. Learning natural language inference using bidirectional LSTM model and inner-attention. arXiv preprint arXiv:1605.09090, 2016b.

Mingbo Ma, Liang Huang, Bing Xiang, and Bowen Zhou. Dependency-based convolutional neural networks for sentence embedding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2, pp. 174-179, 2015.

Horia Margarit and Raghav Subramaniam. A batch-normalized recurrent network for sentiment classification. In Advances in Neural Information Processing Systems, 2016.

Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. Discriminative neural sentence modeling by tree-based convolution. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 2315-2325, Lisbon, Portugal, September 2015b. Association for Computational Linguistics. URL http://aclweb.org/anthology/D15-1279

Tsendsuren Munkhdalai and Hong Yu. Neural tree indexers for text understanding. arXiv preprint arXiv:1607.04492, 2016a.

Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, and Rabab Ward. Deep sentence embedding using long short-term memory networks: Analysis and application to information retrieval. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(4):694-707, 2016.

Ankur P. Parikh, Oscar Tackstrom, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In Proceedings of EMNLP, 2016.

Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-1543, 2014.

Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. arXiv preprint arXiv:1511.06361, 2015.
Wenpeng Yin and Hinrich Schutze. Convolutional neural network for paraphrase identification. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 901-911, 2015.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, 2016. URL http://arxiv.org/abs/1605.02688"}, {"section_index": "14", "section_name": "PRUNED MLP FOR STRUCTURED MATRIX SENTENCE EMBEDDING", "section_text": "As a side effect of having multiple vectors to represent a sentence, the matrix sentence embedding is usually several times larger than vector sentence embeddings. This results in needing more parameters in the subsequent fully connected layer, which connects every hidden unit to every unit in the matrix sentence embedding. In fact, in the example shown in Figure 1, this fully connected layer takes around 90% of the parameters; see Table 4. In this appendix we introduce a weight pruning method which, by utilizing the 2-D structure of the matrix embedding, is able to drastically reduce the number of parameters in the fully connected hidden layer.

Inheriting the notation used in the main paper, let the matrix embedding M have a shape of r by u, and let the fully connected hidden layer have b units. A normal fully connected hidden layer requires each hidden unit to be connected to every unit in the matrix embedding, as shown in Figure 1. This ends up with r × u × b parameters in total.

However, there are 2-D structures in the matrix embedding which we should make use of. Each row (m_i in Figure 1) in the matrix is computed from a weighted sum of LSTM hidden states, which means the rows share some similarities.

To reflect these similarities in the fully connected layer, we split the hidden states into r equally sized groups, with each group having p units. The i-th group is only fully connected to the i-th row in the matrix representation. All connections that connect the i-th group of hidden units to other rows of the matrix are pruned away. In this way, similarity between different rows of the matrix embedding is reflected as symmetry of connection type in the hidden layer. As a result, the hidden layer can be interpreted as also having a 2-D structure, with the number (r) and size (p) of groups as its two dimensions (the M_v in Figure 6).

On the other dimension, another form of similarity exists too. For each vector representation m_i in M, the j-th element of m_i is a weighted sum of an LSTM hidden unit at different time steps. And the j-th elements of all vector representations are summed up from the same LSTM hidden unit. We can also reflect this similarity in the symmetry of weight connections by applying the same pruning method, which yields another 2-D structured set of hidden states sized u-by-q, noted as M_h in Figure 6. When the total number of hidden units is the same, the pruned layer requires only a small fraction of the parameters of the fully connected layer, as Table 4 shows.

Figure 6: Hidden layer with pruned weight connections. M is the matrix sentence embedding; M_v and M_h are the structured hidden representations computed by pruned weights.

Table 4: Model Size Comparison Before and After Pruning

                            Hidden layer   Softmax   Other Parts   Total    Accuracy
Yelp, Original, b=3000      54M            15K       1.3M          55.3M    64.21%
Yelp, Pruned, p=150, q=10   2.7M           52.5K     1.3M          4.1M     63.86%
Age, Original, b=4000       72M            20K       1.3M          73.2M    80.45%
Age, Pruned, p=25, q=20     822K           63.75K    1.3M          2.1M     77.32%
SNLI, Original, b=4000      72M            12K       22.9M         95.0M    84.43%
SNLI, Pruned, p=300, q=10   5.6M           45K       22.9M         28.6M    83.16%

Table 4 takes the model we use for the Yelp dataset as a concrete example, and compares the number of parameters in each part of the model, both before and after pruning. We can see that the above pruning method drastically reduces the model size. Note that the p and q in this structure can be adjusted freely as hyperparameters. Also, we can continue the corresponding pruning process on top of M_v and M_h over and over again, and end up with a stack of structured hidden layers, just like stacking fully connected layers.
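A sketch of the pruned connection pattern: each of the r rows of M gets its own small weight matrix (a batched matrix product) instead of one dense matrix over the flattened embedding, and analogously for the columns. Shapes follow the notation above; the use of einsum is an illustrative choice, not the authors' implementation.

```python
import numpy as np

r, u, p, q = 30, 600, 150, 10

M = np.random.randn(r, u)          # matrix sentence embedding

# Row-wise pruning: group i of p hidden units sees only row m_i.
W_v = np.random.randn(r, u, p)     # r independent u->p weight blocks
M_v = np.einsum('ru,rup->rp', M, W_v)   # (r, p): r*u*p params vs r*u*b dense

# Column-wise pruning: unit block j of q hidden units sees only column j of M.
W_h = np.random.randn(u, r, q)     # u independent r->q weight blocks
M_h = np.einsum('ru,urq->uq', M, W_h)   # (u, q)
```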
The subsequent softmax layer will be fully connected to both M_v and M_h, i.e., each unit in the softmax layer is connected to all units in M_v and M_h. This is not a problem, since the speed of the softmax is largely dependent on the number of softmax units, which is not changed. In addition, for applications like sentiment analysis and textual entailment, the softmax layer is so tiny that it only contains several units.

Experimental results on the three datasets have shown that this pruning mechanism lowers performance a bit, but still allows all three models to perform comparably to, or better than, the other models compared in the paper."}, {"section_index": "15", "section_name": "B DETAILED STRUCTURE OF THE MODEL FOR SNLI DATASET", "section_text": "In Section 4.3 we tested our matrix sentence embedding model on the textual entailment task using the SNLI dataset. Different from the former two tasks, the textual entailment task takes a pair of sentences as input. We propose to use a set of multiplicative interactions to combine the two matrix embeddings.

Figure 7: Model structure used for textual entailment task

The overall structure of our model for SNLI is depicted in Figure 7. For both hypothesis and premise, we extract their embeddings (M_h and M_p in the figure) independently, with the same LSTM and attention mechanism. The parameters of this part of the model are shared (rectangles with dashed orange lines in the figure).

Comparing the two matrix embeddings corresponds to the green dashed rectangle part in the figure, which computes a single matrix embedding (F_r) as the factor of the semantic relation between the two sentences. To represent the relation between M_h and M_p, F_r can be connected to M_h and M_p through a three-way multiplicative interaction. In a three-way multiplicative interaction, the value of any one of F_r, M_p and M_h is a function of the product of the others. This type of connection was originally introduced to extract relations between images (Memisevic, 2013). Since here we are just computing the factor of relations (F_r) from M_h and M_p, it corresponds to the encoder part of the Factored Gated Autoencoder in Memisevic (2013). We call it the Gated Encoder in Figure 7.

First we multiply each row in the matrix embedding by a different weight matrix. Repeating this over all rows corresponds to a batched dot product between a 2-D matrix and a 3-D weight tensor. Inheriting the name in Memisevic (2013), we call the resulting matrix a factor. Doing the batched dot for both the hypothesis embedding and the premise embedding, we have F_h and F_p, respectively:

$F_h = \mathrm{batcheddot}(M_h, W_{fh})$

$F_p = \mathrm{batcheddot}(M_p, W_{fp})$

Here $W_{fh}$ and $W_{fp}$ are the two weight tensors for the hypothesis embedding and the premise embedding.

The factor of the relation (F_r) is just an element-wise product of F_h and F_p (the triangle in the middle of Figure 7):

$F_r = F_h \odot F_p$

Here ⊙ stands for element-wise product.
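A NumPy sketch of the gated encoder above; the factor dimension k is an illustrative choice:

```python
import numpy as np

r, two_u, k = 30, 600, 300
M_h = np.random.randn(r, two_u)       # hypothesis matrix embedding
M_p = np.random.randn(r, two_u)       # premise matrix embedding

W_fh = np.random.randn(r, two_u, k)   # one weight matrix per row
W_fp = np.random.randn(r, two_u, k)

F_h = np.einsum('ru,ruk->rk', M_h, W_fh)  # batched dot product
F_p = np.einsum('ru,ruk->rk', M_p, W_fp)
F_r = F_h * F_p                           # element-wise product
# F_r is then flattened and fed to the 2-layer ReLU MLP with softmax output.
```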
After the Fr layer, we then use an MLP with softmax output to classify the relation into different categlories\nThe overall structure of our model for SNLI is dipicted in Figure[7 For both hypothesis and premise, we extract their embeddings (Mn and Mp in the figure) independently, with a same LSTM and attention mechanism. The parameters of this part of model are shared (rectangles with dashed orange line in the figure).\nFp. = batcheddot(Mn, Wfh Fp = batcheddot(Mp, Wfp"}] |
SJ8BZTjeg | [{"section_index": "0", "section_name": "UNSUPERVISED LEARNING USING GENERATIVE AD VERSARIAL TRAINING AND CLUSTERING", "section_text": "Vittal Premachandran and Alan L. Yuille\nDepartment of Computer Science Johns Hopkins University.\nvittalp, ayuillel}@jhu.edu\nIn this paper, we propose an unsupervised learning approach that makes use of two components; a deep hierarchical feature extractor, and a more traditional cluster- ing algorithm. We train the feature extractor in a purely unsupervised manner using generative adversarial training and, in the process, study the strengths of. learning using a generative model as an adversary. We also show that adversar- ial training as done in Generative Adversarial Networks (GANs) is not sufficient to automatically group data into categorical clusters. Instead, we use a more tra- ditional grouping algorithm, k-means clustering, to cluster the features learned using adversarial training. We experiment on three well-known datasets, CIFAR- 10, CIFAR-100 and STL-10. The experiments show that the proposed approach. performs similarly to supervised learning approaches, and, might even be better. in situations with small amounts of labeled training data and large amounts of unlabeled data."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Much of the recent work in machine learning and computer vision has focused on llearning tech niques for high-level tasks such as image classification (Krizhevsky et al.(2012); Simonyan & Zisserman(2014); He et al.(2015)). Many of the state-of-the-art models employ Convolutional Neural Networks (CNNs) to extract high-level feature representations by processing the input data using multiple layers of convolutions, usually followed by some non-linear transform. CNNs have successfully demonstrated to yield high-quality feature representations that produce state-of-the-art. results on a variety of tasks, not only on image classification (as mentioned above), but also on semantic segmentation (Long et al.[(2015);Chen et al.(2016a)), boundary detection (Xie & Tu (2015); Premachandran et al.(2015)), and object detection (Girshick et al.(2014)), among oth- ers. These models are trained to produce high-quality features using backpropagation, usually by. pretraining on a large dataset (such as ImageNet) and then fine tuning on the relevant dataset. Un fortunately, supervised learning suffers from certain challenges, especially, in terms of scalability. since it requires large amounts of labeled data. Labeling millions of images requires extensive effort and is time consuming. Moreover, supervised training with a predefined set of classes, limits the generalizability of the learned feature representations to novel classes..\nTo overcome the difficulties of labeling large amounts of training data, effort has gone into the. development of semi-supervised and unsupervised learning techniques. The goal of unsupservisec. learning techniques is to learn representations that are interpretable, easily transferable to nove. tasks and novel object categories, and to disentangle the informative representation of the data fron. nuisance variables (e.g. lighting, viewpoint, etc.) purely from unlabeled data. A common and widely. used method for unsupervised learning is to do clustering using k-Means. k-Means clustering is a. simple method that groups input features into different clusters. Traditionally, this approach mainly. 
used low-level features such as raw pixel intensities, HOG features, GIST features, SIFT features. etc. Although the performance of k-means on such features is usually poor,Wang et al.[(2015) usec. deep network features and employed k-means clustering to show strong results on grouping objec. parts. But, the deep network that was used to extract the features was pre-trained on ImageNet using. class-label supervision (so, object knowledge was known). It would be a natural extension to see i. one can learn robust features using hierarchical feature learning in a purely unsupervised manner"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "However, since the objectives of unsupervised learning are not as concrete as the objectives of. supervised learning, optimizing deep hierarchical models using backpropagation becomes difficult.\nAttempts have been made to come up with \"pretext' objective functions, which are usually driven by \"common sense\"' requirements, to do unsupervised learning. Some examples of these objec. tives include minimizing the reconstruction error (Vincent et al.[(2008), training models to identify surrogate classes (Dosovitskiy et al.(2014)), predicting spatial position of image patches (Doersch et al.(2015);Noroozi & Favaro((2016)), and minimizing the distance in the representation space for objects tracked over a time period in a video sequence (Wang & Gupta(2015))\nIn this paper, we learn a deep network using generative adversarial training. We use the features. extracted from the discriminative component and fuse it with traditional unsupservised learning al. gorithms like k-Means to improve their performance. We perform various experiments over many. different datasets (CIFAR-10, CIFAR-100 and STL-1O) and show that the representations that car be learned purely by unsupervised learning from an adversarial signal helps to learn meaningfu. representations of input data. Our experiments show that under situations with minimal amounts oj. supervised training examples (and large amounts of unsupervised data), the representations learne. with adversarial training perform competitively in comparison to supervised training on a similar. architecture. We now provide a brief summary of adversarial training employed by GAN and Info. GAN.\nGenerative Adversarial Networks (Goodfellow et al.(2014)) are composed of two components; th generator, G(.), and the discriminator, D(.). The generator maps a latent encoding to the data space. while the discriminator distinguishes between samples generated by the generator and real data. The. generator is trained to fool the discriminator, while the discriminator is trained to not get fooled by. the generator.\nMore formally, given training data samples, x ~ Pdata(x), where Pdata(x) is the true data dis- tribution, the training of GANs proceeds by iterating between two-steps. In the first step, we fix the parameters of the generative model, sample a latent code, z ~ Pnoise(z), and generate data samples, G(z), which is then used to train the discriminator, D(.), by updating its parameters to dis- tinguish between G(z) and x. The parameters of the discriminator can be updated by maximizing the expected log-likelihood,\nEz~Pnoise(z)[log(1- D(G(z)))]\nmin max V(G, D) = Ex~Paata(x)[log(D(x))] + Ez~Pnoise(z)[log(1 - D(G(z)) G D"}, {"section_index": "3", "section_name": "2.1 INFOGAN", "section_text": "The formulation described above uses a noise vector, z, which is used by the generator, G(.), to synthesize data. 
This noise vector does not impose any constraints on what the generated data should look like.Chen et al.[(2016b) introduce a neat and simple idea to extend GANs into a feature identifying system called InfoGAN. InfoGAN uses a structured latent code, c, which is input to\nRecently, much interest has gone into adversarial training.. Generative Adversarial Networks. (GANs) (Goodfellow et al.(2014) are of particular interest in this work. Progress in GANs have enabled significant improvement in the quality of images being generated in the past couple of years (Denton et al.[(2015); Radford et al.(2015)). While much of the recent effort has gone in the de- velopment of better architectures and training procedures for modeling and training the generative network, in this work, we systematically study the power of the representations learned by the gen-. erator's adversary, i.e., the discriminative model..\nEx~Pdg (x)|log(D(x)] + Ez~Pnois. [log(1 D(G(z)))]\nthe generator, G(.), in addition to the noise vector, z. The code can either be a discrete code or a continuous code. In order to encourage the code to capture the inherent semantic structures in th training data, a new term is introduced to the objective function, which acts as a regularizer tha forces high mutual information between the latent code, c and the generated sample, G(z, c). Since it is hard to maximize the mutual information, I(c; G(z, c)), directly (because one would need tc know the true distribution P(c[x)), Chen et al.(2016b) provide a variational lower bound, whicl can be obtained when using a parametric auxiliary, Q(c[x), to approximate P(c|x). The variationa lower bound that is obtained is,\nThe InfoGAN objective is a regularized version of the original GAN objective (Eq. 3), where the regularizer is the variational lower bound of mutual information,\nminmax V1nfoGAn(G, D, Q) = V(G, D) - AL1(G, Q) G,Q D\nChen et al.(2016b) share the parameters between Q(.) and D(.), which helps reduce the computa tional cost. We do the same in all of our experiments..\n3 UNSUPERVISED LEARNING WITH ADVERSARIAL TRAINING AND K-MEANS++ CLUSTERING\nAs mentioned in Section[1] we are interested in learning representations of images in a purely unsu pervised manner. Both GAN, and InfoGAN provide a way to train the discriminative network using the generated images as an adversary. InfoGAN, is particularly interesting since it has the ability to directly predict the different categories that might be present in the training database. While the qualitative results presented in [Chen et al.[(2016b) shows that the categories can be automatically identified on the MNIST dataset, unfortunately, the same result does not seem to extend to more complicated and realistic datasets (CIFAR-10, CIFAR-100 and STL-1O). We modified the InfoGAN code released by the authors to enable support of the more realistic RGB data. We then trained the model on the above mentioned datasets to experiment if it could automatically identify the categor- ical clusters present in the respective datasets. We found that while InfoGAN that we trained on the above-mentioned datasets was successful in generating images that looked different for different categorical codes, it was unable to identify the class-level grouping that is present in these datasets.\nnstead, we adopt a hybrid strategy for unsupervised learning. We first use the generative networl. as an adversary to train the discriminative network until convergence. 
Upon convergence, we ex tract features from the penultimate layer of the D(.) network and run a more traditional clustering. algorithm, i.e., k-means++. Surprisingly, this simple strategy turns out to be much more effectiv. at grouping data from similar categories than the approach of directly predicting the categorica. groups. Note that one can plug in more sophisticated unsupervised learning algorithms instead o. <-means++. We use k-means++ to show that even a simple approach can produce reasonable results.\nAnother motivation for using the features from the penultimate layers is that it facilitates featur. transferability to novel classes and tasks. It is common in the supervised learning approaches to firs. train a deep network on ImageNet images using class-level supervision, then to perform net surger. to chop off the top level weights, and using this truncated network as a feature extractor for furthe. fine tuning on different datasets and tasks. Doing so does not prevent the model from being traine. only on the ultimate task that it might be used for. One can train the network on a \"pretext'' tas. and transfer the learned weights to other novel tasks. This is especially crucial for unsupervise. learning since the pretext task that is used to train the models is almost always much different fron. the specific task that the model will ultimately be used for..\nL1(G,Q) = Ec~P(c),z~Pnoise( (z)[logQ(c|G(c,z))]+ H(c)\nAs can be seen from the first term of Eq. 4] the lower bound of the mutual information regularizer conveniently turns out to be a recognition model. If the optimization procedure converges success. fully, one can hope to have learned a latent code that ends up representing the most salient and structured semantic features present in the data. The noise parameters, z, end up providing the stochasticity to the input that result in the production of samples with diversity.\nGenerative Network Batch Batch Norm Batch Batch Norm Norm Norm deconv2D deconv2D deconv21 deconv2D size=5x5 size=5x5 i ze=5x5 size=5x5 dim=64 dim=3 dim=256 dim=128 stride=2 stride=2 stride=2 fc ReLU ReLU ReLU G(z,c) ReLU tanh Discriminative Network fc dim=512 Leaky Batch Leaky Batch Leaky BatchLeaky ReLU Norm ReLU Norm ReLU Norm ReLU Q(c|x) conv2d Conv2d Conv2d Conv2d size=5x5 size=5x5 size=5x5 size=5x5 dim=64 dim=128 dim=256 dim=512 stride=2 stride=2 stride=2 stride=2 X $(x) T/F\nFigure 1: Figure shows the InfoGAN architecture that was used in all our experiments. Notice that the input to G(.) is a combination of z and c. Also notice that most of the parameters are shared between the Q(.) network and the D(.) network, thus improving the computational efficiency.\nGenerator: Note that the generator has been slightly modified to accept the structured latent code. c, in addition to the random noise, z. The first layer is a fully-connected (fc) layer which is then reshaped into a 2-D grid of spatial resolution s/16 s/16, where s is the size of the output image to be produced. Subsequent to this reshaping, the architecture has four layers of transposed_convolution (sometimes referred to as deconvolution) with a stride of 2, each of which upsamples the input features to twice the spatial resolution. These layers are sandwiched by batch_norm and ReLU layers. Finally, we use a tanh non-linearity to map the features into [-1, 1].\nDiscriminator: The discriminator is a standard CNN with a series of convolutional layers followec by non-linearities. 
The architecture uses four convolutional layers sandwiched by batch_norn and 1eakyReLU layers. We don't use max_pooling to reduce the spatial resolution of the input Instead, we convolve the feature maps with a stride of two, which results in the output of eacl convolution layer to be half the spatial resolution of the input feature map. This base architecture is shared between D(.) and Q(.). On top of this shared network, we use an fc layer to extraci the features, which are then used to predict the categorical distribution. Notice that most of the computational cost is shared between the D(.) and the Q(.) networks thereby making the entire training process to be computationally efficient.\nAs mentioned previously, while InfoGAN has the ability to group data into multiple groups automat-. cally, there is no constraint to enforce that the groups need to correspond to the various object-level. ategories that are present in the dataset. While this turned out to be true for the MNIST dataset. Chen et al.[(2016b)), we believe that it was possible because the variations in the strokes that pro. luce different digits correspond to the source of biggest variation in the dataset, which conveniently orresponds to the various digit categories, thereby enabling InfoGAN to act as a category recogni-. ion model. In more realistic datasets, the sources of biggest variation need not (and, usually, do not). orrespond to variations in the object-level categories. Our experiments show this to be true. When ve trained InfoGAN to automatically group the CIFAR-10 images into 10 categories, we found that while InfoGAN was able to group the images into different groups, the groups did not correspond. o object category-level groupings. Figure|2|shows some example samples generated by the model\nEach row corresponds to a different category and each column in the row corresponds to a differen sample from that category (obtained by keeping c fixed and by varying z). We can see that while each row look different from each other, it does not correspond to the CIFAR-10 categories\nTherefore, we employ a hybrid approach to unsupervised clustering. We first train the discriminativ. network using either the vanilla GAN objective or the InfoGAN objective, until convergence. Upo. convergence, we extract features for each image in the training set, from the top of the share. network, labeled as $(x) in Figure 1] and do average_pooling across the spatial resolutior. for each feature channel. We then cluster these features using k-means++ into a discrete set of k. categories. We set k to be the number of object classes that are present in the respective datase The cluster centers learned by k-means++ clustering act as the templates for the k categories tha. are present in the dataset.\nDuring testing, we extract the feature representation of the test images by passing them through the discriminative network trained using the generator as an adversary, do average_pooling on (x), and compute the distance of the test feature vector to each of the centers learnt by k- means++ clustering during the training phase. The test image is assigned an index corresponding to the index of the closest center. Our experiments show that clustering on (x) produces better results than directly using the recognition model of InfoGAN. Note that while we use the simple k- means++ algorithm for clustering, it could be replaced by more sophisticated unsupervised learning algorithms. 
We do not explore further down this route since the scope of this work is to study the strength of the features learned by adversarial training.\nFigure 2: Figure shows samples generated from InfoGAN trained on the CIFAR-10 dataset wher the system was encouraged to identify 10 categories. Each row corresponds to a different cluste identified by InfoGAN. Each column corresponds to a different sample from that clusters. We can see that while InfoGAN can identify clusters that are different from each other, they do not correspond to the CIFAR-10 categories. See Sec. 4.1|for quantitative results.\nAn advantage of the hybrid approach is that it now allows us to use a variety of different \"pretext'. objectives. In other words one can decouple the training objective from the testing requirements. In. fact, we experimented by encouraging InfoGAN to identify more groups in the training data than. number of object-categories in the dataset. For example, we trained InfoGAN on CIFAR-10 dataset by encouraging the system to identify [10, 20, 30, 35, 40, 50 and 75] groups. Of course, these groups. do not correspond to category-level groupings. However, to our surprise, we found that when the. features obtained from InfoGANs trained on large number of categories were used for clustering. they performed better at object categorization than the features obtained from an InfoGAN trained. on the same number of object categories as present in the dataset. Section |4|provides quantitative. results on these experiments."}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "We perform experiments on multiple datasets; CIFAR-10, CIFAR-100 and STL-1d1 We use ground truth labels only for evaluation purposes and for training the supervised learning baseline. The train ing procedure is entirely unsupervised. We report results using two standard metrics that are used for evaluating unsupervised learning algorithms; Adjusted RAND Index (ARI) and the Normalized Mutual Information (NMI) score. We provide three baselines; (i) we report results using simple features such as pixel intensities, HOG and GIST, which we call low-level visual features, (ii) we report results on the features obtained using standard GAN training, (iii) as an upper bound, we report results using supervised learning where we train the weights in a discriminator network with the same architecture using category-level labels that are provided by the datasets.\nIt is important to remember that we are interested in comparing the quality of the learned feature that can be used for transfer to novel images and not just the classification score on an pre-define. set of categories. The classification accuracy captures only how well a test image was correctly classified. If incorrectly classified, it does not quantify how bad the mistake was. ARI, on the othe hand, is a better metric for evaluating the properties of the features because it measures not only how accurately pairs of objects were correctly grouped together, but also takes into account how many pairs of data points were incorrectly grouped. Therefore, when comparing with the model that was trained using supervised learning, we ignore the top-level classification layer of that model, anc quantify the quality of the representations, i.e., the features extracted from the penultimate layer using ARI after clustering on them.\nFigure 3: This figure shows all the 64 filters from the first layer of the discriminative network trained on CIFAR-10. 
The visualization on the left corresponds to the filters learned using adversarial training. The visualization on the right corresponds to the filters learned for the same architecture using supervised learning. It is interesting to see that there the filters on the left have more high frequency components and the filters on the right are more smooth.\nBefore we go into the quantitative results, we visualize the filters of the first layer of the discrim inative network and compare them across two different training procedures. Figure 3 shows the visualization. On the left are the filters from the network that was trained using adversarial training On the right are the filters from a network with the same architecture but trained using class-level supervision. Both these networks were trained using the CIFAR-10 dataset. We can see that while some of the filters look similar to each other, many of them are quite different. It is clear that the filters on the right are more smooth than the filters on the left. Recollect that filters on the left are trained to fit both the real images and the generated images. When the generated images are not as high-quality as the real images, the filters that D(.) learns might not be as regularized as the ones\nWe have released the code that was used in all our experiments at https://github.com/VittalP/UnsupGAN\nCIFAR-10 0.25 0.2 0.15 CIFAR-10 0.6 0.1 0.4 0.05 0.2 0 0 10 20 30 35 40 50 75 0 # Groups in InfoGAN Visual GAN InfoGAN Supervised ARI-32 NMI-32 +-ARI-64 Features NMI-64 +ARI-32-InfoGAN --NMI-32-InfoGAN ARI NMI -ARI-64-InfoGAN +NMI-64-InfoGAN (a) (b)\nFigure 4: CIFAR-10: (a) Plots the performance of the grouping algorithm when using the features. learned from InfoGAN training when trained over multiple categories. Zero groups corresponds. to vanilla GAN. -32 and -64 correspond to the output sizes of the generated images. -InfoGAN. corresponds to the results obtained with direct prediction using the recognition model in InfoGAN.. (b) Note that InfoGAN features perform better than vanilla GAN features. However, supervised learning outperforms unsupervised learning on this database..\nlearnt using only real data. We hypothesize that improving the quality of the generated images can help regularize the first layer filters in D(.). We leave this route of exploration for future work.."}, {"section_index": "5", "section_name": "4.1 CIFAR-10", "section_text": "The CIFAR-10 consists of 50k training images and 10k testing images, of size 32 32, dividec. among 10 categories. We trained the model for two different image sizes; 32 32 and 64 64. We. trained InfoGAN with different numbers of categories {10, 20, 30, 35, 40, 50, 75}. Figure|4a|shows. a plot of the performance measures versus the number of groups InfoGAN was trained to identify We can see from the figure that as we increase the number of categories, the performance of the. model goes up into a certain point and drop after that. This indicates that there exists databases for. which grouping into more categories than present in the ground truth might help. We also plot the. performance of the InfoGAN model when used directly as a prediction model. We can see from. the plots that k-means++ clustering produces better results (ARI-32=0.097; NMI-32=0.18) than direct prediction (ARI-32-InfoGAN: 0.085; NMI-32-InfoGAN: 0.14). We label the direct prediction. results with a (-InfoGAN).\nFigure 4b|compares the performance when using different features. 
We can see that InfoGAN features trained with 50 clusters beats the features learned using vanilla GAN by a small margin However, supervised training does much better (as one might have expected).\nIn these sets of experiments, we use the images from the CIFAR-100 database for training. This database also contains 50k training examples and 10k test images, divided among 100 fine scale categories and 20 coarse level categories. We test the performance on the coarse categories. As before, we experiment the InfoGAN training with multiple categories {10, 20, 35, 50}. While the trend is not as noticeable as in the case of CIFAR-10, the best performance is obtained when we us. 50 categories. Also, as before, the k-means++ clustering of the features produces better performance (ARI=0.04) than the recognition model of InfoGAN (ARI=0.036).\nCIFAR-100 0.15 CIFAR-100 0.1 0.2 0 0.05 0.15 0.1 0 0.05 0 10 20 35 50 # Groups in InfoGAN 0 Visual GAN InfoGAN Supervised Features +-ARI-32 +NMI-32 +-ARI-InfoGAN --NMI-InfoGAN ARI NMI (a) (b)\nFigure |5b|compares the performance when we use different different features. Notice that the fea tures obtained by adversarial training are as competitive as the features obtained using supervised training. We that this is because of two reasons; (i) CIFAR-100 coarse level categories are mucl harder to distinguish than the CIFAR-10 categories, making it difficult for the supervised model tc learn good features, (ii) the number of training examples per category in CIFAR-100 is lesser thar CIFAR-10 because we are training using the 20 coarse categories compared with 10 of CIFAR-10 We label the direct prediction results with a (-InfoGAN).\nFinally, we also perform experiments on the STL-10 dataset. This database consists of 5000 images for training with labels, 100ooo training images without labels, and 800o images for testing. The dataset consists of 10 categories, and all the images are of size 96 96. This dataset brings out the advantages of unsupervised learning algorithms. The database is more than two times bigger thar CIFAR-10 and CIFAR-100 datasets in terms of the number of images and each image is 9 times the size of the CIFAR images. Figure|6b|shows that the unsupervised learning with adversarial training outperforms the same models trained using supervised learning. From Figure 6a we also notice that the features learned using vanilla GAN does better than the features learned using InfoGAN Increasing the complexity of the datasets makes it difficult for InfoGAN to group the images in the dataset."}, {"section_index": "6", "section_name": "5 CONCLUSION", "section_text": "In this paper, we explore an unsupervised feature learning technique where the model is trained us. ing adversarial training from a generative network. We use a generative model to generate image. that act as an adversary to the discriminative network. We explore the standard GAN architectur and the InfoGAN architecture for training the discriminative model. We also show that direct predic tion using InfoGAN's recognition model does not always result in identifying object category-leve. information. Instead, we fuse the features learned by adversarial training with a traditional unsu pervised learning approach, k-means clustering, and show that this combination produces bette results than direct prediction. We also show that, in situations where there are limited amounts o labeled training data and large amounts of unlabeled data, adversarial training has the potential t. 
outperform supervised learning.\nFigure 5: CIFAR-100: (a) # of groups used to train InfoGAN has less of an effect on CIFAR-100 than. it had on CIFAR-10. However, the performance of k-means++ clustering is still better than direct prediction using the recognition model of InfoGAN. Please see Fig. 4a|for labeling conventions.. (b) InfoGAN features and GAN features perform similarly on this dataset. However, supervised. learning features are only slightly better than the unsupervised counterparts..\nSTL-10 0.25 STL-10 0.2 0.25 0.15 0.2 0.1 0.15 0.05 0.1 0 0.05 0 10 20 35 50 75 # Groups in InfoGAN 0 GAN InfoGAN Supervised ARI-96 --NMI-96 ARI NMI (b)\nFigure 6: STL-10: (a) InfoGAN's performance drops with increase in the number of groups. (b Vanilla GAN's features outperform InfoGAN-trained features. Also, notice that, with just 5000 labeled training images, supervised learning starts to reach its limits. However, our model make. use of the additional 10oo00 unlabeled images and is able to learn representations that surpass the performance of features learned using the supervised model.."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. arXiv preprint arXiv:1606.00915, 2016a.\nRoss Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for ac curate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 580-587, 2014.\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor mation Processing Systems, pp. 2672-2680, 2014.\nAlex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo. lutional neural networks. In Advances in neural information processing systems, pp. 1097-1105 2012.\nAlexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discrimina- tive unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 766-774, 2014.\nAlec Radford. Luke Metz. and Soumith Chintala. Unsupervised representation learning with deej convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.\nJianyu Wang, Zhishuai Zhang, Vittal Premachandran, and Alan Yuille. Discovering internal repre sentations from object-cnns using population encoding. arXiv preprint arXiv:1511.06855, 2015\nXiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos nProceedinos Ottn0HH 11 794-2802. 2015\nSaining Xie and Zhuowen Tu. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1395-1403, 2015."}] |
ryxB0Rtxx | [{"section_index": "0", "section_name": "IDENTITY MATTERS IN DEEP LEARNING", "section_text": "MoritzHardt Google Brain 1600 Amphitheatre Parkway Mountain View. CA. 94043 m@mrtz.Org\nAn emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as batch normal- iz.ation, but was also key to the immense success of residual networks\nIn this work, we put the principle of identity parameterization on a more solid. theoretical footing alongside further empirical progress. We first give a strikingly. simple proof that arbitrarily deep linear residual networks have no spurious local. optima. The same result for feed-forward networks in their standard parameter-. ization is substantially more delicate. Second, we show that residual networks with ReLu activations have universal finite-sample expressivity in the sense that. the network can represent any function of its sample provided that the model has. more parameters than the sample size.\nDirectly inspired by our theory, we experiment with a radically simple residual ar-. chitecture consisting of only residual convolutional layers and ReLu activations but no batch normalization, dropout, or max pool. Our model improves signifi cantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "This shortcoming was observed and partially addressed by Ioffe & Szegedy (2015) through batch normalization, i.e., layer-wise whitening of the input with a learned mean and covariance. But the idea remained somewhat implicit until residual networks (He et al. (2015); He et al. (2016)) explic itly introduced a reparameterization of the convolutional layers such that when all trainable weights are O, the layer represents the identity function. Formally, for an input x, each residual layer has the form x + h(x), rather than h(x). This simple reparameterization allows for much deeper architec- tures largely avoiding the problem of vanishing (or exploding) gradients. Residual networks, and subsequent architectures that use the same parameterization, have since then consistently achieved state-of-the-art results on various computer vision benchmarks such as CIFAR10 and ImageNet."}, {"section_index": "2", "section_name": "1.1 OUR CONTRIBUTIONS", "section_text": "In this work, we consider identity parameterizations from a theoretical perspective, while translating some of our theoretical insight back into experiments. Loosely speaking, our first result underlines how identity parameterizations make optimization easier, while our second result shows the same is true for representation.\nDepartment of Computer Sciene Princeton University 35 Olden Street, Princeton, 0854"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Traditional convolutional neural networks for image classification, such as AlexNet (Krizhevsky. et al. (2012)), are parameterized in such a way that when all trainable weights are 0, a convolutional layer represents the 0-mapping. Moreover, the weights are initialized symmetrically around 0. This standard parameterization makes it non-trivial for a convolutional layer trained with stochastic gra- dient methods to preserve features that were already good. 
Put differently, such convolutional layers cannot easily converge to the identity transformation at training time..\nLinear residual networks. Since general non-linear neural networks, are beyond the reach of cur-. rent theoretical methods in optimization, we consider the case of deep linear networks as a simplified model. A linear network represents an arbitrary linear map as a sequence of matrices A, . .. A,A1 The objective function is E[y - Ae ... Ajx|2, where y = Rx for some unknown linear transfor- mation R and x is drawn from a distribution. Such linear networks have been studied actively in recent years as a stepping stone toward the general non-linear case (see Section 1.2). Even though. Ae ... A1 is just a linear map, the optimization problem over the factored variables (Ae, ..., A1) is. non-convex.\nIn analogy with residual networks, we will instead parameterize the objective function as\nTo give some intuition, when the depth l is large enough, we can hope that the target function I has a factored representation in which each matrix A, has small norm. Any symmetric positiv semidefinite matrix O can, for example, be written as a product O = Oe ... O1, where each O, = O1/l is very close to the identity for large l so that A, = O, - I has small spectral norm. We firs prove that an analogous claim is true for all linear transformations R. Specifically, we prove tha for every linear transformation R, there exists a global optimizer (A1, . . ., Ae) of (1.1) such that fo large enough depth l,\nHere, A denotes the spectral norm of A. The constant factor depends on the conditioning of R We give the formal statement in Theorem 2.1. The theorem has the interesting consequence that as. the depth increases, smaller norm solutions exist and hence regularization may offset the increase ir parameters.\nHaving established the existence of small norm solutions, our main result on linear residual networks shows that the objective function (1.1) is, in fact, easy to optimize when all matrices have sufficiently small norm. More formally, letting A = (A1,..., Ae) and f(A) denote the objective function in (1.1), we can show that the gradients of vanish only when f(A) = 0 provided that max; ||A,|| O(1/l). See Theorem 2.2. This result implies that linear residual networks have no critical points other than the global optimum. In contrast, for standard linear neural networks we only know, by work of Kawaguchi (2016) that these networks don't have local optima except the global optimum but it doesn't rule out other critical points. In fact, setting A, = 0 will always lead to a bad critical point in the standard parameterization.\nUniversal finite sample expressivity. Going back to non-linear residual networks with ReLU ac. tivations, we can ask: How expressive are deep neural networks that are solely based on residua. layers with ReLU activations? To answer this question, we give a very simple construction showin, that such residual networks have perfect finite sample expressivity. In other words, a residual net. work with ReLU activations can easily express any functions of a sample of size n, provided tha. it has sufficiently more than n parameters. Note that this requirement is easily met in practice. Or. CIFAR 10 (n = 50000), for example, successful residual networks often have more than 10 param. eters. More formally, for a data set of size n with r classes, our construction requires O(n log n +r2. parameters. 
Theorem 3.2 gives the formal statement.\nEach residual layer in our construction is of the form x + VReLU(Ux), where U and V are linear transformations. These layers are significantly simpler than standard residual layers, which typically have two ReLU activations as well as two instances of batch normalization\nThe power of all-convolutional residual networks. Directly inspired by the simplicity of ou. expressivity result, we experiment with a very similar architecture on the CIFAR10, CIFAR100, anc. ImageNet data sets. Our architecture is merely a chain of convolutional residual layers each with a single ReLU activation, but without batch normalization, dropout, or max pooling as are common. in standard architectures. The last layer is a fixed random projection that is not trained. In line. with our theory, the convolutional weights are initialized near 0, using Gaussian noise mainly as a. symmetry breaker. The only regularizer is standard weight decay (l2-regularization) and there is nc. need for dropout. Despite its simplicity, our architecture reaches 6.38% top-1 classification erroi. on the CIFAR10 benchmark (with standard data augmentation). This is competitive with the best.\nE[y-(I+Ae)...(I+A)x2 A\nmax AilO(1/e) 1<i<l\nSince the advent of residual networks (He et al. (2015); He et al. (2016)), most state-of-the-art net- works for image classification have adopted a residual parameterization of the convolutional layers Further impressive improvements were reported by Huang et al. (2016) with a variant of residual networks, called dense nets. Rather than adding the original input to the output of a convolutiona layer, these networks preserve the original features directly by concatenation. In doing so, dense nets are also able to easily encode an identity embedding in a higher-dimensional space. It would be interesting to see if our theoretical results also apply to this variant of residual networks.\nThere has been recent progress on understanding the optimization landscape of neural networks though a comprehensive answer remains elusive. Experiments in Goodfellow et al. (2014 and Dauphin et al. (2014) suggest that the training objectives have a limited number of bad loca minima with large function values. Work by Choromanska et al. (2015) draws an analogy betweer the optimization landscape of neural nets and that of the spin glass model in physics (Auffinger et al (2013)). Soudry & Carmon (2016) showed that 2-layer neural networks have no bad differentiable local minima, but they didn't prove that a good differentiable local minimum does exist. Baldi & Hornik (1989) and Kawaguchi (2016) show that linear neural networks have no bad local minima In contrast, we show that the optimization landscape of deep linear residual networks has no bac critical point, which is a stronger and more desirable property. Our proof is also notably simple illustrating the power of re-parametrization for optimization. Our results also indicate that deepe networks may have more desirable optimization landscapes compared with shallower ones.\nConsider the problem of learning a linear transformation R: Rd -> Rd from noisy measurements y = Rx + &, where & E N(0, Id) is a d-dimensional spherical Gaussian vector. 
Denoting by D the distribution of the input data x, let = Ex~D[xx ' 1 be its covariance matrix.\nho = x, h=hj-1+Ahj-1, y =he.\ny=(Ida+ Ae)...(Id+A1)x\nfA,x,y))=[yyll2=l|Id+Ae...(Id+Ax-Rx-l\nThe first theorem of this section states that the objective function f has an optimal solution with small Il-lll-norm, which is inversely proportional to the number of layers l. Thus, when\nresidual network reported in He et al. (2015), which achieved 6.43%. Moreover, it improves upon. the performance of the previous best all-convolutional network, 7.25%, achieved by Springenberg. et al. (2014). Unlike ours, this previous all-convolutional architecture additionally required dropout and a non-standard preprocessing (ZCA) of the entire data set. Our architecture also improves significantly upon Springenberg et al. (2014) on both Cifar100 and ImageNet..\nThere are, of course, many ways to solve this classical problem, but our goal is to gain insights into the optimization landscape of neural nets, and in particular, residual networks. We therefore\nIt is easy to see that this model can express any linear transformation R. We will use A as a shorthand for all of the weight matrices, that is, the l d d-dimensional tensor the contains A1, ..., Ae as slices. Our objective function is the maximum likelihood estimator,.\nf(A):= E[f(A,(x,y))]\nRecall that || A,|l is the spectral norm of A,. We define the norm |l-ll for the tensor A as the maximum of the spectral norms of its slices.\nA := max IAilL 1<i<l\nthe architecture is deep, we can shoot for fairly small norm solutions. We define y:= max{| log max(R)|, | log min(R)|}. Here min(), max() denote the least and largest singular values of R respectively.\nTheorem 2.1. Suppose l 3y. Then, there exists a globa ptimum solution A* of the populatior\nGiven the observation of Theorem 2.1, we restrict our attention to analyzing the landscape of f(: in the set of A with II Ill-norm less than T,\nHere using Theorem 2.1, the radius - should be thought of as on the order of 1/l. Our main theoren in this section claims that there is no bad critical point in the domain B, for any t < 1. Recall tha. a critical point has vanishing gradient..\nTheorem 2.2. For any t < 1, we have that any critical point A of the objective function f() inside the domain B, must also be a global minimum..\nTheorem 2.2 suggests that it is sufficient for the optimizer to converge to critical points of the popi lation risk, since all the critical points are also global minima..\nMoreover, in addition to Theorem 2.2, we also have that any A inside the domain B, satisfies that\n||Vf(A)|l 4l(1-r)l-10min()2(f(A) - Copt\nEquation (2.3) says that the gradient has fairly large norm compared to the error, which guarantees convergence of the gradient descent to a global minimum (Karimi et al. (2016)) if the iterates stay inside the domain B-. which is not guaranteed by Theorem 2.2 by itself..\nTowards proving Theorem 2.2, we start off with a simple claim that simplifies the population risl We also use |I-ll F to denote the Frobenius norm of a matrix..\nClaim 2.3. In the setting of this section, we have\nf(A)\nHere C is a constant that doesn't depend on A, and 1/2 denote the square root of , that is, the unique symmetric matrix B that satisfies B2 = .\nProof of Claim 2.3. Let tr(A) denotes the trace of the matrix A. Let E = (Id+ Ae) ... 
(Id+ A1)- R Recalling the definition of f(A) and using equation (2.2), we have.\nA* <2(v+v3y)\nHere y should be thought of as a constant since if R is too large (or too small), we can scale the data properly so that min(R) 1 max(R). Concretely, if max(R)/min(R) = k, then we can scaling for the outputs properly so that min(R) = 1/ and max(R) = . In this case, we have = log k, which will remain a small constant for fairly large condition number k. We also. point out that we made no attempt to optimize the constant factors here in the analysis. The proof of Theorem 2.1 is rather involved and is deferred to Section A..\nHere Copt is the global minimal value of f() and |Vf(A)|[F denotes the euclidean norm' of the l d d-dimensional tensor f(A). Note that min() denote the minimum singular value of .\nf(A) =E [||Ex-||2] (by equation (2.2)) =E[l|Ex|2+l|S|l2-2(Ex,)] =E[tr(ExxET)] +E[|S|l2] (since E[<Ex,)]=E[<Ex,E[|x])]= 0) tr(EE[xx]E)+C (where C = E[xx' ]) =tr(EET)+ C=|E1/2|IF+ C (since E[xx] = )\nNext we compute the gradients of the objective function f() from straightforward matrix calculus We defer the full proof to Section A.\nLemma 2.4. The gradients of f(.) can be written as\nd f =2(Id+A)...(Id+A+1)E(Id+A1)... (Id+A) dAi =(Id+Ae)...(Id+A1)-R\nProof of Theorem 2.2. Using Lemma 2.4, we have\nd f = 2||Id+ A)...(Id+ A+1)E(Id+ A1)... (Id+A)E (by Lemma 2.4 dAi 2min(Id+ AT) 0min()|E|F (by Claim C.2 jFi 2(1 - r)l-1min()|E|l (since min(Id + A) 1 A\nTherefore we complete the proof of equation (2.3). Finally, if A is a critical point, namely, f(A) 0, then by equation (2.3) we have that f(A) = Copt. That is, A is a global minimum.\nIn this section we characterize the finite-sample expressivity of residual networks. We consider a residual layers with a single ReLU activation and no batch normalization. The basic residual building block is a function Tu,v,s() : Rk -> Rk that is parameterized by two weight matrices. U E Rxk, V E Rkxk and a bias vector s E Rk\nTu.v.s(h) = VReLu(Uh + s)\nWe assume the data has r labels, encoded as r standard basis vectors in Rr, denoted by e1, ..., er. We have n training examples (x(1), y(1)),..., (x(n), y(n)), where x(i) E Rd denotes the i-th data. and y(i) e {e1,..., er} denotes the i-th label. Without loss of generality we assume the data are. normalized so that x(i) = 1. We also make the mild assumption that no two data points are very. close to each other.\nAssumption 3.1. We assume that for every 1 < i < j < n, we have [x() - x()|[2 p for som. absolute constant p > 0.\nImages, for example, can always be imperceptibly perturbed in pixel space so as to satisfy thi assumption for a small but constant p.\nUnder this mild assumption, we prove that residual networks have the power to express any possible. labeling of the data as long as the number of parameters is a logarithmic factor larger than n.\nNow we are ready to prove Theorem 2.2. The key observation is that each matric A, has small. norm and cannot cancel the identity matrix. Therefore, the gradients in equation (2.5) is a product of. non-zero matrices, except for the error matrix E. Therefore, if the gradient vanishes, then the only possibility is that the matrix E vanishes, which in turns implies A is an optimal solution..\ndf l|Vf(A)|I- > 4l(1 - )l-10min()2|E|2 dAi 1E i=1 4l(1-r)l-1min()2(f(A) - C) (by the definition of E and Claim 2 4l(1-r)e-1min()2(f(A) - Copt). (since Copt = minA f(A) C by Claim 2\nA residual network is composed of a sequence of such residual blocks. 
In comparison with the full pre-activation architecture in He et al. (2016), we remove two batch normalization layers and one ReLU layer in each building block\nTheorem 3.2. Suppose the training examples satisfy Assumption 3.1. Then, there exists a residual network N (specified below) with O(n log n + r2) parameters that perfectly expresses the training data, i.e., for all i E {1, ..., n}, the network N maps x(i) to y(i)\nIt is common in practice that n > r2, as is for example the case for the Imagenet data set where n > 106 and r = 1000.\nWe construct the following residual net using the building blocks of the form Tu,v,s as defined in equation (3.1). The network consists of l + 1 hidden layers ho, . . . , he, and the output is denoted by y E Rr. The first layer of weights matrices Ao maps the d-dimensional input to a k-dimensional hid- den variable ho. Then we apply l layers of building block T with weight matrices A, B, E Rk xk. Finally, we apply another layer to map the hidden variable he to the label y in Rk. Mathematically, we have\nTowards constructing the network N of the form above that fits the data, we first take a random to denote the j-th layer of hidden variable of the i-th example. By Johnson-Lindenstrauss Theorem (Johnson & Lindenstrauss (1984), or see Wikipedia (2016)), with good probability, the resulting are not very correlated.\n= e, then v() -\nVj E{1,...,r},qj+ TAe+1,Be+1,be+1 9i = ej\nIn computer vision, typically r is less than 103 and d is less than 105 while n is larger than 10\nhi-1 +TAa.Ba.ba 1. Vje{1,..., y=he+T 1,Be+1,se+\nWe note that here Ae+1 E Rkr and Be+1 E Rrr so that the dimension is compatible. We assume. the number of labels r and the input dimension d are both smaller than n, which is safely true in practical applications.2 The hyperparameter k will be chosen to be O(log n) and the number of. layers is chosen to be l = n/k|. Thus, the first layer has dk parameters, and each of the middle l. building blocks contains 2k2 parameters and the final building block has kr + r2 parameters. Hence,. the total number of parameters is O(kd + lk2 + rk + r2). = O(n log n + r2).\nThen we construct l middle layers that maps h. for every i E {1,...,n}. These vectors. as desired. Concretely, we design this cluster centers by picking r random unit vectors q1, ..., qr. in Rk. We view them as the surrogate label vectors in dimension k (note that k is potentially much. smaller than r). In high dimensions (technically, if k > 4 log r) random unit vectors q1, ..., qr are. pair-wise uncorrelated with inner product less than < 0.5. We associate the i-th example with the target surrogate label vector v(i) defined as follows,.\nVi e{1,...,n}\nVi e{1,...,n}\nWe briefly sketch the proof of the Lemma to provide intuitions, and defer the full proof to Section B The operation that each residual block applies to the hidden variable can be abstractly written as,\n+T UV Vi E S Vi E S\nThis claim is formalized in Lemma B.1. We can use it repeatedly to construct l layers of building vectors in {v(1), ..., v(n)}, and maintains the values of the others. Recall that we have l = [n/k the proof sketch.\nInspired by our theory, we experimented with all-convolutional residual networks on standard image classification benchmarks"}, {"section_index": "4", "section_name": "4.1 CIFAR10 AND CIFAR100", "section_text": "Our architectures for CIFAR10 and CIFAR100 are identical except for the final dimension corre sponding to the number of classes 10 and 100, respectively. 
In Table 1, we outline our architecture Each residual block has the form x + C2(ReLU(C x)), where C1, C2 are convolutions of the spec ified dimension (kernel width, kernel height, number of input channels, number of output channels) The second convolution in each block always has stride 1, while the first may have stride 2 where indicated. In cases where transformation is not dimensionality-preserving, the original input x i. adjusted using averaging pooling and padding as is standard in residual layers..\nWe trained our models with the Tensorflow framework, using a momentum optimizer with momen tum 0.9, and batch size is 128. All convolutional weights are trained with weight decay 0.0001 The initial learning rate is 0.05, which drops by a factor 10 and 30000 and 50000 steps. The model reaches peak performance at around 50k steps, which takes about 24h on a single NVIDIA Tesla K40 GPU. Our code can be easily derived from an open source implementation' by removing batch normalization, adjusting the residual components and model architecture. An important departure from the code is that we initialize a residual convolutional layer of kernel size k k and c output channels using a random normal initializer of standard deviation o = 1/k2c, rather than 1/k/ used for standard convolutional layers. This substantially smaller weight initialization helped train ing, while not affecting representation.\nA notable difference from standard models is that the last layer is not trained, but simply a fixec random projection. On the one hand, this slightly improved test error (perhaps due to a regularizin effect). On the other hand, it means that the only trainable weights in our model are those of the convolutions, making our architecture \"all-convolutional'.\n3https://github.com/tensorflow/models/tree/master/resnet\nh->h+ Tu.v,s(h).\nwhere h corresponds to the hidden variable before the block and h corresponds to that after. We. claim that for an (almost) arbitrary sequence of vectors h(1), ..., h(n), there exist Tu,v,s(-) such that. operation (3.5) transforms k vectors of h(i)'s to an arbitrary set of other k vectors that we can freely. choose, and maintain the value of the rest of n - k vectors. Concretely, for any subset S of size k. and any desired vector v(i) (i E S), there exist U, V, s such that.\nh(i) +Tu.v.s(h Vi E S = h(i) +Tu,v,s(h(i) Vi S\nTable 1: Architecture for CIFAR10/100 (55 convolutions, 13.5M parameters) variable dimensions initial stride description 3 3 x 3 x 16 1 1 standard conv 3 x 3 x 16 x 64 1 9 residual blocks 3 x 3 x 64 x 128 2 9 residual blocks 3 x 3 x 128 x 256 2 9 residual blocks 8 8 global average pool 256 X num_classes random projection (not trained) Cifar10 Precision Cifar100 Precision 0.6 0.6 train train 0.5 0.5 test test min min 0.4 0.4 Uo!S! 0.3 0.3 0.2 0.2 0.1 0.1 0.0 0.0 0 10 20 30 40 50 60 0 10 20 30 40 50 60 Steps (x1000) Steps (x1000)\nTable 1: Architecture for CIFAR10/100 (55 convolutions, 13.5M parameters)\nFigure 1: Convergence plots of best model for CIFAR10 (left) and CIFAR (100) right. One step is a gradient update with batch size 128..\nAn interesting aspect of our model is that despite its massive size of 13.59 million trainable pa. rameters, the model does not seem to overfit too quickly even though the data set size is 50o00. 
In contrast, we found it difficult to train a model with batch normalization of this size without signifi cant overfitting on CIFAR10.\nTable 2: Comparison of top-1 classification error on different benchmarks"}, {"section_index": "5", "section_name": "4.2 IMAGENET", "section_text": "The ImageNet ILSVRC 2012 data set has 1, 281, 167 data points with 1000 classes. Each image is resized to 224 224 pixels with 3 channels. We experimented with an all-convolutional variant of the 34-layer network in He et al. (2015). The original model achieved 25.03% classification error. Our derived model has 35.7M trainable parameters. We trained the model with a momentum optimizer (with momentum 0.9) and a learning rate schedule that decays by a factor of 0.94 every two epochs, starting from the initial learning rate 0.1. Training was distributed across 6 machines\nTable 2 summarizes the top-1 classification error of our models compared with a non-exhaustive list of previous works, restricted to the best previous all-convolutional result by Springenberg et al (2014), the first residual results He et al. (2015), and state-of-the-art results on CIFAR by Huang et al. (2016). All results are with standard data augmentation.\nTable 2: Comparison of top-1. classification error on different benchmarks Method CIFAR10 CIFAR100 ImageNet remarks All-CNN 7.25 32.39 41.2 all-convolutional, dropout, extra data processing. Ours 6.38 24.64 35.29 all-convolutional ResNet 6.43 25.16 19.38 DenseNet 3.74 19.25 N/A\nupdating asynchronously. Each machine was equipped with 8 GPUs (NVIDIA Tesla K4O) and usec. batch size 256 split across the 8 GPUs so that each GPU updated with batches of size 32\nIn contrast to the situation with CIFAR10 and CIFAR100, on ImageNet our all-convolutional mode performed significantly worse than its original counterpart. Specifically, we experienced a signifi cant amount of underfitting suggesting that a larger model would likely perform better.\nDespite this issue, our model still reached 35.29% top-1 classification error on the test set (50000. data points). and 14.17% top-5 test error after 700. 000 steps (about one week of training). While no longer state-of-the-art, this performance is significantly better than the 40.7% reported by. Krizhevsky et al. (2012), as well as the best all-convolutional architecture by Springenberg et al.. (2014). We believe it is quite likely that a better learning rate schedule and hyperparameter settings. of our model could substantially improve on the preliminary performance reported here.."}, {"section_index": "6", "section_name": "5 CONCLUSION", "section_text": "Our theory underlines the importance of identity parameterizations when training deep artificia neural networks. An outstanding open problem is to extend our optimization result to the non-lineai case where each residual has a single ReLU activiation as in our expressivity result. We conjecture that a result analogous to Theorem 2.2 is true for the general non-linear case. Unlike with the standard parameterization, we see no fundamental obstacle for such a result."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Antonio Auffinger, Gerard Ben Arous, and Jiri Cerny. Random matrices and complexity of spi glasses. Communications on Pure and Applied Mathematics, 66(2):165-201, 2013.\nAnna Choromanska. Mikael Henaff. Michael Mathieu. Gerard Ben Arous. and Yann LeCun. Th loss surfaces of multilayer networks. 
Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933-2941, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV, pp. 630-645, 2016. doi: 10.1007/978-3-319-46493-0_38. URL http://dx.doi.org/10.1007/978-3-319-46493-0_38.

Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. CoRR, abs/1608.06993, 2016. URL http://arxiv.org/abs/1608.06993.

K. Kawaguchi. Deep Learning without Poor Local Minima. ArXiv e-prints, May 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

D. Soudry and Y. Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. ArXiv e-prints, May 2016.

J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for Simplicity: The All Convolutional Net. ArXiv e-prints, December 2014.

A MISSING PROOFS IN SECTION 2

In this section, we give the complete proofs of Theorem 2.1 and Lemma 2.4, which are omitted in Section 2.

A.1 PROOF OF THEOREM 2.1

It turns out the proof is significantly easier if R is assumed to be a symmetric positive semidefinite (PSD) matrix, or if we allow the variables to be complex matrices. Here we first give a proof sketch for the first special case; the reader can skip it and jump to the full proof below. We will also prove a stronger result for this special case, namely, max_j ||A*_j|| ≤ 3γ/ℓ.

Write the eigendecomposition of R as R = U Z U^T, where U is orthonormal and Z = diag(z_1, ..., z_d), and set A*_1 = ... = A*_ℓ = U diag(z_i^{1/ℓ}) U^T − Id. We see that the network defined by the A*_j's reconstructs the transformation R, and is therefore a global minimum of the population risk (formally, see Claim 2.3 below):

(Id + A*_ℓ) ⋯ (Id + A*_1) = (U diag(z_i^{1/ℓ}) U^T)^ℓ = U diag(z_i^{1/ℓ})^ℓ U^T   (since U^T U = Id)
= U Z U^T = R.

Next, we verify that each of the A*_j has small spectral norm:

||A*_j|| = ||Id − U diag(z_i^{1/ℓ}) U^T|| = ||U (Id − diag(z_i^{1/ℓ})) U^T|| = max_i |z_i^{1/ℓ} − 1|.   (A.1)

Since |log z_i| ≤ γ by the conditioning assumption on R, we have

|z_i^{1/ℓ} − 1| = |e^{(log z_i)/ℓ} − 1| ≤ 3|(log z_i)/ℓ| ≤ 3γ/ℓ   (since |e^x − 1| ≤ 3|x| for all |x| ≤ 1).

Then, using equation (A.1) and the equation above, we have that max_j ||A*_j|| ≤ 3γ/ℓ, which completes the proof for the special case.
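As a quick numerical sanity check of the PSD special case above, the following numpy sketch (toy dimensions and the random seed are arbitrary assumptions) builds a well-conditioned PSD matrix R, forms the common factor A* = U diag(z_i^{1/ℓ}) U^T − Id, and verifies both the reconstruction and the spectral-norm bound:

```python
import numpy as np

rng = np.random.default_rng(1)
d, l, gamma = 8, 10, 0.5
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))    # random orthonormal basis U
z = np.exp(rng.uniform(-gamma, gamma, size=d))  # eigenvalues with |log z_i| <= gamma
R = Q @ np.diag(z) @ Q.T                        # symmetric PSD target transformation

A = Q @ np.diag(z ** (1.0 / l)) @ Q.T - np.eye(d)   # A*_j, identical for every layer
prod = np.linalg.matrix_power(np.eye(d) + A, l)
print(np.allclose(prod, R))                          # True: (Id + A)^l reconstructs R
print(np.linalg.norm(A, 2) <= 3 * gamma / l)         # True: spectral norm <= 3*gamma/l
```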
Next we give the formal full proof of Theorem 2.1.

Proof of Theorem 2.1. We assume the dimension d is an even number; the odd case has a very similar proof and is left to the reader. Let R = UKV^T be a singular value decomposition of R, where U, V are two orthonormal matrices and K is a diagonal matrix.

Since U is a normal matrix (that is, U satisfies UU^T = U^T U), by Claim C.1 we have that U can be block-diagonalized by an orthonormal matrix S into U = SDS^{-1}, where D = diag(D_1, ..., D_{d/2}) is a real block diagonal matrix with each block D_i of size 2 × 2.

Since U is orthonormal, all of its eigenvalues lie on the unit circle (in the complex plane). Since D and U are unitarily similar to each other, D also has eigenvalues lying on the unit circle, and so does each of the blocks D_i. This means that each D_i is a 2 × 2 rotation matrix. Each rotation matrix can be written as

T(θ) = [cos θ, −sin θ; sin θ, cos θ].

Suppose D_i = T(θ_i) where θ_i ∈ [−π, π]. Then we have D_i = T(θ_i/q)^q for any integer q (that is chosen later).4 Let W = diag(T(θ_1/q), ..., T(θ_{d/2}/q)). Therefore, it follows that D = diag(D_i) = W^q. Moreover, we have U = SDS^{-1} = (SWS^{-1})^q. Therefore, letting B_1 = B_2 = ... = B_q = SWS^{-1} − Id, we have U = (Id + B_q) ⋯ (Id + B_1). We verify that the spectral norms of these matrices are indeed small:

||B_j|| = ||SWS^{-1} − Id|| = ||S(W − Id)S^{-1}|| = ||W − Id||   (since S is unitary)
= max_{i∈[d/2]} ||T(θ_i/q) − T(0)||   (since W = diag(T(θ_i/q)) is block diagonal)
= max_{i∈[d/2]} 2|sin(θ_i/(2q))| ≤ π/q.

Similarly, we can choose matrices B'_1, ..., B'_q with ||B'_j|| ≤ π/q so that V^T = (Id + B'_q) ⋯ (Id + B'_1).

For the diagonal part, the conditioning assumption on R gives |log k_i| ≤ γ for every diagonal entry k_i of K, and therefore

||K^{1/p} − Id|| = max_i |e^{(log k_i)/p} − 1| ≤ 3 max_i |(log k_i)/p| ≤ 3γ/p   (since |e^x − 1| ≤ 3|x| for |x| ≤ 1),

so K = (K^{1/p})^p is a product of p factors of the form Id + A with ||A|| ≤ 3γ/p. Combining the three factorizations, we obtain

R = UKV^T = (Id + A_ℓ) ⋯ (Id + A_1),

where each A_i is one of the B_j, B'_j, or K^{1/p} − Id factors, and hence has the small spectral norm bounded above.

4Here for notational convenience, p, q are not chosen to be integers. But rounding them to the closest integer will change the final bound on the norm by a small constant factor.

A.2 PROOF OF LEMMA 2.4

We compute the partial gradients by definition. Let Δ_j ∈ R^{d×d} be an infinitesimal change to A_j. Using Claim 2.3, consider the Taylor expansion of f(A_1, ..., A_j + Δ_j, ..., A_ℓ):

f(A_1, ..., A_j + Δ_j, ..., A_ℓ)
= ||((Id + A_ℓ) ⋯ (Id + A_j + Δ_j) ⋯ (Id + A_1) − R) Σ^{1/2}||_F^2
= ||((Id + A_ℓ) ⋯ (Id + A_1) − R) Σ^{1/2} + (Id + A_ℓ) ⋯ (Id + A_{j+1}) Δ_j (Id + A_{j−1}) ⋯ (Id + A_1) Σ^{1/2}||_F^2
= f(A) + 2⟨((Id + A_ℓ) ⋯ (Id + A_1) − R) Σ^{1/2}, (Id + A_ℓ) ⋯ (Id + A_{j+1}) Δ_j (Id + A_{j−1}) ⋯ (Id + A_1) Σ^{1/2}⟩ + O(||Δ_j||_F^2)
= f(A) + 2⟨(Id + A_{j+1})^T ⋯ (Id + A_ℓ)^T E (Id + A_1)^T ⋯ (Id + A_{j−1})^T, Δ_j⟩ + O(||Δ_j||_F^2),

where E = ((Id + A_ℓ) ⋯ (Id + A_1) − R) Σ. Reading off the linear term yields the claimed expression for the gradient with respect to A_j.

In this section, we provide the full proof of Theorem 3.2. We start with the following lemma, which constructs a building block T that transforms k vectors of an arbitrary sequence of n vectors into an arbitrary set of target vectors while maintaining the value of the others. For better abstraction, we use α(i), β(i) to denote the sequences of vectors.

Lemma B.1. Let S ⊆ [n] be of size k. Suppose α(1), ..., α(n) is a sequence of n vectors satisfying a) for every 1 ≤ i ≤ n, we have 1 − ρ' ≤ ||α(i)||^2 ≤ 1 + ρ', and b) if i ≠ j and S contains at least one of i, j, then ||α(i) − α(j)||^2 ≥ 6ρ'. Let β(1), ..., β(n) be an arbitrary sequence of vectors. Then there exist U, V ∈ R^{k×k} and s such that for every i ∈ S we have T_{U,V,s}(α(i)) = β(i) − α(i), and moreover, for every i ∈ [n] \ S we have T_{U,V,s}(α(i)) = 0.

We can see that the conclusion implies

α(i) + T_{U,V,s}(α(i)) = β(i)   ∀ i ∈ S
α(i) + T_{U,V,s}(α(i)) = α(i)   ∀ i ∉ S

Proof of Lemma B.1. Without loss of generality, suppose S = {1, ..., k}. We construct U, V, s as follows. Let the i-th row of U be α(i) for i ∈ [k], and let s = (1 − 2ρ') · 1, where 1 denotes the all-1's vector. Let the i-th column of V be (β(i) − α(i)) / (||α(i)||^2 − (1 − 2ρ')) for i ∈ [k].

Next we verify the correctness of the construction.
We first consider 1 ≤ i ≤ k. We have that Uα(i) is a vector whose i-th coordinate equals ||α(i)||^2 ≥ 1 − ρ'. For j ≠ i, the j-th coordinate of Uα(i) is equal to ⟨α(j), α(i)⟩, which can be upper bounded using the assumptions of the lemma by

⟨α(j), α(i)⟩ = (||α(i)||^2 + ||α(j)||^2 − ||α(i) − α(j)||^2)/2 ≤ 1 + ρ' − 3ρ' ≤ 1 − 2ρ'.   (B.1)

Therefore, Uα(i) − s has i-th coordinate ||α(i)||^2 − (1 − 2ρ') ≥ ρ' > 0 and all other coordinates at most 0, so ReLU(Uα(i) − s) = (||α(i)||^2 − (1 − 2ρ')) e_i, and hence T_{U,V,s}(α(i)) = V ReLU(Uα(i) − s) = β(i) − α(i), as desired.

Finally, consider n ≥ i > k. Then, similarly to the computation in equation (B.1), Uα(i) is a vector with all coordinates less than 1 − 2ρ'. Therefore Uα(i) − s is a vector with negative entries. Hence we have ReLU(Uα(i) − s) = 0, which implies V ReLU(Uα(i) − s) = 0.

Now we are ready to state the formal version of Lemma 3.3.

Lemma B.2. Suppose the vectors z(1), ..., z(n) satisfy the norm and separation conditions above, and let v(1), ..., v(n) be the targets defined in equation (3.2). Then there exist weight matrices (A_1, B_1), ..., (A_ℓ, B_ℓ) such that the resulting network maps every input to its target:

z(i) → v(i)   ∀ i ∈ {1, ..., n}.   (B.2)

Proof. We will use Lemma B.1 repeatedly to construct building blocks T_{A_j,B_j,s_j}(·), and thus prove Lemma B.2. Each building block T_{A_j,B_j,s_j}(·) takes a subset of k vectors among {z(1), ..., z(n)} and converts them to the corresponding v(i)'s, while maintaining all other vectors as fixed. Since there are n/k layers in total, we finally map all of the z(i)'s to the target vectors v(i)'s.

Now we construct the layers inductively. We will construct the layers such that after layer j the hidden variables are v(1), ..., v(jk), z(jk+1), ..., z(n). Assume that we have constructed the first j layers, and next we use Lemma B.1 to construct the (j+1)-st layer. We argue that the choice of α(1) = v(1), ..., α(jk) = v(jk), α(jk+1) = z(jk+1), ..., α(n) = z(n), and S = {jk+1, ..., (j+1)k} satisfies the assumptions of Lemma B.1. Indeed, because the q_i's are chosen uniformly at random, we have w.h.p. for every s and i that ⟨q_s, z(i)⟩ ≤ 1 − ρ'. Thus, since v(i) ∈ {q_1, ..., q_r}, we have that v(i) also does not correlate with any of the z(i). Then we apply Lemma B.1 and conclude that there exist A_{j+1} = U, B_{j+1} = V, b_{j+1} = s such that T_{A_{j+1},B_{j+1},b_{j+1}}(v(i)) = 0 for i ≤ jk, T_{A_{j+1},B_{j+1},b_{j+1}}(z(i)) = v(i) − z(i) for jk < i ≤ (j+1)k, and T_{A_{j+1},B_{j+1},b_{j+1}}(z(i)) = 0 for n ≥ i > (j+1)k. These imply that

v(i) + T_{A_{j+1},B_{j+1},b_{j+1}}(v(i)) = v(i)   ∀ 1 ≤ i ≤ jk
z(i) + T_{A_{j+1},B_{j+1},b_{j+1}}(z(i)) = v(i)   ∀ jk+1 ≤ i ≤ (j+1)k
z(i) + T_{A_{j+1},B_{j+1},b_{j+1}}(z(i)) = z(i)   ∀ (j+1)k < i ≤ n

Therefore we have constructed the (j+1)-st layer, which meets the inductive hypothesis for layer j+1.

Now we are ready to prove Theorem 3.2, following the general plan sketched in Section 3. The final layer is constructed so that

e_j = v_j + T_{A_{ℓ+1},B_{ℓ+1},b_{ℓ+1}}(v_j), for every j ∈ {1, ..., r}.

Proof of Theorem 3.2. We formalize the intuition discussed below Theorem 3.2. First, take k = c(log n)/ρ^2 for a sufficiently large absolute constant c (for example, c = 10 works). By the Johnson-Lindenstrauss theorem (Johnson & Lindenstrauss (1984), or see Wikipedia (2016)), when A_0 is a random matrix with standard normal entries, with high probability all of the pairwise distances between the set of vectors {0, x(1), ..., x(n)} are preserved up to a 1 ± ρ/3 factor. That is, we have that for every i, 1 − ρ/3 ≤ ||A_0 x(i)|| ≤ 1 + ρ/3, and for every i ≠ j, ||A_0 x(i) − A_0 x(j)|| ≥ ρ(1 − ρ/3) ≥ 2ρ/3. Let z(i) = A_0 x(i) and ρ' = ρ/3. Then the z(i)'s satisfy the conditions of Lemma B.2. We pick r random vectors q_1, ..., q_r in R^k. Let v(1), ..., v(n) be defined as in equation (3.2). Then, by Lemma B.2, we can construct matrices (A_1, B_1), ..., (A_ℓ, B_ℓ) such that (B.2) holds.
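The construction in Lemma B.1 is easy to test numerically. The sketch below uses random unit vectors (so the norm condition holds exactly, and an assertion checks the separation actually used in the proof); the column normalization of V is taken from the verification step above, and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, rho = 64, 100, 0.1
# alpha: n random unit vectors in R^k, so ||alpha(i)||^2 = 1 for every i
alpha = rng.normal(size=(n, k))
alpha /= np.linalg.norm(alpha, axis=1, keepdims=True)
beta = rng.normal(size=(k, k))          # arbitrary targets for S = {0, ..., k-1}

U = alpha[:k]                           # i-th row of U is alpha(i), i in S
s = (1.0 - 2 * rho) * np.ones(k)        # bias s = (1 - 2 rho') * 1
scale = (alpha[:k] ** 2).sum(1) - (1.0 - 2 * rho)       # ||alpha(i)||^2 - (1 - 2 rho')
V = ((beta - alpha[:k]) / scale[:, None]).T             # i-th column of V

gram = alpha @ U.T                      # inner products <alpha(i), alpha(j)>, j in S
gram[np.arange(k), np.arange(k)] = -np.inf              # ignore diagonal entries
assert gram.max() <= 1 - 2 * rho        # the separation bound (B.1) holds here

def T(h):                               # T_{U,V,s}(h) = V ReLU(U h - s)
    return V @ np.maximum(U @ h - s, 0.0)

for i in range(n):
    out = alpha[i] + T(alpha[i])
    target = beta[i] if i < k else alpha[i]
    assert np.allclose(out, target)
print("Lemma B.1 construction verified on a random instance.")
```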
In this section, we state two folklore linear algebra statements. The following claim should be known, but we could not find it in the literature; we provide the proof here for completeness.

Claim C.1. Any real normal matrix U can be block-diagonalized by an orthonormal matrix S as

U = SDS^T,

where D is a real block diagonal matrix that consists of blocks of size at most 2 × 2. Moreover, if d is even, then D consists of blocks of size exactly 2 × 2.

Proof. Since U is a normal matrix, it is unitarily diagonalizable (see Weisstein (2016) for background). Therefore, there exist a unitary matrix V ∈ C^{d×d} and a diagonal matrix Λ ∈ C^{d×d} such that U = VΛV*. Since U is real, the eigenvalues (the diagonal entries of Λ) come in conjugate pairs, and so do the eigenvectors (which are the columns of V). That is, we can group the columns of V into pairs (v_1, v̄_1), ..., (v_s, v̄_s), v_{s+1}, ..., v_t, and let the corresponding eigenvalues be λ_1, λ̄_1, ..., λ_s, λ̄_s, λ_{s+1}, ..., λ_t, where λ_{s+1}, ..., λ_t ∈ R. Then we get that

U = Σ_{i=1}^{s} 2 Re(v_i λ_i v_i*) + Σ_{i=s+1}^{t} v_i λ_i v_i^T.

Let Q_i = Re(v_i λ_i v_i*); then Q_i is a real matrix of rank 2. Let S_i ∈ R^{d×2} be an orthonormal basis of the column span of Q_i; then Q_i can be written as Q_i = S_i D_i S_i^T, where D_i is a 2 × 2 matrix. Finally, letting S = [S_1, ..., S_s, v_{s+1}, ..., v_t] and D = diag(D_1, ..., D_s, λ_{s+1}, ..., λ_t), we complete the proof.

The following claim is used in the proof of Theorem 2.2. We provide a proof here for completeness.

Claim C.2. For any two matrices A and B of compatible dimensions,

||AB||_F ≥ σ_min(A) ||B||_F.

Proof. Since σ_min(A)^2 is the smallest eigenvalue of A^T A, we have A^T A ≽ σ_min(A)^2 Id. Therefore, it follows that

||AB||_F^2 = tr(B^T A^T A B) ≥ tr(B^T · σ_min(A)^2 Id · B) = σ_min(A)^2 tr(B^T B) = σ_min(A)^2 ||B||_F^2.

Taking the square root of both sides completes the proof.
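Claim C.2 can likewise be checked numerically; a minimal sketch on random matrices (shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.normal(size=(5, 5)), rng.normal(size=(5, 7))
lhs = np.linalg.norm(A @ B, 'fro')
sigma_min = np.linalg.svd(A, compute_uv=False).min()
print(lhs >= sigma_min * np.linalg.norm(B, 'fro'))   # True: ||AB||_F >= sigma_min(A) ||B||_F
```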
OUTRAGEOUSLY LARGE NEURAL NETWORKS: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER

Noam Shazeer1, Azalia Mirhoseini*1, Krzysztof Maziarz*2, Andy Davis1, Quoc Le1, Geoffrey Hinton1 and Jeff Dean1

ABSTRACT

The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.

1.1 CONDITIONAL COMPUTATION

Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text (Sutskever et al., 2014; Bahdanau et al., 2014; Jozefowicz et al., 2016; Wu et al., 2016), images (Krizhevsky et al., 2012; Le et al., 2012), and audio (Hinton et al., 2012; Amodei et al., 2015). For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.

1Google Brain, {noam,azalia,andydavis,qvl,geoffhinton,jeff}@google.com
2Jagiellonian University, Cracow, krzysztof.maziarz@student.uj.edu.pl

Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs (Davis & Arel, 2013; Bengio et al., 2013; Eigen et al., 2013; Ludovic Denoyer, 2014; Cho & Bengio, 2014; Bengio et al., 2015; Almahairi et al., 2015). In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation are proposed for training the gating decisions.

[Figure 1 diagram: an MoE layer, consisting of a gating network and expert networks Expert 1 through Expert n, is sandwiched between two stacked layers; gate values G(x)_i weight the outputs of the selected experts.]

Figure 1: A Mixture of Experts (MoE) layer embedded within a recurrent language model. In this case, the sparse gating function selects two experts to perform computations. Their outputs are modulated by the outputs of the gating network.
While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges:

Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision.

Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network.

Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity.

Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per-example. Bengio et al. (2015) use three such terms. These issues can affect both model quality and load-balancing.

Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions, of parameters.

In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets.

1.2 OUR APPROACH: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER

Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure 1). All parts of the network are trained jointly by back-propagation.

While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE convolutionally between stacked LSTM layers (Hochreiter & Schmidhuber, 1997), as in Figure 1. The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix E Table 9). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost.
Since its introduction more than two decades ago (Jacobs et al., 1991; Jordan & Jacobs, 1994), the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs (Collobert et al., 2002), Gaussian Processes (Tresp, 2001; Theis & Bethge, 2015; Deisenroth & Ng, 2015), Dirichlet Processes (Shahbaba & Neal, 2009), and deep networks. Other work has focused on different expert configurations, such as a hierarchical structure (Yao et al., 2009), infinite numbers of experts (Rasmussen & Ghahramani, 2002), and adding experts sequentially (Aljundi et al., 2016). Garmash & Monz (2016) suggest an ensemble model in the format of a mixture of experts for machine translation, where the gating network is trained on a pre-trained ensemble NMT model.

The works above concern top-level mixtures of experts, where the mixture of experts is the whole model. Eigen et al. (2013) introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems, each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.

The Mixture-of-Experts (MoE) layer consists of a set of n "expert networks" E_1, ..., E_n, and a "gating network" G whose output is a sparse n-dimensional vector. Figure 1 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same-sized inputs and produce the same-sized outputs, in our initial investigations in this paper we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.

Let us denote by G(x) and E_i(x) the output of the gating network and the output of the i-th expert network for a given input x. The output y of the MoE module can be written as follows:

y = Σ_{i=1}^{n} G(x)_i E_i(x)

We save computation based on the sparsity of the output of G(x). Wherever G(x)_i = 0, we need not compute E_i(x), as the sketch at the end of this section illustrates. In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix B.

Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in (Cho & Bengio, 2014). A MoE whose experts have one hidden layer is similar to the block-wise dropout described in (Bengio et al., 2015), where the dropped-out layer is sandwiched between fully-activated layers.

Our work builds on this use of MoEs as a general purpose neural network component. While Eigen et al. (2013) use two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.
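As an illustration of the equation above, here is a minimal numpy sketch of the MoE output with toy one-hidden-layer experts; the gate vector is hard-coded, since the gating network is only defined in the next section, and all names and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_hidden = 8, 16, 32
# Toy experts: identical one-hidden-layer ReLU architectures, separate parameters.
experts = [(rng.normal(size=(d_in, d_hidden)), rng.normal(size=(d_hidden, d_in)))
           for _ in range(n)]

def expert_out(i, x):
    w1, w2 = experts[i]
    return np.maximum(x @ w1, 0.0) @ w2

def moe(x, gates):
    # y = sum_i G(x)_i E_i(x); experts whose gate is zero are never evaluated.
    y = np.zeros_like(x)
    for i in np.flatnonzero(gates):
        y += gates[i] * expert_out(i, x)
    return y

x = rng.normal(size=d_in)
gates = np.zeros(n); gates[[2, 5]] = [0.7, 0.3]   # a sparse gate vector with k = 2
print(moe(x, gates).shape)                        # (16,): only 2 of 8 experts computed
```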
Softmax Gating: A simple choice of non-sparse gating function (Jordan & Jacobs, 1994) is to multiply the input by a trainable weight matrix W_g and then apply the Softmax function:

G_σ(x) = Softmax(x · W_g)

Noisy Top-K Gating: We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to −∞ (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of the gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix A. The amount of noise per component is controlled by a second trainable weight matrix W_noise:

G(x) = Softmax(KeepTopK(H(x), k))

H(x)_i = (x · W_g)_i + StandardNormal() · Softplus((x · W_noise)_i)

KeepTopK(v, k)_i = v_i if v_i is in the top k elements of v; −∞ otherwise.

Training the Gating Network: We train the gating network by simple back-propagation, along with the rest of the model. If we choose k > 1, the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in (Bengio et al., 2013) with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from (Bengio et al., 2015), who use boolean gates and a REINFORCE-style approach to train the gating network.

3.1 THE SHRINKING BATCH PROBLEM

On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses k out of n experts for each example, then for a batch of b examples, each expert receives a much smaller batch of approximately kb/n ≪ b examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:

Mixing Data Parallelism and Model Parallelism: In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert. Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over d devices, and each device processes a batch of size b, each expert receives a batch of approximately kbd/n examples, as the following sketch illustrates.
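A hedged numpy sketch of this setup, with toy sizes chosen only for illustration: it implements the noisy top-k gating defined above and then counts how many examples each expert receives, both for a single replica's batch and for the combined batch from d data-parallel replicas.

```python
import numpy as np

def softplus(x): return np.log1p(np.exp(x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def noisy_top_k_gates(x, Wg, Wnoise, k, rng):
    # H(x)_i = (x Wg)_i + StandardNormal() * Softplus((x Wnoise)_i)
    h = x @ Wg + rng.standard_normal(Wg.shape[1]) * softplus(x @ Wnoise)
    topk = np.argsort(h)[-k:]                    # KeepTopK: other logits become -inf
    logits = np.full_like(h, -np.inf); logits[topk] = h[topk]
    return softmax(logits)                       # G(x) = Softmax(KeepTopK(H(x), k))

rng = np.random.default_rng(0)
n, d_model, k, b, d = 32, 16, 4, 512, 8          # n experts, batch b per device, d devices
Wg, Wnoise = rng.normal(size=(d_model, n)), rng.normal(size=(d_model, n))

X = rng.normal(size=(b * d, d_model))            # combined batch from all d replicas
counts = np.zeros(n, dtype=int)
for x in X:
    counts += noisy_top_k_gates(x, Wg, Wnoise, k, rng) > 0
print(k * b / n, counts.mean())
# ~kb/n = 64 examples per expert from one replica's batch,
# but ~kbd/n = 512 once the d replicas are combined.
```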
Thus, we achieve a factor of d improvement in expert batch size.

In the case of a hierarchical MoE (Section B), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.

This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.

Taking Advantage of Convolutionality: In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.

Increasing Batch Size for a Recurrent MoE: We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. Gruslys et al. (2016) describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size.

3.2 NETWORK BANDWIDTH

Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes input_size × hidden_size and hidden_size × output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.

4 BALANCING EXPERT UTILIZATION

We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts. This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. Eigen et al. (2013) describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. Bengio et al. (2015) include a soft constraint on the batch-wise average of each gate.1

1Bengio et al. (2015) also include two additional losses. One controls per-example sparsity, which we do not need since it is enforced by the fixed value of k. A third loss encourages diversity of gate values. In our experiments, we find that the gate values naturally diversify as the experts specialize (in a virtuous cycle), and we do not need to enforce diversity of gate values.

We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss L_importance, which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor w_importance. This additional loss encourages all experts to have equal importance:

Importance(X) = Σ_{x∈X} G(x)

L_importance(X) = w_importance · CV(Importance(X))^2
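A minimal numpy sketch of these two definitions; the batch of gate vectors is synthetic and the helper names are ours:

```python
import numpy as np

def cv_squared(v):
    # Squared coefficient of variation: Var(v) / Mean(v)^2.
    return v.var() / (v.mean() ** 2)

def importance_loss(gate_matrix, w_importance=0.1):
    # gate_matrix: (batch, n) gate values G(x); Importance_i = batchwise sum of gates.
    importance = gate_matrix.sum(axis=0)
    return w_importance * cv_squared(importance)

rng = np.random.default_rng(0)
balanced = rng.dirichlet(np.ones(8), size=64)      # every expert used about equally
skewed = np.zeros((64, 8)); skewed[:, 0] = 1.0     # one expert takes everything
print(importance_loss(balanced), importance_loss(skewed))  # small loss vs. large loss
```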
While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, L_load, which ensures balanced loads. Appendix A contains the definition of this function, along with experimental results.

5.1 1 BILLION WORD LANGUAGE MODELING BENCHMARK

Dataset: This dataset, introduced by (Chelba et al., 2013), consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.

Previous State-of-the-Art: The best previously published results (Jozefowicz et al., 2016) use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers (Hochreiter & Schmidhuber, 1997; Gers et al., 2000). The number of parameters in the LSTM layers of these models varies from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure 2-right.

MoE Models: Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure 1). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix C.

Low Computation, Varied Capacity: To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.

The results of these models are shown in Figure 2-left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.

[Figure 2: the left panel plots test perplexity against model parameters (excluding embedding and softmax) for baseline models, flat MoE models, and hierarchical MoE models; the right panel plots test perplexity against ops/timestep for LSTM models and MoE models.]

Figure 2: Model comparison on 1-Billion-Word Language-Modeling Benchmark. On the left, we plot test perplexity as a function of model capacity for models with similar computational budgets of approximately 8-million-ops-per-timestep. On the right, we plot test perplexity as a function of computational budget. The top line represents the LSTM models from (Jozefowicz et al., 2016). The bottom line represents 4-billion parameter MoE models with different computational budgets.
Varied Computation, High Capacity: In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix C.2. Results of these three models form the bottom line of Figure 2-right. Table 1 compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.

Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C.

Model | Test Perplexity (10 epochs) | Test Perplexity (100 epochs) | #Parameters excluding embedding and softmax layers | ops/timestep | Training Time (10 epochs) | TFLOPS/GPU
Best Published Results | 34.7 | 30.6 | 151 million | 151 million | 59 hours, 32 k40s | 1.09
Low-Budget MoE Model | 34.1 | - | 4303 million | 8.9 million | 15 hours, 16 k40s | 0.74
Medium-Budget MoE Model | 31.3 | - | 4313 million | 33.8 million | 17 hours, 32 k40s | 1.22
High-Budget MoE Model | 28.0 | - | 4371 million | 142.7 million | 47 hours, 32 k40s | 1.56

Computational Efficiency: We trained our models using TensorFlow (Abadi et al., 2016) on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers, in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.

For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computational efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model, which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix C, Table 7.

[Figure 3 plots test perplexity against model parameters (excluding embedding and softmax), after training on 10 billion words (top curve) and after training on 100 billion words (bottom curve).]

Figure 3: Language modeling on a 100 billion word corpus. Models have similar computational budgets (8 million ops/timestep).

On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure 2-left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements.

We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words.
Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix D.

Results: Figure 3 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets.

Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU.

5.3 MACHINE TRANSLATION (SINGLE LANGUAGE PAIR)

Model Architecture: Our model was a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts, each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure, and results can be found in Appendix E.

Datasets: We benchmarked our method on the WMT'14 En→Fr and En→De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in (Wu et al., 2016): newstest2014 was used as the test set to compare against previous work (Luong et al., 2015a; Zhou et al., 2016; Wu et al., 2016), while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on Google's Production English to French data.

Table 2: Results on WMT'14 En→Fr newstest2014 (bold values represent best results)

Model | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time
MoE with 2048 Experts | 2.69 | 40.35 | 85M | 8.7B | 3 days/64 k40s
MoE with 2048 Experts (longer training) | 2.63 | 40.56 | 85M | 8.7B | 6 days/64 k40s
GNMT (Wu et al., 2016) | 2.79 | 39.22 | 214M | 278M | 6 days/96 k80s
GNMT+RL (Wu et al., 2016) | 2.96 | 39.92 | 214M | 278M | 6 days/96 k80s
PBMT (Durrani et al., 2014) | - | 37.0 | - | - | -
LSTM (6-layer) (Luong et al., 2015b) | - | 31.5 | - | - | -
LSTM (6-layer+PosUnk) (Luong et al., 2015b) | - | 33.1 | - | - | -
DeepAtt (Zhou et al., 2016) | - | 37.7 | - | - | -
DeepAtt+PosUnk (Zhou et al., 2016) | - | 39.2 | - | - | -
Table 3: Results on WMT'14 En→De newstest2014 (bold values represent best results)

Model | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time
MoE with 2048 Experts | 4.64 | 26.03 | 85M | 8.7B | 1 day/64 k40s
GNMT (Wu et al., 2016) | 5.25 | 24.91 | 214M | 278M | 1 day/96 k80s
GNMT+RL (Wu et al., 2016) | 8.08 | 24.66 | 214M | 278M | 1 day/96 k80s
PBMT (Durrani et al., 2014) | - | 20.7 | - | - | -
DeepAtt (Zhou et al., 2016) | - | 20.6 | - | - | -

Table 4: Results on the Google Production En→Fr dataset (bold values represent best results)

Model | Eval Perplexity | Eval BLEU | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time
MoE with 2048 Experts | 2.60 | 37.27 | 2.69 | 36.57 | 85M | 8.7B | 1 day/64 k40s
GNMT (Wu et al., 2016) | 2.78 | 35.80 | 2.87 | 35.56 | 214M | 278M | 6 days/96 k80s

Results: Tables 2, 3, and 4 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En→Fr and En→De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in (Wu et al., 2016). The perplexity scores are also better.2 On the Google Production dataset, our model achieved a 1.01 higher test BLEU score even after training for only one sixth of the time.

2Reported perplexities relative to the tokenization used by both our models and GNMT.

5.4 MULTILINGUAL MACHINE TRANSLATION

Dataset: (Johnson et al., 2016) train a single GNMT (Wu et al., 2016) model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models. This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix E for details on model architecture. We train our model on the same dataset as (Johnson et al., 2016) and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.

Results: Results for the single-pair GNMT models, the multilingual GNMT model, and the multilingual MoE model are given in Table 5. The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English → Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.

Table 5: Multilingual Machine Translation (bold values represent best results)

| GNMT-Mono | GNMT-Multi | MoE-Multi | MoE-Multi vs. GNMT-Multi
Parameters | 278M/model | 278M | 8.7B |
ops/timestep | 212M | 212M | 102M |
training time, hardware | various | 21 days, 96 k20s | 12 days, 64 k40s |
Perplexity (dev) | - | 4.14 | 3.35 | -19%
French → English Test BLEU | 36.47 | 34.40 | 37.46 | +3.06
German → English Test BLEU | 31.77 | 31.17 | 34.80 | +3.63
Japanese → English Test BLEU | 23.41 | 21.62 | 25.91 | +4.29
Korean → English Test BLEU | 25.42 | 22.87 | 28.71 | +5.84
Portuguese → English Test BLEU | 44.40 | 42.53 | 46.13 | +3.60
Spanish → English Test BLEU | 38.00 | 36.04 | 39.39 | +3.35
English → French Test BLEU | 35.37 | 34.00 | 36.59 | +2.59
English → German Test BLEU | 26.43 | 23.15 | 24.53 | +1.38
English → Japanese Test BLEU | 23.66 | 21.10 | 22.78 | +1.68
English → Korean Test BLEU | 19.75 | 18.41 | 16.62 | -1.79
English → Portuguese Test BLEU | 38.40 | 37.35 | 37.90 | +0.55
English → Spanish Test BLEU | 34.50 | 34.25 | 36.21 | +1.96

ACKNOWLEDGMENTS

We would like to thank all of the members of the Google Brain and Google Translate teams who helped us with this project, in particular Zhifeng Chen, Yonghui Wu, and Melvin Johnson. Thanks also to our anonymous ICLR reviewers for the helpful suggestions on making this paper better.
This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.

REFERENCES

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015.

Yoshua Bengio, Nicholas Leonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Ronan Collobert, Samy Bengio, and Yoshua Bengio. A parallel mixture of SVMs for very large scale problems. Neural Computing, 2002.

Andrew Davis and Itamar Arel. Low-rank approximations for conditional feedforward computation in deep neural networks. arXiv preprint arXiv:1312.4461, 2013.

Marc Peter Deisenroth and Jun Wei Ng. Distributed Gaussian processes. In ICML, 2015.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization, 2010.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, 2015.

Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012.
Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Viegas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558, 2016. URL http://arxiv.org/abs/1611.04558.

Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computing, 1994.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling. 1995.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.

Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeffrey Dean, and Andrew Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. EMNLP, 2015a.

Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. ACL, 2015b.

Hasim Sak, Andrew W. Senior, and Francoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338-342, 2014.

Mike Schuster and Kaisuke Nakajima. Japanese and Korean voice search. ICASSP, 2012.

Babak Shahbaba and Radford Neal. Nonlinear models using dirichlet process mixtures. JMLR, 2009.

Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In NIPS, 2015.

Volker Tresp. Mixtures of Gaussian Processes. In NIPS, 2001.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Bangpeng Yao, Dirk Walther, Diane Beck, and Li Fei-fei. Hierarchical mixture of classification experts uncovers interactions between brain regions. In NIPS, 2009.

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. arXiv preprint arXiv:1606.04199, 2016.

A LOAD-BALANCING LOSS

As discussed in Section 4, for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it cannot be used in back-propagation. Instead, we define a smooth estimator Load(X) of the number of examples assigned to each expert for a batch X of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define P(x, i) as the probability that G(x)_i is nonzero, given a new random choice of noise on element i, but keeping the already-sampled choices of noise on the other elements. To compute P(x, i), we note that G(x)_i is nonzero if and only if H(x)_i is greater than the kth-greatest element of H(x) excluding itself. The probability works out to be:

P(x, i) = Pr((x · W_g)_i + StandardNormal() · Softplus((x · W_noise)_i) > kth_excluding(H(x), k, i)),

where kth_excluding(v, k, i) means the kth highest component of v, excluding component i. Simplifying, we get:

P(x, i) = Φ( ((x · W_g)_i − kth_excluding(H(x), k, i)) / Softplus((x · W_noise)_i) ),

where Φ is the CDF of the standard normal distribution.

Load(X)_i = Σ_{x∈X} P(x, i)

We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor w_load:

L_load(X) = w_load · CV(Load(X))^2
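A minimal numpy/scipy sketch of this estimator and the load loss; the gating parameters, sizes, and helper names here are synthetic, illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def softplus(x): return np.log1p(np.exp(x))

def kth_excluding(v, k, i):
    # kth highest component of v, excluding component i.
    return np.sort(np.delete(v, i))[-k]

def load_estimate(x, H, Wg, Wnoise, k):
    # P(x, i) = Phi(((x Wg)_i - kth_excluding(H(x), k, i)) / Softplus((x Wnoise)_i))
    clean, width = x @ Wg, softplus(x @ Wnoise)
    return np.array([norm.cdf((clean[i] - kth_excluding(H, k, i)) / width[i])
                     for i in range(len(clean))])

def cv_squared(v): return v.var() / (v.mean() ** 2)

rng = np.random.default_rng(0)
n, d, k, batch, w_load = 16, 8, 4, 32, 0.1
Wg, Wnoise = rng.normal(size=(d, n)), rng.normal(size=(d, n))
load = np.zeros(n)
for _ in range(batch):
    x = rng.normal(size=d)
    H = x @ Wg + rng.standard_normal(n) * softplus(x @ Wnoise)  # noisy gating logits
    load += load_estimate(x, H, Wg, Wnoise, k)                  # Load(X)_i = sum_x P(x, i)
print(w_load * cv_squared(load))                                # L_load(X)
```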
Experiments: We trained a set of models with identical architecture (the MoE-256 model described in Appendix C), using different values of w_importance and w_load. We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in Importance and Load, as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load-balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.

Table 6: Experiments with different combinations of losses

w_importance | w_load | Test Perplexity | CV(Importance(X)) | CV(Load(X)) | max(Load(X))/mean(Load(X))
0.0 | 0.0 | 39.8 | 3.04 | 3.01 | 17.80
0.2 | 0.0 | 35.6 | 0.06 | 0.17 | 1.47
0.0 | 0.2 | 35.7 | 0.22 | 0.04 | 1.15
0.1 | 0.1 | 35.6 | 0.06 | 0.05 | 1.14
0.01 | 0.01 | 35.7 | 0.48 | 0.11 | 1.37
1.0 | 1.0 | 35.7 | 0.03 | 0.02 | 1.07

Results: Results are reported in Table 6. All the combinations containing at least one of the two losses led to very similar model quality, whereas having no loss was much worse. Models with higher values of w_load had lower loads on the most overloaded expert.

Initial Load Imbalance: To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices W_g and W_noise to all zeros, which yields no signal and some noise.

B HIERARCHICAL MIXTURE OF EXPERTS

If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network.3 If the hierarchical MoE consists of a groups of b experts each, we denote the primary gating network by G_primary, the secondary gating networks by (G_1, G_2, ..., G_a), and the expert networks by (E_{1,1}, E_{1,2}, ..., E_{a,b}). The output of the MoE is given by:

3We have not found the need for deeper hierarchies.

y_H = Σ_{i=1}^{a} Σ_{j=1}^{b} G_primary(x)_i · G_i(x)_j · E_{i,j}(x)

The importance and load metrics change accordingly:

Importance_H(X)_{i,j} = Σ_{x∈X} G_primary(x)_i · G_i(x)_j

Load_H(X)_{i,j} = (Load_primary(X)_i · Load_i(X^(i))_j) / |X^(i)|

Load_primary and Load_i denote the Load functions for the primary gating network and the i-th secondary gating network respectively. X^(i) denotes the subset of X for which G_primary(x)_i > 0.

It would seem simpler to let Load_H(X)_{i,j} = Load_i(X^(i))_j, but this would not have a gradient with respect to the primary gating network, so we use the formulation above.
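A small numpy sketch of the hierarchical combination above, using toy linear "experts" and hard-coded sparse gate values for illustration (all names and sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, d = 4, 4, 8                          # a groups of b experts each
experts = rng.normal(size=(a, b, d, d))    # E_{i,j}: toy linear experts

def hierarchical_moe(x, g_primary, g_secondary):
    # y_H = sum_i sum_j G_primary(x)_i * G_i(x)_j * E_{i,j}(x)
    y = np.zeros_like(x)
    for i in np.flatnonzero(g_primary):            # only active groups
        for j in np.flatnonzero(g_secondary[i]):   # only active experts within them
            y += g_primary[i] * g_secondary[i, j] * (experts[i, j] @ x)
    return y

g_primary = np.zeros(a); g_primary[[0, 2]] = [0.6, 0.4]          # sparse primary gates
g_secondary = np.zeros((a, b)); g_secondary[[0, 0, 2, 2], [1, 3, 0, 2]] = 0.5
print(hierarchical_moe(rng.normal(size=d), g_primary, g_secondary).shape)  # (8,)
```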
C 1 BILLION WORD LANGUAGE MODELING BENCHMARK - EXPERIMENTAL DETAILS

Model Architecture: Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer (Hochreiter & Schmidhuber, 1997; Gers et al., 2000), a MoE layer, a second LSTM layer, and a softmax layer. The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout (Zaremba et al., 2014) to the layer output, dropping each activation with probability DropProb, otherwise dividing by (1 − DropProb). After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow (He et al., 2015).

MoE Layer Architecture: Each expert in the MoE layer is a feed-forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains [512 × 1024] + [1024 × 512] = 1M parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first-level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section 2.1) with k = 4 for the ordinary MoE layers and k = 2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each, for the desired total of 8M.

Computationally-Matched Baselines: The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:

MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.

MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.

4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.

LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions (Sak et al., 2014). The next timestep of the LSTM receives the projected output. This is identical to one of the models published in (Jozefowicz et al., 2016). We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.

Training: The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section 3. Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling, similarly to the models in (Jozefowicz et al., 2016). For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1.
To ensure balanced expert utilization, we set w_importance = 0.1 and w_load = 0.1, as described in Section 4 and Appendix A.

Results: We evaluate our model using perplexity on the holdout dataset, used by (Chelba et al., 2013; Jozefowicz et al., 2016). We follow the standard procedure and sum over all the words, including the end-of-sentence symbol. Results are reported in Table 7. For each model, we report the test perplexity, the computational budget, the parameter counts, the value of DropProb, and the computational efficiency.

Table 7: Model comparison on 1 Billion Word Language Modeling Benchmark. Models marked with * are from (Jozefowicz et al., 2016).

Model | Test Perplexity (10 epochs) | Test Perplexity (final) | ops/timestep (millions) | #Params excluding embed. & softmax (millions) | Total #Params (billions) | DropProb | TFLOPS per GPU (observed)
Kneser-Ney 5-gram* | - | 67.6 | 0.00001 | - | 1.8 | - | -
LSTM-512-512* | - | 54.1 | 2.4 | 2.4 | 0.8 | 0.1 | -
LSTM-1024-512* | - | 48.2 | 4.7 | 4.7 | 0.8 | 0.1 | -
LSTM-2048-512* | 45.0 | 43.7 | 9.4 | 9.4 | 0.8 | 0.1 | 0.61
LSTM-2048-512 | 44.7 | - | 9.4 | 9.4 | 0.8 | 0.1 | 1.21
4xLSTM-512 | 46.0 | - | 8.4 | 8.4 | 0.8 | 0.1 | 1.07
MoE-1-Wide | 46.1 | - | 8.4 | 8.4 | 0.8 | 0.1 | 1.29
MoE-1-Deep | 45.7 | - | 8.4 | 8.4 | 0.8 | 0.1 | 1.29
MoE-4 | 45.0 | - | 8.4 | 8.4 | 0.8 | 0.1 | 0.52
MoE-32 | 39.7 | - | 8.4 | 37.8 | 0.9 | 0.1 | 0.87
MoE-256 | 35.7 | - | 8.6 | 272.9 | 1.1 | 0.1 | 0.81
MoE-256-h | 36.0 | - | 8.4 | 272.9 | 1.1 | 0.1 | 0.89
MoE-1024-h | 34.6 | - | 8.5 | 1079.0 | 1.9 | 0.2 | 0.90
MoE-4096-h | 34.1 | - | 8.9 | 4303.4 | 5.1 | 0.2 | 0.74
2xLSTM-8192-1024* | 34.7 | 30.6 | 151.0 | 151.0 | 1.8 | 0.25 | 1.09
MoE-34M | 31.3 | - | 33.8 | 4313.9 | 6.0 | 0.3 | 1.22
MoE-143M | 28.0 | - | 142.7 | 4371.1 | 6.0 | 0.4 | 1.56

C.2 MORE EXPENSIVE MODELS

We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer, are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 (Sak et al., 2014). MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best DropProb for each model, and trained each model for 10 epochs.

The two models achieved test perplexity of 31.3 and 28.0 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table 7. The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by 18%.
The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by 18%.

"}, {"section_index": "13", "section_name": "D 100 BILLION WORD GOOGLE NEWS CORPUS - EXPERIMENTAL DETAILS", "section_text": "Model Architecture: The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively.

Training: Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words.

We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage:

The Adam optimizer (Kingma & Ba, 2015) keeps first and second moment estimates of the per-parameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set $\beta_1 = 0$. To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad (Duchi et al., 2010).
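The factored second-moment approximation lends itself to a short sketch. The NumPy snippet below is a hedged illustration of a single update step under this scheme, not the production optimizer: the hyper-parameter names and the absence of running averages over steps are simplifying assumptions.

```python
# Sketch of the factored second-moment trick for one gradient step (assumed sizes).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))            # a matrix of parameters
grad = rng.normal(size=W.shape)        # its gradient for this step
lr, eps = 1e-2, 1e-30

g2 = grad ** 2
row_avg = g2.mean(axis=1)              # per-row averages of squared gradients
col_avg = g2.mean(axis=0)              # per-column averages

# Full second-moment matrix approximated by the outer product of the two
# vectors, divided by the mean of either one (both means equal the global
# mean of g2); the approximation is exact when g2 has rank 1.
approx_v = np.outer(row_avg, col_avg) / (row_avg.mean() + eps)

# Adam-style update with beta_1 = 0, so no first-moment estimator is kept.
W -= lr * grad / (np.sqrt(approx_v) + 1e-8)
print(W.shape, approx_v.shape, row_avg.shape, col_avg.shape)
```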
Table 8: Model comparison on 100 Billion Word Google News Dataset

Model | Test PPL (.1 epochs) | Test PPL (1 epoch) | ops/timestep (millions) | #Params excl. embed. & softmax (millions) | Total #Params (billions) | TFLOPS per GPU (observed)
Kneser-Ney 5-gram | 67.1 | 45.3 | 0.00001 | - | 76.0 | -
4xLSTM-512 | 54.5 | 47.0 | 8.4 | 8.4 | 0.1 | 1.23
MoE-32 | 48.5 | 40.4 | 8.4 | 37.8 | 0.1 | 0.83
MoE-256-h | 42.8 | 35.3 | 8.4 | 272.9 | 0.4 | 1.11
MoE-1024-h | 40.3 | 32.7 | 8.5 | 1079.0 | 1.2 | 1.14
MoE-4096-h | 38.9 | 30.9 | 8.6 | 4303.4 | 4.4 | 1.07
MoE-16384-h | 38.2 | 29.7 | 8.8 | 17201.0 | 17.3 | 0.96
MoE-65536-h | 38.2 | 28.9 | 9.2 | 68791.0 | 68.9 | 0.72
MoE-131072-h | 39.8 | 29.2 | 9.7 | 137577.6 | 137.7 | 0.30

Results: We evaluate our model using perplexity on a holdout dataset. Results are reported in Table 8. Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing (Kneser & Ney, 1995).⁴

⁴While the original size of the corpus was 130 billion words, the neural models were trained for a maximum of 100 billion words. The reported Kneser-Ney 5-gram models were trained over 13 billion and 130 billion words respectively, giving them a slight advantage over the other reported results.

"}, {"section_index": "14", "section_name": "E MACHINE TRANSLATION - EXPERIMENTAL DETAILS", "section_text": "Model Architecture for Single Language Pair MoE Models: Our model is a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention.⁵ All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow (He et al., 2015). Similar to GNMT, to effectively deal with rare words, we used sub-word units (also known as "wordpieces") (Schuster & Nakajima, 2012) for inputs and outputs in our system.

⁵For performance reasons, we use a slightly different attention function from the one described in (Wu et al., 2016) - see Appendix G.

We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in (Wu et al., 2016).

We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use k = 4 and the hierarchical MoE models use k = 2 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains $[512 \times 2048] + [2048 \times 512] = 2M$ parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix F.

Training: We trained our networks using the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to (Wu et al., 2016), we applied dropout (Zaremba et al., 2014) to the output of all embedding, LSTM and MoE layers, using DropProb = 0.4. Training was done synchronously on a cluster of up to 64 GPUs as described in Section 3. Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU.

To ensure balanced expert utilization we set $w_{importance} = 0.01$ and $w_{load} = 0.01$, as described in Section 4 and Appendix A.

Metrics: We evaluated our models using the perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in (Luong et al., 2015a).

Model Architecture for Multilingual MoE Model: We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section 2.1, not the scheme from Appendix F.
The MoE layers in the encoder and decoder are non-hierarchical MoEs with n = 512 experts, and k = 2. Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.

Results: Tables 2, 3 and 4 in Section 5.3 show comparisons of our results to other published methods. Figure 4 shows test perplexity as a function of the number of words in the training data's source sentences processed, for models with different numbers of experts. As can be seen from the figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.

[Figure 4: two panels plotting test perplexity against the number of source words processed, for models with 0, 32, 512 and 2048 experts.]

Figure 4: Perplexity on WMT'14 En->Fr (left) and Google Production En->Fr (right) datasets as a function of number of words processed. The large differences between models at the beginning of training are due to different batch sizes. All models incur the same computational budget (85M ops/timestep) except the one with no experts.

We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table 9. For example, one expert is used when the indefinite article "a" introduces the direct object in a verb phrase indicating importance or leadership.

Table 9: Contexts corresponding to a few of the 2048 experts in the MoE layer in the encoder portion of the WMT'14 En->Fr translation model. For each expert i, we sort the inputs in a training batch in decreasing order of $G(x)_i$, and show the words surrounding the corresponding positions in the input sentences.

Expert 381 | Expert 752 | Expert 2004
... with researchers , ... | ... plays a core ... | ... with rapidly growing ...
... to innovation ... | ... plays a critical ... | ... under static conditions ...
... tics researchers ... | ... provides a legislative ... | ... to swift ly ...
... the generation of ... | ... play a leading ... | ... to dras tically ...
... technology innovations is ... | ... assume a leadership ... | ... the rapid and ...
... technological innovations , ... | ... plays a central ... | ... the fast est ...
... support innovation throughout ... | ... taken a leading ... | ... the Quick Method ...
... role innovation will ... | ... established a reconciliation ... | ... rec urrent ) ...
... research scienti st ... | ... played a vital ... | ... provides quick access ...
... promoting innovation where ... | ... have a central ... | ... of volatile organic ...

Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below.

Recall that we define the softmax gating function to be:

$$G_\sigma(x) = Softmax(x \cdot W_g)$$

Sparse Gating (alternate formulation): To obtain a sparse gating vector, we multiply $G_\sigma(x)$ component-wise with a sparse mask $M(G_\sigma(x))$ and normalize the output. The mask itself is a function of $G_\sigma(x)$ and specifies which experts are assigned to each input example:

$$G(x)_i = \frac{G_\sigma(x)_i \, M(G_\sigma(x))_i}{\sum_{j=1}^{n} G_\sigma(x)_j \, M(G_\sigma(x))_j}$$

Top-K Mask: To implement top-k gating in this formulation, we would let $M(v) = TopK(v, k)$, where

$$TopK(v, k)_i = \begin{cases} 1 & \text{if } v_i \text{ is in the top } k \text{ elements of } v \\ 0 & \text{otherwise.} \end{cases}$$

Batchwise Mask: To force each expert to receive the exact same number of examples, we introduce an alternative mask function, $M_{batchwise}(X, m)$, which operates over batches of input vectors. Instead of keeping the top k values per example, we keep the top m values per expert across the training batch, where $m = \frac{k|X|}{n}$, so that each example is sent to an average of k experts:

$$M_{batchwise}(X, m)_{j,i} = \begin{cases} 1 & \text{if } X_{j,i} \text{ is in the top } m \text{ values for expert } i \\ 0 & \text{otherwise} \end{cases}$$

As our experiments suggest, and as also observed in (Ioffe & Szegedy, 2015), using a batchwise function during training (such as $M_{batchwise}$) requires modifications to the inference when we may not have a large batch of examples. Our solution to this is to train a vector T of per-expert threshold values to approximate the effects of the batchwise mask. We use the following mask at inference time:

$$M_{threshold}(x, T)_i = \begin{cases} 1 & \text{if } x_i > T_i \\ 0 & \text{otherwise} \end{cases}$$

To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical:

$$L_{batchwise}(X, T, m) = \sum_{j=1}^{|X|} \sum_{i=1}^{n} \left( M_{threshold}(x, T)_i - M_{batchwise}(X, m)_{j,i} \right) \left( X_{j,i} - T_i \right)$$
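The two masks and the threshold loss can be illustrated with a minimal NumPy sketch; this is an editorial example under assumed shapes, not the authors' implementation, and the thresholds T would in practice be learned rather than fixed at zero.

```python
# Sketch of the batchwise mask, the inference-time threshold mask, and L_batchwise.
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_experts, k = 12, 6, 2
m = k * n_examples // n_experts        # m = k|X|/n values kept per expert
X = rng.normal(size=(n_examples, n_experts))   # gate scores per example/expert
T = np.zeros(n_experts)                        # per-expert thresholds (learned)

# M_batchwise: keep the top m scores per expert (column) across the batch.
order = np.argsort(-X, axis=0)
M_batchwise = np.zeros_like(X)
for i in range(n_experts):
    M_batchwise[order[:m, i], i] = 1.0
assert np.allclose(M_batchwise.sum(axis=0), m)  # each expert gets exactly m

# M_threshold: at inference, keep scores above the per-expert threshold.
M_threshold = (X > T).astype(float)

# L_batchwise = sum_{j,i} (M_threshold - M_batchwise) * (X_{j,i} - T_i); its
# gradient w.r.t. T nudges the thresholds toward the batchwise cutoffs.
loss = np.sum((M_threshold - M_batchwise) * (X - T))
print(loss)
```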
The attention mechanism described in GNMT (Wu et al., 2016) involves a learned "Attention Function" $A(x_i, y_j)$ which takes a "source vector" $x_i$ and a "target vector" $y_j$, and must be computed for every source time step i and target time step j. In GNMT, the attention function is implemented as a feed forward neural network with a hidden layer of size n. It can be expressed as:

$$A_{GNMT}(x_i, y_j) = \sum_{d=1}^{n} V_d \tanh\left( (x_i U)_d + (y_j W)_d \right)$$

where U and W are trainable weight matrices and V is a trainable weight vector.

For performance reasons, in our models, we used a slightly different attention function:

$$A(x_i, y_j) = \sum_{d=1}^{n} V_d \tanh\left( (x_i U)_d \right) \tanh\left( (y_j W)_d \right)$$

With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions.
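The following NumPy sketch contrasts the two attention functions; all dimensions and random parameters are illustrative assumptions. The point is that the factored tanh form scores every (i, j) pair with plain matrix products.

```python
# Sketch contrasting the GNMT attention and the factored variant (assumed sizes).
import numpy as np

rng = np.random.default_rng(0)
src_len, tgt_len, d_in, n = 5, 7, 8, 16
x = rng.normal(size=(src_len, d_in))   # source vectors x_i
y = rng.normal(size=(tgt_len, d_in))   # target vectors y_j
U = rng.normal(size=(d_in, n))
W = rng.normal(size=(d_in, n))
V = rng.normal(size=(n,))

# GNMT-style: tanh couples i and j, so each pair needs its own n-dim evaluation.
a_gnmt = np.einsum('ijd,d->ij',
                   np.tanh((x @ U)[:, None, :] + (y @ W)[None, :, :]), V)

# Factored form: sum_d V_d tanh((x_i U)_d) tanh((y_j W)_d) collapses into
# (tanh(xU) * V) @ tanh(yW)^T, i.e. two projections and one matmul.
a_fast = (np.tanh(x @ U) * V) @ np.tanh(y @ W).T
print(a_gnmt.shape, a_fast.shape)      # both (src_len, tgt_len)
```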
"}]
B1hdzd5lg
[{"section_index": "0", "section_name": "WORDS OR CHARACTERS? FINE-GRAINED GATING FOR READING COMPREHENSION", "section_text": "{zhiliny, wcohen, rsalakhu}@cs.cmu.edu"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children's Book Test and Who Did What datasets. To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Finding semantically meaningful representations of the words (also called tokens) in a document is necessary for strong performance in Natural Language Processing tasks. In neural networks, tokens are mainly represented in two ways, either using word-level representations or character-level representations. Word-level representations are obtained from a lookup table, where each unique token is represented as a vector. Character-level representations are usually obtained by applying recurrent neural networks (RNNs) or convolutional neural networks (CNNs) on the character sequence of the token, and their hidden states are combined to form the representation. Word-level representations are good at memorizing the semantics of the tokens while character-level representations are more suitable for modeling sub-word morphologies (Ling et al., 2015; Yang et al., 2016a). For example, considering "cat" and "cats", word-level representations can only learn the similarities between the two tokens by training on a large amount of training data, while character-level representations, by design, can easily capture the similarities. Character-level representations are also used to alleviate the difficulties of modeling out-of-vocabulary (OOV) tokens (Luong & Manning, 2016).

Hybrid word-character models have been proposed to leverage the advantages of both word-level and character-level representations. The most commonly used method is to concatenate these two representations (Yang et al., 2016a). However, concatenating word-level and character-level representations is technically problematic. For frequent tokens, the word-level representations are usually accurately estimated during the training process, and thus introducing character-level representations can potentially bias the entire representations. For infrequent tokens, the estimation of word-level representations has high variance, which will have negative effects when combined with the character-level representations. To address this issue, recently Miyamoto & Cho (2016) introduced a scalar gate conditioned on the word-level representations to control the ratio of the two representations. However, for the task of reading comprehension, preliminary experiments showed that this method was not able to improve the performance over concatenation. There are two possible reasons. First, word-level representations might not contain sufficient information to support the
decisions of selecting between the two representations. Second, using a scalar gate means applying the same ratio for each of the dimensions, which can be suboptimal.

In this work, we present a fine-grained gating mechanism to combine the word-level and character-level representations. We compute a vector gate as a linear projection of the token features followed by a sigmoid activation. We then multiplicatively apply the gate to the character-level and word-level representations. Each dimension of the gate controls how much information flows from the word-level and character-level representations respectively. We use named entity tags, part-of-speech tags, document frequencies, and word-level representations as the features for token properties which determine the gate. More generally, our fine-grained gating mechanism can be used to model multiple levels of structure in language, including words, characters, phrases, sentences and paragraphs. In this work we focus on studying the effects of word-character gating.

To better tackle the problem of reading comprehension, we also extend the idea of fine-grained gating for modeling the interaction between documents and queries. Previous work has shown the importance of modeling interactions between document and query tokens by introducing various attention architectures for the task (Hermann et al., 2015; Kadlec et al., 2016). Most of these use an inner product between the two representations to compute the relative importance of document tokens. The Gated-Attention Reader (Dhingra et al., 2016a) showed improved performance by replacing the inner-product with an element-wise product to allow for better matching at the semantic level. However, they use aggregated representations of the query which may lead to loss of information. In this work we use a fine-grained gating mechanism for each token in the paragraph and each token in the query. The fine-grained gating mechanism applies an element-wise multiplication of the two representations.

We show improved performance on reading comprehension datasets, including Children's Book Test (CBT), Who Did What, and SQuAD. On CBT, our approach achieves new state-of-the-art results without using an ensemble. Our model also improves over state-of-the-art results on the Who Did What dataset. To demonstrate the generality of our method, we apply our word-character fine-grained gating mechanism to a social media tag prediction task and show improved performance over previous methods.

Our contributions are two-fold. First, we present a fine-grained word-character gating mechanism and show improved performance on a variety of tasks including reading comprehension. Second,
to better tackle the reading comprehension tasks, we extend our fine-grained gating approach to modeling the interaction between documents and queries."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Hybrid word-character models have been proposed to take advantage of both word-level and character-level representations. Ling et al. (2015) introduce a compositional character to word (C2W) model based on bidirectional LSTMs. Kim et al. (2016) describe a model that employs a convolutional neural network (CNN) and a highway network over characters for language modeling. Miyamoto & Cho (2016) use a gate to adaptively find the optimal mixture of the character-level and word-level inputs. Yang et al. (2016a) employ deep gated recurrent units on both character and word levels to encode morphology and context information. Concurrent to our work, Rei et al. (2016) employed a similar gating idea to combine word-level and character-level representations, but their focus is on low-level sequence tagging tasks and the gate is not conditioned on linguistic features.

The gating mechanism is widely used in sequence modeling. Long short-term memory (LSTM) networks (Hochreiter & Schmidhuber, 1997) are designed to deal with vanishing gradients through the gating mechanism. Similar to LSTM, the Gated Recurrent Unit (GRU) was proposed by Cho et al. (2014), which also uses gating units to modulate the flow of information. The gating mechanism can also be viewed as a form of attention mechanism (Bahdanau et al., 2015; Yang et al., 2016b) over two inputs.

Similar to the idea of gating, multiplicative integration has also been shown to provide a benefit in various settings. Yang et al. (2014) find that multiplicative operations are superior to additive operations in modeling relations. Wu et al. (2016) propose to use the Hadamard product to replace the sum operation in recurrent networks, which gives a significant performance boost over existing RNN models. Dhingra et al. (2016a) use a multiplicative gating mechanism to achieve state-of-the-art results on question answering benchmarks.

Reading comprehension is a challenging task for machines. A variety of models have been proposed to extract answers from given text (Hill et al., 2016; Kadlec et al., 2016; Trischler et al., 2016; Chen
The output of th problem is an answer a, which can either be an index or a span of indices in the document.\nNow we describe a general architecture used in this work, which is a generalization of the gatec attention reader (Dhingra et al.|[2016a). For each token in the document and the query, we compute a vector representation using a function f. More specifically, for each token pi in the document we have h = f(w, Ct). The same function f is also applied to the tokens in the query. Let H? and Hg denote the vector representations computed by f for tokens in documents and queries respectively. In Section|3.2 we will discuss the \"word-character\"' fine-grained gating used to define the function f.\nSuppose that we have a network of K layers. At the k-th layer, we apply RNNs on Hk-1 and Hy to. obtain hidden states Pk and Qk, where Pk is a M d matrix and Qk is a N d matrix with d being. the number of hidden units in the RNNs. Then we use a function r to compute a new representation for the document Hk = r(Pk, Qk). In Section3.3. 3 we will introduce the \"document-query\" fine-. grained gating used to define the function r..\nand Hg tc\nAfter going through K layers, we predict the answer index a using a softmax layer over hidder states Hk. For datasets where the answer is a span of text, we use two softmax layers for the start. and end indices respectively.\nGiven a one-hot encoding w; and a character sequence C;, we now describe how to compute the vector representation h, = f(w;, C) for the token. In the rest of the section, we will drop the subscript i for notation simplicity\nWe first apply an RNN on C and take the hidden state in the last time step c as the character-level representation (Yang et al.]2016a). Let E denote the token embedding lookup table. We perform a matrix-vector multiplication Ew to obtain a word-level representation. We assume c and Ew have the same length de in this work\nPrevious methods defined f using the word-level representation Ew (Collobert et al.2011), the character-level representation c (Ling et al.[[2015), or the concatenation Ew; c (Yang et al.l[2016a) Unlike these methods, we propose to use a gate to dynamically choose between the word-level and. character-level representations based on the properties of the token. Let v denote a feature vector that encodes these properties. In this work, we use the concatenation of named entity tags, part of-speech tags, binned document frequency vectors, and the word-level representations to form the feature vector v. Let d, denote the length of v..\nThe gate is computed as follows.\ng = o(Wgv+ bg\nCombined Representation 1 - x Sigmoid Concat Lookup Char RNN NER POS Frequency Lookup Word token.\nFigure 1: Word-character fine-grained gating. The two lookup tables are shared. \"NER\", \"POs\", \"frequency' refer to named entity tags, part-of-speech tags, document frequency features.\nThe final representation is computed using a fine-grained gating mechanism\nh=f(c,w)=gOc+1-g)O(Ew\nAn illustration of our fine-grained gating mechanism is shown in Figure [1 Intuitively speaking. when the gate g has high values, more information flows from the character-level representation t the final representation; when the gate g has low values, the final representation is dominated by th word-level representation.\nThough|Miyamoto & Cho(2016) also use a gate to choose between word-level and character-level. representations, our method is different in two ways. First, we use a more fine-grained gating mech. 
Though Miyamoto & Cho (2016) also use a gate to choose between word-level and character-level representations, our method is different in two ways. First, we use a more fine-grained gating mechanism, i.e., vector gates rather than scalar gates. Second, we condition the gate on features that better reflect the properties of the token. For example, for noun phrases and entities, we would expect the gate to bias towards character-level representations because noun phrases and entities are usually less common and display richer morphological structure. Experiments show that these changes are key to the performance improvements for reading comprehension tasks.

Our approach can be further generalized to a setting of multi-level networks so that we can combine multiple levels of representations using fine-grained gating mechanisms, which we leave for future work."}, {"section_index": "5", "section_name": "3.3 DOCUMENT-QUERY FINE-GRAINED GATING", "section_text": "Given the hidden states $P^k$ and $Q^k$, we now describe how to compute a representation $H^k$ that encodes the interactions between the document and the query. In this section, we drop the superscript k (the layer number) for notation simplicity. Let $p_i$ denote the i-th row of P and $q_j$ denote the j-th row of Q. Let $d_p$ denote the lengths of $p_i$ and $q_j$.

Attention-over-attention (AoA) (Cui et al., 2016) defines a dot product between each pair of tokens in the document and the query, i.e., $p_i^\top q_j$, followed by row-wise and column-wise softmax non-linearities. AoA imposes pair-wise interactions between the document and the query, but using a dot product is potentially not expressive enough and hard to generalize to multi-layer networks. The gated attention (GA) reader (Dhingra et al., 2016a) defines an element-wise product as $p_i \odot g_i$, where $g_i$ is a gate computed by an attention mechanism on the token $p_i$ and the entire query. The intuition for the gate $g_i$ is to attend to important information in the document. However, there is no direct pair-wise interaction between each token pair.

Figure 2: Paragraph-question fine-grained gating.

We present a fine-grained gating method that combines the advantages of the above methods (i.e., both pairwise and element-wise). We compute the pairwise element-wise product between the hidden states in the document and the query, as shown in Figure 2. More specifically, for $p_i$ and $q_j$ we have:

$$I_{ij} = \tanh(p_i \odot q_j)$$

where $q_j$ can be viewed as a gate to filter the information in $p_i$. We then use an attention mechanism over $I_{ij}$ to output hidden states $h_i$ as follows:

$$h_i = \sum_j \mathrm{softmax}\left( u_h^\top I_{ij} + w_i^\top w_j b_{h1} + b_{h2} \right) I_{ij}$$

where $u_h$ is a $d_p$-dimensional model parameter, $b_{h1}$ and $b_{h2}$ are scalar model parameters, and $w_i$ and $w_j$ are one-hot encodings for $p_i$ and $q_j$ respectively. We additionally use one-hot encodings in the attention mechanism to reinforce the matching between the same tokens since such information is not fully preserved in $I_{ij}$ when k is large. The softmax nonlinearity is applied over all j's. The final hidden states H are formed by concatenating the $h_i$'s for each token $p_i$.
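The document-query gating just described can also be sketched in a few lines of NumPy. This is a hedged illustration under assumed shapes; for brevity the one-hot matching term $w_i^\top w_j b_{h1}$ is omitted, so it is not a full reproduction of the model.

```python
# Sketch of document-query fine-grained gating (assumed sizes, matching term omitted).
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 4, 3, 6                       # document length, query length, dim
P = rng.normal(size=(M, d))             # document hidden states p_i
Q = rng.normal(size=(N, d))             # query hidden states q_j
u_h = rng.normal(size=d)
b_h2 = 0.0

I = np.tanh(P[:, None, :] * Q[None, :, :])          # I_ij = tanh(p_i (.) q_j), (M, N, d)

scores = I @ u_h + b_h2                              # (M, N) attention logits
scores -= scores.max(axis=1, keepdims=True)
alpha = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax over j

H = np.einsum('mn,mnd->md', alpha, I)                # h_i = sum_j alpha_ij I_ij
print(H.shape)                                       # (M, d)
```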
"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "We first present experimental results on the Twitter dataset where we can rule out the effects of different choices of network architectures, to demonstrate the effectiveness of our word-character fine-grained gating approach. Later we show experiments on more challenging datasets on reading comprehension to further show that our approach can be used to improve the performance on high-level NLP tasks as well."}, {"section_index": "7", "section_name": "4.1 EVALUATING WORD-CHARACTER GATING ON TWITTER", "section_text": "We evaluate the effectiveness of our word-character fine-grained gating mechanism on a social media tag prediction task. We use the Twitter dataset and follow the experimental settings in Dhingra et al. (2016b). We also use the same network architecture upon the token representations, which is an LSTM layer followed by a softmax classification layer (Dhingra et al., 2016b). The Twitter dataset consists of English tweets with at least one hashtag from Twitter. Hashtags and HTML tags have been removed from the body of the tweet, and user names and URLs are replaced with special tokens. The dataset contains 2 million tweets for training, 10K for validation and 50K for testing, with a total of 2,039 distinct hashtags. The task is to predict the hashtags of each tweet.

We compare several different methods as follows. Word char concat uses the concatenation of word-level and character-level representations as in Yang et al. (2016a); word char feat concat concatenates the word-level and character-level representations along with the features described in Section 3.2; scalar gate uses a scalar gate similar to Miyamoto & Cho (2016) but is conditioned on the features; fine-grained gate is our method described in Section 3.2. We include word char feat concat for a fair comparison because our fine-grained gating approach also uses the token features.

Table 1: Performance on the Twitter dataset. "word" and "char" mean using word-level and character-level representations respectively.

Model | Precision@1 | Recall@10 | Mean Rank
word (Dhingra et al., 2016b) | 0.241 | 0.428 | 133
char (Dhingra et al., 2016b) | 0.284 | 0.485 | 104
word char concat | 0.2961 | 0.4959 | 105.8
word char feat concat | 0.2951 | 0.4974 | 106.2
scalar gate | 0.2974 | 0.4982 | 104.2
fine-grained gate | 0.3069 | 0.5119 | 101.5

The results are shown in Table 1. We report three evaluation metrics including precision@1, recall@10, and mean rank. Our method outperforms character-level models used in Dhingra et al. (2016b) by 2.29%, 2.69%, and 2.5 points in terms of precision, recall and mean rank respectively. We can observe that the scalar gating approach (Miyamoto & Cho, 2016) can only marginally improve over the baseline methods, while fine-grained gating methods can substantially improve model performance.
Note that directly concatenating the token features with the character-level and word-level representations does not boost the performance, but using the token features to compute a gate (as done in fine-grained gating) leads to better results. This indicates that the benefit of fine-grained gating mainly comes from better modeling rather than from using additional features."}, {"section_index": "8", "section_name": "4.2 PERFORMANCE ON READING COMPREHENSION", "section_text": "After investigating the effectiveness of the word-character fine-grained gating mechanism on the Twitter dataset, we now move on to a more challenging task, reading comprehension. In this section, we experiment with two datasets, the Children's Book Test dataset (Hill et al., 2016) and the SQuAD dataset (Rajpurkar et al., 2016). We evaluate our model on cloze-style question answering benchmarks.

The Children's Book Test (CBT) dataset is built from children's books. The whole dataset has 669,343 questions for training, 8,000 for validation and 10,000 for testing. We closely follow the setting in Dhingra et al. (2016a) and incrementally add different components to see the changes in performance. For the fine-grained gating approach, we use the same hyper-parameters as in Dhingra et al. (2016a) except that we use a character-level GRU with 100 units to be of the same size as the word lookup table. The word embeddings are updated during training.

In addition to different ways of combining word-level and character-level representations, we also compare two different ways of integrating documents and queries: GA refers to the gated attention reader (Dhingra et al., 2016a) and FG refers to our fine-grained gating described in Section 3.3.

Table 2: Performance on the CBT dataset. The "GA word char concat" results are extracted from Dhingra et al. (2016a). Our results on fine-grained gating are based on a single model. "CN" and "NE" are two widely used question categories. "dev" means development set, and "test" means test set.

Model | CN dev | CN test | NE dev | NE test
GA word char concat | 0.731 | 0.696 | 0.768 | 0.725
GA word char feat concat | 0.7250 | 0.6928 | 0.7815 | 0.7256
GA scalar gate | 0.7240 | 0.6908 | 0.7810 | 0.7260
GA fine-grained gate | 0.7425 | 0.7084 | 0.7890 | 0.7464
FG fine-grained gate | 0.7530 | 0.7204 | 0.7910 | 0.7496
Sordoni et al. (2016) | 0.721 | 0.692 | 0.752 | 0.686
Trischler et al. (2016) | 0.715 | 0.674 | 0.753 | 0.697
Cui et al. (2016) | 0.722 | 0.694 | 0.778 | 0.720
Munkhdalai & Yu (2016) | 0.743 | 0.719 | 0.782 | 0.732
Kadlec et al. (2016) ensemble | 0.711 | 0.689 | 0.762 | 0.710
Sordoni et al. (2016) ensemble | 0.741 | 0.710 | 0.769 | 0.720
Trischler et al. (2016) ensemble | 0.736 | 0.706 | 0.766 | 0.718

The results are reported in Table 2. We report the results on common noun (CN) questions and named entity (NE) questions, which are two widely used question categories in CBT. Our fine-grained gating approach achieves new state-of-the-art performance on both settings and outperforms the current state-of-the-art results by up to 1.76% without using ensembles. Our method outperforms the baseline GA reader by up to 2.4%, which indicates the effectiveness of the fine-grained gating mechanism. Consistent with the results on the Twitter dataset, using word-character fine-grained gating can substantially improve the performance over concatenation or scalar gating. Furthermore, we can see that document-query fine-grained gating also contributes significantly to the final results.

We also apply our fine-grained gating model to the Who Did What (WDW) dataset (Onishi et al., 2016). As shown in Table 3, our model achieves state-of-the-art results compared to strong baselines. We fix the word embeddings during training.

Table 3: Performance on the Who Did What dataset. "dev" means development set, and "test" means test set. "WDW-R" is the relaxed version of WDW.

Model | WDW dev | WDW test | WDW-R dev | WDW-R test
Kadlec et al. (2016) | - | 0.570 | - | 0.590
Chen et al. (2016) | - | 0.640 | - | 0.650
Munkhdalai & Yu (2016) | 0.665 | 0.662 | 0.670 | 0.667
Dhingra et al. (2016a) | 0.716 | 0.712 | 0.726 | 0.726
this paper | 0.723 | 0.717 | 0.731 | 0.726

The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset collected recently (Rajpurkar et al., 2016). It contains 23,215 paragraphs that come from 536 Wikipedia articles. Unlike other reading comprehension datasets such as CBT, the answers are a span of text rather than a single word.
The dataset is partitioned into a training set (80%, 87,636 question-answer pairs), a development set (10%, 10,600 question-answer pairs) and a test set which is not released.

Table 4: Performance on the SQuAD dev set. Test set results are included in the brackets.

Model | F1 | Exact Match
GA word | 0.6695 | 0.5492
GA word char concat | 0.6857 | 0.5639
GA word char feat concat | 0.6904 | 0.5711
GA scalar gate | 0.6850 | 0.5620
GA fine-grained gate | 0.6983 | 0.5804
FG fine-grained gate | 0.7125 | 0.5995
FG fine-grained gate + ensemble | 0.7341 (0.733) | 0.6238 (0.625)
Yu et al. (2016) | 0.712 (0.710) | 0.625 (0.625)
Wang & Jiang (2016) | 0.700 (0.703) | 0.591 (0.595)

We report our results in Table 4. "Exact match" computes the ratio of questions that are answered correctly by strict string comparison, and the F1 score is computed on the token level. We can observe that both word-character fine-grained gating and document-query fine-grained gating can substantially improve the performance, leading to state-of-the-art results among published papers. Note that at the time of submission, the best score on the leaderboard is 0.716 in exact match and 0.804 in F1 without published papers. A gap exists because our architecture described in Section 3.1 does not specifically model the answer span structure that is unique to SQuAD. In this work, we focus on this general architecture to study the effectiveness of fine-grained gating mechanisms.

Figure 3: Visualization of the weight matrix $W_g$. Weights for each feature are averaged. Red means high and yellow means low. High weight values favor character-level representations, and low weight values favor word-level representations. "Organization", "Person", "Location", and "O" are named entity tags; "DOCLEN-n" are document frequency features (larger n means higher frequency, n from 0 to 4); others are POS tags.

Figure 4: Visualization of gate values in the text. Red means high and yellow means low. High gate values favor character-level representations, and low gate values favor word-level representations.

We visualize the model parameter $W_g$ as described in Section 3.2. For each feature, we average the corresponding weight vector in $W_g$. The results are described in Figure 3. We can see that named entities like "Organization" and noun phrases (with tags "NNP" or "NNPS") tend to use character-level representations, which is consistent with human intuition because those tokens are usually infrequent or display rich morphologies. Also, DOCLEN-4, WH-adverb ("WRB"), and conjunction ("IN" and "CC") tokens tend to use word-level representations because they appear frequently.

We also sample a random span of text from the SQuAD dataset, and visualize the average gate values in Figure 4. The results are consistent with our observations in Figure 3. Rare tokens, noun phrases, and named entities tend to use character-level representations, while others tend to use word-level representations. To further justify this argument, we also list the tokens with highest and lowest gate values in Table 5.

Table 5: Word tokens with highest and lowest gate values. High gate values favor character-level representations, and low gate values favor word-level representations.

Gate values | Word tokens
Lowest | or but But These these However however among Among that when When although Although because Because until many Many than though Though this This Since since date where Where have That and And Such such number so which by By how before Before with With between Between even Even if
Highest | Sweetgum Untersee Jianlong Floresta Chlorella Obersee PhT Doctorin Jumonville WFTS WTSP Boven Pharm Nederrijn Otrar Rhin Magicicada WBKB Tanzler KMBC WPLG Mainau Merwede RMJM Kleitman Scheur Bodensee Kromme Horenbout Vorderrhein Chlamydomonas Scantlebury Qingshui Funchess

We present a fine-grained gating mechanism that dynamically combines word-level and character-level representations based on word properties. Experiments on the Twitter tag prediction dataset show that fine-grained gating substantially outperforms scalar gating and concatenation.
Our method also improves the performance on reading comprehension and achieves new state-of-the-art results on CBT and WDW. In our future work, we plan to apply the fine-grained gating mechanism for combining other levels of representations, such as phrases and sentences. It will also be intriguing to integrate NER and POS networks and learn the token representation in an end-to-end manner."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was funded by NVIDIA, the Office of Naval Research Scene Understanding grant N000141310721, the NSF grant IIS1250956, and Google Research."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In ACL, 2016.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016.

Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016a.

Bhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, and William W. Cohen. Tweet2vec: Character-based distributed representations for social media. In ACL, 2016b.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, pp. 1693-1701, 2015.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. In ICLR, 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. In ACL, 2016.

Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural language models. In AAAI, 2016.

Minh-Thang Luong and Christopher D. Manning. Achieving open vocabulary neural machine translation with hybrid word-character models. In ACL, 2016.

Yasumasa Miyamoto and Kyunghyun Cho. Gated word-character recurrent language model. In EMNLP, 2016.

Tsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. arXiv preprint arXiv:1607.04315, 2016.

Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457, 2016.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, 2016.

Marek Rei, Gamal K. O. Crichton, and Sampo Pyysalo.
Attending to characters in neural sequence labeling models. arXiv preprint arXiv:1611.04361, 2016.

Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. In EMNLP, 2016.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, pp. 2692-2700, 2015.

Shuohang Wang and Jing Jiang. Machine comprehension using match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905, 2016.

Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative integration with recurrent neural networks. In NIPS, 2016.

Zhilin Yang, Ruslan Salakhutdinov, and William Cohen. Multi-task cross-lingual sequence tagging from scratch. arXiv preprint arXiv:1603.06270, 2016a.

Zhilin Yang, Ye Yuan, Yuexin Wu, Ruslan Salakhutdinov, and William W. Cohen. Review networks for caption generation. In NIPS, 2016b.

Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end answer chunk extraction and ranking for reading comprehension. arXiv preprint arXiv:1610.09996, 2016.

Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Learning multi-relational semantics using neural-embedding models. In NIPS 2014 Workshop on Learning Semantics, 2014."}]
ryUPiRvge
[{"section_index": "0", "section_name": "EXTRAPOLATION AND LEARNING EQUATIONS", "section_text": "Georg Martius & Christoph H. Lampert"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "In classical machine learning, regression is treated as a black box process of identifying a suitable function from a hypothesis set without attempting to gain insight into the mechanism connecting inputs and outputs. In the natural sciences, however, finding an interpretable function for a phenomenon is the prime goal as it allows to understand and generalize results. This paper proposes a novel type of function learning network, called equation learner (EQL), that can learn analytical expressions and is able to extrapolate to unseen domains. It is implemented as an end-to-end differentiable feed-forward network and allows for efficient gradient based training. Due to sparsity regularization concise interpretable expressions can be obtained. Often the true underlying source expression is identified."}, {"section_index": "2", "section_name": "INTRODUCTION", "section_text": "The quality of a model is typically measured by its ability to generalize from a training set to previously unseen data from the same distribution. In regression tasks generalization essentially boils down to interpolation if the training data is sufficiently dense. As long as models are selected correctly, i.e. in a way to not overfit the data, the regression problem is well understood and can - at least conceptually - be considered solved. However, when working with data from real-world devices, e.g. controlling a robotic arm, interpolation might not be sufficient. It could happen that future data lies outside of the training domain, e.g. when the arm is temporarily operated outside of its specifications. For the sake of robustness and safety it is desirable in such a case to have a regression model that continues to make good predictions, or at least does not fail catastrophically. This setting, which we call extrapolation generalization, is the topic of the present paper.

We are particularly interested in regression tasks for systems that can be described by real-valued analytic expressions, e.g. mechanical systems such as a pendulum or a robotic arm. These are typically governed by a highly nonlinear function but it is nevertheless possible, in principle, to infer their behavior on an extrapolation domain from their behavior elsewhere. We make two main contributions: 1) a new type of network that can learn analytical expressions and is able to extrapolate to unseen domains and 2) a model selection strategy tailored to the extrapolation setting.

The following section describes the setting of regression and extrapolation. Afterwards we introduce our method and discuss the architecture, its training, and its relation to prior art. We present our results in the Section Experimental evaluation and close with conclusions."}, {"section_index": "3", "section_name": "REGRESSION AND EXTRAPOLATION", "section_text": "We consider a multivariate regression problem with a training set $\{(x_1, y_1), \dots, (x_N, y_N)\}$ with $x \in \mathbb{R}^n$, $y \in \mathbb{R}^m$. Because our main interest lies on extrapolation in the context of learning the dynamics of physical systems, we assume the data originates from an unknown analytical function (or system of functions), $\phi : \mathbb{R}^n \to \mathbb{R}^m$, with additive zero-mean noise, $\xi$, i.e. $y = \phi(x) + \xi$ and $\mathbb{E}\xi = 0$. The function $\phi$ may, for instance, reflect a system of ordinary differential equations that govern the movements of a robot arm or the like.
The general task is to learn a function $\psi : \mathbb{R}^n \to \mathbb{R}^m$ that approximates the true functional relation as well as possible in the squared loss sense, i.e. achieves minimal expected error $\mathbb{E}\|\psi(x) - \phi(x)\|^2$. In practice, we only have particular examples of the function values available and measure the quality of predicting in terms of the empirical error on training or test data D:

$$E(D) = \frac{1}{N} \sum_{i=1}^{N} \|\psi(x_i) - y_i\|^2 \qquad (1)$$

If training and test data are sampled from the same distribution then we speak about an interpolation problem. In the extrapolation setting the training data is assumed to cover only a limited range of the data domain. In the example of the robot arm, for instance, the training may be restricted to a certain joint angle range or maximal velocity. For testing we want to make predictions about the unseen domains, e.g. for higher velocities. To succeed in this task, it is essential to identify the underlying functional relationship instead of just minimizing the empirical error, as detailed below. As usual, we split the data that is available at training time into a part for model training and a part for validation or model selection."}, {"section_index": "4", "section_name": "LEARNING A NETWORK FOR FUNCTION EXTRAPOLATION", "section_text": "The main model we propose is a multi-layered feed-forward network with computational units specifically designed for the extrapolation regression tasks. For an L-layer network, there are L - 1 hidden layers, each consisting of a linear mapping followed by non-linear transformations. For simplicity of notation, we explain the network as if each hidden layer had the same structure (k' inputs, k outputs). In practice, each layer can be designed independently of the others, of course, as long as input/output dimensions match.

The linear mapping at level l maps the k'-dimensional input $y^{(l-1)}$ to the d-dimensional intermediate representation z given by

$$z^{(l)} = W^{(l)} y^{(l-1)} + b^{(l)},$$

where $y^{(l-1)}$ is the output of the previous layer, with the convention $y^{(0)} = x$. The weight matrix $W^{(l)} \in \mathbb{R}^{d \times k'}$ and the bias vector $b^{(l)} \in \mathbb{R}^d$ are free parameters that are learned during training. The non-linear transformation contains u unary units, $f_i : \mathbb{R} \to \mathbb{R}$, for $i = 1, \dots, u$, and v binary units, $g_j : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ for $j = 1, \dots, v$. Their outputs are concatenated to form the layer output

$$y^{(l)} := \left( f_1(z_1), \dots, f_u(z_u), g_1(z_{u+1}, z_{u+2}), \dots, g_v(z_{u+2v-1}, z_{u+2v}) \right).$$

In total, the nonlinear stage has k = u + v outputs and d = u + 2v inputs. The unary units, $f_1, \dots, f_u$, receive the respective components, $z_1, \dots, z_u$, as inputs, and each unit may be one of the following base functions as specified in a fixed type parameter $I_i \in \{0, 1, 2, 3\}$:

$$f_i(z_i) := \begin{cases} z_i & \text{if } I_i = 0, \\ \sin(z_i) & \text{if } I_i = 1, \\ \cos(z_i) & \text{if } I_i = 2, \\ \mathrm{sigm}(z_i) & \text{if } I_i = 3, \end{cases} \qquad \text{for } i = 1, \dots, u.$$

The binary units receive the remaining components, $z_{u+1}, \dots, z_{u+2v}$, as input in pairs of two. They are multiplication units that compute the product of their two input values:

$$g_j(z_{u+2j-1}, z_{u+2j}) := z_{u+2j-1} \cdot z_{u+2j} \qquad \text{for } j = 1, \dots, v.$$

The final, L-th layer computes the regression values by a linear read-out:

$$y^{(L)} := W^{(L)} y^{(L-1)} + b^{(L)}.$$

The architecture is depicted in Fig. 1. We call the new architecture Equation Learner (EQL) and denote the function it defines by $\psi$.

Figure 1: Network architecture of the proposed Equation Learner (EQL) for 3 layers (L = 3) and one neuron per type (u = 4, v = 1).

The proposed network architecture differs in two main aspects from typical feed-forward networks: the existence of multiplication units and the possibility of sine and cosine as nonlinearities for the unary units.
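One EQL hidden layer, as defined above, can be sketched in a few lines of NumPy. The sizes, random weights and the fixed assignment of unary types are illustrative assumptions; this is an editorial sketch, not the authors' implementation.

```python
# Sketch of one EQL hidden layer: linear map, unary units, multiplication units.
import numpy as np

rng = np.random.default_rng(0)
k_in, u, v = 3, 4, 2
d = u + 2 * v                                   # linear outputs needed
W = rng.normal(size=(d, k_in)) * 0.5
b = np.zeros(d)
unary_types = [0, 1, 2, 3]                      # I_i: 0=id, 1=sin, 2=cos, 3=sigm

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

UNARY = [lambda z: z, np.sin, np.cos, sigm]

def eql_layer(y_prev):
    z = W @ y_prev + b                          # z = W y + b
    f_out = np.array([UNARY[t](z[i]) for i, t in enumerate(unary_types)])
    g_out = np.array([z[u + 2*j] * z[u + 2*j + 1] for j in range(v)])
    return np.concatenate([f_out, g_out])       # k = u + v outputs

x = rng.normal(size=k_in)
print(eql_layer(x))                             # shape (u + v,)
```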
Both design choices are motivated by our objective of learning a system of equations that govern a physical system and can extrapolate to new parts of the input space.

Sigmoid nonlinearities are the canonical choice of activation function for artificial neural networks (ANN) and proved to be successful. In fact, we include sigmoids in our architecture, making it a super class of ANNs. However, they were typically disabled by the training procedure, corresponding to their absence in the considered physical equations. Other, predominantly local nonlinearities, in particular radial basis functions (Broomhead & Lowe, 1988), we do not include, since one cannot expect them to extrapolate at all. Further nonlinearities, such as (square) roots and logarithms, could in principle be useful for learning physical equations, but they pose problems because their domains of definition are restricted to positive inputs. We leave the task of incorporating them in a principled way to future work.

The ability to multiply two values is a second crucial component of our network architecture. Again, it is inspired by the typical form of physical equations, where multiplication of components is arguably the second most common basic operation after addition (which the linear layers can perform). Multiplication was introduced into neural networks long ago as product-units (Durbin & Rumelhart, 1989) and Pi-Sigma units (Shin & Ghosh, 1991). The product-units have large fan-in and compute products over all their inputs, potentiated by the respective weights. The result is typically the behavior of a high order polynomial; these are powerful function approximators, but rarely occur in physical equations. Polynomials are also known to require careful fine-tuning in order not to overfit, which makes them a risky choice for the purpose of extrapolation. The Pi-Sigma units are multiplication units with a fixed number of factors, and our multiplication units are the special case with two factors. We find that multiplying just two values at a time is well adjusted to the task we aim at, as it allows to control the maximal degree of the learned polynomial by the depth of the network.

Finally, each layer of the network contains unary units that act as identity maps, which in particular gives the network the option to learn functions with a smaller number of nonlinearities than the total network depth.

The EQL is fully differentiable in its free parameters $\theta = \{W^{(1)}, \dots, W^{(L)}, b^{(1)}, \dots, b^{(L)}\}$, which allows us to train it in an end-to-end fashion using back-propagation. We adopt a Lasso-like objective (Tibshirani, 1996):

$$L(D) = \frac{1}{N} \sum_{i=1}^{|D|} \|\psi(x_i) - y_i\|^2 + \lambda \sum_{l=1}^{L} \left| W^{(l)} \right|_1,$$

that is, a linear combination of L2 loss and L1 regularization, and apply a stochastic gradient descent algorithm with mini-batches and Adam (Kingma & Ba, 2015) for calculating the updates:

$$\theta_{t+1} = \theta_t + \mathrm{Adam}\left( \frac{\partial L(D^{(t)})}{\partial \theta}, \alpha \right),$$

where $D^{(t)}$ denotes the current mini-batch and $\alpha$ is the stepsize parameter. The choice of Adam is not critical and standard stochastic gradient descent also works. In all numerical experiments we use $\alpha = 0.001$ and a mini-batch size of 20.
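The objective can be sketched as follows in NumPy; the two-layer tanh network `predict` is only a stand-in for the EQL forward pass, and the parameter names are illustrative assumptions, not the paper's code.

```python
# Sketch of the Lasso-like objective: squared loss plus L1 penalty on weights.
import numpy as np

rng = np.random.default_rng(0)
model_W = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]  # toy weight matrices

def predict(x):
    h = np.tanh(model_W[0] @ x)        # stand-in for the EQL forward pass
    return model_W[1] @ h

def loss(batch_x, batch_y, lam):
    sq = np.mean([np.sum((predict(x) - y) ** 2) for x, y in zip(batch_x, batch_y)])
    l1 = sum(np.abs(W).sum() for W in model_W)   # sum_l |W^(l)|_1
    return sq + lam * l1

xs = [rng.normal(size=3) for _ in range(20)]     # a mini-batch of size 20
ys = [np.array([np.sin(x[0])]) for x in xs]
print(loss(xs, ys, lam=1e-3))
```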
The role of the L1 regularization is to encourage networks with sparse connections, matching the intuition that a typical formula describing a physical system contains only a small number of terms, each operating only on a few variables. However, in a non-convex setting where local minima are likely to occur, this type of regularization can have an undesirable side-effect: during the course of the optimization the weights hardly ever change their sign. The reason is that the regularization leads to a constant rate of weight decay whereas the counteracting derivative with respect to the square loss is proportional to the backpropagated error signal and the input to the unit. The latter contributions are often smaller along paths with small weights, such that many weights go to zero and stay there. Additionally, any non-zero regularization term causes the learned weights to reflect a trade-off between minimizing the loss and the regularizer. Although this can lead to improved generalization, it also results in a systematic underestimation of the function values.

Therefore, we follow a hybrid regularization strategy: at the beginning of the training procedure (t < t1) we use no regularization (λ = 0), such that parameters can vary freely and reach reasonable starting points. Afterwards, we switch on the regularization by setting λ to a nonzero value, which has the effect that a sparse network structure emerges. Finally, for the last steps of the training (t > t2) we disable L1 regularization (λ = 0) but enforce the same L0 norm of the weights. This is achieved by keeping all weights w ∈ W^{1..L} that are close to 0 at 0, i.e. if |w| ≤ 0.001 then w = 0 during the remaining epochs. This ensures that the learned model finds not only a function of the right parametric form, but also fits the observed values as closely as possible. We observed that the exact choice of the switching points t1 and t2 is not critical; T, the total number of update steps, was selected large enough to ensure convergence. Note that convergence to a sparse structure is important here, so early stopping will be disadvantageous.

Furthermore, the optimization process may only find a local optimum of the training objective, which depends on the initialization of the parameters. We use independent runs to quantify expected performance deviations."}, {"section_index": "5", "section_name": "MODEL SELECTION FOR EXTRAPOLATION", "section_text": "EQL networks have a number of hyper-parameters, e.g. the number of layers, the number of units and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate the network has to find the "right" formula. But how can we tell? Using Occam's razor principle: the simplest formula is most likely the right one. Intuitively, if we have the choice between cos(x) and its truncated power series approximation $1 - x^2/2 + x^4/24$, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, i.e. it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances w.r.t. validation error and sparsity and selecting the one with the smallest L2 norm (in rank-space), see Eq. (15)."}, {"section_index": "6", "section_name": "RELATED WORK", "section_text": "In the field of machine learning, regression is often treated as a black box process of identifying a suitable real-valued function from a hypothesis set, e.g. a reproducing kernel Hilbert space for Gaussian Processes Regression (GPR) (Williams & Rasmussen, 2006) or Support Vector Regression (SVR) (Smola & Schölkopf, 2004), or a multi-layer network of suitable expressive power (Specht, 1991).
MODEL SELECTION FOR EXTRAPOLATION

EQL networks have a number of hyper-parameters, e.g. the number of layers, the number of units, and the regularization constant. Unfortunately, standard techniques for model selection, such as evaluation on a hold-out set or cross-validation, will not be optimal for our purpose, since they rely on interpolation quality. In order to extrapolate, the network has to find the "right" formula. But how can we tell? Using Occam's razor principle: the simplest formula is most likely the right one. Intuitively, if we have the choice between cos(x) and its truncated power series approximation 1 − x²/2 + x⁴/24, the first one is preferred. We use the number of active hidden units in the network as a proxy for the complexity of the formula, see Appendix A1 for details. One could also think of differentiating between the unit types. In any case, this argumentation is only correct if the model explains the data well, i.e. it has a low validation error. So we have a dual objective to minimize, which we solve by ranking the instances w.r.t. validation error and sparsity and selecting the one with the smallest L2 norm (in rank-space), see Eq. (15).

RELATED WORK

In the field of machine learning, regression is often treated as a black box process of identifying a suitable real-valued function from a hypothesis set, e.g. a reproducing kernel Hilbert space for Gaussian Processes Regression (GPR) (Williams & Rasmussen, 2006) or Support Vector Regression (SVR) (Smola & Scholkopf, 2004), or a multi-layer network of suitable expressive power (Specht, 1991). The goal is to find a prediction function that leads to a small expected error on future data, not necessarily to gain insight into the mechanism of how the output values derive from the inputs. The goal of finding an interpretable function is rather common in the natural sciences, such as biology, where high noise levels and strong inter-system variability often make it important to rely on external prior knowledge; finding a "biologically plausible" model is often preferable over finding the one with the highest prediction accuracy. As a consequence, model classes are often highly constrained, e.g. allowing only for sparse linear models.

The task of learning a true, nonlinear, functional dependence from observing a physical system has received little attention in the machine learning literature so far, but forms the basis of the field of system identification. There, typically the functional form of the system is known and only the parameters have to be identified. Another approach is to model the time evolution with autoregressive models or higher order convolution integrals (Volterra series), but learning analytic formulas is not common.

Causal learning is an area of recent research that aims at identifying a causal relation between multiple observables, which are typically the result of a physical process. Classically, this task reduces to finding a minimal graphical model based only on tests of conditional independence (Pearl, 2000). Although very successful in some fields, this classical approach only provides a factorization of the problem, separating causes and effects, but it leaves the exact functional dependency unexplained. Recent extensions of causal learning can take a functional view, but typically do not constrain the regression functions to physically plausible ones, constraining instead the noise distributions (Peters et al., 2014). The topic of learning a regression function with emphasis on extrapolation performance has not been studied much in the literature so far. Existing work on time series prediction deals with extrapolation in the temporal domain, i.e. predicting the next value(s) (Wiener, 1949). By our nomenclature, this is typically rather an interpolation task, where the prediction is based on the behaviour of the series at earlier time steps but with similar value distribution (Muller et al., 1997; Gyorfi et al., 2013). Extrapolating in the data domain implies that the data distribution at prediction time will differ from the data distribution at training time. This is traditionally called the domain adaptation setting. In particular, since we assume a common labeling function, our setting would fall under the covariate shift setting (Quionero-Candela et al., 2009). Unfortunately, this connection is not particularly useful for our problem. As domain adaptation typically does not make additional assumptions about how the data distribution may change, existing methods need access to some unlabeled data from the test distribution already at training time (Ben-David et al., 2010). In our setting this is not possible to obtain.
On the technical level, EQL networks are an instance of general feed-forward networks for function approximation (Bishop, 1995). In contrast to recent trends towards deep learning (Bengio, 2009; Bengio et al., 2013), our goal is not to learn any data representation, but to learn a function which compactly represents the input-output relation and generalizes between different regions of the data space, like a physical formula. Structurally, EQL networks resemble sum-product networks (SPNs) (Poon & Domingos, 2012) and Pi-Sigma networks (PSNs) (Shin & Ghosh, 1991), in the sense that both are based on directed acyclic graphs with computational units that allow for summation and multiplication. Otherwise, SPNs are different, as they act as an efficient alternative to probabilistic graphical models for representing probability distributions, whereas EQL networks are meant for the classical task of function approximation. In PSNs each output needs to be passed through multiplicative units, whereas in EQL multiplication is optional.

Finding equations for observations is also known as symbolic regression, where a search is performed in a certain function space, typically done with evolutionary computation. With these techniques it is possible to discover physical laws such as invariants and conserved quantities (Schmidt & Lipson, 2009). Unfortunately, the computational complexity/search time explodes for larger expressions and high-dimensional problems. We attempt to circumvent this by modeling it as a gradient based optimization problem. Related to symbolic regression is finding mathematical identities, for instance to find computationally more efficient expressions. In Zaremba et al. (2014) this was done using machine learning to overcome the potentially exponential search space.

EXPERIMENTAL EVALUATION

We demonstrate the ability of EQL to learn physically inspired models with good extrapolation quality by experiments on synthetic and real data. For this, we implemented the network training and evaluation procedure described above.

Table 1: Numeric results on pendulum dataset. Reported are the mean and standard deviation of the root mean square error (RMS) (√E, Eq. (1)) on different test sets for 10 random initializations.

          interpolation       extrapol. (near)    extrapol. (far)
EQL       0.0102 ± 0.0000     0.012 ± 0.002       0.016 ± 0.007
MLP       0.0138 ± 0.0002     0.150 ± 0.012       0.364 ± 0.036
SVR       0.0105              0.041               0.18

Pendulum. We first present the results of learning the equations of motion for a very simple physical system: a pendulum. The state space of a pendulum is X = ℝ × ℝ, where the first value is the angle of the pole in radians and the second value is the angular velocity. In the physics literature these are usually denoted as (θ, ω), but for our purposes we call them (x1, x2) in order to keep the notation consistent between experiments. The pendulum's dynamic behavior is governed by the following two ordinary differential equations:

ẋ1 = x2   and   ẋ2 = −g sin(x1).   (9)

As training data, we sample 1000 points uniformly in the hypercube [−h, h] × [−h, h] for h = 2. Note that this domain contains more than half of a sine period, so it should be sufficient to identify the analytic expression. The target values are disturbed by Gaussian noise with standard deviation σ = 0.01. We also define three test sets, each with 1000 points. The interpolation test set is sampled from the same data distribution as the training set. The extrapolation (near) test set contains data sampled uniformly from the domain [−(3/2)h, (3/2)h] × [−(3/2)h, (3/2)h] \ [−h, h] × [−h, h], which is relatively near the training region, and the extrapolation (far) test set extends the region further outside: [−2h, 2h] × [−2h, 2h] \ [−h, h] × [−h, h]. A sketch of this data-generation protocol is given below.
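A minimal reconstruction of the described protocol; the rejection sampler for the extrapolation shells and the random seed are implementation assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
g, h, sigma = 9.81, 2.0, 0.01

def pendulum_rhs(x):
    """True system, Eq. (9): dx1/dt = x2, dx2/dt = -g*sin(x1); x has shape (n, 2)."""
    return np.stack([x[:, 1], -g * np.sin(x[:, 0])], axis=1)

def sample_box(n, lo, hi):
    return rng.uniform(lo, hi, size=(n, 2))

def sample_shell(n, inner, outer):
    """Uniform samples from [-outer, outer]^2 minus [-inner, inner]^2 via rejection."""
    out = np.empty((0, 2))
    while len(out) < n:
        cand = sample_box(2 * n, -outer, outer)
        keep = np.max(np.abs(cand), axis=1) > inner   # outside the inner box
        out = np.vstack([out, cand[keep]])
    return out[:n]

X_train = sample_box(1000, -h, h)
Y_train = pendulum_rhs(X_train) + rng.normal(0, sigma, size=(1000, 2))
X_interp = sample_box(1000, -h, h)        # interpolation test set
X_near = sample_shell(1000, h, 1.5 * h)   # extrapolation (near)
X_far = sample_shell(1000, h, 2.0 * h)    # extrapolation (far)
```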
We train a 2-layer EQL and perform model selection among the hyper-parameters: the regularization strength λ ∈ 10^{−7, −6.3, −6, −5.3, −5, −4.3, −4, −3.3, −3} and the number of nodes u = v ∈ {1, 3, 5}. All weights are randomly initialized from a normal distribution with σ = √(1/(k′ + d)). The unit selection {I} is set such that all unit types occur equally often. To ensure convergence we chose T = 10000 epochs. We compare our algorithm to a standard multilayer perceptron (MLP) with tanh activation functions and possible hyperparameters: λ as for EQL, number of layers L ∈ {2, 3}, and number of neurons k ∈ {5, 10, 20}. A second baseline is given by epsilon support vector regression (SVR) (Basak et al., 2007) with two hyperparameters C ∈ 10^{−3, −2, −1, 0, 1, 2, 3, 3.5} and ε ∈ 10^{−3, −2, −1, 0}, using a radial basis function kernel with width γ ∈ {0.05, 0.1, 0.2, 0.5, 1.0}.

Numeric results are reported in Tab. 1. As expected, all models are able to interpolate well, with a test error on the order of the noise level (σ = 0.01). For extrapolation, however, the performance differs between the approaches. For MLP the prediction quality decreases quickly when leaving the training domain. SVR remains a bit better in the near extrapolation but also fails catastrophically on the far extrapolation data. EQL, on the other hand, extrapolates well, both near and far away from the training domain. The reasons can be seen in Figure 2: while the MLP and SVR simply learn a function that interpolates the training values, EQL finds the correct functional expression and therefore predicts the correct values for any input data.

Figure 2: Learning pendulum dynamics. (a) slices of outputs y1 (left) and y2 (right) for inputs x1 = x2 = x for the true system equation (Eq. 9) and one of the EQL, MLP, SVR instances. The shaded area marks the training region and the vertical bars show the size of the near and far extrapolation domain. (b) one of the learned networks. Numbers on the edges correspond to the entries of W and numbers inside the nodes show the bias values b. All weights with |w| < 0.01 and orphan nodes are omitted. Learned formulas: y1 = 0.103x2, y2 = sin(−x1), which are correct up to symmetry (1/g = 1.01).

Double pendulum kinematics. The second system we consider is a real double pendulum, for which the forward kinematics should be learned. For that we use recorded trajectories of a real double pendulum (Schmidt & Lipson, 2009). The task here is to learn the position of the tips of the double pendulum segments from the given joint angles (x1, x2). These positions were not measured, so we supply them by the following formula: y1 = cos(x1), y2 = cos(x1) + cos(x1 + x2), y3 = sin(x1), y4 = sin(x1) + sin(x1 + x2), where (y1, y3) and (y2, y4) correspond to the x-y-coordinates of the first and second end-point respectively; a small sketch of this map is given below. The dataset contains two short trajectories. The first covers only part of the domain (input as well as output) and consists of 819 samples, of which 10% were used as validation set (randomly sampled), see Fig. 3(a). The second trajectory corresponds to a behavior with several spins of both pendulum segments, such that a much larger domain is covered; nevertheless the angle values are confined to [−π, π]. We use this trajectory as extrapolation test set.
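The target map, transcribed directly from the formulas above:

```python
import numpy as np

def double_pendulum_tips(x1, x2):
    """Forward kinematics used to generate the targets: (y1, y3) is the tip of the
    first segment, (y2, y4) the tip of the second, for joint angles (x1, x2)."""
    y1 = np.cos(x1)
    y2 = np.cos(x1) + np.cos(x1 + x2)
    y3 = np.sin(x1)
    y4 = np.sin(x1) + np.sin(x1 + x2)
    return y1, y2, y3, y4

print(double_pendulum_tips(np.pi / 4, np.pi / 4))   # both joints at 45 degrees
```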
The trajectory and the outputs of our method are shown in Fig. 3(b). The prediction for unseen domains is perfect, which is also illustrated in a systematic sweep, see Fig. 3(c). The performance of MLP is off already near the training domain. SVR is a bit better, but still does not give usable predictions for the test data, see also the root mean square error in Fig. 3(d). Model selection is performed to determine λ as above, with u = v ∈ {3, 5}, (MLP: k ∈ {5, 10, 20}) and layer number L ∈ {2, 3}.

Figure 3: Double pendulum kinematics. (a) training trajectory (in y-space). (b) extrapolation test trajectory (in y-space) with output of a learned EQL instance. (c) slices of output y4 for inputs x1 = x2 = x for the true system and one EQL, MLP, and SVR instance each. (d) numeric results, see Tab. 1 for details: extrapolation error EQL 0.0003 ± 0.00003, MLP 0.58 ± 0.03, SVR 0.25. Note that predicting 0 would yield a mean error of 0.84.

Robotic arms. A more complicated task is to learn the forward kinematics of multi-segment robotic arms. We consider planar arms with 3, 4, and 5 joints, where each segment is 0.5 units long; a sketch of this map follows below. For training, the arm is controlled by sinusoidal joint target angles with amplitude in [−π/2, π/2], each joint with a different frequency. The numbers of data points are 3000, 6000, and 18000 for the 3, 4, and 5 segment arms respectively, with added noise as above. For testing extrapolation performance, the amplitude [−π, π] was used. Note that the extrapolation space is much larger than the training space. The task is to predict the coordinates of the end-effector of the arms (kin-3-end, kin-4-end) and the coordinates of all segment positions (kin-5-all). The numerical results, see Tab. 2, show that our method is able to extrapolate in these cases. Model selection as above with u = v ∈ {10, 20}, (MLP: k ∈ {10, 50}) and layer number L ∈ {2, 3, 4}. To illustrate the dependence on the amount of noise and the number of available training points we provide a quantification in Appendix A2. In short, increasing noise can be compensated by increasing the amount of data to keep the performance.

Table 2: Extrapolation performance for kinematics of robotic arms. See Tab. 1 for details. Standard deviations for 5 random initializations. Interpolation error for all methods is around 0.012–0.02.

          kin-3-end         kin-4-end         kin-5-all
EQL       0.017 ± 0.000     0.012 ± 0.000     0.011 ± 0.000
MLP       0.389 ± 0.014     0.415 ± 0.020     0.346 ± 0.013
SVR       0.235             0.590             0.260
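A sketch of the planar-arm forward kinematics; the cumulative-angle convention (each joint angle is relative to the previous segment) is an assumption, as the text does not spell out the kinematic convention.

```python
import numpy as np

def arm_positions(joint_angles, seg_len=0.5):
    """Forward kinematics of a planar arm: cumulative joint angles give segment
    directions; positions are cumulative sums of the segment vectors."""
    phi = np.cumsum(joint_angles)           # absolute angle of each segment
    xs = np.cumsum(seg_len * np.cos(phi))
    ys = np.cumsum(seg_len * np.sin(phi))
    return np.stack([xs, ys], axis=1)       # one (x, y) row per segment tip

angles = np.array([0.3, -0.2, 0.5])         # a 3-joint arm
tips = arm_positions(angles)
end_effector = tips[-1]                     # the kin-3-end target
```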
Learning complex formulas. In order to find out whether EQL can also learn more complicated formulas, we consider three examples with four-dimensional input and one-dimensional output:

y = 1/3 (sin(πx1) + sin(2πx2 + π/8) + x2 − x3x4)      (F-1)
y = 1/3 (sin(πx1) + x2 cos(2πx1 + π/4) + x3 − x4²)    (F-2)
y = 1/3 ((1 + x2) sin(πx1) + x2x3x4)                  (F-3)

The first equation requires only one hidden layer to be represented. The second and third equations should require two hidden layers. In particular, F-2 contains a product of x2 and cos, and F-3 contains a product of three terms, and we use it to test whether our restriction to only pairwise product units causes problems for more complex target functions. We follow the same procedure as in the pendulum case for building training and test sets, though with h = 1 as input data range. We use 10000 points for the training and validation set (90%-10% split) and 5000 points for each of the test sets. Model selection for EQL is performed as above using the number of layers L ∈ {2, 3, 4}. The number of units is set to u = v = 10. For the MLP, we select L and λ from the same set as above, as well as k ∈ {10, 30}.

Table 3 shows the numerical results. Again, all methods are able to interpolate, but only EQL achieves good extrapolation results, except for equation F-3. There it settles in 9 out of 10 cases into a local minimum and finds only an approximating equation that deviates outside the training domain. Interestingly, if we restrict the base functions to not contain cosine, the algorithm finds the right formula. Note that the sparsity of the correct formula is lower than that of the approximation, so it should be selected if found. Fig. 4 illustrates the performance and the learned networks visually. It shows one of the model-selected instances for each case. For F-1 the correct formula was identified, so correct predictions can be made even far outside the training region (much further than illustrated). For F-2 the network provided us with a surprise, because it yields good extrapolation performance with only one hidden layer! How can it implement x2 cos(2πx1 + π/4)? Apparently it uses 1.21(cos(−2πx1 + π/4 + 0.41x2) + sin(2πx1 + π/4 + 0.41x2)), which is a good approximation for x2 ∈ [−2, 2]; a numeric check of this identity is sketched below. The sparsity of this solution is 5, whereas the true solution needs at least 6, which explains its selection. For F-3 the suboptimal local minimum uses a strange way of approximating (1 + x2) sin(πx1), using (x1 + x1x2) cos(x1), which deviates quickly; the true solution would be sparser but was not found. Only if we remove cosine from the base functions do we always get the correct formula, see Fig. 4(c).

Figure 4: Formula learning analysis. (a) for F-1, (b) for F-2, and (c) for F-3. (left) y for a single cut through the input space for the true system equations (F-1)–(F-3), and for an instance of EQL and MLP; (right) shows the learned networks correspondingly, see Fig. 2 for details. The formula representations were extracted from the networks:

(a) F-1: y = −0.33 sin(−3.13x1) + 0.33 sin(6.28x2 + 0.39) + 0.33x2 − 0.056 − 0.33x3x4
(b) F-2: y = 0.33 cos(3.14x1 + 1.57) + 0.33x3 − 0.33x4² + 0.41 cos(−6.28x1 + 3.93 + 0.41x2) + 0.41 sin(6.29x1 + 0.79 + 0.41x2)
(c) F-3 (EQL): y = 0.61(x1 + x1x2)(cos(−2.36x1) + 0.71) + 0.33x2x3x4
    F-3 (EQL, no cos): y = 0.33(1 + x2) sin(3.14x1) + 0.33x2x3x4

For F-3 the algorithm fails with the overcomplete base and typically (9/10 times) ends up in a local minimum. With fewer base functions (no cosine) the right formula is found. Both results are presented. See text for a discussion.

Table 3: Interpolation and extrapolation performance for formula learning. See Tab. 1 for details.

dataset   method         interpolation     extrapol. (near)   extrapol. (far)
F-1       EQL            0.010 ± 0.000     0.015 ± 0.005      0.026 ± 0.015
          MLP            0.011 ± 0.000     0.32 ± 0.12        0.920 ± 0.420
          SVR            0.011             0.28               1.2
F-2       EQL            0.01 ± 0.00       0.013 ± 0.004      0.026 ± 0.019
          MLP            0.01 ± 0.00       0.2 ± 0.014        0.49 ± 0.043
          SVR            0.011             0.3                0.94
F-3       EQL            0.01 ± 0.000      0.047 ± 0.012      0.35 ± 0.11
          EQL (no cos)   0.01 ± 0.000      0.01 ± 0.000       0.011 ± 0.001
          MLP            0.01 ± 0.000      0.084 ± 0.007      0.4 ± 0.021
          SVR            0.01              0.071              0.39
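The claimed one-hidden-layer approximation of the F-2 product term can be checked numerically. The sketch below uses the phase constants of the learned formula from Fig. 4(b); the grid is an arbitrary choice, and the printed maximum deviation (roughly 0.07) should be read against output magnitudes of up to 2/3.

```python
import numpy as np

x1, x2 = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-2, 2, 201))

# learned one-layer surrogate from Fig. 4(b)
surrogate = (0.41 * np.cos(-6.28 * x1 + 3.93 + 0.41 * x2)
             + 0.41 * np.sin(6.29 * x1 + 0.79 + 0.41 * x2))

# the F-2 term it stands in for (including the 1/3 prefactor)
target = x2 * np.cos(2 * np.pi * x1 + np.pi / 4) / 3.0

print(np.max(np.abs(surrogate - target)))   # ~0.07 over x2 in [-2, 2]
```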
X-Ray transition energies. As a further example we consider data measured in atomic physics. When shooting electron beams onto atoms one can excite them, and they consequently emit x-ray radiation with characteristic peak energies. For each element/isotope these energies are different, as they correspond to the potential difference between the electron shells, such that one can identify elements in a probe this way. The data is taken from Deslattes et al. (2003), where we consider one specific transition, called the Kα2 line, because it was measured for all elements. The true relationship between atomic number Z and transition energies is complicated, as it involves many-body interactions and no closed-form solution exists. Nevertheless we can find out which relationships our system proposes. It is known that the main relationship is Kα2 ∝ Z² according to Moseley's law. Further correction terms for elements with larger Z are potentially of higher order. We have data for elements with 10 ≤ Z ≤ 100, which is split into training/validation sets in the range [10, 91] (70/10 data points) and an extrapolation test set in the interval [92, 100] (14 data points because of isotopes). Since we have so little data we evaluate the performance for 10 independent training/validation splits. The data is scaled to lie in [0, 1], i.e. x = Z/100 and y = Kα2/100000. Model selection is here based on validation error only. The selection for sparsity and validation error only yields the Z² relationship. Mini-batch size is 2 here and T = 50000 was used. Figure 5 presents the data, the predictions, the learned formulas and the numerical results. EQL and SVR achieve similar performance and MLP is significantly worse. However, EQL also yields interpretable formulas, see Fig. 5(e), that can be used to gain insights into the potential relationship.

Figure 5: X-Ray transition energies. (a) Measured data and predicted values by EQL and (b) visualized prediction error for all methods for one train/validation splitting. (c) EQL solutions during model selection in validation error-sparsity space, see Appendix A1 for details. (d) numeric results: RMS errors with standard deviation for 10 independent train/validation splits, interpolation / extrapolation: EQL 0.00042 / 0.0061 ± 0.0038, MLP 0.002 / 0.0180 ± 0.0024, SVR 0.00067 / 0.0057 ± 0.0014. In real units the error is in 100 keV and is well below the difference between neighboring high-Z elements. (e) learned formulas for different sparsities s (lowest dot for each s in (c)):

s = 1: y = 1.28x² − 0.183x + 0.026
s = 2: y = 1.98x² − 1.42x + 0.618 − 1.45 sigm(3.65x − 0.3)
s = 3: y = −0.38z + 2.47 sigm(−2.25z − 2.77) + 0.38, with z = cos(2.32x − 0.08)
s = 4: y = 0.221z + 0.42 sigm(0.75z − 3.73)

Let us now go beyond our assumptions and consider cases where the true target function is not an element of the hypothesis set.

Consider a pendulum attached to a cart that can move horizontally along a rail but that is attached to a spring-damper system, see Fig. 6(a). The system is parametrized by 4 unknowns: the position of the cart, the velocity of the cart, the angle of the pendulum and the angular velocity of the pendulum. We combine these into a four-dimensional vector x = (x1, ..., x4).

We set up a regression problem with four outputs from the corresponding system of ordinary differential equations, where y1 = ẋ1 = x3, y2 = ẋ2 = x4 and

y3 = (−x1 − 0.01x3 + x3 sin(x2) + 0.1x4 cos(x2) + 9.81 sin(x2) cos(x2)) / (sin²(x2) + 1),
y4 = (−0.2x4 − 19.62 sin(x2) + x1 cos(x2) + 0.01x3 cos(x2) − x3 sin(x2) cos(x2)) / (sin²(x2) + 1).   (13)

The formulas contain divisions, which are not included in our architecture due to their singularities. To incorporate them in a principled manner is left for future work. Thus, the cart-pendulum dynamics is outside the hypothesis class; a direct implementation of the ground-truth equations is sketched below.
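A direct transcription of Eq. (13) as printed here; note that the original typesetting of these equations was partially garbled, so individual terms should be treated with care.

```python
import numpy as np

def cart_pendulum_rhs(x1, x2, x3, x4):
    """Ground-truth outputs (y1..y4) of the cart-pendulum regression problem,
    implemented from Eq. (13); constants as given in the text and Fig. 6
    (unit masses and lengths, g = 9.81, friction constant 0.01)."""
    denom = np.sin(x2) ** 2 + 1.0
    y1 = x3
    y2 = x4
    y3 = (-x1 - 0.01 * x3 + x3 * np.sin(x2) + 0.1 * x4 * np.cos(x2)
          + 9.81 * np.sin(x2) * np.cos(x2)) / denom
    y4 = (-0.2 * x4 - 19.62 * np.sin(x2) + x1 * np.cos(x2)
          + 0.01 * x3 * np.cos(x2) - x3 * np.sin(x2) * np.cos(x2)) / denom
    return y1, y2, y3, y4
```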
In this case we cannot expect great extrapolation performance, and this is confirmed by the experiments. In Fig. 6(b,c) the extrapolation performance is illustrated by slicing through the input space. The near extrapolation performance is still acceptable for both EQL and MLP, but as soon as the training region is left further, even the best instances differ considerably from the true values, see also the numeric results in Tab. 4. The SVR is performing poorly also for the near extrapolation range. Inspecting the learned expressions, we find that the sigmoid functions are rarely used.

Figure 6: Cart-pendulum system. (a) sketch of the system. The lengths and masses are set to 1, the gravitation constant is 9.81 and the friction constant is 0.01. (b,c) slices of outputs y3 and y4 for inputs x1 = x2 = x3 = x4 = x for the true system equation (Eq. 13) and the best EQL and MLP instances.

Table 4: Interpolation and extrapolation performance for cart-pendulum dynamics. See Tab. 1 for details. Note that predicting 0 would yield an error of 0.96 on the far test set.
          interpolation       extrapol. (near)    extrapol. (far)
EQL       0.0103 ± 0.0000     0.0621 ± 0.0208     0.180 ± 0.056
MLP       0.0101 ± 0.0000     0.0184 ± 0.0008     0.195 ± 0.006
SVR       0.0118              0.227               0.639

CONCLUSIONS

We presented a new network architecture called EQL that can learn analytic expressions that typically occur in equations governing physical, in particular mechanical, systems. The network is fully differentiable, which allows end-to-end training using backpropagation. By sequencing L1 regularization and fixing the L0 norm we achieve sparse representations with unbiased estimation of factors within the learned equations. We also introduce a model selection procedure specifically designed to select for good extrapolation quality by a multiobjective criterion based on validation error and sparsity. The proposed method is able to learn functional relations and extrapolate them to unseen parts of the data space, as we demonstrate by experiments on synthetic as well as real data. The approach learns concise functional forms that may provide insights into the relationships within the data, as we show on physical measurements of x-ray transition energies.

The optimization problem is nontrivial and has many local minima. We have shown cases where the algorithm is not reliably finding the right equation but instead finds an approximation only, in which case extrapolation may be poor.

If the origin of the data is not in the hypothesis class, i.e. the underlying expression cannot be represented by the network, good extrapolation performance cannot be achieved. Thus it is important to increase the model class by incorporating more base functions, which we will address in future work alongside the application to even larger examples. We expect good scaling capabilities to larger systems due to the gradient based optimization. Apart from the extrapolation we also expect improved interpolation results in high-dimensional spaces, where data is less dense.

This work was in parts funded by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. 308036: "Life-long learning of visual scene understanding" (L3ViSU). GM received funding from the People Programme (Marie Curie Actions) in FP7/2007-2013 under REA grant agreement no. 291734.

REFERENCES

Debasish Basak, Srimanta Pal, and Dipak Chandra Patranabis. Support vector regression. Neural Information Processing-Letters and Reviews, 11(10):203-224, 2007.

Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1-2):151-175, 2010.

Christopher M Bishop. Neural networks for pattern recognition. Oxford University Press, 1995.

David S Broomhead and David Lowe. Radial basis functions, multi-variable functional interpolation and adaptive networks. Technical report, DTIC Document, 1988.

Laszlo Gyorfi, Wolfgang Hardle, Pascal Sarda, and Philippe Vieu. Nonparametric curve estimation from time series, volume 60. Springer, 2013.

Judea Pearl. Causality. Cambridge University Press, 2000.

Hoifung Poon and Pedro M. Domingos. Sum-product networks: A new deep architecture, 2012.

Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset shift in machine learning. The MIT Press, 2009.
Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. Science, 324(5923):81-85, 2009. ISSN 0036-8075. doi: 10.1126/science.1165893. URL http://science.sciencemag.org/content/324/5923/81.

Alex J Smola and Bernhard Scholkopf. A tutorial on support vector regression. Statistics and Computing, 14(3):199-222, 2004.

Donald F. Specht. A general regression neural network. IEEE Transactions on Neural Networks (TNN), 2(6):568-576, 1991.

Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pp. 267-288, 1996.

K-R Muller, Alexander J Smola, Gunnar Ratsch, Bernhard Scholkopf, Jens Kohlmorgen, and Vladimir Vapnik. Predicting time series with support vector machines. In Artificial Neural Networks (ICANN), pp. 999-1004. Springer, 1997.

A1: MODEL SELECTION DETAILS

We actually want a measure of complexity of the formula; however, since it is not clear what the right choice of such a measure is, we use the sparsity instead, counting the number of active/used hidden units, denoted by s. For a given network φ we get

s(φ) = Σ_{l=1..L} Σ_{i=1..k} Θ(|W(l)_{i,·}| − 0.01),

where Θ is the Heaviside function and 0.01 is an arbitrary threshold. For the multiplication units the norms of the incoming weights for both inputs are added (omitted to avoid clutter in the formula).

SELECTION CRITERIA

As stated in the main text, we strive to choose the model that is both simple and has good performance in terms of the validation set. Since both quantities have different scales, we propose to choose them based on their ranking. Let r^v(φ) and r^s(φ) be the ranks of the network φ w.r.t. the validation error and the sparsity s(φ) respectively; then the network with minimal squared rank norm is selected:

arg min_φ [ r^v(φ)² + r^s(φ)² ]   (15)

Figure 7: Model selection criteria. (a) extrapolation performance depending on validation error and sparsity s for the kin-4-end dataset as an illustration. (b) the same as in (a) but in rank-space. Circle arcs indicate the L2-norm iso-lines.

In Fig. 7 the extrapolation performance of all considered networks for the kin-4-end dataset is visualized in dependence of validation error and sparsity. It becomes evident that the best performing networks are both sparse and have a low validation error. A code sketch of this selection rule is given at the end of this appendix.

A2: DEPENDENCE ON NOISE AND NUMBER OF DATA POINTS

In order to understand how the method depends on the amount of noise and the number of datapoints, we scan through the two parameters and present the empirical results in Fig. 8. In general the method is robust to noise and, as expected, more noise can be compensated for by more data.

Figure 8: Interpolation performance (a) and extrapolation performance (b) (on the noise-free test set) depending on the number of data points and the size of the additive noise (SNR from 40 dB down to 0 dB) for the kin-4-end dataset as an illustration. The white line represents an arbitrary threshold below which we consider a successful solution of the interpolation and extrapolation task.
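A minimal sketch of the rank-based selection rule of Eq. (15):

```python
import numpy as np

def select_model(val_errors, sparsities):
    """Rank candidates by validation error and by sparsity, then pick the one
    minimizing the squared rank norm (Eq. 15). Ties are broken by order."""
    def ranks(a):
        r = np.empty(len(a), dtype=int)
        r[np.argsort(a)] = np.arange(1, len(a) + 1)   # rank 1 = smallest value
        return r

    rv = ranks(np.asarray(val_errors, dtype=float))
    rs = ranks(np.asarray(sparsities, dtype=float))
    return int(np.argmin(rv ** 2 + rs ** 2))

# example: candidate 2 is neither the sparsest nor the most accurate,
# but has the best trade-off in rank-space
best = select_model(val_errors=[0.011, 0.030, 0.012], sparsities=[12, 3, 5])
print(best)  # -> 2
```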
SJCscQcge

1 INTRODUCTION

Convolutional neural networks (CNNs) are among the most popular techniques employed for computer vision tasks, including but not limited to image recognition, localization, video tracking, and image and video segmentation (Goodfellow et al., 2016). Though these deep networks have exhibited good performances for these tasks, they have recently been shown to be particularly susceptible to adversarial perturbations of the input images (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016c;b; Kurakin et al., 2016; Grosse et al., 2016; Zagoruyko, 2016b). Vulnerability of these networks to adversarial attacks can lead to undesirable consequences in many practical applications using them. For example, adversarial attacks can be used to subvert fraud detection, malware detection, or mislead autonomous navigation systems (Papernot et al., 2016c; Grosse et al., 2016). Further strengthening these results is a recent observation by Kurakin et al. (2016), who showed that a significant fraction of adversarial images crafted using the original network are misclassified even when fed to the classifier through a physical world system (such as a camera).

In this paper, we investigate the problem of robustness of state-of-the-art convolutional neural networks (CNNs) to simple black-box adversarial attacks. The rough goal of adversarial attacks is as follows: Given an image I that is correctly classified by a machine learning system (say, a CNN), is it possible to construct a transformation of I (say, by adding a small perturbation to some or all the pixels) that now leads to misclassification by the system? Since large perturbations can trivially lead to misclassification, the attacks seek to limit the amount of perturbation applied under some chosen metric. More often than not, in these attacks, the modification done to the image is so subtle that the changes are imperceptible to a human eye. Our proposed attacks also share this property, in addition to being practical and simplistic, thus highlighting a worrying aspect about lack of robustness prevalent in these modern vision techniques.

The existing literature can broadly be divided into two lines of work on adversarial attacks, based on different assumptions about the adversarial knowledge of the target network. The first line of work assumes that the adversary has detailed knowledge of the network architecture and the parameters resulting from training (or access to the labeled training set) (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016c). Using this information, an adversary constructs a perturbation for a given image. The most effective methods are gradient-based: a small perturbation is constructed based on the gradients of the loss function w.r.t. the input image and a target label. Often, adding this small perturbation to the original image leads to a misclassification. In the second line of work, an adversary has restricted knowledge about the network, being able only to observe the network's output on some probed inputs (Papernot et al., 2016b). Our work falls into this category. While this black-box model is a much more realistic and applicable threat model, it is also more challenging because it considers weak adversaries without knowledge of the network architecture, parameters, or training data.
Interestingly, our results suggest that this level of access and a small number of queries provide sufficient information to construct an adversarial image.

Table 1: The top row shows the original images and the bottom row shows the perturbed images. The misclassification is as follows: (a) a stingray misclassified as a sea lion, (b) an ostrich misclassified as a goose, (c) a jay misclassified as a junco, and (d) a water ouzel misclassified as a redshank.

As we operate in a black-box setting, we use a gradient-free approach to adversarial image generation. Papernot et al. (2016b) were the first to discuss a black-box attack against deep learning systems. Their attack crucially relies on the observation that there is a transferability (generalization) property in adversarial examples, i.e., adversarial examples from one model transfer to another. Our proposed attack, on the other hand, is much more simple and direct, does not require this transferability property, and hence is more effective in constructing adversarial images, in addition to having some other computational advantages. We demonstrate that our method is capable of constructing adversarial images for several network architectures trained on different datasets. In particular, in this paper we consider the CIFAR10, MNIST, SVHN, STL10, and ImageNet1000 datasets, and two popular network architectures, Network-in-Network (Lin et al., 2014) and VGG (Simonyan & Zisserman, 2014). In Table 1, we show four images from the ImageNet1000 dataset. The original images are in the upper row. The bottom row shows the corresponding perturbed images produced by our algorithm, which are misclassified by a VGG CNN-S network (Chatfield et al., 2014a).

Our Contributions. In this work, we present simple and effective black-box adversarial attacks on deep convolutional neural networks. We make the following main contributions in this paper.

(1) The first question we investigate is the influence of perturbing a single pixel on the prediction. To do so, we devise a simple scheme, based on randomly selecting a single pixel and applying a strong perturbation to it. Somewhat surprisingly, we noticed that a few trials of this random experiment are already quite enough in generating adversarial images for low-resolution image sets. In fact, in many cases, for misclassification, the amount of perturbation needed to be applied to the selected pixel is also quite small. For high-resolution images, a similar phenomenon holds, except our scheme now picks a random set of around 50 pixels. These simple experiments show the ease of generating adversarial images for modern deep CNNs without knowledge of either the network architecture or its parameters. There is however one shortcoming in these approaches, in that the perturbed image might have pixel values that are outside some expected range.

(2) We overcome this above shortcoming by showing that lower perturbation suffices if we carefully select the pixels for perturbation. The approach is based on the idea of greedy local search, an iterative search procedure, where in each round a local neighborhood is used to refine the current image, in the process minimizing the probability of the network assigning high confidence scores to the true class label. Again, while the algorithm is quite simple, it is rather effective in generating adversarial images with quite small perturbations. We also show an interesting connection between the pixels chosen for perturbation by our approach and the saliency map of an image, as defined by Simonyan et al. (2014), which ranks pixels based on their influence on the output score. In effect our approach identifies pixels with high saliency scores, but without explicitly using any gradient information (as needed in the definition of the saliency map (Simonyan et al., 2014)). Intuitively, in each round, our local-search based approach computes an implicit approximation to the gradient of the current image by understanding the influence of a few pixels on the output, which is then used to update the current image.

(3) We perform extensive experimental evaluations, and show that our local-search based approach reliably generates adversarial examples with little perturbation (even when compared to a recent elegant adversarial attack proposed by Goodfellow et al. (2015), which needs perfect knowledge of the network). Another feature of our attack is that, by design, our approach only perturbs a very small fraction of the pixels during the adversarial image generation process (e.g., on the ImageNet1000 dataset we on average perturb only about 0.5% of the pixels per image). Most previous attacks require the ability to perturb all the pixels in the image.

(4) Our approaches naturally extend to a stronger notion of misclassification (that we refer to as k-misclassification), where the goal is to ensure that the true label of the image does not ever appear in the top-k predictions of the network (obtained by sorting the confidence score vector). This notion especially captures the fact that many modern systems (e.g., ImageNet competition entrants) are evaluated based on top-k predictions. To the best of our knowledge, these are the first adversarial attacks on deep neural networks achieving k-misclassification.
Szegedy et al. (2014) used a box-constrained L-BFGS technique to generate adversarial examples. They also showed a transferability (or generalization) property for adversarial examples, in that adversarial examples generated for one network might also be misclassified by a related network with possibly different hyper-parameters (number of layers, initial weights, etc.). However, the need for solving a series of costly penalized optimization problems makes this technique computationally expensive for generating adversarial examples. This issue was fixed by Goodfellow et al. (2015) who, motivated by the underlying linearity of the components used to build a network, proposed an elegant scheme based on adding perturbation proportional to the sign of the network's cost function gradient; a sketch of this scheme is given below. Recently, Moosavi-Dezfooli et al. (2016) used an iterative linearization procedure to generate adversarial examples with lesser perturbation. Another recent attack proposed by Papernot et al. (2016c) uses a notion of adversarial saliency maps (based on the saliency maps introduced by Simonyan et al. (2014)) to select the most sensitive input components for perturbation. This attack has been adapted by Grosse et al. (2016) for generating adversarial samples for neural networks used as malware classifiers. However, all these above described attacks require perfect knowledge of the target network's architecture and parameters, which limits their applicability to strong adversaries with the capability of gaining insider knowledge of the target system.
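For concreteness, a minimal sketch of the gradient-sign scheme of Goodfellow et al. (2015), which operates in the white-box setting it requires; `loss_grad_wrt_input` is a hypothetical stand-in for a framework-specific gradient computation, and ε is an assumed perturbation budget.

```python
import numpy as np

def fgsm(image, true_label, loss_grad_wrt_input, eps=0.007):
    """Fast gradient sign method: move every pixel a step of size eps in the
    direction that increases the training loss for the true label."""
    g = loss_grad_wrt_input(image, true_label)   # assumed white-box access
    return image + eps * np.sign(g)
```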
Our focus in this paper is the setting of black-box attacks, where we assume that an adversary has only the ability to use the network as an oracle. The adversary can obtain output from supplied input, and use the observed input-output relationship to craft adversarial images. In the context of deep neural networks, a black-box attack was first proposed by Papernot et al. (2016b) with the motivation of constructing an attack on a remotely hosted system. Their general idea is to first approximate the target network by querying it for output labels, which is used to train a substitute network, which is then used to craft adversarial examples for the original network. The success of the attack crucially depends on the transferability property holding between the original and the substitute network. Our black-box attack is more direct, and completely avoids the transferability assumption, making it far more applicable. We also avoid the overhead of gathering data and training a substitute network. Additionally, our techniques can be adapted to a stronger notion of misclassification.

A complementary line of work has focused on building defenses against adversarial attacks. Although designing defenses is beyond the scope of this paper, it is possible that adapting the previously suggested defense solutions, such as Jacobian-based regularization (Gu & Rigazio, 2015) and distillation (Papernot et al., 2016d), can reduce the efficacy of our proposed attacks. Moreover, the recently proposed technique of differentially private training (Abadi et al., 2016) can also prove beneficial here.

The study of adversarial instability has led to the development of solutions that seek to improve training to in return increase the robustness and classification performance of the network. In some cases, adding adversarial examples to the training set (adversarial training) can act like a regularizer (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016). The phenomenon of adversarial instability has also been theoretically investigated for certain families of classifiers under various models of (semi) random noise (Fawzi et al., 2015; 2016). However, as we discuss later, due to the peculiar nature of adversarial images generated by our approaches, a simple adversarial training is only mildly effective in preventing future similar adversarial attacks.

The security of machine learning in settings distinct from deep neural networks is also an area of active research with various known attacks under different threat models. We refer the reader to a recent survey by McDaniel et al. (2016) and references therein.

Notation and Normalization. We denote by [n] the set {1, ..., n}. The dataset of images is partitioned into train and test (or validation) subsets. An element of a dataset is a pair (I, c(I)) for an image I and a ground truth label c(I) of this image. We assume that the class labels are drawn from the set {1, ..., C}, i.e., we have a set of C ∈ ℕ possible labels. We assume that images have ℓ channels (in experiments we use the RGB format) and are of width w ∈ ℕ and height h ∈ ℕ. We say that (b, x, y) is a coordinate of an image for channel b and location (x, y), and (⋆, x, y) is a pixel of an image, where (⋆, x, y) represents all the ℓ coordinates corresponding to different channels at location (x, y). I(b, x, y) ∈ ℝ is the value of I at the (b, x, y) coordinate, and similarly I(⋆, x, y) represents the vector of values of I at the (⋆, x, y) pixel.

It is common practice to normalize the image before passing it to the network. A normalized image has the same dimensions as the original image, but differs in the coordinate values. In this work we treat the normalization procedure as an external procedure and assume that all images are normalized. As we always work with normalized images, in the following, a reference to an image means a normalized input image. We denote by LB and UB two constants such that all the coordinates of all the normalized images fall in the range [LB, UB]. Generally, LB < 0 and UB > 0. We denote by I ⊆ ℝ^{ℓ×w×h} the space of all (valid) images, which satisfy the following property: for every I ∈ I, for all coordinates (b, x, y) ∈ [ℓ] × [w] × [h], I(b, x, y) ∈ [LB, UB].

We denote by NN a trained neural network (trained on some set of training images). NN takes an image I as an input and outputs a vector NN(I) = (o1, ..., oC), where oj denotes the probability, as determined by NN, that image I belongs to class j. We denote by π(NN(I), k) a function that returns the set of indices that are the top-k predictions (ranked by decreasing probability scores, with ties broken arbitrarily) of the network NN. For example, if NN(I) = (0.25, 0.1, 0.2, 0.45), then π(NN(I), 1) = {4} (corresponding to the location of the entry 0.45). Similarly, π(NN(I), 2) = {4, 1}, π(NN(I), 3) = {4, 1, 3}, etc. A small sketch of π is given below.
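A minimal sketch of π, assuming 1-indexed class labels as in the text:

```python
import numpy as np

def top_k(prob_vector, k):
    """pi(NN(I), k): indices of the k highest-probability classes
    (1-indexed, as in the text; ties broken arbitrarily by the sort)."""
    order = np.argsort(prob_vector)[::-1]   # classes by decreasing score
    return {int(i) + 1 for i in order[:k]}

print(top_k(np.array([0.25, 0.1, 0.2, 0.45]), 2))   # -> {1, 4}
```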
Adversarial Goal. Before we define the goal of black-box adversarial attacks, we define misclassification for a NN. In this paper, we use a stronger notion of misclassification, which we refer to as k-misclassification for k ∈ ℕ. The following definition is a reconstruction from the surrounding text, as the formal statement was lost in extraction.

Definition 1 (k-misclassification) A neural network NN k-misclassifies an image I with true label c(I) if the output of the network satisfies c(I) ∉ π(NN(I), k).

In other words, k-misclassification means that the network ranks the true label below at least k other labels. Traditionally the literature on adversarial attacks has only considered the case where k = 1. Note that an adversary that achieves a k-misclassification for k > 1 is a stronger adversary than one achieving a 1-misclassification (k-misclassification implies k′-misclassification for all 1 ≤ k′ ≤ k). If k = 1, we simply say that NN misclassifies the image.

In our setting, an adversary ADV is a function that takes an image I as input and whose output is another image ADV(I) (with the same number of coordinates as I). We define an adversarial image as one that fools a network into k-misclassification.

Definition 2 (Adversarial Image) Given access to an image I, we say that ADV(I) is a k-adversarial image (resp. adversarial image) if c(I) ∈ π(NN(I), k) and c(I) ∉ π(NN(ADV(I)), k) (resp. c(I) ∈ π(NN(I), 1) and c(I) ∉ π(NN(ADV(I)), 1)).

The goal of adversarial attacks is to design this function ADV so that it succeeds in fooling the network for a large set of images. Ideally, we would like to achieve this misclassification³ by adding only some small perturbation (under some metric) to the image. The presence of adversarial images shows that there exist small perturbations in input that produce large perturbations at the output of the last layer.

Adversarial threat models can be divided into two broad classes.⁴ The first class of models roughly assumes that the adversary has total knowledge of the network architecture and the parameters resulting from training (or access to the labeled training set). The second class of threat models, as considered in this paper, makes no assumptions about the adversary having access to the network architecture, network parameters, or the training set. In this case, the adversary has only black-box (oracle) access to the network, in that it can query the network NN on an image I and observe the output NN(I). In our experimental section (Section 6), we also consider a slight weakening of this black-box model where the adversary has only the ability to use a proxy of the network NN as an oracle.

³ Note that the misclassification is at test time, once the trained network has been deployed.
⁴ More fine-grained classification has also been considered in (Papernot et al., 2016c), where adversaries are categorized by the information and capabilities at their disposal.
A black-box threat model in the context of deep neural networks was first considered by Papernot et al. (2016b). There is however one subtle difference between the threat model considered here and that considered by Papernot et al. (2016b) in what the adversary can access as an output. While the adversary presented in (Papernot et al., 2016b) requires access to the class label assigned by the network, which is the same level of access needed by our simple randomized adversary (presented in Section 4), our local-search adversary (presented in Section 5) requires access to o_{c(I)} (the probability assigned to the true label c(I) by the network on input I) and the vector π (for checking whether k-misclassification has been achieved). Our adversarial approaches do not require access to the complete probability vector (NN(I)). Also, as pointed out earlier, compared to (Papernot et al., 2016b), our approach is more direct (needs no transferability assumption), requires no retraining, and can be adapted to achieve k-misclassification rather than just 1-misclassification.

BLACK-BOX GENERATION: A FIRST ATTEMPT

In this section, we present a simple black-box adversary that operates by perturbing a single pixel (or a small set of pixels) selected at random. In the next section, we build upon this idea to construct an adversary that achieves better success by making adaptive choices.

Power of One Pixel. Starting point of our investigation is to understand the influence of a single pixel in an adversarial setting. Most existing adversarial attacks operate by applying the same perturbation on each individual pixel while minimizing the overall perturbation (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016), while recent research has yielded attacks that perturb only a fraction of the pixels (Papernot et al., 2016c;b; Grosse et al., 2016). However, in all these cases, no explicit restriction is placed on the number of pixels that can be perturbed. Therefore, it is natural to ask: is it possible to force the network to misclassify an image by modifying a single pixel? If so, how strong should this perturbation be? We run several experiments to shed light on these questions. For simplicity, in this section, we focus on the case of 1-misclassification, even though all discussions easily extend to the case of k-misclassification for k > 1. We begin with a useful definition.

Definition 3 (Critical Pixel)⁵ Given a trained neural network NN and an image I, a pixel (⋆, x, y) in I is a critical pixel if a perturbation of this pixel generates an image that is misclassified by the network NN. In other words, (⋆, x, y) is a critical pixel in I if there exists another neighboring image I_P which differs from I only in values at the pixel location (x, y) such that c(I) ∉ π(NN(I_P), 1).

The image I_P can be generated in multiple ways; here we consider a class of sign-preserving perturbation functions defined as follows. Let PERT(I, p, x, y) be a function that takes as input an image I, a perturbation parameter p ∈ ℝ, and a location (x, y), and outputs an image I_P^{(x,y)} defined as:

I_P^{(x,y)}(b, u, v) = I(b, u, v) if x ≠ u or y ≠ v, and p · sign(I(b, u, v)) otherwise.   (1)

In the following, we say a pixel (⋆, x, y) in image I is critical iff c(I) ∉ π(NN(I_P^{(x,y)}), 1); a sketch of this check appears below.

⁵ In the definition of a critical pixel we have not considered how well the original image I is classified by NN, i.e., whether c(I) ∈ π(NN(I), 1). In particular, if c(I) ∉ π(NN(I), 1), then by definition all pixels in the image are critical even without any perturbation. In our experiments, we ignore these images and only focus on images I where c(I) ∈ π(NN(I), 1), which we refer to as good images (Definition 4).

The following definition will be useful for our ensuing discussion; its formal statement is reconstructed from the footnote above, as it was lost in extraction.

Definition 4 (Good Image) We say that an image I is good for a network NN if the network classifies it correctly, i.e., c(I) ∈ π(NN(I), 1).
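A minimal sketch of the perturbation function PERT from Eq. (1) and the critical-pixel check; `predict` stands for the black-box oracle NN and is an assumed interface returning the probability vector (0-indexed classes for simplicity).

```python
import numpy as np

def pert(image, p, x, y):
    """Eq. (1): copy the image, overwrite all channels of pixel (x, y)
    with p * sign of their current values."""
    out = image.copy()                       # image shape: (channels, w, h)
    out[:, x, y] = p * np.sign(out[:, x, y])
    return out

def is_critical(image, x, y, p, true_label, predict, k=1):
    """Pixel (x, y) is critical if the true label leaves the top-k
    predictions of the perturbed image."""
    probs = predict(pert(image, p, x, y))    # one black-box query
    topk = np.argsort(probs)[::-1][:k]
    return true_label not in topk
```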
Critical Pixels are Common. Our first experiment is to investigate the existence of critical pixels in the considered dataset of images. To do so, we perform a simple procedure that picks a location (x, y) in the image I, applies the perturbation function PERT to it, and runs the perturbed image through the trained network, checking whether it was misclassified or not. While we could exhaustively repeat this procedure for all pixels in an image, for computational efficiency we instead perform it only on a fraction of randomly chosen pixels, and our results somewhat surprisingly suggest that in many cases this is sufficient to generate an adversarial image. Algorithm RANDADV presents the pseudo-code for this experiment; a sketch is given below. Algorithm RANDADV selects U random pixels (with replacement) and checks whether each pixel is critical or not. The algorithm output is an unbiased estimate of the fraction of critical pixels in the input image I. Note that the algorithm can fail in generating an adversarial image (i.e., in finding any critical pixel for an image).

Our first observation is that sometimes even a small perturbation to a pixel can be sufficient to obtain an adversarial image. Table 2 shows two images and their adversarial counterparts, with p = 1. Often original and adversarial images are indistinguishable to the human eye, but sometimes the critical pixel is visible (Table 2).

Table 2: Each row contains an original image followed by a misclassified image where only one pixel (pointed out using a black arrow) was perturbed with perturbation parameter p = 1. After perturbation, in the first case (images (a) and (b)) an automobile gets misclassified as a truck, and in the second case (images (c) and (d)) a cat gets misclassified as a dog.

We also tried to understand the effect of larger perturbation parameter values. We set U to half the number of pixels in each image. After usual training of the neural network using the training set (see Section 6 for more details about training), we ran Algorithm RANDADV on 1000 randomly drawn images from the test set of the corresponding dataset. In our experiments, we varied the perturbation parameter in the range {1, 5, 10, 100}. Before we consider our results, we note that some of the perturbation values that we use to construct the adversarial image might construct images that are not in the original image space.⁶ However, these results are still somewhat surprising, because even though we allow large (even out-of-range) perturbation, it is applied to exactly one pixel in the image, and it appears that it suffices to even pick the pixel at random.

Figures 1 and 2 show results for 4 datasets (more details about the datasets and the networks are presented in Section 6). On the x-axis we show the perturbation parameter p. In Figure 1, the y-axis represents the output of Algorithm RANDADV averaged over good images for the network.⁷ The first observation that we can make is that the critical pixels are common, and in fact, as p grows, the fraction of critical pixels increases.

⁶ We fix this shortcoming using a local-search based strategy in the next section.
⁷ Note that by focusing on good images, we make sure that we are only accounting for those cases where perturbation is needed for creating an adversarial image.
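A sketch of the random experiment, assuming the `pert` and `is_critical` helpers above; it runs U trials with replacement and returns both the critical-pixel fraction estimate and the first adversarial image found (if any).

```python
import numpy as np

def rand_adv(image, true_label, predict, p, trials, rng=np.random.default_rng()):
    """Randomized single-pixel adversary: probe `trials` random pixel locations
    and report the fraction that are critical (an unbiased estimate)."""
    _, w, h = image.shape
    hits, adversarial = 0, None
    for _ in range(trials):
        x, y = int(rng.integers(w)), int(rng.integers(h))
        if is_critical(image, x, y, p, true_label, predict):
            hits += 1
            if adversarial is None:
                adversarial = pert(image, p, x, y)   # keep one adversarial image
    return hits / trials, adversarial
```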
For example, in CIFAR10, with p = 100, almost 80% (on average) of the randomly selected pixels are critical. In Figure 2, the y-axis represents the fraction of successful adversarial images generated by Algorithm RANDADV, i.e., the fraction of inputs where Algorithm RANDADV is successful in finding at least one critical pixel. Again we notice that as p grows it gets easier for Algorithm RANDADV to construct an adversarial image.

Another observation is that for the MNIST and STL10 datasets, Algorithm RANDADV succeeds in finding fewer critical pixels as compared to the SVHN and CIFAR10 datasets. We give the following explanation for this observation. The majority of pixels in an MNIST image belong to the background, hence these pixels are less likely to be critical. On the other hand, STL10 contains high-resolution images, 96 × 96, where perhaps a single pixel has less of an impact on the output prediction. The latter observation motivated us to generalize the notion of a critical pixel to a critical set.

Figure 1: Output of Algorithm RANDADV (averaged over good images) on (a) MNIST, (b) SVHN, (c) CIFAR10, and (d) STL10. The results are for two networks: Network-in-Network and VGG. The perturbation parameter p is varied over {1, 5, 10, 100}.

Figure 2: Fraction of images where Algorithm RANDADV succeeds in finding at least one critical pixel, on the same four datasets. Again we start with only good images.

Definition 5 (Critical Set) Given a trained neural network NN and an image I, a critical set of I is a set of pixels ∪_{(x,y)}{(⋆, x, y)} in I such that a perturbation of these pixels generates an image that is misclassified by the network NN.

The general goal will be to find critical sets of small size in an image. With this notion of a critical set, we considered constructing adversarial images on the high-resolution ImageNet1000 dataset by perturbing all pixels in a set simultaneously. Similarly, we can devise a simple extension of Algorithm RANDADV that operates with a set of pixels and outputs an unbiased estimate for the fraction of critical sets of some fixed size (50 in our case) in the input image.⁸ Note that a set size of 50 pixels is still a tiny fraction of all the pixels in a standard (center) crop of size 224 × 224, namely just 0.09%. We use a larger perturbation parameter p than before, and set U, the budget on the number of trials per image, to 5000. Figure 3 shows our results. Overall, we note that we can draw similar conclusions as before, i.e., increasing the perturbation parameter creates more critical sets, making them easier to find, and relatively small perturbations are sufficient to construct adversarial images.

⁸ Searching over all pixel sets of size 50 pixels is computationally prohibitive, which again motivates the need for a randomized strategy as proposed in Algorithm RANDADV.

Figure 3: Experiments of Figures 1 and 2 for the high-resolution ImageNet1000 dataset, for the networks VGG CNN-S, VGG CNN-M, VGG CNN-M 2048, and VGG ILSVRC 19 (all Caffe models), with perturbation parameters up to 1000. The results are again for good images from a set of 1000 randomly selected images. We use a slightly modified version of Algorithm RANDADV that perturbs a set of 50 pixels.

BLACK-BOX GENERATION: A GREEDY APPROACH

The results from Section 4 show that most images have critical pixels such that modifying these pixels significantly leads to a failure of NN to classify the image correctly.
However, one shortcoming of Algorithm RandAdv was that, to build adversarial images, we sometimes had to apply a large perturbation to a single pixel (or a small set of pixels). Hence, there might exist a pixel (or a set of pixels) in the adversarial image whose coordinate value lies outside the valid range [LB, UB]. To overcome this issue, we need to redesign the search procedure to generate adversarial images that still belong to the original image space I (defined in Section 3). Here a brute-force approach is generally not feasible for computational reasons, especially for high-resolution images. Hence, we need to develop an efficient heuristic procedure to find the right small set of pixels to be perturbed. Our solution, presented in this section, is based on performing a greedy local search over the image space.

⁸Searching over all pixel sets of size 50 pixels is computationally prohibitive, which again motivates the need for a randomized strategy as proposed in Algorithm RandAdv.

[Figure 3 panels: (a) and (b) ImageNet1000; x-axis: perturbation parameter; legends: VGG CNN-S, VGG CNN-M, VGG CNN-M 2048, and VGG ILSVRC 19 (all Caffe).]
Figure 3: Experiments in Figures 1 and 2 for the high-resolution ImageNet1000 dataset. The results are again for good images from a set of 1000 randomly selected images. We use a slightly modified version of Algorithm RandAdv that perturbs a set of 50 pixels.

We consider the general k-misclassification problem (Definition 1), where an adversarial attack ensures that the true label does not appear in the top-k predictions of the network. We utilize a local-search procedure, an incomplete search procedure that is widely used for solving combinatorial problems appearing in diverse domains such as graph clustering, scheduling, logistics, and verification (Lenstra 1997). For a general optimization problem it works as follows. Consider an objective function f(z) : Rⁿ → R, where the goal is to minimize f(z). The local-search procedure works in rounds, where each round consists of two steps. Let z_{i−1} be the solution iterate after round i − 1, and consider round i. The first step is to select a small subset of points Z = {ẑ₁, ..., ẑₙ}, a so-called local neighborhood, and evaluate f(ẑⱼ) for every ẑⱼ ∈ Z. Usually, the set Z consists of points that are close to the current z_{i−1} for some domain-specific measure of distance. The second step selects a new solution z_i taking into account the previous solution z_{i−1} and the points in Z. Hence, z_i = g(f(z_{i−1}), f(ẑ₁), ..., f(ẑₙ)), where g is some pre-defined transformation function.

We adapt this general procedure to search for critical sets efficiently, as explained below. Our optimization problem will try to minimize the probability that the network assigns the perturbed image the class label of the original image, and by using a local-search procedure we generate perturbed images which differ from the original image in only a few pixels. Intuitively, in each round, our local-search procedure computes an implicit approximation to the gradient of the current image by understanding the influence of a few pixels on the output, which is then used to update the current image.

(a) First, we need to define the cost function f. Let I be the image (with true label c(I)) whose adversarial image we want to generate for a target neural network NN. For some input image Î, we use the objective function f_{c(I)}(Î), which equals the probability assigned by the network NN that the input image Î belongs to class c(I). More formally,

f_{c(I)}(Î) = o_{c(I)}, where NN(Î) = (o₁, ..., o_C),

with oⱼ denoting the probability, as determined by NN, that image Î belongs to class j. Our local-search procedure aims to minimize this function.
(b) Second, we consider how to form a neighborhood set of images. As mentioned above, the local-search procedure operates in rounds. Let I_{i−1} be the image after round i − 1. Our neighborhood will consist of images that are different in one pixel from the image I_{i−1}.
In other words, if we measure the distance between I_{i−1} and any image in the neighborhood as the number of perturbed pixels, then this distance is the same (equal to one) for all of them. Therefore, we can define the neighborhood in terms of a set of pixel locations. Let (P_X, P_Y)_i be a set of pixel locations. For the first round, (P_X, P_Y)₀ is randomly generated. At each subsequent round, it is formed based on the set of pixel locations that were perturbed in the previous round. Let (P*_X, P*_Y)_{i−1} denote the pixel locations that were perturbed in round i − 1 (formally defined below). Then

(P_X, P_Y)_i = ⋃_{(a,b) ∈ (P*_X, P*_Y)_{i−1}} { (x, y) : x ∈ [a − d, a + d], y ∈ [b − d, b + d] },

where d is a parameter. In other words, we consider the pixels that were perturbed in the previous round, and for each such pixel we consider all pixels in a small square with side length 2d centered at that pixel. This defines the neighborhood considered in round i.
(c) Third, we describe the transformation function g of a set of pixel locations. The function g takes as input an image Î, a set of pixel locations (P_X, P_Y), a parameter t that defines how many pixels will be perturbed by g, and two perturbation parameters p and r. In round i of the local-search procedure, the function g(I_{i−1}, (P_X, P_Y)_{i−1}, t, p, r) outputs a new image such that exactly t pixels of I_{i−1} are perturbed, along with an auxiliary set of pixel locations (P*_X, P*_Y)_i recording which t pixels were perturbed in this round, so we have (I_i, (P*_X, P*_Y)_i) = g(I_{i−1}, (P_X, P_Y)_{i−1}, t, p, r). Next we describe the transformations that g performs in round i. As the first step, g constructs a set of perturbed images based on (P_X, P_Y)_{i−1}:

ℐ = { PERT(I_{i−1}, p, (x, y)) : (x, y) ∈ (P_X, P_Y)_{i−1} },

where PERT is the perturbation function defined through (1). Then it computes the score of each image in ℐ as

∀ Î ∈ ℐ : score(Î) = f_{c(I)}(Î),

and it sorts (in decreasing order) the images in ℐ based on this score function to construct sorted(ℐ). Pixels whose perturbation leads to a larger decrease of f are more likely to be useful in constructing an adversarial candidate. From sorted(ℐ), it records a set of pixel locations (P*_X, P*_Y)_i based on the first t elements of sorted(ℐ), where the parameter t regulates the number of pixels perturbed in each round. Formally,

(P*_X, P*_Y)_i = { (x, y) : PERT(I_{i−1}, p, (x, y)) ∈ sorted(ℐ)[1 : t] },

where sorted(ℐ)[1 : t] represents the first t images in sorted(ℐ). Finally, I_i is constructed from I_{i−1} by perturbing each pixel at a location (x, y) ∈ (P*_X, P*_Y)_i with a perturbation value r. The perturbation is performed in a cyclic way (as explained in Algorithm Cyclic; a minimal sketch of this rule follows below) so that we make sure that all coordinate values in I_i are within the valid bounds LB and UB. Note that at the end of every round i, I_i is a valid image from the image space I.

We want to point out that the function g uses two perturbation parameters, p and r. The value of r is kept small, in the range [0, 2]. On the other hand, we do not put any explicit restrictions on the value of p. The best choice of p will be one that facilitates the identification of the "best" pixels to perturb in each round. In our experiments, we adjust the value of p automatically during the search. We defer this discussion to the experimental section.

Algorithm LocSearchAdv shows the complete pseudocode of our local-search procedure. At a high level, the algorithm takes an image as input and, in each round, finds some pixel locations to perturb using the objective function defined above, then applies the transformation function defined above to these selected pixels to construct a new (perturbed) image. It terminates if it succeeds in pushing the true label below the k-th place in the confidence score vector at any round. Otherwise, it proceeds to the next round (for a maximum of R rounds). Note that the number of pixels in an image perturbed by Algorithm LocSearchAdv is at most t · R, and in practice (see Tables 4, 5 and 6 in Section 6) it is much less.
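As promised above, here is a minimal sketch of the cyclic adjustment. It is our reading of Algorithm Cyclic: the text only says that a coordinate is scaled by r and kept within [LB, UB] "in a cyclic way", so the exact wraparound rule below is an assumption.

```python
import numpy as np

def cyclic(values, r, lb=0.0, ub=1.0):
    """Scale pixel coordinates by r and wrap the result back into [lb, ub],
    so the perturbed image remains in the valid image space."""
    v = r * np.asarray(values, dtype=float)
    v = np.where(v < lb, v + (ub - lb), v)
    v = np.where(v > ub, v - (ub - lb), v)
    return v
```

With r = 3/2 (the setting used later) and bounds [0, 1], a coordinate of 0.9 becomes 1.35 and wraps to 0.35, staying within bounds.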
In round i, we query the network at most as many times as the number of pixels in (P_X, P_Y)_i, which after the first round is at most 2d · 2d · t (again, in practice this is much less because of the overlaps between the neighborhood squares).

In Section 6, we demonstrate the efficacy of Algorithm LocSearchAdv in constructing adversarial images. We first highlight an interesting connection between the pixels perturbed and their influence as measured by a notion called the saliency map.

Algorithm LocSearchAdv
Input: Image I with true label c(I) ∈ {1, ..., C}; two perturbation parameters p ∈ R and r ∈ [0, 2]; and four other parameters: the half side length of the neighborhood square d ∈ N, the number of pixels perturbed at each round t ∈ N, the threshold k ∈ N for k-misclassification, and an upper bound on the number of rounds R ∈ N.
Output: Success/Failure, depending on whether the algorithm finds an adversarial image or not.

A Relation to Saliency Maps. Simonyan et al. (2014) introduced the notion of a saliency map as a way to rank the pixels of the original images w.r.t. their influence on the output of the network. The intuition is that influential pixels in the saliency map are more likely to be important pixels that represent objects and, for example, can be used for weakly supervised object localization. Formally, let NN_{c(I)}(I) denote the probability assigned to the true class c(I) by the network NN on input I ∈ R^{l×w×h}, and let W_{c(I)} = ∂NN_{c(I)}(I)/∂I denote its gradient with respect to the input. The saliency map of I is the matrix M ∈ R^{w×h} such that M_{x,y} = max_{b∈[l]} |W_{c(I)}(b, x, y)|, where W_{c(I)}(b, x, y) is the element of W_{c(I)} corresponding to channel b and location (x, y). Pixels with higher scores are considered more influential. In subsequent works, this notion has been extended to adversarial saliency maps that can be useful in generating adversarial perturbations (Papernot et al. 2016c).

Computing the exact saliency scores for an image requires complete access to the network NN, which we do not assume. However, a natural hypothesis is that the pixels selected by Algorithm LocSearchAdv for perturbation are related to pixels with large saliency scores.
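For the analysis that follows (and only there, since it needs white-box access that our attack does not assume), such a saliency map can be computed with any autograd framework. A PyTorch-style sketch, assuming `model` maps a batch of shape 1 × channels × height × width to class logits:

```python
import torch

def saliency_map(model, image, true_label):
    """M[x, y] = max over channels b of |d NN_c(I) / d I(b, x, y)|
    (Simonyan et al., 2014)."""
    img = image.clone().detach().requires_grad_(True)
    prob = torch.softmax(model(img.unsqueeze(0)), dim=1)[0, true_label]
    prob.backward()                           # gradient of the true-class probability
    return img.grad.abs().max(dim=0).values   # reduce over the channel axis
```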
We use the ImageNet1000 dataset to test this hypothesis. In Table 3, we present some qualitative results. As can be seen from the pictures, the pixels perturbed by Algorithm LocSearchAdv appear correlated with pixels that have high saliency scores. Quantitatively, we observed that the pixels that occupy the top 10% of the saliency map on average contain more than 23% of the pixels chosen by Algorithm LocSearchAdv for perturbation (and this overlap only grows when we consider a bigger chunk of pixels picked by their saliency scores). Note that this correlation is not a random occurrence. For an image I, let S_I denote the set of pixels in I that rank among the top 10% in the saliency map. If we pick a random set of around 200 pixels (this is the average number of pixels perturbed per image by Algorithm LocSearchAdv; see Table 5), we expect only about 10% of them to intersect with S_I, and standard tail bounds show that the probability that at least 23% of the pixels of this random set intersect with S_I is extremely small.⁹ Therefore, it appears that Algorithm LocSearchAdv rediscovers part of the high-saliency pixels, but without explicitly computing the gradients.

Datasets. We use 5 popular datasets: MNIST (handwritten digit recognition), CIFAR10 (object recognition), SVHN (digit recognition), STL10 (object recognition), and ImageNet1000 (object recognition).

Models. We trained Network-in-Network (Lin et al. 2014) and VGG (Simonyan & Zisserman 2014) for MNIST, CIFAR10, SVHN, and STL10, with minor adjustments for the corresponding image sizes. Network-in-Network is a building block of the commonly used GoogLeNet architecture that has demonstrated very good performance on medium-size datasets, e.g., CIFAR10 (Zagoruyko 2015). VGG is another powerful network that has proved useful in many applications beyond image classification, like object localization (Ren et al. 2015). We trained each model in two variants: with and without batch normalization (Ioffe & Szegedy 2015). Batch normalization was placed before a ReLU layer in all networks. For the ImageNet1000 dataset, we used pre-trained VGG models from (Chatfield et al. 2014b) (we did not train them from scratch due to limited resources). All Caffe VGG models were converted to Torch models using the loadcaffe package (Zagoruyko 2016a). These models use different normalization procedures, which we reproduced for each model based on the provided descriptions. Tables 4 and 5 (the second column, ERRTop-1) show the top-1 (base) error for all datasets and models that we considered. The results are comparable with the known state-of-the-art results on these datasets (Benenson 2016).

Related Techniques. There are quite a few approaches for generating adversarial images (as discussed in Section 2). Most of these approaches require access to the network architecture and its parameter values (Szegedy et al. 2014; Goodfellow et al. 2015; Moosavi-Dezfooli et al. 2016; Papernot et al. 2016c). The general idea behind these attacks is based on evaluating the network's sensitivity to the input components in order to determine a perturbation that achieves the adversarial misclassification goal. Among these approaches, the attack known as the "fast-gradient sign method", suggested by Goodfellow et al. (2015), stands out for being able to efficiently generate adversarial images. Here we compare the performance of our local-search based attack against this fast-gradient sign method.¹¹

We start by describing our experimental setup. We used the Caffe and Torch machine learning frameworks to train the networks. All algorithms to generate adversarial images were implemented in Lua within Torch 7.
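As an aside, since the original implementation is in Lua/Torch, here is an illustrative Python transcription of the local-search loop. It is a simplified sketch: it keeps p fixed rather than adapting it (the adaptive rule is described below), handles image borders by simple clipping, and uses hypothetical helper names (`predict_probs` returns a class-probability vector).

```python
import numpy as np

def loc_search_adv(predict_probs, image, true_label,
                   p=100.0, r=1.5, d=5, t=5, rounds=150, k=1,
                   lb=0.0, ub=1.0):
    img = image.copy()
    h, w = img.shape[-2], img.shape[-1]
    cand = {(np.random.randint(h), np.random.randint(w)) for _ in range(10 * d)}
    for _ in range(rounds):
        scored = []
        for (x, y) in cand:
            probe = img.copy()
            probe[..., x, y] = p * np.sign(probe[..., x, y])   # PERT(I, p, (x, y))
            scored.append((float(predict_probs(probe)[true_label]), x, y))
        scored.sort()                       # largest drop of f_c(I) first
        best = [(x, y) for _, x, y in scored[:t]]
        for (x, y) in best:                 # cyclic perturbation, kept in range
            v = r * img[..., x, y]
            v = np.where(v < lb, v + (ub - lb), v)
            img[..., x, y] = np.where(v > ub, v - (ub - lb), v)
        if true_label not in np.argsort(predict_probs(img))[-k:]:
            return img                      # k-misclassification achieved
        cand = {(min(max(x + dx, 0), h - 1), min(max(y + dy, 0), w - 1))
                for (x, y) in best
                for dx in range(-d, d + 1) for dy in range(-d, d + 1)}
    return None                             # failure
```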
All experiments were performed on a cluster of GPUs, using a single GPU for each run.

⁹We can also use a standard hypothesis test for a proportion here. The null hypothesis is that the probability of intersection equals 0.1, as with random Bernoulli trials, and the test statistic Z = (0.23 − 0.1)/√(0.1(1 − 0.1)/200) ≈ 6.12 indicates that the null hypothesis can be rejected at significance level 0.01.

Table 3: Results on ImageNet1000 using the VGG CNN-S (Caffe) network (Chatfield et al. 2014a). Columns from left to right: the original image, the top 150 pixels chosen according to their saliency scores (in white), the absolute difference between the perturbed image and the true image (the perturbed pixels appear in white), and the perturbed image. Adversarial misclassifications (rows from top to bottom): a ruffed grouse misclassified as a frilled lizard, an artichoke misclassified as a sleeping bag, a bubble misclassified as a fountain, and a hare misclassified as a cheetah.

For completeness, we now briefly explain the fast-gradient sign method of Goodfellow et al. (2015). Given an image I₀, a label a ∈ {1, ..., C}, and a network NN, the fast-gradient sign method generates the perturbed image I₀ + ε · sign(∇_{I=I₀} Loss(NN(I), a)), where sign(∇_{I=I₀} Loss(NN(I), a)) is the sign of the network's cost function gradient (here Loss(NN(I), a) denotes the loss function of the network NN given input I and class a). We vary a over all possible labels in the dataset and choose the best result where this procedure is successful in generating an adversarial image. Without general guidelines for setting ε, we experimented with several values of ε starting from 0.07 and increasing this number. We found that ε = 0.2¹² was the smallest value at which the fast-gradient sign method started to yield competitive performance compared to our algorithm. Smaller values of ε lead to the generation of fewer adversarial images, e.g., at ε = 0.1 the percentage of generated adversarial images is reduced by around 10% compared to the value at ε = 0.2 for the CIFAR10 dataset on the Network-in-Network model. Larger values of ε tend to generate more adversarial images, but this comes at the cost of an increase in the perturbation. As we discuss later, our local-search based approach yields better results than the fast-gradient sign method in both the volume of adversarial images generated and the amount of perturbation applied. Another important point to remember is that, unlike the fast-gradient sign method, our approach is based on a weaker and more realistic assumption about the adversarial power, making our attacks more widely applicable.

¹¹Another reason for picking this approach for comparison is that it is also heavily utilized in the recent black-box attack suggested by Papernot et al. (2016b), which requires additional transferability assumptions that are not required by our attack.
¹²For the ImageNet1000 dataset, we set ε differently, as discussed later.

Implementing Algorithm LocSearchAdv. For each image I, we ran Algorithm LocSearchAdv for at most 150 rounds, perturbing 5 pixels at each round, and used squares of side length 10 to form the neighborhood (i.e., R = 150, t = 5, d = 5). With this setting of parameters, we perturb a maximum of t · R = 750 pixels in an image.
The perturbation parameter p was adaptively adjusted during the search. This helps in faster determination of the most helpful pixels for generating the adversarial image. Let I be the original image. For some round i of the algorithm, define ō_{c(I)} = avg_{(x,y)} { ô_{c(I)} : (x, y) ∈ (P*_X, P*_Y)_{i−1} }, where ô_{c(I)} is the probability assigned to the class label c(I) in NN(PERT(I_{i−1}, p, x, y)) (here ō_{c(I)} provides an approximation of the average confidence of the network NN in predicting the true label over perturbed images). At each round we increase the value of p if ō_{c(I)} is close to one, and decrease p if ō_{c(I)} is low, e.g., below 0.3. For Algorithm Cyclic, we set r = 3/2. To avoid perturbing the most sensitive pixels too frequently, we make sure that if a pixel is perturbed in a round then we exclude it from consideration for the next 30 rounds.

Experimental Observations. For ease of comparison with the fast-gradient sign method (Goodfellow et al. 2015), we set k = 1 and focus on achieving 1-misclassification. In the following, we say an adversarial generation technique ADV, given an input image I, succeeds in generating an adversarial image ADV(I) for a network NN iff NN classifies I correctly at top-1 (c(I) is the top prediction for I) but misclassifies ADV(I) (c(I) is not the top prediction for ADV(I)). Tables 4 and 5 show the results of our experiments on the test sets. The first column shows the dataset name. The second column (ERRTop-1) presents the top-1 misclassification rate on the corresponding test dataset without any perturbation (base error). ERRTop-1(ADV) is the top-1 misclassification rate where each original image in the test set was replaced with a generated perturbed image (using either our approach or the fast-gradient sign method (Goodfellow et al. 2015), denoted FGSM).¹³ The CONF column shows the average confidence over all successful adversarial images for the corresponding technique. The PTB column shows the average (absolute) perturbation added per coordinate in cases of successful adversarial generation. More formally, let T denote the test set and T_ADV ⊆ T the set of images in T on which ADV is successful. Then,

PTB = (1/|T_ADV|) Σ_{I ∈ T_ADV} (1/(l · w · h)) Σ_{b,x,y} |I(b, x, y) − ADV(I)(b, x, y)|,

where I ∈ R^{l×w×h} is the original image and ADV(I) ∈ R^{l×w×h} is the corresponding adversarial image. Note that the inner summation measures the L1-distance between I and ADV(I). The #PtbPixels column shows the average percentage of perturbed pixels in the successful adversarial images. Similarly, the Time column shows the average time (in seconds) to generate a successful adversarial image. Finally, the last column indicates the type of network architecture.

As is quite evident from these results, Algorithm LocSearchAdv is more effective than the fast-gradient sign method in generating adversarial images, even without having access to the network architecture and its parameter values. The difference is quite prominent for networks trained with batch normalization, as here we noticed that the fast-gradient sign method has difficulties producing adversarial images.¹⁴ Another advantage of our approach is that it modifies a very tiny fraction of pixels, as compared to all the pixels perturbed by the fast-gradient sign method, and in many cases with far less average perturbation.

¹³Note that by explicitly constraining the number of pixels that can be perturbed, as we do in our approach, it might be impossible to get to a 100% misclassification rate on some datasets. Similarly, the fast-gradient sign method fails to achieve a 100% misclassification rate even with larger values of ε (Moosavi-Dezfooli et al. 2016).
¹⁴In general, we observed that models trained with batch normalization are somewhat more resilient to adversarial perturbations, probably because of the regularization properties of batch normalization (Ioffe & Szegedy 2015).

Putting these points together demonstrates that
Algorithm LocSearchAdv is successful in generating more adversarial images than the fast-gradient sign method, while modifying far fewer pixels and adding less noise per image. On the other side, the fast-gradient sign method takes less time in the generation process and generally seems to produce higher confidence scores for the adversarial (misclassified) images.

Table 5 shows the results for several variants of the VGG network trained on the ImageNet1000 dataset. These networks do not have batch normalization layers (Chatfield et al. 2014b; Zagoruyko 2016a). We set ε = 1 for the fast-gradient sign method, as a different pre-processing technique was used for these networks (we converted them from pre-trained Caffe models). The results are similar to those observed on the smaller datasets. In most cases, our proposed local-search based approach is more successful in generating adversarial images, while on average perturbing less than 0.55% of the pixels.

Case of Larger k's. We now consider achieving k-misclassification for k ≥ 1 using Algorithm LocSearchAdv. In Table 6, we present the results as we change the goal from 1-misclassification to 4-misclassification on the CIFAR10 dataset. We use the same parameters as before for Algorithm LocSearchAdv. As one would expect, as we increase the value of k, the effectiveness of the attack decreases, and the perturbation and time needed increase. But overall our local-search procedure is still able to generate a large fraction of adversarial images even at k = 4, with a small perturbation and computation time, meaning that these images will fool even a system that is evaluated on a top-4 classification criterion. We are not aware of a straightforward extension of the fast-gradient sign method (Goodfellow et al. 2015) to achieve k-misclassification.

We trained several modifications of the Network-in-Network model for the CIFAR10 dataset, varying the initial value of the learning rate, the size of the filters, and the number of layers in the network. We observed that between 25% and 43% of the adversarial images generated by Algorithm LocSearchAdv using the original network were also adversarial for these modified networks (at k = 1). The transferability of adversarial images that we observe here has also been observed with other attacks (Szegedy et al. 2014; Goodfellow et al. 2015; Papernot et al. 2016b;a) and demonstrates the wider applicability of all these attacks.

"}, {"section_index": "4", "section_name": "7 CONCLUSION", "section_text": "We investigate the inherent vulnerabilities of modern CNNs to practical black-box adversarial attacks. We present approaches that can efficiently locate a small set of pixels, without using any gradient information, which when perturbed lead to misclassification by a deep neural network. Our extensive experimental results, somewhat surprisingly, demonstrate the effectiveness of our simple approaches in generating adversarial examples.

Finally, we believe that our local-search approach can also be used for attacks against other machine learning systems and can serve as a useful tool in measuring the robustness of these systems.

Even Weaker Adversarial Models.
We also consider a weaker model, where the adversary does not even have black-box (oracle) access to the network of interest (NN) and has to rely on black-box access to a somewhat "similar" (proxy) network. For example, the adversary might want to evade a spam filter A, but might have to develop adversarial images by utilizing the output of a spam filter B, which might share properties similar to A.

Defenses against these attacks are an interesting research direction. However, we note here that by limiting the perturbation to some pixels (being localized), the adversarial images generated by our local-search based approach do not represent the distribution of the original data. This means that, for these adversarial images, the use of adversarial training (or fine-tuning), a technique of training (or fine-tuning) networks on adversarial images to build more robust classifiers, is not very effective. In fact, even with adversarial training we noticed that the network's ability to resist new local-search based adversarial attacks improves only marginally (on average between 1-2%). On the other hand, we suspect that one possible counter-measure to these localized adversarial attacks could be based on performing a careful analysis of the oracle queries to thwart attempts to generate an adversarial image.

| Dataset | ERRTop-1 | ERRTop-1(ADV) | CONF | PTB | #PtbPixels (%) | Time (sec) | Technique | Network |
|---|---|---|---|---|---|---|---|---|
| ImageNet1000 | 58.27 | 93.59 | 0.29 | 0.29 | 0.43 | 12.72 | LocSearchAdv (Ours) | VGG CNN-S (Caffe) |
| ImageNet1000 | 58.27 | 85.51 | 0.49 | 1.00 | 100.00 | 4.74 | FGSM (Goodfellow et al. 2015) | VGG CNN-S (Caffe) |
| ImageNet1000 | 58.96 | 91.36 | 0.28 | 0.29 | 0.40 | 10.01 | LocSearchAdv (Ours) | VGG CNN-M (Caffe) |
| ImageNet1000 | 58.96 | 87.85 | 0.48 | 1.00 | 100.00 | 4.36 | FGSM (Goodfellow et al. 2015) | VGG CNN-M (Caffe) |
| ImageNet1000 | 58.80 | 92.82 | 0.29 | 0.30 | 0.41 | 11.09 | LocSearchAdv (Ours) | VGG CNN-M 2048 (Caffe) |
| ImageNet1000 | 58.80 | 88.43 | 0.52 | 1.00 | 100.00 | 4.42 | FGSM (Goodfellow et al. 2015) | VGG CNN-M 2048 (Caffe) |
| ImageNet1000 | 46.40 | 72.07 | 0.30 | 0.54 | 0.55 | 73.64 | LocSearchAdv (Ours) | VGG ILSVRC 19 (Caffe) |
| ImageNet1000 | 46.40 | 85.05 | 0.52 | 1.00 | 100.00 | 23.94 | FGSM (Goodfellow et al. 2015) | VGG ILSVRC 19 (Caffe) |

Table 5: Results for the ImageNet1000 dataset, using a center crop of size 224 × 224 for each image.

Table 4: Results for four datasets: CIFAR10, STL10, SVHN, and MNIST. Entries denoted by "_" are the cases where the fast-gradient sign method fails to produce any adversarial image in our experimental setup.

| Dataset | ERRTop-1 | ERRTop-1(ADV) | CONF | PTB | #PtbPixels (%) | Time (sec) | Technique | Network |
|---|---|---|---|---|---|---|---|---|
| *NNs trained with batch normalization* | | | | | | | | |
| CIFAR10 | 11.65 | 97.63 | 0.47 | 0.04 | 3.75 | 0.68 | LocSearchAdv (Ours) | NinN |
| CIFAR10 | 11.65 | 70.69 | 0.55 | 0.20 | 100.00 | 0.01 | FGSM (Goodfellow et al. 2015) | NinN |
| CIFAR10 | 11.62 | 97.51 | 0.74 | 0.04 | 3.16 | 0.78 | LocSearchAdv (Ours) | VGG |
| CIFAR10 | 11.62 | _ | _ | _ | _ | _ | FGSM (Goodfellow et al. 2015) | VGG |
| STL10 | 29.81 | 58.17 | 0.42 | 0.02 | 1.20 | 7.15 | LocSearchAdv (Ours) | NinN |
| STL10 | 29.81 | 54.85 | 0.53 | 0.20 | 100.00 | 0.03 | FGSM (Goodfellow et al. 2015) | NinN |
| STL10 | 26.50 | 65.76 | 0.47 | 0.02 | 1.11 | 13.90 | LocSearchAdv (Ours) | VGG |
| STL10 | 26.50 | _ | _ | _ | _ | _ | FGSM (Goodfellow et al. 2015) | VGG |
| SVHN | 9.71 | 97.06 | 0.47 | 0.05 | 4.51 | 1.02 | LocSearchAdv (Ours) | NinN |
| SVHN | 9.71 | 48.62 | 0.49 | 0.20 | 100.00 | 0.02 | FGSM (Goodfellow et al. 2015) | NinN |
| SVHN | 4.77 | 81.10 | 0.66 | 0.07 | 5.43 | 2.15 | LocSearchAdv (Ours) | VGG |
| SVHN | 4.77 | _ | _ | _ | _ | _ | FGSM (Goodfellow et al. 2015) | VGG |
| MNIST | 0.33 | 91.42 | 0.54 | 0.20 | 2.24 | 0.64 | LocSearchAdv (Ours) | NinN |
| MNIST | 0.33 | 1.65 | 0.58 | 0.20 | 100.00 | 0.02 | FGSM (Goodfellow et al. 2015) | NinN |
| MNIST | 0.44 | 93.48 | 0.63 | 0.21 | 2.20 | 0.64 | LocSearchAdv (Ours) | VGG |
| MNIST | 0.44 | _ | _ | _ | _ | _ | FGSM (Goodfellow et al. 2015) | VGG |
| *NNs trained without batch normalization* | | | | | | | | |
| CIFAR10 | 16.54 | 97.89 | 0.72 | 0.04 | 3.24 | 0.58 | LocSearchAdv (Ours) | NinN |
| CIFAR10 | 16.54 | 93.67 | 0.93 | 0.20 | 100.00 | 0.02 | FGSM (Goodfellow et al. 2015) | NinN |
| CIFAR10 | 19.79 | 97.98 | 0.77 | 0.04 | 2.99 | 0.72 | LocSearchAdv (Ours) | VGG |
| CIFAR10 | 19.79 | 90.93 | 0.90 | 0.20 | 100.00 | 0.04 | FGSM (Goodfellow et al. 2015) | VGG |
| STL10 | 35.47 | 52.65 | 0.56 | 0.02 | 1.17 | 6.42 | LocSearchAdv (Ours) | NinN |
| STL10 | 35.47 | 87.16 | 0.94 | 0.20 | 100.00 | 0.04 | FGSM (Goodfellow et al. 2015) | NinN |
| STL10 | 43.91 | 59.38 | 0.52 | 0.01 | 1.09 | 19.65 | LocSearchAdv (Ours) | VGG |
| STL10 | 43.91 | 91.36 | 0.93 | 0.20 | 100.00 | 0.10 | FGSM (Goodfellow et al. 2015) | VGG |
| SVHN | 6.15 | 92.31 | 0.68 | 0.05 | 4.34 | 1.06 | LocSearchAdv (Ours) | NinN |
| SVHN | 6.15 | 73.97 | 0.84 | 0.20 | 100.00 | 0.01 | FGSM (Goodfellow et al. 2015) | NinN |
| SVHN | 7.31 | 88.34 | 0.68 | 0.05 | 4.09 | 1.00 | LocSearchAdv (Ours) | NinN |
| SVHN | 7.31 | 76.78 | 0.89 | 0.20 | 100.00 | 0.04 | FGSM (Goodfellow et al. 2015) | VGG |

Table 6: Effect of increasing k on the performance of Algorithm LocSearchAdv (without batch normalization).

| Dataset | k | ERRTop-k | ERRTop-k(ADV) | CONF | PTB | #PtbPixels (%) | Time (sec) | Network |
|---|---|---|---|---|---|---|---|---|
| CIFAR10 | 1 | 16.54 | 97.89 | 0.72 | 0.04 | 3.24 | 0.58 | NinN |
| CIFAR10 | 2 | 6.88 | 76.65 | 0.88 | 0.07 | 5.50 | 1.02 | NinN |
| CIFAR10 | 3 | 3.58 | 59.02 | 0.90 | 0.08 | 7.09 | 1.85 | NinN |
| CIFAR10 | 4 | 1.84 | 48.89 | 0.90 | 0.09 | 7.63 | 2.12 | NinN |

The authors would like to thank Hamid Maei for helpful initial discussions.

"}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In ACM CCS, 2016.

Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014b.

Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Analysis of classifiers' robustness to adversarial perturbations. CoRR, abs/1502.02590, 2015.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.

Shixiang Gu and Luca Rigazio. Towards deep neural network architectures robust to adversarial examples. In ICLR Workshop, 2015.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 448-456, 2015.

Jan Karel Lenstra. Local Search in Combinatorial Optimization. Princeton University Press, 1997.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2014.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In CVPR, 2016.

Eli Biham and Adi Shamir. Differential cryptanalysis of DES-like cryptosystems.
Journal of Cryptology, 4(1):3-72, 1991.

Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise. arXiv preprint arXiv:1608.08967, 2016.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. Book in preparation for MIT Press, 2016. URL http://www.deeplearningbook.org.

Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.

Patrick McDaniel, Nicolas Papernot, and Z. Berkay Celik. Machine learning in adversarial settings. IEEE Security & Privacy, 14(3):68-72, 2016.

Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. Faster R-CNN: towards real-time object detection with region proposal networks. In NIPS, pp. 91-99, 2015.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014."}]
SyJNmVqgg | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "With a large amount of training data as its fuel, deep neural networks (DNN) have achieved state-of-the-art performance in multiple tasks. Examples include deep convolutional neural networks (CNN) for image understanding (Krizhevsky et al. 2012; Ioffe & Szegedy 2015; He et al. 2015; Ren et al. 2015) and recurrent neural networks (RNN) for natural language processing (Cho et al. 2014; Kiros et al. 2015; Dai & Le 2015; Shang et al. 2015). To effectively train DNN with a large scale of data, typically mini-batch based Stochastic Gradient Descent (SGD) (and its variants such as Adagrad (Duchi et al. 2011), Adadelta (Zeiler 2012) and Adam (Kingma & Ba 2014)) is used. Mini-batch based SGD training is a sequential process, in which mini-batches of data D = {D₁, ..., D_t, ..., D_T} arrive sequentially in a random order. Here D_t = (d₁, ..., d_M) is the mini-batch of data arriving at the t-th time step and consisting of M training instances. After receiving D_t, the loss L_t = (1/M) Σ_{m=1}^{M} l(d_m; W_t) and its gradient g_t = ∇_{W_t} L_t are computed, based on which the neural network model gets updated:

W_{t+1} = W_t − η_t g_t.   (1)

Here l(·) is the loss function specified by the neural network and η_t is the learning rate at the t-th step.

With the sequential execution of SGD training, the neural network evolves constantly from a raw state to a fairly mature state, rendering different views even for the same training data. For example, as imposed by the spirit of Curriculum Learning (CL) (Bengio et al. 2009) and Self-Paced Learning (SPL) (Kumar et al. 2010), at the baby stage of the neural network, easy examples play important roles whereas hard examples are comparatively negligible. In contrast, at the adult age, the neural network tends to favor harder training examples, since easy ones bring minor changes. It remains an important question how to optimally and dynamically allocate training data at different stages of SGD training.

*Work done while Yang Fan was an intern at Microsoft Research Asia.

"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "[Figure 1: states s₁ = (D₁, W₁), s₂ = (D₂, W₂), ..., s_t = (D_t, W_t); actions a₁, a₂, ..., a_t produced by the policy from features f(s₁), f(s₂), ..., f(s_t); rewards r₁, r₂, ..., r_t.]
Figure 1: Basic structure of SGD accompanied with NDF. The blue part refers to the SGD training process and the yellow part is NDF.

A possible approach is to solve this problem in an active manner: at each time step t, the mini-batch data D_t is chosen from all the remaining untrained data (Tsvetkov et al. 2016; Sachan & Xing 2016). However, this typically requires a feed-forward pass over the whole remaining dataset at each training step, making it computationally expensive. We therefore consider a passive way in this paper, in which the random ordering of all the mini-batches is pre-given and maintained during the training process. What we actually do is, after receiving the mini-batch D_t of M training instances, dynamically determine which instances in D_t are used for training and which are filtered, based on the features extracted from the feed-forward pass only on D_t. Acting in this way avoids unnecessary computational steps on the filtered data and thus speeds up the training process.

Previous works such as curriculum learning (CL) and self-paced learning (SPL) can be leveraged to fulfill such a data filtration task. However, they are typically based on simple heuristic rules, such as shuffling by sentence length to train a language model (Bengio et al. 2009), or abandoning training
instances whose loss values are larger than a human-defined threshold (Kumar et al. 2010; Jiang et al. 2014a).

In this work, we propose a Neural Data Filter (NDF) framework from a more principled and self-adaptive view. In this framework, as illustrated in Figure 1, the SGD training for DNN is naturally cast into a Markov Decision Process (MDP) (Sutton & Barto 1998), and the data filtration strategy is fully controlled through deep reinforcement learning (Mnih et al. 2013; Lillicrap et al. 2015b; Mnih et al. 2016). In such an MDP, a state (namely s₁, ..., s_t, ...) is composed of two parts: the arrived mini-batch of data and the parameters of the current neural network model, i.e., s_t = (D_t, W_t). In each time step t, NDF receives a representation f(s_t) of the current state from SGD and outputs the action a_t specifying which instances in D_t will be filtered according to its policy A_Θ. Afterwards, the remaining data determined by a_t will be used by SGD to update the neural network state and generate a reward r_t (such as validation accuracy), which will be leveraged by NDF as the feedback for updating its own policy.

From another view, while SGD acts as the trainer for the base model, i.e., the DNN, it meanwhile is the trainee of the reinforcement learning module. In other words, the reinforcement learning module acts as the teacher, while SGD for the DNN is the student. Speaking more ambitiously, such a teacher-student framework based on reinforcement learning goes far beyond data filtration for neural network training: on one hand, the base model that can benefit is not limited to neural networks; on the other hand, the action space of the reinforcement learning teacher module covers any strategy in the machine learning process, such as hyper-parameter tuning and distributed scheduling. Through carefully designed interaction between the two modules, the training process of general machine learning models can be more elaborately controlled.

The rest of the paper is organized as follows: in the next section, 2, we introduce the details of Neural Data Filter (NDF), including the MDP language to model Stochastic Gradient Descent training and the policy gradient algorithms to learn NDF. Then in section 3, empirical results of training an LSTM RNN are shown to verify the effectiveness of NDF. We discuss related work in the subsequent section 4 and conclude the paper in the last section, 5.

We introduce the mathematical details of Neural Data Filter (NDF) for SGD training in this section. As a summary, NDF aims to filter a certain amount of training data within a mini-batch, in order to achieve better convergence speed for SGD training. To achieve that, as introduced in the last section and Figure 1, we cast Stochastic Gradient Descent training for DNN as a Markov Decision Process (MDP), termed SGD-MDP.

SGD-MDP: As a traditional MDP, SGD-MDP is composed of the tuple <s, a, P, r, γ>, illustrated as:

• s is the state, corresponding to the arrived mini-batch data and the current neural network state: s_t = (D_t, W_t).
• a = {a_m}_{m=1}^{M} ∈ {0, 1}^M is the action, where M is the batch size and a_m ∈ {0, 1} denotes whether to filter the m-th data instance in D_t or not.¹ The filtered instances will have no effect on the neural network training.
• P(s′|s, a) is the state transition probability, determined by two factors: 1) the uniform distribution of the sequentially arrived training batch data; 2) the optimization process specified by the Gradient Descent principle (c.f. equation (1)). The randomness comes from stochastic factors in training, such as dropout (Srivastava et al. 2014).
• r = r(s, a) is the reward, set to be any signal indicating how well the training goes, such as validation accuracy, or the loss gap for the current mini-batch data before/after the model update. Furthermore, the future reward r is discounted by a discounting factor γ ∈ [0, 1] into the cumulative reward.

NDF samples the action a by its policy function A = P_Θ(a|s), with parameters Θ to be learnt. For example, the NDF policy A can be set as logistic regression:

A(s, a; Θ) = P_Θ(a|s) = a σ(θ f(s) + b) + (1 − a)(1 − σ(θ f(s) + b)),   (2)

¹We consider data instances within the same mini-batch to be independent of each other; therefore, for simplicity of statement, when the context is clear, a will be used to denote the remain/filter decision for a single data instance, i.e., a ∈ {0, 1}. Similarly, the notation s will sometimes represent the state for only one training instance.

State Features: The aim of designing the state feature vector f(s) is to effectively and efficiently represent the SGD-MDP state. Since the state s includes both the arrived training data and the current neural network state, we adopt three categories of features to compose f(s):

• Data features, containing information about the data instance, such as its label category (we use 1-of-|Y| representations), the length of the sentence, or linguistic features for text segments (Tsvetkov et al. 2016). Data features are commonly used in Curriculum Learning (Bengio et al. 2009; Tsvetkov et al. 2016).
• Neural network features, including signals reflecting how well the current neural network is trained. We collect several simple features, such as the number of passed mini-batches (i.e., the iteration), the average historical training loss, and the current validation accuracy. They prove effective enough to represent the current neural network status.
• Features representing the combination of both data and model. With these features, we aim to represent how important the arrived training data is for the current neural network. We mainly use three such signals in our classification tasks: 1) the predicted probabilities of each class; 2) the cross-entropy loss, which appears frequently in Self-Paced Learning algorithms (Kumar et al. 2010; Jiang et al. 2014a; Sachan & Xing 2016); 3) the margin value.

The state features f(s) are computed once each mini-batch of training data arrives.

The whole process for training neural networks is listed in Algorithm 1. In particular, we take a generalization framework similar to that proposed in (Andrychowicz et al. 2016), in which we use part of the training data to train the policy of NDF (Steps 1 and 2), and apply the data filtration model to the training process on the whole dataset (Step 3). The detailed algorithm to train the NDF policy will be introduced in the next subsection.

Algorithm 1 Training Neural Networks with Neural Data Filter.
Input: Training data D.
1. Sample part of the NDF training data D′ from D.
2. Optimize the NDF policy network A(s; Θ) (c.f. equation (2)) based on D′ by policy gradient.
3. Apply A(s; Θ) to the full dataset D to train the neural network model by SGD.
Output: The neural network model.

Policy gradient methods are adopted to learn the NDF policy A. In particular, following different policy gradient methods, we designed two algorithms: NDF-REINFORCE and NDF-ActorCritic.

NDF-REINFORCE. NDF-REINFORCE is based on the REINFORCE algorithm (Williams 1992), an elegant Monte-Carlo based policy gradient method which favors actions with high sampled reward. The algorithm details are listed in Algorithm 2. In particular, as indicated in equation (3), NDF-REINFORCE favors data filtration policies leading to higher cumulative reward v_t.

Algorithm 2 NDF-REINFORCE algorithm to train the NDF policy.
Input: Training data D′. Episode number L. Mini-batch size M. Discount factor γ ∈ [0, 1].
for each episode l = 1, 2, ..., L do
    Initialize the base neural network model.
    Shuffle D′ to get the mini-batch sequence D′ = {D₁, D₂, ..., D_T}.
    for t = 1, ..., T do
        Sample the data filtration action for each data instance in D_t = {d₁, ..., d_M}: a = {a_m}_{m=1}^{M}, a_m ~ A(s_m, ·; Θ), where s_m is the state corresponding to d_m.
        Update the neural network model by Gradient Descent based on the selected data in D_t.
        Receive reward r_t.
    end for
    for t = 1, ..., T do
        Compute the cumulative reward v_t = r_t + γ r_{t+1} + ... + γ^{T−t} r_T.
        Update the policy parameter Θ:
            Θ ← Θ + α v_t Σ_m ∂ log A(s_m, a_m; Θ)/∂Θ.   (3)
    end for
end for
Output: The NDF policy network A(s, a; Θ).
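A compact numerical sketch of the per-step update in equation (3), for the logistic-regression policy of equation (2). The array shapes follow the text (features as an M × K matrix per step); helper names are ours, not the paper's.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reinforce_episode_update(theta, b, episode, alpha=0.01, gamma=0.95):
    """One NDF-REINFORCE update over a finished episode (Algorithm 2 sketch).
    `episode` is a list of (features, actions, reward) per step, where
    features is an M x K matrix and actions an M-vector in {0, 1}."""
    rewards = [r for _, _, r in episode]
    for t, (f, a, _) in enumerate(episode):
        # discounted return from step t: v_t = r_t + gamma r_{t+1} + ...
        v_t = sum(gamma ** (i - t) * rewards[i] for i in range(t, len(rewards)))
        prob_keep = sigmoid(f @ theta + b)   # per-instance keep probability
        resid = a - prob_keep                # d(log-likelihood) / d(logit)
        theta += alpha * v_t * (f.T @ resid)
        b += alpha * v_t * resid.sum()
    return theta, b
```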
"}, {"section_index": "2", "section_name": "NDF-ActorCritic", "section_text": "The gradient estimator in REINFORCE has high variance, given its Monte-Carlo nature. Furthermore, it is quite inefficient to update the policy network only once per episode. We therefore design the NDF-ActorCritic algorithm based on value function estimation. In NDF-ActorCritic, a parametric value function estimator Q(s, a; W) (i.e., a critic) with parameters W for estimating the state-action value function is leveraged to avoid the high variance of v_t from Monte-Carlo sampling in NDF-REINFORCE. It remains an open and challenging question how to define an optimal value function estimator Q(s, a; W) for SGD-MDP. In this work, as a preliminary attempt, the following function is used as the critic:

Q(s, a; W) = σ(w₀ relu(f(s) W₁ a) + b),   (4)

where f(s) = (f(s₁); f(s₂); ...; f(s_M)) is a matrix with M rows, each row f(s_m) representing the state features for the corresponding training instance d_m, and W = {w₀, W₁, b} is the parameter set to be learnt by the Temporal-Difference algorithm. Based on such a formulation, the details of NDF-ActorCritic are listed in Algorithm 3 (a numerical sketch of the critic update appears after the experimental setup below).

Algorithm 3 NDF-ActorCritic algorithm to train the NDF policy.
Input: Training data D′. Episode number L. Mini-batch size M. Discount factor γ ∈ [0, 1].
for each episode l = 1, 2, ..., L do
    Initialize the base neural network model.
    Shuffle D′ to get the mini-batch sequence D′ = {D₁, D₂, ..., D_T}.
    for t = 1, ..., T do
        Sample the data filtration action for each data instance in D_t = {d₁, ..., d_M}: a = {a_m}_{m=1}^{M}, a_m ~ A(s_m, ·; Θ).
        Update the neural network model by Gradient Descent based on the selected data.
        Receive reward r_t.
        Update the policy (actor) parameter Θ: Θ ← Θ + α Q(s, a; W) Σ_m ∂ log A(s_m, a_m; Θ)/∂Θ.
        Update the critic parameter W:
            q = r_{t−1} + γ Q(s, a; W) − Q(s′, a′; W),   W ← W − β q ∂Q(s′, a′; W)/∂W,   (5)
            a′ ← a, s′ ← s.
    end for
end for
Output: The NDF policy network A(s, a; Θ).

"}, {"section_index": "3", "section_name": "3.1 EXPERIMENTS SETUP", "section_text": "We conduct experiments on two different tasks/models: IMDB movie review sentiment classification (with a Recurrent Neural Network) and MNIST digit image classification (with a Multilayer Perceptron Network). The different data filtration strategies we applied to SGD training include:

• Unfiltered SGD. The SGD training algorithm without any data filtration. Here, rather than vanilla SGD (c.f. equation (1)), we use its advanced variants, such as Adadelta (Zeiler 2012) or Adam (Kingma & Ba 2014), for each task.
• Self-Paced Learning (SPL) (Kumar et al. 2010).
It refers to filtering training data by its "hardness", as reflected by the loss value. Mathematically speaking, training data d satisfying l(d) > η will be filtered out, where the threshold η grows from smaller to larger during the training process. In our implementation, to improve the robustness of SPL, following a widely used trick (Jiang et al. 2014b), we filter data using its loss rank within one mini-batch rather than the absolute loss value. That is to say, we filter the data instances with the top K largest training losses within an M-sized mini-batch, where K linearly drops from M − 1 to 0 during training.
• NDF-REINFORCE. The policy trained with NDF-REINFORCE, as shown in Algorithm 2. We use a signal indicating training speed as the reward. To be concrete, we set an accuracy threshold τ ∈ [0, 1] and record the first mini-batch index i_τ at which the validation accuracy exceeds τ; the reward is then set as r_T = −log(i_τ/T). Note that here only a terminal reward exists (i.e., r_t = 0, ∀t < T).
• NDF-ActorCritic. The policy trained with NDF-ActorCritic, as shown in Algorithm 3. The discount factor is set as γ = 0.95. Since the actor-critic algorithm makes it possible to update the policy per time step, rather than per episode, unlike the terminal reward used in NDF-REINFORCE, the validation accuracy is used as the immediate reward for each time step. To save time, only part of the validation set is used to compute the validation accuracy.
• Randomly Drop. To conduct a more comprehensive comparison, for NDF-REINFORCE and NDF-ActorCritic, we record the ratio of filtered data instances per epoch, and then randomly filter data in each mini-batch according to the logged ratio. In this way we form two more baselines, referred to as RandDropREINFORCE and RandDropActorCritic, respectively.

For all strategies other than unfiltered SGD, we make sure that the base neural network model will not be updated until M untrained, yet selected, data instances are accumulated. In that way the batch size is the same for every strategy (i.e., M); thus convergence speed is determined only by the effectiveness of the data filtration strategies, not by the different batch sizes that would result from different numbers of filtered instances. For the NDF strategies, we initialize b = 2 (c.f. equation (2)), with the goal of keeping the training data at the early stage, and use Adam (Kingma & Ba 2014) to optimize the policy. The model is implemented with Theano (Theano Development Team 2016) and run on one Tesla K40 GPU.

The IMDB movie review dataset² is a binary sentiment classification dataset consisting of 50k movie review comments with positive/negative sentiment labels (Maas et al. 2011). We apply an LSTM (Hochreiter & Schmidhuber 1997) RNN to each sentence, and the last hidden state of the LSTM is fed into a logistic regression classifier to predict the sentiment label (Dai & Le 2015). The model size (i.e., word embedding size × hidden state size) is 256 × 512, and the mini-batch size is set as M = 16. Adadelta (Zeiler 2012) is used to perform LSTM model training.

²http://ai.stanford.edu/~amaas/data/sentiment/

The IMDB dataset contains 25k training sentences and 25k test sentences. For NDF-REINFORCE and NDF-ActorCritic, from all the training data we randomly sample 10k and 5k as the training/validation set to learn the data filtration policy.
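As promised above, here is a minimal numerical sketch of the critic update of equations (4)-(5). The shapes follow one consistent reading of the text (f(s) is M × K, W₁ is K × M, a is an M-vector); for brevity only w₀ and b are updated, and the helper names are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def critic_td_update(w0, W1, b, f_prev, a_prev, f_cur, a_cur, r_prev,
                     beta=0.01, gamma=0.95):
    """One TD step for the critic Q(s, a; W) = sigmoid(w0 . relu(f(s) W1 a) + b)
    of equation (4), following equation (5). W1's gradient is analogous."""
    h_prev = np.maximum(f_prev @ W1 @ a_prev, 0.0)   # relu(f(s') W1 a')
    h_cur = np.maximum(f_cur @ W1 @ a_cur, 0.0)      # relu(f(s) W1 a)
    q_prev = sigmoid(w0 @ h_prev + b)
    q_cur = sigmoid(w0 @ h_cur + b)
    td = r_prev + gamma * q_cur - q_prev             # the TD error q of eq. (5)
    dq_prev = q_prev * (1.0 - q_prev)                # sigmoid derivative at (s', a')
    w0 -= beta * td * dq_prev * h_prev
    b -= beta * td * dq_prev
    return w0, b
```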
For NDF-REINFORCE, the validation accuracy threshold is set as τ = 0.8. For NDF-ActorCritic, the size of the sub validation set used to compute the immediate reward is 1k. The episode number is set as L = 30. Early stopping on the validation set is used to control the training process in each episode.

The detailed results are shown in Figure 2, whose x-axis represents the number of effective training instances and whose y-axis denotes the accuracy on the test dataset. All the curves are results of 5 repeated runs. From the figure we have the following observations:

• NDF (shown by the two solid lines) significantly boosts the convergence of SGD training for the LSTM. With much less data, NDF achieves satisfactory classification accuracy. For example, NDF-REINFORCE achieves 80% test accuracy with only roughly half the training data (about 40k instances) that unfiltered SGD consumes (about 80k). Furthermore, NDF significantly outperforms the two Randomly Drop baselines, demonstrating the effectiveness of the learnt policies.
• Self-Paced Learning (shown by the red dashed line) helps the initialization of the LSTM; however, it delays training after the middle phase.
• Of the two variants of NDF, NDF-REINFORCE performs better than NDF-ActorCritic. Our conjectured reasons are: 1) for NDF-REINFORCE, we use a terminal reward fully devoted to indicating training convergence; 2) the critic function (c.f. equation (4)) may not be expressive enough to approximate true state-action value functions. Deeper critic functions should be the next step.

[Figure 2 curves: RandomDropREINFORCE, SPL, NDF-ActorCritic, RandomDropActorCritic, UnfilteredSGD, NDF-REINFORCE; x-axis: number of training instances (0-120k); y-axis: test accuracy (roughly 0.5-0.9).]
Figure 2: Test accuracy curves of different data filtration strategies on the IMDB sentiment classification dataset. The x-axis records the number of effective training instances.

To better understand the learnt policies of NDF, in Figure 3 we plot the ratio of filtered data instances per every certain number of iterations. It can be observed that more and more training data are kept during the training process, which is consistent with the intuition of Curriculum Learning and Self-Paced Learning. Furthermore, the learnt feature weights for the NDF policies (i.e., θ in equation (2)) are listed in Table 1. From the table, we can observe:

[Figure 3 curves: data filtration ratio (6%-20%) versus iteration number (0-40) for NDF-REINFORCE and NDF-ActorCritic.]
Figure 3: Data filtration ratio during training the LSTM with the NDF-REINFORCE and NDF-ActorCritic policies.

• Longer movie reviews, with positive sentiments, are more likely to be kept.
• The margin plays a critical role in determining the importance of data. As reflected by its fairly large positive weight, training data with large margin is likely to be kept.
• Note that the feature −log p_y is the training loss; its negative weight means that training instances with larger loss values tend to be filtered, so more and more data will be kept as loss values get smaller during training, which is consistent with the curves in Figure 3. However, this trend is diminished by the negative weights of the neural network features, i.e., historical training accuracy and normalized iteration.

Table 1: Feature weights learnt for the NDF policies in IMDB sentiment classification. The first row lists the features (i.e., f(s)), categorized into the three classes described in Section 2; "normalized" means the feature value is scaled to [0, 1]; (y₀, y₁) is the 1-of-2 representation of the sentiment label.

"}, {"section_index": "4", "section_name": "3.3 IMAGE CLASSIFICATION ON CORRUPTED-MNIST", "section_text": "We further test the different data filtration strategies for multilayer perceptron network training on an image recognition task. The dataset we use is MNIST, which consists of 60k training and 10k testing images of handwritten digits from 10 categories (i.e., 0, ..., 9).
To further demonstrate the effectiveness of the proposed neural data filter in automatically choosing important instances for training, we manually corrupt the original MNIST dataset by injecting noise into the original pictures as follows: we randomly split the 60k training images into ten folds, and flip (i − 1) · 10% randomly chosen pixels of each image in the i-th fold, i = 1, 2, ..., 10. The 10k test set remains unchanged. Flipping a pixel means setting its value r to 1.0 − r. Such a corrupted dataset is named C-MNIST. Some sampled images from C-MNIST are shown in Figure 4.

Figure 4: Sampled pictures from the C-MNIST dataset. Each row represents a corrupted fold of the training set, with the percentage of flipped pixels growing from 0% (top row) to 90% (bottom row).

A three-layer feedforward neural network of size 784 × 300 × 10 is used to classify the C-MNIST dataset. For the data filtration policy, different from the single-layer logistic regression in equation (2), in this task NDF-REINFORCE and NDF-ActorCritic leverage a three-layer neural network of size 24 × 12 × 1 as the policy network, where the first-layer node number 24 is the dimension of the state features f(s),³ and the sigmoid function is used as the activation function for the middle layer. 10k randomly selected images out of the 60k training set act as the validation set to provide reward signals to NDF-REINFORCE and NDF-ActorCritic. For NDF-REINFORCE, the validation accuracy threshold is set as τ = 0.90. For NDF-ActorCritic, the immediate reward is computed on the whole validation set. The episode number for policy training is set as L = 50, and we control training in each episode by early stopping based on validation set accuracy. We use Adam (Kingma & Ba 2014) to optimize the policy network.

³f(s) is similar to the features in Table 1, except that (y₀, y₁) and (log p₀, log p₁) are switched to (y₀, ..., y₉) and (log p₀, ..., log p₉) respectively, given that there are ten target classes in MNIST classification.

The test set accuracy curves (averaged over five repeated runs) of the different data filtration strategies are shown in Figure 5. From Figure 5, we can observe:

• Similar to the result in IMDB sentiment classification, NDF-REINFORCE achieves the best convergence speed.
• The performance of NDF-ActorCritic is inferior to NDF-REINFORCE. In fact, NDF-ActorCritic acts similarly to SGD training without any data filtration. This further shows that although Actor-Critic reduces variance compared with REINFORCE, the difficulty in designing/training better critic functions hurts its performance.

[Figure 5 curves: RandomDropREINFORCE, SPL, NDF-ActorCritic, RandomDropActorCritic, UnfilteredSGD, NDF-REINFORCE; x-axis: number of training instances (600k-1.6M); y-axis: test accuracy (roughly 0.90-0.95).]
Figure 5: Test accuracy curves of different data filtration strategies on the C-MNIST dataset.
The x-axis records the number of effective training instances.

"}, {"section_index": "5", "section_name": "4 RELATED WORK", "section_text": "Plenty of previous works discuss data scheduling (e.g., filtration and ordering) strategies for machine learning. A remarkable example is Curriculum Learning (CL) (Bengio et al. 2009), showing that a data order from easy instances to hard ones, a.k.a. a curriculum, benefits the learning process. The measure of hardness in CL is typically determined by heuristic understandings of the data (Bengio et al. 2009; Spitkovsky et al. 2010; Tsvetkov et al. 2016). As a comparison, Self-Paced Learning (SPL) (Kumar et al. 2010; Jiang et al. 2014a;b; Supancic & Ramanan 2013) quantifies the hardness by the loss on the data. In SPL, training instances with loss values larger than a threshold η will be neglected, and η gradually increases during the training process such that finally all training instances come into play. Apparently, SPL can be viewed as a data filtration strategy of the kind considered in this paper.

Recently, researchers have noticed the importance of data scheduling for training Deep Neural Network models. For example, in (Loshchilov & Hutter 2015), a simple batch selection strategy based on the loss values of training data is proposed to speed up neural network training. (Tsvetkov et al. 2016) leverages Bayesian Optimization to optimize a curriculum function for training distributed word representations. The authors of (Sachan & Xing 2016) investigated several hand-crafted criteria for data ordering in solving Question Answering tasks based on DNN. Our work differs significantly from these works in that 1) we aim to filter data from randomly arrived mini-batches in the training process to save computational effort, rather than to actively select mini-batches; 2) we leverage reinforcement learning to automatically derive the optimal policy according to the feedback of the training process, rather than use naive and heuristic rules.

The proposed Neural Data Filter (NDF) for data filtration is based on deep reinforcement learning (DRL) (Mnih et al. 2013; 2016; Lillicrap et al. 2015a; Silver et al. 2016), which applies deep neural networks to reinforcement learning (Sutton & Barto 1998). In particular, NDF belongs to policy-based reinforcement learning, seeking to search directly for the optimal control policy. REINFORCE (Williams 1992) and actor-critic (Konda & Tsitsiklis 1999) are two representative policy gradient algorithms, with the difference that actor-critic adopts value function approximation to reduce the high variance of the policy gradient estimator in REINFORCE.

In this paper we introduce Neural Data Filter (NDF), a reinforcement learning framework to select/filter data for training deep neural networks. Experiments on training several deep neural networks demonstrate that NDF boosts the convergence of Stochastic Gradient Descent. Going beyond data filtration, the proposed framework is able to supervise any sequential training process, thus opening a new view for self-adaptively tuning/controlling the machine learning process.

As to future work, on one hand, we aim to test NDF on more tasks and models, such as Convolutional Neural Networks (CNN) for image classification. We also plan to give a clearer explanation of the behavior of NDF, such as what data is dropped at different phases of training, and whether the proposed critic function is good enough. On the other hand, we aim to apply such a reinforcement learning based teacher-student framework to other strategy design problems for machine learning, such as hyper-parameter tuning, structure learning and distributed scheduling, with the hope of providing better guidance for a controlled training process.

"}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Corinna Cortes, Mehryar Mohri, and Afshin Rostamizadeh.
Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3079-3087, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Vijay R. Konda and John N. Tsitsiklis. Actor-critic algorithms. In NIPS, volume 13, pp. 1008-1014, 1999.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

M. Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, pp. 1189-1197, 2010.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015a.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015b.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142-150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.

James S. Supancic and Deva Ramanan. Self-paced learning for long-term tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2379-2386, 2013.

Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Sanjoy Dasgupta and David McAllester (eds.), Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pp. 1139-1147. JMLR Workshop and Conference Proceedings, May 2013. URL http://jmlr.org/proceedings/papers/v28/sutskever13.pdf.

Matthew D. Zeiler. ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
ByOK0rwlx

1 INTRODUCTION

It is widely believed that deeper networks tend to achieve better performance than shallow ones in various computer vision tasks. As a trade-off for such impressive improvements, deeper networks impose a heavy computational load both in terms of processing time and memory consumption, due to an enormous number of network parameters. For example, the VGG-16 model (Simonyan & Zisserman, 2015) requires about 528 MBytes to store the network weights, of which fully connected layers account for 89%. A large number of multiplications and additions must also be processed at each layer, which prevents real-time processing, consumes vast amounts of electricity, and requires a large number of logic gates when implementing a deep network on an FPGA or ASIC.

This article addresses the above issues. Specifically, we aim to reduce the test-time computational load of a pre-trained network. Since our approach does not depend on a network configuration (e.g. the choice of activation function, layer structures, and number of neurons) and acts as a post-processing of network training, pre-trained networks shared on a download site of MatConvNet (Vedaldi & Lenc, 2015) and Model Zoo (BVLC) can be compressed and accelerated. Our method is outlined in Figure 1. The main idea is to factorize both weights and activations into integer and non-integer components. Our method is composed of two building blocks, as shown below.

TERNARY WEIGHT DECOMPOSITION AND BINARY ACTIVATION ENCODING FOR FAST AND COMPACT NEURAL NETWORK

Takayoshi Yamashita & Hironobu Fujiyoshi
{yamashita,hf}@cs.chubu.ac.jp

ABSTRACT

This paper aims to reduce the test-time computational load of a deep neural network. Unlike previous methods which factorize a weight matrix into multiple real-valued matrices, our method factorizes both weights and activations into integer and non-integer components. In our method, the real-valued weight matrix is approximated by a multiplication of a ternary matrix and a real-valued co-efficient matrix. Since the ternary matrix consists of three integer values, {-1, 0, +1}, it only consumes 2 bits per element. At test-time, an activation vector passed from a previous layer is also transformed into a weighted sum of binary vectors, {-1, +1}, which enables fast feed-forward propagation based on simple logical operations: AND, XOR, and bit count. This makes it easier to deploy a deep network on low-power CPUs or to design specialized hardware.

In our experiments, we tested our method on three different networks: a CNN for handwritten digits, the VGG-16 model for ImageNet classification, and VGG-Face for large-scale face recognition. In particular, when we applied our method to the three fully connected layers of VGG-16, a 15× acceleration and memory compression down to 5.2% were achieved with only a 1.43% increase in the top-5 error. Our experiments also revealed that compressing convolutional layers can accelerate inference of the entire network in exchange for a slight increase in error.
[Figure 1: overview — a real-valued activation vector is factored into binary bases Mx with real-valued coefficients cx, and the real-valued weight matrix W into a ternary basis Mw with real-valued coefficients Cw, so that MwᵀMx is computable by XOR, AND, and bit count.]

Figure 1: Our network compression model.

Ternary weight decomposition for memory compression: We introduce a factored representation where the real-valued weight matrix is approximated by a multiplication of a ternary basis matrix and a real-valued co-efficient matrix. While the ternary basis matrix is sufficiently informative to reconstruct the original weights, it only consumes 2 bits per element. The number of rows of the co-efficient matrix is also smaller than that of the original weight matrix. These compact representations result in efficient memory compression.

Binary activation encoding for fast feed-forward propagation: It has been reported that an inner product between a ternary and a binary vector can be computed extremely fast by using three logical operations: AND, XOR, and bit count (Ambai & Sato, 2014). To use this technique, we approximate the activation vector by a weighted sum of binary vectors. This binary encoding must be processed as fast as possible at test-time. To overcome this issue, we use a fast binary encoding method based on a small lookup table.
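As a concrete illustration of why this pays off, the sketch below (our own toy example, not code from the paper or from Ambai & Sato (2014)) computes the inner product between a ternary and a binary vector using only XOR, AND, and bit count, with Python integers as bit sets. The bit encoding (bit = 1 represents the value -1) and the helper names are choices made for illustration.

```python
def encode_ternary(w):              # w: list of values in {-1, 0, +1}
    nz = sign = 0
    for i, v in enumerate(w):
        if v != 0:
            nz |= 1 << i            # mark nonzero position
            if v < 0:
                sign |= 1 << i      # bit = 1 encodes -1
    return nz, sign

def encode_binary(x):               # x: list of values in {-1, +1}
    b = 0
    for i, v in enumerate(x):
        if v < 0:
            b |= 1 << i
    return b

def ternary_binary_dot(nz, sign, xb):
    # Agreeing nonzero positions contribute +1, disagreeing ones -1.
    diff = (sign ^ xb) & nz
    return bin(nz).count("1") - 2 * bin(diff).count("1")

w = [1, 0, -1, 1]; x = [1, -1, -1, 1]
assert ternary_binary_dot(*encode_ternary(w), encode_binary(x)) == \
       sum(wi * xi for wi, xi in zip(w, x))
```

On real hardware the bit counts map to popcount instructions over B-bit machine words, which is where the complexity figures in Table 1 below come from.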
1.1 RELATED WORK

There have been extensive studies on accelerating and compressing deep neural networks, e.g., an FFT-based method (Mathieu et al., 2014), re-parameterization of a weight matrix (Yang et al., 2015), pruning network connections (Han et al., 2015; 2016), and hardware-specific optimization (Vanhoucke et al., 2011). In the following paragraphs, we only review previous studies that are intimately connected to ours.

It was pointed out by Denil et al. (2013) that network weights have a significant redundancy. Motivated by this fact, researchers have conducted a series of studies on matrix/tensor factorization (Jaderberg et al., 2014; Zhang et al., 2015). In these studies, a weight matrix (or tensor) was factorized by minimizing an approximation error of original weights or activations. Jaderberg et al. (2014) exploited 1-D separable filter decomposition to accelerate feed-forward propagation. Zhang et al. (2015) proposed low-rank approximation based on generalized SVD to compress an entire deep network. Taking into account the lessons learned from these best practices, we also exploit the redundancy of the weights.

There is another series of studies, integer decomposition (Hare et al., 2012; Yuji et al., 2014; Ambai & Sato, 2014), which accelerates the test-time speed of a classifier by using fast logical operations. Although their contributions are limited to shallow architectures such as a linear SVM, they achieved a noticeable acceleration. In these approaches, a real-valued weight vector is approximated by a weighted sum of a few binary or ternary basis vectors. To use fast logical operations, they extracted binary features from an image. Hare et al. (2012) and Yuji et al. (2014) exploited binary basis vectors, and Ambai & Sato (2014) investigated the case of a ternary basis to improve approximation quality.

In a manner of speaking, our method is a unified framework of the matrix/tensor factorization and integer decomposition reviewed above, and inherits both their advantages. While the weight matrix is factorized to exploit low-rank characteristics, the basis matrix is restricted to take only three integer values, {-1, 0, +1}. In contrast to recent binary weighted networks such as XNOR-Net (Rastegari et al., 2016), which quantizes both activations and weights during backpropagation, our method does not need to change training algorithms at all. We can benefit from recent sophisticated training techniques, e.g. batch normalization (Ioffe & Szegedy, 2015), in combination with our method. Furthermore, our method does not need (iterative) end-to-end retraining, which is needed for several previous studies such as network pruning (Han et al., 2015; 2016) and distillation (Hinton et al., 2014).

In this section, we introduce our compression model and discuss time and space complexity. We consider a convolutional layer with a filter size of wx × wy × c, where wx and wy are the spatial size and c is the number of input channels. If wx = wy = 1, we can regard this layer as a fully connected layer. This three-dimensional volume is reshaped to form a D1-dimensional vector, where D1 = wx × wy × c. The filter weights and biases can be formulated by W ∈ R^(D1×D0) and b ∈ R^(D0), where D0 is the number of output channels. Let x ∈ R^(D1) denote an activation vector obtained by vectorizing the corresponding three-dimensional volume. At test-time, we need to compute Wᵀx + b followed by a non-linear activation function.

Table 1: Number of operations.

operation                    floating-point        AND             XOR             bit count
                             multiply-adds
original (Wᵀx)               D1·D0                 0               0               0
proposed (CwᵀMwᵀMxcx)        kx·kw + kw·D0         (D1·kx·kw)/B    (D1·kx·kw)/B    (D1·kx·kw)/B

Table 2: Memory consumption. Real values are represented in single precision (32 bits/element).

              original      proposed
variables     W             Mw           Cw           cx, bx
size (bits)   32·D1·D0      2·D1·kw      32·kw·D0     32·(kx + 1)

In our compressed network, W is decomposed into two matrices before test-time as follows:

    W ≈ MwCw,    (1)

where Mw ∈ {-1, 0, +1}^(D1×kw) is a ternary basis matrix, Cw ∈ R^(kw×D0) is a co-efficient matrix, and kw is the number of basis vectors. Since Mw only takes the three values, it consumes only 2 bits per element. Setting a sufficiently small value for kw further reduces total memory consumption. From the viewpoint of approximation quality, it should be noted that a large number of elements in W take values close to zero. To fit them well enough, a zero value must be included in the basis. The ternary basis satisfies this characteristic. In practice, the ternary basis gives better approximation than the binary basis, as we discuss in Section 3.

The activation vector x is also factored into the following form:

    x ≈ Mxcx + bx·1,    (2)

where Mx ∈ {-1, +1}^(D1×kx) is a binary basis matrix, cx ∈ R^(kx) is a real-valued co-efficient vector, bx ∈ R is a bias, and kx is the number of basis vectors. Since the elements of x are often biased, e.g., activations from a ReLU take non-negative values and have a non-zero mean, bx is added to this decomposition model. While cx and bx reflect the range of activation values, Mx determines the approximated activation values within the defined range. This factorization must be computed at test-time because the intermediate activations depend on the input to the first layer. However, in practice, factorizing x into Mx, cx, and bx requires an iterative optimization, which is very slow. Since the scale of activation values within a layer is almost similar regardless of x, we pre-compute canonical cx and bx in advance and only optimize Mx at test-time. As we discuss in Section 4, an optimal Mx under fixed cx and bx can be selected using a lookup table, resulting in fast factorization.

Substituting Eqs. (1) and (2), the layer response becomes

    Wᵀx + b ≈ (MwCw)ᵀ(Mxcx + bx·1) + b = CwᵀMwᵀMxcx + bx·CwᵀMwᵀ1 + b.    (3)

The time and space complexity are summarized in Tables 1 and 2. As can be seen from Table 1, most of the floating-point operations are replaced with logical operations. In this table, B is the bit width of a variable used in the logical operations, e.g., B = 64 if the type unsigned long long is used in the C/C++ language. Table 2 suggests that if kw is sufficiently smaller than D1 and D0, the total size of Mw and Cw is reduced compared to the original parameterization.
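The identity in Eq. (3) is easy to check numerically. The NumPy sketch below (ours, with arbitrary toy sizes) builds random factors and verifies that the factored forward pass matches the dense one; the product MwᵀMx in the middle is exactly the term that the logical operations of Table 1 compute.

```python
import numpy as np

rng = np.random.default_rng(0)
D1, D0, kw, kx = 64, 32, 16, 4               # toy sizes, illustrative only
Mw = rng.choice([-1, 0, 1], size=(D1, kw)).astype(float)
Cw = rng.standard_normal((kw, D0))
Mx = rng.choice([-1, 1], size=(D1, kx)).astype(float)
cx = rng.standard_normal(kx); bx = 0.5
b  = rng.standard_normal(D0)

W = Mw @ Cw                                  # factored weights, Eq. (1)
x = Mx @ cx + bx                             # factored activations, Eq. (2)
lhs = W.T @ x + b
rhs = Cw.T @ (Mw.T @ Mx) @ cx + bx * (Cw.T @ (Mw.T @ np.ones(D1))) + b
assert np.allclose(lhs, rhs)                 # Eq. (3) holds exactly for the factors
```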
To factorize W, we need to solve the following optimization problem:

    Jw = min_{Mw, Cw} ||W − MwCw||²_F.    (4)

However, the ternary constraint makes this optimization very difficult. Therefore, we take an iterative approach that repeats rank-one approximations one by one, as shown in Algorithm 1. Let mw^(i) denote the i-th column vector of Mw and cw^(i) the i-th row vector of Cw. Instead of directly minimizing Eq. (4), we iteratively solve the following rank-one approximation:

    min_{mw^(i), cw^(i)} ||R − mw^(i) cw^(i)||²_F.    (5)

Algorithm 1: Decompose W into Mw and Cw
Require: W, kw
Ensure: factorized components Mw and Cw
1: R ← W
2: for i ← 1 to kw do
3:   Initialize mw^(i) by three random values {-1, 0, +1}.
4:   Minimize ||R − mw^(i) cw^(i)||²_F by repeating the following two steps until convergence:
5:   [Step 1] cw^(i) ← mw^(i)ᵀR / (mw^(i)ᵀ mw^(i))
6:   [Step 2] mw,j^(i) ← argmin_{α ∈ {-1,0,+1}} ||rj − α·cw^(i)||², for j = 1, ..., D1
7:   R ← R − mw^(i) cw^(i)
8: end for
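The following NumPy sketch mirrors Algorithm 1; it is our paraphrase rather than the authors' implementation. The function name and random seed are illustrative, and we stop each rank-one fit after a fixed number of alternations instead of testing convergence.

```python
import numpy as np

def ternary_decompose(W, k_w, n_iters=20, seed=0):
    """Greedy rank-one ternary decomposition, a sketch of Algorithm 1.
    Returns M_w in {-1,0,+1}^(D1 x k_w) and C_w in R^(k_w x D0)."""
    rng = np.random.default_rng(seed)
    R = W.copy()
    D1, D0 = W.shape
    M = np.zeros((D1, k_w)); C = np.zeros((k_w, D0))
    for i in range(k_w):
        m = rng.choice([-1.0, 0.0, 1.0], size=D1)
        for _ in range(n_iters):
            denom = m @ m                                  # Step 1: least-squares c
            c = (m @ R) / denom if denom > 0 else np.zeros(D0)
            # Step 2: exhaustive per-row search over alpha in {-1, 0, +1}
            cand = np.stack([np.sum((R - a * c) ** 2, axis=1) for a in (-1.0, 0.0, 1.0)])
            m = np.array([-1.0, 0.0, 1.0])[cand.argmin(axis=0)]
        M[:, i], C[i] = m, c
        R -= np.outer(m, c)                                # peel off the rank-one term
    return M, C
```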
Binary decomposition for a given activation vector x can be performed by minimizing

    Jx(Mx, cx, bx; x) = ||x − (Mxcx + bx·1)||²₂.    (6)

With cx and bx fixed, each row of Mx can be solved independently:

    mx,j = argmin_{m ∈ {-1,+1}^(1×kx)} (xj − (m·cx + bx))²,  j = 1, ..., D1,    (7)

where xj is the j-th element of x. Since kx is sufficiently small, the 2^kx possible solutions can be exhaustively verified (in line 5 of Algorithm 2).

Our method makes this decomposition faster by pre-computing canonical cx and bx from training data and only optimizing Mx at test-time using a lookup table. This compromise is reasonable for the following two reasons: (1) the scale of activation values is similar regardless of vector elements within a layer, and (2) cx and bx reflect the scale of the approximated activation values. Knowing these properties, cx and bx are obtained by minimizing Jx(Mx, cx, bx; x̂), where x̂ is constructed as follows. First, N activation vectors xi are randomly sampled from training data. Second, n elements are randomly sampled from each xi. The sampled nN elements are concatenated to form a vector x̂ ∈ R^(nN). We use cx and bx as constants at test-time, and discard Mx.

At test-time, we only need to solve the optimization of Eq. (7) for each x. This can be regarded as a nearest neighbour search in one-dimensional space. We call βcx + bx a prototype, where β ∈ {-1, +1}^(1×kx). There are 2^kx possible prototypes because β takes 2^kx possible combinations. The nearest prototype to xj, and hence an optimal solution mx,j, can be efficiently found using a lookup table as follows.

Preparing the lookup table: We define L bins that evenly divide the one-dimensional space in the range from the smallest to the largest prototype. Let x̄l denote a representative value of the l-th bin, located at the center of the bin. For each x̄l, we solve Eq. (7) and assign the solution to the bin.

Activation encoding: At test-time, xj is quantized into L levels. In other words, xj is transformed into an index of the lookup table. Let pmax and pmin denote the largest and smallest prototype, respectively. We transform xj as follows:

    q = (L − 1)(xj − pmin)/(pmax − pmin),    (8)
    l = min(max(⌊q + 1/2⌋, 1), L).    (9)

The range from pmin to pmax is linearly mapped to the range from 1 to L by Eq. (8). The term q is rounded and truncated to the range from 1 to L by the max and min functions in Eq. (9). If L is sufficiently large, the solution assigned to the l-th bin can be regarded as a nearly optimal solution because the difference between xj and the center of the bin x̄l becomes very small. We found that L = 4096 is sufficient. The time complexity of this encoding is O(D1).
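A compact NumPy sketch of both steps follows; it is our reading of the procedure, with function names, the bin-center convention, and the vectorized layout chosen for illustration.

```python
import numpy as np

def build_lut(c_x, b_x, L=4096):
    """Precompute, for each of L bins, the best binary code
    (the 'preparing the lookup table' step; c_x, b_x are the canonical coefficients)."""
    k_x = len(c_x)
    codes = np.array([[1 if (i >> j) & 1 else -1 for j in range(k_x)]
                      for i in range(2 ** k_x)], dtype=float)
    protos = codes @ c_x + b_x                         # the 2^k_x prototype values
    p_min, p_max = protos.min(), protos.max()
    centers = p_min + (np.arange(L) + 0.5) * (p_max - p_min) / L
    best = np.abs(centers[:, None] - protos[None, :]).argmin(axis=1)
    return codes[best], p_min, p_max                   # per-bin optimal codes

def encode(x, lut, p_min, p_max):
    """Quantize activations to bin indices (Eqs. 8-9) and fetch their codes."""
    L = lut.shape[0]
    q = (L - 1) * (x - p_min) / (p_max - p_min)
    l = np.clip(np.floor(q + 0.5), 1, L).astype(int)   # 1-based bin index
    return lut[l - 1]                                  # rows of M_x
```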
5 EXPERIMENTS

We tested our method on three different convolutional neural networks: a CNN for handwritten digits (LeCun et al., 1998), VGG-16 for ImageNet classification (Simonyan & Zisserman, 2015), and VGG-Face for large-scale face recognition (Parkhi et al., 2015). To compute the memory compression rate, the size of W and the total size of Mw and Cw were compared. To obtain a fair evaluation of computation time, a test-time code of forward propagation was implemented without using any parallelization scheme, e.g., multi-threading or SIMD, and was used for both compressed and uncompressed networks. The computation time includes both binary activation encoding and calculation of Eq. (3). We used an Intel Core i7-5500U 2.40-GHz processor.

5.1 CNN FOR HANDWRITTEN DIGITS

MNIST is a database of handwritten digits which consists of 60000 training and 10000 test sets of 28 × 28 gray-scale images with ground-truth labels from 0 to 9. We trained our CNN by using an example code in MatConvNet 1.0-beta18 (Vedaldi & Lenc, 2015). Our architecture is similar to LeNet-5 (LeCun et al., 1998) but has a different number of input and output channels. In the layer configuration, the parameters of a convolutional layer are denoted as (conv<receptive field size>-<number of output channels>), the parameters of a fully connected layer are denoted as (fc<number of input channels>-<number of output channels>), and (maxpool) is 2 × 2 subsampling without overlapping. The error rate of this network is 0.86%.

We applied our method to the first fully connected layer (fc1024-640) and set n = 10 and N = 1000 to learn cx and bx from randomly chosen nN activations. The cases of kx = 1, 2, 3, 4 and kw = D0, D0/2, D0/5 were tested; i.e., kw was set to 640, 320, and 128.

[Figure 2: Results of MNIST; panel (a) increase in error (%) vs. memory compression rate (%), panel (b) increase in error (%) vs. acceleration rate (× times faster), for kx = 1, ..., 4 and kw = D0, D0/2, D0/5.]

Figure 2: Results of MNIST. The first fully connected layer was decomposed.

Figures 2(a) and (b) show the relationships among the increases in error rates, memory compression rates, and acceleration rates. Error rates basically improved along with increasing kx and saturated at kx = 4. It is interesting that kx = 2, which uses only 2 bits per element for encoding an activation x, still achieved good performance. While smaller kw achieved better compression and acceleration rates, error rates rapidly increased when kw = D0/5. One of the well-balanced parameter choices was (kx, kw) = (4, D0/2), which resulted in 1.95× faster processing and a 34.4% memory compression rate in exchange for a 0.19% increase in the error rate.

5.2 VGG-16 FOR IMAGENET CLASSIFICATION TASK

The ILSVRC2012 dataset (Russakovsky et al., 2015) consists of 1.2 million training, 50,000 validation, and 100,000 test sets. Each image represents one of 1000 object categories. In this experiment, we used a network model of VGG-16 (model D in (Simonyan & Zisserman, 2015)) that consists of 13 convolutional layers and 3 fully connected layers followed by a softmax layer; layers before the first fully connected layer are omitted from the description here.

First, all three fully connected layers were compressed with our algorithm. We set n = 10 and N = 1000 to learn cx and bx from randomly chosen nN activations. The cases of kx = 2, 3, 4 and kw = D0/2, D0/4, D0/8, D0/16 were tested. The case of kx = 1 was omitted because this setting resulted in a very high error rate. Note that each of the fully connected layers has a different D0; kw was independently set for each layer according to its D0. The top-5 error rates were evaluated on the validation dataset. The top-5 error rate of the original network is 13.4%.

The three lines with circles in Figure 3 show these results. It should be noted that much higher acceleration rates and smaller compression rates, with small losses of accuracy, were achieved than in the case of the network for MNIST. Interestingly, the case of kw = D0/4 still performed well due to the low-rank characteristics of the weights in the VGG-16 network.

Although the error rates rapidly increased when kw took much smaller values, we found that this could be improved by tuning kw of the third layer. More specifically, we additionally tested the following cases: while kw was set to D0/2, D0/4, D0/8, and D0/16 for the first and second layers, kw was fixed to D0 for the third layer, with kx set to 4. This is plotted with a red line in Figure 3. In this way, the memory compression rate and acceleration rate noticeably improved. Setting appropriate parameters for each layer is important to improve the total performance. Table 3 shows the details of the best balanced case, in which 15× faster processing and a 5.2% compression rate were achieved in exchange for a 1.43% increase in error rate.

[Figure 3: Results of VGG-16; panel (a) increase in top-5 error (%) vs. memory compression rate (%), panel (b) increase in top-5 error (%) vs. acceleration rate (× times faster), for kx = 2, 3, 4, kw from D0/2 to D0/16, and the variant with kw = D0 in FC3.]

Figure 3: Results of VGG-16. The last three fully connected layers were decomposed.

Table 3: Best balanced parameters for decomposing the three fully connected layers of VGG-16. Top-5 error: 13.4% (original), 14.8% (proposed).

layer           MBytes  msec    kw     kx   MBytes  ratio    msec   ratio
fc25088-4096    392.0   142.4   D0/8   4    11.1    2.8%     6.1    23.5×
fc4096-4096     64.0    22.8    D0/8   4    8.5     13.3%    3.0    7.5×
fc4096-1000     15.6    5.7     D0     4    4.8     30.7%    2.3    2.5×
total           471.6   170.9               24.4    5.2%     11.4   15.0×

Table 4: Results of decomposing convolutional layers of VGG-16.

Next, we also tested compressing convolutional layers. In this experiment, kw and kx were set to D0 and 4. This setting accelerates each of the layers 2.5× faster on average. Table 4 shows the positions of the compressed layers, top-5 errors, and acceleration rates of the entire network. Although kw and kx must be larger than those of fully connected layers to avoid error propagation, compression is still beneficial for entire-network acceleration. In summary, while compressing fully connected layers is beneficial for reducing memory, compressing convolutional layers is beneficial for reducing the entire computation time.

We next evaluate VGG-Face for large-scale face recognition. This network outputs a 4096-dimensional descriptor. We can verify whether two face images are identical by evaluating the Euclidean distance between the two ℓ2-normalized descriptors extracted from them.
In our experiment, we did not apply a descriptor embedding technique based on triplet loss minimization (Parkhi et al., 2015). Following the evaluation protocol introduced in a previous paper (Parkhi et al., 2015), we used the Labeled Faces in the Wild dataset (LFW) (Huang et al., 2007), which includes 13,233 face images of 5,749 identities. The LFW defines 1200 positive and 1200 negative pairs for testing. We used the 2400 test pairs to compute the ROC curve and the equal error rate (EER). The EER is defined as the error rate at the ROC operating point where the false positive and false negative rates are equal. The EER of the original network is 3.8%.

First, the two fully connected layers were compressed using our algorithm. We set n = 10 and N = 1000 to learn cx and bx from randomly chosen nN activations. We tested the cases of kx = 1, 2, 3, 4 and kw = D0/2, D0/4, D0/8, D0/16. Figure 4 reveals an interesting fact: even the fastest and smallest network configuration, kx = 1 and kw = D0/16, had little impact on the EER, in contrast to the previous ImageNet classification task, in which the recognition results were corrupted when kx = 1. This indicates that the 4096-dimensional feature space is well preserved regardless of such coarse discretization of both weights and activations.

[Figure 4: Results of VGG-Face; panel (a) increase in EER (%) vs. memory compression rate (%), panel (b) increase in EER (%) vs. acceleration rate (× times faster), for kx = 1, ..., 4 and kw from D0/2 to D0/16.]

Figure 4: Results of VGG-Face. The last two fully connected layers were decomposed.

Next, we also tested compressing convolutional layers. In this experiment, kw and kx were set to D0 and 4, the same setting used in Table 4. Table 5 shows the positions of the compressed layers and the EERs. The acceleration rates were almost the same as the results shown in Table 4, because the architecture of VGG-Face is the same as that of VGG-16 and we used the same parameters for kw and kx. Interestingly, compressing multiple layers, from the 2nd to the 10th, still preserves the original EER. As can be seen from this table, our method works very well for this kind of machine learning task.

Table 5: Results of decomposing convolutional layers of VGG-Face.

We proposed a network compression model that consists of two components: ternary matrix decomposition and binary activation encoding. Our experiments revealed that the proposed compression model is applicable not only to multi-class recognition but also to feature embedding. Since our approach is a post-processing step for a pre-trained model, it is promising that recent networks designed for semantic segmentation, describing images, stereo matching, depth estimation, and much more can also be compressed with our method. For future work, we plan to further improve the approximation error by investigating the discrete optimization algorithm.

REFERENCES

Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. NIPS, pp. 2148-2156, 2013.

Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. ICLR, 2016.
Sam Hare, Amir Saffari, and Philip H. S. Torr. Efficient online structured output learning for keypoint-based object tracking. CVPR, pp. 1894-1901, 2012.

Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: a database for studying face recognition in unconstrained environments. University of Massachusetts Amherst Technical Report, (07-49), 2007.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pp. 81-87, 2015.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. BMVC, 2014.

Omkar M. Parkhi, Andrea Vedaldi, and Andrew Zisserman. Deep face recognition. BMVC, 2015.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. ECCV, pp. 525-542, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.

Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang. Deep fried convnets. ICCV, pp. 1476-1483, 2015.

Yamauchi Yuji, Ambai Mitsuru, Sato Ikuro, Yoshida Yuichi, Fujiyoshi Hironobu, and Yamashita Takayoshi. Asymmetric feature representation for object recognition in client server system. ACCV, pp. 598-612, 2014.

Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating very deep convolutional networks for classification and detection. PAMI, 2015.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2323, 1998.

Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast training of convolutional networks through FFTs. ICLR, 2014.

Andrea Vedaldi and Karel Lenc. MatConvNet: Convolutional neural networks for MATLAB.

A BINARY VS. TERNARY

Figure 5 illustrates the reconstruction errors of a 4096 × 1000 weight matrix of the last fully connected layer in the VGG-16 model (Simonyan & Zisserman, 2015). We tested both the binary and the ternary constraint on Mw for comparison. The reconstruction error Jw monotonically decreased along with an increase in kw. It is clear that the ternary basis provided better reconstruction than the binary basis.

[Figure 5: reconstruction error Jw vs. number of basis vectors kw (from 0 to 2·D0), for binary and ternary bases.]

Figure 5: A 4096 × 1000 weight matrix of the last fully connected layer in the VGG-16 model (Simonyan & Zisserman, 2015) is decomposed under two different constraints: (blue) {-1, +1} and (red) {-1, 0, +1}.
HJ7O61Yxe

MODELING RELATIONAL TIME SERIES USING GAUSSIAN EMBEDDINGS

Ludovic Dos Santos*, Ludovic Denoyer, Benjamin Piwowarski & Patrick Gallinari

1 INTRODUCTION

Relational time series, i.e. multiple time series where the observations are correlated both inside each series and between series, occur in many domains such as ecology, medicine, biology, earth observation by satellite imagery or local measurements, multimedia, or even social data analysis. The correlations between the different observed series can come from a proximity (e.g. earth observation or epidemic diffusion) or from a similarity of behavior (e.g. user traces in social data). In the statistical literature, the modeling of relational time series has been the topic of a dedicated field, spatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). Different methodologies have been developed for handling a large variety of spatio-temporal phenomena, with an emphasis on the analysis of natural observations like weather prediction, ecology or remote sensing. In the machine learning domain, there exists a vast literature dedicated to sequence or time series prediction. Recently, deep recurrent neural networks have witnessed notable successes in different sequence and time series modeling tasks, leading to an increasing number of publications, e.g. (Barbounis et al. (2006); Hsieh et al. (2011); Cao et al. (2012); Hermans & Schrauwen (2013)). Despite a large number of recent developments, the modeling and analysis of relational time series has attracted only little attention in the field of representation learning. In addition, most of the models are deterministic, in the sense that they are trained to learn a fixed mapping for modeling the dynamics of the series.

We propose a new state space model for relational time series able to model the uncertainty at the observation and at the modeling levels. The principle of this approach is to associate each point of a time series with a Gaussian distribution in a latent space, the distribution over the observed values being directly computed from these latent distributions. The model has two main components. One is responsible for the dynamics in the latent space; this component thus models the evolution of the Gaussian distribution considering both the temporal intra-series and the relational inter-series dependencies.

*Both authors contributed equally to this work.

ABSTRACT

We address the problem of modeling multiple simultaneous time series where the observations are correlated not only inside each series, but among the different series. This problem happens in many domains such as ecology, meteorology, etc. We propose a new dynamical state space model, based on representation learning, for modeling the evolution of such series. The joint relational and temporal dynamics of the series are modeled as Gaussian distributions in a latent space. A decoder maps the latent representations to the observations. The two components (dynamic model and decoder) are jointly trained. Using stochastic representations allows us to model the uncertainty inherent to observations and to predict unobserved values together with a confidence in the prediction.
A second component acts as a decoder and maps the latent representations associated with each series to the corresponding observations in the output space.

The contributions of the paper are thus: (i) a new dynamical model for relational time series inspired by representation learning; (ii) a stochastic component for modeling the uncertainties at the observation and dynamic levels.

The paper is organized as follows. In Section 2 we introduce some related work on forecasting in time series, representation learning for time series, and recent deep learning works focusing on modeling uncertainty. The model is presented in Section 3 together with four different variants. Section 4 presents experimental results on four datasets, and Section 5 concludes this work and gives some perspectives.

2 RELATED WORK

The classical topic of time series modeling and forecasting has given rise to an extensive literature. In statistics, classical linear models include many variations around auto-regressive and moving average models (De Gooijer & Hyndman (2006)). In machine learning, non-linear extensions of these models based on neural networks were proposed as early as the 90s, opening the way to many other non-linear models including kernel methods (Müller et al. (99)).

Relational time series have mainly been studied in the field of spatio-temporal statistics (Cressie & Wikle (2011); Wikle & Hooten (2010)). The traditional method first relied on a descriptive approach using the first and second-order moments of the process for modeling the spatio-temporal dependencies. More recently, dynamical state models, where the current state is conditioned on the past, have been explored (Wikle (2015)). These models have been considered both for continuous/discrete space and time components. However, the most common way is to consider discrete time, leading to the modeling of time series of spatial processes as we do here. When space is discrete, the model comes down to a general vectorial autoregressive formulation. These models face a curse of dimensionality in the case of a large number of sources. Different strategies have been adopted to solve this problem, such as embedding the spatio-temporal process in a low-dimensional manifold or parameter reduction (Wikle (2015)), leading to model families quite similar to the ones used in machine learning for modeling dynamical phenomena. Also, for complex underlying processes, observations only provide an incomplete description of the process dynamics, so that modeling uncertainty at the data and model levels is an important topic.

In the last 10 years, there has been a growing interest in learning latent representations, for example through neural networks and deep learning. Dynamical state space models such as recurrent neural networks (RNNs), which have been used for time series forecasting in different contexts since the early nineties (Connor et al. (1994)), have recently witnessed important successes in different areas for general sequence modeling problems, leading to breakthroughs in domains like speech (Graves et al. (2013)), language generation (Sutskever et al. (2011)), translation (Cho et al. (2014)), and many others. Among this family, the model closest to ours is the dynamic factor graph model of (Mirowski & LeCun (2009)), designed for multiple series modeling for the tasks of forecasting and imputation. However, this model does not consider relational dependencies, which is the focus of our approach.
Most of the above models make use of pointwise representations and do not explicitly model the uncertainties present in the process and/or in the observations. Recently, in the representation learning community, there has been a growing interest in using distributions as latent representations instead of points. (Vilnis & McCallum (2015); He et al. (2015); Dos Santos et al. (2016)) all make use of Gaussian distributions for representing different items like words (Vilnis & McCallum (2015)), nodes in knowledge graphs (He et al. (2015)) or nodes in graphs for transductive classification (Dos Santos et al. (2016)). Note that Gaussian processes have also been used for time series prediction, but they have mainly been considered for univariate time series prediction (Hachino & Kadirkamanathan (2011); Brahim-Belhouari & Bermak (2004)) and they do not use a state space formulation.

Recent techniques in variational inference (Kingma & Welling (2014); Rezende et al. (2014)) deal with uncertainty by modeling distributions in the observation space, mapping random variables within a latent space to observations with a deep neural network. Extensions of the variational inference method to time series have been proposed (Fraccaro et al. (2016); Krishnan et al. (2015)) but, contrarily to those works, we take into account relationships (both temporal and relational). Furthermore, in our model, we work directly with random variables to predict observations from time series. This gives us direct access to the output distribution, with no need to sample or work with intractable distributions.

Let us consider a set of n temporal sequences¹ x1, .., xn such that xi^(t) ∈ R is the value of the i-th sequence at time t, with xi = (xi^(1), ..., xi^(T)). For simplification, we consider that all the series have the same length, but this is not restrictive.

¹For simplicity, we consider univariate time series, but the model can be trivially extended to multivariate time series.

We model the dependencies between the different series through a graph, the different series sources being the graph vertices and the links modeling explicit dependencies between the sources. These links can reflect a spatial proximity between the sources of the series, a similarity of behavior between users, or any other predefined relation. These explicit relations will be modeled in the latent space. Our hypothesis is that they will constrain the representations of linked sources to be similar to one another in the latent space, this similarity being controlled by the strength of the link between the two time series, denoted ei,j. We assume that the graph structure is static in time and is provided as prior information. The model can be extended to learn these static dependencies, but this is not considered here.

Let us denote τ the size of the prediction horizon. The forecasting problem considered here is to compute, for all series i, the values xi^(T+k) for all k in [1; τ]. This problem can be straightforwardly extended to the imputation problem that aims at predicting missing values.

3.2 INFORMAL DESCRIPTION

The proposed model is a dynamic state space model: the dynamics are modeled in a continuous latent state space and the observations are generated from states in this latent space. State space models have already been considered for multiple time series (e.g. Mirowski & LeCun (2009)) and for spatio-temporal processes (e.g. Wikle & Hooten (2010)).

Both the observations and the dynamics are subject to uncertainties. Usually, the observations correspond to a partial view of the underlying generating process, and the dynamics, being hidden, are not directly accessible and should be modeled as a stochastic process.

To handle this uncertainty, we propose a model, namely the Relational Dynamic model with Gaussian
representations (RDG), that represents latent factors as distributions in a latent space and learns the series dynamics in this latent space. The distributions themselves are estimated using observations, like for any other representation learning model. Besides being more adapted to handling the noise inherent to the process and to the observations, the model can be used to predict the posterior distribution of the variables associated with the series, and in particular the confidence or variance associated with the predictions.

Our model is built on top of the model in (Ziat et al. (2016)), which proposes a deterministic dynamical process model but does not consider any explicit modeling of uncertainty. In this paper, we propose a model that uses Gaussian embeddings, and extend the dynamics and loss functions of the model in (Ziat et al. (2016)).

The model is an extension of the deterministic model of (Ziat et al. (2016)) and has two main components. (i) Decoding component: we consider that each series corresponds to a particular trajectory in an unknown latent space. Each series xi is thus associated with a sequence of random variables in R^d denoted Zi^(1), ..., Zi^(T), Zi^(t) being the latent factor explaining the observed value of series i at time t. We model each Zi^(t) as a multivariate Gaussian variable, and the observed value is predicted using a decoding function mapping Zi^(t) to Xi^(t) = f(Zi^(t)). (ii) Dynamic component: the second component models the series dynamics in the latent space. We suppose that the dynamics can be captured for all series through a function h that maps the latent random variable Zi^(t) to the next latent variable Zi^(t+1). In addition, constraints are introduced to reflect prior knowledge about the relational dependency structure of the series. For any couple of series i and j with a known dependency, i.e. such that ei,j > 0, we add a corresponding constraint on Zi^(t) and Zj^(t), as explained in Section 3.3.3.

In the following, we explain how the distributions corresponding to the random variables Z are learned, jointly with the functions f (decoder component) and h (dynamic component).

3.3 MODEL DEFINITION

We define a global loss function L(μ, Σ, f, h), where μ and Σ are the means and covariance matrices for all the series and all the time steps between 1 and T. The loss is a sum of three terms: (i) a decoding loss ΔDe, (ii) a dynamical loss ΔDy, and (iii) a structural loss ΔR:

    L(μ, Σ, f, h) = Σ_{i=1}^{n} Σ_{t=1}^{T} ΔDe(f(Zi^(t)), xi^(t)) + λDy Σ_{i=1}^{n} Σ_{t=1}^{T−1} ΔDy(Zi^(t+1), h(Zi^(t))) + λR Σ_{i,j=1}^{n} Σ_{t=1}^{T} ei,j ΔR(Zi^(t), Zj^(t)),    (1)

where λDy and λR are hyperparameters weighting the importance of the different elements in the loss function. The first term corresponds to the decoding component and forces both f and the learned distributions of the variables Z to explain the observations; the second term, the dynamic component, encourages h to model the time dynamics in the latent space; and the third term captures the relations between pairs of series. In the following, we use for f a linear function, and h will be either a linear or a non-linear function (see Section 3.3.2).
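A toy sketch of this three-term objective, in our own notation rather than the authors' code, is given below for diagonal Gaussian embeddings with a linear decoder and linear dynamics. The sizes, the relation matrix, and in particular the simplified variance propagation in the dynamic term (the predicted variance is kept unchanged rather than transformed by the dynamics) are assumptions made for brevity.

```python
import torch

n, T, d = 4, 50, 3
mu = torch.randn(n, T, d, requires_grad=True)        # means of Z_i^(t)
logvar = torch.zeros(n, T, d, requires_grad=True)    # log of the diagonal variances
theta = torch.randn(d, requires_grad=True)           # linear decoder f(z) = <theta, z>
gamma = torch.eye(d, requires_grad=True)             # linear dynamics h(z) = gamma z
x = torch.randn(n, T)                                # observations
e = (torch.rand(n, n) < 0.2).float()                 # prior relation weights e_ij
lam_dy, lam_r = 1.0, 0.1

def kl(mu1, lv1, mu2, lv2):                          # KL between diagonal Gaussians
    return 0.5 * (lv2 - lv1 + (lv1.exp() + (mu1 - mu2) ** 2) / lv2.exp() - 1).sum(-1)

dec = ((mu @ theta - x) ** 2).sum()                  # decoding term (Delta_De1 below)
dyn = kl(mu[:, 1:], logvar[:, 1:],                   # dynamic term
         mu[:, :-1] @ gamma.T, logvar[:, :-1]).sum() # (variance propagation simplified)
rel = (e.unsqueeze(-1) * kl(mu[:, None], logvar[:, None],
                            mu[None], logvar[None])).sum()   # structural term
loss = dec + lam_dy * dyn + lam_r * rel
loss.backward()                                      # gradients for all parameters
```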
Learning: Learning the model is performed through the minimization of the loss function L(μ, Σ, f, h) with respect to μ, Σ, f and h. To simplify the notation, the parameters of f and h are not made explicit; f and h are supposed to be differentiable. At the end of the learning process, all the latent distributions for each of the time steps are known for the training data, as well as the decoding function f and the dynamical one h. We used ADAM (Kingma & Ba (2015)) as a stochastic gradient descent technique. This optimization can easily be performed on a large-scale dataset, and/or by using GPUs.

3.3.1 FROM LATENT SPACE TO OBSERVATIONS

The mapping onto the latent space is learned so that the values xi^(t) of each series can be predicted from their respective Gaussian embeddings Zi^(t) through the f function. We define below two alternative decoding loss functions ΔDe, used in the experiments for measuring the error between the model predictions and the observations.

The first loss measures the difference between the expected value of f and the observation using a mean-square error:

    ΔDe1(f(Zi^(t)), xi^(t)) = (E[f(Zi^(t))] − xi^(t))².

When considering a linear decoding function such as f(·) = ⟨θ, ·⟩, θ being the set of parameters of f, ΔDe1 can be rewritten as:

    ΔDe1(f(Zi^(t)), xi^(t)) = (⟨θ, μi^(t)⟩ − xi^(t))².

The second loss measures the distance between the random variable modeling the predicted observations and the observations; this is the expectation of the mean squared error between the predictions and the observations:

    ΔDe2(f(Zi^(t)), xi^(t)) = E[(f(Zi^(t)) − xi^(t))²].

When f is a linear function, this loss can be written as:

    ΔDe2(f(Zi^(t)), xi^(t)) = (⟨θ, μi^(t)⟩ − xi^(t))² + Σ_{k=1}^{d} θk² Σi^(t)(k,k),

where Σi^(t)(k,k) denotes the k-th diagonal element of the covariance matrix.

Minimizing ΔDe1 only updates the mean of the distributions, whereas minimizing ΔDe2 updates both the mean and the variance. More specifically, an observed value with ΔDe2 will pull the variances of the associated representation down. Moreover, this effect will be higher for the dimensions of the latent space where the value of θ is higher. This is sensible since variance is reduced for the dimensions that are important for the prediction.
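The closed form of ΔDe2 for a linear decoder follows from E[(θᵀZ − x)²] = (θᵀμ − x)² + θᵀΣθ, which reduces to the sum above for a diagonal Σ. The small NumPy check below (ours, with arbitrary toy values) compares it against a Monte-Carlo estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
theta = rng.standard_normal(d)
mu = rng.standard_normal(d)
var = rng.uniform(0.1, 1.0, d)           # diagonal covariance entries
x = 0.7

closed = (theta @ mu - x) ** 2 + np.sum(theta ** 2 * var)
z = mu + np.sqrt(var) * rng.standard_normal((1_000_000, d))   # samples of Z
mc = np.mean((z @ theta - x) ** 2)
print(closed, mc)                        # the two values agree closely
```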
3.3.2 MODELING DYNAMICS

The loss function ΔDy aims at finding values Zi^(t) and a dynamic model h that will be used to predict the representation of the next state of time series i, Zi^(t+1). The function h maps a Gaussian distribution to the distribution of the next step. Following (Ziat et al. (2016)), we use a Kullback-Leibler divergence (noted DKL(·||·)) to compare the distribution at (t + 1) to the distribution predicted by h:

    ΔDy(Zi^(t+1), h(Zi^(t))) = DKL(Zi^(t+1) || h(Zi^(t))).

We propose in the following two alternative functions for h. For the first one, we consider that the latent representation at time (t + 1) is a linear transformation of the latent distribution at time t. The transformed variable is also Gaussian and its parameters can be easily computed. In this case, h is a linear function from R^d to R^d represented by a matrix γ ∈ Md,d(R), i.e. h(Zi^(t)) = γZi^(t).

Linear transformations of random vectors might be too restrictive to model complex processes. As an alternative transformation, we use two non-linear multilayer perceptrons (MLPs), one hm for predicting the means and one hc for predicting the variances. Note that in the second case, we also make the hypothesis that the resulting distribution (for Zi^(t+1)) is Gaussian. In the two cases, the KL divergence between the two Gaussian distributions has a simple analytic form from which the gradient can be easily computed².

²DKL(Zi^(t) || Zj^(t)) = ½ ( tr(Σj^(t)⁻¹ Σi^(t)) + (μj^(t) − μi^(t))ᵀ Σj^(t)⁻¹ (μj^(t) − μi^(t)) − d + log(det Σj^(t) / det Σi^(t)) ).

3.3.3 STRUCTURAL REGULARIZATION

At last, ΔR corresponds to a structural regularization over the graph structure that encourages the model to learn similar representations for time series that are interdependent. This forces the model to learn representations that reflect the structural dependencies between the series. Recall that these dependencies are supposed to be provided as priors for this model. We define this regularization loss as:

    ΔR(Zi^(t), Zj^(t)) = DKL(Zi^(t) || Zj^(t)).

Minimizing the regularization term ΔR has a direct impact on the distributions of the predicted observations for connected time series. More precisely, we have the following inequality:

    dTV(Xi^(t), Xj^(t)) ≤ √( d · DKL(Zi^(t) || Zj^(t)) / 2 ),

where dTV is the total variation distance,

    dTV(X, Y) = sup_{A ∈ Borel} |DX(A) − DY(A)|,

with X and Y two random variables with density distributions DX and DY respectively, and Borel the Borel sets of R^n (roughly, cuboids in R^n). This means that having relatively similar representations (with respect to the KL-divergence) constrains the predicted values to be similar. For more details see Appendix A.

During inference, when forecasting values, the latent distributions at (T + 1) are deduced from the ones at time T and follow N(h(μi^(T)), Σi^(T)); distributions at (T + 2) follow N(h∘h(μi^(T)), Σi^(T)), and so on.
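For completeness, the analytic form of footnote 2, specialized to the diagonal covariances used in our earlier sketch, can be written in a few lines. This is our own helper, not the authors' code; any autodiff framework would give its gradient directly.

```python
import numpy as np

def kl_gauss_diag(mu1, var1, mu2, var2):
    """KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) ), the closed form of
    footnote 2 restricted to diagonal covariance matrices."""
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

# sanity check: the KL of a distribution with itself is zero
mu = np.array([0.3, -1.2]); var = np.array([0.5, 2.0])
assert abs(kl_gauss_diag(mu, var, mu, var)) < 1e-12
```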
network with one hidden layer of size s. In this case, p and s are hyperparameters selected by gric. search. RNN, a recurrent neural network with one hidden layer of size s of recurrent units and tanl. non-linearities. The RNN model is a state space non-linear auto-regressive model with exogenous. inputs (the past values of the series). Note that this model should in principle be able to learr. the inter-series dependencies, but the dependencies are not modeled explicitly as they are in ou. model. Also the RNN does not introduce explicit modeling of uncertainties. KF (Kalman (1960)). is a classic Kalman Filter with linear transformations from one state to another. DFG (Mirowsk. & LeCun (20o9)), a state of the art model that learns continuous deterministic latent variables. by modeling the dynamics and the joint probabilities between series. All the hyperparameters o. the baselines have been set using a validation set by grid search, including the best architectures. for the dynamic model h when it is a multi-layer perceptron with one hidden layer or a linear model.\nFor the evaluation we have considered a root-mean-square error (RMSE) criterion. Regarding th experimental protocol, models are evaluated using cross-validation with rolling origin."}, {"section_index": "8", "section_name": "4.2 RESULTS", "section_text": "Let us first present the performance of our model w.r.t. the baselines for prediction at horizon 1 ir Figure 1b We have tested the four variants of our approach i.e combinations of De. or De, with Dy, or Dy,. The proposed model obtains the best results on all the datasets except GFT where KF performs better. Otherwise it outperforms the baselines on two datasets (GL-P -Grand Lyon Parks and GFT -Google Flu Trends- on the table) and gets results similar to the RNN on the two others (GL-T -Grand yon Traffic- and WHO). The non linear dynamical model used for py, usually gets better results than other models, the best combination being the use of the MSE expectation erroi for the decoder and the non-linear model for the dynamics (denoted RDG2.2 on the figure). The non linear dynamical model used for Dy, usually gets better results than other models, the best combination being the use of the MsE expectation error for the decoder and the non-linear model for the dynamics (denoted RDG2.2 on the figure).\nFigure 1a shows the prediction quality (RMSE) at (T+1), (T+2), (T+3), (T+ 4) and (T+ 5) and. illustrates the ability of RDG to predict correctly at different horizons. Here again, the performance. of RDG is very close to the performance of the Recurrent Neural Network. One can remark that at (T + 5) KF does not goes the distance since it performs well at (T + 1) but quite badly at (T + 5). in comparison to other baselines.\nRDG has the additional property of modeling the uncertainty associated to its predictions, which is not the case for a RNN. Let us consider the curves presented in Figure 2. They illustrate, the pre dictions made by our model together with their associated variance computed through the Gaussian embeddings. First, one can see that the ground truth values are always within the confidence interval provided by our model, which means that RDG computes relevant minimum and maximum possible values. 
4.2 RESULTS

Let us first present the performance of our model w.r.t. the baselines for prediction at horizon 1 in Figure 1b. We tested the four variants of our approach, i.e. combinations of ΔDe1 or ΔDe2 with ΔDy1 or ΔDy2. The proposed model obtains the best results on all the datasets except GFT, where KF performs better: it outperforms the baselines on two datasets (GL-P, Grand Lyon Parks, and GFT, Google Flu Trends, in the table) and gets results similar to the RNN on the two others (GL-T, Grand Lyon Traffic, and WHO). The non-linear dynamical model used for ΔDy2 usually gets better results than the other models, the best combination being the use of the MSE expectation error for the decoder together with the non-linear model for the dynamics (denoted RDG2,2 in the figure).

Model     GL-T     GL-P     GFT      WHO
AR        0.0752   0.0892   0.0626   0.0832
FFNN      0.0751   0.0894   0.045    0.0838
RNN       0.0709   0.0890   0.0431   0.0795
KF        0.0711   0.0833   0.0388   0.0799
DFG       0.0712   0.0911   0.0592   0.0795
RDG1,1    0.0742   0.0902   0.0607   0.0848
RDG1,2    0.0707   0.0834   0.0434   0.0796
RDG2,1    0.0765   0.0896   0.0589   0.0831
RDG2,2    0.0718   0.0828   0.0429   0.0795

Figure 1: Quantitative comparison between baselines and our proposed model (RDG) on the prediction task. RDGk,l corresponds to the variant with losses (ΔDek, ΔDyl).

Figure 1a shows the prediction quality (RMSE) at (T+1), (T+2), (T+3), (T+4) and (T+5), and illustrates the ability of RDG to predict correctly at different horizons. Here again, the performance of RDG is very close to the performance of the recurrent neural network. One can remark that at (T+5) KF does not go the distance: it performs well at (T+1) but quite badly at (T+5) in comparison to the other baselines.

RDG has the additional property of modeling the uncertainty associated with its predictions, which is not the case for an RNN. Let us consider the curves presented in Figure 2. They illustrate the predictions made by our model together with their associated variance, computed through the Gaussian embeddings. First, one can see that the ground-truth values are always within the confidence interval provided by our model, which means that RDG computes relevant minimum and maximum possible values. Another aspect is that the size of the interval increases with the prediction horizon, which is what is expected from such a model. The latter is thus able to predict relevant confidence values for its predictions.

[Figure 2: two example GFT series over 35 steps; legend: ground truth, prediction ± variance, prediction test.]

Figure 2: Forecasts on GFT (two different time series of the dataset) with the RDG2,2 model, showing its range of confidence: E[f(Z^(t))] ± var(f(Z^(t))). The prediction at 25+n corresponds to f(hⁿ(Z^(25))).

Comparison between RDG with/without structural regularization or uncertainty. We compare in Table 1 the results of our model when taking into account the neighborhood graph (λR ≠ 0) or not (λR = 0): forecasts are uniformly worse for all datasets when we do not take into account the neighborhood graph, suggesting that the regularizer improves the model when the input graph is relevant. Furthermore, we give the results obtained without uncertainty, which corresponds to the model described in (Ziat et al. (2016)) (denoted Rainstorm): here again, our model outperforms the previous one on all the datasets.

Table 1: RMSE at T + 1 on the four datasets.

Model           GL-T     GL-P     GFT      WHO
Rainstorm       0.0710   0.0886   0.0440   0.0804
RDG (λR = 0)    0.0719   0.0900   0.0441   0.0807
RDG             0.0707   0.0828   0.0388   0.0795

CONCLUSION AND FUTURE WORK

We have proposed a model for relational time series forecasting. Our model (RDG) is based on latent Gaussian embeddings, and has shown competitive performance on four different datasets compared to state-of-the-art models. Moreover, RDG allows us to model the uncertainty of predictions, providing for example confidence intervals for each prediction. Future work will investigate more complex dynamic and prediction functions, as well as the behavior of the model on imputation tasks.

REFERENCES

TG Barbounis, JB Theocharis, MC Alexiadis, and PS Dokopoulos. Long-term wind speed and power forecasting using local recurrent neural network models. IEEE TEC, 2006.

Jan G De Gooijer and Rob J Hyndman. 25 years of time series forecasting. International Journal of Forecasting, 2006.

Ludovic Dos Santos, Benjamin Piwowarski, and Patrick Gallinari. Multilabel classification on heterogeneous graphs with Gaussian embeddings. In ECML-KDD, 2016.

Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In IEEE ICASSP, 2013.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.

D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.

Rahul G Krishnan, Uri Shalit, and David Sontag. Deep Kalman filters. NIPS 2015 Workshop, 2015.

KR Müller, A. J. Smola, G. Rätsch, B. Schölkopf, J. Kohlmorgen, and V. Vapnik. Using support vector machines for time series prediction. Kernel Methods: Support Vector Learning, 99.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and variational inference in deep latent Gaussian models. In International Conference on Machine Learning, 2014.
Stochastic backpropagation and variational inference in deep latent gaussian models. In International Conference on Machine Learning, 2014.

Ilya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural networks. In Proceedings of ICML, 2011.

Luke Vilnis and Andrew McCallum. Word representations via gaussian embedding. ICLR, 2015.

Christopher K. Wikle. Modern perspectives on statistics for spatio-temporal data. Wiley Interdisciplinary Reviews: Computational Statistics, 7(1):86-98, 2015.

Ali Ziat, Gabriella Contardo, Nicolas Baskiotis, and Ludovic Denoyer. Learning embeddings for completion and prediction of relational multivariate time-series. In ESANN, 2016."}, {"section_index": "11", "section_name": "IMPACT OF MINIMIZING THE KL-DIVERGENCE ON PREDICTED VALUES", "section_text": "In this section, we show that the structural regularization term between two time series bounds the difference between the predicted observations.

Since we use diagonal covariance matrices, and since the KL-divergence is invariant when both random variables are multiplied by the same scalar, we can show that the divergence decomposes over the d components:

$$D_{KL}\big(Z_i(t) \,\|\, Z_j(t)\big) = \sum_{k=1}^{d} D_{KL}\big(Z_i(t)_k \,\|\, Z_j(t)_k\big).$$

Then, using Pinsker's inequality, one can see that minimizing the KL-divergence also minimizes the total variation norm (which can be more intuitive in some cases), leading to:

$$d_{TV}\big(Z_i(t)_k, Z_j(t)_k\big) \le \sqrt{\tfrac{1}{2} D_{KL}\big(Z_i(t)_k \,\|\, Z_j(t)_k\big)}, \qquad k = 1, \dots, d.$$

Finally, each component of the random vectors Z(t) being pairwise independent, we have:

$$d_{TV}\big(Z_i(t), Z_j(t)\big) \le \sum_{k=1}^{d} d_{TV}\big(Z_i(t)_k, Z_j(t)_k\big).$$

Combining the inequalities above, and using the Cauchy-Schwarz inequality $\sum_k \sqrt{a_k} \le \sqrt{d \sum_k a_k}$ for the last step, we can straightforwardly show the following inequality:

$$d_{TV}\big(Z_i(t), Z_j(t)\big) \le \sum_{k=1}^{d} \sqrt{\tfrac{1}{2} D_{KL}\big(Z_i(t)_k \,\|\, Z_j(t)_k\big)} \le \sqrt{\tfrac{d}{2}\, D_{KL}\big(Z_i(t) \,\|\, Z_j(t)\big)}.$$"}]
BJ6oOfqge | [{"section_index": "0", "section_name": "TEMPORAL ENSEMBLING FOR SEMI-SUPERVISED LEARNING", "section_text": "Samuli Laine
slaine@nvidia.com
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "It has long been known that an ensemble of multiple neural networks generally yields better predictions than a single network in the ensemble. This effect has also been indirectly exploited when training a single network through dropout (Srivastava et al., 2014), dropconnect (Wan et al., 2013), or stochastic depth (Huang et al., 2016) regularization methods, and in swapout networks (Singh et al., 2016), where training always focuses on a particular subset of the network, and thus the complete network can be seen as an implicit ensemble of such trained sub-networks. We extend this idea by forming ensemble predictions during training, using the outputs of a single network on different training epochs and under different regularization and input augmentation conditions. Our training still operates on a single network, but the predictions made on different epochs correspond to an ensemble prediction of a large number of individual sub-networks because of dropout regularization.

This ensemble prediction can be exploited for semi-supervised learning where only a small portion of training data is labeled. If we compare the ensemble prediction to the current output of the network being trained, the ensemble prediction is likely to be closer to the correct, unknown labels of the unlabeled inputs. Therefore the labels inferred this way can be used as training targets for the unlabeled inputs. Our method relies heavily on dropout regularization and versatile input augmentation. Indeed, without either, there would be much less reason to place confidence in whatever labels are inferred for the unlabeled training data.

We describe two ways to implement self-ensembling, the Π-model and temporal ensembling. Both approaches surpass prior state-of-the-art results in semi-supervised learning by a considerable margin. We furthermore observe that self-ensembling improves the classification accuracy in fully labeled cases as well, and provides tolerance against incorrect labels.

The recently introduced transform/stability loss of Sajjadi et al.
(2016b) is based on the same principle as our work, and the Π-model can be seen as a special case of it. The Π-model can also be seen as a simplification of the Γ-model of the ladder network by Rasmus et al. (2015), a previously presented network architecture for semi-supervised learning. Our temporal ensembling method has connections to the bootstrapping method of Reed et al. (2014) targeted for training with noisy labels.
taila@nvidia.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "[Figure 1 diagram: training passes of the Π-model (top) and temporal ensembling (bottom); only the caption below is recoverable from the garbled plot text.]

Figure 1: Structure of the training pass in our methods. Top: Π-model. Bottom: temporal ensembling. Labels y_i are available only for the labeled inputs, and the associated cross-entropy loss component is evaluated only for those.

Algorithm 1 Π-model pseudocode.

We present two implementations of self-ensembling during training. The first one, the Π-model, encourages consistent network output between two realizations of the same input stimulus, under two different dropout conditions. The second method, temporal ensembling, simplifies and extends this by taking into account the network predictions over multiple previous training epochs.

We shall describe our methods in the context of traditional image classification networks. Let the training data consist of a total of N inputs, out of which M are labeled. The input stimuli, available for all training data, are denoted x_i, where i ∈ {1 ... N}. Let set L contain the indices of the labeled inputs, |L| = M. For every i ∈ L, we have a known correct label y_i ∈ {1 ... C}, where C is the number of different classes."}, {"section_index": "3", "section_name": "2.1 Π-MODEL", "section_text": "The structure of the Π-model is shown in Figure 1 (top), and the pseudocode in Algorithm 1. During training, we evaluate the network for each training input x_i twice, resulting in prediction vectors z_i and z̃_i. Our loss function consists of two components. The first component is the standard cross-entropy loss, evaluated for labeled inputs only. The second component, evaluated for all inputs, penalizes different predictions for the same training input x_i by taking the mean square difference between the prediction vectors z_i and z̃_i.¹ To combine the supervised and unsupervised loss terms, we scale the latter by the time-dependent weighting function w(t). By comparing the entire output vectors z_i and z̃_i, we effectively ask the "dark knowledge" (Hinton et al., 2015) between the two evaluations to be close, which is a much stronger requirement compared to asking that only the final classification remains the same, which is what happens in traditional training.

¹ Squared difference gave slightly but consistently better results than cross-entropy loss in our tests.

It is important to notice that, because of dropout regularization, the network output during training is a stochastic variable. Thus two evaluations of the same input x_i under the same network weights yield different results. In addition, Gaussian noise and augmentations such as random translation are evaluated twice, resulting in additional variation. The combination of these effects explains the difference between the prediction vectors z_i and z̃_i. This difference can be seen as an error in classification, given that the original input x_i was the same, and thus minimizing it is a reasonable goal.

In our implementation, the unsupervised loss weighting function w(t) ramps up, starting from zero, along a Gaussian curve during the first 80 training epochs. See Appendix A for further details about this and other training parameters. In the beginning the total loss and the learning gradients are thus dominated by the supervised loss component, i.e., the labeled data only.
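To make the loss concrete, here is a minimal sketch of the Π-model objective in PyTorch-style Python. The function and variable names are ours, not the paper's; per-branch augmentation and the Appendix A ramp-up of w(t) are assumed to happen outside this function.

```python
import torch
import torch.nn.functional as F

def pi_model_loss(model, x1, x2, y, labeled_mask, w_t):
    """Sketch of the Pi-model loss. `x1` and `x2` are two independently
    augmented versions of the same mini-batch; `model` must stay in train
    mode so dropout yields two different stochastic realizations."""
    z = model(x1)        # first branch evaluation
    z_tilde = model(x2)  # second branch evaluation
    # Supervised component: cross-entropy on the labeled subset only.
    supervised = F.cross_entropy(z[labeled_mask], y[labeled_mask])
    # Unsupervised component: mean squared difference between the two
    # softmax output vectors, evaluated for all inputs in the batch.
    unsupervised = F.mse_loss(F.softmax(z, dim=1), F.softmax(z_tilde, dim=1))
    return supervised + w_t * unsupervised
```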
We have found it to be very important that the ramp-up of the unsupervised loss component is slow enough; otherwise, the network gets easily stuck in a degenerate solution where no meaningful classification of the data is obtained.

Our approach is somewhat similar to the Γ-model of the ladder network by Rasmus et al. (2015), but conceptually simpler. In the Π-model, the comparison is done directly on network outputs, i.e., after softmax activation, and there is no auxiliary mapping between the two branches such as the learned denoising functions in the ladder network architecture. Furthermore, instead of having one "clean" and one "corrupted" branch as in the Γ-model, we apply equal augmentation and noise to the inputs for both branches.

As shown in Section 3, the Π-model combined with a good convolutional network architecture provides a significant improvement over prior art in classification accuracy.

Analyzing how the Π-model works, we could equally well split the evaluation of the two branches in two separate phases: first classifying the training set once without updating the weights θ, and then training the network on the same inputs under different augmentations and dropout, using the just obtained predictions as targets for the unsupervised loss component. As the training targets obtained this way are based on a single evaluation of the network, they can be expected to be noisy. Temporal ensembling alleviates this by aggregating the predictions of multiple previous network evaluations into an ensemble prediction. It also lets us evaluate the network only once during training, gaining an approximate 2x speedup over the Π-model.

The structure of our temporal ensembling method is shown in Figure 1 (bottom), and the pseudocode in Algorithm 2. The main difference to the Π-model is that the network and augmentations are evaluated only once per input per epoch, and the target vectors for the unsupervised loss component are based on prior network evaluations instead of a second evaluation of the network.

After every training epoch, the network outputs z_i are accumulated into ensemble outputs Z_i by updating Z_i ← αZ_i + (1 − α)z_i, where α is a momentum term that controls how far the ensemble reaches into training history. Because of dropout regularization and stochastic augmentation, Z thus contains a weighted average of the outputs of an ensemble of networks f from previous training epochs, with recent epochs having larger weight than distant epochs. For generating the training targets z̃, we need to correct for the startup bias in Z by dividing by the factor (1 − α^t). A similar bias correction has been used in, e.g., Adam (Kingma & Ba, 2014) and mean-only batch normalization (Salimans & Kingma, 2016). On the first training epoch, Z and z̃ are zero as no data from previous epochs is available. For this reason, we specify the unsupervised weight ramp-up function w(t) to also be zero on the first training epoch.
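Before the full pseudocode, the accumulation and bias correction can be sketched in a few lines of Python; the shapes and the value of α here are illustrative only.

```python
import numpy as np

# Hypothetical shapes: N training inputs, C classes.
N, C, alpha = 50000, 10, 0.6
Z = np.zeros((N, C))  # accumulated ensemble predictions

def update_targets(z_epoch, t):
    """Accumulate one epoch of network outputs `z_epoch` (N x C) and
    return bias-corrected targets; `t` is the 1-based epoch index."""
    global Z
    Z = alpha * Z + (1.0 - alpha) * z_epoch
    return Z / (1.0 - alpha ** t)  # startup-bias correction, as in Adam
```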
Algorithm 2 Temporal ensembling pseudocode. Note that the updates of Z and z̃ could equally well be done inside the minibatch loop; in this pseudocode they occur between epochs for clarity.

Require: x_i = training stimuli
Require: L = set of training input indices with known labels
Require: y_i = labels for labeled inputs i ∈ L
Require: α = ensembling momentum, 0 ≤ α < 1
Require: w(t) = unsupervised weight ramp-up function
Require: f_θ(x) = stochastic neural network with trainable parameters θ
Require: g(x) = stochastic input augmentation function
  Z ← 0_[N×C]                                      ▷ initialize ensemble predictions
  z̃ ← 0_[N×C]                                      ▷ initialize target vectors
  for t in [1, num_epochs] do
    for each minibatch B do
      z_{i∈B} ← f_θ(g(x_{i∈B}, t))                  ▷ evaluate network outputs for augmented inputs
      loss ← −(1/|B|) Σ_{i∈(B∩L)} log z_i[y_i]      ▷ supervised loss component
             + w(t) · (1/(C|B|)) Σ_{i∈B} ||z_i − z̃_i||²   ▷ unsupervised loss component
      update θ using, e.g., Adam                    ▷ update network parameters
    end for
    Z ← αZ + (1 − α)z                               ▷ accumulate ensemble predictions
    z̃ ← Z/(1 − α^t)                                 ▷ construct target vectors by bias correction
  end for
  return θ

The benefits of temporal ensembling compared to the Π-model are twofold. First, the training is faster because the network is evaluated only once per input on each epoch. Second, the training targets z̃ can be expected to be less noisy than with the Π-model. As shown in Section 3, we indeed obtain somewhat better results with temporal ensembling than with the Π-model in the same number of training epochs. The downside compared to the Π-model is the need to store auxiliary data across epochs, and the new hyperparameter α. While the matrix Z can be fairly large when the dataset contains a large number of items and categories, its elements are accessed relatively infrequently. Thus it can be stored, e.g., in a memory mapped file.

An intriguing additional possibility of temporal ensembling is collecting other statistics from the network predictions z_i besides the mean. For example, by tracking the second raw moment of the network outputs, we can estimate the variance of each output component z_i,j. This makes it possible to reason about the uncertainty of network outputs in a principled way (Gal & Ghahramani, 2016). Based on this information, we could, e.g., place more weight on more certain predictions vs. uncertain ones in the unsupervised loss term. However, we leave the exploration of these avenues as future work.
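As an illustration of that idea, a second accumulator suffices to track the raw second moment and recover a per-output variance estimate. This is a sketch of the suggested extension under the same assumptions as the earlier snippet, not part of the published method.

```python
import numpy as np

N, C, alpha = 50000, 10, 0.6
Z = np.zeros((N, C))  # running mean of network outputs
S = np.zeros((N, C))  # running second raw moment

def update_moments(z_epoch):
    """Accumulate one epoch of outputs and return a variance estimate
    per output component, via Var[z] = E[z^2] - E[z]^2."""
    global Z, S
    Z = alpha * Z + (1.0 - alpha) * z_epoch
    S = alpha * S + (1.0 - alpha) * z_epoch ** 2
    return np.maximum(S - Z ** 2, 0.0)  # clamp tiny negative values
```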
"}, {"section_index": "4", "section_name": "3 RESULTS", "section_text": "Our network structure is given in Table 5, and the test setup and all training parameters are detailed in Appendix A. We test the Π-model and temporal ensembling in two image classification tasks, CIFAR-10 and SVHN, and report the mean and standard deviation of 10 runs using different random seeds.

Although it is rarely stated explicitly, we believe that our comparison methods do not use input augmentation, i.e., are limited to dropout and other forms of permutation-invariant noise. Therefore we report the error rates without augmentation, unless explicitly stated otherwise. Given that the ability of an algorithm to extract benefit from augmentation is also an important property, we report the classification accuracy using a standard set of augmentations as well. In purely supervised training the de facto standard way of augmenting the CIFAR-10 dataset includes horizontal flips and random translations, while SVHN is limited to random translations. By using these same augmentations we can compare against the best fully supervised results as well. After all, the fully supervised results should indicate the upper bound of obtainable accuracy.

Table 1: CIFAR-10 results with 4000 labels, averages of 10 runs (4 runs for all labels).
                                             Error rate (%) with # labels
Model                                        4000            All (50000)
Supervised-only                              35.56 ± 1.59    7.33 ± 0.04
  with augmentation                          34.85 ± 1.65    6.05 ± 0.15
Conv-Large, Γ-model (Rasmus et al., 2015)    20.40 ± 0.47
CatGAN (Springenberg, 2016)                  19.58 ± 0.58
GAN of Salimans et al. (2016)                18.63 ± 2.32
Π-model                                      16.55 ± 0.29    6.90 ± 0.07
Π-model with augmentation                    12.36 ± 0.31    5.56 ± 0.10
Temporal ensembling with augmentation        12.16 ± 0.24    5.60 ± 0.10

Table 2: SVHN results for 500 and 1000 labels, averages of 10 runs (4 runs for all labels).
                                             Error rate (%) with # labels
Model                                        500             1000            All (73257)
Supervised-only                              35.18 ± 5.61    20.47 ± 2.64    3.05 ± 0.07
  with augmentation                          31.59 ± 3.60    19.30 ± 3.89    2.88 ± 0.03
DGN (Kingma et al., 2014)                                    36.02 ± 0.10
Virtual Adversarial (Miyato et al., 2016)                    24.63
ADGM (Maaløe et al., 2016)                                   22.86
SDGM (Maaløe et al., 2016)                                   16.61 ± 0.24
GAN of Salimans et al. (2016)                18.44 ± 4.8     8.11 ± 1.3
Π-model                                      7.05 ± 0.30     5.43 ± 0.25     2.78 ± 0.03
Π-model with augmentation                    6.65 ± 0.53     4.82 ± 0.17     2.54 ± 0.04
Temporal ensembling with augmentation        5.12 ± 0.13     4.42 ± 0.16     2.74 ± 0.06
"}, {"section_index": "5", "section_name": "3.1 CIFAR-10", "section_text": "CIFAR-10 is a dataset consisting of 32 × 32 pixel RGB images from ten classes. Table 1 shows a 2.1 percentage point reduction in classification error rate with 4000 labels (400 per class) compared to earlier methods for the non-augmented Π-model.

Enabling the standard set of augmentations further reduces the error rate by 4.2 percentage points to 12.36%. Temporal ensembling is slightly better still at 12.16%, while being twice as fast to train. This small improvement conceals the subtle fact that random horizontal flips need to be done independently for each epoch in temporal ensembling, while the Π-model can randomize once per pair of evaluations, which according to our measurements is ~0.5 percentage points better than independent flips.

A principled comparison with Sajjadi et al. (2016b) is difficult due to several reasons. They provide results only for a fairly extreme set of augmentations (translations, flipping, rotations, stretching, and shearing) on top of fractional max pooling (Graham, 2014), which introduces random, local stretching inside the network, and is known to improve classification results substantially. They quote an error rate of only 13.60% for supervised-only training with 4000 labels, while our corresponding baseline is 34.85%. This gap indicates a huge benefit from versatile augmentations and fractional max pooling; in fact, their baseline result is already better than any previous semi-supervised results. By enabling semi-supervised learning they achieve a 17% drop in classification error rate (from 13.60% to 11.29%), while we see a much larger relative drop of 65% (from 34.85% to 12.16%).

The street view house numbers (SVHN) dataset consists of 32 × 32 pixel RGB images of real-world house numbers, and the task is to classify the centermost digit.
In SVHN we chose to use only the official 73257 training examples following Salimans et al. (2016). Even with this choice our error rate with all labels is only 3.05% without augmentation.

Table 2 compares our method to the previous state-of-the-art. With the most commonly used 1000 labels we observe an improvement of 2.7 percentage points, from 8.11% to 5.43% without augmentation, and further to 4.42% with standard augmentations.

We also investigated the behavior with 500 labels, where we obtained an error rate less than half of Salimans et al. (2016) without augmentations, with a significantly lower standard deviation as well. When augmentations were enabled, temporal ensembling further reduced the error rate to 5.12%. In this test the difference between the Π-model and temporal ensembling was quite significant at 1.5 percentage points.

In SVHN Sajjadi et al. (2016b) provide results without augmentation, with the caveat that they use fractional max pooling, which is a very augmentation-like technique due to the random, local stretching it introduces inside the network. It leads to a superb error rate of 2.28% in supervised-only training, while our corresponding baseline is 3.05% (or 2.88% with translations). Given that in a separate experiment our network matched the best published result for non-augmented SVHN when extra data is used (1.69% from Lee et al. (2015)), this gap is quite surprising, and leads us to conclude that fractional max pooling leads to a powerful augmentation of the dataset, well beyond what simple translations can achieve. Our temporal ensembling technique obtains better error rates for both 500 and 1000 labels (5.12% and 4.42%, respectively) compared to the 6.03% reported by Sajjadi et al. for 732 labels.

Table 3: CIFAR-100 results with 10000 labels, averages of 10 runs (4 runs for all labels).
                                             Error rate (%) with # labels
Model                                        10000           All (50000)
Supervised-only                              51.21 ± 0.33    29.14 ± 0.25
  with augmentation                          44.56 ± 0.30    26.42 ± 0.17
Π-model                                      43.43 ± 0.54    29.06 ± 0.21
Π-model with augmentation                    39.19 ± 0.36    26.32 ± 0.04
Temporal ensembling with augmentation        38.65 ± 0.51    26.30 ± 0.15

Table 4: CIFAR-100 + Tiny Images results, averages of 10 runs.
                                             Error rate (%) with # unlabeled auxiliary inputs from Tiny Images
Model                                        Random 500k     Restricted 237k
Π-model with augmentation                    25.79 ± 0.17    25.43 ± 0.32
Temporal ensembling with augmentation        23.62 ± 0.23    23.79 ± 0.24
"}, {"section_index": "6", "section_name": "3.3 CIFAR-100 AND TINY IMAGES", "section_text": "The CIFAR-100 dataset consists of 32 × 32 pixel RGB images from a hundred classes. We are not aware of previous semi-supervised results in this dataset, and chose 10000 labels for our experiments. Table 3 shows error rates of 43.43% and 38.65% without and with augmentation, respectively. These correspond to 7.8 and 5.9 percentage point improvements compared to supervised learning with labeled inputs only.

We ran two additional tests using unlabeled extra data from the Tiny Images dataset (Torralba et al., 2008): one with randomly selected 500k extra images, most not corresponding to any of the CIFAR-100 categories, and another with a restricted set of 237k images from the categories that correspond to those found in the CIFAR-100 dataset (see Appendix A for details). The results are shown in Table 4. The addition of randomly selected, unlabeled extra images improved the error rate by 2.7 percentage points (from 26.30% to 23.63%), indicating a desirable ability to learn from random natural images. Temporal ensembling benefited much more from the extra data than the Π-model. Interestingly, restricting the extra data to categories that are present in CIFAR-100 did not improve the classification accuracy further.
This indicates that in order to train a better classifier by adding extra data as unlabeled inputs, it is enough to have the extra data roughly in the same space as the actual inputs; in our case, natural images. We hypothesize that it may even be possible to use properly crafted synthetic data as unlabeled inputs to obtain improved classifiers.

In order to keep the training times tolerable, we limited the number of unlabeled inputs to 50k per epoch in these tests, i.e., on every epoch we trained using all 50k labeled inputs from CIFAR-100 and 50k additional unlabeled inputs from Tiny Images. The 50k unlabeled inputs were chosen randomly on each epoch from the 500k or 237k extra inputs. In temporal ensembling, after each epoch we updated only the rows of Z that corresponded to inputs used on that epoch.

When all labels are used for traditional supervised training, our network approximately matches the state-of-the-art error rate for a single model in CIFAR-10 with augmentation (Lee et al., 2015; Mishkin & Matas, 2016) at 6.05%, and without augmentation (Salimans & Kingma, 2016) at 7.33%. The same is probably true for SVHN as well, but there the best published results rely on extra data that we chose not to use.
"}, {"section_index": "7", "section_name": "3.5 TOLERANCE TO INCORRECT LABELS", "section_text": "In a further test we studied the hypothesis that our methods add tolerance to incorrect labels by assigning a random label to a certain percentage of the training set before starting to train. Figure 2 shows the classification error graphs for standard supervised training and temporal ensembling.

Clearly our methods provide considerable resistance to wrong labels, and we believe this is because the unsupervised loss term encourages the mapping function implemented by the network to be flat in the vicinity of all input data points, whereas the supervised loss term enforces the mapping function to have a specific value in the vicinity of the labeled input data points. This means that even the wrongly labeled inputs play a role in shaping the mapping function; the unsupervised loss term smooths the mapping function and thus also the decision boundaries, effectively fusing the inputs into coherent clusters, whereas the excess of correct labels in each class is sufficient for locking the clusters to the right output vectors through the supervised loss term. The difference to classical regularizers is that we induce smoothness only on the manifold of likely inputs instead of over the entire input domain. For further analysis about the importance of the gradient of the mapping function, see Simard et al. (1998).

Figure 2: Percentage of correct SVHN classifications as a function of training epoch when a part of the labels is randomized.
With standard supervised training (left) the classification accuracy suffers when even a small portion of the labels give disinformation, and the situation worsens quickly as the portion of randomized labels increases to 50% or more. On the other hand, temporal ensembling (right) shows almost perfect resistance to disinformation when half of the labels are random, and retains over ninety percent classification accuracy even when 80% of the labels are random.

Given this premise, it is perhaps somewhat surprising that our methods reduce the error rate also when all labels are used (Tables 1 and 2). We believe that this is an indication that the consistency requirement adds a degree of resistance to ambiguous labels that are fairly common in many classification tasks, and that it encourages features to be more invariant to stochastic sampling.
"}, {"section_index": "8", "section_name": "4 RELATED WORK", "section_text": "There is a large body of previous work on semi-supervised learning (Zhu, 2005). Here we will concentrate on the ones that are most directly connected to our work.

The Γ-model is a subset of a ladder network (Rasmus et al., 2015) that introduces lateral connections into an encoder-decoder type network architecture, targeted at semi-supervised learning. In the Γ-model, all but the highest lateral connections in the ladder network are removed, and after pruning the unnecessary stages, the remaining network consists of two parallel, identical branches. One of the branches takes the original training inputs, whereas the other branch is given the same input corrupted with noise. The unsupervised loss term is computed as the squared difference between the (pre-activation) output of the clean branch and a denoised (pre-activation) output of the corrupted branch. The denoised estimate is computed from the output of the corrupted branch using a parametric nonlinearity that has 10 auxiliary trainable parameters per unit. Our Π-model differs from the Γ-model in removing the parametric nonlinearity and denoising, having two corrupted paths, and comparing the outputs of the network instead of pre-activation data of the final layer.

Sajjadi et al. (2016b) recently introduced a new loss function for semi-supervised learning, the so-called transform/stability loss, which is founded on the same principle as our work. During training, they run augmentation and network evaluation n times for each minibatch, and then compute an unsupervised loss term as the sum of all pairwise squared distances between the obtained n network outputs. As such, their technique follows the general pseudo-ensemble agreement (PEA) regularization framework of Bachman et al. (2014). In addition, they employ a mutual exclusivity loss term (Sajjadi et al., 2016a) that we do not use. Our Π-model can be seen as a special case of the transform/stability loss obtained by setting n = 2. The computational cost of training with transform/stability loss increases linearly as a function of n, whereas the efficiency of our temporal ensembling technique remains constant regardless of how large an effective ensemble we obtain via the averaging of previous epochs' predictions.

In bootstrap aggregating, or bagging, multiple networks are trained independently based on subsets of training data (Breiman, 1996). This results in an ensemble that is more stable and accurate than the individual networks.
Our approach can be seen as pulling the predictions from an implicit ensemble that is based on a single network, and the variability is a result of evaluating it under different dropout and augmentation conditions instead of training on different subsets of data. In work parallel to ours, Huang et al. (2017) store multiple snapshots of the network during training, hopefully corresponding to different local minima, and use them as an explicit ensemble.

The general technique of inferring new labels from partially labeled data is often referred to as bootstrapping or self-training, and it was first proposed by Yarowsky (1995) in the context of linguistic analysis. Whitney & Sarkar (2012) analyze Yarowsky's algorithm and propose a novel graph-based label propagation approach. Similarly, label propagation methods (Zhu & Ghahramani, 2002) infer labels for unlabeled training data by comparing the associated inputs to labeled training inputs using a suitable distance metric. Our approach differs from this in two important ways. Firstly, we never compare training inputs against each other, but instead only rely on the unknown labels remaining constant, and secondly, we let the network produce the likely classifications for the unlabeled inputs instead of providing them through an outside process.

In addition to partially labeled data, a considerable amount of effort has been put into dealing with densely but inaccurately labeled data. This can be seen as a semi-supervised learning task where part of the training process is to identify the labels that are not to be trusted. For recent work in this area, see, e.g., Sukhbaatar et al. (2014) and Patrini et al. (2016). In this context of noisy labels, Reed et al. (2014) presented a simple bootstrapping method that trains a classifier with the target composed of a convex combination of the previous epoch output and the known but potentially noisy labels. Our temporal ensembling differs from this by taking into account the evaluations over multiple epochs.

Generative Adversarial Networks (GAN) have been recently used for semi-supervised learning with promising results (Maaløe et al., 2016; Springenberg, 2016; Odena, 2016; Salimans et al., 2016). It could be an interesting avenue for future work to incorporate a generative component to our solution. We also envision that our methods could be applied to regression-type learning tasks.

We thank the anonymous reviewers, Tero Karras, Pekka Jänis, Tim Salimans, Ian Goodfellow, as well as Harri Valpola and his colleagues at Curious AI for valuable suggestions that helped to improve this article.
"}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems 27 (NIPS), 2014.

Leo Breiman. Bagging predictors. Machine Learning, 24(2), 1996.

Sander Dieleman, Jan Schlüter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, et al. Lasagne: First release, 2015.

Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. CoRR, abs/1506.02142, 2016.

Benjamin Graham. Fractional max-pooling. CoRR, abs/1412.6071, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. CoRR, abs/1502.01852, 2015.

Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks with stochastic depth.
CoRR, abs/1603.09382, 2016.

Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. CoRR, abs/1602.05473, 2016.

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. In Proc. International Conference on Learning Representations (ICLR), 2016.

Augustus Odena. Semi-supervised learning with generative adversarial networks. Data Efficient Machine Learning workshop at ICML 2016, 2016.

Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, and Lizhen Qu. Making neural networks robust to label noise: a loss correction approach. CoRR, abs/1609.03683, 2016.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems 28 (NIPS), 2015.

Scott E. Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. CoRR, abs/1412.6596, 2014.

Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. CoRR, abs/1602.07868, 2016.

Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. CoRR, abs/1606.03498, 2016.

Saurabh Singh, Derek Hoiem, and David A. Forsyth. Swapout: Learning an ensemble of deep architectures. CoRR, abs/1605.06465, 2016.

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. Striving for simplicity: The all convolutional net. CoRR, abs/1412.6806, 2014.

Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. CoRR, abs/1406.2080, 2014.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. CoRR, abs/1605.02688, May 2016.

Xiaojin Zhu. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison, 2005.

Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002.

Table 5 details the network architecture used in all of our tests. It is heavily inspired by ConvPool-CNN-C (Springenberg et al., 2014) and the improvements made by Salimans & Kingma (2016).

Table 5: The network architecture used in all of our tests.
NAME     DESCRIPTION
input    32 × 32 RGB image
noise    Additive Gaussian noise σ = 0.15
conv1a   128 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
conv1b   128 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
conv1c   128 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
pool1    Maxpool 2 × 2 pixels
drop1    Dropout, p = 0.5
conv2a   256 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
conv2b   256 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
conv2c   256 filters, 3 × 3, pad = 'same', LReLU (α = 0.1)
pool2    Maxpool 2 × 2 pixels
drop2    Dropout, p = 0.5
conv3a   512 filters, 3 × 3, pad = 'valid', LReLU (α = 0.1)
conv3b   256 filters, 1 × 1, LReLU (α = 0.1)
conv3c   128 filters, 1 × 1, LReLU (α = 0.1)
pool3    Global average pool (6 × 6 -> 1 × 1 pixels)
dense    Fully connected 128 -> 10
output   Softmax
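For readers who prefer code over tables, here is a minimal PyTorch sketch of the Table 5 network. This is an illustrative re-expression, not the paper's implementation: the original uses Theano/Lasagne, and weight normalization and mean-only batch normalization are omitted here for brevity.

```python
import torch
import torch.nn as nn

class Table5Net(nn.Module):
    def __init__(self, num_classes=10, noise_std=0.15):
        super().__init__()
        self.noise_std = noise_std
        lrelu = nn.LeakyReLU(0.1)
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 128, 3, padding=1), lrelu,
            nn.Conv2d(128, 128, 3, padding=1), lrelu,
            nn.Conv2d(128, 128, 3, padding=1), lrelu,
            nn.MaxPool2d(2), nn.Dropout(0.5))          # 32x32 -> 16x16
        self.block2 = nn.Sequential(
            nn.Conv2d(128, 256, 3, padding=1), lrelu,
            nn.Conv2d(256, 256, 3, padding=1), lrelu,
            nn.Conv2d(256, 256, 3, padding=1), lrelu,
            nn.MaxPool2d(2), nn.Dropout(0.5))          # 16x16 -> 8x8
        self.block3 = nn.Sequential(
            nn.Conv2d(256, 512, 3, padding=0), lrelu,  # 'valid' pad: 8x8 -> 6x6
            nn.Conv2d(512, 256, 1), lrelu,
            nn.Conv2d(256, 128, 1), lrelu,
            nn.AdaptiveAvgPool2d(1))                   # global average pool
        self.dense = nn.Linear(128, num_classes)

    def forward(self, x):
        if self.training:
            # Additive Gaussian input noise, applied only during training.
            x = x + self.noise_std * torch.randn_like(x)
        h = self.block3(self.block2(self.block1(x)))
        return self.dense(h.flatten(1))  # logits; softmax applied in the loss
```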
All data layers were initialized following He et al. (2015), and we applied weight normalization and mean-only batch normalization (Salimans & Kingma, 2016) with momentum 0.999 to all of them. We used leaky ReLU (Maas et al., 2013) with α = 0.1 as the non-linearity, and chose to use max pooling instead of strided convolutions because it gave consistently better results in our experiments.

All networks were trained using Adam (Kingma & Ba, 2014) with a maximum learning rate of λ_max = 0.003, except for temporal ensembling in the SVHN case where a maximum learning rate of λ_max = 0.001 worked better. Adam momentum parameters were set to β₁ = 0.9 and β₂ = 0.999 as suggested in the paper. The maximum value for the unsupervised loss component was set to w_max · M/N, where M is the number of labeled inputs and N is the total number of training inputs. For Π-model runs, we used w_max = 100 in all runs except for CIFAR-100 with Tiny Images where we set w_max = 300. For temporal ensembling we used w_max = 30 in most runs. For the corrupted label test in Section 3.5 we used w_max = 300 for 0% and 20% corruption, and w_max = 3000 for corruption of 50% and higher. For basic CIFAR-100 runs we used w_max = 100, and for CIFAR-100 with Tiny Images we used w_max = 1000. The accumulation decay constant of temporal ensembling was set to α = 0.6 in all runs.

In all runs we ramped up both the learning rate λ and the unsupervised loss component weight w during the first 80 epochs using a Gaussian ramp-up curve exp[−5(1 − T)²], where T advances linearly from zero to one during the ramp-up period. In addition to ramp-up, we annealed the learning rate λ to zero and Adam β₁ to 0.5 during the last 50 epochs, but otherwise we did not decay them during training. The ramp-down curve was similar to the ramp-up curve but time-reversed and with a scaling constant of 12.5 instead of 5. All networks were trained for 300 epochs with a minibatch size of 100.

CIFAR-10 Following previous work in fully supervised learning, we pre-processed the images using ZCA and augmented the dataset using horizontal flips and random translations. The translations were drawn from [−2, 2] pixels, and were independently applied to both branches in the Π-model.

SVHN We pre-processed the input images by biasing and scaling each input image to zero mean and unit variance. We used only the 73257 items in the official training set, i.e., did not use the provided 531131 extra items. The training setups were otherwise similar to CIFAR-10 except that horizontal flips were not used.

Model convergence As discussed in Section 2.1, a slow ramp-up of the unsupervised cost is very important for getting the models to converge. Furthermore, in our very preliminary tests with 250 labels in SVHN we noticed that optimization tended to explode during the ramp-up period, and we eventually found that using a lower value for the Adam β₂ parameter (e.g., 0.99 instead of 0.999) seems to help in this regard.

We do not attempt to guarantee that the occurrence of labeled inputs during training would be somehow stratified; with bad luck there might be several consecutive minibatches without any labeled inputs when the label density is very low. Some previous work has identified this as a weakness, and has solved the issue by shuffling the input sequences in such a way that stratification is guaranteed, e.g., Rasmus et al. (2015) (confirmed from the authors). This kind of stratification might further improve the convergence of our methods as well.
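For concreteness, the ramp-up and ramp-down schedules described in this appendix can be written as a few lines of Python; the function names are ours.

```python
import math

def rampup(epoch, rampup_length=80):
    """Gaussian ramp-up exp[-5(1-T)^2]; returns a factor in [0, 1]."""
    if epoch >= rampup_length:
        return 1.0
    T = epoch / rampup_length
    return math.exp(-5.0 * (1.0 - T) ** 2)

def rampdown(epoch, num_epochs=300, rampdown_length=50):
    """Time-reversed ramp with scaling constant 12.5, used to anneal the
    learning rate (and Adam beta1) during the last 50 epochs."""
    if epoch < num_epochs - rampdown_length:
        return 1.0
    T = (epoch - (num_epochs - rampdown_length)) / rampdown_length
    return math.exp(-12.5 * T ** 2)

# Example: per-epoch learning rate and unsupervised weight.
lr_max, w_max = 0.003, 30.0
for epoch in range(300):
    lr = lr_max * rampup(epoch) * rampdown(epoch)
    w = w_max * rampup(epoch)
```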
Tiny Images, extra data from restricted categories The restricted extra data in Section 3.3 was extracted from Tiny Images by picking all images with labels corresponding to the 100 categories used in CIFAR-100. As the Tiny Images dataset does not contain the CIFAR-100 categories aquarium_fish and maple_tree, we used images with labels fish and maple instead. The result was a total of 237203 images that were used as unlabeled extra data. Table 6 shows the composition of this extra data set.

It is worth noting that the CIFAR-100 dataset itself is a subset of Tiny Images, and we did not explicitly prevent overlap between this extra set and CIFAR-100. This led to approximately a third of the CIFAR-100 training and test images being present as unlabeled inputs in the extra set. The other test with 500k extra entries picked randomly out of all 79 million images had a negligible overlap with CIFAR-100.

Implementation Our implementation is written in Python using Theano (Theano Development Team, 2016) and Lasagne (Dieleman et al., 2015), and is available at https://github.com/smlaine2/tempens.

Table 6: The Tiny Images (Torralba et al., 2008) labels and image counts used in the CIFAR-100 plus restricted extra data tests (rightmost column of Table 4). Note that the extra input images were supplied as unlabeled data for our networks, and the labels were used only for narrowing down the full set of 79 million images.

apple 2242, baby 2771, bear 2242, beaver 2116, bed 2767, bee 2193, beetle 2173, bicycle 2599, bottle 2212, bowl 2707, boy 2234, bridge 2274, bus 3068, butterfly 3036, camel 2121, can 2461, castle 3094, caterpillar 2382, cattle 2089, chair 2552, chimpanzee 1706, clock 2375, cloud 2390, cockroach 2318, couch 2171, crab 2735, crocodile 2712, cup 2287, dinosaur 2045, dolphin 2504, elephant 2794, fish* 3082, flatfish 1504, forest 2244, fox 2684, girl 2204, hamster 2294, house 2320, kangaroo 2563, keyboard 1948, lamp 2242, lawn_mower 1929, leopard 2139, lion 3045, lizard 2130, lobster 2136, man 2248, maple* 2149, motorcycle 2168, mountain 2249, mouse 2128, mushroom 2390, oak_tree 1995, orange 2650, orchid 1902, otter 2073, palm_tree 2107, pear 2120, pickup_truck 2478, pine_tree 2341, plain 2198, plate 3109, poppy 2730, porcupine 1900, possum 2008, rabbit 2408, raccoon 2587, ray 2564, road 2862, rocket 2180, rose 2237, sea 2122, seal 2159, shark 2157, shrew 1826, skunk 2450, skyscraper 2298, snail 2369, snake 2989, spider 3024, squirrel 2374, streetcar 1905, sunflower 2761, sweet_pepper 1983, table 3137, tank 1897, telephone 1889, television 2973, tiger 2603, tractor 1848, train 3020, trout 2726, tulip 2160, turtle 2438, wardrobe 2029, whale 2597, willow_tree 2040, wolf 2423, woman 2446, worm 2945."}]
BJuysoFeg | [{"section_index": "0", "section_name": "REVISITING BATCH NORMALIZATION FOR PRACTICAL DOMAIN ADAPTATION", "section_text": "Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, Xiaodi Hou
yttonhao@pku.edu.cn winsty@gmail.com shijianping5000@gmail.com liujiaying@pku.edu.cn xiaodi.hou@gmail.com
"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. A recent study (Tommasi et al., 2015) shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN. By modulating the statistics from the source domain to the target domain in all Batch Normalization layers across the network, our approach achieves a deep adaptation effect for domain adaptation tasks. In contrast to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It achieves state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Training a DNN for a new image recognition task is expensive. It requires a large amount of labeled training images that are not easy to obtain. One common practice is to use labeled data from other related sources, such as a different public dataset, or harvesting images by keywords from a search engine. Because 1) the distributions of the source domains (third party datasets or Internet images) are often different from the target domain (testing images); and 2) DNN is particularly good at capturing dataset bias in its internal representation (Torralba & Efros, 2011), which eventually leads to overfitting, imperfectly paired training and testing sets usually lead to inferior performance.

Known as domain adaptation, the effort to bridge the gap between training and testing data distributions has been discussed several times under the context of deep learning (Tzeng et al., 2014; Long et al., 2015; Tzeng et al., 2015; Ganin & Lempitsky, 2015).
To make the connection between the domain of training and the domain of testing, most of these methods require additional optimization steps and extra parameters. Such additional computational burden could greatly complicate the training of a DNN which is already intimidating enough for most people.

In this paper, we propose a simple yet effective approach called AdaBN for batch normalized DNN domain adaptation. We hypothesize that the label related knowledge is stored in the weight matrix of each layer, whereas domain related knowledge is represented by the statistics of the Batch Normalization (BN) (Ioffe & Szegedy, 2015) layer. Therefore, we can easily transfer the trained model to a new domain by modulating the statistics in the BN layer. This approach is straightforward to implement, has zero parameters to tune, and requires minimal computational resources. Moreover, our AdaBN is ready to be extended to more sophisticated scenarios such as multi-source domain adaptation and semi-supervised settings. Fig. 1 illustrates the flowchart of AdaBN. To summarize, our contributions are as follows:

1. We propose a novel domain adaptation technique called Adaptive Batch Normalization (AdaBN). We show that AdaBN can naturally dissociate bias and variance of a dataset, which is ideal for domain adaptation tasks.
2. We validate the effectiveness of our approach on standard benchmarks for both single source and multi-source domain adaptation. Our method outperforms the state-of-the-art methods.
3. We conduct experiments on cloud detection for remote sensing images to further demonstrate the effectiveness of our approach in practical use.

[Figure 1 diagram: domain-specific batch normalization in the training and testing pipelines; only the caption below is recoverable from the garbled plot text.]

Figure 1: Illustration of the proposed method. For each convolutional or fully connected layer, we use different bias/variance terms to perform batch normalization for the training domain and the test domain. The domain specific normalization mitigates the domain shift issue.
"}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Domain transfer in visual recognition tasks has gained increasing attention in recent literature (Beijbom, 2012; Patel et al., 2015). Often referred to as covariate shift (Shimodaira, 2000) or dataset bias (Torralba & Efros, 2011), this problem poses a great challenge to the generalization ability of a learned model. One key component of domain transfer is to model the difference between source and target distributions. In Khosla et al. (2012), the authors assign each dataset an explicit bias vector, and train one discriminative model to handle multiple classification problems with different bias terms. A more explicit way to compute dataset difference is based on Maximum Mean Discrepancy (MMD) (Gretton et al., 2012). This approach projects each data sample into a Reproducing Kernel Hilbert Space, and then computes the difference of sample means. To reduce dataset discrepancies, many methods are proposed, including sample selections (Huang et al., 2006; Gong et al., 2013), explicit projection learning (Pan et al., 2011; Gopalan et al., 2011; Baktashmotlagh et al., 2013) and principal axes alignment (Fernando et al., 2013; Gong et al., 2012; Aljundi et al., 2015).

All of these methods face the same challenge of constructing the domain transfer function, a high-dimensional non-linear function. Due to computational constraints, most of the proposed transfer functions are in the category of simple shallow projections, which are typically composed of kernel transformations and linear mapping functions.

In the field of deep learning, feature transferability across different domains is a tantalizing yet generally unsolved topic (Yosinski et al., 2014; Tommasi et al., 2015). To transfer the learned representations to a new dataset, pre-training plus fine-tuning (Donahue et al., 2014) have become de facto procedures. However, adaptation by fine-tuning is far from perfect. It requires a considerable amount of labeled data from the target domain, and non-negligible computational resources to retrain the whole network.

A series of advances has been made in DNN to facilitate domain transfer.
Early works of domair adaptation either focus on reordering fine-tuning samples (Chopra et al., 2013), or regularizing MMD (Gretton et al., 2012) in a shallow network (Ghifary et al., 2014). It is only until recently that the problem is directly attacked under the setting of classification of unlabeled target domair using modern convolutional neural network (CNN) architecture. DDC (Tzeng et al., 2014) used the classical MMD loss to regularize the representation in the last layer of CNN. DAN (Long et al. 2015) further extended the method to multiple kernel MMD and multiple layer adaptation. Besides adapting features using MMD, RTN (Long et al., 2016) also added a gated residual layer for classi fier adaptation. RevGrad (Ganin & Lempitsky, 2015) devised a gradient reversal layer to compensat the back-propagated gradients that are domain specific. Recently, by explicitly modeling both pri vate and shared components of the domain representations in the network, Bousmalis et al. (2016 proposed a Domain Separation Network to extract better domain-invariant features.\nAnother related work is CORAL (Sun et al., 2016). This model focuses on the last layer of CNN. CORAL whitens the data in source domain, and then re-correlates the source domain features tc. target domain. This operation aligns the second order statistics of source domain and target domain. distributions. Surprisingly, such simple approach yields state-of-the-arts results in various text clas-. sification and visual recognition tasks. Recently, Deep CORAL (Sun & Saenko, 2016) also extends the method into DNN by incorporating a CORAL loss.."}, {"section_index": "4", "section_name": "2.1 BATCH NORMALIZATION", "section_text": "In this section, we briefly review Batch Normalization (BN) (Ioffe & Szegedy, 2015) which is. closely related to our AdaBN. The BN layer is originally designed to alleviate the issue of internal covariate shifting - a common problem while training a very deep neural network. It first standard-. izes each feature in a mini-batch, and then learns a common slope and bias for each mini-batch. Formally, given the input to a BN layer X E Rnp, where n denotes the batch size, and p is the. feature dimension, BN layer transforms a feature j E {1... p} into:.\nYj =YjXj+ Rj\nwhere x; and y; are the input/output scalars of one neuron response in one data sample; X.; denotes. the jth column of the input data; and y; and , are parameters to be learned. This transformation. guarantees that the input distribution of each layer remains unchanged across different mini-batches. For Stochastic Gradient Descent (SGD) optimization, a stable input distribution could greatly facil itate model convergence, leading to much faster training speed for CNN. Moreover, if training data. are shuffled at each epoch, the same training sample will be applied with different transformations, or in other words, more comprehensively augmented throughout the training. During the testing. phase, the global statistics of all training samples is used to normalize every mini-batch of test data.\nExtensive experiments have shown that Batch Normalization significantly reduces the number o iteration to converge, and improves the final performance at the same time. 
Extensive experiments have shown that Batch Normalization significantly reduces the number of iterations needed to converge, and improves the final performance at the same time. The BN layer has become a standard component in recent top-performing CNN architectures, such as deep residual networks (He et al., 2016) and Inception V3 (Szegedy et al., 2015).

In Sec. 3.1, we first analyze the domain shift in deep neural networks, and reveal two key observations. Then in Sec. 3.2, we introduce our Adaptive Batch Normalization (AdaBN) method based on these observations.
"}, {"section_index": "5", "section_name": "3.1 A PILOT EXPERIMENT", "section_text": "The Batch Normalization (BN) technique is originally proposed to help SGD optimization by aligning the distribution of training data. From this perspective, it is interesting to examine the BN parameters (batch-wise mean and variance) over different datasets at different layers of the network.

In this pilot experiment, we use the MXNet implementation (Chen et al., 2016b) of the Inception-BN model (Ioffe & Szegedy, 2015) pre-trained on the ImageNet classification task (Russakovsky et al., 2015) as our baseline DNN model. Our image data are drawn from (Bergamo & Torresani, 2010), which contains the same classes of images from both the Caltech-256 dataset (Griffin et al., 2007) and Bing image search results. For each mini-batch sampled from one dataset, we concatenate the mean and variance of all neurons from one layer to form a feature vector. Using a linear SVM, we can almost perfectly classify whether the mini-batch feature vector is from the Caltech-256 or Bing dataset. Fig. 2 visualizes the distributions of mini-batch feature vectors from the two datasets in 2D. It is clear that BN statistics from different domains are separated into clusters.

Figure 2: t-SNE (Van der Maaten & Hinton, 2008) visualization of the mini-batch BN feature vector distributions in both shallow and deep layers, across different datasets. Each point represents the BN statistics in one mini-batch. Red dots come from the Bing domain, while the blue ones are from the Caltech-256 domain. The size of each mini-batch is 64. (a) Shallow layer distributions. (b) Deep layer distributions.

1. Both shallow layers and deep layers of the DNN are influenced by domain shift. Domain adaptation by manipulating the output layer alone is not enough.
2. The statistics of the BN layer contain the traits of the data domain.

Both observations motivate us to adapt the representation across different domains by the BN layer.

Given the pre-trained DNN model and a target domain, our Adaptive Batch Normalization algorithm is as follows¹:

Algorithm 1 Adaptive Batch Normalization (AdaBN)
  for each neuron j in the DNN do
    Concatenate the neuron responses on all images of the target domain t into a vector x_j^t
    Compute the mean and variance of the target domain: μ_j^t = E(x_j^t), σ_j^t = sqrt(Var(x_j^t))
  end for
  for each neuron j in the DNN, each testing sample m in the target domain do
    Compute the BN output y_j(m) := γ_j (x_j(m) − μ_j^t)/σ_j^t + β_j
  end for

¹ In practice we adopt an online algorithm (Donald, 1999) to efficiently estimate the mean and variance.

The intuition behind our method is straightforward: the standardization of each layer by domain ensures that each layer receives data from a similar distribution, no matter whether it comes from the source domain or the target domain. Although modulating statistics in one BN layer by AdaBN is a simple translation and scaling operation, such a linear transformation in one layer can achieve a highly non-linear transformation through the whole deep CNN architecture. Thus, we believe this AdaBN process could approximate the intrinsically non-linear domain transfer function.

For K domain adaptation where K > 2, we standardize each sample by the statistics in its own domain. During training, the statistics are calculated for every mini-batch; the only thing that we need to make sure is that the samples in every mini-batch are from the same domain. For (semi-)supervised domain adaptation, we may use the labeled data to fine-tune the weights as well. As a result, our method could fit in all different settings of domain adaptation with minimal effort.
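Concretely, the whole procedure amounts to re-estimating the BN statistics on the unlabeled target domain while keeping all learned weights fixed. A minimal sketch in PyTorch (our naming, not the paper's MXNet code) could look like:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn_statistics(model, target_loader):
    """AdaBN-style adaptation: recompute every BN layer's running
    mean/variance on target-domain data; weights, gammas and betas
    are left untouched."""
    model.eval()  # keep dropout etc. in inference mode
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.reset_running_stats()  # discard source-domain statistics
            m.momentum = None        # cumulative (equal-weight) averaging
            m.train()                # let forward passes update the stats
    for x, *_ in target_loader:      # unlabeled target images
        model(x)                     # forward only; no parameter updates
    model.eval()                     # inference now uses target statistics
```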
Compared with CORAL (Sun et al., 2016), one natural question is why we transform the neuron responses independently, rather than decorrelating and then re-correlating the responses together as suggested in Sun et al. (2016). Under certain conditions, decorrelation could improve the performance. However, in CNN, the mini-batch size is usually smaller than the feature dimension, leading to covariance matrices that are always singular and hard to invert. In addition, decorrelation requires computing the inverse of the covariance matrix, which is computationally intensive, especially if we plan to apply AdaBN to all layers of the network.
"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we demonstrate the effectiveness of AdaBN on standard domain adaptation datasets, and empirically analyze our AdaBN model. We also evaluate our method on a practical application with remote sensing images.
"}, {"section_index": "7", "section_name": "4.1 EXPERIMENTAL SETTINGS", "section_text": "We first introduce our experiments on two standard datasets: Office (Saenko et al., 2010) and Caltech-Bing (Bergamo & Torresani, 2010).

Office (Saenko et al., 2010) is a standard benchmark for domain adaptation, which is a collection of 4652 images in 31 classes from three different domains: Amazon (A), DSLR (D) and Webcam (W). Similar to (Tzeng et al., 2014; Sun et al., 2016; Long et al., 2015), we evaluate the pairwise domain adaptation performance of AdaBN on all six pairs of domains. For the multi-source setting, we evaluate our method on three transfer tasks {A, W} → D, {A, D} → W, {D, W} → A.

Caltech-Bing (Bergamo & Torresani, 2010) is a much larger domain adaptation dataset, which contains 30,607 and 121,730 images in 256 categories from two domains, Caltech-256 (C) and Bing (B). The images in the Bing set are collected from the Bing image search engine by keyword search. Apparently Bing data contains noise, and its data distribution is dramatically different from that of Caltech-256.

We compare our approach with a variety of methods, including four shallow methods: SA (Fernando et al., 2013), LSSA (Aljundi et al., 2015), GFK (Gong et al., 2012), CORAL (Sun et al., 2016), and four deep methods: DDC (Tzeng et al., 2014), DAN (Long et al., 2015), RevGrad (Ganin & Lempitsky, 2015), Deep CORAL (Sun & Saenko, 2016). Specifically, GFK models domain shift by integrating an infinite number of subspaces that characterize changes in statistical properties from the source to the target domain. SA, LSSA and CORAL align the source and target subspaces by explicit feature space transformations that would map the source distribution into the target one. DDC and DAN are deep learning based methods which maximize domain invariance by adding to AlexNet one or several adaptation layers using MMD. RevGrad incorporates a gradient reversal layer in the deep model to encourage learning domain-invariant features. Deep CORAL extends CORAL to perform end-to-end adaptation in DNN. It should be noted that these deep learning methods have the adaptation layers on top of the output layers of DNNs, which is a sharp contrast to our method that delves into early convolution layers as well with the help of BN layers.

We follow the full protocol (Donahue et al., 2014) for the single source setting; while for the multiple sources setting, we use all the samples in the source domains as training data, and use all the samples in the target domain as testing data. We fine-tune the Inception-BN (Ioffe & Szegedy, 2015) model on the source domain in each task for 100 epochs. The learning rate is set to 0.01 initially, and is then dropped by a factor of 0.1 every 40 epochs. Since the Office dataset is quite small, following the best practice in Long et al. (2015), we freeze the first three groups of Inception modules, and set the learning rate of the fourth and fifth groups to one tenth of the base learning rate to avoid overfitting. For the Caltech-Bing dataset, we fine-tune the whole model with the same base learning rate.
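A sketch of this fine-tuning protocol in PyTorch-style Python; the module names (`early_blocks`, `mid_blocks`, `classifier`) are hypothetical placeholders for the corresponding Inception groups, not real attributes of any published model.

```python
import torch

def make_finetune_optimizer(model, base_lr=0.01):
    """Hypothetical setup mirroring the paper's Office protocol."""
    for p in model.early_blocks.parameters():
        p.requires_grad = False  # freeze the first three Inception groups
    params = [
        # Fourth/fifth groups train at one tenth of the base learning rate.
        {"params": model.mid_blocks.parameters(), "lr": base_lr * 0.1},
        {"params": model.classifier.parameters(), "lr": base_lr},
    ]
    opt = torch.optim.SGD(params, lr=base_lr, momentum=0.9)
    # Drop the learning rate by a factor of 0.1 every 40 epochs.
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=40, gamma=0.1)
    return opt, sched
```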
Since the Office dataset is quite small, following the
Finally, our method achieves a 2.3% gain over the baseline.
For even smaller numbers of examples, the performance may be inconsistent and can drop below the baseline (e.g., 0.652 with 16 samples and 0.661 with 32 samples).
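As noted in the footnote to Algorithm 1, the mean and variance are estimated with an online algorithm (Donald, 1999), which is what makes the small-sample regime above practical. The following is a minimal Welford-style sketch of such an estimator for one neuron; the class name is ours and the exact implementation may differ.

class OnlineMoments:
    # Online estimate of mean and variance; memory stays constant no
    # matter how many target-domain observations are consumed.
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / self.n if self.n > 0 else 0.0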
In comparison, besides its strong performance, our method requires no extra parameters and very little additional computation over the whole adaptation process.
Domain adaptation for object recognition: An unsupervised approach. In ICCV, pp. 999-1006, 2011.
SJJN38cge | [{"section_index": "0", "section_name": "DISTRIBUTED TRANSFER LEARNING FOR DEEP CONVOLUTIONAL NEURAL NETWORKS BY BASIC PROBABILITY ASSIGNMENT", "section_text": "Arash Shahriari\nResearch School of Engineering, Australian National University. Commonwealth Scientific and Industrial Research Organisation\nTransfer learning is a popular practice in deep neural networks, but fine-tuning. of a large number of parameters is a hard challenge due to the complex wiring. of neurons between splitting layers and imbalance class distributions of original. and transferred domains. Recent advances in evidence theory show that in an imbalance multiclass learning problem, optimizing of proper objective functions. based on contingency tables prevents biases towards high-prior classes. Transfer learning usually deals with highly non-convex objectives and local minima in deep neural architectures. We propose a novel distributed transfer learning to tackle both optimization complexity and class-imbalance problem jointly. Our solution imposes separated greedy regularization to each individual convolutional filter to. make single-filter neural networks such that the minority classes perform as the majority ones. Then, basic probability assignment from evidence theory boosts these distributed networks to improve the recognition performance on the target. domains. Our experiments on several standard datasets confirm the consistent improvement as a result of our distributed transfer learning strategy.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Transfer learning for deep neural networks has been proved highly beneficial to boost their overall. performance. Deep learning practices usually require huge amount of labeled data to learn powerful. models. The transfer learning enables adaptation to a different source with small training samples On the other hand, deep neural networks practically learn intermediate features. They could provide. better transfer among domains because some of them generalize well among various domains of knowledge [Glorot et al.(2011). These transferable features generally underlies several probability. distributions Oquab et al.(2014) which reduce the cross-domain discrepancyYosinski et al.(2014)\nThe common observation among several deep architectures is that features learned in bottom layers are not that specific, but transiting towards top layers makes them tailored to a dataset or task. A recent study Yosinski et al.(2014) of the generality or specificity of deep layers for the sake of transfer learning reveals two difficulties which may affect the transfer of deep features. First, top layers get quite specialized to their original tasks and second, some optimization difficulties rise due to the splitting of the network between co-adapted layers. In spite of these negative effects, it"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In supervised learning, many classification algorithms assume the same distribution for training and testing data. Consequently, change of distribution requires rebuilding of the statistical models which is not always practical because of the hardship of recollecting of training data or heavy learning process. One of the solutions is transfer learning that transfers the classification knowledge into a new domain Pan & Yang(2010). 
This aims at learning highly-generalized models with different probability distributions across domains, so as to learn novel domains without labeled data (Wang & Schneider, 2014; Zhang et al., 2013). Here, the main challenge is to reduce the shift in data distribution between domains with algorithms that minimize the discrepancy between the domains. It is worth mentioning that this cannot get rid of domain-specific variations (Long et al., 2016).
and sums up to one for each individual label. This basic probability assignment provides the ability to reflect the different contributions of a classifier to each individual class, or to combine the outcomes of multiple weak classifiers."}, {"section_index": "4", "section_name": "2.1 BASIC PROBABILITY ASSIGNMENT", "section_text": "
lutional layers in the original domain, the red blocks (Softmax) show the fine-tuned layers for the target domain, and the green block corresponds to the basic probability assignment (BPA), respectively."}, {"section_index": "5", "section_name": "2.2 DISTRIBUTED TRANSFER LEARNING", "section_text": "
To break this non-convex optimization, we introduce our distributed transfer learning strategy. For implementation, we replace the mutual learning of all the parameters with learning of each individual convolutional filter in a separate classifier fed by the bottom original layer. It means that we train a set of weak single-filter classifiers F = {φ1, . . . , φ|F|}, where |F| equals the number of convolutional filters in the deep neural architecture. We follow the recipe of the single classifier in Equation 5 but extend it to redefine
The first values before the dash correspond to the training errors (left) and the second ones to the testing errors (right).
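For reference, the BPA-weighted fusion of Eq. (8) that produces the distributed results can be sketched as below. This is our reading of the equation, with an illustrative function name and NumPy used for brevity: each weak single-filter classifier's unary potentials are weighted by (1 - m_ij), accumulated over the filters and normalized before the argmax.

import numpy as np

def combine_single_filter_classifiers(unary, bpa):
    # unary[c, f]: probability of class c from single-filter classifier f
    # bpa[c, f]:   basic probability assignment m_cf of class c to classifier f
    weighted = unary * (1.0 - bpa)   # per-filter votes, weighted as in Eq. (6)
    scores = weighted.sum(axis=1)    # accumulate the votes over all filters
    scores = scores / scores.sum()   # normalize to a proper distribution
    return int(np.argmax(scores))    # revised predicted class index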
distributed transfer learning outperforms all the conventional results.\nTable 3: Performance of Conventional and Distributed Transfer Learning for Experiment 2\ntargeting of MNIST on CIFAR-10 network gives close performance to the deep learning outcomes. The second setup leads to the overfitting of SVHN over CIFAR-100 network due to huge number of samples. The other outcome is the poor performance of transferring CIFAR-100 over SVHN network as a result of huge conceptual gap between original-target domains.\nOur observations show that fine-tuning on training set and calculating BPA on validation, result in. better generalization of the transferred model on testing set. On the other hand, computing of BPA on. training plus validation sets gives higher performance in case of hugely different number of classes in original-target datasets. Since we employ BPA to address the class-imbalance problem, we reckon that it better captures the distribution of data by adjoining both train/validation sets especially when we intend to transfer few classes of original dataset to the larger number of classes in the target.."}, {"section_index": "8", "section_name": "4 CONCLUSION", "section_text": "We introduce a novel transfer learning for deep convolutional networks that tackles the optimization complexity of a highly non-convex objective by breaking it to several distributed fine-tuning oper- ations. This also resolves the imbalance class coverage between original-target domains by using basic probability assignment across several week single-filter classifiers. By the above boosting, the overall performance shows considerable improvement over conventional transfer learning scheme We conduct several experiments on publicly available datasets and report the performance as train- test errors. The results confirm the advantage of our distributed strategy for the transfer learning.\nTarget Conventional MNIST CIFAR-10 MNIST 0.43 28.92 CIFAR-10 0.44 2.37 Target Distributed MNIST CIFAR-10 MNIST 0.25 20.85 CIFAR-10 0.23 0.95 Target Conventional SVHN CIFAR-100 SVHN 0.71 89.31 CIFAR-100 0.01 12.18 Target Distributed SVHN CIFAR-100 SVHN 0.46 61.10 CIFAR-100 0.28 7.25"}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009\nMingsheng Long, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. arXiv preprint arXiv:1605.06636, 2016\nYuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.\nMaxime Oquab, Leon Bottou, Ivan Laptev, and Josef Sivic. Learning and transferring mid-level im age representations using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1717-1724. 2014\nKari Sentz and Scott Ferson. Combination of evidence in Dempster-Shafer theory, volume 4015 Citeseer, 2002.\nXuezhi Wang and Jeff Schneider. Flexible transfer learning under support and model shift. Ir Advances in Neural Information Processing Systems. pp. 1898-1906. 2014.\nJason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in dee neural networks? In Advances in neural information processing systems, pp. 3320-3328, 2014.\nKun Zhang, Bernhard Scholkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In 1CML (3), pp. 
819-827, 2013.
Hkg8bDqee | [{"section_index": "0", "section_name": "INTROSPECTION:ACCELERATING NEURAL NETWORK TRAINING BY LEARNING WEIGHT EVOLUTION", "section_text": "Abhishek Sinha\nDepartment of Electronics and Electrical Comm. Engg IIT Kharagpur West Bengal. India"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Neural Networks are function approximators that have achieved state-of-the-ar accuracy in numerous machine learning tasks. In spite of their great success in terms of accuracy, their large training time makes it difficult to use them for various tasks. In this paper, we explore the idea of learning weight evolutior pattern from a simple network for accelerating training of novel neural networks.\nWe use a neural network to learn the training pattern from MNIST classifi cation and utilize it to accelerate training of neural networks used for CIFAR-10 and ImageNet classification. Our method has a low memory footprint and is computationally efficient. This method can also be used with other optimizers to give faster convergence. The results indicate a general trend in the weight evolution during training of neural networks."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks have been very successful in modeling high-level abstractions in data. How.. ever, training a deep neural network for any AI task is a time-consuming process. This is because a. large number of parameters need to be learnt using training examples. Most of the deeper network. can take days to get trained even on GPU thus making it a major bottleneck in the large-scale appli. cation of deep networks. Reduction of training time through an efficient optimizer is essential for. fast design and testing of deep neural nets..\nIn the context of neural networks, an optimization algorithm iteratively updates the parameters (weights) of a network based on a batch of training examples, to minimize an objective function The most widely used optimization algorithm is Stochastic Gradient Descent. Even with the adven of newer and faster optimization algorithms like Adagrad, Adadelta, RMSProp and Adam there is still a need for achieving faster convergence.\nIn this work we apply neural network to predict weights of other in-training neural networks to accelerate their convergence. Our method has a very low memory footprint and is computationally efficient. Another aspect of this method is that we can update the weights of all the layers in parallel\n*This work was done as part of an internship at Adobe Systems, Noida\nMausoom Sarkar\nAdobe Systems Inc. Noida Uttar Pradesh.India.\nkbalaji at adobe dot com"}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Several extensions of Stochastic Gradient Descent have been proposed for faster training of neura networks. Some of them are Momentum (Rumelhart et al.]1986), AdaGrad (Duchy et al.2011) AdaDelta (Zeiler2012), RMSProp (Hinton et al.]2012) and Adam (Kingma & Ba]2014). All o1 them reduce the convergence time by suitably altering the learning rate during training. Our methoc can be used along with any of the above-mentioned methods to further improve convergence time.\nIn the above approaches, the weight update is always a product of the gradient and the modi fied/unmodified learning rate. More recent approaches (Andrychowicz et al.] 2016) have tried tc learn the function that takes as input the gradient and outputs the appropriate weight update. 
This exhibited a faster convergence compared to a simple multiplication operation between the learning rate and gradient. Our approach is different from this, because our forecasting network does not use the current gradient for the weight update, but rather uses the weight history to predict its future value many time steps ahead, where the network would exhibit better convergence. Our approach generalizes better between different architectures and datasets without additional retraining. Further, our approach has a far smaller memory footprint compared to (Andrychowicz et al., 2016). Also, our approach need not be involved at every weight update and hence can be invoked asynchronously, which makes it computationally efficient.
Similar plots for a fully connected network trained on MNIST (figure 7) and a convolutional network trained on CIFAR-10 (figure 9) present similar observations (a short sketch of both metrics is given after this list).
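A minimal sketch of the two metrics, for one weight's recorded history (the helper is our own, not code from the paper):

import numpy as np

def weight_change_metrics(history):
    # history: 1-D array of one weight scalar's values over training,
    # with history[0] being the initialized value
    deviation = abs(history[-1] - history[0])  # final vs. initial value
    # square root of the 2nd moment about the initial value (oscillation)
    oscillation = np.sqrt(np.mean((history - history[0]) ** 2))
    return deviation, oscillation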
In our experiments, four was the minimum number of samples for which the training of the Introspection Network I converged."}, {"section_index": "6", "section_name": "4.1 TRAINING OF INTROSPECTION NETWORK", "section_text": "
We choose 50% of the training data from the top 50th percentile of the sorted weights, 25% from the next 25th percentile(between 50 to 75th percentile of the sorted weights) and the remaining 25% from the rest (75th to 100th percentile). Approximately 0.8 million examples of weight history are used to train I. As the weight values are very small fractions they are further multiplied by 1000 before being input to the network I. The expected output of I, which is used for training I using backpropagation, is a single scalar the value of the same weight at step 2t. This is an empirical choice. For example, any step kt with k > 1 can be chosen instead of 2t. In our experiments with varying the value of k, we found that the value of k = 2.2 reached a slightly better validation accuracy than k = 2.0 on MNIST dataset (see figure 15) but, on the whole the value of k = 2.0 was a lot more consistent in its out-performance at various points in its history. All the results reported here are with respect to the I trained to predict weight values at 2t.\n105 104 Frenneeey 103 102 101 100 0 1 2 3 4 5 Square root of 2nd moment about initialized value\nFigure 7: log-Frequency distribution of. square root of 2nd moment of a weight. value(about initial value) along its training. history. The weight values are taken from a fully-connected network trained on MNIST dataset using Adam Optimizer..\nlog-Frequency Distribution of square root of 2nd moment about initialized value of weights 106 105 104 Freenbeey 103 102 101 100 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 Square root of 2nd moment about initialized value\nFigure 9: log-Frequency distribution of square root of 2nd moment of a weight. value(about initial value) along its training history. The weight values are taken from a CNN trained on CIFAR-10 dataset using. SGD Optimizer.\nEvolution of weights with and without Introspection network -70 SGD SGD + update using Introspection network -75 80 -85 90 0 5000 10000 15000 20000 Training Steps\nFigure 10: Example of weight update using Introspection Network\nAdam optimizer was used for the training of the introspection network with a mini-batch size of 20.The training was carried out for 30k steps. The learning rate used was 5e-4 which decreased gradually after every 8k training steps. L1- error was used as the loss function for training . We experimented with both L2 error and percentage error but found that L1 error gave the best result over the validation set. The final training loss obtained was 3.1 and the validation loss of the final trained model was 3.4. These correspond to average L1 weight prediction error of 0.0031 and 0.0034 in the training and validation set respectively as the weight values are multiplied by 1o00 before they are input to I.\nThe introspection network once trained can be then used to guide the training of other networks. We illustrate our method by using it to accelerate the training of several deep neural nets with varying architectures on 3 different datasets, namely MNIST, CIFAR-10 and ImageNet. We note that the same introspection network I, trained on the weight evolutions of the MNIST network No was used in all these different cases.\nAll the networks trained using I required comparatively less time to reach the same accuracy as normal SGD training. Also, when the same network was trained for the same time with and without updates by I, the former is observed to have better accuracy. 
These results show that there is a remarkable similarity in the weight evolution trajectories across network architectures, tasks and datasets.

Four different neural networks were trained using I on the MNIST dataset. All the networks were trained using either Stochastic Gradient Descent or Adam, and the network I is used at a few intermediate steps to propel the network to a state with higher accuracy. We refer to the time step at which the introspection network I is applied to update all the weights as a "jump point".

The selection of the steps at which I is to be used depends on the distribution of the training step t used for training I. We show the effect of varying the timing of the initial jump and the time interval between jump points in section 4.2.2. It has been observed that I gives a better increase in accuracy when it is used in later training steps rather than in the earlier ones.

1. A convolutional neural network MNIST1 with 2 convolutional layers and 2 fully connected layers (a dropout layer after the 1st fc layer is also present), with ReLU activations, for a classification task on the MNIST image dataset. Max pooling (2x2 pool size and a 2x2 stride) was applied after every conv layer. The conv layer weights were of shape [5, 5, 1, 8] and [5, 5, 32, 64] respectively, and the fc layers were of sizes [3136, 1024] and [1024, 10]. The weights were initialised from a truncated normal distribution with a mean of 0 and std of 0.01. The network was trained using SGD with a learning rate of 1e-2 and a batch size of 50. It takes approximately 20,000 steps for convergence via the SGD optimiser. For MNIST1, I was used to update all weights at training steps 3000, 4000, and 5000.
2. A convolutional network MNIST2 with 2 convolutional layers and 2 fully connected layers, with ReLU activations. Max pooling (2x2 pool size and a 2x2 stride) was applied after every conv layer. The two fc layers were of sizes [800, 500] and [500, 10], whereas the two conv layers were of shape [5, 5, 1, 20] and [5, 5, 20, 50] respectively. The weights were initialised via Xavier initialisation. The initial learning rate was 0.01, decayed via the inv policy with gamma and power being 1e-4 and 0.75 respectively. A batch size of 64 was used for training. It takes approximately 10,000 steps for convergence. The network I was used to update weights at training steps 2500 and 3000.
3. A fully connected network MNIST3 with 2 hidden layers, each consisting of 256 hidden units with ReLU activations. The network was trained using SGD with a learning rate of 5e-3 and a batch size of 100. The initial weights were drawn from a normal distribution with mean 0 and std 1.0. For this network the weight updates were carried out at steps 6000, 8000 and 10000.
4. An RNN MNIST4 used to classify MNIST, with an LSTM cell of hidden size 128 followed by an fc layer of shape [128, 10] for classification. The RNN was trained with the Adam optimizer, a learning rate of 5e-4 and a batch size of 128. The weight updates for this network were done at steps 2000, 3000 and 4000. Since the LSTM cell uses sigmoid and tanh activations, MNIST4 allows us to explore whether the introspection network, trained on a ReLU network, can generalize to networks using different activation functions.

A comparison of the validation accuracy with and without updates by I is shown in figures 11, 12, 13 and 14. The green lines indicate the steps at which the introspection network I is used. For the MNIST1 network, with the application of the introspection network I at three points, we found that it took 251 seconds and 20000 SGD steps to reach a validation accuracy of 98.22%. In the same number of SGD steps, normal training was able to reach a validation accuracy of only 97.22%. In the same amount of time (251 seconds), normal training only reached 97.92%. Hence the gain in accuracy with the application of the introspection network translates to real gains in training time.

Figure 11: Validation accuracy plot for MNIST1

For the MNIST2 network, figure 12 shows that to reach an accuracy of 99.11%, the number of iterations required by normal SGD was 6000, whereas with the application of the introspection network I, the number of iterations needed was only 3500, which represents a significant saving in time and computational effort.

The initial drop in accuracy seen after a jump in MNIST2 (figure 12) can be attributed to the fact that each weight scalar is predicted independently, and the interrelationship between the weight scalars in a layer or across different layers is not taken into consideration. This interrelationship is soon reestablished after a few SGD steps. This phenomenon is noticed in the CIFAR and ImageNet cases too.

Figure 13: Validation accuracy plot for MNIST3

For MNIST3, after 15000 steps of training, the max accuracy achieved by normal training of the network via the Adam optimizer was 95.71%, whereas with the introspection network applied the max accuracy was 96.89%. To reach the max accuracy reached by normal training, the modified network (weights updated by I) took only 8300 steps.

For MNIST4, after 7000 steps of training, the max accuracy achieved by normal training of the network was 98.65%, achieved after 6500 steps, whereas after modification by I it was 98.85%, achieved after 5300 steps. The modified network (weights updated by I) reached the max accuracy achieved by the normal network after only 4200 steps. It is notable that the introspection network I, trained on weight evolutions with ReLU activations, was able to help accelerate the convergence of an RNN which uses sigmoid and tanh activations.

Figure 14: Validation accuracy plot for MNIST4, which is an RNN

Figure 15: Comparison of introspection networks trained with different jump ratios on the MNIST network with the Adam optimizer. A jump ratio of 2.0 shows a more consistent out-performance compared to a jump ratio of 2.2, even though the latter reaches a slightly higher accuracy.

We applied our introspection network I on a CNN CIFAR1 for classifying images in the CIFAR-10 (Krizhevsky, 2009) dataset. It has 2 convolutional layers, 2 fully connected layers and a final softmax layer, with ReLU activation functions.
Max pooling (3x3 pool size and a 2x2 stride) and batch normalization were applied after each convolutional layer. The two conv layer filter weights were of shape [5, 5, 3, 64] and [5, 5, 64, 64] respectively, whereas the two fc layers and the final softmax layer were of shape [2304, 384], [384, 192] and [192, 10] respectively. The weights were initialized from a zero-mean normal distribution with std of 1e-4 for the conv layers, 0.04 for the two fc layers, and 1/192.0 for the final layer. The initial learning rate used is 0.1, which is decayed by a factor of 0.1 after every 350 epochs. A batch size of 128 was used, and the model was trained via the SGD optimizer. It takes approximately 40,000 steps for convergence. The experiments on CIFAR1 were done to investigate two issues. The first was to investigate whether the introspection network trained on MNIST weight evolutions is able to generalize to a different network and a different dataset. The second was to investigate the effect of varying the timing of the initial jump, the interval between successive jumps, and the number of jumps. To investigate these issues, four separate training instances were performed with 4 different sets of jump points:

1. Set1: weight updates were carried out at training steps 12000 and 17000.
2. Set2: weight updates at steps 15000 and 18000.
3. Set3: weight updates at steps 12000, 15000 and 19000.
4. Set4: weight updates at steps 14000, 17000 and 20000.

We observed that for the CIFAR1 network, in order to reach a validation accuracy of 85.7% we need 40,000 iterations with normal SGD, without any intervention by the introspection network I. In all four sets where the introspection network was used, the target accuracy of 85.7% was reached in approximately 28,000 steps. This shows that the introspection network is able to successfully generalize to a new dataset and new architecture and show significant gains in training time.

On CIFAR1, the time taken by I for prediction is negligible compared to the time required for SGD. So the training times in the above cases on CIFAR1 can be assumed to be proportional to the number of SGD steps required.

A comparison of the validation accuracy with and without updates by I at the four different sets of jump points is shown in figures 16, 17, 18 and 19. The results show that while the choice of jump points has some effect on the final result, the effects are not very large. In general, we notice that better accuracy is reached when the jumps take place in later training steps.

Figure 16: Validation accuracy plot for CIFAR1 with jumps at Set1

Figure 17: Validation accuracy plot for CIFAR1 with jumps at Set2

Figure 18: Validation accuracy plot for CIFAR1 with jumps at Set3

Figure 19: Validation accuracy plot for CIFAR1 with jumps at Set4"}, {"section_index": "7", "section_name": "4.2.3 IMAGENET", "section_text": "To investigate the practical feasibility and generalization ability of our introspection network, we applied it in training AlexNet (Krizhevsky et al., 2012) (AlexNet1) on the ImageNet (Russakovsky et al., 2015) dataset. It has 5 conv layers and 3 fully connected layers. Max pooling and local response normalization have been used after the two starting conv layers, and a pooling layer is present after the fifth conv layer as well. We use SGD with a momentum of 0.9 to train this network, starting from a learning rate of 0.01. The learning rate was decreased by one tenth every 100,000 iterations. The mini-batch size was 128. It takes approximately 300,000 steps for convergence. The weight updates were carried out at training steps 120,000, 130,000, 144,000 and 160,000.

We find that in order to achieve a top-5 accuracy of 72%, the number of iterations required in the normal case was 196,000. When the introspection network was used, the number of iterations required to reach the same accuracy was 179,000. Again the time taken by I for prediction is negligible compared to the time required for SGD. A comparison of the validation accuracy with and without updates by I is shown in figure 20. The green lines indicate the steps at which the introspection network I is used. The corresponding plot of the loss function against training steps is shown in figure 21.

Figure 20: Validation accuracy plot for AlexNet1 on ImageNet

Figure 21: Plot of the loss function vs training steps for AlexNet1 on ImageNet

The results on AlexNet1 show that our approach has a small memory footprint and is computationally efficient enough to scale to training practical large-scale networks.

In this section we provide a comparison with other optimizers and simple heuristics which can be used to update the weights at different training steps instead of updates by the introspection network."}, {"section_index": "8", "section_name": "4.4 COMPARISON WITH ADAM OPTIMIZER", "section_text": "We applied the introspection network on the MNIST1 and MNIST3 networks trained with the Adam optimizer with learning rates of 1e-4 and 1e-3. The results in figures 22 and 23 show that while Adam outperforms normal SGD and SGD with introspection, we were able to successfully apply the introspection network on the Adam optimizer and accelerate it.

For MNIST1 the max accuracy achieved by Adam with introspection was 99.34%, by normal Adam 99.3%, by SGD with introspection 99.21%, and by normal SGD 99.08%. With introspection applied on Adam, the model reaches the max accuracy achieved by normal Adam after only 7200 steps, whereas normal training required 10000 steps.

For MNIST3 the max accuracy achieved by Adam with introspection was 96.9%, by normal Adam 95.7%, by SGD with introspection 94.47%, and by normal SGD 93.39%.
With introspection applied on Adam, the model reaches the max accuracy achieved by normal Adam after only 8800 steps, whereas normal training required 15000 steps.

Figure 22: Test accuracy comparison for MNIST1 with the SGD and Adam optimisers, in the presence and absence of introspection.

Figure 23: Test accuracy comparison for MNIST3 with the SGD and Adam optimisers, in the presence and absence of introspection.

A separate quadratic curve was fit to each of the weight values of the model on the basis of the 4 past weight values chosen from history. The weight values chosen from history were at the same steps as for the updates by I. The new updated weight would be the value of the quadratic curve at some future time step. For MNIST1, experiments were performed by updating the weights to the value predicted by the quadratic function at a future timestep which was one of 1.25, 1.3 or 1.4 times the current time step. For other, higher jump ratios the updates would cause the model to diverge, and lower jump ratios did not show much improvement in performance. The plot showing the comparison in validation accuracy is shown below in figure 24.

Figure 24: Comparison of test accuracy for MNIST1 with weight updates by introspection and by quadratic fit.

The max accuracy achieved with introspection applied was 99.21%, whereas with the quadratic fit it was 99.19%. We note that even though the best performing quadratic fit eventually almost reaches the same max accuracy as that achieved with the introspection network, it required considerable experimentation to find the right jump ratio. A unique observation for the quadratic fit baseline was that it could take the accuracy down dramatically, to as low as 9.8%, from which the training often never recovers. Sometimes the optimizers (SGD or Adam) would recover the accuracy, as seen in figure 24. Moreover, the quadratic fit baseline was not able to generalize to other datasets and tasks. The best performing jump ratio of 1.25 was not able to outperform introspection on the CIFAR-10 dataset, as seen in figure 25.

In the CIFAR-10 case, the maximum accuracy achieved via updates by introspection was 85.6, achieved after 25500 steps, whereas with updates by the quadratic fit, the max accuracy of 85.45 was achieved after 27200 steps.

For normal training via SGD without any updates, after 30000 steps of training the max accuracy of 85.29 was achieved after 26500 steps, whereas the same accuracy was achieved by introspection after only 21200 steps and after 27000 steps via the quadratic fit.

Figure 25: Comparison of test accuracy for CIFAR-10 with weight updates by introspection and by quadratic fit.

Instead of fitting a quadratic curve to each of the weights, we also tried fitting a linear curve.
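Both baselines amount to per-weight polynomial extrapolation through the same four history points that I consumes; a minimal NumPy sketch (the function and jump-ratio argument are our own illustration, not the authors' code):

```python
import numpy as np

def polyfit_jump(steps, values, degree, jump_ratio):
    """Extrapolate one weight's future value by fitting a polynomial.

    steps:  the four past step indices, e.g. [0, 4*t//10, 7*t//10, t]
    values: the weight's recorded values at those steps
    degree: 2 for the quadratic baseline, 1 for the linear one
    Returns the fitted curve's value at step jump_ratio * t.
    """
    coeffs = np.polyfit(steps, values, deg=degree)
    return np.polyval(coeffs, jump_ratio * steps[-1])

# Toy usage: a quadratic jump to 1.25*t and a linear jump to 1.1*t.
t = 4000
steps = np.array([0, 4 * t // 10, 7 * t // 10, t], dtype=float)
values = np.array([0.010, 0.018, 0.022, 0.025])
print(polyfit_jump(steps, values, degree=2, jump_ratio=1.25))
print(polyfit_jump(steps, values, degree=1, jump_ratio=1.1))
```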
Experiments were performed on MNIST1 for jump ratios of 1.1 and 1.075, as higher ratios would cause the model to diverge after 2 or 3 jumps. The result is shown below in figure 26.

Figure 26: Comparison of test accuracy for MNIST1 with weight updates by introspection and by linear fit.

As no significant improvement in performance was observed, the experiment was not repeated on CIFAR."}, {"section_index": "9", "section_name": "4.5 LINEAR INTROSPECTION NETWORK", "section_text": "We removed the ReLU nonlinearity from the introspection network and used the same training procedure as for the normal introspection network to predict the future values at 2t. We then used this linear network on the MNIST1 network. We found that it gave some advantage over normal SGD but was not as good as the introspection network, as shown in figure 27. Hence we did not explore this baseline for other datasets and networks.

Figure 27: Validation accuracy plot for MNIST1 using an introspection network without nonlinearity"}, {"section_index": "10", "section_name": "4.5.1 ADDING NOISE", "section_text": "The weight values were updated by adding small zero-mean Gaussian random noise. The experiment was performed on MNIST1 for two different std values, the results of which are shown below in figure 28.

Figure 28: Test accuracy for MNIST1 with weight updates via Gaussian noise

Since no significant improvement was observed for the weight updates via noise on MNIST1, the experiment was not performed on CIFAR-10.

Some of the open questions to be investigated relate to the determination of the optimal jump points and to the generalization capacity of the introspection network to speed up training in RNNs and non-image tasks. Also, we noticed that applying the jumps in very early training steps while training AlexNet1 tended to degrade the final outcomes. This may be due to the fact that our introspection network is extremely simple and has been trained only on weight evolution data from MNIST. A combination of a more powerful network and training data derived from a diverse set may ameliorate this problem.

We introduced a method to accelerate neural network training. For this purpose, we used a neural network I that learns a general trend in the weight evolution of neural networks. After learning the trend from one neural network training, I is used to update the weights of many deep neural nets on different tasks - MNIST, CIFAR-10, and ImageNet - with varying network architectures, activations, optimizers, and normalizing strategies (batch norm, lrn). Using the introspection network I led to faster convergence compared to existing methods in all the cases.
Our method has a small memory footprint, is computationally efficient, and is usable in practical settings. Our method differs from other existing methods in that it utilizes the knowledge obtained from the weights of one neural network training to accelerate the training of several unseen networks on new tasks. The results reported here indicate the existence of a general underlying pattern in the weight evolution of any neural network."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. 2009.

Matthew D. Zeiler. ADADELTA: An adaptive learning rate method. 2012. URL https://arxiv.org/pdf/1212.5701v1.pdf"}, {"section_index": "12", "section_name": "A APPENDIX", "section_text": "In this section, we report some initial results of applying the introspection network I (trained on the weight evolution of the MNIST network N0) to accelerate the training of the Inception V1 network (Szegedy et al., 2014). We trained the Inception V1 network on the ImageNet dataset with a mini-batch size of 128 and an RMS optimizer (decay 0.9, momentum 0.9, epsilon 1.0), starting from a learning rate of 0.01 with a decay of 0.94 after every 2 epochs. The network training is still in progress, and we will eventually report on the final outcome. However, we thought it would be valuable to share the preliminary results all the same.

We found that applying the introspection network seems to be reducing the training time quite significantly. In Figures 29 and 30, we see that applying the introspection network leads to a gain of at least 730,000 steps. After training for around 1.5 million steps, the maximum accuracy achieved by normal training was 68.40%, whereas with introspection applied after every 300k steps the max accuracy achieved was 69.06%. The network achieved the max accuracy of 68.40% after only 852k steps. With introspection applied at steps 200k, 400k and 600k, the max accuracy achieved was 68.69%, and it reached the max accuracy achieved by the normal training of the model after only 944k steps.

However, we also observed that choosing the jump points early in the training does not lead to eventual gains, even though a significant jump in accuracy is observed initially. Figure 31 shows the flattening of the test accuracy after a set of early jumps. It remains to be seen if further interventions later in the training can help maintain the initial accelerated convergence.

Figure 29: Test accuracy plot for the Inception V1 network with weight updates via the introspection network at steps 2x10^5, 4x10^5 and 6x10^5 (pink curve) and at steps 3x10^5, 6x10^5 and 9x10^5 (blue curve)
Figure 30: Test accuracy plot for the Inception V1 network with weight updates via the introspection network at steps 3x10^5, 6x10^5 and 9x10^5

Figure 31: Test accuracy plots for the Inception V1 network with weight updates via the introspection network in early training steps."}]
HyWDCXjgx | [{"section_index": "0", "section_name": "MULTI-LABEL LEARNING WITH THE RNNs FOR FASHION SEARCH", "section_text": "taey.16@navercorp.com
We build a large-scale visual search system which finds similar product images given a fashion item. Defining similarity among arbitrary fashion products remains a challenging problem, since there is no exact ground truth. To resolve this problem, we define more than 90 fashion-related attributes, and combinations of these attributes can represent thousands of unique fashion styles. We then introduce the use of recurrent neural networks (RNNs) to recognise multiple fashion attributes in an end-to-end manner. To build our system at scale, these fashion attributes are again used to build an inverted indexing scheme. In addition to these fashion attributes for semantic similarity, we extract colour and appearance features in a region of interest (ROI) of a fashion item for visual similarity. By sharing our approach, we hope to encourage active discussion on how to apply current deep learning research to the e-commerce industry."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep learning technology has achieved great success in computer vision tasks such as efficient feature representation (Razavian et al., 2014; Babenko et al., 2014), classification (He et al., 2016a; Szegedy et al., 2016b), detection (Ren et al., 2015; Zhang et al., 2016), and segmentation (Long et al., 2015). Furthermore, image-to-caption generation (Vinyals et al., 2015; Xu et al., 2015) and visual question answering (VQA) (Antol et al., 2015) are emerging research fields combining vision, language (Mikolov et al., 2010), sequence-to-sequence (Sutskever et al., 2014), and long-term memory (Xiong et al., 2016) based modelling technologies.
This computer vision research mainly concerns general object recognition. However, in our fashion-product search domain, we need to build a very specialised model which can mimic a human's perception of fashion-product similarity. To this end, we started by brainstorming about what makes two fashion items similar or dissimilar. Fashion specialists and merchandisers were also involved. We then composed a fashion-attribute dataset for our fashion-product images. Table 1 explains a part of our fashion attributes. Conventionally, each of the columns in Table 1 can be modelled as a multi-class classification. Therefore, our fashion attributes are naturally modelled as a multi-label classification."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Online commerce has had a great impact on our life over the past decade. We focus on an online market for fashion-related items. Finding similar fashion-product images for a given image query is a classical problem in an application of computer vision; however, it is still challenging due to the absence of an absolute definition of the similarity between arbitrary fashion items.
Table 1: An example of fashion-attributes
Multi-label classification has a long history in the machine learning field. To address this problem, a straightforward idea is to split such multi-labels into a set of multi-class classification problems. In our fashion attributes, there are more than 90 attributes. Consequently, we would need to build more than 90 classifiers, one for each attribute.
It is worth noting that, for example, the collar attribute can represent upper garments, but it is absent for bottom garments such as skirts or pants, which means some attributes are conditioned on other attributes. This is the reason that learning the tree structure of the attribute dependencies can be more efficient (Zhang & Zhang, 2010; Fu et al., 2012; Gibaja & Ventura, 2015).
Recently, recurrent neural networks (RNNs) have become very commonly used in automatic speech recognition (ASR) (Graves et al., 2013; Graves & Jaitly, 2014), language modelling (Mikolov et al., 2010), word dependency parsing (Mirowski & Vlachos, 2015), machine translation (Cho et al., 2014), and dialog modelling (Henderson et al., 2014; Serban et al., 2016). To preserve long-term dependencies in the hidden context, Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) and its variants (Zaremba et al., 2014; Cooijmans et al., 2016) are breakthroughs in such fields. We use the LSTM to learn the fashion-attribute dependency structure implicitly. By using the LSTM, our attribute recognition problem is regarded as a sequence classification. There is similar work in Wang et al. (2016); however, we do not use the VGG16 network (Simonyan & Zisserman, 2014) as an image encoder but use our own encoder. To the best of our knowledge, this is the first work applying an LSTM to a multi-label classification task in the commercial fashion-product search domain.
The remainder of this paper is organized as follows. In Sec. 2, we describe details about our fashion-attribute dataset. Sec. 3 describes the proposed fashion-product search system in detail. Sec. 4 explains empirical results given image queries. Finally, we draw our conclusion in Sec. 5.

Great-category   Fashion-category   Gender        Silhouette     Collar         Sleeve-length
(3 classes)      (19 classes)       (2 classes)   (14 classes)   (18 classes)   (6 classes)
bottom           T-shirts           male          normal         shirt          long
top              blouse             female        A-line         turtle         a half
...              pants              ...           ...            round          sleeveless
                 bags                                            ...            ...

We started building our large-scale fashion-attribute dataset last year. We employed a maximum of 100 man-months, and it took almost one year to complete. There are 19 fashion categories and more than 90 attributes for representing a specific fashion style. For example, top garments include T-shirts, blouses, bags, etc. The T-shirts category has the collar, sleeve-length, gender, etc. The gender attribute has binary classes (i.e. female and male). The sleeve-length attribute has multiple classes (i.e. long, a half, sleeveless, etc.). Theoretically, the combination of our attributes can represent thousands of unique fashion styles. A part of our attributes is shown in Table 1. ROIs for each fashion item in an image are also included in this dataset. Finally, we collected 1 million images in total. This internal dataset is used for training our fashion-attribute recognition model and our fashion-product ROI detector, respectively.
In this section, we describe the details of our system. The whole pipeline is illustrated in Fig. 3. As in a conventional information retrieval system, our system has an offline and an online phase. In the offline process, we take both an image and its textual meta-information as the inputs. The reason we take additional textual meta-information is that, for example, in Fig. 1a the dominant fashion item in the image is a white dress; however, our merchandiser enrolled it to sell the brown cardigan, as described in its meta-information. In Fig. 1b, there is no way of finding which fashion item is to be sold without referring to the textual meta-information the seller typed manually. Therefore, knowing the intention (i.e. what to sell) of our merchandisers is very important in practice. To catch this intention, we extract fashion-category information from the textual meta-information. The extracted fashion-category information is fed to the fashion-attribute recognition model. The fashion-attribute recognition model predicts a set of fashion attributes for the given image (see Fig. 2). These fashion attributes are used as keys in the inverted indexing scheme. In the next stage, our fashion-product ROI detector finds where the fashion-category item is in the image (see Fig. 8). We extract colour and appearance features for the detected ROI. These visual features are stored in a postings list. In these processes, it is worth noting that, as shown in Fig. 8, our system can generate different results in the fashion-attribute recognition and the ROI detection for the same image by guiding the fashion-category information.

Figure 1: Examples of an image and its textual meta-information

Figure 2: Examples of recognized fashion-attributes for given images

In the online process, there are two options for processing a user query. We can take guided information (what the user wants to find), or the fashion-attribute recognition model automatically finds the fashion-category item most likely to be queried. This is up to the user's choice. For the image given by the user, the fashion-attribute recognition model generates fashion attributes, and the results are fed into the fashion-product ROI detector. We extract colour and appearance features in the ROI resulting from the detector. We access the inverted index addressed by the generated set of fashion attributes, and then get a postings list for each fashion attribute. We perform nearest-neighbor retrieval in the postings lists so that the search complexity is reduced drastically while preserving the semantic similarity. To reduce memory capacity and speed up this nearest-neighbor retrieval process once more, our features are binarized and a CPU-dependent intrinsic instruction (i.e. the assembly popcnt instruction) is used to compute the Hamming distance.

Figure 3: The whole pipeline of the proposed fashion-product search system. (Dashed lines denote the flows of the guided information.)
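To make that last step concrete, here is a minimal sketch of binarizing a feature and comparing codes by Hamming distance (our own toy code; Python's bin(x).count("1") stands in for the hardware popcnt instruction the paper uses):

```python
import numpy as np

def binarize(feature: np.ndarray) -> int:
    """Threshold a real-valued feature at 0 and pack the bits into one integer."""
    bits = (feature > 0).astype(np.uint8)
    return int.from_bytes(np.packbits(bits).tobytes(), "big")

def hamming(a: int, b: int) -> int:
    """Number of differing bits, i.e. popcnt(a XOR b) in hardware terms."""
    return bin(a ^ b).count("1")

# Toy usage: rank reference codes by Hamming distance to a query code.
rng = np.random.default_rng(0)
query = binarize(rng.standard_normal(256))
refs = [binarize(rng.standard_normal(256)) for _ in range(5)]
print(sorted(range(len(refs)), key=lambda i: hamming(query, refs[i])))
```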
We build our own vision encoder network (ResCeption), which is based on the inception-v3 architecture (Szegedy et al., 2016b). To improve both speed of convergence and generalization, we introduce a shortcut path (He et al., 2016a;b) for each data-flow stream (except streams containing at most one convolutional layer) in all inception-v3 modules. Denote the input of the l-th layer by $x^l$ and its output by $x^{l+1}$; the l-th layer is a function $H : x^l \mapsto x^{l+1}$ and the loss function is $L(\theta; x)$. Then forward and backward propagation are derived such that

$$x^{l+1} = H(x^l) + x^l \quad (1)$$

$$\frac{\partial x^{l+1}}{\partial x^l} = \frac{\partial H(x^l)}{\partial x^l} + 1 \quad (2)$$

Imposing gradients from the loss function down to the l-th layer via Eq. (2),

$$\frac{\partial L}{\partial x^l} = \frac{\partial L}{\partial x^L}\,\frac{\partial x^L}{\partial x^{L-1}}\cdots\frac{\partial x^{l+1}}{\partial x^l} = \frac{\partial L}{\partial x^L}\left(1 + \sum_{i=l}^{L-1}\frac{\partial H(x^i)}{\partial x^l}\right) \quad (3)$$

As in Eq. (3), the error signal $\partial L / \partial x^L$ goes down to the l-th layer directly through the shortcut path, and the gradient signals from the (L-1)-th layer down to the l-th layer are added consecutively (i.e. $\sum_{i=l}^{L-1} \partial H(x^i)/\partial x^l$) instead of being multiplied; apart from the initial error from the loss (i.e. $\partial L / \partial x^L$), no multiplicative accumulation occurs, which relieves the vanishing or exploding gradient problem. Fig. 4 depicts the network architecture for the shortcut paths in an inception-v3 module.

Figure 4: Network architecture for shortcut paths (depicted in two red lines) in an inception-v3 module.

We use projection shortcuts throughout the original inception-v3 modules due to the dimension constraint. (Footnote: If the input and output dimensions of the main branch are not the same, a projection shortcut should be used instead of an identity shortcut.) To demonstrate the effectiveness of the shortcut paths in the inception modules, we reproduce the ILSVRC2012 classification benchmark (Russakovsky et al., 2015) for inception-v3 and our ResCeption network. As in Fig. 5a, we verify that residual shortcut paths are beneficial for fast training and slightly better generalization. The whole training curve is shown in Fig. 5b. The best validation errors reached are 23.37% and 6.17% at top-1 and top-5, respectively. That is a competitive result. To demonstrate the representation power of our ResCeption, we employ a transfer learning strategy, applying the pre-trained ResCeption as an image encoder to generate captions. In this experiment, we verify that our ResCeption encoder outperforms the existing VGG16 network on the MS-COCO challenge benchmark (Chen et al., 2015). The best validation CIDEr-D score (Vedantam et al., 2015) for c5 is 0.923 (see Fig. 5c) and the test CIDEr-D score for c40 is 0.937. (Footnote: We submitted our final result with beam search to the MS-COCO evaluation server and found that beam search improves the final CIDEr-D c40 score by 0.02.)

Figure 5: Training curves on the ILSVRC2012 and MS-COCO datasets with our ResCeption model. (a) Early validation curve on the ILSVRC2012 dataset. (b) The whole training curve on the ILSVRC2012 dataset. (c) Validation curve on the MS-COCO dataset.

The traditional multi-class classification associates an instance x with a single label a from a previously defined finite set of labels A. The multi-label classification task associates several finite sets of labels $A_n \subset A$. The most well-known methods in the multi-label literature are the binary relevance method (BM) and the label combination method (CM).
There are drawbacks in both BM and CM. The BM ignores label correlations that exist in the training data. The CM directly takes label correlations into account; however, a disadvantage is its worst-case time complexity (Read et al., 2009). To tackle these drawbacks, we introduce the use of the RNN. Suppose we have random variables $a \in A_n$, $A_n \subset A$. The objective of the RNN is to maximise the joint probability $p(a_t, a_{t-1}, a_{t-2}, \ldots, a_0)$, where t is a sequence (time) index. This joint probability is factorized as a product of conditional probabilities recursively,

$$p(a_t, a_{t-1}, \ldots, a_0) = p(a_0)\,\frac{p(a_0, a_1)}{p(a_0)}\,\frac{p(a_0, a_1, a_2)}{p(a_0, a_1)}\cdots = p(a_0)\,p(a_1 \mid a_0)\,p(a_2 \mid a_1, a_0)\cdots = \prod_t p(a_t \mid a_{t-1}, \ldots, a_0) \quad (4)$$

Following Eq. (4), we can handle multi-label classification as sequence classification, which is illustrated in Fig. 6. There are many label dependencies among our fashion attributes. Direct modelling of such label dependencies in the training data using the RNN is our key idea. We use the ResCeption as a vision encoder $\theta_1$, an LSTM with softmax regression as our sequence classifier $\theta_{seq}$, and the negative log-likelihood (NLL) as the loss function. We backpropagate the gradient signal from the sequence classifier to the vision encoder. (Footnote: Our attribute recognition model is parameterized as $\theta = [\theta_1; \theta_{seq}]$. In our case, updating $\theta_1$ as well as $\theta_{seq}$ in the gradient descent step helps achieve much better performance.)

Figure 6: An example of the fashion-attribute dependence tree for a given image and the objective function of our fashion-attribute recognition model, $\max_{\{\theta_1, \theta_{seq}\} \in \theta} [\,p_{\theta_{seq}}(a_0 \mid g_{\theta_1}(I)),\ p_{\theta_{seq}}(a_1 \mid a_0, g_{\theta_1}(I)),\ p_{\theta_{seq}}(a_2 \mid a_0, a_1, g_{\theta_1}(I)), \ldots]$.

Empirical results of our ResCeption-LSTM based attribute recognition are in Fig. 2. Many fashion-category dependent attributes such as sweetpants, fading, zipper-lock, mini, and tailored-collar are recognized quite well. Fashion-category independent attributes (e.g., male, female) are also recognizable. It is worth noting that we do not model the fashion-attribute dependence tree at all. We demonstrate that the RNN learns the attribute dependency structure implicitly. We evaluate our attribute recognition model on the fashion-attribute dataset. We split this dataset into 721,544, 40,000, and 40,000 images for training, validation, and testing. We employ the early-stopping strategy to prevent over-fitting, using the validation set. We measure precision and recall between the set of ground-truth attributes and the set of predicted attributes for each image. The quantitative results are in Table 2.

Table 2: A quantitative evaluation of the ResCeption-LSTM based attribute recognition model

Measurement   Train   Validation   Test
Precision     0.866   0.842        0.841
Recall        0.867   0.841        0.842
NLL           0.298   0.363        0.363

Our prediction model for fashion-attribute recognition is based on the sequence generation process in the RNN (Graves, 2013). The attribute-sequence generation process is illustrated in Fig. 7. First, we predict a probability of the first attribute for a given internal representation of the image, i.e.
$p_{\theta_{seq}}(a_0 \mid g_{\theta_1}(I))$, and then sample from the estimated probability of the attribute, $a_0 \sim p_{\theta_{seq}}(a_0 \mid g_{\theta_1}(I))$. The sampled symbol is fed in as the next input to compute $p_{\theta_{seq}}(a_1 \mid a_0, g_{\theta_1}(I))$. This sequential process is repeated recursively until the sampled result reaches the special end-of-sequence (EOS) symbol. In the case where we generate a set of attributes for a guided fashion-category, we do not sample from the previously estimated probability, but select the guided fashion-category and feed it in as the next input deterministically. This is the key to accounting for each seller's intention. Results for the guided attribute-sequence generation are shown in Fig. 8.

Figure 7: Guided sequence generation process"}, {"section_index": "3", "section_name": "3.4 Guided ROI DETECTION", "section_text": "Our fashion-product ROI detection is based on the Faster R-CNN (Ren et al., 2015). In the conventional multi-class Faster R-CNN detection pipeline, one takes an image and outputs a tuple of (ROI coordinate, object-class, class-score). In our ROI detection pipeline, we take additional information: the guided fashion-category from the ResCeption-LSTM based attribute-sequence generator. Our fashion-product ROI detector finds where the guided fashion-category item is in a given image. Jing et al. (2015) also use a similar idea, but they train separate detectors for each category independently, so their approach does not scale well. We train a detector for all fashion categories jointly. Our detector produces ROIs for all of the fashion categories at once. In post-processing, we reject ROIs whose object-classes do not match the guided fashion-category. We demonstrate that the guided fashion-category information contributes to higher performance in terms of mean average precision (mAP) on the fashion-attribute dataset. We measure the mAP for the intersection-over-union (IoU) between ground-truth ROIs and predicted ROIs (see Table 3). That is due to the fact that our guided fashion-category information reduces the false positive rate. In our fashion-product search pipeline, the colour and appearance features are extracted from the detected ROIs.
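The guided generation loop of Fig. 7 can be summarized in a short sketch (our own toy code, not the paper's API; step_prob, eos and the category symbol are placeholders):

```python
import numpy as np

def generate_attributes(step_prob, eos, guided_category=None, max_len=20, rng=None):
    """Sample an attribute sequence a_0, a_1, ... until the EOS symbol.

    step_prob(prefix) returns a 1-D probability vector over the attribute
    vocabulary for the next symbol, given the sampled prefix (the image
    representation g(I) is assumed to be baked into step_prob). If
    guided_category is given, it is fed as the first symbol
    deterministically instead of being sampled, mirroring the paper's
    guided generation.
    """
    rng = rng or np.random.default_rng()
    prefix = []
    if guided_category is not None:
        prefix.append(guided_category)
    while len(prefix) < max_len:
        p = step_prob(tuple(prefix))
        a = int(rng.choice(len(p), p=p))
        if a == eos:
            break
        prefix.append(a)
    return prefix

# Toy usage with a uniform next-symbol distribution over 5 symbols (symbol 4 = EOS).
dummy = lambda prefix: np.full(5, 0.2)
print(generate_attributes(dummy, eos=4, guided_category=1, rng=np.random.default_rng(0)))
```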
Figure 8: Examples of the consecutive process of guided sequence generation and guided ROI detection. Although we take the same input image, results can be totally different when guiding with the fashion-category information.

Table 3: Fashion-product ROI detector evaluation (mAP)

IoU          0.5     0.6     0.7     0.8     0.9
Guided       0.877   0.872   0.855   0.716   0.225
Non-guided   0.849   0.842   0.818   0.684   0.223

To extract an appearance feature for a given ROI, we use the pre-trained GoogLeNet (Szegedy et al., 2015). In this network, both the inception4 and inception5 layers' activation maps are used. We evaluate this feature on two similar-image retrieval benchmarks, i.e. Holidays (Jegou et al., 2008) and UK-benchmark (UKB) (Nister & Stewenius, 2006). In this experiment, we do not use any post-processing method or fine-tuning at all. The mAP on Holidays is 0.783, and the precision@4 and recall@4 on UKB are 0.907 and 0.908 respectively. These scores are competitive against several deep feature representation methods (Razavian et al., 2014; Babenko et al., 2014). Examples of queries and the resulting nearest-neighbors are in Fig. 9. In the next step, we binarize this appearance feature by simply thresholding at 0. The reason we take this simple thresholding to generate the hash code is twofold. The neural activation feature map at a higher layer is a sparse and distributed code in nature. Furthermore, the bias term in a linear layer (e.g., a convolutional layer) weakly compensates for aligning the zero-centering of the output feature space. Therefore, we believe that a code from a well-trained neural model can itself be a good feature even when binarized. In our experiment, such simple thresholding degrades the mAP by 0.02 on the Holidays dataset, but this method makes it possible to scale up the retrieval. In addition to the appearance feature, we extract a colour feature using a simple colour histogram (with a fixed number of bins) in HSV space, and the distance between a query and a reference image is computed using a weighted combination of the two distances from the colour and the appearance features.

Figure 9: Examples of retrieved results on Holidays and UKB. The violet rectangles denote the ground-truth nearest-neighbors corresponding to the queries.
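A minimal sketch of that final scoring step follows (our own illustration; the paper does not give the exact formula, so the weight alpha, the hue-only histogram and the helper names are assumptions):

```python
import numpy as np

def colour_histogram(hsv_pixels: np.ndarray, bins: int = 8) -> np.ndarray:
    """L1-normalized histogram over the hue channel of an ROI's HSV pixels,
    with hue assumed normalized to [0, 1]."""
    hist, _ = np.histogram(hsv_pixels[:, 0], bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def combined_distance(app_a: int, app_b: int, col_a, col_b, n_bits: int, alpha: float = 0.5):
    """Weighted mix of the normalized Hamming (appearance) distance and the
    L1 (colour histogram) distance; both terms lie in [0, 1]."""
    d_app = bin(app_a ^ app_b).count("1") / n_bits
    d_col = 0.5 * np.abs(np.asarray(col_a) - np.asarray(col_b)).sum()
    return alpha * d_app + (1.0 - alpha) * d_col
```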
To evaluate the empirical results of the proposed fashion-product search system, we selected 3 million fashion-product images from our e-commerce platform at random. These images are mutually exclusive to the fashion-attribute dataset. We again selected images from the web to use as queries. All of the reference images pass through the offline process as described in Sec. 3, and the resulting inverted-indexing database is loaded into main memory (RAM) by our daemon system. We send the pre-selected queries to the daemon system via a RESTful API. The daemon system then performs the online process and returns the nearest-neighbor images corresponding to the queries. In this scenario, there are three options for getting similar fashion-product images. Option 1 is that the fashion-attribute recognition model automatically selects the fashion-category most likely to be queried in the given image. Option 2 is that a user manually selects a fashion-category given a query image (see Fig. 10). Option 3 is that a user draws a rectangle to be queried by hand, as in Jing et al. (2015) (see Fig. 11). Through the recognized fashion attributes, the retrieved results reflect the user's main needs, e.g. gender, season and utility, as well as the fashion style, which could be lacking when using a visual feature representation only.

Figure 10: Similar fashion-product search for Option 1 and Option 2. (a) For Option 2, the guided information is "pants". (b) For Option 2, the guided information is "blouse".

Figure 11: Similar fashion-product search for Option 3 (panels show the cropped query regions)."}, {"section_index": "4", "section_name": "5 CONCLUSIONS", "section_text": "Today's deep learning technology has had a great impact on various research fields. Such a success story is about to be applied to many industries. Following this trend, we traced the state-of-the-art computer vision and language modelling research and then used these technologies to create value for our customers, especially in the e-commerce platform. We expect active discussion on how to apply many existing research works to the e-commerce industry.

Artem Babenko, Anton Slesarev, Alexander Chigorin, and Victor S. Lempitsky. Neural codes for image retrieval. CoRR, abs/1404.1777, 2014.

Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. CoRR, abs/1504.00325, 2015.

KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. CoRR, abs/1409.1259, 2014.

Tim Cooijmans, Nicolas Ballas, Cesar Laurent, and Aaron C. Courville. Recurrent batch normalization. CoRR, abs/1603.09025, 2016.

Bin Fu, Zhihai Wang, Rong Pan, Guandong Xu, and Peter Dolog. Learning tree structure of label dependency for multi-label learning. Advances in Knowledge Discovery and Data Mining, 2012.

Eva Gibaja and Sebastian Ventura. A tutorial on multilabel learning. The ACM Computing Surveys, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition, 2016a.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016b.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition, 2015.

Piotr Mirowski and Andreas Vlachos. Dependency recurrent neural language models for sentence completion. CoRR, abs/1507.01193, 2015.

Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. Classifier chains for multi-label classification. In The European Conference on Machine Learning and Knowledge Discovery in Databases, 2009.

Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, 2015.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. The International Journal of Computer Vision, 2015.

Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In The AAAI Conference on Artificial Intelligence, 2016.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In The IEEE Conference on Computer Vision and Pattern Recognition, 2015.

Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, and Wei Xu. CNN-RNN: A unified framework for multi-label image classification. CoRR, abs/1604.04573, 2016.

Liliang Zhang, Liang Lin, Xiaodan Liang, and Kaiming He. Is faster R-CNN doing well for pedestrian detection? CoRR, abs/1607.07032, 2016.

Min-Ling Zhang and Kun Zhang. Multi-label learning by exploiting label dependency. In The ACM International Conference on Knowledge Discovery and Data Mining, 2010."}]
rJEgeXFex | [{"section_index": "0", "section_name": "PREDICTING MEDICATIONS FROM DIAGNOSTIC CODES WITH RECURRENT NEURAL NETWORKS", "section_text": "Jacek M. Bajor, Thomas A. Lasko
Department of Biomedical Informatics, Vanderbilt University School of Medicine, Nashville, TN 37203, USA
{jacek.m.bajor,tom.lasko}@vanderbilt.edu
It is a surprising fact that electronic medical records are failing at one of their primary purposes, that of tracking the set of medications that the patient is actively taking. Studies estimate that up to 50% of such lists omit active drugs, and that up to 25% of all active medications do not appear on the appropriate patient list. Manual efforts to maintain these lists involve a great deal of tedious human labor, which could be reduced by computational tools to suggest likely missing or incorrect medications on a patient's list. We report here an application of recurrent neural networks to predict the likely therapeutic classes of medications that a patient is taking, given a sequence of the last 100 billing codes in their record. Our best model was a GRU that achieved high prediction accuracy (micro-averaged AUC 0.93, Label Ranking Loss 0.076), limited by hardware constraints on model size. Additionally, examining individual cases revealed that many of the predictions marked incorrect were likely to be examples of either omitted medications or omitted billing codes, supporting our assertion of a substantial number of errors and omissions in the data, and the likelihood of models such as these to help correct them."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The idea of exploiting the large amounts of data captured in electronic medical records for both clinical care and secondary research holds great promise, but its potential is weakened by errors and omissions in those records (Safran et al., 2007; de Lusignan & van Weel, 2006). Among many other problems, accurately capturing the list of medications currently taken by a given patient is extremely challenging (Velo & Minuz, 2009). In one study, over 50% of electronic medication lists contained omissions (Caglar et al., 2011), and in another, 25% of all medications taken by patients were not recorded (Kaboli et al., 2004). Even medication lists provided by the patients themselves contain multiple errors and omissions (Green et al., 2010).
Many efforts have been made to ensure the correctness of medication lists, most of them involving improved communication between patients and providers (Keogh et al., 2016), but these efforts have not yet been successful, and incorrect or incomplete medication documentation continues to be a source of error in computational medical research. In this work we attempt to identify likely errors and omissions in the record, predicting the set of active medications from the sequence of most recent disease-based billing codes in the record. Predictions from such a model could be used either in manual medication reconciliation (a common process undertaken to correct the medication record) or to provide a prior to other models, such as an NLP model attempting to extract medication use from the narrative clinical text.
Given the sequential nature of clinical data, we suspected that recurrent neural networks would be a good architecture for making these predictions.
In this work we investigate this potential, comparing the performance of recurrent networks to that of similarly-configured feed-forward networks.
The input for each case is a sequence of ICD-9 billing codes (Section 2.1), for which the model produces a single, multi-label prediction of the therapeutic classes (Section 3.1) of medications taken by the patient during the period of time covered by the billing code sequence."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "This work is designed to test how well the complete set of medications a patient is actively taking at a given moment can be predicted by the sequence of diagnostic billing codes leading up to that moment, in the context of non-trivial label noise. It also explores whether sequence-oriented recurrent neural nets can do a better job of that prediction than standard feed-forward networks."}, {"section_index": "3", "section_name": "2.1 MEDICAL BILLING CODES", "section_text": "Each time a patient has billable contact with the healthcare system, one or more date-stamped billing codes are attached to the patient record, indicating the medical conditions that are associated (or suspected to be associated) with the reason for the visit. While these codes are notoriously unreliable because they are only used for billing and not actual clinical practice (O'Malley et al., 2005), they are nevertheless useful in a research context (Bastarache & Denny, 2011; Denny et al., 2010), especially if they are used probabilistically (Lasko, 2014). In our institution, codes from the International Classification of Diseases, Ninth Revision (ICD-9) have historically been used, although we have recently transitioned to the tenth revision (ICD-10). For this project, we used ICD-9 codes.
The ICD-9 hierarchy consists of 21 chapters roughly corresponding to a single organ system or pathologic class (Appendix B). Leaf-level codes in that tree represent single diseases or disease subtypes. For this project, we used a subset of the two thousand most common leaf-level codes as our input data.
Most of the ICLR community are very familiar with recurrent neural networks and their variations, but we include a conceptual description of them here for readers coming from other fields. More thorough descriptions are available elsewhere (Graves; Olah, 2015).
A recurrent neural network is a variation in which the output of one node on input x_t loops around to become an input to another node on input x_{t+1}, allowing information to be preserved as it iterates over an input data sequence (Figure 1). They were introduced in the 1980s (Rumelhart et al., 1986), but achieved explosive popularity only recently, after the development of methods to more reliably capture long-term dependencies, which significantly improved their performance on sequence-to-sequence mapping (Hochreiter & Schmidhuber, 1997; Sutskever et al., 2014).
The basic RNN unit has a simple internal structure (Figure 2a). Output from the previous iteration h_{t-1} and the next input in a sequence x_t are both fed to the network on the next iteration. The Long Short-Term Memory configuration (LSTM) introduces a new, more complex internal structure (Figure 2b) consisting of four neural network layers and a cell state (c_t), which is carried from one iteration to another. The additional layers form forget, input and output gates, which allow the information to be forgotten (reset) or passed on to varying degrees.

Figure 1: Simplified representation of a recurrent neural network (left) and an unrolled recurrent neural network (right). x_i is a single element in an input sequence x; h_i is the output after a single pass through the recurrent unit. Adapted from Olah (2015).

Figure 2: Architectures of (a) Simple RNN, (b) LSTM, and (c) GRU units. x_t: a single element in the input sequence being considered in the current iteration; h_{t-1}, h_t: the outputs from the previous and current iterations; c_{t-1}, c_t: the cell states of the previous and current iterations. Adapted from Olah (2015).
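For concreteness, one iteration of the LSTM unit just described (forget, input and output gates acting on the cell state) can be written in a few lines; this is a textbook NumPy sketch of our own, not any specific library's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM iteration. W is a dict of weight matrices acting on the
    concatenation [h_prev; x_t]; b is a dict of bias vectors, keyed by gate."""
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W["f"] @ z + b["f"])        # forget gate: what to erase from the cell state
    i = sigmoid(W["i"] @ z + b["i"])        # input gate: what to write
    c_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate cell contents
    c_t = f * c_prev + i * c_tilde          # new cell state
    o = sigmoid(W["o"] @ z + b["o"])        # output gate: what to expose
    h_t = o * np.tanh(c_t)                  # new output/hidden state
    return h_t, c_t
```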
In many cases LSTM models define the state of the art, such as with a recent conversational speech recognizer that (slightly) outperforms professional transcriptionists (Xiong et al. 2016).

A recent variation on the LSTM architecture is the Gated Recurrent Unit (GRU) (Cho et al. 2014), which introduces a single update gate in place of the input and forget gates (Figure 2c). GRUs perform as well as or better than LSTMs in many cases (Chung et al. 2014; Jozefowicz et al. 2015), and have the additional advantage of a simpler structure.

In this work we try both an LSTM and a GRU on our learning problem.

Most of the ICLR community are very familiar with recurrent neural networks and their variations, but we include a conceptual description of them here for readers coming from other fields. More thorough descriptions are available elsewhere (Graves 2012; Olah 2015).

Figure 1: Simplified representation of a recurrent neural network (left) and an unrolled recurrent neural network (right). x_i is a single element in an input sequence x, h_i is an output after a single pass through the recurrent unit. Adapted from Olah (2015).

Figure 2: Architectures of (a) Simple RNN, (b) LSTM, and (c) GRU units. x_t: a single element in an input sequence being considered in the current iteration; h_{t-1}, h_t: the output from the previous and current iterations; c_{t-1}, c_t: the cell states of the previous and current iterations. Adapted from Olah (2015).

Little research in the computational medical domain has used recurrent neural networks. The earliest example we are aware of is the use of an LSTM model that produced reasonable accuracy (micro-AUC 0.86) in a 128-dimensional multi-label prediction of diagnoses from regularly sampled, continuously-monitored, real-valued physiologic variables in an Intensive Care Unit setting. This was an interesting initial application, but it turned out to be only 0.001 better than the baseline classifier, which was a multi-layer perceptron with expert-designed features (Lipton et al. 2016). Given the dataset size (10,401 patient records), the lack of improvement may have been due to insufficient data to power accurate feature learning in the recurrent network.

Very recent work, contemporary with ours, used a GRU model with a semantic embedding in 32,787 patient records to predict the development of heart failure 3 - 6 months in the future, from medication orders and billing codes in an 18-month window. The model achieved respectable accuracy (0.88 AUC), and demonstrated a meaningful 0.05 AUC improvement over a deep feedforward network (Choi et al. 2016b).

Other recent work from the same group used a GRU model in a multi-label context to predict the medications, billing codes, and time of the next patient visit from a sequence of that same information for previous visits, using 263,706 patient records. It achieved a recall@30 of 72.4 for the task, an improvement of 20 over a single-hidden-layer MLP with 2000 units (Choi et al. 2016a). This is an example of using one of the strengths of a recurrent network - predicting the next element in a sequence. It contrasts with our work that exploits a different strength of recurrent networks: predicting a sequence or class that is semantically distinct from but parallel to the elements of the input sequence.

The closest work to ours from a medical domain perspective is a series of collaborative filter models (including co-occurrence counting, k-nearest neighbors, and logistic regression) that predict missing medications using a leave-one-drug-out evaluation design, with predictions based on the rest of the medications, ICD-9 billing codes, and demographic data. The models were trained and tested on data from 419 patients in three different clinics, with accuracy varying by clinic, as expected, but not appreciably by model. Most models ranked the missing drug in the top 10 results between 40 and 50% of the time, and ranked the therapeutic class of the drug in the top 10 results between 50 and 65% of the time.
Many aspects of our work can be found in these prior efforts, but none addresses our particular problem in the same way. Our work is unique in its learning problem of identifying all drugs a patient is likely to be taking, based only on the billing codes in the record. Like most others cited, we use recurrent neural networks in a multi-label predictive context, but in contrast to them we compare to the most similar non-recurrent model we can construct, in order to evaluate the contribution of the temporal sequence information to the solution. Finally, we use one to four orders of magnitude more data (3.3 million instances, see Section 3.1) than these prior efforts, which we hope will give us a more realistic assessment of the various deep architectures we use on our problem."}, {"section_index": "4", "section_name": "3.1 DATA", "section_text": "Our source database was the deidentified mirror of Vanderbilt's Electronic Medical Record, which contains billing codes, medication histories, laboratory test results, narrative text and medical imaging data for over 2 million patients, reaching back nearly 30 years (Roden et al. 2008). We obtained IRB approval to use this data in this research.

For this experiment we filtered all records in our database to include only the top 1,000 most common medications and the top m = 2000 most common billing codes, which cover 99.5% of all medication occurrences and 85.1% of all billing code occurrences. We then included all records from the filtered data that had at least one medication occurrence and at least ten billing code occurrences. This resulted in 610,076 complete patient records, which we divided 80/5/15 into training, validation, and final test sets.

A data instance d = {E, T, y} consisted of a sequence E = {e_1, ..., e_n} of one-hot billing code vectors e_i ∈ {0, 1}^m and their associated times T = {t_1, ..., t_n}, t_i ∈ ℝ as input, and a multi-label vector y ∈ {0, 1}^k of medication classes as the output target. The most recent n = 100 billing codes to a selected reference time point in a given patient record were collected into the input sequence E, and their occurrence times into T, zero padding if necessary. All medications that occurred during the time span of T were then collected into the output vector y. Practice patterns change over time, so simply taking the most recent 100 codes for each patient could produce a biased result. To avoid this, we chose random reference points, stratified by medication. In other words, the reference points were randomly chosen from the occurrences of each medication in the entire dataset, up to 10,000 points per medication. This resulted in 3.3 million data instances, an average of 5.4 instances per patient record. Each patient's data was included in at most one of the training, validation, or test sets.

Because there are often many approximately equivalent medication choices for a given therapeutic purpose, we converted medication names to their therapeutic class (beta blocker, immunosuppressant, corticosteroid, etc.) as a synonym reduction step. This step also aggregated generic with brand names, as well as different formulations of the same active ingredient. For this task we used the Anatomical Therapeutic Chemical Classification System (ATC)¹, which is a multi-level ontology of medications, organized by both anatomic and therapeutic class. The top level is a broad categorization of medications (Appendix B), the bottom (fifth) level is individual medications, and we used the third level, which contains 287 therapeutic classes of the approximately appropriate abstraction level for our purpose. We used a publicly available mapping² to translate between our medication names and ATC codes, with manual mapping for the minority of medications that had no mapping entry. Our set of medications used k = 182 third-level ATC codes, rendering our output label a 182-element-long multi-label vector, in which an element is set y_i = 1 if a medication in that class appeared in the set of medications identified for that instance, y_i = 0 otherwise. Some medications mapped to more than one class, and we set y_i = 1 for all of them.

Our medication data was collected from structured order entry records and extracted using NLP (Xu et al. 2010) from mentions in the narrative text of a patient record that included the medication name, dose, route and frequency. As discussed above, we assumed (and our results demonstrate) that the medication data is incomplete, and our hope was that a model learned from a sufficiently large dataset will be robust to the missing data.

¹http://www.whocc.no/atc/structure_and_principles
²https://www.nlm.nih.gov/research/umls/rxnorm/

This configuration represents the input billing codes in a sequence, but the output medications as a multi-label vector. This is because ICD-9 codes are represented sequentially in our source data, but medications are not. They are represented as a list that changes over time in the record. The usual goal of clinicians is to verify the list of medications at each visit, and if omissions or additions are indicated by the patient, to change the list to reflect that. But in the time-constrained reality of clinical practice, this reconciliation happens sporadically, and many clinicians are hesitant to change an entry on the medication list for which they were not the original prescriber, so the timing of the changes in the documentation do not reflect the timing of changes in reality. Therefore we are reduced to predicting a single multi-label vector, representing the medications that the patient probably took during the span of time represented by the input codes. (We actually did attempt some full sequence-to-sequence mappings, with various orderings of the medication sequences, but we did not achieve any promising results in that direction.)
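To make this representation concrete, here is a minimal sketch of assembling one instance d = {E, T, y}; n, m, and k follow the text, while the record format, the helper name, and the index maps are our own illustrative assumptions:

```python
import numpy as np

N_CODES = 100   # input sequence length n
M_VOCAB = 2000  # number of distinct ICD-9 codes m
K_LABELS = 182  # number of third-level ATC classes k

def build_instance(code_events, med_classes, ref_time, code_index, atc_index):
    """Assemble one instance d = {E, T, y} from a patient record.

    code_events: list of (time, icd9_code) pairs, oldest first
    med_classes: ATC class codes observed during the covered time span
    ref_time:    randomly chosen reference point (stratified by medication)
    """
    # Take the most recent n = 100 billing codes up to the reference point.
    recent = [(t, c) for t, c in code_events if t <= ref_time][-N_CODES:]

    # E: one-hot code vectors; T: occurrence times; zero-pad on the left.
    E = np.zeros((N_CODES, M_VOCAB), dtype=np.int8)
    T = np.zeros(N_CODES, dtype=np.float32)
    offset = N_CODES - len(recent)
    for i, (t, code) in enumerate(recent):
        E[offset + i, code_index[code]] = 1
        T[offset + i] = t

    # y: multi-label target over ATC classes.
    y = np.zeros(K_LABELS, dtype=np.int8)
    for atc in med_classes:
        y[atc_index[atc]] = 1  # a drug mapping to several classes sets them all
    return E, T, y
```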
"}, {"section_index": "5", "section_name": "3.2.1 RECURRENT NEURAL NETWORKS", "section_text": "Our main technical goal was to test the performance of recurrent neural networks on this sequence-centric prediction problem. To evaluate the specific gains provided by the recurrent architectures, we compare performance against a fully connected feed-forward network configured as similarly as possible to the recurrent networks, and (as baselines) a random forest and a constant-prevalence model. We discuss the specific configurations of these classifiers in this section.

We tested both LSTMs and GRUs in this experiment. We configured both architectures to first compute a semantic embedding x_i ∈ ℝ^b of each input e_i vector, before appending the times t_i (Figure 3) and feeding the result to three layers of recurrent units. The final output from the last pass of the recurrent unit is used as the multi-label prediction for each candidate medication.

The optimal hyperparameters for the model were selected in the randomized parameter optimization (Bergstra & Bengio 2012), with the embedding dimension b = 32, number of layers, and number of nodes optimized by a few trials of human-guided search. Other optimized parameters included the fraction of dropout (between layers, input gates and recurrent connections), and L1 and L2 regularization coefficients (final values are presented in Appendix A).

Both models were implemented using Keras (Chollet 2015) and trained for 300 iterations using cross-entropy under the Adadelta optimizer (Zeiler 2012).

Figure 3: Recurrent (left) and feed-forward (right) neural network architectures. Arrows indicate the flow of information. Input for both models is the sequence of billing code observations e and the sequence of corresponding timestamps t. A code observation e_i passes through an embedding layer, producing an embedding vector x_i, which is then appended with time t_i. The processed matrix then passes through either recurrent layers or feed-forward layers. The output in both cases is a single vector of label probabilities."}, {"section_index": "6", "section_name": "3.2.2 FULLY CONNECTED NEURAL NETWORK", "section_text": "The fully connected network used as similar an architecture as possible to the recurrent networks, in an attempt to isolate the gain achieved from the recurrence property. Specifically, we used the same architecture for embedding and timestamp appending (Figure 3).

Hyperparameters were optimized using random search over the number of layers, number of nodes, dropout, activation function between layers, and L1 and L2 regularization coefficients (Appendix A). (Surprisingly, the optimizer chose tanh over ReLU as the optimal activation function.)

The models were also implemented using Keras, and were trained using cross-entropy for 500 iterations under the Adadelta optimizer.
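As a concrete (unofficial) sketch of the recurrent configuration, the following uses the modern tensorflow.keras API rather than the 2016-era Keras actually used for the paper; the layer counts, sizes, and dropout fractions follow the text and Appendix A, while everything else (the loss choice for the multi-label output, exact dropout placement) is a simplifying assumption:

```python
from tensorflow.keras import layers, models, optimizers

n, m, b, k = 100, 2000, 32, 182  # sequence length, codes, embedding dim, labels

codes = layers.Input(shape=(n,), dtype="int32")        # billing code ids e_1..e_n
times = layers.Input(shape=(n, 1))                     # timestamps t_1..t_n
x = layers.Embedding(input_dim=m, output_dim=b)(codes) # semantic embedding x_i
x = layers.Concatenate()([x, times])                   # append time to each step

# Three recurrent layers of 400 nodes (Appendix A); dropout 0.1 on input
# gates and 0.75 on recurrent connections, per the GRU column of Appendix A.
x = layers.GRU(400, return_sequences=True, dropout=0.1,
               recurrent_dropout=0.75)(x)
x = layers.GRU(400, return_sequences=True, dropout=0.1,
               recurrent_dropout=0.75)(x)
x = layers.GRU(400)(x)                                 # output of the last pass
y = layers.Dense(k, activation="sigmoid")(x)           # multi-label prediction

model = models.Model([codes, times], y)
model.compile(optimizer=optimizers.Adadelta(),
              loss="binary_crossentropy")              # cross-entropy, Adadelta
```

The feed-forward comparison model of Section 3.2.2 would keep the embedding and timestamp-appending front end and replace the three GRU layers with Dense layers.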
"}, {"section_index": "7", "section_name": "3.2.3 RANDOM FOREST", "section_text": "Because the random forest model is not easily structured to operate on sequences, we represented the input data as either binary occurrence vectors v ∈ {0, 1}^m, or bag-of-codes vectors w ∈ ℕ^m (counts of each code value in the sequence), rather than as sequences of codes with associated times. No embedding was used, because the random forest code was not able to cope with the large size of the data in the (dense) embedded space.

Even in the (sparse) original space, the full dataset was too large for the random forest code, so we implemented it as an ensemble of ten independent forests, each trained on one tenth of the training data, with their average score used for test predictions.

Models were implemented using scikit-learn (Pedregosa et al. 2011) with parameters optimized under random search (Appendix A).

While other models could reasonably serve as a baseline for this work, we chose a random forest because they tend to perform well on widely varying datasets (Fernandez-Delgado et al. 2014), they are efficient to train and test, and they don't require a huge effort to optimize (in order to produce a fair comparison)."}, {"section_index": "8", "section_name": "3.3 CONSTANT-PREVALENCE MODEL", "section_text": "This minimum baseline model simply predicts the prevalence of each label for all instances. For example, if there were three possible medications, with prevalences of 0.3, 0.9, and 0.2, then the prediction of this model would be a constant [0.3, 0.9, 0.2] for each instance. We include this model in order to mitigate the fact that while all of our evaluation measures are suitable for comparing models on the same data, some are not well suited for external comparison because they depend, for example, on the prevalence of positive labels (Section 3.4). By including this model we can at least establish a true minimum baseline for reference."}, {"section_index": "9", "section_name": "3.4 EVALUATION", "section_text": "Our main evaluation focused on the models, although we also performed a separate evaluation of the embedding.

There are several possibilities for evaluation in a multi-label classification context (Sechidis et al. 2011; Zhang & Zhou 2014). We chose micro-averaged area under the ROC curve (AUC) and label ranking loss as the primary methods of evaluation, because they treat each instance with equal weight, regardless of the nature of the positive labels for that instance. In other words, we wanted primary measures that did not give a scoring advantage to instances with either very many or very few positive labels, or that included very rare or very prevalent labels. Additionally, both of these measures appeal to us as intuitive extensions of the usual binary AUC, when seen from the perspective of a single instance. However, because these two measures don't reflect all aspects of multi-label prediction performance, we also include macro-averaged AUC, label ranking average precision and coverage error measures.

Micro-averaged AUC considers each of the multiple label predictions in each instance as either true or false, and then computes the binary AUC as if they all belonged to the same 2-class problem (Zhang & Zhou 2014). In other words, micro-averaged AUC A is:

$$A = \frac{\left|\left\{(x, x', l, l') : f(x, l) \ge f(x', l'),\ (x, l) \in S,\ (x', l') \in \bar{S}\right\}\right|}{|S|\,|\bar{S}|}, \tag{1}$$

where S is the set of (instance, label) pairs with a positive label, S̄ is its complement, and f(x, l) is the predicted score for label l of instance x.

Macro-averaged AUC can be thought of as averaging the AUC performance of several one-vs-all classifiers, one model for each label. It treats each model equally, regardless of the prevalence of positive labels for that model. This gives a score of 0.5 to the constant-prevalence model, at the cost of weighting instances differently in order to achieve that. This is in contrast to micro-averaged AUC, which can be thought of as averaging across instances rather than labels. It weighs each instance equally, at the cost of a 0.5 score no longer being the random-guessing baseline.
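All of these measures have direct counterparts in scikit-learn, which is already in the toolchain here; below is a minimal sketch (our own illustration, with random placeholder arrays standing in for true labels and model scores):

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, label_ranking_loss,
                             label_ranking_average_precision_score,
                             coverage_error)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(1000, 182))   # multi-label ground truth
y_score = rng.random((1000, 182))               # model label probabilities

micro_auc = roc_auc_score(y_true, y_score, average="micro")  # A in (1)
macro_auc = roc_auc_score(y_true, y_score, average="macro")
lr_loss = label_ranking_loss(y_true, y_score)                # LR in (2)
lr_ap = label_ranking_average_precision_score(y_true, y_score)
cov = coverage_error(y_true, y_score)
```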
Label ranking average precision gives the mean fraction of correct positive labels among all positive labels with lower scores for each label. The coverage error function calculates the mean number of labels on the ranked list that are needed to cover all the positive labels of the sample. Both of these depend on the prevalence of positive labels in a test instance."}, {"section_index": "10", "section_name": "4 RESULTS AND DISCUSSION", "section_text": "The GRU model had the top performance by all measures, although the LSTM was a close second (Table 1), a performance pattern consistent with previous reports (Chung et al. 2014). The deep neural net performance was about 0.01 worse in both measures, suggesting that the recurrent models were able to use the sequence information, but only to a small advantage over the most similar non-temporal architecture. However, we note that both RNNs' performance peaked at the top end of our tractable range for model size, while the feed-forward network peaked using a model about one third that size (Appendix A). Experimenting with the architecture, we found that increasing the number of nodes or layers for the feed-forward network increased training time but not performance. This suggests that the RNN performance was limited by the hardware available, and increasing the size of the model may further increase performance, and that the feed-forward network was limited by something else.

Both random forest models were weaker than the deep neural net, as might be expected from the need to resort to binary and bag-of-codes representations of the input data.

Label ranking loss LR gives the average fraction of all possible (positive, negative) label pairs for each instance in which the negative label has a higher score than the positive label (Tsoumakas et al. 2010):

$$LR = \frac{1}{N}\sum_{j=1}^{N} \frac{1}{|Y^{(j)}|\,|\bar{Y}^{(j)}|} \left|\left\{(l, l') : r^{(j)}(l) > r^{(j)}(l'),\ (l, l') \in Y^{(j)} \times \bar{Y}^{(j)}\right\}\right|, \tag{2}$$

where Y^(j) is the set of positive labels for instance j, Ȳ^(j) is its complement, and r^(j)(l) is the rank of label l in the predicted scores.

We evaluated the embedding based on how strongly related in a clinical semantic sense the nearest neighbor to each code is (in the embedding space). A licensed physician manually annotated the list of all 2000 codes with its match category m ∈ {strongly related, loosely related, unrelated}, and we computed the empirical marginal probability P(m) of each category, the empirical conditional probability P(m|d) of the match category given the nearest neighbor (Manhattan) distance d, and the empirical marginal probability P(d). For comparison, we computed P(m) under 100 random code pairings.

Table 1: Results of multi-label classification for each model. Baseline is the constant-prevalence model. Perfect is the best possible performance for our data under the given measure.

Model | Micro-AUC | Label Ranking Loss | Macro-AUC | Label Ranking Avg. Precision | Coverage Error
GRU | 0.927 | 0.076 | 0.861 | 0.603 | 62.6
LSTM | 0.926 | 0.077 | 0.859 | 0.600 | 63.0
NN | 0.916 | 0.086 | 0.835 | 0.570 | 67.3
RF (binary) | 0.903 | 0.102 | 0.804 | 0.523 | 73.7
RF (counts) | 0.894 | 0.111 | 0.787 | 0.497 | 77.3
Baseline | 0.828 | 0.172 | 0.500 | 0.355 | 97.2
Perfect | 1.0 | 0.0 | 1.0 | 1.0 | 15.0

A natural question is what performance is good enough for clinical use. While there is little clinical experience with multi-label classifiers, we would generally expect clinicians using a binary classifier in an advisory role to find an AUC ≥ 0.9 to be useful, and AUC ≥ 0.95 to be very useful. An AUC difference of 0.01, and perhaps 0.005, are potentially noticeable in clinical use.

This 0.9/0.01 rule of thumb may loosely translate to our AUC variants, but it can directly translate to Label Ranking Loss LR (2). If we think of a single output prediction y ∈ [0, 1]^k as a set of predictions for k binary labels, then 1 − AUC for that set of predictions is equivalent to LR for the original instance y. Therefore, values of LR ≤ 0.1 may be clinically useful, and LR ≤ 0.05 may be very useful.
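As a quick numeric illustration of that correspondence using the GRU row of Table 1 (our own arithmetic; the match is approximate because micro-AUC pools all instances while LR averages per instance):

$$1 - \text{micro-AUC}_{\mathrm{GRU}} = 1 - 0.927 = 0.073 \;\approx\; 0.076 = LR_{\mathrm{GRU}}.$$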
Subjectively examining performance on 20 randomly selected cases, we find very good detailed predictions, but also evidence of both missing medications and missing billing codes. An example of a good set of detailed predictions is from a complex patient suffering from multiple myeloma (a type of cancer) with various complications. This patient was taking 26 medications, 24 of which had moderate to high probability predictions (Figure 4). (We have found by eyeball that a prediction cutoff of 0.2 gives a reasonable balance between sensitivity and specificity for our model.) In the other direction, only two of the high-prediction classes were not actually being taken, but those classes, along with several of the other moderately-predicted classes, are commonly used for cancer and are clinically reasonable for the case. (Details of this and the two cases below are in Appendix C.)

Figure 4: Medication predictions for a complicated patient. Each vertical bar represents the prediction for a single medication class, with the height of the bar representing the confidence of the prediction. Black labels with arrows indicate ATC therapeutic classes for medications the patient was actually taking. Colors and letters below the axis indicate organ system groups. More detail in Appendix C.

A good example of missing medications is a case in which the record has multiple billing codes for both osteoporosis (which is very commonly treated with medication) and postablative hypothyroidism (a deliberately induced condition that is always treated with medication), but no medications of the appropriate classes were in the record. The GRU model predicted both of these classes, which the patient was almost surely taking.

A good example of either missing billing codes or discontinued medications that remain documented as active is a case in which the record has at least five years of data consisting only of codes for Parkinson's disease, but which lists medications for high cholesterol, hypertension, and other heart disease. The GRU model predicted a reasonable set of medications for Parkinson's disease and its complications, but did not predict the other medications that are not suggested by the record.

Given how easy it was to find cases with apparently missing codes and medications, we conclude that there is indeed a substantial amount of label noise in our data, and we therefore interpret our models' performance as lower bounds on the actual performance. We are encouraged that this kind of a model may actually be useful for identifying missing medications in the record, but of course a more thorough validation, and possibly a more accurate model, would be necessary before use in a clinical scenario. A definitive experiment would use off-line research, including reconciling information from various electronic and human sources to establish the ground truth of which medications were being taken on a particular day, but such efforts are labor intensive and expensive, and can only be conducted on a very small scale.
An interesting byproduct of these models is the semantic embedding of ICD-9 codes used in the recurrent networks (Figure 5). Transforming input to a semantic embedding is a common preprocessing step to improve performance, but clearly the semantic understanding it provides to an algorithm can be useful beyond the immediate learning problem (Mikolov et al. 2013). Investigating the embedding learned in this experiment shows some generalizable potential, but it also reveals the need for further refinement before it can be truly useful. Specifically, while it's easy to find tight groups of ICD-9 codes that are strongly clinically related in our embedding, we also find groups for which we cannot see a meaningful clinical relationship.

For example, we see two groups of codes relating to kidney failure and diabetes mellitus, two classes of very prevalent disease (Figure 5, insets). In other iterations with different parameter settings, the kidney failure codes were even embedded in a sequence reflecting the natural progression of the disease, with the code for dialysis (an intensive treatment for end-stage kidney failure) embedded at the appropriate place. Interestingly, these were not the parameter settings that optimized overall prediction performance. In other settings, such as our performance-optimal setting, the sequence is close to the natural progression of the disease, but not quite identical. Nevertheless, this is an exciting result that suggests great potential.

Figure 5: A t-SNE representation of our final embedding. The insets highlight two groups of codes (diabetes mellitus and kidney failure) that are strongly related clinically, and a third group that is not. Codes are colored by whether their nearest neighbor in the embedding space (which may be different from the nearest neighbor in this t-SNE space) is strongly related (blue), loosely related (orange), or unrelated (gray) from a clinical perspective.

Further evaluation of the embedding found that 49% of codes were strongly related semantically to their nearest neighbor, 10% were loosely related, and 41% unrelated. This fraction of strongly related nearest neighbors was lower than we had hoped, but much higher than expected by chance (Figure 6), and it definitely improved classification performance. Furthermore, it was obvious by inspection that in general, codes closer in the embedding were more semantically related than distant codes, but interestingly, the distance to the nearest such neighbor showed the opposite relationship: nearest neighbors that were very close were less likely to be semantically related than nearest neighbors that were far, and this trend is roughly linear across the full range of d (Figure 6). So the sparser the points are in the embedded space, the more semantically related they are to their nearest neighbor, but the causal direction of that effect and the technical reason for it are beyond the scope of this initial work.

For this prediction problem, we settled on predicting the medications that occurred in the record during the same time span as the billing codes used. Originally, we intended to predict only the medications listed on the day of the reference point, but that turned out to greatly exacerbate the missing medication problem. After trying medications that fell on the reference day only, the week prior to the reference day, and the six months prior, our best performance both subjectively and objectively was achieved using the full time range of the input data.

While the performance of the recurrent networks was quite good, we believe it could be improved by including additional input data, such as laboratory test results, demographics, and perhaps vital signs. We also suspect that if we can devise a way to convert our medication data into reliably ordered sequences, we can more fully exploit the strengths of recurrent networks for medication prediction. We look forward to trying these and other variations in future work."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was funded by grants from the Edward Mallinckrodt, Jr. Foundation and the National Institutes of Health R21LM011664 and R01EB020666. Clinical data was provided by the Vanderbilt Synthetic Derivative, which is supported by institutional funding and by the Vanderbilt CTSA grant ULTR000445.

Figure 6: Semantic relatedness of nearest neighbors vs. the distance between them.
Solid lines. are the conditional probabilities P(m[d) for the three values of m, dashed line is the marginal probability P(d) of nearest neighbor distances d. Surprisingly, nearest neighbors that are farther away (but still the nearest neighbor) are more strongly related than nearest neighbors that are closer. in the embedding space. Shaded regions, colored to correspond to the three values of m, are the 95%. CI for empirically estimated P(m) under random pairings, and represent the expected null result.."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Lisa Bastarache and Joshua C. Denny. The use of ICD-9 codes in genetic association studies. In AMIA Annu Symp Proc, volume 2011, pp. 1738, 2011.\nSelin Caglar, Philip L Henneman, Fidela S Blank, Howard A Smithline, and Elizabeth A Henneman Emergency department medication lists are not accurate. The Journal of emergency medicine, 40 613-616, Jun 2011.\nKyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical ma- chine translation. CoRR. abs/1406.1078. 2014\nEdward Choi, Andy Schuetz, Walter F. Stewart, and Jimeng Sun. Using recurrent neural networl models for early detection of heart failure onset. J Am Med Inform Assoc, Aug 2016b..\nFrancois Chollet. Keras. https: //github. com/fchollet/keras 2015.\nJunyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation o gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555. 2014\nSimon de Lusignan and Chris van Weel. The use of routinely collected computer data for research in primary care: opportunities and challenges. Family practice, 23:253-263, Apr 2006\nJoshua C. Denny, Marylyn D. Ritchie, Melissa A. Basford, Jill M. Pulley, Lisa Bastarache, Kristin Brown-Gentry, Deede Wang, Dan R. Masys, Dan M. Roden, and Dana C. Crawford. Phewas demonstrating the feasibility of a phenome-wide scan to discover gene-disease associations Bioinformatics, 26(9):1205-1210, 2010\nManuel Fernandez-Delgado, Eva Cernadas, Senen Barro, and Dinani Amorim. Do we need hun dreds of classifiers to solve real world classification problems? Journal of Machine Learning. Research, 15:3133-3181, 2014.\nAlex Graves. Supervised Sequence Labelling with Recurrent Neural Networks. Springer, 2012\nAlex Graves, Abdel rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur rent neural networks. arXiv preprint, 1303.5778, 2013\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735 1780, November 1997\nRafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurren network architectures. Journal of Machine Learning Research, 2015..\n1CC 11111O11, netl. Assessng tne accuracy of computerized medication histories. The American journal of managed care, 10:872-877, Nov 2004. Caroline Keogh, Allen Kachalia, Karen Fiumara, Dorothy Goulart, Jonathan Coblyn, and Sonali P. Desai. Ambulatory medication reconciliation: Using a collaborative approach to process improve- ment at an academic medical center. Joint Commission journal on quality and patient safety, 42: 186-194, Apr 2016.\nThomas A. Lasko. Efficient inference of Gaussian process modulated renewal processes with ap plication to medical event data. In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI), July 2014.\nZachary C Lipton, David C Kale, Charles Elkan, and Randall Wetzell. 
Learning to diagnose with LSTM recurrent neural networks. In Proceedings of the International Conference on Learning Representaitons (1CLR 2016), 2016.\nKimberly J. O'Malley, Karon F. Cook, Matt D. Price, Kimberly Raiford Wildes, John F. Hurdle and Carol M. Ashton. Measuring diagnoses: ICD code accuracy. Health Serv Res, 40(5 Pt 2) 1620-1639, Oct 2005.\nHua Xu, Shane P Stenner, Son Doan, Kevin B Johnson, Lemuel R Waitman, and Joshua C Denny. Medex: a medication information extraction system for clinical narratives. J Am Med Inform Assoc. 17(1:19-24. 2010\nTomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representa tions of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling Z. Ghahramani, and K. Q. Weinberger (eds.), Advances in Neural Information Processing System. 26, pp. 3111-3119. Curran Associates, Inc., 2013.\n1620-1639, Oct 2005. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten-. hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and. E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research.. 12:2825-2830. 2011. D. M. Roden, J. M. Pulley, M. A. Basford, G. R. Bernard, E. W. Clayton, J. R. Balser, and D. R. Masys. Development of a large-scale de-identified dna biobank to enable personalized medicine. Clin Pharmacol Ther, 84(3):362-369, Sep 2008 D. G. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error. propagation. In D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing:. Explorations in the Microstructure of Cognition, volume 1: Foundations, pp. 318 - 362. MIT. Press, 1986. Charles Safran, Meryl Bloomrosen, W Edward Hammond, Steven Labkoff, Suzanne Markel-Fox. Paul C. Tang, Don E. Detmer, and Expert Panel. Toward a national framework for the secondary. use of health data: an american medical informatics association white paper. J Am Med Inform. Assoc, 14(1):1-9, 2007. Konstantinos Sechidis, Grigorios Tsoumakas, and Ioannis Vlahavas. On the stratification of multi\nMatthew D. Zeiler. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701, 2012\nM. L. Zhang and Z. H. Zhou. A review on multi-label learning algorithms. IEEE Transactions or Knowledge and Data Engineering. 26(8):1819-1837. Aug 2014."}, {"section_index": "13", "section_name": "APPENDIX A", "section_text": "This appendix lists the optimized parameters for the different models. 
Except where noted, param eters were optimized under random search.\nRecurrent Neural Network Models: (parameters marked with an asterisk were optimized with human-guided search.)\nParameter Model GRU LSTM Dropout for input gates 0.1 0.25 Dropout for recurrent connections 0.75 0.75 L1 applied to the input weights matrices 0 0 L1 applied to the recurrent weights matrices 0 0 L2 applied to the input weights matrices 0.0001 0.0001 L2 applied to the recurrent weights matrices 0.0001 0.001 L2 applied to the output layer's weights matrices 0.0001 0.001 Dropout before the output layer 0.5 0.5 *Number of recurrent layers 3 3 *Number of nodes in recurrent units 400 400\nFeed Forward Neural Network Model.\nRandom Forest Model (binary input)"}, {"section_index": "14", "section_name": "APPENDIX B", "section_text": "This appendix lists the top level classes for International Statistical Classification of Diseases anc Related Health Problems, Ninth Revision (ICD-9) and Anatomical Chemical Classification Systen (ATC).\n001-139 Infectious and parasitic diseases 140-239 Neoplasms 240-279 Endocrine, nutritional and metabolic diseases, and immunity disorders 280-289 Diseases of the blood and blood-forming organs. 290-319 Mental disorders 320-359 Diseases of the nervous system 360-389 Diseases of the sense organs 390-459 Diseases of the circulatory system 460-519 Diseases of the respiratory system 520-579 Diseases of the digestive system 580-629 Diseases of the genitourinary system 630-679 Complications of pregnancy, childbirth, and the puerperium. 680-709 Diseases of the skin and subcutaneous tissue. 710-739 Diseases of the musculoskeletal system and connective tissue. 740-759 Congenital anomalies 760-779 Certain conditions originating in the perinatal period. 780-799 Symptoms, signs, and ill-defined conditions 800-999 Injury and poisoning V01-V91 Supplementary - factors influencing health status and contact with health se. 000-E999 Supplementary - external causes of injury and poisoning.\nTop level groups ATC codes and their corresponding colors used in Figure4and|Appendix ("}, {"section_index": "15", "section_name": "APPENDIX C", "section_text": "This appendix presents results from three illustrative cases from the dozen cases randomly selectec for individual evaluation.\n203.00 Multiple myeloma, without mention of having achieved remission. 4.8 months ago 273.1 Monoclonal paraproteinemia 4.8 months ago 285.9 Anemia, unspecified 4.8 months ago 276.50 Volume depletion, unspecified 4.8 months ago 733.00 Osteoporosis, unspecified 4.8 months ago 203.00 Multiple myeloma, without mention of having achieved remission. 4.8 months ago 203.00 Multiple myeloma, without mention of having achieved remission. 2.9 months ago 203.01 Multiple myeloma, in remission. 2.9 months ago 273.1 Monoclonal paraproteinemia 2.9 months ago 273.1 Monoclonal paraproteinemia 1.6 months ago 279.3 Unspecified immunity deficiency 1.6 months ago 203.00 Multiple myeloma, without mention of having achieved remission. 1.6 months ago 781.2 Abnormality of gait 3.7 weeks ago 203.00 Multiple myeloma, without mention of having achieved remission. 
3.7 weeks ago 401.9 Unspecified essential hypertension 3.7 weeks ago V12.54 Personal history of transient ischemic attack (TIA), and cerebral infarction without residual deficits 3.7 weeks ago 794.31 Nonspecific abnormal electrocardiogram [ECG] [EKG] 3.7 weeks ago 786.09 Other respiratory abnormalities 3.7 weeks ago 273.1 Monoclonal paraproteinemia 3.7 weeks ago 203.00 Multiple myeloma, without mention of having achieved remission. 3.6 weeks ago V58.69 Long-term (current) use of other medications 3.6 weeks ago 794.31 Nonspecific abnormal electrocardiogram [ECG] [EKG] 3.4 weeks ago 203.00 Multiple myeloma, without mention of having achieved remission. 4 days ago V42.82 Peripheral stem cells replaced by transplant 4 days ago 203.01 Multiple myeloma, in remission. 3 days ago 38.97 Central venous catheter placement with guidance 3 days ago V42.82 Peripheral stem cells replaced by transplant 3 days ago V58.81 Fitting and adjustment of vascular catheter 3 days ago 203.00 Multiple myeloma, without mention of having achieved remission. 3 days ago V42.82 Peripheral stem cells replaced by transplant 2 days ago 203.01 Multiple myeloma, in remission 2 days ago 203.00 Multiple myeloma, without mention of having achieved remission. 1 day ago V42.82 Peripheral stem cells replaced by transplant 1 day ago 203.00 Multiple myeloma, without mention of having achieved remission. now V42.82 Peripheral stem cells replaced by transplant now S01C S02B S03B D07X D07A H02A S01B 1.0 C05A D10A J05A R01A A01A N02A B05C A12C -\nD07X D07A H02A S01B 1.0 C05A D10A A01A J05A R01A N02A B05C A12C B05X 0.8 L04A N02B S01A 0.6 J01D C030 0.4 J01M C07A N03A 0.2 J01X M03B\nMedication predictions for a complicated patient. Each vertical bar represents the prediction for a single medication class, with the height of the bar representing the confidence of the prediction Black labels above arrows indicate ATC therapeutic classes for medications the patient was actually taking. Colors and letters below the axis indicate high-level therapeutic class groups.\nPredicted vs. actual medication classes for the patient in Case 1. The four-character sequence in the first and fourth columns is the ATC code for the medication therapeutic class, and an asterisk in the first column indicates that the predicted medication is in the actual medication list. Probabilities listed are the model predictions for the listed therapeutic class. In the predicted medications column, all predictions with probability at least 0.2 are listed\nTop predictions Prob. True labels Prob. 
S03B* Corticosteroids 97.01% S03B Corticosteroids 97.01% S01C* Antiinflammatory agents and antiinfectives in combi- 95.54% S01C Antiinflammatory agents and antiinfectives in combi- 95.54% nation nation S02B* Corticosteroids 95.54% S02B Corticosteroids 95.54% L01A Alkylating agents 94.00% D07X Corticosteroids, other combinations 93.37% D07X* Corticosteroids, other combinations 93.37% H02A Corticosteroids for systemic use, plain 91.06% H02A* Corticosteroids for systemic use, plain 91.06% D07A Corticosteroids, plain 90.83% D07A* Corticosteroids, plain 90.83% S01B Antiinflammatory agents 90.79% S01B* Antiinflammatory agents 90.79% D10A Anti-acne preparations for topical use 88.56% D10A* Anti-acne preparations for topical use 88.56% C05A Agents for treatment of hemorrhoids and anal fissures 88.52% for topical use C05A* Agents for treatment of hemorrhoids and anal fissures 88.52% R01A Decongestants and other nasal preparations for topi- 87.02% for topical use cal use A04A Antiemetics and antinauseants 87.95% J05A Direct acting antivirals 86.83% R01A* Decongestants and other nasal preparations for topi 87.02% A01A Stomatological preparations 86.11% cal use J05A* Direct acting antivirals 86.8% N02A Opioids 84.86% A01A* Stomatological preparations 86.11% B05C Irrigating solutions 82.56% N02A* Opioids 84.86% A12C Other mineral supplements 79.50% B05C* Irrigating solutions 82.56% B05X I. V. solution additives 74.84% A12C* Other mineral supplements 79.50% L04A Immunosuppressants 68.76% B05X* I.v. solution additives 74.84% N02B Other analgesics and antipyretics 57.24% L04A* Immunosuppressants 68.76% S01A Antiinfectives 54.59% N05A Antipsychotics 58.64% J01D Other beta-lactam antibacterials 43.40% N02B* Other analgesics and antipyretics 57.24% C03C High-ceiling diuretics 39.88% S01A* Antiinfectives 54.59% J01M Quinolone antibacterials 29.78% L03A Immunostimulants 45.96% C07A Beta blocking agents 27.08% A02B Drugs for peptic ulcer and gastro-oesophageal reflux 44.56% disease J01D* Other beta-lactam antibacterials 43.40% N03A Antiepileptics 20.00% C03C* High-ceiling diuretics 39.88% J01X Other antibacterials 5.88% B01A Antithrombotic agents 37.80% M03B Muscle relaxants, centrally acting agents 5.09% V03A All other therapeutic products 34.18% R06A Antihistamines for systemic use 31.78% A06A Drugs for constipation 31.57% J01M* Quinolone antibacterials 29.78% N05B Anxiolytics 29.42% D04A Antipruritics, incl. antihistamines, anesthetics, etc. 27.62% C07A* Beta blocking agents 27.08% L01X Other antineoplastic agents 24.72% R05C Expectorants, excl. combinations with cough sup- 20.43% pressants N03A* Antiepileptics 20.00%\n1CD-9 code Code description Time estimate (ago) 735.4 Other hammer toe (acquired) 2.4 years ago 729.5 Pain in limb 2.4 years ago 244.1 Other postablative hypothyroidism 1.5 years ago 285.9 Anemia, unspecified 1.5 years ago 244.1 Other postablative hypothyroidism 1.2 years ago 244.1 Other postablative hypothyroidism 11.5 months ago 733.00 Osteoporosis, unspecified 11.5 months ago 733.01 Senile osteoporosis 7.7 months ago 268.9 Unspecified vitamin D deficiency 7.7 months ago 729.5 Pain in limb 7.7 months ago 174.9 Malignant neoplasm of breast (female), unspecified. 
7.7 months ago 722.52 Degeneration of lumbar or lumbosacral intervertebral disc 7.7 months ago 279.3 Unspecified immunity deficiency 7.7 months ago 733.01 Senile osteoporosis 6.4 months ago 733.01 Senile osteoporosis 6.2 months ago 244.1 Other postablative hypothyroidism 6.0 months ago 401.1 Benign essential hypertension 6.0 months ago V58.69 Long-term (current) use of other medications 1.9 weeks ago 733.01 Senile osteoporosis now 244.1 Other postablative hypothyroidism now V58.69 Long-term (current) use of other medications now\nPredicted vs. actual medication classes for Case 2. Table structure as in case 1\nTop predictions Prob. True labels Prob. M05B Drugs affecting bone structure and mineralization 88.18% A11C Vitamin a and d, incl. combinations of the two 39.42% H03A Thyroid preparations 84.82% N06A Antidepressants 20.88% H05A Parathyroid hormones and analogues 66.33% C10A Lipid modifying agents, plain 17.05% A11C* Vitamin a and d, incl. combinations of the two 39.42% N03A Antiepileptics 15.61% N02B Other analgesics and antipyretics 37.58% C09C Angiotensin ii antagonists, plain 10.38% A01A Stomatological preparations 23.05% L02B Hormone antagonists and related agents 4.22% A12A Calcium 21.59% N06A* Antidepressants 20.88% C07A Beta blocking agents 20.81% 0.9r 0.8 0.7 0.6 0.5 Al1C 0.4 N06A 0.3 C10A N03A 0.2 02B 0.1 0.0 A B C G H M N R\nMedication predictions for a simpler patient. Note that the high-prediction medications are clinicall reasonable given the billing codes in the sequence. Figure representation as in case 1.\nTop predictions Prob. True labels Prob. M05B Drugs affecting bone structure and mineralization 88.18% A11C Vitamin a and d, incl. combinations of the two 39.42% H03A Thyroid preparations 84.82% N06A Antidepressants 20.88% H05A Parathyroid hormones and analogues 66.33% C10A Lipid modifying agents, plain 17.05% A11C* Vitamin a and d, incl. combinations of the two 39.42% N03A Antiepileptics 15.61% N02B Other analgesics and antipyretics 37.58% C09C Angiotensin ii antagonists, plain 10.38% A01A Stomatological preparations 23.05% L02B Hormone antagonists and related agents 4.22% A12A Calcium 21.59% N06A* Antidepressants 20.88% C07A Beta blocking agents 20.81% 0.9 0.8 0.7 0.6 0.5 Al1C 0.4 N06A 0.3 C10A N03A 0.2 02B 0.1 B D G H M A P R S\nPredicted vs. actual medication classes for Case 3. Table structure as in case 1.\nC10A 0.2 C09A CO1E C02 G03B 0.0 R H M N D R S\nMedication predictions for a patient with only one ICD-9 code, repeated many times over five years The medications listed under true labels are not indicated for paralysis agitans (Parkinson's disease) but the patient was surely taking them for reasons not documented in the ICD-9 sequence. The model predicted mostly reasonable medications for a patient with Parkinson's disease, especially Dopaminergic agents, which is the primary treatment for the disease. Figure representation as ir case 1, above.\nTop predictions Prob. True labels Prob. N04B Dopaminergic agents 97.66% C10A Lipid modifying agents, plain. 13.90% N03A Antiepileptics 34.01% C09A Ace inhibitors, plain. 9.21% N02B Other analgesics and antipyretics. 32.81% C01E Other cardiac preparations 5.56% N06A Antidepressants 26.10% C02C Antiadrenergic agents, peripherally acting 0.72% N02A Opioids 20.33% G03B Androgens 0.32% A14A Anabolic steroids 0.08% 1.0 0.8 0.6 0.4 C10A 0.2 C09A CO1E A14A C02C G03B 0.0 A B D H M N P R S"}] |
rJ8uNptgl | [{"section_index": "0", "section_name": "TOWARDS THE LIMIT OF NETWORK QUANTIZATION", "section_text": "Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee

{yoojin.c,mostafa.e,jungwon2.lee}@samsung.com"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Network quantization is one of network compression techniques to reduce the redundancy of deep neural networks. It reduces the number of distinct network parameter values by quantization in order to save the storage for them. In this paper, we design network quantization schemes that minimize the performance loss due to quantization given a compression ratio constraint. We analyze the quantitative relation of quantization errors to the neural network loss function and identify that the Hessian-weighted distortion measure is locally the right objective function for the optimization of network quantization. As a result, Hessian-weighted k-means clustering is proposed for clustering network parameters to quantize. When optimal variable-length binary codes, e.g., Huffman codes, are employed for further compression, we derive that the network quantization problem can be related to the entropy-constrained scalar quantization (ECSQ) problem in information theory and consequently propose two solutions of ECSQ for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm. Finally, using the simple uniform quantization followed by Huffman coding, we show from our experiments that the compression ratios of 51.25, 22.17 and 40.65 are achievable for LeNet, 32-layer ResNet and AlexNet, respectively."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks have emerged to be the state-of-the-art in the field of machine learning for image classification, object detection, speech recognition, natural language processing, and machine translation (LeCun et al., 2015). The substantial progress of neural networks however comes with high cost of computations and hardware resources resulting from a large number of parameters. For example, Krizhevsky et al. (2012) came up with a deep convolutional neural network consisting of 61 million parameters and won the ImageNet competition in 2012. It is followed by deeper neural networks with even larger numbers of parameters, e.g., Simonyan & Zisserman (2014).

Besides network quantization, network pruning has been studied for network compression to remove redundant parameters permanently from neural networks (Mozer & Smolensky, 1989; LeCun et al., 1989; Hassibi & Stork, 1993; Han et al., 2015b; Lebedev & Lempitsky, 2016; Wen et al., 2016). Matrix/tensor factorization and low-rank approximation have been investigated as well to find more efficient representations of neural networks with a smaller number of parameters and consequently to save computations (Sainath et al., 2013; Xue et al., 2013; Jaderberg et al., 2014; Lebedev et al., 2014; Yang et al., 2015; Liu et al., 2015; Kim et al., 2015; Tai et al., 2015; Novikov et al., 2015). Moreover, similar to network quantization, low-precision network implementation has been examined in Vanhoucke et al. (2011); Courbariaux et al. (2014); Anwar et al. (2015); Gupta et al. (2015); Lin et al. (2015a). Some extremes of low-precision neural networks consisting of binary or ternary parameters can be found in Courbariaux et al. (2015); Lin et al. (2015b); Rastegari et al. (2016).
We note that these are different types of network compression techniques, which can be employed on top of each other.

The large sizes of deep neural networks make it difficult to deploy them on resource-limited devices, e.g., mobile or portable devices, and network compression is of great interest in recent years to reduce computational cost and memory requirements for deep neural networks. Our interest in this paper is mainly on curtailing the size of the storage (memory) for network parameters (weights and biases). In particular, we focus on the network size compression by reducing the number of distinct network parameters by quantization.

The most related work to our investigation in this paper can be found in Gong et al. (2014); Han et al. (2015a), where a conventional quantization method using k-means clustering is employed for network quantization. This conventional approach however is proposed with little consideration for the impact of quantization errors on the neural network performance loss and no effort to optimize the quantization procedure for a given compression ratio constraint. In this paper, we reveal the suboptimality of this conventional method and newly design quantization schemes for neural networks. In particular, we formulate an optimization problem to minimize the network performance loss due to quantization given a compression ratio constraint and find efficient quantization methods for neural networks.

The main contribution of the paper can be summarized as follows.

- It is derived that the performance loss due to quantization in neural networks can be quantified approximately by the Hessian-weighted distortion measure. Then, Hessian-weighted k-means clustering is proposed for network quantization to minimize the performance loss.
- It is identified that the optimization problem for network quantization provided a compression ratio constraint can be reduced to an entropy-constrained scalar quantization (ECSQ) problem when optimal variable-length binary coding is employed after quantization. Two efficient heuristic solutions of ECSQ are proposed for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm.
- As an alternative of Hessian, it is proposed to utilize some function (e.g., square root) of the second moment estimates of gradients when the Adam (Kingma & Ba, 2014) stochastic gradient descent (SGD) optimizer is used in training. The advantage of using this alternative is that it is computed while training and can be obtained at the end of training at no additional cost.
- It is shown how the proposed network quantization schemes can be applied for quantizing network parameters of all layers together at once, rather than layer-by-layer network quantization in Gong et al. (2014); Han et al. (2015a). This follows from our investigation that Hessian-weighting can handle the different impact of quantization errors properly not only within layers but also across layers. Moreover, quantizing network parameters of all layers together, one can even avoid layer-by-layer compression rate optimization.
The rest of the paper is organized as follows. In Section 2, we define the network quantization problem and review the conventional quantization method using k-means clustering. Section 3 discusses Hessian-weighted network quantization. Our entropy-constrained network quantization schemes follow in Section 4. Finally, experiment results and conclusion can be found in Sections 5 and 6, respectively.

We consider a neural network that is already trained, pruned if employed, and fine-tuned before quantization. If no network pruning is employed, all parameters in a network are subject to quantization. For pruned networks, our focus is on quantization of unpruned parameters.

The goal of network quantization is to quantize (unpruned) network parameters in order to reduce the size of the storage for them while minimizing the performance degradation due to quantization. For network quantization, network parameters are grouped into clusters. Parameters in the same cluster share their quantized value, which is the representative value (i.e., cluster center) of the cluster they belong to. After quantization, lossless binary coding follows to encode quantized parameters into binary codewords to store instead of actual parameter values. Either fixed-length binary coding or variable-length binary coding, e.g., Huffman coding, can be employed to this end.

Suppose that we have total N parameters in a neural network. Before quantization, each parameter is assumed to be of b bits. For quantization, we partition the network parameters into k clusters. Let C_i be the set of network parameters in cluster i and let b_i be the number of bits of the codeword assigned to the network parameters in cluster i for 1 ≤ i ≤ k. For a lookup table to decode quantized values from their binary encoded codewords, we store k binary codewords (b_i bits for 1 ≤ i ≤ k) and corresponding quantized values (b bits for each). The compression ratio is then given by

$$\text{Compression ratio} = \frac{Nb}{\sum_{i=1}^{k}(|\mathcal{C}_i| + 1)b_i + kb}. \tag{1}$$

Observe in (1) that the compression ratio depends not only on the number of clusters but also on the sizes of the clusters and the lengths of the binary codewords assigned to them, in particular, when a variable-length code is used for encoding quantized values. For fixed-length codes, however, all codewords are of the same length, i.e., $b_i = \lceil \log_2 k \rceil$ for all $1 \le i \le k$, and thus the compression ratio is reduced to only a function of the number of clusters, i.e., k, assuming that N and b are given.

Provided network parameters $\{w_i\}_{i=1}^{N}$ to quantize, k-means clustering partitions them into k disjoint sets (clusters), denoted by $\mathcal{C}_1, \mathcal{C}_2, \dots, \mathcal{C}_k$, while minimizing the mean square quantization error (MSQE) as follows:

$$\underset{\mathcal{C}_1,\dots,\mathcal{C}_k}{\arg\min} \sum_{i=1}^{k} \sum_{w \in \mathcal{C}_i} |w - c_i|^2, \quad \text{where} \quad c_i = \frac{1}{|\mathcal{C}_i|}\sum_{w \in \mathcal{C}_i} w. \tag{2}$$
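To make (1) and (2) concrete, here is a minimal sketch of conventional k-means quantization of a weight vector with fixed-length coding, using scikit-learn's KMeans as the clustering routine; this is our own illustration under those assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize(weights, k=32, bits_per_param=32):
    """Conventional k-means network quantization, as in (2), with the
    fixed-length-coding compression ratio from (1)."""
    w = weights.reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10).fit(w)
    quantized = km.cluster_centers_[km.labels_].ravel()  # shared cluster centers

    n = w.shape[0]
    b = bits_per_param
    bi = int(np.ceil(np.log2(k)))            # fixed-length codewords
    sizes = np.bincount(km.labels_, minlength=k)
    ratio = n * b / (np.sum((sizes + 1) * bi) + k * b)   # equation (1)
    return quantized, ratio

weights = np.random.randn(100_000).astype(np.float32)    # stand-in parameters
q, ratio = kmeans_quantize(weights, k=32)
```

With k = 32 clusters and b = 32-bit parameters, every codeword has $b_i = \lceil \log_2 32 \rceil = 5$ bits, so the ratio approaches $b/b_i = 6.4$ for large N.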
We observe two issues with employing k-means clustering for network quantization.

- First, although k-means clustering minimizes the MSQE, this does not imply that k-means clustering also minimizes the performance loss due to quantization in neural networks. K-means clustering treats the quantization errors of all network parameters with equal importance. However, the quantization errors of some network parameters may degrade the performance more significantly than those of the others. Thus, for minimizing the loss due to quantization in neural networks, one needs to take this dissimilarity into account.
- Second, k-means clustering does not consider any compression ratio constraint. It simply minimizes its distortion measure for a given number of clusters, i.e., for k clusters. This is, however, suboptimal when variable-length coding follows, since the compression ratio depends not only on the number of clusters but also on the sizes of the clusters and the codeword lengths assigned to them, which are determined by the binary coding scheme employed after clustering. Therefore, for the optimization of network quantization under a compression ratio constraint, one needs to take the impact of binary coding into account, i.e., we need to solve the quantization problem under the actual compression ratio constraint imposed by the specific binary coding scheme employed after clustering.

3 HESSIAN-WEIGHTED NETWORK QUANTIZATION

In this section, we analyze the impact of quantization errors on the neural network loss function and derive that the Hessian-weighted distortion measure is a relevant objective function for network quantization in order to minimize the quantization loss locally. Moreover, from this analysis, we propose Hessian-weighted k-means clustering for network quantization to minimize the performance loss due to quantization in neural networks.

3.1 NETWORK MODEL

We consider a general non-linear neural network that yields output y = f(x; w) from input x, where w = [w_1 \cdots w_N]^T is the vector consisting of all trainable network parameters in the network; N is the total number of trainable parameters in the network. A loss function loss(y, \hat{y}) is defined as the objective function that we aim to minimize on average, where \hat{y} = \hat{y}(x) is the expected (ground-truth) output for input x. Cross entropy and mean square error are typical examples of a loss function. Given a training data set X_train, we optimize the network parameters by solving the following problem, e.g., approximately by using a stochastic gradient descent (SGD) method with mini-batches:

\hat{w} = \operatorname*{argmin}_{w} L(X_{\text{train}}; w), \quad \text{where} \quad L(X; w) = \frac{1}{|X|} \sum_{x \in X} \text{loss}(f(x; w), \hat{y}(x)).

3.2 HESSIAN-WEIGHTED QUANTIZATION ERROR

The average loss function L(X; w) can be expanded by a Taylor series with respect to w as follows:

\delta L(X; w) = g(w)^T \delta w + \frac{1}{2} \delta w^T H(w) \delta w + O(\|\delta w\|^3), \qquad (3)

where

g(w) = \frac{\partial L(X; w)}{\partial w}, \quad H(w) = \frac{\partial^2 L(X; w)}{\partial w^2};

the square matrix H(w) consisting of the second-order partial derivatives is called the Hessian matrix, or Hessian. Assume that the loss function has reached one of its local minima, at w = \hat{w}, after training. At local minima, the gradients are all zero, i.e., we have g(\hat{w}) = 0, and thus the first term on the right-hand side of (3) can be neglected at w = \hat{w}. The third term on the right-hand side of (3) is also ignored under the assumption that the average loss function is approximately quadratic at the local minimum w = \hat{w}. Finally, for simplicity, we approximate the Hessian matrix by a diagonal matrix, setting its off-diagonal terms to zero. Then, it follows from (3) that

\delta L(X; \hat{w}) \approx \frac{1}{2} \sum_{i=1}^{N} h_{ii}(\hat{w}) \, |\delta w_i|^2, \qquad (4)

where h_{ii}(\hat{w}) is the second-order partial derivative of the average loss function with respect to w_i evaluated at w = \hat{w}, which is the i-th diagonal element of the Hessian matrix H(\hat{w}).

We now connect (4) to the quantization problem by treating \delta w_i as the quantization error of network parameter w_i at its local optimum w_i = \hat{w}_i, i.e.,

\delta w_i = \bar{w}_i - \hat{w}_i, \qquad (5)

where \bar{w}_i is the quantized value of \hat{w}_i. Finally, combining (4) and (5), we derive that the local impact of quantization on the average loss function at w = \hat{w} can be quantified approximately as follows:

\delta L(X; \hat{w}) \approx \frac{1}{2} \sum_{i=1}^{N} h_{ii}(\hat{w}) \, |\bar{w}_i - \hat{w}_i|^2. \qquad (6)

At a local minimum, the diagonal elements of the Hessian, i.e., the h_{ii}(\hat{w})'s, are all non-negative, and thus the summation in (6) is always additive, implying that the average loss function either increases or stays the same. Therefore, the performance degradation due to the quantization of a neural network can be measured approximately by the Hessian-weighted distortion shown in (6). Further discussion of the Hessian-weighted distortion measure can be found in Appendix A.1.

3.3 HESSIAN-WEIGHTED K-MEANS CLUSTERING

For notational simplicity, we use w_i \equiv \hat{w}_i and h_{ii} \equiv h_{ii}(\hat{w}) from now on. The optimal clustering that minimizes the Hessian-weighted distortion measure is given by

\operatorname*{argmin}_{C_1, C_2, \dots, C_k} \sum_{j=1}^{k} \sum_{w_i \in C_j} h_{ii} \, |w_i - c_j|^2, \quad \text{where} \quad c_j = \frac{\sum_{w_i \in C_j} h_{ii} w_i}{\sum_{w_i \in C_j} h_{ii}}. \qquad (7)

We call this Hessian-weighted k-means clustering. Observe in (7) that, in defining the distortion measure for clustering, we give a larger penalty to a network parameter whose second-order partial derivative is larger, in order to avoid a large deviation from its original value, since the impact of quantization on the loss function is expected to be larger for that parameter.

Hessian-weighted k-means clustering is locally optimal in minimizing the quantization loss when fixed-length binary coding follows, where the compression ratio depends solely on the number of clusters, as shown in Section 2.1. Similar to conventional k-means clustering, solving this optimization is not easy, but Lloyd's algorithm is still applicable as an efficient heuristic solution for this problem if Hessian-weighted means are used as the cluster centers instead of non-weighted regular means.

3.4 HESSIAN COMPUTATION

For obtaining the Hessian, one needs to evaluate the second-order partial derivative of the average loss function with respect to each of the network parameters, i.e., we need to calculate

h_{ii}(\hat{w}) = \frac{\partial^2 L(X; w)}{\partial w_i^2} \bigg|_{w=\hat{w}} = \frac{1}{|\Gamma|} \sum_{x \in \Gamma} \frac{\partial^2 \, \text{loss}(f(x; w), \hat{y}(x))}{\partial w_i^2} \bigg|_{w=\hat{w}}. \qquad (8)

Recall that we are interested only in the diagonal elements of the Hessian. An efficient way of computing the diagonal of the Hessian is presented in Le Cun (1987); Becker & Le Cun (1988); it is based on a back propagation method similar to the back propagation algorithm used for computing the first-order partial derivatives (gradients). That is, computing the diagonal of the Hessian is of the same order of complexity as computing the gradients.

Hessian computation and our network quantization are performed after completing network training. For the data set \Gamma used to compute the Hessian in (8), we can either reuse the training data set or use some other data set, e.g., a validation data set. We observed from our experiments that even using a small subset of the training or validation data set is sufficient to yield a good approximation of the Hessian for network quantization.

3.5 ALTERNATIVE OF HESSIAN

Although there is an efficient way to obtain the diagonal of the Hessian, as discussed in the previous subsection, Hessian computation is not free. In order to avoid this additional Hessian computation, we propose to use an alternative metric instead of the Hessian. In particular, we consider neural networks trained with the Adam SGD optimizer (Kingma & Ba, 2014) and propose to use some function (e.g., the square root) of the second moment estimates of the gradients as an alternative to the Hessian.

The Adam algorithm computes adaptive learning rates for individual network parameters from the first and second moment estimates of the gradients. We compare the Adam method to Newton's optimization method using the Hessian and notice that the second moment estimates of the gradients in the Adam method act like the Hessian in Newton's method. This observation leads us to use some function (e.g., the square root) of the second moment estimates of the gradients as an alternative to the Hessian.

The advantage of using the second moment estimates from the Adam method is that they are computed while training, and we can obtain them at the end of training at no additional cost. This makes Hessian-weighting more feasible for deep neural networks, which have millions of parameters. We note that similar quantities can be found and used for other SGD optimization methods using adaptive learning rates, e.g., AdaGrad (Duchi et al., 2011), Adadelta (Zeiler, 2012) and RMSProp (Tieleman & Hinton, 2012).
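As a concrete illustration of Sections 3.3 and 3.5, the following Python sketch runs Lloyd's algorithm with Hessian-weighted means as the cluster centers (Eq. 7), standing in for the diagonal Hessian with Adam-style second moment estimates. All names and the synthetic data are our own illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def hessian_weighted_kmeans(w, h, k, iters=50, seed=0):
    """Lloyd-style clustering minimizing sum_j sum_{i in C_j} h_i*(w_i - c_j)^2."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(w, size=k, replace=False)
    assign = np.zeros(w.shape[0], dtype=int)
    for _ in range(iters):
        # Assignment step: Hessian-weighted squared distance to each center.
        dist = h[:, None] * (w[:, None] - centers[None, :]) ** 2
        assign = dist.argmin(axis=1)
        # Update step: Hessian-weighted mean of each cluster (Eq. 7).
        for j in range(k):
            mask = assign == j
            if mask.any():
                centers[j] = np.sum(h[mask] * w[mask]) / np.sum(h[mask])
    return centers, assign

# Toy example: parameters plus a surrogate for diag(H), e.g. the square root
# of Adam's second moment estimates (Section 3.5); both arrays are synthetic.
w = np.random.randn(10000)
v_hat = np.abs(np.random.randn(10000))   # stand-in for Adam's v_t
h = np.sqrt(v_hat) + 1e-8                # alternative Hessian weights
centers, assign = hessian_weighted_kmeans(w, h, k=8)
w_quantized = centers[assign]
```

The only change relative to plain k-means is the weighting in both steps: parameters with large h_i pull their cluster center toward themselves and resist being assigned to distant centers.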
3.6 QUANTIZATION OF ALL LAYERS

We propose quantizing the network parameters of all layers in a neural network together at once, taking the Hessian-weights into account. Layer-by-layer quantization was examined in previous work (Gong et al., 2014; Han et al., 2015a). In Han et al. (2015a), for example, a larger number of bits (i.e., a larger number of clusters) is assigned to convolutional layers than to fully-connected layers, which implies that convolutional layers are heuristically treated as more important. This follows from the fact that the impact of quantization errors on the performance varies significantly across layers; some layers, e.g., convolutional layers, may be more important than others. This concern is exactly what we can address by Hessian-weighting.

Hessian-weighting properly handles the different impact of quantization errors not only within layers but also across layers, and thus it can be employed for quantizing all layers of a network together. The impact of quantization errors may vary more substantially across layers than within layers; thus, Hessian-weighting may show more benefit in deeper neural networks. We note that Hessian-weighting can still provide a gain even for layer-by-layer quantization, since it can address the different impact of the quantization errors of the network parameters within each layer as well.

Recent neural networks are getting deeper; see, e.g., Szegedy et al. (2015a;b); He et al. (2015). For such deep neural networks, quantizing the network parameters of all layers together is even more advantageous, since we can avoid layer-by-layer compression rate optimization. Optimizing compression ratios jointly across all individual layers (to maximize the overall compression ratio for a network) requires exponential time complexity with respect to the number of layers, because the total number of possible combinations of compression ratios for the individual layers increases exponentially as the number of layers increases.

4 ENTROPY-CONSTRAINED NETWORK QUANTIZATION

In this section, we investigate how to solve the network quantization problem under a constraint on the compression ratio. In designing network quantization schemes, we not only want to minimize the performance loss but also want to maximize the compression ratio. In Section 3, we explored how to quantify and minimize the loss due to quantization. In this section, we investigate how to take the compression ratio into account properly in the optimization of network quantization.

4.1 ENTROPY CODING

After quantizing the network parameters by clustering, lossless data compression by variable-length binary coding can follow to compress the quantized values. There is a set of optimal codes that achieve the minimum average codeword length for a given source. Entropy is the theoretical limit of the average codeword length per symbol that can be achieved by lossless data compression, as proved by Shannon (see, e.g., Cover & Thomas (2012, Section 5.3)). It is known that optimal codes achieve this limit with an overhead of less than 1 bit when only integer-length codewords are allowed; such optimal coding is therefore also called entropy coding. Huffman coding is one of the entropy coding schemes commonly used when the source distribution is provided (see, e.g., Cover & Thomas (2012, Section 5.6)), or can be estimated.

4.2 ENTROPY-CONSTRAINED SCALAR QUANTIZATION (ECSQ)

Considering a compression ratio constraint in network quantization, we need to solve the clustering problem in (2) or (7) under the compression ratio constraint given by

\text{Compression ratio} = \frac{b}{\bar{b} + (\sum_{i=1}^{k} b_i + kb)/N} \ge C, \quad \text{where} \quad \bar{b} = \frac{1}{N} \sum_{i=1}^{k} |C_i| b_i, \qquad (9)

which follows from (1). This optimization problem is too complex to solve for an arbitrary variable-length binary code, since the average codeword length \bar{b} can be arbitrary. However, we identify that it can be simplified if optimal codes, e.g., Huffman codes, are assumed to be used. In particular, optimal coding closely achieves the lower limit of the average source code length, i.e., the entropy, and then we approximately have

\bar{b} \approx H = -\sum_{i=1}^{k} p_i \log_2 p_i, \qquad (10)

where H is the entropy of the quantized network parameters after clustering (i.e., of the source), and p_i = |C_i|/N is the ratio of the number of network parameters in cluster C_i to the number of all network parameters (i.e., the source distribution). Moreover, assuming that N \gg k, we have

\frac{1}{N} \Big( \sum_{i=1}^{k} b_i + kb \Big) \approx 0,

and the constraint in (9) can be replaced by an entropy constraint:

H = -\sum_{i=1}^{k} p_i \log_2 p_i \le R, \qquad (11)

where R \approx b/C. In summary, assuming that optimal coding is employed after clustering, one can approximately replace a compression ratio constraint with an entropy constraint on the clustering output. The network quantization problem is then translated into a quantization problem with an entropy constraint, which is called entropy-constrained scalar quantization (ECSQ) in information theory. Two efficient heuristic solutions for ECSQ are proposed for network quantization in the following subsections, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm for k-means clustering.

4.3 UNIFORM QUANTIZATION

It is shown in Gish & Pierce (1968) that the uniform quantizer is asymptotically optimal in minimizing the mean square quantization error for any random source with a reasonably smooth density function as the resolution becomes infinite, i.e., as the number of clusters k \to \infty. This asymptotic result leads us to a very simple but efficient network quantization scheme, as follows:

1. We first set uniformly spaced thresholds and divide the network parameters into clusters.
2. After determining the clusters, their quantized values (cluster centers) are obtained by taking the mean of the network parameters in each cluster.

Note that one can use the Hessian-weighted mean instead of the non-weighted mean when computing the cluster centers in the second step above, in order to take advantage of Hessian-weighting. A performance comparison of uniform quantization with the non-weighted mean and uniform quantization with the Hessian-weighted mean can be found in Appendix A.2.

Although uniform quantization is a straightforward method, it has never been shown before in the literature that it is actually one of the most efficient quantization schemes for neural networks when optimal variable-length coding, e.g., Huffman coding, follows. We note that uniform quantization is not always good: it is inefficient for fixed-length coding, which is also first shown in this paper.

4.4 ITERATIVE ALGORITHM TO SOLVE ECSQ

Another scheme proposed to solve the ECSQ problem for network quantization is an iterative algorithm, similar to Lloyd's algorithm for k-means clustering. Although this iterative solution is more complicated than the uniform quantization of Section 4.3, it finds a local optimum for a given discrete source. An iterative algorithm to solve the general ECSQ problem is provided in Chou et al. (1989). We derive a similar iterative algorithm to solve the ECSQ problem for network quantization. The main difference from the method in Chou et al. (1989) is that we minimize the Hessian-weighted distortion measure instead of the non-weighted regular distortion measure for optimal quantization. The detailed algorithm and further discussion can be found in Appendix A.3.
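The two-step uniform quantization scheme of Section 4.3 is simple enough to state in a few lines of code. The sketch below is our own minimal illustration: the bin boundaries, the Huffman-length estimate via entropy and all names are assumptions made for the example, not the paper's implementation.

```python
import numpy as np

def uniform_quantize(w, k, weights=None):
    """Uniformly spaced thresholds over [min, max]; cluster centers are the
    (optionally Hessian-weighted) means of the parameters in each bin."""
    edges = np.linspace(w.min(), w.max(), k + 1)
    assign = np.clip(np.digitize(w, edges[1:-1]), 0, k - 1)
    ww = np.ones_like(w) if weights is None else weights
    centers = np.zeros(k)
    for j in range(k):
        m = assign == j
        if m.any():
            centers[j] = np.sum(ww[m] * w[m]) / np.sum(ww[m])
    return centers[assign], assign

def entropy_bits_per_param(assign, k):
    """Approximate average codeword length after optimal coding, cf. Eq. (10)."""
    p = np.bincount(assign, minlength=k) / assign.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

w = np.random.randn(100000)
wq, assign = uniform_quantize(w, k=64)
print("avg bits/parameter ~", entropy_bits_per_param(assign, 64))  # vs. 6 bits fixed-length
```

Because a bell-shaped weight distribution concentrates most parameters in a few central bins, the entropy of the bin occupancies is far below log2(k), which is why uniform quantization pairs so well with Huffman coding and so poorly with fixed-length coding.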
5 EXPERIMENTS

In our experiments, we proceed as follows:

- We employ the proposed network quantization methods to quantize all of the network parameters in a network together at once, as discussed in Section 3.6.
- We evaluate the performance of the proposed network quantization methods with and without network pruning. For a pruned model, we need to store not only the values of the unpruned parameters but also their respective indexes (locations) in the original model. For the index information, we compute the index differences between the unpruned network parameters in the original model and further compress them by Huffman coding, as in Han et al. (2015a).
- For Hessian computation, 50,000 samples of the training set are reused. We also evaluate the performance when the Hessian is computed with only 1,000 samples.
- Finally, we evaluate the performance of our network quantization schemes when the alternative of the Hessian is used instead, as discussed in Section 3.5. To this end, we retrain the considered neural networks with the Adam SGD optimizer and obtain the second moment estimates of the gradients at the end of training. Then, we use the square roots of the second moment estimates instead of the Hessian and evaluate the performance.

5.1 EXPERIMENT MODELS

First, we evaluate our network quantization schemes on the MNIST data set with a simplified version of LeNet5 (LeCun et al., 1998), consisting of two convolutional layers and two fully-connected layers followed by a soft-max layer. It has 431,080 parameters in total and achieves 99.25% accuracy. For a pruned model, we prune 91% of the original network parameters and fine-tune the rest.

Second, we experiment with our network quantization schemes on the CIFAR-10 data set (Krizhevsky, 2009) with a pre-trained 32-layer ResNet (He et al., 2015). The 32-layer ResNet consists of 464,154 parameters in total and achieves 92.58% accuracy. For a pruned model, we prune 80% of the original network parameters and fine-tune the rest.

Third, we evaluate our network quantization schemes with AlexNet (Krizhevsky et al., 2012) on the ImageNet ILSVRC-2012 data set (Russakovsky et al., 2015). We obtain a pre-trained AlexNet Caffe model, which achieves 57.16% top-1 accuracy. For a pruned model, we prune 89% of the parameters and fine-tune the rest. In fine-tuning, the Adam SGD optimizer is used in order to avoid the computation of the Hessian by utilizing its alternative (see Section 3.5). However, the pruned model does not recover the original accuracy after fine-tuning with the Adam method; the top-1 accuracy recovered after pruning and fine-tuning is 56.00%. A better pruned model achieving the original accuracy can be found by pruning and retraining iteratively (Han et al., 2015b), which is, however, not used here.

5.2 EXPERIMENT RESULTS

[Figure 1 shows four panels: (a) fixed-length coding, (b) fixed-length coding + fine-tuning, (c) Huffman coding, (d) Huffman coding + fine-tuning; each plots accuracy (%) against (average) codeword length (bits) for k-means, Hessian-weighted k-means, uniform quantization and iterative ECSQ.]

Figure 1: Accuracy versus average codeword length per network parameter after network quantization for 32-layer ResNet.

We first present the quantization results without pruning for the 32-layer ResNet in Figure 1, where the accuracy of the 32-layer ResNet is plotted against the average codeword length per network parameter after quantization. When fixed-length coding is employed, the proposed Hessian-weighted k-means clustering method performs best, as expected. Observe that Hessian-weighted k-means clustering yields better accuracy than the others even after fine-tuning. On the other hand, when Huffman coding is employed, uniform quantization and the iterative algorithm for ECSQ outperform Hessian-weighted k-means clustering and k-means clustering. However, these two ECSQ solutions underperform Hessian-weighted k-means clustering, and even k-means clustering, when fixed-length coding is employed, since they are optimized for optimal variable-length coding.

[Figure 2 shows two panels, (a) LeNet and (b) ResNet, plotting accuracy (%) against the average codeword length (bits) for k-means, Hessian-weighted k-means with 50,000 and with 1,000 samples, and Alt-Hessian-weighted k-means.]

Figure 2: Accuracy versus average codeword length per network parameter after network quantization, Huffman coding and fine-tuning for LeNet and 32-layer ResNet, when the Hessian is computed with 50,000 or 1,000 samples and when the square roots of the second moment estimates of the gradients are used instead of the Hessian as its alternative.

Figure 2 shows the performance of Hessian-weighted k-means clustering when the Hessian is computed with a small number of samples (1,000 samples). Observe that even using the Hessian computed with a small number of samples yields almost the same performance. We also show the performance of Hessian-weighted k-means clustering when the alternative of the Hessian is used instead, as explained in Section 3.5. In particular, the square roots of the second moment estimates of the gradients are used instead of the Hessian, and using this alternative provides performance similar to using the Hessian.

In Table 1, we summarize the compression ratios that we can achieve with the different network quantization methods for pruned models. The original network parameters are 32-bit float numbers. Using simple uniform quantization followed by Huffman coding, we achieve compression ratios of 51.25, 22.17 and 40.65 (i.e., the compressed model sizes are 1.95%, 4.51% and 2.46% of the original model sizes) for LeNet, 32-layer ResNet and AlexNet, respectively, at no or marginal performance loss. Observe that the loss in the compressed AlexNet is mainly due to pruning. Here, we also compare our network quantization results to the ones in Han et al. (2015a). Note that layer-by-layer quantization with k-means clustering is evaluated in Han et al. (2015a), while our quantization schemes, including k-means clustering, are employed to quantize the network parameters of all layers together at once (see Section 3.6).

Table 1: Summary of network quantization results with Huffman coding for pruned models.

                                                        Accuracy %   Compression ratio
LeNet
  Original model                                        99.25        -
  Pruned model                                          99.27        10.13
  Pruning + quantization of all layers + Huffman coding:
    k-means                                             99.27        44.58
    Hessian-weighted k-means                            99.27        47.16
    Uniform quantization                                99.28        51.25
    Iterative ECSQ                                      99.27        49.01
  Deep compression (Han et al., 2015a)                  99.26        39.00
ResNet (32 layers)
  Original model                                        92.58        -
  Pruned model                                          92.58        4.52
  Pruning + quantization of all layers + Huffman coding:
    k-means                                             92.64        18.25
    Hessian-weighted k-means                            92.67        20.51
    Uniform quantization                                92.68        22.17
    Iterative ECSQ                                      92.73        21.01
  Deep compression (Han et al., 2015a)                  N/A          N/A
AlexNet
  Original model                                        57.16        -
  Pruned model                                          56.00        7.91
  Pruning + quantization of all layers + Huffman coding:
    k-means                                             56.12        30.53
    Alt-Hessian-weighted k-means                        56.04        33.71
    Uniform quantization                                56.20        40.65
  Deep compression (Han et al., 2015a)                  57.22        35.00
6 CONCLUSION

This paper investigates the quantization problem of network parameters in deep neural networks. We identify the suboptimality of the conventional quantization method using k-means clustering and newly design network quantization schemes so that they minimize the performance loss due to quantization under a compression ratio constraint. In particular, we analytically show that the Hessian can be used as a measure of the importance of network parameters, and we propose to minimize Hessian-weighted quantization errors on average when clustering network parameters to quantize. Hessian-weighting is beneficial for quantizing all of the network parameters together at once, since it can handle the different impact of quantization errors properly not only within layers but also across layers. Furthermore, we make a connection from the network quantization problem to the entropy-constrained data compression problem in information theory and push the compression ratio to the limit that information theory provides. Two efficient heuristic solutions are presented to this end, i.e., uniform quantization and an iterative solution for ECSQ. Our experiment results show that the proposed network quantization schemes provide considerable gains over the conventional method using k-means clustering, in particular for large and deep neural networks.

REFERENCES

Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Fixed point optimization of deep convolutional neural networks for object recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2015.

Philip A Chou, Tom Lookabaugh, and Robert M Gray. Entropy-constrained vector quantization. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(1):31-42, 1989.

Matthieu Courbariaux, Jean-Pierre David, and Yoshua Bengio. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123-3131, 2015.

Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Herbert Gish and John Pierce. Asymptotically efficient quantizing. IEEE Transactions on Information Theory, 14(5):676-683, 1968.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.

Song Han, Huizi Mao, and William J Dally.
Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015a.

Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 164-171, 1993.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference, 2014.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

Yann Le Cun. Modèles connexionnistes de l'apprentissage. PhD thesis, Paris 6, 1987.

Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554-2564, 2016.

Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. arXiv preprint arXiv:1412.6553, 2014.

Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In Advances in Neural Information Processing Systems, pp. 598-605, 1989.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Darryl D Lin, Sachin S Talathi, and V Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. arXiv preprint arXiv:1511.06393, 2015a.

Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. In Advances in Neural Information Processing Systems, pp. 442-450, 2015.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Jian Xue, Jinyu Li, and Yifan Gong. Restructuring of deep neural network acoustic models with singular value decomposition. In INTERSPEECH, pp. 2365-2369, 2013.

Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.

A.1 FURTHER DISCUSSION ON THE HESSIAN-WEIGHTED QUANTIZATION ERROR

The diagonal approximation for the Hessian simplifies both the optimization problem and its solution for network quantization. This simplification comes with some performance loss. We conjecture that the loss due to this approximation is small, because the contributions from the off-diagonal terms are not always additive and their summation may end up with a small value.
However, the diagonal terms are all non-negative, and therefore their contributions are always additive. We do not verify this conjecture in this paper, since solving the problem without the diagonal approximation is too complex; we would even need to compute the whole Hessian matrix, which is also too costly.

Observe that the relation of the Hessian-weighted distortion measure to the quantization loss holds for any model whose objective function can be approximated as a quadratic function with respect to the parameters to quantize. Hence, the quantization methods proposed in this paper to minimize the Hessian-weighted distortion measure are not specific to neural networks, but are generally applicable to the quantization of the parameters of any model whose objective function is approximately locally quadratic with respect to its parameters.

Finally, we do not consider the interactions between quantization and retraining in our formulation in Section 3.2. We analyze the expected loss due to quantization assuming no further retraining, and focus on finding optimal network quantization schemes that minimize the performance loss. In our experiments, however, we further fine-tune the quantized values (cluster centers) so that we can recover the loss due to quantization and improve the performance.

A.2 UNIFORM QUANTIZATION WITH HESSIAN-WEIGHTED MEAN

We compare uniform quantization with the non-weighted mean and uniform quantization with the Hessian-weighted mean in Figure 3, which shows that uniform quantization with the Hessian-weighted mean slightly outperforms uniform quantization with the non-weighted mean.

[Figure 3 shows two panels, (a) Huffman coding and (b) Huffman coding + fine-tuning, plotting accuracy (%) against the average codeword length (bits) for uniform quantization with the non-weighted mean and with the Hessian-weighted mean.]

Figure 3: Accuracy versus average codeword length per network parameter after network quantization, Huffman coding and fine-tuning for 32-layer ResNet, when uniform quantization with the non-weighted mean and uniform quantization with the Hessian-weighted mean are used.

A.3 ITERATIVE SOLUTION FOR ENTROPY-CONSTRAINED NETWORK QUANTIZATION

In order to solve the ECSQ problem for network quantization, i.e.,

\operatorname*{argmin}_{C_1, C_2, \dots, C_k} J_\lambda(C_1, C_2, \dots, C_k),

we define a Lagrangian cost function

J_\lambda(C_1, C_2, \dots, C_k) = D + \lambda H = \frac{1}{N} \sum_{j=1}^{k} \sum_{w_i \in C_j} \underbrace{\left( h_{ii} |w_i - c_j|^2 - \lambda \log_2 p_j \right)}_{=\, d_\lambda(i,j)}, \qquad (12)

where

D = \frac{1}{N} \sum_{j=1}^{k} \sum_{w_i \in C_j} h_{ii} |w_i - c_j|^2, \quad H = -\sum_{j=1}^{k} p_j \log_2 p_j.

A heuristic iterative algorithm to solve this method of Lagrange multipliers for network quantization is presented in Algorithm 1. It is similar to Lloyd's algorithm for k-means clustering. The key difference is how the network parameters are partitioned at the assignment step. In Lloyd's algorithm, the Euclidean distance (quantization error) is minimized. For ECSQ, the individual Lagrangian cost function, i.e., d_\lambda(i, j) in (12), is minimized instead, which includes both the quantization error and the expected codeword length after entropy coding.

Algorithm 1 Iterative solution for entropy-constrained network quantization
1. Initialization (n = 0): choose initial cluster centers c_1^{(0)}, ..., c_k^{(0)} and distribution p_1^{(0)}, ..., p_k^{(0)}.
2. Assignment: C_l^{(n+1)} <- C_l^{(n+1)} \cup \{w_i\} for l = \operatorname{argmin}_j d_\lambda(i, j), for every parameter w_i.
3. Update: c_j^{(n+1)} = \sum_{w_i \in C_j^{(n+1)}} h_{ii} w_i / \sum_{w_i \in C_j^{(n+1)}} h_{ii} and p_j^{(n+1)} = |C_j^{(n+1)}|/N for 1 <= j <= k.
4. Repeat steps 2-3 with n <- n + 1 until convergence.
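A compact Python rendering of Algorithm 1, under the same illustrative assumptions as our earlier sketches (synthetic data, names of our own choosing), might look as follows; \lambda trades distortion against entropy, and in practice it would be swept to meet a target rate R.

```python
import numpy as np

def ecsq_iterative(w, h, k, lam, iters=30, seed=0):
    """Lagrangian clustering: assign w_i to argmin_j h_i*(w_i-c_j)^2 - lam*log2(p_j)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(w, size=k, replace=False)
    p = np.full(k, 1.0 / k)
    for _ in range(iters):
        cost = h[:, None] * (w[:, None] - centers[None, :]) ** 2 \
               - lam * np.log2(np.maximum(p, 1e-12))[None, :]
        assign = cost.argmin(axis=1)
        for j in range(k):
            m = assign == j
            if m.any():
                centers[j] = np.sum(h[m] * w[m]) / np.sum(h[m])
            p[j] = m.mean()   # update the source distribution p_j = |C_j|/N
    return centers, assign, p

w = np.random.randn(50000)
h = np.abs(np.random.randn(50000)) + 1e-8   # stand-in for diagonal-Hessian weights
centers, assign, p = ecsq_iterative(w, h, k=32, lam=0.1)
entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
print("entropy (bits/parameter):", entropy)
```

With \lambda = 0 the assignment reduces to Hessian-weighted k-means; increasing \lambda drains probability mass out of small clusters and lowers the entropy of the resulting code, at the price of higher weighted distortion.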
BJh6Ztuxl

FINE-GRAINED ANALYSIS OF SENTENCE EMBEDDINGS USING AUXILIARY PREDICTION TASKS

ABSTRACT

There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture.

We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector's dimensionality on the resulting representations.

1 INTRODUCTION

While sentence embeddings or sentence representations play a central role in recent deep learning approaches to NLP, little is known about the information that is captured by different sentence embedding learning mechanisms. We propose a methodology facilitating fine-grained measurement of some of the information encoded in sentence embeddings, as well as performing fine-grained comparison of different sentence embedding methods.

In sentence embeddings, sentences, which are variable-length sequences of discrete symbols, are encoded into fixed length continuous vectors that are then used for further prediction tasks. A simple and common approach is producing word-level vectors using, e.g., word2vec (Mikolov et al., 2013a;b), and summing or averaging the vectors of the words participating in the sentence. This continuous-bag-of-words (CBOW) approach disregards the word order in the sentence.¹

Another approach is the encoder-decoder architecture, producing models also known as sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014, inter alia). In this architecture, an encoder network (e.g. an LSTM) is used to produce a vector representation of the sentence, which is then fed as input into a decoder network that uses it to perform some prediction task (e.g. recreate the sentence, or produce a translation of it). The encoder and decoder networks are trained jointly in order to perform the final task.

¹We use the term CBOW to refer to a sentence representation that is composed of an average of the vectors of the words in the sentence, not to be confused with the training method by the same name which is used in the word2vec algorithm.

Some systems (for example in machine translation) train the system end-to-end, and use the trained system for prediction (Bahdanau et al., 2014).
Such systems do not generally care about the encoded vectors, which are used merely as intermediate values. However, another common case is to train an encoder-decoder network and then throw away the decoder and use the trained encoder as a general mechanism for obtaining sentence representations. For example, an encoder-decoder network can be trained as an auto-encoder, where the encoder creates a vector representation, and the decoder attempts to recreate the original sentence (Li et al., 2015). Similarly, Kiros et al. (2015) train a network to encode a sentence such that the decoder can recreate its neighboring sentences in the text. Such networks do not require specially labeled data, and can be trained on large amounts of unannotated text. As the decoder needs information about the sentence in order to perform well, it is clear that the encoded vectors capture a non-trivial amount of information about the sentence, making the encoder appealing to use as a general purpose, stand-alone sentence encoding mechanism. The sentence encodings can then be used as input for other prediction tasks for which less training data is available (Dai & Le, 2015). In this work we focus on these "general purpose" sentence encodings.

The resulting sentence representations are opaque, and there is currently no good way of comparing different representations short of using them as input for different high-level semantic tasks (e.g. sentiment classification, entailment recognition, document retrieval, question answering, sentence similarity, etc.) and measuring how well they perform on these tasks. This is the approach taken by Li et al. (2015), Hill et al. (2016) and Kiros et al. (2015). This method of comparing sentence embeddings leaves a lot to be desired: the comparison is at a very coarse-grained level, does not tell us much about the kind of information that is encoded in the representation, and does not help us form generalizable conclusions.

Our Contribution We take a first step towards opening the black box of vector embeddings for sentences. We propose a methodology that facilitates comparing sentence embeddings on a much finer-grained level, and demonstrate its use by analyzing and comparing different sentence representations. We analyze sentence representation methods that are based on LSTM auto-encoders and the simple CBOW representation produced by averaging word2vec word embeddings. For each of CBOW and the LSTM auto-encoder, we compare different numbers of dimensions, exploring the effect of the dimensionality on the resulting representation. We also provide some comparison to the skip-thought embeddings of Kiros et al. (2015).

In this work, we focus on what are arguably the three most basic characteristics of a sequence: its length, the items within it, and their order. We investigate different sentence representations based on the capacity to which they encode these aspects. Our analysis of these low-level properties leads to interesting, actionable insights, exposing relative strengths and weaknesses of the different representations.

Limitations Focusing on low-level sentence properties also has limitations: the tasks focus on measuring the preservation of surface aspects of the sentence and do not measure syntactic and semantic generalization abilities; the tasks are not directly related to any specific downstream application (although the properties we test are important factors in many tasks: knowing that a model is good at predicting length and word order is likely advantageous for syntactic parsing, while models that excel at word content are good for text classification tasks).
Dealing with these limitations requires a complementary set of auxiliary tasks, which is outside the scope of this study and is left for future work.

The study also suffers from the general limitations of empirical work: we do not prove general theorems but rather measure behaviors on several data points and attempt to draw conclusions from these measurements. There is always the risk that our conclusions only hold for the datasets on which we measured, and will not generalize. However, we do consider our large sample of sentences from Wikipedia to be representative of the English language, at least in terms of the three basic sentence properties that we study.

Our main findings in this work are the following:

- Sentence representations based on averaged word vectors are surprisingly effective, and encode a non-trivial amount of information regarding sentence length. The information they contain can also be used to reconstruct a non-trivial amount of the original word order in a probabilistic manner (due to regularities in the natural language data).
- LSTM auto-encoders are very effective at encoding word order and word content.
- Increasing the number of dimensions benefits some tasks more than others.
- Adding more hidden units sometimes degrades the encoders' ability to encode word content. This degradation is not correlated with the BLEU scores of the decoder, suggesting that BLEU over the decoder output is sub-optimal for evaluating the encoders' quality.
- LSTM encoders trained as auto-encoders do not rely on ordering patterns in the training sentences when encoding novel sentences, while the skip-thought encoders do rely on such patterns.

2 RELATED WORK

Word-level distributed representations have been analyzed rather extensively, both empirically and theoretically, for example by Baroni et al. (2014), Levy & Goldberg (2014) and Levy et al. (2015). In contrast, the analysis of sentence-level representations has been much more limited. Common approaches are to either compare the performance of the sentence embeddings on down-stream tasks (Hill et al., 2016), or to analyze models specifically trained for a predefined task (Schmaltz et al., 2016; Sutskever et al., 2011).

While the resulting analysis reveals differences in the performance of different models, it does not adequately explain what kind of linguistic properties of the sentence they capture. Other studies analyze the hidden units learned by neural networks when training a sentence representation model (Elman, 1991; Karpathy et al., 2015; Kadar et al., 2016). This approach often associates certain linguistic aspects with certain hidden units. Kadar et al. (2016) propose a methodology for quantifying the contribution of each input word to a resulting GRU-based encoding. These methods depend on the specific learning model and cannot be applied to arbitrary representations. Moreover, it is still not clear what is captured by the final sentence embeddings.

Our work is orthogonal and complementary to the previous efforts: we analyze the resulting sentence embeddings by devising auxiliary prediction tasks for core sentence properties. The methodology we propose is general and can be applied to any sentence representation model.
3 APPROACH

We aim to inspect and compare encoded sentence vectors in a task-independent manner. The main idea of our method is to focus on isolated aspects of sentence structure, and design experiments to measure to what extent each aspect is captured in a given representation.

In each experiment, we formulate a prediction task. Given a sentence representation method, we create training data and train a classifier to predict a specific sentence property (e.g. its length) based on the vector representations. We then measure how well we can train a model to perform the task. The basic premise is that if we cannot train a classifier to predict some property of a sentence based on its vector representation, then this property is not encoded in the representation (or rather, not encoded in a useful way, considering how the representation is likely to be used).

The experiments in this work focus on low-level properties of sentences: the sentence length, the identities of words in a sentence, and the order of the words. We consider these to be the core elements of sentence structure. Generalizing the approach to higher-level semantic and syntactic properties holds great potential, which we hope will be explored in future work, by us or by others.

3.1 THE PREDICTION TASKS

We now turn to describe the specific prediction tasks. We use lower case italics (s, w) to refer to sentences and words, and boldface to refer to their corresponding vector representations (s, w). When more than one element is considered, they are distinguished by indices (w1, w2, w1, w2).

Our underlying corpus for generating the classification instances consists of 200,000 Wikipedia sentences, where 150,000 sentences are used to generate training examples, and 25,000 sentences are used for each of the test and development examples. These sentences are a subset of the training set that was used to train the original sentence encoders. The idea behind this setup is to test the models on what are presumably their best embeddings.

Length Task This task measures to what extent the sentence representation encodes its length. Given a sentence representation s ∈ Rᵏ, the goal of the classifier is to predict the length (number of words) of the original sentence s. The task is formulated as multiclass classification, with eight output classes corresponding to binned lengths.² The resulting dataset is reasonably balanced, with a majority class (lengths 5-8 words) of 5,182 test instances and a minority class (34-70) of 1,084 test instances. Predicting the majority class results in classification accuracy of 20.1%.
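The setup of this first task is easy to sketch in code. The example below is a purely illustrative rendering with placeholder inputs: the bin edges, the classifier choice and the random "sentence vectors" are our assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: sentence vectors from any encoder, and token counts.
sent_vecs = np.random.randn(20000, 300)           # s in R^k for each sentence
sent_lens = np.random.randint(5, 71, size=20000)  # number of words per sentence

# Bin lengths into 8 classes, mirroring the task's multiclass formulation.
edges = [9, 13, 17, 21, 25, 29, 34]               # illustrative boundaries -> 8 bins
labels = np.digitize(sent_lens, edges)

clf = LogisticRegression(max_iter=1000)
clf.fit(sent_vecs, labels)
print("train accuracy:", clf.score(sent_vecs, labels))
```

The content and order tasks below follow the same pattern, differing only in how the input vectors are assembled (concatenating word vectors to the sentence vector) and in how labels are generated.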
Word-content Task This task measures to what extent the sentence representation encodes the identities of words within it. Given a sentence representation s ∈ Rᵏ and a word representation w ∈ Rᵈ, the goal of the classifier is to determine whether w appears in s, with access to neither w nor s. This is formulated as a binary classification task, where the input is the concatenation of s and w.

To create a dataset for this task, we need to provide positive and negative examples. Obtaining positive examples is straightforward: we simply pick a random word from each sentence. For negative examples, we could pick a random word from the entire corpus. However, we found that such a dataset tends to push models to memorize words as either positive or negative words, instead of finding their relation to the sentence representation. Therefore, for each sentence we pick as a negative example a word that appears as a positive example somewhere in our dataset, but does not appear in the given sentence. This forces the models to learn a relationship between word and sentence representations. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%.

Word-order Task This task measures to what extent the sentence representation encodes word order. Given a sentence representation s ∈ Rᵏ and the representations of two words that appear in the sentence, w1, w2 ∈ Rᵈ, the goal of the classifier is to predict whether w1 appears before or after w2 in the original sentence s. Again, the model has no access to the original sentence and the two words. This is formulated as a binary classification task, where the input is a concatenation of the three vectors s, w1 and w2.

For each sentence in the corpus, we simply pick two random words from the sentence as a positive example. For negative examples, we flip the order of the words. We generate one positive and one negative example from each sentence. The dataset is balanced, with a baseline accuracy of 50%.

4 SENTENCE REPRESENTATION MODELS

Given a sentence s = {w1, w2, ..., wN}, we aim to find a sentence representation s using an encoder:

ENC : s = {w1, w2, ..., wN} ↦ s ∈ Rᵏ.

The encoding process usually assumes a vector representation wᵢ ∈ Rᵈ for each word in the vocabulary. In general, the word and sentence embedding dimensions, d and k, need not be the same. The word vectors can be learned together with other encoder parameters or pre-trained. Below we describe different instantiations of ENC.

Continuous Bag-of-words (CBOW) This simple yet effective text representation consists of performing element-wise averaging of word vectors that are obtained using a word-embedding method such as word2vec. Despite its obliviousness to word order, CBOW has proven useful in different tasks (Hill et al., 2016) and is easy to compute, making it an important model class to consider.

Encoder-Decoder (ED) The encoder-decoder framework has been successfully used in a number of sequence-to-sequence learning tasks (Sutskever et al., 2014; Bahdanau et al., 2014; Dai & Le, 2015; Li et al., 2015). After the encoding phase, a decoder maps the sentence representation back to the sequence of words:

DEC : s ∈ Rᵏ ↦ ŝ = {w1, w2, ..., wN}.

[Figure 1 shows three panels: (a) Length test, (b) Content test, (c) Order test; each plots task accuracy (%) against representation dimensions (100-1000) for ED and CBOW, with ED BLEU scores overlaid.]

Figure 1: Task accuracy vs. embedding size for different models; ED BLEU scores given for reference.

Here we investigate the specific case of an auto-encoder, where the entire encoding-decoding process can be trained end-to-end from a corpus of raw texts. The sentence representation is the final output vector of the encoder. We use a long short-term memory (LSTM) recurrent neural network (Hochreiter & Schmidhuber, 1997; Graves et al., 2013) for both the encoder and the decoder. The LSTM decoder is similar to the LSTM encoder but with different weights.
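As a minimal sketch of the CBOW instantiation of ENC described above (our own illustration; the toy vocabulary and the random embedding matrix stand in for word2vec-trained vectors):

```python
import numpy as np

d = 300
vocab = {"the": 0, "cat": 1, "sat": 2}   # toy vocabulary (assumption)
E = np.random.randn(len(vocab), d)       # stand-in for word2vec vectors

def cbow_encode(sentence):
    """ENC for CBOW: element-wise average of the word vectors (here k = d)."""
    ids = [vocab[w] for w in sentence]
    return E[ids].mean(axis=0)

s = cbow_encode(["the", "cat", "sat"])   # s in R^300
```

Note that any permutation of the input tokens yields exactly the same vector s, which is what makes CBOW's performance on the length and order tasks below so surprising.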
EXPERIMENTAL SETUP

The bag-of-words (CBOW) and encoder-decoder models are trained on 1 million sentences from a 2012 Wikipedia dump with a vocabulary size of 50,000 tokens. We use NLTK (Bird, 2006) for tokenization, and constrain sentence lengths to be between 5 and 70 words. For both models we control the embedding size k and train word and sentence vectors of sizes k ∈ {100, 300, 500, 750, 1000}. More details about the experimental setup are available in the Appendix.

6 RESULTS

In this section we provide a detailed description of our experimental results along with their analysis. For each of the three main tests, length, content and order, we investigate the performance of different sentence representation models across embedding sizes.

6.1 LENGTH EXPERIMENTS

We begin by investigating how well the different representations encode sentence length. Figure 1a shows the performance of the different models on the length task, as well as the BLEU obtained by the LSTM encoder-decoder (ED).

With enough dimensions, the LSTM embeddings are very good at capturing sentence length, obtaining accuracies between 82% and 87%. Length prediction ability is not perfectly correlated with BLEU scores: from 300 dimensions onward the length prediction accuracies of the LSTM remain relatively stable, while the BLEU score of the encoder-decoder model increases as more dimensions are added.

Somewhat surprisingly, the CBOW model also encodes a fair amount of length information, with length prediction accuracies of 45% to 65%, way above the 20% baseline. This is remarkable, as the CBOW representation consists of averaged word vectors, and we did not expect it to encode length at all. We return to CBOW's exceptional performance in Section 7.

6.2 WORD CONTENT EXPERIMENTS

To what extent do the different sentence representations encode the identities of the words in the sentence? Figure 1b visualizes the performance of our models on the word content test.

All the representations encode some amount of word information, and clearly outperform the random baseline of 50%. Some trends are worth noting. While the capacity of the LSTM encoder to preserve word identities generally increases when adding dimensions, the performance peaks at 750 dimensions and drops afterwards. This stands in contrast to the BLEU score of the respective encoder-decoder models. We hypothesize that this occurs because a sizable part of the auto-encoder performance comes from the decoder, which also improves as we add more dimensions. At 1000 dimensions, the decoder's language model may be strong enough to allow the representation produced by the encoder to be less informative with regard to word content.

CBOW representations with low dimensional vectors (100 and 300 dimensions) perform exceptionally well, outperforming the more complex, sequence-aware models by a wide margin. If your task requires access to word identities, it is worth considering this simple representation. Interestingly, CBOW scores drop at higher dimensions.

6.3 WORD ORDER EXPERIMENTS

Figure 1c shows the performance of the different models on the order test. The LSTM encoders are very capable of encoding word order, with LSTM-1000 allowing the recovery of word order in 91% of the cases. Similar to the length test, LSTM order prediction accuracy is only loosely correlated with BLEU scores. It is worth noting that increasing the representation size helps the LSTM encoder to better encode order information.

Surprisingly, the CBOW encodings manage to reach an accuracy of 70% on the word order task, 20% above the baseline. This is remarkable as, by definition, the CBOW encoder does not attempt to preserve word order information. One way to explain this is by considering distribution patterns of words in natural language sentences: some words tend to appear before others.
In the next section we analyze the effect of natural language on the different models.

Natural language imposes many constraints on sentence structure. To what extent do the different encoders rely on specific properties of word distributions in natural language sentences when encoding sentences? To account for this, we perform additional experiments in which we attempt to control for the effect of natural language.

How can CBOW encode sentence length? Is the ability of CBOW embeddings to encode length related to specific words being indicative of longer or shorter sentences? To control for this, we created a synthetic dataset where each word in each sentence is replaced by a random word from the dictionary, and re-ran the length test for the CBOW embeddings using this dataset. As Figure 2a shows, this only leads to a slight decrease in accuracy, indicating that the identity of the words is not the main component in CBOW's success at predicting length.

[Figure 2 shows two panels: (a) length prediction accuracy vs. representation dimensions for CBOW on normal and synthetic (random-word) sentences, (b) the norm of the averaged sentence vector vs. sentence length.]

An alternative explanation for CBOW's ability to encode sentence length is given by considering the norms of the sentence embeddings. Indeed, Figure 2b shows that the embedding norm decreases as sentences grow longer. We believe this is one of the main reasons for the strong CBOW results.

While the correlation between the number of averaged vectors and the resulting norm surprised us, in retrospect it is an expected behavior that has sound mathematical foundations. To understand the behavior, consider the different word vectors to be random variables, with the values in each dimension centered roughly around zero. Both the central limit theorem and Hoeffding's inequality tell us that as we add more samples, the expected average of the values will better approximate the true mean, causing the norm of the average vector to decrease. We expect the correlation between the sentence length and its norm to be more pronounced with shorter sentences (above some number of samples we will already be very close to the true mean, and the norm will not decrease further), a behavior which we indeed observe in practice.

How does CBOW encode word order? The surprisingly strong performance of the CBOW model on the order task made us hypothesize that much of the word order information is captured in general natural language word order statistics.

To investigate this, we re-run the word order tests, but this time drop the sentence embedding at training and testing time, learning from the word pairs alone. In other words, we feed the network as input two word embeddings and ask which word comes first in the sentence. This test isolates general word order statistics of language from information that is contained in the sentence embedding (Fig. 3).
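Mechanically, the probe just concatenates its inputs and trains a small classifier; the sketch below shows the shape of the experiment with and without the sentence vector. It is purely illustrative: the vectors and labels here are random placeholders, whereas the real experiment uses actual word/sentence embeddings and order labels extracted from the corpus.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

n, d, k = 5000, 300, 300
w1, w2 = np.random.randn(n, d), np.random.randn(n, d)  # word-pair embeddings
s = np.random.randn(n, k)                               # sentence embeddings
y = np.random.randint(0, 2, size=n)                     # 1 iff w1 precedes w2

with_sent = np.hstack([s, w1, w2])   # full order test: input [s; w1; w2]
without_sent = np.hstack([w1, w2])   # control: word pairs only

for name, X in [("with sentence", with_sent), ("without sentence", without_sent)]:
    clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=50)
    clf.fit(X, y)
    print(name, clf.score(X, y))
```

The gap between the two settings is what the analysis below attributes to order information genuinely stored in the sentence embedding, as opposed to general word-order statistics of the language.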
The difference between including and removing the sentence embeddings when using the CBOW model is minor, while the LSTM-ED suffers a significant drop. Clearly, the LSTM-ED model encodes word order, while the prediction ability of CBOW is mostly explained by general language statistics. However, CBOW does benefit from the sentence to some extent: we observe a gain of ~3% accuracy points when the CBOW tests are allowed access to the sentence representation. This may be explained by higher-order statistics of correlation between word order patterns and the occurrences of specific words.

[Figure 3 compares order-prediction accuracy with and without the sentence representation for the ED and CBOW models, across representation dimensions.]

Figure 3: Order accuracy with and without the sentence representation for the ED and CBOW models.

How important is English word order for encoding sentences? To what extent are the models trained to rely on natural language word order when encoding sentences? To control for this, we create a synthetic dataset, PERMUTED, in which the word order in each sentence is randomly permuted. Then, we repeat the length, content and order experiments using the PERMUTED dataset (we still use the original sentence encoders, which were trained on non-permuted sentences). While the permuted sentence representation is the same for CBOW, it is completely different when generated by the encoder-decoder.

[Figure 4 shows three panels: (a) Length test, (b) Content test, (c) Order test; each compares CBOW and the encoder-decoder on natural and permuted sentences across representation dimensions.]

Figure 4: Results for length, content and order tests on natural and permuted sentences.

Results are presented in Fig. 4. When considering CBOW embeddings, word order accuracy drops to chance level, as expected, while results on the other tests remain the same. Moving to the LSTM encoder-decoder, the results on all three tests are comparable to the ones using non-permuted sentences. These results are somewhat surprising, since the models were originally trained on "real", non-permuted sentences. This indicates that the LSTM encoder-decoder is a general-purpose sequence encoder that for the most part does not rely on the word-ordering properties of natural language when encoding sentences. The small and consistent drop in word order accuracy on the permuted sentences can be attributed to the encoder relying on natural language word order to some extent, but can also be explained by the word order prediction task becoming harder due to the inability to use general word order statistics. The results suggest that a trained encoder will transfer well across different natural language domains, as long as the vocabularies remain stable. When considering the decoder's BLEU score on the permuted dataset (not shown), we do see a dramatic decrease in accuracy. For example, the LSTM encoder-decoder with 1000 dimensions drops from 32.5 to 8 in BLEU score. These results suggest that the decoder, which is thrown away, contains most of the language-specific information.
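Constructing the PERMUTED control data amounts to shuffling the tokens within each sentence before encoding. A minimal sketch (the function name is ours):

```python
import random

def permute_sentences(sentences, seed=0):
    """Build the PERMUTED variant: shuffle word order within every sentence."""
    rng = random.Random(seed)
    permuted = []
    for sent in sentences:
        tokens = sent[:]          # sent is a list of tokens
        rng.shuffle(tokens)
        permuted.append(tokens)
    return permuted

natural = [["the", "cat", "sat", "on", "the", "mat"]]
print(permute_sentences(natural))  # same bag of words, random order
```

Each permuted sentence keeps exactly the same bag of words as the original, so any change in a model's scores can only come from its sensitivity to word order.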
language-specific information.

In addition to the experiments on CBOW and LSTM-encoders, we also experiment with the skip-thought vectors model (Kiros et al., 2015). This model extends the idea of the auto-encoder to neighboring sentences.

Given a sentence s_i, it first encodes it using an RNN, similar to the auto-encoder model. However, instead of predicting the original sentence, skip-thought predicts the preceding and following sentences, s_{i-1} and s_{i+1}. The encoder and decoder are implemented with gated recurrent units (Cho et al., 2014).

Here, we deviate from the controlled environment and use the authors' provided model3 with the recommended embedding size of 4800. This makes the direct comparison of the models "unfair". However, our aim is not to decide which is the "best" model but rather to show how our method can be used to measure the kinds of information captured by different representations.

3https://github.com/ryankiros/skip-thoughts

Table 1 summarizes the performance of the skip-thought embeddings in each of the prediction tasks, on both the PERMUTED and the original dataset.

Table 1: Classification accuracy for the prediction tasks using skip-thought embeddings

            Length    Word content    Word order
Original    82.1%     79.7%           81.1%
Permuted    68.2%     76.4%           76.5%

The performance of the skip-thought embeddings is well above the baselines and roughly similar for all tasks. Its performance is similar to the higher-dimensional encoder-decoder models, except on the order task, where it lags somewhat behind. However, we note that the results are not directly comparable, as skip-thought was trained on a different corpus.

The more interesting finding is its performance on the PERMUTED sentences. In this setting we see a large drop. In contrast to the LSTM encoder-decoder, skip-thought's ability to predict length and word content does degrade significantly on the permuted sentences, suggesting that the encoding process of the skip-thought model is indeed specialized towards natural language texts.

9 CONCLUSION

We presented a methodology for performing fine-grained analysis of sentence embeddings using auxiliary prediction tasks. Our analysis reveals some properties of sentence embedding methods:

- CBOW is surprisingly effective - in addition to being very strong at content, it is also predictive of length, and can be used to reconstruct a non-trivial amount of the original word order. 300 dimensions perform best, with greatly degraded word-content prediction performance at higher dimensions.
- With enough dimensions, LSTM auto-encoders are very effective at encoding word order and word content information. Increasing the dimensionality of the LSTM encoder does not significantly improve its ability to encode length, but does increase its ability to encode content and order information. 500-dimensional embeddings are already quite effective for encoding word order, with little gains beyond that. Word content accuracy peaks at 750 dimensions and drops at 1000, suggesting that larger is not always better.
- The trained LSTM encoder (when trained with an auto-encoder objective) does not rely on ordering patterns in the training sentences when encoding novel sequences. In contrast, the skip-thought encoder does rely on such patterns. Its performance on the other tasks is similar to the higher-dimensional LSTM encoder, which is impressive considering it was trained on a different corpus.
- Finally, the encoder-decoder's ability to recreate sentences (BLEU) is not entirely indicative of the quality of the encoder at representing aspects such as word identity and order. This suggests that BLEU is sub-optimal for model selection.

REFERENCES

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Jeffrey L. Elman. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7(2-3):195-225, 1991.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton.
Speech recognition with deep recurrent neural networks. In Proceedings of ICASSP, 2013.

Marco Baroni, Georgiana Dinu, and German Kruszewski. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 238-247, Baltimore, Maryland, June 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/p14-1023.

Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3061-3069, 2015.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of AISTATS, 2011.

Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. A hierarchical neural autoencoder for paragraphs and documents. arXiv preprint arXiv:1506.01057, 2015.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111-3119, 2013b.

Donald B. Rubin. Matching to remove bias in observational studies. Biometrics, pp. 159-183, 1973.

Allen Schmaltz, Alexander M. Rush, and Stuart M. Shieber. Word ordering without syntax. arXiv preprint arXiv:1604.08633, 2016.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Omer Levy and Yoav Goldberg. Linguistic regularities in sparse and explicit word representations. In Proc. of CoNLL, pp. 171-180, Baltimore, Maryland, 2014.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311-318. Association for Computational Linguistics, 2002.

Matthew D. Zeiler. ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.

Sentence Encoders   The bag-of-words (CBOW) and encoder-decoder models are trained on 1 million sentences from a 2012 Wikipedia dump with a vocabulary size of 50,000 tokens. We use NLTK (Bird, 2006) for tokenization, and constrain sentence lengths to be between 5 and 70 words.

For the CBOW model, we train Skip-gram word vectors (Mikolov et al., 2013a), with hierarchical softmax and a window size of 5 words, using the Gensim implementation.4 We control for the embedding size k and train word vectors of sizes k in {100, 300, 500, 750, 1000}.

4https://radimrehurek.com/gensim

For the encoder-decoder models, we use an in-house implementation using the Torch7 toolkit (Collobert et al., 2011). The decoder is trained as a language model, attempting to predict the correct word at each time step using a negative-log-likelihood objective (cross-entropy loss over the softmax layer).
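As a concrete reading of this objective, the sketch below shows a teacher-forced LSTM decoder trained with per-time-step cross-entropy over the softmax layer. It is written in PyTorch purely for illustration; the authors' implementation is in Torch7, and the shapes, single-layer structure, and use of the sentence vector as the initial hidden state are assumptions.

```python
import torch
import torch.nn as nn

vocab, dim = 50_000, 500                  # vocabulary and size k (assumed)
embed = nn.Embedding(vocab, dim)
decoder = nn.LSTM(dim, dim, num_layers=1, batch_first=True)
to_vocab = nn.Linear(dim, vocab)          # projection to the softmax layer
nll = nn.CrossEntropyLoss()               # cross-entropy over the softmax

def decoder_loss(sentence_vec, target_ids):
    """sentence_vec: (batch, dim) encoder output used as initial state;
    target_ids: (batch, T) gold word ids the decoder must reproduce."""
    h0 = sentence_vec.unsqueeze(0)        # (1, batch, dim)
    c0 = torch.zeros_like(h0)
    inputs = embed(target_ids[:, :-1])    # teacher forcing: feed gold words
    out, _ = decoder(inputs, (h0, c0))
    logits = to_vocab(out)                # (batch, T-1, vocab)
    return nll(logits.reshape(-1, vocab), target_ids[:, 1:].reshape(-1))

ids = torch.randint(0, vocab, (4, 12))    # dummy batch for illustration
print(decoder_loss(torch.zeros(4, dim), ids))
```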
We use one layer of LSTM cells for the encoder and decoder, using the implementation in Leonard et al. (2015).

We use the same size for word and sentence representations (i.e., d = k), and train models of sizes k in {100, 300, 500, 750, 1000}. We follow previous work on sequence-to-sequence learning (Sutskever et al., 2014; Li et al., 2015) in reversing the input sentences and clipping gradients. Word vectors are initialized to random values.

We evaluate the encoder-decoder models using BLEU scores (Papineni et al., 2002), a popular machine translation evaluation metric that is also used to evaluate auto-encoder models (Li et al., 2015). The BLEU score measures how well the original sentence is recreated, and can be thought of as a proxy for the quality of the encoded representation. We compare it with the performance of the models on the three prediction tasks. The results of the higher-dimensional models are comparable to those found in the literature, which serves as a sanity check for the quality of the learned models.

Auxiliary Task Classifier   For the auxiliary task predictors, we use multi-layer perceptrons with a single hidden layer and ReLU activation, which were carefully tuned for each of the tasks. We experimented with several network architectures prior to arriving at this configuration.

Further details regarding the training and architectures of both the sentence encoders and auxiliary task classifiers are available in the Appendix.

ENCODER-DECODER

Parameters of the encoder-decoder were tuned on a dedicated validation set. We experimented with different learning rates (0.1, 0.01, 0.001), dropout rates (0.1, 0.2, 0.3, 0.5) (Hinton et al., 2012) and optimization techniques (AdaGrad (Duchi et al., 2011), AdaDelta (Zeiler, 2012), Adam (Kingma & Ba, 2014) and RMSprop (Tieleman & Hinton, 2012)). We also experimented with different batch sizes (8, 16, 32), and found improvement in runtime but no significant improvement in performance.

Based on the tuned parameters, we trained the encoder-decoder models on a single GPU (NVIDIA Tesla K40), with mini-batches of 32 sentences, a learning rate of 0.01, a dropout rate of 0.1, and the AdaGrad optimizer; training takes approximately 10 days and is stopped after 5 epochs with no loss improvement on a validation set.

PREDICTION TASKS

Parameters for the prediction tasks as well as the classifier architecture were tuned on a dedicated validation set. We experimented with one-, two- and three-layer feed-forward networks using ReLU (Nair & Hinton, 2010; Glorot et al., 2011), tanh and sigmoid activation functions. We tried different hidden layer sizes: the same as the input size, twice the input size, and one and a half times the input size. We tried different learning rates (0.1, 0.01, 0.001), dropout rates (0.1, 0.3, 0.5, 0.8) and different optimization techniques (AdaGrad, AdaDelta and Adam).

Our best tuned classifier, which we use for all experiments, is a feed-forward network with one hidden layer and a ReLU activation function. We set the size of the hidden layer to be the same size as the input vector.
We place a softmax layer on top whose size varies according to the specific task, and apply dropout before the softmax layer. We optimize the log-likelihood using AdaGrad. We use a dropout rate of 0.8 and a learning rate of 0.01. Training is stopped after 5 epochs with no loss improvement on the development set. Training was done on a single GPU (NVIDIA Tesla K40).

10 ADDITIONAL EXPERIMENTS - CONTENT TASK

How well do the models preserve content when we increase the sentence length? In Fig. 5 we plot content prediction accuracy vs. sentence length for different models.

[Figure 5: content-prediction accuracy as a function of sentence length for CBOW (100 and 300 dimensions) and the encoder-decoder (500, 750 and 1000 dimensions).]

Figure 5: Content accuracy vs. sentence length for selected models.

As expected, all models suffer a drop in content accuracy on longer sentences. The degradation is roughly linear in the sentence length. For the encoder-decoder, models with fewer dimensions seem to degrade more slowly.

APPENDIX III: SIGNIFICANCE TESTS

In this section we report the significance tests we conducted in order to evaluate our findings. To do so, we use the paired t-test (Rubin, 1973).

All the results reported in the summary of findings are highly significant (p-value < 0.0001). The ones we found to be not significant (p-value > 0.03) are the ones whose accuracies do not differ much, e.g., ED with size 500 vs. ED with size 750 tested on the word order task (p-value = 0.11), or CBOW with dimensions 750 and 1000 on word content (p-value = 0.3).

Table 2: P-values for ED vs. CBOW over the different dimensions and tasks. For example, in the row where dim equals 100, we compute the p-value of ED compared to CBOW with embed size of 100 on all three tasks.

Dim.    Length      Word content    Word order
100     1.77e-147   0.0             1.83e-296
300     0.0         0.0             0.0
500     0.0         0.0             0.0
750     0.0         0.0             0.0
1000    0.0         0.0             0.0

Table 3: P-values for ED models over the different dimensions and tasks

Dim.           Length      Word content    Word order
100 vs. 300    0.0         8.56e-190       0.0
300 vs. 500    7.3e-71     4.20e-05        5.48e-56
500 vs. 750    3.64e-175   4.46e-65        0.11
750 vs. 1000   1.37e-111   2.35e-243       4.32e-61

Table 4: P-values for CBOW models over the different dimensions and tasks

Dim.           Length      Word content    Word order
100 vs. 300    0.0         0.0             1.5e-33
300 vs. 500    1.47e-215   0.0             3.06e-64
500 vs. 750    0.68        0.032           0.05
750 vs. 1000   4.44e-32    0.3             0.08
"}]
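A minimal sketch of the paired test used in this appendix, under the assumption that the pairing is over per-example correctness indicators of two models on the same test items (the exact pairing is not spelled out above). SciPy's ttest_rel implements the paired t-test; the synthetic accuracies below are stand-ins.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
model_a = (rng.random(5000) < 0.80).astype(float)  # e.g., ED, 500 dims
model_b = (rng.random(5000) < 0.81).astype(float)  # e.g., ED, 750 dims

t, p = stats.ttest_rel(model_a, model_b)           # paired t-test
print(f"paired t = {t:.3f}, p-value = {p:.3g}")
```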
HyFkG45gl | [{"section_index": "0", "section_name": "MACHINE SOLVER FOR PHYSICS WORD PROBLEMS", "section_text": "Megan Leszczynski & Jose Moreira\nIBM T.J. Watson Research Center Yorktown Heights. NY 10598 USA\nmel255@cornell.edu, imoreira@us.ibm.com\nWe build a machine solver for word problems on the physics of a free falling object under constant acceleration of gravity. Each problem consists of a formulation part, describing the setting, and a question part asking for the value of an unknown. Our solver consists of two long short-term memory recurrent neural networks and a numerical integrator. The first neural network (the labeler) labels each word of the problem, identifying the physical parameters and the question part of the problem. The second neural network (the classifier) identifies what is being asked in the question. Using the information extracted by both networks, the numerical integrator computes the solution. We observe that the classifier is resilient to errors made by the labeler, which does a better job of identifying the physics parameters than the question. Training, validation and test sets of problems are generated from a grammar, with validation and test problems structurally different from the training problems. The overall accuracy of the solver on the test cases is 99.8%."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "We present a complete system architecture for a machine solver that automatically solves a class of physics word problems, namely classical mechanics of a point particle in free fall. This domair allows us to formulate one dynamical system to which all the physics problems in this domain car be mapped. The dynamical system describes how the state of the particle, defined by its locatior and velocity, changes over time. Correspondingly, the initial conditions for the dynamical systen include the location and velocity of the particle at the time origin..\nGiven the word problem as input, the solver must first learn to extract the parameters needed t produce the dynamical system and also learn to identify the type of question. Two independentl trained recurrent neural networks are used to complete these tasks. The first neural network, referre to as the labeler, learns to find the dynamical system parameters and locate the question within th problem statement. The second neural network, referred to as the classifier, identifies the type o question. Finally, the solver uses a numerical integrator to solve the dynamical system and produc the solution. We use a problem generator in order to produce disjoint datasets as input to the sys tem for training and testing. The generator produces short-answer high school-level physics wor problems with mixed units.\nAutomatically solving word problems has been a research interest of the natural language process ing community for some time, particularly with math word problems. The main challenge is tc develop a semantic representation of the word problem. Kushman et al.[(2014) learned to represent mathematical word problem with a system of equations, by aligning words in the word problem to templates. While their technique learns to induce multiple templates and assumes knowledge o1 numbers and nouns, we assume no knowledge of the words in the text but only map to one template"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "After a brief related work section, we provide a more detailed description of the class of physics problems we address. 
We proceed to describe how the machine solver works and present experimental results. We conclude with a summary of our work and proposals for future work. The appendices contain additional details that did not fit in the body of the paper.

Another study to solve math word problems was done by Hosseini et al. (2014). This study also assumes the ability to identify numbers and nouns in the text and uses a dependency parser to determine relationships between words in the text. Like the other study, this approach generalizes to math word problems that require different equations. Shi et al. (2015) similarly used a parser to solve math word problems. However, their parser maps the word problems to a carefully defined language they created called DOL, from which equations can be derived. Rather than use a parser to break down the word problems, we use neural networks to learn to identify key pieces of information. Our study is, to our knowledge, the first to apply recurrent neural networks to the task of solving word problems.

We chose to use recurrent neural networks (RNNs) for the labeler and the classifier, as both of their inputs consist of sequences of words. Recurrent neural networks are commonly used to process sequences, and as a result have found application in natural language processing tasks such as machine translation (Cho et al., 2014b) and speech recognition (Graves et al., 2013). After experimenting with different models, we obtained the most success with Long Short-Term Memory (LSTM) variants of RNNs. For additional discussion of RNNs in general, and LSTMs in particular, we refer the reader to Appendix A.

We consider the following class of physical systems (see Figure 1(a)): In a two-dimensional space with gravity producing a downward constant acceleration g, there is one particle in free fall. That is, no forces other than gravity are acting on the particle. Movement of the particle starts at time t = 0 with an initial position defined by displacements d1 and d2 and an initial velocity with components v1 and v2.

The time behavior of the particle can be represented by the dynamical system shown in Figure 1(b). The state vector x(t) = [x1(t), x2(t), \dot{x}_1(t), \dot{x}_2(t)]^T consists of two positions and two velocities, and its derivative depends only on itself and the acceleration of gravity, as shown in the figure. Combined with the initial condition x(0) = [d1, d2, v1, v2]^T, the differential equation produces a unique solution:

$$
\frac{d}{dt}
\begin{bmatrix} x_1(t) \\ x_2(t) \\ \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix}
=
\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \\ \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix}
+
\begin{bmatrix} 0 \\ 0 \\ 0 \\ -g \end{bmatrix},
\qquad
\begin{bmatrix} x_1(0) \\ x_2(0) \\ \dot{x}_1(0) \\ \dot{x}_2(0) \end{bmatrix}
=
\begin{bmatrix} d_1 \\ d_2 \\ v_1 \\ v_2 \end{bmatrix}
$$

Figure 1: Physics domain (a): We consider a two-dimensional space with a free falling particle. Displacements d1 and d2 define the initial position of the particle, while v1 and v2 define its initial velocity. Gravity produces a constant acceleration g pointing straight down. The behavior of the particle is defined by the dynamical system shown in (b).

Our machine solver computes answers to word problems in the domain just described. The word
problem must specify, sometimes indirectly, the five parameters of the dynamical system (d1, d2, v1, v2, and g). It must also include a question that can be answered by computing the time behavior of the system. We discuss how our machine solver works in the next section.

4 MACHINE SOLVER

In this section we describe the machine solver, which is composed of two recurrent neural networks and a numerical integrator. The top-level system block diagram is shown in Figure 2.

[Figure 2: block diagram - a TRANSLATE step maps the word problem to the dynamical system x' = Ax + Bu, and a SOLVE step maps the dynamical system to the solution.]

Figure 2: The first step from word problem to dynamical system is accomplished via neural networks. The second step from dynamical system to solution is achieved with a numerical integrator.

4.1 NEURAL NETWORK ARCHITECTURES

The data flow through the labeler and classifier neural networks is shown in Figure 3. We used TensorFlow (a trademark of Google Inc.) to develop the neural network models for both the labeler and the classifier. TensorFlow is an open source library from Google that allowed us to easily explore different models and training settings with already implemented RNN cells and optimizers (Abadi et al., 2015). We quickly experimented with the provided optimizers to find the optimal optimizer for each network.

[Figure 3: data flow - the labeler (RNN) maps the words of the problem to per-word labels, from which the parameters of the dynamical system are extracted; the classifier (RNN) maps the question to a question type.]

The labeler is an LSTM network with one hidden layer of ten units. Figure 4 shows an example of the data flow through the labeler. The input to the labeler is the full problem statement and the output is a label for each word. The words are input into the labeler via an embedding that is randomly initialized and trained simultaneously with the weights and biases. The weights are also randomly initialized and the biases are initialized to zero. To limit the exploration of the parameter space, we set the dimension of the embedding to equal the number of hidden units.

[Figure 4: example labeler input/output - for the input "Let the acceleration of gravity be 32 ft/s2 ... How far ...?", the output labels are O O O O O O G A_UNIT ... QUEST QUEST ... QUEST.]

Figure 4: Example of input to labeler with expected output. A label is associated with each word, where O indicates other, or a word not needed for the dynamical system translation. Input text is shortened for the example.

The chosen RNN model is one that produces an output at each time step and has recurrent connections between hidden units, as described by Goodfellow et al. (2016) in Chapter 10, Figure 10.3. At each step of the input sequence, the RNN receives a word embedding and outputs a label for the word. The label output at each time step can fall into one of the ten categories shown in Table 1. In addition to tagging words for their relevance to the dynamical system formulation, we tag the question part of the word problem to pass to the classifier.

We use three measures to assess the performance of the labeler: label accuracy, question accuracy, and overall accuracy. Label accuracy is measured as having matching labels in the predicted and expected (generated) labels, not including the question part of the word problem. Question accuracy is measured as having both the first word of the question and the last word of the question labeled correctly, as label-based post-processing to extract the question relies only on these indices. Overall accuracy is measured as meeting both the label and question accuracy criteria.
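For concreteness, here is a present-day Keras sketch of a labeler with this shape: one 10-unit LSTM layer over jointly trained 10-dimensional embeddings, emitting one label per word. The original was built against the 2015-era TensorFlow API, so this is only an approximation; the vocabulary size is a placeholder.

```python
import tensorflow as tf

vocab_size, num_labels, hidden = 1000, 10, 10   # vocab size assumed

labeler = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, hidden),        # embeddings trained jointly
    tf.keras.layers.LSTM(hidden, return_sequences=True),  # one output per word
    tf.keras.layers.Dense(num_labels, activation="softmax"),
])
labeler.compile(optimizer=tf.keras.optimizers.Adam(0.1),
                loss="sparse_categorical_crossentropy")
```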
Table 1: Possible output word labels and corresponding dynamical system parameters

LABEL     DESCRIPTION                                      PARAMETER
QUEST     Question                                         -
G         Value for gravity                                g
A_UNIT    Unit for acceleration (gravity)                  g
D_UNIT    Unit for initial height                          d2
HEIGHT    Initial height value or height of each story     d2
V_UNIT    Unit for velocity                                v1, v2
V         Initial velocity magnitude                       v1, v2
THETA     Angle of initial movement                        v1, v2
STORY     Value for number of stories (if applicable)      d2
O         Other                                            -

We train the labeler with TensorFlow's Adam Optimizer, an initial learning rate of 0.1, and a mini-batch size of 100 word problems. The Adam Optimizer uses adaptive learning rates and is particularly effective with sparse gradients (Kingma & Ba, 2014). We use early stopping based on validation accuracy or when the training accuracy stops improving. We chose the network architecture and training settings after performing a limited grid search across the number of layers, the number of units per layer, and the learning rate (see Appendix B).

After the labeler assigns a label to each word, a post-processing step maps the labels to the dynamical system parameters, converting the initial conditions and value of gravity to SI units if necessary.

The classifier is an LSTM network with one hidden layer of 1,000 units. An example of the data flow through the classifier is shown in Figure 5. For the problems in our dataset, the formulation part of the word problem does not provide information necessary to classify the type of question. Moreover, as sequences become longer, the performance of RNNs tends to decrease (Pascanu et al., 2013). Armed with these two observations, we chose to have only the question part of the word problem as the input sequence to the classifier.

[Figure 5: example classifier input/output - the question "How far has the rock traveled when it strikes the ground?" is mapped to the question type (x1 : x2 = 0).]

Figure 5: Example of input to classifier with expected output. Symbol x1 refers to horizontal displacement and symbol x2 refers to vertical displacement.

As with the labeler, we encode the words of the sequence into word embeddings, matching the dimension of the word embedding to the number of hidden units, and training them with the weights and biases. In this case, a sequence is one question. Unlike the labeler, there is only one output for each sequence, occurring on the last step of the sequence; see Chapter 10, Figure 10.5 of Goodfellow et al. (2016) for an illustration. The singular output is the type of question, which can fall into one of the nine types shown in Table 2.

The classifier is trained with TensorFlow's Gradient Descent Optimizer, an initial learning rate of 0.5, and a mini-batch size of 100 questions. As with the labeler, we performed a grid search to choose these hyperparameters (see Appendix B).

4.2 NUMERICAL INTEGRATOR

The numerical integrator computes the evolution over time of the dynamical system shown in Figure 1(b). As input it receives the initial conditions, the value of g, and the type of question extracted by the labeler and the classifier. Using SciPy's ordinary differential equation integrator, a table of values representing the system's state up to the point that the object hits the ground is iteratively constructed.
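A minimal sketch of this table-building loop using SciPy's odeint. The step size, stopping test, and parameter values here are illustrative assumptions; the precision refinement described next is omitted.

```python
import numpy as np
from scipy.integrate import odeint

def deriv(x, t, g):
    """State x = [x1, x2, v1, v2]; free fall under gravity only."""
    x1, x2, v1, v2 = x
    return [v1, v2, 0.0, -g]

def trajectory(d1, d2, v1, v2, g, dt=0.01):
    """Integrate until the object reaches the ground (x2 <= 0)."""
    t, rows = 0.0, [np.array([d1, d2, v1, v2])]
    while rows[-1][1] > 0.0:
        seg = odeint(deriv, rows[-1], [t, t + dt], args=(g,))
        rows.append(seg[-1])
        t += dt
    return np.array(rows)        # table of states, one row per step

table = trajectory(d1=0.0, d2=20.0, v1=3.0, v2=10.0, g=9.81)
print("max height:", table[:, 1].max())
```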
The numerical solution is refined to a precision of 0.001 (one part in a thousand), based on the type of the question. For example, if the question is about the maximum height, we produce a first instance of the table, find the maximum height in that table, and then search for the maximum around that value with increased precision, repeating until we reach the desired precision. Finally, the question type is used to determine which value from the table to output from the solver. This data flow is shown in Figure 6.

Table 2: Possible Output Question Types

(x1 : max)             maximum horizontal displacement
(x2 : max)             maximum height
(speed : max)          maximum speed
(x1 : max height)      horizontal displacement at maximum height
(speed : max height)   speed at maximum height
(time : max height)    elapsed time at maximum height
(x1 : x2=0)            horizontal displacement when the object reaches the ground
(speed : x2=0)         speed when the object reaches the ground
(time : x2=0)          elapsed time when the object reaches the ground

[Figure 6: the dynamical system feeds the numerical integrator; the question type selects ("CHOOSE ONE") which value of the solution is output.]

Figure 6: Outputs from the labeler and the classifier feed into the numerical integrator, where the labeler outputs form the dynamical system to integrate and the classifier outputs control the focus and output of the integrator.

4.3 TRAINING, VALIDATION, AND TEST SETS

We define the word problems with a grammar that is provided in the Appendix. The word problems in the training, validation, and test sets are exclusively made up of problems that follow the specifications laid out by the grammar. The grammar allows for mixed units, meaning that within the same problem, the height may have a metric unit, while the velocity may have a U.S. customary unit. The grammar also permits the initial conditions to be exposed in multiple ways. For instance, a theta value and speed will be provided in some problems, from which the solver would need to calculate the initial vertical velocity using the theta, whereas in other problems no theta value may be provided. Using mixed units and varying numbers of values to provide information about each initial condition allows us to increase the complexity of the problems within the scope of the dynamical system.

The grammar also ensures that the training set is disjoint from the validation and test sets, particularly in structure. Examples of generated problems are shown below in Figure 7. This is vital in assessing the ability of the trained networks to generalize.

Assume the acceleration due to gravity is 85 ft/s2. A ping pong ball is dropped from the top of a story building, where each story is 89 m. What is the maximum speed the ping pong ball obtains?

A chair is launched at a speed of 51 mph and an angle from the horizontal of 28 degrees. Let the acceleration due to gravity on Planet Watson be 98 m/s2. How much time has passed when it reaches its maximum height?

Figure 7: Examples of generated problems that adhere to the grammar

We implement the grammar in Python. When a new problem is instantiated, the grammar rules are descended to build up the problem, making random choices when choices are available. Labels for each problem are also automatically generated. The complete generative model is shown in Figure 8. By using a problem generator to build our datasets, we are also free to choose the size of the dataset. Our problem generator is capable of generating ~26,000 different training problems and ~22,000 different test and validation problems.

[Figure 8: the grammar-based generator produces the inputs and expected outputs for both networks - (words of the formulation plus question, labels) for the labeler and (words of the question, question type) for the classifier.]

Figure 8: The generative model allows us to generate the input and output for the neural networks without requiring any manual annotation.
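A toy cut-down of such a generator, descending a few of the grammar's productions with uniform random choices. The full grammar (Appendix E) has many more rules and also emits the per-word labels; the subset of objects and value ranges below is taken from that grammar.

```python
import random

OBJECTS = ["golf ball", "stone", "chair", "feather"]   # training objects
ACCEL_UNITS = ["m/s2", "ft/s2"]

def gen_problem():
    """Descend a handful of grammar rules, choosing uniformly at random."""
    obj = random.choice(OBJECTS)
    g = random.randint(1, 100)
    unit = random.choice(ACCEL_UNITS)
    speed = random.randint(0, 99)
    angle = random.randint(1, 89)
    return (f"A {obj} is launched at a speed of {speed} m/s and an "
            f"elevation of {angle} degrees. Assume the acceleration "
            f"due to gravity is {g} {unit}. "
            f"What is the maximum height the {obj} reaches?")

print(gen_problem())
```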
The datasets consisted of 7,000 word problems for training, 2,000 word problems for validation, and 1,000 word problems for test. The progress of training over time is shown in Figure 9. As can be seen in the left graph, the labeler learns to identify the beginning and end of the question faster than it learns to correctly predict the labels. The overall accuracy of the labeler is both limited by and equivalent to that of the label accuracy. With this particular model of the labeler, there is no problem for which the labeler correctly predicts the non-question labels but incorrectly locates the question.

The training accuracy for the label, question, and overall measures reaches 100% by the end of the first epoch. The classifier also reaches 100% accuracy on the training set by the end of the first epoch. The epoch is broken down into fractions, as the training accuracy is evaluated every seven mini-batches of 100 problems.

[Figure 9: labeler accuracy (%) on training data (overall, tag, and question) and classifier accuracy (%) on training data, plotted against the fraction of the first training epoch.]

Figure 9: Training accuracy of labeler (left) and classifier (right)

The accuracies on the test set, after the labeler and classifier have been independently trained, are shown in Table 3. The accuracy of the combined RNN system amounts to an overall accuracy of 99.8%. The labeler achieves 100% accuracy on predicting the non-question labels and incurs a small error on predicting the beginning and end of the question. As a result, the question that is extracted based on the labeler's predictions does not always match the true question. However, based on the classifier's accuracy of 99.8%, the classifier is often resilient to the errors that the labeler makes in extracting the question. While the labeler incorrectly extracts ninety-one questions, the classifier only incorrectly classifies two questions from a test set of 1,000 word problems. Figure 12 in Appendix C shows examples of the labeler's errors and how the classifier handles them.

We note that for the two wrongly classified cases, both shown in Figure 12, the classification error is the same. That is, a question that should be about the speed of the object when it hits the ground is classified as a question about the maximum speed the object reaches. The numerical answer to the problem is the same for both classes of question. Therefore, even in the case of wrongly classified questions, the system produces the right answer.

The high accuracy of the labeler and classifier is not a total surprise. LSTMs have been shown to be very effective in learning context-free and even context-sensitive languages (Gers & Schmidhuber, 2001; Cleeremans et al., 1989; Rodriguez, 2001), including the ability to generalize and recognize structures not seen before. Our training, validation and test sets are from a regular language, as described in Appendix E, so an LSTM should do well in learning them. In fact, we have seen situations (with the training, validation and test sets all with distinct structures) where the labeler and classifier both achieve perfect accuracy on all test problems. We decided to include the data on the
\"not so perfect\" case because it illustrates some important points (Figure|12)..\nThe trained variables for both models consist of word embeddings for input to the RNN, and weigh and biases within the RNN and from the RNN to the final output. We focus our evaluation on th RNN weights, as we believe these are more specific to the our physics problem solver. For a evaluation of the word embeddings, please see Appendix|D\nThe distributions of weights for the labeler and classifier are shown in figures|10] As the labeler was an LSTM network, there are weights from the input and the previous hidden values to input, forget and an output gates, as well as to the memory cells. While there appears to be a high concentration of negative weights to the output gate and positive weights to the input gate, this is likely a result of random initialization of the weights as this pattern was not consistently found with other randon initializations. The output weights, which go from the output of the LSTM cell's hidden units tc the target labels, have a slightly wider range. The few number of zero weights indicates that the majority outputs from the hidden units of the LSTM cell contribute to making the final prediction o1 the label.\nThe LSTM weight distribution for the classifier is more uniform and compressed than that of the labeler. We believe this is due to the great increase in parameters since the classifier has 1,000- dimensional embeddings and 1,00o hidden units, leading to 8 million weights (Karpathy et al. 2015). We predict that each piece of information captured by the trained embeddings and hidden units makes a less significant contribution to the final prediction than with the labeler, as indicated by the classifier's smaller weight values. The range of the output values for the output weights similarly contributes to this prediction, with a very small range of weights which are mostly concentrated around zero.\nAfter examining the general distribution of weights, we also wanted to explore potential patterns of specific weights. We chose to explore the heat map of the weights for labeler since there are a magnitude fewer connections, allowing the patterns to be more readily examined. We include the heat map of the weight matrices for the connections between the hidden units of the labeler to the output predictions in Figure[11 Looking at the heat map, hidden units 3 and 8 seem to have a similar weight distribution across the output categories. We also see seemingly logical pairs forming, such\nTable 3: Accuracies shown are on the test set of word problems for the system. The classifier is fed the extracted questions as identified by the labeler. The combined RNN system accuracy is based. on the final output of the system having the same dynamical system parameters and question type as. the generated output for a word problem..\n8- 2 6 0 4 2 2 4 0- -O QUEST A_UNIT D_UNIT HEIGHT V_UNIT V 1 THETA STORY Output Category\nFigure 11: Heat map for labeler weights from LSTM hidden layer to output layer\nas the strong positive weights associated with D UNIT and HEIGHT for hidden unit 6 and for and THETA for hidden unit O. However, there are also features that are challenging to explain such as the strong positive contribution hidden unit 4 makes to predicting THETA while making a1 equally strong negative contribution to predicting STORY.\nWe have developed a machine solver for a word problems on the physics of a free falling object in. two-dimensional space with constant acceleration of gravity. 
The solver has three main components. The labeler labels each word of the problem to identify the parameters of a canonical dynamical system that describes the time evolution of the object, and the part of the problem that corresponds to the question being asked. The classifier classifies the question part. Finally, an integrator is used to solve the dynamical system, producing a numerical answer to the problem.

A grammar-based generator is used to produce the training, validation and test sets of problems for the neural networks. The grammar is specified so that the validation and test problems are structurally different from the training problems. We use a total of 10,000 generated problems, partitioned into 7,000 for training, 2,000 for validation and 1,000 for testing.

When measured against the test set of 1,000 problems, the dynamical system parameters are correctly identified in all of them. The question part is precisely identified in 909 cases, but because the classifier can work with partial questions, in the end all but 2 questions are classified correctly. Therefore, the combined accuracy of the two neural networks, for the purpose of solving the physics problems, is 99.8%.

There are several opportunities for future work. First, we would like to investigate more deeply how our neural networks work; in particular, what features of the word problem they are identifying and how specific units are responsible for that identification. Second, we could extend our solver by considering more complex physical situations, including additional forces, three-dimensional motion, multiple objects, and so on. We would have to extend our canonical dynamical system to represent those situations and/or use a collection of dynamical systems. We expect that the complexity of the neural networks and the training/validation/test sets will grow accordingly. Finally, the more ambitious goal would be to remove the canonical dynamical system(s) and train the networks to build their own. We believe this would be closer to the way humans solve these physics problems.

REFERENCES

Martin Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems, 2015. Software available from http://tensorflow.org.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, November 1997.

Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman.
Learning to solve arithmetic word problems with verb categorization. In Proceedings of EMNLP, pp. 523-533, 2014.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2014.

Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. Learning to automatically solve algebra word problems. 2014.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), 2014b.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MIT Press. Available from http://www.deeplearningbook.org, 2016.

Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. Automatically solving number word problems by semantic parsing and reasoning. In EMNLP, 2015.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008.

A RECURRENT NEURAL NETWORKS

The labeler and classifier are both recurrent neural networks (RNNs). We provide background information on RNNs in this section, followed by an overview of Long Short-Term Memory (LSTM) networks, which are an advanced type of RNN and were used to build our networks. A recurrent neural network receives the previous values of the hidden layer as input, in addition to the current input values into the network. Thus each hidden unit retains information about the history of the sequence. As explained in Goodfellow et al. (2016), the fundamental behavior of recurrent neural networks can be captured in the following equation:

$$h^{(t)} = f(h^{(t-1)}, x^{(t)}; \theta)$$

where $h^{(t)}$ represents the state of the RNN unit at time t, $x^{(t)}$ represents the current input, and $\theta$ represents the weights and biases. The function f is usually the hyperbolic tangent (Karpathy et al., 2015). It is important to note that the weights and biases are reused across time. Thus, while an RNN with one hidden layer can be unfolded in time into many layers, the weights and biases between each of the unfolded layers are shared.

A limitation of the basic recurrent neural network described above is that it cannot retain information over long sequences. If a key piece of information for predicting an output at the end of a long sequence occurs at the very beginning of the sequence, the basic recurrent neural network will likely fail as a result of training difficulties. A popular solution for this limitation is the Long Short-Term Memory (LSTM) - essentially a highly capable, more complex type of recurrent neural network (Hochreiter & Schmidhuber, 1997). An LSTM is composed of a memory cell, and input, output, and forget gates that determine how to modify and reveal the contents of the memory cell. Each of these gates has its own set of weights and biases that are connected to the inputs. Therefore the number of weights within a layer of an LSTM is quadrupled from that of a basic recurrent neural network to $2n \times 4n$, where n is the number of hidden units in the layer, assuming each layer has the same number of units. The 2n is from the input being a concatenation of the output from the previous hidden layer (in time) with the current input, as occurs for all RNNs, and the 4n is for the connections to each of the three gates as well as to the memory cell input. More specifically, the equations for the LSTM are as follows (Graves, 2013; Zaremba et al., 2014):

$$
\begin{pmatrix} i \\ f \\ o \\ j \end{pmatrix}
=
\begin{pmatrix} \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \tanh \end{pmatrix}
T_{2n,4n}
\begin{pmatrix} h_t^{l-1} \\ h_{t-1}^{l} \end{pmatrix},
\qquad
c_t = f \odot c_{t-1} + i \odot j,
\qquad
h_t = o \odot \tanh(c_t)
$$

As both of our neural network models have only one hidden layer, $h_t^{l-1}$ merely refers to the current input. $T_{2n,4n}$ refers to the weight and bias transformation $Wx + b$ applied to the concatenated hidden layer inputs. The hyperbolic tangent and sigmoid functions are applied element-wise. The variables i, f, o, and j refer to the input gate, forget gate, output gate, and cell input, respectively.
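The equations above can be made concrete with a few lines of NumPy. This is an illustrative single step for one layer; the random weight matrix of shape (2n, 4n) and zero biases are assumptions.

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step following the equations above.
    x, h_prev, c_prev: (n,) vectors; W: (2n, 4n); b: (4n,)."""
    z = np.concatenate([x, h_prev]) @ W + b   # the T_{2n,4n} transform
    i, f, o, j = np.split(z, 4)
    i, f, o, j = sigm(i), sigm(f), sigm(o), np.tanh(j)
    c = f * c_prev + i * j                    # memory cell update
    h = o * np.tanh(c)                        # gated output
    return h, c

n = 10
rng = np.random.default_rng(0)
h, c = lstm_step(rng.normal(size=n), np.zeros(n), np.zeros(n),
                 rng.normal(size=(2 * n, 4 * n)) * 0.1, np.zeros(4 * n))
```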
Another potential solution to the inability of the basic recurrent neural network to capture long-term dependencies is the Gated Recurrent Unit (GRU) (Cho et al., 2014a); however, we had the most success with the LSTM for our specific labeler and classifier tasks.

B CHOOSING THE RIGHT RNN CONFIGURATION

We selected the models for our RNNs by performing a grid search over the learning rate, the number of units, and the number of layers. The results of the grid search for the labeler recurrent network are shown in Table 4 and the results for the classifier network are shown in Table 5. For each RNN we chose the most efficient model, in that it requires the least space and obtains the greatest accuracy with the lowest training time.

Table 4: The chosen RNN network for the labeler has one layer of ten units with a learning rate of 0.1. The notation x/y/z means x for overall accuracy, y for label accuracy, and z for question accuracy, where accuracy is given as a proportion of correct predictions over total predictions. All results shown use TensorFlow's Adam Optimizer and LSTM cell.

                                  Learning Rate
Layers  Units   0.01                 0.1                  0.5
1       10      0.197/1.000/0.197    0.911/1.000/0.911    0.001/0.110/0.032
        100     0.850/1.000/0.850    0.763/0.932/0.814    0.196/0.207/0.587
        1000    0.048/0.281/0.525    0.882/0.907/0.955    0.225/0.230/0.975
2       10      0.000/0.000/0.000    0.037/0.099/0.048    0.005/0.009/0.354
        100     0.096/0.337/0.096    0.000/0.000/0.000    0.000/0.000/0.000
        1000    0.000/0.000/0.000    0.000/0.000/0.000    0.000/0.000/0.000
3       10      0.000/0.000/0.015    0.021/0.132/0.059    0.000/0.000/0.000
        100     0.076/0.442/0.091    0.000/0.000/0.000    0.000/0.000/0.000
        1000    0.000/0.000/0.000    0.000/0.000/0.000    0.000/0.000/0.000

Table 5: The chosen network for the classifier has one layer of 1,000 units. The values shown are accuracies given as a proportion of the number of correctly predicted classifications over total classifications. All results use TensorFlow's Gradient Descent Optimizer and LSTM cell.

                      Learning Rate
Layers  Units   0.01     0.1      0.5
1       10      0.193    0.486    0.830
        100     0.774    0.801    0.889
        1000    0.980    0.997    1.000
2       10      0.163    0.424    0.637
        100     0.833    0.875    0.819
        1000    1.000    1.000    0.724
3       10      0.297    0.656    0.482
        100     0.867    0.907    0.539
        1000    1.000    1.000    0.695

Interestingly, for the classifier, we see that models with two or three layers and lower learning rates achieve an accuracy equivalent to the one-layer model. However, they are inferior to the one-layer model in that the multi-layer models require more space and usually take longer to train.
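Structurally, the search is just a product over the three hyperparameter axes. In the sketch below, train_and_eval is a stand-in for training the network and returning validation accuracy; the dummy scorer is ours, purely so the snippet runs.

```python
from itertools import product

def train_and_eval(layers, units, lr):
    """Placeholder scorer; in the real study this trains the RNN and
    returns validation accuracy (as in Tables 4 and 5)."""
    return -abs(lr - 0.1) - 0.001 * units * layers   # dummy surrogate

configs = list(product([1, 2, 3], [10, 100, 1000], [0.01, 0.1, 0.5]))
best = max(configs, key=lambda cfg: train_and_eval(*cfg))
print("best (layers, units, learning rate):", best)
```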
This section illustrates examples of the labeler network incorrectly extracting the question. In each of these cases, the classifier receives the labeler's incorrect output as input; the classifier's handling of these errors is shown in Figure 12.

(1) Labeler input: Let the acceleration due to gravity on Planet Watson be 65 ft/s^2. A ping pong ball is released from the top of a story building, where each story is 79 m. What is the maximum speed the ping pong ball obtains?

(2) Labeler input: Assume the acceleration due to gravity is 49 m/s^2. A ping pong ball is launched at a speed of 35 m/s and an elevation of 88 degrees. What is the magnitude of the velocity of the ping pong ball just before it touches the ground?

Labeler output / classifier input: What is the magnitude of the velocity of the
Classifier output: (speed : max)
Expected output: (speed : x2=0)

(3) Labeler input: Let the acceleration due to gravity on Planet Watson be 71 ft/s^2. A ping pong ball is thrown at a speed of 53 mph and an elevation of 52 degrees. What is the magnitude of the velocity of the ping pong ball just before it touches the ground?

Labeler output / classifier input: What is the magnitude of the velocity of the
Classifier output: (speed : max)
Expected output: (speed : x2=0)

Figure 12: Examples of incorrectly extracted questions from the labeler and the classifier's response to them. In all three cases, the question is cut short. The classifier still makes the correct classification for the first case, but fails for the second and third cases.

D WORD EMBEDDINGS

To input the words into both RNNs, the words were first encoded as word embeddings. Word embeddings map words to a multi-dimensional space, providing the words with numerical representations which expose relationships between words. The final embeddings for the labeler network are 10-dimensional, and the embeddings for the classifier network are 1,000-dimensional. Rather than use Word2Vec, we chose to train the embeddings simultaneously with the weights and biases. We were interested in seeing if embeddings trained for a particular task could capture intuitive word features, as can often be seen with embeddings trained with Word2Vec (Mikolov et al., 2013).

In order to explore the results of the trained embeddings, we used scikit-learn's implementation of t-SNE to map the high-dimensional embeddings down to two dimensions (van der Maaten & Hinton, 2008). The results from t-SNE are shown in Figure 13. Words appear exactly as they appear in the word problem, and no stemmers are used.

The embeddings from the labeler network seem more intuitive, as numbers and similar units, such as "m/s", "mph", and "ft/s", are mapped to similar regions. We had hypothesized that the embeddings may capture some word function related to the task the embeddings were being trained to perform. However, the objects seem to be distributed throughout the space and have no easily distinguishable pattern, despite playing a similar functional role in each word problem. It is even more difficult to discern any patterns from the embeddings from the classifier network. We do see that words such as "traveling", "traveled", and "travels" map near each other, as well as question words "What" and "How".
We predict that the limited vocabulary in the question space of only forty words may contribute to these more perplexing results by reducing the effectiveness with which t-SNE can determine the similarity between words.

[Figure 13: two-dimensional t-SNE projections of the trained word embeddings; only scattered, partially garbled word labels survive extraction.]

Figure 13: Top: The embeddings from the labeler network for the top 100 most frequent words in the word problems. Bottom: The embeddings from the classifier network for all words in the questions.

E WORD PROBLEM GRAMMAR

Notation: "object" is used as a parameter in order to enforce consistency between parts of the problem. Within a word problem, the same object must appear wherever an object symbol occurs. As used in the question part of the grammar, "x1" indicates horizontal displacement and "x2" indicates vertical displacement. When used with numbers, "..." indicates the sequence of numbers continues in between the bars.

(object) ::= (training_object) | (val_test_object)
(training_object) ::= golf ball | stone | chair | feather | soccer ball | rock | cannonball
(val_test_object) ::= pebble | ping pong ball | vacuum | tennis ball | basketball | hat
(formulation(object)) ::= (training_formulation(object)) | (val_test_formulation(object))
(training_formulation(object)) ::= A object is (action). (assumption)
(val_test_formulation(object)) ::= (assumption). A object is (action).
(assumption) ::= Let the acceleration due to gravity on Planet Watson be (acceleration). | Assume the acceleration due to gravity is (acceleration).
(acceleration) ::= (accel_value) (accel_unit)
(accel_value) ::= 1 | 2 | 3 | ... | 100
(accel_unit) ::= m/s2 | ft/s2
(action) ::= (moving) | (stationary)
(moving) ::= (descent) | (projectile)
(descent) ::= descending at a speed of (speed) | moving downwards at a speed of (speed)
(projectile) ::= (proj_verb) at a speed of (speed) and an (angle_word) of (angle) degrees
(proj_verb) ::= thrown | fired | launched
(speed) ::= (speed_value) (speed_unit)
(speed_value) ::= 0 | 1 | 2 | ... | 99
(speed_unit) ::= m/s | ft/s | mph
(angle_word) ::= elevation | angle from the horizontal
(angle) ::= 1 | 2 | 3 | ... | 89
(stationary) ::= (stat_verb) from (location)
(stat_verb) ::= released | dropped | let go

(training_max_x2(object)) ::= What is the maximum height the object reaches?

Whenever the grammar dictates a choice of construct (for example, when selecting the object of a word problem), a uniform random number generator is used to select one of the valid constructs. Therefore, the frequency of a particular form in the training, validation and test sets ultimately depends on how many random choices are necessary to produce that form and how many variations there are in each choice.

Table 6 illustrates the simple case of occurrence counts of the different objects in our word problems. The training set uses seven different objects, while the validation and test sets use six objects. Not surprisingly, each object in the training set appears in approximately 1/7 of the total number of problems in that set. Meanwhile, each object in the validation and test sets appears in approximately
1/6 of the total number of problems in those sets.

A more interesting situation is illustrated in Table 7 for the occurrence counts of question types. As shown in Table 2, there are nine different question types. However, the grammar works by first choosing one of two groups of questions: either max-type questions (the first three in Table 2) or conditional-type questions (the last six in Table 2). Within each group, there is equal probability for each question type. Consequently, as Table 7 shows, each of the max-type questions is approximately twice as common as each of the conditional-type questions.

Table 6: Occurrence counts for different objects in word problems

(a) training set            (b) validation set            (c) test set
object        count         object           count        object           count
golf ball     1052          pebble           336          pebble           156
stone         1007          ping pong ball   342          ping pong ball   159
chair          987          vacuum           316          vacuum           165
feather       1020          tennis ball      355          tennis ball      163
soccer ball    965          basketball       325          basketball       178
rock           989          hat              326          hat              179
cannonball     980

Table 7: Occurrence counts for different question types

(a) training set                  (b) validation set                (c) test set
class                  count      class                  count      class                  count
(x1 : max)             1163       (x1 : max)             326        (x1 : max)             168
(speed : max)          1157       (speed : max)          349        (speed : max)          180
(x2 : max)             1120       (x2 : max)             325        (x2 : max)             166
(speed : max height)    610       (speed : max height)   160        (speed : max height)    64
(time : max height)     602       (time : max height)    158        (time : max height)     92
(x1 : x2=0)             598       (x1 : x2=0)            160        (x1 : x2=0)             88
(time : x2=0)           596       (time : x2=0)          194        (time : x2=0)           75
(speed : x2=0)          585       (speed : x2=0)         180        (speed : x2=0)          77
(x1 : max height)       569       (x1 : max height)      148        (x1 : max height)       90
"}]
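The roughly 2:1 ratio in Table 7 follows directly from the two-stage sampling just described: first a group is chosen uniformly, then a type uniformly within the group.

```python
# Expected question-type frequencies implied by the two-stage choice.
p_max_type = 0.5 * (1 / 3)       # three max-type questions
p_cond_type = 0.5 * (1 / 6)      # six conditional-type questions
print(p_max_type / p_cond_type)  # -> 2.0, matching Table 7's ~2:1 ratio
```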
HJTzHtqee | [{"section_index": "0", "section_name": "A COMPARE-AGGREGATE MODEL FOR MATCHING TEXT SEQUENCES", "section_text": "Shuohang Wang\nSchool of Information Systems Singapore Management University\nshwang.2014@phdis.smu.edu.sq\nMany NLP tasks including machine comprehension, answer selection and text en- tailment require the comparison between sequences. Matching the important units between sequences is a key to solve these problems. In this paper, we present a general \"compare-aggregate\"' framework that performs word-level matching fol lowed by aggregation using Convolutional Neural Networks. We particularly fo cus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple com parison functions based on element-wise operations can work better than standard neural network and neural tensor network."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Many natural language processing problems involve matching two or more sequences to make a decision. For example, in textual entailment, one needs to determine whether a hypothesis sentence can be inferred from a premise sentence (Bowman et al.J2015). In machine comprehension, given a passage, a question needs to be matched against it in order to find the correct answer (Richardson et al.]2013] Tapaswi et al.]2016). Table[1gives two example sequence matching problems. In the first example, a passage, a question and four candidate answers are given. We can see that to get the correct answer, we need to match the question against the passage and identify the last sentence to be the answer-bearing sentence. In the second example, given a question and a set of candidate answers, we need to find the answer that best matches the question. Because of the fundamental importance of comparing two sequences of text to judge their semantic similarity or relatedness, sequence matching has been well studied in natural language processing.\nA common trait of a number of these recent studies on sequence matching problems is the use of a \"compare-aggregate\" framework (Wang & Jiang 2016b]He & Lin]2016Parikh et al.]2016). Ir such a framework, comparison of two sequences is not done by comparing two vectors each rep resenting an entire sequence. Instead, these models first compare vector representations of smalle units such as words from these sequences and then aggregate these comparison results to make the final decision. For example, the match-LSTM model proposed by Wang & Jiang(2016b) for tex tual entailment first compares each word in the hypothesis with an attention-weighted version of the premise. The comparison results are then aggregated through an LSTM.He & Lin (2016) proposec a pairwise word interaction model that first takes each pair of words from two sequences and applies a comparison unit on the two words. It then combines the results of these word interactions using a similarity focus layer followed by a multi-layer CNN.Parikh et al.[(2016) proposed a decomposable attention model for textual entailment, in which words from each sequence are compared with ar\nSchool of Information Systems Singapore Management University"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "With recent advances of neural network models in natural language processing, a standard practice. for sequence modeling now is to encode a sequence of text as an embedding vector using models. such as RNN and CNN. 
To match two sequences, a straightforward approach is to encode each. sequence as a vector and then to combine the two vectors to make a decision (Bowman et al. 2015, [Feng et al.J2015). However, it has been found that using a single vector to encode an entire sequence is not sufficient to capture all the important information from the sequence, and therefore advanced techniques such as attention mechanisms and memory networks have been applied to sequence matching problems (Hermann et al.. 2015 Hill et al. 2016] Rocktaschel et al.]2015).\nPlot: Aragorn is crowned King of Gon dor and taking Arwen as his queen before all present at his coronation bowing before Frodo and the other Hobbits . The Hobbits return to the Shire where Sam marries Rosie Cotton . .\nTable 1: The example on the left is a machine comprehension problem from MovieQA, where the. correct answer here is The Shire. The example on the right is an answer selection problem from InsuranceQA\nattention-weighted version of the other sequence to produce a series of comparison vectors. The. comparison vectors are then aggregated and fed into a feed forward network for final classification.\nAlthough these studies have shown the effectiveness of such a \"compare-aggregate\"' framework fo sequence matching, there are at least two limitations with these previous studies: (1) Each of the models proposed in these studies is tested on one or two tasks only, but we hypothesize that this general framework is effective on many sequence matching problems. There has not been any study that empirically verifies this. (2) More importantly, these studies did not pay much attention to the comparison function that is used to compare two small textual units. Usually a standard feedforward network is used (Hu et al.]2014} [Wang & Jiang]2016b) to combine two vectors representing twc units that need to be compared, e.g., two words. However, based on the nature of these sequence matching problems, we essentially need to measure how semantically similar the two sequence. are. Presumably, this property of these sequence matching problems should guide us in choosing more appropriate comparison functions. Indeed He & Lin[(2016) used cosine similarity, Euclidear distance and dot product to define the comparison function, which seem to be better justifiable. Bu they did not systematically evaluate these similarity or distance functions or compare them with a standard feedforward network.\nIn this paper, we argue that the general \"compare-aggregate\" framework is effective for a wide range of sequence matching problems. We present a model that follows this general framework and test it on four different datasets, namely, MovieQA, InsuranceQA, WikiQA and SNLI. The first three datasets are for Question Answering, but the setups of the tasks are quite different. The last dataset is for textual entailment. More importantly, we systematically present and test six different comparison functions. We find that overall a comparison function based on element-wise subtraction and multiplication works the best on the four datasets.\nThe contributions of this work are twofold: (1) Using four different datasets, we show that our mode following the \"compare-aggregate' framework is very effective when compared with the state-of- the-art performance on these datasets. 
(2) We conduct systematic evaluation of different comparisor functions and show that a comparison function based on element-wise operations, which is no widely used for word-level matching, works the best across the different datasets. We believe tha these findings will be useful for future research on sequence matching problems. We have also made our code available online"}, {"section_index": "3", "section_name": "2 METHOD", "section_text": "In this section, we propose a general model following the \"compare-aggregate\"' framework for matching two sequences. This general model can be applied to different tasks. We focus our discus- sion on six different comparison functions that can be plugged into this general \"compare-aggregate' model. In particular, we hypothesize that two comparison functions based on element-wise oper ations, SuB and MuLT, are good middle ground between highly flexible functions using standard neural network models and highly restrictive functions based on cosine similarity and/or Euclidean\nQuestion: can i have auto insurance without a car\nGround-truth answer: yes, it be possible have auto insurance without own a vehicle. you will purchase what be call a name ..\nAnother candidate answer: insurance not be a tax or merely a legal obligation because auto insurance follow a car....\nhj tj CNN aj X t3 (1) NTN bilinear (NTN) or non-linear transformation (NN) or cosine similarity or element-wise subtraction or etc. between two vectors. Ci Cosine Euclidean aj X hj aj hj (2) NN (3) EucCos soft attention. Element-wise subtraction Element-wise multiplication hj aj aj nj q2 q3 qq (4) Sub (5) Mult\nFigure 1: The left hand side is an overview of the model. The right hand side shows the details about the different comparison functions. The rectangles in dark represent parameters to be learned. represents matrix multiplication\ndistance. As we will show in the experiment section, these comparison functions based on element wise operations can indeed perform very well on a number of sequence matching problems"}, {"section_index": "4", "section_name": "2.1 PROBLEM DEFINITION AND MODEL OVERVIEW", "section_text": "The general setup of the sequence matching problem we consider is the following. We assume there. are two sequences to be matched. We use two matrices Q E RdQ and A E RdA to represent. the word embeddings of the two sequences, where Q and A are the lengths of the two sequences. respectively, and d is the dimensionality of the word embeddings. In other words, each column vector of Q or A is an embedding vector representing a single word. Given a pair of Q and A, the. goal is to predict a label y. For example, in textual entailment, Q may represent a premise and A a. hypothesis, and y indicates whether Q entails A or contradicts A. In question answering, Q may. be a question and A a candidate answer, and y indicates whether A is the correct answer to Q..\nWe treat the problem as a supervised learning task. We assume that a set of training examples in the form of (Q, A, y) is given and we aim to learn a model that maps any pair of (Q, A) to a y.\nAn overview of our model is shown in Figure[1 The model can be divided into the following fou layers:\n1. Preprocessing: We use a preprocessing layer (not shown in the figure) to process Q anc A to obtain two new matrices Q E RlQ and A E Rl A. The purpose here is to use some gate values to control the importance of different words in making the predictions on the sequence pair. 
For example, q, E R', which is the ith column vector of Q, encodes the ith word in Q. 2. Attention: We apply a standard attention mechanism on Q and A to obtain attention weights over the column vectors in Q for each column vector in A. With these attentior weights, for each column vector a; in A, we obtain a corresponding vector h;, which is ar attention-weighted sum of the column vectors of Q. 3. Comparison: We use a comparison function f to combine each pair of a, and h; into a vector tj.\nIn the rest of this section we will present the model in detail. We will focus mostly on the comparison functions we consider.\nnspired by the use of gates in LSTM and GRU, we p rocess Q and A with the following formulas\nQ o(W'Q + b' 8 eQ) O tanh(W\"Q+ b\" A o(W'A +b' 8 eA) O tanh(W\"A+ bu\nIn this preprocessing step, the word order does not matter. Although a better way would be to use. RNN such as LSTM and GRU to chain up the words such that we can capture some contextual information, this could be computationally expensive for long sequences. In our experiments, we. only incorporated LSTM into the formulas above for the SNLI task..\nG softmax (W*Q + bs H QG,"}, {"section_index": "5", "section_name": "2.3 COMPARISON", "section_text": "The goal of the comparison layer is to match each a;, which represents the jth word and its context in A, with h, which represents a weighted version of Q that best matches a. Let f denote a comparison function that transforms a; and h; into a vector t; to represent the comparison result\nA natural choice of f is a standard neural network layer that consists of a linear transformatior followed by a non-linear activation function. For example, we can consider the following choice:\na NEURALNET (NN): t; = f(a;,h;) = ReLU(W + b\nAlthough this model follows more or less the same framework as the model proposed byParikh et al. (2016), our work has some notable differences. First, we will pay much attention to the comparison function f and compare a number of options, including some uncommon ones based on element-. wise operations. Second, we apply our model to four different datasets representing four different tasks to evaluate its general effectiveness for sequence matching problems. There are also some other differences from the work byParikh et al.(2016). For example, we use a CNN layer instead of. summation and concatenation for aggregation. Our attention mechanism is one-directional instead. of two-directional.\nCA) where O is element-wise multiplication, and wi, wu E R'd and bi, bu E R' are parameters to. be learned. The outer product (: ex) produces a matrix or row vector by repeating the vector. or scalar on the left for X times. Here o(W'Q + b' eQ) and o(W'A + b' eA) act as gate. values to control the degree to which the original values of Q and A are preserved in Q and A. For. example, for stop words, their gate values would likely be low for tasks where stop words make little difference to the final predictions.\nThe general attention (Luong et al. 2015) layer is built on top of the resulting Q and A as follows\nwhere Ws E R'l and b E R' are parameters to be learned, G E RQA is the attention weight matrix, and H E RlA are the attention-weighted vectors. Specifically, hj, which is the jth column vector of H, is a weighted sum of the column vectors of Q and represents the part of Q that best matches the jth word in A. 
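To ground the preprocessing, attention, and comparison layers described so far, here is a minimal NumPy sketch of Eqn. 1, Eqn. 2, and the NN comparison function. The dimensions follow the notation above (d for word embeddings, l for the hidden size); the toy sizes, random weight initialization, and softmax helper are illustrative assumptions, not the released implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax_cols(x):
    # Column-wise softmax (each column sums to 1).
    e = np.exp(x - x.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

d, l, LQ, LA = 4, 3, 5, 6                      # toy sizes
rng = np.random.default_rng(0)
Q = rng.normal(size=(d, LQ))                   # question word embeddings
A = rng.normal(size=(d, LA))                   # answer word embeddings
Wi, Wu = rng.normal(size=(l, d)), rng.normal(size=(l, d))
bi, bu = np.zeros((l, 1)), np.zeros((l, 1))
Wg, bg = rng.normal(size=(l, l)), np.zeros((l, 1))

# Eqn. 1: gated, order-insensitive preprocessing of both sequences.
Q_bar = sigmoid(Wi @ Q + bi) * np.tanh(Wu @ Q + bu)    # l x LQ
A_bar = sigmoid(Wi @ A + bi) * np.tanh(Wu @ A + bu)    # l x LA

# Eqn. 2: attention of each answer word over the question columns.
G = softmax_cols((Wg @ Q_bar + bg).T @ A_bar)          # LQ x LA weights
H = Q_bar @ G                                          # l x LA; column j is h_j

# NN comparison: t_j = ReLU(W [a_j; h_j] + b), applied to all columns at once.
W, b = rng.normal(size=(l, 2 * l)), np.zeros((l, 1))
T = np.maximum(0.0, W @ np.vstack([A_bar, H]) + b)     # l x LA comparison vectors

The other comparison functions introduced below plug into the same slot as the last step, operating on the columns of A_bar and H directly.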
Next we will combine h, and a, using a comparison function.\nHowever, we note that for many sequence matching problems, we intend to measure the semantic similarity or relatedness of the two sequences. So at the word level, we also intend to check how similar or related a, is to h,. For this reason, a more natural choice used in some previous work is Euclidean distance or cosine similarity between a, and hy. We therefore consider the following definition of f :\n||a; - hj||2 EUCLIDEAN+COSINE (EUCCOS): t=f(aj,hj) = cos(aj,hj)\nNote that the operator O is element-wise multiplication. For both comparison functions, the resulting vector t, has the same dimensionality as a, and h:.\nFinally, we consider combining SuB and MuLT followed by an NN layer as follows.\n(a-h) O(a-h) SUBMULT+NN: t; = f(aj,hj) = ReLU(W a; O h,.\nIn summary, we consider six different comparison functions: NN, NTN, EucCos, SuB, MuLT and SUBMULT+NN. Among these functions, the last three (SUB, MULT and SUBMULT+NN) have not been widely used in previous work for word-level matching."}, {"section_index": "6", "section_name": "2.4 AGGREGATION", "section_text": "CNN([t1,...,tA]) r\nr E Rnl is then used for the final classification. where n is the number of windows in CNN\nTable 2: The statistics of different datasets. Q:question/hypothesis, C:candidate answers for each question, A:answer/hypothesis, P:plot, w:word (average).\nNote that with EucCos, the resulting vector t, is only a 2-dimensional vector. Although EucCos is a well-justified comparison function, we suspect that it may lose some useful information from the original vectors a, and h;. On the other hand, NN and NTN are too general and thus do not capture the intuition that we care mostly about the similarity between a, and hs.\nTo use something that is a good compromise between the two extreme cases, we consider the fol- lowing two new comparison functions, which operate on the two vectors in an element-wise manner. These functions have been used previously byMou et al.(2016).\nSUBTRACTION (SUB): t=fa,h=a-hOa-h MULTIPLICATION (MULT): t; = f(aj, hj) = a; O hj.\nWe can see that Sub is closely related to Euclidean distance in that Euclidean distance is the sum of all the entries of the vector t; produced by SuB. But by not summing up these entries, SuB preserves some information about the different dimensions of the original two vectors. Similarly MuLT is closely related to cosine similarity but preserves some information about the original two Vectors.\nAfter we apply the comparison function to each pair of a; and h; to obtain a series of vectors t, finally we aggregate these vectors using a one-layer CNN (Kim2014):\nMovieQA InsuranceQA WikiQA SNLI train dev test train dev test train dev test train dev test #Q 9848 1958 3138 13K 1K 1.8K*2 873 126 243 549K 9842 9824 #C 5 5 5 50 500 500 10 9 10 #w in P 873 866 914 - - - - - - 1 #w in Q 10.6 10.6 10.8 7.2 7.2 7.2 6.5 6.5 6.4 14 15.2 15.2 #w in A 5.9 5.6 5.5 92.1 92.1 92.1 25.5 24.7 25.1 8.3 8.4 8.3\nMovieQA InsuranceQA WikiQA SNLI Models dev test dev test1 test2 MAP MRR train test Cosine Word2Vec 46.4 45.63 1 - Cosine TFIDF 47.6 47.36 SSCB TFIDF 48.5 1 - IR model 52.7 55.1 50.8 1 CNN with GESD 65.4 65.3 61.0 Attentive LSTM. 68.9 69.0 64.8 IARNN-Occam 69.1 68.9 65.1 0.7341 0.7418 IARNN-Gate 70.0 70.1 62.8 0.7258 0.7394 CNN-Cnt 0.6520 0.6652 1 1 ABCNN - 0.6921 0.7108 1 CubeCNN 1 0.7090 0.7234 W-by-W Attention. 
85.3 83.5 1 match-LSTM 92.0 86.1 LSTMN 88.5 86.3 Decomp Attentionr 90.5 86.8 1 EBIM+TreeLSTM - - - 93.0 1 1 88.3 1 NN 31.6 76.8 74.9 72.4 0.7102 0.7224 89.3 86.3 1 NTN 31.6 75.6 75.0 72.5 0.7349 0.7456 91.6 86.3 EucCos 71.9 70.6 70.2 67.9 0.6740 0.6882 87.1 84.0 SUB 64.9 70.0 71.3 68.2 0.7019 0.7151 89.8 86.8 MULT 66.4 76.0 75.2 73.4 0.7433 0.7545 89.7 85.8 1 SUBMULT+NN 72.1 72.9 77.0 75.6 72.3 0.7332 0.7477 89.4 86.8\nTable 3: Experiment Results\nMovieQA InsuranceQA WikiQA SNLI Models dev test dev test1 test2 MAP MRR train test SUBMuLT+NN (no preprocess) 72.0 : 72.8 73.8 70.7 0.6996 0.7156 89.6 82.8 SUBMULT+NN (no attention) 60.4 - 69.4 70.4 67.8 0.7164 0.7238 89.0 84.4\nTable 4: Ablation Experiment Results. \"no preprocess': remove the preprocessing layer by directly using word embeddings Q and A to replace Q and A in Eqn.[1} \"no attention\"': remove the attention layer by using mean pooling of Q to replace all the vectors of H in Eqn.2\nIn all these tasks, we use matrix Q E RdQ to represent the question or premise and matrix A E RdA (k E [1, K]) to represent the kth answer or the hypothesis. For the machine comprehension task MovieQA (Tapaswi et al.]2016), there is also a matrix P E Rd P that represents the plot of a movie. Here Q is the length of the question or premise, A the length of the kth answer, and P the length of the plot.\nFor the InsuranceQA (Feng et al.[ 2015) dataset, the task is an answer selection task which needs to select the correct answer for a question from a candidate pool. For the WikiQA (Yang et al. 2015) datasets, we need to rank the candidate answers according to a question. For both tasks,.\nIn this section, we evaluate our model on four different datasets representing different tasks. The first three datasets are question answering tasks while the last one is on textual entailment. The statistics of the four datasets are shown in Table2] We will fist introduce the task settings and the way we customize the \"compare-aggregate\"' structure to each task. Then we will show the baselines for the different datasets. Finally, we discuss the experiment results shown in Table[3land the ablation study shown in Table 4\nFor the SNLI (Bowman et al.]2015) dataset, the task is text entailment, which identifies the relation-. ship (entailment, contradiction or neutral) between a premise sentence and a hypothesis sentence Here K = 1, and there are exactly two sequences to match. The actual model structure is what we. have described before.\nthere are K candidate answers for each question. Let us use rk to represent the resulting vecto. produced by Eqn.9|for the kth answer. In order to select one of the K answers, we first define R = [r1, r2,..., rk]. We then compute the probability of the kth answer to be the correct one as follows:\np(k|R) softmax(w' tanh(W$R + b$ eK) + b eK) Ws E Rlnl, w E R', bs E R', b E R are parameters to be learned.\nFor the machine comprehension task MovieQA, each question is related to Plot Synopses written by fans after watching the movie and each question has five candidate answers. So for each candidate answer there are three sequences to be matched: the plot P, the question Q and the answer Ag. For each k, we first match Q and P and refer to the matching result at position j as t, as generated by one of the comparison functions f. Similarly, we also match Ag with P and refer to the matching result at position j as tk.. 
We then define

t^k_j = [t^q_j ; t^{a_k}_j]  and  r^k = CNN([t^k_1, ..., t^k_P]).

To select an answer from the K candidate answers, again we use Eqn. 10 to compute the probabilities.

The implementation details of the models are as follows. The word embeddings are initialized from GloVe (Pennington et al., 2014). During training, they are not updated. The word embeddings not found in GloVe are initialized with zero.

The dimensionality l of the hidden layers is set to be 150. We use ADAMAX (Kingma & Ba, 2015) with the coefficients β1 = 0.9 and β2 = 0.999 to optimize the model. We do not use L2 regularization. The main parameter we tuned is the dropout on the embedding layer. For WikiQA, which is a relatively small dataset, we also tune the learning rate and the batch size. For the others, we set the batch size to be 30 and the learning rate to 0.002."}, {"section_index": "7", "section_name": "3.2 BASELINES", "section_text": "Here, we will introduce the baselines for each dataset. We did not re-implement these models but simply took the reported performance for the purpose of comparison.

SNLI: • W-by-W Attention: The model by Rocktaschel et al. (2015), who first introduced the attention mechanism into text entailment. • match-LSTM: The model by Wang & Jiang (2016b), which concatenates the matched words as the inputs of an LSTM. • LSTMN: Long short-term memory-networks proposed by Cheng et al. (2016). • Decomp Attention: Another "compare-aggregate" model proposed by Parikh et al. (2016). • EBIM+TreeLSTM: The state-of-the-art model proposed by Chen et al. (2016) on the SNLI dataset.

InsuranceQA: • IR model: This model by Bendersky et al. (2010) learns the concept information to help rank the candidates. • CNN with GESD: This model by Feng et al. (2015) uses Euclidean distance and dot product between sequence representations built through convolutional neural networks to select the answer. • Attentive LSTM: Tan et al. (2016) used a soft-attention mechanism to select the most important information from the candidates according to the representation of the questions. • IARNN-Occam: This model by Wang et al. (2016) adds regularization on the attention weights. • IARNN-Gate: This model by Wang et al. (2016) uses the representation of the question to build the GRU gates for each candidate answer.

WikiQA: • IARNN-Occam and IARNN-Gate as introduced before. • CNN-Cnt: This model by Yang et al. (2015) combines sentence representations built by a convolutional neural network with logistic regression. • ABCNN: The Attention-Based Convolutional Neural Network proposed by Yin et al. (2015). • CubeCNN: Proposed by He & Lin (2016), this model builds a CNN on all pairs of word similarity.

MovieQA: All the baselines we consider come from Tapaswi et al. (2016)'s work: • Cosine Word2Vec: A sliding window is used to select the answer according to the similarities computed through Word2Vec between the sentences in the plot and the question/answer. • Cosine TFIDF: This model is similar to the previous method but uses bag-of-words with tf-idf scores to compute similarity. • SSCB TFIDF: Instead of using the sliding window method, a convolutional neural network is built on the sentence-level similarities.

We use accuracy as the evaluation metric for the datasets MovieQA, InsuranceQA and SNLI, as there is only one correct answer or one label for each instance. For WikiQA, there may be multiple correct answers, so the evaluation metrics we use are Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR).

We observe the following from the results.
(1) Overall, we can find that our general \"compare aggregate\" structure achieves the best performance on MovieQA, InsuranceQA, WikiQA datasets and very competitive performance on the SNLI dataset. Especially for the InsuranceQA dataset. with any comparison function we use, our model can outperform all the previous models. (2) The comparison method SuBMuLT+NN is the best in general. (3) Some simple comparison functions can achieve better performance than the neural networks or neural tensor network comparison func- tions. For example, the simplest comparison function EucCos achieves nearly the best performance in the MovieQA dataset, and the element-wise comparison functions, which do not need parameters can achieve the best performance on the WikiQA dataset. (4) We find the preprocessing layer and the attention layer for word selection to be important in the \"compare-aggregate\"' structure through the experiments of removing these two layers separately. We also see that for sequence matching with big difference in length, such as the MovieQA and InsuranceQA tasks, the attention layer plays a more important role. For sequence matching with smaller difference in length, such as the WikiQA and SNLI tasks, the pre-processing layer plays a more important role. (5) For the MovieQA, InsuranceQA and WikiQA tasks, our preprocessing layer is order-insensitive so that it will not take the context information into consideration during the comparison, but our model can still outperform the previous work with order-sensitive preprocessing layer. With this finding, we believe the word-by-word comparison part plays a very important role in these tasks. We will further explore the preprocessing layer in the future."}, {"section_index": "8", "section_name": "3.4 FURTHER ANALYSES", "section_text": "To further explain how our model works, we visualize the max values in each dimension of the. convolutional layer. We use two examples shown in Table 1|from MovieQA and InsuranceQA. datasets respectively. In the top of Figure [2] we can see that the plot words that also appear in either the question or the answer will draw more attention by the CNN. We hypothesize that if the nearby words in the plot can match both the words in question and the words in one answer, then. this answer is more likely to be the correct one. Similarly, the bottom one of Figure 2 also shows. that the CNN will focus more on the matched word representations. If the words in one answer continuously match the words in the question, this answer is more likely to be the correct one.."}, {"section_index": "9", "section_name": "4 RELATED WORK", "section_text": "We review related work in three es of general structures for matching sequences\nSiamense network: These kinds of models use the same structure, such as RNN or CNN, to build the representations for the sequences separately and then use them for classification. Then cosine similarity (Feng et al.[2015)|Yang et al.|2015), element-wise operation (Tai et al.[2015) |Mou et al. 2016) or neural network-based combination Bowman et al. (2015) are used for sequence matching..\nAttentive network: Soft-attention mechanism (Bahdanau et al. 2014} Luong et al.]2015) has been widely used for sequence matching in machine comprehension (Hermann et al.2015), text entail- ment (Rocktaschel et al.] 2015) and question answering (Tan et al.2016). 
Instead of using the final state of RNN to represent a sequence, these studies use weighted sum of all the states for the sequence representation.\nCompare-Aggregate network: This kind of framework is to perform the word level match ing (Wang & Jiang 2016a Parikh et al. 2016 He & Lin 2016: Trischler et al. 2016; Wan et al.\nQuestion: Where does Sam marry Rosie? Gonnor pue Areen se siy uueen berore at s!y Coononnoon buomoq berore opouy pue the other oF The Hoddits rennnn the shhre whhee Roose Coooon Plot Question: Can I have. auto insurance without a car. - ves ae poq!ssod have aute nnnnnnnee umo vehheee you nseypunee what be name monmmnme Pooey be that you when Vou op nou uMn e vehheee connaet Answer\nQuestion: Where does Sam marry Rosie? 50 Arrnrrn cnnmned bu Gonnor pue Areen se s!y uueen berore le at s!y Coononnoon buomoq berore Fpoy pue the other The rennnn the shhee whhee Roose Cotton\nLIOn: Indye ULO udncE WILDOUL car ves be have aute fnnnnnneee vehheee Vou Wil what be e name moomnnme Pooee this be thet you corrrreee when nou op not umn O vehheee connet Answer\nFigure 2: An visualization of the largest value of each dimension in the convolutional layer of CNN. The top figure is an example from the dataset MovieQA with CNN window size 5. The botton. figure is an example from the dataset InsuranceQA with CNN window size 3. Due to the sparsity. of the representation, we show only the dimensions with larger values. The dimensionality of th raw representations is 150.\n2016). Our work is under this framework. But our structure is different from previous models and our model can be applied on different tasks. Besides, we analyzed different word-level comparison functions separately."}, {"section_index": "10", "section_name": "5 CONCLUSIONS", "section_text": "In this paper, we systematically analyzed the effectiveness of a \"compare-aggregate\" model on fou. different datasets representing different tasks. Moreover, we compared and tested different kinds. of word-level comparison functions and found that some element-wise comparison functions car outperform the others. According to our experiment results, many different tasks can share the same \"compare-aggregate\"' structure. In the future work, we would like to test its effectiveness or. multi-task learning.\nThis research is supported by the National Research Foundation, Prime Ministers Office, Singapore under its International Research Centres in Singapore Funding Initiative."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly. learning to align and translate. In Proceedings of the International Conference on Learning Rep resentations, 2014.\nMichael Bendersky, Donald Metzler, and W Bruce Croft. Learning concept importance using weighted dependence model. In Proceedings of the third ACM International Conference on We Search and Data Mining. ACM, 2010.\nYoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the Con ference on Empirical Methods in Natural I.. Processing. 2014\nHua He and Jimmy Lin. Pairwise word interaction modeling with deep neural networks for semantic. similarity measurement. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016\nKarl Moritz Hermann. Tomas Kocisky. Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 
Teaching machines to read and comprehend. In Proceedings of the. Conference on Advances in Neural Information Processing Systems, 2015..\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings oJ the International Conference on Learning Representations. 2015\nShengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, and Xueqi Cheng. Match-srnn Modeling the recursive matching structure with spatial RNN. International Joint Conference on. Artificial Intelligence, 2016.\nBingning Wang, Kang Liu, and Jun Zhao. Inner attention based recurrent neural networks for answe\nShuohang Wang and Jing Jiang. Machine comprehension using match-LSTM and answer pointer arXiv preprint arXiv:1608.07905, 2016a\ntne Conference on tne Norin Amerlcan Cn lleAssOclallol TOlCOnlpulcllorlel Llnl 2016b. Yi Yang, Wen-tau Yih, and Christopher Meek. WikiQA: A challenge dataset for open-domair. question answering. In Proceedings of the Conference on Empirical Methods in Natural Languag. Processing, 2015.\nWenpeng Yin, Hinrich Schutze, Bing Xiang, and Bowen Zhou. ABCNN: Attention-based convolu. tional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193. 2015\nMakarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja. Fidler. MovieQA: Understanding stories in movies through question-answering. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2016..\nAdam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, and Kaheer Suleman. A parallel-hierarchical model for machine comprehension on sparse data. In Proceedings of the Conference on Association for Computational Linguistics. 2016"}] |
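As an executable recap of the six comparison functions from Section 2.3, the following minimal NumPy sketch writes each one in a few lines; a and h are single column vectors of dimension l. The NTN form follows the standard neural-tensor parameterization, which is an assumption here since its equation is not reproduced above; the remaining functions transcribe the definitions given in the text.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def f_nn(a, h, W, b):              # NN: ReLU(W [a; h] + b)
    return relu(W @ np.concatenate([a, h]) + b)

def f_ntn(a, h, T, b):             # NTN: ReLU(a^T T^[1..l] h + b) (assumed form)
    return relu(np.einsum("i,kij,j->k", a, T, h) + b)

def f_euccos(a, h):                # EucCos: [||a - h||_2 ; cos(a, h)]
    cos = float(a @ h) / (np.linalg.norm(a) * np.linalg.norm(h) + 1e-8)
    return np.array([np.linalg.norm(a - h), cos])

def f_sub(a, h):                   # Sub: element-wise squared difference
    return (a - h) * (a - h)

def f_mult(a, h):                  # Mult: element-wise product
    return a * h

def f_submult_nn(a, h, W, b):      # SubMult+NN: ReLU(W [Sub; Mult] + b)
    return relu(W @ np.concatenate([f_sub(a, h), f_mult(a, h)]) + b)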
ry18Ww5ee | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "In an effort to develop more efficient search methods, the problem of hyperparameter optimization ha. recently been dominated by Bayesian optimization methods (Snoek et al.]2012)Hutter et al.]2011 Bergstra et al.2011) that focus on optimizing hyperparameter configuration selection. These method. aim to identify good configurations more quickly than standard baselines like random search b selecting configurations in an adaptive manner; see Figure[1(a)] Existing empirical evidence suggest. that these methods outperform random search (Thornton et al.]2013]Eggensperger et al.]2013][Snoel et al.|[2015). However, these methods tackle a fundamentally challenging problem of simultaneously. fitting and optimizing a high-dimensional, non-convex function with unknown smoothness, anc. possibly noisy evaluations. To overcome these difficulties, some Bayesian optimization methods. resort to heuristics, at the expense of consistency guarantees, to model the objective function or speec up resource intensive subroutines,'[Moreover, these adaptive configuration selection methods are. intrinsically sequential and thus difficult to parallelize..\nAn orthogonal approach to hyperparameter optimization focuses on speeding up configuratioi evaluation; see Figure|1(b)] These methods are adaptive in computation, allocating more resource. to promising hyperparameter configurations while quickly eliminating poor ones. Resources car take various forms, including size of training set, number of features, or number of iterations fo. iterative algorithms. By adaptively allocating these resources, these methods aim to examine orders o. magnitude more hyperparameter configurations than methods that uniformly train all configurations tc completion, thereby quickly identifying good hyperparameters. While there are methods that combine. Bayesian optimization with adaptive resource allocation (Swersky et al.2013f 2014) Domhan et al 2015), we focus on speeding up random search as it offers a simple, parallelizable, and theoretically. principled launching point and is shown to outperform grid search (Bergstra & Bengiol 2012).\n'Consistency can be restored by allocating a fraction of resources to performing random search"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "The task of hyperparameter optimization is becoming increasingly important as modern data analysis pipelines grow in complexity. The quality of a predictive model critically depends on its hyperpa rameter configuration, but it is poorly understood how these hyperparameters interact with each other to affect the quality of the resulting model. Consequently, practitioners often default to either hand-tuning or automated brute-force methods like random search and grid search.\n0.30 0.25 0.20 SSC 0.1 0.10 0.6 1/1 0.4 0.05 0.2 0.00 10 40 20 30 50 100 150 200 250 300 350 400 Resources Resources Allocated (a) Configuration Selection (b) Configuration Evaluation (c) Envelopes\nFigure 1: (a) The heatmap shows the validation error over a two dimensional search space, with red corresponding to areas with lower validation error, and putative configurations selected in a sequential manner as indicated by the numbers. (b) The plot shows the validation error as a function of the resources allocated to each configuration (i.e., each line in the plot). Configuration evaluation methods allocate more resources to promising configurations. 
(c) The validation loss as a function of total resources allocated for two configurations. The shaded areas bound the maximum distance from the terminal validation loss and monotonically decreases with the resource.\nOur novel configuration evaluation method, HyPERBAND, relies on a principled early-stopping strategy to allocate resources, allowing it to evaluate orders of magnitude more configurations thai uniform allocation strategies. HyPERBAND is a general-purpose technique that makes minima assumptions, unlike prior configuration evaluation approaches (Swersky et al.|2013f Domhan et al 2015] Swersky et al.]2014] Gyorgy & Kocsis[2011f Agarwal et al.[2011). In this work, we describe HyPERBAND, provide intuition for the algorithm through a detailed example, and present a wide range of empirical results comparing HyPERBAND with well established competitors. We also briefl describe the theoretical underpinnings of HyPERBAND, however a thorough theoretical treatment i beyond the scope of this paper and is deferred toLi et al.(2016)."}, {"section_index": "2", "section_name": "2 RELATED WORK", "section_text": "Bayesian optimization techniques model the conditional probability p(f[) of a configuration's. performance on a metric f given a set of hyperparameters . For instance, SMAC uses random forests. to model p(f[) as a Gaussian distribution (Hutter et al.2011). TPE is a non-standard Bayesian. optimization algorithm based on tree-structured Parzen density estimators (Bergstra et al.| 2011). A third popular method, Spearmint, uses Gaussian processes (GP) to model p(f[) and performs slice. sampling over the GP's hyperparameters (Snoek et al.2012).\nAdaptive configuration evaluation is not a new idea. Maron & Moore(1993) considered a setting. where training time is negligible (e.g., k-nearest-neighbor classification) and evaluation on a large validation set is accelerated by evaluating on an increasing subset of the validation set, stopping early configurations that are performing poorly. Since subsets of the validation set provide unbiased estimates of its expected performance, this is an instance of the stochastic best-arm identificatior problem for multi-armed bandits (see Jamieson & Nowak (2014) for a brief survey)..\nIn contrast, this paper assumes that evaluation time is negligible and the goal is to early-stop long. running training procedures by evaluating partially trained models on the validation set. Previous. approaches either require strong assumptions or use heuristics to perform adaptive resource allocation. Several works propose methods that make strong assumptions on the convergence behavior of training. algorithms, providing theoretical performance guarantees under these assumptions (Gyorgy & Kocsis. 2011} Agarwal et al.2011 Swersky et al.]2013] 2014]Domhan et al.]2015 Sabharwal et al. 2016). Unfortunately, these assumptions are often hard to verify, and empirical performance can. drastically suffer when they are violated. One recent work of particular interest proposes a heuristic. based on sequential analysis to determine stopping times for training configurations on increasing. subsets of the data (Krueger et al.J2015). However, it has a few shortcomings: (1) it is designed. to speedup multi-fold cross-validation and is not significantly faster than standard holdout, (2) it. 
is not an anytime algorithm and requires the set of configurations to be evaluated as an input, and (3) the theoretical correctness and empirical performance of this method are highly dependent on.\na user-defined \"safety-zone'| Lastly, in an effort avoid heuristics and strong assumptions, Sparks et al.(2015) proposed a halving style algorithm that did not require explicit convergence behavior and Jamieson & Talwalkar (2015) analyzed a similar algorithm, providing theoretical guarantees and encouraging empirical results. Unfortunately, these halving style algorithms suffer from the n vs B/n issue which we will discuss in Section3.\nFinally,[Klein et al.(2016) recently introduced Fabolas, a Bayesian optimization method that combines adaptive selection and evaluation. Similar to Swersky et al.(2013f 2014), it models the conditional validation error as a Gaussian process using a kernel that captures the covariance with downsampling rate to allow for adaptive evaluation. While we intended to compare HyPeRBAND with Fabolas, we encountered some technical difficulties when using the package3|and are working with the authors of. Klein et al.(2016) to resolve the issues."}, {"section_index": "3", "section_name": "HYPERBAND ALGORITHM", "section_text": "HYPERBAND extends the SUCCEssIVEHALVING algorithm proposed for hyperparameter optimiza tion in Jamieson & Talwalkar[(2015) and calls it as a subroutine. The idea behind SuccEssivE HALvinG follows directly from its name: uniformly allocate a budget to a set of hyperparameter configurations, evaluate the performance of all configurations, throw out the worst half, and repeat until one configurations remains. The algorithm allocates exponentially more resources to more promising configurations. Unfortunately, SUCCESsIVEHALVING requires the number of configu- rations n as an input to the algorithm. Given some finite time budget B (e.g. an hour of training time to choose a hyperparameter configuration), B/n resources are allocated on average across the configurations. However, for a fixed B, it is not clear a priori whether we should (a) consider many configurations (large n) with a small average training time; or (b) consider a small number of configurations (small n) with longer average training times.\nWe use a simple example to better understand this tradeoff. Figure[1(c)shows the validation loss as a function of total resources allocated for two configurations with terminal validation losses v1 and v. The shaded areas bound the maximum deviation from the terminal validation loss and will be referre. to as \"envelope\" functions. It is possible to differentiate between the two configurations when th envelopes diverge. Simple arithmetic shows that this happens when the width of the envelopes i. less than v2 - V1, i.e. when the intermediate losses are guaranteed to be less than 2-1 away from the terminal losses. There are two takeaways from this observation: more resources are needed tc. differentiate between the two configurations when either (1) the envelope functions are wider or (2. the terminal losses are closer together.\nHowever, in practice, the optimal allocation strategy is unknown because we do not have knowledge. of the envelope functions nor the distribution of terminal losses. 
Hence, if more resources are required before configurations can differentiate themselves in terms of quality (e.g., if an iterative training method converges very slowly for a given dataset or if randomly selected hyperparameter configurations perform similarly well), then it would be reasonable to work with a small number of configurations. In contrast, if the quality of a configuration is typically revealed using minimal resources (e.g., if iterative training methods converge very quickly for a given dataset or if randomly selected hyperparameter configurations are of low quality with high probability), then n is the bottleneck and we should choose n to be large."}, {"section_index": "4", "section_name": "3.1 HYPERBAND", "section_text": "HYPERBAND, shown in Algorithm 1, addresses this "n versus B/n" problem by considering several possible values of n for a fixed B, in essence performing a grid search over feasible values of n. Associated with each value of n is a minimum resource r that is allocated to all configurations before some are discarded; a larger value of n corresponds to a smaller r and hence more aggressive early-stopping. There are two components to HYPERBAND: (1) the inner loop invokes SUCCESSIVEHALVING for fixed values of n and r (lines 3-9) and (2) the outer loop iterates over different values

2 The first two drawbacks prevent a full comparison to HYPERBAND on our selected empirical tasks; however, for completeness, we provide a comparison in Appendix A to Krueger et al. (2015) on some experimental tasks replicated from their paper.

of n and r (lines 1-2). We will refer to each such run of SUCCESSIVEHALVING within HYPERBAND as a "bracket." Each bracket is designed to use about B total resources and corresponds to a different tradeoff between n and B/n. A single execution of HYPERBAND takes a finite number of iterations and in practice can be repeated indefinitely.

HYPERBAND requires two inputs: (1) R, the maximum amount of resource that can be allocated to a single configuration, and (2) η, an input that controls the proportion of configurations discarded in each round of SUCCESSIVEHALVING. The two inputs dictate how many different brackets are considered; specifically, smax + 1 different values for n are considered with smax = ⌊log_η(R)⌋. HYPERBAND begins with the most aggressive bracket s = smax, which sets n to maximize exploration, subject to the constraint that at least one configuration is allocated R resources. Each subsequent bracket reduces n by a factor of approximately η until the final bracket, s = 0, in which every configuration is allocated R resources (this bracket simply performs classical random search). Hence, HYPERBAND performs a geometric search in the average budget per configuration to address the "n versus B/n" problem, at the cost of approximately smax + 1 times more work than running SUCCESSIVEHALVING for a fixed n. By doing so, HYPERBAND is able to exploit situations in which adaptive allocation works well, while protecting itself in situations where more conservative allocations are required.

Algorithm 1: HYPERBAND algorithm for hyperparameter optimization
input: R, η (default η = 3)
initialization: smax = ⌊log_η(R)⌋, B = (smax + 1)R
1   for s ∈ {smax, smax − 1, ..., 0} do
2       n = ⌊(smax + 1)/(s + 1)⌋ · η^s,  r = R · η^(−s)
        // begin SuccessiveHalving with (n, r) inner loop
3       T = get_hyperparameter_configuration(n)
4       for i ∈ {0, ..., s} do
5           n_i = ⌊n · η^(−i)⌋
6           r_i = r · η^i
7           L = {run_then_return_val_loss(t, r_i) : t ∈ T}
8           T = top_k(T, L, ⌊n_i/η⌋)
9       end
10  end
11  return configuration with the smallest intermediate loss seen so far

R represents the maximum amount of resources that can be allocated to any given configuration. In most cases, there is a natural upper bound on the maximum budget per configuration that is often dictated by the resource type (e.g., training set size for dataset downsampling; limitations based on memory constraints for feature downsampling; rules of thumb regarding the number of epochs when iteratively training neural networks). R is also the number of configurations evaluated in the bracket that performs the most exploration, i.e., s = smax. In practice one may want n ≤ nmax to limit the overhead associated with training many configurations on a small budget, i.e., costs associated with initialization, loading a model, and validation. In this case, set smax = ⌊log_η(nmax)⌋.

The value of η can be viewed as a knob that can be tuned based on practical user constraints. Larger values of η correspond to a more aggressive elimination schedule and thus fewer rounds of elimination; specifically, each round retains 1/η of the configurations for a total of ⌊log_η(n)⌋ + 1 rounds of elimination with n configurations. If one wishes to receive a result faster at the cost of a sub-optimal asymptotic constant, one can increase η to reduce the budget per bracket B = (⌊log_η(R)⌋ + 1)R. We stress that results are not very sensitive to the choice of η. In practice we suggest taking η to be equal to 3 or 4.

HYPERBAND requires the following methods to be defined for any given learning problem: get_hyperparameter_configuration(n) returns a set of n i.i.d. samples from some distribution defined over the hyperparameter configuration space; run_then_return_val_loss(t, r) takes a hyperparameter configuration (t) and resource allocation (r) and returns the validation loss after training for the allocated resources; and top_k(configs, losses, k) takes a set of configurations as well as their associated losses and returns the top k performing configurations.

We further define the number of iterations as the resource to allocate, with one unit of resource corresponding to one epoch, i.e., a full pass over the dataset. We set R to 81 and use the default value of η = 3, resulting in smax = 4 and thus 5 brackets of SUCCESSIVEHALVING with different tradeoffs between n and B/n. The resources allocated within each bracket are displayed in Table 1.

      s = 4        s = 3        s = 2        s = 1        s = 0
 i   n_i   r_i    n_i   r_i    n_i   r_i    n_i   r_i    n_i   r_i
 0   81    1      27    3      9     9      6     27     5     81
 1   27    3      9     9      3     27     2     81
 2   9     9      3     27     1     81
 3   3     27     1     81
 4   1     81

Table 1: Values of n_i and r_i for the brackets of HYPERBAND when R = 81 and η = 3.

Figure 2 compares the empirical performance of the different brackets of HYPERBAND if they were used separately, as well as standard HYPERBAND (all results are averaged over 70 trials). In practice we do not know a priori which bracket s ∈ {0, ..., 4} will be most effective, and in this case neither the most (s = 4) nor least aggressive (s = 0) setting is optimal. However, note that HYPERBAND
does nearly as well as the optimal bracket (s = 3) and vastly outperforms the baseline uniform allocation (i.e. random search), which is equivalent to bracket s = 0.."}, {"section_index": "5", "section_name": "3.3 OVERVIEW OF THEORETICAL RESULTS", "section_text": "Although a detailed theoretical analysis is beyond the scope of this paper, we provide an intuitive. high-level description of theoretical properties of HyPERBAND. Suppose there are n configurations. each with a given terminal validation error v; for i = 1, . . ., n. Without loss of generality, index th. configurations by performance so that v1 corresponds to the best performing configuration, v2 to th. second best, and so on. Now consider the task of identifying the best configuration. The optima strategy would allocate to each configuration i the minimum resource required to distinguish it fror. V1, i.e., enough so that the envelope functions depicted in Figure[1(c)|bound the intermediate loss t. be less than -v1 away from the terminal value. As shown in Jamieson & Talwalkar(2015) and et al.(2016), the budget required by SuccEssiveHALvING is in fact only a small factor away fron. this optimal approach because it capitalizes on configurations that are easy to distinguish from v. In contrast, the naive uniform allocation strategy, which allocates B/n to each configuration, has t. allocate to every configuration the resource required to distinguish v2 from v1.\nThe relative size of the budget required for uniform allocation and SuccEssivEHALvING depends on the envelope functions bounding deviation from terminal losses as well as the distribution from which v;'s are drawn. The budget required for SuccEssivEHALVING is smaller when the optimal n versus B/n tradeoff requires fewer resources per configuration. Hence, if the envelope functions tighten quickly as a function of resource allocated, or the average distances between terminal losses is large, then SucCEssivEHALVING can be substantially faster than uniform allocation. Of course we do not have knowledge of either function in practice, so we will hedge our aggressiveness with HyPERBAND. Remarkably, despite having no knowledge of the envelope functions or the distribution of v;'s, HyPERBAND requires a budget that is only log factors larger than the optimal for SuCCESs1VEHALVING. SeeLi et al.(2016) for details.\nWe next present a simple example to provide intuition. We work with the MNIST dataset and optimize. hyperparameters for the LeNet convolutional neural network trained using mini-batch SGD. Our. 
search space includes learning rate, batch size, and number of kernels for the two layers of the network as hyperparameters (details are shown in Table|3|in Appendix|A).\n1e-2 nepoch=81 1.00 s=0 Baseline 0.98 s=1 s=2 0.96 s=3 rrrroerorr 0.94 s=4 Hyperband 0.92 0.90 0.88 0.0 0.5 1.0 1.5 2.0 Seconds 1e6\n8 0.5 1.0 0.0 1.5 2.0 Seconds 1e6\nFigure 2: Performance of individ ual brackets s and HYPERBAND.\n0.30 0.10 0.32 hyperband (finite) spearmint SMAC 0.29 random 0.09 0.30 SMAC (Early Stop) + random_2x 0.28 0.28 0.08 TPE bracket s=4 Frror 0.27 est 0.07 est 0.26 g0.24 ge 11 0.06 0.25 Aee g 0.05 0.22 0.24 0.04 0.20 0.23 0.03 0.18 0.22 0 10 20 30 40 50 0 10 20 30 40 50 0 10 20 30 40 50 Multiple of R Used Multiple of R Used Multiple of R Used (a) CIFAR-10 (b) MRBI (c) SVHN"}, {"section_index": "6", "section_name": "4 HYPERPARAMETER OPTIMIZATION EXPERIMENTS", "section_text": "In this section, we evaluate the empirical behavior of HyPERBAND with iterations, data subsamples and features as resources. For all experiments, we compare HyPERBAND with three well known. Bayesian optimization algorithms - SMAC, TPE, and Spearmint. Additionally, we show results for. SUCCESsIVEHALVING corresponding to repeating the most exploration bracket of HYPERBAND Finally for all experiments, we benchmark against standard random search and random_2, which is a variant of random search with twice the budget of other methods..\nWe study a convolutional neural network with the same architecture as that used inSnoek et al. (2012 and Domhan et al.[(2015) from cuda-convnet. The search spaces used in the two previous works. differ, and we used a search space similar to that of|Snoek et al.(2012) with 6 hyperparameters for. stochastic gradient decent and 2 hyperparameters for the response normalization layers. In line with. the two previous works, we used a batch size of 100 for all experiments. For these experiments, we also compare against a variant of SMAC named SMAC_early that uses the early termination criterion. proposed inDomhan et al.(2015) for neural networks. We view SMAC with early stopping to be a. combination of adaptive configuration selection and configuration evaluation. See Appendix|A for more details about the experimental setup.\nDatasets: We considered three image classification datasets: CIFAR-10 (Krizhevsky2009), rotatec MNIST with background images (MRBI) (Larochelle et al.] 2007), and Street View House Numbers (SVHN) (Netzer et al.]2011). CIFAR-10 and SVHN contain 32 32 RGB images while MRBI contains 28 28 grayscale images. The splits used for each dataset are as follows: (1) CIFAR-10 has 40k, 10k, and 10k instances; (2) MRBI has 10k, 2k, and 50k instances; and (3) SVHN has close to 600k, 6k, and 26k instances for training, validation, and test respectively. For all datasets, the only preprocessing performed on the raw images was demeaning.\nHyPERBAND Configuration: For these experiments, one unit of resource corresponds to 100 mini batch iterations. For CIFAR-10 and MRBI, R is set to 300 (or 30k total iterations). For SVHN, R is set to 600 (or 60k total iterations) to accommodate the larger training set. n was set to 4 for all experiments, resulting in 5 SUCCESSIVEHALVING brackets for HYPERBAND.\nResults: Ten independent trials were performed for each searcher. For CIFAR-10, the results i Figure[3[a) show that HyPERBAND is more than an order of magnitude faster than its competitors In Figure 6|of Appendix A] we extend the x-axis for CIFAR-10 out to 100R. 
The results shov that Bayesian optimization methods ultimately converge to similar errors as HypERBAND. Fo. MRBI, HyPERBAND is more than an order of magnitude faster than standard configuration selectio approaches and 5 faster than SMAC with early stopping. For SVHN, while HyPERBAND find. a good configuration faster, Bayesian optimization methods are competitive and SMAC with earl. stopping outperforms HyPERBAND. This result demonstrates that there is merit to incorporating. early stopping with configuration selection approaches.\nFigure 3: Average test error across 10 trials is shown in all plots. Label \"SMAC_early\"' corresponds to SMAC with the early stopping criterion proposed inDomhan et al.(2015) and label \"bracket s = 4\" corresponds to repeating the most exploratory bracket of HyPERBAND.\nAcross the three datasets, HyPERBAND and SMAC_early are the only two methods that consistently. outperform random_2. On these datasets, HyPERBAND is over 20 faster than random search. while SMAC_early is 7 faster than random search within the evaluation window. In fact, the first. result returned by HYPERBAND after using a budget of 5R is often competitive with results returnec by other searchers after using 5OR. Additionally, HypeRBAND is less variable than other searchers. across trials, which is highly desirable in practice (see Appendix[A|for plots with error bars)..\nFor computationally expensive problems in high dimensional search spaces, it may make sense to just repeat the most exploratory brackets. Similarly, if meta-data is available about a problem or it is known that the quality of a configuration is evident after allocating a small amount of resource. then one should just repeat the most exploration bracket. Indeed, for these experiments, repeating the most exploratory bracket of HyPERBAND outperforms cycling through all the brackets. In fact. bracket s = 4 vastly outperforms all other methods on CIFAR-1O and MRBI and is nearly tied with SMAC_early for first on SVHN.\nFinally, CIFAR-10 is a very popular dataset and state-of-the-art models achieve much better accuracy than what is shown in Figure[3 The difference in performance is mainly attributable to higher mode complexities and data manipulation (i.e. using reflection or random cropping to artificially increase the dataset size). If we limit the comparison to published results that use the same architecture and exclude data manipulation, the best human expert result for the dataset is 18% error and hyperparameter optimized result is 15.0% for Snoek et al.(20124and 17.2% forDomhan et al. (2015). These results are better than our results on CIFAR-10 because they use 25% more data by including the validatior set and also train for more epochs. The best model found by HyPeRBAND achieved a test error of 17.0% when trained on the combined training and validation data for 300 epochs.\nIn this experiment, we use HyPERBAND with data samples as the resource to optimize the hyper parameters of a kernel-based classification task on CIFAR-10. We use the multi-class regularized least squares classification model which is known to have comparable performance to SVMs (Rifkir & Klautau2004] Agarwal et al.][2014) but can be trained significantly faster. The hyperparameters considered in the search space include preprocessing method, regularization, kernel type, kernel length scale, and other kernel specific hyperparameters (see Appendix |A|for more details). 
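To make the resource mapping concrete before giving the exact settings, here is a minimal Python sketch of the SUCCESSIVEHALVING subroutine that HYPERBAND repeatedly invokes in this experiment, with one resource unit mapped to 100 training points as in this section. The sample_config and train_and_validate helpers are hypothetical stand-ins for the searcher's sampling routine and the regularized least-squares trainer; this is a sketch under those assumptions, not the experimental code.

import math, random

def sample_config():
    # Hypothetical stand-in for get_hyperparameter_configuration(1).
    return {"reg": 10 ** random.uniform(-6, 0), "scale": random.uniform(0.1, 10)}

def train_and_validate(config, num_points):
    # Hypothetical stand-in: train the kernel classifier on `num_points`
    # examples and return its validation error (dummy value here).
    random.seed((hash(str(sorted(config.items()))) + num_points) % (2 ** 32))
    return random.random() / math.log(num_points)

def successive_halving(n, r, eta=4, unit=100):
    # One bracket: n configs, each starting with r resource units
    # (r * unit datapoints); keep the best 1/eta after every rung.
    configs = [sample_config() for _ in range(n)]
    for i in range(int(math.log(n, eta)) + 1):
        r_i = r * eta ** i
        losses = [train_and_validate(c, r_i * unit) for c in configs]
        ranked = sorted(range(len(configs)), key=lambda j: losses[j])
        configs = [configs[j] for j in ranked[:max(1, len(configs) // eta)]]
    return configs[0]

best = successive_halving(n=256, r=2)   # illustrative bracket sizes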
Hy PERBAND is run with n = 4 and R = 400, with each unit of resource representing 100 datapoints Similar to previous experiments, these inputs result in a total of 5 brackets. Each hyperparameter optimization algorithm is run for ten trials on Amazon EC2 m4 . 2x1arge instances; for a given trial, HyPERBAND is allowed to run for two outer loops, bracket s = 4 is repeated 10 times, and all other searchers are run for 12 hours.\nFigure4shows that HyPERBAND returns a good configuration after just the first SuCCESs1VEHALV ING bracket in approximately 20 minutes; other searchers fail to reach this error rate on average even after the entire 12 hours. Notably, HyPERBAND was able to evaluate over 250 configurations in this first bracket of SuccEssiveHALviNG, while competitors were able to evaluate only three configurations in the same amount of time. Consequently, HyPERBAND is over 3O faster than Bayesian optimization methods and 70 faster than random search. Bracket s = 4 sightly outper forms HyPERBAND but the terminal performance for the two algorithms are the same. Random_2 is competitive with SMAC and TPE.\nWe next demonstrate the performance of HyPERBAND when using features as a resource, focusing. on random feature approximations for kernel methods. Features are randomly generated using the method described in Rahimi & Recht (2007) to approximate the RBF kernel, and these random. features are then used as inputs to a ridge regression classifier. We consider hyperparameters of. a random feature kernel approximation classifier trained on CIFAR-10, including preprocessing. method, kernel length scale, and l2 penalty. We impose an upper bound of 100k random features for the kernel approximation so that the data will comfortably fit into a machine with 60GB of.\n4We were unable to reproduce this result even after receiving the optimal hyperparameters from the authors through a personal communication..\nhyperband SMAC 0.65 TPE random 0.60 random 2x rror bracket s=4 0.55 lest 0.50 0.45 0.40 0 100 200 300 400 500 600 700 Minutes\nFigure 4: Average test error of the best kernel regularized least square classification model found by each searcher on CIFAR-10. The color coded dashed lines indicate when the last trial of a given searcher finished."}, {"section_index": "7", "section_name": "4.4 EXPERIMENTAL DISCUSSION", "section_text": "the maximum speedu y HYPERBAND compared to random search 1s F Ilog..(R)+1\nIf training time is superlinear as a function of the resource, then HypeRBAND can offer higher. speedups. More generally, if training scales like a polynomial of degree p > 1, the maximum speedup y np---1 n[logn(R)]. Hence, higher speedups of HyPERBAND over random search is approximately. were observed for the the kernel least square classifier experiment discussed in Section 4.2|because. the training time scaled quadratically as a function of the resource..\nIf 10 randomly sampled configurations is sufficient to find a good hyperparameter setting, then th. benefit of evaluating orders of magnitude more configurations is muted. Generally the difficulty of th problem scales with the dimension of the search space since coverage diminishes with dimensionalit For low dimensional problems, the number of configurations evaluated by random search an Bayesian methods is exponential in the number of dimensions so good coverage can be achieved; i.. if d = 3 as in the features subsampling experiment, then n = O(2d = 8). Hence, HypeRBAND i only 6 faster than random search on the feature subsampling experiment. 
"}, {"section_index": "8", "section_name": "5 FUTURE WORK", "section_text": "We have introduced a novel bandit-based method for adaptive configuration evaluation with demonstrated competitive empirical performance. Future work involves exploring (i) embedding HYPERBAND into parallel and distributed computing environments; (ii) adjusting for training methods with different convergence rates; and (iii) combining HYPERBAND with non-random sampling methods."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. In JMLR, 2012.

J. Bergstra et al. Algorithms for hyperparameter optimization. In NIPS, 2011.

T. Domhan, J. T. Springenberg, and F. Hutter. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In IJCAI, 2015.

A. Gyorgy and L. Kocsis. Efficient multi-start strategies for local search algorithms. JAIR, 41, 2011.

F. Hutter, H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Proc. of LION-5, 2011.

K. Jamieson and R. Nowak. Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting. In Information Sciences and Systems (CISS), 2014 48th Annual Conference on, pp. 1-6. IEEE, 2014.

K. Jamieson and A. Talwalkar. Non-stochastic best arm identification and hyperparameter optimization. In AISTATS, 2015.

A. Klein, S. Falkner, S. Bartels, P. Hennig, and F. Hutter. Fast bayesian optimization of machine learning hyperparameters on large datasets. arXiv preprint arXiv:1605.07079, 2016.

A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Department
of Computer Science, University of Toronto, 2009.

T. Krueger, D. Panknin, and M. Braun. Fast cross-validation via sequential testing. Journal of Machine Learning Research, 16:1103-1155, 2015.

H. Larochelle et al. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML, 2007.

L. Li, K. Jamieson, G. DeSalvo, A. Rostamizadeh, and A. Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. arXiv:1603.06560, 2016.

O. Maron and A. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In NIPS, 1993.

Y. Netzer et al. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.

A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.

G. Ratsch, T. Onoda, and K.R. Muller. Soft margins for adaboost. Machine Learning, 42:287-320, 2001.

R. Rifkin and A. Klautau. In defense of one-vs-all classification. JMLR, 2004.

A. Sabharwal, H. Samulowitz, and G. Tesauro. Selecting near-optimal learners via incremental data allocation. In AAAI, 2016.

P. Sermanet, S. Chintala, and Y. LeCun. Convolutional neural networks applied to house numbers digit classification. In ICPR, 2012.

J. Snoek, H. Larochelle, and R. Adams. Practical bayesian optimization of machine learning algorithms. In NIPS, 2012.

K. Swersky, J. Snoek, and R. Adams. Multi-task bayesian optimization. In NIPS, 2013.

K. Swersky, J. Snoek, and R. P. Adams. Freeze-thaw bayesian optimization. arXiv:1406.3896, 2014."}, {"section_index": "10", "section_name": "A.1 COMPARISON WITH CVST", "section_text": "The CVST algorithm from Krueger et al. (2015) focuses on speeding up standard k-fold cross-validation. We did not include it as one of the competitors in Section 4 because the experiments we selected were too computationally expensive for multi-fold cross-validation and CVST is not an anytime algorithm. Nonetheless, the CVST algorithm is an interesting approach and was shown to have promising empirical performance in Krueger et al. (2015). Hence, we performed a small scale comparison between CVST and HYPERBAND, modeled after their empirical studies.

We replicated the classification experiments in Krueger et al. (2015) that train a support vector machine on the datasets from the IDA benchmark (Ratsch et al., 2001). All experiments were performed on Google Cloud Compute's n1-standard-1 instances. Following Krueger et al. (2015), we evaluated HYPERBAND and CVST on the same 2d grid of 610 hyperparameters and recorded the best test error and duration for 50 trials. The only modification we made to their original experimental setup was the data splits; instead of half for test and half for training, we used 1/11th for test and 10/11th for training. HYPERBAND performed holdout evaluation using 1/10th of the training data as the validation set. We set η = 3, and R was set for each dataset so that a minimum resource of 50 datapoints is allocated to each configuration. Table 2 shows that CVST and HYPERBAND achieve comparable test errors (the differences are well within the error bars), while HYPERBAND is significantly faster than CVST on all datasets. More granularly, while CVST on average has slightly lower mean error, HYPERBAND is within 0.2% of CVST on 5 of the 7 datasets. Additionally, for each of the 7 datasets, HYPERBAND does as well as or better than CVST in over half of the trials.

Dataset     CVST Test Error   CVST Duration   Hyperband Test Error   Hyperband Duration   Duration Ratio
banana      9.8% ± 1.6%       12.3 ± 5.0      9.9% ± 1.5%            1.8 ± 0.1            6.7 ± 2.8
german      26.0% ± 4.5%      2.7 ± 1.1       27.6% ± 4.8%           0.7 ± 0.0            4.1 ± 1.7
image       2.9% ± 1.1%       3.5 ± 1.0       3.3% ± 1.4%            1.0 ± 0.0            3.4 ± 0.9
splice      8.6% ± 1.8%       10.6 ± 3.1      8.7% ± 1.8%            3.9 ± 0.1            2.7 ± 0.8
ringnorm    1.4% ± 0.4%       21.3 ± 2.3      1.5% ± 0.4%            6.5 ± 0.3            3.3 ± 0.4
twonorm     2.4% ± 0.5%       27.9 ± 10.0     2.4% ± 0.5%            6.5 ± 0.2            4.3 ± 1.5
waveform    9.3% ± 1.3%       13.7 ± 2.7      9.5% ± 1.3%            2.9 ± 0.2            4.8 ± 1.0

Table 2: The test error and duration columns show the average value plus/minus the standard deviation across 50 trials. Duration is measured in minutes and indicates how long it took each method to evaluate the grid of 610 hyperparameters used in Krueger et al. (2015). The ratio column shows the ratio of the CVST duration to the HYPERBAND duration, with the associated standard deviation.
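For reference, the elimination step at the core of these comparisons can be sketched in a few lines. This is an illustrative implementation of a single SUCCESSIVEHALVING bracket, not the authors' code; `train_eval` is an assumed callback that trains a configuration on a given resource and returns its validation loss:

```python
import numpy as np

def successive_halving(configs, train_eval, r, s, eta=3):
    """Run one SUCCESSIVEHALVING bracket: evaluate all configurations on a
    small resource, keep the best 1/eta, multiply the resource by eta, and
    repeat for s+1 rungs. Returns the surviving configuration."""
    for i in range(s + 1):
        resource = r * eta ** i
        losses = np.array([train_eval(cfg, resource) for cfg in configs])
        k = max(1, len(configs) // eta)           # survivors for the next rung
        configs = [configs[j] for j in np.argsort(losses)[:k]]
    return configs[0]
```

With η = 3 as in the CVST comparison, two thirds of the configurations are discarded at every rung.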
"}, {"section_index": "11", "section_name": "A.2 LENET EXPERIMENT", "section_text": "We trained the LeNet convolutional neural network on MNIST using mini-batch SGD. Code is available for the network at http://deeplearning.net/tutorial/lenet.html. The search space for the LeNet example discussed in Section 3.2 is shown in Table 3.

Hyperparameter              Scale    Min     Max
Learning Rate               log      1e-3    1e-1
Batch size                  log      1e1     1e3
Layer-2 Num Kernels (k2)    linear   10      60
Layer-1 Num Kernels (k1)    linear   5       k2

Table 3: Hyperparameter space for the LeNet application of Section 3.2. Note that the number of kernels in Layer-1 is upper bounded by the number of kernels in Layer-2.

For the experiments discussed in Section 4.1, the exact architecture used is the 18% model provided on cuda-convnet for CIFAR-10 (see footnote 5).

Hyperparameter                        Scale     Min       Max
Learning Parameters
  Initial Learning Rate               log       5*10^-5   5
  Conv1 l2 Penalty                    log       5*10^-5   5
  Conv2 l2 Penalty                    log       5*10^-5   5
  Conv3 l2 Penalty                    log       5*10^-5   5
  FC4 l2 Penalty                      log       5*10^-3   500
  Learning Rate Reductions            integer   0         3
Local Response Normalization
  Scale                               log       5*10^-6   5
  Power                               linear    0.01      3

Table 4: Hyperparameters and associated ranges for the three-layer convolutional network.

Search Space: The search space used for the experiments is shown in Table 4. The learning rate reductions hyperparameter indicates how many times the learning rate was reduced by a factor of 10 over the maximum iteration window. For example, on CIFAR-10, which has a maximum iteration of 30,000, a learning rate reduction of 2 corresponds to reducing the learning rate every 10,000 iterations, for a total of 2 reductions over the 30,000 iteration window. All hyperparameters, with the exception of the learning rate decay reduction, overlap with those in Snoek et al. (2012). Two hyperparameters in Snoek et al. (2012) were excluded from our experiments: (1) the width of the response normalization layer was excluded due to limitations of the Caffe framework and (2) the number of epochs was excluded because it is incompatible with dynamic resource allocation.
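To make the learning rate reductions hyperparameter concrete, here is a small sketch of the implied schedule; the function name and signature are ours, chosen for illustration, and are not taken from the released code:

```python
def learning_rate(initial_lr, iteration, max_iter, num_reductions):
    """Learning rate under the 'learning rate reductions' hyperparameter:
    the rate is cut by a factor of 10 at evenly spaced points of the
    iteration window. E.g. 2 reductions over 30,000 iterations means a
    cut every 10,000 iterations."""
    period = max_iter // (num_reductions + 1)
    return initial_lr * 10.0 ** (-(iteration // period))
```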
Datasets: CIFAR-10 and SVHN contain 32×32 RGB images, while MRBI contains 28×28 grayscale images. For all datasets, the only preprocessing performed on the raw images was demeaning. For CIFAR-10, the training (40,000 instances) and validation (10,000 instances) sets were sampled from data batches 1-5 with balanced classes. The original test set (10,000 instances) is used for testing. For MRBI, the training (10,000 instances) and validation (2,000 instances) sets were sampled from the original training set with balanced classes. The original test set (50,000 instances) is used for testing. Lastly, for SVHN, the train, validation, and test splits were created using the same procedure as that in Sermanet et al. (2012).

Computational Considerations: The experiments took the equivalent of over 1 year of GPU hours on NVIDIA GRID K520 cards available on Amazon EC2 g2.8xlarge instances. We set a total budget constraint in terms of iterations instead of compute time to make comparisons hardware independent (see footnote 6). Comparing progress by iterations instead of time ignores overhead costs not associated with training, such as the cost of configuration selection for Bayesian methods and the model initialization and validation costs for HYPERBAND. While overhead is hardware dependent, the overhead for HYPERBAND is below 5% on EC2 g2.8xlarge machines, so comparing progress by time passed would not impact results significantly.

Due to the high computational cost of these experiments, we were not able to run all searchers out to convergence. However, we did double the budget for each trial of CIFAR-10 to allow for a comparison of the searchers as they near convergence. Figure 6 shows that while Bayesian optimization methods achieve similar performance to HYPERBAND and SUCCESSIVEHALVING, it takes them much longer to achieve a comparable error rate.

5 The model specification is available at http://code.google.com/p/cuda-convnet/
6 Most trials were run on Amazon EC2 g2.8xlarge instances, but a few trials were run on different machines due to the large computational demand of these experiments.

[Figure 6 plots: average test error versus multiple of R used for hyperband (finite), SMAC, SMAC (Early Stop), TPE, spearmint, random, random 2x, and bracket s = 4, on (a) CIFAR-10, (b) MRBI and (c) SVHN.]

Figure 6: Average test error across 10 trials is shown in all plots. Error bars indicate the maximum and minimum ranges of the test error corresponding to the model with the best validation error.

Comparison with Early Stopping: Adaptive allocation for hyperparameter optimization can be thought of as a form of early stopping, where less promising configurations are halted before completion. Domhan et al. (2015) propose an early stopping method for neural networks and combine it with SMAC to speed up hyperparameter optimization. Their method stops training a configuration if the probability of the configuration beating the current best is below a specified threshold. This probability is estimated by extrapolating learning curves fit to the intermediate validation error losses of a configuration.
If a configuration is terminated early, the predicted terminal value from the estimated learning curves is used as the validation error passed to the hyperparameter optimization algorithm. Hence, if the learning curve fit is poor, it could impact the performance of the configuration selection algorithm. While this approach is heuristic in nature, it does demonstrate promising empirical performance, so we included SMAC with early termination as a competitor. We used the conservative termination criterion with default parameters, recorded the validation loss every 400 iterations, and evaluated the termination criterion 3 times within the training period (every 8k iterations for CIFAR-10 and MRBI and every 16k iterations for SVHN) (see footnote 7). Comparing performance by the total multiple of R used is conservative because it does not account for the time spent fitting the learning curve in order to check the termination criterion.

7 We used the code provided at https://github.com/automl/pylearningcurvepredictor

"}, {"section_index": "12", "section_name": "A.4 KERNEL CLASSIFICATION EXPERIMENTS", "section_text": "We trained the regularized least-squares classification model using a block coordinate descent solver. Our models take less than 10 minutes to train on CIFAR-10 using an 8 core machine, while the default SVM method in Scikit-learn is single core and takes hours. Table 5 shows the hyperparameters and associated ranges considered in the kernel least squares classification experiment discussed in Section 4.2. The cost term C is divided by the number of samples so that the tradeoff between the squared error and the l2 penalty would remain constant as the resource increased (squared error is summed across observations and not averaged). The regularization term is equal to the inverse of the scaled cost term C. Additionally, the average test error with associated minimum and maximum ranges across 10 trials is shown in Figure 7.

Hyperparameter               Type          Values
preprocessor                 Categorical   min/max, standardize, normalize
kernel                       Categorical   rbf, polynomial, sigmoid
C                            Continuous    log [10^-3, 10^5]
gamma                        Continuous    log [10^-5, 10]
degree (if kernel=poly)      integer       [2, 5]
coef0 (if kernel=poly,sigmoid)  uniform    [-1.0, 1.0]

Table 5: Hyperparameter space for the kernel regularized least squares classification problem discussed in Section 4.2.

Table 6 shows the hyperparameters and associated ranges considered in the random features kernel approximation classification experiment discussed in Section 4.3. The regularization term λ is divided by the number of features so that the tradeoff between the squared error and the l2 penalty would remain constant as the resource increased. Additionally, the average test error with associated minimum and maximum ranges across 10 trials is shown in Figure 8.

Table 6: Hyperparameter space for the random feature kernel approximation classification problem discussed in Section 4.3.

[Figure 7 plot: test error versus minutes for hyperband, SMAC, TPE, random, random 2x, and bracket s = 4.]

Figure 7: Average test error of the best kernel regularized least squares classification model found by each searcher on CIFAR-10. The color coded dashed lines indicate when the last trial of a given searcher finished. Error bars correspond to observed minimum and maximum test error across 10 trials.
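The Table 5 space is conditional: `degree` and `coef0` only exist for some kernels. A minimal sketch of how one could sample it (our own illustrative code, not from the paper's implementation):

```python
import random

def sample_kernel_config():
    """Draw one configuration from the Table 5 space; degree and coef0
    are only sampled when the chosen kernel uses them."""
    cfg = {
        "preprocessor": random.choice(["min/max", "standardize", "normalize"]),
        "kernel": random.choice(["rbf", "polynomial", "sigmoid"]),
        "C": 10 ** random.uniform(-3, 5),      # log-uniform over [1e-3, 1e5]
        "gamma": 10 ** random.uniform(-5, 1),  # log-uniform over [1e-5, 1e1]
    }
    if cfg["kernel"] == "polynomial":
        cfg["degree"] = random.randint(2, 5)
    if cfg["kernel"] in ("polynomial", "sigmoid"):
        cfg["coef0"] = random.uniform(-1.0, 1.0)
    return cfg
```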
[Figure 8 plot: test error versus minutes for hyperband, SMAC, TPE, spearmint, random, random 2x, and bracket s = 4.]

Figure 8: Average test error of the best random features model found by each searcher on CIFAR-10. The test error for HYPERBAND and bracket s = 4 is calculated at every evaluation instead of at the end of a bracket. Error bars correspond to observed minimum and maximum test error across 10 trials."}]
SkXIrV9le | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "The current computer graphics pipelines are the result of efficient implementations required by limited hardware and high frequency output requirements. These requirements were also achieved with the use of explicit physics and optics constraints and modeling with constantly improving data structures (Shirley et al., 2015).

In machine learning, on the other hand, image (Olshausen et al., 1996) and video (Hurri & Hyvarinen, 2003) generative models had for a long time been investigated with statistical approaches that model images down to the pixel level (Simoncelli & Olshausen, 2001), sometimes assuming neighborhood statistical dependencies (Osindero & Hinton, 2008). In video prediction, the current state of the art uses variations of deep convolutional recurrent neural networks (Kalchbrenner et al., 2016) (Lotter et al., 2016) (Finn et al., 2016).

As a parallel to the classic machine learning approach to image and video interpretation and prediction, there is a growing trend in the deep learning literature of modeling vision as inverse graphics (Kulkarni et al., 2015) (Rezende et al., 2016) (Eslami et al., 2016). These approaches can be divided into two groups: supervised and unsupervised vision as inverse graphics. The supervised approach assumes that during training an image is provided with extra information about its rotation, translation, illumination, etc. The goal of the supervised model is to learn an auto-encoder that explicitly factors out the content of the image and its physical properties. The supervised approach is illustrated by Kulkarni et al. (2015).

The unsupervised approach requires extra architectural constraints, similar to those assumed in computer graphics. For example, Reed et al. (2016) modeled the content of a scene with a Generative Adversarial Network (Goodfellow et al., 2014) and its location with Spatial Transformer Networks (Jaderberg et al., 2015). The full model is adapted end-to-end to generate images whose appearance can be changed by independently modifying the "what" and/or "where" variables. A similar approach was applied to video generation with volumetric convolutional neural networks (Vondrick et al., 2016). In two papers by Google DeepMind (Rezende et al., 2016) (Eslami et al., 2016) they improved the "where" representations of the unsupervised approach and modeled the 3D geometry of the scene. This way they explicitly represented object rotation, translation, camera pose, etc. Their approaches were also trained end-to-end with REINFORCE-like stochastic gradients to backpropagate through non-differentiable parts of the graphics pipeline (Rezende et al., 2016) or to count the number of objects in the scene (Eslami et al., 2016)."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "[Figure 1 image: an 8×8 sprite placed in a 64×64 canvas by convolution with a delta function (left) and by a spatial transformer (right); see the Figure 1 caption below.]
Those papers also used Spatial Transformer Networks to model the position of the objects in the scene, but they extended them to 3D geometry so they could also model rotation and translation in a volumetric space.

Other approaches inspired by the graphics pipeline and computer vision geometry in machine learning use physics constraints to estimate the depth of each pixel in the scene and camera pose movements to predict frames in video (Mahjourian et al., 2016) (Godard et al., 2016).

The present paper is closer to the unsupervised approach of vision as inverse graphics. More precisely, here we investigate frame prediction in video. Contrary to the work by Reed et al. (2016), here we first limit ourselves to simple synthetic 2D datasets and learning models whose representations can be visually interpreted. This way we can investigate exactly what the neural network is learning and validate our statistical assumptions. Also, we investigate the behavior of Spatial Transformer Networks and question them as the default choice when limited compute resources are available and no scale invariance is required.

First, in the next Section we pose a statistical model that is appropriate for machine learning but inspired by the graphics pipeline.

This section starts with a high level description of the 2D graphics pipeline, followed by a discussion of how to implement it with neural network modules, and finally we define a formal statistical model.

The 2D graphics pipeline starts from geometric primitives and follows with modeling transformations, clipping, viewing transformations and finally scan conversion for generating an image. Here we will deal with previously rasterized bitmaps, i.e. sprites, and will model the translation transformations, rotation and clipping with differentiable operations. This way, the steps in the pipeline can be defined as layers of a neural network and the free parameters can be optimized with backpropagation.

Figure 1: How to get similar results using convolutions with delta-functions and spatial transformers. The input sprite is 8×8 pixels and the outputs are 64×64 pixels. Note that in the convolution the resulting shape is rotated 180 degrees and its center is where the delta equals one, at pixel (x = 16, y = 16). Note also that the edges of the spatial transformer result are blurred due to bilinear interpolation. The A matrix can be read as "zoom out" 8 times and translate up and left by a quarter of the resulting size.

For our neural network implementation, we assume a finite set of sprites (later we generalize it to infinite sprites) that will be part of the frames in the video. The image generation network selects a sprite, s, from a memorized sprite database S = {s_j}, j ∈ {1, ..., K}, using an addressing signal c:

s = Σ_j c_j s_j,  where  Σ_j c_j = 1.    (1)

For interpretable results it would be optimal to do one-hot memory addressing, where c_j = 1 for s_j = s and c_j = 0 otherwise. Note that (1) is differentiable w.r.t. both c_j and s_j, so we can learn the individual sprites from data. We can force the c_j to sum to 1 using the softmax nonlinearity. This approach was inspired by the recent deep learning literature on attention modules (Bahdanau et al., 2014) (Graves et al., 2014).

When the number of possible sprites is too large, it is more efficient to use a compressed representation. Instead of an address value c we use a content addressable memory, where the image generator estimates a code z that is then decoded to the desired sprite with a (possibly nonlinear) function d(z). If we interpret the addressing value z as a latent representation and the content addressable memory d(z) as a decoder, we can use the recent advances in neural networks for generative models to set up our statistical model.
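A minimal NumPy sketch of the soft sprite selection in Eq. (1); `select_sprite` is a name we chose for illustration, not part of the paper's companion code:

```python
import numpy as np

def select_sprite(sprite_bank, logits):
    """Soft sprite selection of Eq. (1): a convex combination of memorized
    sprites with weights given by a softmax over addressing logits."""
    c = np.exp(logits - logits.max())
    c /= c.sum()                              # enforces sum_j c_j = 1
    return np.tensordot(c, sprite_bank, 1)    # s = sum_j c_j s_j

sprites = np.random.rand(3, 8, 8)             # K = 3 memorized 8x8 sprites
s = select_sprite(sprites, np.array([4.0, 0.1, 0.2]))  # nearly picks sprite 0
```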
We will revisit this later in this section.

The translation transformation can be modeled with a convolution with a delta function or using spatial transformers. Note that the translation of an image I(x, y) can be defined as

I(x − τ_x, y − τ_y) = I(x, y) ∗ δ(x − τ_x, y − τ_y),    (2)

where ∗ denotes the image convolution operation. Clipping is naturally handled in such a case: if the output image has finite dimensions and δ(x − τ_x, y − τ_y) is non-zero near its border, the translated image I(x − τ_x, y − τ_y) will be clipped. Another way of implementing the translation operation is using Spatial Transformer Networks (STN) (Jaderberg et al., 2015). An implementation of STN can be defined in two steps: resampling and bilinear interpolation. Resampling is defined by moving the position of the pixels (x, y) in the original image using a linear transform to new positions (x̃, ỹ) as

[x̃; ỹ] = A [x; y; 1],  where  A = [A_11 A_12 A_13; A_21 A_22 A_23].    (3)

We assume the coordinates in the original image are integers 0 ≤ x < M and 0 ≤ y < N, where M × N is the size of the image I. Once the new coordinates are defined, we can calculate the values of the pixels in the new image Ĩ using bilinear interpolation:

Ĩ(x, y) = w_{x1,y1} I(x1, y1) + w_{x1,y2} I(x1, y2) + w_{x2,y1} I(x2, y1) + w_{x2,y2} I(x2, y2),    (4)

where x1, x2, y1, y2 are integers, x1 ≤ x < x2, y1 ≤ y < y2 and

w_{x1,y1} = (⌊x⌋ + 1 − x)(⌊y⌋ + 1 − y)
w_{x1,y2} = (⌊x⌋ + 1 − x)(y − ⌊y⌋)
w_{x2,y1} = (x − ⌊x⌋)(⌊y⌋ + 1 − y)
w_{x2,y2} = (x − ⌊x⌋)(y − ⌊y⌋).    (5)

To avoid sampling from outside the image we clip the values ⌊x⌋ and ⌊x⌋ + 1 between 0 and M, and the values ⌊y⌋ and ⌊y⌋ + 1 between 0 and N. We omitted that in (5) for conciseness. Note that (4) is piecewise differentiable w.r.t. I.

We can define translation through operations with

A = [1 0 τ_x; 0 1 τ_y],    (6)

and rotation by constraining A to

A = [cos ρ sin ρ 0; −sin ρ cos ρ 0].    (7)

Image rescaling is achieved in this framework by rescaling the square submatrix A_{1:2,1:2}. We illustrate in Fig. 1 how to get similar results using convolutions with a delta function and spatial transformers.

Considering the tools defined above, we can define a statistical model of 2D images that explicitly represents sprites and their positions in the scene. We can use the free energy of this statistical model to optimize a neural network. Let us start with a static single frame model and later generalize it to video.

Let an image I ~ p_θ(I) be composed of a sprite s ~ p_θ(s) centered at the (x, y) coordinates in the larger image I. Denote these coordinates as a random variable δ_xy ~ p_θ, where θ are the model parameters. p_θ(δ_xy) can be factored into two marginal categorical distributions Cat(δ_x) and Cat(δ_y) that model the probability of each coordinate of the sprite independently. For the finite sprite dataset, p_θ(s) is also a categorical distribution conditioned on the true sprites. For this finite case the generative model can be factored as

p_θ(I, s, δ) = p_θ(s) p_θ(δ_xy) p(I | s, δ_xy),    (8)

and the posterior

p_θ(s, δ | I) = p_θ(s | I) p(δ_xy | I)    (9)

is tractable. One could use for instance Expectation-Maximization or greedy approaches like Matching Pursuit to alternate between the search for the position and fitting the best matching shape.
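Before moving to the infinite-sprite case, here is a minimal sketch of the delta-function translation of Eq. (2), using SciPy's 2D convolution (our own illustrative code; the sprite and canvas sizes match Fig. 1):

```python
import numpy as np
from scipy.signal import convolve2d

sprite = np.random.rand(8, 8)           # an 8x8 sprite
delta = np.zeros((64, 64))              # a 64x64 canvas
delta[16, 16] = 1.0                     # tau_y = tau_x = 16

frame = convolve2d(delta, sprite, mode="same")
# The sprite appears, flipped 180 degrees by the convolution, centered near
# (16, 16); anything falling outside the 64x64 canvas is clipped automatically.
```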
For the infinite number of sprites case, we assume that there is a hidden variable z from which the sprites are generated as p(s, z) = p_θ(z) p_θ(s | z). In such case our full posterior becomes

p_θ(z, s, δ | I) = p_θ(z, s | I) p(δ_xy | I) = p_θ(z | I) p_θ(s | I, z) p(δ_xy | I).    (10)

We can simplify (10) by assuming p_θ(z | s) = p_θ(z | I) for simple images without ambiguity and no sprite occlusion. For scalable inference in the case of unknown θ and z and intractable p_θ(z | s), we can use the auto-encoding variational Bayes (VAE) approach proposed by Kingma & Welling (2013). Using VAE we define an approximate recognition model q_φ(z | s). In such case, the log-likelihood of an image I_i decomposes as

log p_θ(I_i) = D_KL(q_φ(z | s_i) || p_θ(z | s_i)) + D_KL(p_θ(z | s_i) || p_θ(z | I_i)) + L(θ, φ, δ_xy, I_i),    (11)

where the variational lower bound is

L(θ, φ, δ, I) = −D_KL(q_φ(z | s) || p_θ(z)) + E_{q_φ(z|s) p_θ(s|I)}[log p_θ(I | z, δ)],    (12)

where we dropped the subindices xy and i to avoid clutter.

[Figure 2 diagram: an RNN emits an addressing signal c, a location map δ_xy and a rotation ρ; the selected sprite is rotated, translated and added to a background to form Î_{t+1}.]

Figure 2: A schematic block diagram for a Perception Updating Network. This configuration uses both convolutions with delta functions for translation and spatial transformers for rotation. It also shows the optional background underlay.
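Under the Gaussian assumptions introduced below, the KL term of the lower bound (12) has the standard closed form used in VAEs. A minimal sketch (our code, with a diagonal-covariance recognition model and a standard normal prior assumed):

```python
import numpy as np

def neg_kl_gaussian(mu, log_var):
    """-D_KL( N(mu, diag(exp(log_var))) || N(0, I) ): the first term of the
    lower bound (12) in closed form, given the recognition model's mean and
    log-variance vectors."""
    return 0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
```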
Here we would like to train our model by maximizing the lower bound (12), again inspired by VAE. We can do so using the reparametrization trick, assuming q_φ(z | s) and the prior p_θ(z) to be Gaussian, and sampling

z = m_φ(I) + v_φ(I) · ε,    (13)

where ε ~ N(0, σI), I is the identity matrix, and the functions m_φ(I) and v_φ(I) are deep neural networks learned from data.

One can argue that given z and a good approximation to the posterior q_φ, estimating δ is still tractable. Nevertheless, we preemptively avoid Expectation-Maximization or other search approaches and instead use neural network layers l_x and l_y:

δ_xy = softmax(l_x(I)) ⊗ softmax(l_y(I)),    (14)

with ⊗ denoting the outer product of marginals. We also experiment using STNs. Such amortized inference is also faster at training and test time than EM, and will also cover the case where I is itself a learned low dimensional or latent representation instead of an observable image. Bear this in mind while we use this approach even in simple experiments such as those with moving shapes in the Experiments Section. This will help us to understand what can be learned from this model.

We extend the model above to videos, i.e. sequences of images I^(t) = {I^(0), I^(1), ...}, assuming that the conditional log-likelihood log p_θ(I_t | H_{I_t}) = log p_θ(I_t | H_{δ_t}, H_{z_t}) follows (11), where H_{I_t} is the history of video frames prior to time point t. Also H_{δ_t} and H_{z_t} are the history of position coordinates and the history of latent variables of the sprites, respectively. We should observe that one can make the assumption that the sprites don't change for a given video I^(t) and only estimate one sprite s_{t=0} or hidden variable z_{t=0}. This assumption can be useful for long term predictions, but requires that the main object moving in the scene doesn't change.

In the next section, we propose a neural network architecture for maximizing our approximate variational lower bound on 2D videos."}, {"section_index": "2", "section_name": "PERCEPTION UPDATING NETWORKS", "section_text": "This Section proposes a family of neural architectures for optimizing the lower bound (12). A schematic diagram is represented in Fig. 2. The core of our method is a Recurrent Neural Network (RNN) augmented with task specific modules, namely a sprite addressable memory and modeling transformation layers. RNNs augmented with task specific units were popularized by Graves et al. (2014) in the context of learning simple differentiable algorithms and served as inspiration for us as well. Here, since we explicitly model the perceived sprites as s or z and update their location and/or rotation through time, we decided to call our method simply Perception Updating Networks.

Here an input frame at time t, I_t, is fed to the RNN, which emits 2 signals: a memory address that selects a relevant sprite, and transformation parameters. If we are doing the translation transformation using convolutions and delta functions, this output is equal to (14). If using STN, the translation operation returns the matrix A used in (3). Note that we could use both, letting convolutions with δ_xy handle the translation while constraining A as in (7) to do rotation transformations only. We describe the general case where both δ_xy and STNs are used in Algorithm 1.

Beyond deciding between STNs vs δ_xy, a few other free parameters of our method are the type of RNN (e.g. vanilla RNN, LSTM, GRU, ConvRNN, etc.), the number of neurons in the hidden state of the RNN, and the neural network architectures that infer the correct sprite and modeling transformation parameters. Our hyperparameter choices are investigated separately in each experiment in the next Section.

Data: input videos I_t, t ∈ {0, 1, 2, ...}, initial RNN state h_0, neural network layers m_φ, v_φ, d, l, f
Result: video predictions Î_t, t ∈ {1, 2, 3, ...}
for t ∈ {0, 1, 2, ...} do
    h_t ← RNN(I_t, h_{t−1})
    δ_xy ← softmax(l_x(h_t)) ⊗ softmax(l_y(h_t))
    ρ ← f(h_t)
    A ← [cos ρ sin ρ 0; −sin ρ cos ρ 0]
    ε ∼ N(0, σI)
    z_t ← m_φ(h_t) + v_φ(h_t) · ε
    s_t ← d(z_t)
    a_t ← STN(s_t, A)
    Ĩ_{t+1} ← a_t ∗ δ_xy
    Î_{t+1} ← μ Ĩ_{t+1} + (1 − μ) B
end

Algorithm 1: Perception Updating Networks. STN denotes the spatial transformer operator (3)-(4) and ∗ denotes convolution. We experimented with several variations of this algorithm, mainly changing if and how the "where" modules δ_xy and STN are used, how the sprite s_t is calculated, and not using a background B when not necessary.
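For concreteness, a minimal NumPy sketch of a single update of the translation-only (convolutional) variant of Algorithm 1, without the STN rotation branch. This is our own illustrative rendering, not the companion code; `l_x, l_y, m, v, d` are assumed to be the corresponding learned layers passed in as plain callables:

```python
import numpy as np
from scipy.signal import convolve2d

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def pun_step(h, l_x, l_y, m, v, d, background, mask):
    """One convolutional Perception Updating Network step: infer 'where'
    and 'what' from the RNN state h and compose the next frame."""
    # "where": location map as the outer product of two softmax marginals
    delta_xy = np.outer(softmax(l_x(h)), softmax(l_y(h)))
    # "what": sample a latent code with the reparametrization trick, decode it
    mu = m(h)
    z = mu + v(h) * np.random.randn(*mu.shape)
    sprite = d(z)
    # compose: place the sprite with a delta convolution, underlay background
    frame = convolve2d(delta_xy, sprite, mode="same")
    return mask * frame + (1.0 - mask) * background
```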
In the next section we present experiments with the proposed architecture on synthetic datasets."}, {"section_index": "3", "section_name": "4 EXPERIMENTS", "section_text": "In this section we experiment with several implementations of the proposed Perception Updating Networks. We start with a simple synthetic dataset made of videos where one of 3 shapes moves with constant speed, bouncing at the edges of an image. This illustrates the working of the finite memory and the addressing scheme in (1). Afterwards we show results on the moving MNIST dataset (Srivastava et al., 2015), commonly used in the literature on generative neural network models of videos."}, {"section_index": "4", "section_name": "4.1 BOUNCING SHAPES", "section_text": "In this first experiment we generate videos of one of three shapes moving on a non-zero background. The shapes are a square, a triangle and a cross. The image size is 20×20 pixels and the shapes are 8×8 pixels. The pixel values are between 0 and 1. The shapes are picked with equal probability and they move at a constant speed of 1 pixel per frame. The shapes start from random initial positions and start moving in random directions as well.

We tested two implementations of the proposed architecture: one using only convolutions, referred to as convolutional PUN in the figures, and another using spatial transformers, called spatial transformer PUN. For the parameters of the convolutional PUN, the RNN used was a Long Short Term Memory (LSTM) with 100 cells. The RNN in the spatial transformer PUN had 256 cells. In the convolutional PUN, the location layers used to calculate δ_xy, l_x and l_y, output vectors of size 20 pixels, and we used the finite addressable memory described in (1). The background is also learned from data as weights of the neural network. This background served to make the task more difficult and force the network to avoid just exploiting any non-zero value. After the convolutional composition Ĩ_t = s_t ∗ δ_xy, we added the background to form a new image using Î_t = μ Ĩ_t + (1 − μ) B, where μ is a differentiable mask that accounts for the "transparency" of the image Ĩ_t and B is the learned 20×20 pixel background image. For complex shapes this mask could be calculated as another module in the network, similarly to the approach in Vondrick et al. (2016).

[Figure 3 image: a) one step ahead predictions (ground truth, convolutional PUN, LSTM, spatial transformer PUN); b) convolutional PUN learned sprites (10×10 and 6×6) and sample δ_xy maps for each sprite size.]

Figure 3: Results on the Bouncing Shapes dataset. Three 8×8 sprites (a square, a cross and a triangle) were used to generate videos. The shapes move in a 20×20 pixel canvas with a Toeplitz background and bounce at the corners. a) We show one step ahead predictions with the compared methods. b) We also show the learned sprites for the convolutional implementation of the proposed Perception Updating Networks when we over- and under-estimate the size of the desired sprites.

In the following experiments, the training videos were 10 frames long. At test time the network is fed the first 10 frames of a video and asked to predict the next 10. Results for the compared methods are shown in Fig. 4. For the baseline method, we did a hyperparameter search on conventional LSTMs with a single linear output layer until we found one that had comparable results at test time. That network had 256 hidden cells. Also, note that although the scale of the mean square error is the same, the results from our proposed architecture look smoother than those learned by the LSTM, as shown in Fig. 3.

Given such a simple experiment, it is elucidating to visualize the values learned by each piece of the network. As expected, the sprite memory learned the 3 investigated shapes in transposed order, since they are reverted by the convolution operation that composes the frame. We also experimented with choosing the size of the learned sprites s smaller and larger than the true shapes. We observed that for larger sizes, such as 10×10, the sprites converge to the correct shapes, just using part of the pixels. For smaller sizes, such as 6×6 pixels, instead of learning a part of the correct shape, the convolutional Perception Updating Network learned to compensate for the lack of enough pixels with more than one non-zero value in the location operation δ_xy (see Fig. 3). This allows us to suggest to the interested practitioner that, in order to get interpretable results, it is better to use sprites larger than the expected size than smaller.

For the spatial transformer PUN the image is calculated as (see Algorithm 1 for context)

A = f(h_t),
Ĩ_{t+1} = STN(s_t, A).

We noticed that the spatial transformer PUN was not able to learn the training videos using an architecture equivalent to the convolutional PUN one. We had to use multiple layers to define the function f(h_t). In other words, in the convolution based method δ_xy can be estimated by a single affine transformation of the state h_t, but A cannot. We also had to use smaller learning rates to guarantee convergence: 0.0001 for the STN while the δ_xy-based model worked with a value 10 times larger.
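A small NumPy sketch of the spatial transformer operator of Eqs. (3)-(5), written for clarity rather than speed (our own illustrative code; real implementations vectorize the sampling grid):

```python
import numpy as np

def stn(image, A, out_h, out_w):
    """Minimal spatial transformer: map each output pixel through the 2x3
    affine matrix A and bilinearly sample the input image, skipping samples
    that fall outside it (the clipping discussed after Eq. (5))."""
    H, W = image.shape
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            x, y = A @ np.array([j, i, 1.0])       # resampled coordinates
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            x1, y1 = x0 + 1, y0 + 1
            wx, wy = x - x0, y - y0
            for xi, yi, w in [(x0, y0, (1-wx)*(1-wy)), (x0, y1, (1-wx)*wy),
                              (x1, y0, wx*(1-wy)),     (x1, y1, wx*wy)]:
                if 0 <= xi < W and 0 <= yi < H:
                    out[i, j] += w * image[yi, xi]
    return out
```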
[Figure 4 plots: mean squared error versus training epochs for the convolutional PUN, the LSTM baseline, and the STN PUN.]

Figure 4: Learning curves in the test task of two implementations of the proposed architecture (conv PUN and STN PUN) and an equivalent LSTM baseline. Note that the spatial transformer based PUN was not able to generalize to the test set, i.e. it did not work well for generating videos when getting its own previous outputs as next step inputs.

If we don't use the softmax nonlinearity to construct δ_xy, the representations learned by the convolutional PUN are no longer visually interpretable. It is interesting to conclude that under this framework the "what" and "where" can only be distinguished if we impose architectural constraints. The reason is the commutative property of the convolution operation.

As a note on rotation, we ran experiments where the sprites are rotated by a random angle before being placed in the image. This new type of video cannot be learned using only convolution based Perception Updating Networks unless we increase the number of sprites proportionally to the number of possible angles. Spatial transformer based Perception Updating Networks can handle this new type of video naturally. Nevertheless, if the number of rotation angles is finite or can be discretized, we found that we could learn to generate the videos faster if we combined the convolutional approach with a mechanism to select the appropriate angle from a set of possibilities. Results on this experiment are not shown in this paper due to space constraints, but they can be reproduced with the companion code."}, {"section_index": "5", "section_name": "4.2 MOVING MNIST", "section_text": "The Moving MNIST benchmark uses videos generated by moving 28×28 pixel images of handwritten digits in a 64×64 pixel canvas. Just like in the Bouncing Shapes dataset, the digits move with different speeds in different directions and can bounce at the walls. Unlike the Bouncing Shapes dataset, there are 60000 different sprites for training and 10000 for test, making it impractical to use a discrete memory module. Instead, we use the memory representation denoted by (13), followed by s_t = d(z_t) as written in Algorithm 1.

We trained a convolutional Perception Updating Network using 2 layer LSTMs, each one with 1024 cells, for 200 epochs, with 10000 gradient updates per epoch. The latent variable z had 100 dimensions and the decoder d(·) was a single hidden layer MLP with 1000 hidden neurons and softplus activation function. The output layer of this MLP has 784 neurons, which is the size of an MNIST image, and sigmoid activation function. On the test set we obtained a negative log-likelihood of 239 nats with the proposed architecture, while a 2 layer LSTM baseline had 250 nats. Note that our method was optimized to minimize the lower bound (12), not only the negative likelihood.
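The decoder just described is small enough to sketch directly. This is our own illustrative NumPy rendering of its forward pass, with random stand-in weights rather than the learned ones:

```python
import numpy as np

def softplus(a):
    return np.log1p(np.exp(a))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def decode_sprite(z, W1, b1, W2, b2):
    """The sprite decoder d(z) used for Moving MNIST: a 100-d latent code,
    one hidden layer of 1000 softplus units, and a 784-d sigmoid output
    reshaped into a 28x28 image."""
    hidden = softplus(W1 @ z + b1)            # (1000,)
    return sigmoid(W2 @ hidden + b2).reshape(28, 28)

rng = np.random.default_rng(0)
z = rng.normal(size=100)
W1, b1 = 0.01 * rng.normal(size=(1000, 100)), np.zeros(1000)
W2, b2 = 0.01 * rng.normal(size=(784, 1000)), np.zeros(784)
sprite = decode_sprite(z, W1, b1, W2, b2)
```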
These negative log-likelihood results are not as good as those obtained by the Video Pixel Networks (Kalchbrenner et al., 2016), which obtained 87 nats on the test set. Nevertheless, both approaches are not mutually exclusive, and instead of a fully connected decoder we could use a similar PixelCNN decoder to generate sprites with higher likelihood. In this first paper we decided instead to focus on defining the statistical framework and the interpretable "what" and "where" decoupling.

In Fig. 5 we show sample rollout videos. The network was fed with 10 frames and asked to generate 10 more, getting its own outputs back as inputs; see the companion code repository for an animated version of this figure.

[Figure 5 image: rollouts of MNIST digits generated by the network.]

Figure 5: Sample rollouts of a 2 layer LSTM convolutional Perception Updating Network.

When running the proposed method in rollout mode, feeding the outputs back as next time step inputs, we were able to generate high likelihood frames for more time steps than with a baseline LSTM. Also, since the sprite to be generated and its position in the frame are decoupled, in rollout mode we can fix the sprite and only use the δ_xy coming from the network. This way we can generate realistic looking frames for even longer, but after a few frames we observed the digits stopped moving or moved in the wrong direction (see video in the companion code repository). This means that the LSTM RNN was not able to maintain its internal dynamics for too long; thus, there is still room for improvement in the proposed architecture.

This experiment also suggests several improvements to the proposed architecture. For example, we assumed that the internal RNN has to calculate a sprite at every time step, which is inefficient when the sprites don't change in the video. We should improve the architecture with an extra memory unit that snapshots the sprites and avoids the burden of recalculating the sprites at every step. We believe this would be a possible way to free representation power that the internal RNN could use to model the movement dynamics for even more time steps."}, {"section_index": "6", "section_name": "5 CONCLUSIONS", "section_text": "This paper introduced a statistical framework for modeling videos of 2D scenes inspired by graphics pipelines and variational auto-encoding Bayes. From this statistical framework we derived a variational lower bound that decouples sprites and their dynamics in a video. To optimize this lower bound, we suggested a family of architectures called Perception Updating Networks that can take advantage of this decoupled representation by memorizing sprites or their percepts and updating their location in a scene independently. We showed that this architecture could generate videos that are interpretable and are better suited than baseline RNNs for long video generation."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Ryan Burt for several suggestions to the first draft. This work was partially funded by the University of Florida Graduate Student Fellowship and ONR N00014-14-1-0542."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. arXiv preprint arXiv:1605.07157, 2016.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Jarmo Hurri and Aapo Hyvarinen. Simple-cell-like receptive fields maximize temporal coherence in natural video.
Neural Computation, 15(3):663-691, 2003.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017-2025, 2015.

Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016.

Bruno A Olshausen et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, 1996.

Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. arXiv preprint arXiv:1610.02454, 2016.

Danilo Jimenez Rezende, SM Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised learning of 3d structure from images. arXiv preprint arXiv:1607.00662, 2016.

Peter Shirley, Michael Ashikhmin, and Steve Marschner. Fundamentals of computer graphics. CRC Press, 2015.

Eero P Simoncelli and Bruno A Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24(1):1193-1216, 2001.

Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using lstms. CoRR, abs/1502.04681, 2015.

Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. arXiv preprint arXiv:1609.02612, 2016."}]
B184E5qee | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Language models, which are probability distributions over sequences of words, have many applications such as machine translation (Brown et al., 1993), speech recognition (Bahl et al., 1983) or dialogue agents (Stolcke et al., 2000). While traditional neural network language models have obtained state-of-the-art performance in this domain (Jozefowicz et al., 2016; Mikolov et al., 2010), they lack the capacity to adapt to their recent history, limiting their application to dynamic environments (Dodge et al., 2015). A recent approach to solve this problem is to augment these networks with an external memory (Graves et al., 2014; Grefenstette et al., 2015; Joulin & Mikolov, 2015; Sukhbaatar et al., 2015). These models can potentially use their external memory to store new information and adapt to a changing environment.

While these networks have obtained promising results on language modeling datasets (Sukhbaatar et al., 2015), they are quite computationally expensive. Typically, they have to learn a parametrizable mechanism to read or write to memory cells (Graves et al., 2014; Joulin & Mikolov, 2015). This may limit both the size of their usable memory as well as the quantity of data they can be trained on. In this work, we propose a very light-weight alternative that shares some of the properties of memory augmented networks, notably the capability to dynamically adapt over time. By minimizing the computation burden of the memory, we are able to use larger memory and scale to bigger datasets.

Our model shares some similarities with a model proposed by Kuhn (1988), called the cache model. A cache model stores a simple representation of the recent past, often in the form of unigrams, and uses them for prediction (Kuhn & De Mori, 1990). This contextual information is quite cheap to store and can be accessed efficiently. It also does not need any training and can be applied on top of any model. This makes this model particularly interesting for domain adaptation (Kneser & Steinbiss, 1993).

Our main contribution is to propose a continuous version of the cache model, called the Neural Cache Model, that can be adapted to any neural network language model. We store recent hidden activations and use them as representation for the context. Using simply a dot-product with the current hidden activations, they turn out to be extremely informative for prediction. Our model requires no training and can be used on any pre-trained neural network. It also scales effortlessly to thousands of memory cells. We demonstrate the quality of the Neural Cache models on several language model tasks and the LAMBADA dataset (Paperno et al., 2016)."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We propose an extension to neural network language models to adapt their prediction to the recent history. Our model is a simplified version of memory augmented networks, which stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural networks and cache models used with count-based language models. We demonstrate on several language model datasets that our approach performs significantly better than recent memory augmented networks.
A language model is a probability distribution over sequences of words. Let V be the size of the vocabulary; each word is represented by a one-hot encoding vector x in R^V, corresponding to its index in the vocabulary. Using the chain rule, the probability assigned to a sequence of words x_1, ..., x_T can be factorized as

p(x_1, ..., x_T) = ∏_{t=1}^{T} p(x_t | x_{t−1}, ..., x_1).

Language modeling is often framed as learning the conditional probability over words, given the history (Bahl et al., 1983).

This conditional probability is traditionally approximated with non-parametric models based on counting statistics (Goodman, 2001). In particular, smoothed N-gram models (Katz, 1987; Kneser & Ney, 1995) achieve good performance in practice (Mikolov et al., 2011). Parametrized alternatives are either maximum entropy language models (Rosenfeld, 1996), feedforward networks (Bengio et al., 2003) or recurrent networks (Mikolov et al., 2010). In particular, recurrent networks are currently the best solution to approximate this conditional probability, achieving state-of-the-art performance on standard language modeling benchmarks (Jozefowicz et al., 2016; Zilly et al., 2016).

Recurrent networks. Assuming that we have a vector h_t ∈ R^d encoding the history x_t, ..., x_1, the conditional probability of a word w can be parametrized as

p_vocab(w | x_t, ..., x_1) ∝ exp(h_t^T o_w),

where o_w is the output embedding of the word w. The history vector h_t is computed recursively as

h_t = Φ(x_t, h_{t−1}),

where Φ is a function depending on the architecture of the network. Several architectures for recurrent networks have been proposed, such as the Elman network (Elman, 1990), the long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) or the gated recurrent unit (GRU) (Chung et al., 2014). One of the simplest recurrent networks is the Elman network (Elman, 1990), where

h_t = σ(L x_t + R h_{t−1}),

where σ is a non-linearity such as the logistic or tanh functions, L ∈ R^{d×V} is a word embedding matrix and R ∈ R^{d×d} is the recurrent matrix. The LSTM architecture is particularly interesting in the context of language modelling (Jozefowicz et al., 2016) and we refer the reader to Graves et al. (2013) for details on this architecture.

The parameters of recurrent neural network language models are learned by minimizing the negative log-likelihood of the training data. This objective function is usually minimized by using the stochastic gradient descent algorithm, or variants such as Adagrad (Duchi et al., 2011). The gradient is computed using the truncated backpropagation through time algorithm (Werbos, 1990; Williams & Peng, 1990).
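A minimal NumPy sketch of one Elman-network language model step, following the equations above (our own illustrative code; `elman_lm_step` is a name we chose, and the rows of O play the role of the output embeddings o_w):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def elman_lm_step(x_onehot, h_prev, L, R, O):
    """One Elman LM step: update h_t = tanh(L x_t + R h_{t-1}) and score
    every word w by h_t^T o_w.  Shapes: L is (d, V), R is (d, d), O is (V, d)."""
    h = np.tanh(L @ x_onehot + R @ h_prev)
    p_vocab = softmax(O @ h)
    return h, p_vocab
```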
Cache model. After a word appears once in a document, it is much more likely to appear again. As an example, the frequency of the word tiger on the Wikipedia page of the same name is 2.8%, compared to 0.0037% over the whole of Wikipedia. Cache models exploit this simple observation to improve n-gram language models by capturing long-range dependencies in documents. More precisely, these models have a cache component, which contains the words that appeared in the recent history (either the document or a fixed number of words). A simple language model, such as a unigram or smoothed bigram model, is fitted on the words of the cache and interpolated with the static language model (trained over a larger dataset). This technique has many advantages. First, this is a very efficient way to adapt a language model to a new domain. Second, such models can predict out-of-vocabulary words (OOV words) after seeing them once. Finally, this helps capture long-range dependencies in documents, in order to generate more coherent text.

The Neural Cache Model adds a cache-like memory to neural network language models. It exploits the hidden representations h_t to define a probability distribution over the words in the cache. As illustrated in Figure 1, the cache stores pairs (h_i, x_{i+1}) of a hidden representation and the word which was generated based on this representation (we remind the reader that the vector h_i encodes the history x_i, ..., x_1). At time t, we then define a probability distribution over words stored in the cache, based on the stored hidden representations and the current one h_t, as

p_cache(w | h_{1..t}, x_{1..t}) ∝ Σ_{i=1}^{t−1} 1{w = x_{i+1}} exp(θ h_t^T h_i),

where the scalar θ is a parameter which controls the flatness of the distribution. When θ is equal to zero, the probability distribution over the history is uniform, and our model is equivalent to a unigram cache model (Kuhn & De Mori, 1990).

[Figure 1 image: an Elman-style network unrolled over x_1, ..., x_4, with the (h_i, x_{i+1}) pairs stored as memory cells.]

Figure 1: The neural cache stores the previous hidden states in memory cells. They are then used as keys to retrieve their corresponding word, that is, the next word. There is no transformation applied to the storage during writing and reading.

From the point of view of memory-augmented neural networks, the probability p_cache(w | h_{1..t}, x_{1..t}) given by the neural cache model can be interpreted as the probability of retrieving the word w from the memory given the query h_t, where the desired answer is the next word x_{t+1}. Using previous hidden states as keys for the words in the memory, the memory lookup operator can be implemented with simple dot products between the keys and the query. In contrast to existing memory-augmented neural networks, the neural cache model avoids the need to learn the memory lookup operator. Such a cache can thus be added to a pre-trained recurrent neural language model without fine tuning of the parameters, and a large cache size can be used with negligible impact on the computational cost of a prediction.

Neural cache language model. Following the standard practice in n-gram cache-based language models, the final probability of a word is given by the linear interpolation of the cache language model with the regular language model, obtaining:

p(w | h_{1..t}, x_{1..t}) = (1 − λ) p_vocab(w | h_t) + λ p_cache(w | h_{1..t}, x_{1..t}).

Instead of taking a linear interpolation between the two distributions with a fixed λ, we also consider a global normalization over the two distributions:

p(w | h_{1..t}, x_{1..t}) ∝ exp(h_t^T o_w) + Σ_{i=1}^{t−1} 1{w = x_{i+1}} exp(θ h_t^T h_i + α).

This corresponds to taking a softmax over the vocabulary and the words in the cache. The parameter α controls the weight of the cache component, and is the counterpart of the λ parameter for linear interpolation.

The addition of the neural cache to a recurrent neural language model inherits the advantages of n-gram caches in usual cache-based models: the probability distribution over words is updated online depending on the context, and out-of-vocabulary words can be predicted as soon as they have been seen at least once in the recent history. The neural cache also inherits the ability of the hidden states of recurrent neural networks to model longer-term contexts than small n-grams, and thus allows for a finer modeling of the current context than, e.g., unigram caches.
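A minimal NumPy sketch of the linearly interpolated neural cache prediction defined above (our own illustrative code; `cache_h` holds the stored hidden states h_i, one row per memory cell, and `cache_words` the corresponding next-word indices x_{i+1}):

```python
import numpy as np

def cache_interpolated_probs(h_t, cache_h, cache_words, p_vocab, theta, lam):
    """Neural cache prediction with linear interpolation: score each stored
    (h_i, x_{i+1}) pair by a dot product of h_i with the current state h_t."""
    scores = np.exp(theta * cache_h @ h_t)     # one dot-product score per cell
    p_cache = np.zeros_like(p_vocab)
    np.add.at(p_cache, cache_words, scores)    # sum scores of cells storing w
    p_cache /= p_cache.sum()
    return (1.0 - lam) * p_vocab + lam * p_cache
```

Note that the only per-step cost beyond the base model is a matrix-vector product against the cache, which is why thousands of memory cells remain cheap.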
Figure 2: Perplexity on the validation set of the Penn Tree Bank for linear interpolation (left) and global normalization (right), for various values of hyperparameters \theta, \lambda and \alpha. We use a cache model of size 500. The base model has a validation perplexity of 86.9. The best linear interpolation has a perplexity of 74.6, while the best global normalization has a perplexity of 74.9.

Model                                            Test PPL
RNN+LDA+KN5+cache (Mikolov & Zweig, 2012)        90.3
LSTM (Zaremba et al., 2014)                      78.4
Variational LSTM (Gal & Ghahramani, 2015)        73.4
Recurrent Highway Network (Zilly et al., 2016)   66.0
Pointer Sentinel LSTM (Merity et al., 2016)      70.9
LSTM (our implem.)                               82.3
Neural cache model                               72.1

Table 1: Test perplexity on the Penn Tree Bank.

Training procedure. For now, we first train the (recurrent) neural network language model, without the cache component. We only apply the cache model at test time, and choose the hyperparameters \theta and \lambda (or \alpha) on the validation set. A big advantage of our method is that it is very easy and cheap to apply with already-trained neural models. There is no need to perform backpropagation over large contexts, and we can thus apply our method with large cache sizes (larger than one thousand).

"}, {"section_index": "2", "section_name": "4 RELATED WORK", "section_text": "Cache model. Adding a cache to a language model was introduced in the context of speech recognition (Kuhn, 1988; Kupiec, 1989; Kuhn & De Mori, 1990). These models were further extended by Jelinek et al. (1991) into a smoothed trigram language model, reporting reductions in both perplexity and word error rates. Della Pietra et al. (1992) adapt the cache to a general n-gram model such that it satisfies marginal constraints obtained from the current document.

Adaptive language models. Other adaptive language models have been proposed in the past: Kneser & Steinbiss (1993) and Iyer & Ostendorf (1999) dynamically adapt the parameters of their model to the recent history using different weight interpolation schemes. Bellegarda (2000) and Coccaro & Jurafsky (1998) use latent semantic analysis to adapt their models to the current context. Similarly, topic features have been used with either maximum entropy models (Khudanpur & Wu, 2000) or recurrent networks (Mikolov & Zweig, 2012; Wang & Cho, 2015). Finally, Lau et al. (1993) propose to use pairs of distant words to capture long-range dependencies.

Memory augmented neural networks. In the context of sequence prediction, several memory-augmented neural networks have obtained promising results (Sukhbaatar et al., 2015; Graves et al., 2014; Grefenstette et al., 2015; Joulin & Mikolov, 2015). In particular, Sukhbaatar et al. (2015) store a representation of the recent past and access it using an attention mechanism (Bahdanau et al., 2014). Sukhbaatar et al. (2015) show that this reduces the perplexity for language modeling. This approach has been successfully applied to question answering, when the answer is contained in a given paragraph (Chen et al.,
2016; Hermann et al., 2015; Kadlec et al., 2016; Sukhbaatar et al., 2015). Similarly, Vinyals et al. (2015) explore the use of this mechanism to reorder sequences of tokens. Their network uses an attention (or "pointer") over the input sequence to predict which element should be selected as the next output. Gulcehre et al. (2016) have shown that a similar mechanism, called pointer softmax, could be used in the context of machine translation, to decide which word to copy from the source to the target.

Independently of our work, Merity et al. (2016) apply the same mechanism to recurrent networks. Unlike our work, they use the current hidden activation as a representation of the current input (while we use it to represent the output). This requires additional learning of a transformation between the current representation and those in the past. The advantage of our approach is that we can scale to very large caches effortlessly.

"}, {"section_index": "3", "section_name": "5 EXPERIMENTS", "section_text": "In this section, we evaluate our method on various language modeling datasets, which have different sizes and characteristics. On all datasets, we train a static recurrent neural network language model with LSTM units. We then use the hidden representations from this model to obtain our cache, which is interpolated with the static LSTM model. We also evaluate a unigram cache model interpolated with the static model as another baseline.

Datasets. We first describe experiments performed on two small datasets: the Penn Tree Bank (Marcus et al., 1993) and the wikitext2 (Merity et al., 2016) datasets. The Penn Tree Bank dataset is made of articles from the Wall Street Journal, contains 929k training tokens and has a vocabulary size of 10k. The wikitext2 dataset is derived from Wikipedia articles, contains 2M training tokens and has a vocabulary size of 33k. These datasets contain non-shuffled documents, therefore requiring models to capture inter-sentence dependencies to perform well.

Figure 3: Perplexity on the validation set of wikitext2 for linear interpolation (left) and global normalization (right), for various values of hyperparameters \theta, \lambda and \alpha. We use a cache model of size 2000. The base model has a validation perplexity of 104.2. The best linear interpolation has a perplexity of 72.1, while the best global normalization has a perplexity of 73.5.

Table 2: Test perplexity on the wikitext datasets. The two datasets share the same validation and test sets, making all the results comparable.

Model                                              wikitext2   wikitext103
Zoneout + Variational LSTM (Merity et al., 2016)   100.9       -
Pointer Sentinel LSTM (Merity et al., 2016)        80.8        -
LSTM (our implementation)                          99.3        48.7
Neural cache model (size = 100)                    81.6        44.8
Neural cache model (size = 2,000)                  68.9        40.8

Figure 4: Test perplexity as a function of the number of words in the cache (left: text8; right: wikitext103), for our method and a unigram cache baseline. We observe that our approach can use larger caches than the baseline.

Implementation details. We train recurrent neural network language models with 1024 LSTM units, regularized with dropout (probability of dropping out units equal to 0.65). We use the Adagrad algorithm, with a learning rate of 0.2, a batch size of 20 and initial weights uniformly sampled in the range [-0.05, 0.05]. We clip the norm of the gradient to 0.1 and unroll the network for 30 steps. We consider cache sizes on a logarithmic scale, from 50 to 10,000, and fit the cache hyperparameters on the validation set.
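Since the cache hyperparameters are fit on the validation set, a simple grid search such as the sketch below suffices. It assumes the neural_cache_step helper from the earlier sketch, and hidden states and base-model probabilities precomputed once, which is possible because the base model is frozen; the grid ranges are ours for illustration.

```python
import numpy as np

def validation_perplexity(theta, lam, hidden_states, vocab_probs, targets):
    """Perplexity of the interpolated model on a validation stream. hidden_states[t]
    is h_t and vocab_probs[t] is p_vocab(. | h_t), both precomputed with the frozen
    base model; targets[t] is the next word x_{t+1}."""
    nll, cache_h, cache_w = 0.0, [], []
    for h, p_vocab, w in zip(hidden_states, vocab_probs, targets):
        p = neural_cache_step(h, p_vocab, cache_h, cache_w, theta, lam)
        nll -= np.log(max(p[w], 1e-12))
        cache_h.append(h)    # grow the cache online with (h_t, x_{t+1});
        cache_w.append(w)    # a bounded cache would also drop the oldest pair
    return np.exp(nll / len(targets))

# grid over theta and lambda, mirroring the sweeps of Figures 2 and 3
# best = min((validation_perplexity(t, l, H, P, W), t, l)
#            for t in np.linspace(0.0, 0.4, 9) for l in np.linspace(0.0, 1.0, 11))
```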
Results. We report the perplexity on the validation sets in Figures 2 and 3, for various values of the hyperparameters, for linear interpolation and global normalization. First, we observe that on both datasets, the linear interpolation method performs slightly better than the global normalization approach. It is also easier to apply in practice, and we thus use this method in the remainder of this paper. In Tables 1 and 2, we report the test perplexity of our approach and state-of-the-art models. Our approach is competitive with previous models, in particular with the pointer sentinel LSTM model of Merity et al. (2016). On the Penn Tree Bank, we note that the improvement over the base model is similar for both methods. On the wikitext2 dataset, both methods obtain similar results when using the same cache size (100 words). Since our method is computationally cheap, it is easy to increase the cache to larger values (2,000 words), leading to dramatic improvements (30% over the baseline, 12% over a small cache of 100 words).

"}, {"section_index": "4", "section_name": "5.2 MEDIUM SCALE EXPERIMENTS", "section_text": "Datasets and implementation details. In this section, we describe experiments performed over two medium-scale datasets: text8 and wikitext103. Both datasets are derived from Wikipedia, but different pre-processing was applied. The text8 dataset contains 17M training tokens and has a vocabulary size of 44k words, while the wikitext103 dataset has a training set of size 103M, and a vocabulary size of 267k words. We use the same setting as in the previous section, except for the batch size (we use 128) and dropout parameters (we use 0.45 for text8 and 0.25 for wikitext103). Since both datasets have large vocabularies, we use the adaptive softmax (Grave et al., 2016) for faster training.

Results. We report the test perplexity as a function of the cache size in Figure 4, for the neural cache model and a unigram cache baseline. We observe that our approach can exploit larger cache sizes, compared to the baseline. In Table 2, we observe that the improvement in perplexity of our method over the LSTM baseline on wikitext103 is smaller than for wikitext2 (approx. 16% vs. 30%). The fact that improvements obtained with more advanced techniques decrease when the size of the training data increases has already been observed by Goodman (2001). Both wikitext datasets sharing the same test set, we also observe that the LSTM baseline, trained on 103M tokens (wikitext103), strongly outperforms more sophisticated methods trained on 2M tokens (wikitext2). For these two reasons, we believe that it is important to evaluate and compare methods on relatively large datasets.

Table 3: Perplexity on the text8 and lambada datasets. WB5 stands for a 5-gram language model with Witten-Bell smoothing.

Figure 5: Perplexity on the development and control sets of lambada, as a function of the interpolation parameter \lambda.

"}, {"section_index": "5", "section_name": "5.3 EXPERIMENTS ON THE LAMBADA DATASET", "section_text": "Finally, we report experiments carried out on the lambada dataset, introduced by Paperno et al. (2016). This is a dataset of short passages extracted from novels.
The goal is to predict the last word of the excerpt. This dataset was built so that human subjects solve the task perfectly when given the full context (approx. 4.6 sentences), but fail to do so when only given the sentence with the target word. Thus, most state-of-the-art language models fail on this dataset. The lambada training set contains approximately 200M tokens and has a vocabulary size of 93,215. We report results for our method in Table 3, as well as the performance of baselines from Paperno et al. (2016). Adding a neural cache model to the LSTM baseline strongly improves the performance on the lambada dataset. We also observe in Figure 5 that the best interpolation parameter between the static model and the cache is not the same for the development and control sets. This is due to the fact that more than 83% of passages of the development set include the target word, while this is true for only 14% of the control set. Ideally, a model should have strong results on both sets. One possible generalization of our model would be to adapt the interpolation parameter based on the current vector representation of the history h_t.

"}, {"section_index": "6", "section_name": "6 CONCLUSION", "section_text": "We presented the neural cache model to augment neural language models with a longer-term memory that dynamically updates the word probabilities based on the long-term context. A neural cache can be added on top of a pre-trained language model at negligible cost. Our experiments on both language modeling tasks and the challenging LAMBADA dataset show that significant performance gains can be expected by adding this external memory component.

Technically, the neural cache model is similar to some recent memory-augmented neural networks such as pointer networks. However, its specific design makes it possible to avoid learning the memory lookup component. This makes the neural cache appealing since it can use larger cache sizes than memory-augmented networks and can be applied as easily as traditional count-based caches.

"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. JMLR, 2003.

Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 1993.

Danqi Chen, Jason Bolton, and Christopher D Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. arXiv preprint arXiv:1606.02858, 2016.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Noah Coccaro and Daniel Jurafsky. Towards better integration of semantic predictors in statistical language modeling. In ICSLP. Citeseer, 1998.

Jeffrey L Elman. Finding structure in time. Cognitive Science, 1990.

Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprint arXiv:1512.05287, 2015.

Joshua T Goodman. A bit of progress in language modeling. Computer Speech & Language, 2001.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines.
arXiv preprint arXiv:1410.5401, 2014.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Rukmini M Iyer and Mari Ostendorf. Modeling long distance dependence in language: Topic mixtures versus dynamic cache models. IEEE Transactions on Speech and Audio Processing, 1999.

Frederick Jelinek, Bernard Merialdo, Salim Roukos, and Martin Strauss. A dynamic language model for speech recognition. In HLT, 1991.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pp. 190-198, 2015.

Lalit R Bahl, Frederick Jelinek, and Robert L Mercer. A maximum likelihood approach to continuous speech recognition. PAMI, 1983.

Jerome R Bellegarda. Exploiting latent semantic information in statistical language modeling. Proceedings of the IEEE, 2000.

Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, and Herve Jegou. Efficient softmax approximation for GPUs. arXiv preprint arXiv:1609.04309, 2016.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828-1836, 2015.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Sanjeev Khudanpur and Jun Wu. Maximum entropy techniques for exploiting syntactic, semantic and collocational dependencies in language modeling. Computer Speech & Language, 2000.

Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling. In ICASSP, 1995.

Roland Kuhn. Speech recognition and the frequency of recently used words: A modified Markov model for natural language. In Proceedings of the 12th Conference on Computational Linguistics - Volume 1, 1988.

Roland Kuhn and Renato De Mori. A cache-based natural language model for speech recognition. PAMI, 1990.

Raymond Lau, Ronald Rosenfeld, and Salim Roukos. Trigger-based language models: A maximum entropy approach. In ICASSP, 1993.

Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 1993.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.

Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc'Aurelio Ranzato. Learning longer memory in recurrent neural networks. arXiv preprint arXiv:1412.7753, 2014.

Denis Paperno, German Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031, 2016.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, 2015.

Paul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 1990.

Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutnik, and Jurgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.

Slava M Katz. Estimation of probabilities from sparse data for the language model component of a speech recognizer. ICASSP, 1987.

Reinhard Kneser and Volker Steinbiss.
On the dynamic adaptation of stochastic language models. In ICASSP, 1993.

Tomas Mikolov, Anoop Deoras, Stefan Kombrink, Lukas Burget, and Jan Cernocky. Empirical evaluation and combination of advanced language modeling techniques. In INTERSPEECH, 2011.

Andreas Stolcke, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Elizabeth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 2000.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In NIPS, 2015.

Ronald J Williams and Jing Peng. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 1990.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014."}]
ryHlUtqge [{"section_index": "0", "section_name": "GENERALIZING SKILLS WITH SEMI-SUPERVISED REINFORCEMENT LEARNING", "section_text": "Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine
+ Berkeley AI Research (BAIR), University of California, Berkeley; * OpenAI
{cbfinn, tianhe.yu, justinfu, pabbeel, svlevine}@berkeley.edu

"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Deep reinforcement learning (RL) can acquire complex behaviors from low-level inputs, such as images. However, real-world applications of such methods require generalizing to the vast variability of the real world. Deep networks are known to achieve remarkable generalization when provided with massive amounts of labeled data, but can we provide this breadth of experience to an RL agent, such as a robot? The robot might continuously learn as it explores the world around it, even while it is deployed and performing useful tasks. However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Such reward functions are often hard to measure in the real world, especially in domains such as robotics and dialog systems, where the reward could depend on the unknown positions of objects or the emotional state of the user. On the other hand, it is often quite practical to provide the agent with reward functions in a limited set of situations, such as when a human supervisor is present or in a controlled laboratory setting. Can we make use of this limited supervision, and still benefit from the breadth of experience an agent might collect in the unstructured real world? In this paper, we formalize this problem setting as semi-supervised reinforcement learning (SSRL), where the reward function can only be evaluated in a set of "labeled" MDPs, and the agent must generalize its behavior to the wide range of states it might encounter in a set of "unlabeled" MDPs, by using experience from both settings. Our proposed method infers the task objective in the unlabeled MDPs through an algorithm that resembles inverse RL, using the agent's own prior experience in the labeled MDPs as a kind of demonstration of optimal behavior. We evaluate our method on challenging, continuous control tasks that require control directly from images, and show that our approach can improve the generalization of a learned deep neural network policy by using experience for which no reward function is available. We also show that our method outperforms direct supervised learning of the reward.

"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Reinforcement learning (RL) provides a powerful framework for learning behavior from high-level goals. RL has been combined with deep networks to learn policies for problems such as Atari games (Mnih et al., 2015), simple Minecraft tasks (Oh et al., 2016), and simulated locomotion (Schulman et al., 2015). To apply reinforcement learning (RL) to real-world scenarios, however, the learned policy must be able to handle the variability of the real world and generalize to scenarios that it has not seen previously. In many such domains, such as robotics and dialog systems, the variability of the real world poses a significant challenge. Methods for training deep, flexible models combined with massive amounts of labeled data are known to enable wide generalization for supervised learning tasks (Russakovsky et al., 2015).
Lifelong learning aims to address this data challenge in the context of RL by enabling the agent to continuously learn as it collects new experiences "on the job," directly in the real world (Thrun & Mitchell, 1995). However, this learning requires access to a reward function, to tell the agent whether it is succeeding or failing at its task. Although the reward is a high-level supervision signal that is in principle easier to provide than detailed labels, in practice it often depends on information that is extrinsic to the agent and is therefore difficult to measure in the real world. For example, in robotics, the reward may depend on the poses of all of the objects in the environment, and in dialog systems, the reward may depend on the happiness of the user. This reward supervision is practical to measure in a small set of instrumented training scenarios, in laboratory settings, or under the guidance of a human teacher, but quickly becomes impractical to provide continuously to a lifelong learning system, when the agent is deployed in varied and diverse real-world settings.

            training                      evaluation
RL          M ∈ L                         M ∈ U
transfer    M ∈ L; M ∈ U with reward      M ∈ U
SSRL        M ∈ L; M ∈ U no reward        M ∈ U

Figure 1: We consider the problem of semi-supervised reinforcement learning, where a reward function can be evaluated in some small set of labeled MDPs M ∈ L, but the resulting policy must be successful on a larger set of unlabeled MDPs M ∈ U for which the reward function is not known. In standard RL, the policy is trained only on the labeled MDPs, while in transfer learning, the policy is fine-tuned using a known reward function in the unlabeled MDP set. Semi-supervised RL is distinct in that it involves using experience from the unlabeled set without access to the reward function.

Conceptually, we might imagine that this challenge should not exist, since reinforcement learning should, at least in principle, be able to handle high-level delayed rewards that can always be measured. For example, a human or animal might have their reward encode some higher-level intrinsic goals such as survival, reproduction, or the absence of pain and hunger. However, most RL methods do not operate at the level of such extremely sparse and high-level rewards, and most of the successes of RL have been in domains with natural sources of detailed external feedback, such as the score in a video game. In most real-world scenarios, such a natural and convenient score typically does not exist. It therefore seems that intelligent agents in the real world should be able to cope with only partial reward supervision, and that algorithms that enable this are of both practical and conceptual value, since they bring us closer to real-world lifelong reinforcement learning, and can help us understand adaptive intelligent systems that can learn even under limited supervisory feedback. So how can an agent continue to learn in the real world without access to a reward function?

In this work, we formalize this as the problem of semi-supervised reinforcement learning, where the agent must perform RL when the reward function is known in some settings, but cannot be evaluated in others.
As illustrated in Figure 1, we assume that the agent can first learn in a small range of "labeled" scenarios, where the reward is available, and then experiences a wider range of "unlabeled" scenarios where it must learn to act successfully, akin to lifelong learning in the real world. This problem statement can be viewed as being analogous to the problem of semi-supervised learning, but with the additional complexity of sequential decision making. Standard approaches to RL simply learn a policy in the scenarios where a reward function is available, and hope that it generalizes to new unseen conditions. However, it should be possible to leverage unlabeled experiences to find a more general policy, and to achieve continuous improvement from lifelong real-world experience.

Our main contribution is to propose and evaluate the first algorithm for performing semi-supervised reinforcement learning, which we call semi-supervised skill generalization (S3G). Our approach can leverage unlabeled experience to learn a policy that can succeed in a wider variety of scenarios than a policy trained only with labeled experiences. In our method, we train an RL policy in settings where a reward function is available, and then run an algorithm that resembles inverse reinforcement learning, to simultaneously learn a reward and a more general policy in the wider range of unlabeled settings. Unlike traditional applications of inverse RL algorithms, we use roll-outs from the RL policy in the labeled conditions as demonstrations, rather than a human expert, making our method completely autonomous. Although our approach is compatible with any choice of reinforcement learning and inverse reinforcement learning algorithm, we use the guided cost learning method in our experimental evaluation, which allows us to evaluate on high-dimensional, continuous robotic manipulation tasks with unknown dynamics while using a relatively modest number of samples (Finn et al., 2016). We compare our method to two baselines: (a) a policy trained with RL in settings where reward labels are available (as is standard), and (b) a policy trained in the unlabeled settings using a reward function trained to regress to available reward labels. We find that S3G recovers a policy that is substantially more effective than the prior, standard approach in a wide variety of settings, without using any additional labeled information. We also find that, by using an inverse RL objective, our method achieves superior generalization to the reward regression approach.

"}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Utilizing both labeled and unlabeled data is a well-known technique that can improve learning performance when data is limited (Zhu & Goldberg, 2009). These techniques are especially important in domains where large, supervised datasets are difficult to acquire, but unlabeled data is plentiful. This problem is generally known as semi-supervised learning. Methods for solving this problem often include propagating known labels to the unlabeled examples (Zhu & Ghahramani, 2002) and using regularizing side information (Szummer & Jaakkola, 2002) such as the structure of the data. Semi-supervised learning has been performed with deep models, either by blending unsupervised and supervised objectives (Rasmus et al., 2016; Zhang et al., 2016) or by using generative models with the labels treated as missing data (Kingma et al., 2014). Semi-supervised learning is particularly relevant in robotics and control, where collecting labeled experience on real hardware is expensive. However, while semi-supervised learning has been successful in domains such as object tracking and detection (Teichman & Thrun, 2007), applications to action and control have not been applied
to the objective of the task itself.

The generalization capabilities of policies learned through RL (and deep RL) have been limited, as pointed out by Oh et al. (2016). That is, typically the settings under which the agent is tested do not vary from those under which it was trained. We develop a method for generalizing skills to a wider range of settings using unlabeled experience. A related but orthogonal problem is transfer learning (Taylor & Stone, 2009; Barrett et al., 2010), which attempts to use prior experience in one domain to improve training performance in another. Transfer learning has been applied to RL domains for transferring information across environments (Mordatch et al., 2016; Tzeng et al., 2016), robots (Devin et al., 2016), and tasks (Konidaris & Barto, 2006; Stolle & Atkeson, 2007; Dragan et al., 2011; Parisotto et al., 2016; Rusu et al., 2016). The goal of these approaches is typically to utilize experience in a source domain to learn faster or better in the target domain. Unlike most transfer learning scenarios, we assume that supervision cannot be obtained in many scenarios. We are also not concerned with large, systematic domain shift: we assume that the labeled and unlabeled settings come from the same underlying distribution. Note, however, that the method that we develop could be used for transfer learning problems where the state and reward are consistent across domains.

To the best of our knowledge, this paper is the first to provide a practical and tractable algorithm for semi-supervised RL with large, expressive function approximators, and to illustrate that such learning actually improves the generalization of the learned policy. However, the idea of semi-supervised reinforcement learning procedures has been previously discussed as a compelling research direction by Christiano (2016) and Amodei et al. (2016).

To accomplish semi-supervised reinforcement learning, we propose a method that resembles an inverse reinforcement learning (IRL) algorithm, in that it imputes the reward function in the unlabeled settings by learning from the successful trials in the labeled settings. IRL was first introduced by Ng et al. (2000) as the problem of learning reward functions from expert, human demonstrations, typically with the end goal of learning a policy that can succeed from states that are not in the set of demonstrations (Abbeel & Ng, 2004). We use IRL to infer the reward function underlying a policy previously learned in a small set of labeled scenarios, rather than using expert demonstrations. We build upon prior methods, including guided cost learning, which propose to learn a cost and a policy simultaneously (Finn et al., 2016; Ho et al., 2016). Note that the problem that we are considering is distinct from semi-supervised inverse reinforcement learning (Audiffren et al., 2015), which makes use of expert and non-expert trajectories for learning. We require a reward function in some instances, rather than expert demonstrations.

"}, {"section_index": "4", "section_name": "SEMI-SUPERVISED REINFORCEMENT LEARNING", "section_text": "We first define semi-supervised reinforcement learning. We would like the problem definition to be able to capture situations where supervision, via the reward function, is only available in a small set
covers a policy that is substantially more effective than the prior, standard approach in a wide variety of settings, without using any additional labeled information. We also find that, by using an inverse RL objective, our method achieves superior generalization to the reward regression approach.\nof labeled Markov decision processes (MDPs), but where we want our agent to be able to continue. to learn to perform successfully in a much larger set of unlabeled MDPs, where reward labels are. unavailable. For example, if the task corresponds to an autonomous car learning to drive, the labele MDPs might correspond to a range of closed courses, while the unlabeled MDPs might involve. driving on real-world highways and city streets. We use the terms labeled and unlabeled in analogy. to semi-supervised learning, but note a reward observation is not as directly informative as a label..\nFormally, we consider a distribution p(M) over undiscounted finite-horizon MDPs, each defined as a 4-tuple M; = (S,A, T, R) over states, actions, transition dynamics (which are generally unknown). and reward. The states and actions may be continuous or discrete, and the reward function R is assumed to the same across MDPs in the distribution p(M). Let L and U denote two sets of MDPs sampled from the distribution p(M). Experience may be collected in both sets of MDPs, but the reward can only be evaluated in the set of labeled MDPs L. The objective is to find a policy * that maximizes expected reward in the distribution over MDPs:\nH * = argmax R(st,at M t=0\nwhere H denotes the horizon. Note that the notion of finding a policy that succeeds on a distribution of MDPs is very natural in many real-world reinforcement learning problems. For example, in the earlier autonomous driving example, our goal is not to find a policy that succeeds on one particular road or in one particular city, but on all roads that the car might encounter. Note that the proble can also be formalized in terms of a single large MDP with a large diversity of initial states, but viewing the expectation as being over a distribution of MDPs provides a more natural analogue with semi-supervised learning. as we discuss below.\nIn standard semi-supervised learning, it is assumed that the data distribution is the same across both labeled and unlabeled examples, and the amount of labeled data is limited. Similarly, semi- supervised reinforcement learning assumes that the labeled and unlabeled MDPs are sampled from the same distribution. In SSRL, however, it is the set of labeled MDPs that is limited, whereas ac quiring large amounts of experience within the set of labeled MDPs is permissible, though unlimited experience in the labeled MDPs is not sufficient on its own for good performance on the entire MDP distribution. This is motivated by real-world lifelong learning, where an agent (e.g. a robot) may be initially trained with detailed reward information in a small set of scenarios (e.g. with a human teacher), and is then deployed into a much larger set of scenarios, without reward labels. One natural question is how much variation can exist in the distribution over MDPs. We empirically answer this question in our experimental evaluation in Section5\nThe standard paradigm in reinforcement learning is to learn a policy in the labeled MDPs and apply i. directly to new MDPs from the same distribution, hoping that the original policy will generalize (Ol. et al.|[2016). 
An alternative approach is to train a reward function with supervised learning to regress from the agent's observations to the reward labels, and then use this reward function for learning in the unlabeled settings. In our experiments, we find that this approach is often more effective because, unlike the policy, the reward function is decoupled from the rest of the MDP, and can thus generalize more readily. The agent can then continue to learn from unlabeled experiences using the learned reward function. However, because the state distributions in the two sets of MDPs may be different, a function approximator trained on the reward function in the labeled MDPs may not necessarily generalize well to the unlabeled ones, due to the domain shift. A more effective solution would be to incorporate the unlabeled experience sampled from U when learning the reward. Unlike typical semi-supervised learning, the goal is not to learn the reward labels per se, but to learn a policy that optimizes the reward. By incorporating both labeled and unlabeled experience, we can develop an algorithm that alternates between inferring the reward function and updating the policy, which effectively provides a shaping, or curriculum, for learning to perform well in the unlabeled settings. In the following section, we discuss our proposed algorithm in detail.

"}, {"section_index": "5", "section_name": "SEMI-SUPERVISED SKILL GENERALIZATION", "section_text": "We now present our approach for performing semi-supervised reinforcement learning for generalizing previously learned skills. As discussed previously, our goal is to learn a policy that maximizes expected reward in M ∈ U, using both unlabeled experience in U and labeled experience in L. We will use the formalism adopted in the previous section; however, note that performing RL in a set of MDPs can equivalently be viewed as performing RL in a single MDP with a large diversity of initial conditions.

In order to perform semi-supervised reinforcement learning, we use the framework of maximum entropy control (Ziebart, 2010; Kappen et al., 2012), also called linearly-solvable MDPs (Dvijotham & Todorov, 2010). This framework is a generalization of the standard reinforcement learning formulation, where instead of optimizing the expected reward, we optimize an entropy-regularized objective of the form

\pi_{RL} = \arg\max_\pi \; \mathbb{E}_{\pi,\, M \in L}\left[\sum_{t=0}^{H} R(s_t, a_t)\right] + \mathcal{H}(\pi).

To see that this is a generalization of the standard RL setting, observe that, as the magnitude of the reward increases, the relative weight on the entropy regularizer decreases, so the classic RL objective can be recovered by putting a temperature on the reward, and taking the limit as the temperature goes to zero. For finite rewards, this objective encourages policies to take random actions when all options have roughly equal value. Under the optimal policy \pi_{RL}, samples with the highest reward R have the highest likelihood, and the likelihood decreases exponentially with decrease in reward. In our work, this framework helps to produce policies in the labeled MDPs that are diverse, and therefore better suited for inferring reward functions that transfer effectively to the unlabeled MDPs.

After training \pi_{RL}, we generate a set of samples from \pi_{RL} in L, which we denote as D_{\pi_{RL}}. The objective of S3G is to use D_{\pi_{RL}} to find a policy that maximizes expected reward in U:

\max_\theta \; \mathbb{E}_{\pi_\theta,\, M \in U}\left[\sum_{t=0}^{T} R(s_t, a_t)\right] + \mathcal{H}(\pi_\theta),

where the reward R is not available. By using the agent's prior experience D_{\pi_{RL}}, as well as unlabeled experience in U, we aim to learn a well-shaped reward function to facilitate learning in U. To do so, S3G simultaneously learns a reward function R_\phi with parameters \phi and optimizes a policy \pi_\theta with parameters \theta in the unlabeled MDPs U. This consists of iteratively taking samples D_\theta from the current policy \pi_\theta in U, updating the reward R_\phi, and updating the policy using reward values imputed using R_\phi. At the end of the procedure, we end up with a policy \pi_\theta optimized in U.
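The temperature argument above can be checked numerically; the one-step NumPy sketch below is our illustration (not part of the method) and shows the soft-optimal action distribution approaching the greedy argmax as the reward is scaled up, i.e., as the temperature goes to zero.

```python
import numpy as np

def maxent_action_dist(rewards, temperature=1.0):
    """Soft-optimal action distribution pi(a) prop. to exp(R(a) / temperature) for a
    one-step problem: near-uniform over near-ties at high temperature, near-greedy
    as temperature -> 0 (equivalently, as the reward magnitude grows)."""
    z = np.asarray(rewards, dtype=float) / temperature
    z -= z.max()                 # numerical stability
    p = np.exp(z)
    return p / p.sum()

print(maxent_action_dist([1.0, 1.1, 0.2], temperature=1.0))   # spread over near-ties
print(maxent_action_dist([1.0, 1.1, 0.2], temperature=0.01))  # ~greedy argmax
```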
As shown in prior work, this procedure corresponds to an inverse reinforcement learning algorithm that converges to a policy that matches the performance observed in D_{\pi_{RL}} (Finn et al., 2016). We next go over the objectives used for updating the reward and the policy.

Reward update: Under the entropy-regularized objective above, the samples in D_{\pi_{RL}} are distributed according to the maximum entropy distribution (Ziebart, 2010)

\pi_{RL}(\tau) = \frac{1}{Z} \exp(R(\tau)),

where \tau denotes a single trajectory sample \{s_0, a_0, s_1, a_1, \dots, s_T\} and R(\tau) = \sum_t R(s_t, a_t). Thus, the objective of the reward optimization phase is to maximize the log likelihood of the agent's prior experience D_{\pi_{RL}} under this exponential model. The computational challenge here is to estimate the partition function Z, which is intractable to compute in high-dimensional spaces. We thus use importance sampling, using samples to estimate the partition function Z as follows:

\mathcal{L}(\phi) = \sum_{\tau \sim D_{\pi_{RL}}} \left[ R_\phi(\tau) - \log Z \right] \approx \sum_{\tau \sim D_{\pi_{RL}}} R_\phi(\tau) - \log \sum_{\tau \sim D_{samp}} \frac{\exp(R_\phi(\tau))}{q(\tau)},

where D_{samp} is the set of samples used for estimating the partition function Z and q(\tau) is the probability of sampling \tau under the policy it was generated from. Note that the distribution of this set of samples is crucial for effectively estimating Z. The optimal distribution for importance sampling is the one proportional to the integrand, q(\tau) \propto \exp(R_\phi(\tau)). Conveniently, this is also the optimal behavior when the reward function is fully optimized such that R_\phi \approx R. Thus, we adaptively update the policy to minimize the KL-divergence between its own distribution and the distribution induced by the current reward, R_\phi(\tau), and use samples from the policy to estimate the partition function. Since the importance sampling estimate of Z will be high variance at the beginning of training, when fewer policy samples have been collected, we also use the samples from the RL policy \pi_{RL}. Thus we set D_{samp} = D_\theta \cup D_{\pi_{RL}}.

Algorithm 1 Semi-Supervised Skill Generalization
0: inputs: set of unlabeled MDPs U; reward R for labeled MDPs M ∈ L
1: Optimize \pi_{RL} to maximize R in M ∈ L
2: Generate samples D_{\pi_{RL}} from \pi_{RL} in M ∈ L
3: Initialize D_{samp} ← D_{\pi_{RL}}
4: for iteration i = 1 to I do
5:   Run \pi_\theta in M ∈ U to generate samples D_\theta
6:   Append samples D_{samp} ← D_{samp} ∪ D_\theta
7:   Update reward R_\phi according to Equation 3 using D_{\pi_{RL}} and D_{samp}
8:   Update policy \pi_\theta according to Equation 4 using R_\phi and D_\theta
9: end for
10: return generalized policy \pi_\theta

We parameterize the reward using a neural network, and update it using mini-batch stochastic gradient descent, by backpropagating the gradient of Equation 3 to the parameters of the reward.
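As a concrete illustration of the reward objective, the NumPy sketch below estimates Equation 3 with the importance-sampled partition function, using a numerically stable log-sum-exp. R_phi, the trajectory containers, and the log-probabilities log q(tau) are assumed inputs, and the gradient step through the reward network is omitted.

```python
import numpy as np

def reward_loss(R_phi, demo_trajs, samp_trajs, samp_logq):
    """Monte Carlo estimate of Equation 3 (negated for minimization). R_phi maps a
    trajectory to its total predicted reward sum_t R_phi(s_t, a_t); samp_logq[i] is
    log q(tau_i), the log probability of sample i under the policy that produced it."""
    demo_term = np.mean([R_phi(tau) for tau in demo_trajs])
    # log Z ~ log mean_i exp(R_phi(tau_i) - log q(tau_i)), computed stably
    log_w = np.array([R_phi(tau) for tau in samp_trajs]) - np.asarray(samp_logq)
    m = log_w.max()
    log_Z = m + np.log(np.mean(np.exp(log_w - m)))
    return -(demo_term - log_Z)   # minimize this to maximize the log likelihood
```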
Policy update: Our goal with the policy is two-fold. First, we of course need a policy that succeeds in MDPs M ∈ U. But since the reward in these MDPs is unavailable, the policy must also serve to generate samples for more accurately estimating the partition function in Equation 2, so that the reward update step can improve the accuracy of the estimated reward function. The policy optimization objective to achieve both of these is to maximize the expected reward R_\phi, augmented with an entropy term as before:

\mathcal{L}(\theta) = \mathbb{E}_{\pi_\theta,\, M \in U}\left[\sum_{t=0}^{T} R_\phi(s_t, a_t)\right] + \mathcal{H}(\pi_\theta).

While we could in principle use any policy optimization method in this step, our prototype uses mirror descent guided policy search (MDGPS), a sample-efficient policy optimization method suitable for training complex neural network policies that has been validated on real-world physical robots (Montgomery & Levine, 2016; Montgomery et al., 2016). We interleave reward function updates using the objective in Equation 3 within the policy optimization method. We describe the policy optimization procedure in detail in Appendix A.

The full algorithm is presented in Algorithm 1. Note that this iterative procedure of comparing the current policy to the optimal behavior provides a form of shaping, or curriculum, to learning. Our method is structured similarly to the recently proposed guided cost learning method (Finn et al., 2016), and inherits its convergence properties and theoretical foundations. Guided cost learning is an inverse RL algorithm that interleaves policy learning and reward learning directly in the target domain, which in our case is the unlabeled MDPs. Unlike guided cost learning, however, the cost (or reward) is not inferred from expert human-provided demonstrations, but from the agent's own prior experience in the labeled MDPs.
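For reference, the outer loop of Algorithm 1 can be summarized in a few lines of Python. The sampling and update interfaces below are hypothetical stand-ins for the RL, reward (Equation 3), and policy (Equation 4) steps, not the actual implementation.

```python
def s3g(unlabeled_mdps, pi_rl, update_reward, update_policy, n_iters=10):
    """Outer loop of Algorithm 1 with assumed interfaces: pi_rl.sample(...) collects
    roll-outs, update_reward performs the Equation 3 step, update_policy the
    Equation 4 step."""
    demos = pi_rl.sample(labeled=True)            # D_{pi_RL}, collected in M in L
    d_samp = list(demos)                          # initialize D_samp <- D_{pi_RL}
    policy, reward = pi_rl.clone(), None
    for _ in range(n_iters):
        d_theta = policy.sample(unlabeled_mdps)   # roll-outs of pi_theta in M in U
        d_samp.extend(d_theta)                    # append to D_samp
        reward = update_reward(demos, d_samp)     # reward step (Equation 3)
        policy = update_policy(policy, reward, d_theta)  # policy step (Equation 4)
    return policy                                 # generalized policy pi_theta
```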
"}, {"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "Since the aim of S3G is to improve the generalization performance of a learned policy by leveraging data from the unlabeled MDPs, our experiments focus on domains where generalization is critical for success. Despite the focus on generalization in many machine learning problems, the generalization capabilities of policies trained with RL have frequently been overlooked. For example, in recent RL benchmarks such as the Arcade Learning Environment (Bellemare et al., 2012) and OpenAI Gym (Brockman et al., 2016), the training conditions perfectly match the testing conditions. Thus, we define our own set of simulated control tasks for this paper, explicitly considering the types of variation that a robot might encounter in the real world. Through our evaluation, we seek to measure how well semi-supervised methods can leverage unlabeled experiences to improve the generalization of a deep neural network policy learned only in labeled scenarios.

Code for reproducing the simulated experiments is available online. Videos of the learned policies can be viewed at sites.google.com/site/semisupervisedrl.

Figure 2: Illustrations of the tasks (obstacle navigation, reacher with vision, half-cheetah jump). For the reacher with vision, the range of the target for the labeled MDPs is shown with a red dotted line, and for the unlabeled MDPs with a green dashed line. For the obstacle and cheetah tasks, we show the highest obstacle height.

Each of the tasks is modeled using the MuJoCo simulator, and involves continuous state and action spaces with unknown dynamics. The task difficulty ranges from simple, low-dimensional problems to tasks with complex dynamics and high-dimensional observations. In each experiment, the reward function is available in some settings but not others, and the unlabeled MDPs generally involve a wider variety of conditions. We visualize the tasks in Figure 2 and describe them in detail below:

obstacle navigation / obstacle height: The goal of this task is to navigate a point robot around an obstacle to a goal position in 2D. The observation is the robot's position and velocity, and does not include the height of the obstacle. The height of the obstacle is 0.2 in the labeled MDP, and 0.5 in the unlabeled MDP.

2-link reacher / mass: This task involves moving the end-effector of a two-link reacher to a specified goal position. The observation is the robot's joint angles, end-effector pose, and their time derivatives. In the labeled MDPs, the mass of the arm varies between 7 × 10^{-9} and 7 × 10, whereas the unlabeled MDPs involve a range of 7 × 10^{-9} to 7 × 10^3.

2-link reacher with vision / target position: The task objective is the same as the 2-link reacher, except, in this task, the MDPs involve a wide 2D range of target positions, shown in Figure 2. Instead of passing in the coordinates of the target position, the policy and the reward function receive a raw 64 × 80 RGB image of the environment at the first time step.

half-cheetah jump / wall height: In this task, the goal is for a simulated 6-DOF cheetah-like robot to jump over a wall, with 10% gravity. The observation is the robot's joint angles, global pose, and their velocities, for a total dimension of 20. The unlabeled MDP involves jumping over a 0.5 meter wall, compared to the labeled MDP with a 0.2 meter wall. Success is measured based on whether or not the cheetah fully clears the wall. Policies for reward regression, S3G, and the oracle were initialized from the RL policy.

In all tasks, the continuous action vector corresponds to the torques or forces applied to each of the robot's joints. For the first three tasks, reaching the goal position within 5 cm is considered a success. For the non-visual tasks, the policy was represented using a neural network with 2 hidden layers of 40 units each. The vision task used 3 convolutional layers with 15 filters of size 5 × 5 each, followed by the spatial feature point transformation proposed by Levine et al. (2016), and lastly 3 fully-connected layers of 20 units each. The reward function architecture mirrored the architecture of the policy, but using a quadratic norm on the output, as done by Finn et al. (2016).

"}, {"section_index": "7", "section_name": "5.2 EVALUATION", "section_text": "In our evaluation, we compare the performance of S3G to that of (i) the RL policy \pi_{RL}, trained only in the labeled MDPs, (ii) a policy learned using a reward function fitted with supervised learning, and (iii) an oracle policy which can access the true reward function in all scenarios. The architecture of the reward function fitted with supervised learning is the same as that used in S3G.

To extensively test the generalization capabilities of the policies learned with each method, we measure performance on a wide range of settings that is a superset of the unlabeled and labeled MDPs, as indicated in Figure 3. We report the success rate of policies learned with each method in Table 1, and visualize the generalization performance in the 2-link reacher, half-cheetah, and obstacle tasks in Figure 3. The sample complexity of each method is reported in Appendix B.

Table 1: The success rate of each method with respect to generalization. The table compares the standard RL policy (which is trained only on the labeled MDPs) with both the supervised regression method and S3G. Both of the latter use the unlabeled regime for additional training, though only S3G also uses the unlabeled data to improve the learned reward function.

                            RL policy   reward regression (ours)   S3G (ours)   oracle
obstacle                    65%         29%                        79%          36%
2-link reacher              75%         60%                        98%          80%
2-link reacher with vision  69%         85%                        92%          100%
half-cheetah                56%         73%                        79%          86%
Figure 3: Generalization capability of the obstacle, 2-link reacher, and half-cheetah tasks as a function of the task variation (wall height for the obstacle and half-cheetah tasks, log mass for the reacher). Performance for these tasks is averaged over 3 random seeds.

In all four tasks, the RL policy \pi_{RL} generalizes worse than S3G, which demonstrates that, by using unlabeled experience, we can indeed improve generalization to different masses, target positions, and obstacle sizes. In the obstacle and both reacher tasks, S3G also outperforms reward regression, suggesting that it is also useful to use unlabeled experience to learn the reward.

In the obstacle task, the results demonstrate that the reward functions learned using S3G actually produce better generalization in some cases than learning on both the labeled and unlabeled MDPs with full knowledge of the true reward function. While this may at first seem counterintuitive, it agrees with the observation in prior work (Guo et al., 2013) that the true reward function is not always the best one when learning with limited samples, computational power, or representational capacity (i.e., because it is not sufficiently shaped). S3G also outperforms the oracle and reward regression in the 2-link reacher task, indicating that the learned reward shaping is also beneficial in that task.

For the vision task, the visual features learned via RL in the labeled MDPs were used to initialize the vision layers of the reward and policy. We trained the vision-based reacher with S3G with both end-to-end fine-tuning of the visual features and with the visual features frozen and only the fully-connected layers trained on the unlabeled MDPs. We found performance to be similar in both cases, suggesting that the visual features learned with RL were good enough, though fine-tuning the features end-to-end with the inverse RL objective did not hurt the performance.

We presented the first method for semi-supervised reinforcement learning, motivated by real-world lifelong learning. By inferring the reward in settings where one is not available, S3G can improve the generalization of a learned neural network policy trained only in the "labeled" settings. Additionally, we find that, compared to using supervised regression to reward labels, we can achieve higher performance using an inverse RL objective for inferring the reward underlying the agent's prior experience. Interestingly, this does not directly make use of the reward labels when inferring the reward of states in the unlabeled MDPs, and our results on the obstacle navigation task in fact suggest that the rewards learned with S3G exhibit better shaping.

As we discussed previously, the reward and policy optimization methods that we build on in this work are efficient enough to learn complex tasks with hundreds of trials, making them well suited for learning on physical systems such as robots. Indeed, previous work has evaluated similar methods on real physical systems, in the context of inverse RL (Finn et al., 2016) and vision-based policy learning (Levine et al., 2016). Thus, it is likely feasible to apply this method for semi-supervised reinforcement learning on a real robotic system. Applying S3G on physical systems has the potential
to enable real-world lifelong learning, where an agent is initialized using a moderate amount of labeled experience in a constrained setting, such as a robot learning a skill for the first time in the lab, and is then allowed to explore the real world while continuously improving its capabilities without additional supervision. This type of continuous semi-supervised reinforcement learning has the potential to remove the traditional distinction between a training and a test phase for reinforcement learning agents, providing us with autonomous systems that continue to get better with use.

"}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Anca Dragan for insightful discussions, and Aviv Tamar and Roberto Calandra for helpful feedback on the paper. Funding was provided by the NSF GRFP, the DARPA Simplex program, and Berkeley DeepDrive.

Pieter Abbeel and Andrew Ng. Apprenticeship learning via inverse reinforcement learning. In International Conference on Machine Learning (ICML), 2004.

Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mane. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.

Samuel Barrett, Matt E. Taylor, and Peter Stone. Transfer learning for reinforcement learning on a physical robot. In Ninth International Conference on Autonomous Agents and Multiagent Systems - Adaptive Learning Agents Workshop (ALA), 2010.

Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.

Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.

Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, and Sergey Levine. Learning modular neural network policies for multi-task and multi-robot transfer. arXiv preprint arXiv:1609.07088, 2016.

Anca Dragan, Geoffrey Gordon, and Siddhartha Srinivasa. Learning from experience in manipulation planning: Setting the right goals. International Symposium on Experimental Robotics (ISER), 2011.

Krishnamurthy Dvijotham and Emanuel Todorov. Inverse optimal control with linearly-solvable MDPs. In International Conference on Machine Learning (ICML), 2010.

Jonathan Ho, Jayesh K. Gupta, and Stefano Ermon. Model-free imitation learning with policy optimization. International Conference on Machine Learning (ICML), 2016.

Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Neural Information Processing Systems (NIPS), 2014.

Julien Audiffren, Michal Valko, Alessandro Lazaric, and Mohammad Ghavamzadeh. Maximum entropy semi-supervised inverse reinforcement learning. International Joint Conference on Artificial Intelligence (IJCAI), 2015.

Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. International Conference on Machine Learning (ICML), 2016.

Xiaoxiao Guo, Satinder Singh, and Richard L Lewis. Reward mapping for transfer in long-lived agents. In Neural Information Processing Systems (NIPS), 2013.

George Konidaris and Andrew Barto. Autonomous shaping: Knowledge transfer in reinforcement learning. International Conference on Machine Learning (ICML), 2006.

William Montgomery and Sergey Levine. Guided policy search as approximate mirror descent.
Advances in Neural Information Processing Systems (NIPS), 2016.

Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception and action in Minecraft. International Conference on Machine Learning (ICML), 2016.

Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. International Conference on Learning Representations (ICLR), 2016.

John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust region policy optimization. International Conference on Machine Learning (ICML), 2015.

Martin Stolle and Christopher G. Atkeson. Knowledge transfer using local features. IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL), 2007.

Alex Teichman and Sebastian Thrun. Tracking-based semi-supervised learning. Robotics: Science and Systems (RSS), 2007.

Sebastian Thrun and Tom M Mitchell. Lifelong robot learning. Springer Berlin Heidelberg, 1995.

Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Pieter Abbeel, Sergey Levine, Kate Saenko, and Trevor Darrell. Adapting deep visuomotor representations with weak pairwise constraints. Workshop on the Algorithmic Foundations of Robotics (WAFR), 2016.

Xiaojin Zhu and Andrew B Goldberg. Introduction to semi-supervised learning. Morgan & Claypool, 2009.

Matthew E. Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research (JMLR), 2009.

"}, {"section_index": "9", "section_name": "MIRROR DESCENT GUIDED POLICY SEARCH", "section_text": "To optimize policies with S3G, we chose to use mirror descent guided policy search (MDGPS) for its superior sample efficiency over other policy optimization methods. MDGPS belongs to a class of guided policy search methods, which simplify policy search by decomposing the problem into two phases: a) a trajectory-centric RL phase (C-phase) and b) a supervised learning phase (S-phase). During the C-phase, a trajectory-centric RL method is used to train "local" controllers for each of M initial positions. In the S-phase, a global policy \pi_\theta(a|s) is trained using supervised learning to match the output of each of the local policies.

MDGPS can be interpreted as an approximate variant of mirror descent on the expected cost

J(\theta) = \sum_{t=1}^{T} \mathbb{E}_{\pi_\theta(s_t, a_t)}[-R(s_t, a_t)]

under the policy's trajectory distribution, where \pi_\theta(s_t, a_t) denotes the marginal of \pi_\theta(\tau) = p(s_1) \prod_{t=1}^{T} p(s_{t+1} \mid s_t, a_t)\, \pi_\theta(a_t \mid s_t), and \tau = \{s_1, a_1, \dots, s_T, a_T\} denotes the trajectory. In the C-phase, we learn new local policies for each initial position, and in the S-phase we project the local policies down to a single global policy \pi_\theta, using KL divergence as the distance metric.

To produce local policies, we make use of the iterative linear quadratic regulator (iLQR) algorithm to train time-varying linear-Gaussian controllers. iLQR makes up for its weak representational power by being sample-efficient under regimes where it is capable of learning. Usage of iLQR requires a twice-differentiable cost function and linearized dynamics.

In order to fit a dynamics model, we use the recent samples to fit a Gaussian mixture model (GMM) on (s_t, a_t, s_{t+1}) tuples. We then use linear regression to fit time-varying linear dynamics of the form s_{t+1} = F_t s_t + f_t on local policy samples from the most recent iteration, using the clusters from the GMM as a normal-inverse-Wishart prior.

During the C-step, for each initial condition m, we optimize an entropy-augmented objective, constrained against the global policy:

\min_q \; \sum_{t=0}^{T} \mathbb{E}_q[-R(s_t, a_t)] - \mathcal{H}(q) \quad \text{s.t.} \quad D_{KL}(q \,\|\, \pi_\theta) \le \epsilon,

where R(s_t, a_t) is a twice-differentiable objective, such as the L2 distance from a target state. This optimization results in a local time-varying linear-Gaussian controller q_m(a_t \mid s_t) = \mathcal{N}(K_{m,t} s_t + k_{m,t}, C_{m,t}), which is executed to obtain supervised learning examples for the S-step.
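As a sketch of the dynamics-fitting step described above, the following NumPy snippet fits time-varying linear dynamics s_{t+1} ≈ F_t [s_t; a_t] + f_t by least squares over a batch of rollouts; the GMM normal-inverse-Wishart prior used here is omitted for brevity, and the array shapes are our assumptions.

```python
import numpy as np

def fit_linear_dynamics(states, actions):
    """Least-squares fit of time-varying dynamics s_{t+1} ~ F_t [s_t; a_t] + f_t
    from N rollouts. states: (N, T+1, ds); actions: (N, T, da)."""
    N, T = actions.shape[0], actions.shape[1]
    F, f = [], []
    for t in range(T):
        X = np.hstack([states[:, t], actions[:, t], np.ones((N, 1))])  # [s_t, a_t, 1]
        Y = states[:, t + 1]
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # solves X W ~ Y in least squares
        F.append(W[:-1].T)                          # dynamics matrix F_t, (ds, ds+da)
        f.append(W[-1])                             # bias term f_t, (ds,)
    return F, f
```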
B SAMPLE COMPLEXITY OF EXPERIMENTS

Because we use guided policy search to optimize the policy, we inherit its sample efficiency. In Table 2, we report the number of samples used in both labeled and unlabeled scenarios for all tasks and all methods. Note that the labeled samples used by the oracle are from the "unlabeled" MDPs U, where we generally assume that reward labels are not available.

Table 2: Sample complexity of each experiment. This table records the total number of samples used to train policies in the labeled setting (RL and oracle) and the unlabeled setting (reward regression, S3G). The sample complexity of unlabeled experiments is denoted as (unlabeled samples + labeled samples).

| Task                       | RL (labeled) | oracle (labeled) | reward regression (unlabeled + labeled) | S3G (unlabeled + labeled) |
|----------------------------|--------------|------------------|-----------------------------------------|---------------------------|
| obstacle                   | 250          | 250              | 300+250                                 | 300+250                   |
| 2-link reacher             | 200          | 300              | 900+200                                 | 900+200                   |
| 2-link reacher with vision | 250          | 650              | 1170+250                                | 1300+250                  |
| half-cheetah               | 600          | 600              | 1400+600                                | 1400+600                  |
DISTRIBUTED SECOND-ORDER OPTIMIZATION USING KRONECKER-FACTORED APPROXIMATIONS

Jimmy Ba
University of Toronto
jimmy@psi.toronto.edu

Roger Grosse
University of Toronto

James Martens
University of Toronto and Google DeepMind

ABSTRACT

As more computational resources become available, machine learning researchers train ever larger neural networks on millions of data points using stochastic gradient descent (SGD). Although SGD scales well in terms of both the size of the dataset and the number of parameters of the model, it has rapidly diminishing returns as parallel computing resources increase. Second-order optimization methods have an affinity for well-estimated gradients and large mini-batches, and can therefore benefit much more from parallel computation in principle. Unfortunately, they often employ severe approximations to the curvature matrix in order to scale to large models with millions of parameters, limiting their effectiveness in practice versus well-tuned SGD with momentum. The recently proposed K-FAC method (Martens and Grosse, 2015) uses a stronger and more sophisticated curvature approximation, and has been shown to make much more per-iteration progress than SGD, while only introducing a modest overhead. In this paper, we develop a version of K-FAC that distributes the computation of gradients and additional quantities required by K-FAC across multiple machines, thereby taking advantage of the method's superior scaling to large mini-batches and mitigating its additional overheads. We provide a TensorFlow implementation of our approach which is easy to use and can be applied to many existing codebases without modification. Additionally, we develop several algorithmic enhancements to K-FAC which can improve its computational performance for very large models. Finally, we show that our distributed K-FAC method speeds up training of various state-of-the-art ImageNet classification models by a factor of two compared to an improved form of Batch Normalization (Ioffe and Szegedy, 2015).

1 INTRODUCTION

Current state-of-the-art deep neural networks (Szegedy et al., 2014; Krizhevsky et al., 2012; He et al., 2015) often require days of training time with millions of training cases. The typical strategy to speed up neural network training is to allocate more parallel resources over many machines and cluster nodes (Dean et al., 2012). Parallel training also enables researchers to build larger models where different machines compute different splits of the mini-batches. Although we have improved our distributed training setups over the years, neural networks are still trained with various simple first-order stochastic gradient descent (SGD) algorithms. Despite how well SGD scales with the size of the model and the size of the dataset, it does not scale well with the parallel computation resources. Larger mini-batches and more parallel computations exhibit diminishing returns for SGD and related algorithms.

Second-order optimization methods, which use second-order information to construct updates that account for the curvature of the objective function, represent a promising alternative. The canonical second-order methods work by inverting a large curvature matrix (traditionally the Hessian), but this doesn't scale well to deep neural networks with millions of parameters. Various approximations to the curvature matrix have been proposed to help alleviate this problem, such as diagonal (LeCun et al., 1998; Duchi et al., 2011; Kingma and Ba, 2014), block-diagonal (Le Roux et al., 2008), and low-rank ones (Schraudolph et al., 2007; Bordes et al., 2009; Wang et al., 2014; Keskar and Berahas, 2015; Moritz et al., 2016; Byrd et al., 2016; Curtis, 2016; Ramamurthy and Duffy). Another strategy is to use Krylov-subspace methods and efficient matrix-vector product algorithms to avoid the inversion problem entirely (Martens, 2010; Vinyals and Povey, 2012; Kiros, 2013; Cho et al., 2015; He et al., 2016).
The usual problem with curvature approximations, especially low-rank and diagonal ones, is that they are very crude and only model superficial aspects of the true curvature in the objective function. Krylov-subspace methods, on the other hand, suffer because they still rely on first-order methods to compute their updates.

More recently, several approximations have been proposed based on statistical approximations of the Fisher information matrix (Heskes, 2000; Ollivier, 2013; Grosse and Salakhutdinov, 2015; Povey et al., 2015; Desjardins et al., 2015). In the K-FAC approach (Martens and Grosse, 2015; Grosse and Martens, 2016), these approximations result in a block-diagonal approximation to the Fisher information matrix (with blocks corresponding to entire layers), where each block is approximated as a Kronecker product of two much smaller matrices, both of which can be estimated and inverted fairly efficiently. Because the inverse of a Kronecker product of two matrices is the Kronecker product of their inverses, this allows the entire matrix to be inverted efficiently.

Martens and Grosse (2015) found that K-FAC scales very favorably to larger mini-batches compared to SGD, enjoying a nearly linear relationship between mini-batch size and per-iteration progress for medium-to-large sized mini-batches. One possible explanation for this phenomenon is that second-order methods make more rapid progress exploring the error surface and reaching a neighborhood of a local minimum where gradient noise (which is inversely proportional to mini-batch size) becomes the chief limiting factor in convergence.¹ This observation implies that K-FAC would benefit in particular from a highly parallel distributed implementation.

¹ Mathematical evidence for this idea can be found in Martens (2014), where it is shown that (convex quadratic) objective functions decompose into noise-dependent and independent terms, and that second-order methods make much more rapid progress optimizing the noise-independent term compared to SGD, while having no effect on the noise-dependent term (which shrinks with the size of the mini-batch).

In this paper, we propose an asynchronous distributed version of K-FAC that can effectively exploit large amounts of parallel computing resources, and which scales to industrial-scale neural net models with hundreds of millions of parameters. Our method augments the traditional distributed synchronous SGD setup with additional computation nodes that update the approximate Fisher and compute its inverse. The proposed method achieves a per-iteration runtime comparable to normal SGD using the same mini-batch size on a typical 4-GPU cluster. We also propose a "doubly factored" Kronecker approximation for layers whose inputs are feature maps that are normally too large to be handled by the standard Kronecker-factored approximation. Finally, we empirically demonstrate that the proposed method speeds up learning of various state-of-the-art ImageNet models by a factor of two over Batch Normalization (Ioffe and Szegedy, 2015).

2 BACKGROUND

Let $\mathcal{D}W$ be the gradient of the log-likelihood of a neural network w.r.t. some weight matrix $W \in \mathbb{R}^{C_{out} \times C_{in}}$ in a layer, where $C_{in}$ and $C_{out}$ are the number of input/output units of the layer.
The block of the Fisher information matrix for that layer is given by:

$$F = \mathbb{E}_{x,y\sim\hat{P}}\left[\mathrm{vec}\{\mathcal{D}W\}\,\mathrm{vec}\{\mathcal{D}W\}^\top\right] \quad (1)$$

where $\hat{P}$ is the distribution over the input x and the network's distribution over targets y (implied by the log-likelihood objective). Throughout this paper we assume, unless otherwise stated, that expectations are taken with respect to $\hat{P}$ (and not the training distribution over y).

K-FAC (Martens and Grosse, 2015; Grosse and Martens, 2016) uses a Kronecker-factored approximation to each block, which we now describe. Denote the input activation vector to the layer as $A \in \mathbb{R}^{C_{in}}$, the pre-activation inputs as $s = WA$, and the back-propagated loss derivatives as $\mathcal{D}s = \frac{dL}{ds} \in \mathbb{R}^{C_{out}}$. Note that the gradient of the weights is the outer product of the input activations and back-propagated derivatives, $\mathcal{D}W = \mathcal{D}s A^\top$. K-FAC approximates the Fisher block as a Kronecker product of the second-order statistics of the inputs and the backpropagated derivatives:

$$F = \mathbb{E}\left[\mathrm{vec}\{\mathcal{D}W\}\,\mathrm{vec}\{\mathcal{D}W\}^\top\right] = \mathbb{E}\left[AA^\top \otimes \mathcal{D}s\mathcal{D}s^\top\right] \approx \mathbb{E}\left[AA^\top\right] \otimes \mathbb{E}\left[\mathcal{D}s\mathcal{D}s^\top\right] \triangleq \hat{F} \quad (2)$$

This approximation can be interpreted as making the assumption that the second-order statistics of the activations and the backpropagated derivatives are uncorrelated.
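The following sketch illustrates the two-factor approximation and the resulting preconditioned update for a single fully connected layer. It is a simplified, single-machine illustration rather than the paper's distributed implementation; in practice the mini-batch estimates would be maintained as decayed running averages, and the damping terms shown here are explained in Sections 3.1 and 4.1.

```python
import numpy as np

def kfac_layer_update(A, DS, g_W, lam=1e-2, pi_A=1.0, pi_DS=1.0):
    """Approximate natural-gradient update for one fully connected layer.

    A:   (N, c_in)  mini-batch of input activations.
    DS:  (N, c_out) backpropagated derivatives w.r.t. pre-activations,
         computed with targets sampled from the model's own predictions.
    g_W: (c_out, c_in) weight gradient.
    """
    N = A.shape[0]
    AA = A.T @ A / N      # estimate of E[A A^T]
    SS = DS.T @ DS / N    # estimate of E[Ds Ds^T]
    # Factored Tikhonov damping (see Secs. 3.1 and 4.1): pi * sqrt(lam) * I.
    AA_d = AA + pi_A * np.sqrt(lam) * np.eye(AA.shape[0])
    SS_d = SS + pi_DS * np.sqrt(lam) * np.eye(SS.shape[0])
    # (E[AA^T] (x) E[DsDs^T])^-1 vec(g_W) = vec(SS^-1 g_W AA^-1):
    # two small solves instead of inverting the full Fisher block.
    return np.linalg.solve(SS_d, np.linalg.solve(AA_d, g_W.T).T)
```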
3 DISTRIBUTED OPTIMIZATION USING K-FAC

Stochastic optimization algorithms benefit from low-variance gradient estimates (as might be obtained from larger mini-batches). Prior work suggests that approximate natural gradient algorithms might benefit more than standard SGD from reducing this variance (Martens and Grosse, 2015; Grosse and Martens, 2016). One way to efficiently obtain low-variance gradient estimates is to parallelize the gradient computation across many machines in a distributed system (thus allowing large mini-batches to be processed efficiently). Because the gradient computation in K-FAC is identical to that of SGD, we parallelize the gradient computation using the standard synchronous SGD model.

However, K-FAC also introduces other forms of overhead not found in SGD, in particular the estimation of second-order statistics and the computation of inverses or eigenvalues of the Kronecker factors. In this section, we describe how these additional computations can be performed asynchronously. While this asynchronous computation introduces an additional source of error into the algorithm, we find that it does not significantly affect the per-iteration progress in practice. All in all, the per-iteration wall-clock time of our distributed K-FAC implementation is only 5-10% higher compared to synchronous SGD with the same mini-batch size.

Figure 1: The diagram illustrates the distributed computation of K-FAC. Gradient workers (blue) compute the gradient w.r.t. the loss function. Stats workers (grey) compute the sampled second-order statistics. Additional workers (red) compute inverse Fisher blocks. The parameter server (orange) uses gradients and their inverse Fisher blocks to compute parameter updates.

3.1 ASYNCHRONOUS FISHER BLOCK INVERSION

The natural gradient (Amari, 1998) is defined as the inverse of the Fisher times the gradient. It is traditionally interpreted as the direction in parameter space that achieves the largest (instantaneous) improvement in the objective per unit of change in the output distribution of the network (as measured using the KL divergence). Under certain conditions, which almost always hold in practice, it can also be interpreted as a second-order update computed by minimizing a local quadratic approximation of the log-likelihood objective, where the Hessian is approximated using the Fisher (Martens, 2014).

To compute the approximate natural gradient in K-FAC, one multiplies the gradient for the weights of each layer by the inverse of the corresponding approximate Fisher block $\hat{F}$ for that layer. Denote the gradient of the loss function with respect to the weights W by $g_W \in \mathbb{R}^{C_{in}\times C_{out}}$. We will assume the use of the factored Tikhonov damping approach described by Martens and Grosse (2015), where the addition of the damping term $\lambda I$ to $\hat{F}$ is approximated by adding $\pi_A\sqrt{\lambda}\, I$ to $\mathbb{E}[AA^\top]$ and $\pi_{\mathcal{D}s}\sqrt{\lambda}\, I$ to $\mathbb{E}[\mathcal{D}s\mathcal{D}s^\top]$, where $\pi_A$ and $\pi_{\mathcal{D}s}$ are adjustment factors that are described in detail and generalized in Sec. 4.1. (Note that one can also include the contribution to the curvature from any L2 regularization terms with $\lambda$.)

By exploiting the basic identities $(A \otimes B)^{-1} = (A^{-1} \otimes B^{-1})$ and $(A \otimes B)\,\mathrm{vec}(C) = \mathrm{vec}(BCA^\top)$, the approximate natural gradient update v can then be computed as:

$$v = \left[\left(\mathbb{E}[AA^\top] + \pi_A\sqrt{\lambda}\, I\right) \otimes \left(\mathbb{E}[\mathcal{D}s\mathcal{D}s^\top] + \pi_{\mathcal{D}s}\sqrt{\lambda}\, I\right)\right]^{-1} \mathrm{vec}\{g_W\} = \mathrm{vec}\left[\left(\mathbb{E}[\mathcal{D}s\mathcal{D}s^\top] + \pi_{\mathcal{D}s}\sqrt{\lambda}\, I\right)^{-1} g_W \left(\mathbb{E}[AA^\top] + \pi_A\sqrt{\lambda}\, I\right)^{-1}\right] \quad (3)$$

which amounts to several matrix inversion and multiplication operations involving matrices roughly the same size as the weight matrix W.

Computing the parameter updates as per Eq. 3 requires the estimated gradients to be multiplied by the inverses of the smaller Kronecker factors. This requires periodically computing (typically) either inverses or eigendecompositions of each of these factors. While these factors typically have sizes only in the hundreds or low thousands, very deep networks may have hundreds of such matrices (2 or more for each layer). Furthermore, matrix inversion and eigendecomposition see little benefit from GPU computation, so they can be more expensive than standard neural network operations. For these reasons, inverting the approximate Fisher blocks represents a significant computational cost.

It has been observed that refreshing the inverses of the Fisher blocks only occasionally and using stale values otherwise has only a small detrimental effect on average per-iteration progress, perhaps because the curvature changes relatively slowly (Martens and Grosse, 2015). We push this a step further by computing the inverses asynchronously while the network is still training. Because the required linear algebra operations are CPU-bound while the rest of our computations are GPU-bound, we perform them on the CPU with little effective overhead. Our curvature statistics are somewhat more stale as a result, but this does not appear to significantly affect per-iteration optimization performance. In our experiments, we found that computing the inverses asynchronously usually offered a 40-50% speed-up to the overall wall-clock time of the K-FAC algorithm.
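As an illustration of the asynchronous scheme, the following sketch keeps a background CPU thread refreshing the factor inverses while training reads whatever (slightly stale) inverses are currently available. The class and its lock-free design are our own simplification for exposition, not the actual TensorFlow implementation.

```python
import threading
import time
import numpy as np

class AsyncInverter:
    """Background CPU thread refreshing damped Kronecker-factor inverses.

    `factors` maps names to the latest statistics matrices; training code
    reads `inverses`, which may lag the statistics by a few iterations.
    """

    def __init__(self, factors, lam=1e-2, interval=1.0):
        self.factors, self.inverses = factors, {}
        self.lam, self.interval = lam, interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()

    def _loop(self):
        while not self._stop.is_set():
            for name, F in list(self.factors.items()):
                # Stale-but-cheap: inverses lag the statistics slightly.
                damped = F + np.sqrt(self.lam) * np.eye(F.shape[0])
                self.inverses[name] = np.linalg.inv(damped)
            time.sleep(self.interval)
```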
The other major source of computational overhead in K-FAC is the estimation of the second-order statistics of the activations and derivatives, which are needed for the Kronecker factors. In the standard K-FAC algorithm, these statistics are computed on the same mini-batches as the gradients, allowing the forward pass computations to be shared between the gradient and statistics computations. By computing the gradients and statistics on separate mini-batches, we can enable a higher degree of parallelism, at the expense of slightly more total computational operations. Under this scheme, the statistics estimation is independent of the gradient computation, so it can be done on one or more separate worker nodes with their own independent data shards. These worker nodes receive parameters from the parameter server (just as in synchronous SGD) and communicate statistics back to the parameter server. In our experiments, we assigned at most one worker to computing statistics.

In cases where it is undesirable to devote separate worker nodes to computing statistics, we also introduce a fast approximation to the statistics for convolution layers (see Appendix A).

4 DOUBLY-FACTORED KRONECKER APPROXIMATION FOR LARGE CONVOLUTION LAYERS

Computing the standard Kronecker-factored Fisher approximation for a given layer involves operations on matrices whose dimension is the number of input units or output units. The cost of these operations is reasonable for most fully-connected networks because the number of units in each layer rarely exceeds a couple thousand. Large convolutional neural networks, however, often include a fully-connected layer that "pools" over a large feature map before the final softmax classification. For instance, the output of the last pooling layer of AlexNet is of size 6 x 6 x 256 = 9216, which then provides inputs to the subsequent fully connected layer of 4096 ReLUs. VGG models also share a similar architecture. For the standard Kronecker-factored approximation, one of the factors will be a matrix of size 9216 x 9216, which is too expensive to be explicitly inverted as often as is needed during training.

In this section we propose a "doubly-factored" Kronecker approximation for layers whose input is a large feature map. Specifically, we approximate the second-order statistics matrix of the inputs as itself factoring as a Kronecker product. This gives an approximation which is a Kronecker product of three matrices.

Using the AlexNet example, the 9216 x 4096 weight matrix in the first fully connected layer is equivalent to a filterbank of 4096 filters with kernel size 6 x 6 on 256 input channels. Let A be a matrix of dimension T-by-$C_{in}$ representing the input activations (for a single training case), where $T = K_w K_h$ is the feature map height times width, and $C_{in}$ is the number of input channels. The Fisher block for such a layer can be written as:

$$\mathbb{E}\left[\mathrm{vec}\{\mathcal{D}W\}\,\mathrm{vec}\{\mathcal{D}W\}^\top\right] = \mathbb{E}\left[\mathrm{vec}\{A\}\,\mathrm{vec}\{A\}^\top \otimes \mathcal{D}s\mathcal{D}s^\top\right], \quad A \in \mathbb{R}^{T \times C_{in}} \quad (4)$$

We begin by making the following rank-1 approximation:

$$A \approx \mathcal{K}\Psi^\top \quad (5)$$

where $\mathcal{K} \in \mathbb{R}^{T}$ and $\Psi \in \mathbb{R}^{C_{in}}$ are the factors along the spatial location dimension and the input channel dimension. The optimal solution of a low-rank approximation under the Frobenius norm is given by the singular value decomposition. The activation matrix A is small enough that its SVD can be computed efficiently.
Let $\lambda_1$, $u_1$, $v_1$ be the first singular value and its left and right singular vectors of the activation matrix A, respectively. The factors of the rank-1 approximation are then chosen to be $\mathcal{K} = \sqrt{\lambda_1}\,u_1$ and $\Psi = \sqrt{\lambda_1}\,v_1$. $\mathcal{K}$ captures the activation patterns across spatial locations in a feature map and $\Psi$ captures the pattern across the filter responses. Under the rank-1 approximation of A we have:

$$\mathbb{E}\left[\mathrm{vec}\{A\}\,\mathrm{vec}\{A\}^\top \otimes \mathcal{D}s\mathcal{D}s^\top\right] \approx \mathbb{E}\left[\mathrm{vec}\{\mathcal{K}\Psi^\top\}\,\mathrm{vec}\{\mathcal{K}\Psi^\top\}^\top \otimes \mathcal{D}s\mathcal{D}s^\top\right] = \mathbb{E}\left[\mathcal{K}\mathcal{K}^\top \otimes \Psi\Psi^\top \otimes \mathcal{D}s\mathcal{D}s^\top\right]$$

We further assume the second-order statistics are three-way independent between the loss derivatives $\mathcal{D}s$, the activations along the input channels $\Psi$, and the activations along spatial locations $\mathcal{K}$:

$$\mathbb{E}\left[\mathrm{vec}\{\mathcal{D}W\}\,\mathrm{vec}\{\mathcal{D}W\}^\top\right] \approx \mathbb{E}\left[\mathcal{K}\mathcal{K}^\top\right] \otimes \mathbb{E}\left[\Psi\Psi^\top\right] \otimes \mathbb{E}\left[\mathcal{D}s\mathcal{D}s^\top\right]$$

The final approximated Fisher block is a Kronecker product of three small matrices. Note that although we assumed the feature map activations have low-rank structure, the resulting approximated Fisher is not low-rank.

The approximate natural gradient for this layer can then be computed by multiplying the inverses of each of the smaller matrices against the respective dimensions of the gradient tensor. We define a function $\mathcal{R}_i : \mathbb{R}^{d_1 \times d_2 \times d_3} \rightarrow \mathbb{R}^{d_i \times d_j d_k}$ that constructs a matrix from a 3D tensor by "reshaping" it so that the desired target dimension $i \in \{1,2,3\}$ maps to the rows, while the remaining dimensions (j and k) are "folded together" and map to the columns. Given the gradient of the weights $g_W \in \mathbb{R}^{T \times C_{in} \times C_{out}}$, we can compute the matrix-vector product with the inverse doubly-factored Kronecker-approximated Fisher block as:

$$\mathcal{R}_3^{-1}\left(\mathbb{E}[\mathcal{D}s\mathcal{D}s^\top]^{-1}\,\mathcal{R}_3\left(\mathcal{R}_2^{-1}\left(\mathbb{E}[\Psi\Psi^\top]^{-1}\,\mathcal{R}_2\left(\mathcal{R}_1^{-1}\left(\mathbb{E}[\mathcal{K}\mathcal{K}^\top]^{-1}\,\mathcal{R}_1(g_W)\right)\right)\right)\right)\right) \quad (10)$$

The doubly-factored Kronecker approximation provides a computationally feasible alternative to the standard Kronecker-factored approximation for layers that have a number of parameters on the order of hundreds of millions. For example, inverting it for the first fully connected layer of AlexNet takes about 15 seconds on an 8-core Intel Xeon CPU, and such time is amortized in our asynchronous algorithm.

Unfortunately, the homogeneous coordinate formulation is no longer applicable under this new approximation. Instead, we lump the bias parameters together and associate a full Fisher block with them, which can be explicitly computed and inverted since the number of bias parameters per layer is small.
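A minimal sketch of applying the doubly-factored inverse of Eq. 10 to a gradient tensor is given below, using einsum in place of the explicit reshaping operators $\mathcal{R}_i$ (multiplying each factor's inverse along its own tensor dimension is equivalent). Damping is omitted, and the argument names are our own.

```python
import numpy as np

def doubly_factored_update(g_W, KK, PP, SS):
    """Multiply g_W by the inverse of E[KK^T] (x) E[PsiPsi^T] (x) E[DsDs^T].

    g_W: (T, c_in, c_out) gradient tensor; KK: (T, T); PP: (c_in, c_in);
    SS: (c_out, c_out). Applying each factor's inverse along its own
    tensor dimension reproduces the reshape-based expression in Eq. 10.
    """
    v = np.einsum('ij,jbc->ibc', np.linalg.inv(KK), g_W)  # spatial dim
    v = np.einsum('ab,ibc->iac', np.linalg.inv(PP), v)    # channel dim
    v = np.einsum('cd,ibd->ibc', np.linalg.inv(SS), v)    # output dim
    return v
```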
4.1 FACTORED TIKHONOV DAMPING FOR THE DOUBLY-FACTORED APPROXIMATION

In second-order optimization methods, "damping" performs the crucial task of correcting for the inaccuracies of the local quadratic approximation of the objective that is (perhaps implicitly) optimized when computing the update (e.g., Martens and Sutskever, 2012; Martens, 2014). In the well-known Tikhonov damping/regularization approach, one adds a multiple of the identity $\lambda I$ to the Fisher before inverting it (as one also does for L2-regularization / weight-decay), which roughly corresponds to imposing a spherical trust region on the update.

The inverse of a Kronecker product can be computed efficiently as the Kronecker product of the inverses of its factors. Adding a multiple of the identity complicates this computation (although it can still be performed tractably using eigendecompositions). The "factored Tikhonov damping" technique proposed in Martens and Grosse (2015) is appealing because it preserves the Kronecker structure of the factorization, and thus the inverse can still be computed by inverting each of the smaller matrices (avoiding the more expensive eigendecomposition operation). In our experiments with large ImageNet models, we also observed that factored damping seems to perform better in practice. In this subsection we derive a generalized version of factored Tikhonov damping for the doubly-factored Kronecker approximation.

Suppose we wish to add $\lambda I$ to our approximate Fisher block $A \otimes B \otimes C$. In the factored Tikhonov scheme this is approximated by adding $\pi_a\lambda^{\frac{1}{3}} I$, $\pi_b\lambda^{\frac{1}{3}} I$, and $\pi_c\lambda^{\frac{1}{3}} I$ to A, B and C respectively, for non-negative scalars $\pi_a$, $\pi_b$ and $\pi_c$ satisfying $\pi_a\pi_b\pi_c = 1$. The error associated with this approximation is:

$$\left(A+\pi_a\lambda^{\frac{1}{3}}I\right) \otimes \left(B+\pi_b\lambda^{\frac{1}{3}}I\right) \otimes \left(C+\pi_c\lambda^{\frac{1}{3}}I\right) - \left(A \otimes B \otimes C + \lambda I\right)$$
$$= \pi_c\lambda^{\frac{1}{3}}\, A \otimes B \otimes I + \pi_b\lambda^{\frac{1}{3}}\, A \otimes I \otimes C + \pi_a\lambda^{\frac{1}{3}}\, I \otimes B \otimes C + \pi_b\pi_c\lambda^{\frac{2}{3}}\, A \otimes I \otimes I + \pi_a\pi_c\lambda^{\frac{2}{3}}\, I \otimes B \otimes I + \pi_a\pi_b\lambda^{\frac{2}{3}}\, I \otimes I \otimes C \quad (11)$$

Following Martens and Grosse (2015), we choose $\pi_a$, $\pi_b$ and $\pi_c$ by taking the nuclear norm in Eq. 11 and minimizing its triangle-inequality-derived upper bound. Note that the nuclear norm of a Kronecker product is the product of the nuclear norms of the individual matrices: $\|A \otimes B\|_* = \|A\|_*\|B\|_*$. This gives the following formula for the value of $\pi_a$:

$$\pi_a = \sqrt[3]{\frac{\left(\|A\|_*/d_a\right)^2}{\left(\|B\|_*/d_b\right)\left(\|C\|_*/d_c\right)}} \quad (12)$$

where the d's are the number of rows (equiv. columns) of the corresponding Kronecker factor matrices. The corresponding formulae for $\pi_b$ and $\pi_c$ are analogous. Intuitively, Eq. 12 rescales the contribution to each factor matrix according to the geometric mean of the ratio of its norm vs. the norms of the other factor matrices. This results in the contribution being upscaled if the factor's norm is larger than the average norm, for example. Note that this formula generalizes to Kronecker products of arbitrary numbers of matrices as the geometric mean of the norm ratios.

5 STEP SIZE SELECTION

Although Grosse and Martens (2016) found that Polyak averaging (Polyak and Juditsky, 1992) obviated the need for tuning learning rate schedules on some problems, we observed the choice of learning rate schedule to be an important factor in our ImageNet experiments (perhaps due to higher stochasticity in the updates). On ImageNet, it is common to use a fixed exponential decay schedule (Szegedy et al., 2014; 2015). As an alternative to learning rate schedules, we instead use curvature information to control the amount by which the predictive distribution is allowed to change after each update. In particular, given a parameter update vector v, the second-order Taylor approximation to the KL divergence between the predictive distributions before and after the update is given by the (squared) Fisher norm:

$$D_{KL}[q\,\|\,p] \approx \frac{1}{2}\, v^\top F v$$

This quantity can be computed with a curvature-vector product (Schraudolph, 2002). Observe that choosing a step size of $\eta$ will produce an update with squared Fisher norm $\eta^2 v^\top F v$. Instead of using a learning rate schedule, we choose $\eta$ in each iteration such that the squared Fisher norm is at most some value c:

$$\eta = \min\left(\eta_{max},\; \sqrt{\frac{c}{v^\top F v}}\right)$$

Grosse and Martens (2016) used this method to clip updates at the start of training, but we found it useful to use it throughout training. We use an exponential decay schedule $c_k = c_0\zeta^k$, where $c_0$ and $\zeta$ are tunable parameters, and k is incremented periodically (every half an epoch in our ImageNet experiments). Shrinking the maximum change in the model prediction after each update is analogous to shrinking the trust region of the second-order optimization. In practice, computing curvature-vector products after every update introduces significant computational overhead, so we instead used the approximate Fisher $\hat{F}$ in place of F, which allows the approximate Fisher norm to be computed efficiently as $v^\top \hat{F} v = v^\top \hat{F}(\hat{F}^{-1} g_W) = v^\top g_W$. The maximum step size $\eta_{max}$ was set to a large value, and in practice this maximum was reached only at the beginning of training when $\hat{F}$ was small in magnitude. We found this outperformed simple exponential learning rate decay in our ImageNet experiments (see Appendix B).
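Once the update v has been preconditioned, the step size rule above costs only an inner product. A minimal sketch follows; the small floor on $v^\top g_W$ is our own numerical guard, not part of the method as described.

```python
import numpy as np

def kl_clipped_step_size(v, g_W, c, eta_max):
    """Pick eta so the squared approximate Fisher norm of eta*v is <= c.

    With v = F_hat^{-1} g_W, we have v^T F_hat v = v^T g_W, so the norm
    reduces to an inner product between the update and the gradient.
    """
    vFv = max(float(np.sum(v * g_W)), 1e-12)
    return min(eta_max, np.sqrt(c / vFv))

# The trust-region schedule from the text: c_k = c0 * zeta**k, with k
# incremented every half-epoch (c0 = 0.01, zeta = 0.96 in the experiments).
```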
6 EXPERIMENTS

Due to computational resource constraints, we used a single GPU server with 8 Nvidia K80 GPUs to simulate a large distributed system. The GPUs were used as gradient workers that computed the gradient over a large mini-batch, with the CPUs acting as a parameter server. The Fisher block inversions were performed on the CPUs in parallel, using as many threads as possible. The second-order statistics required for the various Fisher block approximations were computed either synchronously by the gradient workers after each gradient computation (CIFAR-10 experiments), or asynchronously using a separate dedicated "stats worker" (ImageNet experiments).

We chose to base our implementation of distributed K-FAC on the TensorFlow framework (Abadi et al., 2016) because it provides well-engineered and scalable primitives for distributed computation. We implement distributed K-FAC in TensorFlow by scanning the gradient-computing graph for groups of parameters whose gradient computations have particular structures. Having identified such groups, we compute/approximate their Fisher blocks using a method tailored to the type of structure observed. See Appendix C for details. This type of implementation can be applied to existing model-specification code without significant modification of said code. And because TensorFlow's parallel primitives were designed with scalability in mind, it should be possible to scale our implementation to a larger distributed system with hundreds of workers.

Meta-parameters such as learning rates, damping parameters, and the decay rate for the second-order statistics were optimized carefully by hand for each method. The momentum was fixed to 0.9.

Similarly to Martens and Grosse (2015), we applied an exponentially decayed Polyak averaging scheme to the sequence of output iterates produced by each method. We found this improved their convergence rate in the later stages of optimization, and reduced or eliminated the need to decay the learning rates.

6.1 CIFAR-10 CLASSIFICATION AND ASYNCHRONOUS FISHER BLOCK INVERSION

In our first experiment we evaluated the effectiveness of asynchronously computing the approximate Fisher inverses (as described in Section 3.1). We considered the effect that this has both on the quality of the updates, as measured by per-iteration progress on the objective, and on the average per-iteration wall-clock time.

Figure 2: The results from our CIFAR-10 experiment looking at the effectiveness of asynchronously computing the approximate Fisher inverses. "gpu" indicates the number of gradient workers. Dashed lines denote training curves and solid lines denote test curves. Top row: cross entropy loss and classification error vs the number of updates. Bottom row: cross entropy loss and classification error vs wall-clock time.
The task is to train a basic convolutional network model on the CIFAR-10 image classification dataset (Krizhevsky and Hinton, 2009). The model has 3 convolutional layers of 32-32-64 filters, each with a receptive field size of 5x5, followed by a softmax layer that predicts 10 classes. This is a similar but not identical CIFAR-10 model to that used by Grosse and Martens (2016). All the CIFAR-10 experiments use a mini-batch size of 512.

The baseline method is a simple synchronous version of distributed K-FAC with a fixed learning rate, and up to 4 GPUs acting as gradient and stats workers, which recomputes the inverses of the approximate Fisher blocks once every 20 iterations. This baseline method behaves similarly to the implementation of K-FAC in Grosse and Martens (2016), while being potentially faster due to its greater use of parallelism. We compare this baseline to a version of distributed K-FAC where the approximate Fisher blocks are inverted asynchronously and in parallel with the rest of the optimization process. Note that under this scheme, inverses are updated about once every 16 iterations for the single GPU condition, and every 30 iterations for the four GPU condition. For networks larger than this relatively small CIFAR-10 net they may get updated (far) less often (e.g. the AlexNet experiments in Section 6.2.2).

The results of this first experiment are plotted in Fig. 2. We found that the asynchronous version iterated about 1.5 times faster than the synchronous version, while its per-iteration progress remained comparable. The plots show that the asynchronous version is better at taking advantage of parallel computation and displayed an almost linear speed-up as the number of gradient workers increases to 4. In terms of wall-clock time, using only 4 GPUs the asynchronous version of distributed K-FAC is able to complete 700 iterations in under a minute, where it achieves the minimum test error (19%).

6.2 IMAGENET CLASSIFICATION

In our second set of experiments we benchmarked distributed K-FAC against several other popular approaches, and considered the effect of mini-batch size on per-iteration progress. To do this we trained various off-the-shelf convnet architectures for image classification on the ImageNet dataset (Russakovsky et al., 2015): AlexNet (Krizhevsky et al., 2012), GoogLeNet InceptionV1 (Szegedy et al., 2014) and the 50-layer Residual network (He et al., 2015).

Figure 3: Optimization performance of distributed K-FAC and SGD training GoogLeNet on ImageNet. Dashed lines denote training curves and solid lines denote validation curves. "bz" indicates the size of mini-batches. "rbz" indicates the size of chunks used to assemble the BN updates. Top row: cross entropy loss and classification error vs the number of updates. Bottom row: cross entropy loss and classification error vs wall-clock time (in hours). All methods used 4 GPUs, with distributed K-FAC using the 4th GPU as a dedicated asynchronous stats worker.
2012), GoogLeNet InceptionV1 (Szegedy et al.[[2014) and the 50-layer Residual network (He et al.2015)", "section_text": "Despite having 1.2 million images in the ImageNet training set, a data pre-processing pipeline i almost always used for training ImageNet that includes image jittering and aspect distortion. W used a less extensive dataset augmentation/pre-processing pipeline than is typically used for Ima geNet, as the purpose of this paper is not to achieve state-of-the-art ImageNet results, but rathe to evaluate the optimization performance of distributed K-FAC. In particular, the dataset consist of 224x224 images and during training the original images are first resized to 256x256 and ther randomly cropped back down to 224x224 before being fed to the network. Note that while it i typically the case that validation error is higher than training error, this data pre-processing pipelin for ImageNet creates an augmented training set that is more difficult than the undistorted validatioi set and therefore the validation error is often lower than the training error during the first 90% o training. This observation is consistent with previously published results (He et al.||2015)."}, {"section_index": "9", "section_name": "6.2.1 GOOGLELENET AND BATCH NORMALIZATION", "section_text": "Batch Normalization (Ioffe and Szegedy 2015) is a reparameterization of neural networks that car make them easier to train with first-order methods, and has been successfully applied to large Ima geNet models. It can be thought of as a modification of the units of a neural network so that each one centers and normalizes its own raw input over the current mini-batch (or subset thereof), afte which it applies a separate shift and scaling operation via its own local \"bias'\" and \"gain'' parameters (which are optimized). These shift and scaling operations can learn to effectively undo the center- ing and normalization, thus preserving the class of functions that the network can compute. Batch Normalization (BN) is closely related to centering techniques (Schraudolph] 1998), and likely helps for the same reason that they do, which is that the alternative parameterization gives rise to loss surfaces with more favorable curvature properties. The main difference between BN and traditiona centering is that BN makes the centering and normalization operations part of the model insteac of the optimization algorithm (and thus \"backprops\"' through them when computing the gradient) which helps stabilize the optimization.\nWithout any changes to the algorithm, distributed K-FAC can be used to train neural networks that have BN layers. The weight-matrix gradient for such layers has the same structure as it does for standard layers, and so Fisher blocks can be approximated using the same set of techniques. The\nIn all our ImageNet experiments, we used the cheaper Kronecker factorization from AppendixA and the KL-based step sized selection method described in Section 5|with parameters co = 0.01 and ( = 0.96. The SGD baselines use an exponential learning rate decay schedule with a decay rate of 0.96. Decaying is applied after each half-epoch for distributed K-FAC and SGD+Batch Normalization, and after every two epochs for plain SGD, which is consistent with the experimental setup ofIoffe and Szegedy(2015).\n5.0 0.70 4.5 SGD bz2048 0.65 4.0 SGD+BN bz2048 rbz256 0.60 dist.K-FAC bz2048 0.55 2.5 2.0 0.50 1.5 0.45 ... .. ... 
Figure 4: Optimization performance of distributed K-FAC and SGD training AlexNet on ImageNet. Dashed lines denote training curves and solid lines denote validation curves. "bz" indicates the size of the mini-batches. "rbz" indicates the size of chunks used to assemble the BN updates. Top row: cross entropy loss and validation error vs the number of updates. Bottom row: cross entropy loss and validation error vs wall-clock time (in hours). All methods used 8 GPUs, with distributed K-FAC using the 8th GPU as a dedicated asynchronous stats worker.

Computing updates for BN networks over large mini-batches is usually done by splitting the mini-batch into chunks of size 32, computing the gradients separately for these chunks (using only the data in the chunk to compute the mean and variance statistics), and then summing them together. Using small sample sets to compute the statistics like this introduces additional stochasticity into the BN update that acts as a regularizer, but can also hurt optimization performance. To help decouple the effects of regularization and optimization, we also compared to a BN baseline that uses larger chunks. We found using larger chunks can give a factor of 2 speed-up in optimization performance over the standard BN baseline. In our figures, "rbz" will indicate the chunk size, which defaults to 32 if left unspecified.

In Fig. 3, we compare distributed K-FAC to SGD on GoogLeNet with and without BN. All methods used 4 GPUs, with distributed K-FAC using the 4th GPU as a dedicated asynchronous stats worker. We observe that the per-iteration progress made by distributed K-FAC on the training objective is not significantly affected by the use of BN.

For the simplicity of our discussion, distributed K-FAC is not combined with BN in the rest of the experiments, as we are chiefly interested in evaluating optimization performance, not regularization, and BN doesn't seem to provide any additional benefit to distributed K-FAC in regards to the former. Note that this is not too surprising, given that K-FAC is provably invariant to the kind of centering and normalization transformations that BN does (Martens and Grosse, 2015).

6.2.2 ALEXNET

To demonstrate that distributed K-FAC can efficiently optimize models with very wide layers, we train AlexNet using distributed K-FAC and compare to SGD+BN. The doubly-factored Kronecker approximation proposed in Section 4 is applied to the first fully-connected layer of AlexNet, which has 9216 input units and is thus too wide for the standard Kronecker approximation to be feasible. Note that even with this additional approximation, computing all of the Fisher block inverses for AlexNet is very expensive, and in our experiments they only get updated once every few hundred iterations by our 16-core Xeon 2.2GHz CPU.

The results from this experiment are plotted in Fig. 4. They show that distributed K-FAC still works well despite the potentially extreme staleness of the Fisher block inverses, speeding up training by a factor of 1.5 over the improved SGD+BN baseline.
Moreover, in the GoogLeNet comparison of Fig. 3, distributed K-FAC is 3.5 times faster than SGD with the standard BN baseline (orange line) and 1.5-2 times faster than the enhanced BN baseline (blue line). BN, however, does help distributed K-FAC generalize better, likely due to its aforementioned regularizing effect.

Figure 5: Optimization performance of distributed K-FAC and SGD training ResNet50 on ImageNet. The dashed lines are the training curves and solid lines are the validation curves. "bz" indicates the size of mini-batches. "rbz" indicates the size of chunks used to assemble the BN updates. Top row: cross entropy loss and classification error vs the number of updates. Bottom row: cross entropy loss and classification error vs wall-clock time (in hours). All methods used 8 GPUs, with distributed K-FAC using the 8th GPU as a dedicated asynchronous stats worker.

Figure 6: The comparison of distributed K-FAC and SGD on per-training-case progress on training loss and errors. The experiments were conducted using GoogLeNet with various mini-batch sizes.

6.2.3 RESNET

In recent years very deep convolutional architectures have been successfully applied to ImageNet classification. These networks are particularly challenging to train because the usual difficulties associated with deep learning are especially severe. Fortunately, second-order optimization is perhaps ideally suited to addressing these difficulties in a robust and principled way (Martens, 2010).

To investigate whether distributed K-FAC can scale to such architectures and provide useful acceleration, we compared it to SGD+BN using the 50-layer ResNet architecture (He et al., 2015). The results from this experiment are plotted in Fig. 5. They show that distributed K-FAC provides a significant speed-up during the early stages of training compared to SGD+BN.

6.2.4 MINI-BATCH SIZE SCALING PROPERTIES

In our final experiment we explored how well distributed K-FAC scales as additional parallel computing resources become available. To do this we trained GoogLeNet with varying mini-batch sizes of {256, 1024, 2048}, and measured per-training-case progress. Ideally, if extra gradient data is being used efficiently, one should expect the per-training-case progress to remain relatively constant with respect to mini-batch size. The results from this experiment are plotted in Fig. 6, and show that distributed K-FAC exhibits something close to this ideal behavior, while SGD+BN rapidly loses data efficiency when moving beyond a mini-batch size of 256. These results suggest that distributed K-FAC, more so than the SGD+BN baseline, is capable of speeding up training in proportion to the amount of parallel computational resources used.

7 DISCUSSION

We have introduced distributed K-FAC, an asynchronous distributed second-order optimization
algorithm which computes Kronecker-factored Fisher approximations and stochastic gradients over larger mini-batches asynchronously and in parallel.

Our experiments show that the extra overhead introduced by distributed K-FAC is mostly mitigated by the use of parallel asynchronous computation, resulting in updates that can be computed in a similar amount of time to those of distributed SGD, while making much more progress on the objective function per iteration. We showed that in practice this can lead to speedups of roughly 3.5x compared to standard SGD + Batch Normalization (BN), and 2x compared to SGD + an improved version of BN on large-scale convolutional network training tasks.

We also proposed a doubly-factored Kronecker approximation that allows distributed K-FAC to scale up to large models with hundreds of millions of parameters, and demonstrated the effectiveness of this approach in experiments.

Finally, we showed that distributed K-FAC enjoys a favorable scaling property with mini-batch size that is seemingly not shared by SGD+BN. In particular, we showed that per-iteration progress tends to be proportional to the mini-batch size up to a much larger threshold than for SGD+BN. This suggests that it will yield even further reductions in total wall-clock training time when implemented in a larger distributed system than the one we considered.

REFERENCES

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.

James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: A CPU and GPU math compiler in Python. In Proc. 9th Python in Science Conf, pages 1-7, 2010.

Antoine Bordes, Leon Bottou, and Patrick Gallinari. SGD-QN: Careful quasi-Newton stochastic gradient descent. Journal of Machine Learning Research, 10(Jul):1737-1754, 2009.

Richard H Byrd, SL Hansen, Jorge Nocedal, and Yoram Singer. A stochastic quasi-Newton method for large-scale optimization. SIAM Journal on Optimization, 26(2):1008-1031, 2016.

Minhyung Cho, Chandra Dhir, and Jaehyung Lee. Hessian-free optimization for learning deep multidimensional recurrent neural networks. In Advances in Neural Information Processing Systems, pages 883-891, 2015.

Frank Curtis. A self-correcting variable-metric algorithm for stochastic optimization. In Proceedings of The 33rd International Conference on Machine Learning, pages 632-641, 2016.

Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223-1231, 2012.

Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, and Koray Kavukcuoglu. Natural neural networks. In Advances in Neural Information Processing Systems, pages 2071-2079, 2015.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
arXiv preprint arXiv:1512.03385, 2015.

Xi He, Dheevatsa Mudigere, Mikhail Smelyanskiy, and Martin Takac. Large scale distributed Hessian-free optimization for deep neural network. arXiv preprint arXiv:1606.00511, 2016.

Nitish Shirish Keskar and Albert S Berahas. adaQN: An adaptive quasi-Newton algorithm for training RNNs. arXiv preprint arXiv:1511.01169, 2015.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.

Nicolas Le Roux, Pierre-Antoine Manzagol, and Yoshua Bengio. Topmoumoute online natural gradient algorithm. In Advances in Neural Information Processing Systems, pages 849-856, 2008.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

James Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML), pages 735-742, 2010.

James Martens. New insights and perspectives on the natural gradient method. arXiv preprint arXiv:1412.1193, 2014.

James Martens and Ilya Sutskever. Training deep and recurrent networks with Hessian-free optimization. In Neural Networks: Tricks of the Trade, pages 479-535. Springer, 2012.

Philipp Moritz, Robert Nishihara, and Michael Jordan. A linearly-convergent stochastic L-BFGS algorithm. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 249-258, 2016.

Boris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.

Daniel Povey, Xiaohui Zhang, and Sanjeev Khudanpur. Parallel training of DNNs with natural gradient and parameter averaging. In International Conference on Learning Representations: Workshop track, 2015.

Vivek Ramamurthy and Nigel Duffy. L-SR1: A novel second order optimization method for deep learning.

Nicol N. Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7), 2002.

Nicol N Schraudolph, Jin Yu, Simon Gunter, et al. A stochastic quasi-Newton method for online convex optimization. In AISTATS, volume 7, pages 436-443, 2007.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.

Oriol Vinyals and Daniel Povey. Krylov subspace descent for deep learning. In AISTATS, pages 1261-1268, 2012.

Xiao Wang, Shiqian Ma, and Wei Liu. Stochastic quasi-Newton methods for nonconvex stochastic optimization. arXiv preprint arXiv:1412.1196, 2014.
Figure 7: Empirical evaluation of the proposed cheaper Kronecker approximation on GoogLeNet. "bz" indicates the size of the mini-batches. Dashed lines denote training curves and solid lines denote validation curves. Top row: cross entropy loss and classification error vs the number of updates. Bottom row: cross entropy loss and classification error vs wall-clock time.

A A CHEAPER KRONECKER-FACTORED APPROXIMATION FOR CONVOLUTION LAYERS

In a convolution layer, the gradient is the sum of the outer products between the receptive field input activations $A_t$ and the back-propagated derivatives $\mathcal{D}s_t$ at each spatial location $t \in T$. One cannot simply apply the standard Kronecker-factored approximation from Martens and Grosse (2015) to each location, sum the results, and then take the inverse, as there is no known efficient algorithm for computing the inverse of such a sum.

In Grosse and Martens (2016), a Kronecker-factored approximation for convolutional layers called Kronecker Factors for Convolution (KFC) was developed. It works by introducing additional statistical assumptions about how the weight gradients are related across locations. In particular, KFC assumes spatial homogeneity, i.e. that all locations have the same statistics, and spatially uncorrelated derivatives, which (essentially) means that gradients from any two different locations are statistically independent. This yields the following approximation:

$$\mathbb{E}\left[\mathrm{vec}\{\mathcal{D}W\}\,\mathrm{vec}\{\mathcal{D}W\}^\top\right] \approx |T|\;\mathbb{E}\left[A_t A_t^\top\right] \otimes \mathbb{E}\left[\mathcal{D}s_t\mathcal{D}s_t^\top\right] \quad (15)$$

In this section we introduce an arguably simpler Kronecker-factored approximation for convolutional layers that is cheaper to compute. In practice, it appears to be competitive with the original KFC approximation in terms of per-iteration progress on the objective, working worse in some experiments and better in others, while (often) improving wall-clock time due to its cheaper cost.

It works by approximating the sum of the gradients over spatial locations as the outer product of the averaged receptive field activations over locations, $\mathbb{E}_t[A_t]$, and the averaged back-propagated derivatives, $\mathbb{E}_t[\mathcal{D}s_t]$, multiplied by the number of spatial locations $|T|$. In other words:

$$\mathbb{E}\left[\mathrm{vec}\{\mathcal{D}W\}\,\mathrm{vec}\{\mathcal{D}W\}^\top\right] = \mathbb{E}\left[\mathrm{vec}\Big\{\sum_{t\in T}\mathcal{D}s_t A_t^\top\Big\}\,\mathrm{vec}\Big\{\sum_{t\in T}\mathcal{D}s_t A_t^\top\Big\}^\top\right] \quad (16)$$
$$= \mathbb{E}\left[\Big(\sum_{t\in T} A_t \otimes \mathcal{D}s_t\Big)\Big(\sum_{t\in T} A_t \otimes \mathcal{D}s_t\Big)^\top\right] \quad (17)$$
$$\approx |T|^2\;\mathbb{E}\left[\big(\mathbb{E}_t[A_t] \otimes \mathbb{E}_t[\mathcal{D}s_t]\big)\big(\mathbb{E}_t[A_t] \otimes \mathbb{E}_t[\mathcal{D}s_t]\big)^\top\right] \quad (18)$$

Under the approximation assumption that the second-order statistics of the average activations, $\mathbb{E}_t[A_t]$, and the second-order statistics of the average derivatives, $\mathbb{E}_t[\mathcal{D}s_t]$, are uncorrelated, this becomes:

$$\approx |T|^2\;\mathbb{E}\left[\mathbb{E}_t[A_t]\,\mathbb{E}_t[A_t]^\top\right] \otimes \mathbb{E}\left[\mathbb{E}_t[\mathcal{D}s_t]\,\mathbb{E}_t[\mathcal{D}s_t]^\top\right] \quad (19)$$

This approximation is cheaper than the original KFC approximation because it is easier to compute a single outer product (after averaging over locations) than it is to compute an outer product at each location and then average.
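A sketch contrasting the two estimators for a single mini-batch of extracted patches follows. The array shapes and the way the $|T|$ scaling is split between the two factors are our own assumptions, made for illustration.

```python
import numpy as np

def conv_factor_stats(patches, DS, fast=True):
    """Kronecker factor statistics for a convolution layer.

    patches: (N, T, d) extracted receptive fields (d = k*k*c_in);
    DS: (N, T, c_out) per-location backpropagated derivatives.
    fast=False is the KFC estimator (per-location outer products, Eq. 15);
    fast=True uses the location-averaged approximation, splitting the
    overall |T|^2 scaling of Eq. 19 evenly between the two factors.
    """
    N, T, d = patches.shape
    if fast:
        A_bar, D_bar = patches.mean(axis=1), DS.mean(axis=1)
        AA = T * (A_bar.T @ A_bar) / N   # one outer product per example
        SS = T * (D_bar.T @ D_bar) / N
    else:
        Af, Df = patches.reshape(N * T, d), DS.reshape(N * T, -1)
        AA = T * (Af.T @ Af) / (N * T)   # |T| * E_t[A_t A_t^T]
        SS = (Df.T @ Df) / (N * T)       # E_t[Ds_t Ds_t^T]
    return AA, SS
```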
In the synchronous setting, for the large convolutional networks we experimented with, this trick resulted in a 20-30% decrease in overall wall-clock time per iteration with little effect on per-iteration progress.

Figure 8: Results from the experiment described in Appendix B. "decayKL" indicates the proposed step-size selection method and "decayLR" indicates standard exponential learning rate decay.

B STEP SIZE SELECTION VS. EXPONENTIAL LEARNING RATE DECAY

To compare our proposed step size selection from Sec. 5 with the commonly-used exponential learning rate decay, we performed a simple experiment training GoogLeNet. Both the learning rate and the threshold c on the squared Fisher norm are decayed by a factor of 0.96 after every 3200 iterations. The results of this experiment are plotted in Fig. 8, and indicate that our method outperforms the standard baseline.

C AUTOMATIC CONSTRUCTION OF THE K-FAC COMPUTATION GRAPH

In recent years, deep learning libraries have moved towards the computational graph abstraction (Bergstra et al., 2010; Abadi et al., 2016) to represent neural network computations. In this section we give a high-level description of an algorithm that scans a computational graph for parameters to which one of the various Kronecker-factored approximations can be applied, locates nodes containing the required information to compute the second-order statistics needed by the approximations, and then constructs a new graph that computes the approximations and uses them to update the parameters.

For the sake of discussion, we will assume the computation graph is a directed bipartite graph that has a set of operator nodes doing some computation, and some variable nodes that hold intermediate computational results. The trainable parameters are stored in memory that is loaded or mutated through read/write operator nodes. We also assume that the trainable parameters are grouped layer-wise as a set of weights and biases. Finally, we assume the gradient computation for the trainable parameters is performed by a computation graph (which is usually generated via automatic differentiation).

In analogy to generating the gradient computation graph through automatic differentiation, given an arbitrary computation graph with a set of trainable parameters, we would like to use the existing nodes in the given graph to automatically generate a new computation graph, a "K-FAC computation graph", that computes the Kronecker-factored approximate Fisher blocks associated with each group of parameters (typically layers in a neural net), and then uses them to update the parameters.

To compute the Fisher block for a given layer, we want to find all the nodes holding the gradients of the trainable parameters in the computation graph. One simple strategy is to traverse the computation graph from the gradient nodes to their immediate parent nodes.

A set of parameters has a Kronecker-factored approximation to its Fisher block if its corresponding gradient node has a matrix product or convolution operator node as its immediate parent node. For these parameters, the Kronecker factor matrices are the second-order statistics of the inputs to the parent operator node of their gradient nodes (typically the activities A and back-propagated derivatives $\mathcal{D}s$). For other sets of parameters an exact Fisher block can be computed instead (assuming they have low enough dimension).

In a typical neural network, most of the parameters are concentrated in weight matrices that are used for matrix product or convolution operations, for which one of the existing Kronecker-factored approximations applies. Homogeneous coordinates can be used if the weights and biases of the
same layer are annotated in the computation graph. The rest of the parameters are often gain and bias vectors for each hidden unit, and it is feasible to compute and invert exact Fisher blocks for these.

A neural network can also be instantiated multiple times in a computational graph (with shared parameters) to process different inputs. The gradient of the parameters shared across the instantiations is the sum of the individual gradients from each instantiation. Given such a computation graph, the immediate parent operator node of the gradient is a summation whose inputs are computed by the same type of operators. Without additional knowledge about the computation graph, one approximation is to treat the individual gradient contributions in the summation as statistically independent of each other (similarly to how gradient contributions from multiple spatial locations are treated as independent in the KFC approximation (Grosse and Martens, 2016)). Under this approximation, the Kronecker factors associated with the gradient can be computed by lumping the statistics associated with each of the gradient contributions together.

Our implementation of distributed K-FAC in TensorFlow applies the above strategy to automatically generate K-FAC computation graphs without requiring the user to modify their existing model-definition code.

Kronecker factors can sometimes be shared by the approximate Fisher blocks of two or more parameters. This is the case, for example, when a vector of units serves as inputs to two different weight matrix multiplication operations. In such cases, the computation of the second-order statistics can be reused, which is what we do in our implementation.
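A toy version of the scanning step, in TensorFlow 1.x-style graph code, might look as follows. It only identifies candidate operator nodes; everything downstream (registering statistics, building the inverse ops) is omitted, and the function name and op-type list are our own assumptions.

```python
import tensorflow as tf

def find_kfac_candidates(graph):
    """Return ops whose parameter gradients admit a Kronecker-factored
    Fisher block, i.e. matrix products and convolutions. Other parameter
    groups would instead be assigned an exact (small) Fisher block.
    """
    kinds = ("MatMul", "Conv2D")
    return [(op, [t.name for t in op.inputs])
            for op in graph.get_operations() if op.type in kinds]

# Hypothetical usage with TF1-style graph construction:
# candidates = find_kfac_candidates(tf.get_default_graph())
```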
Byk-VI9eg | [{"section_index": "0", "section_name": "GENERATIVE MULTI-ADVERSARIAL NETWORKS", "section_text": "Ishan Durugkar*, Ian Gemp*, Sridhar Mahadevan\nCollege of Information and Computer Sciences University of Massachusetts, Amherst 15\n{idurugkar, imgemp, mahadeva}@cs.umass.edu\nGenerative adversarial networks (GANs) are a framework for producing a gen erative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network (GMAN). a framework that extend. GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on In contrast, GMAN can be reliably trained with the original, untampered objec tive. We explore a number of design perspectives with the discriminator role rang ing from formidable adversary to forgiving teacher. Image generation tasks com paring the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The GAN framework is one of the more recent successes in a line of research on adversarial train ing in machine learning (Schmidhuber (1992); Bagnell (2005); Ajakan et al. (2014)) where games between learners are carefully crafted so that Nash equilibria coincide with some set of desired op- timality criteria. Preliminary work on GANs focused on generating images (e.g., MNIST (LeCun et al. (1998)), CIFAR (Krizhevsky (2009))), however, GANs have proven useful in a variety of appli- cation domains including learning censored representations (Edwards & Storkey (2015)), imitating expert policies (Ho & Ermon (2016)), and domain transfer (Yoo et al. (2016)). Work extending GANs to semi-supervised learning (Chen et al. (2016); Mirza & Osindero (2014); Gauthier (2014) Springenberg (2015)), inference (Makhzani et al. (2015); Dumoulin et al. (2016)), feature learning (Donahue et al. (2016)), and improved image generation (Im et al. (2016); Denton et al. (2015); Radford et al. (2015)) have shown promise as well.\nDespite these successes, GANs are reputably difficult to train. While research is still underway to. improve training techniques and heuristics (Salimans et al. (2016)), most approaches have focused on understanding and generalizing GANs theoretically with the aim of exploring more tractable formulations (Zhao et al. (2016); Li et al. (2015); Uehara et al. (2016); Nowozin et al. (2016))."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Generative adversarial networks (Goodfellow et al. (2014)) (GANs) are a framework for producing a generative model by way of a two-player minimax game. One player, the generator, attempts tc generate realistic data samples by transforming noisy samples, z, drawn from a simple distributior (e.g., z ~ N(0, 1)) using a transformation function Ge(z) with learned weights, 0. The generator receives feedback as to how realistic its synthetic sample is from another player, the discriminator which attempts to discern between synthetic data samples produced by the generator and samples drawn from an actual dataset using a function Dw(x) with learned weights, w.\nIn this paper, we theoretically and empirically justify generalizing the GAN framework to multiple discriminators. We review GANs and summarize our extension in Section 2. 
In Sections 3 and 4 we present our N-discriminator extension to the GAN framework (Generative Multi-Adversarial Networks) with several variants which range the role of the discriminator from formidable adversary to forgiving teacher. Section 4.2 explains how this extension makes training with the untampered minimax objective tractable. In Section 5, we define an intuitive metric (GMAM) to quantify GMAN\nperformance and evaluate our framework on a variety of image generation tasks. Section 6 conclude with a summary of our contributions and directions for future research.\nContributions- To summarize, our main contributions are: i) a multi-discriminator GAN frame work, GMAN, that allows training with the original, untampered minimax objective; ii) a generative multi-adversarial metric (GMAM) to perform pairwise evaluation of separately trained frameworks iii) a particular instance of GMAN, GMAN*, that allows the generator to automatically regulate training and reach higher performance (as measured by GMAM) in a fraction of the training time required for the standard GAN model..\nThe original formulation of a GAN is a minimax game between a generator, Ge(z) : z -> x, and discriminator, Dw(x) : x -> [0, 1],\nmin max V(D, G) = Ex~Pdat log(1 - D(G(z) G DeD\nIn their original work, Goodfellow et al. (2014) proved that given sufficient network capacities and an oracle providing the optimal discriminator, D* = arg maxp V(D, G), gradient descent on. pG(x) will recover the desired globally optimal solution, pg(x) = Pdata(x), so that the generator. distribution exactly matches the data distribution. In practice, they replaced the second term, log(1 D(G(z))), with - log(D(G(z))) to enhance gradient signals at the start of the game; note this is no. longer a zero-sum game. Part of their convergence and optimality proof involves using the oracle D*, to reduce the minimax game to a minimization over G only:.\nmin V(D*, G) = min{C(G) = - log(4) + 2 : JSD(Pdata|PG\nWe propose introducing multiple discriminators, which brings with it a number of design possibil-. ities. We explore approaches ranging between two extremes: 1) a more discriminating D (better. approximating maxp V(D, G)) and 2) a D better matched to the generator's capabilities. Math-. ematically, we reformulate G's objective as ming max F(V(D1, G),..., V(Dv, G)) for different choices of F (see Figure 1). Each D; is still expected to independently maximize its own V(D,, G) (i.e. no cooperation). We sometimes abbreviate V(D, G) with V, and F(V1,..., Vn) with FG(Vt)..\nHere, we consider multi-discriminator variants that attempt to better approximate maxp V(D, G providing a harsher critic to the generator.\nwhere pdata(x) is the true data distribution and pz(z) is a simple (usually fixed) distribution that is easy to draw samples from (e.g., N(0, 1)). We differentiate between the function space of discrim- inators, D, and elements of this space, D. Let pG(x) be the distribution induced by the generator, Ge(z). We assume D, G to be deep neural networks as is typically the case..\nwhere JSD denotes Jensen-Shannon divergence. Minimizing C(G) necessarily minimizes JSD. however, we rarely know D* and so we instead minimize V(D, G), which is only a lower bound\nhis perspective of minimizing the distance between the distributions, Pdata and pg, motivatec. i et al. (2015) to develop a generative model that matches all moments of pG(x) with pdata(x) (a ptimality) by minimizing maximum mean discrepancy (MMD). Another approach, EBGAN, (Zha t al. 
(2016)) explores a larger class of games (non-zero-sum games) which generalize the generato nd discriminator objectives to take real-valued \"energies\"' as input instead of probabilities. Nowozir. t al. (2016) and then Uehara et al. (2016) extended the JSD perspective on GANs to more genera. livergences, specifically f-divergences and then Bregman-divergences respectively.\nIn general, these approaches focus on exploring fundamental reformulations of V(D, G). Similarly our work focuses on a fundamental reformulation, however, our aim is to provide a framework that accelerates training of the generator to a more robust state irrespective of the choice of V.\nG A F(.) V(D,G) V(D, ,G) V(DN,G) D D D 1 2 N\nFigure 1: (GMAN) The generator trains using feedback aggregated over multiple discriminators. If. F := max, G trains against the best discriminator. If F := mean, G trains against an ensemble. We explore other alternatives to F in Sections 4.1 & 4.4 that improve on both these options"}, {"section_index": "3", "section_name": "3.1 MAXIMIZING V(D,G", "section_text": "In practice, maxD,eD V(Di, G) is not performed to convergence (or global optimality), so the above problem is oversimplified. Furthermore, introducing N discriminators affects the dynam ics of the game which affects the trajectories of the discriminators. This prevents us from claiming max{V1(t), ..., V(t)} > max{V/(t)} Vt even if we initalize D1(0) = D (0) as it is unlikely that Di(t) = D| (t) at some time t after the start of the game."}, {"section_index": "4", "section_name": "3.2 BOOSTING", "section_text": "There are a few differences between taking the max (case 1) and online boosting (case 2). In case 1 our booster is limited to selecting a single weak discriminator (i.e. a pure strategy), while in case 2 many boosting algorithms more generally use linear combinations of the discriminators. Moreover in case 2, a booster must make a prediction before receiving a loss function. In case 1, we assume access to the loss function at prediction time, which allows us to compute the max.."}, {"section_index": "5", "section_name": "1 A FORGIVING TEACHER", "section_text": "The previous perspectives focus on improving the discriminator with the goal of presenting a better approximation of maxp V(D, G) to the generator. Our next perspective asks the question, \"Is maxp V(D, G) too harsh a critic?\""}, {"section_index": "6", "section_name": "4.1 Soft-DISCRIMINATOR", "section_text": "In practice, training against a far superior discriminator can impede the generator's learning. This is because the generator is unlikely to generate any samples considered \"realistic\" by the discrimi nator's standards, and so the generator will receive uniformly negative feedback. This is problem\nFor a fixed G, maximizing FG(Vt) with F := max and N randomly instantiated copies of our dis-. criminator is functionally equivalent to optimizing V (e.g., stochastic gradient ascent) with random restarts in parallel and then presenting max,e{1,.,N} V(Di, G) as the loss to the generator --a very. pragmatic approach to the difficulties presented by the non-convexity of V caused by the deep net. Requiring the generator to minimize the max forces G to generate high fidelity samples that must. hold up under the scrutiny of all N discriminators, each potentially representing a distinct max..\nWe can also consider taking the max over N discriminators as a form of boosting for the discrim-. 
inator's online classification problem (online because G can produce an infinite data stream). The boosted discriminator is given a sample xt and must predict whether it came from the generator or the dataset. The booster then makes its prediction using the predictions of the N weaker D;.\nIt is possible to train the weak discriminators using boosting and then ignore the booster's prediction by instead presenting max{ V, }. We explore both variants in our experiments, using the adaptive al- gorithm proposed in Beygelzimer et al. (2015). Unfortunately, boosting failed to produce promising results on the image generation tasks. It is possible that boosting produces too strong an adversary for learning which motivates the next section. Boosting results appear in Appendix A.7.\natic because the information contained in the gradient derived from negative feedback only dictates where to drive down pg(x), not specifically where to increase pg(x). Furthermore, driving down pG(x) necessarily increases pG(x) in other regions of (to maintain Jx PG(x) = 1) which may or may not contain samples from the true dataset (whack-a-mole dilemma). In contrast, a generator is more likely to see positive feedback against a more lenient discriminator, which may better guide a generator towards amassing pg(x) in approximately correct regions of I'.\nFor this reason, we explore a variety of functions that allow us to soften the max operator. We choose to focus on soft versions of the three classical Pythagorean means parameterized by where. X = 0 corresponds to the mean and the max is recovered as -> oo:.\nN AMsoft(V,A) = WiVi N GMsoft(V,X) = exp Wi log (-Vi) N HMsoft(V,X) =\nwhere w; = eAVi /,eV with X 0, V, < 0. Using a softmax also has the well known advantage of being differentiable (as opposed to subdifferentiable for max). Note that we only require continuity to guarantee that computing the softmax is actually equivalent to computing V(D, G) where D is some convex combination of D; (see Appendix A.5)."}, {"section_index": "7", "section_name": "4.2 USING THE ORIGINAL MINIMAX OBJECTIVE", "section_text": "To illustrate the effect the softmax has on training, observe that the component of AMsoft(V, 0 relevant to generator training can be rewritten as\nN 1 og(1- D(x) N 2\nwhere z = IIN (1 - D;(). Note that the generator gradient, | log(2) |, is minimized at z = 1 over z E (0, 1]'. From this form, it is clear that z = 1 if and only if D, = 0Vi, so G only receives a vanishing gradient if all D, agree that the sample is fake; this is especially unlikely for large N. In other words, G only needs to fool a single D, to receive constructive feedback. This result allows the generator to successfully minimize the original generator objective, log(1 - D). This is in contrast to the more popular - log(D) introduced to artificially enhance gradients at the start of training.\nAt the beginning of training, when maxp, V(D, G) is likely too harsh a critic for the generator, we. can set closer to zero to use the mean, increasing the odds of providing constructive feedback tc the generator. In addition, the discriminators have the added benefit of functioning as an ensemble,. reducing the variance of the feedback presented to the generator, which is especially important. when the discriminators are far from optimal and are still learning a reasonable decision boundary. As training progresses and the discriminators improve, we can increase A to become more critical. 
of the generator for more refined training.."}, {"section_index": "8", "section_name": "4.3 MAINTAINING MULTIPLE HYPOTHESES", "section_text": "We argue for this ensemble approach on a more fundamental level as well. Here, we draw on the density ratio estimation perspective of GANs (Uehara et al. (2016)). The original GAN proof assumes we have access to pdata(x), if only implicitly. In most cases of interest, the discriminator only has access to a finite dataset sampled from pdata(x); therefore, when computing expectations of V(D, G), we only draw samples from our finite dataset. This is equivalent to training a GAN with pdata(x) = Pdata which is a distribution consisting of point masses on all the data points in the dataset. For the sake of argument, let's assume we are training a discriminator and generator, each\nN AMsoft(V,X)=wiVi N GMsoft(V, X) = - exp W; IC N HMsoft(V,A) = Wi\nwith infinite capacity. In this case, the global optimum (pG(x) = Pdata(x)) fails to capture any of the interesting structure from pdata(x), the true distribution we are trying to learn. Therefore, it is actually critical that we avoid this global optimum\nO . p(x)\nFigure 2: Consider a dataset consisting of the nine 1-dimensional samples in black. Their corre- sponding probability mass function is given in light gray. After training GMAN, three discrimina- tors converge to distinct local optima which implicitly define distributions over the data (red, blue yellow). Each discriminator may specialize in discriminating a region of the data space (placing more diffuse mass in other regions). Averaging over the three discriminators results in the distribu- tion in black, which we expect has higher likelihood under reasonable assumptions on the structure of the true distribution.\nIn practice, this degenerate result is avoided by employing learners with limited capacity and corrupt ing data samples with noise (i.e., dropout), but we might better accomplish this by simultaneously training a variety of limited capacity discriminators. With this approach, we might obtain a diverse set of seemingly tenable hypotheses for the true pdata(x). Averaging over these multiple locally. optimal discriminators increases the entropy of pdata(x) by diffusing the probability mass over the. data space (see Figure 2 for an example).."}, {"section_index": "9", "section_name": "4.4 AUTOMATING REGULATION", "section_text": "The problem of keeping the discriminator and generator in balance has been widely recognized in. previous work with GANs. Issues with unstable dynamics, oscillatory behavior, and generator col-. lapse are not uncommon. In addition, the discriminator is often times able to achieve a high degree of classification accuracy (producing a single scalar) before the generator has made sufficient progress on the arguably more difficult generative task (producing a high dimensional sample). Salimans. et al. (2016) suggested label smoothing to reduce the vulnerability of the generator to a relatively. superior discriminator. Here, we explore an approach that enables the generator to automatically. temper the performance of the discriminator when necessary, but still encourages the generator to. challenge itself against more accurate adversaries. Specifically, we augment the generator objective:. min FG(Vi) - f() (7)\nmin Fg(V)-f( G,X>0\nwhere f(X) is monotonically increasing in which appears in the softmax equations, (3)-(5). In experiments, we simply set f(X) = cA with c a constant (e.g., O.001). 
The generator is incentivized to increase to reduce its objective at the expense of competing against the best available adversary D* (see Appendix A.6)."}, {"section_index": "10", "section_name": "5 EVALUATION", "section_text": "Evaluating GANs is still an open problem. In their original work, Goodfellow et al. (2014) repor log likelihood estimates from Gaussian Parzen windows, which they admit, has high variance anc is known not to perform well in high dimensions. Theis et al. (2016) recommend avoiding Parzer windows and argue that generative models should be evaluated with respect to their intended appli cation. Salimans et al. (2016) suggest an Inception score, however, it assumes labels exist for the dataset. Recently, Im et al. (2016) introduced the Generative Adversarial Metric (GAM) for mak ing pairwise comparisons between independently trained GAN models. The core idea behind theii approach is given two generator, discriminator pairs (G1, D1) and (G2, D2), we should be able tc learn their relative performance by judging each generator under the opponent's discriminator.\nIn GMAN, the opponent may have multiple discriminators, which makes it unclear how to perform the swaps needed for GAM. We introduce a variant of GAM, the generative multi-adversarial metric (GMAM), that is amenable to training with multiple discriminators,\n' GMAM = log\nwhere a and b refer to the two GMAN variants (see Section 3 for notation Fg(V)). The idea here is similar. If G2 performs better than Gj with respect to both Dj and D2, then GMAM>0 (remember V<0 always). If G1 performs better in both cases, GMAM<0, otherwise, the result is indeterminate.\nWe evaluate the aforementioned variations of GMAN on a variety of image generation tasks: MNIST. (LeCun et al. (1998)). CIFAR-10 (Krizhevsky (2009)) and CelebA (Liu et al. (2015)). We focus on rates of convergence to steady state along with quality of the steady state generator according to the GMAM metric. To summarize, loosely in order of increasing discriminator leniency, we compare."}, {"section_index": "11", "section_name": "5.2.1 MNIST", "section_text": "Our GMAM metric (see Table 1) agrees with the relative quality of images in Figure 5 with GMAN achieving the best overall performance. Figure 6 reveals GMAN*'s attempt to regulate the difficulty\nTable 1: Pairwise GMAM metric means with stdev for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse Scores are obtained by summing each variant's column..\nF-boost: A single AdaBoost.OL-boosted discriminator (see Appendix A.7). P-boost: D; is trained according to AdaBoost.OL. A max over the weak learner losses presented to the generator instead of the boosted prediction (see Appendix A.7). GMAN-max: max{ V} is presented to the generator. GAN: Standard GAN with a single discriminator (see Appendix A.2). mod-GAN: GAN with modified objective (generator minimizes log(D(G(z))) GMAN-X: GMAN with F :=arithmetic softmax with parameter X. GMAN*: The arithmetic softmax is controlled by the generator through X.\nAll generator and discriminator models are deep (de)convolutional networks (Radford et al. (2015)). and aside from the boosted variants, all are trained with Adam (Kingma & Ba (2014)) and batch normalization (Ioffe & Szegedy (2015)). Discriminators convert the real-valued outputs of their. networks to probabilities with squashed-sigmoids to prevent saturating logarithms in the minimax. {2. 5} discriminators. 
We maintain discriminator diversity by varying dropout and network depth.\nFigure 3 reveals that increasing the number of discriminators reduces the number of iterations to steady-state by 2x on MNIST; increasing N (the size of the discriminator ensemble) also has the added benefit of reducing the variance the minimax objective over runs. Figure 4 displays the vari- ance of the same objective over a sliding time window, reaffirming GMAN's acceleration to steady. state. Figure 5 corroborates this conclusion with recognizable digits appearing approximately an epoch before the single discriminator run; digits at steady-state appear slightly sharper as well\nScore Variant GMAN* GMAN-0 GMAN-max mod-GAN 0.127 GMAN* 0.020 0.009 0.028 0.019 0.089 0.036 0.007 GMAN-0 0.020 0.009 -0.0130.015 0.018 0.027 -0.034 GMAN-max 0.028 0.019 0.013 0.015 0.011 0.024 B 0.122 mod-GAN 0.089 0.036 0.018 0.027 0.011 0.024\n0.0 -0.5 -1.0 -1.5 C -2.0 -2.5 N=1 original -3.0 N=1 modified -3.5 N=2 N=5 -4.0 0 1000 2000 3000 4000 5000 6000 Iteration #\n0.5 -1.0 -1.5 -2.0 -2.5 N=1 original 3.0 N=1 modified -3.5 N=2 N=5 -4.0 1000 2000 3000 4000 5000 6000\nFigure 3: Generator objective, F, averaged. over 5 training runs on MNIST. Increas- ing the number of discriminators accelerates convergence of F to steady state (solid line) and reduces its variance, o2 (filled shadow. 1o). Figure 4 provides alternative evidence of GMAN*'s accelerated convergence..\n1 epoch 2 epochs 3 epochs 7 6 5 epochs 5 3 8 3 10 epochs 1 Discriminator 2 Discriminators 5 Discriminators\nFigure 5: Comparison of imag. epochs for N = {1, 2, 5} using GMAN-0 on MNIST\nof the game to accelerate learning. Figure 7 displays the GMAM scores comparing fixed X's to the variable \\ controlled by GMAN*\n101 N=1 original N=1 modified 100 N=2 N=5 10-1 10-2 10-3 0 1000 2000 3000 4000 5000 600 Iteration #\nFigure 4: Stdev, , of the generator objec- tive over a sliding window of 500 iterations Lower values indicate a more steady-state GMAN* with N = 5 achieves steady-state at ~2x speed of GAN (N = 1). Note Fig- ure 3's filled shadows reveal stdev of F over runs, while this plot shows stdev over time.\n. 1.1 N=2 Score 1* X=1 X=0 1.0 N=5 (N = 5) 0.9 -0.008 \\* 0.019 0.028 0.009 0.010 0.8 0.001 X=1 0.008 0.008 0.7 0.009 0.010 0.6 -0.025 X =0 0.019 0.008 0.5 0.010 0.010 0.4 '0 2000 4000 6000 8000 1000012000 Iteration # Figure 6: GMAN* regulates difficulty of the Figure 7: Pairwise GMAM for GMAN-X and\nGMAM Figure 6: GMAN* regulates difficulty of the Figure 7: Pairwise for GMAN-X and stdev(GMAM) game by adjusting X. Initially, G reduces A to GMAN* (X*) over 5 runs on MNIST. ease learning and then gradually increases X for a more challenging learning environment.\nWe see similar accelerated convergence behavior for the CelebA dataset in Figure 8\nFigure 8: Image quality improvement across number of generators at same number of iterations for GMAN-0 on CelebA.\nFigure 9 displavs imag erated by GMAN-0 on CIFAR-10. See Appendix A.3 for more results\nReal Images. Generated Images\nFigure 9: Images generated by GMAN-0 on the CIFAR-10 dataset\nWe introduced multiple discriminators into the GAN framework and explored discriminator roles ranging from a formidable adversary to a forgiving teacher. Allowing the generator to automatically tune its learning schedule (GMAN*) outperformed GANs with a single discriminator on MNIST. In general, GMAN variants achieved faster convergence to a higher quality steady state on a variety of tasks as measured by a GAM-type metric (GMAM). 
In addition, GMAN makes using the original. GAN objective possible by increasing the odds of the generator receiving constructive feedback..\nIn future work, we will look at more sophisticated mechanisms for letting the generator control the game as well as other ways to ensure diversity among the discriminators. Introducing multiple generators is conceptually an obvious next step, however, we expect difficulties to arise from more complex game dynamics. For this reason, game theory and game design will likely be important."}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "We acknowledge helpful conversations with Stefan Dernbach, Archan Ray, Luke Vilnis, Ben Turtel Stephen Giguere, Rajarshi Das, and Subhransu Maji. We also thank NVIDIA for donating a K4C GPU. This material is based upon work supported by the National Science Foundation under Gran Nos. IIS-1564032. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF\n100 iterations 500 iterations 1000 iterations 2000 iterations 5000 iterations 9000 iterations 1 Discriminator 2 Discriminators 3 Discriminators\nWe also found that GMAN is robust to mode collapse. We believe this is because the generator must appease a diverse set of discriminators in each minibatch. Emitting a single sample will score well for one discriminator at the expense of the rest of the discriminators. Current solutions (e.g., minibatch discrimination) are quadratic in batch size. GMAN, however, is linear in batch size"}, {"section_index": "13", "section_name": "BIBLIOGRAPHY", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016\nHana Ajakan, Pascal Germain, Hugo Larochelle, Francois Laviolette, and Mario Marchand Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446, 2014\nJeff Donahue, Philipp Krahenbuhl, and Trevor Darrell. Adversarial feature learning. arXiv preprin arXiv:1605.09782, 2016.\nIan Goodfellow. Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor mation Processing Systems, pp. 2672-2680, 2014.\nJonathan Ho and Stefano Ermon. Generative adversarial imitation learning. .arXiv preprint arXiv:1606.03476, 2016\nDaniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating image. with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprin arXiv:1412.6980, 2014.\nAlex Krizhevsky. Learning multiple layers of features from tiny images. Master's Thesis, 2009\nYann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits 1998.\nZiwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild In Proceedings of International Conference on Computer Vision (ICCV). December 2015\nAlec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. 
arXiv preprint arXiv:1511.06434. 2015.\nSiamak Ravanbakhsh, Francois Lanusse, Rachel Mandelbaum, Jeff Schneider, and Barnabas Poczos Enabling dark energy science with deep generative models of galaxy images. arXiv preprin arXiv:1609.05796, 2016\nTim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen Improved techniques for training gans. arXiv preprint arXiv:1606.03498. 2016.\nJurgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation 4(6):863-879, 1992\nLucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844v3. 2016.\nJunbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network arXiv preprint arXiv:1609.03126, 2016.\nSebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. arXiy. preprint arXiv:1606.00709. 2016\nJost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015."}, {"section_index": "14", "section_name": "A APPENDIX", "section_text": "See Figures 10, 11, 12, and 13\n-0.2 N=1 -0.4 N=2 -0.6 N=5 C -0.8 -1.0 -1.2 -1.4 -1.6 0 2000 4000 6000 8000100001200 Iteration #\nFigure 10: Generator objective, F, averaged over 5 training runs on CelebA. Increasing. N (# of D) accelerates convergence of F to steady state (solid line) and reduces its vari- ance, 2 (filled shadow 1). Figure 11 pro. vides alternative evidence of GMAN-O's ac- celerated convergence.\nN=1 Original 0.3 N=1 Modified N=2=0 -0.4 N=2=1 ( 0.5 0.6 -0.7 -0.8 5000 10000 15000 20000 25000 30000 Iteration #\n-0.3 0.4 () -0.5 0.6 -0.7 -0.8\nFigure 12: Generator objective, F, averaged over 5 training runs on CIFAR-10. Increas- ing N (# of D) accelerates convergence of F to steady state (solid line) and reduces its variance, 2 (filled shadow 1). Figure 13 provides alternative evidence of GMAN-0's accelerated convergence."}, {"section_index": "15", "section_name": "A.2 ADDITIONAL GMAM TABLES", "section_text": "See Figures 14 and 15\nFigure 11: Stdev, , of the generator objec- tive over a sliding window of 500 iterations. Lower values indicate a more steady-state GMAN-0 with N = 5 achieves steady-state at ~2x speed of GAN (N = 1). Note Fig- ure 10's filled shadows reveal stdev of F over runs, while this plot shows stdev over time.\n100 N=1 Original N=1 Modified N=2,=0 10 N=2, X=1 10-2 10-3 0 5000 10000 15000 20000 25000 30000 Iteration #\nFigure 13: Stdev, , of the generator objec- tive over a sliding window of 500 iterations Lower values indicate a more steady-state GMAN-0 with N = 5 achieves steady-state at ~2x speed of GAN (N = 1). Note Fig- ure 12's filled shadows reveal stdev of F' over runs, while this plot shows stdey over time.\nSee Tables 2, 3, 4, 5, 6. Increasing the number of discriminators from 2 to 5 on CIFAR-10 signif icantly improves scores over the standard GAN both in terms of the GMAM metric and Inception Scores.\nTable 2: Pairwise GMAM metric means for select models on MNIST. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores. 
are obtained by summing each column.\nScore Variant GMAN-0 GMAN-1 GMAN* mod-GAN 0.172 GMAN-0 0.022 0.062 0.088 BReter 0.050 GMAN-1 0.022 0.006 0.078 0.055 GMAN* 0.062 0.006 0.001 -0.167 mod-GAN 0.088 0.078 0.001\nScore Variant GMAN-0 GMAN-1 GMAN* mod-GAN 0.172 GMAN-0 0.022 0.062 0.088 Reeer 1 0.050 GMAN-1 0.022 0.006 0.078 0.055 GMAN* 0.062 0.006 0.001 0.167 mod-GAN 0.088 0.078 0.001\nTable 3: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores. are obtained by summing each column. GMAN variants were trained with two discriminators.\nGMAN-0 GMAN-1 mod-GAN GMAN* Score 5.878 0.193 5.765 0.168 5.738 0.176 5.539 0.099\nTable 4: Inception score means with standard deviations for select models on CIFAR-10. Higher scores are better. GMAN variants were trained with two discriminators..\nScore Variant GMAN-0 GMAN* GMAN-1 mod-GAN 0.180 GMAN-0 0.008 0.041 0.132 Beter 0.122 GMAN* 0.008 0.038 0.092 0.010 GMAN-1 0.041 0.038 0.089 - -0.313 mod-GAN 0.132 0.092 0.089\nGMAN-1 GMAN-0 GMAN* mod-GAN Score 6.001 0.194 5.957 0.135 5.955 0.153 5.738 0.176\nTable 6: Inception score means with standard deviations for select models on CIFAR-10. Highe scores are better. GMAN variants were trained with five discriminators.\n1 Discriminator 5 discriminator GMAN* 5 discriminator GMAN- 0\nFigure 14: Sample of pictures generated on CelebA cropped dataset\nTable 5: Pairwise GMAM metric means for select models on CIFAR-10. For each column, a positive GMAM indicates better performance relative to the row opponent; negative implies worse. Scores are obtained by summing each column. GMAN variants were trained with five discriminators.\nGenerated Images Real Images.\nFigure 15: Sample of pictures generated by GMAN-0 on CIFAR dataset.\nA GAN framework with two discriminators appeared in Yoo et al. (2016), however, it is applica ble only in a semi-supervised case where a label can be assigned to subsets of the dataset (e.g. ' = {t = Domain 1, = Domain 2,...}). In contrast, our framework applies to an unsu pervised scenario where an obvious partition of the dataset is unknown. Furthermore, extending GMAN to the semi-supervised domain-adaptation scenario would suggest multiple discriminators per domain, therefore our line of research is strictly orthogonal to that of their multi-domain dis criminator approach. Also, note that assigning a discriminator to each domain is akin to prescribing a new discriminator to each value of a conditional variable in conditional GANs (Mirza & Osinder (2014)). In this case, we interpret GMAN as introducing multiple conditional discriminators and no a discriminator for each of the possibly exponentially many conditional labels\nIn Section 4.4, we describe an approach to customize adversarial training to better suit the devel opment of the generator. An approach with similar conceptual underpinnings was described ir Ravanbakhsh et al. (2016), however, similar to the above, it is only admissible in a semi-supervisec scenario whereas our applies to the unsupervised case"}, {"section_index": "16", "section_name": "A.5 Softmax REPRESENTABILITY", "section_text": "Let softmax(Vi) = V e [miny,, maxv,]. Also let a = argmin; Vi, b = arg max, Vi, and V(t) = V((1 - t)Da + tD) so that V(0) = Va and V(1) = V. The softmax and minimax objective. V(D, G) are both continuous in their inputs, so by the intermediate value theorem, we have tha t E [0,1] s.t. V(t) = V, which implies D E D s.t. 
V(D, G) = V. This result implies tha. the softmax (and any other continuous substitute) can be interpreted as returning V(D, G) for som. D selected by computing an another, unknown function over the space of the discriminators. This. result holds even if D is not representable by the architecture chosen for D's neural network.."}, {"section_index": "17", "section_name": "A.6 UNCONSTRAINED OPTIMIZATION", "section_text": "To convert GMAN* minimax formulation to an unconstrained minimax formulation. we introduce an auxiliary variable, A, define X(A) = log(1 + eA), and let the generator minimize over A E R.\nFigure 16: Sample of pictures generated across 4 independent runs on MNIST with F-boost (simila results with P-boost)."}, {"section_index": "18", "section_name": "A.8 EXPERIMENTAL SETUP", "section_text": "All experiments were conducted using an architecture similar to DCGAN (Radford et al. (2015)) We use convolutional transpose layers (Zeiler et al. (201o)) for G and strided convolutions for L. except for the input of G and the last layer of D. We use the single step gradient method as ir. (Nowozin et al. (2016)), and batch normalization (Ioffe & Szegedy (2015)) was used in each o. the generator layers. The different discriminators were trained with varying dropout rates fron. 0.3, 0.7|. Variations in the discriminators were effected in two ways. We varied the architecture b varying the number of filters in the discriminator layers (reduced by factors of 2, 4 and so on), a well as varying dropout rates. Secondly we also decorrelated the samples that the disriminators were. training on by splitting the minibatch across the discriminators. The code was written in Tensorflov. (Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs. Code to reproduce experiments and plots is at https://github.com/iDurugkar/GMAN. Specifics for the MNIST architecture and training are:.\nAdaBoost.OL (Beygelzimer et al. (2015)) does not require knowledge of the weak learner's slight edge over random guessing (P(correct label) = 0.5 + y E (0, 0.5]), and in fact, allows y < 0. This. is crucial because our weak learners are deep nets with unknown, possibly negative, y's..\nH H H H H\nGenerator latent variables z ~ U (1, 1) 100 Generator convolution transpose layers: (4, 4, 128) , (8, 8, 64) , (16, 16, 32) , (32, 32, 1) Base Discriminator architecture: (32, 32, 1) , (16, 16, 32) , (8, 8, 64) , (4, 4, 128) Variants have either convolution 3(4,4,128) removed or all the filter sizes are dividedby 2 or 4. That is,(32,32,1),(16,16,16),(8,8,32),(4,4,64) or (32, 32, 1) , (16, 16, 8) , (8, 8, 16), (4, 4, 32). ReLu activations for all the hidden units. Tanh activation at the output units of the generator.. Sigmoid at the output of the Discriminator.. Training was performed with Adam (Kingma & Ba (2014)) (lr = 2 10-4, 1 = 0.5). MNIST was trained for 20 epochs with a minibatch of size 100.. CelebA and CIFAR were trained over 24000 iterations with a minibatch of size 100."}] |
BJluGHcee | [{"section_index": "0", "section_name": "TENSORIAL MIXTURE MODELS", "section_text": "Or Sharir. Ronen Tamari. Naday Cohen & Amnon Shashua\n{or. sharir, ronent, cohennadav, shashua}@cs.huji.ac.il\nWe introduce a generative model, we call Tensorial Mixture Models (TMMs) based on mixtures of basic component distributions over local structures (e.g patches in an image) where the dependencies between the local-structures are rep- resented by a'priors tensor' holding the prior probabilities of assigning a compo. nent distribution to each local-structure.\nIn their general form, TMMs are intractable as the priors tensor is typically oj exponential size. However, when the priors tensor is decomposed it gives rise to an arithmetic circuit which in turn transforms the TMM into a Convolutiona Arithmetic Circuit (ConvAC). A ConvAC corresponds to a shallow (single hidder layer) network when the priors tensor is decomposed by a CP (sum of rank-1 approach and corresponds to a deep network when the decomposition follows the Hierarchical Tucker (HT) model.\nThe ConvAC representation of a TMM possesses several attractive properties. First, the inference is tractable and is implemented by a forward pass through. a deep network. Second, the architectural design of the model follows the deep. networks community design, i.e., the structure of TMMs is determined by just two easily understood factors: size of pooling windows and number of channels. Finally, we demonstrate the effectiveness of our model when tackling the problem of classification with missing data, leveraging TMMs unique ability of tractable. marginalization which leads to optimal classifiers regardless of the missingness distribution."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Generative models have played a crucial part in the early development of the field of Machin Learning. However, in recent years they were mostly cast aside in favor of discriminative models lead by the rise of ConvNets (LeCun et al. 2015), which were found to perform equally well o better than classical generative counter-parts on almost any task. Despite the increased interest ir unsupervised learning, many of the recent studies on generative models choose to focus solely oj the generation capabilities of these models (Goodfellow et al. T2014Gregor et al.2015}van de1 Oord et al.]2016} Dinh et al.]2016] Tran et al.]2016] Chen et al.]2016] Kingma et al.]2016] Kin and Bengio 2016). There is much less emphasis on leveraging generative models to solve actua tasks, e.g. semi-supervised learning (Kingma et al.[2014f|Springenberg) 2016f Maale et al. 2016 Forster et al.|2015 Salimans et al.2016), image restoration (Dinh et al. 2014 Bengio et al 2014 van den Oord et al.[2016f Zoran and Weiss! 2011f Rosenbaum and Weiss! 2015,Soh1-Dickstei1 et al.2015] Theis and Bethge|2015) or unsupervised feature representation (Radford et al.]2016 Coates et al.|2011). Nevertheless, work on generative models for solving actual problems are yet t show a meaningful advantage over competing discriminative models.\nOn the most fundamental level, the difference between a generative model and a discriminative one is simply the difference between learning P(X, Y) and learning P(Y|X), respectively. While i is always possible to infer P(Y|X) given P(X, Y), it might not be immediately apparent why the generative objective is preferred over the discriminative one. In|Ng and Jordan(2002), this ques tion was studied w.r.t. 
the sample complexity, proving that under some cases it can be significantly lesser in favor of the generative classifier. However, their analysis was limited only to specific pair. of discriminative and generative classifiers, and they did not present a general case where the the generative method is undeniably preferred. We wish to highlight one such case, where learning"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "P(X, Y) is provenly better regardless of the models in question, by examining the problem of clas. sification with missing data. Despite the artificially well-behave nature of the typical classification benchmarks presented in current publications, real-world data is usually riddled with noise and miss- ing values - instead of observing X we only have a partial observation X - a situation that tends. to be ignored in modern research. Discriminative models have no natural mechanisms to handle missing data and instead must rely on data imputation, i.e. filling missing data by a preprocessing. step prior to prediction. Unlike the discriminative approaches, generative models are naturally fitted. to handle missing data by simply marginalizing over the unknown values in P(X, Y), from which. we can attain P(Y|X) by an application of Bayes Rule. Moreover, under mild assumptions which. apply to many real-world settings, this method is proven to be optimal regardless of the process by. which values become missing (see sec.5|for a more detailed discussion)..\nWhile almost all generative models can represent P(X, Y), only few can actually infer its exact value efficiently. Models which possess this property are said to have tractable inference. Many studies specifically address the hard problem of learning generative models that do not have this property. Notable amongst those are works based on Variational Inference (Kingma and Welling 2014} Kingma et al.2014] Blei et al.]2003] Wang and Grimson2007] Makhzani et al.]2015 Kingma et al.|2016), which only provide approximated inference, and ones based on Generative Adversarial Networks (Goodfellow et al. 2014]Radford et al.| 2016, Springenberg2016Chen et al.[2016f Salimans et al.[2016] Makhzani et al.2015), which completely circumvent the infer- ence problem by restructuring the learning problem as a two-player game of discriminative objec. tives - both of these approaches are incapable of tractable inference.\nTo summarize, most generative models do not have tractable inference, and of the few models which do, they all possess one or more of the following shortcomings: (i) they do not possess the expressive capacity to model high-dimensional data (e.g. images), (ii) they require explicitly designing all the. dependencies of the data, or (iii) they do not have tractable marginalization..\nWe present in this paper a family of generative models we call Tensorial Mixture Models (TMMs). which aim to address the above shortcomings of alternative models. Under TMMs, we assume tha the data generated by our model is composed of a sequence of local-structures (e.g. patches in an. image), where each local-structure is generated from a small set of simple component distributions. (e.g. Gaussian), and the dependencies between the local-structures are represented by a prior tensor. holding the prior probabilities of assigning a component distribution to each local-structure. In their. general form, TMMs are intractable as the prior tensor is typically of exponential size. However, by. 
decomposing the prior tensor, inference of TMMs becomes realizable by Convolutional Arithmetic. Circuits (ConvACs) - a recently proposed (Cohen et al.[2016a) ConvNet architecture based on twc.\nThere are several advantages to models with tractable inference (e.g. they could be simpler to train), and as we have shown above, this property is also a requirement for proper handling of missing data in the form of marginalization. In practice, to marginalize over P(X, Y) means to perform integration on it, thus, even if it is tractable to compute P(X, Y), it still might not be tractable to compute every possible marginalization. Models which are capable of this are said to have tractable marginalization. Mixture Models (e.g. Gaussian Mixture Models) are the classical example of a generative model with tractable inference, as well as tractable marginalization. Though they are simple to understand, easy to train and even known to be universal - can approximate any distribution given sufficient capacity -- they do not scale well to high-dimensional data. The Gaussian Mixture Model is an example of a shallow model - containing just a single latent variable - with limited expressive efficiency. More generally, Graphical Models are deep and exponentially more expressive, capable of representing intricate relations between many latent variables. While not all kinds of Graphical Models are tractable, many are, e.g.Latent Tree Models (Zhang2004 Mourad et al.2013) and Sum-Product Networks (Poon and Domingos2011). The main issue with generic graphical models is that by virtue of being too general they lack the inductive bias needed to efficiently model unstructured data, e.g. images or text. Despite the success of structure learning algorithms (Huang et al.]2015] Gens and Domingos]2013]Adel et al.]2015) on structured datasets, such as discovering a hierarchy among diseases in patients health records, there are no similar results on unstructured datasets. Indeed some recent works on the subject have failed to solve even simple handwritten digit classification tasks (Adel et al.|[2015). Thus deploying graphical models on such cases requires experts to manually design the model. Other attempts which harness neural networks blocks (Dinh et al.| 2014f [2016) offer tractable inference, but not tractable marginalization.\ninput hidden layer 0. hidden layer L-1 coordinates 1x1 conv product by indicators. pooling 1x1 conv product dense pooling (output) V 1i=d\nFigure 1: The decoding algorithm of an arbitrary tensor decomposition represented by a ConvAC\noperations, weighted sum and product pooling - which enables both tractable inference as well as tractable marginalization. While Graphical Models are typically hard to design, ConvACs follow the same design conventions of modern ConvNets, which reduces the task of designing a model to simply choosing the number of channels at each layer, and size of pooling windows. ConvACs were also the subject of several theoretical studies on its expressive capacity (Cohen et al.|2016af Cohen and Shashua] 2016b) and comparing them to ConvNets (Cohen and Shashua2016a), showing they are especially suitable for high-dimensional natural data (images, audio, etc.) with a non-negligible advantage over standard ConvNets. 
Sum-Product Networks are another kind of Graphical Model realizable by Arithmetic Circuits, but they do not posses the same theoretical guarantees, nor do they provide a simple method to design efficient and expressive models.\nWe begin by establishing the minimal background in the field of tensor analysis required for fol- lowing our work (see app.|A|for a more detailed review of the subject). A tensor is best thought of as a multi-dimensional array Ad1,.,dn E R, where Vi E [N], d, E [Mi] and N is referred to as the order of the tensor. For our purposes we typically assume that M1 = ... = M = M, and denote it as A E (RM )ON. It is immediately apparent that performing operations with tensors, or simply storing them, quickly becomes intractable due to their exponential size of MN. That is one of the primary motivations behind tensor decomposition, which can be seen as a generalization of low-rank matrix factorization.\nThe relationship between tensor decomposition and networks arises from the simple observation that through decomposition one can tradeoff storage complexity with computation, where the type. of computation consists of sums and products. Specifically, the decompositions could be described. by a compact representation coupled with a decoding algorithm of polynomial complexity to retrieve. the entries of the tensor. Most tensor decompositions have a decoding algorithm representable via. computation graphs of products and weighted sums, also known as Arithmetic Circuits (Shpilka and. Yehudayoff2010) or Sum-Product Networks (Poon and Domingos|2011). More specifically, these circuits take as input N indicator vectors d1,..., N, representing the coordinates (d1,..., d),. where o, = 1[j=d], and output the value of Ad1,..,dn, where the weights of these circuits form the. compact representation of tensors.\nApplying this perspective to two of the most common decomposition formats, CANDE. COMP/PARFAC (CP) and Hierarchical Tucker (HT), give rise to a shared framework for repre. senting their decoding circuits by convolutional networks as illustrated in fig.1] where a shallow. network with one hidden layer corresponds to the CP decomposition, and a deep network with log2(N) hidden layers corresponds to the HT decomposition. The networks consists of just product. pooling and 11 conv layers. Having no point-wise activations between the layers, the non-linearity. of the models stems from the product pooling operation itself. The pooling layers also control the. depth of the network by the choice of the size and the shape of pooling windows. The conv operator. is not unlike the standard convolutional layer of ConvNets, with the sole difference being that it may. operate without coefficient sharing. i.e. the filters that generate feature maps by sliding across the\nThe rest of the article is organized as follows. In sec.2|we briefly review mathematical background on tensors required in order to follow our work. This is followed by sec. 3|which presents our generative model and its theoretical properties. How our model is trained is covered in sec.4] and a thorough discussion on the importance of marginalization and its implications on our model is given in sec.5 We conclude the article by presenting our experiments on classification with missing data in sec.6 a and revisit the main points of the article and future research in sec.\nArithmetic Circuits constructed from the above conv and product pooling layers are called Con. 
volutional Arithmetic Circuits, or ConvACs for short, first suggested by Cohen et al.[(2016a) as : theoretical framework for studying standard convolutional networks, sharing many of the defining traits of the latter, most noteworthy, the locality, sharing and pooling properties of ConvNets. Unlike. general circuits, the structure of the network is determined solely by two parameters, the number o channels of each conv layer and the size of pooling windows, which indirectly controls the deptl. of the network. Any decomposition that corresponds to a ConvAC can represent any tensor, giver. sufficient number of channels, though deeper circuits result in more efficient representations (Coher. et al.]2016a).\nk+1 x;=1,ViEk+1:x;>0 X i=1\nA well-known observation, which has been verified in several empirical studies (e.g. byZoran. and Weiss (2011), is that the distributions of local structures typically found in natural data could. be sufficiently modeled by a mixture model consisting of only few components (on the order of. 100) of simple distributions (e.g. Gaussian). Assuming the above holds for X E (Rs)N and let. {P(x|d; 0a)} d-1 be the mixing components, parameterized by 01, ., OM, from which local struc-. tures are generated, i.e. for all i E [N] there exist d; E [M] such that x; ~ P(x[di; 0d,), where d; is. a hidden variable specifying the matching component for the i-th local structure, then the probability density of sampling X is fully described by:.\na1,..,aN=1 where P(d1, ..., d) represents the prior probability of assigning components d1,..., dy to their. respective local structures x1, ..., X. Even though we had to make an assumption on X to derive eq.2l it is important to note that if we allow M to become unbounded, then any distribution with. support in (Rs)N could be approximated by this equation. The argument follows from the universal-. ity property of the common parametric families of distributions (Gaussian, Laplacian, etc.), where. any distribution can be approximated given sufficient number of components from these families, and thus the assumption always holds to some degree (see app.B|for the complete proof)..\nUnlike standard mixture models, we cannot perform inference directly from eq.2l nor can we even store the priors tensor directly given its exponential size of MN entries. Therefore the TMM as. presented by eq.2|is not tractable. The way to make the TMM tractable is to replace the tensor. Adt,..,dy by a tensor decomposition and, as described in the previous section, this gives rise to. arithmetic circuits. But before we present our approach for tractable TMMs through tensor decom-. positions, it is worth examining some of the TMM special cases and how they relate to other known generative models.\nFinally, since we are dealing with generative models, the tensors we study are non-negative and sum to one, i.e. the vectorization of A (rearranging its entries to the shape of a vector), denoted by vec(A), is constrained to lie in the multi-dimensional simplex, denoted by:\nX =(x1,...,xn) e(Rs)N\nThis representation is quite natural for many high-dimensional input domains such as images. where the local structures represent patches consisting of s pixels - voice through spectrograms. 
and text through words.\nM N P(X) = P(di,..., dn) P(xi[di;0di) A =1\nThe prior probabilities P(d1, ..., dN) can also be represented by a tensor A E (IRM ) oN of order N, Thus, we refer to eq.2 as a Tensorial Mixture Model (TMM) with priors tensor A and mixing components P(x[d1; 0i),..., P(x[d; 0). Notice that if N = 1 then we obtain the standard mixture model, whereas for a general N it is equivalent to a mixture model with tensorised mixing weights and conditionally independent mixing components.\nhidden layer 0 hidden layer L-1 input X representation 1x1 conv product pooling 1x1 conv product dense pooling (output) X M rep(i,d) = P(x[di;0di)\nFigure 2: Inference of a TMM carried out by a ConvAC"}, {"section_index": "3", "section_name": "3.1 SPECIAL CASES", "section_text": "We have already shown that TMMs can be thought of as a special case of mixture models, but it is important to also note that diagonal Gaussian Mixture Models (GMMs), probably the most common type of mixture models, are a strict subset of TMMs. Assume M = N . K, as well as:\nWk ViE[N], d;=N.(k-1)+i P(d1,..., d Otherwise\n(x; kz, diag(k.) (x; k, diag(?) ?=((k1)T,... pk1 LkN\nwhich is equivalent to a diagonal GMM with mixing weights w E K-1 and Gaussian mixture components with means {t} K-1\nWhile the previous example highlights another connection between TMMs and mixture models,. it does not take full advantage of the priors tensor, setting most of its entries to zero. Perhaps. the simplest assumption we could make about the priors tensor, without it becoming degener ate, would be to assume that that the hidden variables d1,..., dy are statistically independent.. i.e. P(d1, . .., d)= I=1 P(dt). Then rearranging eq.2|will result in a product of mixture models:.\nIf we also assume that the priors are identical in addition to being independent,. i.e. P(d1 = d) = = P(d = d), then this model becomes a bag-of-words model, where the. components {P(x|d; 0a)} 1 define a soft dictionary for translating local-structures into \"words\", as is often done when applying bag-of-words models to images. Despite this familiar setting, had we. subscribed to only using independent priors, we would lose the universality property of the general. TMM model -- it would not be capable of modeling dependencies between the local-structures"}, {"section_index": "4", "section_name": "3.2 DECOMPOSING THE PRIORS TENSOR", "section_text": "We have just seen that TMMs could be made tractable through constraints on the priors tensor, bu it was at the expense of either not taking advantage of its tensor structure, or losing its universalit property. Our approach for tractable TMMs is to apply tensor decompositions to the priors tensor. which is the conventional method for tackling the exponential size of high-order tensors..\nWe have already mentioned in sec.2 that any decomposition representable by ConvACs, includ- ing the well-known CP and HT decompositions, can represent any tensor, and thus applying them would not limit the expressivity of our model. Fixing a ConvAC representing the priors tensor, i.e. o(1, ..., dn) = Ad1,..,dn where O are the parameters of the ConvAC and {i}1 are the in- entries of the priors tensor with the sums and products expression of e(1,..., v) results in:\nP(X) = e(q Vi E [N]Vd E [M],q} = P(xi[di = d\nwhich is nearly equivalent to how the ConvAC is used for computing the entries of the priors tensor, differing only in the way the input vectors are defined. 
3.2 DECOMPOSING THE PRIORS TENSOR

We have just seen that TMMs can be made tractable through constraints on the priors tensor, but at the expense of either not taking advantage of its tensor structure, or losing its universality property. Our approach for tractable TMMs is to apply tensor decompositions to the priors tensor, which is the conventional method for tackling the exponential size of high-order tensors.

We have already mentioned in sec. 2 that any decomposition representable by ConvACs, including the well-known CP and HT decompositions, can represent any tensor, and thus applying them does not limit the expressivity of our model. Fixing a ConvAC representing the priors tensor, i.e. \phi_\Theta(\delta_{d_1}, \ldots, \delta_{d_N}) = \mathcal{A}_{d_1,\ldots,d_N}, where \Theta are the parameters of the ConvAC and \{\delta_i\}_{i=1}^{N} are the indicator vectors of the coordinates, and replacing the indicator-vector entries of the priors tensor with the sums-and-products expression of \phi_\Theta, results in:

P(X) = \phi_\Theta(q_1, \ldots, q_N), \qquad \forall i \in [N]\ \forall d \in [M]: (q_i)_d = P(x_i|d_i = d; \theta_d) \qquad (3)

which is nearly equivalent to how the ConvAC is used for computing the entries of the priors tensor, differing only in the way the input vectors are defined. Namely, eq. 3 is the result of replacing the indicator vectors \delta_i with probability vectors q_i, which can be interpreted as a soft variant of indicator vectors. Viewed as a network, inference begins with a representation layer, mapping the local structures to the likelihood probabilities of belonging to each mixing component, described by rep(i, d) = P(x_i|d_i = d; \theta_d), followed by the ConvAC realizing the decomposition. The complete network is illustrated by fig. 2.

Unlike general tensors, for a TMM to represent a valid distribution, the priors tensor is constrained to the simplex, and thus not every choice of parameters for the decomposition results in a tensor satisfying this constraint. Restricting ourselves to non-negative decomposition parameters, i.e. using positive weights in the 1x1 conv layers, guarantees that the resulting tensors are non-negative as well. Additionally, normalizing the non-negative tensor is equivalent to requiring the parameters to be restricted to the simplex, i.e. for every layer l and spatial position j, the weight vector w^{l,j} \in \Delta^{r_{l-1}-1} of the respective 1x1 conv kernel is normalized to sum to one. Under these constraints we refer to it as a generative decomposition. Notice that restricting ourselves to generative decompositions does not limit the expressivity of our model, as we can still represent any non-negative tensor, and thus any distribution that the original TMM could represent. In discussing the above, it helps to distinguish between the two extreme cases of generative decompositions representable by ConvACs, namely, the shallow generative CP decomposition, referred to as the GCP-model, and the deep generative HT decomposition, referred to as the GHT-model.

Non-negative matrix and tensor decompositions have a long history together with the development of corresponding generative models, e.g. pLSA (Hofmann, 1999), which uses non-negative matrix decompositions for text analysis and was later extended to images with the help of "visual words" (Li and Perona, 2005). The non-negative variant of the CP decomposition presented above is related to the more general Latent Class Models (Zhang, 2004), which can be seen as a multi-dimensional pLSA. Likewise, the non-negative HT decomposition is related to the Latent Tree Model (Zhang, 2004; Mourad et al., 2013) with the structure of a complete binary tree. Thus both the GCP and GHT models can be represented as a two-level graphical model, where the top level is either an LCM or an LTM, and the bottom level represents the local structures which are conditionally sampled from the mixing components of the TMM.

To conclude, the application of ConvACs to decompose the priors tensor leads to tractable TMMs with inference implemented by convolutional networks, has deep roots in the classical use of non-negative factorizations for generative models, and given sufficient resources does not limit expressivity. However, practical considerations raise the question of the extent of the expressive capacity of our models when the size of the ConvAC is polynomial with respect to the number of local structures and mixing components. This question was thoroughly studied in a series of works analyzing the importance of depth (Cohen et al., 2016a), comparing ConvACs to the expressive capacity of standard ConvNets (Cohen and Shashua, 2016a), showing the latter to be less capable, and analyzing the ability of ConvACs to model the dependency structure typically found in natural data (Cohen and Shashua, 2016b). We prove in app. D that their main results are not hindered by the introduction of simplex constraints to ConvACs as we did above. Together these results give us a detailed understanding of how the number of channels and the size of the pooling windows control the expressivity of the model. A more in-depth overview of these results and their application to our models can be found in app. C.
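To ground eq. 3 and the simplex constraints of sec. 3.2, here is a sketch (ours; toy sizes, unit-variance Gaussian components and the variable names are assumptions) of the shallow GCP-model, where P(X) reduces to a 1x1 conv over the representation layer, global product pooling, and a dense output:

    # GCP inference per eq. 3: P(X) = sum_z a_z * prod_i <a^{z,i}, q_i>, where
    # q_i is the representation layer rep(i, :) and all weights lie on the simplex.
    import numpy as np
    from scipy.stats import norm

    N, M, s, Z = 4, 5, 3, 8
    rng = np.random.default_rng(2)

    def simplex(shape):                    # random non-negative weights summing to 1
        x = rng.random(shape)
        return x / x.sum(axis=-1, keepdims=True)

    a_top = simplex(Z)                     # mixing over the Z rank-1 terms
    a = simplex((Z, N, M))                 # a[z, i] is the 1x1 conv kernel at x_i
    mu = rng.normal(size=(M, s))

    def gcp_density(X):
        # representation layer: q[i, d] = P(x_i | d_i = d)
        q = np.array([[np.prod(norm.pdf(X[i], mu[d], 1.0)) for d in range(M)]
                      for i in range(N)])
        per_term = np.prod(np.einsum('znm,nm->zn', a, q), axis=1)  # prod_i <a^{z,i}, q_i>
        return float(a_top @ per_term)     # global product pooling + dense output

    print(gcp_density(rng.normal(size=(N, s))))

Because every weight vector lies on the simplex, the implied priors tensor is itself a distribution over the M^N assignments, so this forward pass evaluates a valid TMM density.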
3.3 COMPARISON TO SUM-PRODUCT NETWORKS

Sum-Product Networks (SPNs) are a related class of generative models which are also realized by arithmetic circuits, though not strictly convolutional circuits as defined above. While SPNs can realize any ConvAC, and are thus universal and possess tractable inference, their lack of structure puts them at a disadvantage.

Picking the right SPN structure from the infinite possible combinations of sum and product nodes can be perplexing even for experts in the field. Indeed, Poon and Domingos (2011) and Gens and Domingos (2012) had to hand-engineer complex structures for each dataset, guided by prior knowledge and heuristics, and while their results were impressive for their time, they are poor by current measures. This led to many works studying the task of learning the structure directly from the data itself (Peharz et al., 2013; Gens and Domingos, 2013; Adel et al., 2015; Rooshenas and Lowd, 2014), which indeed improved upon manually designed SPNs on some tasks. Nevertheless, when moving to a new domain, choosing or learning an appropriate SPN structure remains a significant hurdle.

As opposed to SPNs, TMMs implemented with ConvACs have an easily designed architecture with only two sets of parameters, the size of the pooling windows and the number of channels, both of which can be directly related to the expressivity of the model, as detailed in app. C. Additionally, while SPNs are typically trained using special EM-type algorithms, TMMs are trained using stochastic gradient descent-type algorithms, as is common in training neural networks (see sec. 4 for details), thereby benefiting from the shared experience of a large and growing community.

4 CLASSIFICATION AND LEARNING WITH TMMS

Until this point we presented the TMM as a generative model for high-dimensional data, which is universal, and whose structure is tightly coupled to that of convolutional networks. We have yet to incorporate classification and learning into our framework; this is the purpose of the current section.

The common way to introduce object classes into a generative framework is to consider a class variable Y, and the distributions P(X|Y) of the instance X conditioned on Y. Under our model this is equivalent to having shared mixing components, but a different priors tensor P(d_1, \ldots, d_N|Y=y) for each class. Though it is possible to decompose each priors tensor separately, it is much more efficient to employ the concept of joint tensor decomposition, and use a shared ConvAC instead. This results in a single ConvAC computing inference, where instead of a single scalar output, multiple outputs are driven by the network, one for each class, as illustrated by the network in fig. 3.

[Figure 3: Classifier variant of a TMM carried out by a ConvAC. The network is identical to that of fig. 2, except that the final dense layer drives one output per class, computing P(X|Y=y) for every y.]
Heading on to predicting the class of a given instance, we note that in practice, a naive implementation of ConvACs is not numerically stable, the reason being that the high-degree polynomials computed by such networks are easily susceptible to numerical underflow or overflow. The conventional method for tackling this issue is to perform all computations in log-space. This transforms ConvACs into SimNets, a recently introduced deep learning architecture (Cohen and Shashua, 2014; Cohen et al., 2016b). Finally, prediction is carried out by returning the most likely class, which in the common setting of uniform class priors (P_\Theta(Y=y) = 1/K) translates to simply predicting the class for which the corresponding network output is maximal, in accordance with standard neural network practice:

\hat{Y}(X) = \operatorname{argmax}_y P(Y=y|X) = \operatorname{argmax}_y \log P(X|Y=y)

Suppose now that we are given a training set S = \{(X^{(i)} \in (\mathbb{R}^s)^N, y^{(i)} \in [K])\}_{i=1}^{|S|} of instances and labels, and would like to fit the parameters \Theta of a multi-class TMM according to the Maximum Likelihood method. Equivalently, we minimize the Negative Log-Likelihood (NLL) loss function L(\Theta) = \mathbb{E}[-\log P_\Theta(X, Y)], which can be factorized into two separate loss functions:

L(\Theta) = \mathbb{E}[-\log P_\Theta(Y|X)] + \mathbb{E}[-\log P_\Theta(X)]

where \mathbb{E}[-\log P_\Theta(Y|X)] is commonly known as the cross-entropy loss, which we refer to as the discriminative loss, while \mathbb{E}[-\log P_\Theta(X)] corresponds to maximizing the prior likelihood P(X) and has no analogy in standard discriminative neural networks. It is this term that captures the generative nature of our model, and we accordingly refer to it as the generative loss. Now, let N_\Theta(X^{(i)}; y) := \log P_\Theta(X^{(i)}|Y=y) stand for the y-th output of the SimNet (ConvAC in log-space) realizing the TMM with parameters \Theta. Then, in the case of uniform class priors, the empirical estimation of L(\Theta) may be written as:

L(\Theta; S) = \frac{1}{|S|} \sum_{i=1}^{|S|} \left[ \log \sum_{y=1}^{K} e^{N_\Theta(X^{(i)}; y)} - N_\Theta(X^{(i)}; y^{(i)}) \right] + \frac{1}{|S|} \sum_{i=1}^{|S|} \left[ -\log \sum_{y=1}^{K} \frac{1}{K}\, e^{N_\Theta(X^{(i)}; y)} \right] \qquad (4)

where the first sum is the empirical discriminative (softmax cross-entropy) loss and the second is the empirical generative loss.

Maximum likelihood training of generative models is oftentimes based on dedicated algorithms such as Expectation-Maximization, which are typically difficult to apply at scale. We leverage the resemblance between our objective (eq. 4) and that of standard neural networks, and apply the same optimization procedures used for the latter, which have proven to be extremely effective for training classifiers at scale. Whereas other works have used tensor decompositions for the optimization of probabilistic models (Song et al., 2013; Anandkumar et al., 2014), we employ them strictly for modeling, and instead make use of conventional methods. In particular, our implementation of TMMs is based on the SimNets extension of the Caffe toolbox (Cohen et al., 2016b; Jia et al., 2014), and uses standard stochastic gradient descent-type methods for optimization (see sec. 6 for more details).
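A minimal log-space sketch of eq. 4 and the prediction rule (our own, not the paper's SimNets/Caffe code; the array shapes and helper names are assumptions):

    # Per-class log-likelihoods N_theta(X; y) enter a cross-entropy term plus a
    # log-mean-exp generative term; logsumexp keeps the high-degree products stable.
    import numpy as np
    from scipy.special import logsumexp

    K = 10                                 # number of classes

    def nll_loss(class_log_likelihoods, labels):
        """class_log_likelihoods: array [batch, K] holding N_theta(X^(i); y)."""
        n = class_log_likelihoods
        lse = logsumexp(n, axis=1)                       # log sum_y e^{N(X;y)}
        discriminative = np.mean(lse - n[np.arange(len(labels)), labels])
        generative = np.mean(-(lse - np.log(K)))         # -log P(X), uniform priors
        return discriminative + generative

    def predict(class_log_likelihoods):
        return np.argmax(class_log_likelihoods, axis=1)  # argmax_y log P(X|Y=y)

    logits = np.random.default_rng(3).normal(size=(5, K))
    print(nll_loss(logits, np.array([0, 1, 2, 3, 4])), predict(logits))

Note that the two averaged terms sum to (1/|S|) \sum_i [\log K - N_\Theta(X^{(i)}; y^{(i)})], i.e. exactly the empirical joint NLL under uniform class priors, confirming that eq. 4 is just a factored form of L(\Theta).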
5 CLASSIFICATION WITH MISSING DATA THROUGH MARGINALIZATION

A major advantage of generative models over discriminative ones lies in their ability to cope with missing data, specifically in the context of classification. By and large, discriminative methods either attempt to complete missing parts of the data before classification, known as data imputation, or learn directly to classify data with missing values (Little and Rubin, 2002). The first of these approaches relies on the quality of data completion, a much more difficult task than the original one of classification with missing data. Even if the completion were optimal, the resulting classifier is known to be sub-optimal (see app. E). The second approach does not make this assumption, but nonetheless assumes that the distributions of missing values at train and test times are similar, a condition which often does not hold in practice. Indeed, Globerson and Roweis (2006) coined the term "nightmare at test time" to refer to the common situation where a classifier must cope with missing data whose distribution is different from that encountered in training.

As opposed to discriminative methods, generative models are endowed with a natural mechanism for classification with missing data. Namely, a generative model can simply marginalize over missing values, effectively classifying under all possible completions, weighing each completion according to its probability. This, however, requires tractable inference and marginalization. We have already shown in sec. 3 that TMMs support the former, and we show in sec. 5.1 that they bring forth marginalization which is just as efficient. Beforehand, we lay out the formulation of classification with missing data.

Let X be a random vector in R^s representing an object, and Y be a random variable in [K] := {1, ..., K} representing its label. Denote by D(X, Y) the joint distribution of (X, Y), and by (x ∈ R^s, y ∈ [K]) specific realizations thereof. Assume that after sampling a specific instance (x, y), a random binary vector M is drawn conditioned on X = x. More concretely, we sample a binary mask m ∈ {0,1}^s (realization of M) according to a distribution Q(·|X = x); x_i is considered missing if m_i is equal to zero, and observed otherwise. Formally, we consider the vector x⊙m, whose i-th coordinate is defined to hold x_i if m_i = 1, and the wildcard * if m_i = 0. The classification task is then to predict y given access solely to x⊙m.

Following the works of Rubin (1976) and Little and Rubin (2002), we consider three cases for the missingness distribution Q(M=m|X=x): missing completely at random (MCAR), where M is independent of X, i.e. Q(M=m|X=x) is a function of m but not of x; missing at random (MAR), where M is independent of the missing values in X, i.e. Q(M=m|X=x) is a function of both m and x, but is not affected by changes in x_i if m_i = 0; and missing not at random (MNAR), covering the rest of the distributions, for which M depends on missing values in X, i.e. Q(M=m|X=x) is a function of both m and x which at least sometimes is sensitive to changes in x_i when m_i = 0.

Let P be the joint distribution of the object X, label Y, and missingness mask M:

P(X=x, Y=y, M=m) = D(X=x, Y=y) \cdot Q(M=m|X=x)

For given x ∈ R^s and m ∈ {0,1}^s, denote by o(x, m) the event where the random vector X coincides with x on the coordinates i for which m_i = 1. For example, if m is an all-zero vector, o(x, m) covers the entire probability space, and if m is an all-one vector, o(x, m) corresponds to the event X = x. With these notations in hand, we are now in a position to characterize the optimal predictor in the presence of missing data:

Claim 1. For any data distribution D and missingness distribution Q, the optimal classification rule in terms of 0-1 loss is given by:

h^*(x \odot m) = \operatorname{argmax}_y\ P(Y=y \mid o(x,m)) \cdot \mathbb{E}\left[ Q(M=m \mid X) \mid o(x,m), Y=y \right]

Corollary 1. When the distribution Q is MAR (or MCAR), the classifier admits a simpler form, referred to as the marginalized Bayes predictor:

h^*(x \odot m) = \operatorname{argmax}_y\ P(Y=y \mid o(x,m)) \qquad (5)
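A toy sketch (ours; the discrete setting and the sizes are assumptions, chosen so the exact joint is enumerable) of the marginalized Bayes predictor of eq. 5: classify from the observed coordinates by summing the joint P(Y, X) over every completion of the missing ones.

    import itertools
    import numpy as np

    rng = np.random.default_rng(4)
    S, V, K = 4, 3, 2                      # 4 coordinates, 3 values each, 2 classes
    joint = rng.random((K,) + (V,) * S)    # P(Y=y, X=x) over all discrete x
    joint /= joint.sum()

    def marginalized_bayes(x, m):
        """argmax_y P(Y=y | o(x, m)); m[i] == 1 marks x[i] as observed."""
        scores = np.zeros(K)
        missing = [i for i in range(S) if m[i] == 0]
        for fill in itertools.product(range(V), repeat=len(missing)):
            z = list(x)
            for i, v in zip(missing, fill):
                z[i] = v
            scores += joint[(slice(None),) + tuple(z)]   # sum over completions
        return int(np.argmax(scores))

    print(marginalized_bayes([1, 2, 0, 0], m=[1, 1, 0, 0]))

For continuous data the sum becomes an integral, which is exactly the marginalization that TMMs perform in closed form (sec. 5.1).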
Corollary 1 indicates that in the MAR setting, which is frequently encountered in practice, optimal classification does not require prior knowledge regarding the missingness distribution Q. As long as one is able to realize the marginalized Bayes predictor (eq. 5), or equivalently, to compute the likelihoods of observed values conditioned on labels, P(o(x,m)|Y=y), classification with missing data is guaranteed to be optimal, regardless of the corruption process taking place. This is in stark contrast to discriminative methods, which require access to the missingness distribution during training, and thus are not able to cope with unknown conditions at test time.

Most of this section dealt with the task of prediction given an input with missing data, where we assumed we had access to a complete and uncorrupted training set, and only faced missingness during prediction. However, many times we wish to tackle the reverse problem, where the training set itself is riddled with missing data. Generative methods can once again leverage their natural ability to handle missing data, in the form of marginalization during the learning stage. Generative models are typically learned through the Maximum Likelihood principle; when it comes to learning from missing data, the marginalized likelihood objective is used instead. Under the MAR assumption, this method results in an unbiased classifier (Little and Rubin, 2002).

5.1 EFFICIENT MARGINALIZATION WITH TMMS

As discussed above, with generative models optimal classification with missing data (in the MAR setting) is oblivious to the specific missingness distribution. However, it requires tractable computation of the likelihood of observed values conditioned on labels, i.e. tractable marginalization over missing values. The plurality of generative models that have recently gained attention in the deep learning community (Goodfellow et al., 2014; Kingma and Welling, 2014; Dinh et al., 2014; 2016) do not meet this requirement, and thus are not suitable for classification with missing data. TMMs, on the other hand, bring forth extremely efficient marginalization, requiring only a single forward pass through the corresponding network. Details follow.

Recall the class-conditional form of the TMM (eq. 2):

P(x_1, \ldots, x_N|Y=y) = \sum_{d_1,\ldots,d_N=1}^{M} P(d_1, \ldots, d_N|Y=y) \prod_{i=1}^{N} P(x_i|d_i; \theta_{d_i})

Let i_1, \ldots, i_V \in [N] denote the indices of the visible local structures. Since each mixing component integrates to one over a missing local structure, marginalizing over the missing values amounts to:

P(x_{i_1}, \ldots, x_{i_V}|Y=y) = \sum_{d_1,\ldots,d_N=1}^{M} P(d_1, \ldots, d_N|Y=y) \prod_{v=1}^{V} P(x_{i_v}|d_{i_v}; \theta_{d_{i_v}})

In network terms, this is obtained by merely redefining the representation layer:

rep(i, d) = \begin{cases} 1 & x_i \text{ is missing (marginalized)} \\ P(x_i|d_i = d; \theta_d) & x_i \text{ is visible (not marginalized)} \end{cases}

while the rest of the network is left untouched. To conclude, with TMMs marginalizing over missing values is just as efficient as plain inference: it requires only a single pass through the corresponding ConvAC. Accordingly, the marginalized Bayes predictor (eq. 5) is realized efficiently, and classification with missing data (in the MAR setting) is optimal, regardless of the missingness distribution. This capability is not provided by discriminative methods, which rely on the distribution of missing values being known at training time, nor by contemporary generative models, which do not bring forth tractable marginalization.
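A sketch (ours; toy sizes and unit-variance Gaussian components are assumptions) of sec. 5.1 for the shallow GCP-model: setting rep(i, :) = 1 for a missing x_i computes the marginal likelihood in the same single forward pass, because each simplex-normalized inner product then contributes a factor of one.

    import numpy as np
    from scipy.stats import norm

    N, M, s, Z = 3, 4, 2, 6
    rng = np.random.default_rng(5)
    a_top = rng.dirichlet(np.ones(Z))                     # simplex dense weights
    a = rng.dirichlet(np.ones(M), size=(Z, N))            # simplex 1x1 conv weights
    mu = rng.normal(size=(M, s))

    def gcp_marginal(X, visible):
        q = np.ones((N, M))                               # rep(i, d) = 1 if missing
        for i in np.flatnonzero(visible):
            q[i] = [np.prod(norm.pdf(X[i], mu[d], 1.0)) for d in range(M)]
        return float(a_top @ np.prod(np.einsum('znm,nm->zn', a, q), axis=1))

    X = rng.normal(size=(N, s))
    full = gcp_marginal(X, visible=np.array([1, 1, 1]))   # plain inference
    marg = gcp_marginal(X, visible=np.array([1, 0, 1]))   # x_2 marginalized out
    print(full, marg)

The correctness of the all-ones trick rests on <a^{z,i}, 1> = 1 for simplex weights, which mirrors \int P(x_i|d)\,dx_i = 1 in the exact marginalization above.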
6 EXPERIMENTS

We demonstrate the properties of our models through both qualitative and quantitative experiments. In subsec. 6.1 we present our state-of-the-art results on image classification with missing data, with robustness to various missingness distributions. In app. G we show visualizations produced by our models, which give us insight into their inner workings.

Our experiments were conducted on the MNIST digit classification dataset, consisting of 60000 grayscale images of single-digit numbers, as well as the small NORB 3D object recognition dataset, consisting of 48600 grayscale stereo images of toys belonging to 5 categories: four-legged animals, human figures, airplanes, trucks, and cars.

In all our experiments we use either the GCP or the GHT model with Gaussian mixing components. The weights of the conv layers are partially shared as described in sec. 3.2, and are represented in log-space. For the GHT model, we use 2x2 pooling windows for all pooling layers. We train our models according to the loss described in sec. 4, using the Adam (Kingma and Ba, 2015) variant of SGD and decaying learning rates. We apply L2-regularization to the weights while taking into account that they are stored in log-space. Additionally, we adapt a probabilistic interpretation of dropout by introducing random marginalization layers, which randomly select spatial locations in the input and marginalize over them (see the sketch below). We provide a complete and detailed description of our experiments in app. F.
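A rough sketch (our interpretation of the regularizer described above, not the paper's exact layer; the shapes are assumptions) of random marginalization: during training, randomly chosen spatial locations of the log-representation are set to log 1 = 0, i.e. those inputs are treated as missing and marginalized over.

    import numpy as np

    def random_marginalization(log_rep, drop_prob, rng):
        """log_rep: [N_locations, M] log-likelihoods of the representation layer."""
        keep = rng.random(log_rep.shape[0]) >= drop_prob  # drop locations, not entries
        out = log_rep.copy()
        out[~keep, :] = 0.0                               # log P = 0  <=>  rep = 1
        return out

    rng = np.random.default_rng(6)
    log_rep = rng.normal(size=(16, 8))
    print(random_marginalization(log_rep, drop_prob=0.25, rng=rng).shape)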
6.1 BLIND CLASSIFICATION WITH MISSING DATA

We demonstrate the effectiveness of our method for classification with missing data of unknown missingness distribution (see sec. 5) by conducting three kinds of experiments on the MNIST dataset, and an additional experiment on the NORB dataset. We begin by following the protocol of Globerson and Roweis (2006), the binary classification problem of digit pairs with feature deletion noise, where we compare our method to the best known result on that benchmark (Dekel and Shamir, 2008). For our main experiment, we move to the harder multi-class digit classification under two different MAR missingness distributions, comparing against other methods which do not assume a specific missingness distribution; we repeat this experiment on the NORB dataset as well. Finally, our last experiment demonstrates the failure of purely discriminative methods to adapt to previously unseen missingness distributions, underlining the importance of the generative approach to missing data. We do wish to emphasize that missing data is not typically found in most image datasets; nevertheless, experiments on images with missing data are very common, for both classification and inpainting tasks. Additionally, there is nothing about our method, nor the methods we compare it against, that is very specific to the image domain, and thus any conclusions drawn should not be limited to the chosen datasets, but be taken in the broader context of the missing data problem.

The problem of learning classifiers which are robust to unforeseen missingness distributions at test time was first proposed by Globerson and Roweis (2006). They suggested that missing values could be denoted by values which were deleted, i.e. changed to zero, so that a robust classifier would have to assume that any of its zero-valued inputs could be the result of such a deletion process and must be treated as missing. Their solution was to train a linear classifier and formulate the optimization as a quadratic program under the constraint that N of its features could be deleted. In Dekel and Shamir (2008), this solution was improved upon and generalized to other kinds of corruption beyond deletion, as well as to an adversarial setting.

We follow the central experiment of these articles, conducted on binary classification of digit pairs from the MNIST dataset, where N non-zero pixels are deleted with uniform probability over the set of non-zero pixel locations of the given image. We compare our method, using the deep GHT-model, solely against the LP-based algorithm of Dekel and Shamir (2008), which is the previous state-of-the-art on this task. Due to the limited computational resources at the time, the original experiments were limited to training sets of just 50 images per digit. We have repeated their experiment, using the implementation kindly supplied to us by the authors, and increased the limit to 300 images per digit, which is the maximal amount possible with our current computational resources. Though it is possible to train our own models using much larger training sets, we have trained them under the same limitations. Despite the fact that the missingness distribution of this experiment is of the MNAR type, under which our method is not guaranteed to be optimal, the test results (see table 1) clearly show the large gap between our method and theirs. Additionally, whereas our method uses a single model, trained once and with no prior knowledge of the missingness distribution, their method requires training a special classifier for each value of N, chosen through a cross-validation process, disqualifying it from being truly blind to the missingness distribution.

[Table 1: Blind classification with missing data on the binary MNIST dataset with feature deletion noise according to Globerson and Roweis (2006), averaged over all pairs of digits.]

We continue to our main experiments on multi-class blind classification with missing data, where the missingness distribution is completely unknown at test time, and a single classifier must handle all possible distributions. We simulate two kinds of MAR missingness distributions: (i) an i.i.d. mask with a fixed probability p ∈ [0,1] of missing each pixel, and (ii) a mask composed of the union of N possibly overlapping rectangles of width and height equal to W, each with a position in the image assigned uniformly at random (see the sketch below). We evaluate both our shallow GCP-model and our deep GHT-model against the most widely used methods for blind classification with missing data. We repeat these experiments on the MNIST and NORB datasets, the results of which are presented in fig. 4.

[Figure 4: Blind classification with missing data; panels: (a) MNIST with i.i.d. corruption, (b) MNIST with missing rectangles, (c) NORB with i.i.d. corruption, (d) NORB with missing rectangles. Each panel plots test accuracy (%) of KNN, zero and mean imputation, GSN, NICE and DPM imputation, MP-DBM, and our GCP- and GHT-models. (a, c) i.i.d. corruption with probability p for each pixel. (b, d) missing rectangles corruption with N missing rectangles, each of width and height equal to W. (*) Accuracies are estimated from the plot of Goodfellow et al. (2013). (†) Data imputation algorithms followed by a ConvNet. Raw results can be found in app. H.]
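Sketches (ours; the function names and toy parameters are assumptions) of the two MAR corruption processes described above:

    import numpy as np

    def iid_mask(shape, p, rng):
        """(i) hide each pixel independently with probability p; 1 = observed."""
        return (rng.random(shape) >= p).astype(np.uint8)

    def rectangles_mask(shape, n_rects, width, rng):
        """(ii) union of n_rects possibly overlapping width x width rectangles."""
        mask = np.ones(shape, dtype=np.uint8)
        h, w = shape
        for _ in range(n_rects):
            top = rng.integers(0, max(h - width, 0) + 1)
            left = rng.integers(0, max(w - width, 0) + 1)
            mask[top:top + width, left:left + width] = 0  # 0 = missing
        return mask

    rng = np.random.default_rng(7)
    print(iid_mask((28, 28), p=0.9, rng=rng).mean(),
          rectangles_mask((28, 28), n_rects=2, width=7, rng=rng).mean())

Both processes depend only on the mask (and image size), never on pixel values, which is what makes them MAR and puts them in the scope of corollary 1.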
As a baseline for our results, we use K-Nearest Neighbors (KNN) to vote on the most likely class of a given example. We extend KNN to missing data by comparing distances using only the observed entries, i.e. for a corrupted instance x⊙m and a clean image x̃ from the training set, we compute d(x̃, x⊙m) = \sum_{j=1}^{s} m_j (x̃_j − x_j)^2. Though it scores better than the majority of the modern methods we have compared against, in practice KNN is very inefficient, even more so for missing data, which prevents most common memory and runtime optimizations typically employed to reduce its inefficiency. Additionally, KNN does not generalize well to more complex datasets, as is evident by its poor performance on the clean test set of the NORB dataset.
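A direct sketch (ours; the shapes and the choice of k are assumptions) of the KNN baseline above, with the masked squared distance restricted to observed entries:

    import numpy as np

    def knn_predict(train_x, train_y, x, m, k=5):
        diff = train_x - x[None, :]                 # [n_train, s]
        dist = ((diff ** 2) * m[None, :]).sum(axis=1)
        nearest = np.argsort(dist)[:k]
        votes = np.bincount(train_y[nearest])
        return int(np.argmax(votes))                # majority vote of the k nearest

    rng = np.random.default_rng(8)
    train_x = rng.normal(size=(100, 784)); train_y = rng.integers(0, 10, size=100)
    x = rng.normal(size=784); m = (rng.random(784) >= 0.5).astype(float)
    print(knn_predict(train_x, train_y, x, m))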
As discussed in sec. 5, data-imputation is the most common method to handle missing data of unknown missingness distribution. Despite the popularity of this method, high-quality data imputations are very hard to produce, a difficulty amplified by the fact that classification algorithms are known to be highly sensitive to even small noise applied to their inputs. Even if we assume the data-imputation step is done optimally, it would still not give optimal performance under all MAR missingness distributions, and under some settings could produce results which are only half as good as our method (see app. E for such a case). In our experiments, we applied several data-imputation methods to complete the missing data, followed by classifying the outputs using a standard ConvNet fitted to the fully-observed training set. We first tested naive heuristics, filling missing values with zeros or with the mean pixel value computed over all the images in the dataset. We then tested three generative models: GSN (Bengio et al., 2014), NICE (Dinh et al., 2014) and DPM (Sohl-Dickstein et al., 2015), which are known to work well for inpainting. GSN was omitted from the NORB experiments, as we did not manage to properly train it on that dataset. Though the data-imputation methods are competitive when only few of the pixels are missing, they all fall far behind our models above a certain threshold, with more than 50 percentage points separating our GHT-model from the best data-imputation method in some of the cases. Additionally, all the generative models require very long runtimes, which prevents their use in most real-world applications. While we tried to be as comprehensive as possible when choosing which inpainting methods to use, some of the most recent studies on the subject, e.g. the works of van den Oord et al. (2016) and Pathak et al. (2016), have either not yet published their code or published it only partially. We also ruled out inpainting algorithms which are made specifically for images, as we did not want to limit the implications of these experiments solely to images.

We have also compared ourselves to the published results of the MP-DBM model (Goodfellow et al., 2013). Unlike the previous generative models we tested, MP-DBM is a generative classifier similar to our method. However, unlike our model, MP-DBM possesses neither tractable marginalization nor tractable inference, and uses approximations instead. Its lesser performance underlines the importance of these properties for achieving optimality under missing data. An additional factor might be their training method, which includes randomly picking a subset of variables to act as missing, and might thus have introduced a bias towards the specific missingness distribution used during their training.

In order to demonstrate the ineffectiveness of purely discriminative models, we trained ConvNets directly on randomly corrupted instances according to pre-selected missingness distributions on the MNIST dataset. Unlike the previous experiments, here we do allow prior knowledge about the missingness distribution during training. We found that the best results are achieved when replacing missing values with zeros and adding the mask of missing values as an extra input channel (known as flag data-imputation; see the sketch below). The results (see fig. 5) unequivocally show the effectiveness of this method when tested on the same distribution it was trained on, achieving high accuracy even when only 10% of the pixels are visible. However, when tested on different distributions, whether of a completely different kind, or of the same kind but with different parameters, the accuracy drops by a large factor, at times by more than 35 percentage points. This illustrates the disadvantage of the discriminative method: it necessarily incorporates a bias towards the corruption process seen during training, which makes it fail on other distributions. One might wonder whether it is possible for a single network to be robust to more than a single distribution. We found that this is true: if we train a network on multiple different missingness distributions¹, then the network will achieve good performance on all such distributions, though in some cases without reaching optimal performance. However, though it is possible to train a network to be robust to more than one distribution, the types of missingness distributions are rarely known in advance, and there is no known method to train a neural network against all possible distributions, limiting the effectiveness of this method in practice.

¹Specifically, we trained the network by randomizing not only the corruption noise, but also the parameters of the corruption process (e.g. for i.i.d. corruption we sampled p for each image from a uniform distribution).

[Figure 5: We compare ConvNets trained on one missingness distribution while tested on others; training on randomly (rand) chosen distributions is also examined. (a) Trained on i.i.d. corruption with probability p_train, tested on i.i.d. corruption with probability p_test; test accuracies (%):

p_train \ p_test   0.25   0.50   0.75   0.90   0.95   0.99
0.25               98.9   97.8   78.9   32.4   17.6   11.0
0.50               99.1   98.6   94.6   68.1   37.9   12.9
0.75               98.9   98.7   97.2   83.9   56.4   16.7
0.90               97.6   97.5   96.7   89.0   71.0   21.3
0.95               95.7   95.6   94.8   88.3   74.0   30.5
0.99               87.3   86.7   85.0   78.2   66.2   31.3
i.i.d. (rand)      98.7   98.4   97.0   87.6   70.6   29.6
rects (rand)       98.2   95.7   83.2   54.7   35.8   17.5

(b) Trained and tested on the same (fixed) missing rectangles distribution, against networks trained on randomly chosen distributions.]
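A sketch (ours; the helper name is an assumption) of the flag data-imputation input used for the discriminative ConvNet baseline: missing pixels are zeroed and the mask is appended as an extra input channel, so the network can distinguish a true zero from a missing value.

    import numpy as np

    def flag_imputation_input(image, mask):
        """image, mask: [H, W]; returns a [2, H, W] network input."""
        filled = image * mask                       # zero-fill the missing pixels
        return np.stack([filled, mask], axis=0)     # channel 1 flags observed pixels

    rng = np.random.default_rng(9)
    img = rng.random((28, 28)); msk = (rng.random((28, 28)) >= 0.9).astype(np.float32)
    print(flag_imputation_input(img, msk).shape)    # (2, 28, 28)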
Unlike all the above methods, our GHT-model, which is trained only once on the clean dataset, matches or sometimes even surpasses the performance of ConvNets that are trained and tested on the same distribution, showing that it achieves near optimal performance, as much as possible, on any given distribution. Additionally, note that, similarly to ConvNets and in accordance with the theory in app. C, the deep GHT-model is decidedly superior to the shallow GCP-model. Experimenting on more complex datasets is left for further research: progress on optimization and regularization of networks based on product pooling (even in log-space) is required, and ways to incorporate larger b×b convolutional operations with overlaps would be useful before we venture into larger and more complex datasets. Nevertheless, our preliminary results demonstrate an overwhelming advantage of our TMM models compared to competing methods, both in terms of robustness to different types of missing data and in terms of raw performance, with very wide gaps in absolute accuracy, at times as large as 50 percentage points over the next best method.

7 SUMMARY

We have introduced a new family of probabilistic models, which we call Tensorial Mixture Models. TMMs are based on a simple assumption on the data, which stems from known empirical results on natural images, and which gives rise to mixture models with a tensorial structure represented by the priors tensor. When the priors tensor is decomposed, it gives rise to an arithmetic circuit, which in turn transforms the TMM into a Convolutional Arithmetic Circuit (ConvAC). A ConvAC corresponds to a shallow (single hidden layer) network when the priors tensor is decomposed by a CP (sum of rank-1) approach, and corresponds to a deep network when the decomposition follows the Hierarchical Tucker (HT) model.

The ConvAC representation of a TMM possesses several attractive properties. First, inference is tractable and is implemented by a forward pass through a deep network. Second, the architectural design of the model follows the conventions of the deep networks community, i.e., the structure of TMMs is determined by just two easily understood factors: the size of the pooling windows and the number of channels. Finally, we have demonstrated the effectiveness of our model in tackling the problem of classification with missing data, leveraging TMMs' unique ability of tractable marginalization, which leads to optimal classifiers regardless of the missingness distribution.

There are several avenues for future research on TMMs which we are currently looking at, including other problems which TMMs could solve (e.g. semi-supervised learning), experimenting with other ConvAC architectures (e.g. through different decompositions), and further progress on optimization and regularization of networks with product pooling.

REFERENCES

Tameem Adel, David Balduzzi, and Ali Ghodsi. Learning the Structure of Sum-Product Networks via an SVD-based Algorithm. UAI, 2015.

Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15(1):2773-2832, 2014.

Tal Ben-Nun, Ely Levy, Amnon Barak, and Eri Rubin. Memory Access Patterns: The Missing Piece of the Multi-GPU Puzzle. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 19:1-19:12. ACM, 2015.

Yoshua Bengio, Eric Thibodeau-Laufer, Guillaume Alain, and Jason Yosinski. Deep Generative Stochastic Networks Trainable by Backprop. In International Conference on Machine Learning, 2014.

Richard Caron and Tim Traynor. The Zero Set of a Polynomial.
WSMR Report 05-02, 2005.

David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, March 2003.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. arXiv.org, June 2016.

Adam Coates, Andrew Y Ng, and Honglak Lee. An Analysis of Single-Layer Networks in Unsupervised Feature Learning. International Conference on Artificial Intelligence and Statistics, pages 215-223, 2011.

Nadav Cohen and Amnon Shashua. SimNets: A Generalization of Convolutional Networks. In Advances in Neural Information Processing Systems NIPS, Deep Learning Workshop, 2014.

Nadav Cohen and Amnon Shashua. Convolutional Rectifier Networks as Generalized Tensor Decompositions. In International Conference on Machine Learning, May 2016a.

Nadav Cohen and Amnon Shashua. Inductive Bias of Deep Convolutional Networks through Pooling Geometry. arXiv.org, May 2016b.

Nadav Cohen, Or Sharir, and Amnon Shashua. On the Expressive Power of Deep Learning: A Tensor Analysis. In Conference on Learning Theory COLT, May 2016a.

Nadav Cohen, Or Sharir, and Amnon Shashua. Deep SimNets. In Computer Vision and Pattern Recognition CVPR, May 2016b.

Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear Independent Components Estimation. arXiv.org, October 2014.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. arXiv.org, May 2016.

Dennis Forster, Abdul-Saboor Sheikh, and Jörg Lücke. Neural Simpletrons - Minimalistic Probabilistic Networks for Learning With Few Labels. arXiv.org, June 2015.

Robert Gens and Pedro M Domingos. Discriminative Learning of Sum-Product Networks. Advances in Neural Information Processing Systems, 2012.

Robert Gens and Pedro M Domingos. Learning the Structure of Sum-Product Networks. International Conference on Machine Learning, 2013.

Amir Globerson and Sam Roweis. Nightmare at test time: robust learning by feature deletion. In International Conference on Machine Learning. ACM, 2006.

Ian Goodfellow, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Multi-Prediction Deep Boltzmann Machines. Advances in Neural Information Processing Systems, 2013.

Thomas Hofmann. Probabilistic latent semantic analysis. Morgan Kaufmann Publishers Inc., July 1999.

Furong Huang, Niranjan U N, Ioakeim Perros, Robert Chen, Jimeng Sun, and Anima Anandkumar. Scalable Latent Tree Model and its Application to Health Analytics. In NIPS Machine Learning for Healthcare Workshop, 2015.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross B Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional Architecture for Fast Feature Embedding. CoRR, abs/1408.5093, 2014.

Taesup Kim and Yoshua Bengio. Deep Directed Generative Models with Energy-Based Probability Estimation. arXiv.org, June 2016.

Diederik P Kingma, Danilo J Rezende, Shakir Mohamed, and Max Welling. Semi-Supervised Learning with Deep Generative Models. In Advances in Neural Information Processing Systems, 2014.

Diederik P Kingma, Tim Salimans, and Max Welling. Improving Variational Inference with Inverse Autoregressive Flow. In Advances in Neural Information Processing Systems, June 2016.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, May 2015.

Fei-Fei Li and Pietro Perona. A Bayesian Hierarchical Model for Learning Natural Scene Categories. Computer Vision and Pattern Recognition, 2:524-531, 2005.

Roderick J A Little and Donald B Rubin. Statistical analysis with missing data (2nd edition). John Wiley & Sons, Inc., September 2002.

Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary Deep Generative Models.
In International Conference on Machine Learning ICML, May 2016.

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial Autoencoders. arXiv.org, November 2015.

Raphael Mourad, Christine Sinoquet, Nevin Lianwen Zhang, Tengfei Liu, and Philippe Leray. A Survey on Latent Tree Models and Applications. Journal of Artificial Intelligence Research, pages 157-203, 2013.

Andrew Y Ng and Michael I Jordan. On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes. In Advances in Neural Information Processing Systems, 2002.

Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context Encoders: Feature Learning by Inpainting. In Computer Vision and Pattern Recognition CVPR, May 2016.

F Pedregosa, G Varoquaux, A Gramfort, V Michel, B Thirion, O Grisel, M Blondel, P Prettenhofer, R Weiss, V Dubourg, J Vanderplas, A Passos, D Cournapeau, M Brucher, M Perrot, and E Duchesnay. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.

Hoifung Poon and Pedro Domingos. Sum-Product Networks: A New Deep Architecture. In Uncertainty in Artificial Intelligence, 2011.

Amirmohammad Rooshenas and Daniel Lowd. Learning Sum-Product Networks with Direct and Indirect Variable Interactions. ICML, 2014.

Donald B Rubin. Inference and missing data. Biometrika, 63(3):581-592, December 1976.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved Techniques for Training GANs. In Advances in Neural Information Processing Systems, 2016.

Amir Shpilka and Amir Yehudayoff. Arithmetic Circuits: A survey of recent results and open questions. Foundations and Trends in Theoretical Computer Science, 5(3-4):207-388, March 2010.

Jascha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. International Conference on Machine Learning, 2015.

Le Song, Mariya Ishteva, Ankur P Parikh, Eric P Xing, and Haesun Park. Hierarchical Tensor Decomposition of Latent Tree Graphical Models. ICML, pages 334-342, 2013.

Jost Tobias Springenberg. Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks. In International Conference on Learning Representations, 2016.

Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. DeepFace: Closing the Gap to Human-Level Performance in Face Verification. In Computer Vision and Pattern Recognition CVPR. IEEE Computer Society, June 2014.

Lucas Theis and Matthias Bethge. Generative Image Modeling Using Spatial LSTMs. In Advances in Neural Information Processing Systems, 2015.

Dustin Tran, Rajesh Ranganath, and David M Blei. The Variational Gaussian Process. In International Conference on Learning Representations ICLR, 2016.

Xiaogang Wang and Eric Grimson. Spatial Latent Dirichlet Allocation. Advances in Neural Information Processing Systems, 2007.

Matthew D Zeiler and Rob Fergus. Visualizing and Understanding Convolutional Networks. In European Conference on Computer Vision. Springer International Publishing, 2014.

Nevin Lianwen Zhang. Hierarchical Latent Class Models for Cluster Analysis. Journal of Machine Learning Research, pages 697-723, 2004.

Daniel Zoran and Yair Weiss.
From learning models of natural image patches to whole image restoration. ICCV, pages 479-486, 2011.

[Figure 6: The decoding algorithm of the CP decomposition represented by an Arithmetic Circuit. The input coordinates enter as indicator vectors \delta_i with (\delta_i)_j = 1[j = d_i], followed by a 1x1 conv layer conv(i, z) = <a^{z,i}, \delta_i>, a global product pooling layer pool(z) = \prod_i conv(i, z), and a dense output layer computing \mathcal{A}_{d_1,\ldots,d_N} = \sum_z a_z pool(z).]

A BACKGROUND ON TENSOR DECOMPOSITIONS AND CONVOLUTIONAL ARITHMETIC CIRCUITS

We begin by establishing the minimal background in the field of tensor analysis required for following our work. A tensor is best thought of as a multi-dimensional array A_{d_1,\ldots,d_N} \in \mathbb{R}, where \forall i \in [N], d_i \in [M_i]. The number of indexing entries in the array, which are also called modes, is referred to as the order of the tensor. The number of values an index of a particular mode can take is referred to as the dimension of the mode. The tensor \mathcal{A} \in \mathbb{R}^{M_1 \otimes \cdots \otimes M_N} mentioned above is thus of order N with dimension M_i in its i-th mode. For our purposes we typically assume that M_1 = \ldots = M_N = M, and simply denote it as \mathcal{A} \in (\mathbb{R}^M)^{\otimes N}.

The fundamental operator in tensor analysis is the tensor product. The tensor product operator, denoted by \otimes, is a generalization of the outer product of vectors (tensors of order 1) to any pair of tensors. Specifically, let \mathcal{A} and \mathcal{B} be tensors of order P and Q respectively; then the tensor product \mathcal{A} \otimes \mathcal{B} results in a tensor of order P + Q, defined by: (\mathcal{A} \otimes \mathcal{B})_{d_1,\ldots,d_{P+Q}} = \mathcal{A}_{d_1,\ldots,d_P} \cdot \mathcal{B}_{d_{P+1},\ldots,d_{P+Q}}.

The main concept from tensor analysis we use in our work is that of tensor decompositions. The most straightforward and common tensor decomposition format is the rank-1 decomposition, also known as a CANDECOMP/PARAFAC decomposition, or in short, a CP decomposition. The CP decomposition is a natural extension of low-rank matrix decomposition to general tensors, both built upon the concept of a linear combination of rank-1 elements. Similarly to matrices, tensors of the form v^{(1)} \otimes \cdots \otimes v^{(N)}, where v^{(i)} \in \mathbb{R}^{M_i} are non-zero vectors, are regarded as N-ordered rank-1 tensors, and thus the rank-Z CP decomposition of a tensor \mathcal{A} is naturally defined by:

\mathcal{A} = \sum_{z=1}^{Z} a_z\, a^{z,1} \otimes \cdots \otimes a^{z,N} \;\Longrightarrow\; \mathcal{A}_{d_1,\ldots,d_N} = \sum_{z=1}^{Z} a_z \prod_{i=1}^{N} a^{z,i}_{d_i} \qquad (7)

where \{a^{z,i} \in \mathbb{R}^{M_i}\}_{z=1,i=1}^{Z,N} and a \in \mathbb{R}^Z are the parameters of the decomposition. As mentioned above, for N = 2 it is equivalent to low-rank matrix factorization. It is simple to show that any tensor \mathcal{A} can be represented by the CP decomposition for some Z, where the minimal such Z is known as its tensor rank.

²More precisely, we use a special case of the canonical HT decomposition as presented in Hackbusch and Kühn (2009). In the terminology of the latter, the matrices A^{l,j,\gamma} are diagonal and equal to diag(a^{l,j,\gamma}) (using the notations from eq. 8).
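A sketch (ours; toy sizes assumed) of decoding entries from a CP decomposition as in eq. 7 / fig. 6: with indicator inputs, the 1x1 conv becomes a table lookup, product pooling multiplies over i, and the dense layer mixes the Z rank-1 terms.

    import numpy as np

    Z, N, M = 4, 3, 5
    rng = np.random.default_rng(10)
    a_z = rng.normal(size=Z)                        # dense (output) layer weights
    a = rng.normal(size=(Z, N, M))                  # a[z, i] are the CP factors

    def cp_entry(d):
        """A_{d_1,...,d_N} = sum_z a_z * prod_i a[z, i, d_i]."""
        conv = a[:, np.arange(N), d]                # <a^{z,i}, delta_{d_i}> = a[z,i,d_i]
        return float(a_z @ np.prod(conv, axis=1))   # product pooling, then dense layer

    # consistency check against the explicitly materialized tensor (here N = 3)
    A = np.einsum('z,za,zb,zc->abc', a_z, a[:, 0], a[:, 1], a[:, 2])
    d = np.array([1, 4, 2])
    assert np.isclose(A[tuple(d)], cp_entry(d))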
Another decomposition we will use in this paper is of a hierarchical nature, known as the Hierarchical Tucker decomposition (Hackbusch and Kühn, 2009), which we will refer to as the HT decomposition². While the CP decomposition combines vectors into higher-order tensors in a single step, the HT decomposition does so more gradually, combining vectors into matrices, these matrices into 4th-ordered tensors, and so on recursively in a hierarchical fashion. Specifically, the following describes the recursive formula of the HT decomposition³ for a tensor \mathcal{A} \in (\mathbb{R}^M)^{\otimes N}, where N = 2^L, i.e. N is a power of two:

\phi^{1,j,\gamma} = \sum_{\alpha=1}^{r_0} a^{1,j,\gamma}_\alpha\, a^{0,2j-1,\alpha} \otimes a^{0,2j,\alpha}

\cdots

\phi^{l,j,\gamma} = \sum_{\alpha=1}^{r_{l-1}} a^{l,j,\gamma}_\alpha\, \underbrace{\phi^{l-1,2j-1,\alpha}}_{\text{order } 2^{l-1}} \otimes\, \underbrace{\phi^{l-1,2j,\alpha}}_{\text{order } 2^{l-1}}

\cdots

\phi^{L-1,j,\gamma} = \sum_{\alpha=1}^{r_{L-2}} a^{L-1,j,\gamma}_\alpha\, \underbrace{\phi^{L-2,2j-1,\alpha}}_{\text{order } N/4} \otimes\, \underbrace{\phi^{L-2,2j,\alpha}}_{\text{order } N/4}

\mathcal{A} = \sum_{\alpha=1}^{r_{L-1}} a_\alpha\, \underbrace{\phi^{L-1,1,\alpha}}_{\text{order } N/2} \otimes\, \underbrace{\phi^{L-1,2,\alpha}}_{\text{order } N/2} \qquad (8)

where the parameters of the decomposition are the leaf vectors \{a^{0,j,\alpha} \in \mathbb{R}^M\}, the weight vectors \{a^{l,j,\gamma} \in \mathbb{R}^{r_{l-1}}\} and the top-level vector a \in \mathbb{R}^{r_{L-1}}; the scalars r_0, \ldots, r_{L-1} \in \mathbb{N} are referred to as the ranks of the decomposition. Similar to the CP decomposition, any tensor can be represented by an HT decomposition. Moreover, any given CP decomposition can be converted to an HT decomposition with only a polynomial increase in the number of parameters.

³The requirement for N to be a power of two is solely for simplifying the definition of the HT decomposition. More generally, instead of defining it through a complete binary tree describing the order of operations, the canonical decomposition can use any balanced binary tree.

The relationship between tensor decompositions and networks arises from the simple observation that through decomposition one can trade off storage complexity for computation, where the type of computation consists of sums and products. Specifically, tensor decompositions can be seen as a mapping that takes a tensor of exponential size and converts it into a polynomially sized representation, coupled with a decoding algorithm of polynomial runtime complexity to retrieve the original entries of the tensor - essentially trading off space complexity for computational complexity. Examining the decoding algorithms for the CP and HT decompositions, i.e. eq. 7 and eq. 8 respectively, reveals a shared framework for representing these algorithms via computation graphs of products and weighted sums, also known as Arithmetic Circuits (Shpilka and Yehudayoff, 2010) or Sum-Product Networks (Poon and Domingos, 2011). More specifically, these circuits take as input N indicator vectors \delta_1, \ldots, \delta_N, representing the coordinates (d_1, \ldots, d_N), where (\delta_i)_j = 1[j = d_i], and output the value of \mathcal{A}_{d_1,\ldots,d_N}. In the case of the CP decomposition, the matching decoding circuit is defined by eq. 9 below:

\mathcal{A}_{d_1,\ldots,d_N} = \sum_{z=1}^{Z} a_z \prod_{i=1}^{N} \left( \sum_{d=1}^{M} a^{z,i}_d\, (\delta_i)_d \right) \qquad (9)

The above formula is better represented by the network illustrated in fig. 6, beginning with an input layer of N M-dimensional indicator vectors arranged in a 3D array, followed by a 1x1 conv operator, a global product pooling layer, and ending with a dense linear layer outputting \mathcal{A}_{d_1,\ldots,d_N}. The conv operator is not unlike the standard convolutional layer of ConvNets, with the sole difference being that it may operate without coefficient sharing, i.e. the filters that generate feature maps by sliding across the previous layer may have different coefficients at different spatial locations. This is often referred to in the deep learning community as a locally-connected operator (Taigman et al., 2014). Similarly to the CP decomposition, retrieving the entries of a tensor from its HT decomposition can be computed by the circuit represented in fig. 7, where instead of a single pair of conv and pooling layers there are log₂ N such pairs, with pooling windows of size 2. Though the canonical HT decomposition dictates size-2 pooling windows, any pooling structure used in practice still results in a valid HT decomposition.

Arithmetic Circuits constructed from the above conv and product pooling layers are called Convolutional Arithmetic Circuits, or ConvACs for short, first suggested by Cohen et al. (2016a) as a theoretical framework for studying standard convolutional networks, sharing many of the defining traits of the latter, most noteworthy the locality, sharing and pooling properties of ConvNets. Unlike general circuits, the structure of the network is determined solely by two parameters, the number of channels of each conv layer and the size of the pooling windows, which indirectly controls the depth of the network.
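A sketch (ours; toy sizes and uniform ranks are assumptions) of the HT recursion of eq. 8: each level pairs adjacent subtrees by a weighted sum of tensor products. Intermediate tensors are materialized here, so only toy sizes are feasible; the ConvAC of fig. 7 instead decodes single entries without ever materializing them.

    import numpy as np

    N, M, r, L = 4, 3, 2, 2                    # N = 2^L
    rng = np.random.default_rng(11)
    a0 = rng.normal(size=(N, r, M))            # leaf vectors a^{0,j,alpha}
    w = [rng.normal(size=(N // 2 ** (l + 1), r, r)) for l in range(L - 1)]
    a_top = rng.normal(size=r)                 # top-level vector a

    def ht_tensor():
        phi = [a0[j] for j in range(N)]        # level 0: shape (r, M) each
        for l in range(L - 1):                 # intermediate levels
            nxt = []
            for j in range(len(phi) // 2):
                left, right = phi[2 * j], phi[2 * j + 1]
                prods = np.stack([np.multiply.outer(left[al], right[al])
                                  for al in range(r)])       # stack over alpha
                nxt.append(np.tensordot(w[l][j], prods, axes=(1, 0)))
            phi = nxt
        prods = np.stack([np.multiply.outer(phi[0][al], phi[1][al])
                          for al in range(r)])
        return np.tensordot(a_top, prods, axes=(0, 0))       # order-N tensor A

    print(ht_tensor().shape)                   # (M, M, M, M)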
[Figure 7: The decoding algorithm of the HT decomposition represented by an Arithmetic Circuit: the N indicator input vectors pass through L = log₂ N pairs of 1x1 conv and size-2 product pooling layers (hidden layers 0 through L-1), followed by a dense output layer computing \mathcal{A}_{d_1,\ldots,d_N}.]

B UNIVERSALITY OF TENSORIAL MIXTURE MODELS

In this section we prove the universality property of TMMs, as discussed in sec. 3. We begin by borrowing from functional analysis and defining a property we call a PDF total set, similar in concept to a total set: a set F of PDFs over R^s is PDF total if any PDF over R^s can be approximated arbitrarily well by finite mixtures of elements from F. We then note that this property is invariant under the cartesian product of functions, which entails the universality of TMMs as a corollary. Concretely, let g be a PDF over (R^s)^N and \epsilon > 0. If F is PDF total over R^s, then the set F^{\otimes N} := \{\prod_{j=1}^{N} f_j(x_j) \mid f_1, \ldots, f_N \in F\} of product densities is PDF total over (R^s)^N, i.e. there exist M_1 \in \mathbb{N}, weights w \in \Delta^{M_1-1} and PDFs f_{ij} \in F such that:

\left\| g(x) - \sum_{i=1}^{M_1} w_i \prod_{j=1}^{N} f_{ij}(x_j) \right\| < \frac{\epsilon}{2}

Approximating, in turn, each f_{ij} by a mixture over a common finite subset \{f_k\}_{k=1}^{M_2} \subset F with weights w_{ijk}, we also have:

\left\| \sum_{i=1}^{M_1} w_i \prod_{j=1}^{N} f_{ij}(x_j) - \sum_{i=1}^{M_1} w_i \prod_{j=1}^{N} \sum_{k=1}^{M_2} w_{ijk} f_k(x_j) \right\| < \frac{\epsilon}{2}

and expanding the products in the right-hand expression brings it to the form of a TMM over the M_2 mixing components:

\sum_{k_1,\ldots,k_N=1}^{M_2} \mathcal{A}_{k_1,\ldots,k_N} \prod_{j=1}^{N} f_{k_j}(x_j)

For the diagonal Gaussian mixing components used throughout the body of the paper, the argument is immediate. Proof. If F is the set of Gaussian PDFs over R^s with diagonal covariance matrices, which is known to be a PDF total set, then F^{\otimes N} is the set of Gaussian PDFs over (R^s)^N with diagonal covariance matrices, and the claim is trivially true.

Corollary 2. Let F be a PDF total set of PDFs over R^s; then the family of TMMs with mixture components from F can approximate any PDF over (R^s)^N arbitrarily well, given arbitrarily many components.

C OVERVIEW OF THE EXPRESSIVE CAPACITY OF CONVOLUTIONAL ARITHMETIC CIRCUITS AND ITS EFFECT ON TENSORIAL MIXTURE MODELS

The expressiveness of ConvACs has been extensively studied, and specifically the non-generative variants of our models, named the CP-model and the HT-model respectively. In Cohen et al. (2016a) it was shown that ConvACs possess the property known as complete depth efficiency. Namely, almost all functions⁴ realizable by an HT-model of polynomial size require exponential size in order to be realized (or approximated) by a CP-model. In other words, the expressiveness borne out of depth is exponentially stronger than that of a shallow network, almost always. It is worth noting that in the follow-up paper (Cohen and Shashua, 2016a), the authors have shown that the same result does not hold for standard ConvNets: while there are specific instances where depth efficiency holds, it is not complete, i.e. there is a non-zero probability that a function realized by a polynomially sized deep ConvNet can also be realized by a polynomially sized shallow ConvNet. Despite the additional simplex constraints put on the parameters, complete depth efficiency does hold for the generative ConvACs of our work, the proof of which can be found in app. D, which shows the advantage of the deeper GHT-model over the shallow GCP-model.

⁴"Almost all functions" in this context means that for any continuous distribution over the parameters of the HT-model, with probability one the statement is true for a function realized by an HT-model with sampled parameters.
While the above shows why the deeper GHT-model is preferred over the shallow GCP-model, there is still the question of whether a polynomially sized GHT-model is sufficient for describing the complexities of natural data. Though a complete and definite answer is unknown as of yet, there is strong theoretical evidence that it might be. One aspect of being sufficient for modeling natural data is the ability of the model to describe the dependency structures typically found in the data. In Cohen and Shashua (2016b), the authors studied the separation rank, a measure of correlation which, for a given input partition, measures how far a function is from being separable, and found that a polynomially sized HT-model is capable of exponential separation rank for interleaved partitions, i.e. that it can model high correlations between local areas of the input. Additionally, for non-contiguous partitions the separation rank can be at most polynomial, i.e. it can model only a limited correlation between far-away areas of the input. These two results combined suggest that the HT-model, and thus also our GHT-model, is especially fit for modeling the type of correlations typically found in natural images and audio, even when it is only of polynomial size. Finally, from an empirical perspective, convolutional hierarchical structures have shown great success on a multitude of different domains and tasks. Our models leverage these structures, taking them to a probabilistic setting, which leads us to believe that they will be able to effectively model distributions in practice, a belief we verify by experiments.

D PROOF FOR THE DEPTH EFFICIENCY OF GENERATIVE CONVOLUTIONAL ARITHMETIC CIRCUITS

In this section we prove that the depth efficiency property of ConvACs proved in Cohen et al. (2016a) applies also to the generative ConvACs we have introduced in sec. 3.2. More specifically, we prove the following theorem, which is the generative analog of theorem 1 from Cohen et al. (2016a):

Theorem 1. Let \mathcal{A}^y be a tensor of order N and dimension M in each mode, generated by the recursive formulas in eq. 8, under the simplex constraints introduced in sec. 3.2. Define r := min{r_0, M}, and consider the space of all possible configurations for the parameters of the decomposition, \{a^{l,j,\gamma} \in \Delta^{r_{l-1}-1}\}_{l,j,\gamma}. In this space, the generated tensor \mathcal{A}^y will have CP-rank of at least r^{N/2} almost everywhere (w.r.t. the product measure of simplex spaces). Put differently, the configurations for which the CP-rank of \mathcal{A}^y is less than r^{N/2} form a set of measure zero. The exact same result holds if we constrain the decomposition to be "shared", i.e. set a^{l,j,\gamma} = a^{l,\gamma}, and consider the space of \{a^{l,\gamma} \in \Delta^{r_{l-1}-1}\}_{l,\gamma} configurations.

The only differences between ConvACs and their generative counterparts are the simplex constraints applied to the parameters of the models, which necessitate a careful treatment of the measure-theoretic arguments of the original proof. More specifically, while the k-dimensional simplex \Delta^k is a subset of the (k+1)-dimensional space R^{k+1}, it has zero measure with respect to the Lebesgue measure over R^{k+1}. The standard method to define a measure over \Delta^k is by the Lebesgue measure over R^k of its projection to that space, i.e. let p : R^{k+1} \to R^k, p(x) = (x_1, \ldots, x_k), and let A \subseteq \Delta^k be a subset of the simplex; then the latter's measure is defined as \lambda(p(A)), where \lambda is the Lebesgue measure over R^k. Notice that p(\Delta^k) has a positive measure, and moreover that p is invertible over the set p(\Delta^k), with its inverse given by⁵ p^{-1}(x_1, \ldots, x_k) = (x_1, \ldots, x_k, 1 - \sum_{i=1}^{k} x_i). When the parameter space is the cartesian product
of several simplex spaces of different dimensions, the measure over each of them is defined as above, and the measure over their cartesian product is uniquely defined by the product measure. Though standard, the choice of the projection function p above could be seen as a limitation; however, the set of zero-measure sets in \Delta^k is identical for any reasonable choice of projection (e.g. all polynomial mappings). More specifically, for any projection \pi : R^{k+1} \to R^k that is invertible over \pi(\Delta^k), such that \pi^{-1} is differentiable and the Jacobian of \pi^{-1} is bounded over \pi(\Delta^k), a subset A \subseteq \Delta^k is of measure zero w.r.t. \pi iff it is of measure zero w.r.t. p (as defined above). This implies that if we sample the weights of the generative decomposition (eq. 8 with simplex constraints) according to a continuous distribution, a property that holds with probability 1 under the standard parameterization (projection p) will hold with probability 1 under any reasonable parameterization.

⁵As mentioned earlier, p is invertible only over p(\Delta^k), for which its inverse is given by p^{-1}(x_1, \ldots, x_k) = (x_1, \ldots, x_k, 1 - \sum_{i=1}^{k} x_i). However, to simplify the proof and notations, we use p^{-1} as defined here over the entire range R^k, even where it does not serve as the inverse of p.

We now state and prove a lemma that will be needed for our proof of theorem 1.

Lemma 1. Let M, N, K \in \mathbb{N}, 1 \le r \le \min\{M, N\} and a polynomial mapping A : R^K \to R^{M \times N} (i.e. for every i \in [M], j \in [N], A_{ij} : R^K \to R is a polynomial function). If there exists a point x \in R^K s.t. rank(A(x)) \ge r, then the set \{x \in R^K \mid rank(A(x)) < r\} has zero measure.

Proof. Recall that rank(A(x)) \ge r iff there exists a non-zero r x r minor of A(x), which is polynomial in the entries of A(x), and so is polynomial in x as well. Let c = \binom{M}{r} \cdot \binom{N}{r} be the number of r x r minors in A, denote the minors by \{f_i(x)\}_{i=1}^{c}, and define the polynomial function f(x) = \sum_{i=1}^{c} f_i(x)^2. It thus holds that f(x) = 0 iff for all i \in [c], f_i(x) = 0, i.e. f(x) = 0 iff rank(A(x)) < r. Now, f(x) is a polynomial in the entries of x, and so it either vanishes on a set of zero measure, or it is the zero polynomial (see Caron and Traynor (2005) for proof). Since we assumed that there exists x \in R^K s.t. rank(A(x)) \ge r, the latter option is not possible.
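A tiny numeric illustration of lemma 1 (our own sketch; the particular polynomial mapping below is an arbitrary assumption): since one configuration attains full rank, randomly drawn inputs attain it almost surely, as rank deficiency requires the determinant polynomial to vanish.

    import numpy as np

    def A(x):                                   # polynomial mapping R^4 -> R^{2x2}
        return np.array([[x[0], x[1] ** 2],
                         [x[2] * x[3], x[3]]])

    # A((1, 0, 0, 1)) is the identity, so rank 2 is attainable; hence random
    # inputs are rank-deficient only on the zero set of det A(x).
    rng = np.random.default_rng(12)
    ranks = [np.linalg.matrix_rank(A(rng.normal(size=4))) for _ in range(1000)]
    print(min(ranks))                           # 2 (full rank) almost surely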
Proof of theorem 1. Stemming from the above stated facts, to show that the CP-rank of A^y is at least r^{N/2}, it is sufficient to examine its matricization [A^y] and prove that rank([A^y]) ≥ r^{N/2}.

Notice from the construction of [A^y], according to the recursive formula of the HT-decomposition, that its entries are polynomial in the parameters of the decomposition, that its dimensions are M^{N/2} each, and that 1 ≤ r^{N/2} ≤ M^{N/2}. In accordance with the discussion on the measure of simplex spaces, for each vector parameter a^{l,j,γ} ∈ ∆^{r_{l-1}-1} we instead examine its projection â^{l,j,γ} = p(a^{l,j,γ}) ∈ R^{r_{l-1}-1}, and notice that p^{-1}(â^{l,j,γ}) is a polynomial mapping w.r.t. â^{l,j,γ}. Thus, [A^y] is a polynomial mapping w.r.t. the projected parameters {â^{l,j,γ}}_{l,j,γ}, and using lemma 1 it is sufficient to show that there exists a set of parameters for which rank([A^y]) ≥ r^{N/2}.

Denoting for convenience φ^{L,1,1} := A^y and r_L = 1, we will construct by induction over l = 1, ..., L an assignment of the parameters for which rank([φ^{l,j,γ}]) ≥ r^{2^l/2} for all j and γ, while enforcing the simplex constraints on the parameters. Moreover, we will construct these parameters s.t. a^{l,j,γ} = a^{l,γ}, thus proving both the "unshared" and "shared" cases.

For the case l = 1 we have

[φ^{1,j,γ}] = Σ_{α=1}^{r_0} a^{1,j,γ}_α · a^{0,2j-1,α} (a^{0,2j,α})^T,

and so, choosing a^{1,j,γ}_α = (1/r)·1[α ≤ r] and (a^{0,j,α})_i = 1[i = α] for α ≤ r (and arbitrary simplex vectors for α > r, whose weights are zero), we get [φ^{1,j,γ}] = (1/r)·Σ_{α=1}^{r} e_α e_α^T, which means rank([φ^{1,j,γ}]) = r, while preserving the simplex constraints, which proves our inductive hypothesis for l = 1.

For the case l > 1, by the recursive formula and the linearity of the matricization we have

[φ^{l,j,γ}] = Σ_{α=1}^{r_{l-1}} a^{l,j,γ}_α · [φ^{l-1,2j-1,α} ⊗ φ^{l-1,2j,α}] = Σ_{α=1}^{r_{l-1}} a^{l,j,γ}_α · ([φ^{l-1,2j-1,α}] ⊙ [φ^{l-1,2j,α}]).

Denote M_α := [φ^{l-1,2j-1,α}] ⊙ [φ^{l-1,2j,α}] for α = 1, ..., r_{l-1}. By our inductive assumption, and by the general property rank(A ⊙ B) = rank(A)·rank(B), we have that the ranks of all matrices M_α are at least r^{2^{l-1}/2} · r^{2^{l-1}/2} = r^{2^l/2}. Writing [φ^{l,j,γ}] = Σ_{α=1}^{r_{l-1}} a^{l,j,γ}_α · M_α, and noticing that {M_α} do not depend on a^{l,j,γ}, we simply pick a^{l,j,γ}_α = 1[α = 1], and thus [φ^{l,j,γ}] = M_1, which is of rank at least r^{2^l/2}. This completes the proof of the theorem. ∎

Corollary 3. Assume the mixing components M = {f_i(x) ∈ L²(R^s) ∩ L¹(R^s)}_{i=1}^{M} are square integrable probability density functions which form a linearly independent set. Consider a deep GHT-model of polynomial size whose parameters are drawn at random by some continuous distribution. Then, with probability 1, the distribution realized by this network requires an exponential size in order to be realized (or approximated w.r.t. the L² distance) by the shallow GCP-model. The claim holds regardless of whether the parameters of the deep GHT-model are shared or not.

It is important to note that most commonly used distribution functions are square integrable, e.g. most members of the exponential family, such as the Gaussian distribution.

Proof. Given a coefficient tensor A, the CP-rank of A is a lower bound on the number of channels (denoted by Z in the body of the article) required to represent that tensor by the ConvAC following the CP decomposition, as introduced in sec. 2. Additionally, since the mixing components are linearly independent, their products {Π_{i=1}^{N} f_{d_i}(x_i) | f_{d_i} ∈ M} are linearly independent as well, which entails that any distribution representable by the TMM with mixing components M has a unique coefficient tensor A. From theorem 1, the set of parameters for which a polynomially sized GHT-model has a coefficient tensor of polynomial CP-rank - the requirement for a polynomially sized GCP-model realizing that distribution exactly - forms a set of measure zero.

It is left to prove that not only is it impossible to exactly represent a distribution with an exponential coefficient tensor by a polynomially sized GCP-model, it is also impossible to approximate it. This follows directly from lemma 7 in appendix B of Cohen et al. (2016a), as our case meets the requirements of that lemma. ∎

E PROOF FOR THE OPTIMALITY OF MARGINALIZED BAYES PREDICTOR

In this section we give short proofs for the claims from sec. 5 on the optimality of the marginalized Bayes predictor under missing-at-random (MAR) distributions, when the missingness mechanism is unknown, as well as for the general case when we do not add additional assumptions. In addition, we will also present a counter-example proving that classification through data imputation leads to suboptimal performance. We begin by introducing several notations that augment the notations already introduced in the body of the article.
Given a specific mask realization m ∈ {0,1}^s, we use the following notations to denote partial assignments to the random vector X. For the observed indices of X, i.e. the indices for which m_i = 1, we denote a partial assignment by X\m = x_o, where x_o ∈ R^{d_o} is a vector whose length d_o equals the number of observed indices. Similarly, we denote by X∩m = x_m a partial assignment to the missing indices according to m, where x_m ∈ R^{d_m} is a vector whose length d_m equals the number of missing indices. As an example of the notation, for given realizations x ∈ R^s and m ∈ {0,1}^s, the event o(x, m) defined in sec. 5 is marked in the current notation by the partial assignment X\m = x_o, where x_o matches the observed values of the vector x according to m.

With the above notations in place, we move on to prove claim 1, which describes the general solution to the optimal prediction rule given both the data and missingness distributions, and without adding any additional assumptions.

Proof of claim 1. Fix an arbitrary prediction rule h. We will show that L(h*) ≤ L(h), where L is the expected 0-1 loss:

1 - L(h) = E_{(x,m,y)∼(X,M,Y)}[1_{h(x⊙m)=y}]
= Σ_{m∈{0,1}^s} Σ_{y∈[K]} ∫_{R^s} P(M=m, X=x, Y=y) · 1_{h(x⊙m)=y} dx
= Σ_{m∈{0,1}^s} Σ_{y∈[K]} ∫_{R^{d_o}} ∫_{R^{d_m}} P(M=m, X\m=x_o, X∩m=x_m, Y=y) · 1_{h(x⊙m)=y} dx_m dx_o
=(1) Σ_{m∈{0,1}^s} Σ_{y∈[K]} ∫_{R^{d_o}} 1_{h(x⊙m)=y} [∫_{R^{d_m}} P(M=m, X\m=x_o, X∩m=x_m, Y=y) dx_m] dx_o
=(2) Σ_{m∈{0,1}^s} Σ_{y∈[K]} ∫_{R^{d_o}} 1_{h(x⊙m)=y} · P(M=m, X\m=x_o, Y=y) dx_o
=(3) Σ_{m∈{0,1}^s} ∫_{R^{d_o}} P(X\m=x_o) Σ_{y∈[K]} 1_{h(x⊙m)=y} · P(Y=y | X\m=x_o) · P(M=m | X\m=x_o, Y=y) dx_o
≤(4) Σ_{m∈{0,1}^s} ∫_{R^{d_o}} P(X\m=x_o) Σ_{y∈[K]} 1_{h*(x⊙m)=y} · P(Y=y | X\m=x_o) · P(M=m | X\m=x_o, Y=y) dx_o
= 1 - L(h*)

where (1) is because the output of h(x⊙m) is independent of the missing values, (2) is by marginalization, (3) is by the definition of conditional probability, and (4) is because, by definition, h*(x⊙m) maximizes the expression P(Y=y | X\m=x_o) · P(M=m | X\m=x_o, Y=y) w.r.t. the possible values of y for fixed vectors m and x_o. Finally, by replacing integrals with sums, the proof holds exactly the same when the instances X are discrete. ∎

We now continue and prove corollary 1, a direct implication of claim 1, which shows that in the MAR setting the missingness distribution can be ignored, and the optimal prediction rule is given by the marginalized Bayes predictor.

Proof of corollary 1. Using the same notation as in the previous proof, and denoting by x_o the partial vector containing the observed values of x⊙m, the following holds:

P(M=m | o(x,m), Y=y) := P(M=m | X\m=x_o, Y=y)
= ∫_{R^{d_m}} P(M=m, X∩m=x_m | X\m=x_o, Y=y) dx_m
= ∫_{R^{d_m}} P(X∩m=x_m | X\m=x_o, Y=y) · P(M=m | X∩m=x_m, X\m=x_o, Y=y) dx_m
=(1) ∫_{R^{d_m}} P(X∩m=x_m | X\m=x_o, Y=y) · P(M=m | X∩m=x_m, X\m=x_o) dx_m
=(2) ∫_{R^{d_m}} P(X∩m=x_m | X\m=x_o, Y=y) · P(M=m | X\m=x_o) dx_m
= P(M=m | X\m=x_o) · ∫_{R^{d_m}} P(X∩m=x_m | X\m=x_o, Y=y) dx_m
= P(M=m | X\m=x_o)
= P(M=m | o(x,m))

where (1) is due to the independence assumption of the events Y=y and M=m conditioned on X=x, while noting that (X\m=x_o) ∧ (X∩m=x_m) is a complete assignment of X, and (2) is due to the MAR assumption, i.e. that for a given m and x_o, it holds for all x_m ∈ R^{d_m} that:

P(M=m | X\m=x_o, X∩m=x_m) = P(M=m | X\m=x_o)

We have shown that P(M=m | o(x,m), Y=y) does not depend on y, and thus does not affect the optimal prediction rule in claim 1. It may therefore be dropped, and we obtain the marginalized Bayes predictor. ∎

Having proved that in the MAR setting classification through marginalization leads to optimal performance, we now move on to show that the same is not true for classification through data imputation. Though there are many methods to perform data imputation, i.e. to complete missing values given the observed ones, all of these methods can be seen as the solution of the following optimization problem, or more typically its approximation:
g(x⊙m) = argmax_{x'∈R^s s.t. ∀i: m_i=1 → x'_i=x_i} P(X = x')

where g(x⊙m) is the most likely completion of x⊙m. When data imputation is carried out for classification purposes, one is often interested in data imputation conditioned on a given class Y = y, i.e.:

g(x⊙m; y) = argmax_{x'∈R^s s.t. ∀i: m_i=1 → x'_i=x_i} P(X = x' | Y = y)

Given a classifier h : R^s → [K] and an instance x with missing values according to m, classification through data imputation is simply the result of applying h to the output of g. When h is the optimal classifier for complete data, i.e. the Bayes predictor, we end up with one of the following prediction rules:

Unconditional: h(x⊙m) = argmax_y P(Y = y | X = g(x⊙m))
Conditional: h(x⊙m) = argmax_y P(Y = y | X = g(x⊙m; y))

Claim 3. There exists a data distribution D and a MAR missingness distribution Q s.t. the accuracy of classification through data imputation is almost half the accuracy of the optimal marginalized Bayes predictor, with an absolute gap of more than 33 percentage points.

Proof. For simplicity, we will give an example for a discrete distribution over the binary set X×Y = {0,1}² × {0,1}. Let 1 > ε > 0 be some small positive number, and define D according to table 2, where each triplet (x_1, x_2, y) ∈ X×Y is assigned a positive weight, which through normalization defines a distribution over X×Y. The missingness distribution Q is defined s.t. P(M_1=1, M_2=0 | X=x) = 1 for all x ∈ X, i.e. X_1 is always observed and X_2 is always missing, which is a trivial MAR distribution. Given the above data distribution D, we can easily calculate the exact accuracy of the optimal data-imputation classifier and of the marginalized Bayes predictor under the missingness distribution Q, as well as of the standard Bayes predictor under full observability. First notice that whether we apply conditional or unconditional data imputation, and whether X_1 is equal to 0 or 1, the completion will always be X_2 = 1 and the predicted class will always be Y = 1. Since the data-imputation classifiers always predict the same class Y = 1 regardless of their input, the probability of success is simply the probability P(Y=1) = (1+ε)/3 (for ε = 10⁻⁴ it equals approximately 33.337%). Similarly, the marginalized Bayes predictor always predicts Y = 0 regardless of its input, and so its probability of success is P(Y=0) = (2-ε)/3 (for ε = 10⁻⁴ it equals approximately 66.663%), which is almost double the accuracy achieved by the data-imputation classifiers. Additionally, notice that the marginalized Bayes predictor achieves almost the same accuracy as the Bayes predictor under full observability, which is the best possible. ∎

Table 2: Data distribution over the space X×Y = {0,1}² × {0,1} that serves as the example for the sub-optimality of classification through data imputation (proof of claim 3).

X_1  X_2  Y  |  Weight  |  Probability (ε = 10⁻⁴)
 0    0   0  |  1 - ε   |  16.665%
 0    1   0  |  1       |  16.667%
 1    0   0  |  1 - ε   |  16.665%
 1    1   0  |  1       |  16.667%
 0    0   1  |  0       |   0.000%
 0    1   1  |  1 + ε   |  16.668%
 1    0   1  |  0       |   0.000%
 1    1   1  |  1 + ε   |  16.668%
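The accuracies quoted in the proof can be verified by direct enumeration of table 2. The script below is our own sanity check, not part of the original text; the function names are ours.

```python
# Enumerate the distribution of table 2 and compare the data-imputation
# classifier with the marginalized Bayes predictor (X1 observed, X2 missing).
eps = 1e-4
weight = {(0, 0, 0): 1 - eps, (0, 1, 0): 1.0,
          (1, 0, 0): 1 - eps, (1, 1, 0): 1.0,
          (0, 0, 1): 0.0,     (0, 1, 1): 1 + eps,
          (1, 0, 1): 0.0,     (1, 1, 1): 1 + eps}
Z = sum(weight.values())
P = {k: v / Z for k, v in weight.items()}

def predict_imputation(x1):
    # Complete the missing X2 with its most likely value given X1, then
    # apply the Bayes predictor to the completed instance.
    x2 = max((0, 1), key=lambda v: sum(P[(x1, v, y)] for y in (0, 1)))
    return max((0, 1), key=lambda y: P[(x1, x2, y)])

def predict_marginalized(x1):
    # Marginalize the missing X2 out and predict the most likely class.
    return max((0, 1), key=lambda y: sum(P[(x1, v, y)] for v in (0, 1)))

accuracy = lambda h: sum(p for (x1, _, y), p in P.items() if h(x1) == y)
print(f"data imputation:    {accuracy(predict_imputation):.5f}")    # ~0.33337
print(f"marginalized Bayes: {accuracy(predict_marginalized):.5f}")  # ~0.66663
```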
F DETAILED DESCRIPTION OF THE EXPERIMENTS

Experiments are meaningful only if they can be reproduced by other proficient individuals. Providing sufficient details to enable others to replicate our results is the goal of this section. We hope to accomplish this by making our code public, as well as by documenting our experiments to a degree sufficient for reproducing them from scratch. Our complete implementation of the models presented in this paper, as well as our modifications to other open-source projects and the scripts used in the process of conducting our experiments, are available at our Github repository: https://github.com/HUJI-Deep/TMM. We additionally wish to invite readers to contact the authors if they deem the following details insufficient in their process to reproduce our results.

F.1 DESCRIPTION OF METHODS

In the following we give concise descriptions of each classification method we have used in our experiments. The results of the experiment on MP-DBM (Goodfellow et al., 2013) were taken directly from the paper and were not conducted by us, hence we do not cover that method in this section, and we direct the reader to that article for exact details on how to reproduce their results.

F.1.1 ROBUST LINEAR CLASSIFIER

In Dekel and Shamir (2008), binary linear classifiers were trained by formulating their optimization as a quadratic program under the constraint that some of the features could be deleted, i.e. their original value was changed to zero. While the original source code was never published, the authors have kindly agreed to share their code with us, which we used to reproduce their results, but on larger datasets. The algorithm has only a couple of hyper-parameters, which were chosen by a grid-search through a cross-validation process. For details on the exact protocol for testing binary classifiers on missing data, please see sec. F.2.1.

F.1.2 K-NEAREST NEIGHBORS

K-Nearest Neighbors (KNN) is a classical machine learning algorithm used for both regression and classification tasks. Its underlying mechanism is finding the k nearest examples (called neighbors) from the training set, (x_1, y_1), ..., (x_k, y_k) ∈ S, according to some metric function d(·,·) : X × X → R_+, after which a summarizing function f is applied to the targets of the k nearest neighbors to produce the output y* = f(y_1, ..., y_k).
When KNN is used for classification, f is typically the majority voting function, returning the class found in most of the k nearest neighbors.

In our experiments we use KNN for classification with missing data, where the training set consists of complete examples with no missing data, but at classification time the inputs have missing values. Given an input with missing values x⊙m and an example x' from the training set, we use a modified Euclidean distance metric, comparing the distance only against the non-missing coordinates of x, i.e. the metric is defined by d(x', x⊙m) = Σ_{i: m_i=1} (x'_i - x_i)². Through a process of cross-validation we have chosen k = 5 for all of our experiments. Our implementation of KNN is based on the popular scikit-learn python library (Pedregosa et al., 2011).
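A minimal sketch of this masked metric follows. It is our own illustration in plain NumPy (the actual implementation wraps scikit-learn, as noted above), and the toy data is hypothetical.

```python
import numpy as np

def masked_knn_predict(X_train, y_train, x, m, k=5):
    """Majority vote among the k training points closest to x on the
    observed coordinates only (m: 1 = observed, 0 = missing)."""
    obs = m.astype(bool)
    d = np.sum((X_train[:, obs] - x[obs]) ** 2, axis=1)
    nearest = np.argsort(d)[:k]
    return np.argmax(np.bincount(y_train[nearest]))

# Toy usage on hypothetical data:
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))
y_train = (X_train[:, 0] > 0).astype(int)
x = np.array([1.0, 0.0, 0.0, 0.0])
m = np.array([1, 0, 1, 1])        # the second coordinate is missing
print(masked_knn_predict(X_train, y_train, x, m))
```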
F.1.3 CONVOLUTIONAL NEURAL NETWORKS

The most widespread and successful discriminative methods nowadays are Convolutional Neural Networks (ConvNets). Standard ConvNets are represented by a computational graph consisting of different kinds of nodes, called layers, with convolution-like operators applied to their inputs, followed by a non-linear point-wise activation function, e.g. max(0, x), known as ReLU.

For our experiments on MNIST, both with and without missing data, we have used the LeNet ConvNet architecture (LeCun et al., 1998) that is bundled with Caffe (Jia et al., 2014), trained for 20,000 iterations using SGD with 0.9 momentum and 0.01 base learning rate, which remained constant for 10,000 iterations, followed by a linear decrease to 0.001 for another 5,000 iterations, followed by a linear decrease to 0 learning rate for the remaining 5,000 iterations. The model also used l2-regularization (also known as weight decay), which was chosen through cross-validation for each experiment separately. No other modifications were made to the model or its training procedure.

For our experiments on NORB, we have used an ensemble of 3 ConvNets, each using the following architecture: 5×5 convolution with 128 output channels, 3×3 max pooling with stride 2, ReLU activation, 5×5 convolution with 128 output channels, ReLU activation, dropout layer with probability 0.5, 3×3 average pooling with stride 2, 5×5 convolution with 256 output channels, ReLU activation, dropout layer with probability 0.5, 3×3 average pooling with stride 2, fully-connected layer with 768 output channels, ReLU activation, dropout layer with probability 0.5, ending with a fully-connected layer with 5 output channels. The stereo images were represented as a two-channel input image when fed to the network. During training we have used data augmentation consisting of random scaling and rotation transforms. The networks were trained for 40,000 iterations using SGD with 0.99 momentum and 0.001 base learning rate, which remained constant for 30,000 iterations, followed by a linear decrease to 0.0001 for 6,000 iterations, followed by a linear decrease to 0 learning rate for the remaining 4,000 iterations. The model also used 0.0001 weight decay for additional regularization.

When ConvNets were trained on images containing missing values, we passed the network the original image with missing values zeroed out, and an additional binary image as a separate channel, containing 1 for missing values at the same spatial position and 0 otherwise - this missing data format is sometimes known as flag data imputation (see the sketch at the end of this subsection). Other formats for representing missing values were tested (e.g. just using zeros for missing values); however, the above scheme performed significantly better than the other formats. In our experiments, we assumed that the training set was complete and missing values were only present in the test set. In order to design ConvNets that are robust against specific missingness distributions, we have simulated missing values during training, sampling a different mask of missing values for each image in each mini-batch. As covered in sec. 6, training ConvNets directly on simulated missingness distributions resulted in classifiers which were biased towards the specific distribution used in training, and performed worse on other distributions compared to ConvNets trained on those same distributions.

In addition to training ConvNets directly on missing data, we have also used them as the classifier for testing different data imputation methods, as described in the next section.
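The flag data imputation input format mentioned above is simple to construct; the sketch below is ours (the authors' actual pipeline is in Caffe) and follows the convention of sec. 5, where m_i = 1 marks an observed pixel.

```python
import numpy as np

def to_flag_format(image, m):
    """image: (H, W) array; m: (H, W) mask, 1 = observed, 0 = missing.
    Returns a (2, H, W) array stacking the zero-filled image with a
    channel of missingness flags (1 at missing positions)."""
    zeroed = image * m
    flags = (1 - m).astype(image.dtype)
    return np.stack([zeroed, flags], axis=0)

# Toy usage: a 4x4 image with two missing pixels.
image = np.arange(16, dtype=np.float64).reshape(4, 4)
m = np.ones((4, 4)); m[0, 0] = m[2, 3] = 0
x = to_flag_format(image, m)
print(x.shape)   # (2, 4, 4): ready for a two-channel ConvNet input
```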
F.1.4 CLASSIFICATION THROUGH DATA IMPUTATION

The most common method for handling missing data, while leveraging available discriminative classifiers, is through the application of data imputation - an algorithm for the completion of missing values - and then passing the results to a classifier trained on the uncorrupted dataset. We have tested five different types of data imputation algorithms (a short sketch of the two simplest appears after the list):

- Zero data imputation: replacing every missing value by zero.
- Mean data imputation: replacing every missing value by the mean value computed over the dataset.
- Generative data imputation: training a generative model and using it to complete the missing values by finding the most likely instance that coincides with the observed values, i.e. solving the following:

  g(x⊙m) = argmax_{x'∈R^s s.t. ∀i: m_i=1 → x'_i=x_i} P(X = x')

  We have tested the following generative models:

  - Generative Stochastic Networks (GSN) (Bengio et al., 2014): We have used their original source code from https://github.com/yaoli/GSN and trained their example model on MNIST for 1000 epochs. Whereas in the original article they have tested completing only the left or right side of a given image, we have modified their code to support general masks. Our modified implementation can be found at https://github.com/HUJI-Deep/GSN.
  - Non-linear Independent Components Estimation (NICE) (Dinh et al., 2014): We have used their original source code from https://github.com/laurent-dinh/nice and trained it on MNIST using their example code without changes. Similarly to our modification to the GSN code, here too we have adapted their code to support general masks over the input. Additionally, their original inpainting code required 110,000 iterations, which we have reduced to just 8,000 iterations, since the effect on classification accuracy was marginal. For the NORB dataset, we have used their CIFAR10 example, with a lower learning rate of 10⁻⁴. Our modified code can be found at https://github.com/HUJI-Deep/nice.
  - Diffusion Probabilistic Models (DPM) (Sohl-Dickstein et al., 2015): We have used their original source code from https://github.com/Sohl-Dickstein/Diffusion-Probabilistic-Models and trained it on MNIST using their example code without changes. Similarly to our modifications to GSN, we have added support for a general mask of missing values, but other than that kept the rest of the parameters for inpainting unchanged. For NORB we have used the same model as for MNIST. We have tried using their CIFAR10 example for NORB; however, it produced exceptions during training. Our modified code can be found at https://github.com/HUJI-Deep/Diffusion-Probabilistic-Models.
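For completeness, the two non-generative baselines from the list above amount to a one-line substitution each. The sketch below is ours; clf stands for any hypothetical classifier trained on complete data.

```python
import numpy as np

def zero_impute(x, m):
    """Replace missing entries with zero (m: 1 = observed, 0 = missing)."""
    return np.where(m == 1, x, 0.0)

def mean_impute(x, m, feature_means):
    """Replace missing entries with per-feature means of the training set."""
    return np.where(m == 1, x, feature_means)

# Hypothetical usage with a classifier `clf` trained on complete data:
# x_hat = mean_impute(x, m, X_train.mean(axis=0))
# y_pred = clf.predict(x_hat[None, :])
```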
F.1.5 TENSORIAL MIXTURE MODELS

For a complete theoretical description of our model please see the body of the article. Our models were implemented by performing all intermediate computations in log-space, using numerically aware operations. In practice, that meant our models were realized by the SimNets architecture (Cohen and Shashua, 2014; Cohen et al., 2016b), which consists of Similarity layers representing gaussian distributions, MEX layers representing weighted sums performed on log-space inputs and outputs, as well as standard pooling operations. The learned parameters of the MEX layers are called offsets, which represent the weights of the weighted sum, but saved in log-space. The parameters of the MEX layers can optionally be shared between spatial regions, or alternatively left with no parameter sharing at all. Additionally, when used to implement our generative models, the offsets are normalized to have a soft-max (i.e., log(Σ_i exp(x_i))) of zero.

The network architectures we have tested in this article consist of M different Gaussian mixture components with diagonal covariance matrices, over non-overlapping patches of the input of size 2×2, which were implemented by a similarity layer as specified by the SimNets architecture, but with an added gaussian normalization term.

We first describe the architectures used for the MNIST dataset. For the GCP-model, we used M = 800, and following the similarity layer is a 1×1 MEX layer with no parameter sharing over spatial regions and 10 output channels. The model ends with a global sum pooling operation, followed by another 1×1 MEX layer with 10 outputs, one for each class. The GHT-model starts with the similarity layer with M = 32, followed by a sequence of four pairs of a 1×1 MEX layer followed by a 2×2 sum pooling layer, and after the pairs an additional 1×1 MEX layer lowering the outputs of the model to 10 outputs, as the number of classes. The numbers of output channels for the MEX layers are as follows: 64-128-256-512-10. All the MEX layers in this network do not use parameter sharing, except the first MEX layer, which uses a repeated sharing pattern of 2×2 offsets, analogous to a 2×2 convolution layer with stride 2. Both models were trained with the losses described in sec. 4, using the Adam SGD variant for optimizing the parameters, with a base learning rate of 0.03 and β_1 = β_2 = 0.9. The models were trained for 25,000 iterations, where the learning rate was dropped by 0.1 after 20,000 iterations.

For the NORB dataset, we have trained only the GHT-model, with M = 128 for the similarity layer. The MEX layers use the same parameter sharing scheme as the one for MNIST, and the numbers of output channels for the MEX layers are as follows: 256-256-256-512-5. Training was identical to the MNIST models, with the exception of using 40,000 iterations instead of just 25,000. Additionally, we have used an ensemble of 4 models trained separately, each trained using a different generative loss weight (see below for more information). We have also used the same data augmentation methods (scaling and rotation) which were used in training the ConvNets for NORB used in this article.

The standard L2 weight regularization (sometimes known as weight decay) did not work well on our models, which led us to adapt it to better fit log-space weights, by minimizing λ·Σ_i (exp(x_i))² instead of λ·||x||² = λ·Σ_i x_i², where the parameter λ was chosen through cross-validation. Additionally, since even with large values of λ our model was still overfitting, we have added another form of regularization in the form of random marginalization layers. A random marginalization layer is similar in concept to dropout, but instead of zeroing activations completely at random, it chooses spatial locations at random, and then zeroes out the activations at those locations for all the channels. Under our model, zeroing all the activations in a layer at a specific location is equivalent to marginalizing over all the inputs in the receptive field of that location. We have used random marginalization layers between all our layers during training, where the probability for zeroing out activations was chosen through cross-validation for each layer separately. Though it might raise concern that random marginalization layers could lead to results biased toward the missingness distributions we have tested on, in practice the addition of those layers only helped improve our results in cases where only few pixels were missing.

Finally, we wish to discuss a few optimization tricks which had minor effects compared to the above, but were nevertheless very useful in achieving slightly better results. First, instead of optimizing directly the objective defined by eq. 4, we add a smoothing parameter β between its two terms (the discriminative and the generative term), as follows:

Θ* = argmin_Θ −Σ_{i=1}^{|S|} log P(y_i | x_i; Θ) − β · Σ_{i=1}^{|S|} log Σ_{y=1}^{K} P(x_i, y; Θ)

Setting β too low diminishes the generative capabilities of our models, while setting it too high diminishes the discriminative performance. Through cross-validation, we decided on the value β = 0.01 for the models trained on MNIST, while for NORB we have used a different value of β for each of the models, ranging in {0.01, 0.1, 0.5, 1}. Second, we found that performance increased if we normalized activations before applying the 1×1 MEX operations. Specifically, we calculate the soft-max over the channels for each spatial location, which we call the activation norm, and then subtract it from every respective activation. After applying the MEX operation, we add back the activation norm. Though it might not be obvious at first, subtracting a constant from the input of a MEX operation and adding it to its output does not change the mathematical operation. However, it does resolve the numerical issue of adding very large activations to very small offsets, which might result in a loss of precision. Finally, we apply our model at different translations of the input and then average the class predictions. Since our model can marginalize over inputs, we do not need to crop the original image, and instead mask the unknown parts after translation as missing. Applying a similar trick to standard ConvNets on MNIST does not seem to improve their results. We believe this method is especially fit for our model because it does not have a natural treatment of overlapping patches like ConvNets do, and because it is able to marginalize over missing pixels easily, not limiting it just to crop translations as is typically done.
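The activation norm trick described above is the familiar log-sum-exp stabilization: a MEX-style weighted sum of log-space activations is a log-sum-exp, and shifting by a per-location constant before the operation and adding it back afterwards leaves the result unchanged while avoiding overflow. A minimal sketch, ours:

```python
import numpy as np

def mex_logspace(activations, log_offsets):
    """Computes log sum_c exp(activations[c] + log_offsets[c]) over the
    channel axis, shifting by the maximum for numerical stability."""
    z = activations + log_offsets
    norm = np.max(z)              # subtracted before, added back after
    return norm + np.log(np.sum(np.exp(z - norm)))

a = np.array([800.0, 799.0, 795.0])    # large log-space activations
w = np.log(np.array([0.5, 0.3, 0.2]))  # simplex offsets, stored in log-space
print(mex_logspace(a, w))              # finite; naive evaluation overflows
```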
F.2 DESCRIPTION OF EXPERIMENTS

In this section we give a detailed description of the protocol we have used during our experiments.

This experiment focuses on the binary classification problem derived from MNIST, by limiting the number of classes to two different digits at a time. We use the same non-zero feature deletion distribution as suggested by Globerson and Roweis (2006), i.e. for a given image we uniformly sample a set of N non-zero pixels from the image (if the image has fewer than N non-zero pixels, then all of its non-zero pixels are chosen), and replace their values with zeros. This type of missingness distribution falls under the MNAR type defined in sec. 5.

We test values of N in {0, 25, 50, 75, 100, 125, 150}. For a given value of N, we train a separate classifier for each digit pair on a randomly picked subset of the dataset containing 300 images per digit (600 total). During training we use a fixed validation set with 1000 images per digit. After picking the best classifier according to the validation set, the classifier is tested against a test set with 1000 images per digit with randomly chosen missing values according to the value of N. This experiment is repeated 10 times for each digit pair, each time using a different subset for the training set and a new corrupted test set. After conducting all the different experiments, the accuracies are averaged for each value of N, and reported in table 1.

F.2.2 MULTI-CLASS DIGIT CLASSIFICATION WITH MAR MISSING DATA

This experiment focuses on the complete multi-class digit classification of the MNIST dataset, in the presence of missing data according to different missingness distributions. Under this setting, only the test set contains missing values, whereas the training set does not. We test two kinds of missingness distributions, which both fall under the MAR type defined in sec. 5. In the first kind, which we call i.i.d. corruption, each pixel is missing with a fixed probability p. In the second kind, which we call missing rectangles corruption, the positions of N rectangles of width W are chosen uniformly in the picture, where the rectangles can overlap one another. During the training stage, the models to be tested are not to be biased toward the specific missingness distributions we have chosen, and during the test stage, the same classifier is tested against all types of missingness distributions, without being supplied with the parameters or the type of the missingness distribution it is tested against. This rule prevents the use of ConvNets trained on simulated missingness distributions. To demonstrate that the latter leads to biased classifiers, we have conducted a separate experiment just for ConvNets, where the previous rule is ignored, and we train a separate ConvNet classifier on each type and parameter of the missingness distributions we have used. We then tested each of those ConvNets on all other missingness distributions; the results, found in fig. 5, confirmed our hypothesis.
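For concreteness, the two MAR corruption processes of this experiment can be simulated as follows. The sketch is ours; masks follow the convention of sec. 5 (1 = observed, 0 = missing), and all names are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def iid_mask(shape, p):
    """Each pixel is missing independently with probability p."""
    return (rng.random(shape) >= p).astype(np.uint8)

def rects_mask(shape, n, w):
    """n possibly overlapping w x w missing rectangles placed uniformly."""
    mask = np.ones(shape, dtype=np.uint8)
    height, width = shape
    for _ in range(n):
        top = rng.integers(0, height - w + 1)
        left = rng.integers(0, width - w + 1)
        mask[top:top + w, left:left + w] = 0
    return mask

print(iid_mask((28, 28), 0.5).mean())    # fraction observed, ~0.5
print(rects_mask((28, 28), 2, 7).sum())  # 784 minus 49..98 missing pixels
```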
G IMAGE GENERATION AND NETWORK VISUALIZATION

Following the graphical model perspective of our models allows us not only to generate random instances from the distribution, but also to generate the most likely patches for each neuron in the network, effectively explaining its role in the classification process. We remind the reader that every neuron in the network corresponds to a possible assignment of a latent variable in the graphical model. By looking for the most likely assignments for each of its child nodes in the graphical tree model, we can generate a patch that describes that neuron. Unlike previously suggested methods to visualize neural networks (Zeiler and Fergus, 2014), often relying on brute-force search or on solving some optimization problem to find the most likely image, our method emerges naturally from the probabilistic interpretation of our model.

In fig. 8 we can see conditional samples generated for each digit, while in fig. 9 we can see a visualization of the top-level layers of the network, where each small patch matches a different neuron in the network. The common wisdom of how ConvNets work is the assumption that simple low-level features are composed together to create more and more complex features, where each subsequent layer denotes features of higher abstraction. The visualization of our network clearly demonstrates this hypothesis to be true in our case, showing small strokes iteratively being composed into complete digits.

Figure 8: Generated digit samples from the GHT-model trained on the MNIST dataset.

Figure 9: Visualization of the GHT-model. Each of the images above visualizes a different layer of the model and consists of several samples generated from latent variables at different spatial locations, conditioned on randomly selected channels. The leftmost image shows samples taken from the 5th layer, which consists of just a single latent variable with 512 channels. The center image shows samples taken from the 4th layer, which consists of a 2×2 grid of latent variables with 256 channels each. The image is divided into 4 quadrants, each containing samples taken from the respective latent variable at that position. The rightmost image shows samples from the 3rd layer, which consists of a 4×4 grid of latent variables with 128 channels, and the image is similarly spatially divided into different areas matching the latent variables of the layer.

H RAW RESULTS OF EXPERIMENTS

For both presentational and page layout reasons we have chosen to present most of our results in the form of charts in the body of the article. Considering that exact results are important both for reproducibility and for future comparisons to our work, we provide below the raw results of our experiments in the form of detailed tables. For completeness, some of the tables we did include in the body of the article are duplicated here as well.

Table 3: Blind classification with missing data on the binary MNIST dataset with feature deletion noise according to Globerson and Roweis (2006), averaged over all pairs of digits.

N =        0     25    50    75    100   125   150
LP-Based   97.9  97.5  96.4  94.1  89.2  80.9  70.2
GHT-model  98.5  98.2  97.8  96.5  93.9  87.1  76.3

Table 4: Blind classification with missing data on the multi-class MNIST dataset, generated according to i.i.d. corruption with probability p for each pixel. (*) Accuracies are estimated from the plot presented in Goodfellow et al. (2013). (†) Data imputation algorithms followed by a standard ConvNet.

p =        0     0.25  0.50  0.75  0.90  0.95  0.99
KNN        96.8  96.7  96.2  94.4  86.4  71.7  29.2
Zero †     99.2  97.3  88.2  58.6  28.7  19.5  12.6
Mean †     99.2  98.4  90.9  52.4  21.1  15.6  10.9
GSN †      99.2  97.4  88.5  51.8  17.7  12.6  10.1
NICE †     99.2  98.9  97.9  82.6  36.3  20.2  11.7
DPM †      99.2  99.0  98.2  89.4  47.7  25.7  12.7
MP-DBM *   99.0  98.0  97.0  92.0  35.0  18.0  13.0
GCP-model  96.6  96.4  95.7  92.2  79.8  66.5  31.2
GHT-model  99.0  99.0  98.7  97.7  90.5  76.0  33.0
Table 5: Blind classification with missing data on the multi-class MNIST dataset, generated according to missing rectangles corruption with N missing rectangles, each of width and height equal to W. (†) Data imputation algorithms followed by a standard ConvNet.

(N,W) =    (1,7) (2,7) (3,7) (1,11) (2,11) (3,11) (1,15) (2,15) (3,15)
KNN        96.6  94.0  87.1  95.9   90.3   76.7   95.0   86.1   65.0
Zero †     93.0  74.9  47.6  86.2   56.2   31.2   78.6   44.2   22.6
Mean †     97.9  89.9  67.8  95.8   74.1   42.0   91.8   60.0   27.4
GSN †      97.4  86.8  56.8  94.2   64.3   31.8   88.9   46.4   21.8
NICE †     98.5  93.2  74.9  97.7   81.3   52.3   95.7   69.1   38.0
DPM †      97.2  87.0  64.0  94.4   73.2   44.6   91.4   61.8   33.2
GCP-model  96.0  93.1  85.0  95.1   88.7   73.3   94.5   83.7   62.4
GHT-model  98.6  97.3  91.2  98.3   93.7   79.1   98.0   89.6   67.2

Table 6: Blind classification with missing data on the multi-class NORB dataset, generated according to i.i.d. corruption with probability p for each pixel. (†) Data imputation algorithms followed by a standard ConvNet.

p =        0     0.25  0.50  0.75  0.90  0.95  0.99
KNN        81.3  81.0  80.8  80.4  78.0  74.4  55.6
Zero †     96.8  19.3  19.7  20.0  20.0  20.0  19.7
Mean †     96.8  66.8  49.7  35.5  30.2  24.2  20.1
NICE †     96.8  95.8  91.5  70.7  30.9  22.9  20.5
DPM †      96.8  88.8  60.2  28.2  21.3  20.9  20.6
GHT-model  96.7  96.6  94.9  84.0  67.9  58.1  41.2

Table 7: Blind classification with missing data on the multi-class NORB dataset, generated according to missing rectangles corruption with N missing rectangles, each of width and height equal to W. (†) Data imputation algorithms followed by a standard ConvNet.

(N,W) =    (1,7) (2,7) (3,7) (1,11) (2,11) (3,11) (1,15) (2,15) (3,15)
KNN        81.2  81.0  81.0  81.1   80.4   79.8   80.5   78.4   75.3
Zero †     35.9  28.1  25.1  25.7   22.6   20.9   22.4   20.5   19.8
Mean †     81.9  73.0  66.6  63.2   49.6   41.9   45.7   32.5   25.9
NICE †     96.1  95.3  93.7  92.1   81.4   67.4   73.8   46.4   33.0
DPM †      90.1  81.9  74.2  65.9   46.0   34.3   37.7   24.2   20.9
GHT-model  96.5  96.3  95.9  95.5   93.7   91.2   92.3   86.0   79.4

Table 8: We compare ConvNets on the MNIST dataset, trained on i.i.d. corruption with probability p_train while tested on i.i.d. corruption with probability p_test. Additionally, we trained ConvNets on either i.i.d. or missing rectangles corruption distributions with random corruption parameters sampled for each batch of training samples, while testing on i.i.d. corruption with the fixed parameter p_test.

p_test →        0.25  0.50  0.75  0.90  0.95  0.99
p_train = 0.25  98.9  97.8  78.9  32.4  17.6  11.0
p_train = 0.50  99.1  98.6  94.6  68.1  37.9  12.9
p_train = 0.75  98.9  98.7  97.2  83.9  56.4  16.7
p_train = 0.90  97.6  97.5  96.7  89.0  71.0  21.3
p_train = 0.95  95.7  95.6  94.8  88.3  74.0  30.5
p_train = 0.99  87.3  86.7  85.0  78.2  66.2  31.3
i.i.d. (rand)   98.7  98.4  97.0  87.6  70.6  29.6
rects (rand)    98.2  95.7  83.2  54.7  35.8  17.5

Table 9: We compare ConvNets on the MNIST dataset, trained and tested on the same (fixed) missing rectangles distribution, against ConvNets trained on randomly chosen missingness distributions from either the missing rectangles or i.i.d. corruption distributions.

test (N,W) =   (1,8) (1,12) (1,16) (2,8) (2,12) (2,16) (3,8) (3,12) (3,16)
rects (fixed)  98.7  97.7   93.1   98.6  94.7   82.0   98.2  90.5   70.5
rects (rand)   99.0  97.6   92.3   98.4  94.6   80.1   98.0  90.0   66.9
i.i.d. (rand)  97.8  94.8   83.4   96.8  88.6   64.5   96.1  80.6   49.5